\begin{document}
\title{Constrained information transmission on Erd\"os-R\'enyi graphs}
{\footnotesize \noindent $^{~1}$Universit\'e Paris Diderot -- Paris 7, Math\'ematiques,
case 7012, F--75205 Paris Cedex 13, France \\ \noindent e-mail: \texttt{comets@math.univ-paris-diderot.fr}
\noindent $^{~2}$Department of Statistics, Institute of Mathematics,
Statistics and Scientific Computation, University of Campinas -- UNICAMP, rua S\'ergio Buarque de Holanda 651, 13083--859, Campinas SP, Brazil\\ \noindent e-mails: \texttt{\{gallesco,popov,marinav\}@ime.unicamp.br}
}
\begin{abstract} We model the transmission of a message on the Erd\"os-R\'enyi random graph with parameters $(n,p)$ and limited resources. The vertices of the graph represent servers that may broadcast the message at random. Each server has a random emission capital that decreases by one at each emission. We examine two natural dynamics: in the first one, an informed server performs all its attempts and checks, for each of them, whether the corresponding edge is open or not; in the second one, the informed server knows a priori who its neighbors are, and performs all its attempts on its actual neighbors in the graph. In each case, we obtain first and second order asymptotics (a law of large numbers and a central limit theorem), as $n\to \infty$ with $p$ fixed, for the final proportion of informed servers.
\\[.3cm]\textbf{Keywords:}
information transmission, rumor, labelled trees, Erd\"os-R\'enyi random graph
\\[.3cm]\textbf{AMS 2000 subject classifications:} Primary 90B30; secondary 05C81, 05C80, 60F05, 60J20, 92D30 \end{abstract}
\section{Introduction}
Information transmission with limited resources on a general graph is a natural problem which appears in various contexts and attracts increasing interest. Consider a finite graph in which each
vertex is seen as a server with a finite resource (e.g., operating battery) given by an
independent random variable~$K$. Initially a message
comes to one of the servers, which recasts it to its neighbors in the graph
as long as its battery allows. In turn, each neighbor starts to emit as soon as it receives the message, and so on, in an asynchronous
mode. The transmission stops in finite time because of the resource constraint. A quantity of paramount interest is the final number of informed servers, i.e., of servers which ever receive the message.
Rumor models deal with ignorant individuals (who do not know the rumor), spreaders (who know the
rumor and propagate it) and stiflers (who refuse to propagate it) in a population of fixed -- but
large -- size.
Two important models, usually presented in continuous time, are well-known:
the Maki-Thompson model \cite{MakiThompson} and the Daley-Kendall model \cite{DaleyKendall}, for which the number of eventual knowers obeys a law of large numbers \cite{Sudbury} and is asymptotically normal \cite{Pittel} {\color{black} (recently, a large deviations principle for the Maki-Thompson model was also obtained in \cite{Leb})}. Such results extend to a larger family of processes \cite{LebMachadoRodriguez} using weak convergence theory for Markov processes. Though we focus here on mean-field type models, we just mention that lattice models lead to different questions \cite{BertacchiZucca, ColettiRodriguezSchinazi, GalloGarciaVargasRodriguez14}. On the other hand, it is well understood that the scaling limits of mean-field models are models on Galton-Watson trees, cf. \cite{AlvesETAL, Bordenave, CDS12}. Rumor spreading models are akin to epidemic propagation models, e.g. frog models \cite{AlvesMachadoPopov02, CQR07, FontesMachadoSarkar04, HoffmanJohnsonJunge14} and the famous SIR (Susceptible, Infected, Recovered) model, which has motivated a number of research papers. See \cite{DaleyGani} for a survey.
The analysis of random graphs has recently seen a remarkable development \cite{Bollobas, Hofstad}. Such graphs yield a natural framework for rumor spreading and epidemic dissemination, with more realistic applications to the human or biological world \cite{NevokeeETAL}. Then, due to the lack of homogeneity, setting the threshold concept on firm grounds is already a difficult problem \cite{IshamHardenNevokee}, and the literature is abundant in simulation experiments but poor in rigorous results. On the Erd\"os-R\'enyi graph, the authors of \cite{FountoulakisHP} prove that the time needed for complete transmission in the push protocol (a synchronous dynamics without constraints) is equivalent to that on the complete graph \cite{FriezeGrimmett} provided that the average degree is significantly larger than $\ln n$. In general, it is reasonable to look for quantitative results from perturbations of the homogeneous case. \color{black} From the point of view of applications, the graph may be thought of as a wireless network, the vertices of which are battery-powered sensors with a limited energy capacity. The reader will find in Section 1 of \cite{CDS12} a discussion of applications to the performance evaluation of information transmission in wireless networks. \color{black}
On the complete graph, the process can be reduced by homogeneity to a Markov chain in the quadrant with absorption on the axis, as recalled in the forthcoming Section \ref{sec:cgres}. For a random graph, fluctuations of the vertex degrees create inhomogeneities which make the above description non-Markovian and computations intractable. This can already be seen in the simplest example, the Erd\"os-R\'enyi graph. Homogeneity is present, not in the strict sense but in a statistical one, and independence is deeply rooted in its construction. From many perspectives, this random graph with fixed positive $p$ has been proved to be very similar to the complete graph as $n$ becomes large. In the present paper we show that the information transmission process on this random graph is a bounded perturbation of that on the complete graph with an appropriate resource distribution. We will use the above-mentioned similarities to construct couplings between the information processes with constraints on the complete graph and on the Erd\"os-R\'enyi graph. Then we control the discrepancy between the two models and its propagation as the process evolves.
\color{black} In this paper, we consider two natural dynamics of the information transmission process on the Erd\"os-R\'enyi graph: \begin{itemize} \item (i) an informed server performs $K$ attempts, choosing a target server at random independently at each attempt, then checks for each of them whether the corresponding edge is open or not; \item (ii) the informed server knows a priori who its neighbors are, and performs all its $K$ attempts on the set of its actual neighbors in the graph. \end{itemize} First of all we prove the existence of a threshold: transmission takes place at a macroscopic level if and only if $p {\mathbb E} K >1$ in case (i), and if and only if ${\mathbb E} K>1$ in case (ii). Above the threshold, with positive probability, a positive proportion of servers will be informed, whereas under the reverse inequalities the final number of informed servers is bounded in probability.
The value of the threshold is natural: in the first case, attempts taking place on closed edges are lost, so that the effective number of attempts is close (as $n$ increases) to a random sum \begin{equation} \label{eq:Kh} \widehat K=\sum_{1\leq k \leq K} B_k \end{equation} with $(B_k, k \geq 1)$ i.i.d.~Bernoulli with parameter $p$.
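As a quick numerical sanity check of (\ref{eq:Kh}), one can thin $K$ attempts by i.i.d. Bernoulli($p$) marks and verify that ${\mathbb E}\widehat K = p\,{\mathbb E}K$ (Wald's identity). The following sketch is purely illustrative: the law of $K$ (uniform on $\{0,\dots,4\}$) and all names are our own choices, not the paper's.

```python
import random

def sample_K_hat(sample_K, p, rng):
    """Draw K, then count how many of the K attempts carry a Bernoulli(p)
    mark: the effective number of attempts on open edges."""
    K = sample_K(rng)
    return sum(1 for _ in range(K) if rng.random() < p)

rng = random.Random(0)
p = 0.3
sample_K = lambda r: r.randrange(5)   # illustrative: K uniform on {0,...,4}, E[K] = 2
m = 200_000
mean_hat = sum(sample_K_hat(sample_K, p, rng) for _ in range(m)) / m
# Wald's identity: E[K_hat] = p * E[K] = 0.3 * 2 = 0.6; mean_hat should be close
```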
Our main results, Theorems \ref{th:LLN}, \ref{th:TLC}, \ref{th:LLN2} and \ref{th:TLC2} below, are laws of large numbers and central limit theorems for the number of informed servers, with explicit values of the limits in each case. Our approach is to show that, in the limit $n \to {\infty}$ with a fixed $p \in (0,1]$, the information transmission process on the Erd\"os-R\'enyi graph is a bounded perturbation of the process on the complete graph with a suitable resource law. Then, the first and second order asymptotics, obtained by explicit computations on the complete graph in \cite{Ma3, CDS12}, still hold on the random graph. \color{black}
An important property of the model is abelianity, \color{black} e.g. see Proposition \ref{prop:2cg}. \color{black}
We can change the order in which emitters are taken without changing \color{black} the law of the final state of the process, and construct an efficient coupling of the processes on the two graphs. This property also implies that assuming that the servers emit in a burst does not change the final result ({\color{black} a nice feature of the burst emission assumption is that it reveals a branching structure}). The two dynamics we consider here are simple and reasonable protocols, but we make no attempt at generality in this paper. \color{black} We will use an exploration process which reveals, at each step, only the necessary part of the graph, in order to preserve randomness and stationarity in the subsequent steps.
{\bf Outline of the paper}: In Section 2 we define the model, recall useful results for the complete graph, and state our main results. In Section 3, labelled trees are introduced with a view towards our constructions. Section 4 contains the proofs in the case of the first dynamics (i), and the last section deals with dynamics (ii).
\section{Model and results}
We start by recalling some results for the information transmission process on the complete graph.
\subsection{Known asymptotics in the case of the complete graph} \label{sec:cgres}
When any server is connected to any other one, the communication network is the complete graph on ${\cal N}=\{1,\ldots,n\}$. {\color{black} We consider here discrete time and} we scale the time so that there is exactly one emission per time unit. Then, the information process can be fully described by the number $N_n(s)$ of informed servers at time $s$ and the number $S_n(s)$ of available emission attempts \color{black}
(see \cite{CDS12} for the formal definition). \color{black}
Precisely, for the information process with resource $K$ on the complete graph, the pair
$(S_n(s),N_n(s))_{s=0,1,\ldots}$ on ${\mathbb Z}_+ \times [1,n]$ is a Markov chain with transitions \begin{equation} \label{eq:CDSdyn} \left\{ \begin{array}{rcl}
{\mathbb P}\Big( S_n(s \! + \! 1)=S_n(s) \! - \! 1,N_n(s \! + \! 1)=N_n(s) \mid \mathcal{F}_s\Big) & =&\frac{N_n(s)}{n}, \\
{\mathbb P}\Big( S_n(s \! + \! 1)=S_n(s) \! + \! k \! - \! 1,N_n(s \! + \! 1)=N_n(s) \! + \! 1 \mid \mathcal{F}_s\Big) & =&\left(1-\frac{N_n(s)}{n}\right) {\mathbb P}(K=k),\ \end{array} \right. \end{equation} for $k \geq 0,$ with $\mathcal{F}_s$ the $\sigma$-field generated by $S_n(\cdot)$ and $N_n(\cdot)$ on $[0,s]$. \color{black} The transition probabilities are easily understood by interpreting what can occur at a given step: \color{black} on the first line {\color{black} of (\ref{eq:CDSdyn})} the emission takes place towards a previously informed target, while on the second line the target yields its own resource (a fresh r.v.~$K$). The chain is absorbed in the vertical semi-axis, at the finite time ${\mathfrak T}_n = \inf\{s: S_n(s)=0\}$.
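The transitions (\ref{eq:CDSdyn}) are straightforward to simulate until absorption at ${\mathfrak T}_n$. The sketch below is our own illustration (names and the constant resource $K=2$, as in \cite{Ma3}, are illustrative choices); for large $n$ the final value $N_n({\mathfrak T}_n)/n$ concentrates near the limit recalled below.

```python
import random

def simulate_complete_graph(n, sample_K, rng):
    """Run the chain (S_n, N_n): one emission per time unit on the
    complete graph, absorbed at the first time S_n hits 0."""
    N = 1                        # informed servers; the first one...
    S = sample_K(rng)            # ...carries its own resource K
    while S > 0:
        S -= 1                          # one attempt is spent
        if rng.random() >= N / n:       # target not informed yet, prob. 1 - N/n
            N += 1                      # it becomes informed
            S += sample_K(rng)          # and brings a fresh resource K
    return N                     # N_n at the absorption time

rng = random.Random(1)
n = 10_000
final = simulate_complete_graph(n, lambda r: 2, rng)   # constant K = 2
```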
In this section we recall some results from \cite{CDS12} (and from \cite{Ma3} for constant $K=2$) on the first and second order asymptotics of $N_n({\mathfrak T}_n )=N_n({\infty})$.
Let $q \in [0,1)$ be the largest root of \begin{equation}
\label{eq:p(theta)}
q\; {\mathbb E} K +\ln (1-q) =0. \end{equation} Then, $0<q<1 $ for ${\mathbb E} K>1$ and $q=0$ if ${\mathbb E} K \leq 1$.
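Equation (\ref{eq:p(theta)}) has no closed form in general, but its largest root is easy to compute numerically. A minimal bisection solver (our own illustration, not from the paper): for ${\mathbb E}K>1$, $f(q)= q\,{\mathbb E}K + \ln(1-q)$ is positive at its maximizer $q = 1-1/{\mathbb E}K$ and tends to $-\infty$ as $q\to 1$, so the root lies in between.

```python
import math

def largest_root_q(EK, iters=200):
    """Largest root in [0,1) of q * E[K] + ln(1-q) = 0.
    For E[K] <= 1 the only root is q = 0; for E[K] > 1 the root is
    found by bisection on (1 - 1/E[K], 1)."""
    if EK <= 1:
        return 0.0
    f = lambda q: q * EK + math.log1p(-q)
    lo, hi = 1 - 1 / EK, 1 - 1e-12     # f(lo) > 0 > f(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

q2 = largest_root_q(2.0)   # constant K = 2: solves 2q + ln(1-q) = 0
```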
\begin{Th}[\cite{CDS12}, Theorems 2.2 and 2.3]{}\label{th:cdsLLNTCL} (i) Assume ${\mathbb E} K \in (0,{\infty})$.
Then, as $n \to {\infty}$,
$$ \frac{1}{n}
N_n({\mathfrak T}_n ) \cvlaw q \times Ber(\sigma^{GW}) $$ with $Ber(\sigma)$ a Bernoulli variable with parameter $\sigma$, and $\sigma^{GW}$ is the largest solution $\sigma \in [0,1]$ of \begin{equation} \label{eq:probasurvivalgw} 1-\sigma= {\mathbb E}\left[(1-\sigma)^K\right], \end{equation} i.e. the survival probability of a Galton-Watson process with reproduction law $K$. \\
(ii) Assume ${\mathbb E} K >1$ and ${\mathbb E} K^2 < {\infty}$. Denote by $\sigma_K^2$ the variance of $K$ and fix some $\varepsilon$ with $0< \varepsilon <- \ln (1-q)$. As $n \to {\infty}$, we have the convergence in law, conditionally on $\{{\mathfrak T}_n \geq \varepsilon n\}$, $$ n^{-1/2} \big( N_n({\mathfrak T}_n) - n q \big) \cvlaw {\mathcal N}(0, \sigma_q^2), $$ with ${\mathcal N}(0, \sigma^2)$ a centered Gaussian with variance $\sigma^2$, and \begin{equation} \label{eq:varTCLq} \sigma_q^2=\frac{ q\sigma_K^2 (1-q)^2+ q(1-q) + (1-q)^2 \ln(1-q)}{[(1-q){\mathbb E} K-1]^{2}} \color{black} >0. \color{black} \end{equation} \end{Th}
We now state the main results of this paper, i.e. for the case when the connection network is the Erd\"os-R\'enyi graph $G(n,p)$. Now, a server starting to emit instantaneously exhausts its $K$ emissions in a burst. The time unit corresponds to the complete exhaustion of an emitter. Let $N_n^{er}(t)$ be the number of informed servers at time $t$.
\subsection{ First mode of transmission on the Erd\"os-R\'enyi graph} \label{sec:res1}
First of all, the Erd\"os-R\'enyi graph $G(n,p)$ is sampled on the vertex set $\cal N$ (each unoriented edge is kept independently with probability $p$ or removed with probability $1-p$), and one vertex is selected as the first informed server. Then, at each integer time, an informed server which is not yet exhausted is selected to emit its $K$ attempts in a burst. For each attempt, a target in $\cal N$ is selected (in the full population, including the emitter). If the target is already informed or if the corresponding edge is not in the graph, the attempt is lost. Otherwise, the target becomes informed. After all attempts are checked, the emitter is declared exhausted and the time is increased by one unit. The transmission ends at a finite time $ \tau_n^{er}$, \color{black} which is the first time when all informed servers are exhausted. \color{black}
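The description above translates directly into a simulation sketch (illustrative names and parameters, our own choices): the state of each edge is drawn by a Bernoulli($p$) coin at its first appearance and remembered afterwards, so the graph is revealed lazily.

```python
import random

def first_dynamics(n, p, sample_K, rng):
    """Dynamics (i) on G(n,p): each attempt aims at a uniform target in the
    full population; an edge's state is decided once, at its first
    appearance, and remembered.  Returns the final number of informed servers."""
    edge_state = {}                     # lazily revealed edges of G(n,p)
    informed = {rng.randrange(n)}       # the initially informed server
    active = list(informed)
    while active:
        emitter = active.pop()
        for _ in range(sample_K(rng)):              # burst of K attempts
            target = rng.randrange(n)               # may hit the emitter itself
            e = (min(emitter, target), max(emitter, target))
            is_open = edge_state.setdefault(e, rng.random() < p)
            if is_open and target not in informed:
                informed.add(target)
                active.append(target)
    return len(informed)

rng = random.Random(3)
n, p = 4000, 0.9
final = first_dynamics(n, p, lambda r: 4, rng)      # p * E[K] = 3.6 > 1
```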
Note that, because of the burst emission here, the {\it time scale is different} from that of Section \ref{sec:cgres}, where there is one emission at a time. \color{black} With $N_n^{er}(t)$ the number of informed servers at time $t$, \color{black} we are interested in the asymptotics of $$\tau_n^{er} = N_n^{er}( \tau_n^{er} ) = N_n^{er}({\infty}).$$ \color{black} The first equality holds since it takes one time unit to exhaust an informed server, and the last one holds since the process stops at $\tau_n^{er}$. \color{black}
We will encounter the above quantities with $K$ replaced by $\widehat K$ from (\ref{eq:Kh}); we denote them by the same symbols with a hat:
\color{black} In particular, \color{black}
$\widehat q =0=\widehat \sigma^{GW}$ if $p {\mathbb E} K \leq 1$, while for $p {\mathbb E} K >1$, $\widehat q \in (0,1)$ is the positive root of \begin{equation} \label{eq:hatq} \widehat q p {\mathbb E} K + \ln (1-{\widehat q})=0, \end{equation} and $\widehat \sigma^{GW}$ is the positive root of \begin{equation*} 1-\widehat \sigma^{GW}= {\mathbb E}\left[(1-\widehat \sigma^{GW})^{\widehat K}\right]= {\mathbb E}\left[(1-p \widehat \sigma^{GW})^{K}\right], \end{equation*} \color{black} that is, equation (\ref{eq:probasurvivalgw}) with hats. \color{black}
\begin{theo} \label{th:LLN} Assume ${\mathbb E} K^2<{\infty}$. Then, $$ \frac{ \tau_n^{er} }{n} \cvlaw \widehat q \times Ber(\widehat \sigma^{GW}) \;. $$ \end{theo}
The interesting case is of course ${\mathbb E} \widehat K=p {\mathbb E} K >1$, so that $\widehat \sigma^{GW}>0$. In this case, let also \begin{eqnarray*}
\widehat \sigma_{{\color{black}\hat{q}}}^2=\frac{ \widehat q\sigma_{\widehat K}^2 (1-\widehat q)^2+ \widehat q(1-\widehat q) + (1-\widehat q)^2 \ln(1-\widehat q)}{[(1-\widehat q)p {\mathbb E} K-1]^{2}}{\color{black}>0}, \end{eqnarray*} with $\sigma_{\widehat K}^2= p(1-p) {\mathbb E} K + p^2 \sigma_K^2 $ the variance of $\widehat K$.
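The identity $\sigma_{\widehat K}^2= p(1-p) {\mathbb E} K + p^2 \sigma_K^2$ follows from the law of total variance and can be checked by simulation; in the sketch below, the two-point law of $K$ and all names are our own illustrative choices.

```python
import random
import statistics

def sample_K_hat(K, p, rng):
    """One draw of K_hat: thin K attempts by independent Bernoulli(p) marks."""
    return sum(1 for _ in range(K) if rng.random() < p)

rng = random.Random(2)
p = 0.4
# Illustrative law: K = 1 or 3 with probability 1/2 each, so
# E[K] = 2 and Var(K) = 1; the formula then gives
# Var(K_hat) = p(1-p) E[K] + p^2 Var(K) = 0.48 + 0.16 = 0.64.
draws = [sample_K_hat(rng.choice((1, 3)), p, rng) for _ in range(300_000)]
var_hat = statistics.pvariance(draws)
mean_hat = statistics.fmean(draws)    # should be close to p * E[K] = 0.8
```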
\begin{theo} \label{th:TLC} Assume ${\mathbb E} K^2<{\infty}$ and $p {\mathbb E} K >1$. Fix $\varepsilon \in (0,\widehat q)$. Then, conditionally on $\{\tau^{er}_n > n \varepsilon\}$, we have convergence in law: $$ \frac{
\tau_n^{er} - n \widehat q }{\sqrt n} \cvlaw {\mathcal N}(0, \widehat \sigma_{{\color{black}\hat{q}}}^2)\;. $$ \end{theo}
\subsection{Main results for the second mode of transmission} \label{sec:res2}
Again, we start by sampling the Erd\"os-R\'enyi graph $G(n,p)$ and one vertex as the first informed server. Then, at each integer time, an informed server which is not yet exhausted is selected to emit its $K$ attempts in a burst, each attempt being aimed at a random target uniformly distributed among its neighbors in the graph.
(If a site has no neighbours, it wastes its resource without result, and after that the process continues.) If the target is already informed the attempt is lost; otherwise the target becomes informed. After all attempts are checked, the emitter is declared exhausted and the time is increased by one unit. The transmission ends at some finite time $ \bar \tau_n^{er}$, with $\bar \tau_n^{er}$ informed servers. In the following theorems $q$, $\sigma^{GW}$ and $\sigma_q$ are as in Section \ref{sec:cgres}.
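The second dynamics can be sketched in the same illustrative style: here the whole graph is sampled up front, and each burst aims at uniform targets among the emitter's actual neighbours, so the threshold involves ${\mathbb E}K$ alone. Names and parameters below are our own choices.

```python
import random

def second_dynamics(n, p, sample_K, rng):
    """Dynamics (ii): G(n,p) is sampled first; each emitter aims its K
    attempts at uniform targets among its actual neighbours.
    Returns the final number of informed servers."""
    nbrs = [[] for _ in range(n)]
    for i in range(n):                  # sample the whole graph up front
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    informed = {rng.randrange(n)}       # the initially informed server
    active = list(informed)
    while active:
        emitter = active.pop()
        for _ in range(sample_K(rng)):
            if not nbrs[emitter]:       # isolated vertex: resource wasted
                break
            target = rng.choice(nbrs[emitter])
            if target not in informed:
                informed.add(target)
                active.append(target)
    return len(informed)

rng = random.Random(5)
n, p = 1500, 0.5
final = second_dynamics(n, p, lambda r: 4, rng)     # E[K] = 4 > 1, for any fixed p
```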
\begin{theo} \label{th:LLN2} Assume ${\mathbb E} K^2<{\infty}$. Then, $$ \frac{\bar \tau^{er}_n}{n} \cvlaw q \times Ber( \sigma^{GW}) . $$ \end{theo}
\begin{theo} \label{th:TLC2} Assume ${\mathbb E} K^2<{\infty}$ and ${\mathbb E} K >1$. Fix $\varepsilon \in (0,q)$. Then, conditionally on $\{\bar \tau^{er}_n > n \varepsilon\}$, we have convergence in law: $$ \frac{ \bar \tau^{er}_n - n q }{\sqrt n} \cvlaw {\mathcal N}(0, \sigma_{q}^2). $$ \end{theo}
\subsection{Strategy of the proofs}
\color{black} We use the known results about the information process on the complete graph to derive results on the Erd\"os-R\'enyi graph.
We show that case (i) is similar to the complete graph with $\widehat K$ attempts. The difference is that in the latter model, the Bernoulli random variables in (\ref{eq:Kh}) (indicating the presence of the relevant edges) are regenerated independently at each transmission attempt, while in the former the state of an edge is determined at its first appearance. A coupling argument shows that this in fact makes little difference to the final number of vertices receiving the information.
In case (ii) we keep track of which edges are in a known state and the key argument is that with high probability only $o(n)$ edges out of a vertex will ever be in a known state. Hence the argument is to show that, most likely, there will be $O_P(1)$ transmissions in which there is a discrepancy between the models. To take care of the consequences of discrepancies, we delay them until the end of the process -- taking advantage of irrelevance of the order of transmission. Finally we show that these few extra transmissions make little difference to the final proportion of vertices receiving the information. \color{black}
\section{Construction from labelled trees}
\subsection{Labelled trees}
Let $\mathcal{W}=\cup_{m\geq 0}{\mathbb N}^m$ be the set of all finite words on the alphabet ${\mathbb N}=\{1, 2,\dots\}$. By convention ${\mathbb N}^0=\{ \o \}$
contains only one element, which can be interpreted as the empty word and which, in our formalism, will be the root of the tree. An element of $\mathcal{W}$ different from $\o$ is thus an $m$-tuple $u=(i_1,\dots,i_m)$ which, to simplify, will be denoted by $u=i_1\dots i_m$. The length of $u$, denoted by $|u|$, equals $m$ (with $|\o|=0$). If $j\in {\mathbb N}$, we denote by $uj$ the element $i_1\dots i_mj$. The elements of the form $uj$ are interpreted as the descendants of $u$. We will use the following total order relation on $\mathcal{W}$: we write $w\leq w'$ if $|w|<|w'|$, or if $|w|=|w'|$ and $w\leq_{lex} w'$ in the lexicographic order.
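This order compares length first and breaks ties lexicographically, so it can be rendered with a (length, word) key; a minimal Python illustration (our own, with words as tuples and the empty tuple playing the role of $\o$):

```python
def word_leq(w, wp):
    """Total order on finite words over {1,2,...}: shorter words come first,
    words of equal length are compared lexicographically."""
    return (len(w), w) <= (len(wp), wp)

# Words as tuples; the empty tuple () is the root (the empty word).
ordered = sorted([(1, 1), (2,), (), (1, 2)], key=lambda w: (len(w), w))
# length first, then lexicographic: (), (2,), (1, 1), (1, 2)
```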
A {\it rooted tree} $\mathcal{T}$ is an undirected simple connected graph without cycles and with a distinguished vertex. A {\em labelled tree} $(\mathcal{T},L)$ is a rooted tree $\mathcal{T}$ equipped with a label mapping $L$ from $\mathcal{T}$ to some set $\mathcal{N}$. The labelled trees we consider are connected subsets of $\mathcal{W}$ containing $\o$. The label set $\mathcal{N}=\{1,2,\ldots,n\}$ encodes the set of servers. For the sake of brevity, we use the short notation $Lv=L(v)$, which the reader will distinguish from concatenation.
\subsection{Construction and coupling} \label{sec:constr}
Let $n\geq 2$ and $\mathcal{N}=\{1,\dots,n\}$. On a suitable probability space $(\Omega, \mathcal{F}, {\mathbb P})$, we define the following independent random elements: \begin{itemize} \item[(i)] $(K_i)_{i\in \mathcal{N}}$ are i.i.d.\ non-negative integer random variables; \item[(ii)] $I_0$ is a uniform random variable on $\mathcal{N}$; \item[(iii)] $(I^i_k)_{(i,k)\in \mathcal{N}\times {\mathbb N}}$ are independent uniform random variables on $\mathcal{N}$; \item[(iv)] $({B}^i_k)_{(i,k)\in \mathcal{N}\times {\mathbb N}}$ are independent Bernoulli random variables of parameter $p$. \end{itemize}
In the next sections, we construct couplings between different labelled trees using the above random elements. The trees $\mathcal{T}({\infty})= \lim_{t \nearrow {\infty}}\mathcal{T}(t)$ are limits of sequences constructed dynamically, discovering step by step their nodes and their labels. At each step $t$, the labels of the tree $\mathcal{T}(t)$ are all distinct. \color{black} They represent the {\em informed} servers at time~$t$ and
will be partitioned \color{black}
into two (as in Section \ref{sec:er}) or three (as in Section \ref{sec:2frigos} and the end of Section \ref{sec:cg}) subsets:
\begin{equation} \label{eq:partition} \mathcal{T}(t) = \mathcal{A}(t) \cup \mathcal{E}(t) \cup \mathcal{D}(t), \qquad \mathcal{A}(t), \mathcal{E}(t), \mathcal{D}(t) \; {\rm disjoint}, \end{equation} where $\mathcal{E}(t)$ encodes the exhausted servers (those which have already used their resource), $\mathcal{A}(t)$ encodes the active servers (those which are waiting to use \color{black} their resource and ready to transmit),
\color{black}
and $\mathcal{D}(t)$
\color{black} encodes the set of delayed servers (those which have not started to transmit but are temporarily delayed). \color{black} In Section \ref{sec:er}, $\mathcal{D}(t)$ is empty.
\begin{rem} The sets in (\ref{eq:partition}) and the mapping $L$ depend on the number $n$ of servers. In general, for the sake of simplicity, we do not indicate explicitly the dependence in the notations. \end{rem}
In the next section, we construct transmission processes on the complete graph with law $\widehat{K}$ and on the Erd\"os--R\'enyi random graph using the above elements; this yields a coupling between these processes, allowing us to transfer results from one to the other.
\color{black} The reader may wonder, here or below, why we introduce so many independent r.v.'s in the construction, since a given edge of the Erd\"os-R\'enyi graph is decided to be open or closed only once, namely at its first appearance. The reason is that coupling the process with one on the complete graph requires deciding each edge more than once (cf.~Sections \ref{sec:er} and \ref{sec:cg}). \color{black}
\section{First mode of emission: Burst emission}
We model the transmission of a message on the Erd\"os-R\'enyi random graph with parameter $p$. Each vertex $i$ of the graph is a server with resource $K_i$. Initially, one vertex receives the message and tries to send it to its neighbors: it first chooses, uniformly among all the servers, one server (the target) to which it will try to send the message. If the edge between these two servers is present, the information is transmitted; otherwise it is not. The emitting server repeats this operation until it has exhausted its own resource $K_i$. If the edge is present but the target server already knows the information, the emitter just loses one resource unit. When the emitter has exhausted its resource, we pick a new server among the informed ones, and it starts to emit according to the same procedure. The process stops when all the informed servers have exhausted their resources.
\subsection{Erd\"os-R\'enyi graph} \label{sec:er}
\begin{figure}\label{fig1}
\end{figure}
We construct dynamically the random labelled tree $\mathcal{T}^{er}(t)$ in the following way. In the rest of this section, we abbreviate $L^{er}$ by $L$ for clarity. At $t=0$, using $I_0$, we discover the label of the root $\o$ and we set $L(\o)=I_0$. Denoting by $i$ the value of $I_0$ for short, we consider a realization of $K_i$, ${B}^i_1,\dots, {B}^i_{K_i}$ and $I^i_1,\dots,I^i_{K_i}$. To each descendant $l$, $1\leq l \leq K_i$, of the root we associate the label $I^i_{l}$. Initially, we define \begin{align} \nonumber X(0)&=\o,\\
\mathcal{X}(0)&=\{w\in \mathcal{W} : |w|=1, 1\leq w\leq K_i\},\nonumber\\
\mathcal{X}^{er}(0)&=\{w\in \mathcal{W} : |w|=1, 1\leq w\leq K_i, {B}^i_w=1\},\nonumber\\ L(w) &= I^i_w, \qquad w \in \mathcal{X}(0),\nonumber\\ \mathcal{A}^{er}(0)&=\{w\in \mathcal{X}^{er}(0) : L(w)\neq i, L(w)\neq L(w'), w'<w, w'\in
\mathcal{X}(0)\},
\nonumber\\ \mathcal{T}^{er}(0)&=\{\o\}\cup \mathcal{A}^{er}(0). \label{eq:er0} \end{align} With the process $(X(t), \mathcal{T}^{er}(t),
\mathcal{A}^{er}(t))$ at time $t$ and $L$ defined on $\mathcal{T}^{er}(t)$, the value at the next step $t+1$ is defined by: \begin{itemize} \item If $\mathcal{A}^{er}(t)$ is non empty, we let $X(t+1)$ be its first element in the total order $\leq$, \[ X(t+1)=\inf\{w\in \mathcal{A}^{er}(t)\},\phantom{*}\mbox{denoted by $v$} \] and we consider a realization of $K_{Lv}$, ${B}^{Lv}_1,\dots, {B}^{Lv}_{K_{Lv}}$ and $I^{Lv}_1,\dots,I^{Lv}_{K_{Lv}}$. To each descendant $vl$, $1\leq l \leq K_{Lv}$ we associate the label $I^{Lv}_{l}$. Then, we update the sets of vertices \begin{align} \mathcal{X}(t+1)&=\{vl \in \mathcal{W} : 1\leq l \leq K_{Lv}\},\nonumber\\ L(w) &= I^{Lv}_l, \qquad w=vl \in \mathcal{X}(t+1),\nonumber\\ \mathcal{X}^{er}(t+1)&=\{vl \in \mathcal{W} : 1\leq l \leq K_{Lv}, {B}^{Lv}_l=1\},\nonumber\\ \mathcal{A}^{er}(t+1)&=(\mathcal{A}^{er}(t)\setminus \{v\})\cup\{w\in \mathcal{X}^{er}(t+1) : L(w)\notin L(\mathcal{T}^{er}(t)), \nonumber\\ &\qquad \qquad \qquad L(w)\neq L(w'), w'<w, w'\in \mathcal{X}(t+1)\} ,\nonumber\\ \mathcal{T}^{er}(t+1)&=\mathcal{T}^{er}(t)\cup \mathcal{A}^{er}(t+1). \label{eq:ert} \end{align}
\item If $\mathcal{A}^{er}(t)$ is empty, we set $\tau_n^{er}=t$ and the construction is stopped. \end{itemize}
At each step of the construction, we set $\mathcal{E}^{er}(t+1)=\mathcal{E}^{er}(t) \cup \{X(t+1)\}$ starting from $\mathcal{E}^{er}(0)=\{\o\}$, and $\mathcal{D}^{er}(t)\equiv \emptyset$. Hence $v=X(t+1)$ is moved from active to exhausted at time $t+1$, and the partition (\ref{eq:partition}) reduces to two subsets in the case of the Erd\"os-R\'enyi graph. \qed
The construction is illustrated by Figure \ref{fig1}.
\begin{rem} \label{rem:L} \color{black} (i) Note that the definitions of active servers in (\ref{eq:er0}) and (\ref{eq:ert})
require that the label has not appeared before. Indeed the first Bernoulli variable determines the status of the edge. \color{black}
(ii) Note that $L^{er}$ has been defined in this process on a larger tree than needed. In fact, we consider its restriction to $\mathcal{T}^{er}({\infty})=\mathcal{T}^{er}(\tau_n^{er})$, which is indeed injective. This procedure of restriction is needed in all the subsequent constructions. \end{rem}
It is not completely obvious that this construction corresponds to the description given at the beginning of Section \ref{sec:res1}; however, this is the case, as we now show. Here, an edge is open or closed according to the Bernoulli variable used at the first appearance of the edge in the construction. Here is a formal definition. Denote by $\cal E$ the set of unoriented edges on $\cal N$, i.e. the set of $e = \langle i,j \rangle$, $i, j \in {\cal N}$ (allowing self-edges). We say that the edge $e$ has appeared in the construction if there is some $t \;(0 \leq t \leq \tau_n^{er}-1)$ and some $\ell \leq K_{L[X(t)]}$ such that $$e= \langle L[X(t)], L[X(t)\ell] \rangle.$$ We denote by $t(e)$ and $\ell(e)$ the smallest (in the lexicographic order) $t$ and $\ell$ with the above property, by $Ap(t)=\{e: t(e)=t\}$ the set of edges which have appeared at time $t$, and $Ap=\cup_{t \geq 0} Ap(t)$. With an additional i.i.d. Bernoulli($p$) family $({B}^i_{-k})_{i \in {\cal N}, k \geq 1}$ independent of the variables in (i--iv), define $$ B(e) = \left\{ \begin{array}{lll} B^i_k & {\rm if} & e \in Ap, i=L[X(t(e))], k=\ell(e), \\ B^i_{-j} & {\rm if} & e \notin Ap, e=\langle i,j \rangle, i \leq j. \end{array} \right. $$
\begin{prop} \label{prop:er=er} The family $(B(e), e \in {\cal E})$ is i.i.d.~Bernoulli($p$), and is independent of the family $(I^i_k)_{i \in {\cal N}, k \geq 1}$,$ (K_i)_{i \in {\cal N}}$, $I_0$. \end{prop}
\noindent The proposition shows that the above construction coincides with the description of the information transmission process on the Erd\"os-R\'enyi graph as given in the beginning of Section~\ref{sec:res1}, in which the random graph is defined by the $B(e)$'s, and the dynamics uses the variables $I_0, I^i_\cdot, K_i$. Though the proof is standard, we give it for completeness.
{\it Proof.} For bounded measurable functions $f_e, g_i, h$ defined on the appropriate spaces, we compute \begin{eqnarray*} {\mathbb E} \prod_{e \in {\cal E}} f_e(B(e)) \times \prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) =\qquad \qquad \qquad \qquad \qquad \qquad \qquad \\ \sum_{T, A(\cdot)} {\mathbb E} \prod_{e \in {\cal E}} f_e(B(e)) \times \prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) {\mathbf 1}_{\{\tau_n^{er}=T, Ap(\cdot)=A(\cdot)\}} \end{eqnarray*} where $A(\cdot)=(A(t), t=0,\ldots, T-1)$ ranges over the collections of $T$ disjoint subsets of $\cal E$. With $A=\cup_t A(t)$, by independence of $({B}^i_{-k})_{i,k}$ from the other variables, the expectation in the last term is equal to \begin{eqnarray} \label{eq:Kbg}
\Big( \prod_{e \notin A} {\mathbb E} f_e(B(e)) \Big) \times \Big( {\mathbb E} \prod_{e \in A} f_e(B(e)) \times
\prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) {\mathbf 1}_{\{\tau_n^{er}=T, Ap(\cdot)=A\}} \Big). \end{eqnarray} Let us write the last expectation as \begin{eqnarray*}
{\mathbb E} \Big(\prod_{e \in A(T-1)} f_e\big( B^{L[X(t(e))]}_{ \ell(e)}\big) \times
\prod_{e \in A \setminus A(T-1)} f_e(B(e)) \times
\prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) {\mathbf 1}_{\{\tau_n^{er}=T, Ap(\cdot)=A\}} \Big) \qquad \\ \stackrel{\rm indep.}{=}
\Big( \prod_{e \in A(T-1)} {\mathbb E} f_e( B(e))\Big) \times \Big( {\mathbb E}
\prod_{e \in A \setminus A(T-1)} f_e(B(e)) \times
\prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) {\mathbf 1}_{\{\tau_n^{er}=T, Ap(\cdot)=A\} }\Big)\\ \stackrel{\rm iterating}{=} \Big( \prod_{e \in A} {\mathbb E} f_e( B(e))\Big) \times \Big( {\mathbb E}
\prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) {\mathbf 1}_{\{\tau_n^{er}=T, Ap(\cdot)=A\}} \Big). \qquad\qquad\qquad\qquad\qquad \end{eqnarray*} The first factor in the last term complements the one in (\ref{eq:Kbg}), and we finally get \begin{align*} \lefteqn{{\mathbb E} \prod_{e \in {\cal E}} f_e(B(e)) \times \prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0)}\phantom{**************}\nonumber\\ &= \prod_{e \in {\cal E}} {\mathbb E} f_e( B(e)) \Big( \sum_{T, A(\cdot)} {\mathbb E}
\prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) {\mathbf 1}_{\{\tau_n^{er}=T, Ap(\cdot)=A\}} \Big) \\ &=
\prod_{e \in {\cal E}} {\mathbb E} f_e( B(e)) \times \Big( {\mathbb E}
\prod_{i \in {\cal N}} g_i(K_i, I^i_\cdot) \times h(I_0) \Big), \end{align*} by summing over $T, A(\cdot)$. This proves the proposition. \qed
\subsection{Complete graph} \label{sec:cg}
{\color{black} With the above ingredients, we construct the information transmission model on the complete graph according to two different dynamics, with resource distribution $\widehat K$. The first construction is simple and natural (and is close to that performed in Section 2.1 of \cite{CDS12}),} but the second one provides a useful coupling with the transmission model on the Erd\"os-R\'enyi graph.
{\bf Sequential construction.} We construct dynamically the random tree $\mathcal{T}^{cg,s}(t)$ together with the label mapping $L^{cg,s}$, which we abbreviate for clarity by $L=L^{cg,s}$ in the construction. The exploration vertex $X(t)$ that we introduce below also depends on the dynamics, $X=X^{cg,s}$, but we omit the superscript for the same reason.
At $t=0$, using $I_0$, we discover the label of the root, $L(\o)=I_0$. Suppose that $I_0=i$, and consider the realization of $K_i$, ${B}^i_1,\dots, {B}^i_{K_i}$ and $I^i_1,\dots,I^i_{K_i}$. The label of each descendant $l$, $1\leq l \leq K_i$, of the root is $I^i_{l}$. Initially, we set \begin{align} \nonumber X(0)&=\o, \\
\mathcal{X}^{cg,s}(0)&=\{w\in \mathcal{W} : |w|=1, 1\leq w\leq K_i, {B}^i_w=1\},\nonumber\\ L(w) &= I^i_w, \qquad w \in \mathcal{X}^{cg,s}(0),\nonumber\\ \mathcal{A}^{cg,s}(0)&=\{w\in \mathcal{X}^{cg,s}(0) : L(w) \neq i, L(w)\neq L(w'), w'<w, w'\in \mathcal{X}^{cg,s}(0)\},\nonumber\\ \mathcal{T}^{cg,s}(0)&=\{\o\}\cup \mathcal{A}^{cg,s}(0). \label{eq:cgs0} \end{align} With the process $(\mathcal{T}^{cg,s}(t),\mathcal{X}^{cg,s}(t), \mathcal{A}^{cg,s}(t))$ at time $t$, its value at the next step $t+1$ is defined by: \begin{itemize} \item If $\mathcal{A}^{cg,s}(t)$ is non empty, we let $X(t+1)$ be its first element, \[ X(t+1)=\inf\{w\in \mathcal{A}^{cg,s}(t)\},\phantom{*}\mbox{denoted by $v$} \] and we consider a realization of $K_{Lv}$, ${B}^{Lv}_1,\dots, {B}^{Lv}_{K_{Lv}}$ and $I^{Lv}_1,\dots,I^{Lv}_{K_{Lv}}$. To each descendant $vl$, $1\leq l \leq K_{Lv}$ we associate the label $I^{Lv}_{l}$. Then, we update the sets: \begin{align} \mathcal{X}^{cg,s}(t+1)&=\{vl \in \mathcal{W} : 1\leq l \leq K_{Lv}, {B}^{Lv}_l=1\},\nonumber\\ L(w) &= I^{Lv}_l, \qquad w=vl \in \mathcal{X}^{cg,s}(t+1),\nonumber\\ \mathcal{A}^{cg,s}(t+1)&=(\mathcal{A}^{cg,s}(t)\setminus \{v\})\cup\Big\{w\in \mathcal{X}^{cg,s}(t\!+\!1) : L(w)\notin L(\mathcal{T}^{cg,s}(t)), \nonumber\\ &\phantom{**************}L(w)\neq L(w'), w'<w, w'\in \mathcal{X}^{cg,s}(t\!+\!1)\Big\},\nonumber\\ \mathcal{T}^{cg,s}(t+1)&=\mathcal{T}^{cg,s}(t)\cup \mathcal{A}^{cg,s}(t+1). \label{eq:cgst} \end{align}
\item If $\mathcal{A}^{cg,s}(t)$ is empty, we set $\tau_n^{cg,s}=t$ and the transmission stops. \end{itemize}
At each step of the construction, we set $\mathcal{E}^{cg,s}(t+1)=\mathcal{E}^{cg,s}(t) \cup \{X(t+1)\}$ starting from $\mathcal{E}^{cg,s}(0)=\{\o\}$, and $\mathcal{D}^{cg,s}(t)\equiv \emptyset$, so that the partition (\ref{eq:partition}) reduces again to two subsets. \qed
\begin{rem} Observe that the condition ``$L(w)\neq L(w')$, $w'<w$" in the definition of $\mathcal{A}^{cg,s}(t)$ avoids counting the same label twice. We also emphasize that \eqref{eq:cgs0}--\eqref{eq:cgst} and \eqref{eq:er0}--\eqref{eq:ert} differ by the set $\mathcal{X}^\cdot(t)$ for $w'$ in the next-to-last line of the formulae. In the Erd\"os-R\'enyi construction the same Bernoulli variable is in force each time an edge is used, whereas a fresh Bernoulli variable is needed on the complete graph. \end{rem}
In the next proposition we show that this construction yields the information transmission model on the complete graph from \cite{CDS12} with distribution $\widehat K$. This fact is necessary in order to use known results on the complete graph. The time scales differently in the two constructions. To relate them, we introduce, for all $i\in \mathcal{N}$, \begin{equation}\nonumber \widehat K_i = \sum_{k=1}^{ K_i} B^i_k, \end{equation} and, for $t=0,1,\ldots, \tau_n^{cg,s}$, $\widehat R=\widehat R^{cg,s}_n$ by \begin{equation}\nonumber \widehat R(t)= \sum_{r=0}^{t-1} \widehat K_{LX(r)},\qquad \widehat R(0)=0, \end{equation} where we recall that $X=X^{cg,s}$ and $L=L^{cg,s}$. We also define $\widehat N_n^{cg,s}, \widehat S_n^{cg,s}$ starting from the initial configurations $\widehat N_n^{cg,s}(0)=1, \widehat S_n^{cg,s}(0)= \widehat K_{L\o}$, with the following evolution. \begin{itemize} \item For $s \in ]\widehat R(t),\widehat R(t+1)]$ with $t \in [0,\tau_n^{cg,s}[$, we consider the smallest integer $\ell = \ell_s\in [1, K_{LX(t)}]$ such that $ \sum_{k=1}^{ \ell} B^{LX(t)}_k=s-\widehat R(t)$, and we define \begin{eqnarray} \label{eq:dodobiento} \widehat N_n^{cg,s}(s) \!\!\!\! &=& \!\!\!\! \widehat N_n^{cg,s}(s-1)+ \1{X(t)\ell \in \mathcal{A}^{cg,s}(t)},\\ \nonumber \widehat S_n^{cg,s}(s) \!\!\!\! &=& \!\!\!\! \widehat S_n^{cg,s}(s-1)+ \1{X(t) \ell \in \mathcal{A}^{cg,s}(t)} \times \widehat K_{L\big(X(t)\ell\big)}-1 , \end{eqnarray} where $X(t) \ell$ denotes by concatenation a direct child of $X(t)$ in the tree. We check from \eqref{eq:NetSaRt} below that \begin{equation} \label{eq:flb} \widehat R( \tau_n^{cg,s})= \inf\{ s \geq 0: \widehat S_n^{cg,s}(s)=0\}. \end{equation} \item After that time the process stops: $\widehat N_n^{cg,s}(s)=\widehat N_n^{cg,s}(\widehat R(\tau_n^{cg,s}))$ and $\widehat S_n^{cg,s}(s)=0$ for $s \geq \widehat R(\tau_n^{cg,s})$. \end{itemize}
Note, for further use, that by summing \eqref{eq:dodobiento}, we find for all $t$, \begin{eqnarray} \nonumber \widehat N_n^{cg,s}(\widehat R(t))&=&{\rm card} \;\mathcal{T}^{cg,s}(t), \qquad t \geq 0,\\ \label{eq:NetSaRt} \widehat S_n^{cg,s}(\widehat R(t))&=& \sum_{u \in \mathcal{A}^{cg,s}(t)} \widehat K_{Lu} , \qquad t \leq \tau_n^{cg,s}. \end{eqnarray}
The process $(\widehat S_n^{cg,s}, \widehat N_n^{cg,s}) (\cdot)$ is the one considered in \cite{CDS12}, i.e. we recover the dynamical definition of the information transmission process on the complete graph:
\begin{prop} \label{prop:cgs} $ \left( \widehat S_n^{cg,s} (s) , \widehat N_n^{cg,s} (s) \right)_{s\geq 0} $
is a Markov chain with transitions given by (\ref{eq:CDSdyn}) and resource variable $\widehat K$, stopped when the first coordinate vanishes. Moreover, we have equality in law $$(\widehat R( \tau_n^{cg,s}), {\rm card} \;\mathcal{T}^{cg,s}({\infty}))
\eqlaw ({\mathfrak T}_n, N_n ({\infty})).$$ \end{prop} $\Box$ From the independence of the random elements (i)--(iv) in Section \ref{sec:constr}, it is a standard exercise to check that it is a Markov chain, and to deduce from (\ref{eq:dodobiento}) that the transition probability
is given by
\begin{eqnarray*}
(\sigma, \nu) \longrightarrow
(\sigma + B \widehat K -1, \nu +B),
\end{eqnarray*} where $\widehat K$ and $B$ are independent variables, $\widehat K$ with law (\ref{eq:Kh}) and $B$ Bernoulli with parameter $1-\nu/n$. Together with the absorption rule at the random time defined by (\ref{eq:flb}), this proves the first claim. The other one then follows from (\ref{eq:NetSaRt}) and \eqref{eq:flb}. \qed
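The stopped chain of Proposition \ref{prop:cgs} is easy to simulate. The following Python sketch (an illustration only, not part of the proofs; the function names and the argument \texttt{khat\_sampler}, standing for a sampler of the law (\ref{eq:Kh}), are ours) implements the transition $(\sigma, \nu) \to (\sigma + B \widehat K -1, \nu +B)$ with $B$ Bernoulli of parameter $1-\nu/n$, stopped when the first coordinate vanishes.

```python
import random

def transmission_chain(n, khat_sampler, rng):
    """Simulate the stopped chain (S, N): each emission hits a uniform
    target, hence an uninformed one with probability 1 - N/n; a newly
    informed target brings a fresh resource drawn from khat_sampler."""
    nu = 1                       # N: number of informed servers
    sigma = khat_sampler(rng)    # S: total remaining resource of informed servers
    while sigma > 0:
        b = 1 if rng.random() < 1 - nu / n else 0
        sigma += b * khat_sampler(rng) - 1
        nu += b
    return nu, sigma             # sigma == 0 at the stopping time
```

Since each step changes $\sigma$ by at least $-1$, the first coordinate hits $0$ exactly, matching the absorption rule (\ref{eq:flb}).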
Propositions \ref{prop:er=er} and \ref{prop:cgs} mean that we have a coupling between the information transmission process on the two graphs. The main problem with it is that after the first discrepancy between $\mathcal{T}^{er}(t)$ and $\mathcal{T}^{cg,s}(t)$ occurs, the two constructions diverge and we lose track of the differences. For instance, we have $\mathcal{A}^{er}(t) \subset \mathcal{A}^{cg,s}(t)$ for $t=0$, but it may not be so at time $t=1$ if the smallest element of $\mathcal{A}^{cg,s}(0)$ is not in $\mathcal{A}^{er}(0)$. Therefore we need a more subtle construction, proceeding with common elements as much as possible. We {\it delay}
the elements $\mathcal{D}^{cg,d}$ corresponding to servers which are informed in the {\it complete graph} dynamics but not yet informed on the Erd\"os-R\'enyi graph,
performing the construction with the common servers as much as possible. Thus the construction for the complete graph remains close to the one for the Erd\"os-R\'enyi graph.
{\bf Delayed construction.} We construct dynamically the random tree $\mathcal{T}^{cg,d}(t)$ and the labelling $L=L^{cg,d}$ {\em simultaneously} with that on the Erd\"os-R\'enyi graph according to (\ref{eq:er0}), (\ref{eq:ert}). Initially, in addition to the sets defined in (\ref{eq:er0}), we also consider \begin{align} \mathcal{D}^{cg,d}(0)&= \{w\in \mathcal{X}^{er}(0): L(w)\neq i, L(w)=L(w') \text{ for some } w'<w, w' \in \mathcal{X}(0)\setminus \mathcal{X}^{er}(0),\nonumber\\ &\qquad \qquad L(w)\neq L(w''), w''<w, w'' \in \mathcal{X}^{er}(0)\} , \nonumber\\ \mathcal{T}^{cg,d}(0)&=\{\o\}\cup \mathcal{A}^{er}(0)\cup \mathcal{D}^{cg,d}(0). \label{eq:cgd0} \end{align} For the delayed dynamics on the complete graph, the partition in (\ref{eq:partition}) has three terms: $\mathcal{E}(0)=\{\o\}, \mathcal{A}=\mathcal{A}^{er}(0)$ and $\mathcal{D}=\mathcal{D}^{cg,d}(0)$. The label function is defined in (\ref{eq:er0}).
With the process $(\mathcal{T}^{er}(t), \mathcal{T}^{cg,d}(t), \mathcal{A}^{er}(t), \mathcal{D}^{cg,d}(t))$ and $L$ at time $t$, the value at the next step $t+1$ is defined by:
\begin{itemize}
\item If $\mathcal{A}^{er}(t)$ is non empty, we follow all the prescriptions in (\ref{eq:ert}), and we also define $X^{cg,d}(t)=X^{er}(t)$, denoted by $X(t)$ therein, \begin{align*}
C(t+1)&=\{w\in \mathcal{X}^{er}(t\!+\!1) : L(w)\notin L(\mathcal{T}^{cg,d}(t)), L(w)= L(w')\text{ for some }w'\!<\!w, \\ & \qquad w'\in \mathcal{X}(t\!+\!1)\setminus \mathcal{X}^{er}(t\!+\!1), L(w)\neq L(w''), w''<w, w'' \in \mathcal{X}^{er}(t+1)\}.
\end{align*} (These are the nodes whose labels are informed for the first time during the current burst on the complete graph, but not yet informed on the Erd\"os-R\'enyi graph. They will be placed in the set $\mathcal{D}^{cg,d}$ of delayed servers.) We also define the set \[F(t+1)=\{ w\in \mathcal{W}: w\in \mathcal{D}^{cg,d}(t), L(w)\in L(\mathcal{A}^{er}(t+1))\}.\] We then update the sets \begin{align} \mathcal{D}^{cg,d}(t+1)&=\{w\in \mathcal{W}: w\in \mathcal{D}^{cg,d}(t), L(w)\notin L(\mathcal{A}^{er}(t+1))\}\cup C(t+1),\nonumber\\ \mathcal{T}^{cg,d}(t+1)&=(\mathcal{T}^{cg,d}(t)\setminus F(t+1)) \cup \mathcal{A}^{er}(t+1)\cup \mathcal{D}^{cg,d}(t+1). \label{eq:2k10} \end{align} During this step, the mapping $L^{cg,d}=L^{er}=L$ is extended according to the rule in (\ref{eq:ert}). The partitions (\ref{eq:partition}) for $\mathcal{T}^{er}(t)$ and $\mathcal{T}^{cg,d}(t)$ are then given by $\mathcal{A}=\mathcal{A}^{er}(t), \mathcal{E}=\mathcal{E}(t)$ (defined by $\mathcal{E}(t+1)=\mathcal{E}(t)\cup\{X(t+1)\}$) for both cases, and $\mathcal{D}=\emptyset$ for the first one but $\mathcal{D}=\mathcal{D}^{cg,d}(t)$ for the second one. It is therefore natural to set $\mathcal{A}^{cg,d}(t)=\mathcal{A}^{er}(t)$ for all such $t$'s. This step is in force till the first time when $\mathcal{A}^{er}(t)= \emptyset$, i.e. at time $\tau_n^{er}=t$ when the information process on the Erd\"os-R\'enyi graph ceases to evolve, after which we proceed as follows. \item If $\mathcal{A}^{er}(t)$ is empty and $\mathcal{D}^{cg,d}(t)$ is non empty, we set $X^{cg,d}(t+1)=X(t+1)$ with \[ X(t+1)=\inf\{w\in \mathcal{D}^{cg,d}(t)\},\phantom{*}\mbox{denoted by $v$} \] and we obtain a realization of $K_{Lv}$, ${B}^{Lv}_1,\dots, {B}^{Lv}_{K_{Lv}}$ and $I^{Lv}_1,\dots,I^{Lv}_{K_{Lv}}$. To each descendant $vl$, $1\leq l \leq K_{Lv}$ we associate the label $I^{Lv}_{l}$. 
Then, we define \begin{align*} \mathcal{X}^{cg,d}(t+1)&=\{vl \in \mathcal{W} : 1\leq l \leq K_{Lv},{B}^{Lv}_l=1\},\\ L(w) &= I^{Lv}_l, \qquad w=vl \in \mathcal{X}^{cg,d}(t+1) \qquad ({\rm with} \; L=L^{cg,d}).\nonumber \end{align*}
We then update the sets \begin{align} \mathcal{D}^{cg,d}(t+1)&=\Big(\mathcal{D}^{cg,d}(t)\setminus \{v\} \Big) \cup C(t+1),\nonumber\\ C(t+1) &= \{w\in \mathcal{X}^{cg,d}(t \! +\!1) : L(w)\!\notin \!L(\mathcal{T}^{cg,d}(t)), L(w)\!\neq \!L(w'), w'\!<\!w, w'\in \mathcal{X}^{cg,d}(t\!+\!1)\},\nonumber\\ \mathcal{T}^{cg,d}(t+1)&=\mathcal{T}^{cg,d}(t) \cup \mathcal{D}^{cg,d}(t+1). \label{eq:cgdt} \end{align} Recalling that $\mathcal{A}^{cg,d}(t)=\mathcal{A}^{cg,d}(\tau_n^{er})=\emptyset$ for $t > \tau_n^{er}$, we see that the complementary set $$\mathcal{E}^{cg,d}(t+1)= \mathcal{T}^{cg,d}(t+1) \setminus \mathcal{D}^{cg,d}(t+1)$$ evolves like $\mathcal{E}^{cg,d}(t+1)=\mathcal{E}^{cg,d}(t) \cup \{X(t+1)\}$.
\item If $\mathcal{A}^{er}(t)$ and $\mathcal{D}^{cg,d}(t)$ are empty, the evolution is stopped and we denote by $\tau_n^{cg,d}$ the smallest such time $t$. \qed
\end{itemize}
\begin{prop} \label{prop:cgd} We have the equality in law of the processes \begin{align} \label{eq:eqlawcg1} \big( {\rm card} \, \mathcal{T}^{cg,s}(t), {\rm card} \; \mathcal{A}^{cg,s}(t), {\rm card} \, \mathcal{E}^{cg,s}(t) \big)_{t\geq 0} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \\ \eqlaw \big( {\rm card} \, \mathcal{T}^{cg,d}(t), {\rm card} (\mathcal{A}^{cg,d}(t) \cup \mathcal{D}^{cg,d}(t)), {\rm card} \, \mathcal{E}^{cg,d}(t) \big)_{t\geq 0}\;. \quad \nonumber \end{align} In particular, \begin{equation} \label{eq:eqlawcg2} \big( {\rm card} \, \mathcal{T}^{cg,s}({\infty}), \tau_n^{cg,s}\big) \eqlaw \big( {\rm card} \, \mathcal{T}^{cg,d}({\infty}), \tau_n^{cg,d} \big)\;. \quad \nonumber \end{equation} \end{prop}
$\Box$ Since for all $t$, \begin{eqnarray*} {\rm card} \, \mathcal{T}^{cg,s}(t)&=& {\rm card} \; \mathcal{A}^{cg,s}(t) + {\rm card} \, \mathcal{E}^{cg,s}(t),\\
{\rm card} \, \mathcal{T}^{cg,d}(t)&=& {\rm card} (\mathcal{A}^{cg,d}(t) \cup \mathcal{D}^{cg,d}(t))+ {\rm card} \, \mathcal{E}^{cg,d}(t),
\end{eqnarray*} and for $i=s, d$, $$
{\rm card} \, \mathcal{E}^{cg,i}(t)=(t \wedge \tau_n^{cg,i})+1,
$$
it is enough to show that
\begin{equation} \nonumber
\big({\rm card} \; \mathcal{A}^{cg,s}(t); t \geq 0 \big) \eqlaw \big( {\rm card} (\mathcal{A}^{cg,d}(t) \cup \mathcal{D}^{cg,d}(t)); t \geq 0 \big). \end{equation} This relation follows from the equality of the transitions $$ {\mathbb P}\Big( {\rm card}\, \mathcal{A}^{cg,s}(t+1)= \cdot \mid {\rm card}\, \mathcal{A}^{cg,s}(s)=a_s, {\rm card}\, \mathcal{E}^{cg,s}(s)=s+1, s \leq t \Big) $$ and $$ {\mathbb P}\Big( {\rm card}(\mathcal{A}^{cg,d}(t+1) \cup \mathcal{D}^{cg,d}(t+1))= \cdot \mid
{\rm card}(\mathcal{A}^{cg,d}(s) \cup \mathcal{D}^{cg,d}(s))=a_s, {\rm card}\,\mathcal{E}^{cg,d}(s)=s+1, s \leq t \Big)\;, $$ for all $t \geq 0$. Indeed, from (\ref{eq:cgst}), and from (\ref{eq:ert}) and (\ref{eq:cgdt}), both transitions are equal to the law of the variable $a_t-1+Y$, with $Y$ the number of new coupons obtained in $\widehat K$ attempts in a coupon collector process with $n$ different coupons, starting with $a_t+t+1$ coupons already obtained.
\qed
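The coupon-collector step used in the identification of the transitions above can be sketched in a few lines of Python (an illustration only; the function name \texttt{new\_coupons} is ours): it samples the number $Y$ of new coupons obtained in a given number of uniform attempts among $n$ coupons, when some coupons have already been collected.

```python
import random

def new_coupons(n, already, attempts, rng):
    """Draw `attempts` coupons uniformly among n types, `already` of which
    have been collected before; return the number of new types obtained."""
    seen = set(range(already))   # relabel the collected types as 0..already-1
    fresh = 0
    for _ in range(attempts):
        c = rng.randrange(n)
        if c not in seen:
            seen.add(c)
            fresh += 1
    return fresh
```

By exchangeability, only the number of already collected coupons matters, which justifies the relabelling in the first line of the function.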
Hence the sequential and delayed constructions are equivalent to the standard transmission process on the complete graph. Here is a direct consequence of Propositions \ref{prop:cgs} and \ref{prop:cgd}.
\begin{cor}\label{cor:taucgsd}
It holds $\tau_n^{cg,s} \eqlaw \tau_n^{cg,d}$ and $\widehat R( \tau_n^{cg,d}) \eqlaw {\mathfrak T}_n$. \end{cor}
In fact, we have a stronger result.
\begin{prop} \label{prop:2cg} It holds
$L(\mathcal{T}^{cg,s}(\infty))=L(\mathcal{T}^{cg,d}(\infty))$ for all $\omega$. \end{prop}
$\Box$ This set depends only on the arrows, not on the order. Mathematically, for $i, j \in {\cal N}$, write $i \leadsto j$ if there exists $1 \leq k \leq K_i$ with $B^i_k=1$ and $I^i_k=j$. Then, it is not difficult to see that $L(\mathcal{T}^{cg,s}(\infty)) =L(\mathcal{T}^{cg,d}(\infty))$ because both are equal to the union $$
\cup_{m=0}^{{\infty}} \{i \in {\cal N}: \exists i_0,\ldots i_m \in {\cal N}, i_0=I_0, i_m=i, i_{k} \leadsto i_{k+1}, 0\!\leq \!k \!\leq \!m\!-\!1\}, $$ the above set being understood as $ \{I_0\} $ for $m=0$. \qed
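The characterization of $L(\mathcal{T}^{cg,s}(\infty))$ as a reachability set for the relation $\leadsto$ amounts to a standard graph search. The following Python sketch (illustration only; the dictionary \texttt{arrows}, mapping a label $i$ to the set of $j$ with $i \leadsto j$, is our encoding) computes this set.

```python
def informed_labels(arrows, root):
    """Set of labels reachable from `root` via the relation i ~> j,
    where arrows[i] is the set of j such that i ~> j."""
    seen, stack = {root}, [root]
    while stack:
        i = stack.pop()
        for j in arrows.get(i, ()):
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return seen
```

The order of exploration (depth-first here) does not affect the output, which is precisely why the sequential and delayed constructions inform the same set of labels.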
\subsection{Coupling results}
With the above constructions, the information processes on the complete graph, in its delayed version, and on the Erd\"os-R\'enyi graph are finely coupled. First, it directly follows from the construction that: \begin{equation} \label{eq:couplage1} \tau_n^{cg,d}\geq \tau_n^{er}, \qquad \mathcal{T}^{er}(t)\subset \mathcal{T}^{cg,d}(t)\quad {\rm for\ all\ } t\geq 0, \end{equation} and \begin{equation} \label{eq:couplage2} \mathcal{A}^{cg,d}(t)=\mathcal{A}^{er}(t) \quad {\rm for\ all\ } t\leq \tau^{er}_n. \end{equation}
Note also that, from (\ref{eq:cgdt}), the set $\mathcal{D}^{cg,d}(t)$ can increase or decrease with time, and that the elements of $\mathcal{D}^{cg,d}(\tau_n^{er})$ together with their descendants encode the difference between the two processes.
\begin{prop} \label{prop:couplage-tau} Assume ${\mathbb E} K^2 <{\infty}$. Then, we have \begin{equation} \tau_n^{cg,d}-\tau_n^{er}=O_P(1), \label{eq:taucgd=er} \end{equation} and \begin{equation} \label{eq:Tccgd=er} {\rm card} \big( \mathcal{T}^{cg,d}({\infty}) \setminus \mathcal{T}^{er}({\infty}) \big) = O_P(1). \end{equation}
\end{prop} We recall that for a sequence of real random variables $Z_n$, we write $Z_n=O_P(1)$ when
$\sup_{n} {\mathbb P}( |Z_n| \geq A) \to 0$ as $A \to +{\infty}$. By Markov's inequality, a sufficient condition for this is $\sup_n {\mathbb E} |Z_n| < {\infty}$.
$\Box$ First, observe that \begin{eqnarray} \label{eq:j14-1} \mathcal{T}^{cg,d}(t)&=& \mathcal{T}^{er}(t)\cup \mathcal{D}^{cg,d}(t), \qquad t \leq \tau_n^{er},\\ \label{eq:j14-2} \mathcal{T}^{cg,d}(t)&=& \mathcal{T}^{er}(t)\cup \mathcal{D}^{cg,d}(t) \cup \big( \mathcal{E}^{cg,d}(t) \setminus \mathcal{E}^{cg,d}(\tau^{er}_n)\big),
\qquad t \geq \tau^{er}_n. \end{eqnarray} For $t \leq \tau^{er}_n$, in view of (\ref{eq:2k10}), the set $\mathcal{D}^{cg,d}(t)$ can increase at most by $C(t)$. Letting $i=LX(t)$, we observe that a node with label $j \in \mathcal{N}$ is added to $\mathcal{D}^{cg,d}(t)$ only if $j$ appears at least twice in $(I^i_k; k \leq K_i)$, first with a Bernoulli $B^i_k=0$ and then at least once with a Bernoulli $B^i_k=1$. Then, for $i,j \in \mathcal{N}$, we define the event $M(i,j)$ and the random variable $M(i)$ by
\begin{eqnarray*} M(i,j)&=&\big\{\exists k_1<k_2\leq K_i: B^i_{k_1}=0, B^i_{k_2}=1, I^i_{k_1}=I^i_{k_2}=j\big\},\\ M(i)&=& \sum_{j \in \mathcal{N}} \1{M(i,j)}, \end{eqnarray*} and we have, from the above observation, \begin{equation} \nonumber {\rm card} \; \mathcal{D}^{cg,d}(t) - {\rm card} \; \mathcal{D}^{cg,d}(t-1) \leq M(LX(t)). \end{equation} Thus, \begin{equation} \label{eq:majcle} {\rm card} \; \mathcal{D}^{cg,d}(t) \leq \sum_{i \in \mathcal{N}} M(i) \stackrel{\rm def}{=} Y, \qquad t \leq \tau^{er}_n, \end{equation} since each label $i \in {\cal N}$ can be picked at most once. The positive variable $Y$ has mean \begin{eqnarray*}
{\mathbb E} Y &=& n^2 {\mathbb E} [{\mathbb P}( M(i,j)| K_i)] \\
&\leq& n^2 {\mathbb E} {K_i \choose 2} p(1-p) \frac{1}{n^2}\\ &\leq& \frac{p(1-p)}{2} {\mathbb E} K^2. \end{eqnarray*} Since $K$ is square integrable, this is bounded, and hence \begin{equation} \nonumber {\rm card} \; \mathcal{D}^{cg,d}(t)= O_{P}(1), \qquad t \leq \tau^{er}_n, \end{equation} so that \begin{equation} {\rm card} \big( \mathcal{T}^{cg,d}(\tau^{er}_n) \setminus \mathcal{T}^{er}(\tau^{er}_n) \big) = O_P(1), \end{equation} by (\ref{eq:j14-1}). Since $\mathcal{A}^{er}(\tau^{er}_n)=\emptyset$, we also have $ {\rm card} (\mathcal{A}^{cg,d}(\tau^{er}_n) \cup \mathcal{D}^{cg,d}(\tau^{er}_n) ) = O_P(1),$ and then \begin{equation} \label{eq:sam1} {\rm card} \;\mathcal{A}^{cg,s}(\tau^{er}_n) = O_P(1). \end{equation} We claim that this implies \begin{equation} \label{eq:sam2} \widehat S_n^{cg,s}(\widehat R(\tau^{er}_n) ) = O_P(1). \end{equation} Indeed, the conditional law of ${\rm card} \;\mathcal{A}^{cg,s}(t)$ given ${\rm card} \;\mathcal{A}^{cg,s}(t-1)=a_{t-1}, \widehat S_n^{cg,s}(\widehat R(t))=m$ is the law of the variable $a_{t-1}-1+Y$, with $Y$ the number of new coupons obtained in $m$ attempts in a coupon collector process with $n$ different coupons, starting with $a_{t-1}+t$ coupons already obtained. Hence, for (\ref{eq:sam1}) to hold, it is necessary that (\ref{eq:sam2}) holds.
We now use a lemma, which deals with the complete graph case only.
\begin{lm} \label{lem:keycg} Consider the process on the complete graph defined in (\ref{eq:CDSdyn}).\\
(i) For $A,B>0$ define
$$
u(A,B)= \limsup_{n \to {\infty}} {\mathbb P}( \inf \{ S_n(t) ; t \in [A, {\mathfrak T}_n-A]\} \leq B),
$$
with the convention $\inf \emptyset = +{\infty}$. Then, for all finite $B$, $u(A,B) \to 0$ as $A \to {\infty}$.\\
(ii) In particular,
for any random sequence $\sigma_n$,
\begin{equation} \nonumber S_n(\sigma_n) = O_P(1) \implies \min \{ \sigma_n, ( {\mathfrak T}_n- \sigma_n )^+ \}=O_P(1). \end{equation}
\end{lm}
$\Box$ Proof of Lemma \ref{lem:keycg}: If $q=0$, we have $ {\mathfrak T}_n=O_P(1)$ and the result is trivial. We focus on the case
$q\in (0,1)$.
The infimum is finite if and only if $ {\mathfrak T}_n\geq 2A$, so we get
\begin{eqnarray} \nonumber \limsup_{n \to {\infty}} {\mathbb P}\big( \inf \{ S_n(t) ; t \in [A, {\mathfrak T}_n-A]\} < {\infty} , {\mathfrak T}_n<n q/2\big) &=& \lim_{n \to {\infty}} {\mathbb P}( {\mathfrak T}_n\geq 2A, {\mathfrak T}_n<n q/2) \\&=& \label{eq:s5-1} {\mathbb P}(\tau^{GW} \in [2A, {\infty})) \end{eqnarray} where $\tau^{GW} $ is the survival time of the Galton-Watson process with offspring distribution $K$. Thus, the last term vanishes as $A \to {\infty}$.
We now study the contribution of the event $\{ {\mathfrak T}_n\geq n q/2\}$.
From the computations in the proof of Theorem 2.2 in \cite{CDS12}, the process $S_n$ increases linearly on the survival set, with slope ${\mathbb E} K-1>0$, for times $t$ with $t \to {\infty}$, $t = o(n)$. Precisely, we can fix $\eta>0$ and $\delta>0$ such that
\begin{equation} \label{eq:s5-2}
\liminf_{n \to {\infty}} {\mathbb P}( S_n(t) \geq \eta t; t \in [A, n\delta] \mid {\mathfrak T}_n\geq n q/2) = 1- \varepsilon(A),
\quad {\rm with} \; \lim_{A \to {\infty}} \varepsilon(A) =0. \end{equation} Similarly, from the law of large numbers at times close to $n q$, we see that the process $S_n$, on the survival set, decreases linearly at such times with slope $e^{-q} {\mathbb E} K-1<0$. Precisely, we can choose $\eta$ and $\delta$ such that we also have
\begin{equation} \label{eq:s5-3}
\liminf_{n \to {\infty}} {\mathbb P}( S_n(t) \geq \eta ( {\mathfrak T}_n-t); t \in [ {\mathfrak T}_n-n\delta, {\mathfrak T}_n-A] \mid
{\mathfrak T}_n\geq n q/2) = 1- \varepsilon(A), \end{equation} with some function $\varepsilon$ such that $ \lim_{A \to {\infty}} \varepsilon(A) =0.$ Finally, from large deviations, we have, for some constant $C>0$, \begin{equation} \label{eq:s5-4}
\lim_{n \to {\infty}} {\mathbb P}( S_n(t) \geq C n ; t \in [n\delta, n(q-\delta/2)] \mid
{\mathfrak T}_n\geq n q/2) = 1. \end{equation} From (\ref{eq:s5-2}), (\ref{eq:s5-3}) and (\ref{eq:s5-4}), we conclude that $$ \liminf_{n \to {\infty}} {\mathbb P}( S_n(t) \geq \eta A; t \in [A, {\mathfrak T}_n-A] \mid {\mathfrak T}_n\geq n q/2) = 1- \varepsilon(A), $$ with $\varepsilon$ as above. This, in addition to (\ref{eq:s5-1}), implies our claim (i). The other claim (ii) follows directly from (i). \qed
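The quantity $\tau^{GW}$ appearing in (\ref{eq:s5-1}) can be illustrated by a direct simulation of a Galton-Watson process (Python sketch, illustration only; the cap on the number of explored individuals is our device to truncate trees that survive for a long time).

```python
import random

def gw_extinction_time(offspring_sampler, rng, cap=10**5):
    """Generation at which a Galton-Watson process started from one
    individual dies out, or None if more than `cap` individuals are
    explored (treated here as survival)."""
    alive, generation, explored = 1, 0, 0
    while alive > 0:
        children = 0
        for _ in range(alive):
            children += offspring_sampler(rng)
            explored += 1
            if explored > cap:
                return None
        alive = children
        generation += 1
    return generation
```

On the extinction set this time is finite, which is why ${\mathbb P}(\tau^{GW} \in [2A, {\infty}))$ vanishes as $A \to {\infty}$.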
With the lemma we complete the proof of Proposition \ref{prop:couplage-tau}. From (\ref{eq:sam2}), the lemma shows that $\widehat R(\tau^{er}_n) $ is close to 0 or to $\widehat R( \tau_n^{cg,d})$. In turn this implies, by definition of $\widehat R$, that \begin{equation} \nonumber
\min \{ \tau^{er}_n, |\tau^{er}_n - \tau^{cg,d}_n|\}=O_P(1). \end{equation} Moreover, it is not difficult to see directly from the construction that for $\varepsilon>0$ small enough (in fact, $\varepsilon < \widehat q$), $$ \lim_{n \to {\infty}} {\mathbb P}( \tau^{er}_n \geq n \varepsilon) = \lim_{n \to {\infty}} {\mathbb P}( \tau^{cg,d}_n \geq n \varepsilon) =1- \widehat \sigma^{GW}. $$
Together with $\tau^{er}_n \leq \tau^{cg,d}_n $, the last two relations imply that $|\tau^{er}_n - \tau^{cg,d}_n| =O_P(1)$, which is (\ref{eq:taucgd=er}).
Further, following \cite{CDS12}, we see that the subtree generated by $\mathcal{D}^{cg,d}(\tau_n^{er})$ is subcritical. Indeed, similarly to (\ref{eq:s5-3}) above, from the law of large numbers at times $s \sim n \widehat q$, we see that the process $\widehat S_n^{cg,d}$, on the survival set, decreases linearly at such times with slope $e^{-\widehat q} p {\mathbb E} K-1<0$. By (\ref{eq:j14-2}), this yields the desired conclusion (\ref{eq:Tccgd=er}).
\qed
From these estimates we derive our main results for the first mode of emission.
$\Box$ {\em Proof of Theorems \ref{th:LLN} and \ref{th:TLC}. } The estimates (\ref{eq:taucgd=er}) and (\ref{eq:Tccgd=er}) are good enough to apply Theorem A for the complete graph and resource $\widehat K$. Indeed, by (\ref{eq:taucgd=er}) we have $$ \tau_n^{er} = \tau_n^{cg,d} + O_P(1), $$ and the sequence $(\tau_n^{cg,d})_{n \geq 1}$ obeys the law of large numbers in (i) and the central limit theorem in (ii) of Theorem A. Then, $(\tau_n^{er})_{n \geq 1}$ obeys the same limit theorems. \qed
\section{Second mode of emission}\label{sec:2frigos}
In this second part we consider a slightly different kind of emission on the Erd\"os-R\'enyi random graph. At time $0$ a server $i$ is chosen uniformly among the $n$ servers. Then, this server chooses uniformly a target server $i_1$ among the $n$ servers. If the edge between $i_1$ and $i$ is present, then $i$ transmits the information to $i_1$ and spends one unit of its resource. If the edge between $i$ and~$i_1$ is absent, nothing happens. This operation is repeated until $i$ exhausts all of its resource $K_i$. Then, we choose another informed server and repeat the same procedure as for $i$. The process ends when all the informed servers have exhausted their resources. This mode of emission differs from the previous one in that a server can only use a unit of resource if the edge between it and its target server is present. Hence it is a
perturbation of the information transmission process on the complete graph with resource $K$, and not $\widehat K$, in contrast with the previous case.
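As an illustration of this second mode of emission (not used in the proofs), one may simulate it directly. In the following Python sketch (function names are ours; \texttt{k\_sampler} stands for a sampler of the law of $K$), the attempts towards absent edges are skipped, which is equivalent to the dynamics described above since such attempts cost no resource: a server with at least one neighbour eventually spends all of its resource on uniformly chosen neighbours, and an isolated server never spends anything.

```python
import random

def second_mode(n, p, k_sampler, rng):
    """Final number of informed servers: each informed server sends K
    messages, each one to a uniformly chosen neighbour in G(n, p)."""
    edge = {}
    def is_open(i, j):           # sample each edge once, consistently
        e = (min(i, j), max(i, j))
        if e not in edge:
            edge[e] = rng.random() < p
        return edge[e]
    root = rng.randrange(n)
    informed, queue = {root}, [root]
    while queue:
        i = queue.pop()
        nbrs = [j for j in range(n) if j != i and is_open(i, j)]
        if not nbrs:
            continue             # an isolated server never spends its resource
        for _ in range(k_sampler(rng)):
            j = rng.choice(nbrs)
            if j not in informed:
                informed.add(j)
                queue.append(j)
    return len(informed)
```

Each edge status is drawn once and reused, so the sketch respects the quenched randomness of the underlying Erd\"os-R\'enyi graph.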
On the probability space $(\Omega, \mathcal{F}, {\mathbb P})$, we consider, as in beginning of Section \ref{sec:constr}, the random elements $(K_i)_{i\in \mathcal{N}}$, $I_0$, $(I^i_k)_{(i,k)\in \mathcal{N}\times {\mathbb N}}$ and $({B}^i_{k})_{i \in {\cal N}, k \geq 1}$.
We first explain the ideas of the construction. We attach to each edge $e \in {\cal E}$ a variable $B(e) \in\{0,1, {\rm ``unknown"}\}$ indicating the current status of the edge, i.e.\ whether the edge is, respectively, closed, open or still unknown at this stage. These variables are updated during the construction: they start from $B(e)={\rm ``unknown"}$ and can turn from ${\rm ``unknown"}$ to 0 or 1 when they appear. Transmission on the complete graph occurs whenever a Bernoulli variable $B^i_k$ with value 1 is met, more precisely, at the $K_i$ smallest such $k$'s. In the Erd\"os-R\'enyi case, we check if the variable $B^i_k$ is compatible with the current status of the edge. Emissions with incompatibilities are placed in the set $\cal D$ of delayed elements, and will be recast afresh later in the case of the random graph. Compatible emissions are common to the two processes, and in the case of a previously uninformed target, the target is placed in the set $\cal A$ and will be used in priority to emit in turn. When this set becomes empty, we end with the two sets ${\cal D}^{er}, {\cal D}^{cg}$, which we process independently. All processes being delayed in this construction, we do not indicate it in the notation; similarly, we denote by $\mathcal{A}(t)$ the set of active vertices, since it is the same for the two processes.
Let $T_i=\inf\{j\geq 1: \sum_{l=1}^j{B}^i_{l}=K_i\}$. It is convenient to initialize the process at time $t=-1$. We start with $B(e)={\rm ``unknown"} $ for every edge $e$,
with $$\mathcal{A}(-1)=\{\o\}, \qquad \mathcal{E}^{er}(-1)=\mathcal{E}^{cg}(-1)=\mathcal{D}^{er}(-1)=\mathcal{D}^{cg}(-1)=\emptyset, \qquad L \o= I_0. $$
With the process $(\mathcal{D}^{cg}(t), \mathcal{D}^{er}(t), \mathcal{E}^{cg}(t),\mathcal{E}^{er}(t), \mathcal{A}(t))$ at time $t$, its value at the next step $t+1$ is defined as follows (one can check that for $t=-1$ the first case below is in force with $X(0)=\o$): \begin{itemize} \item If $\mathcal{A}(t)$ is non empty, we let $X(t+1)$ be its first element, that we denote by $v$ for short notations, as well as $L(X(t+1))=i$. We define the sets \begin{align} \mathcal{X}(t+1)&=\{vl \in \mathcal{W} : 1\leq l\leq T_i\},\nonumber\\ \mathcal{X}^{cg}(t+1)&=\{vl \in \mathcal{X}(t+1): {B}^i_l=1\}\nonumber , \\ L(vl) &= I^i_l, \qquad vl \in \mathcal{X}(t+1) \nonumber. \end{align}
For edges $e$ of the form $e=\langle i, L(vk) \rangle$ with $B(e)$ previously unknown, the value is being discovered, so we assign $$ B(e) = B^i_{\ell(e)} \qquad {\rm with} \; \ell(e)= \min\{ \ell: I^i_\ell=I^i_k\}. $$ Define also \begin{align} {\rm Inc}_\ell(t+1) &=\big\{ w \in \mathcal{X} (t+1): B^i_w \neq B(\langle i, Lw \rangle), B(\langle i, Lw \rangle) =\ell \big\}, \qquad \ell=0,1, \nonumber\\ {\rm Inc}(t+1) &= {\rm Inc}_0(t+1) \cup {\rm Inc}_1(t+1).
\nonumber
\end{align}
Here, ${\rm Inc}(t)$ is the set of incompatibilities at time $t$; they are of two possible natures.
With $[x]^+=\max\{x,0\}$,
\begin{equation} \label{eq:canape2}
m(t+1)=[ {\rm card(Inc}_0(t+1))-{\rm card(Inc}_1(t+1))]^+
\end{equation} is the number of emissions from server $i$ on the random graph
to be recast later. An incompatibility from the set ${\rm Inc}_0(t+1)$ corresponds to an emission
on the complete graph, but not on the Erd\"os-R\'enyi graph, and it is delayed.
An incompatibility from the set ${\rm Inc}_1(t+1)$ corresponds to an emission
on the Erd\"os-R\'enyi graph (but not on the complete graph). Note that this does not increase the number of informed servers.
Define \begin{equation} \label{eq:canape} T'_i= \max \left\{ \ell \leq T_i: \sum_{k=1}^\ell {\bf 1}_{vk \in {\rm Inc}_1(t+1)} + \sum_{k=1}^\ell {\bf 1}_{vk \in \mathcal{X}^{cg}(t+1) \setminus {\rm Inc}_0(t+1)} \leq K_i\right\}. \end{equation}
Then, we update
\begin{align} \mathcal{E}^{cg}(t+1)&=\mathcal{E}^{er}(t+1)= \mathcal{E}^{er}(t) \cup \{v\},\nonumber\\ {\cal D}^{cg}(t+1)&= {\cal D}^{cg}(t) \cup {\rm Inc}_0(t+1) \cup \big \{vk \in \mathcal{X}^{cg}(t \! + \! 1) \setminus {\rm Inc}_0(t \! + \! 1) : k > T_i'\big\}, \nonumber\\ \mathcal{A}(t+1)&= (\mathcal{A}(t)\setminus \{v\}) \nonumber \\ &\!\!\!\! \cup \Big \{vk \in \mathcal{X}^{cg}(t \! + \! 1) \setminus {\rm Inc}_0(t \! + \! 1) : k \leq T_i', Lw \neq Lw', w<w' \; {\rm and\ } w' \in \mathcal{X}^{cg}(t \! + \! 1), \nonumber\\ & \qquad \qquad \qquad
Lw \neq Lw'', w'' \in \mathcal{E}(t+1) \cup \mathcal{A}(t) \Big\}, \nonumber\\ {\cal D}^{er}(t+1)&={\cal D}^{er}(t) \cup \left\{ vk \in {\rm Inc}(t+1): \sum_{\ell=1}^k {\bf 1}_{v\ell \in {\rm Inc}(t+1)} \leq m(t+1)\right\}
\label{eq:tr2-1} \end{align}
\item When $\mathcal{A}(t)$ becomes empty, we set $\widetilde \tau_n= t$, and from that time on, we continue {\it separately} the transmission processes on each of the two graphs, with the delayed emissions from the sets $\mathcal{D}^{er}(\widetilde \tau_n), \mathcal{D}^{cg}(\widetilde \tau_n)$. They will terminate at later times $ \bar \tau_n^{er}, \tau_n^{cg}$, when the corresponding set $\mathcal{D}$ becomes empty.
(i) For the step from time $t$ to $t+1$ on the complete graph, we let $v$ be the first element of $ \mathcal{D}^{cg}(t)$ and $i=Lv$. If the label $i$ is not an element of $L \mathcal{E}^{cg}(t)$, we update $\mathcal{E}^{cg}(t+1)=\mathcal{E}^{cg}(t) \cup \{v\}$ and $\mathcal{D}^{cg}(t+1)=(\mathcal{D}^{cg}(t)\setminus \{v\} ) \cup \{vk; k=1,\ldots, K_i\}$. If the label $i$ is an element of $L \mathcal{E}^{cg}(t)$, we update $\mathcal{E}^{cg}(t+1)=\mathcal{E}^{cg}(t)$ and $\mathcal{D}^{cg}(t+1)=\mathcal{D}^{cg}(t)\setminus \{v\} $. We then go to the next step.
When $\mathcal{D}^{cg}(t)$ becomes empty, set $\tau_n^{cg}=t$.
(ii) For the Erd\"os-R\'enyi graph, for the step from times $t \geq \widetilde \tau_n$ to $t+1$: \begin{itemize} \item If there is a $s \leq \widetilde \tau_n$ such that ${\rm Inc}(s) \bigcap \mathcal{D}^{er}(t) \neq \emptyset $, consider the smallest one, still denoted by $s$, $v=X(s)$, $m(s)$ from (\ref{eq:canape2})
and $i=Lv$. Scan the edges $e=\langle i, I^i_k\rangle$ for $k=T_i+1,T_i+2,\ldots$ to find the first $m(s)$ ones which are in the graph, and let $k_1,\ldots, k_{m(s)}$ be the corresponding indices; if an edge $e$ was still unknown, put $B(e)=B^i_k$ for these $k$'s. Then, update $\mathcal{E}^{er}(t+1)=\mathcal{E}^{er}(t)$ and $\mathcal{D}^{er}(t+1)=\big(\mathcal{D}^{er}(t)\setminus {\rm Inc}(s)\big) \cup \{vk_1,\ldots, vk_{m(s)}\}. $ Then go to the next step.
\item If there is no $s \leq \widetilde \tau_n$ with
${\rm Inc}(s) \bigcap \mathcal{D}^{er}(t) \neq \emptyset $, consider the smallest element $v$ in $\mathcal{D}^{er}(t)$, if any, and $i=Lv$. Scan the edges $e=\langle i, I^i_k\rangle$ for $k\geq 1$ to find the first $K_i$ ones which are in the graph, and let $k_1,\ldots, k_{K_i}$ be the corresponding indices; if an edge $e$ was still unknown, put $B(e)=B^i_k$ for these $k$'s. Then, update $\mathcal{E}^{er}(t+1)=\mathcal{E}^{er}(t) \cup \{v\}$ and $\mathcal{D}^{er}(t+1)=\big(\mathcal{D}^{er}(t)\setminus \{v\} \big) \cup \{vk_1,\ldots, vk_{K_i}\}.$ Then go to the next step.
\item When $\mathcal{D}^{er}(t)$ becomes empty, set $\bar \tau_n^{er}=t$.
\end{itemize}
\end{itemize}
The above construction is indeed a fine coupling of transmission processes on the Erd\"os-R\'enyi and the complete graphs.
$\Box$

\medskip

\noindent {\it Proofs of Theorems \ref{th:LLN2} and \ref{th:TLC2}.} To get an incompatibility it is necessary to pick the same edge twice in the construction, and to meet an event of the type $$ \widetilde M(i,j; k_1,k_2)=\big\{ \langle i, I^i_{k_1} \rangle = \langle j, I^j_{k_2} \rangle , B^i_{k_1} \neq B^j_{k_2} \big\}. $$ More precisely, the following events are equal: $$ \{ {\rm Inc}(t+1) \neq \emptyset \} = \bigcup_{j \in {\cal N}, k_1 \leq T_{LX(t+1)}, k_2 \leq T_j} \widetilde M(LX(t+1),j; k_1,k_2). $$ The sets $\mathcal{D}^{er}, \mathcal{D}^{cg}$ increase only because of incompatibilities. Similarly to (\ref{eq:majcle}), we can estimate, for times $t \leq \widetilde \tau_n$, the size of both sets by \begin{equation} \label{eq:majcle1}
{\rm card} \; \mathcal{D}^{er}(t) \leq \sum_{i, j \in \mathcal{N}} \sum_{k_1 \leq T_i, k_2 \leq T_j} {\bf 1}_{\widetilde M(i,j; k_1, k_2)} \stackrel{\rm def}{=} \widetilde Y, \qquad {\rm card} \; \mathcal{D}^{cg}(t) \leq \widetilde Y, \end{equation} since each server $i \in {\cal N}$ can emit at most one burst. An elementary computation shows that the expectation of $ \widetilde Y$ is bounded in $n$ as soon as ${\mathbb E} K^2 <{\infty}$. Then we obtain \begin{equation} \nonumber {\rm card} \; \mathcal{D}^{cg}(\widetilde \tau_n) = O_{P}(1), \qquad {\rm card} \; \mathcal{D}^{er}(\widetilde \tau_n)
= O_{P}(1). \end{equation} Following the line of proof of Proposition \ref{prop:couplage-tau} and using Lemma \ref{lem:keycg}, we derive from the first above estimate that $$ \tau_n^{cg} - \widetilde \tau_n = O_P(1). $$ Now, let us see that $\bar{\tau}_n^{er} - \widetilde \tau_n = O_P(1)$: we first prove that at time $\widetilde \tau_n$, with high probability, for all $i\notin \mathcal{E}(\widetilde \tau_n)$ the number of edges $e=\langle i,j \rangle$ such that $B(e)\neq {\rm``unknown"}$ is bounded from above by $ \sqrt{n}\ln n$.
Indeed, denoting by $E_n^i$ the number of ``known" edges adjacent to $i\in \mathcal{N}$ at time $\widetilde \tau_n$, we have \begin{equation*} E_n^i {\bf 1}_{\{ i\notin \mathcal{E}(\widetilde \tau_n)\}}= \sum_{j \in {\cal N}\setminus \{i\}}{\bf 1}_{\{B(\langle i,j \rangle)\neq {\rm``unknown"},\;
i\notin \mathcal{E}(\widetilde \tau_n),\; j \in \mathcal{E}(\widetilde \tau_n)\}} \end{equation*} where -- here as well as below -- the values of the random variables $(B(\langle i,j \rangle))_{j\in \mathcal{N}}$ are taken {\it at time} $\widetilde \tau_n$. We estimate the second moment \begin{align} {\mathbb E} \big[(E_n^i)^2 {\bf 1}_{\{i\notin \mathcal{E}(\widetilde \tau_n)\}}\big] &= \sum_{j \in {\cal N}\setminus \{i\}} {\mathbb P}\big(B(\langle i,j \rangle)\neq {\rm``unknown"};
i\notin \mathcal{E}(\widetilde \tau_n); j \in \mathcal{E}(\widetilde \tau_n)\big)\nonumber\\ &\phantom{**}+ \sum_{j\neq j' \in {\cal N}\setminus \{i\}}
{\mathbb P}\big(B(\langle i,j \rangle), B(\langle i,j' \rangle)\neq {\rm``unknown"};
i\notin \mathcal{E}(\widetilde \tau_n); j, j' \in \mathcal{E}(\widetilde \tau_n)\big)\nonumber \\&\leq \frac{n-1}{n} \frac{{\mathbb E} K}{p} + \frac{(n-1)(n-2)}{n^2} \frac{{\mathbb E} K^2}{p^2}, \label{RTO} \end{align} bounding the second line by \begin{align} \lefteqn{ {\mathbb E} \sum_{j\neq j' \in {\cal N}\setminus \{i\}}
{\mathbb P}\big(B(\langle i,j \rangle), B(\langle i,j' \rangle)\neq {\rm``unknown"};
i\notin \mathcal{E}(\widetilde \tau_n); j, j' \in \mathcal{E}(\widetilde \tau_n)\mid K_j, K_{j'} \big)} \phantom{************}
\nonumber \\ \leq {(n\!-\!1)(n\!-\!2)} & {\mathbb E} \left[
{\mathbb P}\big(B(\langle 1,2 \rangle), B(\langle 1,3 \rangle)\neq {\rm``unknown"} \mid
1\notin \mathcal{E}(\widetilde \tau_n); 2, 3 \in \mathcal{E}(\widetilde \tau_n); K_2, K_{3} \big)\right] \phantom{************}
\nonumber \\ \leq (n\!-\!1)(n\!-\!2)& {\mathbb E} \left[
{\mathbb P}\big(B(\langle 1,2 \rangle) \neq {\rm``unknown"} \mid
1 \notin \mathcal{E}(\widetilde \tau_n); 2 \in \mathcal{E}(\widetilde \tau_n); K_2 \big)^2\right] \phantom{************}
\nonumber \\
\leq (n\!-\!1)(n\!-\!2) & \frac{{\mathbb E} K^2}{n^2p^2}, \nonumber \end{align} and a similar, even simpler, bound for the second line of (\ref{RTO}). Combining the union bound and Markov inequality we deduce \begin{align*} {\mathbb P}\Big(\sup_{i\notin \mathcal{E}(\widetilde \tau_n)}E_n^i>\sqrt{n}\ln n\Big) &= {\mathbb P}\Big(\bigcup_{i\in \mathcal{N}}\{E_n^i>\sqrt{n}\ln n,i\notin \mathcal{E}(\widetilde \tau_n)\}\Big)\nonumber\\ &\leq n {\mathbb P}(E_n^i {\bf 1}_{\{i\notin \mathcal{E}(\widetilde \tau_n)\}}>\sqrt{n}\ln n) \nonumber\\ &\leq \frac{{\mathbb E} K+{\mathbb E} K^2}{p^2 \ln^2 n} \end{align*} by (\ref{RTO}), which goes to 0 as $n\to \infty$. We immediately obtain that at time $\widetilde \tau_n$, with high probability, for each element of $\mathcal{D}^{er}(\widetilde \tau_n)$ the number of ``known" incident edges is $o(n)$. We deduce that with high probability, as $n\to \infty$, the edges chosen to generate the subtrees of the elements of $\mathcal{D}^{er}(\widetilde \tau_n)$ are ``unknown". Since $\tau_n^{cg} - \widetilde \tau_n = O_P(1)$, the number of informed servers at time $\widetilde \tau_n$ is of order $qn$ on the survival set $\{\sup_n \widetilde \tau_n=\infty\}$. Now, from the last two comments and by stochastic domination by a Galton-Watson process with offspring mean $\frac{(1-q)n}{n-o(n)}{\mathbb E}(K)$, which is asymptotically smaller than $1$, we can conclude that the subtrees generated by the elements of $\mathcal{D}^{er}(\widetilde \tau_n)$ are subcritical. On the other hand, on the extinction set $\{\sup_n \widetilde \tau_n<\infty\}$, the probability that ${\rm card}(\mathcal{D}^{er}(\widetilde \tau_n))>0$ goes to 0 as $n\to \infty$. Hence, \begin{equation} \label{eq:fin} \bar{\tau}_n^{er} - \widetilde \tau_n = O_P(1). \end{equation}
Therefore, we conclude that $\tau_n^{cg}-\bar{\tau}_n^{er}=O_P(1)$.
From that point the proof is completely similar to that of Theorems \ref{th:LLN} and \ref{th:TLC}. We will not repeat the details. \qed
}
\end{document}
\begin{document}
\title[]{A uniqueness result for an inverse problem of the steady state convection-diffusion equation} \author[]{Valter Pohjola} \date{} \keywords{Inverse boundary value problem; Convection-Diffusion; Advection-Diffusion; Magnetic Schr\"odinger operator.} \address{Department of Mathematics and Statistics, Helsingin yliopisto / Helsingfors universitet / University of Helsinki, Finland} \email{valter.pohjola@helsinki.fi}
\begin{abstract} We consider the inverse boundary value problem for the steady state convection-diffusion equation. We prove that the velocity field $V$ is uniquely determined by the Dirichlet-to-Neumann map when $V \in C^{0,\gamma} (\Omega)$, $2/3< \gamma \leq 1$, i.e.\ when $V$ is a H\"older continuous vector field with exponent $2/3< \gamma \leq 1$. \end{abstract}
\maketitle
\section{Introduction}
\noindent The steady state convection-diffusion equation \begin{align}\label{eq:conv_prob}
(-\Delta + V \cdot \nabla) u &= 0, \quad \text{in}\quad \Omega, \\
u|_{\p \Omega} &= f, \nonumber \end{align} can be seen as a time-independent model for transport phenomena in a fluid due to a diffusion process and convection caused by the fluid velocity $V$. One specific model is heat transfer in a fluid, in which case $u$ is taken as the temperature. In the following we will consider this problem assuming that\footnote{Here $H^s(\Omega)$ refers to the $L^2$ based Sobolev space with smoothness index $s$. } $f \in H^{1/2}(\p \Omega)$ and $V \in C^{0,\gamma}(\Omega,\mathbb{R}^n)$, with $2/3 < \gamma \leq 1$, and where $\Omega \subset \mathbb{R}^n$, $n \geq 3$, is a bounded open set with Lipschitz boundary. Recall that the space of H\"older continuous functions $C^{0,\gamma}(\Omega)$, $0 < \gamma \leq 1$, is defined as \begin{align*}
C^{0,\gamma}(\Omega) = \Big\{ g \in C(\ov{\Omega}) \;:\;
|g|_{C^{0,\gamma}(\Omega)} := \sup_{x,y \in \Omega, x\neq y}\frac{|g(x)-g(y)|}{|x-y|^\gamma} < \infty \Big\}, \end{align*} equipped with the norm \begin{align*}
\|g\|_{C^{0,\gamma}(\Omega)} := \|g\|_{L^\infty(\Omega)} + |g|_{C^{0,\gamma}(\Omega)}. \end{align*}
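A standard example illustrating the definition (included here for the reader's convenience, not part of the original argument): for $0 < \gamma \leq 1$ one has the elementary inequality

```latex
% The map x -> |x|^gamma is gamma-Hoelder but no better at the origin;
% the inequality follows from subadditivity of t -> t^gamma on [0,infty).
\begin{align*}
\big|\, |x|^{\gamma} - |y|^{\gamma} \big| \leq |x-y|^{\gamma},
\qquad x, y \in \mathbb{R}^n,\ 0 < \gamma \leq 1,
\end{align*}
```

so $g(x) = |x|^\gamma$ belongs to $C^{0,\gamma}(\Omega)$; if $0 \in \Omega$, then $g$ belongs to no $C^{0,\gamma'}(\Omega)$ with $\gamma' > \gamma$, since $|g(x)-g(0)|/|x|^{\gamma'} = |x|^{\gamma-\gamma'} \to \infty$ as $x \to 0$.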
A physical formulation of the inverse problem we are about to consider is to think of $u$ as the temperature in the region $\Omega$; we then ask whether it is possible to determine the velocity field $V$ in the region $\Omega$ by controlling the temperature on the boundary and by measuring the heat flux on the boundary.
The boundary measurements are mathematically modeled by the so-called Dirichlet-to-Neumann map (DN-map for short).
This is the map $\Lambda_V$ taking $f$ to $\p_n u := (n \cdot \nabla u)|_{\p\Omega}$, where $n$ is the outward pointing unit normal to $\p \Omega$. The unique solvability of the Dirichlet problem \eqref{eq:conv_prob} in $H^1(\Omega)$ (see Theorems 8.1 and 8.3 in \cite{GT}) shows that the DN-map is well defined. The normal derivative $\p_n u$, however, needs in this case to be understood in a distributional sense, because of the non-smooth solutions we consider. The DN-map can then be defined in a weak sense, as the operator $\Lambda_V : H^{1/2}(\p\Omega) \to H^{-1/2}(\p\Omega)$ given by \begin{align*} \langle \Lambda_V f,\varphi \rangle := \int_{\Omega} (\nabla u \cdot \nabla \phi + V\cdot \nabla u\, \phi)dx, \end{align*}
where $L_V u:= (-\Delta + V \cdot \nabla) u =0$ in $\Omega$, $u|_{\p\Omega} = f$ and $\varphi \in H^{1/2}(\p \Omega)$,
$\phi \in H^1(\Omega)$, with $\phi |_{\p\Omega} = \varphi$. Here $\langle \cdot,\cdot \rangle$ denotes the distribution duality on $\p\Omega$. Notice also that the definition is independent of the choice of an extension $\phi$ of $\varphi$.
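To spell out the last claim, a one-line check: if $\phi_1, \phi_2 \in H^1(\Omega)$ both extend $\varphi$, then $\phi_1 - \phi_2 \in H^1_0(\Omega)$, and testing the weak formulation of $L_V u = 0$ against $\phi_1 - \phi_2$ gives

```latex
% Both extensions produce the same value of <Lambda_V f, varphi>,
% since the weak form of L_V u = 0 annihilates H^1_0 test functions.
\begin{align*}
\int_{\Omega} \big( \nabla u \cdot \nabla (\phi_1 - \phi_2)
 + V \cdot \nabla u \, (\phi_1 - \phi_2) \big) \, dx = 0 .
\end{align*}
```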
The mathematical form of the inverse problem is then the question of whether the DN-map of the Dirichlet problem \eqref{eq:conv_prob} determines the velocity field $V$. The main result of this paper is the following theorem.
\begin{thm} \label{thm1}
Let $V_j \in C^{0,\gamma}(\Omega,\mathbb{R}^n)$, $j=1,2$ with $2/3 < \gamma \leq 1$.
Assume that $\Lambda_{V_1}=\Lambda_{V_2}$, then $V_1 = V_2$ in $\Omega$. \end{thm}
The first uniqueness result for the above inverse problem was given by Cheng, Nakamura and Somersalo in \cite{CNS}, where they prove the unique determination of the velocity field $V$ for $V \in C^{\infty}(\ov{\Omega})$ and $\p \Omega \in C^\infty$. Salo improved this in \cite{Sa}, where it is shown that the result also holds when $V$ is Lipschitz continuous, i.e.\ $V \in C^{0,1}(\Omega)$. This was in turn improved by Knudsen and Salo in \cite{KS}, where they prove that $V$ can be any H\"older continuous vector field provided that $\nabla \cdot V \in L^\infty$. Theorem \ref{thm1} improves on this by showing that the restriction $\nabla \cdot V \in L^\infty$ is unnecessary for H\"older continuous vector fields $V \in C^{0,\gamma}(\Omega)$, when $2/3 < \gamma \leq 1$.
The inverse problem of the closely related magnetic Schr\"odinger equation, was first studied by Sun in \cite{S}. There have been several improvements of this result by various authors. The sharpest and most recent result is given by Krupchyk and Uhlmann in \cite{KU} where they prove that the inverse problem is solvable for an electric potential $q\in L^\infty$ and a magnetic potential $A \in L^\infty$.
A first remark on Theorem \ref{thm1} concerns its relation to the celebrated Calder\'on problem (see e.g. \cite{SyU}). The Calder\'on problem asks if one can determine the conductivity in the interior of an object by measuring the current on the boundary, when one controls the voltage on the boundary (or vice versa), or in more mathematical terms, whether the DN-map corresponding to a Dirichlet problem of the conductivity equation $\nabla \cdot (\sigma \nabla u) = 0,$ where $\sigma$ is the conductivity, determines the conductivity. Writing the conductivity equation in non-divergence form we get that \[
\Delta u + \nabla \log (\sigma )\cdot \nabla u = 0 . \] This shows that \eqref{eq:conv_prob} is a more general, and therefore more difficult, problem than the Calder\'on problem.
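The computation behind this reduction is elementary: expanding the divergence and dividing by $\sigma > 0$,

```latex
% The conductivity equation in non-divergence form.
\begin{align*}
0 = \nabla \cdot (\sigma \nabla u)
  = \sigma \Delta u + \nabla \sigma \cdot \nabla u
\quad \Longleftrightarrow \quad
\Delta u + \frac{\nabla \sigma}{\sigma} \cdot \nabla u
  = \Delta u + \nabla \log (\sigma) \cdot \nabla u = 0 ,
\end{align*}
```

so the conductivity equation is the special case of \eqref{eq:conv_prob} with the gradient field $V = -\nabla \log (\sigma)$.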
As a second remark on Theorem \ref{thm1} we point out that the overall method of proving Theorem \ref{thm1} is to reduce it to an inverse problem for the magnetic Schr\"odinger equation, which is a self-adjoint first order perturbation of the Laplacian. We will more specifically be utilizing the method of proving uniqueness for the inverse problem of the magnetic Schr\"odinger equation given in \cite{KU}. One of the main ideas is that one can still use the methods of \cite{KU} for electric potentials with worse regularity of a specific distributional form, provided one assumes that the magnetic potentials are more regular.
The paper is organized as follows. In section 2 we reduce Theorem \ref{thm1} to a claim about the magnetic Schr\"odinger operator. Section 3 is devoted to constructing complex geometric optics solutions. In section 4 we prove the unique determination of the magnetic field and in section 5 we prove the unique determination of the electric potential.
\section{Reduction to the Magnetic Schr\"odinger case} The purpose of this section is to reduce Theorem \ref{thm1} to a similar statement concerning the magnetic Schr\"odinger operator. The argument is formulated by Cheng, Nakamura and Somersalo in \cite{CNS} and by Salo in \cite{Sa}. The magnetic Schr\"odinger operator is formally given by \begin{align*}
L_{A,q}u = -\Delta u -iA\cdot \nabla u -i \nabla \cdot (Au)+ (A^2+q)u. \end{align*} We are going to consider the case where $A\in C^{0,\gamma}(\Omega,\mathbb{R}^n)$ and $q=\nabla \cdot F + p$, with $F\in C^{0,\gamma}(\Omega,\mathbb{R}^n)$ and $p \in L^\infty(\Omega, \mathbb{C})$. Hence we need to understand $L_{A,q}$ in a distributional sense, as an operator $L_{A,q}:H^1(\Omega)\to H^{-1}(\Omega)$, given by \[
\langle L_{A, q} \phi,\psi \rangle := \int_{\Omega} \nabla \phi \cdot \nabla \psi + i
A \cdot (\phi \nabla \psi - \psi \nabla \phi) + (A^2 + p)\phi \psi - F \cdot \nabla(\phi \psi ) \, dx, \] where $ \phi \in H^1(\Omega) $ and $ \psi \in H^1_0(\Omega) $.
The inverse problem for the magnetic Schr\"odinger operator we are about to consider comes from the Dirichlet Problem \begin{align*}
L_{A,q} u &= 0, \quad \text{in}\quad \Omega, \\
u|_{\p\Omega} &= f, \end{align*} where $f$ is in the Sobolev space $H^{1/2}(\p \Omega)$.
The normal component of the magnetic gradient on the boundary, $(\p_n+i n\cdot A)u|_{\p \Omega}$, where $n$ denotes the outward pointing unit normal vector on $\p \Omega$, is in our case defined, following \cite{KU}, as the bounded linear map $N_{A, q} : H^1(\Omega) \to H^{-1/2}(\p\Omega)$ given by \[
\langle N_{A,q} u, \varphi \rangle = \int_{\Omega} \nabla u \cdot \nabla \phi + i A \cdot (u \nabla \phi - \phi \nabla u) + (A^2+p) u \phi - F \cdot \nabla (u \phi) \, dx \]
for any $ u \in H^1(\Omega) $ such that $ L_{A, q}u = 0 $ and any $ \varphi \in H^{1/2}(\p\Omega) $, such that $ \phi|_{\p \Omega} = \varphi $. The definition is independent of the choice of an extension $\phi$ of $\varphi$.
We shall consider the more general notion of a Cauchy data set, instead of the DN-map when dealing with the magnetic Schr\"odinger equation. The Cauchy data sets are the sets of boundary data of solutions, i.e. \[
C_{A,q}:=\{(u|_{\p \Omega},N_{A,q}u): u\in H^1(\Omega)\textrm{ and } L_{A,q}u=0 \textrm{ in } \Omega\}. \]
The magnetic field corresponding to a potential $A$ is given by the 2-form $dA$, which is defined as \begin{align} \label{eq:magDef}
dA=\sum_{1\le j<k\le n}(\p_{j}A_k-\p_{k}A_j)dx_j\wedge dx_k, \end{align} where this definition should be understood in the sense of non-smooth differential forms (a.k.a.\ currents).
Our aim is now to reduce Theorem \ref{thm1} to the following Proposition, after which the rest of the paper is devoted to proving this Proposition.
\begin{prop} \label{thm2} Let $\Omega \subset \mathbb{R}^n$ be a bounded domain with Lipschitz
boundary.
Assume that $A_1,A_2,F_1,F_2 \in C^{0,\gamma}(\Omega,\mathbb{R}^n)$, $2/3 < \gamma \leq 1$, with
$A_1=A_2$ and $F_1=F_2$ on $\p\Omega$, and let $p_1,p_2 \in L^\infty(\Omega,\mathbb{C})$.
Assume that $C_{A_1,q_1}=C_{A_2,q_2}$, then $dA_1 = dA_2$ and
$\nabla \cdot F_1 + p_1 = \nabla \cdot F_2 + p_2$ in $\Omega$. \end{prop}
The above result is a variation of the main result in \cite{KU}. It differs from that result by being applicable to electric potentials of lower regularity (i.e.\ of the special distributional form above), but it also requires more regularity of the magnetic potentials.
Another, more general, point concerning the above result is that we cannot in general hope to recover the magnetic potential $A$. This is because of the gauge invariance
of the Cauchy data sets. If $\psi\in C^{1,\gamma}(\Omega)$ and $\psi|_{\p \Omega}=0$, then $C_{A,q}=C_{A+\nabla\psi,q}$, i.e. it is possible to change the magnetic potentials without disturbing the boundary data (see Proposition \ref{prop_gauge_1} in the appendix).
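The computation behind the gauge invariance is the conjugation identity for the magnetic gradient; we sketch it here (the careful distributional version is Proposition \ref{prop_gauge_1}). Writing $D = -i\nabla$, one checks that $L_{A,q} = (D+A)^2 + q$, and for $\psi$ as above,

```latex
% Conjugating by e^{i psi} shifts the magnetic potential by grad psi.
\begin{align*}
e^{-i\psi}(D + A)\big(e^{i\psi} u\big) = (D + A + \nabla \psi)\, u,
\qquad \text{hence} \qquad
e^{-i\psi} L_{A,q}\big(e^{i\psi} u\big) = L_{A+\nabla\psi,\, q}\, u .
\end{align*}
```

Thus $u \mapsto e^{i\psi}u$ maps solutions of $L_{A+\nabla\psi,q}u = 0$ to solutions of $L_{A,q}v = 0$, and since $\psi|_{\p\Omega} = 0$ the Cauchy data are unchanged.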
At several points we will need extensions of H\"older continuous functions to a larger set containing $\Omega$. The following basic extension result on H\"older continuous functions will be used for this (see Theorem 3 on page 174 in \cite{St} and Theorem 16.11 on page 342 in \cite{CDK}).
\begin{lem}\label{HoldExt} Let $\Omega \subset \mathbb{R}^n$ be an open set with Lipschitz boundary. Then there exists a continuous linear extension operator $E$, \[
E: C^{0,\gamma}(\Omega) \to C^{0,\gamma}_0(\mathbb{R}^n), \] for $0\leq \gamma\leq1$. More precisely, there exists a constant $C=C(\Omega)>0$, such that for every $f \in C^{0,\gamma}(\Omega)$, $\supp(E(f))$ is compact, \[
E(f)|_\Omega = f \] and one has the norm estimate \[
\| E(f) \|_{C^{0,\gamma}(\mathbb{R}^n)} \leq C \| f \|_{C^{0,\gamma}(\Omega) }. \] \end{lem}
We will also need the following boundary reconstruction result from \cite{Sa} (see Theorem 1.9 in \cite{Sa}). \begin{thm} \label{bndry_rec} Let $\Omega \subset \mathbb{R}^n$ be an open set with Lipschitz boundary and $n\geq 3$. Assume
$V_1,V_2\in C^{0,\gamma}(\Omega,\mathbb{R}^n)$, $0<\gamma \leq 1$. If $\Lambda_{V_1} = \Lambda_{V_2}$, then $V_1|_{\p\Omega}=V_2|_{\p\Omega}$. \end{thm}
Next we show how Theorem \ref{thm1} follows from Proposition \ref{thm2}. We follow the argument given in \cite{Sa}. The rest of the paper will focus on proving Proposition \ref{thm2}.
\textit{Proof of Theorem \ref{thm1}.} By Theorem \ref{bndry_rec} we know that $V_1=V_2$ on $\p \Omega$. Lemma \ref{HoldExt} allows us then to extend $V_j$ to a
ball $B$ with $\Omega \subset\subset B$, so that $V_j \in C^{0,\gamma}(B,\mathbb{R}^n)$, $V_j|_{\p B} = 0$ and $V_1=V_2$ on $B \setminus \Omega$. Lemma \ref{lem_Cauchy_data_conv} below shows that the above extension does not alter the DN-maps, i.e.\ $\Lambda^B_{V_1}=\Lambda^B_{V_2}$. We may thus assume that $\Omega = B$ and that $V_1=V_2=0$ on $\p\Omega=\p B$.
We now consider the magnetic Schr\"odinger operators $L_{A_j,q_j}$, $j=1,2$, that coincide with $L_{V_j}$. That is, we choose
\[
A_j := iV_j/2 \quad\text{and}\quad q_j := V_j^2/4 - \nabla \cdot V_j/2, \] which gives that $L_{A_j,q_j} = L_{V_j}$.
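Indeed, with this choice $A_j^2 = -V_j^2/4$, and expanding the definition of $L_{A,q}$ term by term gives (dropping the index $j$):

```latex
% Verification that the chosen A, q turn L_{A,q} into L_V:
% the potential terms cancel and the two half-convection terms add up.
\begin{align*}
L_{A,q} u
&= -\Delta u - i\Big(\tfrac{i}{2}V\Big)\cdot \nabla u
   - i \nabla \cdot \Big(\tfrac{i}{2}V u\Big)
   + \Big(-\tfrac{V^2}{4} + \tfrac{V^2}{4}
   - \tfrac{\nabla \cdot V}{2}\Big) u \\
&= -\Delta u + \tfrac{1}{2}\, V \cdot \nabla u
   + \tfrac{1}{2} (\nabla \cdot V)\, u
   + \tfrac{1}{2}\, V \cdot \nabla u
   - \tfrac{1}{2} (\nabla \cdot V)\, u
 = L_{V} u .
\end{align*}
```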
Next we want to show that
$C_{A_j,q_j} = \{ (f,\Lambda_{V_j}f) \,|\, f \in H^{1/2}(\p B) \}$. We need only to show that $N_{A_j,q_j} u_j = \Lambda_{V_j} u_j$, $j=1,2$. Let $ u_j \in H^1(B) $ be such that $ L_{A_j, q_j}u_j = 0 $ and assume that $ \varphi \in H^{1/2}(\p B) $ and that $\phi \in H^1(B)$ is an extension of $\varphi$,
i.e. $ \phi|_{\p B} = \varphi$. Then by definition and because $V_j=0$ on $\p B$ \begin{align*}
\langle N_{A_j,q_j} u_j, \varphi \rangle
&= \int_{B} (\nabla u_j \cdot \nabla \phi - \frac{1}{2}V_j
\cdot (u_j \nabla \phi - \phi \nabla u_j)
+ \frac{1}{2} V_j \cdot \nabla (u_j \phi) )\, dx \\
&= \int_{B} (\nabla u_j \cdot \nabla \phi + V_j \cdot \nabla u_j \phi )\, dx \\
&= \langle \Lambda_{V_j} u_j, \varphi \rangle. \end{align*} The assumption that $\Lambda_{V_1}=\Lambda_{V_2}$, implies therefore that $C_{A_1,q_1} = C_{A_2,q_2}$.
We can now apply Proposition \ref{thm2}, which gives that $dV_1 = dV_2$. By the Poincar\'{e} Lemma (see Theorem 8.3 in \cite{CDK}), there exists a $\psi \in C^{1,\gamma}(B)$ such that $V_1-V_2 = \nabla \psi$. Since $\nabla \psi = 0$ outside $\supp(V_1) \cup \supp(V_2)$, we have that $\psi$ is constant near $\p B$. We may hence add a constant to $\psi$, so that $\psi = 0$ near $\p B$.
The second consequence of Proposition \ref{thm2} is that $q_1 = q_2$, so that $V_1^2/2 - \nabla \cdot V_1 = V_2^2/2 - \nabla \cdot V_2$. This, together with the fact that $V_2 = V_1 - \nabla \psi$, gives the equation \begin{align} \label{eq:quasilin}
\Delta \psi - V_1 \cdot \nabla \psi + \frac{1}{2} (\nabla \psi)^2 = 0 \text{ in } B. \end{align} Next we prove that $\psi \in C^2(B)$. Because of \eqref{eq:quasilin} we have that $\psi \in C^0(\ov{B})$ satisfies \[
\Delta \psi = f \text{ in } B, \] with $f = V_1 \cdot \nabla \psi - \frac{1}{2} (\nabla \psi)^2 \in C^{0,\gamma}(B)$. By interior Schauder estimates (see Theorem 7.18 in \cite{Zw}) we know that $\psi \in C^{2,\gamma}(\ov{U})$ for every open $U \subset \subset B$. It follows that $\psi \in C^2(B)$.
We may now apply the maximum principle to $\psi$ (see Theorem 10.1 in \cite{GT}). From this it follows that $\psi = 0$ in $B$, since $\psi|_{\p B}=0$. We may thus conclude that $V_1=V_2$. \begin{flushright}
$\Box$ \end{flushright}
\section{Complex geometric optics solutions and remainder estimates} \label{sec:cgo}
\noindent In this section we shortly review the construction of complex geometric optics (CGO for short) solutions and then derive some remainder estimates related to these. We follow by and large the construction given in \cite{KU}. We are, however, dealing with more regular magnetic potentials, which allows us to get the better remainder estimates that are needed. This and the more irregular electric potentials require us to make some modifications to the argument in \cite{KU}.
Smooth approximations of the potentials will be an important tool in the following. Our smoothing procedure will consist of an extension followed by a convolution with a mollifier. More specifically, given an $A \in C^{0,\gamma}(\Omega,\mathbb{C}^n)$, we consider an open bounded set $\Omega'$, s.t. $\Omega \subset\subset \Omega'$. By Lemma \ref{HoldExt} there is an extension of $A$ to $\mathbb{R}^n$,
$A'\in C^{0,\gamma}(\mathbb{R}^n,\mathbb{C}^n)$, s.t. $A=A'$ in $\Omega$, $A'|_{\mathbb{R}^n \setminus \Omega'} = 0$ and \begin{align} \label{es:HoldContExt}
\|A'\|_{C^{0,\gamma}(\mathbb{R}^n,\mathbb{C}^n)} \leq C \| A \|_{C^{0,\gamma}(\Omega,\mathbb{C}^n)}. \end{align}
Moreover let $ \Psi$ belong to $ C^\infty_0(\mathbb{R}^n) $ with $ 0 \leq \Psi(x) \leq 1 $ for all $ x \in \mathbb{R}^n $, $ \supp \Psi \subset \{ x \in \mathbb{R}^n : |x| \leq 1 \} $ and $ \int_{\mathbb{R}^n} \Psi \, dx = 1 $. Define $\Psi_\theta (x) = \theta^n \Psi(\theta x) $ for $\theta \in (0, \infty)$ and $ x \in \mathbb{R}^n $. We define $A^\sharp$ for any $A' \in C_0^{0,\gamma}(\mathbb{R}^n,\mathbb{C}^n)$, as \[
A^\sharp := \Psi_\theta \ast A'. \] Notice also that \eqref{es:HoldContExt} implies that
$\|A^\sharp\|_{C^{0,\gamma}(\mathbb{R}^n,\mathbb{C}^n)} \leq C \| A \|_{C^{0,\gamma}(\Omega,\mathbb{C}^n)}$, where $C$ is independent of $\theta$.
The following Lemma gives some basic and well known estimates for the above approximation scheme (see \cite{H}). \begin{lem}\label{LemApprox} Assume that $A \in C^{0,\gamma}(\Omega,\mathbb{C}^n)$, with $0<\gamma \leq 1$ and let $A'$ be the above extension of $A$ to $\mathbb{R}^n$. Then \begin{align}
\| A'-A^\sharp \|_{L^\infty(\mathbb{R}^n,\mathbb{C}^n)} &\leq C \theta^{-\gamma}, \label{es:Asharp1} \\
\| \p^{\alpha} A^\sharp \|_{L^\infty(\mathbb{R}^n,\mathbb{C}^n)} &\leq C \theta^{|\alpha|-\gamma}, \label{es:Asharp2} \end{align}
as $\theta \to \infty$, for any multi-index $\alpha$, with $|\alpha| \geq 1$. \end{lem} \begin{proof} Let $\Psi$ be as above. Assume that $x \in \mathbb{R}^n$.
For the first estimate we use \eqref{es:HoldContExt} and have that \begin{align*}
|A'(x) - A^\sharp(x) | &=
\big | \int_{\mathbb{R}^n} A'(x) \Psi(y)\,dy - \int_{\mathbb{R}^n} A'(x-y) \theta^n \Psi(\theta y) \,dy\big| \\
&\leq
\int_{\mathbb{R}^n} | A'(x) \Psi(y) - A'(x-y/\theta) \Psi(y)| \,dy \\
&\leq C \| A \|_{C^{0,\gamma}(\Omega,\mathbb{C}^n)}
\theta^{-\gamma} \int_{\mathbb{R}^n} |y|^{\gamma} |\Psi(y)| \,dy \\
&\leq
C \theta^{-\gamma}. \end{align*} To derive the second estimate \eqref{es:Asharp2} notice firstly that \begin{align*}
\int_{\mathbb{R}^n} \p^\alpha \Psi(y) dy = 0, \end{align*}
for all multi-indices $\alpha$ with $|\alpha| \geq 1$. Let $x \in \mathbb{R}^n$; then, using the above observation, we have that \begin{align*}
|\p^\alpha A^\sharp(x)|
&=
\big| \int_{\mathbb{R}^n} A'(y) \theta^{n+|\alpha|} (\p^\alpha \Psi) \big(\theta(x-y)\big) \,dy \big| \\
&=
\big| \int_{\mathbb{R}^n} A'(x-y/\theta) \theta^{|\alpha|} (\p^\alpha \Psi) (y) \,dy \big| \\
&=
\big| \int_{\mathbb{R}^n} \big( A'(x-y/\theta) - A'(x) \big) \theta^{|\alpha|} (\p^\alpha \Psi) (y) \,dy \big| \\
& \leq \| A \|_{C^{0,\gamma}(\Omega,\mathbb{C}^n)} \theta^{|\alpha|}
\int_{\mathbb{R}^n} |y/\theta|^{\gamma} \big| (\p^\alpha \Psi) (y) \big| \,dy \\
& \leq C \theta^{|\alpha|-\gamma}. \end{align*} \end{proof}
\textbf{Remark.} In the rest of this section we will consider $A$ to be extended as $A'$ outside $\Omega$, i.e. we use $A$ to denote the extension $A'$.
We will now show how to construct so called complex geometric optics solutions following the argument in \cite{KU}. It is natural to formulate this in terms of certain semiclassical norms that are defined as follows \begin{align*}
&\|u\|_{H^1_{\textrm{scl}}(\Omega)}^2 := \|u\|_{L^2(\Omega)}^2+\|h\nabla u\|_{L^2(\Omega)}^2,\\
&\|v\|_{H^{-1}_{\textrm{scl}}(\Omega)} := \sup_{0\ne \psi\in C_0^\infty(\Omega)}\frac{|\langle v,\psi\rangle_{\Omega}|}{\|\psi\|_{H^1_{\textrm{scl}}(\Omega)}}. \end{align*}
The construction of CGO solutions is based on the solvability result below. The solvability result is in turn a consequence of a perturbed Carleman estimate, Proposition \ref{PCE} in the appendix. The argument that shows how to obtain the solvability result from the Carleman estimate is standard and we refer to the proof of Proposition 2.3 in \cite{KU}.
\begin{prop} \label{solvability}
Let $A,F \in L^\infty(\Omega, \mathbb{C}^n)$, $p \in L^\infty(\Omega,\mathbb{C})$ and
$q= \nabla \cdot F + p$. Furthermore let $\varphi(x) = \alpha \cdot x$, $\alpha \in \mathbb{R}^n$ with $|\alpha|=1$.
If $h>0$ is small enough, then for any $v \in H^{-1}(\Omega)$, there is a solution of the
equation
\[
e^{\varphi/h}h^2L_{A,q}(e^{-\varphi/h}u ) = v , \text{ in } \Omega,
\]
which satisfies
\begin{align} \label{eq:solv_est}
\|u\|_{H^1_{\emph{scl}}(\Omega)}\le \frac{C}{h}\|v\|_{H^{-1}_{\emph{scl}}(\Omega)}.
\end{align} \end{prop}
The CGO solutions $u\in H^1(\Omega)$ considered here solve \[ L_{A, q} u = 0, \] with $ A ,F \in C^{0,\gamma}(\Omega, \mathbb{C}^n) $, $0<\gamma \leq1$, $p \in L^\infty(\Omega,\mathbb{C})$ and have the form \begin{equation} \label{eq:CGOform} u(x;\zeta,h) = e^{x\cdot\zeta/h} (a(x;\zeta,h) + r(x;\zeta,h)), \end{equation}
where $\zeta\in\mathbb{C}^n$ with $\zeta\cdot\zeta=0$ and $|\zeta|\sim 1$; $ h $ is a small semiclassical parameter; $a$ is a smooth amplitude and $r$ is a remainder term.
We begin by assuming that $\zeta \in \mathbb{C}^n$, $\zeta = \zeta_0 + \zeta_1 $ is such that \begin{align} \label{eq:zetaAssum}
&\zeta\cdot\zeta=0,\;
\zeta_0 \text{ is constant with respect to $h$, } \zeta_1=\mathcal{O}(h),\\
&\text{as } h\to 0 \text{ and }|\Re\zeta_0| =|\Im \zeta_0|=1. \nonumber \end{align} Abbreviate the conjugated operator multiplied by $h^2$, with \[ L_\zeta:= e^{- \zeta \cdot x / h} h^2 L_{A,q} ( e^{\zeta \cdot x / h}). \] Then in order to construct $ u(\cdot; \zeta, h) $ of the form \eqref{eq:CGOform}, it is enough to prove the existence of a $ r(\cdot; \zeta, h) \in H^1(\Omega) $ solving \begin{equation}\label{eq:remainder} L_\zeta r = -L_\zeta a, \end{equation} in $\Omega$ for a suitable $a$. The $ a \in C^\infty(\mathbb{R}^n) $ is picked as the solution to \begin{equation}\label{eq:transport}
\zeta_0 \cdot \nabla a + i \zeta_0 \cdot A^\sharp a = 0, \quad \text{in} \quad \mathbb{R}^n, \end{equation} so that the right hand side of \eqref{eq:remainder} becomes, using \eqref{eq:zetaAssum}, \eqref{eq:transport} and \eqref{eq:conjMaMf} given below, \begin{align} \label{eq:aparts} -L_\zeta a= & h^2\Delta a + i h^2 A \cdot \nabla a - h^2 m_A (a) - h^2 (A^2+p) a + 2h \zeta_1 \cdot \nabla a \\
& + 2hi\zeta_0 \cdot(A-A^\sharp)a
+ 2 hi \zeta_1 \cdot A a - h^2 m_{\nabla \cdot F} (a). \nonumber \end{align} Here $m_A $ and $m_{\nabla \cdot F}$ are the bounded linear operators from $ H^1(\Omega) $ to $ H^{-1}(\Omega) $ defined by \begin{align*}
\langle m_A (\phi), \psi \rangle &:= \int_{\Omega} i \phi A \cdot \nabla \psi \, dx, \\
\langle m_{\nabla \cdot F} (\phi), \psi \rangle &:= -\int_{\Omega} F \cdot \nabla (\phi\psi) \, dx, \end{align*} for all $ \phi \in H^1(\Omega) $ and all $ \psi \in H^1_0(\Omega)$. It is easy to see that \begin{align} \label{eq:conjMaMf}
e^{-\zeta\cdot x/h}\circ h^2m_A \circ e^{\zeta\cdot x/h} &= -hi\zeta\cdot A+h^2 m_A, \\
e^{-\zeta\cdot x/h}\circ h^2m_{\nabla \cdot F} \circ e^{\zeta\cdot x/h} &= h^2 m_{\nabla \cdot F}.
\nonumber \end{align}
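For instance, the first identity in \eqref{eq:conjMaMf} follows by a direct computation: for $\phi \in H^1(\Omega)$ and $\psi \in H^1_0(\Omega)$,

```latex
% Differentiating e^{-zeta.x/h} psi produces the extra term -(zeta/h) psi,
% which yields the zeroth order correction -h i (zeta . A).
\begin{align*}
\big\langle e^{-\zeta \cdot x/h} h^2 m_A \big( e^{\zeta \cdot x/h} \phi \big), \psi \big\rangle
&= h^2 \int_{\Omega} i\, e^{\zeta \cdot x/h} \phi \, A \cdot
   \nabla \big( e^{-\zeta \cdot x/h} \psi \big) \, dx \\
&= h^2 \int_{\Omega} i \phi\, A \cdot \nabla \psi \, dx
 - h \int_{\Omega} i\, (\zeta \cdot A)\, \phi \psi \, dx ,
\end{align*}
```

while in the second identity the exponentials simply cancel inside $\nabla(\phi\psi)$.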
If we look for solutions to \eqref{eq:transport} in the form $a = e^{\Phi^\sharp}$, it will be enough that $ \Phi^\sharp(\cdot; \zeta_0, \theta) $ satisfies \begin{equation} \zeta_0 \cdot \nabla \Phi^\sharp + i \zeta_0 \cdot A^\sharp = 0 \label{eq:deltabarSHARP} \end{equation} in $ \mathbb{R}^n $. The fact that $ \mathrm{Re}\, \zeta_0 \cdot \mathrm{Im}\, \zeta_0 = 0 $
and $ |\mathrm{Re}\, \zeta_0| = |\mathrm{Im}\, \zeta_0| = 1 $, implies that $N_{\zeta_0}:= \zeta_0 \cdot \nabla $ is a $\overline{\partial}-$operator in suitable coordinates. The Cauchy operator $N_{\zeta_0}^{-1}$, defined by \[
(N_{\zeta_0}^{-1}f)(x) := \frac{1}{2\pi} \int_{\mathbb{R}^2}
\frac{f(x-y_1 \Re \zeta_0 - y_2 \Im \zeta_0)}{y_1+iy_2}\, dy_1 dy_2, \] for $f \in C_0(\mathbb{R}^n),$ is the inverse of the $\overline{\partial}-$operator
and thus gives that \[ \Phi^\sharp = N_{\zeta_0}^{-1} (- i \zeta_0 \cdot A^\sharp) \in C^\infty (\mathbb{R}^n). \] We will also use the following basic continuity result for the Cauchy operator (see \cite{Sa}, Lemma 7.4).
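To make the $\overline{\partial}$-structure explicit (a standard computation, included here for convenience): complete $\Re\zeta_0, \Im\zeta_0$ to an orthonormal basis, write $x = y_1 \Re\zeta_0 + y_2 \Im\zeta_0 + x''$ and $z = y_1 + i y_2$. Then

```latex
% In these coordinates zeta_0 . grad is (twice) the d-bar operator
% in the (y_1, y_2)-plane, acting slice by slice in x''.
\begin{align*}
\zeta_0 \cdot \nabla
 = \Re \zeta_0 \cdot \nabla + i\, \Im \zeta_0 \cdot \nabla
 = \p_{y_1} + i\, \p_{y_2}
 = 2\, \p_{\bar z} ,
\end{align*}
```

and since $\p_{\bar z} \frac{1}{\pi z} = \delta_0$ in the plane, convolution with the kernel $\frac{1}{2\pi z}$ in each two-dimensional slice inverts $\zeta_0 \cdot \nabla$, which is exactly the formula defining $N_{\zeta_0}^{-1}$.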
\begin{lem} \label{Cauchy_op} Let $f\in W^{k,\infty}(\mathbb{R}^n)$, $k \geq 0$, with $\supp(f) \subset B(0,R)$. Then we have that \begin{align} \label{eq:Cauchy_op_cont}
\|N_{\zeta_0}^{-1} f \|_{W^{k,\infty}(\mathbb{R}^n)}\le C\|f\|_{W^{k,\infty}(\mathbb{R}^n)}, \end{align} where $C=C(R)$. \end{lem}
Using now Lemma \ref{LemApprox} and Lemma \ref{Cauchy_op}, we have that \begin{align} \label{es:phi_sharp}
\| \p^\alpha \Phi^\sharp\|_{L^\infty(\mathbb{R}^n)} \leq C \theta^{|\alpha|-\gamma} \end{align}
for $ \theta \in (1, \infty) $ and all multi-indices $\alpha$ with $|\alpha| \geq 1$. Moreover, the function $ \Phi (\cdot; \zeta_0) := (\zeta_0 \cdot \nabla)^{-1} (- i \zeta_0 \cdot A) \in L^\infty (\mathbb{R}^n) $ analogously solves \begin{equation} \zeta_0 \cdot \nabla \Phi + i \zeta_0 \cdot A = 0 \label{eq:deltabar} \end{equation} and satisfies \begin{align}
\| \Phi (\cdot; \zeta_0)\|_{L^\infty(\mathbb{R}^n)} \leq C \|A\|_{L^\infty(\mathbb{R}^n)}. \label{es:phi} \end{align} Lemma \ref{Cauchy_op} and estimate \eqref{es:Asharp1} imply that the functions $\Phi^\sharp$ converge to $\Phi$ in $L^\infty(\mathbb{R}^n)$, or, more explicitly, that \begin{align*}
\big \| \Phi^\sharp (\cdot, \zeta_0, \theta) - \Phi (\cdot; \zeta_0) \big \|_{L^\infty(\mathbb{R}^n)}
\leq C \theta^{-\gamma}. \end{align*}
With the amplitude $a$ at hand, the solvability result of Proposition \ref{solvability} guarantees the existence of a solution $r$ to equation \eqref{eq:remainder} such that \begin{align}\label{es:remNorm}
\|r\|_{H^1_{\textrm{scl}}(\Omega)}\le \frac{C}{h}\| L_\zeta a \|_{H^{-1}_{\textrm{scl}}(\Omega)}. \end{align} Now we determine how the right hand side of the above estimate depends on $h$, i.e.\ we estimate the $H^{-1}_{\textrm{scl}}(\Omega)$-norms of the terms in equation \eqref{eq:aparts}. This gives us the behaviour of the $H^1_{\textrm{scl}}(\Omega)$-norm of the remainder term $r$ in the parameter $h$.
Let $0\ne \psi\in C_0^\infty(\Omega)$. Then using \eqref{es:phi_sharp}, the fact that $\zeta_1=\mathcal{O}(h)$ and the Cauchy--Schwarz inequality we get that \begin{align*}
&|\langle h^2\Delta a, \psi \rangle_\Omega|\le \mathcal{O} (h^2\theta^{2-\gamma})
\|\psi\|_{L^2(\Omega)}\le \mathcal{O} (h^2\theta^{2-\gamma})
\|\psi\|_{H^{1}_{\textrm{scl}}(\Omega)}, \\
&|\langle ih^2 A\cdot \nabla a,\psi\rangle_\Omega|\le \mathcal{O} (h^2\theta^{1-\gamma})
\|\psi\|_{H^{1}_{\textrm{scl}}(\Omega)},\\
& |\langle 2h\zeta_1\cdot \nabla a,\psi\rangle_\Omega|\le \mathcal{O} (h^2\theta^{1-\gamma})
\|\psi\|_{H^{1}_{\textrm{scl}}(\Omega)},\\
& |\langle 2hi\zeta_1\cdot Aa,\psi \rangle_\Omega|\le \mathcal{O} (h^2)
\|\psi\|_{H^{1}_{\textrm{scl}}(\Omega)},\\
& |\langle h^2 (A^2+p) a,\psi\rangle_\Omega|\le \mathcal{O} (h^2)
\|\psi\|_{H^{1}_{\textrm{scl}}(\Omega)}. \end{align*} By Lemma \ref{LemApprox} we have on the other hand that \begin{align*}
|\langle 2hi\zeta_0 \cdot(A-A^\sharp)a,\psi\rangle_\Omega |&\le
\mathcal{O}(h)\|a\|_{L^\infty(\Omega)}\|A-A^\sharp\|_{L^2(\Omega)}\|\psi\|_{L^2(\Omega)}\\
&\le \mathcal{O}(h)\theta^{-\gamma} \|\psi\|_{H^1_{\textrm{scl}}(\Omega)}. \end{align*} Again by Lemma \ref{LemApprox} and estimate \eqref{es:phi_sharp} we have that \begin{align*}
|\langle h^2m_A(a),& \psi \rangle_\Omega|\le \bigg| \int_\Omega ih^2 A^\sharp
a\cdot \nabla \psi dx\bigg| + \bigg|\int_\Omega i h^2 (A-A^\sharp) a\cdot \nabla \psi
dx\bigg|\\
&\le \bigg| \int_\Omega ih^2 (\nabla \cdot (A^\sharp a)) \psi dx\bigg|
+\mathcal{O}(h)\|A-A^\sharp\|_{L^2(\Omega)}\|h\nabla\psi\|_{L^2(\Omega)}\\
&\le (\mathcal{O}(h^2\theta^{1-\gamma}) +\mathcal{O}(h)\theta^{-\gamma})
\|\psi\|_{H^1_{\textrm{scl}}(\Omega)}. \end{align*} Similarly with the help of Lemma \ref{LemApprox} and estimate \eqref{es:phi_sharp} we have that \begin{align*}
|\langle h^2m_{\nabla\cdot F}(a),& \psi \rangle_\Omega|
\le \bigg| \int_\Omega ih^2 F^\sharp \cdot \nabla (a\psi) dx\bigg|
+ \bigg|\int_\Omega i h^2 (F-F^\sharp) \cdot \nabla (a \psi) dx\bigg|\\
&\le \bigg| \int_\Omega ih^2 \nabla F^\sharp \cdot a\psi dx\bigg|
+ \bigg|\int_\Omega i h^2 (F-F^\sharp) \cdot \nabla (a \psi) dx\bigg|\\
&\le Ch^2 \theta^{1-\gamma}\|\psi\|_{H^{1}_{\textrm{scl}}(\Omega)}
+C h^2\|F-F^\sharp\|_{L^2(\Omega)} \|\nabla a\|_{L^\infty(\Omega)} \|\psi\|_{L^2(\Omega)}\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
+C h\|F-F^\sharp\|_{L^2(\Omega)} \|a\|_{L^\infty(\Omega)} \|h\nabla\psi\|_{L^2(\Omega)}\\
&\le C(h^2\theta^{1-\gamma}+h^2\theta^{1-2\gamma}+h\theta^{-\gamma})
\|\psi\|_{H^1_{\textrm{scl}}(\Omega)}. \end{align*} Combining the above estimates gives that \begin{align*}
\| L_\zeta a \|_{H^{-1}_{\textrm{scl}}(\Omega)} \le C (h^2\theta^{2-\gamma} + h \theta^{-\gamma}). \end{align*} Choosing $\theta = h^{-1/2}$, we hence get by estimate \eqref{es:remNorm} that \begin{align*}
\|r\|_{H^1_{\textrm{scl}}(\Omega)}\le C h^{\gamma/2}. \end{align*} We have thus derived the following proposition. \begin{prop} \label{CGOest} Let $\Omega\subset \mathbb{R}^n$, $n\ge 3$, be a bounded open set with Lipschitz boundary. Let $A, F \in C^{0,\gamma}(\Omega,\mathbb{R}^n)$, $0 < \gamma \leq 1$, $p \in L^\infty(\Omega,\mathbb{C})$, with $q:= \nabla \cdot F + p$, and let $\zeta\in \mathbb{C}^n$ satisfy \eqref{eq:zetaAssum}. Then for all $h>0$ small enough, there exists a solution $u(x,\zeta;h)\in H^1(\Omega)$ of \[L_{A,q}u=0, \text{ in } \Omega \] of the form
$u(x,\zeta;h)=e^{x\cdot\zeta/h}(e^{\Phi^\sharp(x,\zeta_0;h)}+r(x,\zeta;h))$.
The function $\Phi^\sharp (\cdot,\zeta_0;h)\in C^\infty(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n) $ satisfies \begin{align} \label{es:phi_h_est}
\|\p^\alpha \Phi^\sharp\|_{L^\infty(\mathbb{R}^n)}\le C_\alpha h^\frac{\gamma-|\alpha|}{2}, \end{align}
for all $\alpha$, $|\alpha|\ge 1$, and $\Phi^\sharp (\cdot,\zeta_0;h)$ converges in the $L^\infty$-norm to $\Phi(\cdot,\zeta_0):=N_{\zeta_0}^{-1}(-i\zeta_0\cdot A)\in L^\infty(\mathbb{R}^n)$. More precisely \begin{align} \label{es:PhiPhiSharp}
\big \| \Phi^\sharp (\cdot, \zeta_0, h) - \Phi (\cdot; \zeta_0) \big \|_{L^\infty(\mathbb{R}^n)}
\leq C h^{\gamma/2}. \end{align} The remainder $r$ is such that \begin{align} \label{es:r_h_est}
\|r\|_{H^1_{\textrm{scl}}(\Omega)}\le C h^{\gamma/2}, \end{align} as $h\to 0$. \end{prop}
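As a quick double-check of the exponent bookkeeping behind the choice $\theta = h^{-1/2}$, the following SymPy snippet (a sketch, not part of the proof) verifies that both terms of the bound on $\| L_\zeta a \|_{H^{-1}_{\textrm{scl}}(\Omega)}$ balance at $h^{1+\gamma/2}$, so that dividing by $h$ as in \eqref{es:remNorm} yields the rate $h^{\gamma/2}$ of \eqref{es:r_h_est}:

```python
# Exponent bookkeeping (sketch) for the choice theta = h^{-1/2}.
import sympy as sp

h, gamma = sp.symbols('h gamma', positive=True)
theta = h**sp.Rational(-1, 2)

term1 = h**2 * theta**(2 - gamma)   # from  h^2 * theta^(2-gamma)
term2 = h * theta**(-gamma)         # from  h   * theta^(-gamma)

# Both terms balance at h^(1 + gamma/2):
assert sp.simplify(term1 - h**(1 + gamma/2)) == 0
assert sp.simplify(term2 - h**(1 + gamma/2)) == 0
# Dividing by h, as in the solvability estimate, gives the rate h^(gamma/2):
assert sp.simplify(term1 / h - h**(gamma/2)) == 0
```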
\section{Uniqueness of the magnetic field} \label{sec:magUniq}
\noindent This section contains a proof of the first part of Proposition \ref{thm2}, i.e. we show that $dA_1 = dA_2$. We begin by stating an integral identity, which readily follows from the assumption that $C_{A_1,q_1}=C_{A_2,q_2}$. The proof can be found in \cite{KU} and only minor modifications are needed to make it work with the electric potentials used here. \begin{prop} \label{int_identity} Let $\Omega\subset \mathbb{R}^n$, $n\ge 3$, be a bounded open set with Lipschitz boundary. Assume that $p_1,p_2\in L^\infty(\Omega,\mathbb{C})$ and $A_1,A_2,F_1,F_2\in C^{0,\gamma}(\Omega,\mathbb{C}^n)$, with $0 < \gamma \leq 1$. If $C_{A_1,q_1}=C_{A_2,q_2}$, then the following integral identity \begin{align} \label{eq:intId} \int_\Omega i&(A_1-A_2)\cdot (u_1\nabla \overline{u_2}-\overline{u_2}\nabla u_1)
+ (A_1^2-A_2^2+p_1-p_2)u_1\overline{u_2} \nonumber \\
-&(F_1-F_2) \cdot (u_1\nabla \overline{u_2}+\overline{u_2}\nabla u_1)\,dx=0 \end{align} holds for any $u_1,u_2\in H^1(\Omega)$ satisfying $L_{A_1,q_1}u_1=0$ in $\Omega$ and $L_{\ov{A_2},\ov{q_2}}u_2=0$ in $\Omega$, respectively. \end{prop}
The idea is then to choose specific CGO solutions, insert them into the integral identity, and show that, in the limit $h \to 0$, this reduces to a specific Fourier transform. The CGO solutions will be chosen as follows.
Let $\xi,\mu_1,\mu_2\in\mathbb{R}^n$ be such that $|\mu_1|=|\mu_2|=1$ and $\mu_1\cdot\mu_2=\mu_1\cdot\xi=\mu_2\cdot\xi=0$. Define \begin{align} \label{eq_zeta_1_2}
\zeta_1 &=\frac{ih\xi}{2}+\mu_1 + i\sqrt{1-h^2\frac{|\xi|^2}{4}}\mu_2 , \nonumber \\
\zeta_2 &=-\frac{ih\xi}{2}-\mu_1+i\sqrt{1-h^2\frac{|\xi|^2}{4}}\mu_2, \end{align} so that $\zeta_j\cdot\zeta_j=0$, $j=1,2$, and \begin{align} \label{eq:z1_plus_z2}
(\zeta_1+\ov{\zeta_2})/h=i\xi. \end{align} Here $h>0$ is small enough. Moreover, $\zeta_1= \mu_1+ i\mu_2+\mathcal{O}(h)$ and $\zeta_2= -\mu_1+ i\mu_2+\mathcal{O}(h)$ as $h\to 0$.
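The algebraic identities $\zeta_j\cdot\zeta_j = 0$ and \eqref{eq:z1_plus_z2} are easy to check by hand; the following NumPy snippet verifies them numerically for one admissible choice of $\mu_1,\mu_2,\xi$ (hypothetical numerical values, for illustration only):

```python
# Numerical check (hypothetical values) of zeta_j . zeta_j = 0 and
# (zeta_1 + conj(zeta_2))/h = i*xi, with mu_1 = e_1, mu_2 = e_2, xi = 3*e_3.
import numpy as np

mu1 = np.array([1.0, 0.0, 0.0])
mu2 = np.array([0.0, 1.0, 0.0])
xi  = np.array([0.0, 0.0, 3.0])
h   = 0.1
s   = np.sqrt(1.0 - h**2 * np.dot(xi, xi) / 4.0)

zeta1 =  1j * h * xi / 2 + mu1 + 1j * s * mu2
zeta2 = -1j * h * xi / 2 - mu1 + 1j * s * mu2

# np.dot on complex vectors is the bilinear (non-conjugated) product,
# which is the one appearing in zeta . zeta:
assert abs(np.dot(zeta1, zeta1)) < 1e-12
assert abs(np.dot(zeta2, zeta2)) < 1e-12
assert np.allclose((zeta1 + np.conj(zeta2)) / h, 1j * xi)
```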
For all small enough $h>0$ there exists, by Proposition \ref{CGOest}, a solution $u_1(x,\zeta_1;h)\in H^1(\Omega)$ to the equation $L_{A_1,q_1}u_1=0$ in $\Omega$, of the form \begin{equation} \label{eq_u_1} u_1(x,\zeta_1;h)=e^{x\cdot\zeta_1/h}(e^{\Phi_1^\sharp(x,\mu_1+i\mu_2;h)}+r_1(x,\zeta_1;h)), \end{equation} where $\Phi_1^\sharp(\cdot,\mu_1+i\mu_2;h) \in C^\infty(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$ is given by \begin{equation} \label{eq_phi_1_sharp_def} \Phi_{1}^\sharp(\cdot,\mu_1+i\mu_2;h):=N_{\mu_1+i\mu_2}^{-1} \big(-i(\mu_1+i\mu_2)\cdot A_1^\sharp\big) \end{equation} and $\Phi_1^\sharp(\cdot,\mu_1+i\mu_2;h) \to \Phi_1(\cdot,\mu_1+i\mu_2)$ in $L^\infty(\mathbb{R}^n)$ as $h\to 0$, where $\Phi_1$ is given by Proposition \ref{CGOest}.
Similarly, for all $h>0$ small enough, there exists a solution $u_2(x,\zeta_2;h)\in H^1(\Omega)$ to the equation $L_{\overline{A_2},\overline{q_2}}u_2=0$ in $\Omega$, of the form \begin{equation} \label{eq_u_2} u_2(x,\zeta_2;h)=e^{x\cdot\zeta_2/h}(e^{\Phi_2^\sharp(x,-\mu_1+i\mu_2;h)}+r_2(x,\zeta_2;h)), \end{equation} where $\Phi_2^\sharp(\cdot,-\mu_1+i\mu_2;h) \in C^\infty(\mathbb{R}^n) \cap L^\infty(\mathbb{R}^n)$ is given by \begin{equation} \label{eq_phi_2_sharp_def} \Phi_{2}^\sharp(\cdot,-\mu_1+i\mu_2;h):=N_{-\mu_1+i\mu_2}^{-1} \big(-i(-\mu_1+i\mu_2)\cdot \ov{A_2^\sharp}\big) \end{equation} and $\Phi_2^\sharp(\cdot,-\mu_1+i\mu_2;h) \to \Phi_2(\cdot,-\mu_1+i\mu_2)$ in $L^\infty(\mathbb{R}^n)$ as $h\to 0$, where $\Phi_2$ is given by Proposition \ref{CGOest}.
Notice also that we have by estimates \eqref{es:phi_h_est} and \eqref{es:r_h_est}, of Proposition \ref{CGOest}, that \begin{align}
\|\nabla \Phi^\sharp_j\|_{L^\infty(\mathbb{R}^n)} &\leq C h^\frac{\gamma-1}{2}, \label{es:phi_h}\\
\|r_j\|_{H^1_{\emph{scl}}(\Omega)} &\leq C h^{\gamma/2}, \label{es:r_h} \end{align} for $j=1,2$.
The next step is to insert the $u_1$ and $u_2$ specified above into \eqref{eq:intId}, multiply by $h$ and let $h \to 0$, in an attempt to obtain a Fourier transform of the magnetic field. This is done in the next lemma. The proof is based on the argument found in \cite{KU}; the difference lies, however, in how the electric potential is estimated. The crucial observation is that the last term in \eqref{eq:intId}, containing the electric potentials, vanishes as $h \to 0$ when multiplied by an extra factor of $h$, even though it closely resembles the first term with the magnetic potentials, for which this does not happen.
\begin{lem}\label{LemTempFourier} For $A_1,A_2,\mu_1,\mu_2$ and $\xi$ as above we have that \begin{equation} \label{eq:with_phases_R_n} (\mu_1+i\mu_2)\cdot\int_{\mathbb{R}^n} (A_1-A_2) e^{ix\cdot\xi} e^{\Phi_{1}+\ov{\Phi_{2}}}dx=0. \end{equation} \end{lem} \begin{proof} We use the abbreviations $A:=A_1-A_2$, $F:=F_1-F_2$ and $p:=p_1-p_2$. First we multiply \eqref{eq:intId} by $h$. For the non-gradient terms in \eqref{eq:intId} we have by \eqref{es:r_h} that \begin{align*}
\Big|
h \int_\Omega &(A_1^2 -A_2^2+p)u_1\overline{u_2}\,dx \Big|
\\ &= \Big| h \int_\Omega (A_1^2-A_2^2+p) e^{ix\cdot\xi}(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}} +e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2})
\,dx \Big| \\
&\leq Ch \| A_1^2-A_2^2+p\|_{L^\infty}
\Big(
\|e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}\|_{L^\infty}
+ \|e^{\Phi_1^\sharp}\|_{L^\infty} \|\ov{r_2}\|_{L^2} \\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
+ \|r_1\|_{L^2} \|e^{\ov{\Phi_2^\sharp}}\|_{L^\infty}
+ \|r_1\|_{L^2}\| \ov{r_2}\|_{L^2} \Big) \\
&\leq C h \to 0, \end{align*} as $h \to 0$. For our specific CGO solutions, $u_1$ and $u_2$, we hence have that \begin{align} \label{eq:modIntId}
h \Big| \int_\Omega
iA \cdot (u_1\nabla \overline{u_2}-\overline{u_2}\nabla u_1)\,dx - h \int_\Omega F \cdot (u_1\nabla \overline{u_2}+\overline{u_2}\nabla u_1)
\,dx \Big| = \mathcal{O}(h), \end{align} as $h \to 0$.
We continue by estimating the first integral in \eqref{eq:modIntId}. Since the solutions $u_1$ and $u_2$ are of the CGO form, expanding gives \begin{align} \label{eq:u1du2} hu_1\nabla\ov{u_2}=&\ov{\zeta_2}e^{ix\cdot\xi}(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}} +e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2})\\ &+he^{ix\cdot\xi}(e^{\Phi_1^\sharp}\nabla e^{\ov{\Phi_2^\sharp}} + e^{\Phi_1^\sharp}\nabla \ov{r_2} + r_1\nabla e^{\ov{\Phi_2^\sharp}} +r_1 \nabla \ov{r_2}) \nonumber. \end{align} The first term in the first parentheses in \eqref{eq:u1du2} gives \begin{align}\label{lim:first} \ov{\zeta_2} \cdot\int_\Omega iA e^{ix\cdot\xi}e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}dx\to -(\mu_1+i\mu_2)\cdot\int_\Omega iA e^{ix\cdot\xi}e^{\Phi_1+\ov{\Phi_2}}dx, \end{align} as $h\to 0$. This is because $\ov{\zeta_2}=-\mu_1-i\mu_2+\mathcal{O}(h)$ and by \eqref{es:PhiPhiSharp} we have that \begin{align*}
\bigg| (\mu_1+i\mu_2)\cdot\int_\Omega A e^{ix\cdot\xi}\big(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}-e^{\Phi_1+\ov{\Phi_2}}\big)dx \bigg|
&\leq C\big\|e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}- e^{\Phi_1+\ov{\Phi_2}}\big\|_{L^\infty(\Omega)}\\ &\leq C h^{\gamma/2} \to 0
\end{align*} as $h\to 0$. For the next three terms in \eqref{eq:u1du2}, we can use estimate \eqref{es:r_h} and Cauchy--Schwarz to conclude that \begin{align} \label{lim:middle}
\bigg|\int_\Omega & iA \cdot \overline{\zeta_2}
e^{ix\cdot\xi}(e^{\Phi_1^\sharp}\overline{r_2}+r_1e^{\overline{\Phi_2^\sharp}}+r_1\overline{r_2})dx\bigg|
\nonumber\\
&\le C\| A \|_{L^\infty}
(\big\|e^{\Phi_1^\sharp}\big\|_{L^2}\|\overline{r_2}\|_{L^2}+\|r_1\|_{L^2}\big\|
e^{\overline{\Phi_2^\sharp}}\big\|_{L^2}+\|r_1\|_{L^2}\|\overline{r_2}\|_{L^2}) \\
& \leq C h^{\gamma/2} \to 0, \nonumber \end{align} as $h\to 0$. For the last part of \eqref{eq:u1du2} containing the factor $h$, we have using estimates \eqref{es:r_h} and \eqref{es:phi_h} that \begin{align} \label{lim:last}
\bigg|\int_\Omega h iA \cdot e^{ix\cdot\xi}(e^{\Phi_1^\sharp}\nabla e^{\overline{\Phi_2^\sharp}} + e^{\Phi_1^\sharp}\nabla \overline{r_2} +
r_1\nabla e^{\overline{\Phi_2^\sharp}} +r_1 \nabla \overline{r_2})dx\bigg|\\ \le C h \big( h^{(\gamma-1)/2}+ h^{-1}h^{\gamma/2}+ h^{\gamma/2}h^{(\gamma-1)/2}+h^{\gamma}h^{-1} \big)
\to 0, \nonumber
\end{align} as $h\to 0$. Expanding the $\ov{u_2}\nabla u_1$ term in \eqref{eq:modIntId} gives \begin{align} \label{eq:u2du1} h\ov{u_2}\nabla u_1 = &\zeta_1 e^{ix\cdot\xi}(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}} +e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2})\\
&+he^{ix\cdot\xi}(\nabla e^{\Phi_1^\sharp}e^{\ov{\Phi_2^\sharp}} +
\nabla e^{\Phi_1^\sharp} \ov{r_2} + \nabla r_1 e^{\ov{\Phi_2^\sharp}} +\nabla r_1 \ov{r_2}) \nonumber. \end{align} Again $-\zeta_1=-\mu_1-i\mu_2+\mathcal{O}(h)$. The terms in \eqref{eq:u1du2} and \eqref{eq:u2du1} are of the same form. Doing the analogous estimates for \eqref{eq:u2du1} gives then that \begin{align*} h \int_\Omega
iA \cdot (u_1\nabla \overline{u_2}-\overline{u_2}\nabla u_1) \,dx
\to -2i(\mu_1+i\mu_2)\cdot\int_{\mathbb{R}^n} A e^{ix\cdot\xi} e^{\Phi_{1}+\ov{\Phi_{2}}}dx, \end{align*} as $h \to 0$.
We end the proof by showing that \begin{align} \label{lim:Fpart} h \int_\Omega F \cdot (u_1\nabla \overline{u_2}+\overline{u_2}\nabla u_1) \,dx \to 0, \end{align} as $h \to 0$. Using \eqref{eq:u1du2} and \eqref{eq:u2du1} gives that \begin{align} \label{eq:ududuu} h(u_1\nabla \overline{u_2}+\overline{u_2}\nabla u_1) \;=\; &(\ov{\zeta_2}+\zeta_1) e^{ix\cdot\xi} \big(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}} +e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2} \big)\nonumber \\ &+he^{ix\cdot\xi} \big( e^{\Phi_1^\sharp}\nabla e^{\ov{\Phi_2^\sharp}} + e^{\Phi_1^\sharp}\nabla \ov{r_2} + r_1\nabla e^{\ov{\Phi_2^\sharp}} + r_1 \nabla \ov{r_2}\\ &\quad\quad\quad\quad +\nabla e^{\Phi_1^\sharp}e^{\ov{\Phi_2^\sharp}} + \nabla e^{\Phi_1^\sharp} \ov{r_2} + \nabla r_1 e^{\ov{\Phi_2^\sharp}} + \nabla r_1 \ov{r_2} \big). \nonumber \end{align} The second term on the right hand side is of the same form as the second term on the right hand side of \eqref{eq:u1du2} and \eqref{eq:u2du1}. The contribution of these terms is therefore zero in the limit $h \to 0$.
For the first term on the right hand side of \eqref{eq:ududuu} we get, using \eqref{eq:z1_plus_z2} and \eqref{es:r_h}, the estimate \begin{align*}
\bigg| h \int_{\Omega} &i\xi \cdot F ( e^{ix\cdot\xi} (e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}} +e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2})
) \,dx\bigg| \\
&\leq \mathcal{O}(h)(1+h^{\gamma/2} + h^{\gamma/2} + h^{\gamma} ) \to 0, \end{align*} as $h\to0$. This shows that \eqref{lim:Fpart} holds. \end{proof}
It turns out that the $e^{\Phi_{1}+\ov{\Phi_{2}}}$ term can be dropped from \eqref{eq:with_phases_R_n}. This is guaranteed by Proposition 3.3 in \cite{KU} (see also \cite{ER} and \cite{S}). Using the abbreviation $A:=A_1-A_2$ we thus obtain \begin{align*}
(\mu_1+i\mu_2) \cdot \int_{\mathbb{R}^n} A e^{ix\cdot\xi} dx = (\mu_1+i\mu_2)\cdot\widehat{A}(-\xi) = 0, \end{align*} where $\widehat{A}$ stands for the Fourier transform of $A$. For any $\mu \in \mathbb{R}^n$ with $\mu \cdot \xi = 0$, we therefore have that $\mu \cdot \widehat{A}(-\xi) =0$. It follows that the Fourier transforms of the component functions of \eqref{eq:magDef} are zero. To see this, notice that the above implies that \begin{align*}
\xi_j \widehat{A_k} - \xi_k \widehat{A_j} =
(\xi_j e_k - \xi_k e_j) \cdot \widehat{A} = 0, \end{align*} since $\xi \cdot (\xi_j e_k - \xi_k e_j) = 0$, where $e_k$ denote the standard basis vectors of $\mathbb{R}^n$. We have thus proved that $dA_1 = dA_2$. \\
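The linear-algebra step above, namely that $\mu\cdot\widehat{A}(-\xi) = 0$ for every $\mu$ orthogonal to $\xi$ forces $\widehat{A}(-\xi)$ to be parallel to $\xi$, so that all components $\xi_j\widehat{A_k}-\xi_k\widehat{A_j}$ vanish, can be illustrated numerically (with a hypothetical $\widehat{A}$, for illustration only):

```python
# Illustration (hypothetical A_hat): if mu . A_hat = 0 for all mu
# orthogonal to xi, then A_hat is parallel to xi and all the curl
# components xi_j*A_hat_k - xi_k*A_hat_j vanish.
import numpy as np

rng = np.random.default_rng(0)
xi = rng.standard_normal(3)
A_hat = (2.0 - 1.5j) * xi          # a (complex) multiple of xi

for j in range(3):
    for k in range(3):
        w = xi[j] * np.eye(3)[k] - xi[k] * np.eye(3)[j]   # xi_j e_k - xi_k e_j
        assert abs(np.dot(w, xi)) < 1e-12                 # w is orthogonal to xi
        assert abs(xi[j] * A_hat[k] - xi[k] * A_hat[j]) < 1e-12
```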
\textbf{Remark.} Notice that we only need the condition $0<\gamma \leq 1$ when recovering the magnetic potentials, instead of $2/3<\gamma \leq 1$.
\section{Uniqueness of the electric potential} To finish the proof of Proposition \ref{thm2}, we need to show that $q_1 = \nabla \cdot F_1 +p_1 = \nabla \cdot F_2+p_2=q_2$. Lemma \ref{HoldExt}, the assumption that $A_1=A_2$, $F_1=F_2$ on $\p \Omega$, and the Lipschitz regularity of $\p \Omega$ allow us to extend $A_j$ and $F_j$, $j=1,2$, to a ball $B$, with $\ov{\Omega} \subset B$, so that $A_1 = A_2$ and $F_1 = F_2$ in $B\setminus\Omega$, $F_j = A_j = 0$ on $\p B$ and $A_j,F_j \in C^{0,\gamma}(B)$, for $j=1,2$.
In the previous section we proved that $d(A_1 -A_2) = 0$. The Poincar\'{e} Lemma now implies that there is a $\psi \in C^{1,\gamma}(B)$ such that $A_1-A_2 = \nabla \psi$ in $B$ (see \cite{CDK}). We can moreover choose $\psi$ so
that $\psi|_{\p B} = 0$, since $A_1=A_2=0$ in $B \setminus \Omega$. By Lemma \ref{lem_Cauchy_data} and Proposition \ref{prop_gauge_1} below, we have that \[
C_{A_1,q_1}^{B}=C_{A_2,q_2}^{B}
=C_{A_2+\nabla\psi,q_2}^{B}=C_{A_1,q_2}^{B}. \] Proposition \ref{int_identity} gives then that \begin{align} \label{eq:redIntId}
\int_B (-F \cdot \nabla(u_1 \ov{u_2}) + p u_1\ov{u_2})\,dx = 0, \end{align} for any $u_1,u_2 \in H^1(B)$, satisfying $L_{A_1,q_1}u_1=0$, $L_{\ov{A_2},\ov{q_2}}u_2=0$ in $B$ and where $F := F_1-F_2$ and $p:=p_1-p_2$.
We now suppose, as in section \ref{sec:magUniq} that $u_1$ and $u_2$ are given by \eqref{eq_u_1} and \eqref{eq_u_2} (when $\Omega = B$), with $A_1=A_2$ and consider the limit of \eqref{eq:redIntId} as $h \to 0$. Expanding \eqref{eq:redIntId}, using \eqref{eq:z1_plus_z2} gives \begin{align} \label{eq:expIdInt}
&\int_B -F\cdot i\xi e^{ix\cdot\xi}(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}
+e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2}) \,dx \nonumber \\
&+\int_B -F\cdot e^{ix\cdot\xi} \nabla(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}
+e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2})\,dx\\
&+\int_B p u_1 \ov{u_2}\,dx = 0. \nonumber \end{align} We begin by showing that the second integral in \eqref{eq:expIdInt} tends to zero, in the limit $h \to 0$.
First we simplify \eqref{eq:expIdInt} by writing $\widetilde{F} := Fe^{ix\cdot\xi}$. Notice also that $\widetilde{F} \in C^{0,\gamma}(B)$. The second simplification comes from the fact that $e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}} = 1$. To show this, notice first that the Cauchy operator has the following properties \begin{align*} \ov{N_{\zeta}^{-1}f}=N_{\ov{\zeta}}^{-1}\ov{f},\quad N_{-\zeta}^{-1}f=-N_\zeta^{-1}f. \end{align*} Applying these to the definitions \eqref{eq_phi_1_sharp_def} and \eqref{eq_phi_2_sharp_def}, together with the fact that we are now considering the case with $A_1 = A_2$, yields \begin{align*}
\Phi_1^\sharp+\ov{\Phi_2^\sharp} =
N^{-1}_{\mu_1+i\mu_2}\big (-i(\mu_1+i\mu_2) \cdot (A_1^\sharp - A_2^\sharp) \big) =0, \end{align*} so that \begin{align} \label{eq:simp2}
e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}} = 1. \end{align}
We split the second integral in \eqref{eq:expIdInt} into pieces by taking the absolute value and applying the triangle inequality. Consider first its first term. By \eqref{eq:simp2} we have immediately that \begin{align} \label{es:Fp1}
\Big| \int_B \widetilde{F} \cdot \nabla e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}\,dx \Big|
= 0. \end{align} Next we consider the terms $\nabla(e^{\Phi_1^\sharp}\ov{r_2})$ and $\nabla(r_1e^{\ov{\Phi_2^\sharp}})$, coming from the second integral in \eqref{eq:expIdInt}.
Notice firstly that $\widetilde{F}|_{\p B} =0$, since $F|_{\p B} =0$. Letting $\widetilde{F}^\sharp := \Psi_\theta * \widetilde{F}$, where $\Psi_\theta$ is defined as in the beginning of section \ref{sec:cgo} and using the estimates of Proposition \ref{CGOest}
and Lemma \ref{LemApprox} we get that \begin{align} \label{es:Fp2}
\Big| \int_B \widetilde{F} \cdot \nabla \big( e^{\Phi_1^\sharp}\ov{r_2}\big) \,dx \Big|
&= \Big| \int_B \widetilde{F} \cdot \big( \nabla e^{\Phi_1^\sharp}\ov{r_2}
+ e^{\Phi_1^\sharp}\nabla \ov{r_2} \big) \,dx \Big| \nonumber \\
&\lesssim
\| \widetilde{F} \cdot \nabla e^{\Phi_1^\sharp}\|_{\infty} \|\ov{r_2} \|_{2}
+ \Big| \int_B \widetilde{F} \cdot e^{\Phi_1^\sharp}\nabla \ov{r_2}\,dx \Big| \nonumber \\
&\lesssim
h^{(\gamma-1)/2}h^{\gamma/2}
+ \Big| \int_B \widetilde{F} \cdot e^{\Phi_1^\sharp}\nabla \ov{r_2}\,dx \Big| \nonumber \\
&\lesssim
h^{\gamma-1/2}
+ \Big| \int_B \nabla \cdot \big(\widetilde{F}^\sharp e^{\Phi_1^\sharp} \big) \ov{r_2}\,dx \Big| \\
&\quad\quad\quad\quad
+ \Big| \int_B \big(\widetilde{F} - \widetilde{F}^\sharp\big) \cdot e^{\Phi_1^\sharp}\nabla \ov{r_2}\,dx
\Big| \nonumber\\
&\lesssim
h^{\gamma-1/2}
+ \theta^{1-\gamma} h^{\gamma/2}
+ \|\widetilde{F} - \widetilde{F}^\sharp\|_{\infty} \| \nabla \ov{r_2} \|_{2} \nonumber\\
&\lesssim
h^{\gamma-1/2}
+ \theta^{1-\gamma} h^{\gamma/2}
+ \theta^{-\gamma}h^{\gamma/2-1}. \nonumber \end{align} The last term from the second integral in \eqref{eq:expIdInt} is handled as follows \begin{align} \label{es:Fp3}
\Big| \int_B \widetilde{F} \cdot \nabla (r_1 \ov{r_2}) \,dx\Big|
&\lesssim
\Big| \int_B \nabla \cdot \widetilde{F}^\sharp r_1 \ov{r_2}\,dx \Big|
+ \Big| \int_B \big(\widetilde{F} -\widetilde{F}^\sharp\big)
\cdot \nabla (r_1 \ov{r_2})\,dx \Big| \nonumber \\
&\lesssim \|\nabla \cdot \widetilde{F}^\sharp \|_{\infty} \|r_1\|_{2}\|\ov{r_2}\|_{2} \nonumber \\
&\quad
+\| \widetilde{F} - \widetilde{F}^\sharp \|_{\infty}
\big( \|\nabla r_1\|_{2}\|\ov{r_2}\|_{2} + \|r_1\|_{2}\|\nabla\ov{r_2}\|_{2} \big) \\
&\lesssim \theta^{1-\gamma} h^{\gamma/2} h^{\gamma/2}
+ \theta^{-\gamma}h^{-1}h^{\gamma/2}h^{\gamma/2} \nonumber \\
&\lesssim \theta^{1-\gamma} h^{\gamma} + \theta^{-\gamma}h^{\gamma-1}. \nonumber \end{align} Combining \eqref{es:Fp1}, \eqref{es:Fp2} and \eqref{es:Fp3} and then choosing $\theta = h^{-1}$, gives for the second integral in \eqref{eq:expIdInt} that \begin{align*}
\Big| \int_B & F\cdot e^{ix\cdot\xi} \nabla(e^{\Phi_1^\sharp+\ov{\Phi_2^\sharp}}
+e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2})\,dx \Big| \\
&\lesssim
\theta^{1-\gamma}h^{\gamma/2} + \theta^{-\gamma}h^{\gamma/2-1} + h^{\gamma-1/2}\\
&=
2 h^{(3\gamma-2)/2 } + h^{\gamma-1/2} \to 0, \end{align*} as $h \to 0$, since we require that $\gamma > 2/3$.
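The exponent bookkeeping behind the choice $\theta = h^{-1}$, and the appearance of the threshold $\gamma > 2/3$, can be double-checked with SymPy (a sketch, not part of the proof):

```python
# Exponent bookkeeping (sketch) for the choice theta = h^{-1}.
import sympy as sp

h, gamma = sp.symbols('h gamma', positive=True)
theta = 1 / h

e1 = theta**(1 - gamma) * h**(gamma/2)       # theta^(1-gamma) * h^(gamma/2)
e2 = theta**(-gamma) * h**(gamma/2 - 1)      # theta^(-gamma)  * h^(gamma/2-1)

target = h**((3*gamma - 2) / 2)
assert sp.simplify(e1 - target) == 0
assert sp.simplify(e2 - target) == 0

# The exponent (3*gamma - 2)/2 is positive, so both terms vanish as h -> 0,
# precisely when gamma > 2/3:
expo = (3*gamma - 2) / 2
assert expo.subs(gamma, sp.Rational(3, 4)) > 0
assert expo.subs(gamma, sp.Rational(1, 2)) < 0
```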
We now return to the first integral in \eqref{eq:expIdInt}. It can be estimated using \eqref{es:r_h} and the Cauchy--Schwarz inequality as follows \begin{align*}
\Big| \int_B &-F\cdot i\xi e^{ix\cdot\xi}(
e^{\Phi_1^\sharp}\ov{r_2}+r_1e^{\ov{\Phi_2^\sharp}}+r_1\ov{r_2}) \,dx \Big| \\
&\lesssim
\big \|e^{\Phi_1^\sharp}\big \|_{\infty}
\big \|\ov{r_2}\big \|_{2}
+\big \|r_1\big\|_{2} \big \|e^{\ov{\Phi_2^\sharp}}\big\|_{\infty}
+\big \|r_1\big\|_{2} \big \|\ov{r_2}\big \|_{2} \\
&\lesssim
h^{\gamma/2} \to 0, \end{align*} as $h \to 0$. Estimating the third integral in \eqref{eq:expIdInt} in a similar fashion and using \eqref{eq:simp2}, we thus conclude that \eqref{eq:expIdInt} reduces to \begin{align*}
\int_B (-F\cdot i\xi e^{ix\cdot\xi} + p e^{ix\cdot \xi}) \,dx = 0, \end{align*} in the limit $h \to 0$. This implies that $\mathcal{F} \big( \nabla \cdot F + p \big)(-\xi) = 0$ in the distributional sense, which in turn implies that $0= \nabla \cdot F + p = (\nabla \cdot F_1 + p_1) - (\nabla \cdot F_2 + p_2)$, finishing the proof of Proposition \ref{thm2}.
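The integration by parts behind reading $-F\cdot i\xi\, e^{ix\cdot\xi}$ as the Fourier transform of $\nabla\cdot F$, which uses $F|_{\p B} = 0$, can be sanity-checked in one dimension with SymPy (a hypothetical $F$ vanishing at the endpoints, for illustration only):

```python
# 1-d sanity check of the integration by parts used above: if F vanishes at
# the endpoints, then int(-F * i*xi * e^{i*x*xi}) = int(F' * e^{i*x*xi}).
import sympy as sp

x = sp.symbols('x', real=True)
xi = 2                      # a concrete frequency; any real value works
F = x * (1 - x)             # vanishes at x = 0 and x = 1, like F on the boundary of B

lhs = sp.integrate(-F * sp.I * xi * sp.exp(sp.I * x * xi), (x, 0, 1))
rhs = sp.integrate(sp.diff(F, x) * sp.exp(sp.I * x * xi), (x, 0, 1))
assert abs(complex((lhs - rhs).evalf())) < 1e-12
```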
\section{Appendix A -- Gauge invariance and boundary data} Gauge invariance plays an important role when working with the magnetic Schr\"odinger equation. Here we state the basic result concerning the gauge invariance of the Cauchy data sets. This section also includes two results on when the equality of the boundary data on a smaller set implies the equality of the boundary data on a bigger set.
\begin{prop} \label{prop_gauge_1} Let $\Omega\subset \mathbb{R}^n$, $n\ge 3$, be a bounded open set with Lipschitz boundary. Assume $A,F \in C^{0,\gamma}(\Omega,\mathbb{C}^n)$, $0 < \gamma \leq 1$, $p \in L^\infty(\Omega, \mathbb{C})$, $\psi \in C^{1,\gamma}(\Omega,\mathbb{C})$ and let $q = \nabla \cdot F + p$. Then we have
\begin{align} \label{eq:conj_lem}
e^{-i\psi}\circ L_{A,q }\circ e^{i\psi}=L_{A+\nabla \psi, q}.
\end{align}
If furthermore, $\psi|_{\p \Omega}=0$ then
\begin{align} \label{eq:conj_lem_2}
C_{A,q}=C_{A+\nabla\psi,q}.
\end{align} \end{prop} \begin{proof} Let $\psi \in C^{1,\gamma}(\Omega)$. By direct computation we know that for $L_{A,p}$, we have \[
e^{-i\psi}\circ L_{A,p }\circ e^{i\psi}=L_{A+\nabla \psi, p}. \] Furthermore we have that \[
e^{-i\psi}\circ (\nabla \cdot F) \circ e^{i\psi} = \nabla \cdot F, \] since for $u,v \in C^\infty_0(\Omega)$, we have that \begin{align*}
\big \langle e^{-i\psi}\circ (\nabla \cdot F) \circ e^{i\psi} u, v \big \rangle
&= -\int_\Omega F \cdot \nabla ( e^{-i\psi} u e^{i\psi} v)\,dx \\
&= -\int_\Omega F \cdot \nabla ( uv)\,dx, \end{align*} where $\langle \cdot,\cdot \rangle$ stands for the distributional duality. Thus recalling that $q = \nabla \cdot F + p$ it follows that \begin{align*}
e^{-i\psi}\circ L_{A,q }\circ e^{i\psi}=L_{A+\nabla \psi, q}, \end{align*} which proves \eqref{eq:conj_lem}.
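The conjugation identity for the purely magnetic part can be verified symbolically in one dimension; the SymPy snippet below checks that $e^{-i\psi}\circ L_{A,p}\circ e^{i\psi} = L_{A+\psi',p}$ for the operator $(D+A)^2 + p$ with $D = -i\,d/dx$ (an illustrative sketch, not part of the proof):

```python
# 1-d symbolic check of e^{-i psi} o L_{A,p} o e^{i psi} = L_{A+psi',p}.
import sympy as sp

x = sp.symbols('x', real=True)
u, A, p, psi = [sp.Function(name)(x) for name in ('u', 'A', 'p', 'psi')]

def L(A, p, u):
    # magnetic Schrodinger operator (D + A)^2 + p with D = -i d/dx
    return (-sp.diff(u, x, 2)
            - sp.I * sp.diff(A * u, x)
            - sp.I * A * sp.diff(u, x)
            + A**2 * u + p * u)

lhs = sp.exp(-sp.I * psi) * L(A, p, sp.exp(sp.I * psi) * u)
rhs = L(A + sp.diff(psi, x), p, u)
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```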
In order to prove \eqref{eq:conj_lem_2}, assume that $\psi|_{\p\Omega}=0$. Let $u \in H^1(\Omega)$ be a solution to \[
L_{A,q} u = 0, \text{ in } \Omega. \] By \eqref{eq:conj_lem} we know that $e^{-i\psi}u \in H^1(\Omega)$ satisfies \[
L_{A + \nabla \psi,q} (e^{-i\psi}u) = 0, \text{ in } \Omega. \]
Moreover we have that $e^{-i\psi}u|_{\p\Omega} = u|_{\p \Omega}$. It remains hence to show that \[
N_{A + \nabla \psi,q} (e^{-i\psi}u) = N_{A,q} u, \text{ on } \p \Omega. \]
To that end let $\varphi \in H^{1/2}(\p\Omega)$ and let $\phi\in H^1(\Omega)$ be such that $\phi |_{\p\Omega} = \varphi$. Then \begin{align*}
\big \langle &N_{A + \nabla \psi,q} (e^{-i\psi}u) , \varphi \big \rangle
= \big \langle N_{A + \nabla \psi,q} (e^{-i\psi}u) , e^{i\psi}\varphi \big \rangle \\
&\quad= \int_\Omega \nabla(e^{-i\psi}u) \cdot \nabla(e^{i\psi} \phi)
+i(A +\nabla \psi) \cdot (e^{-i\psi}u \nabla(e^{i\psi} \phi) \\
&\quad\quad-\nabla (e^{-i\psi}u) e^{i\psi} \phi) + ((A + \nabla \psi)^2 +p)u\phi- F\cdot\nabla(u\phi)\,dx \\
&\quad= \int_\Omega \nabla u \cdot \nabla \phi
+ iA \cdot (u \nabla \phi -\nabla u \phi) + (A^2 +p)u\phi\\
&\quad \quad- F\cdot\nabla(u\phi)\,dx \\
&\quad= \big \langle N_{A,q} u , \varphi \big \rangle. \end{align*}
\end{proof}
The next lemma is a slight modification of Lemma 4.2 in \cite{Sa}; we include the proof for the convenience of the reader. The lemma shows that two DN-maps that coincide on a smaller set give rise to two DN-maps that coincide on a bigger set, provided we extend the potentials so that they are identical outside the smaller set.
\begin{lem} \label{lem_Cauchy_data_conv}
Let $\Omega, \Omega'\subset \mathbb{R}^n$ be bounded open sets with Lipschitz
boundaries, such that $\ov{\Omega}\subset \Omega'$
and let $V_1,V_2\in L^\infty(\Omega',\mathbb{C}^n)$.
Denote by $\Lambda_{V_j}^\Omega$ the DN-map corresponding to the Dirichlet problem on the set
$\Omega$.
Assume that
$V_1=V_2$ in $\Omega'\setminus\Omega$.
If $\Lambda_{V_1}^\Omega=\Lambda_{V_2}^\Omega$ then
$\Lambda_{V_1}^{\Omega'}=\Lambda_{V_2}^{\Omega'}$. \end{lem} \begin{proof} Given $u_1' \in H^1(\Omega')$ solving $L_{V_1} u_1' = 0$ in $\Omega'$, we need to find a $u_2' \in H^1(\Omega')$ solving $L_{V_2} u_2' = 0$ in $\Omega'$ with
$u_2'|_{\p \Omega'} = u_1'|_{\p \Omega'} $ and $\p_n u_2'|_{\p \Omega'} =\p_n u_1'|_{\p \Omega'}$.
The function $u_1 := u_1'|_{\Omega}$ solves $L_{V_1} u_1 = 0$, in $\Omega$. Let $u_2\in H^1(\Omega)$ be such that
$L_{V_2} u_2 = 0$ in $\Omega$ and $u_2|_{\p \Omega} = u_1|_{\p \Omega}$. We know that
$\p_n u_2|_{\p \Omega} = \p_n u_1|_{\p \Omega}$, since $\Lambda_{V_1}^\Omega=\Lambda_{V_2}^\Omega$. Thus $u_1 - u_2 \in H^1_0(\Omega)$. Define \begin{align*} u_2' := u_1' - (u_1 - u_2), \textrm{ in } \Omega', \end{align*} where we extend $u_1 - u_2$ by zero to $\Omega'$. Clearly $u_2' \in H^1(\Omega')$,
$u_2'|_{\p \Omega'} = u_1'|_{\p \Omega'} $ and $\p_n u_2'|_{\p \Omega'} =\p_n u_1'|_{\p \Omega'}$.
It remains to check that $L_{V_2} u_2' = 0$, in $\Omega'$ in a weak sense. Let $\varphi \in C^\infty_0(\Omega')$, then \begin{align*}
\langle L_{V_2} u_2', \varphi \rangle_{\Omega'}
&= \int_{\Omega'} \nabla u_2' \cdot \nabla \varphi + V_2 \cdot \nabla u_2'\varphi \\
&= \int_{\Omega} \nabla u_2' \cdot \nabla \varphi + V_2 \cdot \nabla u_2'\varphi
+ \int_{\Omega'\setminus \Omega} \nabla u_2' \cdot \nabla \varphi + V_2 \cdot \nabla u_2'\varphi \\
&= \int_{\Omega} \nabla u_2 \cdot \nabla \varphi + V_2 \cdot \nabla u_2\varphi
+ \int_{\Omega'\setminus \Omega} \nabla u_1' \cdot \nabla \varphi + V_1 \cdot \nabla u_1'\varphi \\
&=
\int_{\Omega'} \nabla u_1' \cdot \nabla \varphi + V_1 \cdot \nabla u_1'\varphi \\
&=
\langle L_{V_1} u_1', \varphi \rangle_{\Omega'} \\
&= 0, \end{align*}
where we use the fact that $u_2|_{\p \Omega} = u_1|_{\p \Omega}$,
$u_1 = u_1'|_\Omega$ and $\Lambda^\Omega_{V_1} = \Lambda_{V_2}^\Omega$ to get the fourth equality. \end{proof}
We need a similar result concerning the magnetic Schr\"odinger operator.
\begin{lem} \label{lem_Cauchy_data} Let $\Omega, \Omega'\subset \mathbb{R}^n$ be bounded open sets with Lipschitz boundaries, such that $\ov{\Omega} \subset \Omega'$. Let $A_1,A_2,F_1,F_2\in C^{0,\gamma}(\Omega',\mathbb{C}^n)$, $0<\gamma \leq 1$, $p_1,p_2 \in L^\infty(\Omega',\mathbb{C})$ and let $q_j:= \nabla \cdot F_j + p_j$. Denote by $C_{A_j,q_j}^{\Omega}$ the Cauchy data for $L_{A_j,q_j}$ in the set $\Omega$, $j=1,2$. Assume that \begin{equation} \label{eq_equality_A_q} A_1=A_2,\;F_1=F_2\quad\textrm{and}\quad p_1=p_2, \quad \textrm{in}\quad \Omega'\setminus\Omega. \end{equation} If $C_{A_1,q_1}^{\Omega}=C_{A_2, q_2}^\Omega$ then $C_{A_1,q_1}^{\Omega'}=C_{A_2, q_2}^{\Omega'}$. \end{lem} \begin{proof} Given $u_1' \in H^1(\Omega')$ solving $L_{A_1,q_1} u_1' = 0$ in $\Omega'$, we need to find a $u_2' \in H^1(\Omega')$ solving $L_{A_2,q_2} u_2' = 0$ in $\Omega'$ with
$u_2'|_{\p \Omega'} = u_1'|_{\p \Omega'} $ and $N_{A_2,q_2} u_2' = N_{A_1,q_1} u_1'$. This implies that $C_{A_1,q_1}^{\Omega'}\subset C_{A_2,q_2}^{\Omega'}$, from which the claim follows.
Let $u_1 := u_1'|_{\Omega}$. Then $L_{A_1,q_1} u_1 = 0$, in $\Omega$. Let $u_2\in H^1(\Omega)$ be such that
$L_{A_2,q_2} u_2 = 0$, in $\Omega$ and $u_2|_{\p \Omega} = u_1|_{\p \Omega}$. Because $C_{A_1,q_1}^{\Omega}=C_{A_2, q_2}^\Omega$, we know that $N_{A_2,q_2} u_2 = N_{A_1,q_1} u_1$, on $\p\Omega$.
In particular we have that $\varphi := u_2 - u_1 \in H^1_0(\Omega) \subset H^1_0(\Omega')$. Define \begin{align*} u_2' := u_1' + \varphi, \textrm{ in } \Omega', \end{align*} where we extend $\varphi$ by zero to $\Omega'$. Clearly $u_2' \in H^1(\Omega')$,
$u_2'|_{\p \Omega'} = u_1'|_{\p \Omega'} $. We need thus to check that $L_{A_2,q_2} u_2' = 0$, in $\Omega'$ and that $N_{A_2,q_2} u_2' = N_{A_1,q_1} u_1'$.
Let $\psi \in C_0^\infty(\Omega')$, then \begin{align*}
\langle L_{A_2,q_2} u_2', \psi \rangle_{\Omega'} =
\int_{\Omega'} &\nabla (u_1'+\varphi) \cdot \nabla \psi + i A_2
\cdot ((u_1'+\varphi) \nabla \psi - \psi \nabla(u_1'+\varphi)) \\
& + (A_2^2+p_2)(u_1'+\varphi) \psi - F_2 \cdot \nabla ( (u_1'+\varphi) \psi) \, dx. \end{align*} Since $u_1'+\varphi = u_2$ on $\Omega$, we have that \begin{align*}
\langle L_{A_2,q_2} u_2', \psi \rangle_{\Omega'} &=
\int_{\Omega} \nabla u_2 \cdot \nabla \psi + i A_2
\cdot (u_2 \nabla \psi - \psi \nabla u_2) \\
&\quad+ (A_2^2+p_2)u_2\psi - F_2 \cdot \nabla ( u_2 \psi) \, dx \\
&+\int_{\Omega'\setminus \Omega} \nabla u_1' \cdot \nabla \psi + i A_1
\cdot (u_1' \nabla \psi - \psi \nabla u_1') \\
&\quad+ (A_1^2+p_1)u_1' \psi - F_1 \cdot \nabla (u_1' \psi) \, dx \\
&+\int_{\Omega'\setminus \Omega} \nabla \varphi \cdot \nabla \psi + i A_1
\cdot (\varphi\nabla \psi - \psi \nabla\varphi) \\
&\quad+ (A_1^2+p_1)\varphi \psi - F_1 \cdot \nabla ( \varphi \psi) \, dx \end{align*} The last integral is zero, since $\supp(\varphi) \subset \Omega$. Hence using the assumption that $N_{A_2,q_2} u_2 = N_{A_1,q_1} u_1$, on $\p\Omega$ gives \begin{align*}
\langle L_{A_2,q_2} u_2', \psi \rangle_{\Omega'} &=
\langle N_{A_2,q_2} u_2, \psi|_{\Omega} \rangle_{\p\Omega} \\
&\quad+\int_{\Omega'\setminus \Omega} \nabla u_1' \cdot \nabla \psi + i A_1
\cdot (u_1' \nabla \psi - \psi \nabla u_1') \\
&\quad\quad+ (A_1^2+p_1)u_1' \psi - F_1 \cdot \nabla (u_1' \psi) \, dx \\
&=\langle L_{A_1,q_1} u_1', \psi \rangle_{\Omega'}
=0. \end{align*} Thus we see that $L_{A_2,q_2} u_2' = 0$, in $\Omega'$.
A similar deduction shows that $N_{A_2,q_2} u_2' = N_{A_1,q_1} u_1'$. Hence we have that $C_{A_1,q_1}^{\Omega'}\subset C_{A_2,q_2}^{\Omega'}$. \end{proof}
\section{Appendix B -- A Carleman estimate}
In this section we prove a Carleman estimate that implies the solvability result of Proposition \ref{solvability} in Section \ref{sec:cgo}. The proof is a straightforward extension of the one in \cite{KU}, and we give it here for the convenience of the reader. The main concern is how to incorporate the $\nabla \cdot F$ term into the result of \cite{KU}.
The estimate we are about to prove is a perturbation of the Carleman estimate for the Laplacian, given in \cite{STz} (see also \cite{KU}). We state this result as follows.
\begin{prop}
Let $\varphi(x) = \alpha \cdot x$, $\alpha \in \mathbb{R}^n$, $|\alpha| = 1$ and let $\varphi_\varepsilon=\varphi+\frac{h}{2\varepsilon}\varphi^2$. Then for $0<h\ll \varepsilon\ll 1$ and $s\in\mathbb{R}$, we have \begin{align} \label{eq:CE_lap}
\frac{h}{\sqrt{\varepsilon}}\|u\|_{H^{s+2}_{\textrm{scl}}(\mathbb{R}^n)}\le C\|e^{\varphi_\varepsilon/h}h^2\Delta(e^{-\varphi_\varepsilon/h}u)\|_{H^s_{\textrm{scl}}(\mathbb{R}^n)}, \quad C>0, \end{align} for all $u\in C^\infty_0(\Omega)$. \end{prop}
We now apply this result in the case $s=-1$ and a fixed $\varepsilon>0$ that is sufficiently small.
\begin{prop} \label{PCE}
Let $\varphi(x) = \alpha \cdot x$, $\alpha \in \mathbb{R}^n$ with $|\alpha| = 1$. Assume $A,F \in L^\infty(\Omega, \mathbb{C}^n)$, $p \in L^\infty(\Omega,\mathbb{C})$ and $q= \nabla \cdot F + p$. Then for $0<h\ll 1$, we have \begin{align} \label{eq:CE_schr}
h\|u\|_{H^{1}_{\textrm{scl}}(\mathbb{R}^n)}\le C\|e^{\varphi/h}h^2L_{A,q}(e^{-\varphi/h}u) \|_{H^{-1}_{\textrm{scl}}(\mathbb{R}^n)}, \end{align} for all $u\in C^\infty_0(\Omega)$. \end{prop}
\begin{proof} Let $\varphi_\varepsilon=\varphi+\frac{h}{2\varepsilon}\varphi^2$ be the convexified weight, with $\varepsilon >0$ and $0<h\ll \varepsilon\ll 1$. Then in the proof of Proposition 2.2 in \cite{KU}, it is shown that \begin{align} \label{eq:AA_est}
\|e^{\varphi_\varepsilon/h} h^2 A\cdot D(e^{-\varphi_\varepsilon/h}u) + e^{\varphi_\varepsilon/h} h^2 D\cdot( A e^{-\varphi_\varepsilon/h}u)
\|_{H_{\textrm{scl}}^{-1}(\mathbb{R}^n)}\le
\mathcal{O}(h)\|u\|_{H^1_{\textrm{scl}}(\mathbb{R}^n)}, \end{align}
where $D:=i^{-1}\nabla$. Here the implicit constant depends on $\|A\|_{L^\infty(\Omega)}$, $\| \varphi \|_{L^\infty(\Omega)}$
and $\| D \varphi \|_{L^\infty(\Omega)}$ (see (2.4) in \cite{KU}).
Furthermore, we have for all $0 \neq \psi \in C^\infty_0(\Omega)$ that \begin{align*}
\big| \langle e^{\varphi_\varepsilon/h} h^2 \nabla \cdot F (e^{-\varphi_\varepsilon/h} u) , \psi \rangle \big|
&\leq h^2 \int_{\mathbb{R}^n} | F \cdot \nabla ( e^{\varphi_\varepsilon/h}u \, e^{-\varphi_\varepsilon/h}\psi )| \, dx = h^2 \int_{\mathbb{R}^n} | F \cdot \nabla ( u \psi )| \, dx \\
&\leq h \|F\|_{L^\infty(\mathbb{R}^n)} \big( \|h\nabla u\|_{L^2(\mathbb{R}^n)} \|\psi\|_{L^2(\mathbb{R}^n)} \\
&\quad + \|u\|_{L^2(\mathbb{R}^n)} \|h \nabla \psi \|_{L^2(\mathbb{R}^n)} \big)\\
&\leq \mathcal{O}(h)\|u\|_{H^1_{\textrm{scl}}(\mathbb{R}^n)}\|\psi\|_{H^1_{\textrm{scl}}(\mathbb{R}^n)}. \end{align*} It follows from the definition of the $H^{-1}_{\textrm{scl}}$-norm that \begin{align} \label{eq:Fterm_est}
\| e^{\varphi_\varepsilon/h} h^2 \nabla \cdot F
(e^{-\varphi_\varepsilon/h} u)\|_{H^{-1}_{\textrm{scl}}(\mathbb{R}^n)}
&\leq \mathcal{O}(h)\|u\|_{H^1_{\textrm{scl}}(\mathbb{R}^n)}. \end{align} By choosing a small fixed $\varepsilon>0$ that is independent of $h$, we conclude from estimates \eqref{eq:CE_lap},\eqref{eq:AA_est} and \eqref{eq:Fterm_est} that \begin{align*}
\big \|&e^{\varphi_\varepsilon/h}(-h^2\Delta)(e^{-\varphi_\varepsilon/h}u) +
e^{\varphi_\varepsilon/h} h^2 A\cdot D(e^{-\varphi_\varepsilon/h}u)\\
&\quad + e^{\varphi_\varepsilon/h} h^2 D\cdot( A e^{-\varphi_\varepsilon/h}u)
+ e^{\varphi_\varepsilon/h} h^2 \nabla \cdot F
(e^{-\varphi_\varepsilon/h} u) \big \|_{H_{\textrm{scl}}^{-1}(\mathbb{R}^n)} \\
& \geq \frac{h}{C}\|u\|_{H^1_{\textrm{scl}}(\mathbb{R}^n)}. \end{align*} Moreover we have that \begin{align*}
\|h^2(A^2+p) u \|_{H_{\textrm{scl}}^{-1}(\mathbb{R}^n)}
\leq \mathcal{O}(h^2)\|u\|_{H^1_{\textrm{scl}}(\mathbb{R}^n)}. \end{align*} Combining the two previous estimates then gives \begin{align*}
C\|e^{\varphi_\varepsilon/h}h^2L_{A,q}(e^{-\varphi_\varepsilon/h}u) \|_{H^{-1}_{\textrm{scl}}(\mathbb{R}^n)}
\geq \frac{h}{C} \|u\|_{H^{1}_{\textrm{scl}}(\mathbb{R}^n)}, \end{align*} where $C>0$. By using $e^{-\varphi_\varepsilon/h}u = e^{-\varphi/h} e^{-\varphi^2/(2\varepsilon)}u$ and the fact that $\varepsilon$ is fixed, we obtain \eqref{eq:CE_schr}. \end{proof}
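The elementary inequality behind \eqref{eq:Fterm_est} (the triangle and Cauchy--Schwarz steps) can be sanity-checked numerically. The following one-dimensional sketch uses assumed test functions and coefficients; it is only an illustration, not part of the proof.

```python
import numpy as np

# 1D check of h^2 |int F (u psi)' dx| <= h ||F||_inf (||h u'|| ||psi|| + ||u|| ||h psi'||)
x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]

def bump(c, w):
    # smooth compactly supported test function centered at c with width w
    arg = 1 - ((x - c) / w) ** 2
    return np.where(arg > 0, np.exp(-1 / np.maximum(arg, 1e-12)), 0.0)

u, psi = bump(0.1, 0.5), bump(-0.2, 0.6)
F = np.sin(3 * x)                                  # a bounded (test) coefficient
L2 = lambda f: np.sqrt(np.sum(f ** 2) * dx)        # discrete L^2 norm

du, dpsi = np.gradient(u, dx), np.gradient(psi, dx)
for h in (0.1, 0.01, 0.001):
    lhs = h ** 2 * abs(np.sum(F * np.gradient(u * psi, dx)) * dx)
    rhs = h * np.max(np.abs(F)) * (L2(h * du) * L2(psi) + L2(u) * L2(h * dpsi))
    assert lhs <= rhs * (1 + 1e-8)                 # the bound holds for every h
```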
\end{document}
\begin{document}
\draft
\title{Teleportation via generalized measurements, and conclusive teleportation}
\author{ Tal Mor\thanks{Electrical Engineering department, University of California, Los Angeles, CA 90095, USA} and Pawel Horodecki\thanks{Faculty of Applied Physics and Mathematics, Technical University of Gda\'nsk, 80--952 Gda\'nsk, Poland} }
\date{\today}
\maketitle
\begin{abstract}
In this work we show that teleportation~\cite{BBCJPW} is a {\em special case} of a generalized Einstein, Podolsky, Rosen (EPR) non-locality. Based on the connection between teleportation and generalized measurements we define conclusive teleportation. We show that perfect conclusive teleportation can be obtained with any pure entangled state, and it can be arbitrarily approached with a particular mixed state.
\end{abstract}
\noindent KEY WORDS: Quantum information processing, Entanglement, Teleportation, Nonlocality, Generalized measurements, Conclusive teleportation, Distillation;
\section{Introduction}
Quantum information processing (QIP)~\cite{Hels67,Kholevo1,Kholevo2,Helstrom,Davies,Per93,Ben-Div-95,Ben-Sho-98} discusses information processing in which the basic units are two-level quantum systems (e.g., spin-half particles, the polarization of individual photons, etc.) known as quantum bits or shortly, {\em qubits}. The classical states, $0$ and $1$, of a classical bit
are generalized to quantum states of a qubit, $|0\rangle \equiv { 1 \choose 0}$ and
$|1\rangle \equiv { 0 \choose 1}$. The nonclassical aspect of a qubit is that it can also be in a superposition
$|\phi \rangle = \alpha |0\rangle + \beta |1\rangle = { \alpha \choose \beta}$, with $|\alpha|^2 + |\beta|^2 = 1$, and two or more qubits can be in a superposition which cannot be written as a tensor product, and is known as an entangled state. The special properties of entangled states were first noted by Einstein-Podolsky-Rosen (EPR)~\cite{EPR}, and a proof of their genuinely nonclassical character was first obtained by Bell~\cite{Bell}. The EPR-Bohm singlet state,
$ |\Psi^-\rangle = (1/\sqrt2) [|01\rangle - |10\rangle]$, of a pair of qubits is the most important example of entanglement. [We prefer, for simplicity, to use these ``braket'' notations for two-particle states while using vector notations for one-particle states.] The singlet state can be complemented to a basis~\cite{BMR} (known now as the ``Bell basis'') by adding the three states
$ |\Psi^+\rangle = (1/\sqrt2) [|01\rangle + |10\rangle]$,
$ |\Phi^-\rangle = (1/\sqrt2) [|00\rangle - |11\rangle]$,
$ |\Phi^+\rangle = (1/\sqrt2) [|00\rangle + |11\rangle]$, the Bell-states (or the Braunstein, Mann, Revzen (BMR) states). We shall usually refer here to two qubits in any one of the Bell-BMR states as an EPR-pair, and to the EPR-Bohm state as the singlet state. Entanglement---the quantum feature visualized by such states---is an origin of fascinating quantum phenomena in quantum information theory: quantum computation \cite{deutsch85,ShorFactor,Grover,Ben-Div-95,NatureL}, entanglement-based quantum cryptography \cite{EPR-scheme,BHM96}, quantum error correction~\cite{ShorQEC,ScienceCLSZ,ShorFT,ScienceZ,CRSS,Rains}, and more.
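As a small numerical aside (standard computational-basis conventions assumed), one can verify that the four Bell-BMR states form an orthonormal basis of the two-qubit space:

```python
import numpy as np

k0, k1 = np.eye(2)          # computational basis vectors |0>, |1>
B = np.column_stack([
    (np.kron(k0, k1) - np.kron(k1, k0)) / np.sqrt(2),   # |Psi->
    (np.kron(k0, k1) + np.kron(k1, k0)) / np.sqrt(2),   # |Psi+>
    (np.kron(k0, k0) - np.kron(k1, k1)) / np.sqrt(2),   # |Phi->
    (np.kron(k0, k0) + np.kron(k1, k1)) / np.sqrt(2),   # |Phi+>
])
assert np.allclose(B.T.conj() @ B, np.eye(4))           # orthonormal basis
```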
One of the most fascinating discoveries is {\it quantum teleportation}~\cite{BBCJPW} which lies at the heart of quantum information theory (see~\cite{Ben-Sho-98,Caves}), and has been recently realized experimentally \cite{ExTel}. Quantum teleportation is a process of transmission of an {\it unknown}
quantum state $|\phi \rangle={\alpha \choose \beta}$ via a previously shared EPR pair with the help of only two classical bits transmitted via a classical channel (usually visualized by phone): Alice (the sender) has a qubit in an unknown quantum state which she wishes to transmit to Bob (the receiver) using an additional EPR pair shared by her and Bob. To do this she performs a joint measurement on the two particles in her hands, then she sends (via phone) her two-bit result to Bob, who performs a unitary operation on his particle, ``transforming'' it into the (still unknown) original state
$|\phi\rangle$. The initial state of Alice's unknown state, and the EPR-pair (say, in a singlet state) is
$ {\alpha \choose \beta} | \Psi^- \rangle $. The teleportation is based on the fact that this initial state can also be written as~\cite{BBCJPW}: \begin{eqnarray}
&&|\Psi_{123}\rangle =
{\alpha \choose \beta}_1 | \Psi^-_{23} \rangle \ =
\frac{1}{2} \Big[ | \Phi^+_{12} \rangle { -\beta \choose \alpha}_3 +
| \Phi^-_{12} \rangle { \beta \choose \alpha}_3 + \nonumber \\
&& | \Psi^+_{12} \rangle { - \alpha \choose \beta }_3 +
| \Psi^-_{12} \rangle { - \alpha \choose - \beta }_3 \Big] \ ,
\end{eqnarray} where we add the particle's numbers to avoid confusion. A Bell measurement at Alice's site projects the state of Bob's particle, to be in one of the states \begin{equation} { -\beta \choose \alpha}\ ; \ { \beta \choose \alpha}\ ; \ { - \alpha \choose \beta }\ ; \ { \alpha \choose \beta } \ , \end{equation} depending on Alice's outcome. Using the appropriate rotation, \begin{eqnarray}
\left( \begin{array}{cc}
0 & 1 \\ -1 & 0 \end{array} \right) \ ;
\left( \begin{array}{cc}
0 & 1 \\ 1 & 0 \end{array} \right) \ ;
\left( \begin{array}{cc}
-1 & 0 \\ 0 & 1 \end{array} \right) \ ; \ \textrm{or} \
\left( \begin{array}{cc}
1 & 0 \\ 0 & 1 \end{array} \right) \ , \end{eqnarray} respectively, each of these states can be rotated back to yield the
unknown state $|\phi\rangle$. Bob chooses the correct rotation based on the two bits he receives from Alice.
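The decomposition and the four correcting rotations can be verified numerically. The following sketch uses arbitrary test values for $\alpha,\beta$ (an assumption, any normalized pair works) and the standard computational-basis conventions:

```python
import numpy as np

# arbitrary test values with |alpha|^2 + |beta|^2 = 1
alpha, beta = 0.6, 0.8j
phi = np.array([alpha, beta])
k0, k1 = np.eye(2)

singlet = (np.kron(k0, k1) - np.kron(k1, k0)) / np.sqrt(2)
state = np.kron(phi, singlet).reshape(4, 2)   # rows: particles 1,2; column: particle 3

bell = {'Phi+': (np.kron(k0, k0) + np.kron(k1, k1)) / np.sqrt(2),
        'Phi-': (np.kron(k0, k0) - np.kron(k1, k1)) / np.sqrt(2),
        'Psi+': (np.kron(k0, k1) + np.kron(k1, k0)) / np.sqrt(2),
        'Psi-': (np.kron(k0, k1) - np.kron(k1, k0)) / np.sqrt(2)}
expected = {'Phi+': np.array([-beta, alpha]), 'Phi-': np.array([beta, alpha]),
            'Psi+': np.array([-alpha, beta]), 'Psi-': np.array([-alpha, -beta])}
rot = {'Phi+': np.array([[0, 1], [-1, 0]]), 'Phi-': np.array([[0, 1], [1, 0]]),
       'Psi+': np.array([[-1, 0], [0, 1]]), 'Psi-': np.eye(2)}

for name, B in bell.items():
    bob = B.conj() @ state                        # unnormalized state of particle 3
    assert np.allclose(bob, expected[name] / 2)   # each branch carries amplitude 1/2
    assert np.isclose(np.linalg.norm(bob) ** 2, 0.25)     # outcome probability 1/4
    recovered = rot[name] @ (2 * bob)             # Bob's correcting rotation
    assert np.isclose(abs(np.vdot(phi, recovered)), 1.0)  # |phi> up to a global phase
```

Note that the fourth rotation (the identity) recovers $|\phi\rangle$ only up to the global phase $-1$, which is physically irrelevant.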
The minimal resources required for teleportation are one EPR singlet pair, which clearly, is
{\em independent} of $|\phi\rangle$, and two classical bits. This seems rather mysterious because (i) the particle is described by a point on a unit sphere, hence by two {\it real} numbers and not by two bits,
(ii) as one can check, neither Alice nor Bob can learn anything about the unknown parameters of the state $|\phi\rangle$ from those two classical bits.
The alternative approach presented in this paper somewhat clarifies the mystery. Namely, we interpret the teleportation in the light of the paper of Hughston, Jozsa and Wootters (HJW)~\cite{HJW}, and we present the teleportation process as a unique case of generalized EPR-nonlocality (we use the language of generalized measurement to express the ideas of~\cite{HJW}).
A positive operator valued measure (POVM) provides the most general physically realizable measurement in quantum mechanics~\cite{JP-DL,Helstrom,Davies,Per93}, and we also call these measurements ``generalized measurements''. Formally, a POVM is a collection of positive operators $A_i$ on a Hilbert space ${\cal H}_n$ of dimension $n$ which sum up to the identity, $A_1 + \ldots + A_r = I_n$. [When viewed as matrices, these are matrices which can be diagonalized and have only non-negative eigenvalues.] Standard measurements (which are usually described by some Hermitian operator in quantum mechanics books) arise as a special case where
$A_i = |\psi_i\rangle \langle\psi_i|$ and $A_i A_j = \delta_{ij} A_i$. We discuss here only pure POVMs in which each of the $A_i$ is proportional to a projection
$A_i = q_i |\psi_i\rangle \langle\psi_i|$, but the operators $A_i$ are not necessarily orthogonal to each other, so that $r \ge n$. Any POVM can be implemented (at least in principle) by adding an ancilla in a known state, and performing a standard measurement in the enlarged Hilbert space~\cite{Per93}.
To describe the EPR-nonlocality and its generalization, let us first define the notion of $\rho$-ensembles~\cite{HJW}. An ensemble of quantum state is defined by a collection of normalized states $|\psi_1\rangle, \ldots, |\psi_m\rangle$ taken with a-priori probabilities $p_1, \ldots, p_m$ respectively. To any such ensemble one can associate its density matrix: \begin{equation}
\rho = \sum_{i=1}^m p_i |\psi_i\rangle \langle \psi_i| \ , \end{equation} and the term $\rho$-ensemble refers to an ensemble with a density matrix $\rho$. For instance, for the completely mixed state in 2-dimensions, $\rho = I/2$, the following are all legitimate $\frac{I}{2}$-ensembles: \begin{eqnarray}
E_1 &=& \{|\psi_1\rangle = {1 \choose 0},
|\psi_2\rangle = {0 \choose 1}; p_1 = p_2 = 1/2 \} \nonumber \\
E_2 &=& \{|\psi_1\rangle = {1/\sqrt2 \choose 1/\sqrt2},
|\psi_2\rangle = {1/\sqrt2 \choose -1/\sqrt2}; p_1 = p_2 = 1/2 \} \nonumber \\
E_3 &=& \{|\psi_1\rangle = {1 \choose 0},
|\psi_2\rangle = {0 \choose 1},
|\psi_3\rangle = {1/\sqrt2 \choose 1/\sqrt2}, \nonumber \\ &&
|\psi_4\rangle = {1/\sqrt2 \choose -1/\sqrt2}; \ p_i = 1/4 \ , \ 1 \leq i \leq 4 \} \nonumber \\
E_4 &=& \{
|\psi_1\rangle = {\alpha \choose \beta },
|\psi_2\rangle = {\alpha \choose -\beta },
|\psi_3\rangle = {\beta \choose \alpha }, \nonumber \\ &&
|\psi_4\rangle = {\beta \choose -\alpha }; \ p_i = 1/4 \ , \ 1 \leq i \leq 4
\}\ . \end{eqnarray}
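Each of the four ensembles above indeed averages to the completely mixed state. A quick numerical check (with assumed real test values for the $\alpha,\beta$ of $E_4$):

```python
import numpy as np

a, b = 0.6, 0.8                 # stand for alpha, beta in E_4 (real test values)
s = 1 / np.sqrt(2)
ensembles = {
    'E1': ([np.array([1., 0.]), np.array([0., 1.])], [0.5, 0.5]),
    'E2': ([np.array([s, s]), np.array([s, -s])], [0.5, 0.5]),
    'E3': ([np.array([1., 0.]), np.array([0., 1.]),
            np.array([s, s]), np.array([s, -s])], [0.25] * 4),
    'E4': ([np.array([a, b]), np.array([a, -b]),
            np.array([b, a]), np.array([b, -a])], [0.25] * 4),
}
for name, (states, probs) in ensembles.items():
    rho = sum(p * np.outer(v, v.conj()) for p, v in zip(probs, states))
    assert np.allclose(rho, np.eye(2) / 2), name   # every ensemble averages to I/2
```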
When a classical system is subjected to a measurement of any of its properties, a definite outcome exists (at least in principle). However, when a quantum particle (say a qubit) is in a state which is well defined in one basis, say ${1 \choose 0}$ in the rectilinear basis ${1 \choose 0}; {0 \choose 1}$, the state is undefined in any other basis, and a measurement, say, in the diagonal basis ${1/\sqrt2 \choose \pm 1/\sqrt2}$, does not have a definite outcome which can be predicted; only the probabilities of the possible outcomes can be calculated. This is the well known {\em uncertainty principle}.
The EPR paradox~\cite{EPR} is as follows: If Alice and Bob share a singlet state, the state of Bob's particle is undefined (if we trace-out Alice's particle, then Bob's particle is in a completely mixed state $I/2$, but without tracing out Alice's particle, the state of Bob's particle by itself is not defined). However, if Alice measures in any basis she chooses to, say the rectilinear or the diagonal, she fully ``learns'' the state of Bob's particle. Assuming that a quantum state is ``real'' (as the state of a classical object) and assuming that the state cannot be changed instantaneously (immediately after Alice's measurement) when Alice and Bob are far apart, EPR concluded that the state of Bob's particle must have been previously defined in {\em both} bases, in contradiction with the uncertainty principle. They further concluded that this is a paradox (the EPR paradox) and thus that quantum mechanics is incomplete. Today we know, due to~\cite{Bell}, that indeed quantum mechanics is not described by a realistic-local model, and thus the EPR-paradox is resolved.
We refer to the following fact as the EPR nonlocality: the state of Bob's particle, previously undefined, becomes completely specified by Alice's nonlocal operation. Thus, the EPR nonlocality is not a nonlocality in the sense of~\cite{Bell}, but the profound feature which allows one to ``create'' quantum states from different ensembles, as discussed in the original EPR analysis.
Using the language of $\rho$-ensembles, the EPR nonlocality is described as follows: Alice can choose whether Bob's state will be in a $\rho$-ensemble $E_1$ or $E_2$ by choosing an appropriate measurement on her member of the EPR pair. Thus, while Bob holds the mixed state $\rho$, Alice has an additional information regarding his state.
The EPR nonlocality is further generalized by HJW in~\cite{HJW}, by allowing Alice to perform generalized measurements (POVMs), hence enabling her to create {\em any} $\rho$-ensemble in Bob's site, and also to know precisely which state he has. Note that she cannot choose $\rho$, and she also cannot choose the resulting state in Bob's hands, but she can choose the $\rho$-ensemble, and learn the state. Generating arbitrary $\rho$-ensembles at a distance generalizes the EPR nonlocality, in which only {\em standard (projection) measurements} are used. We shall refer to this generalized EPR nonlocality as the EPR-HJW nonlocality.
In particular Alice can create the $\rho$-ensemble $E_4$, and we shall show in Section~III, that creating this ensemble corresponds to the teleportation process, once we add the transmission of classical information from Alice to Bob (she transmits the outcome of her measurement). Thus, teleportation is a special case of generating $\rho$-ensembles at a distance, when Alice uses a special POVM and where the operations done by Alice and Bob are independent of the parameters of the (unknown) state. We call this view of the teleportation process ``telePOVM'' (see the acknowledgement), or teleportation via generalized measurements.
The next natural step is to use this approach to generalize the concept of teleportation, by removing the demand that the transmitted state can always be recovered. In Section~IV, we define the concept of {\em conclusive teleportation}. The term ``conclusive'' is taken from quantum information theory, when one asks the following question (see, for instance, \cite{Per93}): what is the optimal mutual information which can be extracted from two nonorthogonal quantum states each sent with probability half? One can obtain a definite (correct) answer (regarding the given state) {\em sometimes}, for the price of knowing nothing on other occasions~\cite{Per93}. Here we adopt this term, presenting conclusive teleportation in which the teleportation process is {\em sometimes} successful, and the sender knows whether or not it succeeded. When Alice and Bob use an entangled pure state which is not fully entangled, the conclusive teleportation scheme allows them to teleport a quantum state with fidelity one. This is done for the price of occasional failures, which are known to the sender. For many purposes (e.g., for quantum cryptography~\cite{BB84,EPR-scheme,Ben92,BHM96}), one would prefer performing this conclusive teleportation rather than the original one, which leads to a transfer fidelity smaller than one~\cite{Gisin}, and yields fidelity one only when the shared state is maximally entangled. [The fidelity of a state
$\rho$ relative to a pure state $|\psi\rangle$ is given by
$\langle \psi|\rho|\psi\rangle$; for other properties of the fidelity, see~\cite{Fuc-Gra-99}.] For instance, if Alice has an unknown qubit which she wishes to teleport to Bob, while they only share partially-entangled states, she can first create a fully entangled state, and try to teleport a qubit from such a pair to Bob via a conclusive teleportation. If she fails, she can try again (using another shared pair) till she succeeds. Once she succeeds to teleport an EPR-pair member, she can teleport the unknown qubit with fidelity one.
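The bracketed fidelity definition can be illustrated with a minimal numerical example; the state and the mixing weight below are arbitrary test choices, not values from the paper:

```python
import numpy as np

psi = np.array([0.6, 0.8])                     # a normalized test state
lam = 0.9                                      # assumed mixing weight
# a noisy version of |psi>: mixture of |psi><psi| with the maximally mixed state
rho = lam * np.outer(psi, psi) + (1 - lam) * np.eye(2) / 2
F = np.real(psi.conj() @ rho @ psi)            # fidelity <psi|rho|psi>
assert np.isclose(F, lam + (1 - lam) / 2)      # here 0.95, i.e. smaller than one
```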
A further generalization is to let Bob also perform a conclusive measurement that sometimes succeeds (this requires 2-way classical communication). Surprisingly, we shall show in Section~V that this type of teleportation can allow for a conclusive teleportation even when the shared entangled state is {\em mixed}. The conclusive teleportation obtained in this case is with arbitrarily high fidelity, but for the price of a probability of success decreasing to zero as the fidelity increases. We refer to it as a quasi-conclusive teleportation. The questions of quasi-conclusive teleportation with fixed probability of success, or with only one-way classical communication allowed, will be discussed elsewhere.
\section{TelePOVM}
Suppose that Alice and Bob share any two-particle entangled pure state in any dimension, such that the reduced density matrix in Bob's hands is $\rho$. Then, according to Hughston, Jozsa and Wootters~\cite{HJW}, any measurement at Alice's side, performed on her part of the entangled state, creates a specific $\rho$-ensemble in Bob's hands. All $\rho$-ensembles are indistinguishable (recall that a quantum system is fully described by its density matrix) unless additional information exists somewhere. For example, in the Bennett-Brassard-84 (BB84) cryptographic scheme~\cite{BB84} Bob receives the same density matrix $\rho$ whether Alice uses the rectilinear basis or the diagonal basis, but he receives different $\rho$-ensembles. He cannot distinguish between the two ensembles and between the states on each particular occasion, unless he receives more information from Alice. When receiving additional information (the basis) he is told which $\rho$-ensemble he has, and (in this particular case) can find which state.
In the same sense, the EPR-scheme~\cite{EPR-scheme} provides a simple example of the HJW meaning of $\rho$-ensembles: when Alice chooses to measure her member of the singlet state in the rectilinear basis or in the diagonal basis, she ``creates'' a different $\rho$-ensemble in Bob's hands, $E_1$ or $E_2$ respectively. Bob can distinguish the two states to find Alice's bit after receiving additional information from Alice, who tells him the basis (hence her choice of a $\rho$-ensemble). Alice's choice of measurement determines the $\rho$-ensemble, and furthermore, her result on each occasion tells her which of the states is in Bob's hands. If the measurement is chosen in advance, and Alice tells Bob the outcome of the measurement (by sending one bit of information), he can know precisely the state of the qubit in his hands.
The generalization done by HJW replaces the standard, projection measurement by a generalized measurement~\cite{JP-DL,Helstrom,Per93} (POVM), so the number of results can be larger than the dimension of the Hilbert space at Alice's site or at Bob's site. Thus, the HJW-EPR nonlocality argument implies that the set of Bob's states may contain nonorthogonal states. Furthermore, if Alice sends him additional information (her measurement's result), Bob can recognize in which of these states his particle is now. This is a very interesting result of~\cite{HJW}, and we now show that teleportation provides a fascinating usage of it.
Let Alice and Bob share an EPR pair (say, the singlet state). Consider the following POVM ${\cal A}$: \begin{eqnarray}
A_1 &=& \frac{1}{2} \left( \begin{array}{cc}
|\alpha|^2 & \beta\alpha^* \\ \beta^*\alpha & |\beta|^2 \end{array} \right) \ ;
\ A_2 = \frac{1}{2} \left( \begin{array}{cc}
|\beta|^2 & -\beta^*\alpha \\ -\beta\alpha^* & |\alpha|^2 \end{array} \right) \ ; \nonumber \\
A_3 &=& \frac{1}{2} \left( \begin{array}{cc}
|\beta|^2 & \beta^*\alpha \\ \beta\alpha^* & |\alpha|^2 \end{array} \right) \ ;
\ A_4 = \frac{1}{2} \left( \begin{array}{cc}
|\alpha|^2 & -\beta\alpha^* \\ -\beta^*\alpha & |\beta|^2 \end{array} \right) \ , \label{POVM} \end{eqnarray}
with complex parameters $\alpha$, $\beta$, such that $|\alpha|^{2}
+|\beta|^{2}=1$. These matrices have non-negative eigenvalues and sum up to the unit matrix, and therefore form a POVM. Following the arguments of HJW, applying such a POVM to one member of two particles in an EPR state is equivalent to a choice of a specific $\rho$-ensemble composed of four possible states; when the result of the POVM is $A_i$, the other member is projected onto the state orthogonal to the range of $A_i$, i.e., it will be in one of the states $\psi_1 = {\beta \choose -\alpha}\ $; $\psi_2 = {\alpha \choose \beta}\ $; $\psi_3 = {\alpha \choose -\beta}\ $, and $\psi_4 = {\beta \choose \alpha}$ respectively, and Alice will know in which of them it is. Alice can send Bob two bits of information to describe the outcome of her measurement (one of her four results), and this information actually tells him which of those four states he got. Then Bob can recover one of the states, say ${\alpha \choose \beta}$, by performing the appropriate rotation, according to the two classical bits he is being told. The reason that exactly two bits are required here is that the POVM has four outcomes.
It should be stressed that, for this specific POVM (\ref{POVM}), Bob's recovering operations {\it do not} depend on the parameters $\alpha$, $\beta$, so these need not be known to him.
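The defining properties of (\ref{POVM}) are easy to verify numerically. The sketch below uses assumed real test values for $\alpha,\beta$ (so that $\alpha^2=|\alpha|^2$) and checks completeness, positivity, and the orthogonality of $\psi_i$ to $A_i$:

```python
import numpy as np

al, be = 0.6, 0.8               # real test values with al^2 + be^2 = 1
A = [np.array([[al**2,  be*al], [ be*al, be**2]]) / 2,
     np.array([[be**2, -be*al], [-be*al, al**2]]) / 2,
     np.array([[be**2,  be*al], [ be*al, al**2]]) / 2,
     np.array([[al**2, -be*al], [-be*al, be**2]]) / 2]
psi = [np.array([be, -al]), np.array([al, be]),
       np.array([al, -be]), np.array([be, al])]

assert np.allclose(sum(A), np.eye(2))                 # completeness: sum A_i = I
for Ai, pi in zip(A, psi):
    assert np.all(np.linalg.eigvalsh(Ai) >= -1e-12)   # positivity of each A_i
    assert np.isclose(pi @ Ai @ pi, 0)                # psi_i orthogonal to range(A_i)
```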
Every POVM can be performed in the lab by performing a standard measurement on the system $\rho_{sys}$, plus an ancilla \cite{IvPe,Per93} (this is a property of the POVM so it is true independently of the state of the measured system). One way to perform the POVM (\ref{POVM}) is to take an ancilla in a state $\phi={\alpha \choose \beta}$, and perform the Bell measurement (a measurement such that the outcomes are the Bell-BMR states) on the ancilla and the system. The first operator, $A_1$, results from the measurement of the projection operator
$P_1 = | \Phi^+ \rangle \langle \Phi^+ |$ in the Hilbert-space of Alice's particle plus the ancilla. Applying the technique described in~\cite{Per93} (Chapter 9, sect. 9.5, 9.6, about generalized measurements and Neumark's theorem) we get the matrix elements of $A_{1}$ \begin{equation} (A_1)_{mn} = \sum_{rs} (P_1)_{mr,ns} (\rho_{aux})_{sr} \ , \end{equation} where $\rho_{aux}$ is the state of the ancilla, the $mn$ are the indices of the particle and the $sr$ are indices of the ancilla. The $m=0,\ n=0$ case corresponds to multiplying the upper left block of $P_1$ by the density matrix of the ancilla, and tracing the obtained matrix, yielding: \begin{equation} {\rm Tr} \ \left(\begin{array}{cc} \frac{1}{2} & 0 \\ 0 & 0 \end{array}\right)_{rs} \left(\begin{array}{cc} |\alpha|^2 & \beta^*\alpha \\ \beta\alpha^* & |\beta|^2 \end{array}\right)_{sr} = \frac{1}{2} |\alpha|^2 \ . \end{equation} The $m=1,\ n=0$ case (second line, first column, in $A_1$) results from a similar multiplication but with the lower left block of $P_1$. In the same way we calculated the other elements of that operator, and the other three operators, and we verified that the Bell measurement corresponds to the desired POVM.
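This correspondence can be verified at once for all four Bell projectors. The sketch below assumes real test values for $\alpha,\beta$ and the index ordering (particle, ancilla), and reproduces the four operators of (\ref{POVM}) from the matrix-element formula:

```python
import numpy as np

al, be = 0.6, 0.8                         # real test values, al^2 + be^2 = 1
phi_aux = np.array([al, be])              # ancilla state |phi> = (alpha, beta)
rho_aux = np.outer(phi_aux, phi_aux.conj())

k0, k1 = np.eye(2)
bell = [(np.kron(k0, k0) + np.kron(k1, k1)) / np.sqrt(2),   # Phi+  -> A_1
        (np.kron(k0, k1) - np.kron(k1, k0)) / np.sqrt(2),   # Psi-  -> A_2
        (np.kron(k0, k1) + np.kron(k1, k0)) / np.sqrt(2),   # Psi+  -> A_3
        (np.kron(k0, k0) - np.kron(k1, k1)) / np.sqrt(2)]   # Phi-  -> A_4

A_expected = [np.array([[al**2,  be*al], [ be*al, be**2]]) / 2,
              np.array([[be**2, -be*al], [-be*al, al**2]]) / 2,
              np.array([[be**2,  be*al], [ be*al, al**2]]) / 2,
              np.array([[al**2, -be*al], [-be*al, be**2]]) / 2]

for B, Aexp in zip(bell, A_expected):
    P = np.outer(B, B.conj()).reshape(2, 2, 2, 2)      # P[m, r, n, s]
    A = np.einsum('mrns,sr->mn', P, rho_aux)           # (A)_{mn} = sum P_{mr,ns} rho_{sr}
    assert np.allclose(A, Aexp)
```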
It should be stressed that Alice's measurements {\em do not} depend on the parameters $\alpha$, $\beta$, thus these need not be known to her. Moreover she can learn nothing about the latter, as all four results of her generalized measurement corresponding to operations (\ref{POVM}) occur with equal probabilities. In the case of starting with the singlet state, all four of Alice's results occur with equal probabilities and the initial state of Bob's particle is the maximally mixed state $\frac{I}{2}$ (reduced state of a maximally entangled state). Thus, it is clear that the teleportation is equivalent to the creation of a specific $\rho=\frac{I}{2}$ ensemble at a distance, where the specific $\frac{I}{2}$-ensemble is $E_4$. This can be done even if Alice and Bob do not know the state of the ancilla, ${\alpha \choose \beta}$, chosen by someone else, and this is exactly the process of teleportation of an unknown state.
This process will also teleport a density matrix (a mixed state) or a particle entangled with others. It can also easily be generalized to fully entangled states in higher ($N^2$) dimensions discussed in \cite{BBCJPW}.
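The claim that an entangled member can be teleported amounts to entanglement swapping, which can be checked numerically. The sketch below (standard conventions assumed) teleports one member of a singlet through a second singlet and verifies that the two outer particles end up in a singlet for every Bell outcome:

```python
import numpy as np

k0, k1 = np.eye(2)
sing = (np.kron(k0, k1) - np.kron(k1, k0)) / np.sqrt(2)
state = np.kron(sing, sing).reshape(2, 2, 2, 2)     # particles 0,1,2,3

bell = {'Phi+': (np.kron(k0, k0) + np.kron(k1, k1)) / np.sqrt(2),
        'Phi-': (np.kron(k0, k0) - np.kron(k1, k1)) / np.sqrt(2),
        'Psi+': (np.kron(k0, k1) + np.kron(k1, k0)) / np.sqrt(2),
        'Psi-': (np.kron(k0, k1) - np.kron(k1, k0)) / np.sqrt(2)}
rot = {'Phi+': np.array([[0, 1], [-1, 0]]), 'Phi-': np.array([[0, 1], [1, 0]]),
       'Psi+': np.array([[-1, 0], [0, 1]]), 'Psi-': np.eye(2)}

for name, B in bell.items():
    # project particles 1,2 onto the Bell state, leaving particles 0,3
    cond = np.einsum('ab,iabj->ij', B.conj().reshape(2, 2), state)
    cond = np.einsum('kj,ij->ik', rot[name], cond)   # Bob's correction on particle 3
    assert np.isclose(np.linalg.norm(cond), 0.5)                 # each outcome: prob 1/4
    assert np.isclose(abs(np.vdot(sing, cond.reshape(4))), 0.5)  # result is a singlet
```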
\section{Generating $\rho$-ensembles in quantum key distribution}
To see one application of the ideas described above, let us view a different scenario (taken from quantum key distribution): Suppose that Alice has in mind a set of states and their probabilities, say, $E_3$, which is used in the BB84~\cite{BB84} quantum key distribution scheme. This describes a particular $\rho$-ensemble (the $\frac{I}{2}$-ensemble in the BB84 case) sent to Bob. If Alice doesn't care which of the states is sent in each experiment, but only that it belongs to that set, she does not need to send the states. Instead of sending Bob the states, she sends him a member of some entangled state such that the reduced density matrix in Bob's hands is $\rho$. Then she applies a specific POVM which creates the desired ensemble in Bob's hands. The relevant example is the EPR scheme~\cite{EPR-scheme}, in which an EPR-pair is shared by Alice and Bob. As we have seen before, Alice creates either the $\frac{I}{2}$-ensemble $E_1$ or $E_2$, when she applies a measurement in the rectilinear or the diagonal basis, respectively. However, since the probability of each basis is 1/2, Alice's full operation, including the choice of the basis, can also be described by a POVM which leads to the ensemble $E_3$.
Let us present a less trivial example. Let the state \begin{equation}
| \chi_{23} \rangle = a | 00 \rangle +
b | 11 \rangle \ \label{entstate} \end{equation} (with $a$, $b$ real, and $a^2+b^2=1$) be prepared by Alice and let one particle be sent from Alice to Bob. Then let Alice measure her particle using a standard measurement in the diagonal basis $\frac{1}{\sqrt 2}{1 \choose \pm 1}$. As a result, the following $\rho$-ensemble is generated in Bob's hands: $\{ {a \choose b}, {a \choose -b} ; p_1 = p_2 = 1/2 \}$, a pair of nonorthogonal states (with overlap $a^2-b^2$). This operation produces the Bennett-92 \cite{Ben92} scheme for quantum key distribution, in the same way that the EPR scheme produces the BB84 scheme.
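As a cross-check, Bob's conditional states can be computed directly. The sketch below assumes Alice measures her half of (\ref{entstate}) in the diagonal basis, with arbitrary real test values for $a,b$:

```python
import numpy as np

a, b = 0.6, 0.8                 # real test values with a^2 + b^2 = 1
k0, k1 = np.eye(2)
chi = (a * np.kron(k0, k0) + b * np.kron(k1, k1)).reshape(2, 2)  # rows: Alice, cols: Bob

for sign in (+1, -1):
    u = (k0 + sign * k1) / np.sqrt(2)           # Alice's diagonal-basis outcome
    bob = u.conj() @ chi                        # unnormalized conditional state of Bob
    assert np.isclose(np.linalg.norm(bob) ** 2, 0.5)              # each outcome: prob 1/2
    assert np.allclose(bob * np.sqrt(2), np.array([a, sign * b])) # states (a,b), (a,-b)

# the mixture of the two conditional states equals Bob's reduced density matrix
mix = 0.5 * np.outer([a, b], [a, b]) + 0.5 * np.outer([a, -b], [a, -b])
assert np.allclose(mix, np.diag([a**2, b**2]))
```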
\section{Conclusive teleportation with any pure entangled state}
We first present the use of an additional one-way classical communication to modify the teleportation process: if Alice wishes to teleport to Bob a quantum state of which she can make more copies (e.g., to teleport a member of an EPR-pair) or if she wishes to teleport an arbitrary state from a set (e.g., a BB84 state), she can improve the teleportation process very much by using conclusive teleportation: a teleportation process which is sometimes successful. After performing her measurement, Alice uses the classical channel to tell Bob if the teleportation succeeded, and he uses the received state only if the teleportation succeeded.
For instance, one can use conclusive teleportation to save time or classical bits. Let Alice and Bob share a fully entangled state, and use it to perform a conclusive teleportation: Alice performs a measurement which distinguishes the singlet state from the other three (triplet) states. Instead of sending $2$ bits she sends Bob only one bit, telling him whether she obtained the singlet state or not. Bob doesn't need to do any operation on his particle. In $\frac{1}{4}$ of the occasions she obtains this result (the singlet state), hence performs a successful teleportation. This process makes sense when the classical bits are as expensive as the shared quantum states, or when a fast teleportation of arbitrary states (e.g., BB84 states) is required. Also, it allows teleportation when Bob is technologically limited and cannot perform the required rotations.
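When the shared pair is itself a singlet, the singlet outcome delivers the input state to Bob up to a global phase, which is why no rotation is needed. A minimal numpy check of the probability $\frac14$ (our own sketch):

```python
import numpy as np

# Shared singlet |Psi^-> on particles 2 (Alice) and 3 (Bob).
psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)
alpha, beta = 0.6, 0.8j                         # arbitrary input qubit (particle 1)
phi = np.array([alpha, beta])

state = np.kron(phi, psi_m)                     # index = 4*i1 + 2*i2 + i3

# Project Alice's pair (particles 1,2) onto the singlet: <Psi^-|_{12} state
bob = np.zeros(2, dtype=complex)
for i1 in range(2):
    for i2 in range(2):
        bob += np.conj(psi_m[2*i1 + i2]) * state[4*i1 + 2*i2 : 4*i1 + 2*i2 + 2]

prob = np.vdot(bob, bob).real
assert abs(prob - 0.25) < 1e-12                 # success probability 1/4

# Bob's conditional state equals the input up to a global phase
overlap = abs(np.vdot(phi, bob / np.sqrt(prob)))
assert abs(overlap - 1.0) < 1e-12
```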
The process of conclusive teleportation makes more sense when Alice and Bob share a pure entangled state which is not fully entangled.
Let Alice and Bob share the state (\ref{entstate}) ({\it any} pure state can be written in that form, called the Schmidt decomposition~\cite{HJW,Per93}), which they use to teleport a quantum state $\phi_{1}={\alpha \choose \beta}_1$. Following the method of \cite{BBCJPW}, the state of the three particles is written using the Bell-BMR states as: \begin{eqnarray}
&& | \Psi_{123} \rangle = | \phi_1 \rangle | \chi_{23} \rangle \ =
\frac{1}{\sqrt2} \Big[ | \Phi^+_{12} \rangle {a \alpha \choose b \beta }_3 +
| \Phi^-_{12} \rangle { a \alpha \choose - b \beta }_3 + \nonumber \\
&&\ \ \ \ | \Psi^+_{12} \rangle {a \beta \choose b \alpha}_3 +
| \Psi^-_{12} \rangle { -a \beta \choose b \alpha}_3 \Big] \ . \end{eqnarray} If Alice and Bob were to use the standard teleportation process, a Bell measurement would still create the same POVM as before. But, unlike the case of a fully entangled state, the states created in Bob's hands depend also on $a$ and $b$, and not only on the state of the ancilla. The fidelity is clearly less than one: e.g., if Alice obtains the outcome $\Phi^+$ in her measurement, which happens with probability $p_{\Phi^+} =
(|\alpha|^2 a^2 + |\beta|^2 b^2)/2$, the fidelity
$|\langle \phi_1 | \phi_1^{\rm out}\rangle|^2$ of the output state is
$ (|\alpha|^2 a + |\beta|^2 b)^2/
(|\alpha|^2 a^2 + |\beta|^2 b^2)$, which depends on $a$ and $b$, and on the teleported state.
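Both the branch probability and the fidelity formula can be verified directly on the three-particle state (a numpy sketch of ours, taking $\alpha,\beta$ real for brevity):

```python
import numpy as np

a = 0.9; b = np.sqrt(1 - a**2)                   # Schmidt coefficients
alpha, beta = 0.6, 0.8                           # state to be teleported

phi = np.array([alpha, beta])
chi = np.array([a, 0.0, 0.0, b])                 # a|00> + b|11> on particles 2,3
state = np.kron(phi, chi)                        # index = 4*i1 + 2*i2 + i3

phi_plus = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)   # |Phi^+> on particles 1,2

# Bob's unnormalized state after the Phi^+ outcome: <Phi^+|_{12} state
bob = np.zeros(2)
for i1 in range(2):
    for i2 in range(2):
        bob += phi_plus[2*i1 + i2] * state[4*i1 + 2*i2 : 4*i1 + 2*i2 + 2]

prob = bob @ bob                                 # p_{Phi^+}
fid = (phi @ bob) ** 2 / prob                    # |<phi|phi_out>|^2

assert abs(prob - (alpha**2 * a**2 + beta**2 * b**2) / 2) < 1e-12
assert abs(fid - (alpha**2 * a + beta**2 * b)**2
           / (alpha**2 * a**2 + beta**2 * b**2)) < 1e-12
```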
The POVM that reproduces the four desired states can be found. However, it is not realized by a Bell measurement, and it depends on the state of the ancilla, which is supposed to be unknown to both sides. So perfect teleportation cannot take place this time.
We present a different measurement which generates the desired states in Bob's hands with perfect fidelity. The price we pay for the perfect state obtained is that the process cannot be done with 100\% probability of success; therefore it is a conclusive teleportation. To explain how it works, let us return to the case of a fully entangled state (standard teleportation, with initial EPR-pair $\Phi^+$) and separate the Bell measurement into two measurements (one following the other): \begin{enumerate}
\item A measurement which checks whether the state is in the subspace spanned by $ | 00\rangle $ and
$ | 11 \rangle $, or in the subspace spanned by
$ | 01 \rangle $ and
$ | 10 \rangle $. \item A measurement in the appropriate subspace (according to the result of the previous step), which projects the state on one of the two possible Bell states in that subspace, $\Phi^\pm$ and $\Psi^\pm$ respectively. \end{enumerate}
When $| \chi_{23} \rangle$ is not fully entangled we still perform the first step of that two-step process. To see the outcome, note that the state of the three particles can also be written as \begin{eqnarray}
&&| \Psi_{123} \rangle =
\frac{1}{2} \Big[ [a | 00 \rangle
+ b | 11 \rangle ] {\alpha \choose \beta}_3
+ [a | 00 \rangle
- b | 11 \rangle ] {\alpha \choose -\beta}_3 \nonumber \\
&&+ [b | 01 \rangle
+ a | 10 \rangle ] {\beta \choose \alpha}_3
+ [b | 01 \rangle
- a | 10 \rangle ] {-\beta \choose \alpha}_3 \Big]
\ .\end{eqnarray}
The first step projects $|\Psi_{123}\rangle$ onto either the first two possibilities or the last two (with probabilities that, for $a \neq b$, depend on the teleported state). In the second step, let us assume that the result of the first step was the subspace spanned by the states
$|00 \rangle \equiv {1 \choose 0}_{\{00;11\}}$
and $| 11 \rangle \equiv {0 \choose 1}_{\{00;11\}}$. [A similar analysis can easily be done for the other case where the result of the first step is the subspace spanned by the states
$| 01 \rangle \equiv {1 \choose 0}_{\{01;10\}}$
and $| 10 \rangle \equiv {0 \choose 1}_{\{01;10\}}$.]
In this ${\{00;11\}}$ subspace, Alice now performs a second measurement, but not in the Bell-BMR basis, which is now given by the states $(1/\sqrt2){ 1 \choose \pm 1}_{\{00;11\}}$, as in the ideal case. Instead, Alice performs a POVM which conclusively distinguishes the two states ${a \choose b}_{\{00;11\}} $ and $ {a \choose - b}_{\{00;11\}}$ (which are the first two states in the above expression). Assuming (without loss of generality) that $a^2 \ge b^2$, the POVM elements in that subspace are: \begin{eqnarray} A_1 &=& \frac{1}{2a^2}\left( \begin{array}{cc}
b^2 & ba \\ ba & a^2 \end{array} \right) \ ;
\ A_2 = \frac{1}{2a^2}\left( \begin{array}{cc}
b^2 & -ba \\ -ba & a^2 \end{array} \right) \ ; \nonumber \\
A_3 &=& \left( \begin{array}{cc}
1 - (b^2/a^2) & 0 \\ 0 & 0 \end{array} \right) \ , \label{POVM-A2} \end{eqnarray} where the factor $\frac{1}{2a^2}$ ensures that $A_1+A_2+A_3=I$.
Such a POVM can never give a wrong result, and it gives an inconclusive result when the outcome is $A_3$. This POVM was found in \cite{Per93,EHPP} in the context of distinguishing the two states of~\cite{Ben92}. It is the optimal process for obtaining a perfect conclusive outcome, and a conclusive result is obtained with probability $1 - (|a|^2 - |b|^2)$. In our case, this is the probability of a successful teleportation. Alice tells Bob whether she succeeded in teleporting the state by sending him one bit; in addition to this bit, Alice still has to send Bob the two bits for distinguishing the four possible states (so he can perform the required rotation). Alternatively, she can send him only one bit telling him whether he received the state or not (as we explained for the case of a fully entangled state), losing $\frac{3}{4}$ of the successful teleportations.
When used for distinguishing non-orthogonal states, this POVM allows one to obtain the optimal deterministic information from two non-orthogonal states, although, on average, it yields less mutual information than the optimal projection measurement. In the same sense, on average, the conclusive teleportation does not yield the optimal average fidelity, but when it is successful -- the fidelity is one.
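Completeness ($A_1+A_2+A_3=I$) fixes the normalization of $A_1$ and $A_2$ at $1/(2a^2)$; with that factor included, the unambiguity of the discrimination and the success probability $1-(|a|^2-|b|^2)=2b^2$ can be checked numerically (our own sketch):

```python
import numpy as np

a = np.sqrt(0.7); b = np.sqrt(0.3)               # a^2 >= b^2

A1 = np.array([[b*b,  b*a], [ b*a, a*a]]) / (2*a*a)
A2 = np.array([[b*b, -b*a], [-b*a, a*a]]) / (2*a*a)
A3 = np.array([[1 - (b*b)/(a*a), 0.0], [0.0, 0.0]])

assert np.allclose(A1 + A2 + A3, np.eye(2))      # completeness of the POVM

psi1 = np.array([a,  b])                         # the two states to distinguish
psi2 = np.array([a, -b])

# unambiguous: A1 never fires on psi2, A2 never fires on psi1
assert abs(psi2 @ A1 @ psi2) < 1e-12
assert abs(psi1 @ A2 @ psi1) < 1e-12

# conclusive probability 1 - (a^2 - b^2) = 2 b^2 for either input state
p_success = psi1 @ A1 @ psi1 + psi1 @ A2 @ psi1
assert abs(p_success - (1 - (a*a - b*b))) < 1e-12
```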
The conclusive teleportation process proves that any (pure) entangled state presents quantum non-locality. This fact can also be seen using the filtering method~\cite{Procrust} when applied to pure states.
\section{Arbitrary good conclusive bilocal teleportation via mixed states}
In a perfect conclusive teleportation Alice performs a teleportation process which is sometimes successful, and when it is successful, the fidelity of the teleported state is one. In an imperfect conclusive teleportation, Alice performs a teleportation process which is sometimes successful, and when it is successful, the fidelity of the teleported state is less than one but better than could be achieved with a standard teleportation.
The original idea of teleportation involves only one-way classical communication from Alice to Bob. We shall now extend \footnote{The most general teleportation channel involving all local quantum operations plus 2-way classical communication (LQCC) protocols was introduced in Ref. \cite{single}.} it, allowing Bob to call Alice as well, so that a bilocal protocol is used. Note that here we do not consider the most general bilocal protocol (the so-called ping-pong protocol) but only allow Bob and Alice to operate independently of each other. A ping-pong protocol could improve the probability of successful projection (e.g., increase the $p'(p)$ described below) by allowing several ``paths'' of successful distillation depending on the outcomes of the measurements in each step of the protocol. The communication (in our example) is just used to verify that the state was teleported. This generalization of teleportation makes sense, as in many cases the classical communication is treated as a free resource.
We have shown previously that a perfectly reliable conclusive teleportation can be achieved when pure entangled states are shared. We now show that it is possible to perform arbitrarily good bilocal conclusive teleportation when certain {\em mixed states} are used (however, see the remark in the acknowledgements). The arbitrarily good conclusive teleportation (which we call ``quasi-conclusive teleportation'') is not described by a particular POVM, ${\cal A} = \{A_1, \ldots A_m\}$, but by a series of POVMs ${\cal A}^n = \{A_1(n), \ldots A_m(n)\}$, where $n$ is the index of this series. For any $\epsilon$ we can find $n$ such that the POVM ${\cal A}^n$ yields fidelity better than $1-\epsilon$ for teleportation. Yet, perfect fidelity cannot be achieved, since the probability of success also goes to zero when $\epsilon$ goes to zero. Thus, we show that quasi-conclusive teleportation is successfully done via mixed states!
We first purify the mixed state, and then use it for teleportation.
Consider the state \begin{equation}
\varrho_{p}=p|\Psi^{-} \rangle \langle \Psi^{-}|
+ (1-p) | 00 \rangle \langle 00 | \ , \ \ 0<p<1 \label{psi} \end{equation}
which is a mixture of a singlet (with probability $p$) and a $|00\rangle$ state (with probability $1-p$). Let the bilocal Alice and Bob action be described in the following way: \begin{equation} \varrho_{p} \rightarrow \varrho' \equiv \frac{V_1 \otimes W_1 (\varrho_p) V_1^{\dagger} \otimes W_1^{\dagger} }{Tr(V_1 \otimes W_1 (\varrho_p) V_1^{\dagger} \otimes W_1^{\dagger})} \label{gen} \ . \end{equation} It can be realized by performing generalized measurements by Alice and Bob independently, i.e., Alice performs the measurement defined by the pair of operators $\{ V_1, V_2 \equiv \sqrt{I - V_1V_1^{\dagger}} \}$, and Bob performs the measurement defined by the pair of operators $\{ W_1, W_2 \equiv \sqrt{I - W_1W_1^{\dagger}} \}$. [Alice's POVM is the set ${\cal A}=\{ A_{1}=V_{1}^{\dagger}V_{1}, A_{2}=V_{2}^{\dagger}V_{2} \}$, and Bob's POVM is the set ${\cal B}=\{ B_{1}=W_{1}^{\dagger}W_{1}, B_{2}=W_{2}^{\dagger}W_{2} \}$.] When the outcomes of both Alice and Bob are 1, which corresponds to the first operator in each lab ($V_1$ and $W_1$, respectively), the above transformation is successfully done.
After getting the results of their measurements Alice and Bob communicate via phone to keep only those particles for which both results correspond to the successful case. To show that quasi-conclusive teleportation can be performed, we define the sequence of POVM operators
(in the basis $\{ |0 \rangle, |1 \rangle \}$): \begin{equation}
V_1(n)= \left( \begin{array}{cc}
1/\sqrt{n} & 0 \\ 0 & 1
\end{array} \right) \ ;
W_1(n)= \left( \begin{array}{cc}
1/\sqrt{n} & 0 \\ 0 & 1
\end{array} \right) \ . \label{AB} \end{equation} After the action of the corresponding POVM the new state is
$\varrho'=\varrho_{p'} \equiv p' |\Psi^{-} \rangle \langle \Psi^{-}|
+ (1-p') | 00 \rangle \langle 00 |$ with the parameter $p'$ depending on the input parameter $p$ as follows \begin{equation} p'(p)=\frac{1}{1+\frac{1-p}{np}} \ . \label{p'} \end{equation} The probability of successful transition from $\varrho_{p}$ to $\varrho_{p'}$ is \begin{equation} P_{p \rightarrow p'}= \frac{1}{n^{2}}[1 + (n-1)p] \ . \label{probability} \end{equation} Thus one can produce a state with an arbitrarily good singlet fraction (fidelity with a singlet)
$F(\varrho_{p'})=\langle \Psi^{-}_{12}| \varrho_{p'}|\Psi^{-}_{12} \rangle$, which obviously allows for arbitrarily good conclusive teleportation. The key point is that the probability of successful teleportation {\it decreases to zero} as the fidelity of teleportation (or equivalently the singlet fraction) {\it goes to unity}. But it is nonzero for any required fidelity arbitrarily close to the perfect one.
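Consistency of \eqref{p'} and \eqref{probability} requires the small filter entry to be $1/\sqrt{n}$; under that choice both formulas are reproduced numerically (our own sketch):

```python
import numpy as np

n, p = 5, 0.3

psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = p * np.outer(psi_minus, psi_minus) + (1 - p) * np.diag([1.0, 0, 0, 0])

V1 = np.diag([1 / np.sqrt(n), 1.0])              # local filter, same on both sides
M = np.kron(V1, V1)

out = M @ rho @ M.T                              # unnormalized post-selected state
P_succ = np.trace(out)                           # probability of the (1,1) outcome
rho_out = out / P_succ

p_prime = psi_minus @ rho_out @ psi_minus        # singlet fraction of the output

assert abs(P_succ - (1 + (n - 1) * p) / n**2) < 1e-12       # eq. (probability)
assert abs(p_prime - 1 / (1 + (1 - p) / (n * p))) < 1e-12   # eq. (p')
assert p_prime > p                                           # fraction improved
```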
One natural question is whether it is possible to make teleportation arbitrarily good via other mixed states. In general, the answer is negative. In the case of Werner states (states in which a fully entangled state is mixed with the completely mixed state), for instance, this is a consequence of the fact that arbitrarily good conclusive distillation is impossible \cite{Popescu}. In fact, for those states the entanglement fidelity cannot be increased. Its best value $F_{max}$ is the initial (before the conclusive process) value $F_{0}$. Thus, following~\cite{single}, the maximal teleportation fidelity is equal to $\frac{2F_{0}+1}{3}$ and is less than $1$, apart from the trivial case where the initial state is fully entangled.
Another interesting question is whether it is possible to perform quasi-conclusive teleportation via mixed states with only one-way classical communication.
This is a more delicate issue, which requires a more involved technical analysis and will be addressed elsewhere.
\section{Summary}
In this paper we presented a new way of viewing the teleportation of an unknown quantum state. We showed that teleportation is a special and particular case of generating $\rho$-ensembles at a distance, hence a special case of generalized EPR nonlocality (the HJW-EPR nonlocality). We believe that this view of teleportation reduces some of the mystery of that process, and in particular explains why two classical bits can be sufficient for the teleportation of a qubit. This work also showed the usefulness of the HJW generalized EPR nonlocality, and of the understanding that any $\rho$-ensemble can be generated nonlocally.
We feel that understanding the connection between these two important forms of nonlocality greatly improves the understanding of entanglement.
Based on the connection between teleportation and generalized measurements, we presented the process of conclusive teleportation, a teleportation which is sometimes successful. We showed that any pure entangled state can be used to perform conclusive teleportation with fidelity one, and, more surprisingly, that certain mixed states can also be used to achieve conclusive teleportation with fidelity as close to one as we like.
\end{document}
\begin{document}
\title{Reduced commutativity of moduli of operators}
\author[P. Pietrzycki]{Pawe{\l} Pietrzycki}
\subjclass[2010]{Primary 47B20; Secondary 47B37} \keywords{normal operator, quasinormal operator, operator convex function, Davis-Choi-Jensen inequality, operator equation, operator inequality}
\address{Wydzia{\l} Matematyki i Informatyki, Uniwersytet Jagiello\'{n}ski, ul. {\L}ojasiewicza 6, PL-30348 Krak\'{o}w}
\email{pawel.pietrzycki@im.uj.edu.pl}
\begin{abstract} In this paper, we investigate the question of when the equations $A^{*s}A^{s}=(A^{*}A)^{s}$ for all $s \in S$, where $S$ is a finite set of positive integers, imply the quasinormality or normality of $A$. In particular, it is proved that if $S=\{p,m,m+p,n,n+p\}$, where $2\leq m < n$, then $A$ is quasinormal. Moreover, if $A$ is invertible and $S=\{m,n,n+m\}$, where $m \leq n$, then $A$ is normal. Furthermore, the case when $S=\{m,m+n\}$ and $A^{*n}A^n \leq (A^*A)^n$ is discussed.
\end{abstract}
\maketitle
\section{Introduction} The class of bounded quasinormal operators was introduced by A. Brown in \cite{brow}. A bounded operator $A$ on a (complex) Hilbert space $\mathcal{H}$ is said to be \textit{quasinormal} if $A(A^*A)=(A^*A)A$. Two different definitions of unbounded quasinormal operators appeared independently in \cite{kauf} and in \cite{szaf}. As recently shown in \cite{jabl}, these two definitions are equivalent. Following \cite{szaf}, we say that a closed densely defined operator $A$ in $\mathcal{H}$ is {\em quasinormal} if $A$ commutes with the spectral measure
$E$ of $|A|$, i.e., $E(\sigma) A \subset AE(\sigma)$ for all Borel subsets $\sigma$ of the nonnegative part of the real line. By \cite[Proposition 1]{szaf}, a closed densely defined operator $A$ in $\mathcal{H}$ is quasinormal if and only if $U|A|\subset |A|U$, where
$A = U|A|$ is the polar decomposition of $A$ (cf.\ \cite[Theorem 7.20]{weid}). For more information on quasinormal operators we refer the reader to \cite{brow,conw,uch} for the bounded case, and to \cite{kauf,szaf,maj,jabl,qext,uch} for the unbounded one.
In 1973 M. R. Embry published a very influential paper \cite{embry} concerning the Halmos-Bram criterion for subnormality. In particular, she gave a characterization of the class of quasinormal operators in terms of powers of operators. Namely, a bounded operator $A$ in a Hilbert space is quasinormal if and only if the following condition holds \begin{equation}\label{wst}
A^{*n}A^{n}=(A^*A)^n\qquad \text{for all}\quad n\in \mathbb{N}. \end{equation}
This leads to the following question:
is it necessary to assume that the equality in \eqref{wst} holds for all $n\in\mathbb{N}$? To be more precise we ask for which subset $S\subset \mathbb{N}$ the following system of operator equations: \begin{equation}\label{xukl}
A^{*s}A^s=(A^*A)^s\qquad\text{for all}\quad s \in S, \end{equation} implies the quasinormality of $A$.
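Embry's characterization can be probed on finite matrices (where, in fact, quasinormality coincides with normality). A small numpy sketch of ours, computing the defect $\|A^{*n}A^n-(A^*A)^n\|$:

```python
import numpy as np

def defect(A, n):
    """|| A*^n A^n - (A*A)^n ||; zero for all n iff A is quasinormal (Embry)."""
    An = np.linalg.matrix_power(A, n)
    lhs = An.conj().T @ An
    rhs = np.linalg.matrix_power(A.conj().T @ A, n)
    return np.linalg.norm(lhs - rhs)

N = np.array([[1, 0], [0, 2j]])        # normal, hence quasinormal
J = np.array([[1.0, 1.0], [0.0, 1.0]]) # Jordan block: not quasinormal

assert all(defect(N, n) < 1e-12 for n in range(1, 6))
assert defect(J, 2) > 1e-6             # the Embry equation fails already at n = 2
```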
It is worth mentioning that a similar problem was, in some sense, studied and solved in group theory. We say that a group $G$ is \textit{$n$-Abelian} if $( xy)^n = x^ny^n$ for all $x, y\in G$. We call a set of integers $S$ \textit{Abelian forcing} if whenever $G$ is a group with the property that $G$ is $n$-Abelian for all $n$ in $S$, then $G$ is Abelian. Then we have the following theorem. \begin{theorem} $($cf. \cite{gal}$)$ A set $S$ of integers is Abelian forcing if and only if the greatest common divisor of the integers $n( n - 1)$, as $n$ ranges over $S$, is $2$.
\end{theorem}
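The gcd criterion above is easy to test; a short sketch (ours):

```python
from math import gcd
from functools import reduce

def abelian_forcing(S):
    """S is Abelian forcing iff gcd{ n*(n-1) : n in S } == 2."""
    return reduce(gcd, (n * (n - 1) for n in S)) == 2

assert abelian_forcing({2})            # gcd{2} = 2: (xy)^2 = x^2 y^2 forces Abelian
assert abelian_forcing({3, 5})         # gcd{6, 20} = 2
assert not abelian_forcing({3})        # gcd{6} = 6
assert not abelian_forcing({4, 5})     # gcd{12, 20} = 4
```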
In the paper \cite{uch}, M. Uchiyama proved that if a bounded operator $A$ on a Hilbert space is compact (in particular, acts on a finite dimensional space) or subnormal, then the single equality \begin{equation}\label{xrow}
A^{*n}A^n=(A^*A)^n \end{equation} for $n\geq 2$ implies quasinormality of $A$. He also proved that, if one of the following conditions holds \begin{itemize} \item[(i)] $A$ is a hyponormal operator and satisfies \eqref{xukl} with $S=\{n,n+1\}$, where $n\in \mathbb{N}$, \item[(ii)] $A$ is an operator on a separable Hilbert space and satisfies \eqref{xukl} with $S=\{k,k+1,l,l+1\}$, where $k,l\in \mathbb{N} $ and $k<l$, \end{itemize} then $A$ is quasinormal. It turns out that we can replace the system of equations in condition (i) by the single equation \eqref{xrow} (cf. Theorem \ref{hyp}) and remove the assumption of separability in condition (ii) (cf. Theorem \ref{ucc}). Moreover, he obtained analogous results for densely defined operators under the assumption that certain inclusions of the domains of the operators $T^*T$, $TT^*$ and $T^*TP$ hold, where $P$ is the projection onto the closure of the range of $T$. Several years later, in the paper \cite{uch2}, while investigating properties of the class of log-hyponormal operators, he showed that the single equality \eqref{xrow} implies quasinormality within this class. That proof contains a proof of the fact that a bounded operator on a Hilbert space which satisfies \eqref{xukl} with $S=\{2,3\}$ is quasinormal. As was mentioned in the paper \cite{jabl}, in the case of bounded operators this characterization has been known to specialists working in this area since the late '80s. This characterization was independently discovered by A. A. S. Jibril \cite[Proposition 13]{Jib}. Unfortunately, this paper contains several errors. Z. J. Jab{\l}o{\'n}ski, I. B. Jung, and J. Stochel \cite{jabl} extended this characterization to densely defined operators. Their proof makes use of the technique of bounded vectors. They also gave three examples of non-quasinormal operators which satisfy the single equation \eqref{xrow}.
The first example is related to Toeplitz operators on the Hardy space $H^2$, while two others are linked to weighted shifts on a directed tree. The class of weighted shifts on a directed tree was introduced in \cite{memo} and intensively studied since then \cite{9,geh,chav,planeta,bdp,adv,mart}. The class is a source of interesting examples (see e.g., \cite{dym,9,ja,trep,triv,abs,sque,jabl}).
In particular, they showed that for every integer $n\Ge 2$, there exists a weighted shift $A$ on a rooted and leafless directed tree with one branching vertex such that
\begin{equation}\label{qqq} \text{$(A^{*} A)^n=A^{*n}A^{n}$ and $(A^{*} A)^k \neq A^{*k}A^{k}$ for all $k\in\{2,3,\ldots\}\setminus \{n\}$.}
\end{equation}
\cite[Example 5.5]{jabl}. It remained an open question as to whether such a construction is possible on a rootless and leafless directed tree. This is strongly related to the question of the existence of a composition operator $A$ in an $L^2$-space (over a $\sigma$-finite measure space) which satisfies \eqref{qqq}. The answer turns out to be in the affirmative. In fact, the author showed that for every integer $n\Ge 2$ there exists a (necessarily non-quasinormal) weighted shift $A$ on a rootless and leafless directed tree with one branching vertex which satisfies \eqref{qqq} (cf.\
\cite[Theorem 5.3.]{ja}). This, combined with the fact that every weighted shift on a rootless directed tree with nonzero weights is unitarily equivalent to a composition operator in an $L^2$-space (see \cite[Theorem 3.2.1]{memo} and \cite[Lemma 4.3.1]{9}), yields examples of composition operators satisfying \eqref{qqq}. It was observed in \cite[Theorem 3.3]{ja} that in the class of bounded injective bilateral weighted shifts, the single equality \eqref{xrow} with $n\Ge 2$ does imply quasinormality. This is no longer true for unbounded ones even for $k=3$ (cf.\ \cite[Example 4.4.]{ja}).
In this paper we will show that an operator $A$ is quasinormal if and only if it satisfies \eqref{xukl} with $S=\{p,m,m+p,n,n+p\}$ for $p,m,n\in\mathbb{N}$ (cf. Theorem \ref{gll}). This theorem generalizes a characterization of quasinormality of bounded
operators given in \cite[Theorem 2.1.]{uch} and \cite[Proposition 13]{Jib}. The proof of this characterization makes use of the theory of operator convex functions and of the related Davis-Choi-Jensen inequality (cf. Theorem \ref{petz}). In the case $S=\{p,q,p+q,2p,2p+q\}$ we will give an alternative proof which is completely different and fits nicely into our framework (cf. Theorem \ref{jjj}). Moreover, if $A$ is invertible, then \eqref{xukl} with $S=\{m, n, n+m\}$ for $m,n\in \mathbb{N}$ implies normality of $A$ (cf. Theorem \ref{inv}). We also obtain a new characterization of normal operators which resembles that for quasinormal operators.
\section{Preliminaries} In this paper, we use the following notation. The fields of rational, real and complex numbers are denoted by $\mathbb{Q}$, $\mathbb{R}$ and $\mathbb{C}$, respectively. The symbols $\mathbb{Z}$, $\mathbb{Z}_{+}$, $\mathbb{N}$ and $\mathbb{R}_+$ stand for the sets of integers, nonnegative integers, positive integers and nonnegative real numbers, respectively.
All Hilbert spaces considered in this paper are assumed to be complex. Let $A$ be a linear operator in a complex Hilbert space $\mathcal{H}$. Denote by $A^*$ the adjoint of $A$. We write $\boldsymbol{B}(\mathcal{H})$ and $\boldsymbol{B}_+(\mathcal{H})$ for the sets of all bounded operators and positive operators on $\mathcal{H}$, respectively. The following fact is a consequence of the spectral theorem and plays an important role in our further investigations. \begin{theorem} \label{rsn}$($cf. \cite{rud}$)$ Let $n\in \mathbb{N}$. The commutants of a positive operator and of its $n$-th root coincide. \end{theorem}
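A finite-dimensional illustration of ours: an operator commuting with a positive matrix $P$ (here one that is not a polynomial pattern forced by distinct eigenvalues, since $P$ has a degenerate eigenvalue) also commutes with the cube root of $P$ computed via the spectral theorem:

```python
import numpy as np

P = np.diag([1.0, 1.0, 4.0])                     # positive, degenerate eigenvalue
B = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])                  # commutes with P (block structure)
assert np.allclose(B @ P, P @ B)

w, U = np.linalg.eigh(P)                         # spectral decomposition of P
root = U @ np.diag(w ** (1.0 / 3.0)) @ U.T       # P^{1/3}
assert np.allclose(root @ root @ root, P)        # it really is the cube root
assert np.allclose(B @ root, root @ B)           # same commutant, as in the theorem
```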
A linear map $\varPhi : \mathcal{A} \rightarrow \mathcal{ B}$ between $C^ *$-algebras is said to be positive if $\varPhi(A) \geq 0$ whenever $A \geq 0$. It is unital if $\varPhi$ preserves the identity.
Let $J\subset \mathbb{R}$ be an interval. A function $f : J \rightarrow \mathbb{R}$ is said to be \begin{itemize} \item[(i)] \textit{matrix monotone of degree $n$} or \textit{$n$-monotone}, where $n\in \mathbb{N}$, if for all selfadjoint $n\times n$ matrices $A$ and $B$ with $\sigma(A),\sigma(B)\subset J$, the inequality $A\leq B$ implies $f(A)\leq f(B)$, \item[(ii)] \textit{operator monotone} or \textit{matrix monotone}, if it is $n$-monotone for every $n\in \mathbb{N}$, \item[(iii)] \textit{matrix convex of degree $n$} or \textit{$n$-convex}, where $n\in \mathbb{N}$, if for all selfadjoint $n\times n$ matrices $A$ and $B$ with $\sigma(A),\sigma(B)\subset J$, \begin{equation*}
f(tA+(1-t)B)\leq tf(A)+(1-t)f(B) \quad \text{for all}\:\: t\in[0,1], \end{equation*} \item[(iv)] \textit{operator convex} or \textit{matrix convex}, if it is $n$-convex for every $n\in \mathbb{N}$. \end{itemize}
In 1934 K. L\"owner \cite{l10} proved that a function defined on an open interval is operator monotone, if and only if it allows an analytic continuation into the complex upper half-plane, that is an analytic continuation to a Pick function. The class of operator monotone functions is an important class of real-valued functions and it has various applications in other branches of mathematics. This concept is closely related to operator convex functions which was introduced by F. Kraus in a paper \cite{kraus}. The operator monotone functions and operator convex functions have very important
properties, namely, they admit integral representations with respect to suitable Borel measures. In particular, a continuous function $f : [0, \infty) \rightarrow [0, \infty)$ is operator monotone if and only if there is a finite Borel measure $\mu$ on $[0,\infty)$ such that
$\int_0^\infty \frac{1}{1+\lambda^2}d\mu(\lambda)<\infty$ and \begin{equation}\label{repbol} f(t)=\alpha +\beta t+\int_0^\infty \Big(\frac{\lambda}{1+\lambda^2}-\frac{1}{t+\lambda}\Big)d\mu(\lambda), \end{equation} where $\alpha\in \mathbb{R}$ i $\beta \geq 0$. By the Bendat-Sherman formula (see \cite{ben,hans2}) operator convex function $f:(-1,1)\rightarrow \mathbb{R}$ admits an integral representation \begin{equation}\label{sher}
f(t)=\alpha +\beta t+\int_{-1}^{1} \frac{t^2}{1-t\lambda}d\mu(\lambda), \end{equation} with $\alpha\geq0$ and $\mu$ is a positive measure. We give below example of a function which is operator monotone. \begin{ex}\label{ex} The function $x^p$ for $p\in (0,1)$ is operator monotone and has an integral representation
\begin{equation*} x^p=\frac{\sin p\pi}{\pi}\int_0^\infty \frac{x\lambda^{p-1}}{x+\lambda}d\lambda. \end{equation*}
\end{ex} The fact that the function from Example \ref{ex} is operator monotone is well known as the L\"owner-Heinz inequality.
\begin{theorem}$($L\"owner-Heinz inequality \cite{l9,l10}$)$. \label{lohe} Let $A$, $B$ be bounded positive operators on $\mathcal{H}$ such that $0\leq B \leq A$. If $0\leq p \leq 1$ then $ B^p \leq A^p$. \end{theorem} The other two inequalities related to operator monotone and convex functions are Hansen inequality and Davis-Choi-Jensen inequality. The first of this has been established in \cite{hansen} by F. Hansen. In \cite{uch} M. Uchiyama gave a necessary and sufficient condition for the equality in the Hansen inequality and use it to show that \eqref{xukl} with $S=\{k,k+1,l,l+1\}$ implies quasinormality of $A$ in separable Hilbert space. The key ingredient of its proof is the integral representation of operator monotone functions \eqref{repbol}.
\begin{theorem}$($Hansen inequality, cf. \cite{hansen, uch}$)$ \label{hans} Let $f:[0,\infty)\rightarrow \mathbb{R}$ be an operator monotone function with $f(0)\geq 0$. Suppose that $A$ is a bounded positive operator and $P$ a nontrivial projection. Then we have \begin{equation*} Pf(A)P\geq f(PAP). \end{equation*} Moreover, if $f$ is not a linear function, then equality holds only in the case $PA=AP$ and $f(0)=0$. \end{theorem} Now we formulate the Jensen operator inequality (Davis-Choi-Jensen inequality) due to Davis
\cite{davi} and Choi \cite{choi}. In \cite{petz} D. Petz gave
a necessary and sufficient condition for equality in Jensen's operator inequality, using the integral representation \eqref{sher} of operator convex functions. \begin{theorem}$($Davis-Choi-Jensen inequality$)$\label{petz} Let $\mathcal{A}$ and $\mathcal{B}$ be $C^*$-algebras with unit and let $\varPhi :\mathcal{A}\rightarrow \mathcal{B}$ be a unital positive linear map. If $f:(\alpha,\beta)\rightarrow \mathbb{R}$ is an operator convex function then \begin{equation*} f(\varPhi(a)) \leq \varPhi(f(a)) \end{equation*} for every $a=a^*\in \mathcal{A}$ with $\sigma(a)\subset (\alpha,\beta)$. Moreover, if $f$ is nonaffine, then the equality holds if and only if $\varPhi$ restricted to the subalgebra generated by $\{a\}$ is multiplicative.
\end{theorem}
\section{A characterization of quasinormality}
In this section, we obtain new characterizations of quasinormality in terms of the system of equations \eqref{xukl}. We first show that the assumption of separability of \cite[Theorem 2.1.]{uch} can be removed. The following lemma was suggested to us by Professor Jan Stochel.
\begin{lemma}\label{yyy} Let $S \subset \mathbb{N}$ be a nonempty set such that
\begin{align} \label{quoo} \begin{minipage}{70ex} any bounded operator $A$ on a separable Hilbert space that satisfies \eqref{xukl} is quasinormal. \end{minipage}
\end{align} Then \eqref{quoo} remains true for any Hilbert space. \end{lemma} \begin{proof} Let $\mathcal{H}$ be a Hilbert space of any dimension. For every $f\in \mathcal{H}$, we define the separable subspace $\mathcal{M}_f$ of $\mathcal{H}$ by
\begin{equation*}
\mathcal{M}_f:=\overline{\{{A^*}^{i_k}{A}^{j_k}\cdots{A^*}^{i_1}{A}^{j_1}f \colon (i_1, \ldots,i_k), (j_1, \ldots, j_k) \in \mathbb{Z}_+^k, \, k \in \mathbb{N}\}}
\end{equation*}
Note that $\mathcal{M}_f$ reduces $A$ and the operator $A|_{\mathcal{M}_f}$ also satisfies \eqref{xukl}. Hence, by \eqref{quoo}, $A|_{\mathcal{M}_f}$ is quasinormal. As a consequence, we have
\begin{equation*}
{A^*}|_{\mathcal{M}_f}{A}|_{\mathcal{M}_f}A|_{\mathcal{M}_f}= A|_{\mathcal{M}_f}{A^*}|_{\mathcal{M}_f}{A}|_{\mathcal{M}_f}.
\end{equation*}
In particular, applying the above
to vector $f$, we see that
\begin{equation*}
{A^*}{A}Af= A{A^*}{A}f.
\end{equation*}
Since the vector $f$ was chosen arbitrarily, $A$ is quasinormal. \end{proof}
We are now ready to show that the assumption of separability of \cite[ Theorem 2.1.]{uch} can be removed.
\begin{theorem}\label{ucc} Let $k,l\in \mathbb{N}$ be such that $l<k$ and $A$ be a bounded operator on $\mathcal{H}$. Then the following conditions are equivalent: \begin{itemize}
\item[(i)] operator $A$ is quasinormal, \item[(ii)] operator $A$ satisfies \eqref{xukl} with $S=\{l,l+1,k,k+1\}$. \end{itemize} \end{theorem}
\begin{proof} Fix $l,k\in \mathbb{N}$ such that $l<k$. By \cite[Theorem 2.1.]{uch}, the set $S:=\{l,l+1,k,k+1\}$ has the property \eqref{quoo} for separable Hilbert spaces. Applying Lemma \ref{yyy} completes the proof. \end{proof}
The proof of the main result of this section (cf. Theorem \ref{gll}) involves several lemmas, the first of which collects some facts related to the block decomposition, with respect to the kernel/range decomposition, of operators satisfying
\eqref{xukl}.
\begin{theorem}\label{ppppt} Let $A$ be a bounded operator on $\mathcal{H}$. Consider the block decomposition \begin{equation}\label{dek} A=\left[ \begin{array}{cc} A^\prime & 0 \\R & 0 \end{array} \right] \quad\text{on} \quad \overline{\mathcal{R}( A^*)}\oplus \mathcal{N}( A)=\mathcal{H},
\end{equation} where $A^\prime:=P A|_{\overline{\mathcal{R}( A^*)}}$, $R:=QA|_{\overline{\mathcal{R}( A^*)}}$, $P:=P_{\overline{\mathcal{R}( A^*)}}$ and $Q:=I-P$. Then the following assertions are valid: \begin{itemize} \item[(i)] if the operator $A$ satisfies the equation \eqref{xrow} for some $n\geq2$, then $A^\prime$ is injective, \item[(ii)] if the operator $A$ satisfies \eqref{xukl} with $S=\{k,k+1\}$ for some $k\in \mathbb{N}$ and $A^\prime$ is bounded below, then the following equality holds: \begin{equation}\label{onsr}
\|V^*({A^\prime}^*{A^\prime}+R^*R)^{k}V\|=\|({A^\prime}^*{A^\prime}+R^*R)^{k}\|, \end{equation}
where $A^\prime=V|{A^\prime}|$ is the polar decomposition of ${A^\prime}$. \end{itemize}
\end{theorem}
\begin{proof} (i) First, choose $h\in \overline{\mathcal{R}( A^*)}$ such that $A^\prime h=0$. Then \begin{equation*} \langle A^\prime h,f \rangle=0, \quad f \in \overline{\mathcal{R}( A^*)}. \end{equation*}
Since $A^\prime=P A|_{\overline{\mathcal{R}( A^*)}}$, the last line takes the form \begin{equation*} \langle P Ah,f \rangle=0, \quad f \in \overline{\mathcal{R}( A^*)}, \end{equation*} hence \begin{equation*} \langle Ah,A^*g\rangle=0, \quad g \in \mathcal{H}. \end{equation*} We see that the last condition is equivalent to $A^2h=0$. This and \eqref{xrow} (recall that $n\geq2$) give \begin{equation*}
(A^{*}A)^{n}h=A^{*n}A^{n}h=0, \end{equation*}
which yields $h\in \mathcal{N}((A^{*}A)^{n})=\mathcal{N}( |A|)=\mathcal{N}( A)$. Since also $h\in \overline{\mathcal{R}( A^*)}=\mathcal{N}(A)^\perp$, we get $h=0$. Hence the operator $A^\prime$ is injective.
(ii) By \eqref{dek} we have \begin{equation}\label{mod5} A^*A=\left[ \begin{array}{cc} {A^\prime}^*A^\prime+R^*R & 0 \\0 & 0 \end{array} \right]. \end{equation}
It is also easily seen that \eqref{xukl} with $S=\{k,k+1\}$ implies \begin{equation*}
(A^{*}A)^{k+1}=A^{*}(A^{*}A)^{k}A, \end{equation*} which is equivalent to
\begin{align}\label{ro} ({A^\prime}^*A^\prime+R^*R)^{k+1}={A^\prime}^*({A^\prime}^*A^\prime+R^*R)^{k}A^\prime. \end{align} Note that
\begin{align}\label{to2}\|V^*({A^\prime}^*{A^\prime}+R^*R)^{k}V\|&=\sup_{h\in \overline{\mathcal{R}( {A^\prime}^*)},\|h\|=1} \langle V^*({A^\prime}^*{A^\prime}+R^*R)^{k}Vh,h \rangle \\\notag&=\sup_{h\in \overline{\mathcal{R}( {A^\prime}^*)},\|h\|=1} \langle ({A^\prime}^*{A^\prime}+R^*R)^{k}Vh,Vh \rangle
\\\notag&=\sup_{g\in \overline{\mathcal{R}( V)},\|g\|=1} \langle ({A^\prime}^*{A^\prime}+R^*R)^{k}g,g \rangle
\\\notag&=\sup_{f\in \overline{\mathcal{R}( A^*)}} \frac{\langle ({A^\prime}^*{A^\prime}+R^*R)^{k}{A^\prime} f,{A^\prime} f \rangle}{\|{A^\prime} f\|^2}
\\\notag&=\sup_{f\in \overline{\mathcal{R}( A^*)}} \frac{\langle {A^\prime}^*({A^\prime}^*{A^\prime}+R^*R)^{k}{A^\prime} f,f \rangle}{\|{A^\prime} f\|^2}
\\\notag&=\sup_{f\in \overline{\mathcal{R}( A^*)}} \frac{\langle ({A^\prime}^*{A^\prime}+R^*R)^{k+1}f,f \rangle}{\|{A^\prime} f\|^2}. \end{align} It follows from the selfadjointness of $({A^\prime}^*{A^\prime}+R^*R)^{k}$ that there exist $\lambda \in \mathbb{R}$ and a sequence $\{f_n\}_{n\in \mathbb{N}}$ in $\overline{\mathcal{R}( A^*)}$ such that the following conditions are satisfied:
\begin{gather} \label{dz1}
\|f_n\|=1, \quad n\in \mathbb{N},
\\ \label{dz2}
|\lambda|=\|({A^\prime}^*{A^\prime}+R^*R)^{k}\|,
\\ \label{dz3}
\lim_{n\rightarrow \infty}\|({A^\prime}^*{A^\prime}+R^*R)^{k}f_n-\lambda f_n\|=0.
\end{gather}
Since ${A^\prime}$ is bounded below, there exists $c>0$ such that $\|{A^\prime} f\|\geq c\|f\|$ for all $f\in \overline{\mathcal{R}( A^*)}$. Hence, we have \begin{align*}
&\Big|\frac{\langle ({A^\prime}^*{A^\prime}+R^*R)^{k+1}f_n,f_n \rangle}{\|{A^\prime} f_n\|^2}-\frac{\langle ({A^\prime}^*{A^\prime}+R^*R)\lambda f_n,f_n \rangle}{\|{A^\prime} f_n\|^2}\Big|
\\&\leq\frac{1}{\|{A^\prime} f_n\|^2}|\langle ({A^\prime}^*{A^\prime}+R^*R)(({A^\prime}^*{A^\prime}+R^*R)^{k}-\lambda) f_n,f_n \rangle|
\\&\hspace{-1ex}\overset{\eqref{dz1}}\leq\frac{1}{c^2}\|({A^\prime}^*{A^\prime}+R^*R)(({A^\prime}^*{A^\prime}+R^*R)^{k}-\lambda) f_n\|
\\&\leq\frac{1}{c^2}\|({A^\prime}^*{A^\prime}+R^*R)\|\,\|(({A^\prime}^*{A^\prime}+R^*R)^{k}-\lambda)f_n\|.
\end{align*} Letting $n$ tend to infinity in the above inequality and using \eqref{dz3}, we get \begin{equation} \label{to}
\frac{\langle ({A^\prime}^*{A^\prime}+R^*R)^{k+1}f_n,f_n \rangle}{\|{A^\prime} f_n\|^2}-\frac{\langle ({A^\prime}^*{A^\prime}+R^*R)\lambda f_n,f_n \rangle}{\|{A^\prime} f_n\|^2}\rightarrow 0 \quad \text{as}\quad n\rightarrow \infty. \end{equation} By the inequality \begin{equation*}
\frac{\langle ({A^\prime}^*{A^\prime}+R^*R) f_n,f_n \rangle}{\|{A^\prime} f_n\|^2}\geq\frac{\langle ({A^\prime}^*{A^\prime}) f_n,f_n \rangle}{\langle ({A^\prime}^*{A^\prime}) f_n,f_n \rangle}=1 \end{equation*} and \eqref{to}, we have \begin{equation*}
\sup_{f\in \overline{\mathcal{R}( A^*)}}\frac{\langle ({A^\prime}^*{A^\prime}+R^*R)^{k+1}f,f \rangle}{\|{A^\prime} f\|^2}\geq |\lambda|\overset{\eqref{dz2}}=\|({A^\prime}^*{A^\prime}+R^*R)^{k}\|. \end{equation*} The last inequality with \eqref{to2} gives \begin{equation*}
\|V^*({A^\prime}^*{A^\prime}+R^*R)^{k}V\|\geq\|({A^\prime}^*{A^\prime}+R^*R)^{k}\|. \end{equation*} On the other hand, \begin{align*}
\|V^*({A^\prime}^*{A^\prime}+R^*R)^{k}V\|&\leq\|V^*\|\|({A^\prime}^*{A^\prime}+R^*R)^{k}\|\|V\|
\leq \|({A^\prime}^*{A^\prime}+R^*R)^{k}\|. \end{align*} Hence the equality in \eqref{onsr} holds. \end{proof}
The following lemma turns out to be useful; it will be used several times in this paper.
\begin{lemma}\label{pomoc}
Let $k \in \mathbb{N}$ and let $A$ be a bounded operator on $\mathcal{H}$ such that $A^*A$ commutes with $A^k$ and $A$ satisfies the equation \eqref{xrow} with $n=k$. Then $A$ is quasinormal. \end{lemma} \begin{proof} The case $k=1$ is obvious. Assume now that $k>1$. First we show that $A$ satisfies \eqref{xukl} with $S=\{k,k+1,2k,2k+1\}$. Indeed, \begin{equation*}
A^{*k+1}A^{k+1}=A^{*k}A^*AA^{k}=A^{*k}A^{k}A^*A=(A^*A)^{k+1}. \end{equation*} Similarly, we see that \begin{equation*}
A^{*2k}A^{2k}=A^{*k}(A^*A)^kA^{k}=A^{*k}A^{k}(A^*A)^k=(A^*A)^{2k}, \end{equation*} and \begin{equation*}
A^{*2k+1}A^{2k+1}=A^{*2k}A^*AA^{2k}=A^{*2k}A^{2k}A^*A=(A^*A)^{2k+1}. \end{equation*} Summarizing, we have shown that the operator $A$ satisfies \eqref{xukl} with $S=\{k,k+1,2k,2k+1\}$. This and Theorem \ref{ucc} imply that $A$ is quasinormal. \end{proof}
For the reader's convenience, we include the proof of the following result which is surely folklore.
\begin{lemma}\label{fug} Let $k\in\mathbb{N}$ and let $M,N,T \in \textbf{B}(\mathcal{H})$ be such that $M$ and $N$ are positive and \begin{equation}\label{eq1} TM^k=N^kT. \end{equation} Then $TM=NT$. \end{lemma} \begin{proof} An application of Berberian's trick concerning $2\times2$ operator matrices shows that \eqref{eq1} is equivalent to the equation \begin{equation*} \left[ \begin{array}{cc} 0 & 0 \\T & 0 \end{array} \right] \left[ \begin{array}{cc} M & 0 \\0 & N \end{array} \right]^k=\left[ \begin{array}{cc} M & 0 \\0 & N \end{array} \right]^k \left[ \begin{array}{cc} 0 & 0 \\T & 0 \end{array} \right]. \end{equation*} Using Theorem \ref{rsn}, we get \begin{equation*} \left[ \begin{array}{cc} 0 & 0 \\T & 0 \end{array} \right] \left[ \begin{array}{cc} M & 0 \\0 & N \end{array} \right]=\left[ \begin{array}{cc} M & 0 \\0 & N \end{array} \right] \left[ \begin{array}{cc} 0 & 0 \\T & 0 \end{array} \right]. \end{equation*} Comparing the lower left corners of both sides, we get $TM=NT$, which completes the proof. \end{proof}
We need one more fact in the proof of the main result of this section (cf. Theorem \ref{gll}), which seems to be of independent interest. \begin{lemma}\label{wak} Let $\alpha, \beta\in\mathbb{R}_+$ be such that $\alpha<\beta$ and $A$, $B$ be bounded operators on $\mathcal{H}$ such that $B$ is positive and injective. Then the following conditions are equivalent: \begin{itemize} \item[(i)] $A^*A\leq B$ and $A^*B^sA=B^{s+1}$ for $s=\alpha,\beta$, \item[(ii)] $A$ and $B$ commute and $A^*A= B$. \end{itemize} \end{lemma} \begin{proof}
(ii)$\Rightarrow$ (i) This part is obvious.
(i)$\Rightarrow$ (ii) We conclude from the Douglas factorization lemma \cite[Theorem 1]{doug}
that there exists an operator $Q$ such that $\|Q\|\leq 1$ and $A=QB^\frac{1}{2}$. This, the injectivity of $B$ and condition (i) give \begin{equation}\label{vvv} Q^*B^sQ=B^{s}\quad \text{for}\quad s=\alpha,\beta. \end{equation} Consider the operators on $\mathcal{H}\oplus\mathcal{H}$ given by \begin{align}Z=\left[ \begin{array}{cc} B^{\beta} & 0 \\0 & 0 \end{array}\label{hanpet} \right],\quad U=\left[ \begin{array}{cc} Q & R \\S & -Q^* \end{array} \right],\quad V=\left[ \begin{array}{cc} Q & -R \\S & Q^* \end{array} \right], \end{align} where $R=(I-QQ^*)^\frac{1}{2}$ and $S=(I-Q^*Q)^\frac{1}{2}$, and the maps \begin{equation*}
\varPhi:B(\mathcal{H}\oplus\mathcal{H})\rightarrow B(\mathcal{H}\oplus\mathcal{H}) \quad\text{and}\quad \varPsi:B(\mathcal{H}\oplus\mathcal{H})\rightarrow B(\mathcal{H}) \end{equation*} defined by \begin{equation*}
\varPhi(X)=\frac{1}{2}(U^*XU+V^*XV) \quad\text{and}\quad \varPsi(X)=P_{\mathcal{H}\oplus\{0\}}\varPhi(X)|_{\mathcal{H}\oplus\{0\}}. \end{equation*}
Using Lemma \ref{fug} with $k=2$ we verify that the operators $U$ and $V$ are unitary. Hence both $\varPhi$ and $\varPsi$ are unital positive linear maps. Let $f:(0,\infty)\rightarrow\mathbb{R}$ be the function given by $f(x)=x^\frac{\alpha}{\beta}$. We therefore have \begin{align*}
\varPsi(f(Z))&=P_{\mathcal{H}\oplus\{0\}}\varPhi(f(Z))|_{\mathcal{H}\oplus\{0\}}=P_{\mathcal{H}\oplus\{0\}}\varPhi\big(\big[
\begin{smallmatrix}B^{\alpha} & 0 \\0 & 0 \end{smallmatrix}\big]\big)|_{\mathcal{H}\oplus\{0\}}\\&=P_{\mathcal{H}\oplus\{0\}}\Big[ \begin{smallmatrix} Q^*B^{\alpha}Q & 0 \\0 & RB^{\alpha}R \end{smallmatrix}\Big]|_{\mathcal{H}\oplus\{0\}}
=Q^*B^{\alpha}Q\overset{\eqref{vvv}}=(Q^*B^{\beta}Q)^\frac{\alpha}{\beta}\\&=f(Q^*B^{\beta}Q)=f\Big(P_{\mathcal{H}\oplus\{0\}}\left[ \begin{smallmatrix}Q^*B^{\beta}Q & 0 \\0 & RB^{\beta}R \end{smallmatrix}\right]|_{\mathcal{H}\oplus\{0\}}\Big)\\&=f\big(P_{\mathcal{H}\oplus\{0\}}\varPhi\big(\big[ \begin{smallmatrix} B^{\beta} & 0 \\0 & 0 \end{smallmatrix}\big]\big)|_{\mathcal{H}\oplus\{0\}}\big)=f(\varPsi(Z)). \end{align*} Since $-f$ is operator convex and $\varPsi (-f(Z))=-f(\varPsi(Z))$, an application of Theorem \ref{petz} shows that the map $\varPsi$ restricted to the subalgebra generated by $\{Z\}$ is multiplicative. In particular, \begin{equation*}
\varPsi(Z^k)=(\varPsi(Z))^k \quad \text{for every}\quad k\in \mathbb{N}. \end{equation*} The last equality is equivalent to the following one: \begin{equation*}
Q^*B^{k\beta}Q=(Q^*B^{\beta}Q)^k \quad \text{for every}\quad k\in \mathbb{N}. \end{equation*} This and \eqref{vvv} give \begin{equation}\label{vvv2}
Q^*B^{k\beta}Q=B^{k\beta} \quad \text{for every}\quad k\in \mathbb{N}. \end{equation} Put $C:=B^\frac{\beta}{2}Q$. We deduce from \eqref{vvv2} that \begin{align*}
C^*C&=Q^*B^{\beta}Q=B^{\beta},
\\C^{*2}C^2&=Q^*B^\frac{\beta}{2}C^*CB^\frac{\beta}{2}Q=Q^*B^{2\beta}Q=B^{2\beta}=(C^*C)^2,\\C^{*3}C^3&=C^{*}(C^{*2}C^2)C=Q^*B^\frac{\beta}{2}B^{2\beta}B^\frac{\beta}{2}Q\\&=Q^*B^{3\beta}Q=B^{3\beta}=(C^*C)^3. \end{align*} Since $C^{*2}C^2=(C^*C)^2$ and $C^{*3}C^3=(C^*C)^3$, we get that $C$ is quasinormal (cf. Theorem \ref{ucc}). Hence \begin{equation*} (C^*C)C=C(C^*C), \end{equation*} which implies \begin{equation*} B^\beta B^\frac{\beta}{2}Q=B^\frac{\beta}{2}QB^\beta. \end{equation*} Since $B^\frac{\beta}{2}$ is injective we get \begin{equation*}
B^\beta Q=QB^\beta. \end{equation*}
We infer from Theorem \ref{rsn}
that
\begin{equation*}
B Q=QB. \end{equation*} Multiplying the above equation on the right by $B^\frac{1}{2}$ and using $A=QB^\frac{1}{2}$, we see that $A$ and $B$ commute. Hence, we have \begin{equation*}
B^\beta A^*A= A^*B^\beta A=B^{\beta+1}. \end{equation*} This and injectivity of $B^{\beta}$ lead to \begin{equation*}
A^*A=B, \end{equation*} which completes the proof. \end{proof} \begin{rem} The operators defined in \eqref{hanpet} originally appeared in \cite{hans3} in the proof of the fact that if $f$ is an operator convex function, then the inequality $f(A^*XA)\leq A^*f(X)A$ holds for every contractive operator $A$ and every selfadjoint operator $X$. \end{rem}
We are now in a position to formulate and prove the aforementioned analog of Theorem \ref{ucc}.
\begin{theorem}\label{gll} Let $m,n,p\in \mathbb{N}$ be such that $m< n$ and let $A$ be a bounded operator on $\mathcal{H}$. Then the following conditions are equivalent: \begin{itemize}
\item[(i)] operator $A$ satisfies \eqref{xukl} with $S=\{p,m,m+p,n,n+p\}$,
\item[(ii)] operator $A$ is quasinormal. \end{itemize} \end{theorem} \begin{proof} Consider the block decomposition \eqref{dek} as in Theorem \ref{ppppt}. By Theorem \ref{ppppt}(i), ${A^\prime}$ is injective. An induction argument shows that \begin{align}\label{pot5} A^k=\left[ \begin{array}{cc} A^{\prime k} & 0 \\RA^{\prime k-1} & 0 \end{array} \right], \quad k\in \mathbb{N}. \end{align} It follows from \eqref{xukl} with $S=\{m,m+p,n,n+p\}$ that \begin{equation*}
A^{*p}(A^*A)^sA^p=(A^*A)^{p+s}\quad \text{for}\quad s=m,n. \end{equation*} This, together with \eqref{mod5} and \eqref{pot5}, implies that \begin{equation}\label{h1}
{A^\prime}^{*p}({A^\prime}^*{A^\prime}+R^*R)^s{A^\prime}^p=({A^\prime}^*{A^\prime}+R^*R)^{p+s}\quad \text{for}\quad s=m,n. \end{equation} We infer from \eqref{xrow} with $n=p$, \eqref{pot5} and \eqref{mod5} that \begin{equation}\label{h2}
{A^\prime}^{*p}{A^\prime}^p\leq {A^\prime}^{*p-1}({A^\prime}^{*}{A^\prime}+R^*R){A^\prime}^{p-1}=({A^\prime}^*{A^\prime}+R^*R)^{p}. \end{equation} By \eqref{h1} and \eqref{h2} and Lemma \ref{wak} applied to the pair $({{A^\prime}}^p,({{A^\prime}}^*{{A^\prime}}+R^*R)^{p})$ in place of $(A,B)$ and with $\alpha=\frac{m}{p}$ and $\beta=\frac{n}{p}$, we deduce that \begin{align}\label{h3}
{A^\prime}^{*p}{A^\prime}^p&=({A^\prime}^*{A^\prime}+R^*R)^{p}
\\ \label{h4}
{A^\prime}^p ({A^\prime}^*{A^\prime}+R^*R)^{p}&= ({A^\prime}^*{A^\prime}+R^*R)^{p}{A^\prime}^p. \end{align}
Combining \eqref{h2} with \eqref{h3}, we get \begin{equation*}
{A^\prime}^{*p-1}R^*R{A^\prime}^{p-1}=0, \end{equation*} which yields $R{A^\prime}^{p-1}=0$. This together with \eqref{pot5} implies that \begin{equation}\label{dekze} A^p=\left[ \begin{array}{cc} A^{\prime p} & 0 \\0 & 0 \end{array} \right]. \end{equation} We deduce from \eqref{h4}, \eqref{dekze}, \eqref{mod5} and Theorem \ref{rsn} that the operator $A^p$ commutes with $A^*A$. Applying Lemma \ref{pomoc} completes the proof. \end{proof}
The author proved that in the class of bounded injective bilateral weighted shifts, the single equation \eqref{xrow} with $n \geq 2$ does imply quasinormality (cf. \cite[Theorem 3.3]{ja}). As shown in \cite[Example 3.4]{ja}, this is no longer true for unbounded injective bilateral weighted shifts. Now we will show that the condition \eqref{xukl} with $S=\{m,n\}$, where $n>m\geq2$, does imply quasinormality in the class of unbounded injective bilateral weighted shifts. \begin{theorem} Let $m,n\geq2$ be such that $m< n$. Then any injective bilateral weighted shift $A$ that satisfies \eqref{xukl} with $S=\{m,n\}$ is quasinormal. \end{theorem} \begin{proof} Let $A$ be an injective bilateral weighted shift with weights $\{\lambda_l\}_{l=-\infty}^\infty$ and let $\{e_l\}_{l=-\infty}^\infty$ be the standard $0$-$1$ orthonormal basis of $\ell^2(\mathbb{Z})$.
By \cite[Theorem 3.2.1]{memo}, we can assume without loss of generality that $\lambda_l > 0$ for all $l \in \mathbb{Z}$. Suppose that $A$ satisfies \eqref{xukl} with $S=\{m,n\}$. It is easy to verify that the equation \begin{equation*}
A^{*k}A^ke_l=(A^{*}A)^ke_l,\quad l\in \mathbb{Z}, \, k\in\{m,n\}, \end{equation*} implies that \begin{equation}
\lambda_l^k=\lambda_{l}\lambda_{l+1}\cdots\lambda_{l+k-1},\quad l\in \mathbb{Z},\, k\in\{m,n\}. \end{equation} Hence, the sequence $\{a_l\}_{l=-\infty}^\infty$, where $a_l:=\ln \lambda_l$ satisfies the following recurrence relation for every $k\in \{m,n\}$, \begin{equation} \label{dus1}
(k-1)a_l=a_{l+1}+\cdots+a_{l+k-1},\quad l\in \mathbb{Z}. \end{equation} Observe that for every $k\in \{m-1,n-1\}$, the polynomial $\frac{1}{k}p_k(z)$ defined by \begin{equation*} p_k(z):= kz^k - (z^ {k-1} + z^{k-2}+\dots+ 1),\quad z\in \mathbb{C}, \end{equation*} is the characteristic polynomial of the recurrence relation \eqref{dus1}. Since \begin{equation}
\frac{p_k(z)}{z-1}=\frac{d}{dz}\frac{z^{k+1}-1}{z-1}, \end{equation} we infer from \cite[Example 3.7]{kaj} that the polynomials $p_k$, $k\in \{m-1,n-1\}$, have only one common root, namely $z=1$. This combined with \cite[Lemma 4.1]{ja} and \cite[Theorem 3.1.1]{hal} implies that the sequence $\{a_l\}_{l=-\infty}^\infty$ is constant. Hence the operator $A$ is a multiple of a unitary operator and as such is quasinormal. This completes the proof. \end{proof}
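The role of the common-root argument can be illustrated by the smallest admissible case; the following observation is elementary.
\begin{rem}
If $m=2$, then the conclusion of the above theorem is immediate: the recurrence relation \eqref{dus1} with $k=2$ reads $a_l=a_{l+1}$ for all $l\in \mathbb{Z}$, so the sequence $\{a_l\}_{l=-\infty}^\infty$ is constant without any reference to the polynomials $p_k$; the common-root argument is needed only when $m\geq 3$.
\end{rem}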
In Theorem \ref{nowechar} below, we propose a method of constructing new sets $S$ such that \eqref{xukl} characterizes quasinormal operators.
\begin{theorem}\label{nowechar} Let $n\in \mathbb{N}$ and $S\subset \mathbb{N}$ be a nonempty set such that any bounded operator $A$ on $\mathcal{H}$ that satisfies \eqref{xukl} is quasinormal.
Then the set $\hat{S}=\{n\} \cup \{ns \colon s\in S\}$ also has this property. \end{theorem} \begin{proof} Suppose that $A$ is an operator which satisfies \eqref{xukl} with $S=\hat{S}$. Then for every $s\in S$, \begin{equation*}
(A^n)^{*s}(A^n)^{s}=A^{*sn}A^{sn}=(A^{*}A)^{sn}= ((A^{*}A)^{n})^s=(A^{*n}A^n)^{s}. \end{equation*} Since the operator $A^n$ satisfies \eqref{xukl} with the set $S$, it is quasinormal. Thus we obtain \begin{equation}\label{nam}
A^{*n}A^nA^n=A^nA^{*n}A^n. \end{equation} The operator $A$ also satisfies equation \eqref{xrow}, since $n\in\hat{S}$. We conclude from \eqref{nam} and Theorem \ref{rsn} that the operator $A^n$ commutes with $A^*A$. Now applying Lemma \ref{pomoc} completes the proof. \end{proof}
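To illustrate Theorem \ref{nowechar}, we record one concrete instance of a set produced by it.
\begin{rem}
Applying Theorem \ref{nowechar} with $n=2$ to the set $S=\{1,2,3,4\}$, which has the required property by Theorem \ref{ucc} (with $l=1$ and $k=3$), we see that the set $\hat{S}=\{2\}\cup\{2,4,6,8\}=\{2,4,6,8\}$ also characterizes quasinormality via \eqref{xukl}. Note that $\hat{S}$ contains no pair of consecutive integers, so it is not of the form $\{l,l+1,k,k+1\}$ appearing in Theorem \ref{ucc}.
\end{rem}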
Now we give another proof of Theorem \ref{gll} in the case of $S=\{p,q, p+q, 2p, 2p+q\}$. It is worth noting that this proof uses only elementary properties of $C^*$-algebras and Theorem \ref{rsn}; we do not use the theory of operator monotone and operator convex functions. We begin by proving the following lemma: \begin{lemma}\label{komut} Let $p,q\in \mathbb{N}$ and let $A$ be a bounded operator on $\mathcal{H}$ which satisfies \eqref{xukl} with $S=\{p,q, p+q, 2p, 2p+q\}$. Then the operator $A^q$ commutes with $A^{*p}A^{p}$. \end{lemma}
\begin{proof}
It follows from our assumptions that the following chain of equalities holds: \begin{align}\label{rec} A^{*(p+q)} A^{p}A^{*p}A^{p+q}&=A^{*q}(A^{*p}A^{p})^2A^q=A^{*q}(A^{*}A)^{2p}A^q \\& =A^{*q}(A^{*2p}A^{2p})A^q=A^{*2p+q}A^{2p+q}=(A^*A)^{2p+q}. \notag \end{align} We show that operators $A^q$ and $A^{*p}A^{p}$ commute. Indeed, if $f\in \mathcal{H}$, then \begin{align*}
\|(A^{*p}A^{p+q}&-A^qA^{*p}A^{p})f\|^2=\langle A^{*p}A^{p+q}f,A^{*p}A^{p+q}f\rangle \\&-2\re \langle A^{*p}A^{p+q}f,A^qA^{*p}A^{p}f\rangle +\langle A^qA^{*p}A^{p}f,A^qA^{*p}A^{p}f\rangle \\& =\langle A^{*(p+q)} A^{p}A^{*p}A^{p+q}f,f\rangle -2\re\langle A^{*p}A^{p}A^{*q}A^{*p}A^{p+q}f,f\rangle \\& +\langle A^{*p}A^{p}A^{*q}A^qA^{*p}A^{p}f,f\rangle \\&\overset{\eqref{rec}} =\langle(A^*A)^{2p+q} f,f\rangle -2\re\langle A^{*p}A^{p}A^{*(p+q)}A^{p+q}f,f\rangle \\& +\langle A^{*p}A^{p}(A^{*q}A^q)A^{*p}A^{p}f,f\rangle \\& =\langle(A^*A)^{2p+q} f,f\rangle -2\re\langle (A^{*}A)^{p}(A^{*}A)^{p+q}f,f\rangle \\& +\langle (A^{*}A)^{p}(A^{*}A)^q(A^{*}A)^{p}f,f\rangle=0, \end{align*} which implies \begin{equation*} A^{*p}A^{p+q}=A^qA^{*p}A^{p}. \end{equation*} This completes the proof. \end{proof} As an immediate consequence of Lemma \ref{komut} we obtain the next theorem. \begin{theorem}\label{jjj} Let $p,q\in \mathbb{N}$ and let $A$ be a bounded operator on $\mathcal{H}$ which satisfies \eqref{xukl} with $S=\{p,q, p+q, 2p, 2p+q\}$. Then the operator $A$ is quasinormal. \end{theorem} \begin{proof} Applying Lemma \ref{komut}, we deduce that \begin{equation*} A^qA^{*p}A^{p}=A^{*p}A^{p}A^q. \end{equation*} This, equation \eqref{xrow} with $n=p$ and Theorem \ref{rsn} yield \begin{equation}\label{www} A^qA^{*}A=A^{*}AA^q.
\end{equation} To complete the proof, we consider two cases separately.
\textsc{Case} 1. If $q=1$, then by \eqref{www} the operator $A$ is quasinormal.
\textsc{Case} 2. Suppose now that $q>1$. Then it follows from \eqref{www} that \begin{align*}
A^{*2q}A^{2q}&=A^{*q}A^{*q}A^qA^q=A^{*q}(A^{*}A)^qA^q\\&=A^{*q}A^q(A^{*}A)^q=(A^{*}A)^{2q},\\
A^{*sq+1}A^{sq+1}&=A^{*sq}A^{*}AA^{sq}=A^{*sq}A^{sq}A^{*}A=(A^{*}A)^{sq+1}, \end{align*} for $s=1,2$. Hence the operator $A$ satisfies \eqref{xukl} with $S=\{q,q+1,2q,2q+1\}$. This and Theorem \ref{ucc} imply that the operator $A$ is quasinormal. \end{proof}
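The following concrete instance shows that Theorem \ref{jjj} covers sets which are not of the form required in Theorem \ref{ucc}.
\begin{rem}
Taking $p=2$ and $q=5$ in Theorem \ref{jjj}, we obtain the set $S=\{2,4,5,7,9\}$, which characterizes quasinormality via \eqref{xukl}; this set contains only one pair of consecutive integers, so it is not of the form $\{l,l+1,k,k+1\}$ from Theorem \ref{ucc}.
\end{rem}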
The new operator transform $\hat{A}$ of $A$ from the class $A(k)$ to the class of hyponormal operators was introduced in \cite{mary}. We define $\hat{A}$ by \begin{equation*}
\hat{A}:=WU||A|^kA|^\frac{1}{k+1}, \end{equation*}
where $A=U|A|$ and $|A||A^*|=W||A||A^*||$ are the polar decompositions. The following theorem guarantees that $\hat{A}$ is indeed hyponormal.
\begin{theorem}$($cf. \cite[Theorem 1]{mary}$)$\label{pop}
Let $A=U|A|$ be the polar decomposition of a class $A(k)$ operator $A$. Then the operator $\hat{A}:=WU||A|^kA|^\frac{1}{k+1}$ is hyponormal, where $|A||A^*|=W||A||A^*||$ is the polar decomposition. \end{theorem}
The following characterization of quasinormality can be deduced from Theorem \ref{pop} and \cite[Proposition 2.3]{uch}.
\begin{theorem}
Let $k\in \mathbb{N}$. A bounded operator $A$ is quasinormal if and only if it satisfies \eqref{xukl} with $S=\{k,k+1\}$ and $|A||A^*|=P_{\overline{\mathcal{R}( A)}}||A||A^*||$ is the polar decomposition. \end{theorem} \begin{proof}
Note that if $A$ is quasinormal, then it satisfies \eqref{xukl} for $S=\{k,k+1\}$ (see \eqref{wst}). Let $A=U|A|$ be the polar decomposition. Since $A$ is quasinormal, $U|A|=|A|U$, and together with $|A^*|= U|A|U^*$ this yields
\begin{equation*}
|A||A^*|= |A|U|A|U^*=U|A|^2U^*=(U|A|U^*)^2=|A^*|^2.
\end{equation*}
Hence we deduce that
\begin{equation*}
|A||A^*|=P_{\overline{\mathcal{R}( A)}}|A^*|^2
\end{equation*}
is the polar decomposition.
We now show the reverse implication. Since $A$ satisfies \eqref{xukl} with $S=\{k,k+1\}$, we see that $(A^*|A|^{2k}A)^\frac{1}{k+1}\geq |A|^2$. Hence, the operator $A$ is in the class $A(k)$. By Theorem \ref{pop}, the transform $\hat{A}$ is a hyponormal operator. By our assumption, we have \begin{equation*}
\hat{A}=WU||A|^kA|^\frac{1}{k+1}=P_{\overline{\mathcal{R}( A)}}U|A|=A. \end{equation*} This yields that $A$ is also a hyponormal operator. This, the fact that $A$ satisfies \eqref{xukl} with $S=\{k,k+1\}$, and \cite[Proposition 2.3]{uch} complete the proof. \end{proof} \section{A characterization of normality}
The single equality \eqref{xrow} implies the normality of compact operators \cite[Proposition]{uch}. In general, neither the single equality \eqref{xrow} nor the system of equations \eqref{xukl} implies normality. We obtain a new characterization of normal operators which resembles that for quasinormal operators (cf. Theorem \ref{gll}).
We begin by proving a few auxiliary lemmas. The first one will be deduced from Lemma \ref{fug}. \begin{lemma}\label{przep} Let $k\in\mathbb{N}$ and let $A$ be a bounded operator such that both $A$ and $A^*$ satisfy the equality \eqref{xrow} with $n=k$. Then the following equation holds: \begin{equation*} AA^{*}A^k=A^kA^{*}A. \end{equation*} \end{lemma} \begin{proof} It is clear that \begin{equation*} A^k(A^{*k}A^{k})=(A^kA^{*k})A^{k}. \end{equation*} Using the equalities $A^{*k}A^{k}=(A^{*}A)^{k}$ and $A^{k}A^{*k}=(AA^{*})^{k}$, we obtain \begin{equation}\label{r1} A^k(A^{*}A)^{k}=(AA^{*})^kA^{k}. \end{equation} Employing Lemma \ref{fug} completes the proof. \end{proof} We will derive Theorem \ref{md} from the following more technical result.
\begin{lemma}\label{gl} Let $k\in\mathbb{N}$ and let $A$ be a bounded operator on $\mathcal{H}$ such that both operators $A$ and $A^{*}$ satisfy \eqref{xukl} with $S=\{k,k+1\}$. Then $A$ is quasinormal. \end{lemma} \begin{proof} Since both operators $A$ and $A^*$ satisfy \eqref{xukl}, we can deduce from Lemma \ref{przep} the following equality: \begin{equation*} AA^{*}A^{k+1}=A^{k+1}A^{*}A. \end{equation*} An induction argument shows that
\begin{equation*} (AA^{*})^jA^{k+1}=A^{k+1}(A^{*}A)^j, \end{equation*} for every $j\in \mathbb{N}$. In particular for $j=k$ we have \begin{equation*} (AA^{*})^kA^{k+1}=A^{k+1}(A^{*}A)^k. \end{equation*} This and equation \eqref{xrow} with $n=k$ for operators $A$ and $A^*$ gives \begin{equation}\label{recrel} A^kA^{*k}A^{k+1}=A^{k+1}A^{*k}A^k. \end{equation} We now prove that operators $A$ and $A^{*k}A^{k}$ commute. \begin{align*}
\|(A^{*k}A^{k+1}&-AA^{*k}A^{k})f\|^2=\langle A^{*k}A^{k+1}f,A^{*k}A^{k+1}f\rangle -2\re \langle A^{*k}A^{k+1}f,AA^{*k}A^{k}f\rangle \\& +\langle AA^{*k}A^{k}f,AA^{*k}A^{k}f\rangle \\& =\langle A^{*(k+1)} (A^{k}A^{*k}A^{k+1})f,f\rangle -2\re\langle A^{*k}A^{k}A^{*}A^{*k}A^{k+1}f,f\rangle \\& +\langle A^{*k}A^{k}A^{*}AA^{*k}A^{k}f,f\rangle \\&\overset{\eqref{recrel}} =\langle A^{*(k+1)} (A^{k+1}A^{*k}A^{k})f,f\rangle -2\re\langle A^{*k}A^{k}A^{*(k+1)}A^{k+1}f,f\rangle \\& +\langle A^{*k}A^{k}(A^{*}A)A^{*k}A^{k}f,f\rangle \\& =\langle (A^{*}A)^{k+1}(A^{*}A)^{k}f,f\rangle -2\re\langle (A^{*}A)^{k}(A^{*}A)^{k+1}f,f\rangle \\& +\langle (A^{*}A)^{k}(A^{*}A)(A^{*}A)^{k}f,f\rangle=0, \end{align*} for every $f\in \mathcal{H}$. Hence, \begin{equation*} A^{*k}A^{k+1}=AA^{*k}A^{k}. \end{equation*} Using \eqref{xrow} with $n=k$, we get \begin{equation*} (A^{*}A)^{k}A=A(A^{*}A)^{k}. \end{equation*} By Theorem \ref{rsn}, we see that \begin{equation*} A^{*}A^{2}=AA^{*}A, \end{equation*} which yields that the operator $A$ is quasinormal. This completes the proof. \end{proof} Now we are in a position to prove the aforementioned characterization of normality of operators, which is a direct consequence of the above lemma. \begin{theorem}\label{md} Let $k \in \mathbb{N}$ and let $A$ be a bounded operator on $\mathcal{H}$. Then the following conditions are equivalent: \begin{enumerate} \item[(i)] both operators $A$ and $A^{*}$ satisfy \eqref{xukl} with $S=\{k,k+1\}$, \item[(ii)] operator $A$ is normal. \end{enumerate} \end{theorem} \begin{proof} Applying Lemma \ref{gl} to the operators $A$ and $A^{*}$, we get that both of them are quasinormal. Since every quasinormal operator is hyponormal, the operators $A$ and $A^{*}$ are hyponormal; hence \begin{equation*}
\|Af\|=\|A^{*}f\|, \quad f\in \mathcal{H}. \end{equation*}
By \cite[Theorem 12.12]{rud} $A$ is normal. \end{proof}
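The assumption on $A^{*}$ in Theorem \ref{md} cannot be dropped, as the following standard example shows.
\begin{rem}
Let $A$ be the unilateral shift on $\ell^2(\mathbb{N})$, that is, $Ae_l=e_{l+1}$ for $l\in \mathbb{N}$, where $\{e_l\}_{l=1}^\infty$ is the standard orthonormal basis. Then $A^*A=I$, so $A^{*k}A^k=I=(A^*A)^k$ for every $k\in \mathbb{N}$ and $A$ satisfies \eqref{xukl} with every $S\subset \mathbb{N}$, yet $A$ is not normal. Accordingly, $A^*$ fails \eqref{xrow}: for $n\geq 2$, $(AA^*)^n=AA^*$ is the orthogonal projection onto $\overline{\mathcal{R}(A)}$, while $A^nA^{*n}$ is the orthogonal projection onto $\mathcal{R}(A^n)$, and these projections differ.
\end{rem}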
We conclude this section by giving an analogue of Theorem \ref{gll} in the case of invertible operators.
\begin{theorem}\label{inv} Let $m,n\in \mathbb{N}$ be such that $m\leq n$ and let $A$ be an invertible operator. Then the following conditions are equivalent: \begin{itemize} \item[(i)] the operators $A$ and $A^{*}$ satisfy \eqref{xukl} with $S=\{m,n,m+n\}$, \item[(ii)] the operator $A$ is normal. \end{itemize} \end{theorem} \begin{proof}
Let $A^n=U_n|A^n|$ be the polar decomposition of $A^n$. Since $A^n$ is an invertible operator, we conclude that $U_n$ is a unitary operator. By our assumption,
\begin{equation}\label{inj}
A^{*n}(A^*A)^mA^{n}=(A^*A)^{m+n}
\end{equation}
It follows from \eqref{inj} and the polar decomposition of $A^n$ that
\begin{equation}
|A^{n}|U_n^*(A^*A)^mU_n |A^{n}|=(A^*A)^{m+n}.
\end{equation}
The injectivity of $|A^{n}|$ implies
\begin{equation}
U_n^*(A^*A)^mU_n =(A^*A)^m,
\end{equation} and consequently $(A^*A)^m$ and $U_n$ commute. By Theorem \ref{rsn}, $|A|^n$ and $U_n$ commute. Since $|A|^n=|A^n|$, $A^n$ is quasinormal. Applying Theorem \ref{rsn} once more, we see that $A^*A$ and $A^n$ commute. This and Lemma \ref{pomoc} show that $A$ is quasinormal. Since invertible quasinormal operators are normal, the proof is complete.
\end{proof}
\section{Operator inequalities}
Aluthge and Wang \cite{aw3,aw4} showed several results on powers of $p$-hyponormal and log-hyponormal operators. The study has been continued by Furuta and Yanagida \cite{fy1,fy2}, Ito \cite{ito} and Yamazaki \cite{yama}. We collect these results below.
\begin{theorem}\label{ph} Let $m\in \mathbb{N}$ and let $A$ be a $p$-hyponormal operator with $p\in (m-1,m]$. Then the following inequalities hold: \begin{itemize}
\item[(i)] $A^{*n}A^{n}\geq(A^{*}A)^{n}$ and $(AA^{*})^{n}\leq A^{n}A^{*n}$, for every positive integer $n\leq m$,
\item[(ii)]
\begin{align*}
(A^{*n}A^{n})^{\frac{(p+1)}{n}}\geq \dots &\geq (A^{*m+2}A^{m+2})^{\frac{(p+1)}{m+2}}\\&\geq (A^{*m+1}A^{m+1})^{\frac{(p+1)}{m+1}}\geq (A^{*}A)^{p+1}
\end{align*}
and
\begin{align*}
(A^{n}A^{*n})^{\frac{(p+1)}{n}}\leq \dots &\leq (A^{m+2}A^{*m+2})^{\frac{(p+1)}{m+2}}\\&\leq (A^{m+1}A^{*m+1})^{\frac{(p+1)}{m+1}}\leq (AA^{*})^{p+1},
\end{align*}
for $n\geq m+1$. \end{itemize} \end{theorem} The analogous result for log-hyponormal operators reads as follows.
\begin{theorem} Let $A$ be a log-hyponormal operator. Then
\begin{equation*}
(A^{*n}A^{n})^{\frac{1}{n}}\geq \dots \geq (A^{*3}A^{3})^{\frac{1}{3}}\geq (A^{*2}A^{2})^{\frac{1}{2}}\geq (A^{*}A)
\end{equation*}
and
\begin{equation*}
(A^{n}A^{*n})^{\frac{1}{n}}\leq \dots \leq (A^{3}A^{*3})^{\frac{1}{3}}\leq (A^{2}A^{*2})^{\frac{1}{2}}\leq (AA^{*})
\end{equation*}
for $n\in\mathbb{N}$.
\end{theorem}
The following theorem, which is a reinforcement of \cite[Proposition 2.3]{uch}, is an immediate consequence of Theorem \ref{ph}.
\begin{theorem}\label{hyp} Let $m,n\in \mathbb{N}$ and let $A$ be a $p$-hyponormal operator with $p\in (m-1,m]$.
If $A$ satisfies equation \eqref{xrow} with $n\geq m+3$, then $A$ is quasinormal.
Moreover, if the operator $A$ is hyponormal and satisfies equation \eqref{xrow} with $n\geq 2$, then $A$ is quasinormal.
\end{theorem}
\begin{proof} By \eqref{xrow} for $n=m+3$ and condition (ii) of Theorem \ref{ph}, the operator $A$ satisfies \eqref{xukl} with $S=\{m+1,m+2,m+3\}$. This combined with Theorem \ref{ucc} implies that $A$ is quasinormal.
In the case when $A$ is hyponormal, using once again condition (ii) of Theorem \ref{ph}, we deduce that $A$ satisfies the equation $A^{*2}A^2=(A^*A)^2$. The last equality is equivalent to $A^*(A^*A-AA^*)A=0$. Since $A$ is hyponormal, the operator $A^*A-AA^*$ is non-negative, so $(A^*A-AA^*)^\frac{1}{2}A=0$ and hence $(A^*A-AA^*)A=0$, which means that $A$ is quasinormal. This completes the proof.
\end{proof}
The following result is also an analogue of Theorem \ref{ph}.
\begin{theorem}$($cf. \cite{mia}$)$.\label{mia}
Let $A$ be an invertible class $A$ operator. Then the following inequalities hold:
\begin{itemize}
\item[(i)] $|A^n|^\frac{2}{n}\geq (A^*|A^{n-1}|^\frac{2}{n-1}A)\geq |A|^2$, \quad $n=2,3,\dots,$
\item[(ii)] $|A^{n+1}|^\frac{2n}{n+1}\geq |A^n|^2$, \quad $n\in \mathbb{N}$,
\item[(iii)] $|A^{2n}|\geq |A^n|^2$, \quad $n\in \mathbb{N}$,
\item[(iv)] $|A|^2\geq |A^2|\geq \dots \geq |A^n|^\frac{2}{n}$, \quad $n\in \mathbb{N}$,
\item[(v)] $|A^{-2}|\geq |A^{-1}|^2$.
\end{itemize}
\end{theorem}
The key ingredient of its proof is the following result, which is a direct consequence of the celebrated Furuta inequality (cf. \cite{furuta}).
\begin{theorem}\label{twc}$($cf. \cite{mia}$)$ Let $A$ and $B$ be positive invertible operators such that $(B^\frac{1}{2}AB^\frac{1}{2})^\frac{\beta_0}{\alpha_0+\beta_0}\geq B$ holds for fixed $\alpha_0 , \beta_0 \geq 0$ with $\alpha_0+\beta_0>0$. Then for any fixed $\delta\geq -\beta_0$ the function
\begin{equation*}
g:[1,\infty)\times[1,\infty)\rightarrow \boldsymbol{B}_+(\mathcal{H})
\end{equation*}
given by
\begin{equation*}
g(\lambda,\mu)=B^\frac{-\mu}{2}(B^\frac{\mu}{2}A^\lambda B^\frac{\mu}{2})^\frac{\delta+\beta_0\mu}{\alpha_0\lambda+\beta_0\mu}B^\frac{-\mu}{2}
\end{equation*}
is an increasing function of both $\lambda$ and $\mu$ for $\lambda\geq 1$ and $\mu \geq 1$ such that $\alpha_0 \lambda \geq \delta$.
\end{theorem}
Now we obtain a chain of inequalities of this type for invertible operators which satisfy \eqref{xukl} for some $S\subset \mathbb{N}$. We begin with a generalization of \cite[Lemma 1]{mia}; the proof of the following lemma is analogous to the proof of that result.
\begin{lemma}\label{mia2}
Let $A$ be an invertible operator such that
\begin{equation*}
(A^{*m}|A^p|^{2k}A^m)^\frac{m}{pk+m}\geq |A^m|^2,
\end{equation*}
for some $k\in(0,\infty)$ and $m,p\in \mathbb{N}$. Then for any fixed $\delta\geq -m$ the function
\begin{equation*}
f_{p,\delta}:(0,\infty)\rightarrow \boldsymbol{B}_+(\mathcal{H}),
\end{equation*} defined by
\begin{equation*}
f_{p,\delta}(l)=(A^{*m}|A^p|^{2l}A^m)^\frac{\delta+m}{pl+m}
\end{equation*}
is increasing for $l\geq \max\{k,\frac{\delta}{p}\}$.
\end{lemma}
\begin{proof}
Let $A^m=U_m|A^m|$ be the polar decomposition of $A^m$. Since $A^m$ is an invertible operator, we conclude that $U_m$ is a unitary operator.
Suppose now that the following inequality holds
\begin{equation}\label{n31}
(A^{*m}|A^p|^{2k}A^m)^\frac{m}{pk+m}\geq |A^m|^2.
\end{equation}
Since $A^{*m}=U_m^*|A^{*m}|$, we get
\begin{align*}
(A^{*m}|A^p|^{2k}A^m)^\frac{m}{pk+m}&=
(U_m^*|A^{*m}||A^p|^{2k}|A^{*m}|U_m)^\frac{m}{pk+m}\\&=U_m^*(|A^{*m}||A^p|^{2k}|A^{*m}|)^\frac{m}{pk+m}U_m.
\end{align*}
This and \eqref{n31} imply that
\begin{equation*}
U_m^*(|A^{*m}||A^p|^{2k}|A^{*m}|)^\frac{m}{pk+m}U_m\geq |A^m|^2.
\end{equation*}
We see that the above inequality is equivalent to the following one
\begin{equation*}
(|A^{*m}||A^p|^{2k}|A^{*m}|)^\frac{m}{pk+m}\geq U_m |A^m|^2U_m^*=|A^{*m}|^2.
\end{equation*}
Set $X=|A^p|^{2k}$ and $B=|A^{*m}|^2$. Then the last inequality takes the form
\begin{equation*}
(B^\frac{1}{2}XB^\frac{1}{2})^\frac{m}{pk+m}\geq B.
\end{equation*}
Applying Theorem \ref{twc} with $\alpha_0=pk$, $\beta_0=m$ and $\mu=1$, we see that for every real $\delta\geq -m$ the function
\begin{align*}
g(\lambda)&=B^\frac{-1}{2}(B^\frac{1}{2}X^\lambda B^\frac{1}{2})^\frac{\delta+m}{pk\lambda+m}B^\frac{-1}{2}\\&=
|A^{*m}|^{-1}(|A^{*m}||A^p|^{2k\lambda} |A^{*m}|)^\frac{\delta+m}{pk\lambda+m}|A^{*m}|^{-1}
\end{align*}
is increasing for $\lambda \geq 1$ such that $pk\lambda \geq \delta$. Setting $\lambda=\frac{l}{k}$, we obtain
\begin{align*}
g(\frac{l}{k})&=
|A^{*m}|^{-1}(|A^{*m}||A^p|^{2l} |A^{*m}|)^\frac{\delta+m}{pl+m}|A^{*m}|^{-1}
\\&=|A^{*m}|^{-1}(U_mU_m^*|A^{*m}||A^p|^{2l} |A^{*m}|U_mU_m^*)^\frac{\delta+m}{pl+m}|A^{*m}|^{-1}
\\&=|A^{*m}|^{-1}(U_mA^{*m}|A^p|^{2l} A^{m}U_m^*)^\frac{\delta+m}{pl+m}|A^{*m}|^{-1}
\\&=(A^{*m})^{-1}(A^{*m}|A^p|^{2l} A^{m})^\frac{\delta+m}{pl+m}(A^{m})^{-1}
\\&=(A^{*m})^{-1}f_{p,\delta}(l)(A^{m})^{-1}.
\end{align*}
This means that $f_{p,\delta}(l)$ is increasing for
$l\geq k$ such that $pl\geq \delta$, which completes the proof.
\end{proof}
The main technical result of this section is the following theorem. \begin{theorem}\label{mpj} Let $m,n\in \mathbb{N}$ be such that $n\leq m$ and let $A$ be an invertible operator which satisfies \eqref{xukl} with $S=\{m,n+m\}$ and $A^{*n}A^n\leq (A^{*}A)^n$. Then the following inequalities hold:
\begin{equation}\label{piekne}
|A^{p+m}|^{\frac{2m}{m+p}}\geq (A^{*m}|A^p|^\frac{2n}{p}A^m)^\frac{m}{m+n}\geq|A^m|^2
\end{equation}
for $p=n+im$, where $i\in \mathbb{Z}_+$. \end{theorem}
\begin{proof} We use induction on $p$. We easily check that both inequalities hold for $p=n$: \begin{align*}
(A^{*m}|A^n|^\frac{2n}{n}A^m)^\frac{m}{m+n}=(A^{*m}A^{*n}A^nA^m)^\frac{m}{m+n}=|
A^{m+n}|^\frac{2m}{m+n}=|A|^{2m}=|A^m|^2 \end{align*} and \begin{equation*}
|A^{m+n}|^{\frac{2m}{m+n}}=|A^m|^{2}=(A^{*m}|A^n|^\frac{2n}{n}A^m)^\frac{m}{m+n}. \end{equation*} Suppose that both inequalities hold for a given integer $p$. We show that both of them hold for $p+m$ as well. First, we prove that the second inequality in \eqref{piekne} holds. By induction hypothesis, we have \begin{equation*}
|A^{p+m}|^{\frac{2m}{m+p}}\geq|A^m|^2. \end{equation*}
Applying the L\"owner-Heinz inequality with the power $\frac{n}{m}$ to $|A^{p+m}|^{\frac{2m}{m+p}}$ and $|A^m|^2$ and using $A^{*s} A^{s}=(A^*A)^s$ for $s=n,m$, we get the following: \begin{equation*}
|A^{p+m}|^{\frac{2n}{m+p}}\geq|A^n|^2. \end{equation*} Multiplying both sides of the above inequality on the left by $A^{*m}$ and on the right by $A^{m}$ and using $A^{*s} A^{s}=(A^*A)^s$ for $s=n,m+n$ gives \begin{equation*}
A^{*m}|A^{p+m}|^{\frac{2n}{m+p}}A^{m}\geq A^{*m}|A^n|^2A^{m}=|A|^{2(m+n)}. \end{equation*} Applying L\"owner-Heinz inequality again with the power $\frac{m}{m+n}$ and using $A^{*s} A^{s}=(A^*A)^s$ for $s=m,m+n$, we conclude that \begin{equation}\label{pc}
(A^{*m}|A^{p+m}|^{\frac{2n}{m+p}}A^{m})^\frac{m}{m+n}\geq |A^m|^2, \end{equation} which completes the induction argument for the proof of the second inequality in \eqref{piekne}.
Now we turn to the proof of the first inequality in \eqref{piekne}. To make the notation more readable, we write $p^\prime$ instead of $p+m$. Note that the inequality in \eqref{pc} can be rewritten in the following form \begin{equation*}
(A^{*m}|A^{p^\prime}|^\frac{2n}{p^\prime}A^m)^\frac{m}{m+{p^\prime}\frac{n}{p^\prime}}\geq|A^m|^2. \end{equation*} By Lemma \ref{mia2} with $k=\frac{n}{p^\prime}$, the function \begin{equation*}
f_{p^\prime,0}(l)=(A^{*m}|A^{p^\prime}|^{2l}A^m)^\frac{m}{p^\prime l+m} \end{equation*} is increasing for $l\geq \max\{\frac{n}{p^\prime},0\}=\frac{n}{p^\prime}.$ In particular $f_{p^\prime,0}(1)\geq f_{p^\prime,0}(\frac{n}{p^\prime})$, which gives \begin{equation*}
|A^{p^\prime+m}|^{\frac{2m}{m+p^\prime}}=f_{p^\prime,0}(1)\geq f_{p^\prime,0}(\frac{n}{p^\prime})=(A^{*m}|A^{p^\prime}|^\frac{2n}{p^\prime}A^m)^\frac{m}{m+n}. \end{equation*} This completes the proof. \end{proof} We are now in a position to formulate and prove the aforementioned analogue of Theorem \ref{mia}.
\begin{theorem}\label{cn} Let $m,n\in \mathbb{N}$ be such that $n\leq m$ and let $A$ be an invertible operator which satisfies \eqref{xukl} with $S=\{m,n+m\}$ and $A^{*n}A^n\leq (A^{*}A)^n$. Then the following inequalities hold: \begin{equation}\label{cnier}
|A^{n}|^\frac{2}{n}\leq |A^{m+n}|^\frac{2}{m+n}\leq |A^{2m+n}|^\frac{2}{2m+n}\leq \dots\leq|A^{rm+n}|^\frac{2}{rm+n} \end{equation} for $r\in \mathbb{N}$. \end{theorem} \begin{proof} We use induction to prove that \begin{equation}\label{cccc}
|A^{m+p}|^\frac{2p}{m+p}\geq |A^{p}|^2, \end{equation} for $p=n+im$, where $i\in \mathbb{Z}_+$. It is easy to verify that \eqref{cccc} holds for $p=n$. Suppose that the above inequality holds for a given integer $p$. By Theorem \ref{mpj} the following inequality holds \begin{equation*}
(A^{*m}|A^{p+m}|^\frac{2n}{p+m}A^m)^\frac{m}{m+n}\geq|A^m|^2. \end{equation*} Applying Lemma \ref{mia2} with $\delta=p$, we see that the function \begin{equation*}
f_{p+m,p}(l)=(A^{*m}|A^{p+m}|^{2l}A^m)^\frac{p+m}{(p+m)l+m}, \end{equation*} is increasing for $l\geq \max \{\frac{n}{p+m},\frac{p}{p+m} \}.$ By induction hypothesis \eqref{cccc} and the monotonicity of $f_{p+m,p}$, we have \begin{align*}
|A^{p+m}|^2&=A^{*m}|A^{p}|^2A^m\leq A^{*m}|A^{m+p}|^\frac{2p}{p+m}A^m\\&=f_{p+m,p}(
\frac{p}{p+m})\leq f_{p+m,p}(1)\\&=|A^{p+2m}|^\frac{2(p+m)}{p+2m}, \end{align*} which completes the induction argument.
Now we prove \eqref{cnier}. Applying the L\"owner-Heinz inequality with the power $\frac{1}{p}$ and \eqref{cccc}, we get \begin{equation*}
|A^{m+p}|^\frac{2}{m+p}\geq |A^{p}|^\frac{2}{p}, \end{equation*} which completes the proof. \end{proof}
\end{document}
\begin{document}
\begin{abstract} We describe the effect of rational singularities on the Brauer group of a surface, and compute the Brauer groups of all singular del Pezzo surfaces over an algebraically closed field. \end{abstract}
\title{Brauer groups of singular del Pezzo surfaces}
\section{Introduction}
The Brauer group of a variety $X$, which in this paper we take to mean the cohomology group $\Br X = \mathrm{H}^2(X,{\mathbb{G}_\mathrm{m}})$, was extensively studied by Grothendieck~\cite{Grothendieck:GB}. Brauer groups of singular varieties are not particularly well behaved: in particular, the Brauer group of a singular variety need not inject into the Brauer group of its function field. The purely local question, of understanding the Brauer group of the local ring of a singularity, has been well studied: see, for example,~\cite{DFM:JA-1993}. An interesting feature of the results discussed in this article is that the calculation is a global one, and often leads to elements of the Brauer group which are locally trivial in the Zariski topology. One individual example of such an element was given by Ojanguren~\cite{Ojanguren:JA-1974}, whose algebra is of order 3 and defined on a singular cubic surface with three $A_2$ singularities; it will be shown below that this is the only type of singular cubic surface admitting a 3-torsion Brauer element. A more general framework for studying such examples, described by Grothendieck in~\cite{Grothendieck:GB}, was developed by De~Meyer and Ford~\cite{DF:AAAM-1990} to give examples of toric surfaces admitting non-trivial, locally trivial Azumaya algebras.
In this article we take a slightly different approach which, for varieties with rational singularities, shows how the calculation of the Brauer group can be made very explicit by using the intersection pairing. We then apply this to arguably the simplest interesting class of singular projective surfaces, namely the singular del Pezzo surfaces. These are easy to approach for two reasons: they have rational singularities; and they come with a natural desingularisation which is a rational surface. In Proposition~\ref{prop:br} we show how to combine the Leray spectral sequence for the desingularisation with Lipman's detailed description of the local Picard groups above the singular points \cite{Lipman:IHES-1969}. In particular, it follows that the Brauer group may be easily computed using the intersection form on the desingularisation. For singular del Pezzo surfaces, this is well understood, and in section~\ref{sec:dp} we apply Proposition~\ref{prop:br} to compute the Brauer groups of all singular del Pezzo surfaces over an algebraically closed field; the Brauer group depends only on the singularity type of the surface. The arguments, and hence the results, are valid in arbitrary characteristic.
The principal motivation for this article is in applying the Brauer group to study rational points of del Pezzo surfaces, as first suggested by Manin~\cite{Manin:GBG}. For arithmetic questions, it is often more useful to work with a desingularisation of the original variety, and so the Brauer group of the singular variety is not of obvious interest. However, there are some situations where one cannot avoid looking at the Brauer group of a singular variety; the situation we have in mind is that of a model of a del Pezzo surface over a local ring, where the Brauer group of the (possibly singular) special fibre must be taken into account.
\section{The Brauer group of a surface with rational singularities}
In this section we study the Brauer group of a surface $Y$ having only isolated rational singularities, over an algebraically closed base field.
Following Lipman, by a \emph{desingularisation} $X \to Y$ we mean a proper birational morphism from a regular scheme $X$. If $Y$ is a normal surface with finitely many rational singularities, then there is a unique minimal desingularisation $X \to Y$ which may be constructed as a sequence of blow-ups at singular points.
We write $\Br(X/Y)$ to mean $\ker(\Br Y \to \Br X)$. If $Y$ is integral with function field $K$, then there is a sequence of maps $\Br Y \to \Br X \to \Br K$. Since $X$ is regular, $\Br X$ injects into $\Br K$; it follows that $\Br(K/Y) \cong \Br(X/Y)$.
Whenever $A$ is an Abelian group, $A^*$ denotes the group $\Hom(A,\mathbb{Z})$.
\begin{proposition}\label{prop:br} Let $Y$ be a normal surface over an algebraically closed field $k$; suppose that $Y$ has finitely many rational singularities, and let $f\colon X \to Y$ be the minimal desingularisation. Let $\mathbf{E}$ denote the subgroup of $\Pic X$ generated by the classes of the exceptional curves of the resolution, and let $\theta \colon \Pic X \to \mathbf{E}^*$ be the homomorphism induced by the intersection pairing on $\Pic X$. Then there is an exact sequence \[ 0 \to \Pic Y \xrightarrow{f^*} \Pic X \xrightarrow{\theta} \mathbf{E}^* \to \Br Y \xrightarrow{f^*} \Br X. \] \end{proposition}
\begin{proof} Since $f$ is proper and birational, we have $f_* {\mathbb{G}_\mathrm{m}} = {\mathbb{G}_\mathrm{m}}$. It follows that, for any flat morphism of schemes $Y' \to Y$, if $f_{Y'} \colon X \times_Y Y' \to Y'$ denotes the base change of $f$, the following sequence is exact (see~\cite[Section~8.1, Proposition~4]{BLR:NM}): \begin{equation}\label{eq:leray} 0 \to \Pic Y' \xrightarrow{f^*_{Y'}} \Pic (X \times_Y Y') \to {\mathcal{P}\!\mathit{ic}}_{X/Y}(Y') \to \Br Y' \xrightarrow{f^*_{Y'}} \Br (X \times_Y Y'). \end{equation}
Taking $Y'=Y$ in~\eqref{eq:leray} gives the exact sequence \begin{equation}\label{eq:step0} 0 \to \Pic Y \xrightarrow{f^*} \Pic X \to {\mathcal{P}\!\mathit{ic}}_{X/Y}(Y) \to \Br Y \xrightarrow{f^*} \Br X. \end{equation} So it will be enough to exhibit an isomorphism $\alpha \colon {\mathcal{P}\!\mathit{ic}}_{X/Y}(Y) \to \mathbf{E}^*$ such that composing $\alpha$ with the natural homomorphism $\Pic X \to {\mathcal{P}\!\mathit{ic}}_{X/Y}(Y)$ gives the homomorphism $\theta$ described in the statement of the theorem.
From now on, we work with ${\mathcal{P}\!\mathit{ic}}_{X/Y}$ as a sheaf only on the small \'etale site of $Y$, in order to be able to talk about its stalks.
\paragraph{Step 1: Localisation}
As the sheaf ${\mathcal{P}\!\mathit{ic}}_{X/Y}$ on $Y_{\textrm{\'et}}$ is supported on the singular points, the natural map \begin{equation}\label{eq:stalks} {\mathcal{P}\!\mathit{ic}}_{X/Y}(Y) \to \prod_{P \text{ singular}}({\mathcal{P}\!\mathit{ic}}_{X/Y})_P, \end{equation} from the global sections of ${\mathcal{P}\!\mathit{ic}}_{X/Y}$ to the direct product of its stalks at the singular points, is an isomorphism.
At each singular point $P$, let $\tilde{Y}_P$ denote $\Spec \mathcal{O}_{Y,P}^{\textrm{sh}}$, the spectrum of the Henselisation of the local ring at $P$, and set $\tilde{X}_P = X \times_Y \tilde{Y}_P$. The stalk of ${\mathcal{P}\!\mathit{ic}}_{X/Y}$ at the geometric point $P$ is naturally isomorphic to ${\mathcal{P}\!\mathit{ic}}_{X/Y}(\tilde{Y}_P)$ which is simply $\Pic \tilde{X}_P$, as is seen by taking $Y' = \tilde{Y}_P$ in~\eqref{eq:leray} and using the facts that $\Pic \tilde{Y}_P$ and $\Br \tilde{Y}_P$ are trivial (for the latter, see~\cite[IV,
Corollary~1.7]{Milne:EC}). Combining this with the isomorphism~\eqref{eq:stalks}, we see that the natural map \begin{equation*}\label{eq:step1} {\mathcal{P}\!\mathit{ic}}_{X/Y}(Y) \to \prod_P \Pic \tilde{X}_P \end{equation*} is an isomorphism.
\paragraph{Step 2: Lipman's description of $\Pic\tilde{X}_P$} For each singular point $P$ of $Y$, denote by $\mathbf{E}_P$ the subgroup of $\Pic X$ generated by the exceptional curves lying over $P$. Let $Y_P$ denote the spectrum of the Zariski local ring of $Y$ at $P$, and $X_P = X \times_Y Y_P$. We will use $\theta_P$ to denote the homomorphism $\Pic X_P \to \mathbf{E}_P^*$ induced by the intersection pairing on $X$.\footnote{Lipman's definition of the map $\theta_P$ is slightly more general, involving dividing by the least degree of an invertible sheaf on each exceptional curve. Since we are working over an algebraically closed field, all of our exceptional curves have a $k$-point, hence an invertible sheaf of degree $1$.} Lipman~\cite[Part~IV]{Lipman:IHES-1969} studied the kernel and cokernel of $\theta_P$ in detail, defining an exact sequence \begin{equation*}\label{eq:lipman} 0 \to \Pic^0 X_P \to \Pic X_P \xrightarrow{\theta_P} \mathbf{E}_P^* \to G(Y_P) \to 0 \end{equation*} attached to the resolution $X_P \to Y_P$, and showed that $\Pic^0 X_P=0$ when $Y_P$ has a rational singularity, and that $G(Y_P)=0$ when $Y_P$ is Henselian. We thus obtain isomorphisms $\Pic\tilde{X}_P \cong \mathbf{E}_P^*$, such that the composite homomorphism $\Pic X \to \prod_P \Pic\tilde{X}_P \to \prod_P \mathbf{E}_P^*$ is $\theta_P$.
\paragraph{Step 3: Globalisation} Finally, note that two exceptional curves lying above distinct singularities of $Y$ are disjoint, so in particular have intersection number zero. Therefore the subgroups $\mathbf{E}_P \subseteq \Pic X$ are mutually orthogonal, and so $\mathbf{E} \cong \bigoplus_P \mathbf{E}_P$ and $\mathbf{E}^* \cong \prod_P \mathbf{E}_P^*$.
It is now easily verified that replacing ${\mathcal{P}\!\mathit{ic}}_{X/Y}(Y)$ in~\eqref{eq:step0} with $\mathbf{E}^*$ does indeed lead to the desired exact sequence. \end{proof}
\begin{corollary}\label{cor:local} If $P$ is a singular point of $Y$, then $\Br(X_P/Y_P)$ is isomorphic to the cokernel of $\theta_P \colon \Pic X \to \mathbf{E}_P^*$, which is equal to Lipman's group $G(Y_P)$. \end{corollary} \begin{proof} Applying the proposition to $Y_P$ shows that $\Br(X_P/Y_P)$ is isomorphic to $\coker(\Pic X_P \to \mathbf{E}_P^*)$, which by definition is equal to $G(Y_P)$. Since $X$ is smooth, the restriction map $\Pic X \to \Pic X_P$ is surjective, and the statement follows. \end{proof}
\begin{corollary}\label{cor:one} If $Y$ has only one singularity $P$, then $\Br(X/Y) \cong \Br(X_P/Y_P)$. \end{corollary} \begin{proof} In this case $\mathbf{E} = \mathbf{E}_P$, so the statement follows immediately from Corollary~\ref{cor:local}. \end{proof}
\section{Singular del Pezzo surfaces} \label{sec:dp}
In this section we apply Proposition~\ref{prop:br} to compute the Brauer groups of singular del Pezzo surfaces. We refer to~\cite{CT:PLMS-1988} and~\cite{Demazure:SDP} for background details on singular del Pezzo surfaces.
Let $X$ be a generalised del Pezzo surface over an algebraically closed field $k$, and $f\colon X \to Y$ the morphism contracting the $(-2)$-curves (and nothing else), so that $Y$ is the corresponding singular del Pezzo surface. The Picard group of $X$ fits into a short exact sequence \[ 0 \to Q \to \Pic X \xrightarrow{(\cdot,K_X)} \mathbb{Z} \to 0 \] where $Q$ is the subgroup orthogonal to the canonical class $K_X$ under the intersection pairing. The exceptional curves of $f$ are all contained in $Q$. Let $\mathbf{E}$ denote the subgroup of $Q$ generated by all the exceptional curves of $X \to Y$ (equivalently, all the $(-2)$-curves on $X$).
\begin{proposition}\label{prop:brdp} $\Br Y$ is isomorphic to $(Q/\mathbf{E})_{\mathrm{tors}}$. \end{proposition} \begin{proof} Firstly, $\Br X$ is trivial, for $X$ is a rational surface. By Proposition~\ref{prop:br}, $\Br Y$ is therefore isomorphic to the cokernel of the map $\theta \colon \Pic X \to \mathbf{E}^*$. Now $\theta$ factors as $\Pic X \to Q^* \to \mathbf{E}^*$, giving an exact sequence \[ \coker(\Pic X \to Q^*) \to \Br Y \to \coker(Q^* \to \mathbf{E}^*) \to 0. \] It follows from the description of $Q$ in~\cite[II.4]{Demazure:SDP} that $\Pic X \xrightarrow{\theta} Q^*$ is surjective. Indeed, one easily checks that the basis of $Q$ given by the simple roots $\alpha_i$ described there can be extended (for example, by adjoining one exceptional class $E_1$) to a basis of $\Pic X$. So we are left with an isomorphism between $\Br Y$ and $\coker(Q^* \to \mathbf{E}^*)$. To compute the latter group, we take the short exact sequence \[ 0 \to \mathbf{E} \to Q \to (Q/\mathbf{E}) \to 0 \] and apply $\Hom(\cdot, \mathbb{Z})$ to obtain the longer exact sequence \[ 0 \to (Q/\mathbf{E})^* \to Q^* \to \mathbf{E}^* \to \Ext^1(Q/\mathbf{E},\mathbb{Z}) \to \Ext^1(Q,\mathbb{Z}). \] As $Q$ is a free Abelian group, we have $\Ext^1(Q,\mathbb{Z})=0$, and therefore $\Br Y$ is isomorphic to $\Ext^1(Q/\mathbf{E},\mathbb{Z})$, which by a standard calculation is isomorphic to $(Q/\mathbf{E})_{\mathrm{tors}}$. \end{proof}
We note the following interesting corollary. \begin{corollary} Let $Y$ be a singular del Pezzo surface over an algebraically closed field, and denote by $Y^\mathrm{ns}$ the non-singular locus of $Y$. Then there is an isomorphism of abstract groups $\Br Y \cong \Pic(Y^\mathrm{ns})_{\mathrm{tors}}$. \end{corollary} \begin{proof} Since $Y^\mathrm{ns}$ is isomorphic to the complement of the exceptional curves in $X$, we have $\Pic Y^\mathrm{ns} \cong (\Pic X)/\mathbf{E}$ and so $\Pic(Y^\mathrm{ns})_{\mathrm{tors}} \cong (Q/\mathbf{E})_{\mathrm{tors}}$. \end{proof}
It remains to enumerate the possible singularity types of del Pezzo surfaces and to compute $Q/\mathbf{E}$ in each case. The algorithm for listing the possible configurations of $(-2)$-curves is well known, as is the list of possible configurations, so we only summarise the algorithm very briefly. The free Abelian group $Q$, together with the negative definite intersection pairing, is isomorphic to the root lattice of a particular root system depending only on the degree of the surface. Within this root lattice, the exceptional divisors of the desingularisation $X \to Y$ form a set of simple roots in some sub-root system, and indeed form a $\Pi$-system in the sense of Dynkin~\cite[\S 5]{Dynkin:AMST-1957}. To list the $\Pi$-systems contained in $Q$, we use the following two theorems from~\cite{Dynkin:AMST-1957}: \begin{itemize} \item Theorem~5.2: every $\Pi$-system is contained in a $\Pi$-system which is of
maximal rank, that is, which spans $Q$ as a vector space; \item Theorem~5.3: the $\Pi$-systems of maximal rank may all be obtained from
some set of simple roots in $Q$ by iterating the following
procedure, called an \emph{elementary transformation}: starting with a set of simple roots, choose one connected
component of the associated Dynkin diagram; adjoin the most
negative root of that component and discard one of the original
simple roots of that component. \end{itemize} So, starting from any choice of simple roots in $Q$, we can obtain all $\Pi$-systems up to the action of the Weyl group. Not quite all of these can actually be achieved as configurations of $(-2)$-curves: see~\cite{Urabe:S1981}, though it is not immediately clear that the methods there also apply in positive characteristic.
Let us remark that, given a root system $R$, the primes dividing $\#(\mathbb{Z} R/\mathbb{Z} R')_{\mathrm{tors}}$ for $R'$ a closed subsystem of $R$ are called \emph{bad primes}: see, for example, \cite[Appendix~B]{MT:LAG}. A corollary of Proposition~\ref{prop:brdp} is that the primes which can divide the order of the Brauer group of a singular del Pezzo surface of degree $d$ are the bad primes of the associated root system. It turns out that the bad primes are simply those occurring as coefficients when a maximal root is expressed in terms of simple roots, and so they are easily listed. There are no bad primes for $A_n$; $2$ is the only bad prime for $D_n$ ($n \ge 4$); $2$ and $3$ are the bad primes for $E_6$ and $E_7$; and $E_8$ has bad primes $2$, $3$ and $5$.
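As a concrete check of Proposition~\ref{prop:brdp}, take the cubic surface ($d=3$) with singularity type $3A_2$. The corresponding $\Pi$-system of maximal rank in $Q \cong E_6$ is obtained from the extended Dynkin diagram of $E_6$ by deleting the branch node $\alpha_4$. Since $\mathbf{E}$ then has full rank in $Q$, we have $(Q/\mathbf{E})_{\mathrm{tors}} = Q/\mathbf{E}$, whose order is the absolute value of the determinant of the matrix expressing the six exceptional classes in the basis $\alpha_1,\dots,\alpha_6$ of simple roots. The sketch below (which assumes the Bourbaki labelling and the highest-root coefficients $(1,2,2,3,2,1)$ of $E_6$) recovers $\Br Y \cong \mathbb{Z}/3\mathbb{Z}$, in line with Ojanguren's example:

```python
from fractions import Fraction

def det(m):
    """Determinant by exact Gaussian elimination over the rationals."""
    a = [[Fraction(x) for x in row] for row in m]
    n, d = len(a), Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if a[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

# Simple roots a1..a6 of E6 as standard basis vectors of Q = ZR(E6);
# highest root (Bourbaki): theta = a1 + 2a2 + 2a3 + 3a4 + 2a5 + a6.
theta = [1, 2, 2, 3, 2, 1]
# 3A2 Pi-system: delete a4 from the extended diagram {a1,...,a6, -theta}.
E = [
    [1, 0, 0, 0, 0, 0],   # a1
    [0, 0, 1, 0, 0, 0],   # a3
    [0, 0, 0, 0, 1, 0],   # a5
    [0, 0, 0, 0, 0, 1],   # a6
    [0, 1, 0, 0, 0, 0],   # a2
    [-c for c in theta],  # -theta, the affine node
]
order = abs(det(E))       # |Q / E| = 3, so Br Y is cyclic of order 3
print(order)
```

A group of order $3$ is necessarily $\mathbb{Z}/3\mathbb{Z}$; for configurations of non-maximal rank one would instead read the torsion off the Smith normal form of the relation matrix.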
\begin{theorem} Let $Y$ be a singular del Pezzo surface of degree $d$ over an algebraically closed field. If $d \ge 5$, then $\Br Y = 0$. If $1 \le d \le 4$, then the Brauer group of $Y$ is determined by its singularity type; the singularity types giving rise to non-trivial Brauer groups are listed in Tables~\ref{table:deg4}--\ref{table:deg1}. Each class in $\Br Y$ is represented by an Azumaya algebra. Except for the singularity types $A_7$ in degree 2, and $A_7$, $A_8$ and $D_8$ in degree 1, the corresponding Azumaya algebras are locally trivial in the Zariski topology. \end{theorem} \begin{proof} For $d \ge 5$, the relevant root system is of type $A_n$, so there are no bad primes and the Brauer group is trivial. For $1 \le d \le 4$, the results of applying the algorithm described above are listed in the tables. Since $\Br Y$ is torsion, it follows from a result proved by Gabber and, independently, by de~Jong~\cite{DeJong:Gabber} that every class is represented by an Azumaya algebra. It remains to prove the statement about Zariski-local triviality. If $P$ is a singular point of a singular del Pezzo surface $Y$ then Corollary~\ref{cor:local} shows that, in the notation used there, $\Br(X_P/Y_P) \cong \coker(\Pic X \to \mathbf{E}_P^*)$; it is enough to show that $\Br(X_P/Y_P) = 0$. Replacing $Y$ by a del Pezzo surface of the same degree, but with only one singularity of the same type as $P$, changes neither $\Pic X$, $\mathbf{E}_P^*$ nor the map between them, so we may assume that $P$ is the only singularity of $Y$. Then $\Br(X_P/Y_P) = \Br(X/Y) = \Br Y$ by Corollary~\ref{cor:one}. But the tables show that $\Br Y=0$, except in the cases listed above. \end{proof}
\begin{table}[p] \caption{Brauer groups of singular del Pezzo surfaces of degree $4$}
\begin{tabular}{ll|ll} Singularity type & Brauer group & Singularity type & Brauer group \\ \hline $2A_1 + A_3$ & $\mathbb{Z}/2\mathbb{Z}$ & $4A_1$ & $\mathbb{Z}/2\mathbb{Z}$ \end{tabular} \label{table:deg4} \end{table}
\begin{table}[p] \caption{Brauer groups of singular del Pezzo surfaces of degree $3$}
\begin{tabular}{ll|ll} Singularity type & Brauer group & Singularity type & Brauer group \\ \hline $A_1 + A_5$ & $\mathbb{Z}/2\mathbb{Z}$ & $2A_1 + A_3$ & $\mathbb{Z}/2\mathbb{Z}$ \\ $4A_1$ & $\mathbb{Z}/2\mathbb{Z}$ & $3A_2$ & $\mathbb{Z}/3\mathbb{Z}$ \end{tabular} \end{table}
\begin{table}[p] \caption{Brauer groups of singular del Pezzo surfaces of degree $2$} \begin{threeparttable}
\begin{tabular}{ll|ll} Singularity type & Brauer group & Singularity type & Brauer group \\ \hline $A_1 + 2A_3$ & $\mathbb{Z}/4\mathbb{Z}$ & $5A_1$ & $\mathbb{Z}/2\mathbb{Z}$ \\ $A_1 + A_5$ & $\mathbb{Z}/2\mathbb{Z}$ & $6A_1$ & $(\mathbb{Z}/2\mathbb{Z})^2$ \\ $A_1 + D_6$ & $\mathbb{Z}/2\mathbb{Z}$ & $7A_1$ $\dagger$ & $(\mathbb{Z}/2\mathbb{Z})^3$ \\ $2A_1 + A_3$ & $\mathbb{Z}/2\mathbb{Z}$ & $A_2 + A_5$ & $\mathbb{Z}/3\mathbb{Z}$ \\ $2A_1 + D_4$ & $\mathbb{Z}/2\mathbb{Z}$ & $3A_2$ & $\mathbb{Z}/3\mathbb{Z}$ \\ $3A_1 + A_3$ & $\mathbb{Z}/2\mathbb{Z}$ & $2A_3$ & $\mathbb{Z}/2\mathbb{Z}$ \\ $3A_1 + D_4$ & $(\mathbb{Z}/2\mathbb{Z})^2$ & $A_7$ & $\mathbb{Z}/2\mathbb{Z}$ \\ $4A_1$ * & $\mathbb{Z}/2\mathbb{Z}$ \end{tabular} \begin{tablenotes} \item [*] There are (up to the action of the Weyl group) two different
ways of embedding $4A_1$ into $E_7$, and so two different
singularity types of degree $2$ del Pezzo surface with root system
$4A_1$. One of these has Brauer group $\mathbb{Z}/2\mathbb{Z}$; the other has
trivial Brauer group. \item [$\dagger$] This sub-root system does not arise from a del Pezzo surface~\cite{Urabe:S1981}. \end{tablenotes} \end{threeparttable} \end{table}
\begin{table}[p] \caption{Brauer groups of singular del Pezzo surfaces of degree $1$} \begin{threeparttable}
\begin{tabular}{ll|ll} Singularity type & Brauer group & Singularity type & Brauer group \\ \hline $A_1+A_2+A_5$ & $\mathbb{Z}/6\mathbb{Z}$ & $4A_1+A_3$ & $(\mathbb{Z}/2\mathbb{Z})^2$ \\ $A_1+3A_2$ & $\mathbb{Z}/3\mathbb{Z}$ & $4A_1+D_4$ $\dagger$ & $(\mathbb{Z}/2\mathbb{Z})^3$ \\ $A_1+2A_3$ & $\mathbb{Z}/4\mathbb{Z}$ & $5A_1$ & $\mathbb{Z}/2\mathbb{Z}$ \\ $A_1+A_5$ * & $\mathbb{Z}/2\mathbb{Z}$ & $6A_1$ & $(\mathbb{Z}/2\mathbb{Z})^2$ \\ $A_1+A_7$ & $\mathbb{Z}/4\mathbb{Z}$ & $7A_1$ $\dagger$ & $(\mathbb{Z}/2\mathbb{Z})^3$ \\ $A_1+D_6$ & $\mathbb{Z}/2\mathbb{Z}$ & $8A_1$ $\dagger$ & $(\mathbb{Z}/2\mathbb{Z})^4$ \\ $A_1+E_7$ & $\mathbb{Z}/2\mathbb{Z}$ & $A_2+A_5$ & $\mathbb{Z}/3\mathbb{Z}$ \\ $2A_1+A_2+A_3$ & $\mathbb{Z}/2\mathbb{Z}$ & $A_2+E_6$ & $\mathbb{Z}/3\mathbb{Z}$ \\ $2A_1+A_3$ * & $\mathbb{Z}/2\mathbb{Z}$ & $3A_2$ & $\mathbb{Z}/3\mathbb{Z}$ \\ $2A_1+2A_3$ & $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$ & $4A_2$ & $(\mathbb{Z}/3\mathbb{Z})^2$ \\ $2A_1+A_5$ & $\mathbb{Z}/2\mathbb{Z}$ & $A_3+D_4$ & $\mathbb{Z}/2\mathbb{Z}$ \\ $2A_1+D_4$ & $\mathbb{Z}/2\mathbb{Z}$ & $A_3+D_5$ & $\mathbb{Z}/4\mathbb{Z}$ \\ $2A_1+D_5$ & $\mathbb{Z}/2\mathbb{Z}$ & $2A_3$ * & $\mathbb{Z}/2\mathbb{Z}$ \\ $2A_1+D_6$ & $(\mathbb{Z}/2\mathbb{Z})^2$ & $2A_4$ & $\mathbb{Z}/5\mathbb{Z}$ \\ $3A_1+A_3$ & $\mathbb{Z}/2\mathbb{Z}$ & $A_7$ * & $\mathbb{Z}/2\mathbb{Z}$ \\ $3A_1+D_4$ & $(\mathbb{Z}/2\mathbb{Z})^2$ & $A_8$ & $\mathbb{Z}/3\mathbb{Z}$ \\ $4A_1$ * & $\mathbb{Z}/2\mathbb{Z}$ & $2D_4$ & $(\mathbb{Z}/2\mathbb{Z})^2$ \\ $4A_1+A_2$ & $\mathbb{Z}/2\mathbb{Z}$ & $D_8$ & $\mathbb{Z}/2\mathbb{Z}$ \end{tabular} \label{table:deg1} \begin{tablenotes} \item [*] Each of these five root systems may be embedded into $E_8$ in two
distinct ways. In all five cases, one way results in trivial Brauer
group; the other results in the Brauer group shown in the table. \item [$\dagger$] These sub-root systems do not arise from del Pezzo surfaces~\cite{Urabe:S1981}. \end{tablenotes} \end{threeparttable} \end{table}
\end{document}
\begin{document}
\twocolumn[ \icmltitle{Adaptive Second Order Coresets for Data-efficient Machine Learning}
\begin{icmlauthorlist} \icmlauthor{Omead Pooladzandi}{yyy}
\icmlauthor{David Davini}{sch} \icmlauthor{Baharan Mirzasoleiman}{sch}
\end{icmlauthorlist}
\icmlaffiliation{yyy}{Department of Electrical \& Computer Engineering, University of California, Los Angeles, USA}
\icmlaffiliation{sch}{Department of Computer Science, University of California, Los Angeles, USA}
\icmlcorrespondingauthor{Omead Pooladzandi}{opooladz@ucla.edu}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in ]
\begin{abstract} Training machine learning models on massive datasets incurs substantial computational costs. To alleviate such costs, there has been a sustained effort to develop data-efficient training methods that can carefully select subsets of the training examples that generalize on par with the full training data. However, existing methods are limited in providing theoretical guarantees for the quality of the models trained on the extracted subsets, and may perform poorly in practice. We propose {\textsc{AdaCore}}\xspace, a method that leverages the geometry of the data to extract subsets of the training examples for efficient machine learning. The key idea behind our method is to dynamically approximate the curvature of the loss function via an exponentially-averaged estimate of the Hessian to select weighted subsets (coresets) that provide a close approximation of the full gradient preconditioned with the Hessian. We prove rigorous guarantees for the convergence of various first and second-order methods applied to the subsets chosen by {\textsc{AdaCore}}\xspace. Our extensive experiments show that {\textsc{AdaCore}}\xspace extracts coresets with higher quality compared to baselines and speeds up training of convex and non-convex machine learning models, such as logistic regression and neural networks, by over 2.9x over the full data and 4.5x over random subsets\footnote{ Code is available at \url{ https://github.com/opooladz/AdaCore}}. \end{abstract} \section{Introduction} Large datasets have been crucial for the success of modern machine learning models. Learning from massive datasets, however, incurs substantial computational costs and becomes very challenging \cite{asi2019importance,strubell2019energy,schwartz2019green}. Crucially, not all data points are equally important for learning \cite{birodkar2019semantic,katharopoulos2018not,toneva2018empirical}.
While several examples can be excluded from training without harming the accuracy of the final model \cite{birodkar2019semantic,toneva2018empirical}, other points need to be trained on many times to be learned \cite{birodkar2019semantic}. To improve scalability of machine learning, it is essential to theoretically understand and quantify the value of different data points on training and optimization. This allows identifying examples that contribute the most to learning and safely excluding those that are redundant or non-informative.
To find essential data points, recent empirical studies used heuristics such as the fully trained or a smaller proxy model’s uncertainty (entropy of predicted class probabilities) \cite{coleman2020selection}, or forgetting events \cite{toneva2018empirical} to identify examples that frequently transition from being classified correctly to incorrectly. Others employ either the gradient norm \cite{alain2015variance,katharopoulos2018not} or the loss \cite{ loshchilov2015online,schaul2015prioritized} to sample important points that reduce variance of stochastic optimization methods. Such methods, however, do not provide any theoretical guarantee for the quality of the trained model on the extracted examples.
Quantifying the importance of different data points without training a model to convergence is very challenging. First, the value of each example cannot be measured without updating the model parameters and measuring the loss or accuracy. Second, as the effect of different data points changes throughout training, their value cannot be precisely measured before training converges. Third, to eliminate redundancies, one needs to look at the importance of individual data points as well as the higher-order interactions between data points. Finally, one needs to provide theoretical guarantees for the performance and convergence of the model trained on the extracted data points.
Here, we focus on finding data points that contribute the most to learning and automatically excluding redundancies while training a model. A practical and effective approach is to carefully select a small subset of training examples that closely approximates the full gradient, i.e., the sum of the gradients over all the training data points. This idea has been recently employed to find a subset of data points that guarantees convergence of first-order methods to a near-optimal solution for training convex models \cite{mirzasoleiman2020coresets}. However, modern machine learning models are high-dimensional and non-convex in nature. In such scenarios, subsets selected based on gradient information only capture the gradient along the sharp dimensions, and lack diversity within groups of examples with similar training dynamics. Hence, they represent large groups of examples with a few data points carrying substantial weights. This introduces a large error in the gradient estimation and causes first-order coresets to perform poorly.
We propose \textit{ADAptive second-order} \textit{COREsets} ({\textsc{AdaCore}}\xspace), which incorporates the geometry of the data to iteratively select weighted subsets (coresets) of training examples that capture the gradient of the loss preconditioned with the Hessian, by maximizing a submodular function. Such subsets capture the curvature of the loss landscape along different dimensions, and provide convergence guarantees for first and second-order methods. As a naive use of the Hessian at every iteration is prohibitively expensive for overparameterized models, {\textsc{AdaCore}}\xspace relies on Hessian-free methods to extract coresets that capture the full gradient preconditioned by the Hessian diagonal. Furthermore, {\textsc{AdaCore}}\xspace exponentially averages first and second-order information in order to smooth the noise in the local gradient and curvature information.
We first provide a theoretical analysis of our method and prove its convergence for convex and non-convex functions. For a $\beta$-smooth and $\alpha$-strongly convex loss function and a subset $S$ selected by {\textsc{AdaCore}}\xspace that estimates the full preconditioned gradient with an error of at most $\epsilon$, we prove that Newton's method and AdaHessian applied to $S$ with constant stepsize $\eta=\alpha/\beta$ converge to a $\beta\epsilon/\alpha$ neighborhood of the optimal solution at an exponential rate.
For non-convex overparameterized functions such as deep networks, we prove that for a $\beta$-smooth and $\mu$-PL$^*$ loss function satisfying $\|\nabla\mathcal{L}(w)\|^2/2\geq\mu\mathcal{L}(w)$, (stochastic) gradient descent applied to subsets found by {\textsc{AdaCore}}\xspace has similar training dynamics to training on the full data, and converges at an exponential rate. In both cases, {\textsc{AdaCore}}\xspace leads to a speedup by training on smaller subsets.
Next, we empirically study the examples selected by {\textsc{AdaCore}}\xspace during training. We show that as training continues, {\textsc{AdaCore}}\xspace selects more uncertain or forgettable samples. Hence, {\textsc{AdaCore}}\xspace effectively determines the value of every learning example, i.e., when and how many times a sample needs to be trained on, and automatically excludes redundant and non-informative instances. Importantly, incorporating curvature in selecting coresets allows {\textsc{AdaCore}}\xspace to quantify the value of training examples more accurately, and find fewer but more diverse samples than existing methods.
We demonstrate the effectiveness of various first and second-order methods, namely SGD with momentum, Newton's method, and AdaHessian, applied to subsets selected by {\textsc{AdaCore}}\xspace for training models with a convex loss function (logistic regression) as well as models with non-convex loss functions, namely ResNet-20, ResNet-18, and ResNet-50, on MNIST, CIFAR10, (Imbalanced) CIFAR100, and BDD100k \cite{deng2012mnist,cifar10,bdd100k}. Our experiments show that {\textsc{AdaCore}}\xspace can effectively extract crucial samples for machine learning, resulting in higher accuracy while achieving over 2.9x speedup over the full data and 4.5x over random subsets, for training models with convex and non-convex loss functions.
\section{Related Work} Data-efficient methods have recently gained a lot of interest. However, existing methods often require training the original \cite{ birodkar2019semantic, ghorbani2019data,toneva2018empirical} or a proxy model \cite{coleman2020selection} to convergence, and use features or predictions of the trained model to find subsets of examples that contribute the most to learning. While these results empirically confirm the existence of notable semantic redundancies in large datasets \cite{birodkar2019semantic}, such methods cannot identify the crucial subsets before fully training the original or the proxy model on the entire dataset. Most importantly, such methods do not provide any theoretical guarantees for the performance of the model trained on the extracted subsets.
There have been recent efforts to take advantage of the difference in importance among various samples to reduce the variance and improve the convergence rate of stochastic optimization methods. Those that are applicable to overparameterized models employ either the gradient norm \cite{alain2015variance,katharopoulos2018not} or the loss \cite{ loshchilov2015online,schaul2015prioritized} to compute each sample’s importance. However, these methods do not provide rigorous convergence guarantees and cannot provide a notable speedup.
A recent study proposed a method, {\textsc{Craig}}\xspace, to find subsets of samples that closely approximate the full gradient, i.e., the sum of the gradients over all the training samples \cite{mirzasoleiman2020coresets}. {\textsc{Craig}}\xspace finds the subsets by maximizing a submodular function, and provides convergence guarantees to a neighborhood of the optimal solution for strongly-convex models. {\textsc{GradMatch}}\xspace \cite{killamsetty2021grad} proposes a variation that addresses the same objective using orthogonal matching pursuit (OMP), and {\textsc{Glister}}\xspace \cite{killamsetty2020glister} aims at finding subsets that closely approximate the gradient of a held-out validation set. However, \textsc{Glister} requires a validation set, and \textsc{GradMatch} uses OMP, which may return subsets as small as 0.1\% of the intended size. Such subsets are then augmented with random samples. In contrast, our method successfully finds subsets of higher quality by preconditioning the gradient with the Hessian information.\looseness=-1
\section{Background and Problem Setting} Training machine learning models often reduces to minimizing an empirical risk function. Given a not-necessarily convex loss ${\cal{L}}$, one aims to find model parameter vector $w_*$ in the parameter space $\mathcal{W}$ that minimizes the loss ${\cal{L}}$ over the training data:
\begin{align}\label{eq:problem} w_* \in {\arg\min}_{w \in \mathcal{W}} {\cal{L}}(w), \quad\quad\quad\\ {\cal{L}}(w) := \sum_{i\in V} l_i(w), \quad l_i(w)=l(f(x_i,w),y_i). \nonumber
\end{align} Here, $V = \{1,\dots,n\}$ is an index set of the training data, $w\in\mathbb{R}^d$ is the parameters of the model $f$ being trained, and $l_i$ is the loss function associated with training example $i\in V$ with feature vector $x_i\in \mathbb{R}^d$ and label $y_i$.
We denote the gradient of the loss w.r.t. the model parameters by $\mathbf{g} = \nabla {\cal{L}}(w)=\frac{1}{|V|}\sum_{i\in V}\frac{\partial l_i}{\partial w}$,
and the corresponding second derivative (i.e., the Hessian) by $\mathbf{H} = \nabla^2{\cal{L}}(w)$, with entries $[\mathbf{H}]_{jk} = \frac{1}{|V|}\sum_{i\in V}\frac{\partial^2 l_i}{\partial w_j \partial w_k}$.
First order gradient methods are popular for solving Problem \eqref{eq:problem}. They start from an initial point $w_0$ and at every iteration $t$, step in the negative direction of the gradient $\mathbf{g}_t$ multiplied by learning rate $\eta_t$. The most popular first-order method is Stochastic Gradient Descent (SGD) \cite{robbins1951stochastic}: \begin{align}\label{eq:gd_update}
w_{t+1} = w_{t} - \eta_t \mathbf{v}_t, \quad\quad \mathbf{v}_t=\mathbf{g}_t, \end{align}
SGD is often used with momentum, i.e., $\mathbf{v}_t\!=\!\beta\mathbf{v}_{t-1}\!+\!(1\!-\!\beta)\mathbf{g}_t$ where $\beta\!\in\![0,1]$, accelerating it in dimensions whose gradients point in the same directions and dampening oscillations in dimensions whose gradients change directions \cite{qian1999momentum}.
For larger datasets, mini-batch SGD is used, where $\mathbf{v}_t\!=\!\frac{1}{m}\sum_{j=1}^m \nabla l_{i_t^{(j)}}(w_t)$ and $m$ is the size of the mini-batch of data points whose indices $\{i_t^{(1)}, \ldots, i_t^{(m)}\}$ are uniformly drawn with replacement from $V$ at each iteration $t$.\looseness=-1
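The mini-batch SGD update with momentum described above can be sketched in a few lines of NumPy (a toy illustration; the function name \texttt{sgd\_momentum\_step} and the quadratic toy objective are ours, not part of the paper):

```python
import numpy as np

def sgd_momentum_step(w, v, grads, lr=0.1, beta=0.9):
    """One mini-batch SGD step with momentum:
    v_t = beta * v_{t-1} + (1 - beta) * g_t,  w_{t+1} = w_t - lr * v_t.
    `grads` holds the per-example gradients of the sampled mini-batch."""
    g = grads.mean(axis=0)            # average mini-batch gradient
    v = beta * v + (1.0 - beta) * g   # exponentially averaged velocity
    w = w - lr * v
    return w, v

# Toy usage: minimize L(w) = 0.5 * ||w||^2, whose per-example gradient is w.
w, v = np.array([2.0, -3.0]), np.zeros(2)
for _ in range(200):
    per_example = np.tile(w, (8, 1))  # "mini-batch" of 8 identical gradients
    w, v = sgd_momentum_step(w, v, per_example)
```

The velocity term accelerates progress along dimensions whose gradients agree across steps and damps oscillations where they alternate, as noted in the text.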
Second-order gradient methods rely on the geometry of the problem to automatically rotate and scale the gradient vectors, using the curvature of the loss landscape. In doing so, second-order methods can choose a better descent direction and automatically adjust the learning rate for each parameter. Hence, second-order methods have superior convergence properties compared to first-order methods. Newton’s method \cite{bertsekas1982projected} is a classical second order method that preconditions the gradient vector with inverse of the local Hessian at every iteration, $\mathbf{H}_t^{-1}$: \begin{equation}\label{eq:newton} w_{t+1}=w_t-\eta_t \mathbf{H}_t^{-1}\mathbf{g}_t. \end{equation} As inverting the Hessian matrix requires quadratic memory and cubic computational complexity, several methods approximate Hessian information to significantly reduce time and memory complexity \cite{Nocedal,schaul2013no,martens2015optimizing,xu2020second}. In particular, AdaHessian \cite{yao2020adahessian} directly approximates the diagonal of the Hessian and relies on exponential moving averaging and block diagonal averaging to smooth out and reduce the variation of the Hessian diagonal. \section{{\textsc{AdaCore}}\xspace: Adaptive Second order Coresets}
The key idea behind our proposed method is to leverage the geometry of the data, specifically the curvature of the loss landscape, to select subsets of the training examples that enable fast convergence. Here, we first discuss why coresets that only capture the full gradient perform poorly in various scenarios. Then, we show how to incorporate curvature information in subset selection for training convex and non-convex models with provable convergence guarantees, ameliorating the problems of first-order coresets.
\subsection{When First-order Coresets Fail}
First-order coreset methods iteratively select weighted subsets of training data that closely approximate the full gradient at particular values of $w_t$, e.g., at the beginning of every epoch \!\cite{killamsetty2021grad,killamsetty2020glister,mirzasoleiman2020coresets}: \begin{align}
\hspace{-2mm}
S^{*}_t= \!\!\underset{S \subseteq V, \gamma_{t,j} \geq 0 ~\forall j}{\arg\min}|S| \quad \textrm{s.t.} \quad \|\textbf{g}_t-\sum_{j\in S}\gamma_{t,j} \textbf{g}_{t,j}\|\leq \epsilon,
\end{align} where $\textbf{g}_{t,j}$ and $\gamma_{t,j}>0$ are the gradient and the weight of element $j$ in the coreset $S$. Such subsets often perform poorly for high-dimensional and non-convex functions, due to the following reasons: (1) the scale of gradient $\mathbf{g}\in\mathbb{R}^d$ is often different along different dimensions. Hence, the selected subsets estimate the full gradient closely only along dimensions with a larger gradient scale. This can introduce a significant error in the optimization trajectory for both convex and non-convex loss functions; (2) the loss functions associated with different data points $l_i$ may have similar gradients but very different curvature properties at a particular $w_t$. Thus, for a small $\delta>0$, the gradients $\nabla l_i(w_t+\delta)$ at $w_t+\delta$ may be totally different than the gradients $\nabla l_i(w_t)$ at $w_t$.
Consequently, subsets that capture the gradient well at a particular point during training may not provide a close approximation of the full gradient after a few gradient updates, e.g., mini-batches. This often results in inferior performance, particularly when selecting larger subsets for non-convex loss functions; (3) subsets that only capture the gradient select one representative example with a large weight from data points with similar gradients at $w_t$.
Such subsets lack diversity and cannot distinguish different subgroups of the data. Importantly, the large weights introduce a substantial error in estimating the full gradient and result in a poor performance, as we show in {Fig. \ref{fig:when_craig_fails} in the Appendix.}
\subsection{Adaptive Second-order Coresets} To address the above issues, our main idea is to select subsets of training examples that capture the full gradient preconditioned with the curvature of the loss landscape. In doing so, we normalize the gradient by multiplying it by the Hessian inverse, $\mathbf{H}^{-1}\mathbf{g}$, before selecting the subsets. This allows selecting subsets that (1) can capture the full gradient in all dimensions equally well; (2) contain a more diverse set of data points with similar gradients, but different curvature properties; and (3) allow adaptive first and second-order methods trained on the coresets to obtain similar training dynamics to that of training on the full data.
Formally, our goal in {\textsc{AdaCore}}\xspace is to adaptively find the smallest subset $S \subseteq V$ and corresponding per-element weights $\gamma_j > 0$ that approximate the full gradient preconditioned with the Hessian matrix, with an error of at most $\epsilon > 0$ at every iteration $t$, i.e.:
\begin{align}\label{eq:main}
S^{*}_t= \!\!\underset{S \subseteq V, \gamma_{t,j} \geq 0 ~\forall j}{\arg\min}&|S|, \quad \textrm{s.t.}\quad\\
&\|\mathbf{H}^{-1}_t \textbf{g}_t-\sum_{j\in S}\gamma_{t,j}\mathbf{H}_{t,j}^{-1} \textbf{g}_{t,j}\|\leq\epsilon,\nonumber \end{align} where $\mathbf{H}^{-1}_t \textbf{g}_t$ and $\sum_{j\in S}\gamma_{t,j}\mathbf{H}_{t,j}^{-1} \textbf{g}_{t,j}$ are preconditioned gradients of the full data and the subset $S$.
\subsection{Scaling up to Over-parameterized Models}\label{sec:diag}
Directly solving the optimization problem \eqref{eq:main} requires explicit calculation and storage of the Hessian matrix and its inverse. This is infeasible for large models such as neural networks. In the following, we first address the issue of calculating the inverse Hessian at every iteration. Then, we discuss how to efficiently find a near-optimal subset that estimates the full preconditioned gradient by solving Eq. \eqref{eq:main}. \looseness=-1
\paragraph{Approximating the Gradients}\label{sec:approxgrad} For neural networks, the derivative of the loss ${\cal{L}}$ w.r.t. the input to the last layer \cite{katharopoulos2018not, mirzasoleiman2020coresets} or the penultimate layer \cite{killamsetty2021grad} can capture the variation of the gradient norm well. We extend these results (Appendix \ref{proof:boundnormederrornn}) to show that the normed difference between preconditioned gradients of data points can be efficiently upper-bounded by: \begin{align}
&\| \mathbf{H}_{i}^{-1} \textbf{g}_i - \mathbf{H}_{j}^{-1} \textbf{g}_j \| \leq \\
&c_1\|\Sigma'_L(z_i^{(L)}) (\mathbf{H}_{i}^{-1} \textbf{g}_{i})^{(L)}- \Sigma'_L(z_j^{(L)}) (\mathbf{H}_{j}^{-1} \textbf{g}_{j})^{(L)}\|+c_2, \nonumber \end{align} where $\Sigma'_L(z_i^{(L)})(\mathbf{H}_{i}^{-1} \textbf{g}_{i})^{(L)}$ is gradient preconditioned by the inverse of the Hessian of the loss w.r.t. the input to the last layer for data point $i$, and $c_1, c_2$ are constants. Since the upper bound depends on the weight parameters, we need to update our subset $S$ using {\textsc{AdaCore}}\xspace during the training.
Calculating the last-layer gradient often requires only a forward pass, which is as expensive as calculating the loss, and does not require any extra storage. For example, with a softmax as the last layer, the gradient of the loss w.r.t. the $i^{th}$ input to the softmax is $p_i-y_i$, where $p_i$ is the $i^{th}$ output of the softmax and $y$ is the one-hot encoded label with the same dimensionality as the number of classes. Using this low-dimensional approximation $\hat{\textbf{g}}_{i}$ for the gradient $\textbf{g}_{i}$, we can efficiently calculate the preconditioned gradient for every data point. For non-convex functions, the local gradient information can be very noisy. To smooth it out and get a better approximation of the global gradient, we apply an exponential moving average with parameter $0<\beta_1<1$ to the low-dimensional gradient approximations: \begin{equation}\label{eq:g_avg} \overline{\mathbf{g}}_{t}=\frac{(1-\beta_1)\sum_{i=1}^t \beta_1^{t-i}\hat{\mathbf{g}}_{i}}{1-\beta_1^t}. \end{equation} \paragraph{Approximating the Hessian Preconditioner} Since it is infeasible to calculate, store, and invert the full Hessian matrix at every iteration, we use an inexact Newton method, where an approximate Hessian operator is used instead of the full Hessian. To efficiently calculate the Hessian diagonal, we first use the Hessian-free method \cite{yao2018inexact} to compute the product of the Hessian $\mathbf{H}_t$ with a random vector $z$ drawn from the Rademacher distribution. To do so, we backpropagate on the low-dimensional gradient estimates multiplied by $z$ to get $\mathbf{H}_tz=\partial (\hat{\mathbf{g}}_t^T z) /\partial w_t$. We can then use Hutchinson's method to obtain a stochastic estimate of the diagonal of the Hessian matrix as follows: \begin{align}\label{eq:hf} \text{diag}(\mathbf{H}_t)=\mathbb{E}[z\odot (\mathbf{H}_tz)], \end{align} without having to form the Hessian matrix explicitly \cite{BEKAS20071214}. 
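Hutchinson's estimator in Eq. \eqref{eq:hf} can be sketched as follows (a minimal NumPy illustration with our own helper name \texttt{hutchinson\_diag}; in {\textsc{AdaCore}}\xspace the Hessian-vector product comes from backpropagation rather than an explicit matrix, which we form here only to check the estimate):

```python
import numpy as np

def hutchinson_diag(hvp, dim, n_samples=500, seed=0):
    """Estimate diag(H) as E[z * (H z)] with Rademacher z,
    using only Hessian-vector products supplied by `hvp`."""
    rng = np.random.default_rng(seed)
    est = np.zeros(dim)
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)  # Rademacher vector
        est += z * hvp(z)                      # z ⊙ (H z)
    return est / n_samples

# Toy check against a known symmetric Hessian; in practice H is
# never materialized and hvp(z) = d(g^T z)/dw via backprop.
H = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, -0.2],
              [0.0, -0.2, 3.0]])
diag_est = hutchinson_diag(lambda z: H @ z, dim=3)
```

Each sample costs one Hessian-vector product, so the estimator adds only a modest constant factor over gradient computation.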
The diagonal approximation has the same convergence rate as using the full Hessian for strongly convex and strictly smooth functions (proof in Appendix \ref{proof:diagconv}). Nevertheless, our method can be applied to general machine learning problems, such as deep networks, as well as regularized classical methods (e.g., SVM, LASSO), which are strongly convex. To smooth out the noisy local curvature and get a better approximation of the global Hessian information, we apply an exponential moving average with parameter $0<\beta_2<1$ to the Hessian diagonal estimate in Eq. \eqref{eq:hf}:
\begin{equation}\label{eq:h_avg} \overbar{\mathbf{H}}_{t}=\sqrt{\frac{(1-\beta_2)\sum_{i=1}^t \beta_2^{t-i} \text{diag}(\mathbf{H}_i)\text{diag}(\mathbf{H}_i)}{1-\beta_2^t}}. \end{equation} Using exponentially averaged gradient and Hessian approximations in Eq. \eqref{eq:g_avg}, and \eqref{eq:h_avg}, the preconditioned gradients in Eq. \eqref{eq:main} can be approximated as follows: \begin{align}\label{eq:H_sub}
S^{*}_t= \!\!\underset{S \subseteq V, \gamma_{t,j} \geq 0 ~\forall j}{\arg\min}&|S|, \quad \textrm{s.t.}\quad\\
&\|\overbar{\mathbf{H}}_t^{-1} \overline{\textbf{g}}_t-\sum_{j\in S}\gamma_{t,j}\overbar{\mathbf{H}}_{t,j}^{-1} \overline{\textbf{g}}_{t,j}\|\leq\epsilon\nonumber. \end{align} Next, we discuss how to efficiently find near-optimal weighted subsets that closely approximate the full preconditioned gradient by solving Eq. \eqref{eq:main}. \subsection{Extracting Second-order Coresets}\label{sec:alg} The subset selection problem \eqref{eq:main} is NP-hard \cite{natarajan1995sparse}. However, it can be considered as a special case of the sparse vector approximation problem that has been studied in the literature, including convex optimization formulations—e.g. basis pursuit \cite{chen2001atomic}, sparse projections \cite{pilanci2012recovery,kyrillidis2013sparse}, LASSO \cite{tibshirani1996regression}, and compressed sensing \cite{donoho2006compressed}. These methods, however, are expensive to solve and often require tuning regularization coefficients and thresholding to ensure cardinality constraints. More recently, the connection between sparse modeling and \textit{submodular}\footnote{A set function $F:2^V \rightarrow \mathbb{R}^+$ is submodular if $F(S\cup\{e\}) - F(S) \geq F(T\cup\{e\}) - F(T),$ for any $S\subseteq T \subseteq V$ and $e\in V\setminus T$.} optimization have been demonstrated \cite{elenberg2018restricted, mirzasoleiman2020coresets}. The advantage of submodular optimization is that a fast and simple greedy algorithm often provides a near-optimal solution. Next, we briefly discuss how submodularity can be used to find a near-optimal solution for Eq. \eqref{eq:main}. We build on the recent result of \cite{mirzasoleiman2020coresets} that showed that the error of estimating an expectation by a weighted sum of a subset of elements is upper-bounded by a submodular facility location function. In particular, via the above result, we get:
\begin{align}\label{eq:min_upper}
\min_{S\subseteq V} \| \overbar{\mathbf{H}}_t^{-1}\overline{\mathbf{g}}_t -
\sum_{j \in S} & \gamma_{t,j}\overbar{\mathbf{H}}_{t,j}^{-1}\overline{\mathbf{g}}_{t,j}~\| \\
&\leq \sum_{i\in V} \min_{j \in S} \| \overbar{\mathbf{H}}_{t,i}^{-1}\overline{\mathbf{g}}_{t,i} - \overbar{\mathbf{H}}_{t,j}^{-1}\overline{\mathbf{g}}_{t,j}\|.\nonumber
\end{align} Setting the upper bound in the right-hand side of Eq. \eqref{eq:min_upper} to be less than $\epsilon$ results in the smallest weighted subset $S^*$ that approximates full preconditioned gradient by an error of at most $\epsilon$, at iteration $t$. Formally, we wish to solve the following optimization problem: \begin{align}
S^* \in &{\arg\min}_{S\subseteq V} |S|, \quad \text{s.t.} \quad\label{eq:L}\\ &L(S)=
\sum_{i\in V} \min_{j \in S} \| \overbar{\mathbf{H}}_{t,i}^{-1}\overline{\mathbf{g}}_{t,i} - \overbar{\mathbf{H}}_{t,j}^{-1}\overline{\mathbf{g}}_{t,j}\| \leq \epsilon. \nonumber \end{align} By introducing a phantom example $e$, we can turn the minimization problem \eqref{eq:L} into the following submodular cover problem, with a facility location objective $F(S)$: \begin{align}\label{eq:cover}
S^* \in \underset{S\subseteq V}{\arg\min} &
|S|, \quad \text{s.t.}\\ \quad &F(S)=C_1 - L(S \cup \{e\}) \geq C_1-\epsilon\nonumber, \end{align}
where $C_1=L(\{e\})$ is a constant upper-bounding the value of $L(S)$. The subset $S^*$ obtained by solving the submodular cover problem \eqref{eq:cover} consists of the medoids of the preconditioned gradients, and the weight $\gamma_j$ is the number of elements closest to the medoid $j\in S^*$, i.e. $\gamma_j=\!\sum_{i\in V} \mathbb{I}\big[j={\arg\min}_{s \in S} \| \overbar{\mathbf{H}}_{t,i}^{-1}\overline{\mathbf{g}}_{t,i} \!-\! \overbar{\mathbf{H}}_{t,s}^{-1}\overline{\mathbf{g}}_{t,s} \|\big]$. For the above submodular cover problem, the classical greedy algorithm provides a logarithmic approximation guarantee $|S| \leq \big(1+ \ln (\max_e F(e|\emptyset))\big) |S^*|$ \cite{wolsey1982analysis}. The greedy algorithm starts with the empty set $S_0=\emptyset$, and at each iteration $t$, it chooses an element $e\in V$ that maximizes the marginal utility $F(e|S_{t-1})=F(S_{t-1}\cup\{e\}) - F(S_{t-1})$. Formally,
$S_t = S_{t-1}\cup\{{\arg\max}_{e\in V} F(e|S_{t-1})\}$.
The computational complexity of the greedy algorithm is $\mathcal{O}(nk)$. However, its complexity can be reduced to $\mathcal{O}(|V|)$ using stochastic methods \cite{mirzasoleiman2015lazier}, and can be further improved using lazy evaluation \cite{minoux1978accelerated} and distributed implementations \cite{mirzasoleiman2013distributed}. The pseudocode can be found in Alg. \ref{alg:greedy} in Appendix \ref{appx:alg}.
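The greedy selection just described can be sketched as follows (a simplified NumPy sketch, not the paper's implementation: it computes pairwise distances densely, uses a finite phantom distance in place of the constant $C_1$, and takes a fixed budget $k$ instead of the $\epsilon$-cover stopping rule):

```python
import numpy as np

def adacore_greedy(pg, k):
    """Greedy facility-location selection over (approximate) preconditioned
    gradients `pg` (shape (n, d)): repeatedly add the point that most
    reduces L(S) = sum_i min_{j in S} ||pg_i - pg_j||, then weight each
    chosen medoid by the size of its cluster."""
    n = pg.shape[0]
    dist = np.linalg.norm(pg[:, None, :] - pg[None, :, :], axis=2)
    min_dist = np.full(n, dist.max() + 1.0)   # phantom-example distance
    selected = []
    for _ in range(k):
        # marginal reduction in L(S) for each candidate medoid j
        gains = np.maximum(min_dist[:, None] - dist, 0.0).sum(axis=0)
        j = int(np.argmax(gains))
        selected.append(j)
        min_dist = np.minimum(min_dist, dist[:, j])
    # weight gamma_j = number of points assigned to medoid j
    assign = np.argmin(dist[:, selected], axis=1)
    weights = np.bincount(assign, minlength=k)
    return selected, weights

# Two well-separated clusters: greedy picks one medoid per cluster,
# each carrying the weight of its 5-point cluster.
pts = np.vstack([np.zeros((5, 2)), 10 + np.zeros((5, 2))])
sel, gam = adacore_greedy(pts, k=2)
```

The dense distance matrix makes this $\mathcal{O}(n^2)$; the stochastic, lazy, and distributed variants cited in the text avoid that cost at scale.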
\paragraph{One coreset for convex functions}\label{dis:onecoresetcvx} For convex functions, normed gradient differences between data points can be efficiently upper-bounded by the normed difference between feature vectors \cite{allen2016exploiting,hofmann2015variance, mirzasoleiman2020coresets}. We apply a similar idea to upper-bound the normed difference between preconditioned gradients. This allows us to find one subset before the training. See proof in Appendix \ref{proof:boundnormerror}.
\subsection{Convergence Analysis}\label{sec:convergence}
Here, we analyze the convergence rate of first and second-order methods applied to the weighted subsets $S$ found by {\textsc{AdaCore}}\xspace. By solving Eq. \eqref{eq:cover} at every iteration $t$, {\textsc{AdaCore}}\xspace finds subsets that approximate the full preconditioned gradient with an error of at most $\epsilon$, i.e. $\|\mathbf{H}^{-1}_t \textbf{g}_t-\sum_{j\in S}\gamma_{t,j}\mathbf{H}_{t,j}^{-1} \textbf{g}_{t,j}\|\leq\epsilon$. This allows us to effectively analyze the reduction in the value of the loss function ${\cal{L}}$ at every iteration $t$. Below, we discuss the convergence of first and second-order gradient methods applied to subsets extracted by {\textsc{AdaCore}}\xspace.
\textbf{Convergence for Newton's Methods and AdaHessian} We first provide the convergence analysis for the case where the
function ${\cal{L}}$ in Problem (\ref{eq:problem}) is strongly convex, i.e. there exists a constant $\alpha>0$ such that $\forall w,w'\in \mathbb{R}^d$ we have ${\cal{L}}(w) \geq {\cal{L}}(w') + \langle \nabla {\cal{L}}(w'), w-w' \rangle + \frac{\alpha}{2} \| w'-w \|^2$, and each component function has a Lipschitz gradient, i.e. $\forall w, w' \in \mathcal{W}$ we have $\| \nabla {\cal{L}}(w) - \nabla {\cal{L}}(w') \| \leq\! \beta \| w-w'\|$. We get the following results by applying Newton's method and AdaHessian to the weighted subsets $S$ extracted by {\textsc{AdaCore}}\xspace.
\begin{restatable}{theorem}{newtonrestate} \label{thm:newton}
Assume that ${\cal{L}}$ is $\alpha$-strongly convex and $\beta$-smooth.
Let $S$ be a weighted subset
obtained by {\textsc{AdaCore}}\xspace that estimates the
preconditioned gradient
by an error of at most $\epsilon$ at every iteration $t$, i.e., $\|\mathbf{H}^{-1}_t \textbf{g}_t-\sum_{j\in S}\gamma_{t,j}\mathbf{H}_{t,j}^{-1} \textbf{g}_{t,j}\|\leq\epsilon$. Then with learning rate $\alpha/\beta$, Newton's method with update rule of Eq. \eqref{eq:newton}
applied to the subsets has the following convergence behavior:
\begin{align}
{\cal{L}}(w_{t+1}) - {\cal{L}}(w_t) \leq -\frac{\alpha^{3}}{2\beta^{4}} (\|\mathbf{g}_t\|-\beta\epsilon)^2.
\end{align} In particular, the algorithm converges to a $\beta\epsilon/\alpha$-neighborhood of the optimal solution $w_*$.
\end{restatable}
\begin{corollary}\label{thm:adahessian}
For an $\alpha$-strongly convex and $\beta$-smooth loss ${\cal{L}}$, AdaHessian with Hessian power $k$, applied to subsets found by {\textsc{AdaCore}}\xspace converges to a $\beta\epsilon/\alpha$-neighborhood of the optimal solution $w_*$,
and satisfies:
\begin{align}
{\cal{L}}(w_{t+1}) - {\cal{L}}(w_t) \leq -\frac{\alpha^{k+2}}{2\beta^{k+3}} (\|\mathbf{g}_t\|-\beta\epsilon)^2.
\end{align} \end{corollary} The proofs can be found in Appendix \ref{proof:4.1}.
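A toy numerical illustration of the theorem's flavor (our own sketch, not from the paper): Newton-style steps on a strongly convex quadratic, with the preconditioned gradient perturbed by an $\epsilon$-bounded error that mimics the coreset estimate, still converge to an $O(\epsilon)$ neighborhood of $w_*$:

```python
import numpy as np

A = np.diag([1.0, 4.0])              # Hessian of the quadratic: alpha=1, beta=4
b = np.array([2.0, -2.0])
w_star = np.linalg.solve(A, b)       # minimizer of 0.5 w^T A w - b^T w
rng = np.random.default_rng(1)
eps = 1e-3                           # coreset estimation error bound
w = np.array([10.0, 10.0])
for _ in range(100):
    precond_grad = np.linalg.solve(A, A @ w - b)         # exact H^{-1} g
    noise = rng.normal(size=2)
    precond_grad += eps * noise / np.linalg.norm(noise)  # error of norm <= eps
    w = w - (1.0 / 4.0) * precond_grad                   # stepsize alpha/beta
```

The iterates contract geometrically until the $\epsilon$-sized perturbation dominates, leaving $w$ within a neighborhood of $w_*$ whose radius scales with $\epsilon$, consistent with the $\beta\epsilon/\alpha$ bound above.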
\textbf{Convergence for (S)GD in Over-parameterized Case}
Next, we discuss the convergence behavior of gradient descent applied to the subsets found by {\textsc{AdaCore}}\xspace. In particular, we build upon the recent results of \cite{liu2020toward}, which guarantee convergence of first-order methods on a broad class of over-parameterized non-linear systems, including neural networks for which the tangent kernel, defined as $\mathbf{J}^T\mathbf{J}$ with $\mathbf{J}=\partial f/\partial w$ the Jacobian of the function $f$ with respect to the parameters $w$, is not close to constant but satisfies the Polyak-Lojasiewicz (PL) condition. A loss function ${\cal{L}}$ is $\mu$-PL$^*$ on a set $\mathcal{W}$ if $\frac{1}{2}\| \nabla {\cal{L}}(w)\|^2\geq \mu {\cal{L}}(w), \forall w\in \mathcal{W}$. \begin{restatable}{theorem}{plrestate} \label{thm:pl}
Assume that the loss function ${\cal{L}}(w)$ is $\beta$-smooth, and $\mu$-PL$^*$ on a set $\mathcal{W}$, and $S$ is a weighted subset
obtained by {\textsc{AdaCore}}\xspace that estimates the preconditioned gradient by an error of at most $\epsilon$, i.e., $\|\mathbf{H}^{-1}_t \textbf{g}_t-\sum_{j\in S}\gamma_{t,j}\mathbf{H}_{t,j}^{-1} \textbf{g}_{t,j}\|\leq\epsilon$.
Then with learning rate $\eta$, gradient descent with update rule of Eq. \eqref{eq:gd_update}
applied to the subsets has the following convergence behavior at iteration $t$:
\begin{align}
{\cal{L}}(w_{t}) \leq (1-\frac{\eta\mu\alpha^2} {\beta^2})^{t} {\cal{L}}(w_0) - \frac{\eta\alpha^2} {2\beta^2}(\beta^2\epsilon^2-2\beta\epsilon \nabla_{\max}), \end{align} where $\alpha$ is the minimum eigenvalue of all Hessian matrices during training, and $\nabla_{\max}$ is an upper bound on the norm of the gradients. \end{restatable} \begin{restatable}{theorem}{plsgdrestate} \label{thm:pl-sgd}
Under the same assumptions as in Theorem \ref{thm:pl}, mini-batch SGD with
mini-batch size $m \in \mathbb{N}$, update rule Eq. \eqref{eq:gd_update}, and learning rate $\eta = \frac{m}{\beta(m-1)}$,
applied to the subsets has the following convergence behavior:
\begin{align}
\mathbb{E}[{\cal{L}}(w_{t})] \leq (1-\frac{\eta\mu \alpha^2}{2\beta})^t \mathbb{E}[{\cal{L}}(w_0)] - \frac{\alpha^2\eta}{2\beta}( \beta\epsilon^2 -2 \epsilon\nabla_{\max})\label{eq:adacore_convergence_sgd} \end{align} where $\alpha$ is the minimum eigenvalue of all Hessian matrices during training, and $\nabla_{\max}$ is an upper bound on the norm of the gradients, and the expectation is taken w.r.t. the randomness in the choice of mini-batch.
\end{restatable} The proofs can be found in Appendix \ref{proof:4.3}.
We show an {exponential} convergence for GD (Theorem \ref{thm:pl}) and SGD (Theorem \ref{thm:pl-sgd}) under the $\mu$-PL$^*$ condition, as well as for second order methods (Theorems \ref{thm:newton}, \ref{thm:adahessian}), under $\alpha$-strongly convex and $\beta$-smooth assumptions on the loss. \section{Experiments} In this section, we evaluate the effectiveness of {\textsc{AdaCore}}\xspace, by answering the following questions: (1) how does the performance of various first and second-order methods compare when applied to subsets found by {\textsc{AdaCore}}\xspace vs. the full data and baselines; (2) how effective is {\textsc{AdaCore}}\xspace for extracting crucial subsets for training convex and non-convex over-parameterized models with different optimizers; and (3) how does {\textsc{AdaCore}}\xspace perform in eliminating redundancies
and enhancing diversity of the selected elements.
\textbf{Baselines} In the convex setting, we compare the performance of {\textsc{AdaCore}}\xspace with {\textsc{Craig}}\xspace \cite{mirzasoleiman2020coresets}, which extracts subsets that approximate the full gradient, as well as Random subsets. For non-convex experiments, we additionally compare {\textsc{AdaCore}}\xspace with {\textsc{GradMatch}}\xspace and {\textsc{Glister}}\xspace \cite{killamsetty2021grad,killamsetty2020glister}. For {\textsc{AdaCore}}\xspace and {\textsc{Craig}}\xspace, we use the gradient w.r.t. the input to the last layer, and for {\textsc{Glister}}\xspace and {\textsc{GradMatch}}\xspace we use the gradient w.r.t. the penultimate layer, as specified by the methods. In all cases, we select subsets separately from each class proportionally to the class sizes, and train on the union of the subsets. We report average test accuracy across 3 trials in all experiments.
\subsection{Convex Experiments} In our convex experiments, we apply {\textsc{AdaCore}}\xspace to select a coreset to classify the Ijcnn1 dataset using L2-regularized logistic regression: $f_i(w) = \ln(1+\exp(-y_i w^{T}x_i)) + 0.5 \mu w^{T}w$. Ijcnn1 includes 49,990 training and 91,701 test data points of 22 dimensions, from 2 classes with a 9-to-1 class imbalance ratio. In the convex setting, we only need to calculate the curvature once to find one {\textsc{AdaCore}}\xspace subset for the entire training. Hence, we utilize the complete Hessian information, computed analytically, as discussed in Appendix \ref{appx:hessian}. We apply an exponential decay learning schedule $\alpha_k = \alpha_0 b^k$ with learning rate parameters $\alpha_0$ and $b$. For each model and method (including the random baseline) we tuned the parameters via a search and reported the best results.\looseness=-1
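For this loss, the per-example gradient and Hessian used in the analytic curvature computation have simple closed forms; below is a minimal NumPy sketch (function and variable names are ours, not the paper's):

```python
import numpy as np

def logistic_grad_hess(w, X, y, mu):
    """Per-example gradient and Hessian of the L2-regularized logistic
    loss f_i(w) = log(1 + exp(-y_i w^T x_i)) + 0.5 * mu * ||w||^2."""
    z = y * (X @ w)                      # margins y_i <x_i, w>
    s = 1.0 / (1.0 + np.exp(z))          # sigma(-z_i)
    grads = -(s * y)[:, None] * X + mu * w          # one row per example
    hessians = [si * (1.0 - si) * np.outer(xi, xi) + mu * np.eye(len(w))
                for si, xi in zip(s, X)]            # y_i^2 = 1 for labels +-1
    return grads, hessians
```

Each per-example Hessian is a rank-one matrix plus a ridge term, so it remains positive definite for $\mu > 0$.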
\textbf{{\textsc{AdaCore}}\xspace achieves smaller loss residual with a speedup} Figure \ref{fig:grad_diff} compares the loss residual for SGD and Newton's method applied to coresets of size 10\% extracted by {\textsc{AdaCore}}\xspace (blue), {\textsc{Craig}}\xspace (orange), and random (green) with that of the full dataset (red). We see that {\textsc{AdaCore}}\xspace effectively minimizes the training loss, achieving a better loss residual than {\textsc{Craig}}\xspace and random sampling. In particular, {\textsc{AdaCore}}\xspace matches the loss achieved on the full dataset with more than a 2.5x speedup for SGD and Newton's method. We note that training on random 10\% subsets of the data cannot effectively minimize the training loss. We show the superior performance of training with SGD on subsets of size 10\% to 90\% found with {\textsc{AdaCore}}\xspace vs {\textsc{Craig}}\xspace in Appendix Fig. \ref{fig:normed_grad_diff_subsets}.\looseness=-1
\textbf{{\textsc{AdaCore}}\xspace better estimates the full gradient} Fig. \ref{fig:grad_diff_2} shows the normalized gradient difference between gradient of the full data vs. weighted gradient of subsets of different sizes obtained by {\textsc{AdaCore}}\xspace vs {\textsc{Craig}}\xspace and Random, at the end of training by each method. We see that by considering curvature information, {\textsc{AdaCore}}\xspace obtains a better gradient estimate than {\textsc{Craig}}\xspace and Random subsets. \begin{figure}\label{fig:grad_diff}
\end{figure} \begin{figure}\label{fig:grad_diff_2}
\end{figure}
\begin{figure*}\label{subfig:R18_acc}
\label{subfig:R18_hist}
\label{subfig:R18_forget}
\label{fig:craig_vs_ada}
\end{figure*}
\begin{table}[t] \small \centering \caption{Training ResNet20 using AdaHessian and SGD+momentum on coresets of size 1\% selected by different methods from CIFAR10. Percent of full data selected during entire training is shown (in parentheses). Using $b_H$=64, {\textsc{AdaCore}}\xspace achieves up to 16.8\% higher accuracy, while selecting a smaller fraction of data points. Exponential averaging of gradient and Hessian, \!and a smaller $b_H$ helps.\looseness=-1 }
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcc}
\hline
\textbf{Method} & {\color[HTML]{333333} \textbf{AdaHessian}} & {\color[HTML]{333333} \textbf{SGD+Momentum}} \\ \hline
\multicolumn{1}{l|}{Random } & $59.1\% \!\pm 2.8 (87\%)$ & $45.9\% \!\pm 2.5 (87\%)$\\
\multicolumn{1}{l|}{ {\textsc{Craig}}\xspace} & $59.5\% \!\pm 2.8 (74\%)$ & $ 43.6\% \!\pm 1.6 (75\%)$ \\%\hline
\multicolumn{1}{l|}{ {\textsc{GradMatch}}\xspace} & $57.5\% \!\pm 1.3 (74\%)$ & $49.4\% \!\pm 1.6 (74\%)$ \\
\multicolumn{1}{l|}{ {\textsc{Glister}}\xspace} & $37.5\% \!\pm 1.3 (74\%)$ & $38.6\% \pm 1.6 (74\%)$ \\\hline
\multicolumn{1}{l|}{{\textsc{AdaCore}}\xspace \!(no avg) } & $58.4\% \!\pm 0.2 (73\%)$ & $51.5\% \!\pm 1.1 (74\%)$\\
\multicolumn{1}{l|}{{\textsc{AdaCore}}\xspace \! (avg g) } & $59.8\% \!\pm 0.5 (73\%)$ & $53.2\% \pm 1.1 (74\%)$\\
\multicolumn{1}{l|}{{\textsc{AdaCore}}\xspace \!(avg H) } & $60.2\% \!\pm 0.5 (73\%)$ & $54.4\% \!\pm 1.1 (74\%)$\\
\multicolumn{1}{l|}{\textbf{{\textsc{AdaCore}}\xspace } } & $ \textbf{60.2\%} \!\pm 0.5 (\textbf{73\%}) $ & $\textbf{55.4\%} \!\pm 1.1 ( \textbf{74\%})$\\
\hline
\multicolumn{1}{l|}{{\textsc{AdaCore}}\xspace $b_h\!\!=\!\!512$ } & $57.2\% \pm 0.5 (73\%)$ & $52.4\% \pm 1.1 (74\%)$\\
\end{tabular}
}
\label{table:resnet20_extended}
\end{table}
\subsection{Non-Convex Experiments}
\textbf{Datasets} We use CIFAR10 (60k points from 10 classes), a class-imbalanced version of CIFAR10 (32.5k points from 10 classes), and CIFAR100 (32.5k points from 100 classes) \cite{cifar10}, as well as BDD100k (100k points from 7 classes) \cite{bdd100k}. The results on MNIST (70k points from 10 classes) \cite{deng2012mnist} can be found in Appendix \ref{exp:mnist}. Images are normalized to $[0,1]$ by division by 255.
\textbf{Models and Optimizers} We train ResNet-20 and ResNet-18 \cite{he2016deep}, with convolution, average pooling, and dense layers with softmax outputs and weight decay of $10^{-4}$. We use a batch size of 256 in all experiments (except Table \ref{table:batch}, Fig. \ref{subfig:bdd}), and train using SGD with momentum of 0.9 (default), or AdaHessian. For training, we use the standard learning rate schedule for ResNet, starting at 0.1 and decaying by a factor of 0.1 at epochs 100 and 150. We use linear learning rate warm-up for the first 20 epochs to prevent weights from diverging when training with subsets. All experiments were run on a 2.4GHz CPU and RTX 2080 Ti GPU.
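The schedule above can be sketched as follows (the exact shape of the ramp is our assumption; the text only specifies linear warm-up for 20 epochs and decay by 0.1 at epochs 100 and 150):

```python
def resnet_lr(epoch, base_lr=0.1, warmup_epochs=20, milestones=(100, 150), gamma=0.1):
    """Linear warm-up for the first `warmup_epochs`, then step decay
    by `gamma` at each milestone epoch."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

The same function works whether training on the full data or on coresets; only the number of gradient steps per epoch changes.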
\textbf{Calculating the Curvature} To calculate the Hessian diagonal using Eq. \eqref{eq:hf}, we use a batch size of $b_H\!=\!64$ to calculate the expected Hessian diagonal over the training data. We observed that a smaller batch size provides a higher quality Hessian compared to larger batch sizes, \!as shown in Table \ref{table:resnet20_extended}.
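Independently of the exact batching above, a Hessian diagonal can be estimated from Hessian-vector products alone via Hutchinson's estimator, $\mathrm{diag}(\mathbf{H}) = \mathbb{E}[z \odot \mathbf{H}z]$ for Rademacher $z$ (the estimator behind AdaHessian-style diagonals); a toy NumPy sketch against an explicit matrix (names are ours):

```python
import numpy as np

def hutchinson_diag(hvp, dim, n_samples=2000, rng=None):
    """Estimate diag(H) from Hessian-vector products alone:
    E[z * (H z)] = diag(H) for Rademacher-distributed z."""
    rng = rng or np.random.default_rng(0)
    est = np.zeros(dim)
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=dim)
        est += z * hvp(z)
    return est / n_samples

# sanity check against an explicit symmetric matrix
H = np.array([[3.0, 0.5], [0.5, 1.0]])
d = hutchinson_diag(lambda v: H @ v, 2)
```

With mini-batched Hessian-vector products in place of the explicit matrix, the same loop yields the expected Hessian diagonal over the training data.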
\textbf{Baseline Comparison and Ablation Study} Table \ref{table:resnet20_extended} shows the accuracy of training ResNet-20, using SGD with momentum of 0.9 and AdaHessian, for 200 epochs on $S$=1\% subsets of CIFAR-10 chosen every $R$=1 epoch by different methods. For SGD+momentum, {\textsc{AdaCore}}\xspace outperforms {\textsc{Craig}}\xspace by 12\%, Random by 10\%, {\textsc{GradMatch}}\xspace by 6\%, and {\textsc{Glister}}\xspace by 16.8\%. Note that in total, {\textsc{AdaCore}}\xspace selects 74\% of the dataset during the entire training process, whereas Random visits 87\%. Thus, {\textsc{AdaCore}}\xspace effectively selects subsets contributing the most to generalization. We see that the accuracy gap between the baselines and {\textsc{AdaCore}}\xspace shrinks when applying more powerful optimizers such as AdaHessian. Table \ref{table:resnet20_extended} also shows the effect of exponential averaging of gradients and Hessian diagonal, and of larger batch sizes $b_H$ for calculating the Hessian diagonal. We see that exponential averaging helps {\textsc{AdaCore}}\xspace achieve better performance, and a smaller $b_H$ provides better results.
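The exponential-averaging rows in Table \ref{table:resnet20_extended} correspond to smoothing the per-round gradient and Hessian-diagonal estimates; a minimal sketch (the decay value is illustrative, the paper does not specify it):

```python
def ema_update(prev, new, decay=0.9):
    """Exponential moving average across selection rounds; `prev` is
    None before the first round."""
    return new if prev is None else decay * prev + (1 - decay) * new

g = None
for obs in [1.0, 1.0, 1.0]:       # stand-in for per-round estimates
    g = ema_update(g, obs)
```

In practice the same update is applied element-wise to the gradient and the Hessian-diagonal vectors before selection.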
Fig. \ref{subfig:R18_acc} compares the performance of ResNet-18 on 1\% subsets selected from CIFAR-10 with different methods. We compare the performance of training on {\textsc{AdaCore}}\xspace, {\textsc{Craig}}\xspace, {\textsc{GradMatch}}\xspace, {\textsc{Glister}}\xspace, and Random subsets for 200 epochs, with training on full data for 15 epochs. This is the number of iterations required for training on the full data to achieve a comparable performance to that of {\textsc{AdaCore}}\xspace subsets. We see that training on {\textsc{AdaCore}}\xspace coresets achieves a better accuracy 2.5x faster than training on the full dataset, and more than 4.5x faster than the next best subset selection algorithm for this setting (\textit{c.f.} Fig. \ref{fig:repeat_craig_vs_ada_1000} in Appendix for complete results).
\begin{table}[b]
\caption{Test accuracy and percent of full data selected (in parentheses), when selecting $S$=1\% coresets every $R$ epochs from Imbalanced CIFAR-10 to train ResNet18. }
\label{table:cifar_imb_sgd_mom} \begin{small} \resizebox{\columnwidth}{!}{
\begin{tabular}{p{1cm}|lll} \hline
& {\color[HTML]{333333} $S$=$1\%$, $R$=20} & $S$=$1\%$, $R$=10 & $S$=$1\%$, $R$=5 \\ \hline {\textsc{AdaCore}}\xspace & $\textbf{57.3\%} (\textbf{5\%})$ & $\textbf{57.12\%} (\textbf{9.5\%})$ & $\textbf{60.2\%} (\textbf{14.5\%})$ \\ {\textsc{Craig}}\xspace & $48.6\% (8\%)$ & $55\% (16\%)$ & $53.05\% (27.5\%)$ \\ Random & $54.7\% (8\%)$ & $54.6\% (18\%)$ & $54.6\% (33.2\%)$\\ \textsc{GradM} & $29.9\% (8.2\%)$ & $29.1\% (14.7\%)$ & $32.75\% (23.2\%)$\\ {\textsc{Glister}}\xspace & $21.1\% (8.6\%)$ & $17.2\% (16\%)$ & $14.4\% (22.2\%)$ \end{tabular} } \end{small}
\end{table}
\begin{table}[b]
\caption{Training ResNet18 with $S$=1\% subsets every $R$=1 epoch from CIFAR10 using batch size $b$= 512, 256, 128. {\textsc{AdaCore}}\xspace can leverage larger mini-batch sizes and obtain a larger accuracy gap to {\textsc{Craig}}\xspace and Random. For $b$=512, we have\! 1\! mini-batch (GD). Std is reported in Appendix Table\! \ref{table:batch_pm}. }\label{table:batch}
\begin{small} \resizebox{\columnwidth}{!}{
\begin{tabular}{l|lllll} \hline
& \textsc{AdaC.} & {\textsc{Craig}}\xspace & Rand & \begin{tabular}[c]{@{}l@{}}Gap/\\ {\textsc{Craig}}\xspace\end{tabular} & \begin{tabular}[c]{@{}l@{}}Gap/\\ Rand\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}GD~~ b=512\end{tabular} & \textbf{58.32}\% & 56.32\% & 49.14\% & 1.69\% & \textbf{8.91}\% \\ \begin{tabular}[c]{@{}l@{}}SGD b=256\end{tabular} & \textbf{68.23}\% & 58.3\% & 60.7\% & \textbf{9.93}\% & 8.16\% \\ \begin{tabular}[c]{@{}l@{}}SGD b=128\end{tabular} & \textbf{66.89}\% & 58.17\% & 65.46\% & \textbf{8.81}\% & 1.52\% \end{tabular} } \end{small} \end{table}
\begin{table*}[t] \caption{Test accuracy and percent of full data selected (in parentheses), when selecting $S$\% coresets every $R$ epochs from CIFAR-10 and Imbalanced CIFAR-10 to train ResNet18. } \label{table:cifar_imb_ada_craig}
\begin{small} \begin{tabular}{lllll} \hline
& \text{\begin{tabular}[c]{@{}l@{}}ResNet20, ~CIFAR10\\ $S$ = $30\%$, ~~$R$ = 20\end{tabular}} & \text{\begin{tabular}[c]{@{}l@{}}ResNet20, ~CIFAR10\\ $S$ = $10\%$, ~~$R$ = 20\end{tabular}} & \text{\begin{tabular}[c]{@{}l@{}}ResNet18, ~CIFAR10-IMB\\ $S$ = $30\%$, ~~$R$ = 20\end{tabular}} & \text{\begin{tabular}[c]{@{}l@{}}ResNet18, ~CIFAR10-IMB\\ $S$ = $10\%$,~~ $R$ = 20\end{tabular}} \\ \hline {\textsc{AdaCore}}\xspace & $\textbf{80.57\%} \pm 0.11$ \; $(\textbf{74.6\%})$ & $\textbf{70.6}\% \pm 0.33$ \; $(\textbf{44.8\%})$ & $\textbf{85.7\%} \pm 0.1$ \; $(\textbf{74\%})$ & $\textbf{76\%} \pm 0.3$ \; $(\textbf{43.8\%})$\\ {\textsc{Craig}}\xspace & $65.8\% \pm 0.41$ \; $(90.9\%)$ & $58.5\% \pm 1.27$ \; $(60.75\%)$ & $79.3\% \pm 1.6$ \; $(84.5\%)$& $71.6\% \pm 0.15$ \; $(56.4\%)$ \end{tabular}
\end{small}
\end{table*}
\textbf{Frequency and size of subsets selection} Table \ref{table:cifar_imb_sgd_mom}, \ref{table:cifar_imb_ada_craig} shows the performance of different methods for selecting subsets of size $S\%$ of the data every $R$ epochs, from CIFAR-10 and imbalanced CIFAR-10. Table \ref{table:cifar_imb_sgd_mom} shows that selecting subsets of size 1\% every $R=5, 10, 20$ epochs with {\textsc{AdaCore}}\xspace achieves a superior performance compared to the baselines. Table \ref{table:cifar_imb_ada_craig} shows that {\textsc{AdaCore}}\xspace can successfully select larger subsets of size $S=10\%, 30\%$ and outperform {\textsc{Craig}}\xspace (Std is reported in Appendix, Table \ref{table:class_imb_sgd_mom_full}).
\textbf{{\textsc{AdaCore}}\xspace speeds up training} Fig. \ref{fig:speedup} compares the speedup of various methods during training ResNet18 on 10\% subsets selected every $R=20$ epochs from BDD 100k and CIFAR-100. All the methods are trained to achieve a test accuracy between 72\% and 74\% on BDD 100k, and between 50\% and 59\% on CIFAR-100. On BDD 100k, {\textsc{AdaCore}}\xspace achieves 74\% accuracy in 100 epochs and training on full data achieves a similar performance in 45 epochs. For CIFAR-100, {\textsc{AdaCore}}\xspace achieves 59\% accuracy in 200 epochs and training on full data achieves a similar performance in 40 epochs. Complete results on speedup and test accuracy of each method can be found in Appendix \ref{exp:bdd100k}, \ref{exp:cifar100}. We see that {\textsc{AdaCore}}\xspace achieves 2.5x speedup over training on full data and 1.7x over training on random subsets on BDD 100k. For CIFAR-100, {\textsc{AdaCore}}\xspace achieves 4.2x speedup over training on random subsets and 2.9x over training on full data. Compared to the baselines, {\textsc{AdaCore}}\xspace achieves the desired accuracy much faster.
\textbf{Effect of batch size} Table \ref{table:batch} compares the performance of training with different batch sizes on subsets found by various methods. We see that training with larger batch size on subsets selected by {\textsc{AdaCore}}\xspace can achieve a superior accuracy. As {\textsc{AdaCore}}\xspace selects more diverse subsets with smaller weights, one can train with larger mini-batches on the subsets without increasing the gradient estimate error. In contrast, {\textsc{Craig}}\xspace subsets have elements with larger weights and hence training with fewer larger mini-batches has larger gradient error and does not improve the performance.
In summary, we see that {\textsc{AdaCore}}\xspace consistently outperforms the baselines over various architectures, optimizers, subset sizes, selection frequencies, and batch sizes.
\textbf{{\textsc{AdaCore}}\xspace selects more diverse subsets} Fig. \ref{subfig:R18_hist} shows the number of times each element was selected by the different methods during the entire training. We see that {\textsc{AdaCore}}\xspace successfully selects a more diverse set of examples compared to {\textsc{Craig}}\xspace. We note that {\textsc{GradMatch}}\xspace may not be able to select subsets with the desired size, and instead augments the selected subset with randomly selected examples. Hence, it has a normal-shaped distribution. Fig. \ref{subfig:R18_forget} shows the mean forgetting score for all examples within a class ranked by {\textsc{AdaCore}}\xspace at the end of training, over a sliding window of size 100. We see that {\textsc{AdaCore}}\xspace prioritizes selecting less forgettable examples. This shows that {\textsc{AdaCore}}\xspace is indeed able to distinguish different groups of easier examples, and hence can prevent catastrophic forgetting by including their representatives in the coresets. \looseness=-1 \begin{figure}
\caption{ Speedup of various methods over training on random subsets and full data, for training ResNet18 on CIFAR100 and ResNet50 on BDD100k with batch size=128.
}
\label{subfig:bdd}
\label{fig:speedup}
\end{figure}
\begin{figure}\label{subfig:forget}
\label{subfig:certain}
\label{subfig:selected}
\label{subfig:not_selected}
\label{fig:compare_reject_accept}
\end{figure}
\textbf{{\textsc{AdaCore}}\xspace vs Forgettability and Uncertainty} Fig. \ref{subfig:forget}, \ref{subfig:certain} show mean forgettability and uncertainty in sliding windows of size 100 and 200 over examples sorted by {\textsc{AdaCore}}\xspace at the end of training. We see that {\textsc{AdaCore}}\xspace heavily biases its selections towards forgettable and uncertain points as training proceeds. Interestingly, Fig. \ref{subfig:forget} reveals that {\textsc{AdaCore}}\xspace avoids the most forgettable samples in favor of slightly more memorable ones, suggesting that {\textsc{AdaCore}}\xspace can better distinguish easier groups of examples. Figure \ref{subfig:certain} shows a similar bias towards uncertain samples. Fig. \ref{subfig:selected}, \ref{subfig:not_selected} show the most and least selected images by {\textsc{AdaCore}}\xspace, respectively. We see the redundancies in the never-selected images, whereas images frequently selected by {\textsc{AdaCore}}\xspace are quite diverse in color, angles, occluded subjects, and airplane models. This confirms the effectiveness of {\textsc{AdaCore}}\xspace in extracting the most crucial subsets for learning and eliminating redundancies.
\section{Conclusion} We proposed {\textsc{AdaCore}}\xspace, a method that leverages the topology of the dataset to extract salient subsets of large datasets for efficient machine learning. The key idea behind {\textsc{AdaCore}}\xspace is to dynamically incorporate the curvature { and gradient} of the loss function via an adaptive estimate of the Hessian to select weighted subsets (coresets) which closely approximate the preconditioned gradient of the full dataset. We proved exponential convergence rate for first and second-order optimization methods applied to {\textsc{AdaCore}}\xspace coresets, under certain assumptions. Our extensive experiments, using various optimizers e.g., SGD, AdaHessian, and Newton's method, show that {\textsc{AdaCore}}\xspace can extract higher quality coresets compared to baselines, rejecting potentially redundant data points. This speeds up the training of various machine learning models, such as logistic regression and neural networks, by over 4.5x while selecting fewer but more diverse data points for training.
\appendix \onecolumn \section{Proofs of Theorems}
\subsection{Proof of Theorem \ref{thm:newton}}
\newtonrestate*
\begin{proof} \label{proof:4.1} We prove Theorem \ref{thm:newton} (similarly to the proof of Newton's method in \cite{10.5555/993483}) for the following general update rule for $0\leq k\leq1$: \begin{align}
\Delta w_t = \mathbf{H}_t^{-k}\mathbf{g}_t\\
w_{t+1} = w_t - \eta\Delta w_t \end{align}
For $k=1$, this corresponds to the update rule of the Newton's method.
Define $\lambda(w_t) = (\mathbf{g}_t^T \mathbf{H}_t^{-k} \mathbf{g}_t)^{1/2}$.
Since ${\cal{L}} (w)$ is $\beta$-smooth, we have
\begin{align}
{\cal{L}} (w_{t+1}) &\leq {\cal{L}} (w_t)-\eta \mathbf{g}_t^T \Delta w_t + \frac{\eta^2 \beta \|\Delta w_t\|^2}{2}\\
&\leq {\cal{L}} (w_t)-\eta\lambda(w_t)^2+\frac{\beta}{2\alpha^k}\eta^2\lambda(w_t)^2,
\end{align}
where in the last inequality we used
\begin{align}
\lambda(w_t)^2=\Delta w_t^T \mathbf{H}_t^k \Delta w_t.
\end{align}
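For completeness, the identity follows by expanding $\Delta w_t = \mathbf{H}_t^{-k}\mathbf{g}_t$ and using the symmetry of $\mathbf{H}_t$:

```latex
\begin{align}
\Delta w_t^T \mathbf{H}_t^k \Delta w_t
  = (\mathbf{H}_t^{-k}\mathbf{g}_t)^T \mathbf{H}_t^k (\mathbf{H}_t^{-k}\mathbf{g}_t)
  = \mathbf{g}_t^T \mathbf{H}_t^{-k} \mathbf{g}_t
  = \lambda(w_t)^2.
\end{align}
```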
Therefore, using step size $\hat{\eta}=\frac{\alpha^k}{\beta}$, so that $w_{t+1} = w_t-\hat{\eta}\Delta w_t$, we obtain
\begin{align}
{\cal{L}} (w_{t+1})\leq {\cal{L}} (w_t)-\frac{1}{2}\hat{\eta}\lambda(w_t)^2
\end{align}
Since $\alpha I \preceq \mathbf{H}_t \preceq \beta I$, we have
\begin{align}
\lambda(w_t)^2 = \mathbf{g}_t^T \mathbf{H}_t^{-k} \mathbf{g}_t \geq \frac{1}{\beta^k}\|\mathbf{g}_t\|^2,
\end{align}
and therefore ${\cal{L}}$ decreases as follows,
\begin{align}
{\cal{L}} (w_{t+1}) - {\cal{L}} (w_t) \leq -\frac{1}{2\beta^k}\hat{\eta}\|\mathbf{g}_t\|^2 =-\frac{\alpha^k}{2\beta^{k+1}}\|\mathbf{g}_t\|^2.
\end{align}
Now for the subset, from Eq. \eqref{eq:main} we have that $\|\mathbf{H}^{-1}_t \textbf{g}_t-\sum_{j\in S}\gamma_{t,j}\mathbf{H}_{t,j}^{-1} \textbf{g}_{t,j}\|\leq\epsilon$. Hence, via reverse triangle inequality $\|\mathbf{H}^{-1}_t \textbf{g}_t\|\leq \|\sum_{j\in S}\gamma_{t,j}\mathbf{H}_{t,j}^{-1} \textbf{g}_{t,j}\|+\epsilon$, and we get
\begin{align}
\frac{\|\mathbf{g}_t\|}{\beta}\leq\| (\mathbf{H}_t)^{-1}\mathbf{g}_t\|\leq \| (\mathbf{H}^S_t)^{-1}\mathbf{g}^S_t\pmb\gamma\|+\epsilon \leq \frac{\|\mathbf{g}^S_t\|}{\alpha}+\epsilon,
\label{eq:ulb_subset}
\end{align}
where $\mathbf{g}^S_t = \sum_{j \in S} \mathbf{g}_{t,j}$ and $\mathbf{H}^S_t=\sum_{j\in S}\mathbf{H}_{t,j}$ are the gradient and Hessian of the subset, respectively. In Eq. \eqref{eq:ulb_subset} the RHS follows from operator norms and the LHS follows from
the following lower bound on the norm of the product of two matrices:
\begin{align}
\begin{aligned}
\|AB\| &= \max_{\|x\|=1} \|x^TAB\|\\
&= \max_{\|x\|=1} \|x^TA\|\left\|\frac{x^TA}{\|x^TA\|}B\right\|\\
&\ge \max_{\|x\|=1} \sigma_{\min}(A)\left\|\frac{x^TA}{\|x^TA\|}B\right\|\\
&= \max_{\|y\|=1} \sigma_{\min}(A)\left\|y^TB\right\|\\
&= \sigma_{\min}(A)\|B\|.
\end{aligned}
\label{stack}
\end{align}
Hence,
\begin{align}
\|\mathbf{g}_t^S\|\geq\frac{\alpha}{\beta} (\|\mathbf{g}_t\|-\beta\epsilon)
\end{align}
Therefore, on the subset we have
\begin{align}
{\cal{L}} (w_{t+1}) - {\cal{L}} (w_t)
&\leq -\frac{\alpha^k}{2\beta^{k+1}}\|\mathbf{g}^S_t\|^2\\
&\leq
-\frac{\alpha^{k}}{2\beta^{k+1}} (\frac{\alpha}{\beta})^2(\|\mathbf{g}_t\|-\beta\epsilon)^2\\
&= -\frac{\alpha^{k+2}}{2\beta^{k+3}} (\|\mathbf{g}_t\|-\beta\epsilon)^2.
\end{align}
The algorithm stops descending when $\|\mathbf{g}_t\|=\beta\epsilon$. From strong convexity we know that \begin{align}
\|\mathbf{g}_t\|=\beta\epsilon\geq \alpha\|w-w_*\| \end{align} Hence, we get \begin{align}
\|w-w_*\|\leq \beta\epsilon/\alpha. \end{align}
As such, we obtain Corollary \ref{thm:adahessian}; setting $k=1$ gives the proof of Theorem \ref{thm:newton}. \end{proof}
\textbf{Descent property for Equation \ref{eq:main}} \label{proof:diagconv} For a strongly convex function ${\cal{L}}$, the diagonal elements of the Hessian are positive \cite{yao2020adahessian}. This allows the diagonal to approximate the full Hessian while retaining its convergence properties.
Given a function ${\cal{L}} (w)$ which is strongly convex and strictly smooth, we have that $\nabla^2 {\cal{L}} (w)$ is upper and lower bounded by two constants $\beta$ and $\alpha$, so that $\alpha I \preceq \nabla^2 {\cal{L}} (w) \preceq \beta I$, for all $w$. For a strongly convex function the diagonal elements in diag($\mathbf{H}_{t}$) are all positive, and we have: \begin{align}
\alpha \leq e_i^T \mathbf{H}_{t} e_i = e_i^T \text{diag}(\mathbf{H}_{t}) e_i = \text{diag}(\mathbf{H}_{t})_{i,i} \leq \beta \end{align} where $e_i$ denotes the $i$-th natural basis vector. Therefore, the diagonal entries of $\text{diag}(\mathbf{H}_{t})$ lie in the range $[\alpha,\beta]$, and so does the average of any subset of them. As such, Eq. (\ref{eq:H_sub}) has the same convergence rate as its full-matrix counterpart, by the same proof as Theorem \ref{thm:newton}.
\subsection{Proof of Theorem \ref{thm:pl} and \ref{thm:pl-sgd}} \label{proof:4.3}
A loss function $\mathcal{L}(w)$ is considered $\mu$-PL on a set $\mathcal{S}$, if the following holds: \begin{align}
\frac{1}{2}\|\mathbf{g}\|^{2} \geq \mu\left(\mathcal{L}(w)-\mathcal{L}\left(w_{*}\right)\right), \forall w \in \mathcal{S}
\label{eq:pl} \end{align} where $w_{*}$ is a global minimizer. When additionally $\mathcal{L}\left(w_{*}\right) = 0$, the $\mu$-$\text{PL}$ condition is equivalent to the $\mu$-$\text{PL}^{*}$ condition \begin{align}
\frac{1}{2}\|\mathbf{g}\|^{2} \geq \mu\mathcal{L}(w), \forall w \in \mathcal{S}. \end{align}
\plrestate*
For Lipschitz continuous $\mathbf{g}$ and $\mu$-PL$^*$ condition, gradient descent on the entire dataset yields \begin{align}
{\cal{L}}(w_{t+1}) - {\cal{L}}(w_{t}) \leq -\frac{\eta} {2}\|\mathbf{g}_{t}\|^2 \leq -\eta\mu {\cal{L}}(w_{t}), \end{align}
and,
\begin{align}
{\cal{L}}(w_{t})\leq(1-\eta\mu)^t {\cal{L}}(w_0) \end{align} which was shown in \cite{liu2020toward}.
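This linear rate is easy to observe numerically on a toy quadratic satisfying the $\mu$-PL$^*$ condition (a sanity check under our own toy setup, not part of the proof):

```python
import numpy as np

def gd_losses(A, w0, eta, steps):
    """Run GD on L(w) = 0.5 w^T A w and record the loss per step; under
    mu-PL* the losses obey L(w_t) <= (1 - eta*mu)^t L(w_0)."""
    w, out = w0.copy(), []
    for _ in range(steps):
        out.append(0.5 * w @ A @ w)
        w = w - eta * (A @ w)
    return out

A = np.diag([0.5, 2.0])          # mu = lambda_min = 0.5, beta = 2.0
losses = gd_losses(A, np.array([1.0, 1.0]), eta=0.4, steps=20)
```

Here $\eta = 0.4 \leq 1/\beta$, so the decrease lemma applies and the losses stay below the $(1-\eta\mu)^t$ envelope.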
We now extend this result to training on an {\textsc{AdaCore}}\xspace subset.
\begin{proof}
From Eq. \eqref{eq:ulb_subset} we have that
\begin{align}
\frac{\|\mathbf{g}_t\|}{\beta}\leq\| (\mathbf{H}_t)^{-1}\mathbf{g}_t\|\leq \| (\mathbf{H}^S_t)^{-1}\mathbf{g}_t^S\pmb\gamma\|+\epsilon \leq \frac{\|\mathbf{g}_t^S\|}{\alpha}+\epsilon
\end{align}
Hence solving for $\|\mathbf{g}_t^S\|$ we have,
\begin{equation}
\|\mathbf{g}_t^S\|\geq \frac{\alpha}{\beta}(\|\mathbf{g}_t\|-\beta\epsilon).
\label{eq:s_error}
\end{equation}
For the subset we have
\begin{align}
{\cal{L}}(w_{t+1}) - {\cal{L}}(w_t)
&\leq -\frac{\eta}{2}\|\mathbf{g}_t^S\|^2
\end{align}
Substituting Eq. (\ref{eq:s_error}), we have
\begin{align}
&\leq -\frac{\eta \alpha^2}{2 \beta^2}(\|\mathbf{g}_t\|-\beta\epsilon)^2\\
&= -\frac{\eta\alpha^2} {2\beta^2}(\|\mathbf{g}_t\|^2+\beta^2\epsilon^2-2\beta\epsilon\|\mathbf{g}_t\|)\label{eq:pre_spectral_upper}\\
&\leq - \frac{\eta\alpha^2} {2\beta^2}(\|\mathbf{g}_t\|^2+\beta^2\epsilon^2-2\beta\epsilon \nabla_{\max})\label{eq:pl_before_ada}\\
&\leq -\frac{\eta\alpha^2} {2\beta^2}(2\mu {\cal{L}}(w_t)+\beta^2\epsilon^2-2\beta\epsilon \nabla_{\max}) \label{eq:pl_applied_adacore}
\end{align}
where we upper bounded the norm of $\mathbf{g}_t$ in Eq. (\ref{eq:pre_spectral_upper}) by a constant $\nabla_{\max}$, and Eq. (\ref{eq:pl_applied_adacore}) follows from the $\mu$-PL condition in Eq. (\ref{eq:pl}).
Hence,
\begin{align}
{\cal{L}}(w_{t+1}) \leq (1-\frac{\eta\mu\alpha^2} {\beta^2}) {\cal{L}}(w_t) - \frac{\eta\alpha^2} {2\beta^2}(\beta^2\epsilon^2-2\beta\epsilon \nabla_{\max})
\end{align}
Since $\sum_{j=0}^t(1-\frac{\eta\mu\alpha^2} {\beta^2})^j\leq\frac{\beta^2}{\eta\mu\alpha^2}$, for a constant learning rate $\eta$, unrolling the recursion we get
\begin{align}
{\cal{L}}(w_{t+1}) \leq (1-\frac{\eta\mu\alpha^2} {\beta^2})^{t+1} {\cal{L}}(w_0) - \frac{\eta\alpha^2} {2\beta^2}(\beta^2\epsilon^2-2\beta\epsilon \nabla_{\max})
\label{eq:adacore_convergence}
\end{align}
\end{proof}
\plsgdrestate* For Lipschitz continuous $\mathbf{g}$ and $\mu$-PL$^*$ condition, gradient descent on the entire dataset yields \begin{align}
{\cal{L}}(w_{t+1}) - {\cal{L}}(w_t) \leq -\frac{\eta} {2}\|\mathbf{g}_t\|^2 \leq -\eta\mu {\cal{L}}(w_t), \end{align}
and,
\begin{align}
{\cal{L}}(w_{t})\leq(1-\eta\mu)^t {\cal{L}}(w_0)\label{eq:grad_rate}, \end{align} which was shown in \cite{liu2020toward}.
We now extend this result to training on an {\textsc{AdaCore}}\xspace subset.
\begin{proof}
From Eq. \eqref{eq:ulb_subset} we have that
\begin{align}
\frac{\|\mathbf{g}_t\|}{\beta}\leq\| (\mathbf{H}_t)^{-1}\mathbf{g}_t\|\leq \| (\mathbf{H}^S_t)^{-1}\mathbf{g}_t^S\pmb\gamma\|+\epsilon \leq \frac{\|\mathbf{g}_t^S\|}{\alpha}+\epsilon
\end{align}
Hence solving for $\|\mathbf{g}_t^S\|$ we have,
\begin{equation}
\|\mathbf{g}_t^S\|\geq \frac{\alpha}{\beta}(\|\mathbf{g}_t\|-\beta\epsilon).
\end{equation}
For the subset we have
\begin{align}
{\cal{L}}(w_{t+1}) - {\cal{L}}(w_t)
&\leq -\frac{\eta}{2}\|\mathbf{g}_t^S\|^2
\end{align}
Fixing $w_t$ and taking expectation with respect to the randomness in the choice of the batch $i_t^{(1)} \dots i_t^{(m)}$ (noting that those indices are i.i.d.), we have
\begin{align}
\mathbb{E}_{i_t^{(1)} \dots i_t^{(m)}} [{\cal{L}}(w_{t+1}) - {\cal{L}}(w_t)]
&\leq -\eta(\alpha - \eta \frac{\beta}{m}(\alpha \frac{m-1}{2}+\beta)){\cal{L}}(w_t)\\
&\leq -\underbrace{\eta(1 - \frac{\eta \beta (m-1)}{2m})}_{c_1}\|\mathbf{g}_t\|^2 + \underbrace{\frac{\eta^2\beta \lambda}{m}}_{c_2}{\cal{L}}(w_t)\\
&\leq -c_1 \frac{\alpha^2}{\beta^2}(\|\mathbf{g}_t\| - \beta \epsilon)^2 + c_2{\cal{L}}(w_t)
\end{align}
We next upper bound the norm of $\mathbf{g}_t$ by the constant $\nabla_{\max}$ and apply the $\mu$-PL condition from Eq. (\ref{eq:pl}), assuming $\eta \leq \frac{2}{\beta}$:
\begin{align}
&\leq -c_1\frac{\alpha^2}{\beta^2}(\mu{\cal{L}}(w_t) - 2 \nabla_{max}\beta \epsilon + \beta^2\epsilon^2) + c_2 {\cal{L}}(w_t)\\
&\leq -\eta(1 - \frac{\eta \beta (m-1)}{2m})\frac{\alpha^2}{\beta^2}(\mu{\cal{L}}(w_t) - 2 \nabla_{max}\beta \epsilon + \beta^2\epsilon^2) + \frac{\eta^2\beta \lambda}{m} {\cal{L}}(w_t)\\
&= \eta(\mu\frac{\alpha^2}{\beta^2} -\eta \frac{\beta}{m}(\mu \frac{\alpha^2(m-1)}{\beta^2 2}+\lambda)){\cal{L}}(w_t) + \frac{\eta^2\beta \lambda}{m} {\cal{L}}(w_t)+ c_1 \frac{\alpha^2}{\beta^2}(2\nabla_{max} \beta \epsilon - \beta^2\epsilon^2)\\
&=\eta \mu \frac{\alpha^2}{\beta^2} (1 - \eta \beta \frac{m-1}{2m})\mathbb{E}[{\cal{L}}(w_t)] + \eta\frac{\alpha^2}{\beta^2}(1-\eta\beta\frac{m-1}{2m})(2\nabla_{max} \beta \epsilon - \beta^2\epsilon^2)
\end{align}
By optimizing the quadratic term in the upper bound with respect to $\eta$ we get $\eta = \frac{m}{\beta(m-1)}$.
\begin{align}
\mathbb{E}[{\cal{L}}(w_{t+1})] \leq (1-\frac{\mu \alpha^2m}{2\beta^2(m-1)}) \mathbb{E}[{\cal{L}}(w_t)] + \frac{\alpha^2m}{\beta^2}\frac{2\nabla_{max} \beta \epsilon - \beta^2\epsilon^2}{2\beta(m-1)}
\end{align}
Hence,
\begin{align}
\mathbb{E}[{\cal{L}}(w_{t+1})] \leq \left(1-\frac{\eta^*(m)\mu \alpha^2}{2\beta}\right) \mathbb{E}[{\cal{L}}(w_t)] + \frac{\alpha^2\eta^*(m)}{\beta}(\nabla_{max} \epsilon - \beta\epsilon^2/2)
\end{align}
\end{proof}
\subsection{Discussion on Greedy to Extract Near-optimal Coresets }\label{appx:alg} As discussed in Section \ref{sec:alg}, a greedy algorithm can be applied to find near-optimal coresets that estimate the general descent direction with an error of at most $\epsilon$ by solving the submodular cover problem Eq. \eqref{eq:cover}. For completeness, we include the pseudocode of the greedy algorithm in Algorithm \ref{alg:greedy}. The {\textsc{AdaCore}}\xspace algorithm is run per class.
\begin{algorithm}[ht]
\begin{algorithmic}[1]
\REQUIRE Set of component functions $f_i$ for $i \in V=[n]$.
\ENSURE Subset $S \subseteq V$ with corresponding per-element stepsizes $\{\gamma_j\}_{j\in S}$.
\STATE $S_0 \leftarrow \emptyset, s_0=0, i=0.$
\WHILE {$F(S) < C_1 - \epsilon$}
\STATE $j\in {\arg\max}_{e \in V\setminus S_{i-1}} F (e|S_{i-1})$
\STATE $S_i = S_{i-1}\cup \{j\}$
\STATE $i = i + 1$
\ENDWHILE
\FOR {$j=1$ to $|S|$}
\STATE{$\gamma_j = \sum_{i\in V} \mathbb{I} \big[ j = {\arg\min}_{s \in S} {\max_{w \in \mathcal{W}}} \|\mathbf{H}_{t,i}^{-1} \textbf{g}_{t,i}-\mathbf{H}_{t,s}^{-1} \textbf{g}_{t,s}\| \big]$}
\ENDFOR
\end{algorithmic}
\caption{\textsc{{\textsc{AdaCore}}\xspace} (Adaptive Coresets for Accelerating first and second order optimization methods) }
\label{alg:greedy} \end{algorithm}
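The greedy loop in Algorithm \ref{alg:greedy} can be sketched as follows, with a toy facility-location utility standing in for $F$ (all names and the toy similarity matrix are ours):

```python
import numpy as np

def greedy_cover(utility, V, target, eps):
    """Greedy submodular-cover loop: repeatedly add the element with the
    largest marginal gain F(e|S) until F(S) >= target - eps."""
    S = []
    while utility(S) < target - eps:
        gains = {e: utility(S + [e]) - utility(S) for e in V if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:
            break
        S.append(best)
    return S

# toy facility-location utility over a similarity matrix
sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
F = lambda S: sim[:, S].max(axis=1).sum() if S else 0.0
S = greedy_cover(F, [0, 1, 2], target=F([0, 1, 2]), eps=0.3)
```

On this toy instance, one representative of the near-duplicate pair plus the outlier already reach the cover target, mirroring how the algorithm drops redundant points.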
\section{Bounding the Norm of Difference Between Preconditioned Gradients}
\subsection{Convex Loss Functions}\label{proof:boundnormerror} We show the normed difference for ridge regression. Similar results can be deduced for other loss functions such as square loss \cite{allen2016exploiting}, logistic loss, smoothed hinge losses, etc.
For ridge regression $f_i(w)=\frac{1}{2} ( \langle x_i, w \rangle - y_i )^2 + \frac{\lambda}{2} \| w\|^2$, we have $\nabla f_i(w) = x_i (\langle x_i, w \rangle - y_i) + \lambda w$. Furthermore, for invertible Hessians $H_i$ and $H_j$, let $H^{-1}_i = A$ and $H^{-1}_j = B$. Therefore,
\begin{align}
\|A \nabla f_i(w)-B \nabla f_j(w)\| =\| A (x_i\langle x_i,w\rangle - x_iy_i + \lambda w) - B(x_j \langle x_j,w \rangle - x_jy_j + \lambda w) \|
\end{align}
\begin{align}
&= \| A x_i\langle x_i,w\rangle - Bx_j \langle x_j,w\rangle + Bx_j y_j - Ax_iy_i + \lambda(A-B)w \|\\
&= \| A x_i \langle x_i,w\rangle + Bx_j \langle x_i,w\rangle -Bx_j \langle x_i,w\rangle - B x_j \langle x_j,w\rangle \nonumber \\
&\quad\quad\quad\quad\quad + Bx_jy_j - Ax_iy_i + Bx_jy_i -Bx_jy_i + \lambda(A-B)w\|\\
&= \| \langle x_i,w \rangle (A x_i - Bx_j) + \langle x_i - x_j,w \rangle Bx_j + (y_j-y_i)Bx_j + y_i(Bx_j-Ax_i) + \lambda (A-B)w \|\\
&= \| (\langle x_i,w \rangle - y_i)(A x_i - Bx_j) + (\langle x_i - x_j,w \rangle + y_j -y_i )B x_j + \lambda (A-B)w \|\\
&\leq | \langle x_i,w \rangle - y_i | \|A x_i - Bx_j \| + |\langle x_i - x_j,w \rangle + y_j -y_i | \| Bx_j\| + \lambda \|(A-B)w\|\\
&\leq | \langle x_i,w \rangle - y_i | ( \|A - B \| \| x_i\| + \| B\| \| x_i - x_j\|) + |\langle x_i - x_j,w \rangle + y_j -y_i | \| Bx_j\| \nonumber \\
&\quad\quad\quad\quad\quad+ \lambda \|(A-B)w\| \\
&\leq O(\|w\|)( \| A - B \| + \| B\| \|x_i - x_j \| ) + O(\|w\|)\|B\| \|x_j\| \|x_i-x_j\| \\
&\leq O(\|w\|)( \| A\| +\| B \| + \| B\| \|x_i - x_j \| ) + O(\|w\|)\|B\| \|x_j\| \|x_i-x_j\| \label{eq:norm_inv_hess} \end{align}
In Eq. \eqref{eq:norm_inv_hess} we have the norm of the inverse of the Hessian matrix. Since $H$ is invertible we have $\min_i \sigma_i > 0$, \begin{align}
\min _{i} \sigma_{i}=\inf _{x \neq 0} \frac{\|H x\|_{2}}{\|x\|_{2}} \Longleftrightarrow \frac{1}{\min _{i} \sigma_{i}}=\sup _{x \neq 0} \frac{\|x\|_{2}}{\|H x\|_{2}}\\
\frac{1}{\min _{i} \sigma_{i}}=\sup _{x \neq 0} \frac{\|x\|_{2}}{\|H x\|_{2}}=\sup _{H^{-1} z \neq 0} \frac{\left\|H^{-1} z\right\|_{2}}{\|z\|_{2}}=\sup _{z \neq 0} \frac{\left\|H^{-1} z\right\|_{2}}{\|z\|_{2}}=\left\|H^{-1}\right\|_{2}, \end{align} where we substituted $Hx = z$ and used that $H^{-1}z = 0 \Longleftrightarrow z = 0$ since $H$ is invertible. Hence, \begin{align}
\|A \nabla f_i(w)-B \nabla f_j(w)\| &\leq O(\|w\|) \|B\| \|x_i - x_j\| \\
&\leq O(\|w\|) \|x_i - x_j \|
\end{align}
using that $\| x_i \| \leq 1$ and $|y_i-y_j|\approx 0$.
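The operator-norm identity $\|H^{-1}\|_2 = 1/\min_i \sigma_i$ used here can be sanity-checked numerically; a NumPy sketch (not part of the proof, on an arbitrary positive definite matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
H = M @ M.T + 5 * np.eye(5)                  # symmetric positive definite, hence invertible

sigma = np.linalg.svd(H, compute_uv=False)   # singular values of H
lhs = np.linalg.norm(np.linalg.inv(H), 2)    # spectral norm of H^{-1}
rhs = 1.0 / sigma.min()
assert abs(lhs - rhs) < 1e-10
```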
Assuming that $\|w\|$ is bounded for all $w \in \mathcal{W}$, an upper bound on the euclidean distance between preconditioned gradients can be precomputed.
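Concretely, such a precomputation reduces to scaling pairwise distances; a minimal NumPy sketch, where the constant `C` standing in for the unspecified $O(\cdot)$ factors is a placeholder:

```python
import numpy as np

def pairwise_bounds(X, w_max, C=1.0):
    """Upper bounds C * w_max * ||x_i - x_j|| for all pairs (i, j).

    X: (n, d) data matrix; w_max: assumed bound on ||w|| over the domain;
    C: hypothetical constant collecting the O(.) factors from the derivation.
    """
    sq = np.sum(X * X, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    return C * w_max * np.sqrt(d2)

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = pairwise_bounds(X, w_max=2.0)
assert B[0, 0] == 0.0                        # the bound vanishes for i == j
```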
\subsection{Neural Networks}\label{proof:boundnormederrornn} We closely follow proofs from \cite{katharopoulos2018not} and \cite{mirzasoleiman2020coresets} to show that the difference between the Hessian-inverse preconditioned gradients of an entire NN, between arbitrary data points $i$ and $j$, is bounded by a constant times the corresponding difference for the last layer of the NN.\\
Consider an $L$-layer perceptron, where $w^{(l)}\in \mathbb{R}^{M_l\times M_{l-1}}$ is the weight matrix for the $l^{th}$ layer with $M_l$ hidden units. Furthermore, assume $\sigma^{(l)}(\cdot)$ is a Lipschitz continuous activation function. Then we let,
\begin{align} x_{i}^{(0)} &=x_{i}, \\ z_{i}^{(l)} &=w^{(l)} x_{i}^{(l-1)}, \\ x_{i}^{(l)} &=\sigma^{(l)}\left(z_{i}^{(l)}\right) . \end{align} With, \begin{align} \Sigma_{l}^{\prime}\left(z_{i}^{(l)}\right) &=\operatorname{diag}\left(\sigma^{\prime(l)}\left(z_{i, 1}^{(l)}\right), \cdots \sigma^{\prime(l)}\left(z_{i, M_{l}}^{(l)}\right)\right) \\ \Delta_{i}^{(l)} &=\Sigma_{l}^{\prime}\left(z_{i}^{(l)}\right) w_{l+1}^{T} \cdots \Sigma_{L-1}^{\prime}\left(z_{i}^{(L-1)}\right) w_{L}^{T}. \end{align} We have,
\begin{align}
\| \mathbf{H}_{i}^{-1} \textbf{g}_{i}-& \mathbf{H}_{j}^{-1}\textbf{g}_{j} \| \\
=&\left\|\left(\Delta_{i}^{(l)} \Sigma_{L}^{\prime}(z_{i}^{(L)}) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}\right)(x_{i}^{(l-1)})^{T}-\left(\Delta_{j}^{(l)} \Sigma_{L}^{\prime}(z_{j}^{(L)}) (\mathbf{H}_{j}^{-1})^{(L)}\textbf{g}_{j}^{(L)}\right)(x_{j}^{(l-1)})^{T}\right\| \\
\leq &\left\|\Delta_{i}^{(l)}\right\| \cdot\left\|x_{i}^{(l-1)}\right\| \cdot\left\|\Sigma_{L}^{\prime}\left(z_{i}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}-\Sigma_{L}^{\prime}\left(z_{j}^{(L)}\right) (\mathbf{H}_{j}^{-1})^{(L)}\textbf{g}_{j}^{(L)}\right\| \\
&+\left\|\Sigma_{L}^{\prime}\left(z_{j}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}\right\| \cdot\left\|\Delta_{i}^{(l)}\left(x_{i}^{(l-1)}\right)^{T}-\Delta_{j}^{(l)}\left(x_{j}^{(l-1)}\right)^{T}\right\| \nonumber \\
\leq &\left\|\Delta_{i}^{(l)}\right\| \cdot\left\|x_{i}^{(l-1)}\right\| \cdot\left\|\Sigma_{L}^{\prime}\left(z_{i}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}-\Sigma_{L}^{\prime}\left(z_{j}^{(L)}\right) (\mathbf{H}_{j}^{-1})^{(L)}\textbf{g}_{j}^{(L)}\right\| \\
&+\left\|\Sigma_{L}^{\prime}\left(z_{j}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}\right\| \cdot\left(\left\|\Delta_{i}^{(l)}\right\| \cdot\left\|x_{i}^{(l-1)}\right\|+\left\|\Delta_{j}^{(l)}\right\| \cdot\left\|x_{j}^{(l-1)}\right\|\right) \nonumber \\
\leq & \underbrace{\max _{l}\left(\left\|\Delta_{i}^{(l)}\right\| \cdot\left\|x_{i}^{(l-1)}\right\|\right)}_{c_{l, i}} \cdot\left\|\Sigma_{L}^{\prime}\left(z_{i}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}-\Sigma_{L}^{\prime}\left(z_{j}^{(L)}\right) (\mathbf{H}_{j}^{-1})^{(L)}\textbf{g}_{j}^{(L)}\right\| \\
&+\underbrace{\left\|\Sigma_{L}^{\prime}\left(z_{i}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}\right\| \cdot \max _{l, i, j}\left(\left\|\Delta_{i}^{(l)}\right\| \cdot\left\|x_{i}^{(l-1)}\right\|+\left\|\Delta_{j}^{(l)}\right\| \cdot\left\|x_{j}^{(l-1)}\right\|\right)}_{c_{2}} \nonumber \end{align}
From \cite{katharopoulos2018not} and \cite{mirzasoleiman2020coresets}, we have that the variation of the gradient norm is mostly captured by the gradient of the loss function with respect to the pre-activation outputs of the last layer of our neural network. Here we have a similar result: the variation of the norm of the Hessian-inverse preconditioned gradient is mostly captured by the Hessian-inverse preconditioned gradient of the loss function with respect to the pre-activation outputs of the last layer of our neural network. Assuming $\left\|\Sigma_{L}^{\prime}\left(z_{i}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}\right\|$ is bounded, we get \begin{align}
\| \mathbf{H}_{i}^{-1} \textbf{g}_{i}-& \mathbf{H}_{j}^{-1}\textbf{g}_{j} \| \leq c_1 \left\|\Sigma_{L}^{\prime}\left(z_{i}^{(L)}\right) (\mathbf{H}_{i}^{-1})^{(L)}\textbf{g}_{i}^{(L)}-\Sigma_{L}^{\prime}\left(z_{j}^{(L)}\right) (\mathbf{H}_{j}^{-1})^{(L)}\textbf{g}_{j}^{(L)}\right\| + c_2 \end{align}
where $c_1, c_2$ are constants. The above holds for an affine operation followed by a slope-bounded non-linearity $\left(\left|\sigma^{\prime}(w)\right| \leq K\right)$.
\subsection{Analytic Hessian for Logistic Regression}\label{appx:hessian} Here we provide the analytical formulation of the Hessian of the binary cross entropy loss per data point $i$ with respect to weights $\mathbf{w}$ for Logistic Regression.
For Binary Logistic Regression we have a loss function ${\cal{L}}(\mathbf{w})$ defined as: \begin{align}
{\cal{L}}(\mathbf{w}) = \sum_{i=1}^{N} l_i(\mathbf{w})= - \sum_{i=1}^{N} \left[ y_i \ln(\hat{\sigma}_i) + (1-y_i) \ln(1-\hat{\sigma}_i) \right], \text{ where } \hat{\sigma}_i = \sigma(\mathbf{w^Tx_i}+b) \end{align} and $\sigma$ is the sigmoid function, so that $l_i(\mathbf{w})$ is the per-point binary cross entropy. \\
We form a Hessian matrix for each data point $i$ based on the per-point loss $l_i(\mathbf{w})$ as follows:
\[
H_i = \left(\begin{array}{@{}c|c@{}}
\frac{\partial^2}{\partial \mathbf{w}^2} l_i(\mathbf{w})
& \frac{\partial^2 }{\partial \mathbf{w} \partial b} l_i(\mathbf{w}) \\ \hline
\frac{\partial^2 }{\partial b \partial \mathbf{w}} l_i(\mathbf{w}) &
\begin{matrix} \frac{\partial^2 }{\partial b^2} l_i(\mathbf{w})
\end{matrix} \end{array}\right) =
\left(\begin{array}{@{}c|c@{}}
\begin{matrix}
\hat{\sigma}_i(1-\hat{\sigma}_i)\mathbf{ x_i x_i^T}
\end{matrix}
& \hat{\sigma}_i(1-\hat{\sigma}_i)\mathbf{ x_i }\\ \hline
{[\hat{\sigma}_i(1-\hat{\sigma}_i) \mathbf{x_i}]^T} &
\begin{matrix}
\hat{\sigma}_i (1-\hat{\sigma}_i)
\end{matrix} \end{array}\right) \] This allows us to analytically form the Hessian information per point which is needed to precompute a single coreset which will be used throughout training of the convex regularized logistic regression problem.
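The block matrix above can be cross-checked against a finite-difference Hessian of the per-point binary cross entropy (a NumPy sketch, not part of the derivation; the test point is arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(theta, x, y):
    """Per-point binary cross entropy; theta stacks (w, b)."""
    s = sigmoid(theta[:-1] @ x + theta[-1])
    return -(y * np.log(s) + (1 - y) * np.log(1 - s))

def analytic_hessian(theta, x):
    """The block matrix above: sigma_hat*(1-sigma_hat) * [x;1][x;1]^T."""
    s = sigmoid(theta[:-1] @ x + theta[-1])
    xb = np.append(x, 1.0)                  # augmented input (x, 1)
    return s * (1 - s) * np.outer(xb, xb)

def numeric_hessian(f, theta, eps=1e-5):
    """Central second differences of f at theta."""
    d = theta.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            t = theta.copy()
            t[i] += eps; t[j] += eps; fpp = f(t)
            t[j] -= 2 * eps; fpm = f(t)
            t[i] -= 2 * eps; fmm = f(t)
            t[j] += 2 * eps; fmp = f(t)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * eps ** 2)
    return H

x = np.array([0.5, -1.2]); y = 1.0
theta = np.array([0.3, -0.7, 0.1])          # (w_1, w_2, b)
Ha = analytic_hessian(theta, x)
Hn = numeric_hessian(lambda t: bce(t, x, y), theta)
assert np.allclose(Ha, Hn, atol=1e-4)
```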
\section{Further Empirical Evidence}\label{apx:empirical}
\subsection{{\textsc{AdaCore}}\xspace estimates full gradient closely, reaching smaller loss}
{\textsc{AdaCore}}\xspace obtains a better estimate of the preconditioned gradient by considering curvature and gradient information, compared to the state-of-the-art algorithm {\textsc{Craig}}\xspace and random subsets. This is quantified by calculating the difference between the weighted gradients of the coresets and the gradient of the complete dataset.
Fig.~\ref{fig:normed_grad_diff_subsets} shows the difference in loss reached by {\textsc{AdaCore}}\xspace vs {\textsc{Craig}}\xspace over different subset sizes. This shows that coresets selected using {\textsc{AdaCore}}\xspace to classify the Ijcnn1 dataset using logistic regression can reach a lower loss over varying subset sizes than {\textsc{Craig}}\xspace.
\begin{figure}\label{fig:normed_grad_diff_subsets}
\end{figure}
\subsection{Class imbalance CIFAR-10} \label{exp:clasimn} To provide further empirical evidence, we include results using a class-imbalanced version of the CIFAR-10 dataset for ResNet18. We skewed the class distribution linearly, keeping $90\%$ of class 9, $80\%$ of class 8 \dots $10\%$ of class 1, and $0\%$ of class 0, and trained for 200 epochs. Selecting a coreset for every epoch can be computationally expensive; instead, one can compute a coreset once every $R$ epochs. Here we investigate {\textsc{AdaCore}}\xspace's performance on various $R$ values. As Table \ref{table:class_imb_sgd_mom_full} shows, {\textsc{AdaCore}}\xspace can withstand class imbalance much better than {\textsc{Craig}}\xspace and randomly selected subsets. When $R=20$, {\textsc{AdaCore}}\xspace achieves $57.3\%$ final test accuracy, $+8.7\%$ above {\textsc{Craig}}\xspace, $+2.6\%$ above Random, $+27.4\%$ above {\textsc{GradMatch}}\xspace and $+36.2\%$ above {\textsc{Glister}}\xspace.
\begin{table}[h!] \caption{CIFAR-10 Class Imbalance, ResNet18. Final test accuracy and percent of full data selected (in parentheses). Trained with SGD + Momentum, selecting a coreset every $R$ epochs that is $S$ percent of the full dataset. Note {\textsc{AdaCore}}\xspace has greater accuracy while seeing fewer data points.} \label{table:class_imb_sgd_mom_full} \begin{tabular}{llll} \hline Accuracy & {\color[HTML]{333333} $S = 1\% R = 20$} & $S = 1\% R = 10$ & $S = 1\% R = 5$ \\ \hline {\textsc{AdaCore}}\xspace & $\textbf{57.3\%} \pm 0.5$ \; $(\textbf{5\%})$ & $\textbf{57.12\%} \pm 0.96$ \; $(\textbf{9.5\%})$ &$ \textbf{60.2\%} \pm 0.36$ \; $(\textbf{14.5\%})$ \\ {\textsc{Craig}}\xspace & $48.6\% \pm 0.8$ \; $(8\%)$ & $55\% \pm 1 $\;$(16\%)$ & $53.05\% \pm 0.24$ \;$(27.5\%)$ \\ Random & $54.7\% \pm 0.3 $\;$(8\%)$ & $54.6\% \pm 0.76$ \;$(18\%)$ & $54.6\% \pm 0.74$ \;$(33.2\%)$\\ {\textsc{GradMatch}}\xspace & $29.9\% \pm 0.4$ \;$(8.2\%)$ & $29.1\% \pm 0.8$\; $(14.7\%)$ & $32.75\% \pm 0.83$ \;$ (23.2\%)$\\ {\textsc{Glister}}\xspace & $21.1\% \pm 0.42$ \;$(8.6\%)$ &$ 17.2\% \pm 0.75$\;$(16\%)$ & $14.4\% \pm 0.83$ \; $(22.2\%)$ \end{tabular} \end{table}
Not only is {\textsc{AdaCore}}\xspace able to outperform {\textsc{Craig}}\xspace, random, {\textsc{GradMatch}}\xspace, and {\textsc{Glister}}\xspace, but it can do so while selecting a smaller fraction of the data points during training, as shown under all settings in Table \ref{table:class_imb_sgd_mom_full}.
\subsection{Class imbalance BDD100k}\label{exp:bdd100k} Additionally, we compared {\textsc{AdaCore}}\xspace to {\textsc{Craig}}\xspace and random selection for the BDD100k dataset, which has seven inherently imbalanced classes and 100k data points. We train ResNet50 with SGD + momentum for 100 epochs, choosing a subset of size $S = 10\%$ every $R = 20$ epochs on the weather prediction task. We see that {\textsc{AdaCore}}\xspace outperforms {\textsc{Craig}}\xspace by 2\% and random by 8.8\%, as seen in Table \ref{table:bdd100kepoch}. \begin{table}[ht]\caption{{\textsc{AdaCore}}\xspace outperforms other baseline subset selection algorithms as well as training on the full dataset, reaching a better accuracy in less time. This provides up to a 2.3x speedup compared to the state of the art.} \begin{tabular}{ll} \hline \textbf{SGD + Momentum Accuracy} & {\color[HTML]{333333} \textbf{S = 10\% R = 20}} \\ \hline {\textsc{AdaCore}}\xspace & 74.3\% \\ {\textsc{Craig}}\xspace & 72.3\% \\ Random & 65.5\% \end{tabular}\label{table:bdd100kepoch} \end{table}\\
Additionally, Table \ref{table:bdd100k} shows that {\textsc{AdaCore}}\xspace outperforms baseline methods on BDD100k, providing a 2.3x speedup vs.\ training on the entire dataset and a 1.8x speedup vs.\ random. We see that {\textsc{Craig}}\xspace, {\textsc{GradMatch}}\xspace, and {\textsc{Glister}}\xspace do not reach the accuracy of {\textsc{AdaCore}}\xspace even given more time and epochs. The epoch count is shown in parentheses next to the accuracy. These experiments were run with SGD+momentum.
\begin{table}[ht]\caption{{\textsc{AdaCore}}\xspace outperforms other baseline subset selection algorithms as well as training on the full dataset, reaching a better accuracy in less time. This provides up to a 2.3x speedup compared to the state of the art.} \begin{tabular}{@{}lllll@{}} \toprule \textbf{} & \textbf{BDD100k} & \textbf{} & \multicolumn{2}{l}{Speedup over} \\ \midrule
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}$S = 10\%$\\ R = 20\end{tabular}} &
\begin{tabular}[c]{@{}l@{}}Accuracy\\ (epoch)\end{tabular} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Time\\ (s)\end{tabular}} &
\multicolumn{1}{c}{Rand} &
\multicolumn{1}{c}{Full} \\ \midrule
\multicolumn{1}{l|}{{\textsc{AdaCore}}\xspace} & $74.3\%(100) $ & 7331 & \textbf{1.8} & \textbf{2.3} \\
\multicolumn{1}{l|}{{\textsc{Craig}}\xspace} & $73.1\%(150)$ & 10996 & 1.3 & 1.6 \\
\multicolumn{1}{l|}{Random} & $73.3\%(180)$ & 13050 & 1 & 1.2 \\
\multicolumn{1}{l|}{{\textsc{GradMatch}}\xspace} & $72\%(200)$ & 14040 & .7 & 1.1 \\
\multicolumn{1}{l|}{{\textsc{Glister}}\xspace} & $73\%(200)$ & 12665 & 1.03 & 1.2 \\
\multicolumn{1}{l|}{Full Dataset} & $74.3\% (45)$ & 16093 & 0.8 & 1 \\ \bottomrule \end{tabular}\label{table:bdd100k} \end{table}
\subsection{CIFAR-100}\label{exp:cifar100} Table \ref{table:cifar100} shows that {\textsc{AdaCore}}\xspace outperforms baseline methods on CIFAR100, providing a 2.8x speedup vs.\ training on the entire dataset and a 4.3x speedup vs.\ Random. We see that {\textsc{Craig}}\xspace, {\textsc{GradMatch}}\xspace and {\textsc{Glister}}\xspace do not reach the accuracy of {\textsc{AdaCore}}\xspace even given more time and epochs. The epoch count is shown in parentheses next to the accuracy. These experiments were run with SGD+momentum. \begin{table}[ht]\caption{{\textsc{AdaCore}}\xspace outperforms other baseline subset selection algorithms as well as training on the full dataset, reaching a better accuracy in less time. This provides up to a 4.3x speedup compared to the state of the art.}
\begin{tabular}{@{}lllll@{}} \toprule \textbf{} & \textbf{CIFAR100} & & \multicolumn{2}{l}{Speedup over} \\ \midrule
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}S = 10\%\\ R = 20\end{tabular}} &
\begin{tabular}[c]{@{}l@{}}Accuracy\\ (epoch)\end{tabular} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}Time\\ (s)\end{tabular}} &
\multicolumn{1}{c}{Rand} &
\multicolumn{1}{c}{Full} \\ \midrule
\multicolumn{1}{l|}{{\textsc{AdaCore}}\xspace} & 58.8\%(200) & 341 & \textbf{4.3} & \textbf{2.8} \\
\multicolumn{1}{l|}{{\textsc{Craig}}\xspace} & 57.3\%(250) & 426 & 3.5 & 2.2 \\
\multicolumn{1}{l|}{Random} & 58.1\%(864) & 1470 & 1 & 0.65 \\
\multicolumn{1}{l|}{{\textsc{GradMatch}}\xspace} & 57\%(200) & 980 & 1.5 & 0.97 \\
\multicolumn{1}{l|}{{\textsc{Glister}}\xspace} & 56\%(300) & 1110 & 1.3 & 0.86 \\
\multicolumn{1}{l|}{Full Dataset} & 59\% (40) & 960 & 1.5 & 1 \\ \bottomrule \end{tabular}\label{table:cifar100}
\end{table}
\subsection{When first order coresets fail, continued} \label{apx:wcf}
By preconditioning with curvature information, {\textsc{AdaCore}}\xspace is able to magnify smaller gradient dimensions that would otherwise be ignored during coreset selection. Moreover, it allows {\textsc{AdaCore}}\xspace to include points with similar gradients but different curvature properties. Hence, {\textsc{AdaCore}}\xspace can select more diverse subsets compared to {\textsc{Craig}}\xspace as well as {\textsc{GradMatch}}\xspace. This allows {\textsc{AdaCore}}\xspace to outperform first order coreset methods in many regimes, such as when subset size is large (e.g. $\geq$10\%) and for larger batch size (e.g. $\geq$ 128).
\begin{figure}\label{fig:when_craig_fails}
\end{figure}
In addition to the results shown in Figure \ref{subfig:R18_acc} (reproduced here as Fig.~\ref{fig:repeat_craig_vs_ada}), where $R=1$, {\textsc{AdaCore}}\xspace outperforms {\textsc{Craig}}\xspace as well as {\textsc{GradMatch}}\xspace when we increase the coreset selection period $R$. Fig.~\ref{fig:when_craig_fails} shows that for larger $R$, first-order methods succumb to catastrophic forgetting each time a new subset is chosen, whereas {\textsc{AdaCore}}\xspace achieves a smooth rise in classification accuracy. This increased stability between coresets is another benefit of {\textsc{AdaCore}}\xspace's greater selection diversity. Interestingly, {\textsc{AdaCore}}\xspace achieves higher final test accuracy while selecting a smaller fraction of data points to train on during training than {\textsc{Craig}}\xspace. Note that since {\textsc{AdaCore}}\xspace takes curvature into account while selecting the coresets, it can successfully select data points with a similar gradient but different curvature properties and extract a more diverse set of data points than {\textsc{Craig}}\xspace. However, as the coresets found by {\textsc{AdaCore}}\xspace provide a close estimation of the full preconditioned gradients for several epochs during training, the number of distinct data points selected by {\textsc{AdaCore}}\xspace is smaller than for {\textsc{Craig}}\xspace.
\begin{figure}\label{fig:repeat_craig_vs_ada}
\label{fig:repeat_craig_vs_ada_1000}
\end{figure}
For completeness we provide Fig.~\ref{fig:repeat_craig_vs_ada_1000}, in which we allow random subset selection to train for 1000 epochs. We see that it takes over 4.5x longer for Random to approach the accuracy of ResNet18 trained with {\textsc{AdaCore}}\xspace and Full. We use the same experimental setup as in Fig.~\ref{fig:repeat_craig_vs_ada}.
\subsection{MNIST} \label{exp:mnist} For our MNIST classifier, we use a fully-connected hidden layer of 100 nodes and ten softmax output nodes; sigmoid activation and L2 regularization with $\mu = 10^{-4}$ and mini-batch size of 32 on the MNIST dataset of handwritten digits containing 60,000 training and 10,000 test images, all normalized to [0,1] by division by 255. We apply SGD with a momentum of 0.9 to subsets of size 40\% of the dataset chosen at the beginning of each epoch found by {\textsc{AdaCore}}\xspace, {\textsc{Craig}}\xspace, and random. Fig.~\ref{fig:mnist} compares the training loss and test accuracy of the network trained on coresets chosen by {\textsc{AdaCore}}\xspace, {\textsc{Craig}}\xspace, and random, with that of the entire dataset. We see that {\textsc{AdaCore}}\xspace can benefit from the second-order information and effectively finds subsets that achieve superior performance to that of baselines and the entire dataset. At the same time, it achieves a 2.5x speedup over training on the entire dataset. \begin{figure}\label{fig:mnist}
\end{figure}
\subsection{How batch size affects coreset performance} We see in Table \ref{table:batch_pm} that training with larger batch sizes on subsets selected by {\textsc{AdaCore}}\xspace achieves superior accuracy. We reproduce Table \ref{table:batch} here with standard deviation values. \begin{table}[ht]
\caption{Training ResNet18 with $S$=1\% subsets every $R$=1 epoch from CIFAR10 using batch size $b$= 512, 256, 128. {\textsc{AdaCore}}\xspace can leverage a larger mini-batch size and obtain a larger accuracy gap over {\textsc{Craig}}\xspace and Random. For $b$=512, we have 1 mini-batch (GD). }\label{table:batch_pm}
\begin{small}
\begin{tabular}{l|lllll} \hline
& \textsc{{\textsc{AdaCore}}\xspace} & {\textsc{Craig}}\xspace & Rand & \begin{tabular}[c]{@{}l@{}}Gap/\\ {\textsc{Craig}}\xspace\end{tabular} & \begin{tabular}[c]{@{}l@{}}Gap/\\ Rand\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}GD~~ b=512\end{tabular} & $58.32\% \pm 0.45$ & $56.32\% \pm 0.32$ & $49.14\% \pm 1.19$ & $1.69\%$ & $8.91\%$ \\ \begin{tabular}[c]{@{}l@{}}SGD b=256\end{tabular} & $68.23\% \pm 0.2$ & $58.3\% \pm 1.38$ & $60.7\% \pm 1.04$ & $9.93\%$ & $8.16\%$ \\ \begin{tabular}[c]{@{}l@{}}SGD b=128\end{tabular} & $66.89\% \pm 0.73$& $58.17\% \pm 1.34$ & $65.46\% \pm0.93$ & $8.81\%$ & $1.52\%$ \end{tabular} \end{small} \end{table}
\subsection{Potential Social Impacts} Regarding social impact, our coreset method outperforms other methods in accuracy while selecting fewer data points over training and providing over a 2.5x speedup. This allows for a more efficient learning pipeline and can significantly decrease the financial and environmental costs of learning from big data: the financial costs are due to expensive computational resources, and the environmental costs are due to substantial energy consumption and the resulting carbon footprint.
\end{document}
\begin{document}
\begin{abstract} We prove that five ways to define entry {\bf A086377} in the OEIS do lead to the same integer sequence.
\noindent{\em Key words.} Sturmian word; morphic sequence; Beatty sequence; continued fraction.
\end{abstract}
\maketitle
\section{Introduction} \noindent In September of 2003 Benoit Cloitre contributed a sequence to the On-Line Encyclopedia of Integer Sequences \cite{A086377}, defined by him as $a_1=1$, and for $n\geq 2$ by \begin{equation} a_n= \left\{ \begin{aligned} a_{n-1}+2&\quad{\rm{if\:}} n \: {\rm is\: in \: the \: sequence,}\\ a_{n-1}+2&\quad{\rm{if\:}} n \: {\rm and\:} n\!-\!1 \: {\rm are\: not \: in \: the \: sequence,}\\ a_{n-1}+3&\quad{\rm{if\:}} n \: {\rm is\: not\: in \: the \: sequence, but\:} n\!-\!1 \: {\rm is\: in \: the \: sequence.} \end{aligned}\right. \end{equation} The first 25 values of this sequence are $$1, 4, 6, 8, 11, 13, 16, 18, 21, 23, 25, 28, 30, 33, 35, 37, 40, 42, 45, 47, 49, 52, 54, 57, 59.$$ The purpose of this paper is to prove equivalence of five ways to define this integer sequence, most of them already conjecturally stated in the OEIS article on {\bf A086377}. Besides a simplified recursion, the alternatives involve statements in terms of a morphic sequence, of a Beatty sequence, and of approximation properties linking a classical continued fraction of $\frac{4}{\pi}$ to that of $\sqrt{2}$. \section{The theorem} \begin{theorem}\label{thm} The following five definitions produce the same integer sequence: \begin{itemize} \item[$(a_n)$] defined by $a_1=1$ and for $n\geq 2$: \begin{equation*}\label{eq:A086377} a_n= \left\{ \begin{aligned} a_{n-1}+2&\quad{\rm{if\:}} n \: {\rm is\: in \: the \: sequence,}\\ a_{n-1}+2&\quad{\rm{if\:}} n \: {\rm and\:} n\!-\!1 \: {\rm are\: not \: in \: the \: sequence,}\\ a_{n-1}+3&\quad{\rm{if\:}} n \: {\rm is\: not\: in \: the \: sequence, but\:} n\!-\!1 \: {\rm is\: in \: the \: sequence.} \end{aligned}\right. \end{equation*} \item[$(b_n)$] defined by $b_1=1$ and for $n\geq 2$: \begin{equation*}\label{eq:short} b_n= \left\{ \begin{aligned} b_{n-1}+2&\quad{\rm{if\:}} n\!-\!1\: {\rm is\: not \: in \: the \: sequence,}\\ b_{n-1}+3&\quad{\rm{if\:}} n\!-\!1 \: {\rm is\: in \: the \: sequence.} \end{aligned}\right. 
\end{equation*} \item[$(c_n)$] for $n\geq 1$ defined as the position of the $n$-th zero in the fixed point of the morphism \begin{equation*}\label{eq:morphism} \phi:\ \left\{ \begin{aligned} 0&\mapsto 011\\ 1&\mapsto 01 \end{aligned}\right.; \end{equation*} \item[$(d_n)$] defined by $d_n=\left\lfloor (1+\sqrt{2})\cdot n -\frac12\sqrt{2}\right\rfloor$ for $n\geq 1$; \item[$(e_n)$] defined by $e_n=\lceil r_n\rfloor=\lfloor r_n+\frac{1}{2}\rfloor$, with $r_1=\dfrac{4}{\pi}$ and $r_{n+1}=\dfrac{n^2}{r_{n}-(2n-1)}$, for $n\geq 1$. \end{itemize} \end{theorem} \noindent At first we found it hard to believe the equivalence of these definitions, but a verification of the first 130000 terms ($a_{130000}= 313847$) convinced us to look for proofs.
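Such a verification is easy to reproduce; the following Python sketch (not part of the argument) checks definition $(a_n)$ against the listed values and against the Beatty formula $(d_n)$. Membership of $n$ is decidable from the terms computed so far, since the terms increase by at least 2.

```python
from math import floor, sqrt

def a_terms(count):
    """Definition (a_n): terms increase by at least 2, so whether n is in
    the sequence is determined by the terms already generated."""
    terms = [1]
    member = {1}
    for n in range(2, count + 1):
        if n in member or (n not in member and (n - 1) not in member):
            terms.append(terms[-1] + 2)
        else:                       # n not in the sequence, n-1 is
            terms.append(terms[-1] + 3)
        member.add(terms[-1])
    return terms

def d_terms(count):
    """Definition (d_n): the Beatty-type formula."""
    return [floor((1 + sqrt(2)) * n - sqrt(2) / 2) for n in range(1, count + 1)]

assert a_terms(25) == [1, 4, 6, 8, 11, 13, 16, 18, 21, 23, 25, 28, 30,
                       33, 35, 37, 40, 42, 45, 47, 49, 52, 54, 57, 59]
assert a_terms(3000) == d_terms(3000)
```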
\section{Simplification and a morphic sequence} \noindent To show that $(b_n)$ defines the same sequence as $(a_n)$, simply note that $a_n-a_{n-1}\ge 2$ for all $n$: hence if $n$ is in the sequence then $n-1$ is not, and we can combine the first two cases in the definition of $(a_n)$.
In a comment to sequence {\bf A086377}, Clark Kimberling asked if the integers in this sequence coincide with the positions of the zeroes in sequence {\bf A189687}, which is the fixed point of the substitution \begin{equation*} \phi:\ \left\{ \begin{aligned} 0&\mapsto 011\\ 1&\mapsto 01 \end{aligned}\right., \end{equation*} defining the sequence $(c_n)$ in the Theorem. It is not hard to see that this indeed produces the same as sequence $(b_n)$; repeatedly applying the morphism $\phi$ to $0$ produces after a few steps the initial segment $$0110101011010110101101010110101101010110101101010110101101011\cdots.$$ The position $c_n$ of the $n$-th zero is 2 ahead of $c_{n-1}$ precisely when the latter is followed by a single~1, that is, when there is a 1 at position~$n-1$, and it is 3 ahead of $c_{n-1}$ if that zero is followed by~11, which means that there was a 0 at position~$n-1$. Thus the rule is exactly that defining~$(b_n)$.
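The correspondence can also be observed directly by iterating the morphism (a Python sketch; since $\phi(0)$ starts with $0$, iteration from $0$ produces ever longer prefixes of the fixed point):

```python
def phi_fixed_point(length):
    """Prefix of the fixed point of phi: 0 -> 011, 1 -> 01."""
    s = "0"
    while len(s) < length:
        s = "".join("011" if ch == "0" else "01" for ch in s)
    return s[:length]

w = phi_fixed_point(100)
assert w.startswith("0110101011010110101")
c = [i + 1 for i, ch in enumerate(w) if ch == "0"]   # 1-based zero positions
assert c[:11] == [1, 4, 6, 8, 11, 13, 16, 18, 21, 23, 25]
```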
\section{Beatty sequence} \noindent Every pair of real numbers $\alpha$ and $\beta$ determines a Beatty sequence by $${\rm B}(\alpha, \beta)_n :=\lfloor n\alpha+\beta \rfloor, \qquad n=1,2,\dots .$$ The numbers $\alpha$ and $\beta$ also determine sequences by
$$ {\rm St}(\alpha, \beta)_n :=\lfloor (n+1)\alpha+\beta \rfloor-\lfloor n\alpha+\beta \rfloor, \qquad n=1,2,\dots ,$$ which is a Sturmian sequence (of slope $\alpha$), over the alphabet $\{0, 1\}$, provided that $0\leq\alpha<1$.
Thus Sturmian sequences are first differences of Beatty sequences (when $0\le \alpha<1$), but Beatty sequences and Sturmian sequences are also linked in another way.
\begin{lemma}\label{lem:pos1} Let $\alpha>1$ be irrational, and let $(s_n)_{n\ge 1}$ be given by $s_n={\rm St}(\frac{1}{\alpha}, -\frac{\beta}{\alpha})_n$, for some real number $\beta$ with $\alpha+\beta> 1$ and such that $k\alpha+\beta \not\in \mathbb{Z}$ for all positive integers $k$. Then ${\rm B}(\alpha, \beta)$ is the sequence of positions of $1$ in $(s_n)$. \end{lemma}
\begin{proof} This is a generalization of Lemma 9.1.3 in \cite{Allou}, from homogeneous to inhomogeneous Sturmian sequences.
The proof also generalizes: \begin{eqnarray*} \exists\, k\ge 1: \: n = \lfloor k\alpha + \beta\rfloor & \Longleftrightarrow & \exists\, k\ge 1: \: n \le k\alpha + \beta < n+1\\ & \Longleftrightarrow & \exists\, k\ge 1: \: \frac{n-\beta}{\alpha} \le k < \frac{n+1-\beta}{\alpha}\\ & \Longleftrightarrow & \exists\, k\ge 1: \: \bigg\lfloor\frac{n-\beta}{\alpha}\bigg\rfloor=k-1 {\rm \: and \:} \bigg\lfloor\frac{n-\beta}{\alpha}+\frac1{\alpha}\bigg\rfloor=k\\ & \Longleftrightarrow & \bigg\lfloor\frac{n+1}{\alpha}-\frac{\beta}{\alpha}\bigg\rfloor- \bigg\lfloor\frac{n}{\alpha}-\frac{\beta}{\alpha}\bigg\rfloor=1\\ & \Longleftrightarrow & {\rm St}\bigg(\frac1\alpha, -\frac{\beta}{\alpha}\bigg)_n=1. \hspace{12em} \qedhere \end{eqnarray*} \end{proof} \noindent Our goal in this section is to prove that $(c_n)=(d_n)$. Let $\psi$ be the morphism $\psi: \mor{10}{100}$, and let $w$ be the fixed point. Then $$w = 1001010100101001010010101001010010101001010010101001010010100\cdots,$$ which is the mirror image of $\phi$ in the definition of $(c_n)$, i.e., $\psi=E\phi E$, with $E$ the exchange morphism given by $E(0)=1, E(1)=0$. So the positions of $0$ in the fixed point of $\phi$ correspond to the positions of $1$ in the fixed point $w$ of $\psi$.
Let $\alpha_d=1+\sqrt{2}$ and $\beta_d=-\frac12\sqrt{2}$; then $d_n={\rm B}(\alpha_d, \beta_d)_n$, for $n\geq1$.
Applying Lemma~\ref{lem:pos1}, we deduce that $d_n$ also equals the position of the $n$-th $1$ in the Sturmian sequence ${\rm St}(\alpha,\beta)$, generated by $$\alpha=\frac1{\alpha_d}=\sqrt{2}-1, \; \beta=\frac{-\beta_d}{\alpha_d}= 1-\frac12\sqrt{2}. $$
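Numerically, Lemma~\ref{lem:pos1} with this choice of parameters can be checked as follows (a Python sketch; floating point suffices here because $k\alpha+\beta$ stays well away from the integers in this range):

```python
from math import floor, sqrt

alpha_d = 1 + sqrt(2)
beta_d = -sqrt(2) / 2
d = [floor(alpha_d * n + beta_d) for n in range(1, 101)]   # B(alpha_d, beta_d)

a, b = 1 / alpha_d, -beta_d / alpha_d      # slope sqrt(2)-1, intercept 1-sqrt(2)/2
st = [floor((n + 1) * a + b) - floor(n * a + b) for n in range(1, 300)]
ones = [n for n, s in zip(range(1, 300), st) if s == 1]    # positions of 1

assert ones[:100] == d
```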
\begin{lemma}\label{lem:psi} ${\rm St}\big(\sqrt{2}-\!1, 1-\frac12\sqrt{2}\big)=w$. \end{lemma}
\begin{proof} This was already proved by Nico de Bruijn in 1981 \cite{Bruijn de}, where it is the main example. Note, however, that our Sturmian sequences start at $n=1$.
For a `modern' proof as suggested by \cite[Section 4]{Dekk-Stur}, let $\psi_1$ and $\psi_2$ be the elementary morphisms given by $\psi_1(0)=01, \psi_1(1)=0$, and $\psi_2(0)=10, \psi_2(1)=0$. Then $\psi=\psi_2\psi_1E$. This implies that the fixed point $w$ of $\psi$ is a Sturmian word (see \cite[Corollary 2.2.19]{Lothaire}). To find its parameters $(\alpha,\beta)$, use the 2D fractional linear maps that describe how the parameters of a Sturmian word change when one applies an elementary morphism. For Sturmian words starting at $n=0$, the maps for $E, \psi_1$ and $\psi_2$ are\footnote{Actually there is a subtlety here involving the ceiling representation of a Sturmian sequence, but that does not apply in our case since $\beta \not\in \mathbb{Z} \alpha + \mathbb{Z}$.} respectively (see \cite[Lemma 2.2.17, Lemma 2.2.18, Exercise 2.2.6]{Lothaire}) \begin{equation*}T_0(x,y)=(1-x,1-y),\; T_1(x,y)=\left(\frac{1-x}{2-x},\frac{1-y}{2-x}\right), \; T_2(x,y)= \left(\frac{1-x}{2-x},\frac{2-x-y}{2-x}\right). \end{equation*} The change of parameters by applying $\psi$ is therefore the composition \begin{equation*} T_{210}(x,y):=T_2T_1T_0(x,y)= \left(\frac{1}{2+x},\frac{2+x-y}{2+x}\right). \end{equation*} But the parameters $\alpha$ and $\beta$ of $w$ do \emph{not} change when one applies $\psi$. This means that $(\alpha,\beta)$ is a fixed point of $T_{210}$, and one easily computes $\alpha=\sqrt{2}-1$, and then $\beta=\frac12 \sqrt{2}$. Since our Sturmian words start at $n=1$, we have to subtract $\alpha$ from $\beta$ and obtain that $w = {\rm St}\big(\sqrt{2}-\!1, 1-\frac12\sqrt{2}\big)$. \end{proof}
\section{Converging recurrence} \noindent In a comment to entry {\bf A086377}, Joseph Biberstine conjectured a beautiful connection with the infinite continued fraction expansion $$\frac{4}{\pi}= 1+{\strut 1^2\over \displaystyle{3+{\strut 2^2\over \displaystyle{5+{\strut 3^2\over \displaystyle{7+{\strut 4^2\over \displaystyle{9+{\strut 5^2\over \displaystyle{11+{\strut 6^2\over\ddots}}}}}}}}}}},$$ derived from the arctangent function expansion. If we define $R_n$ for $n\geq 1$ by $$R_n= 2n-1+{\strut n^2\over \displaystyle{2n+1+{\strut (n+1)^2\over \displaystyle{2n+3+{\strut (n+2)^2\over \displaystyle{2n+5+{\strut (n+3)^2\over\ddots}}}}}}},$$ then $R_1 = 4/\pi$ and $\displaystyle R_n=2n-1+\frac{n^2}{R_{n+1}}$. We see that $$\frac{R_n}{n}\frac{R_{n+1}}{n+1}-\frac{2n-1}{n}\frac{R_{n+1}}{n+1}- \frac{n^2}{n(n+1)}=0.$$ This implies that if $R_n/n$ converges, for $n\rightarrow\infty$, then it does so to a (positive) zero of $x^2-2x-1$, that is, to $1+\sqrt{2}$; cf.~Lemma \ref{lemma:rn} below.
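Since the backward maps $x \mapsto 2k-1+k^2/x$ are contractions near the true tails, the continued fraction can be evaluated bottom-up; the following Python sketch (using the crude tail value $2N-1$, whose error the contractions wash out) recovers $R_1 = 4/\pi$:

```python
from math import pi

def R(n, depth=60):
    """Approximate R_n by truncating the continued fraction at n + depth."""
    value = 2 * (n + depth) - 1          # crude tail approximation
    for k in range(n + depth - 1, n - 1, -1):
        value = 2 * k - 1 + k * k / value
    return value

assert abs(R(1) - 4 / pi) < 1e-9
```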
We consider now, conversely and slightly more generally, for any real $h\geq 1$, a sequence of positive numbers $r_n$ satisfying \begin{equation} \label{e:r} r_n = h n - 1 + \frac{n^2}{r_{n+1}} \end{equation} for $n \ge 1$. We first show that this sequence is unique, i.e., there is a unique $r_1 > 0$ such that $r_n > 0$ for all $n \ge 1$, and give estimates for its terms.
\begin{lemma}\label{lemma:rn} For each $h \ge 1$, there is a unique sequence of positive real numbers $(r_n)_{n\ge1}$ satisfying the recurrence~\eqref{e:r}. Moreover, we have for this sequence, for all $n \ge 1$, \begin{equation} \label{e:rn} 0 < r_n - \alpha n + c < \frac{(\alpha-c)(c-1)}{\alpha n} \end{equation} with $\alpha = \dfrac{h+\sqrt{h^2+4}}{2}$ and $c = \dfrac{1+\alpha}{2\alpha-h} = \dfrac{1}{2} + \dfrac{h+2}{2\sqrt{h^2+4}}$. \end{lemma}
\begin{proof}
Let $f_n(x) = hn-1+n^2/x$. Suppose that a sequence of positive numbers $r_n$ satisfies~\eqref{e:r}, i.e., that $f_n(r_{n+1}) = r_n$ for all $n \ge 1$. Then we have $r_n > hn-1$ and thus $r_n < (h+1/h) n$ for all $n \ge 1$. We deduce that there exists some $\delta > 0$ and $N \ge 1$ such that $r_n > (h+\delta) n$ for all $n \ge N$. Suppose that there is another sequence of positive numbers $\tilde{r}_n$ satisfying~\eqref{e:r}. Since $|f_n'(x)| = |n/x|^2 < 1/(h+\delta)$ for all $x > (h+\delta)n$, we have \[
|r_N - \tilde{r}_N| = |f_N f_{N+1} \cdots f_{n-1}(r_n) - f_N f_{N+1} \cdots f_{n-1}(\tilde{r}_n)| < \frac{|r_n - \tilde{r}_n|}{(h+\delta)^{n-N}} < \frac{n/h}{(h+\delta)^{n-N}} \] for all $n \ge N$, hence $r_N = \tilde{r}_N$, which implies that $r_n = \tilde{r}_n$ for all $n \ge 1$.
Next we show that \[ f_n\big(\alpha (n+1) - c\big) < \alpha n - c + \frac{(\alpha-c)(c-1)}{\alpha n} \] and \[ f_n\Big(\alpha (n+1) - c + \frac{(\alpha-c)(c-1)}{(n+1)\alpha}\Big) > \alpha n - c. \] Indeed, using that $\alpha^2 = h \alpha + 1$ and $2\alpha c - h c = 1 + \alpha$, we have \begin{align*} (\alpha n + \alpha - c)\, f_n\big(\alpha (n+1) - c\big) & = (hn-1) (\alpha n + \alpha - c) + n^2 \\ & = (h\alpha+1) n^2 + (h\alpha-hc-\alpha) n - (\alpha-c) \\ & < \alpha^2 n^2 + (\alpha^2 - 2\alpha c) n - (\alpha-c) + \frac{(\alpha-c)^2(c-1)}{\alpha n} \\ & = (\alpha n + \alpha - c) \Big(\alpha n - c + \frac{(\alpha-c)(c-1)}{\alpha n}\Big), \end{align*} and \begin{align*} \Big(\alpha n + \alpha - c + \frac{(\alpha-c)(c-1)}{\alpha(n+1)}\Big) (\alpha n - c) \hspace{-9em} & \hspace{9em} < \alpha^2 n^2 + (\alpha^2 - 2\alpha c) n - (\alpha-c) - \frac{c(\alpha-c)(c-1)}{\alpha(n+1)} \\ & < (h\alpha+1) n^2 + (h\alpha-hc-\alpha) n - (\alpha-c) - \frac{(\alpha-c)(c-1)}{\alpha(n+1)} \\ & < (hn-1) \Big(\alpha n + \alpha - c + \frac{(\alpha-c)(c-1)}{\alpha(n+1)}\Big) + n^2 \\ & = \Big(\alpha n + \alpha - c + \frac{(\alpha-c)(c-1)}{\alpha(n+1)}\Big)\, f_n\Big(\alpha (n+1) - c + \frac{(\alpha-c)(c-1)}{\alpha(n+1)}\Big). \end{align*} As $f_n$ is monotonically decreasing for $x > 0$, we deduce that \[ 0 < f_n(x) - \alpha n + c < \frac{(\alpha-c)(c-1)}{\alpha n} \] for all $x$ with $0 \le x - \alpha (n+1) + c \le \frac{(\alpha-c)(c-1)}{\alpha(n+1)}$. Then we also have \[ 0 < f_n f_{n+1} \cdots f_{n+k-1}\big(\alpha (n+k) - c + x\big) - \alpha n + c < \frac{(\alpha-c)(c-1)}{\alpha n} \] for all $k,n \ge 1$, $0 \le x - \alpha (n+k) + c \le \frac{(\alpha-c)(c-1)}{\alpha(n+k)}$. As $f_n$ is contracting for $x \ge \alpha(n+1)-c$, the intervals $[f_1 f_2 \cdots f_n(\alpha (n+1) -c), f_1 f_2 \cdots f_n(\alpha (n+1) -c+\frac{(\alpha-c)(c-1)}{\alpha(n+1)})]$ converge to a point~$r_1$. Then the numbers $r_n$ given by \eqref{e:r} satisfy \eqref{e:rn} for all $n \ge 1$. 
By the first paragraph of the proof, this is the unique sequence of positive numbers satisfying~\eqref{e:r}. \end{proof}
\noindent We now consider the case when $\alpha n - c + \frac{1}{2}$ is close to $\lceil \alpha n - c + \frac{1}{2}\rceil$. Let $p_k/q_k$ be the convergents of the regular continued fraction $\alpha = [h;h,h,\ldots]$, i.e., $q_{-1} = 0$, $q_0 = 1$, $q_{k+1} = h q_k + q_{k-1}$ for $k \ge 0$, $p_k = q_{k+1}$. Then we have \[ q_k = \frac{\alpha^{k+1}+(-1)^k/\alpha^{k+1}}{\alpha+1/\alpha} \] and thus \begin{equation} \label{e:qp} q_k \alpha - p_k = \frac{(-1)^k}{\alpha^{k+1}}. \end{equation}
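The recurrence and the closed form for $q_k$ can be spot-checked numerically; the following minimal Python sketch (variable names are ours) verifies both the closed form and the identity \eqref{e:qp} for $h=2$:

```python
import math

# Numerical check of the convergent recurrence and the identity (e:qp);
# helper names are ours.
h = 2
alpha = (h + math.sqrt(h * h + 4)) / 2

qs = [0, 1]                                  # q_{-1} = 0, q_0 = 1
for _ in range(10):
    qs.append(h * qs[-1] + qs[-2])
q = qs[1:]                                   # q[k] = q_k for k = 0..10 (p_k = q_{k+1})

for k in range(9):
    # closed form for q_k
    closed = (alpha ** (k + 1) + (-1) ** k / alpha ** (k + 1)) / (alpha + 1 / alpha)
    assert abs(closed - q[k]) < 1e-9
    # q_k * alpha - p_k = (-1)^k / alpha^(k+1)
    assert abs(q[k] * alpha - q[k + 1] - (-1) ** k / alpha ** (k + 1)) < 1e-9
```

The same check goes through for any positive integer $h$.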
\begin{lemma} Let $h$ be a positive integer and $\alpha = \dfrac{h+\sqrt{h^2+4}}{2}$. Then we have \[ \lceil \alpha n \rceil - \alpha n = \left\{\begin{array}{ll}j/\alpha^{2k} & \mbox{if}\ n = j q_{2k-1},\, k \ge 1,\, 1 \le j < \alpha^{2k}, \\[1ex] (\alpha-1)/\alpha^{2k+1} & \mbox{if}\ n = q_{2k-1} + q_{2k},\, k \ge 0, \\[1ex] (\alpha+1)/\alpha^{2k+2} & \mbox{if}\ n = q_{2k+1} - q_{2k},\, k \ge 0, \end{array}\right. \] and $n (\lceil \alpha n \rceil - \alpha n) \ge 1$ for all other $n \ge 1$. \end{lemma}
\begin{proof} The formulas for $n = j q_{2k-1}$, $n = q_{2k-1} + q_{2k}$ and $n = q_{2k+1} - q_{2k}$ are immediate from~\eqref{e:qp}. By \cite[Ch.~2, \S 5, Theorem~2]{RockettSzusz}, we have $n (\lceil \alpha n \rceil - \alpha n) \ge 1$ for all $n \ge 1$ that are not of the form $j q_k$, $1 \le j < \alpha / \sqrt{h}$, $q_k+q_{k-1}$ or $q_k-q_{k-1}$. Since $\alpha q_{2k} - \lfloor \alpha q_{2k} \rfloor = 1 /\alpha^{2k+1}$, $\alpha (q_{2k} + q_{2k+1}) - \lfloor \alpha (q_{2k} + q_{2k+1}) \rfloor = (\alpha-1) /\alpha^{2k+2}$ and $\alpha (q_{2k} - q_{2k-1}) - \lfloor \alpha (q_{2k} - q_{2k-1}) \rfloor = (\alpha+1) /\alpha^{2k+1}$, we have $\lceil \alpha n \rceil - \alpha n > 1/2$ for $n = jq_{2k}$, $n = q_{2k} + q_{2k+1}$ and $n = q_{2k} - q_{2k-1}$. If moreover $n \ge 2$, then $n (\lceil \alpha n \rceil - \alpha n) \ge 1$ for these $n$ as well. Since $q_0 + q_{-1} = 1$, the case $n = 1$ has already been treated. \end{proof}
\noindent We obtain that \[ n \big(\lceil \alpha n \rceil - \alpha n\big) = \left\{\begin{array}{ll}\displaystyle\frac{j^2(1-1/\alpha^{4k})}{\sqrt{h^2+4}} & \mbox{if}\ n = j q_{2k-1},\, k \ge 1,\, 1 \le j < \alpha^{2k}, \\[1ex] \displaystyle\frac{h-(\alpha-1)^2/\alpha^{4k+2}}{\sqrt{h^2+4}} & \mbox{if}\ n = q_{2k-1} + q_{2k},\, k \ge 0, \\ \displaystyle\frac{h-(\alpha+1)^2/\alpha^{4k+4}}{\sqrt{h^2+4}} & \mbox{if}\ n = q_{2k+1} - q_{2k},\, k \ge 0.\end{array}\right. \] The worst case for $n = q_{2k-1} + q_{2k}$ or $n = q_{2k+1} - q_{2k}$ is given by $n = q_{-1} + q_0 = 1$, hence \[ n \big(\lceil \alpha n \rceil - \alpha n\big) \ge h+1-\alpha = 1-\frac{1}{\alpha} \] for all $n \ge 1$ such that $n \ne q_{2k-1}$ for all $k \ge 1$.
Now we come back to the case $h = 2$ and consider the distance of $\alpha n - c + \frac{1}{2}$ to the nearest integer above $\alpha n - c + \frac{1}{2}$. Note that $c-\frac{1}{2} = \frac{1}{\sqrt{2}}$. We have \[ 2 \Big(\Big\lceil \alpha n - \frac{1}{\sqrt{2}}\Big\rceil - \alpha n + \frac{1}{\sqrt{2}}\Big) = 2 \Big\lceil \alpha n - \frac{1}{\sqrt{2}}\Big\rceil -1 - \alpha (2n-1) \ge \lceil \alpha (2n-1)\rceil - \alpha (2n-1) > \frac{\alpha-1}{2\alpha n}, \] where we have used that $q_{2k-1}$ is even for all $k \ge 1$, thus \[ \Big\lceil \alpha n - \frac{1}{\sqrt{2}}\Big\rceil - \alpha n + \frac{1}{\sqrt{2}} > \frac{\alpha-1}{4\alpha n}. \] Since $(\alpha-c)(c-1) = \displaystyle\frac{1}{4}$, we have \[ \alpha n - \frac{1}{\sqrt{2}} < r_n+\frac{1}{2} < \alpha n - \frac{1}{\sqrt{2}} +\frac{1}{4\alpha n} < \alpha n - \frac{1}{\sqrt{2}} +\frac{\alpha-1}{4\alpha n} < \Big\lceil \alpha n - \frac{1}{\sqrt{2}}\Big\rceil \] for all $n \ge 1$, thus $d_n = e_n$. This completes the proof of Theorem \ref{thm}.
\noindent We remark that $h = 2$ cannot be replaced by an arbitrary positive integer in the previous paragraph. For example, for $h = 1$, we have $\alpha = \frac{1+\sqrt{5}}{2}$, $c = \frac{\alpha^2}{\sqrt{5}}$, $\lfloor 137 \alpha - c + \frac{1}{2} \rfloor = 220$ and $\lfloor r_{137} + \frac{1}{2} \rfloor = 221$. However, computer simulations suggest that (for any $h$) we always have $\lfloor \alpha n - c \rfloor = \lfloor r_n \rfloor$.
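The $h=1$ example can be reproduced numerically. The sketch below assumes the explicit form $f_n(x) = hn - 1 + n^2/x$, which we infer from the identities used in the proof above (the definition of $f_n$ appears earlier in the paper), and computes $r_{137}$ by backward iteration $r_n = f_n(r_{n+1})$ from a large starting index, where $r_N \approx \alpha N - c$:

```python
import math

# Backward computation of r_n for h = 1.  We assume the explicit form
# f_n(x) = h*n - 1 + n^2/x, inferred from the identities in the proof
# (the definition of f_n appears earlier in the paper).
def r_value(h, n_target, N=4000):
    alpha = (h + math.sqrt(h * h + 4)) / 2
    c = (1 + alpha) / math.sqrt(h * h + 4)      # c = alpha^2 / sqrt(h^2+4)
    r = alpha * N - c                           # r_N is close to alpha*N - c
    for n in range(N - 1, n_target - 1, -1):
        r = h * n - 1 + n * n / r               # r_n = f_n(r_{n+1})
    return r, alpha, c

r137, alpha, c = r_value(1, 137)
print(math.floor(137 * alpha - c + 0.5))        # 220
print(math.floor(r137 + 0.5))                   # 221
```

The iteration is stable backwards, since $|f_n'| \approx 1/\alpha^2 < 1$ there, and the mismatch at $n=137$ comes out exactly as stated in the text.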
\end{document}
\begin{document}
\title {Exact dynamics of interacting qubits in a thermal environment:\\ Results beyond the weak coupling limit} \author{Lian-Ao Wu} \affiliation{Department of Theoretical Physics and History of Science, The Basque Country University (EHU/UPV) and IKERBASQUE -- Basque Foundation for Science, 48011, Bilbao, Spain} \author{Claire X. Yu} \affiliation{Chemical Physics Theory Group, Department of Chemistry and Center for Quantum Information and Quantum Control, University of Toronto, 80 St. George street, Toronto, Ontario, M5S 3H6, Canada} \author{Dvira Segal} \affiliation{Chemical Physics Theory Group, Department of Chemistry and Center for Quantum Information and Quantum Control, University of Toronto, 80 St. George street, Toronto, Ontario, M5S 3H6, Canada} \pacs{03.65.Yz, 03.65.Ud, 03.67.Mn, 42.50.Lc}
\date{\today}
\begin{abstract} We demonstrate an exact mapping of a class of models of two interacting qubits in thermal reservoirs to two separate spin-bath problems. Based on this mapping, exact numerical simulations of the qubits dynamics can be performed, beyond the weak system-bath coupling limit. Given the time evolution of the system, we study, in a numerically exact way, the dynamics of entanglement between a pair of qubits immersed in boson thermal baths, showing a rich phenomenology, including intermediate oscillatory behavior, entanglement sudden birth, sudden death, and revival. We find that stationary entanglement develops between the qubits due to their coupling to a thermal environment, unlike the isolated-qubits case, in which the entanglement oscillates. We also show that the occurrence of entanglement sudden death in this model depends on the portion of the zero and double excitation states in the subsystem initial state. In the long-time limit, analytic expressions are presented at weak system-bath coupling, for a range of relevant qubit parameters.
\end{abstract}
\maketitle
\section{Introduction}
Understanding the dynamics of a dissipative quantum system is a prominent challenge in physics, as a quantum system is never perfectly isolated from a larger environment. A minimal, yet highly rich model for exploring quantum dissipation effects, is the spin-boson model, including an impurity two-level system (referred to as a spin) coupled to a thermal reservoir. This model displays a rich phase diagram in the equilibrium regime \cite{Legget,LehurR}. The non-equilibrium version of this model, referring to the case where the spin is coupled to two thermal reservoirs, has been suggested as a toy model for exploring quantum transport phenomenology through an anharmonic nanojunction \cite{Rectif, SegalM}. In this case, the generic situation is one of a non-equilibrium steady-state, regardless of the initial preparation.
Interacting two-level systems are the basic element in quantum computation, thus it is paramount to extend the minimal spin-boson scenario and describe more complex modular systems, e.g., two interacting qubits immersed in a thermal environment \cite{Lehurq}. For a schematic representation, see Fig. \ref{FigS}. The qubits may share their thermal environment, or may separately couple to independent baths, maintained at a nonzero temperature. The latter situation corresponds to the case where the qubits are not necessarily placed close to each other. In another relevant setup, one qubit couples indirectly to a thermal reservoir, through its interaction with the other qubit. This situation effectively corresponds to a subsystem anharmonically coupled to a harmonic bath, allowing one to introduce nontrivial nonlinear effects \cite{Grifoni}. Physical realizations include, for example, ultracold atoms in optical lattices \cite{Cold}, trapped ions \cite{Ion}, resonator-coupled superconducting qubit arrays \cite{Josephson,Super}, and electron spins in quantum dots and doped semiconductors \cite{QD}. In such systems one should consider (at least) four energy scales: the internal qubit energetics, controlling its isolated (Rabi oscillation) dynamics; the qubit-bath interaction strength and the environment temperature, leading to decoherence and relaxation processes; and the qubit-qubit coupling energy, admitting state transfer between qubits and a nontrivial gate functionality.
The dissipative multi-qubit system has recently served as a simple model for resolving issues related to coherence dynamics in the time evolution of biological molecules, e.g., the Fenna-Matthews-Olson (FMO) complexes, resulting in the identification of the relevant decoherence, relaxation and disentanglement timescales \cite{Whaley,Thorwart, Pachon}. It should be noted that these works have considered single-excitation states only, ignoring the contribution of the zero and the doubly excited states to the two-qubit dynamics.
In this paper, we analytically demonstrate that a class of interacting two-qubit systems immersed in separate thermal reservoirs or within a common bath, can be mapped onto two uncoupled spin-bath problems, allowing for an exact numerical solution of the qubits dynamics. For a bosonic environment and a particular system-bath interaction form, we perform those simulations using an exact numerical technique, the quasi-adiabatic path-integral (QUAPI) approach \cite{QUAPI}, providing the population and coherence dynamics of the system. With this at hand, we can follow the exact dynamics of entanglement between the qubits, as quantified by Wootters' concurrence \cite{Wooters}. For a range of system and bath parameters, analytic results are presented, describing the system behavior in the long time limit. The model investigated here is more general than what has been typically considered before, going beyond the simple exchange interaction model \cite{Thorwart,Pachon}.
\begin{figure}
\caption{Scheme of the model system including two interacting qubits (a) immersed in a common bath, (b) coupled to separate baths, $L$ and $R$, and (c) with qubit '2' coupled to a thermal bath only through its interaction with qubit '1'. This model can represent the nonlinear coupling of qubit '2' to a structured bath. Simulations were performed here assuming scenario (c).}
\label{FigS}
\end{figure}
Entanglement is associated with nonclassical correlations between two or more quantum systems \cite{EntagR}. Since it is a basic resource in quantum computation and information technology, it is important to understand the extent to which environment-induced decoherence processes degrade and destroy it \cite{TingYu}, or alternatively, generate \cite{Benatti} and maintain it \cite{ShiLiang}. It has been recently shown that two qubits in separate reservoirs may disentangle {\it at finite times}, as opposed to the behavior of coherences. This process is referred to as ``entanglement sudden death" \cite{TingYu,DeathExp}. More recent theoretical and experimental studies have looked at related effects, e.g., the collapse and subsequent revival of the entanglement \cite{Guo}, or its delayed-sudden birth, induced by a dissipative bath \cite{Tanas}. Steady-state entanglement generation by dissipation has been recently observed in atomic ensembles \cite{Ciracss}. These studies have assumed non-interacting qubits, and the system dynamics has been typically followed within quantum master equation approaches (e.g., the Redfield equation or Lindblad formalism \cite{Petrucci}), by invoking the weak system-bath coupling approximation. The Markovian limit has been further assumed in many cases, see e.g., \cite{Benatti,Asma}.
Our work here departs from these studies in two substantial aspects. First, we consider a more complex model for the subsystem, introducing qubit-qubit interaction, with the motivation to examine a setup relevant for quantum computing technology. Using an exact mapping, we show how the dynamics can be followed within a simpler construction: While we take into account the zero excitation and double excitation states, under certain initial conditions their dynamics can be separated from the evolution of the single-excitation states. However, their contribution to the pair entanglement is paramount. Second, we refrain from making approximations: We study the qubits dynamics using a numerically exact method, assuming a class of initial states. The results are valid beyond the weak system-bath coupling scenario, accommodating non-Markovian effects.
Our calculations display rich dynamics. Particularly, we observe the development of a {\it stationary} concurrence due to the coupling of the qubits to thermal baths. This result stands in a direct contrast to the oscillatory behavior observed in the fully coherent regime. It demonstrates that while entanglement inherently relies on the existence of quantum correlations in the system, it nevertheless requires non-vanishing decoherence and relaxation effects in order to be stabilized and become useful for quantum technologies. Other phenomena detected and explained here are entanglement delayed sudden birth, sudden death, and revival.
The paper is organized as follows. In Sec. II we describe the model of interest, and explain its mapping onto two separate spin-bath problems. In Sec. III we explain how we follow the system dynamics, and include relevant expressions for calculating the qubits concurrence. Numerical results within QUAPI are included in Sec. IV. The long-time limit is discussed in Sec. V. Sec. VI concludes.
\section{Model}
The general model to be considered here includes two interacting qubits, $i=1,2$, immersed within separate reservoirs, $L$ and $R$, respectively. The formalism can be reduced to describe a single-bath scenario. The total Hamiltonian includes three terms,
\begin{eqnarray} H=H_{S}+H_{B}+V_{SB}. \label{eq:HT} \end{eqnarray}
$H_{S}$ and $H_{B}=H_L+H_R$ stand for the system and reservoirs Hamiltonians, respectively. The former includes the isolated qubits with the internal energy bias $\epsilon_{i}$ and a qubit-qubit interaction term $V_{ss}$,
\begin{eqnarray} H_{S}&=&\epsilon _{1}\sigma _{1}^{z}+\epsilon _{2}\sigma _{2}^{z}+V_{ss}, \nonumber\\ V_{ss}&=&\frac{J}{2}[(1+\gamma )\sigma _{1}^{x}\sigma _{2}^{x}+(1-\gamma )\sigma _{1}^{y}\sigma _{2}^{y}+2\delta \sigma _{1}^{z}\sigma _{2}^{z}]. \end{eqnarray}
$\sigma_i^{p}$ ($p=x,y,z$) are the Pauli matrices for the $i$th spin, $J$ is an energy parameter characterizing the exchange interaction, $\gamma$ and $\delta$ set the interaction anisotropy. Our mapping holds for a dephasing-type system-bath interaction model,
\begin{eqnarray} V_{SB}=\sigma _{1}^{z}B_{L}+\sigma _{2}^{z}B_{R}. \label{eq:deph} \end{eqnarray}
Here, $B_{\nu}$ is a $\nu=L,R$ bath operator, with $B_L$ coupled to spin '1' and $B_R$ coupled to spin '2'. In our simulations below we adopt bosonic reservoirs: each thermal bath includes a collection of independent harmonic oscillators,
\begin{eqnarray} H_{\nu}=\sum_{k\in \nu}\omega_ka_k^{\dagger}a_k.\end{eqnarray}
The operators $a_k^{\dagger}$ and $a_k$ are bosonic creation and annihilation operators, respectively, $\omega_k$ is the mode frequency. We also assume that the interaction operators constitute the reservoirs displacements from equilibrium,
\begin{eqnarray} B_{\nu}=\sum_{k\in \nu} \lambda_k \left(a_k^{\dagger}+a_k\right). \end{eqnarray}
Here, $\lambda_{k}$ are system-bath coupling constants. The mapping described next, from the Hamiltonian (\ref{eq:HT}) into two spin-bath problems, relies neither on the particular bath statistics nor on the details of the operators $B_{\nu}$. For example, it is valid for a model of two qubits in fermionic or spin environments.
The Hamiltonian introduced so far takes into account two independent reservoirs. We could explore a similar setup with one qubit coupled to a thermal reservoir indirectly, through its interaction with the second qubit, mimicking nonlinear effects \cite{Grifoni}, see Fig. \ref{FigS}. Another relevant setup includes two qubits immersed in a common thermal reservoir.
The Hilbert space of the qubits is spanned by four vectors,
$\left|00\right\rangle, \left|01\right\rangle,
\left|10\right\rangle, \left| 11\right\rangle$, forming the excitation basis of the Hamiltonian, where the left (right) digit indicates the state of qubit 1 (2). We now show that the Hamiltonian can be mapped onto two spin-bath type models. We begin by defining four composite system operators,
\begin{eqnarray} P_{z}&=&\frac{1}{2}(\sigma_{1}^{z}-\sigma _{2}^{z}), \,\,\, Q_{z}=\frac{1}{2}(\sigma _{1}^{z}+\sigma _{2}^{z}), \,\,\, \nonumber \\ P_{x}&=&\frac{1}{2}\left( \sigma_{1}^{x}\sigma _{2}^{x}+\sigma _{1}^{y}\sigma _{2}^{y} \right), \,\,\, Q_{x}=\frac{1}{2} \left(\sigma _{1}^{x}\sigma _{2}^{x}-\sigma _{1}^{y}\sigma _{2}^{y} \right). \end{eqnarray}
In the excitation basis, these operators take the explicit form
\begin{eqnarray} P_x= \left|01\right\rangle\langle 10| +
\left|10\right\rangle\langle01|, \,\, P_z=
\left|01\right\rangle\langle 01| - \left|10\right\rangle\langle10| \nonumber\\
Q_x= \left|00\right\rangle\langle 11| +
\left|11\right\rangle\langle00|, \,\, Q_z=
\left|00\right\rangle\langle 00| - \left|11\right\rangle\langle11|. \end{eqnarray}
Additionally, we construct two identity-type operators,
\begin{eqnarray} I_P&=&\left|01\right\rangle\langle 01| +
\left|10\right\rangle\langle10| \nonumber\\
I_Q&=& \left|00\right\rangle\langle 00| +
\left|11\right\rangle\langle11|. \end{eqnarray}
With these at hand, the Hamiltonian (\ref{eq:HT}) can be written as
\begin{eqnarray} H=H_Q+H_P, \end{eqnarray} where
\begin{eqnarray} H_Q&=& \epsilon Q_{z} + J\gamma Q_{x} + Q_{z}B+J\delta I_Q +H_B I_Q \nonumber\\ H_P &=&\bar{\epsilon}P_{z}+JP_{x} +P_{z}\overline{B}- J\delta I_P +H_{B}I_P. \label{eq:HTR} \end{eqnarray}
Here, $\epsilon =\epsilon_{1}+\epsilon _{2}$, $\bar{\epsilon}=\epsilon_{1}-\epsilon _{2}$, $B=B_{L}+B_{R}$ and $\overline{B}=B_{L}-B_{R}$. One can easily show that the following commutators vanish \cite{para} ($m,n=x,z$)
\begin{eqnarray} \lbrack Q_{m},P_{n}]=[P_{m},\sigma _{1}^{z}\sigma _{2}^{z}]=[Q_{m},\sigma _{1}^{z}\sigma _{2}^{z}]=0. \end{eqnarray}
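These algebraic relations can be spot-checked with elementary matrix arithmetic. The sketch below (helper names ours; we adopt the convention $\sigma^z|0\rangle = +|0\rangle$, matching the stated matrix form of $Q_z$) builds the composite operators from the Pauli matrices and verifies the vanishing commutators:

```python
# Spot-check of the operator algebra behind the P/Q decomposition, using
# plain nested lists as matrices (helper names ours).  Basis order is
# |00>, |01>, |10>, |11>, with the convention sigma^z|0> = +|0>.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def add(A, B, s=1):
    return [[A[i][j] + s * B[i][j] for j in range(4)] for i in range(4)]

def scale(A, s):
    return [[s * x for x in row] for row in A]

def kron(A, B):  # 2x2 (x) 2x2 -> 4x4, qubit 1 first
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)] for i in range(4)]

def comm(A, B):
    return add(mul(A, B), mul(B, A), -1)

def is_zero(A):
    return all(abs(x) < 1e-12 for row in A for x in row)

I2 = [[1, 0], [0, 1]]
sx, sy, sz = [[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]
s1x, s1y, s1z = kron(sx, I2), kron(sy, I2), kron(sz, I2)
s2x, s2y, s2z = kron(I2, sx), kron(I2, sy), kron(I2, sz)

Pz = scale(add(s1z, s2z, -1), 0.5)
Qz = scale(add(s1z, s2z), 0.5)
Px = scale(add(mul(s1x, s2x), mul(s1y, s2y)), 0.5)
Qx = scale(add(mul(s1x, s2x), mul(s1y, s2y), -1), 0.5)
zz = mul(s1z, s2z)

# [Q_m, P_n] = [P_m, sigma1^z sigma2^z] = [Q_m, sigma1^z sigma2^z] = 0
for A in (Qx, Qz):
    for B in (Px, Pz):
        assert is_zero(comm(A, B))
for A in (Px, Pz, Qx, Qz):
    assert is_zero(comm(A, zz))
```

The explicit matrix forms can be read off the same arrays: $P_x$ and $Q_x$ are supported only on the off-diagonal pairs $(|01\rangle,|10\rangle)$ and $(|00\rangle,|11\rangle)$, respectively, while $P_z$ and $Q_z$ are diagonal.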
The Hilbert space of two qubits can thus be factored into two direct-sum subspaces: The first, $P$, is spanned by
$\left|01\right\rangle$ and $\left|10\right\rangle$. The second,
$Q$, is spanned by $\left|00\right\rangle$ and$\left| 11\right\rangle$. One can further prove that $P_{x},[P_{x},P_{z}]$ and $P_{z}$ generate an $SU^{P}(2)$ group, and $Q_{x},[Q_{x},Q_{z}]$ and $Q_{z}$ generate another $SU^{Q}(2)$ group. The two groups have a direct-sum structure, $SU^{P}(2)\oplus SU^{Q}(2)$. We now note that $P_{x}$ and $P_{z}$ in subspace $P$ play the role of the Pauli matrices $\sigma^{x} $ and $\sigma^{z}$ (in the space spanned by
$\left| 0\right\rangle $ and $\left| 1\right\rangle$). The same principle holds for $Q_{x}$ and $Q_{z}$ in the $Q$ subspace. Overall, in mapping Eq. (\ref{eq:HT}) into Eq. (\ref{eq:HTR}) we replaced a model of two interacting spins coupled each to its own thermal reservoir, by a model of two separate spin-boson-type systems, where each spin in the new model is coupled to {\it both} reservoirs. The latter model is significantly simpler than the former, and we can explore its dynamics using exact simulation tools for a certain class of initial conditions.
The mapping described here holds for general reservoirs and a bilinear system-bath interaction form, with an arbitrary bath operator coupled to the subsystem. The results also hold when we apply a dressing transformation \cite{dress} $W$, $H^{\prime }=W^{\dagger }HW$, to the two-qubit Hamiltonian or to the reservoirs. For example, we may introduce a spin-orbital coupling into $V_{ss}$ via $W=\exp (i\frac{\theta}{2} \sigma _{1}^{z})$, while keeping other terms in the total Hamiltonian unchanged
\begin{eqnarray} V_{ss}^{\prime }=\cos \theta V_{ss}+\sin \theta \frac{J}{2}[(1+\gamma )\sigma _{1}^{y}\sigma_{2}^{x}-(1-\gamma )\sigma _{1}^{x}\sigma _{2}^{y}]. \end{eqnarray}
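As a sanity check of this dressed interaction, the sketch below conjugates $V_{ss}$ by the diagonal matrix $W=\exp(i\frac{\theta}{2}\sigma_1^z)$ and compares with the rotated form; we set $\delta=0$ here, since the $\sigma_1^z\sigma_2^z$ term is diagonal and hence left invariant by $W$, and the numerical values of $\theta$, $J$, $\gamma$ are ours:

```python
import cmath, math

# Check of the spin-orbital dressing W = exp(i*theta/2 * sigma_1^z) acting
# on V_ss.  We set delta = 0, since the sigma_1^z sigma_2^z term is diagonal
# and hence invariant under W; theta, J, gamma values are ours.
def kron(A, B):  # 2x2 (x) 2x2 -> 4x4, qubit 1 first
    return [[A[i // 2][j // 2] * B[i % 2][j % 2] for j in range(4)] for i in range(4)]

def lin(*terms):  # linear combination of 4x4 matrices
    out = [[0j] * 4 for _ in range(4)]
    for coef, M in terms:
        for i in range(4):
            for j in range(4):
                out[i][j] += coef * M[i][j]
    return out

sx, sy = [[0, 1], [1, 0]], [[0, -1j], [1j, 0]]
xx, yy = kron(sx, sx), kron(sy, sy)
yx, xy = kron(sy, sx), kron(sx, sy)

theta, J, gamma = 0.7, 1.0, 0.5
c, s = math.cos(theta), math.sin(theta)

Vss = lin((J / 2 * (1 + gamma), xx), (J / 2 * (1 - gamma), yy))
# W is diagonal in the basis |00>,|01>,|10>,|11>: conjugation acts elementwise
w = [cmath.exp(1j * theta / 2 * z1) for z1 in (1, 1, -1, -1)]
VssP = [[w[i].conjugate() * Vss[i][j] * w[j] for j in range(4)] for i in range(4)]

claim = lin((c, Vss), (s * J / 2 * (1 + gamma), yx), (-s * J / 2 * (1 - gamma), xy))
assert all(abs(VssP[i][j] - claim[i][j]) < 1e-12 for i in range(4) for j in range(4))
```

The check passes for any $\theta$, reflecting that $W$ simply rotates $\sigma_1^x$ into $\sigma_1^y$ about the $z$ axis.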
Another relevant case that can be simulated exactly relies on the absence of external fields, $\epsilon _{1}=\epsilon _{2}=0$, taking $W=\exp (i\frac{\pi }{4}(\sigma _{1}^{y}+\sigma _{2}^{y}))$. This results in
\begin{eqnarray} H_{S}^{\prime }=\frac{J}{2}[(1+\gamma )\sigma _{1}^{z}\sigma _{2}^{z}+(1-\gamma )\sigma _{1}^{y}\sigma _{2}^{y}+2\delta \sigma _{1}^{x}\sigma _{2}^{x}] \end{eqnarray}
with $V_{SB}'=\lambda _{1}\sigma_{1}^{x}B_{L}+\lambda_{2}\sigma _{2}^{x}B_{R}$. After this transformation, the model has turned into the anisotropic XYZ-type model with flip-flop ($\sigma_x$) coupling between the system and reservoirs. In this form, the model describes energy exchange between the qubits and the baths, unlike the original Hamiltonian [Eq. (\ref{eq:deph})] which delineates dephasing effects. When $1-\gamma = 2\delta$, it reduces to the standard XY model, $H_{S}^{\prime }=\frac{J}{2}[(1-\gamma )(\sigma _{1}^{y}\sigma _{2}^{y}+ \sigma_{1}^{x}\sigma _{2}^{x})+(1+\gamma )\sigma _{1}^{z}\sigma_{2}^{z}] $.
\section{Dynamics and quantum entanglement}
We explain here how we time-evolve the reduced density matrix, to obtain the qubits dynamics. As an initial condition we consider a system-bath product state, where the reservoirs are maintained in a canonical-thermal state, $\rho_{\nu}=e^{-\beta_{\nu}H_{\nu}}/Z_{\nu}$, $Z_{\nu}={\rm Tr_B}[e^{-\beta_{\nu}H_{\nu}}]$ is the partition function. In order to separate the dynamics into the $Q$ and $P$ branches, we must adopt a direct sum $Q$-$P$ initial state for the qubits. Overall, the total density matrix at time $t=0$ is written as
\begin{eqnarray} \rho(0)= \left( \begin{array}{cc} \rho_P(0) & 0 \\ 0 & \rho_Q(0) \\ \end{array} \right) \otimes \rho_B. \label{eq:t0} \end{eqnarray}
For a particular example, see Eq. (\ref{eq:init}). Under this construction, the reduced density matrix follows
\begin{eqnarray} &&\rho_S(t)={\rm Tr_B}[ U(t)\rho(0)U^{\dagger}(t)] \nonumber\\ &&= {\rm Tr_B}\left( \begin{array}{cc} U_P(t)\rho_P(0) \rho_B U^{\dagger}_P(t) & 0 \\ 0 & U_Q(t)\rho_Q(0) \rho_B U^{\dagger}_Q(t) \\ \end{array} \right) \label{eq:rhoS} \nonumber \\ \end{eqnarray}
The trace is performed over the $L$ and $R$ degrees of freedom. The time evolution operators, $U(t)=U_Q(t)\oplus U_P(t)$, are defined as
\begin{eqnarray} U_{Q}(t) =e^{-itH_Q}, \,\,\, U_{P}(t) =e^{-itH_P},
\label{eq:timeE}
\end{eqnarray}
with $H_P$ and $H_Q$ given in Eq. (\ref{eq:HTR}). Eq. (\ref{eq:rhoS}) establishes an important result: The dynamics of the two-qubit system proceeds in two independent branches, each equivalent to a spin-bath model. In the case of bosonic baths, the dynamics in each branch is followed next using the QUAPI technique \cite{QUAPI}, a numerically exact simulation tool that can be easily extended to include more than one thermal reservoir. The output of this calculation is the reduced density matrix of the qubits. We use this information and investigate the time evolution of entanglement between the qubits. As a side comment, we note that if the qubits were to couple to a common bath, $\bar B=0$, only the $Q$ subspace would become susceptible to decohering effects, while the $P$ subspace would be an invariant subspace, or a ``decoherence-free'' subspace.
Based on Eqs. (\ref{eq:t0})-(\ref{eq:timeE}), we conclude that the reduced density matrix $\rho_S$ is an X-type matrix at all times, once we organize it in the standard order of basis vectors
$\left| 00\right\rangle ,\left| 01\right\rangle ,\left|
10\right\rangle \ $and $\left| 11\right\rangle$,
\begin{eqnarray} &&\rho_S= \left( \begin{array}{cccc} (\rho_{S})_{00,00}& 0 & 0 & (\rho_{S})_{00,11} \\ 0& (\rho_{S})_{01,01} & (\rho_{S})_{01,10} & 0 \\ 0& (\rho_{S})_{10,01} & (\rho_{S})_{10,10} & 0 \\ (\rho_{S})_{11,00}& 0 & 0 & (\rho_{S})_{11,11} \\ \end{array} \right) \label{eq:M} \nonumber \\ \end{eqnarray}
This is an important result: The dynamics of a class of dissipative interacting qubits [Eqs. (\ref{eq:HT})-(\ref{eq:deph})] can be reached via the solution of two spin-bath problems, and the reduced density matrix satisfies an X-form at all times. Different quantities are of interest, e.g., the timescale for maintaining coherences in the system \cite{Pachon}. Here, we focus on quantum correlations in the system \cite{EntagR}, computed next in a numerically exact way beyond weak coupling. In particular, we quantify the degree of entanglement between the qubits using Wootters' concurrence \cite{Wooters}. For mixed states it is calculated by considering the eigenvalues of the matrix $r(t)=\rho_S(t)\sigma_1^{y}\otimes \sigma_2^{y}\rho_S ^{\ast }(t)\sigma_1^{y}\otimes \sigma_2^{y}$, given here by
\begin{eqnarray}
\lambda_{1,2}
&=&\left[\sqrt{(\rho_S)_{01,01}(\rho_S)_{10,10}}\pm \left|
(\rho_S)_{01,10}\right| \right]^{2}, \nonumber\\
\lambda_{3,4}&=&\left[\sqrt{ (\rho_S)_{00,00}(\rho_S)_{11,11}}\pm \left| (\rho_S)_{00,11}\right|\right] ^{2}. \end{eqnarray}
In terms of these eigenvalues, the concurrence is defined as
\begin{eqnarray} C(t)&=&\max \big(2\max (\sqrt{\lambda_{1}},\sqrt{\lambda_2}, \sqrt{\lambda_3}, \sqrt{\lambda_4}) \nonumber\\ &-&\sqrt{\lambda_1}- \sqrt{\lambda_2}-\sqrt{\lambda_3}-\sqrt{\lambda_4},\,0\big). \end{eqnarray}
It varies from $C=0$ for a disentangled state to $C=1$ for a maximally entangled state. In the present case it reduces to
\begin{eqnarray} C(t)=\max(0, 2F_1,2F_2) \label{eq:Ct} \end{eqnarray}
with
\begin{eqnarray} F_1&=& |(\rho_S)_{01,10}| - [(\rho_S)_{00,00}(\rho_S)_{11,11}]^{1/2}, \nonumber\\
F_2&=& |(\rho_S)_{00,11}| - [(\rho_S)_{01,01}(\rho_S)_{10,10}]^{1/2} \label{eq:Conc1}. \end{eqnarray}
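The equivalence between the general expression in terms of the $\sqrt{\lambda_i}$ and the X-state form $\max(0,2F_1,2F_2)$ can be checked directly; the sample matrix elements in the sketch below are ours (any valid X-state density matrix works):

```python
import math

# Consistency check of the X-state concurrence: the general expression in
# terms of sqrt(lambda_i) versus max(0, 2F1, 2F2).  The sample matrix
# elements below are ours (any valid X-state density matrix works).
p00, p01, p10, p11 = 0.05, 0.45, 0.45, 0.05     # diagonal of rho_S
cP, cQ = 0.40j, 0.03                            # (rho_S)_{01,10}, (rho_S)_{00,11}
assert abs(cP) ** 2 <= p01 * p10 and abs(cQ) ** 2 <= p00 * p11  # positivity

sq = [math.sqrt(p01 * p10) + abs(cP), math.sqrt(p01 * p10) - abs(cP),
      math.sqrt(p00 * p11) + abs(cQ), math.sqrt(p00 * p11) - abs(cQ)]
C_general = max(2 * max(sq) - sum(sq), 0)       # concurrence from the lambdas

F1 = abs(cP) - math.sqrt(p00 * p11)
F2 = abs(cQ) - math.sqrt(p01 * p10)
C_x = max(0, 2 * F1, 2 * F2)
assert abs(C_general - C_x) < 1e-12             # both give C = 0.7 here
```

The agreement holds for all admissible elements, since the largest $\sqrt{\lambda_i}$ always belongs to the subspace ($P$ or $Q$) with the larger $F$.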
The dynamics of concurrence for an $X$-state density matrix has been examined in different works. For example, in Ref. \cite{Asma} it is demonstrated that the effect of entanglement sudden death should always take place in a noninteracting qubit system once coupled to a finite temperature reservoir. As we mention in the introductory section, we depart from this study and similar works in two aspects: (i) We build the reduced density matrix using an exact numerical treatment, and (ii) we consider a more general model, including qubit-qubit interaction effects, with the motivation to consider a setup more relevant for quantum computation technologies.
\section{Numerical Results}
We simulate the spin-boson dynamics in the $P$ and $Q$ branches (separately) using QUAPI \cite{QUAPI}, to obtain the population and coherences in each branch. With this at hand, we generate the $4\times 4$ reduced density matrix $\rho_S(t)$, Eq. (\ref{eq:M}). The qubits degree of entanglement is calculated using Eq. (\ref{eq:Ct}). Our general description assumes two thermal reservoirs: $H_L$, coupled to spin 1 and $H_R$, coupled to spin 2. These reservoirs are characterized by the spectral function $J_{\nu}(\omega)=\pi \sum_{k\in \nu} \lambda_{k}^2\delta(\epsilon_{k}-\omega)$. Specifically, we simulate Ohmic baths, $J_{\nu}(\omega)=\frac{\pi K_{\nu}}{2}\omega e^{-\omega/\omega_c}$; $\omega_c$ is the cutoff frequency. The dimensionless prefactor $K_{\nu}$ is referred to as the Kondo parameter, describing the strength of the system-bath interaction energy. In practice, our simulations were performed without the $R$ reservoir, by taking $K_R=0$. The reason for this choice is that in the spin-boson Hamiltonian (\ref{eq:HTR}) the inclusion of identical reservoirs which are interacting in the same manner with the spins (same functional form for $B_L$ and $B_R$, up to a sign), simply amounts to an additive operation, reflecting a linear scaling of the Kondo parameter when more than one reservoir is incorporated. In what follows, we thus use the short notation $K\equiv K_{L}$, $K_R=0$ and $T\equiv T_L$.
The energy parameters in the system are the qubit-qubit coupling, taken as $J=1$, and the anisotropy parameters $\delta=0.1$, $\gamma=0.5$. The qubits are assumed identical with $\epsilon_1=\epsilon_2\sim 0.1-0.5$. For the reservoir we take as a cutoff frequency $\omega_c=7.5$, and use temperatures in the range $T=0.1-1$. The Kondo parameter extends from the weak coupling limit ($K=0.05$) to the intermediate-to-strong regime, $K=0.8$, where convergence of QUAPI can be achieved. The following initial condition is utilized for the qubits subsystem
\begin{eqnarray} \rho_S(0)=a\rho_Q(0)\oplus(1-a)\rho_P(0), \label{eq:Bell} \end{eqnarray}
with $0\leq a\leq 1$ and $\rho_{Q,P}(0)$ as (maximally entangled) Bell states,
\begin{eqnarray} \rho_Q(0)= \frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}; \,\,\, \rho_P(0)=\frac{1}{2}\begin{pmatrix} 1 &1 \\ 1 & 1 \end{pmatrix} \label{eq:init} \end{eqnarray}
The concurrence (\ref{eq:Ct}) can be simplified if the following conditions are simultaneously satisfied: (i) $a\leq1/2$, and (ii) $\epsilon_1=\epsilon_2$. The latter condition, combined with the initial state ascribing identical weight to diagonal elements in the $P$ subspace, implies that the populations in the $P$ (single excitation) subspace are identical at all times, $(\rho_S)_{01,01}=(\rho_S)_{10,10}=\frac{1-a}{2}$.
Since $\sqrt{(\rho_S)_{00,00}(\rho_S)_{11,11}}\leq \frac{a}{2}$ at all times and the density matrix positivity condition demands that
$|\rho_{i,j}|^2\leq\rho_{i,i}\rho_{j,j}$, the off diagonal terms are bounded by $|(\rho_S)_{11,00}|\leq \frac{a}{2}$. This implies that $F_2$ cannot be larger than zero at any instant for $a\leq 1/2$. The concurrence can then be simplified to
\begin{eqnarray} C_{a\leq \frac{1}{2}}(t)= \max (0,2F_1 ). \label{eq:Conc2} \end{eqnarray}
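The bound behind Eq. (\ref{eq:Conc2}) can also be spot-checked by sampling admissible matrix elements; the sampling scheme below is ours:

```python
import math, random

# Sampling spot-check that F2 <= 0 whenever a <= 1/2 and
# (rho_S)_{01,01} = (rho_S)_{10,10} = (1-a)/2; sampling scheme is ours.
random.seed(1)
for _ in range(2000):
    a = random.uniform(0, 0.5)
    p00 = random.uniform(0, a)                   # rho_00 + rho_11 = a
    p11 = a - p00
    cQ = math.sqrt(p00 * p11) * random.random()  # |rho_00,11| <= sqrt(rho_00 rho_11)
    F2 = cQ - (1 - a) / 2                        # F2 = |rho_00,11| - sqrt(rho_01 rho_10)
    assert F2 <= 0
```

The inequality is in fact deterministic: $|(\rho_S)_{00,11}| \le a/2 \le (1-a)/2$ for $a \le 1/2$, so the check cannot fail.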
This expression indicates that $C$ is nonzero if the magnitude of the coherence in the $P$ subspace is large, in comparison to the geometric mean of the populations in the $Q$ subspace. Thus, to understand the behavior of entanglement between the qubits one needs to follow the population and coherence dynamics in both subspaces. At the special point $a=0$ the concurrence reduces to the simple form
\begin{eqnarray} C_{a=0}(t)=2\, \big| (\rho_S)_{01,10} \big|, \label{eq:Conc3} \end{eqnarray}
which depends only on the behavior of the coherence, a continuous function of time. As a result, concurrence sudden death is eliminated, indicating that this effect is directly linked to the inclusion of zero and double excitation components in the dynamics.
\begin{figure}
\caption{Population dynamics (a) in the single excitation subspace $P$ and (b) in the zero and double excitation subspace $Q$. We use the Bell states (\ref{eq:Bell}) as the initial density matrix with $a=1/2$. $K$=0.05, 0.1, 0.2, 0.3, 0.4, 0.5, bottom to top; the data at $K$=0.05 is the most oscillatory. Other parameters are $\epsilon_{1}=\epsilon_2=0.2$, $\delta=0.1$, $\gamma=0.5$, $T=0.2$ and $J=1$. QUAPI was used with a time step $\delta t=0.25$ and a memory time $\tau_c=9\delta t$. Convergence was verified by studying the behavior at different time steps $\delta t$ and memory sizes $\tau_c$.
}
\label{Fig1}
\end{figure}
\begin{figure}
\caption{Real and imaginary parts of the coherences in the single excitation subspace $P$, (a) and (c),
and in the zero and double excitation subspace $Q$, (b) and (d).
The different lines were calculated with $K$=0.05, 0.1, 0.2, 0.3, 0.4, 0.5. Parameters are the same as in Fig. \ref{Fig1}.}
\label{Fig2}
\end{figure}
\begin{figure}
\caption{Concurrence between the two qubits as a function of time,
manifesting a steady-state bath-induced entanglement generation. The different lines were calculated with $K$=0.05, 0.1, 0.2, 0.3, 0.4, 0.5. Parameters are the same as in Fig. \ref{Fig1}. Right panel: The concurrence dynamics in the absence of a thermal environment for the same set of qubits parameters. }
\label{Fig3}
\end{figure}
The qubits population behavior in time is displayed in Fig. \ref{Fig1}, using $a=\frac{1}{2}$.
The qubits have the same energy gap, thus in the $P$ subspace the two states are degenerate and their population is identical at all times, independently of $K$. In contrast, in the $Q$ subspace the energy difference between the states is significant, larger than the temperature, $T/2\epsilon<1$; the tunneling element is given by $\gamma J=\frac{1}{2}$, with $\gamma$ as the anisotropy in the qubit-qubit coupling. In such a situation we expect the steady-state population of the spin-up state to be significantly smaller than the ground state population, as indeed we observe in Fig. \ref{Fig1}(b). An interesting observation is the phenomenon of population inversion between the zero and the double excitation states before steady-state sets in. This behavior occurs roughly up to a timescale that is inversely proportional to the Kondo parameter $K$, independent of the temperature.
For the same set of parameters, Fig. \ref{Fig2} presents the coherence dynamics in the two subspaces. Generally, coherences diminish with increasing $K$. Given the population and coherence dynamics, we display in Fig. \ref{Fig3} the concurrence, calculated using Eq. (\ref{eq:Conc2}), manifesting a rich dynamics. The following characteristics are of particular interest: (i) the birth time of the concurrence, (ii) its oscillations, (iii) the occurrence of sudden death and revival, and (iv) the steady-state value. We now explain these properties.
{\it Time-zero concurrence.} The particular initial condition used here, $a=\frac{1}{2}$, results in $C(t=0)=0$. This is because while we are using maximally entangled states within each subspace as an initial condition, the entanglement between the two qubits themselves is zero initially, since all relevant reduced density matrix elements, necessary for evaluating Eq. (\ref{eq:Conc2}), are identical \cite{Alber}.
{\it Delayed sudden birth.} When $(\rho_S)_{00,00}\sim (\rho_{S})_{11,11}$, a situation taking place at, and close to, the initial time, the concurrence should be zero, given the positivity condition that limits the value of the off-diagonal elements. For small $K$, the time it takes the system to depart from its initial equal-population state is prolonged compared to the large-$K$ case; thus, the concurrence birth time is delayed with respect to the large-$K$ behavior. Interestingly, the delay in the birth time does not scale linearly with $K$. Rather, the delay is significant both for $K=0.05$ (weak system-bath coupling) and for $K=0.5$ (intermediate coupling), while it is shorter for in-between values, $K\sim 0.3$. The reason is that the delay time is a nontrivial function of both the time it takes the coherences to establish and the time it takes the population to significantly depart from the initial (equally populated) setup.
{\it Oscillations.} The oscillatory nature of $C$ in time, best manifested for $K\leq0.1$ reflects the Rabi-type oscillations of the diagonal elements $(\rho_S)_{00,00}$ and $ (\rho_{S})_{11,11}$. When these elements are similar in value, the concurrence drops, and even dies during a certain time interval, depending on the magnitude of the coherence at that time.
{\it Steady-state value.} If the two qubits are isolated from thermal effects ($K$=0, right panel of Fig. \ref{Fig3}), the concurrence oscillations reflect the nature of the population and coherence dynamics, depicting Rabi oscillations. The qubits' behavior under a dissipative thermal bath is notably distinct: since both populations and coherences approach a constant at long time, the concurrence reaches a steady-state value as well. It predominantly reflects the magnitude of the coherence $(\rho_S)_{10,01}$ in the long time limit, since the population weakly depends on $K$ at long time, see Fig. \ref{Fig1}(b). Interestingly, for the present $a=\frac{1}{2}$ case the steady-state value of the concurrence is almost identical at weak system-bath coupling, $K=0.05-0.3$. It significantly degrades around $K=0.4-0.5$. Beyond that, it is identically zero.
{\it Sudden death and revival.} Based on Figs.
\ref{Fig1}-\ref{Fig3}, we can draw general conclusions regarding the process of entanglement sudden death. The effect is directly linked to the existence of population in the zero and double excitation states. If the dynamics within the $Q$ space is eliminated altogether ($a=0$), the concurrence is only controlled by the absolute value of the coherence $|(\rho_S)_{01,10}|$, Eq. (\ref{eq:Conc3}). This quantity does not manifest an oscillatory behavior: under the present initial condition it starts at a large value, touches zero at a particular time, then grows again to a certain extent.
(Under a different initial condition, e.g.,
$|(\rho_S)_{01,10}(0)|=0$, the entanglement will systematically grow, up to the steady-state value). In contrast, when the double excitation state is initially populated, oscillations between the states in the $Q$ subspace largely occur, if system-bath coupling is weak. Then, the competition between the two terms in $F_2$, see Eq. (\ref{eq:Conc1}), can result in a disentanglement over a finite time interval.
Fig. \ref{Fig4} displays the concurrence for different initial conditions, obtained by varying the parameter $a$. This modifies the weight of the zero and double excitation states in the dynamics. When $a=0$ and $K=0$ the entanglement is maximal ($C=1$) at all times. For finite $K$, keeping $a=0$, it dies at the particular point at which $|(\rho_S)_{01,10}|=0$. Beyond that, it recovers to a value close to 1. When we include the $Q$ states, e.g., by taking $a=0.2$, we observe the effect of entanglement sudden death over a certain time interval. The duration of this interval grows as $a$ is increased up to $a=1/2$. Beyond this point the coherence in the $P$ subspace may dominate over the population in the $Q$ subspace, resulting in a positive value for $F_1$ and eliminating entanglement sudden death. The behavior at intermediate-to-strong system-bath coupling, $K=0.6$, is included in Fig. \ref{Fig4b}, demonstrating that temporal oscillations are washed out. The dynamics at even larger $K$ shows similar trends, with reduced concurrence values.
The role of the temperature is displayed in Fig. \ref{Fig5}. At high temperature the concurrence is zero. At intermediate values, $T<\epsilon\sim 1$, we find that its sole effect is to shift the qubit entanglement down with increasing temperature. All other features (birth time, oscillations) stay intact. The simulation could not be performed at temperatures below $T\sim 0.1$ due to convergence issues in QUAPI.
We can readily study the concurrence under different initial conditions for the $P$ and $Q$ subspaces, not necessarily in the form of Bell states, as long as Eq. (\ref{eq:t0}) is obeyed. In particular, using a diagonal state for the time-zero reduced density matrix, similar features as those discussed above were obtained.
\begin{figure}
\caption{Concurrence between the two qubits as a function of time, using Bell states [Eq. (\ref{eq:Bell})] with $a=0$ (full), $a=0.2$ (dashed), $a=0.5$ (dashed-dotted) and $a=0.8$ (dotted). Left Panel: $K=0.05$. Right Panel: $K=0$. Other parameters are the same as in Fig. \ref{Fig1}. }
\label{Fig4}
\end{figure}
\begin{figure}
\caption{Same as Fig. \ref{Fig4} but at strong system-bath coupling $K=0.6$, $a=0$ (full), $a=0.2$ (dashed), $a=0.5$ (dashed-dotted) and $a=0.8$ (dotted).
}
\label{Fig4b}
\end{figure}
\begin{figure}
\caption{The role of the bath temperature on the concurrence evolution. $T=0.1$ (dashed line), $T=0.2$ (dashed-dotted line), $T=0.4$ (dotted line) and $T=0.6$ (full line). We use Bell states [Eq. (\ref{eq:Bell})] with $a=0.5$ and $K=0.1$. Other parameters are as in Fig. \ref{Fig1}.}
\label{Fig5}
\end{figure}
\section{Universal features at long time}
The long time behavior of the concurrence, representing the equilibrium limit, is displayed in Fig. \ref{FigC} as a function of both $K$, the system-bath coupling parameter, and the initial state preparation ratio $a$, see Eq. (\ref{eq:Bell}). We note that the concurrence can be significant in both the weak and strong coupling regimes, as long as the system evolves predominantly in either the $P$ or $Q$ subspaces. We now show that at weak coupling, $K\ll1$, and at low temperatures, $T<J\gamma$, for a broad range of parameters (as we explain below), the following general result holds
\begin{eqnarray} C_{a<\frac{1}{2}}(t\rightarrow \infty)\sim 1-2a. \label{eq:Clong0} \end{eqnarray}
The important implication of this result is that to the lowest order in $K$ the concurrence deviates from unity due to the occupation of the zero and doubly excited states in the system. This trend was observed before, e.g., in Ref. \cite{Zubairy}. However, here, for the first time, it is justified analytically, based on the spin-boson model behavior \cite{Weiss}. We derive Eq. (\ref{eq:Clong0}) by studying the long-time limit of $F_1$, as it dictates the concurrence when $a\leq \frac{1}{2}$, see Eq. (\ref{eq:Conc2}). In the biased case, weak coupling theory (beyond the noninteracting blip approximation) provides \cite{Weiss}
\begin{eqnarray} \langle Q_z\rangle=(\rho_S)_{00,00}-(\rho_S)_{11,11}\sim a \frac{\epsilon}{\Delta_b}\tanh\left (\frac{\Delta_b}{T} \right), \end{eqnarray}
in the thermodynamic limit. Here
$\Delta_b^2=\epsilon^2+\Delta_{eff}^2$, $\Delta_{eff}$ is a nontrivial function of $K$, $\omega_c$, and the bare tunneling element in the $Q$ subspace, $J\gamma$. In the weak coupling limit we can write $\Delta_{eff}\sim J\gamma$, thus $\Delta_b \sim \sqrt{\epsilon^2+J^2\gamma^2}$. Manipulating the polarization, we obtain the relevant term,
\begin{eqnarray} \sqrt{(\rho_S)_{00,00}(\rho_S)_{11,11}} \sim \frac{a}{2}\sqrt{ 1-\left( \frac{\epsilon}{\Delta_b} \right)^2
\tanh^2 \left( \frac{\Delta_b}{T}\right) }. \label{eq:1} \end{eqnarray}
The other element in $F_1$ is the coherence in the $P$ subspace. In the long time limit it satisfies \cite{Weiss}
\begin{eqnarray}
|(\rho_S)_{01,10}|\sim \frac{1-a}{2}\frac{J}{\Omega}\tanh\left(\frac{\Omega}{T}\right), \label{eq:2} \end{eqnarray}
where $\Omega^2=J^2 + 2 J^2 K\mu$; the proportionality factor obeys $\mu= \psi(iJ/\pi T)-\ln(J/T)$ with $\psi$ as the digamma function \cite{Weiss}. As expected, the equilibrium concurrence depends on the environmental temperature, leading to entanglement degradation at high $T$, as observed in Fig. \ref{Fig5}. Considering the low temperature case, $T<J,J\gamma$, we note that the hyperbolic tangent term in both Eq. (\ref{eq:1}) and Eq. (\ref{eq:2}) is close to unity. If we further work in the region $\epsilon <J\gamma$, the square root expression in Eq. (\ref{eq:1}) gets close to 1. Under these broad conditions, the concurrence reduces to
\begin{eqnarray} C_{a<\frac{1}{2}}(t\rightarrow \infty)&\sim& (1-a) \frac{1}{\sqrt{1+2\mu K}} - a \nonumber\\ &\sim& 1-2a-\mu K(1-a). \label{eq:Clong1} \end{eqnarray}
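The chain of approximations above can be collected into a short numerical sketch (Python/NumPy; the function and variable names are ours, and $\mu$ is supplied as a precomputed constant rather than evaluated from the digamma function):

```python
import numpy as np

def longtime_concurrence(a, eps, J, gamma, mu, K, T):
    """Approximate long-time concurrence C = max(0, 2*F1), combining
    Eqs. (eq:1) and (eq:2); valid for a < 1/2, weak coupling K << 1."""
    Delta_b = np.sqrt(eps**2 + (J * gamma)**2)         # gap in the Q subspace
    Omega = np.sqrt(J**2 + 2.0 * J**2 * K * mu)        # dressed gap in P
    # sqrt((rho)_{00,00} (rho)_{11,11}), Eq. (eq:1)
    pop_term = 0.5 * a * np.sqrt(1.0 - (eps / Delta_b)**2 * np.tanh(Delta_b / T)**2)
    # |(rho)_{01,10}|, Eq. (eq:2)
    coh_term = 0.5 * (1.0 - a) * (J / Omega) * np.tanh(Omega / T)
    return max(0.0, 2.0 * (coh_term - pop_term))
```

In the limit $K\to 0$, $T\ll J\gamma$ and $\epsilon\ll J\gamma$, this reproduces $C\sim 1-2a$ of Eq. (\ref{eq:Clong0}).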
One should note that the $K$ dependence is more subtle than the simple linear scaling attained here, since the tunneling element $J$ should be corrected by $K$ in a nontrivial manner \cite{Weiss}. The simple result (\ref{eq:Clong1}) provides us with some basic and interesting rules for building a long-time concurrence within the range of parameters mentioned above: (i) It decays linearly with the overall population placed in the $Q$ (zero and double excitation) subspace. (ii) The reservoir temperature does not significantly affect it. (iii) It does not depend on the qubit interaction energy. We note again that these observations are valid for $a<0.5$, as long as $T< J,J\gamma$, $\epsilon<J\gamma$ and $K\ll1$. When $a>0.5$, the concurrence is determined by the competition between $F_1$ and $F_2$, see Eq. (\ref{eq:Ct}). The numerics then suggest that $C_{a>\frac{1}{2}}(t\rightarrow \infty)\sim 2a-1$ holds,
for similar energy parameters.
We conclude this section by emphasizing the implication of Eq. (\ref{eq:Clong0}): One could {\it set} the steady-state entanglement in a {\it dissipative} system, by controlling the initial population in the $P$ and $Q$ subspaces.
\begin{figure}
\caption{The long time concurrence representing equilibrium behavior, for different initial states and system-bath coupling parameters. $T=0.2$, $J=1$, $\gamma=0.5$, $\delta=0.1$ and $\epsilon_1=\epsilon_2=0.2$. The long time limit was taken here as $t=100$. }
\label{FigC}
\end{figure}
\section{Conclusions}
Using exact numerical tools, we simulated the time evolution of two qubits immersed in thermal environments, considering a class of initial states for the subsystem. This task was achieved by reducing the two qubits-bath model into two spin-bath systems, whose dynamics could be readily followed separately. Using Wootters' formula for the concurrence \cite{Wooters}, we quantified the degree of the qubits' entanglement in time, exposing rich dynamics, including oscillations, delayed sudden birth, sudden death, and revival. Specifically, we showed that the occurrence of entanglement sudden death can be traced back to the initial population of the zero and double excitation states. The steady-state behavior was discussed in the weak coupling limit.
Our results are significant for several reasons. First, we exposed a general mapping between an interacting two-qubit system embedded in a bath, and two spin-bath models, allowing us to simulate the dynamics of the original model using a numerically exact method that was developed for studying the prominent spin-boson case. Second, based on our mapping scheme, we calculated the concurrence measure and demonstrated the essential role of the environment in generating a stationary entanglement between the (interacting) qubits. By engineering the environment and its interaction with the system one could tune the degree of disentanglement in the system \cite{Cirac}. Earlier studies in this field have mostly treated a simpler version of our model, ignoring qubit-qubit interaction energy, further utilizing perturbative treatments. To the best of our knowledge, our work is the first to calculate the concurrence exactly in a dissipative and interacting qubit model.
Future studies will focus on the dynamics of non-classical correlations beyond the entanglement measure, evaluating quantum discord \cite{Discord1}. This could be done by relying on the $X$-form of the reduced density matrix \cite{Alber,Fei}. With this at hand, we plan to study the dynamics of classical and quantum correlations in the qubit system, specifically, to investigate classical and quantum decoherence mechanisms and the possible transition between them \cite{decoh}.
\begin{acknowledgments} L.-A. Wu has been supported by the Ikerbasque Foundation Start-up, the CQIQC grant and the Spanish MEC (Project No. FIS2009-12773-C02-02) D. Segal acknowledges support from NSERC discovery grant. The research of C. X. Yu is supported by the Early Research Award of D. Segal. \end{acknowledgments}
\end{document}
\begin{document}
\title{Post-hoc Uncertainty Learning using a Dirichlet Meta-Model}
\begin{abstract}
It is known that neural networks have the problem of being over-confident when directly using the output label distribution to generate uncertainty measures. Existing methods mainly resolve this issue by retraining the entire model to impose the uncertainty quantification capability so that the learned model can achieve desired performance in accuracy and uncertainty prediction simultaneously. However, training the model from scratch is computationally expensive and may not be feasible in many situations. In this work, we consider a more practical post-hoc uncertainty learning setting, where a well-trained base model is given, and we focus on the uncertainty quantification task at the second stage of training. We propose a novel Bayesian meta-model to augment pre-trained models with better uncertainty quantification abilities, which is effective and computationally efficient. Our proposed method requires no additional training data and is flexible enough to quantify different uncertainties and easily adapt to different application settings, including out-of-domain data detection, misclassification detection, and trustworthy transfer learning. We demonstrate our proposed meta-model approach's flexibility and superior empirical performance on these applications over multiple representative image classification benchmarks.
\end{abstract}
\section{Introduction}\label{sec:intro}
Despite the promising performance of deep neural networks achieved in various practical tasks~\citep{simonyan2014very, ren2015faster, hinton2012deep, mikolov2010recurrent, alipanahi2015predicting, litjens2017survey}, uncertainty quantification (UQ) has attracted growing attention in recent years to fulfill the emerging demand for more robust and reliable machine learning models, as UQ aims to measure the reliability of the model's prediction quantitatively. Accurate uncertainty estimation is especially critical for the field that is highly sensitive to error prediction, such as autonomous driving~\citep{bojarski2016end} and medical diagnosis~\citep{begoli2019need}.
Most state-of-the-art approaches~\citep{gal2016dropout, lakshminarayanan2017simple, malinin2018predictive, van2020simple} focus on building a deep model equipped with uncertainty quantification capability so that a single deep model can achieve both desired prediction and UQ performance simultaneously. However, such an approach for UQ suffers from practical limitations because it either requires a specific model structure or explicitly training the entire model from scratch to impose the uncertainty quantification ability. A more realistic scenario is to quantify the uncertainty of a pretrained model in a post-hoc manner due to practical constraints. For example, (1) compared with prediction accuracy and generalization performance, the uncertainty quantification ability of deep learning models is usually given lower priority, especially for profit-oriented applications, such as recommendation systems; (2) some applications require the models to impose other constraints, such as fairness or privacy, which might sacrifice the UQ performance; (3) for some applications such as transfer learning, the pretrained models are usually available, and it might be a waste of resources to train a new model from scratch.
Motivated by these practical concerns, we focus on tackling the post-hoc uncertainty learning problem, i.e., given a pretrained model, how to improve its UQ quality without affecting its predictive performance.
Prior works on improving uncertainty quality in a post-hoc setting have mainly targeted improving calibration \citep{guo2017calibration,kull2019beyond}. These approaches typically fail to augment the pre-trained model with the ability to capture different sources of uncertainty, such as epistemic uncertainty, which is crucial for applications such as Out-of-Distribution (OOD) detection. Several recent works \citep{chen2019confidence,jain2021deup} have adopted the meta-modeling approach, where a meta-model is trained to predict whether or not the pretrained model is correct on the validation samples. These methods still rely on a point estimate of the meta-model parameters, which can be unreliable, especially when the validation set is small.
In this paper, we propose a novel Bayesian meta-model-based uncertainty learning approach to mitigate the aforementioned issues. Our proposed method requires no additional data other than the training dataset and is flexible enough to quantify different kinds of uncertainties and easily adapt to different application settings.
Our empirical results provide crucial insights regarding meta-model training: (1) The diversity in feature representations across different layers is essential for uncertainty quantification, especially for out-of-domain (OOD) data detection tasks; (2) The Dirichlet meta-model can be leveraged to capture different uncertainties, including total uncertainty and epistemic uncertainty; (3) There exists an over-fitting issue in uncertainty learning, similar to supervised learning, that needs to be addressed by a novel validation strategy to achieve better performance. Furthermore, we show that our proposed approach has the flexibility to adapt to various applications, including OOD detection, misclassification detection, and trustworthy transfer learning.
\section{Related Work}
Uncertainty Quantification methods can be broadly classified as \textit{intrinsic} or \textit{extrinsic} depending on how the uncertainties are obtained from the machine learning models. Intrinsic methods encompass models that inherently provide an uncertainty estimate along with their predictions. Some intrinsic methods, such as neural networks with homoscedastic/heteroscedastic noise models \citep{wakefield2013bayesian} and quantile regression \cite{koenker1978regression}, can only capture \textit{data} (aleatoric) uncertainty. Many applications, including out-of-distribution detection, require capturing both \textit{data} (aleatoric) and \textit{model} (epistemic) uncertainty accurately. Bayesian methods such as Bayesian neural networks (BNNs) \cite{neal2012bayesian,blundell15,welling2011bayesian} and Gaussian processes \cite{gpbook}, as well as ensemble methods~\citep{lakshminarayanan2017simple}, are well known examples of intrinsic methods that can quantify both uncertainties. However, Bayesian methods and ensembles can be quite expensive and require several approximations to learn/optimize in practice~\citep{mackay1992practical,kristiadi2021learnable,welling2011bayesian}. Other approaches attempt to alleviate these issues by directly parameterizing a Dirichlet prior distribution over the categorical label proportions via a neural network~\citep{malinin2018predictive, sensoy2018evidential, malinin2019reverse, nandy2020towards, charpentier2020posterior, joo2020being}.
Under model misspecification, Bayesian approaches can produce severely miscalibrated uncertainties. In the particular case of BNNs, sparsity-promoting priors have been shown to produce better-calibrated uncertainties, especially in the small data regime \cite{ghosh2019model}, and somewhat alleviate the issue. Improved approximate inference methods and methods for prior elicitation in BNN models are active areas of research for forcing BNNs to produce better-calibrated uncertainties. Frequentist methods that approximate the jackknife have also been proposed to construct calibrated confidence intervals~\cite{alaa2019discriminative}.
For models without an inherent notion of uncertainty, extrinsic methods are employed to extract uncertainties in a post-hoc manner. The post-hoc UQ problem is still under-explored, and few works focus on tackling this problem. One such approach is to build auxiliary or meta-models, which have been used successfully to generate reliable confidence measures (in classification) \citep{chen2019confidence}, prediction intervals (in regression), and to predict performance metrics such as accuracy on unseen and unlabeled data~\citep{elder2020learning}. Similarly, DEUP~\citep{jain2021deup} trains an error predictor to estimate the epistemic uncertainty of the pretrained model in terms of the difference between generalization error and aleatoric uncertainty. LULA~\citep{kristiadi2021learnable} trains additional hidden units building on layers of MAP-trained model to improve its uncertainty calibration with Laplace approximation. However, many of these methods require additional data samples that are either validation or out of distribution dataset to train or tune the hyper-parameter, which is infeasible when these data are not available. Moreover, they are often not flexible enough to distinguish epistemic uncertainty and aleatoric uncertainty, which are known to be crucial in various learning applications~\citep{hullermeier2021aleatoric}. In contrast, our proposed method does not require additional training data or modifying the training procedure of the base model.
\section{Problem Formulation} \label{sec:problem}
We focus on classification problems in this paper. Let ${\mathcal{Z}} = {\mathcal{X}} \times {\mathcal{Y}}$, where ${\mathcal{X}}$ denotes the input space, and ${\mathcal{Y}}=\{1,\cdots,K\}$ denotes the label space. Given a base-model training set $\mathcal{D}_B=\left\{\boldsymbol{x}^B_{i}, y^B_{i}\right\}_{i=1}^{N_B}\in {\mathcal{Z}}^{N_B} $ containing i.i.d. samples generated from the distribution $P^{B}_{Z}$, a pretrained base model ${\bm{h}} \circ \Phi:{\mathcal{X}} \rightarrow \Delta^{K-1}$ is constructed, where $\Phi: {\mathcal{X}} \rightarrow {\mathbb{R}}^{l}$ and ${\bm{h}}: {\mathbb{R}}^{l} \rightarrow \Delta^{K-1}$ denote two complementary components of the neural network. More specifically, $\Phi\left({\bm{x}}\right) = \Phi\left({\bm{x}}; {\bm{w}}_{\phi}\right)$ stands for the intermediate feature representation of the base model, and the model output ${\bm{h}}\left(\Phi({\bm{x}})\right) = {\bm{h}}\left(\Phi({\bm{x}}; {\bm{w}}_{\phi}); {\bm{w}}_h\right)$ denotes the predicted label distribution $P_B\left( {\bm{y}} \mid \Phi({\bm{x}})\right) \in \Delta^{K-1}$ given input sample ${\bm{x}}$, where $\left({\bm{w}}_{\phi}, {\bm{w}}_h\right)\in {\mathcal{W}}$ are the parameters of the pretrained base model.
The performance of the base model is evaluated by a non-negative loss function $\ell_B: {\mathcal{W}} \times {\mathcal{Z}} \to {\mathbb{R}}_+$, e.g., cross entropy loss. Thus, a standard way to obtain the pretrained base model is by minimizing the empirical risk over $\mathcal{D}_B$, i.e., ${\mathcal{L}}_B({\bm{h}} \circ \Phi,{\mathcal{D}}_B) \triangleq \frac{1}{N_B}\sum_{i=1}^{N_B} \ell_B \left({\bm{h}} \circ \Phi, ({\bm{x}}^B_i,y^B_i) \right)$.
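The empirical risk ${\mathcal{L}}_B$ with cross-entropy loss can be written as a small sketch (Python/NumPy; names are ours, for illustration only):

```python
import numpy as np

def empirical_risk(probs, labels):
    """Cross-entropy empirical risk L_B over a dataset.
    probs: [N, K] predicted label distributions h(Phi(x_i)).
    labels: [N] integer class labels y_i."""
    n = len(labels)
    # average of -log P_B(y_i | Phi(x_i)); small epsilon avoids log(0)
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
```

Minimizing this quantity over the base-model parameters $({\bm{w}}_\phi, {\bm{w}}_h)$ yields the standard pretrained base model.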
Although the well-trained deep base model is able to achieve good prediction accuracy, the output label distribution $P_B\left({\bm{y}} \mid \Phi({\bm{x}})\right)$ is usually unreliable for uncertainty quantification, i.e., it can be overconfident or poorly calibrated. Without retraining the model from scratch, we are interested in improving the uncertainty quantification performance in an efficient post-hoc manner. We utilize a meta-model ${\bm{g}} : {\mathbb{R}}^{l} \rightarrow \mathcal{\tilde{Y}}$ with parameter ${\bm{w}}_{g}\in {\mathcal{W}}_g$ building on top of the base model. The meta-model shares the same feature extractor from the base model and generates an output $\tilde{{\bm{y}}} = {\bm{g}}\left(\Phi({\bm{x}}); {\bm{w}}_g\right)$, where $\tilde{{\bm{y}}}\in \mathcal{\tilde{Y}}$ can take any form, e.g., a distribution over $\Delta^{K-1}$ or a scalar. Given a meta-model training set $\mathcal{D}_M=\left\{\boldsymbol{x}^M_{i}, y^M_{i}\right\}_{i=1}^{N_M}$ with i.i.d. samples from the distribution $P^{M}_{Z}$, our goal is to obtain the meta-model by optimizing a training objective ${\mathcal{L}}_M({\bm{g}} \circ \Phi,{\mathcal{D}}_M) \triangleq \frac{1}{N_M}\sum_{i=1}^{N_M} \ell_M \left({\bm{g}} \circ \Phi, ({\bm{x}}^M_i,y^M_i) \right)$, where $\ell_M: {\mathcal{W}}_g \times {\mathcal{W}}_{\phi} \times {\mathcal{Z}} \to {\mathbb{R}}_+$ is the loss function for the meta-model.
In the following, we formally introduce the post-hoc uncertainty learning problem using a meta-model. \begin{problem}[Post-hoc Uncertainty Learning by Meta-model]\label{prob:general} Given a base model ${\bm{h}} \circ \Phi$ learned from the base-model training set ${\mathcal{D}}_B$, the uncertainty learning problem by meta-model is to learn the function ${\bm{g}}$ using the meta-model training set ${\mathcal{D}}_M$ and the shared feature extractor $\Phi$, i.e., \begin{equation} {\bm{g}}^* = \argmin_{{\bm{g}}} {\mathcal{L}}_M({\bm{g}} \circ \Phi,{\mathcal{D}}_M), \end{equation} such that the output from the meta-model $\tilde{{\bm{y}}} = {\bm{g}}\left(\Phi(x)\right)$ equipped with an uncertainty metric function ${\bm{u}} : \mathcal{\tilde{Y}} \rightarrow {\mathbb{R}}$ is able to generate a robust uncertainty score ${\bm{u}}\left(\tilde{{\bm{y}}}\right)$. \end{problem}
Next, the most critical questions are how the meta-model should use the information extracted from the pretrained base model, what kinds of uncertainty the meta-model should aim to quantify, and finally, how to train the meta-model appropriately.
\section{Method}\label{sec:method}
In this section, we specify the post-hoc uncertainty learning framework defined in Problem~\ref{prob:general}. First, we introduce the structure of the meta-model. Next, we discuss the meta-model training procedure, including the training objectives and a validation trick. Finally, we define metrics for uncertainty quantification used in different applications.
The design of our proposed meta-model method is based on three high-level insights: (1) Different intermediate layers of the base model usually capture various levels of feature representation, from low-level features to high-frequency features; e.g., for the OOD detection task, OOD data is unlikely to be similar to in-distribution data across all levels of feature representations. Therefore, it is crucial to leverage the diversity in feature representations to achieve better uncertainty quantification performance. (2) Bayesian methods are known to be capable of modeling different types of uncertainty for various uncertainty quantification applications, i.e., total uncertainty and epistemic uncertainty. Thus, we propose a Bayesian meta-model to parameterize the Dirichlet distribution, used as the conjugate prior distribution over the label distribution. (3) We believe that the overconfidence issue of the base model is caused by over-fitting in supervised learning with cross-entropy loss. In the post-hoc training of the meta-model, a validation strategy is proposed to improve the performance of uncertainty learning rather than prediction accuracy.
\begin{figure}
\caption{A toy example of our proposed meta-model method in the OOD detection application shows the diversity of features in different layers. MetaModel utilizes two intermediate features, while Layer1 and Layer2 are each trained with a single feature.}
\label{fig: toy_problem}
\end{figure}
Before discussing the details of our proposed method, we want to use a toy example of the OOD detection task shown in Figure~\ref{fig: toy_problem} to elaborate our insights. The goal is to improve the OOD (FashionMNIST) detection performance of a LeNet base model trained on MNIST. The meta-model takes base-model intermediate feature representations as input and parameterizes a Dirichlet distribution over the probability simplex. We train three different meta-models: one using both intermediate-layer features, and two using each of the two features individually; we then visualize and compare the output Dirichlet distributions on the simplex. Specifically, we take the three dominant classes with the largest logits to approximately visualize the Dirichlet distribution over the probability simplex. We observe that the meta-model outputs a much sharper distribution for the in-distribution sample than for the OOD sample. Moreover, compared to the meta-models trained with one feature, the meta-model trained with multiple intermediate layers can further distinguish the two distributions on the simplex: it generates a sharper distribution for the in-distribution sample while exhibiting a more uniform distribution for the OOD sample, which strongly supports our key claims.
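Given the Dirichlet concentrations $\boldsymbol{\alpha}$ output by such a meta-model, simple scores can separate in-distribution from OOD inputs. The sketch below (our illustrative choices, not the paper's exact metric functions ${\bm{u}}$) uses the predictive entropy of the mean label distribution as a total-uncertainty score and the evidential "vacuity" $K/\alpha_0$ as an epistemic proxy:

```python
import numpy as np

def dirichlet_uncertainties(alpha):
    """Two uncertainty scores from Dirichlet concentrations alpha (shape [K]):
    - total: entropy of the mean categorical distribution alpha / alpha0
    - vacuity: K / alpha0, large when total evidence is low (epistemic proxy)"""
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum()
    p = alpha / a0
    total = -np.sum(p * np.log(p + 1e-12))
    vacuity = len(alpha) / a0
    return total, vacuity
```

A flat Dirichlet (e.g., $\boldsymbol{\alpha}=[1,1,1]$, the OOD-like case in the toy example) scores high on both measures, while a sharp one (e.g., $\boldsymbol{\alpha}=[100,1,1]$) scores low.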
\subsection{Meta-Model Structure}\label{subsec: model} The proposed meta-model consists of multiple linear layers $\{{\bm{g}}_j\}_{j=1}^{m}$ attached to different intermediate layers of the base model, and a final linear layer ${\bm{g}}_{c}$ that combines all the features and generates a single output. Specifically, given an input sample ${\bm{x}}$, denote the intermediate feature representations extracted from the base model by $\{\Phi_j\left({\bm{x}}\right)\}_{j=1}^{m}$. For each intermediate base feature $\Phi_j$, the corresponding linear layer constructs a low-dimensional meta-feature ${\bm{g}}_j\left(\Phi_j\left({\bm{x}}\right)\right)$. Then, the final linear layer of the meta-model takes the $m$ meta-features as input and generates a single output, i.e., $\tilde{{\bm{y}}} = {\bm{g}}\left(\{\Phi_j\left({\bm{x}}\right)\}_{j=1}^{m}; {\bm{w}}_g\right) = {\bm{g}}_c\left(\{{\bm{g}}_j\left(\Phi_j\left({\bm{x}}\right)\right)\}_{j=1}^{m}; {\bm{w}}_{g_c}\right)$. In practice, the layers ${\bm{g}}_{j}$ and ${\bm{g}}_{c}$ consist only of fully connected layers and activation functions, which gives the meta-model a much simpler structure than the base model and enables efficient training.
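As an illustration, the structure above can be sketched in a few lines; the feature dimensions, weight initialization, and the use of ReLU activations are our own assumptions for the sketch, not details fixed by the method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: two intermediate base features Phi_1(x), Phi_2(x),
# meta-feature dimension 16, and K = 10 classes (all assumed for the sketch).
feat_dims, meta_dim, num_classes = [64, 128], 16, 10

def relu(z):
    return np.maximum(z, 0.0)

# One linear layer g_j per intermediate base feature.
W = [rng.normal(0.0, 0.1, size=(meta_dim, d)) for d in feat_dims]
b = [np.zeros(meta_dim) for _ in feat_dims]
# Final linear layer g_c on the concatenated meta-features.
W_c = rng.normal(0.0, 0.1, size=(num_classes, meta_dim * len(feat_dims)))
b_c = np.zeros(num_classes)

def meta_model(base_features):
    """Map base-model features {Phi_j(x)} to the output y_tilde = log alpha(x)."""
    meta_feats = [relu(Wj @ phi + bj)
                  for Wj, bj, phi in zip(W, b, base_features)]
    return W_c @ np.concatenate(meta_feats) + b_c

phi = [rng.normal(size=d) for d in feat_dims]   # stand-ins for Phi_j(x)
y_tilde = meta_model(phi)                        # one output vector of size K
```

In a real implementation the $\Phi_j$ are frozen base-model activations and the small weight matrices above are the only trainable parameters, which is what makes the post-hoc training cheap.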
Given an input sample ${\bm{x}}$, the base model outputs a conditional label distribution $P_B\left({\bm{y}} | \Phi({\bm{x}})\right) \in \Delta^{K-1}$, corresponding to a single point in the probability simplex. However, such a label distribution $P_B\left({\bm{y}} | \Phi({\bm{x}})\right)$ is a point estimate: it only captures the model's uncertainty across classes but cannot reflect the uncertainty due to a lack of knowledge about a given sample, i.e., the epistemic uncertainty. To this end, we adopt the Dirichlet technique commonly used in the recent literature~\citep{malinin2018predictive, malinin2019reverse, nandy2020towards, charpentier2020posterior} in order to better quantify the epistemic uncertainty. Treat the label distribution as a random variable over the probability simplex, denoted $\boldsymbol{\pi} = [\pi_1, \pi_2, ..., \pi_K]$; the Dirichlet distribution is the conjugate prior of the categorical distribution, i.e., \begin{equation}
\operatorname{Dir}(\boldsymbol{\pi} | \boldsymbol{\alpha}) \triangleq \frac{\Gamma\left(\alpha_{0}\right)}{\prod_{c=1}^{K} \Gamma\left(\alpha_{c}\right)} \prod_{c=1}^{K} \pi_{c}^{\alpha_{c}-1}, \quad \alpha_c>0, \quad \alpha_0 \triangleq \sum_{c=1}^K \alpha_c. \end{equation} Our meta-model ${\bm{g}}$ explicitly parameterizes the posterior Dirichlet distribution, i.e., \begin{equation*}
q(\boldsymbol{\pi}|\Phi({\bm{x}});{\bm{w}}_g ) \triangleq \operatorname{Dir}(\boldsymbol{\pi} | \boldsymbol{\alpha}({\bm{x}})), \ \boldsymbol{\alpha}({\bm{x}}) = e^{{\bm{g}}\left(\Phi({\bm{x}}); {\bm{w}}_g\right)}, \end{equation*} where the output of our meta-model is $\tilde{{\bm{y}}} = \log{\boldsymbol{\alpha}({\bm{x}})}$, and $\boldsymbol{\alpha}({\bm{x}}) = [\alpha_1({\bm{x}}), \alpha_2({\bm{x}}), ..., \alpha_K({\bm{x}})]$ is the concentration parameter of the Dirichlet distribution given the input ${\bm{x}}$. The overall structure of the meta-model is shown in Figure~\ref{fig: structure}.
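A minimal numerical sketch of this parameterization (the logit values are arbitrary stand-ins for a meta-model output): the exponential guarantees positive concentration parameters, and $\boldsymbol{\alpha}/\alpha_0$ recovers the expected label distribution used for prediction.

```python
import numpy as np

# Hypothetical meta-model output y_tilde = log alpha(x) for K = 3 classes.
y_tilde = np.array([2.0, 0.0, -1.0])

alpha = np.exp(y_tilde)    # concentration parameters alpha(x), always > 0
alpha_0 = alpha.sum()      # Dirichlet precision
p_mean = alpha / alpha_0   # expected label distribution E_q[pi] = alpha / alpha_0
```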
\begin{figure}
\caption{Meta-Model structure}
\label{fig: structure}
\end{figure}
\subsection{Uncertainty Learning} \paragraph{Training Objective}\label{subsec: objective}
From a Bayesian perspective, the predicted label distribution under the Dirichlet meta-model is the expected categorical distribution: \begin{equation}
q(y=c|\Phi({\bm{x}}); {\bm{w}}_g) = \mathbb{E}_{q(\boldsymbol{\pi}|\Phi({\bm{x}}); {\bm{w}}_g)} [P(y=c|\boldsymbol{\pi})] = \frac{\alpha_c({\bm{x}})}{\alpha_0({\bm{x}})}, \end{equation} where $\alpha_0=\sum_{c=1}^K \alpha_c$ is the precision of the Dirichlet distribution.
The true posterior of the categorical distribution given a sample $({\bm{x}},y)$ is $P(\boldsymbol{\pi}| \Phi({\bm{x}}), y) \propto P(y | \boldsymbol{\pi}, \Phi({\bm{x}})) P(\boldsymbol{\pi} | \Phi({\bm{x}}))$, which is difficult to evaluate. Instead, we follow the variational inference technique of~\citep{joo2020being}: the meta-model produces a variational Dirichlet distribution $q(\boldsymbol{\pi}| \Phi({\bm{x}}); {\bm{w}}_g)$ approximating the true posterior $P(\boldsymbol{\pi}| \Phi({\bm{x}}), y)$, and we minimize the KL-divergence $\mathrm{KL}\left(q(\boldsymbol{\pi}| \Phi({\bm{x}}); {\bm{w}}_g) \| P(\boldsymbol{\pi}| \Phi({\bm{x}}), y)\right)$, which is equivalent to maximizing the evidence lower bound (ELBO) (derivation provided in Appendix~\ref{sec:ELBO-appendix}), i.e., \begin{align}
{\mathcal{L}}_{\mathrm{VI}}({\bm{w}}_g) &=\frac{1}{N_M}\sum_{i=1}^{N_M} \mathbb{E}_{q(\boldsymbol{\pi}| \Phi({\bm{x}}_i); {\bm{w}}_g)}[\log P(y_i \mid \boldsymbol{\pi}, \boldsymbol{x}_i)] \nonumber \\
&\quad - \lambda\cdot\mathrm{KL}\left(q(\boldsymbol{\pi}| \Phi({\bm{x}}_i); {\bm{w}}_g) \| P(\boldsymbol{\pi} | \Phi({\bm{x}}_i))\right)\\ &= \frac{1}{N_M}\sum_{i=1}^{N_M} \psi\left(\alpha^{(i)}_{y_i}\right)-\psi\left(\alpha^{(i)}_{0}\right) \nonumber\\
&\quad - \lambda\cdot \mathrm{KL}\left( \operatorname{Dir}(\boldsymbol{\pi}| \boldsymbol{\alpha}^{(i)}) \| \operatorname{Dir}(\boldsymbol{\pi} | \boldsymbol{\beta})\right), \end{align} where $\boldsymbol{\alpha}^{(i)} = e^{{\bm{g}}\left(\Phi({\bm{x}}_i); {\bm{w}}_g\right)}$ is the Dirichlet concentration parameter produced by the meta-model, $\psi$ is the digamma function, and $\boldsymbol{\beta}$ is the predefined concentration parameter of the prior distribution; in practice, we simply set $\boldsymbol{\beta} = [1,\cdots,1]$. The likelihood term encourages the categorical distribution to be sharper around the true class on the simplex, the KL-divergence term can be viewed as a regularizer that prevents overconfident predictions, and $\lambda$ is a hyper-parameter balancing the trade-off.
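The per-sample objective has a closed form via the digamma function and the Dirichlet-Dirichlet KL divergence. A minimal sketch using SciPy, with the flat prior $\boldsymbol{\beta} = [1,\cdots,1]$ as in the text:

```python
import numpy as np
from scipy.special import digamma, gammaln

def dirichlet_kl(alpha, beta):
    """Closed-form KL( Dir(alpha) || Dir(beta) )."""
    a0, b0 = alpha.sum(), beta.sum()
    return (gammaln(a0) - gammaln(alpha).sum()
            - gammaln(b0) + gammaln(beta).sum()
            + ((alpha - beta) * (digamma(alpha) - digamma(a0))).sum())

def elbo(alpha, y, lam=1.0):
    """Per-sample ELBO: psi(alpha_y) - psi(alpha_0) - lam * KL(q || prior)."""
    beta = np.ones_like(alpha)                    # flat prior beta = [1, ..., 1]
    log_lik = digamma(alpha[y]) - digamma(alpha.sum())
    return log_lik - lam * dirichlet_kl(alpha, beta)
```

Maximizing this over the meta-model weights (which produce $\boldsymbol{\alpha}$) sharpens mass on the true class, while the KL term pulls $\boldsymbol{\alpha}$ back toward the flat prior.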
\paragraph{Validation for Uncertainty Learning} \label{sec:validation}
Validation with early stopping is a commonly used technique in supervised learning to obtain a model with the desired generalization performance, i.e., training stops when the error evaluated on the validation set starts increasing. However, we observe that this standard validation method does not work well for uncertainty learning. One possible explanation is that the model achieves its highest accuracy when the validation loss is small but may not simultaneously achieve its best UQ performance, i.e., the model can still be overconfident. To this end, we propose a simple and effective validation approach specifically for uncertainty learning. Instead of monitoring the validation cross-entropy loss, we evaluate a specific uncertainty quantification performance metric. For example, for the OOD task we create a noisy validation set by adding noise to the original validation samples and treating these noisy samples as OOD samples (more details are provided in Appendix~\ref{ood-dataset-appendix}). We evaluate the uncertainty score ${\bm{u}}\left(\tilde{{\bm{y}}}\right)$ on both the validation set and the noisy validation set, and stop the meta-model training when the OOD detection performance reaches its maximum under a predefined metric, e.g., the AUROC score. Unlike most existing approaches, which use additional training data to achieve the desired performance~\citep{hendrycks2018deep, kristiadi2021learnable, malinin2018predictive}, we do not require any additional data for training the meta-model.
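The validation loop above can be sketched as follows. The Mann-Whitney form of the AUROC and the patience-based stopping rule are our own illustrative choices; `train_step`, `score_val`, and `score_noisy_val` are hypothetical callbacks standing in for one epoch of ELBO optimization and for the uncertainty scores on the clean and noisy validation sets.

```python
import numpy as np

def auroc(neg, pos):
    """AUROC as the Mann-Whitney statistic: P(pos score > neg score)."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def train_with_uq_early_stopping(train_step, score_val, score_noisy_val,
                                 max_epochs=100, patience=5):
    """Stop meta-model training when OOD AUROC between the clean and noisy
    validation sets stops improving (instead of monitoring validation loss)."""
    best, best_epoch = -np.inf, 0
    for epoch in range(max_epochs):
        train_step()                                   # one epoch of ELBO training
        score = auroc(score_val(), score_noisy_val())  # uncertainty scores u(y_tilde)
        if score > best:
            best, best_epoch = score, epoch
        elif epoch - best_epoch >= patience:
            break                                      # patience exhausted
    return best
```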
\subsection{Uncertainty Metrics} \label{sec:metric}
In this section, we show that our meta-model exhibits the desired behavior for quantifying different types of uncertainty and explain how these uncertainties can be used in various applications.
\textbf{Total Uncertainty.}
Total uncertainty, also known as predictive uncertainty, is a combination of epistemic uncertainty and aleatoric uncertainty. The total uncertainty is often used for misclassification detection problems, where the misclassified samples are viewed as in-distribution hard samples. There are two standard ways to measure total uncertainty: (1) \textbf{Entropy} (Ent): The Shannon entropy of expected categorical label distribution over the Dirichlet distribution, i.e., ${\mathcal{H}}\left(P(y|\Phi({\bm{x}}); {\bm{w}}_g) \right) = {\mathcal{H}}\left(\mathbb{E}_{P(\boldsymbol{\pi}|\Phi({\bm{x}}); {\bm{w}}_g)}[P(y|\boldsymbol{\pi})]\right)
$; (2) \textbf{Max Probability} (MaxP): The probability of the predicted class in label distribution, i.e., $\max_c P(y=c|\Phi({\bm{x}}); {\bm{w}}_g)$.
\textbf{Epistemic Uncertainty.} Epistemic uncertainty quantifies the uncertainty arising when the model has insufficient knowledge for a prediction, e.g.,
in the case of an unseen data sample. Epistemic uncertainty is especially useful in OOD detection problems: when the meta-model encounters an unseen sample during testing, it outputs a high epistemic uncertainty score due to its lack of knowledge. We use three metrics to measure epistemic uncertainty. (1) \textbf{Differential Entropy} (Dent): the entropy of the Dirichlet distribution; a larger differential entropy corresponds to a more spread-out Dirichlet distribution, i.e., ${\mathcal{H}}\left(P(\boldsymbol{\pi}| \Phi({\bm{x}}); {\bm{w}}_g) \right) = -\int P(\boldsymbol{\pi}| \Phi({\bm{x}}); {\bm{w}}_g)\cdot \log{P(\boldsymbol{\pi}| \Phi({\bm{x}}); {\bm{w}}_g)} d\boldsymbol{\pi}$. (2) \textbf{Mutual Information} (MI): the difference between the entropy (which measures total uncertainty) and the expected entropy of the categorical distributions sampled from the Dirichlet distribution (which approximates aleatoric uncertainty), i.e., ${\mathcal{I}}\left(y,\boldsymbol{\pi}|\Phi({\bm{x}}) \right) = {\mathcal{H}}\left(\mathbb{E}_{P(\boldsymbol{\pi}|\Phi({\bm{x}}); {\bm{w}}_g)} [P(y|\boldsymbol{\pi})]\right)- \mathbb{E}_{P(\boldsymbol{\pi}|\Phi({\bm{x}}); {\bm{w}}_g)}[{\mathcal{H}}\left(P(y|\boldsymbol{\pi})\right)]$. (3) \textbf{Precision} (Prec): the sum of the Dirichlet concentration parameters, $\alpha_0 = \sum_{c=1}^K \alpha_c$; a larger value corresponds to a sharper distribution and higher confidence.
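All five metrics above have closed forms for a Dirichlet with concentration $\boldsymbol{\alpha}$. A sketch of their computation using SciPy (the dictionary keys mirror the abbreviations in the text):

```python
import numpy as np
from scipy.special import digamma, gammaln

def uncertainty_metrics(alpha):
    """Closed-form uncertainty metrics of Dir(alpha) over K classes."""
    K, a0 = alpha.size, alpha.sum()
    p = alpha / a0                                # expected label distribution
    ent = -(p * np.log(p)).sum()                  # total: Entropy
    maxp = p.max()                                # total: Max Probability
    dent = (gammaln(alpha).sum() - gammaln(a0)    # epistemic: Differential Entropy
            + (a0 - K) * digamma(a0)
            - ((alpha - 1.0) * digamma(alpha)).sum())
    exp_ent = -(p * (digamma(alpha + 1.0) - digamma(a0 + 1.0))).sum()
    mi = ent - exp_ent                            # epistemic: Mutual Information
    return {"Ent": ent, "MaxP": maxp, "Dent": dent, "MI": mi, "Prec": a0}
```

A sharp Dirichlet (one large $\alpha_c$) yields high MaxP and Prec and low Ent, Dent, and MI; a flat one yields the opposite, which is the behavior the OOD detector relies on.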
Our meta-model has two major advantages: (1) It is efficient: it is easy to optimize and, due to its smaller model complexity, empirically requires less training data. (2) It is versatile: it can quantify different types of uncertainty to suit different applications, and it also extends to different settings such as transfer learning, where the distribution of the meta-model's training data may even differ from that of the base model. The concrete applications are described in Section~\ref{sec:app}. \section{Experiment Results} \label{sec:exp} In this section, we demonstrate the strong empirical performance of our proposed meta-model-based uncertainty learning method: first, we introduce the UQ applications; then we describe the experiment settings; next, we present the main results for the three aforementioned uncertainty quantification applications; and finally, we discuss our takeaways. More experiment results and implementation details are given in Appendix~\ref{setup-appendix} and Appendix~\ref{results-appendix}.
\subsection{Uncertainty Quantification Applications} \label{sec:app} We primarily focus on three applications that can be tackled using our proposed meta-model approach: (1) \textbf{Out-of-domain data detection.} The base model ${\bm{h}}$ is trained using data sampled from the distribution $P^B_{Z}$, and we use the same training set to train the meta-model, i.e., $\mathcal{D}_{B} = \mathcal{D}_{M}$; our method thus requires no additional OOD data for training. During testing, there exist some unobserved out-of-domain data from another distribution $P^{ood}_{Z}$, and the meta-model is expected to identify out-of-distribution input samples based on the epistemic uncertainty metrics. (2) \textbf{Misclassification Detection.} Instead of detecting whether a test sample is out of domain, the goal here is to identify the failure or success of the meta-model's prediction at test time using the total uncertainty metrics. (3) \textbf{Trustworthy Transfer Learning.}
In transfer learning, there exists a pretrained model trained using source task data $\mathcal{D}_{s}$ sampled from a source distribution $P^s_{Z}$, and the goal is to adapt the source model to a target task using target data $\mathcal{D}_{t}$ sampled from a target distribution $P^t_{Z}$. Most existing transfer learning approaches focus only on improving the prediction performance of the transferred model but ignore its UQ performance on the target task. Our meta-model can be utilized to address this problem: given a pretrained source model ${\bm{h}}^{s} \circ \Phi^{s}$, the meta-model can be efficiently trained using target domain data by ${\bm{g}}^t = \argmin_{{\bm{g}}} {\mathcal{L}}_E({\bm{g}} \circ \Phi^s,{\mathcal{D}}_t)$.
\begin{table*}[t]
\begin{center}
\scriptsize
\captionsetup{font=small}
\caption{\textbf{OOD Detection AUROC score.} MI, Dent, and Prec stand for the epistemic uncertainty metrics Mutual Information, Differential Entropy, and Precision. Settings indicates whether a method is post-hoc or traditional, i.e., trains the entire model from scratch. Additional Data indicates whether additional training data is used.}
\begin{tabular}{lllllllll}
\toprule
\textbf{ID Data}\ \&\ \textbf{Model} & \textbf{Methods} & \textbf{Settings} & \textbf{Additional Data} & \textbf{Omniglot}& \textbf{FMNIST} & \textbf{KMNIST}& \textbf{CIFAR10} &\textbf{Corrupted} \\
\hline
MNIST & Base Model(Ent) & Traditional & No & 98.9$\pm$0.5 & 97.8$\pm$0.8 & 95.8$\pm$0.8 & 99.4$\pm$0.2 & 99.5$\pm$0.3 \\
LeNet & Base Model(MaxP) & Traditional & No & 98.7$\pm$0.6 & 97.6$\pm$0.8 & 95.6$\pm$0.8 & 99.3$\pm$0.2 & 99.4$\pm$0.4 \\
& Whitebox & Post-hoc & Yes & 98.5$\pm$0.3 & 97.7$\pm$0.6 & 96.0$\pm$0.2 & 99.5$\pm$0.1 & 99.5$\pm$0.1 \\
& LULA & Post-hoc & Yes & 99.8$\pm$0.0 & 99.4$\pm$0.0 & \textcolor{violet}{\textbf{99.3}$\pm$0.1} & 99.9$\pm$0.0 & 99.6$\pm$0.1 \\
\hline
& \textbf{Ours(Ent)} & Post-hoc & No & 99.7$\pm$0.1 & 99.5$\pm$0.2 & 98.2$\pm$0.3 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(MaxP)} & Post-hoc & No & 99.3$\pm$0.2 & 99.3$\pm$0.2 & 98.0$\pm$0.2 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(MI)} & Post-hoc & No & \textcolor{violet}{\textbf{99.9}$\pm$0.0} & \textcolor{violet}{\textbf{99.6}$\pm$0.2} &97.7$\pm$0.4 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(Dent)} & Post-hoc & No & 99.8$\pm$0.0 & 99.5$\pm$0.2 & 97.6$\pm$0.4 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{99.9}$\pm$0.0} & \textcolor{violet}{\textbf{99.5}$\pm$0.2} &97.7$\pm$0.5 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
\toprule
\textbf{ID Data}\ \&\ \textbf{Model} & \textbf{Methods} & \textbf{Settings} & \textbf{Additional Data} & \textbf{SVHN}& \textbf{FMNIST} & \textbf{LSUN}& \textbf{TinyImageNet} &\textbf{Corrupted} \\
\hline
CIFAR10 & Base Model(Ent) & Traditional & No & 86.4$\pm$4.6 & 90.8$\pm$1.3 & 89.0$\pm$0.5 & 87.5$\pm$1.1 & 85.9$\pm$8.2 \\
VGG16 & Base Model(MaxP) & Traditional & No & 86.3$\pm$4.4 & 90.4$\pm$1.2 & 88.7$\pm$0.5 & 87.3$\pm$1.1 & 85.7$\pm$8.1 \\
& Whitebox & Post-hoc & Yes & 96.9$\pm$0.9 & 95.2$\pm$1.2 & 89.3$\pm$2.2 & 88.9$\pm$2.5 & 96.4$\pm$1.0 \\
& LULA & Post-hoc & Yes & 97.1$\pm$1.7 & 94.3$\pm$0.0 & 92.8$\pm$0.1 & 90.0$\pm$0.0 & 97.7$\pm$2.0 \\
\hline
& \textbf{Ours(Ent)} & Post-hoc & No & 96.3$\pm$3.0 &89.0$\pm$5.2 & 89.6$\pm$3.4 & 89.4$\pm$3.5& 95.9$\pm$4.3 \\
& \textbf{Ours(MaxP)} & Post-hoc & No & 95.6$\pm$3.6 & 87.8$\pm$4.4 &89.1$\pm$2.4 & 88.2$\pm$2.6 & 94.0$\pm$7.3 \\
& \textbf{Ours(MI)} & Post-hoc & No & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & \textcolor{violet}{\textbf{98.8$\pm$0.5}} & 95.2$\pm$0.9 & \textcolor{violet}{\textbf{98.1$\pm$0.3}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
& \textbf{Ours(Dent)} & Post-hoc & No & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & 98.4$\pm$0.9 & \textcolor{violet}{\textbf{95.7}$\pm$0.8} & 97.7$\pm$0.5 & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
& \textbf{Ours(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & \textcolor{violet}{\textbf{98.8$\pm$0.5}} & 95.1$\pm$0.5 & \textcolor{violet}{\textbf{98.1$\pm$0.3}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
\toprule
CIFAR100 & Base Model(Ent) & Traditional & No & 76.2$\pm$5.2 & 77.8$\pm$2.4 & 80.1$\pm$0.5 & 79.7$\pm$0.3 & 65.8$\pm$11.4 \\
WideResNet & Base Model(MaxP) & Traditional & No & 73.9$\pm$4.3 & 76.4$\pm$2.3 & 78.7$\pm$0.5 & 78.0$\pm$0.2 & 63.8$\pm$10.4 \\
& Whitebox & Post-hoc & Yes & 89.0$\pm$0.7 & 82.4$\pm$1.1 & 80.5$\pm$0.7 & 79.0$\pm$1.1 & 83.1$\pm$1.6 \\
& LULA & Post-hoc & Yes & 84.2$\pm$1.0 & 83.2$\pm$0.1 & 79.6$\pm$0.3 & 78.5$\pm$0.1 & 80.6$\pm$1.0 \\
\hline
& \textbf{Ours(Ent)} & Post-hoc & No & 92.6$\pm$2.0 & 80.8$\pm$3.0 & 81.1$\pm$1.2 & 84.9$\pm$1.2& 85.6$\pm$3.9 \\
& \textbf{Ours(MaxP)} & Post-hoc & No & 88.6$\pm$3.2 & 78.4$\pm$3.0 & 79.7$\pm$0.6 & 82.4$\pm$0.6& 79.3$\pm$4.5 \\
& \textbf{Ours(MI)} & Post-hoc & No & 94.3$\pm$1.0 & \textcolor{violet}{\textbf{84.4}$\pm$1.8} & 81.9$\pm$4.7 & 85.5$\pm$3.6 & \textcolor{violet}{\textbf{90.8}$\pm$3.3} \\
& \textbf{Ours(Dent)} & Post-hoc & No & 93.3$\pm$1.4 & 84.0$\pm$2.3 & 79.5$\pm$3.0 & 84.6$\pm$2.8& 89.8$\pm$2.6 \\
& \textbf{Ours(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{94.4}$\pm$1.0} & \textcolor{violet}{\textbf{84.4}$\pm$1.7} & \textcolor{violet}{\textbf{82.1}$\pm$4.9} & \textcolor{violet}{\textbf{85.6}$\pm$3.6}&
\textcolor{violet}{\textbf{90.8}$\pm$3.4} \\
\toprule
\end{tabular}
\label{table:OOD}
\end{center}
\end{table*}
\subsection{Settings}\label{sec:setting} \paragraph{Benchmark.} For both the OOD detection and misclassification detection tasks, we employ three standard datasets to train the base model and the meta-model: MNIST, CIFAR10, and CIFAR100. For each dataset, we use a different base-model structure, i.e., LeNet for MNIST, VGG-16~\citep{simonyan2014very} for CIFAR10, and WideResNet-16~\citep{zagoruyko2016wide} for CIFAR100. For LeNet and VGG-16, the meta-model uses the features extracted after each pooling layer, and for WideResNet-16, the meta-model uses the features extracted after each residual block. In general, the total number of intermediate features is kept below 5 to ensure computational efficiency. For the OOD task, we consider five different OOD datasets for evaluating detection performance: Omniglot, FashionMNIST, KMNIST, CIFAR10, and corrupted MNIST as outliers for the MNIST dataset; SVHN, FashionMNIST, LSUN, TinyImageNet, and corrupted CIFAR10 (CIFAR100) as outliers for the CIFAR10 (CIFAR100) dataset. For the trustworthy transfer learning task, we use ResNet-50 pretrained on ImageNet as the source domain model and adapt it to two target tasks, STL10 and CIFAR10, by training the meta-model.
\paragraph{Baselines.}
For the OOD and misclassification tasks, besides the naive base model trained with the cross-entropy loss, we mainly compare against existing post-hoc UQ methods as baselines: (1) the meta-model-based method Whitebox~\citep{chen2019confidence}; (2) post-hoc uncertainty quantification with Laplace Approximation (LULA)~\citep{kristiadi2021learnable}. To further validate our strong empirical performance, we also compare with other SOTA intrinsic UQ methods in Appendix~\ref{results-appendix}: (1) the standard Bayesian method Monte-Carlo Dropout~\citep{gal2016dropout}; (2) the Dirichlet network with variational inference (Be-Bayesian)~\citep{joo2020being}; (3) the posterior network with density estimation~\citep{charpentier2020posterior}; (4) the robust OOD detection method ALOE~\citep{chen2020robust}.
For the trustworthy transfer learning task, since there is no existing work designed for this problem, we compare our method with two simple baselines: (1) Fine-tune the last layer of the source model. (2) Train our proposed meta-model on top of the source model using standard cross-entropy loss.
\paragraph{Performance Metrics.} We evaluate UQ performance by measuring the area under the ROC curve (AUROC) and the area under the Precision-Recall curve (AUPR). The results are averaged over five random trials for each experiment. For the OOD task, we treat in-distribution test samples as the negative class and outlier samples as the positive class. For the misclassification task, we treat correctly classified test samples as the negative class and misclassified test samples as the positive class.
\subsection{OOD Detection}\label{sec:OOD} The OOD detection results for the three benchmark datasets, MNIST, CIFAR10, and CIFAR100, are shown in Table~\ref{table:OOD}; additional baseline comparisons are provided in Table~\ref{table:OOD-appendix} in the Appendix. Our proposed Dirichlet meta-model method consistently outperforms all baseline methods in terms of AUROC score (AUPR results are shown in the Appendix), including the recently proposed SOTA post-hoc uncertainty learning method LULA. We also evaluate all the uncertainty metrics defined in Section~\ref{sec:metric}; it can be observed that, compared to the total uncertainty metrics (Ent and MaxP), the epistemic uncertainty metrics (MI, Dent, Prec) achieve better UQ performance on the OOD detection task. Moreover, our proposed method does not require additional data to train the meta-model. In contrast, Whitebox requires an additional validation set to train the meta-model, and LULA needs an additional OOD dataset during training to distinguish in-distribution samples from outliers, which imposes practical limitations.
\subsection{Misclassification Detection}\label{sec:Miss} The misclassification detection results for the three benchmark datasets, MNIST, CIFAR10, and CIFAR100, are shown in Table~\ref{table:miss-class}; additional baseline comparisons are provided in Table~\ref{table:miss-class-appendix}. LULA turns out to be a strong baseline for the misclassification detection task. Although our proposed method performs slightly worse than LULA in terms of AUROC, it outperforms all baselines in terms of AUPR.
\begin{table*}[t]
\begin{center}
\scriptsize
\captionsetup{font=small}
\caption{\textbf{Misclassification Results.} Ent and MaxP stand for Entropy and Max Probability, respectively. Settings indicates whether a method is post-hoc or traditional, i.e., trains the entire model from scratch.}
\begin{tabular}{llllllllll}
\toprule
\textbf{Methods} & \textbf{Settings} & \textbf{Metric} & \textbf{MNIST}& \makecell{\textbf{CIFAR 10}} & \makecell{\textbf{CIFAR 100}}& \textbf{Metric} & \textbf{MNIST}& \makecell{\textbf{CIFAR 10}} & \makecell{\textbf{CIFAR 100}}\\
\hline
Base Model(Ent) & Traditional & AUROC & 96.7$\pm$0.9 & 92.1$\pm$0.2 & 87.2$\pm$0.2 & AUPR & 37.4$\pm$4.0 & 47.5$\pm$1.2 & 67.0$\pm$1.2 \\
Base Model(MaxP) & Traditional & AUROC & 96.7$\pm$0.9 & 92.1$\pm$0.2 & 86.8$\pm$0.2 & AUPR & 39.4$\pm$3.6 & 46.6$\pm$1.1 & 65.7$\pm$1.0 \\
Whitebox &Post-hoc & AUROC & 94.9$\pm$0.2 & 90.2$\pm$0.1 & 80.3$\pm$0.1 & AUPR & 30.4$\pm$0.3 & 45.7$\pm$0.2 & 52.5$\pm$0.3 \\
LULA &Post-hoc & AUROC & \textcolor{violet}{\textbf{98.8}$\pm$0.1} & \textcolor{violet}{\textbf{94.5}$\pm$0.0} & \textcolor{violet}{\textbf{87.5}$\pm$0.1} & AUPR & 40.7$\pm$4.2 & 47.3$\pm$0.7 & 66.0$\pm$0.4 \\
\hline
\textbf{Ours(Ent)} &Post-hoc & AUROC & 96.9$\pm$0.6 & 91.1$\pm$0.2 & 83.4$\pm$0.1 & AUPR & 35.6$\pm$4.5 & 50.0$\pm$3.1 & 66.3$\pm$0.4 \\
\textbf{Ours(MaxP)} &Post-hoc & AUROC & 97.4$\pm$0.4 & 92.2$\pm$0.7 & 85.8$\pm$0.2 & AUPR & \textcolor{violet}{\textbf{44.5}$\pm$5.1} & \textcolor{violet}{\textbf{54.2}$\pm$3.2} & \textcolor{violet}{\textbf{68.2}$\pm$0.5} \\
\toprule
\end{tabular}
\label{table:miss-class}
\end{center}
\end{table*}
\begin{table*}[t]
\begin{center}
\scriptsize
\captionsetup{font=small}
\caption{\textbf{Trustworthy Transfer Learning Results.} Ent, MaxP, MI, and Dent stand for different uncertainty measurements, i.e., Entropy, Max Probability, Mutual Information, and Differential Entropy, respectively. We use ResNet-50 pretrained with ImageNet as the source model and FashionMNIST as OOD samples.}
\begin{tabular}{lllllllll}
\toprule
\textbf{Methods} &\textbf{Target} &\textbf{Test Acc} & \textbf{AUROC} & \textbf{AUPR}& \textbf{Target} &\textbf{Test Acc}& \textbf{AUROC} & \textbf{AUPR} \\
\hline
FineTune(Ent) &\textbf{STL10} &48.1$\pm$0.5 & 89.2$\pm$0.9 & 89.3$\pm$1.2 & \textbf{CIFAR10} &65.0$\pm$0.4 &74.8$\pm$1.6 &71.6$\pm$1.8 \\
FineTune(MaxP) & &48.1$\pm$0.5 & 81.8$\pm$1.2 & 83.2$\pm$1.7 & &65.0$\pm$0.4 &72.7$\pm$1.4&69.4$\pm$1.5 \\
\hline
CrossEnt Loss(Ent) & &48.0$\pm$0.2 & 88.9$\pm$1.0 & 87.4$\pm$1.5 & &86.3$\pm$0.1 &85.0$\pm$ 1.0& 82.2$\pm$1.9 \\
CrossEnt Loss(MaxP) & &48.0$\pm$0.2 & 84.9$\pm$0.7 & 84.7$\pm$0.8 & &86.3$\pm$0.1 &83.1$\pm$0.9& 79.0$\pm$1.6 \\
\hline
\textbf{Ours(Ent)} & &47.2$\pm$0.3 & \textcolor{violet}{\textbf{91.8$\pm$0.8}} & \textcolor{violet}{\textbf{91.3$\pm$0.7}} & &86.6$\pm$0.3 &89.9$\pm$1.3&88.8$\pm$1.5\\
\textbf{Ours(MaxP)}& &47.2$\pm$0.3 & 87.3$\pm$1.7 & 87.9$\pm$1.5 & &86.6$\pm$0.3 &87.6$\pm$1.6&85.5$\pm$2.0 \\
\textbf{Ours(MI)} & &47.2$\pm$0.3 & 90.2$\pm$2.0 & 88.7$\pm$2.8 & &86.6$\pm$0.3 &90.8$\pm$0.9&89.9$\pm$1.2\\
\textbf{Ours(Dent)}& &47.2$\pm$0.3 & 91.7$\pm$1.0 & 90.3$\pm$1.7 & &86.6$\pm$0.3 &\textcolor{violet}{\textbf{92.0$\pm$0.8}}& \textcolor{violet}{\textbf{91.1$\pm$0.8}}\\
\toprule
\end{tabular}
\label{table:transfer}
\end{center}
\end{table*}
\subsection{Trustworthy Transfer Learning}\label{sec:transfer} We use an ImageNet-pretrained ResNet-50 as the source domain base model and adapt it to the target task by training the meta-model on the target domain training data. Unlike traditional transfer learning, which focuses only on test prediction accuracy on the target task, we also evaluate the UQ ability of the meta-model in terms of OOD detection performance. We use FashionMNIST as OOD samples for both target tasks, STL10 and CIFAR10, and evaluate the AUROC score. The results are shown in Table~\ref{table:transfer}. Our proposed meta-model method achieves prediction performance comparable to the baseline methods while significantly improving OOD detection performance, which is crucial for trustworthy transfer learning.
\subsection{Discussion}\label{sec:ablation}
In this section, we further investigate our proposed method through an ablation study on the CIFAR10 OOD task. Based on our insights and the empirical results, we identify the following four key factors behind the success of our meta-model based method:
\textbf{Feature Diversity.} We replace our proposed meta-model structure with a simple linear classifier attached to only the final layer. The ablation results are shown in Table~\ref{table:ablation-structure} as ``\textbf{Linear-Meta}''. It can be observed here and in Figure~\ref{fig: toy_problem} that the performance degrades without using features from all intermediate layers, which further justifies the importance of feature diversity and the effectiveness of our meta-model structure.
\textbf{Dirichlet Technique.} Instead of using a meta-model to parameterize a Dirichlet distribution, we train the meta-model using the standard cross-entropy loss, which simply outputs a categorical label distribution. The ablation results are shown in Table~\ref{table:ablation-structure} as ``\textbf{Cross-Ent}''. Performance again degrades because a categorical output cannot quantify epistemic uncertainty, which justifies the effectiveness of the Bayesian technique.
\textbf{Validation for Uncertainty Learning.} We retrain the last layer of the base model using the cross-entropy loss with the proposed validation trick. The results are shown in Table~\ref{table:ablation-structure} as ``\textbf{LastLayer}''. It turns out that even such a naive method can achieve improved performance compared to the base model, which further justifies the effectiveness of the post-hoc uncertainty learning setting, i.e., the benefit of focusing solely on UQ performance in the second stage. This interesting observation leads us to conjecture that efficiently retraining the classifier of the base model in the second stage will lead to better UQ performance. A theoretical investigation of this observation is an interesting direction for future work.
\textbf{Data Efficiency.} Instead of using all the training samples, we randomly choose only $10\%$ of the samples to train the meta-model. The results are shown in Table~\ref{table:ablation-structure} as ``$10\%$\textbf{data}''. Our meta-model requires only a small amount of data to achieve comparable performance, owing to its smaller model complexity. Therefore, our proposed method is also more computationally efficient than approaches that retrain the whole model from scratch.
\begin{table}[h]
\begin{center}
\scriptsize
\captionsetup{font=small}
\caption{\textbf{Ablation Study of Meta-model (CIFAR10 AUROC score).} The results are reported as the mean over five experimental trials. Error bars are provided in Table~\ref{table:ablation-structure-appendix} in the Appendix.}
\begin{tabular}{lllllll}
\\
\toprule
\textbf{Methods} & \textbf{SVHN}& \textbf{FMNIST} & \textbf{LSUN}& \textbf{TIM} &\textbf{Corrupted} \\
\hline
Base Model(Ent) & 86.4 & 90.8 & 89.0 & 87.5 & 85.9 \\
Base Model(MaxP) & 86.3 & 90.4 & 88.7 & 87.3 & 85.7 \\
\hline
\textbf{Linear-Meta(Ent)} & 90.4 & 91.3 & 91.5 & 89.5 & 90.4 \\
\textbf{Linear-Meta(MaxP)} & 90.1 & 91.5 & 91.4 & 89.6 & 90.1 \\
\textbf{Linear-Meta(MI)} & 90.6 & 90.1 & 91.2 & 88.8 & 90.5 \\
\textbf{Linear-Meta(Dent)} & 90.4 & 90.7 & 91.4 & 89.2 & 90.3 \\
\textbf{Linear-Meta(Prec)} & 90.6 & 90.0 & 91.2 & 88.8 & 90.6\\
\hline
\textbf{Cross-Ent(Ent)} & 94.2 & 91.2 & 91.2 & 90.3& 94.7 \\
\textbf{Cross-Ent(MaxP)} & 93.3& 91.1 &90.9 & 90.0 & 94.0 \\
\hline
\textbf{LastLayer(Ent)}& 93.0& 90.2 & 91.9 & 89.9& 93.1 \\
\textbf{LastLayer(MaxP)} & 92.9 & 90.5 &91.9& 90.1 & 93.1\\
\hline
\textbf{$10\%$data(Ent)} & 90.0 &89.1& 88.7 & 88.2& 90.2 \\
\textbf{$10\%$data(MaxP)} & 90.9 & 88.1&86.9& 86.5& 91.7 \\
\textbf{$10\%$data(MI)}& 99.9 & 98.1 & 95.4& 97.2 & 99.9 \\
\textbf{$10\%$data(Dent)} & 96.7 & 97.4 & 94.3& 95.7 & 96.7\\
\textbf{$10\%$data(Prec)} & 99.9& 98.0& 95.4 & 97.3& 99.9 \\
\hline
\textbf{Ours(Ent)} & 96.3 &89.0& 89.6 & 89.4& 95.9 \\
\textbf{Ours(MaxP)} & 95.6 & 87.8 &89.1 & 88.2 & 94.0 \\
\textbf{Ours(MI)}& \textcolor{violet}{\textbf{100.0}} & \textcolor{violet}{\textbf{98.8}} & 95.2 & \textcolor{violet}{\textbf{98.1}} & \textcolor{violet}{\textbf{100.0}} \\
\textbf{Ours(Dent)} & \textcolor{violet}{\textbf{100.0}} & 98.4 & \textcolor{violet}{\textbf{95.7}} & 97.7 & \textcolor{violet}{\textbf{100.0}} \\
\textbf{Ours(Prec)} & \textcolor{violet}{\textbf{100.0}} & \textcolor{violet}{\textbf{98.8}} & 95.1 & \textcolor{violet}{\textbf{98.1}} & \textcolor{violet}{\textbf{100.0}} \\
\bottomrule
\end{tabular}
\label{table:ablation-structure}
\end{center}
\end{table}
\section{Concluding Remarks} \label{sec:conclusion}
We provide a novel solution to the uncertainty quantification problem via our proposed post-hoc uncertainty learning framework and the Dirichlet meta-model approach. Our method turns out to be both effective and computationally efficient for various UQ applications. We believe our meta-model approach not only has the flexibility to tackle other applications relevant to uncertainty quantification, such as quantifying transferability in transfer learning and domain adaptation, but can also be adapted to other model architectures such as transformers and language models. Exploring these potential applications and offering a theoretical interpretation of the meta-model are interesting directions for future work.
\appendix \section{Derivation of ELBO loss} \label{sec:ELBO-appendix} The Dirichlet ELBO loss can be formulated as follows: \begin{align}
{\mathcal{L}}_{\mathrm{VI}}({\bm{w}}_g) &=\frac{1}{N_M}\sum_{i=1}^{N_M} \Big( \mathbb{E}_{q(\boldsymbol{\pi}| \Phi({\bm{x}}_i); {\bm{w}}_g)}[\log P(y_i \mid \boldsymbol{\pi}, \boldsymbol{x}_i)] \nonumber \\
&- \lambda\cdot\mathrm{KL}\left(q(\boldsymbol{\pi}| \Phi({\bm{x}}_i); {\bm{w}}_g) \| P(\boldsymbol{\pi} | \Phi({\bm{x}}_i))\right) \Big) \end{align}
\normalsize The second term in the summation is simply the KL-divergence between the variational distribution parameterized by the meta-model and the predefined prior distribution, i.e., $\mathrm{KL}\left( \operatorname{Dir}(\boldsymbol{\pi}| \boldsymbol{\alpha}^{(i)}) \| \operatorname{Dir}(\boldsymbol{\pi} | \boldsymbol{\beta})\right)$. The first term in the summation is the expectation of the cross-entropy loss w.r.t.\ the variational Dirichlet distribution, whose closed form can be obtained as follows, \begin{align}
&\mathbb{E}_{q(\boldsymbol{\pi}| \Phi({\bm{x}}_i); {\bm{w}}_g)}[\log P(y_i \mid \boldsymbol{\pi}, \boldsymbol{x}_i)] \nonumber \\
=& \mathbb{E}_{ \operatorname{Dir}(\boldsymbol{\pi}| \boldsymbol{\alpha}^{(i)})}[\log \boldsymbol{\pi}_{y_i}]\\
=& \int \log \boldsymbol{\pi}_{y_i} \operatorname{Dir}(\boldsymbol{\pi}| \boldsymbol{\alpha}^{(i)}) d\boldsymbol{\pi}\\
=&\int_{0}^{1} \log \boldsymbol{\pi}_{y_i} \operatorname{Beta}(\alpha^{(i)}_{y_i}, \alpha^{(i)}_{0}-\alpha^{(i)}_{y_i}) d\boldsymbol{\pi}_{y_i}\\
=& \int_{0}^{1} \log \boldsymbol{\pi}_{y_i} \frac {\boldsymbol{\pi}_{y_i}^{\alpha^{(i)}_{y_i}-1}(1-\boldsymbol{\pi}_{y_i})^{\alpha^{(i)}_{0}-\alpha^{(i)}_{y_i}-1}} {\operatorname{B}(\alpha^{(i)}_{y_i}, \alpha^{(i)}_{0}-\alpha^{(i)}_{y_i})} d\boldsymbol{\pi}_{y_i}\\
=& \frac {\int_{0}^{1} \frac {d \boldsymbol{\pi}_{y_i}^{\alpha^{(i)}_{y_i}-1}}{d\alpha^{(i)}_{y_i}}(1-\boldsymbol{\pi}_{y_i})^{\alpha^{(i)}_{0}-\alpha^{(i)}_{y_i}-1} d\boldsymbol{\pi}_{y_i}} {\operatorname{B}(\alpha^{(i)}_{y_i}, \alpha^{(i)}_{0}-\alpha^{(i)}_{y_i})} \\
=& \frac { \frac{d}{d\alpha^{(i)}_{y_i}} \int_{0}^{1} \boldsymbol{\pi}_{y_i}^{\alpha^{(i)}_{y_i}-1} (1-\boldsymbol{\pi}_{y_i})^{\alpha^{(i)}_{0}-\alpha^{(i)}_{y_i}-1} d\boldsymbol{\pi}_{y_i}} {\operatorname{B}(\alpha^{(i)}_{y_i}, \alpha^{(i)}_{0}-\alpha^{(i)}_{y_i})} \\
=& \frac {1} {\operatorname{B}(\alpha^{(i)}_{y_i}, \alpha^{(i)}_{0}-\alpha^{(i)}_{y_i})} \frac {d\operatorname{B}(\alpha^{(i)}_{y_i}, \alpha^{(i)}_{0}-\alpha^{(i)}_{y_i})} {d\alpha^{(i)}_{y_i}}\\
=& \frac {d \log \operatorname{B}(\alpha^{(i)}_{y_i}, \alpha^{(i)}_{0}-\alpha^{(i)}_{y_i})} {d\alpha^{(i)}_{y_i}}\\
=& \frac {d \left(\log \Gamma(\alpha^{(i)}_{y_i})+ \log \Gamma(\alpha^{(i)}_{0}-\alpha^{(i)}_{y_i}) - \log \Gamma(\alpha^{(i)}_{0})\right)} {d\alpha^{(i)}_{y_i}}\\
=& \frac {d \log \Gamma(\alpha^{(i)}_{y_i})}{d\alpha^{(i)}_{y_i}} - \frac {d \log \Gamma(\alpha^{(i)}_{0})}{d\alpha^{(i)}_{0}}\\
=& \psi\left(\alpha^{(i)}_{y_i}\right)-\psi\left(\alpha^{(i)}_{0}\right)
\end{align}
In the penultimate step we use that $\alpha^{(i)}_{0}-\alpha^{(i)}_{y_i}$, the sum of the remaining concentration parameters, does not depend on $\alpha^{(i)}_{y_i}$. Therefore, the ELBO loss can be formulated in closed form as follows,
\begin{align}
{\mathcal{L}}_{\mathrm{VI}}({\bm{w}}_g) &= \frac{1}{N_M}\sum_{i=1}^{N_M} \Big( \psi\left(\alpha^{(i)}_{y_i}\right)-\psi\left(\alpha^{(i)}_{0}\right)
\nonumber\\
&- \lambda\cdot \mathrm{KL}\left( \operatorname{Dir}(\boldsymbol{\pi}| \boldsymbol{\alpha}^{(i)}) \| \operatorname{Dir}(\boldsymbol{\pi} | \boldsymbol{\beta})\right) \Big). \end{align}
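The closed form above is easy to check numerically. The sketch below implements both ELBO terms with the Python standard library only; the finite-difference digamma and the Monte Carlo verification are illustrative simplifications, not our PyTorch implementation:

```python
import math
import random

def digamma(x, h=1e-5):
    # psi(x) approximated by a central difference of log-gamma;
    # accurate enough for illustration (error is O(h^2)).
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def expected_log_lik(alpha, y):
    # E_{Dir(alpha)}[log pi_y] = psi(alpha_y) - psi(alpha_0)
    return digamma(alpha[y]) - digamma(sum(alpha))

def dirichlet_kl(alpha, beta):
    # Closed-form KL( Dir(alpha) || Dir(beta) )
    a0, b0 = sum(alpha), sum(beta)
    kl = math.lgamma(a0) - math.lgamma(b0)
    kl -= sum(math.lgamma(a) for a in alpha)
    kl += sum(math.lgamma(b) for b in beta)
    kl += sum((a - b) * (digamma(a) - digamma(a0))
              for a, b in zip(alpha, beta))
    return kl

def elbo_term(alpha, y, beta, lam):
    # Per-sample contribution to L_VI for one (x_i, y_i)
    return expected_log_lik(alpha, y) - lam * dirichlet_kl(alpha, beta)

# Sanity check: for K = 2, pi_y ~ Beta(alpha_y, alpha_0 - alpha_y), so the
# closed form can be compared against a Monte Carlo estimate of E[log pi_y].
rng = random.Random(0)
alpha = [2.0, 3.0]
closed = expected_log_lik(alpha, 0)          # psi(2) - psi(5) = 1 - 25/12
mc = sum(math.log(rng.betavariate(2.0, 3.0)) for _ in range(200_000)) / 200_000
```

Averaging `elbo_term` over the meta-training set and negating the result gives the minimization objective optimized in the second stage.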
\section{Experiment Setup} \label{setup-appendix} \subsection{Implementation Details for OOD and Misclassification} The post-hoc uncertainty learning problem aims to improve the UQ performance of a pretrained base model. First, we generate the pretrained model by training the base model using the cross-entropy loss to achieve optimal testing accuracy. The maximum numbers of epochs for training LeNet, VGG-16, and WideResNet-16 are set to 20, 200, and 200, respectively. Then, in the second stage, we freeze the parameters of the pretrained base model and train the meta-model on top of it using the Dirichlet variational loss. The meta-model uses the same training data as the base model, and the maximum number of epochs for training the meta-model is set to 50. All the models are optimized using an SGD optimizer. The hyper-parameters for training the base model and meta-model are summarized in Table~\ref{table: hyper}, where $bs$ denotes the batch size, $lr$ the learning rate, $m$ the momentum, $wd$ the weight decay, $\lambda$ the hyper-parameter balancing the two terms in the variational loss, and $\beta$ the concentration parameter of the prior Dirichlet distribution. All experiments are implemented in PyTorch using Titan RTX GPUs with 24 GB memory. \begin{table*}[t]
\caption{Hyper-parameters for training the base model and meta-model}
\centering
\begin{tabular}{llllllllll}
\toprule
Dataset & Model & Epoch & $bs$ & $\lambda$ & $\beta$ & $lr$ & $m$ & $wd$\\
\midrule
MNIST & LeNet-Base & 20 & 128 & / & / & 0.01 & 0.9 & $5\times10^{-4}$ \\
\midrule
MNIST & LeNet-Meta & 50 & 128 & $10^{-1}$ & 1 & 0.1 & 0.9 & $5\times10^{-4}$ \\
\midrule
CIFAR10 & VGG16-Base & 200 & 128 & / & / & 0.1 & 0.9 & $1\times10^{-4}$ \\
\midrule
CIFAR10 & VGG16-Meta & 50 & 128 & $10^{-3}$ & 1 & 0.001 & 0.9 & $1\times10^{-4}$ \\
\midrule
CIFAR100 & WideResNet-Base & 200 & 128 & / & / & 0.1 & 0.9 & $1\times10^{-4}$ \\
\midrule
CIFAR100 & WideResNet-Meta & 50 & 128 & $10^{-3}$ & 1 & 0.1 & 0.9 & $1\times10^{-4}$ \\
\bottomrule
\end{tabular}
\label{table: hyper} \end{table*} \subsection{Implementation Details of Trustworthy Transfer Learning} We download the pretrained ResNet-50 trained on ImageNet as the base model. Similarly, we freeze the parameters of the pretrained model and train the meta-model on top of it using the training data of the target task. All the models are optimized using an SGD optimizer. The hyper-parameters for training the meta-model are summarized in Table~\ref{table: hyper-transfer}, where the notation is the same as in Table~\ref{table: hyper}. \begin{table*}[t]
\caption{Hyper-parameters for training the meta-model in transfer learning}
\centering
\begin{tabular}{llllllllll}
\toprule
Dataset & Model & Epoch & $bs$ & $\lambda$ & $\beta$ & $lr$ & $m$ & $wd$\\
\midrule
STL10 & ResNet50-Meta & 50 & 128 & $10^{-3}$ & 1 & 0.01 & 0.9 & $1\times10^{-4}$ \\
\midrule
CIFAR10 & ResNet50-Meta & 50 & 128 & $10^{-3}$ & 1 & 0.01 & 0.9 & $1\times10^{-4}$ \\
\bottomrule
\end{tabular}
\label{table: hyper-transfer} \end{table*}
\subsection{Meta-Model Structure} The high-level description of the meta-model structure is provided in Section~\ref{subsec: model}. More specifically, each block ${\bm{g}}_{i}$ consists of multiple fully-connected layers, each followed by a ReLU activation and a Max-Pooling operation. Each fully-connected layer halves the input feature dimension, and the output meta-feature of ${\bm{g}}_{i}$ has dimension equal to the number of classes, e.g., 10 for CIFAR10. The final linear layer ${\bm{g}}_{c}$ is a single fully-connected layer that takes the concatenation of all the meta-features and outputs the concentration parameter $\boldsymbol{\alpha}$.
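As an illustration of the halving rule, the helper below computes the chain of fully-connected widths for one ${\bm{g}}_{i}$. The exact stopping condition is an assumption made for illustration, since the text only states that each layer halves the dimension and that the output has class-number dimension:

```python
def meta_layer_dims(in_dim, num_classes):
    """Widths of the fully-connected layers in one g_i: halve the feature
    dimension while another halving stays above the class count, then map
    to num_classes (stopping rule assumed for illustration)."""
    dims = [in_dim]
    while dims[-1] // 2 > num_classes:
        dims.append(dims[-1] // 2)
    dims.append(num_classes)
    return dims

# e.g., a 512-dimensional intermediate feature mapped to 10 CIFAR10 classes:
print(meta_layer_dims(512, 10))  # [512, 256, 128, 64, 32, 16, 10]
```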
\subsection{Training Time Complexity} Our meta-model-based approach is much more efficient than traditional uncertainty quantification approaches due to its simpler structure and faster convergence. To quantify this efficiency, we measure the wall-clock training time of the meta-model in seconds (on a single Titan RTX GPU): training the meta-model on top of VGG16 on the CIFAR10 dataset takes 66.5s for five epochs, and on top of WideResNet on the CIFAR100 dataset, 241.9s for ten epochs. This is negligible compared to approaches that train the entire base model from scratch (usually taking several hours).
\subsection{Validation for Uncertainty Learning} We use the proposed validation trick discussed in Section~\ref{sec:validation} to perform early stopping when training the meta-model. We randomly pick $20\%$ of the training data as our validation set. For the OOD detection task, we create the noisy validation set by applying various kinds of noise and perturbation to the original images, including permuting the pixels, applying Gaussian blurring, and contrast re-scaling. For the misclassification task, we directly use the validation set to evaluate the misclassification detection performance with the correctly classified and misclassified validation samples.
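Two of the listed perturbations are easy to sketch on a flattened grayscale image with the standard library. The functions below are illustrative, not our preprocessing code (Gaussian blurring is omitted since it requires an image library):

```python
import random

def permute_pixels(img, seed=0):
    """Return a copy of the flattened image with pixel order shuffled;
    destroys spatial structure while keeping the intensity histogram."""
    noisy = list(img)
    random.Random(seed).shuffle(noisy)
    return noisy

def rescale_contrast(img, factor):
    """Rescale contrast around the image mean (factor < 1 lowers contrast);
    the mean intensity is preserved."""
    mean = sum(img) / len(img)
    return [mean + factor * (p - mean) for p in img]
```

Images perturbed this way serve as stand-in OOD samples, so the meta-model checkpoint with the best validation AUROC can be selected without touching any real OOD data.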
\subsection{Description of OOD datasets} \label{ood-dataset-appendix} For the OOD detection task, we use the testing set as the in-domain dataset and ensure that each out-of-domain dataset has the same number of samples (10000) as the in-domain dataset. Input images from different datasets are resized to $32\times32$ so that they have the same size, and all gray-scale images are converted into three-channel images. We use the following datasets as OOD samples for the OOD detection task: \begin{itemize}
\item \textbf{Omniglot} contains 1623 handwritten characters taken from 50 different alphabets. We randomly pick 10000 images from its testing set as OOD samples for MNIST.
\item \textbf{Fashion-MNIST} is a dataset of Zalando's article images. We use the testing set with 10000 images as OOD samples for both MNIST and CIFAR.
\item \textbf{KMNIST} contains handwritten characters from the Japanese Kuzushiji texts. We use the testing set with 10000 images as OOD samples for MNIST.
\item \textbf{SVHN} contains images of house numbers taken from Google Street View. We use the testing set with 10000 images as OOD samples for CIFAR.
\item \textbf{LSUN} The Large-scale Scene UNderstanding dataset (LSUN) contains images of different objects taken from 10 different scene categories. We use images from the classroom category and randomly sample 10000 training images as OOD samples for CIFAR.
\item \textbf{TIM} TinyImageNet (TIM) is a subset of the ImageNet dataset; we use the validation set with 10000 images as OOD samples for CIFAR.
\item \textbf{Corrupted} is an artificial dataset generated by perturbing the original testing image using Gaussian blurring, pixel permutation, and contrast re-scaling. \end{itemize}
\section{Additional Experiment Results} \label{results-appendix} \subsection{OOD Detection} In the following, we compare our proposed method with several SOTA uncertainty quantification methods under traditional settings on the OOD detection task. As shown in Table~\ref{table:OOD-appendix}, our proposed method still outperforms these methods. We provide additional OOD detection results in terms of the AUPR score in Table~\ref{table: OOD-aupr}. \begin{table*}[t]
\begin{center}
\scriptsize
\captionsetup{font=small}
\caption{\textbf{OOD Detection AUROC score.} MI, Dent, and Prec stand for different epistemic uncertainty metrics, i.e., Mutual Information, Differential Entropy, and Precision. Settings indicates post-hoc training or the traditional setting, i.e., training the entire model from scratch. Additional Data indicates whether additional training data is used.}
\begin{tabular}{lllllllll}
\\
\toprule
\textbf{ID Data}\ \&\ \textbf{Model} & \textbf{Methods} & \textbf{Settings} & \textbf{Additional Data} & \textbf{Omniglot}& \textbf{FMNIST} & \textbf{KMNIST}& \textbf{CIFAR10} &\textbf{Corrupted} \\
\hline
MNIST & Base Model(Ent) & Traditional & No & 98.9$\pm$0.5 & 97.8$\pm$0.8 & 95.8$\pm$0.8 & 99.4$\pm$0.2 & 99.5$\pm$0.3 \\
LeNet & Base Model(MaxP) & Traditional & No & 98.7$\pm$0.6 & 97.6$\pm$0.8 & 95.6$\pm$0.8 & 99.3$\pm$0.2 & 99.4$\pm$0.4 \\
& MCDropout & Traditional & No & 98.2$\pm$0.1 & 98.1$\pm$0.4 & 92.9$\pm$0.3 & 99.3$\pm$0.4 & 98.7$\pm$0.2 \\
& BeBayesian & Traditional & No & 99.2$\pm$0.4 & 98.5$\pm$0.5 & 96.1$\pm$0.9 & 99.5$\pm$0.2 & 99.7$\pm$0.2 \\
& PostNet & Traditional & No & 99.0$\pm$0.5 & 99.1$\pm$0.5 & 99.3$\pm$0.3 & 99.0$\pm$0.2 & 98.9$\pm$0.7 \\
& ALOE & Traditional & Yes & 100.0$\pm$0.1 & 99.0$\pm$0.3 & 96.7$\pm$0.6 & 99.8$\pm$0.1 & 100.0$\pm$0.0 \\
\hline
& \textbf{Ours(Ent)} & Post-hoc & No & 99.7$\pm$0.1 & 99.5$\pm$0.2 & 98.2$\pm$0.3 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(MaxP)} & Post-hoc & No & 99.3$\pm$0.2 & 99.3$\pm$0.2 & 98.0$\pm$0.2 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(MI)} & Post-hoc & No & \textcolor{violet}{\textbf{99.9}$\pm$0.0} & \textcolor{violet}{\textbf{99.6}$\pm$0.2} &97.7$\pm$0.4 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(Dent)} & Post-hoc & No & 99.8$\pm$0.0 & 99.5$\pm$0.2 & 97.6$\pm$0.4 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{99.9}$\pm$0.0} & \textcolor{violet}{\textbf{99.5}$\pm$0.2} &97.7$\pm$0.5 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
\toprule
\textbf{ID Data}\ \&\ \textbf{Model} & \textbf{Methods} & \textbf{Settings} & \textbf{Additional Data} & \textbf{SVHN}& \textbf{FMNIST} & \textbf{LSUN}& \textbf{TinyImageNet} &\textbf{Corrupted} \\
\hline
CIFAR10 & Base Model(Ent) & Traditional & No & 86.4$\pm$4.6 & 90.8$\pm$1.3 & 89.0$\pm$0.5 & 87.5$\pm$1.1 & 85.9$\pm$8.2 \\
VGG16 & Base Model(MaxP) & Traditional & No & 86.3$\pm$4.4 & 90.4$\pm$1.2 & 88.7$\pm$0.5 & 87.3$\pm$1.1 & 85.7$\pm$8.1 \\
& MCDropout & Traditional & No & 75.6$\pm$1.1 & 80.9$\pm$0.5 & 85.1$\pm$1.6 & 80.0$\pm$1.3 & 86.8$\pm$1.1 \\
& BeBayesian &Traditional & No & 91.7$\pm$3.9 & 88.3$\pm$0.6 & 83.2$\pm$0.6 & 81.0$\pm$0.6 & 94.6$\pm$4.6 \\
& PostNet & Traditional & No & 93.7$\pm$1.2 & 95.8$\pm$2.8 & 93.3$\pm$3.4 & 92.4$\pm$2.9 & 93.4$\pm$2.2 \\
& ALOE & Traditional & Yes & 99.9$\pm$1.1 & 98.3 $\pm$ 0.7 & 92.1$\pm$1.3 & 99.9$\pm$0.1 & 96.2$\pm$0.7 \\
\hline
& \textbf{Ours(Ent)} & Post-hoc & No & 96.3$\pm$3.0 &89.0$\pm$5.2 & 89.6$\pm$3.4 & 89.4$\pm$3.5& 95.9$\pm$4.3 \\
& \textbf{Ours(MaxP)} & Post-hoc & No & 95.6$\pm$3.6 & 87.8$\pm$4.4 &89.1$\pm$2.4 & 88.2$\pm$2.6 & 94.0$\pm$7.3 \\
& \textbf{Ours(MI)} & Post-hoc & No & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & \textcolor{violet}{\textbf{98.8$\pm$0.5}} & 95.2$\pm$0.9 & \textcolor{violet}{\textbf{98.1$\pm$0.3}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
& \textbf{Ours(Dent)} & Post-hoc & No & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & 98.4$\pm$0.9 & \textcolor{violet}{\textbf{95.7}$\pm$0.8} & 97.7$\pm$0.5 & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
& \textbf{Ours(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & \textcolor{violet}{\textbf{98.8$\pm$0.5}} & 95.1$\pm$0.5 & \textcolor{violet}{\textbf{98.1$\pm$0.3}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
\toprule
CIFAR100 & Base Model(Ent) & Traditional & No & 76.2$\pm$5.2 & 77.8$\pm$2.4 & 80.1$\pm$0.5 & 79.7$\pm$0.3 & 65.8$\pm$11.4 \\
WideResNet & Base Model(MaxP) & Traditional & No & 73.9$\pm$4.3 & 76.4$\pm$2.3 & 78.7$\pm$0.5 & 78.0$\pm$0.2 & 63.8$\pm$10.4 \\
& MCDropout & Traditional & No & 77.6$\pm$1.9 & 77.1$\pm$0.4 & 77.0$\pm$3.5 & 79.7$\pm$0.4 & 64.4$\pm$8.9 \\
& BeBayesian &Traditional & No & 74.9$\pm$8.2 & 81.0$\pm$2.1 & 80.0$\pm$0.7 & 79.6$\pm$0.2 & 63.0$\pm$13.7 \\
& PostNet & Traditional & No & 83.4$\pm$1.6 & 83.1$\pm$3.2 & 81.0$\pm$1.3 & 80.1$\pm$1.1 & 87.7$\pm$4.2 \\
& ALOE & Traditional & Yes & 93.9$\pm$0.3 & 86.9$\pm$3.7 & 70.2$\pm$1.2 & 85.3$\pm$3.7 & 92.2$\pm$1.5 \\
\hline
& \textbf{Ours(Ent)} & Post-hoc & No & 92.6$\pm$2.0 & 80.8$\pm$3.0 & 81.1$\pm$1.2 & 84.9$\pm$1.2& 85.6$\pm$3.9 \\
& \textbf{Ours(MaxP)} & Post-hoc & No & 88.6$\pm$3.2 & 78.4$\pm$3.0 & 79.7$\pm$0.6 & 82.4$\pm$0.6& 79.3$\pm$4.5 \\
& \textbf{Ours(MI)} & Post-hoc & No & 94.3$\pm$1.0 & \textcolor{violet}{\textbf{84.4}$\pm$1.8} & 81.9$\pm$4.7 & 85.5$\pm$3.6 & \textcolor{violet}{\textbf{90.8}$\pm$3.3} \\
& \textbf{Ours(Dent)} & Post-hoc & No & 93.3$\pm$1.4 & 84.0$\pm$2.3 & 79.5$\pm$3.0 & 84.6$\pm$2.8& 89.8$\pm$2.6 \\
& \textbf{Ours(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{94.4}$\pm$1.0} & \textcolor{violet}{\textbf{84.4}$\pm$1.7} & \textcolor{violet}{\textbf{82.1}$\pm$4.9} & \textcolor{violet}{\textbf{85.6}$\pm$3.6}&
\textcolor{violet}{\textbf{90.8}$\pm$3.4} \\
\bottomrule
\end{tabular}
\label{table:OOD-appendix}
\end{center}
\end{table*}
\begin{table*}[h]
\begin{center}
\scriptsize
\caption{\textbf{OOD Detection AUPR score.} MI, Dent, and Prec stand for different epistemic uncertainty metrics, i.e., Mutual Information, Differential Entropy, and Precision. Settings indicates post-hoc training or the traditional setting, i.e., training the entire model from scratch. Additional Data indicates whether additional training data is used.}
\begin{tabular}{lllllllll}
\\
\toprule
\textbf{ID Data}\ \&\ \textbf{Model} & \textbf{Methods} & \textbf{Settings} & \textbf{Additional Data} & \textbf{Omniglot}& \textbf{FMNIST} & \textbf{KMNIST}& \textbf{CIFAR10} &\textbf{Corrupted}\\
\hline
MNIST & Base Model(Ent) & Traditional & No & 98.7$\pm$0.6 & 97.6$\pm$0.8 & 95.4$\pm$0.7 & 99.4$\pm$0.2 & 99.5$\pm$0.3 \\
LeNet & Base Model(MaxP) & Traditional & No & 98.4$\pm$0.7 & 97.3$\pm$0.9 & 95.1$\pm$0.8 & 99.2$\pm$0.2 & 99.3$\pm$0.4 \\
& MCDropout & Traditional & No & 98.1$\pm$0.4 & 97.9$\pm$0.5 & 92.7$\pm$0.8 & 99.3$\pm$0.3 & 98.7$\pm$0.6 \\
& BeBayesian &Traditional & No & 99.1$\pm$0.4 & 98.4$\pm$0.5 & 95.8$\pm$0.9 & 99.5$\pm$0.2 & 99.6$\pm$0.2 \\
& Whitebox & Post-hoc & Yes & 97.8$\pm$0.3 & 97.2$\pm$0.6 & 95.3$\pm$0.3 & 99.4$\pm$0.2 & 99.4$\pm$0.2 \\
& LULA & Post-hoc & Yes & 99.8$\pm$0.0 & 99.5$\pm$0.1 & \textcolor{violet}{\textbf{99.3}$\pm$0.0} & 99.9$\pm$0.0 & 99.9$\pm$0.0 \\
\hline
& \textbf{Ours(Ent)} & Post-hoc & No & 99.7$\pm$0.1 & 99.4$\pm$0.2 & 97.9$\pm$0.3 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(MaxP)} & Post-hoc & No & 99.2$\pm$0.2 & 99.2$\pm$0.2 &97.6$\pm$0.3 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(MI)} & Post-hoc & No & \textcolor{violet}{\textbf{99.9}$\pm$0.0} & \textcolor{violet}{\textbf{99.6}$\pm$0.2} & 97.5$\pm$0.4 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}}$\pm$0.0 \\
& \textbf{Ours(Dent)} & Post-hoc & No & 99.7$\pm$0.0 & 99.4$\pm$0.3 & 97.4$\pm$0.4 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{Ours(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{99.9}$\pm$0.0} & \textcolor{violet}{\textbf{99.6}$\pm$0.2} &97.6$\pm$0.5 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
\toprule
\textbf{ID Data}\ \&\ \textbf{Model} & \textbf{Methods} & \textbf{Settings} & \textbf{Additional Data} & \textbf{SVHN}& \textbf{FMNIST} & \textbf{LSUN}& \textbf{TinyImageNet} &\textbf{Corrupted} \\
\hline
CIFAR10 & Base Model(Ent) & Traditional & No & 77.1$\pm$7.8 & 87.9$\pm$1.5 & 85.2$\pm$0.6 & 83.1$\pm$1.4 & 76.5$\pm$14.0 \\
VGG16 & Base Model(MaxP) & Traditional & No & 77.4$\pm$7.1 & 86.8$\pm$1.5 & 84.3$\pm$0.5 & 82.5$\pm$1.5 & 76.3$\pm$13.7 \\
& MCDropout & Traditional & No & 61.5$\pm$1.0 & 70.9$\pm$0.7 & 79.3$\pm$1.9 & 71.9$\pm$1.2 & 78.2$\pm$2.4 \\
& BeBayesian & Traditional & No & 86.5$\pm$6.6 & 86.9$\pm$0.7 & 82.3$\pm$0.7 & 80.0$\pm$0.5 & 90.8$\pm$8.5 \\
& Whitebox & Post-hoc & Yes & 92.8$\pm$1.8 & 93.4$\pm$2.4 & 85.3$\pm$2.8 & 86.8$\pm$3.0 & 90.5$\pm$2.1 \\
& LULA & Post-hoc & Yes & 97.3$\pm$1.4 & 94.8$\pm$0.0 & 93.4$\pm$0.0 & 89.2$\pm$0.0 & 97.7$\pm$1.7 \\
\hline
& \textbf{OursELBO(Ent)} & Post-hoc & No & 94.0$\pm$5.7 & 87.0$\pm$5.9 & 87.2$\pm$5.0 & 88.6$\pm$4.6 & 92.3$\pm$8.2 \\
& \textbf{OursELBO(MaxP)} & Post-hoc & No & 93.6$\pm$5.0 & 86.2$\pm$4.2 & 86.7$\pm$3.5 & 87.5$\pm$3.1 & 90.4$\pm$9.1 \\
& \textbf{OursELBO(MI)} & Post-hoc & No & 100.0$\pm$0.0 & \textcolor{violet}{\textbf{98.4$\pm$0.8}} & 92.9$\pm$1.8 & \textcolor{violet}{\textbf{97.9$\pm$0.4}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
& \textbf{OursELBO(Dent)} & Post-hoc & No & \textcolor{violet}{\textbf{100.0}$\pm$0.0} & 98.1$\pm$0.9 & \textcolor{violet}{\textbf{94.2}$\pm$1.1} & 97.8$\pm$0.4 & \textcolor{violet}{\textbf{100.0}$\pm$0.0} \\
& \textbf{OursELBO(Prec)} & Post-hoc & No & 100.0$\pm$0.0 & 98.3$\pm$0.9 & 92.8$\pm$1.9 & \textcolor{violet}{\textbf{97.9$\pm$0.4}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
\hline
CIFAR100 & Base Model(Ent) & Traditional & No & 70.8$\pm$5.7 & 73.0$\pm$1.9 & 75.2$\pm$1.1 & 76.6$\pm$0.5 & 60.5$\pm$11.9 \\
WideResNet & Base Model(MaxP) & Traditional & No& 67.4$\pm$4.4 & 71.0$\pm$1.9 & 73.3$\pm$0.9 & 73.8$\pm$0.4 & 57.7$\pm$10.1 \\
& MCDropout & Traditional & No & 67.9$\pm$2.5 & 70.4$\pm$1.5 & 72.4$\pm$3.3 & 75.6$\pm$0.7 & 55.9$\pm$6.3 \\
& BeBayesian & Traditional & No & 67.5$\pm$7.8 & 75.5$\pm$2.6 & 74.4$\pm$1.0 & 74.9$\pm$0.4 & 58.1$\pm$11.7 \\
& Whitebox & Post-hoc & Yes & 84.2$\pm$2.0 & 79.5$\pm$0.7 & 74.3$\pm$1.2 & 73.5$\pm$0.5 & 77.1$\pm$4.0 \\
& LULA & Post-hoc & Yes & 84.3$\pm$1.1 & 83.8$\pm$0.2 & 79.4$\pm$0.4 & 76.8$\pm$0.2 & 79.2$\pm$1.5 \\
\hline
& \textbf{OursELBO(Ent)} & Post-hoc & No & 88.0$\pm$3.0 & 75.5$\pm$3.3 & 75.3$\pm$1.7 & 82.0$\pm$1.7& 80.0$\pm$6.3 \\
& \textbf{OursELBO(MaxP)} & Post-hoc & No & 83.5$\pm$4.6 & 73.7$\pm$2.9 & 74.2$\pm$1.1 & 79.2$\pm$0.9& 71.8$\pm$6.3 \\
& \textbf{OursELBO(MI)} & Post-hoc & No & 90.1$\pm$1.9 & \textcolor{violet}{\textbf{79.4}$\pm$1.9} & 77.1$\pm$3.6 & 83.2$\pm$4.5 & \textcolor{violet}{\textbf{86.4}$\pm$5.2} \\
& \textbf{OursELBO(Dent)} & Post-hoc & No & 88.5$\pm$2.2 & 78.6$\pm$2.8 & 74.1$\pm$3.7 & 82.0$\pm$3.6& 85.4$\pm$5.1 \\
& \textbf{OursELBO(Prec)} & Post-hoc & No & \textcolor{violet}{\textbf{90.2}$\pm$1.9} & \textcolor{violet}{\textbf{79.4}$\pm$1.8} & \textcolor{violet}{\textbf{77.4}$\pm$4.2} & \textcolor{violet}{\textbf{83.3}$\pm$4.6}& \textcolor{violet}{\textbf{86.4}$\pm$5.2} \\
\bottomrule
\end{tabular}
\label{table: OOD-aupr}
\end{center}
\end{table*}
\subsection{Misclassification Detection} In the following, we compare our proposed method with several SOTA uncertainty quantification methods under traditional settings on the misclassification detection task. The results are shown in Table~\ref{table:miss-class-appendix}.
\begin{table*}[t]
\begin{center}
\scriptsize
\captionsetup{font=small}
\caption{\textbf{Misclassification Results.} Ent and MaxP stand for Entropy and Max Probability, respectively. Settings stand for post-hoc or traditional, i.e., training the entire model from scratch.}
\begin{tabular}{llllllllll}
\\
\toprule
\textbf{Methods} & \textbf{Settings} & \textbf{Metric} & \textbf{MNIST}& \makecell{\textbf{CIFAR 10}} & \makecell{\textbf{CIFAR 100}}& \textbf{Metric} & \textbf{MNIST}& \makecell{\textbf{CIFAR 10}} & \makecell{\textbf{CIFAR 100}}\\
\hline
Base Model(Ent) & Traditional & AUROC & 96.7$\pm$0.9 & 92.1$\pm$0.2 & 87.2$\pm$0.2 & AUPR & 37.4$\pm$4.0 & 47.5$\pm$1.2 & 67.0$\pm$1.2 \\
Base Model(MaxP) & Traditional & AUROC & 96.7$\pm$0.9 & 92.1$\pm$0.2 & 86.8$\pm$0.2 & AUPR & 39.4$\pm$3.6 & 46.6$\pm$1.1 & 65.7$\pm$1.0 \\
MCDropout & Traditional & AUROC & 95.1$\pm$0.1 & 90.4$\pm$0.6 & 83.7$\pm$0.1 & AUPR & 34.0$\pm$0.5 & 42.1$\pm$2.1 & 60.3$\pm$0.2 \\
BeBayesian & Traditional & AUROC & 97.1$\pm$0.5 & 86.6$\pm$0.5 & 85.1$\pm$0.6 & AUPR & 39.4$\pm$4.0 & 47.1$\pm$1.6 & 63.2$\pm$1.3 \\
PostNet & Traditional & AUROC & 97.0$\pm$0.6 & 90.3$\pm$0.3 & 84.1$\pm$0.5 & AUPR & 41.3$\pm$2.7 & 51.3$\pm$2.1 & 66.4$\pm$0.3 \\
\hline
\textbf{Ours(Ent)} &Post-hoc & AUROC & 96.9$\pm$0.6 & 91.1$\pm$0.2 & 83.4$\pm$0.1 & AUPR & 35.6$\pm$4.5 & 50.0$\pm$3.1 & 66.3$\pm$0.4 \\
\textbf{Ours(MaxP)} &Post-hoc & AUROC & \textcolor{violet}{\textbf{97.4$\pm$0.4}} & \textcolor{violet}{\textbf{92.2$\pm$0.7}} & \textcolor{violet}{\textbf{85.8$\pm$0.2}} & AUPR & \textcolor{violet}{\textbf{44.5}$\pm$5.1} & \textcolor{violet}{\textbf{54.2}$\pm$3.2} & \textcolor{violet}{\textbf{68.2}$\pm$0.5} \\
\bottomrule
\end{tabular}
\label{table:miss-class-appendix}
\end{center}
\end{table*}
\subsection{Ablation Study} In the following, we show the complete results of the ablation study in Table~\ref{table:ablation-structure-appendix}. \begin{table*}[!t]
\begin{center}
\scriptsize
\captionsetup{font=small}
\caption{\textbf{Ablation Study of Meta-model (CIFAR10 AUROC score)}}
\begin{tabular}{llllllll}
\\
\toprule
\textbf{ID Data}\ \&\ \textbf{Model} & \textbf{Methods} & \textbf{SVHN}& \textbf{FMNIST} & \textbf{LSUN}& \textbf{TIM} &\textbf{Corrupted} \\
\hline
CIFAR10 & Base Model(Ent) & 86.4$\pm$4.6 & 90.8$\pm$1.3 & 89.0$\pm$0.5 & 87.5$\pm$1.1 & 85.9$\pm$8.2 \\
VGG16 & Base Model(MaxP) & 86.3$\pm$4.4 & 90.4$\pm$1.2 & 88.7$\pm$0.5 & 87.3$\pm$1.1 & 85.7$\pm$8.1 \\
\hline
& \textbf{Single-layer Meta(Ent)} & 90.4$\pm$0.7 & 91.3$\pm$0.2 & 91.5$\pm$0.2 & 89.5$\pm$0.2 & 90.4$\pm$0.8 \\
& \textbf{Single-layer Meta(MaxP)} & 90.1$\pm$0.6 & 91.5$\pm$0.1 & 91.4$\pm$0.2 & 89.6$\pm$0.1 & 90.1$\pm$0.6 \\
& \textbf{Single-layer Meta(MI)} & 90.6$\pm$1.2 & 90.1$\pm$0.6 & 91.2$\pm$0.2 & 88.8$\pm$0.3 & 90.5$\pm$1.3 \\
& \textbf{Single-layer Meta(Dent)} & 90.4$\pm$0.9 & 90.7$\pm$0.3 & 91.4$\pm$0.2 & 89.2$\pm$0.2 & 90.3$\pm$1.0 \\
& \textbf{Single-layer Meta(Prec)} & 90.6$\pm$1.2 & 90.0$\pm$0.6 & 91.2$\pm$0.2 & 88.8$\pm$0.4 & 90.6$\pm$1.4 \\
\hline
& \textbf{Cross-Ent Meta(Ent)} & 94.2$\pm$2.2 & 91.2$\pm$0.7 & 91.2$\pm$0.6 & 90.3$\pm$0.8& 94.7$\pm$2.5 \\
& \textbf{Cross-Ent Meta(MaxP)} & 93.3$\pm$1.6 & 91.1$\pm$0.7 &90.9$\pm$0.4 & 90.0$\pm$0.6 & 94.0$\pm$2.0 \\
\hline
& \textbf{Base-Model+LastLayer(Ent)}& 93.0$\pm$1.1 & 90.2$\pm$0.3 & 91.9$\pm$0.2 & 89.9$\pm$0.2& 93.1$\pm$1.2 \\
&\textbf{Base-Model+LastLayer(MaxP)} & 92.9$\pm$0.9 & 90.5$\pm$0.3 &91.9$\pm$0.2 & 90.1$\pm$0.1 & 93.1$\pm$0.9 \\
\hline
& \textbf{$10\%$data(Ent)} & 90.0$\pm$7.1 &89.1$\pm$4.4 & 88.7$\pm$6.4 & 88.2$\pm$5.2& 90.2$\pm$7.4 \\
& \textbf{$10\%$data(MaxP)} & 90.9$\pm$5.5 & 88.1$\pm$3.3 &86.9$\pm$7.0 & 86.5$\pm$4.5 & 91.7$\pm$5.4 \\
& \textbf{$10\%$data(MI)}& 99.9$\pm$0.2 & 98.1$\pm$2.2 & 95.4$\pm$1.7 & 97.2$\pm$2.4 & 99.9$\pm$0.1 \\
& \textbf{$10\%$data(Dent)} & 96.7$\pm$3.5 & 97.4$\pm$3.0 & 94.3$\pm$4.1 & 95.7$\pm$4.1 & 96.7$\pm$3.4 \\
& \textbf{$10\%$data(Prec)} & 99.9$\pm$0.1 & 98.0$\pm$2.2 & 95.4$\pm$1.7 & 97.3$\pm$2.3 & 99.9$\pm$0.1 \\
\hline
& \textbf{Ours(Ent)} & 96.3$\pm$3.0 &89.0$\pm$5.2 & 89.6$\pm$3.4 & 89.4$\pm$3.5& 95.9$\pm$4.3 \\
& \textbf{Ours(MaxP)} & 95.6$\pm$3.6 & 87.8$\pm$4.4 &89.1$\pm$2.4 & 88.2$\pm$2.6 & 94.0$\pm$7.3 \\
& \textbf{Ours(MI)} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & \textcolor{violet}{\textbf{98.8$\pm$0.5}} & 95.2$\pm$0.9 & \textcolor{violet}{\textbf{98.1$\pm$0.3}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
& \textbf{Ours(Dent)} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & 98.4$\pm$0.9 & \textcolor{violet}{\textbf{95.7}$\pm$0.8} & 97.7$\pm$0.5 & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
& \textbf{Ours(Prec)} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} & \textcolor{violet}{\textbf{98.8$\pm$0.5}} & 95.1$\pm$0.5 & \textcolor{violet}{\textbf{98.1$\pm$0.3}} & \textcolor{violet}{\textbf{100.0$\pm$0.0}} \\
\bottomrule
\end{tabular}
\label{table:ablation-structure-appendix}
\end{center}
\end{table*}
\end{document}
\begin{document}
\title{A Dynamic Programming Solution to a Generalized LCS Problem}
\begin{abstract} In this paper, we consider a generalized longest common subsequence problem, the string-excluding constrained LCS problem. For the two input sequences $X$ and $Y$ of lengths $n$ and $m$, and a constraint string $P$ of length $r$, the problem is to find a common subsequence $Z$ of $X$ and $Y$ excluding $P$ as a substring such that the length of $Z$ is maximized. The problem and its solution were first proposed by Chen and Chao\cite{1}, but we found that their algorithm cannot solve the problem correctly. A new dynamic programming solution for the STR-EC-LCS problem is then presented in this paper. The correctness of the new algorithm is proved. The time complexity of the new algorithm is $O(nmr)$. \end{abstract}
\section{Introduction} In this paper, we consider a generalized longest common subsequence problem. The longest common subsequence (LCS) problem is a well-known measurement for computing the similarity of two strings. It can be widely applied in diverse areas, such as file comparison, pattern matching and computational biology\cite{2,5,6,8,9}.
A sequence is a string of characters over an alphabet $\Sigma$. A subsequence of a sequence $X$ is obtained by deleting zero or more characters from $X$ (not necessarily contiguous). A substring of a sequence $X$ is a subsequence of successive characters within $X$.
For a given sequence $X=x_1x_2\cdots x_n$ of length $n$, the $i$th character of $X$ is denoted as $x_i \in \Sigma$ for any $i=1,\cdots,n$. A substring of $X$ from position $i$ to $j$ can be denoted as $X[i:j]=x_ix_{i+1}\cdots x_j$. A substring $X[i:j]=x_ix_{i+1}\cdots x_j$ is called a prefix or a suffix of $X$ if $i=1$ or $j=n$, respectively.
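To make the two notions concrete, here is a small illustrative Python sketch (ours, not part of the algorithms developed below): a subsequence test by a single left-to-right scan, and a substring test.

```python
def is_subsequence(z, S):
    """True if z is obtained from S by deleting zero or more characters."""
    it = iter(S)
    # each character of z must appear in S, in order
    return all(c in it for c in z)

def is_substring(z, S):
    """True if z occurs as a block of successive characters within S."""
    return z in S
```

For example, \texttt{abb} is a subsequence of \texttt{aXbYb} but not a substring of it.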
Given two sequences $X$ and $Y$, the longest common subsequence (LCS) problem is to find a subsequence of $X$ and $Y$ whose length is the longest among all common subsequences of the two given sequences.
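For reference, the unconstrained LCS length is computed by the classical quadratic-time dynamic program. The Python sketch below is our illustration; the constrained variants studied in this paper add a third dimension to this table.

```python
def lcs_len(X, Y):
    """Length of a longest common subsequence of X and Y (classical DP)."""
    n, m = len(X), len(Y)
    # dp[i][j] = LCS length of the prefixes X[1:i] and Y[1:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]
```

For example, \texttt{lcs\_len("abcbdab", "bdcaba")} returns 4.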
In some biological applications, constraints must be applied to the LCS problem. These variants of the LCS problem are called constrained LCS (CLCS) problems\cite{10}. Recently, Chen and Chao\cite{1} proposed more generalized forms of the CLCS problem, the generalized constrained longest common subsequence (GC-LCS) problems. For the two input sequences $X$ and $Y$ of lengths $n$ and $m$, respectively, and a constraint string $P$ of length $r$, the GC-LCS problem is a set of four problems which are to find the LCS of $X$ and $Y$ including/excluding $P$ as a subsequence/substring, respectively. The four generalized constrained LCS problems are summarized in Table 1.
\begin{table}[ht] \begin{center} \caption{The GC-LCS problems} \begin{tabular}{lll} \hline\hline
Problem & Input & Output\\\hline SEQ-IC-LCS &$X$,$Y$, and $P$ & The longest common subsequence of $X$ and $Y$ including $P$ as a subsequence\\ STR-IC-LCS &$X$,$Y$, and $P$ & The longest common subsequence of $X$ and $Y$ including $P$ as a substring\\ SEQ-EC-LCS &$X$,$Y$, and $P$ & The longest common subsequence of $X$ and $Y$ excluding $P$ as a subsequence\\ STR-EC-LCS &$X$,$Y$, and $P$ & The longest common subsequence of $X$ and $Y$ excluding $P$ as a substring\\ \hline \end{tabular} \end{center} \end{table}
We will discuss the STR-EC-LCS problem in this paper. We have noticed that a previously proposed dynamic programming algorithm for the STR-EC-LCS problem\cite{1} cannot correctly solve the problem. A new dynamic programming solution for the STR-EC-LCS problem is then presented in this paper. The correctness of the new algorithm is proved. The time complexity of the new algorithm is $O(nmr)$.
The organization of the paper is as follows.
In section 2 we review the dynamic programming algorithm for the STR-EC-LCS problem proposed by Chen and Chao\cite{1} and point out that their algorithm fails on a simple counterexample. In section 3 we give a new dynamic programming solution for the STR-EC-LCS problem with time complexity $O(nmr)$, from a different point of view. In section 4 we discuss how to implement the algorithm efficiently. Some concluding remarks are given in section 5.
\section{A Proposed Dynamic Programming Algorithm} In this section, we will focus on the STR-EC-LCS problem and the solution previously proposed by Chen and Chao\cite{1}. As noted in Table 1, for the two input sequences $X$ and $Y$ of lengths $n$ and $m$, and a constraint string $P$ of length $r$, the STR-EC-LCS problem is to find an LCS $Z$ of $X$ and $Y$ excluding $P$ as a substring.
Let $L(i,j,k)$ denote the length of an LCS of $X[1:i]$ and $Y[1:j]$ excluding $P[1:k]$ as a substring. Chen and Chao gave a recursive formula (1) for computing $L(i,j,k)$ as follows.
\beql{eq21} L(i,j,k)=\left\{\begin{array}{ll} L(i-1,j-1,k) & \texttt{if } k=1 \texttt{ and } x_i=y_j=p_k, \\ 1+\max \{L(i-1,j-1,k-1),L(i-1,j-1,k)\} & \texttt{if } k\geq 2 \texttt{ and } x_i=y_j=p_k, \\ 1+L(i-1,j-1,k)& \texttt{if } x_i=y_j \texttt{ and } (k=0, \texttt{ or } k>0 \texttt{ and } x_i\neq p_k), \\ \max\left\{ L(i-1,j,k),L(i,j-1,k) \right\} & \texttt{if } x_i\neq y_j. \end{array} \right. \end{equation}
The boundary conditions of this recursive formula are $L(i,0,k) = L(0,j,k) = 0$ for any $0\leq i\leq n, 0\leq j\leq m$, and $0\leq k \leq r$.
The correctness of the recursive formula (1) was based on Theorem 3 of their paper\cite{1} as follows.
\begin{theo}\label{th3} \verb"(Chen and Chao 2011)" Let $S_{i,j,k}$ denote the set of all LCSs of $X[1:i]$ and $Y[1:j]$ excluding $P[1:k]$ as a substring. If $Z[1:l]\in S_{i,j,k}$, the following conditions hold:
\renewcommand\labelenumi{(\theenumi)} \begin{enumerate}
\item If $x_i=y_j=p_k$ and $k = 1$, then $z_l\neq x_i$ and $Z[1:l]\in S_{i-1,j-1,k}$.
\item If $x_i=y_j=p_k$ and $k\geq 2$, then $z_l=x_i=y_j=p_k$ and $z_{l-1}=p_{k-1}$ implies $Z[1:l-1]\in S_{i-1,j-1,k-1}$.
\item If $x_i=y_j=p_k$ and $k\geq 2$, then $z_l=x_i=y_j=p_k$ and $z_{l-1}\neq p_{k-1}$ implies $Z[1:l-1]\in S_{i-1,j-1,k}$.
\item If $x_i=y_j=p_k$ and $k\geq 2$, then $z_l\neq x_i$ implies $Z[1:l]\in S_{i-1,j-1,k}$.
\item If $x_i=y_j$ and $x_i\neq p_k$, then $z_l=x_i=y_j$ and $Z[1:l-1]\in S_{i-1,j-1,k}$.
\item If $x_i\neq y_j$, then $z_l\neq x_i$ implies $Z[1:l]\in S_{i-1,j,k}$.
\item If $x_i\neq y_j$, then $z_l\neq y_j$ implies $Z[1:l]\in S_{i,j-1,k}$. \end{enumerate}
\end{theo}
Since a common subsequence of $X[1:i]$ and $Y[1:j]$ excluding $P[1:k-1]$ as a substring is also a common subsequence of $X[1:i]$ and $Y[1:j]$ excluding $P[1:k]$ as a substring, by the definition of $L(i,j,k)$, we know that $L(i,j,k)\geq L(i,j,k-1)$ is always true. Therefore, the recursive formula (1) can be further reduced to the recursive formula (2). \beql{eq22} L(i,j,k)=\left\{\begin{array}{ll} L(i-1,j-1,k) & \texttt{if } k=1 \texttt{ and } x_i=y_j=p_k, \\ 1+L(i-1,j-1,k) & \texttt{if } k\geq 2 \texttt{ and } x_i=y_j=p_k, \\ 1+L(i-1,j-1,k)& \texttt{if } x_i=y_j \texttt{ and } (k=0, \texttt{ or } k>0 \texttt{ and } x_i\neq p_k), \\ \max\left\{ L(i-1,j,k),L(i,j-1,k) \right\} & \texttt{if } x_i\neq y_j. \end{array} \right. \end{equation} Furthermore, and most importantly, the above theorem was only stated without a strict proof. Therefore, the correctness of the proposed algorithm cannot be guaranteed. For example, if $X=abbb$, $Y=aab$ and $P=ab$, the values of $L(i,j,k)$, $1\leq i\leq 4$, $1\leq j\leq 3$, $0\leq k\leq 2$, computed by recursive formulas (1) and (2) are listed in Table 2.
\begin{table}[ht] \caption{$L(i,j,k)$ computed by recursive formula (1) and (2)} \begin{center}
\begin{tabular}{|c|ccc|ccc|ccc|} \hline &&$k=0$&&&$k=1$&&&$k=2$&\\\hline
$i=1$&1&1&1&0&0&0&1&1&1\\ $i=2$&1&1&2&0&0&1&1&1&2\\ $i=3$&1&1&2&0&0&1&1&1&2\\ $i=4$&1&1&2&0&0&1&1&1&2\\ \hline \end{tabular} \end{center} \end{table}
From Table 2 we see that the final answer is $L(4,3,2)=2$, computed as $L(4,3,2)=1+L(3,2,2)$ since in this case $k\geq 2$ and $x_4=y_3=p_2=b$. But this is a wrong answer: the correct answer should be 1, since the only common subsequence of $X$ and $Y$ of length 2 is $ab$, which contains $P$ as a substring.
We have tried to modify the recursive formula (1) or (2) to a correct one, but failed.
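The failure can also be reproduced mechanically. The following Python sketch (ours; the function names are our own) implements recursive formula (2) next to an exhaustive search over all common subsequences. On $X=abbb$, $Y=aab$, $P=ab$, the formula reports $2$ while the exhaustive search confirms the correct value $1$.

```python
from itertools import combinations

def chen_chao_length(X, Y, P):
    """Length reported by recursive formula (2); the 1-indexed p_k is P[k-1]."""
    n, m, r = len(X), len(Y), len(P)
    L = [[[0] * (r + 1) for _ in range(m + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in range(r + 1):
                if X[i - 1] != Y[j - 1]:
                    L[i][j][k] = max(L[i - 1][j][k], L[i][j - 1][k])
                elif k >= 1 and X[i - 1] == P[k - 1]:
                    # case x_i = y_j = p_k
                    L[i][j][k] = L[i - 1][j - 1][k] if k == 1 else 1 + L[i - 1][j - 1][k]
                else:
                    # case x_i = y_j and (k = 0, or k > 0 and x_i != p_k)
                    L[i][j][k] = 1 + L[i - 1][j - 1][k]
    return L[n][m][r]

def brute_force_length(X, Y, P):
    """Maximum length over all common subsequences excluding P as a substring."""
    def subseqs(S):
        return {''.join(c) for l in range(len(S) + 1) for c in combinations(S, l)}
    common = subseqs(X) & subseqs(Y)
    return max(len(z) for z in common if P not in z)
```

The brute force is only feasible for tiny inputs, but it is enough to certify the counterexample.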
In the next section, we will investigate the problem in a different way and finally present a correct dynamic programming solution for the STR-EC-LCS problem.
\section{Our New Dynamic Programming Solution}
For the two input sequences $X=x_1x_2\cdots x_n$ and $Y=y_1y_2\cdots y_m$ of lengths $n$ and $m$, respectively, and a constraint string $P=p_1p_2\cdots p_r$ of length $r$, we want to find an LCS of $X$ and $Y$ excluding $P$ as a substring.
In the description of our new algorithm, a function $\sigma$ will be mentioned frequently. For any string $S$ and a fixed constraint string $P$, the length of the longest suffix of $S$ that is also a prefix of $P$ is denoted by function $\sigma(S)$.
The symbol $\oplus$ is also used to denote the string concatenation.
For example, if $P=aaba$ and $S=aabaaab$, then substring $aab$ is the longest suffix of $S$ that is also a prefix of $P$, and therefore $\sigma(S)=3$.
It is readily seen that $S\oplus P=aabaaabaaba$.
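A direct, naive evaluation of $\sigma$ simply scans candidate suffix lengths from longest to shortest. The following Python sketch (ours) is useful for checking the faster table-based computation developed in section 4.

```python
def sigma(S, P):
    """Length of the longest suffix of S that is also a prefix of P."""
    for l in range(min(len(S), len(P)), 0, -1):
        if S.endswith(P[:l]):
            return l
    return 0
```

For the example above, \texttt{sigma("aabaaab", "aaba")} returns 3.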
Let $Z(i,j,k)$ denote the set of all LCSs of $X[1:i]$ and $Y[1:j]$ excluding $P$ as a substring and $\sigma(z)=k$ for each $z\in Z(i,j,k)$. The length of an LCS in $Z(i,j,k)$ is denoted as $f(i,j,k)$.
If we can compute $f(i,j,k)$ for any $1\leq i\leq n, 1\leq j\leq m$, and $0\leq k<r$ efficiently, then the length of an LCS of $X$ and $Y$ excluding $P$ as a substring must be $\max\limits_{0\leq t<r}\left\{f(n,m,t)\right\}$.
We can give a recursive formula for computing $f(i,j,k)$ by the following theorem.
\begin{theo}\label{th1} For the two input sequences $X=x_1x_2\cdots x_n$ and $Y=y_1y_2\cdots y_m$ of lengths $n$ and $m$, respectively, and a constraint string $P=p_1p_2\cdots p_r$ of length $r$, let $Z(i,j,k)$ denote the set of all LCSs of $X[1:i]$ and $Y[1:j]$ excluding $P$ as a substring and $\sigma(z)=k$ for each $z\in Z(i,j,k)$.
The length of an LCS in $Z(i,j,k)$ is denoted as $f(i,j,k)$.
For any $1\leq i\leq n, 1\leq j\leq m$, and $0\leq k < r$, $f(i,j,k)$ can be computed by the following recursive formula \eqn{eq31}.
\beql{eq31} f(i,j,k)=\left\{\begin{array}{ll} \max\left\{ f(i-1,j,k),f(i,j-1,k) \right\} & \texttt{if } x_i\neq y_j,\\ \max\left\{
f(i-1,j-1,k),1+\max\limits_{0\leq t<r}\left\{f(i-1,j-1,t)|\sigma(P[1:t]\oplus x_i)=k\right\} \right\} & \texttt{if } x_i= y_j. \end{array} \right. \end{equation}
The boundary conditions of this recursive formula are $f(i,0,k) = f(0,j,k) = 0$ for any $0\leq i\leq n, 0\leq j\leq m$, and $0\leq k \leq r$. \end{theo}
\noindent{\bf Proof.}
For any $1\leq i\leq n, 1\leq j\leq m$, and $0\leq k < r$, suppose $f(i,j,k) = t$ and $z=z_1,\cdots, z_t\in Z(i,j,k)$.
First of all, we notice that for each pair $(i',j')$, $1\leq i'\leq n$, $1\leq j'\leq m$, such that $i'\leq i$ and $j'\leq j$, we have $f(i',j',k) \leq f(i,j,k)$, since a common subsequence $z$ of $X[1:i']$ and $Y[1:j']$ excluding $P$ as a substring and $\sigma(z)=k$ is also a common subsequence of $X[1:i]$ and $Y[1:j]$ excluding $P$ as a substring and $\sigma(z)=k$.
(1) In the case of $x_i\neq y_j$, we have $x_i\neq z_t$ or $y_j\neq z_t$.
(1.1) If $x_i\neq z_t$, then $z=z_1,\cdots, z_t$ is a common subsequence of $X[1:i-1]$ and $Y[1:j]$ excluding $P$ as a substring and $\sigma(z_1,\cdots, z_t)=k$, and so $f(i-1,j,k) \geq t$. On the other hand, $f(i-1,j,k)\leq f(i,j,k) = t$. Therefore, in this case we have $f(i,j,k) = f(i-1,j,k)$.
(1.2) If $y_j\neq z_t$, then we can prove similarly that in this case, $f(i,j,k) = f(i,j-1,k)$.
Combining the two subcases we conclude that in the case of $x_i\neq y_j$, we have $f(i,j,k)=\max\left\{ f(i-1,j,k),f(i,j-1,k) \right\}$.
(2) In the case of $x_i=y_j$, there are also two cases to be distinguished.
(2.1) If $x_i=y_j\neq z_t$, then $z=z_1,\cdots, z_t$ is also a common subsequence of $X[1:i-1]$ and $Y[1:j-1]$ excluding $P$ as a substring and $\sigma(z_1,\cdots, z_t)=k$, and so $f(i-1,j-1,k) \geq t$. On the other hand, $f(i-1,j-1,k)\leq f(i,j,k) = t$. Therefore, in this case we have $f(i,j,k) = f(i-1,j-1,k)$.
(2.2) If $x_i=y_j=z_t$, then $f(i,j,k) = t>0$ and $z=z_1,\cdots, z_t$ is an LCS of $X[1:i]$ and $Y[1:j]$ excluding $P$ as a substring and $\sigma(z_1,\cdots, z_t)=k$, and thus $z_1,\cdots, z_{t-1}$ is a common subsequence of $X[1:i-1]$ and $Y[1:j-1]$ excluding $P$ as a substring.
Let $\sigma(z_1,\cdots, z_{t-1})=q$ and $f(i-1,j-1,q)=s$. Then $z_1,\cdots, z_{t-1}$ is a common subsequence of $X[1:i-1]$ and $Y[1:j-1]$ excluding $P$ as a substring and $\sigma(z_1,\cdots, z_{t-1})=q$. Therefore, we have
\beql{eq32} f(i-1,j-1,q)=s\geq t-1. \end{equation}
Let $v=v_1,\cdots, v_s\in Z(i-1,j-1,q)$ be an LCS of $X[1:i-1]$ and $Y[1:j-1]$ excluding $P$ as a substring and $\sigma(v_1,\cdots, v_s)=q$. Then $\sigma((v_1,\cdots, v_s)\oplus x_i)=\sigma(P[1:q]\oplus x_i)=k$, and thus $(v_1,\cdots, v_s)\oplus x_i$ is a common subsequence of $X[1:i]$ and $Y[1:j]$ excluding $P$ as a substring and $\sigma((v_1,\cdots, v_s)\oplus x_i)=k$.
Therefore, \beql{eq33} f(i,j,k)=t\geq s+1. \end{equation}
Combining \eqn{eq32} and \eqn{eq33} we have $s=t-1$. Therefore, $z_1,\cdots, z_{t-1}$ is an LCS of $X[1:i-1]$ and $Y[1:j-1]$ excluding $P$ as a substring and $\sigma(z_1,\cdots, z_{t-1})=q$.
In other words, \beql{eq34}
f(i,j,k)\leq 1+\max\limits_{0\leq q<r}\left\{f(i-1,j-1,q)|\sigma(P[1:q]\oplus x_i)=k\right\} \end{equation}
On the other hand, for any $0\leq q<r$, if $f(i-1,j-1,q)=s$ and $\sigma(P[1:q]\oplus x_i)=k$, then for any $v=v_1,\cdots, v_s\in Z(i-1,j-1,q)$, $v\oplus x_i$ is a common subsequence of $X[1:i]$ and $Y[1:j]$ and $\sigma(v\oplus x_i)=k$. Since $v$ excludes $P$ as a substring and $\sigma(v\oplus x_i)=k<r$, $v\oplus x_i$ is a common subsequence of $X[1:i]$ and $Y[1:j]$ excluding $P$ as a substring. Furthermore, $v\oplus x_i$ is a common subsequence of $X[1:i]$ and $Y[1:j]$ excluding $P$ as a substring and $\sigma(v\oplus x_i)=k$. Therefore, $f(i,j,k)=t\geq 1+s=1+f(i-1,j-1,q)$, and so we conclude that, \beql{eq35}
f(i,j,k)\geq 1+\max\limits_{0\leq q<r}\left\{f(i-1,j-1,q)|\sigma(P[1:q]\oplus x_i)=k\right\} \end{equation}
Combining \eqn{eq34} and \eqn{eq35} we have, in this case, \beql{eq36}
f(i,j,k)= 1+\max\limits_{0\leq q<r}\left\{f(i-1,j-1,q)|\sigma(P[1:q]\oplus x_i)=k\right\} \end{equation}
Combining the two subcases in the case of $x_i=y_j$, we conclude that the recursive formula \eqn{eq31} is correct for the case $x_i=y_j$.
The proof is complete.
$\blacksquare$
\section{The Implementation of the Algorithm} According to Theorem \ref{th1}, our new algorithm for computing $f(i,j,k)$ is a standard dynamic programming algorithm over a 3-dimensional table. By the recursive formula \eqn{eq31}, the new dynamic programming algorithm for computing $f(i,j,k)$ can be implemented as the following Algorithm 1. \begin{algorithm} \caption{STR-EC-LCS} {\bf Input:} Strings $X=x_1\cdots x_n$, $Y=y_1\cdots y_m$ of lengths $n$ and $m$, respectively, and a constraint string $P=p_1\cdots p_r$ of length $r$\\ {\bf Output:} The length of an LCS of $X$ and $Y$ excluding $P$ as a substring \begin{algorithmic}[1] \FORALL{$i,j,k$ , $0\leq i\leq n, 0\leq j\leq m$, and $0\leq k \leq r$} \STATE $f(i,0,k) \leftarrow 0, f(0,j,k) \leftarrow 0$ \{boundary condition\} \ENDFOR \FOR{$i=1$ to $n$} \FOR{$j=1$ to $m$} \FOR{$k=0$ to $r$} \IF {$x_i\neq y_j$} \STATE $f(i,j,k) \leftarrow \max\{f(i-1,j,k),f(i,j-1,k)\}$ \ELSE
\STATE $u \leftarrow \max\limits_{0\leq t<r}\left\{f(i-1,j-1,t)|\sigma(P[1:t]\oplus x_i)=k\right\}$ \STATE $f(i,j,k) \leftarrow \max\{f(i-1,j-1,k),1+u\}$ \ENDIF \ENDFOR \ENDFOR \ENDFOR \RETURN $\max\limits_{0\leq t<r}\{f(n,m,t)\}$ \end{algorithmic} \end{algorithm}
To implement our new algorithm efficiently, the most important thing is to compute $\sigma(P[1:k]\oplus x_i)$ in line 10 efficiently, for each $0\leq k<r$ and each $x_i$, $1\leq i\leq n$.
It is obvious that $\sigma(P[1:k]\oplus x_i)=k+1$ for the case of $x_i=p_{k+1}$. It will be more complex to compute $\sigma(P[1:k]\oplus x_i)$ for the case of $x_i\neq p_{k+1}$. In this case the length of matched prefix of $P$ has to be shortened to the largest $t<k$ such that $p_{k-t+1}\cdots p_k=p_1\cdots p_t$ and $x_i=p_{t+1}$. Therefore, in this case, $\sigma(P[1:k]\oplus x_i)=t+1$.
This computation is very similar to the computation of the prefix function in KMP algorithm for solving the string matching problem\cite{3,7}.
For a given string $S=s_1\cdots s_n$, the prefix function $kmp(i)$ denotes the length of the longest prefix of $s_1\cdots s_{i-1}$ that matches a suffix of $s_1\cdots s_i$. For example, if $S=ababaa$, then $kmp(1),\cdots, kmp(6)=0,0,1,2,3,1$.
For the constraint string $P=p_1\cdots p_r$ of length $r$, its prefix function $kmp$ can be pre-computed in $O(r)$ time as follows.
\begin{algorithm} \caption{Prefix Function} {\bf Input:} String $P=p_1\cdots p_r$\\ {\bf Output:} The prefix function $kmp$ of $P$ \begin{algorithmic}[1] \STATE $kmp(0) \leftarrow -1$, $kmp(1) \leftarrow 0$\\ \STATE $k \leftarrow 0$\\ \FOR{$i=2$ to $r$} \WHILE{$k\geq 0 \ \AND\ p_{k+1}\neq p_i$} \STATE $k \leftarrow kmp(k)$\\ \ENDWHILE \STATE $k \leftarrow k+1$\\ \STATE $kmp(i) \leftarrow k$\\ \ENDFOR \end{algorithmic} \end{algorithm} Note that the match length $k$ is initialized once and carried across iterations of the {\bf for} loop; resetting it to $0$ inside the loop would lose previously matched borders and yield wrong values (e.g., $kmp(4)=0$ instead of $2$ for $S=ababaa$).
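In Python, an equivalent linear-time computation of the prefix function can be sketched as follows (our transcription; note that the match length $k$ must be carried across iterations of the loop rather than reset to $0$ each time):

```python
def prefix_function(P):
    """kmp[i] = length of the longest prefix of P[1:i-1] matching a suffix of P[1:i]."""
    r = len(P)
    kmp = [0] * (r + 1)  # 1-indexed; kmp[1] = 0
    k = 0  # carried across iterations, not reset inside the loop
    for i in range(2, r + 1):
        while k > 0 and P[k] != P[i - 1]:
            k = kmp[k]  # fall back to the next shorter border
        if P[k] == P[i - 1]:
            k += 1
        kmp[i] = k
    return kmp
```

For $P=ababaa$ it returns $kmp(1),\cdots,kmp(6)=0,0,1,2,3,1$, matching the example above.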
With this pre-computed prefix function $kmp$, the function $\sigma(P[1:k]\oplus ch)$ for each character $ch\in\Sigma$ and $0\leq k<r$ can be computed as follows.
\begin{algorithm} \caption{$\sigma(k,ch)$} {\bf Input:} String $P=p_1\cdots p_r$, integer $k$ and character $ch$\\ {\bf Output:} $\sigma(P[1:k]\oplus ch)$ \begin{algorithmic}[1] \WHILE{$k\geq 0 \ \AND\ p_{k+1}\neq ch$} \STATE $k \leftarrow kmp(k)$\\ \ENDWHILE \RETURN $k+1$ \end{algorithmic} \end{algorithm}
Then, we can compute an index $t^*$ such that $$f(i-1,j-1,t^*)=\max\limits_{0\leq t<r}\left\{f(i-1,j-1,t)|\sigma(P[1:t]\oplus x_i)=k\right\}$$ in line 10 of Algorithm 1 by the following Algorithm 4.
\begin{algorithm} \caption{$\max\sigma(i,j,k)$} {\bf Input:} Integers $i,j,k$\\
{\bf Output:} An index $t^*$ such that $$f(i-1,j-1,t^*)=\max\limits_{0\leq t<r}\left\{f(i-1,j-1,t)|\sigma(P[1:t]\oplus x_i)=k\right\}$$ \begin{algorithmic}[1] \STATE $tmp \leftarrow -1$, $t^* \leftarrow -1$\\ \FOR{$t=0$ to $r-1$} \IF{$\sigma(t,x_i)=k \ \AND\ f(i-1,j-1,t)>tmp$} \STATE $tmp \leftarrow f(i-1,j-1,t), t^* \leftarrow t$\\ \ENDIF \ENDFOR \RETURN $t^*$ \end{algorithmic} \end{algorithm}
Then the value of $u$ in line 10 of Algorithm 1 must be $$u=f(i-1,j-1,t^*)=f(i-1,j-1,\max\sigma(i,j,k)).$$
We can further improve the efficiency of the above algorithms in the following two ways.
First, we can pre-compute a table $\lambda$ of the function $\sigma(P[1:k]\oplus ch)$ for each character $ch\in\Sigma$ and $0\leq k<r$ to speed up the computation of $\max\sigma(i,j,k)$.
\begin{algorithm} \caption{$\lambda(1:r,ch\in \Sigma)$} {\bf Input:} String $P=p_1\cdots p_r$, alphabet $\Sigma$\\ {\bf Output:} A table $\lambda$ \begin{algorithmic}[1] \FORALL{$a\in \Sigma$ \AND \ $a\neq p_1$} \STATE $\lambda(0,a) \leftarrow 0$\\ \ENDFOR \STATE $\lambda(0,p_1) \leftarrow 1$\\ \FOR{$t=1$ to $r-1$} \FORALL{$a\in \Sigma$} \IF{$a=p_{t+1}$} \STATE $\lambda(t,a) \leftarrow t+1$\\ \ELSE \STATE $\lambda(t,a) \leftarrow \lambda(kmp(t),a)$\\ \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm}
The time cost of the above preprocessing algorithm is obviously $O(r|\Sigma|)$. By using this pre-computed table $\lambda$, the value of the function $\sigma(P[1:k]\oplus ch)$ for each character $ch\in\Sigma$ and $0\leq k<r$ can be computed readily in $O(1)$ time.
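As a concrete sketch, the table $\lambda$ can be built in Python as follows (our illustration; \texttt{lam[t][a]} stores $\sigma(P[1:t]\oplus a)$ for $0\leq t<r$):

```python
def build_lambda(P, alphabet):
    """lam[t][a] = sigma(P[1:t] + a) for 0 <= t < len(P)."""
    r = len(P)
    lam = [dict() for _ in range(r)]
    if r == 0:
        return lam
    # prefix function of P, 1-indexed: kmp[i] for 1 <= i <= r
    kmp = [0] * (r + 1)
    k = 0
    for i in range(2, r + 1):
        while k > 0 and P[k] != P[i - 1]:
            k = kmp[k]
        if P[k] == P[i - 1]:
            k += 1
        kmp[i] = k
    for a in alphabet:
        lam[0][a] = 1 if a == P[0] else 0
    for t in range(1, r):
        for a in alphabet:
            # either extend the match by one, or fall back via the prefix function
            lam[t][a] = t + 1 if a == P[t] else lam[kmp[t]][a]
    return lam
```

For instance, with $P=aaba$ one gets $\lambda(2,a)=2$, $\lambda(2,b)=3$ and $\lambda(3,b)=0$.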
Second, the computation of the function $\max\sigma(i,j,k)$ is very time consuming and many repeated computations are overlapped in the whole {\bf for} loop of Algorithm 1. We can amortize the computation of the function $\max\sigma(i,j,k)$ over the entries of $f(i,j,k)$ in the {\bf for} loop on variable $k$ of Algorithm 1 and thus reduce the time cost of the whole algorithm. The modified algorithm can be described as follows.
\begin{algorithm} \caption{STR-EC-LCS} {\bf Input:} Strings $X=x_1\cdots x_n$, $Y=y_1\cdots y_m$ of lengths $n$ and $m$, respectively, and a constraint string $P=p_1\cdots p_r$ of length $r$\\ {\bf Output:} The length of an LCS of $X$ and $Y$ excluding $P$ as a substring \begin{algorithmic}[1] \FORALL{$i,j,k$ , $0\leq i\leq n, 0\leq j\leq m$, and $0\leq k \leq r$} \STATE $f(i,0,k) \leftarrow 0, f(0,j,k) \leftarrow 0$ \{boundary condition\} \ENDFOR \FOR{$i=1$ to $n$} \FOR{$j=1$ to $m$} \FOR{$k=0$ to $r$} \STATE $f(i,j,k) \leftarrow \max\{f(i-1,j,k),f(i,j-1,k)\}$ \ENDFOR \IF {$x_i=y_j$} \FOR{$k=0$ to $r-1$} \STATE $t \leftarrow \lambda(k,x_i)$ \STATE $f(i,j,t) \leftarrow \max\{f(i,j,t),1+f(i-1,j-1,k)\}$ \ENDFOR \ENDIF \ENDFOR \ENDFOR \RETURN $\max\limits_{0\leq t<r}\{f(n,m,t)\}$ \end{algorithmic} \end{algorithm}
Since $\lambda(k,x_i)$ can be computed in $O(1)$ time for each $x_i,1\leq i\leq n$ and any $0\leq k<r$, the loop body of above algorithm requires only $O(1)$ time. Therefore, our new algorithm for computing the length of an LCS of $X$ and $Y$ excluding $P$ as a substring requires $O(nmr)$ time and $O(r|\Sigma|)$ preprocessing time.
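Putting the pieces together, the whole $O(nmr)$ procedure can be sketched in Python as follows (our transcription of Algorithm 6 with 0-indexed strings; extensions are performed only from states $k<r$, and a non-empty constraint string is assumed). On the counterexample of section 2 it returns the correct value $1$.

```python
def str_ec_lcs(X, Y, P):
    """Length of an LCS of X and Y excluding P as a substring (P non-empty)."""
    n, m, r = len(X), len(Y), len(P)
    # Prefix function of P, 1-indexed: kmp[i] for 1 <= i <= r.
    kmp = [0] * (r + 1)
    k = 0
    for i in range(2, r + 1):
        while k > 0 and P[k] != P[i - 1]:
            k = kmp[k]
        if P[k] == P[i - 1]:
            k += 1
        kmp[i] = k
    # lam[t][a] = sigma(P[1:t] + a) for 0 <= t < r.
    alphabet = set(X) | set(Y) | set(P)
    lam = [dict() for _ in range(r)]
    for a in alphabet:
        lam[0][a] = 1 if a == P[0] else 0
    for t in range(1, r):
        for a in alphabet:
            lam[t][a] = t + 1 if a == P[t] else lam[kmp[t]][a]
    # f[i][j][k]: LCS length of X[1:i], Y[1:j] excluding P, with suffix state k;
    # state r is kept only as a sink and never feeds the answer.
    f = [[[0] * (r + 1) for _ in range(m + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            for k in range(r + 1):
                f[i][j][k] = max(f[i - 1][j][k], f[i][j - 1][k])
            if X[i - 1] == Y[j - 1]:
                for k in range(r):  # extend only from states below r
                    t = lam[k][X[i - 1]]
                    f[i][j][t] = max(f[i][j][t], 1 + f[i - 1][j - 1][k])
    return max(f[n][m][t] for t in range(r))
```

With a constraint string that never occurs in the inputs, the result reduces to the plain LCS length.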
If we want to obtain an answer LCS of $X$ and $Y$ excluding $P$ as a substring, and not just its length, we can also present a simple recursive back tracing algorithm for this purpose, given as the following Algorithm 7.
\begin{algorithm} \caption{$back(i,j,k)$} {\bf Comments:} A recursive back tracing algorithm to construct the answer LCS \begin{algorithmic}[1] \IF{$i=0 \ \OR \ j=0$} \RETURN \ENDIF \IF {$x_i=y_j$} \IF {$f(i,j,k)=f(i-1,j-1,k)$} \STATE $back(i-1,j-1,k)$ \ELSE \STATE $back(i-1,j-1,\max\sigma(i,j,k))$ \PRINT $x_i$ \ENDIF \ELSIF{$f(i-1,j,k)>f(i,j-1,k)$} \STATE $back(i-1,j,k)$ \ELSE \STATE $back(i,j-1,k)$ \ENDIF \end{algorithmic} \end{algorithm}
In the end of our new algorithm, we will find an index $t$ such that $f(n,m,t)$ gives the length of an LCS of $X$ and $Y$ excluding $P$ as a substring. Then, a function call $back(n,m,t)$ will produce the answer LCS accordingly.
Since the cost of the algorithm $\max\sigma(i,j,k)$ is $O(r)$ in the worst case, the algorithm $back(i,j,k)$ will cost $O(r\max(n,m))$.
Finally we summarize our results in the following Theorem.
\begin{theo}\label{th2}
Algorithm 6 correctly solves the STR-EC-LCS problem in $O(nmr)$ time and $O(nmr)$ space, with preprocessing time $O(r|\Sigma|)$. \end{theo}
\section{Concluding Remarks} We have suggested a new dynamic programming solution for the STR-EC-LCS problem. The new algorithm corrects a previously presented dynamic programming algorithm with the same time and space complexities.
The STR-IC-LCS problem is another interesting generalized constrained longest common subsequence (GC-LCS) which is very similar to the STR-EC-LCS problem.
The STR-IC-LCS problem, introduced in \cite{1}, is to find an LCS of two main sequences in which a constraining sequence of length $r$ must be included as a substring. In \cite{1} an $O(nmr)$-time algorithm was given for it. Almost immediately, the presented algorithm was improved to a quadratic-time algorithm and generalized to an arbitrary number of main input sequences\cite{4}.
It is not clear whether the same improvement can be applied to our presented $O(nmr)$-time algorithm for the STR-EC-LCS problem to achieve a quadratic-time algorithm. We will investigate this problem further.
\end{document}
\begin{document}
\definecolor{navy}{RGB}{46,72,102} \definecolor{pink}{RGB}{219,48,122} \definecolor{grey}{RGB}{184,184,184} \definecolor{yellow}{RGB}{255,192,0} \definecolor{grey1}{RGB}{217,217,217} \definecolor{grey2}{RGB}{166,166,166} \definecolor{grey3}{RGB}{89,89,89} \definecolor{red}{RGB}{255,0,0}
\preprint{APS/123-QED}
\title{Efficacy of virtual purification-based error mitigation on quantum metrology} \author{Hyukgun Kwon} \affiliation{Department of Physics and Astronomy, Seoul National University, Seoul 08826, Republic of Korea} \author{Changhun Oh} \affiliation{Pritzker School of Molecular Engineering, University of Chicago, Chicago, Illinois 60637, USA} \author{Youngrong Lim} \affiliation{School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455, Korea} \author{Hyunseok Jeong} \email{h.jeong37@gmail.com} \affiliation{Department of Physics and Astronomy, Seoul National University, Seoul 08826, Republic of Korea} \author{Liang Jiang} \email{liangjiang@uchicago.edu} \affiliation{Pritzker School of Molecular Engineering, University of Chicago, Chicago, Illinois 60637, USA}
\begin{abstract} Noise is the main obstacle that prevents us from fully exploiting quantum advantages in various quantum information tasks. However, characterizing and calibrating the effect of noise is not always feasible in practice. In particular, for quantum parameter estimation, an estimator constructed without precise knowledge of the noise entails an inevitable bias. Recently, virtual purification-based error mitigation (VPEM) has been proposed for quantum metrology to reduce such a bias arising from unknown noise. While it was demonstrated to work for particular cases, whether VPEM always reduces the bias for general estimation schemes is not yet clear. Toward more general applications of VPEM to quantum metrology, we study the factors that determine whether VPEM can reduce the bias. We find that the closeness, with respect to an observable, between the dominant eigenvector of a noisy state and the ideal quantum probe (without noise) determines the amount of bias that VPEM can reduce. Next, we show that, since the bias depends on the reference point of the target parameter, one should carefully choose the reference point that gives the smallest bias. Otherwise, even if the dominant eigenvector and the ideal quantum probe are close, the bias of the mitigated case could be larger than that of the non-mitigated one.
Finally, we analyze the error mitigation for a phase estimation scheme under various noises. Based on our analysis, we predict whether VPEM can effectively reduce a bias and numerically verify our results. \end{abstract}
\maketitle
\section{\label{sec:level1}Introduction}
Quantum metrology is a study of exploiting quantum properties to surpass the classical limit of estimation precision \cite{giovannetti2001quantum,baumgratz2016quantum,giovannetti2006quantum,Giovannetti2011,braun2018quantum,Pirandola2018}. For various cases, quantum resources such as entanglement or squeezing enable us to achieve a better estimation precision that cannot be attained by classical means.
Especially for a phase estimation scheme in an interferometer system, which is one of the most important tasks in quantum metrology, the quantum advantage afforded by various nonclassical states, such as the N00N state \cite{PhysRevA.54.R4649,lee2002quantum,Demkowicz-Dobrzanski2015} and the squeezed state \cite{Demkowicz-Dobrzanski2015,caves1981quantum,PhysRevLett.100.073601,abadie2011gravitational,aasi2013enhanced}, has been studied and experimentally demonstrated \cite{nagata2007beating,mitchell2004super,yonezawa2012quantum,berni2015ab,yu2020quantum}.
To consider more practical situations, it is vital to study the effect of noise on estimation performance. Many studies investigated how the performance deteriorates due to noise, mostly focusing on statistical error \cite{escher2011general,PhysRevLett.106.153603}. In practice, however, it is not always feasible to obtain complete information about noise. Such a lack of knowledge of the noise in the system results in a bias of the estimator, which can be more critical than the statistical error because we cannot reduce the bias simply by increasing the number of samples, unlike the statistical error.
In addition, while quantum error correction can recover the noisy state to its original state~\cite{PhysRevLett.112.080801,PhysRevLett.112.150801,PhysRevLett.112.150802,PhysRevX.7.041009,zhou2018achieving,PhysRevLett.122.040502,zhuang2020distributed,PRXQuantum.2.010343}, it requires many resources which may not be practically accessible in the current era.
To address the above issues, Ref.~\cite{yamamoto2021error} has recently proposed applying virtual-purification-based error mitigation (VPEM) \cite{PhysRevX.11.031057,PhysRevX.11.041036} to quantum metrology to suppress the bias occurring from unknown noise and
shown that it considerably reduces the bias of an estimator of the phase estimation with a GHZ state. The principle behind VPEM is that it purifies the output state, which has undergone an unknown noise, to its dominant eigenstate, so that it effectively enables one to exploit the dominant eigenstate instead of the noisy state \cite{PhysRevX.11.031057,PhysRevX.11.041036}. While Ref.~\cite{yamamoto2021error} presented a framework of VPEM for quantum metrology and provided an example in which VPEM effectively reduces a bias,
the general applicability of VPEM to quantum metrology has not been fully understood.
In this work, we study when VPEM can effectively reduce the bias from unknown noise. In particular, we identify two crucial factors that determine whether VPEM can reduce the bias. The first one is how close the dominant eigenvector of a noisy state is to the (noiseless) ideal state. Because VPEM only allows us to exploit the dominant eigenvector, not necessarily the ideal state, the dominant eigenvector has to be sufficiently close to the ideal state. We find that the closeness between the expectation values of an observable $\hat{A}$ (used for estimation) over the ideal quantum probe and over the dominant eigenvector of a noisy state determines how much bias can be reduced by VPEM. More specifically, this closeness sets the leading order in the noise strength of the mitigated bias. In addition, we show that applying VPEM does not necessarily reduce the bias compared to not applying it. The second factor is the reference point of the parameter that we want to estimate. Assuming that the unknown parameter to be estimated is small (often called local parameter estimation \cite{paris2009quantum, Morelli_2021}), the reference point is the value around which the unknown parameter varies.
In local parameter estimation, we show that one has to carefully choose the reference point that gives the smallest bias before applying VPEM to quantum parameter estimation. Otherwise, even if the expectation values of $\hat{A}$ over the ideal quantum probe and the dominant eigenvector are the same, the bias of the mitigated case could be larger than that of the non-mitigated case. We emphasize that the strategy of choosing a reference point is a feature unique to quantum metrology that was not considered in the previous study \cite{yamamoto2021error}.
To elaborate on the effect of the two aforementioned factors on VPEM, we apply our analysis to phase estimation with the parity measurement in the interferometer system and verify the analysis numerically. We consider a N00N state and a product of a coherent and a squeezed state as quantum probes for the phase estimation in the presence of phase diffusion, photon loss, and additive Gaussian noise. For the N00N state with phase diffusion or photon loss, and for the product of a coherent and a squeezed state in the presence of a special type of additive Gaussian noise, we find that the bias can be reduced by VPEM because the ideal quantum probe and the dominant eigenvector of the noisy state are the same, similar to the case of the previous study \cite{yamamoto2021error}. In addition, we find that the bias of the mitigated case could be larger than that of the non-mitigated case if one does not select a proper reference point. In contrast, for the product of a coherent and a squeezed state in the presence of photon loss, we find that the scheme cannot benefit from VPEM, i.e., the magnitudes of the bias in the non-mitigated and mitigated cases are similar even if we adopt the optimal reference point, because the expectation values of the parity over the ideal quantum probe and the dominant eigenvector are not close enough. This indicates that VPEM does not always promise to reduce the bias error.
This paper is organized as follows: In Sec.~\ref{S2}, we explain how a bias occurs when one disregards unknown noise, and introduce VPEM. In Sec.~\ref{S3}, we show the reducible amount of bias through VPEM. In particular, in Sec.~\ref{S3A}, we inspect the relation between the dominant eigenvector and the leading order of the bias with respect to noise strength. In Sec.~\ref{S3B}, we show how the bias depends on the reference point, highlighting the importance of the reference point when one applies VPEM to parameter estimation. In Sec.~\ref{S4}, we apply our analysis to the phase estimation in the interferometer system.
\section{Estimation error with bias} \label{S2} Before introducing our main results, let us explain a basic quantum estimation scheme and how a bias occurs in the presence of an unknown noise.
\subsection{Ideal Case} Let us consider a unitary process that encodes an unknown parameter $\phi$.
To estimate the parameter $\phi$, a prepared quantum probe $\hat{\rho}^{(0)}_{\text{id}}=|\psi^{(0)}_{\text{id}}\rangle\langle \psi^{(0)}_{\text{id}}|$, assumed to be a pure state, encodes the parameter by a unitary operation $\hat{\mathcal{U}}(\phi+\phi_{0})$, which results in the ideal output state ${\hat{\rho}_{\text{id}}(\phi+\phi_{0})=\dyad{\psi_{\text{id}}(\phi+\phi_{0})}}$. Here, $\phi_{0}$ is an additional parameter that one can select freely. Since we consider local parameter estimation, $\phi_{0}$ is the reference point around which the small parameter $\phi$ varies. One then measures the state in the eigenbasis of an observable $\hat{A}$ a total of $N_{s}$ times, which yields the outcomes $\{A_{\text{id},k}\}^{N_{s}}_{k=1}$, where $A_{\text{id},k}$ is the $k$th measurement outcome. Meanwhile, the expectation value of $\hat{A}$ over $\hat{\rho}_{\text{id}}$ is written as \begin{align}
\Tr\left[\hat{A}\hat{\rho}_{\text{id}}(\phi+\phi_0) \right]
&=\bra{\psi_{\text{id}}(\phi+\phi_0)}\hat{A}\ket{\psi_{\text{id}}(\phi+\phi_0)} \\
&=x_{\text{id}} + y_{\text{id}}\phi+O(\phi^{2}), \end{align} where the expectation value is linearized for small $\phi$ with the coefficients $x_\text{id}$ and $y_\text{id}$ \begin{align}
x_\text{id}(\phi_{0}) \equiv \text{Tr}[\hat{A}\hat{\rho}_\text{id}(\phi_{0})],
~y_\text{id}(\phi_{0}) \equiv \pdv{\text{Tr}[\hat{A}\hat{\rho}_\text{id}(\phi_0+\phi)]}{\phi} \bigg \vert_{\phi=0}. \label{xydef} \end{align}
Thus, the following estimator is unbiased for small $\phi$, \begin{align}
\phi^{\text{est}}_{\text{id}}=\frac{1}{y_{\text{id}}}\left(\bar{A}_{\text{id}}-x_{\text{id}}\right), \label{estimatorideal} \end{align} where $\bar{A}_{\text{id}}$ is the average of measurement outcomes $\sum_{k=1}^{N_{s}}{A_{\text{id},k}}/{N_{s}}$ (See Fig.~\ref{fig:schematic} for the schematic of the estimation.).
\begin{figure*}\label{fig:schematic}
\end{figure*}
The estimation error of the estimator is given by \begin{align}\label{esteideal}
\delta^{2}\phi_{\text{id}}
=\frac{1}{y_{\text{id}}^{2}}\frac{\V[\hat{A}]_{\hat{\rho}_{\text{id}}}}{N_{s}}, \end{align} which depends only on the statistical error that originates from the fluctuation of the average of the measurement outcomes due to the finite sample size. Here, $\V[\hat{A}]_{\hat{\rho}} \equiv \text{Tr}[\hat{A}^{2}\hat{\rho}]-\text{Tr}[\hat{A}\hat{\rho}]^{2}$ is the variance of $\hat{A}$ over the quantum state. Therefore, the estimation error can be reduced by increasing the number of samples $N_{s}$. It is worth emphasizing that the convergence rate is closely related to Fisher information \cite{braunstein1994statistical, paris2009quantum}, which is the main focus of many studies about quantum metrology \cite{giovannetti2006quantum,Giovannetti2011}. Our interest lies in the bias rather than the statistical error, which will be introduced in the following section.
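As a quick sanity check of the linearized estimator and its statistical error, the following toy Monte Carlo sketch can be run; the numbers and the Gaussian outcome model are ours, chosen purely for illustration.

```python
import numpy as np

# Toy Monte Carlo for the ideal estimator; all numbers are illustrative.
# Measurement outcomes of A are modeled as Gaussian with mean x_id + y_id*phi.
rng = np.random.default_rng(0)
x_id, y_id, phi, var_A, N_s = 0.0, 2.0, 0.01, 1.0, 10_000

A_bar = rng.normal(x_id + y_id * phi, np.sqrt(var_A), size=N_s).mean()
phi_est = (A_bar - x_id) / y_id          # unbiased estimator of the ideal case

# Statistical error predicted by delta^2 phi_id = Var[A] / (y_id^2 * N_s)
predicted_mse = var_A / (y_id**2 * N_s)  # shrinks as 1/N_s
```

Increasing `N_s` shrinks the spread of `phi_est` around the true value, as the $1/N_s$ scaling dictates.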
\subsection{Error case} In a more practical scenario where noise occurs during estimation, the ideal state $\hat{\rho}_{\text{id}}$ is replaced by an error state $\hat{\rho}_{\text{e}}$, which depends on the noise. As in the ideal case, one prepares $N_{s}$ copies of $\hat{\rho}_{\text{e}}$ and obtains measurement outcomes $\{A_{\text{e},k}\}^{N_{s}}_{k=1}$, where $A_{\text{e},k}$ is the $k$th measurement outcome from $\hat{\rho}_{\text{e}}$. When one does not know that the noise occurs and exploits the same theoretical value $\text{Tr}[\hat{A}\hat{\rho}_{\text{id}}(\phi+\phi_{0})]$ as in the ideal case, the estimator is written as \begin{align}
\phi^{\text{est}}_{\text{e}}=\frac{1}{y_{\text{id}}}\left(\bar{A}_{\text{e}}-x_{\text{id}}\right),\label{estimatore} \end{align} where $\bar{A}_{\text{e}}$ is the average of measurement outcomes $\sum_{k=1}^{N_{s}}{A_{\text{e},k}}/{N_{s}}$.
Due to this choice, the estimator $\phi^{\text{est}}_{\text{e}}$ generally has a bias (See Fig.~\ref{fig:schematic}(d) for the schematic of how bias occurs.), \begin{align}
\left<\phi^{\text{est}}_{\text{e}}\right>-\phi
&= \frac{x_{\text{e}}-x_{\text{id}}+(y_{\text{e}}-y_{\text{id}})\phi }{y_{\text{id}}}+O(\phi^{2}) \label{estiavge}\\
&\equiv B_{\text{e}}(\phi,\phi_{0},\Delta), \label{biase} \end{align} where $x_\text{e}$ and $y_\text{e}$ are defined by a similar relation to the ideal case \begin{align}
\Tr\left[\hat{A}\hat{\rho}_{\text{e}}(\phi+\phi_{0}) \right]
&= x_{\text{e}} + y_{\text{e}}\phi +O(\phi^{2}), \end{align} and $\big<\cdots\big>$ denotes an average over all possible measurement outcomes of the quantum state. Suppose one can completely characterize the noise and $\hat{\rho}_\text{e}$, which is a typical assumption in many previous studies but may not always be feasible; one can then construct an unbiased estimator using $x_\text{e}$ and $y_\text{e}$ as in the ideal case. Thus, the bias arises from the lack of information about the noise. For our case, since $\phi^{\text{est}}_{\text{e}}$ is a biased estimator, the corresponding estimation error $\delta^{2}\phi_{\text{e}}$ contains an additional term due to the bias: \begin{align}
\delta^{2}\phi_{\text{e}} =\left(B_{\text{e}}\right)^{2}+\frac{1}{N_{s}}\frac{\V[\hat{A}]_{\hat{\rho}_{\text{e}}}}{y_{\text{id}}^{2}}. \label{esterre} \end{align} In contrast to the statistical error, one cannot reduce the bias by simply increasing the number of samples $N_{s}$. For the rest of the paper, we refer to $\left(B_{\text{e}}\right)^{2}$ as the bias error of the error case. Fig.~\ref{fig:schematic} presents a schematic of how the bias arises.
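The distinction between the two error contributions can be made concrete numerically: in the hypothetical sketch below, an unnoticed offset $x_{\text{e}}\neq x_{\text{id}}$ leaves the estimation error stuck at the squared bias no matter how large $N_{s}$ becomes (all numbers are made up).

```python
import numpy as np

# Illustrative only: a miscalibrated offset produces a bias that does not
# average away with more samples, unlike the statistical error.
rng = np.random.default_rng(1)
x_id, y_id, phi = 0.0, 2.0, 0.01
x_e, y_e = 0.05, 2.0                     # unknown noise shifts the signal offset
B_e = (x_e - x_id + (y_e - y_id) * phi) / y_id    # leading bias, here 0.025

for N_s in (10**2, 10**4, 10**6):
    A_bar = rng.normal(x_e + y_e * phi, 1.0, size=N_s).mean()
    phi_est = (A_bar - x_id) / y_id      # still built with the ideal x_id, y_id
    # (phi_est - phi)^2 approaches B_e^2 instead of vanishing as 1/N_s
```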
\subsection{Error Mitigated Quantum Metrology} \begin{figure}
\caption{Error mitigation circuits. After a Hadamard gate on the ancilla qubit, one applies a sequence of $n$ controlled swap gates, followed by a controlled-$\hat{A}$ ($\hat{I}$) and another Hadamard gate, and measures in $\hat{Z}$-basis.}
\label{fig:Circuits}
\end{figure}
In this section, we introduce the scheme of VPEM for quantum metrology, recently proposed in Ref.~\cite{yamamoto2021error}. The scheme employs $N_s$ copies of the so-called $A$-circuit and $I$-circuit, each of which exploits $n$ copies of an error state \cite{PhysRevX.11.031057} (see Fig.~\ref{fig:Circuits}). Although one might iterate the two circuits different numbers of times, for simplicity we set the number of iterations of each circuit equal to $N_{s}$. The details of the dynamics are described in Refs.~\cite{yamamoto2021error, PhysRevX.11.031057, PhysRevX.11.041036}.
By implementing the circuits with an error state with its diagonal form being \begin{align}
\hat{\rho}_{\text{e}}(\phi+\phi_{0}) =\lambda \dyad{\psi} + (1-\lambda) \sum_{k=1} p_{k} \dyad{\psi_k}, \label{eq:diag} \end{align}
one obtains measurement outcomes $\left\{z^{A}_{k}\right\}^{N_{s}}_{k=1}$ and $\left\{z^{I}_{k}\right\}^{N_{s}}_{k=1}$ from each circuit, where $z^{A}_{k}, z^{I}_{k}\in \{1,-1\}$ for all $k$. Here, $\sum_{k=1} p_{k}=1$, $\lambda$ is the largest eigenvalue, and $|\psi\rangle$ is the corresponding eigenvector, which may be different from the ideal state $\ket{\psi_{\text{id}}}$. Also, the relevant parameters in Eq.~\eqref{eq:diag} are functions of $(\phi+\phi_{0})$ and the noise strength $\Delta$. After an experiment, we obtain the average values as \begin{align}
&\bar{Z}^{A} \equiv \frac{1}{N_s}\sum^{N_{s}}_{k=1}z^{A}_{k},~~~
\bar{Z}^{I}\equiv \frac{1}{N_s}\sum^{N_{s}}_{k=1}z^{I}_{k}, \end{align} whose averages over all possible measurement outcomes are given by \cite{PhysRevX.11.041036} \begin{align}
&\left<\bar{Z}^{A}\right>=\lambda^{n}\langle \psi \vert \hat{A} \vert \psi \rangle + (1-\lambda)^{n}\sum_{k=1}p_{k}^{n} \langle \psi_{k} \vert \hat{A} \vert \psi_{k} \rangle,\\
&\left<\bar{Z}^{I}\right>=\lambda^{n} + (1-\lambda)^{n}\sum_{k=1}p_{k}^{n}. \label{zbari} \end{align} One can find that the average of $\bar{Z}^{A}/\bar{Z}^{I}$ can be approximated by \begin{align}
\left<\frac{\bar{Z}^{A}}{\bar{Z}^{I}}\right>
&\approx \frac{\Tr\left[\hat{A}\hat{\rho}_{\text{e}}^{n}\right]}{\Tr\left[\hat{\rho}_{\text{e}}^{n}\right]}
\equiv\Tr\left[\hat{A}\hat{\rho}_{\text{mit}}(\phi+\phi_{0})\right]\\
& = x_{\text{mit}} + y_{\text{mit}}\phi + O(\phi^{2}), \end{align}
where $\hat{\rho}^{n}_{\text{e}}$ is the $n$th power of $\hat{\rho}_{\text{e}}$ and \begin{align}
\hat{\rho}_{\text{mit}}
&\equiv \lambda'\dyad{\psi} + (1-\lambda')\sum_{k=1}p'_{k}\dyad{\psi_{k}}, \label{rhomit} \end{align} with \begin{align}
\lambda' \equiv \frac{1}{1 + \left(\frac{1-\lambda}{\lambda}\right)^{n}\sum_{k=1}p_{k}^{n} },~~~ p'_{k} \equiv \frac{p_{k}^{n}}{\sum_{l=1}p_{l}^{n}}. \end{align} Note that $\lambda'$ is the dominant eigenvalue of a mitigated state $\hat{\rho}_{\text{mit}}$. We now set an estimator as \begin{align}
\phi^{\text{est}}_{\text{mit}}=\frac{1}{y_{\text{id}}}\left(\bar{A}_{\text{mit}}-x_{\text{id}}\right), \label{estimatormit} \end{align} where $\bar{A}_{\text{mit}} \equiv {\bar{Z}^{A}}/{\bar{Z}^{I}}$ and we again used the ideal coefficients $x_\text{id}$ and $y_\text{id}$ in Eq.~\eqref{xydef} because we do not know $x_\text{mit}$ and $y_\text{mit}$ for an unknown noise. $\phi^{\text{est}}_{\text{mit}}$ is then still a biased estimator, whose bias is given by (See Fig.~\ref{fig:schematic}.) \begin{align}
\left<\phi^{\text{est}}_{\text{mit}}\right>-\phi
&= \frac{x_{\text{mit}}-x_{\text{id}}+(y_{\text{mit}}-y_{\text{id}})\phi}{y_{\text{id}}}+O(\phi^{2}) \label{estiavgmit}\\
&\equiv B_{\text{mit}}(\phi,\phi_{0},\Delta), \label{biasmit} \end{align}
and the corresponding estimation error is \cite{PhysRevX.11.041036} \begin{align}
&\delta^{2}\phi_{\text{mit}}
\approx \left(B_{\text{mit}}\right)^{2}\\
&+\frac{1}{N_{s}y^{2}_{\text{id}}}\bigg[\frac{1-\Tr\left[\hat{A}\hat{\rho}^{n}_{\text{e}}\right]}{\Tr\left[\hat{\rho}^{n}_{\text{e}}\right]^{2}} +\frac{\Tr\left[\hat{A}\hat{\rho}^{n}_{\text{e}}\right]^{2} (1-\Tr\left[\hat{\rho}^{n}_{\text{e}}\right]^{2})}{\Tr\left[\hat{\rho}^{n}_{\text{e}}\right]^{4}} \bigg]. \label{esterrmit} \end{align} In Eq.~\eqref{esterrmit}, the first term $(B_{\text{mit}})^2$ is the bias error and the second term is the statistical error.
To summarize, the effect of VPEM is to replace the error state $\hat{\rho}_e$ with the purified state $\hat{\rho}_\text{mit}= \hat{\rho}_e^n/\Tr\left[\hat{\rho}_e^n\right]$ for the order $n$. Since $\lambda$ is the largest eigenvalue, the $\lambda^{n}$ term becomes dominant. In contrast, the coefficient $(1-\lambda)^{n}p_{k}^{n}$, associated with the orthogonal subspace to the dominant eigenvector, is suppressed as $n$ increases. Therefore, the mitigated state $\hat{\rho}_{\text{mit}}$ in Eq.~\eqref{rhomit} converges to the dominant eigenvector $\dyad{\psi}$, which results in $\text{Tr}[\hat{A}\hat{\rho}_{\text{mit}}] \approx \langle \psi \vert \hat{A} \vert \psi \rangle$. Hence, if $y_{\text{id}}$ is not too small and the dominant eigenvector $\ket{\psi}$ and the ideal state $\ket{\psi_{\text{id}}}$ are close enough, such that $(\langle \psi \vert \hat{A} \vert \psi \rangle-\langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle)/y_{\text{id}} \approx 0$, the bias error of the mitigation case $(B_{\text{mit}})^{2}$ is smaller than that of the error case $(B_{\text{e}})^{2}$. Fig.~\ref{fig:schematic} illustrates how VPEM reduces the bias.
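The purification mechanism can be checked with a few lines of linear algebra. The two-level state and observable below are illustrative stand-ins of our own choosing, not the probes considered later in the paper.

```python
import numpy as np

# Virtual purification: rho_mit = rho_e^n / Tr[rho_e^n] converges to the
# dominant eigenvector as the order n grows. Toy 2x2 example.
lam = 0.8
psi = np.array([1.0, 0.0])                     # dominant eigenvector
psi_tail = np.array([0.0, 1.0])                # orthogonal tail state
rho_e = lam * np.outer(psi, psi) + (1 - lam) * np.outer(psi_tail, psi_tail)
A = np.array([[1.0, 0.0], [0.0, -1.0]])        # observable (Pauli Z here)

results = {}
for n in (1, 2, 8):
    rho_n = np.linalg.matrix_power(rho_e, n)
    rho_mit = rho_n / np.trace(rho_n)
    results[n] = np.trace(A @ rho_mit).real
# results[1] = 0.6, while results[8] is already close to <psi|A|psi> = 1
```

The tail contribution is suppressed geometrically, as the $(1-\lambda)^{n}p_{k}^{n}$ coefficients indicate.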
\section{Crucial factors for reducing bias via VPEM} \label{S3} In the previous section, we observed that VPEM effectively purifies an error state to its dominant eigenvector. In this section, we further analyze the efficiency of VPEM with respect to the distance between the dominant eigenvector and the ideal state, and we show that a small distance alone is not sufficient for VPEM to reduce the bias. We emphasize that an additional factor must necessarily be considered: the reference point.
\subsection{Expectation value of $\hat{A}$ over the dominant eigenvector \label{S3A}} We analyze how the difference of the expectation values of the observable $\hat{A}$ over the dominant eigenvector of $\hat{\rho}_e$ and the ideal state affects the efficacy of VPEM. To this end, we find an expression of a bias in the error case: \begin{align}
&B_{\text{e}}(\phi,\phi_{0},\Delta)=\frac{x_{\text{e}}(\phi_{0})-x_{\text{id}}(\phi_{0})+\phi\left[y_{\text{e}}(\phi_{0})-y_{\text{id}}(\phi_{0})\right]}{y_{\text{id}}(\phi_{0})}\\
&=\frac{1}{y_{\text{id}}(\phi_{0})}\sum_{k=1}^{n-1}\left[ f_{k}(\phi_{0}) + \phi \pdv{f_{k}(\phi)}{\phi}\bigg\vert_{\phi=\phi_{0}}\right]\Delta^{k} + O(\Delta^{n}),\label{Bedelta} \end{align} where we defined \begin{align}
&f_{1}(\phi)\equiv a_{1}(\phi) - \lambda_{1}(\phi)b_{0}(\phi),\\
&f_{k}(\phi)\equiv a_{k}(\phi) + \sum_{l=1}^{k-1}\lambda_{l}(\phi)a_{k-l}(\phi) - \sum_{l=1}^{k}\lambda_{l}(\phi)b_{k-l}(\phi), \end{align} for $k \geq 2$. Here, $\lambda_{k}$, $a_{k}$ and $b_{k}$ are coefficients from the following expansions: \begin{align}
&\lambda = 1 - \sum_{k=1}^{\infty}\lambda_{k}(\phi+\phi_{0})\Delta^{k}, \label{lambdaterm}\\
&\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle = \sum_{k=1}^{\infty}a_{k}(\phi+\phi_{0})\Delta^{k}, \label{dominant}\\
&\sum_{k=1}p_{k}\langle \psi_{k} \vert \hat{A} \vert \psi_{k}\rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle = \sum_{k=0}^{\infty}b_{k}(\phi+\phi_{0})\Delta^{k}.\label{bkterm} \end{align}
Eq.~\eqref{Bedelta} shows that the bias of error case consists of three distinct terms: (i) the difference between the expectation value of $\hat{A}$ over the dominant eigenvector and the ideal state $\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$, characterized by $a_{k}$, (ii) the difference between the dominant eigenvalue $\lambda$ and $1$, identified by $\lambda_{k}$, and (iii) the difference between the expectation value of $\hat{A}$ over the tail terms $\sum_{k=1}p_{k}\langle \psi_{k} \vert \hat{A} \vert \psi_{k}\rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$, expressed by $b_{k}$. After applying VPEM, the bias becomes \begin{align}
&B_{\text{mit}}(\phi,\phi_{0},\Delta) \nonumber \\
&=\frac{x_{\text{mit}}(\phi_{0})-x_{\text{id}}(\phi_{0})+\phi\left[y_{\text{mit}}(\phi_{0})-y_{\text{id}}(\phi_{0})\right]}{y_{\text{id}}(\phi_{0})} \\
&=\frac{1}{y_{\text{id}}(\phi_{0})}\sum_{k=1}^{n-1}\left[ a_{k}(\phi_{0})+\phi\pdv{a_{k}(\phi)}{\phi}\bigg\vert_{\phi=\phi_{0}}\right]\Delta^{k}+O(\Delta^{n}). \label{Bmitdelta} \end{align} One can easily show Eq.~\eqref{Bmitdelta} using the definition of mitigated state $\hat{\rho}_{\text{mit}}$ in Eq.~\eqref{rhomit} and the expansions in Eqs.~\eqref{dominant}-\eqref{bkterm}.
A crucial difference from the error case is that the bias $B_{\text{mit}}$ up to the order of $\Delta^{n-1}$ only consists of $a_{k}$ which comes from the difference between the expectation value of $\hat{A}$ over the dominant eigenvector and the ideal state $\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$. Therefore, $\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$ dictates the leading order in $\Delta$ of the bias while other contributions from $b_k$ and $\lambda_k$ are suppressed by VPEM. As a consequence, if $\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle=0$, $B_{\text{mit}}=O(\Delta^{n})$ while $B_{\text{e}}=O(\Delta)$, which clearly shows that in the small regime of $\Delta$, the bias can be reduced by applying VPEM. However, if the dominant eigenvector and the ideal state are not close enough, especially when $a_{1}(\phi_{0})\neq 0$, we emphasize that VPEM cannot even guarantee the constant factor reduction of the bias even for small noise. More specifically, for the case when $a_{1}(\phi_{0})\neq 0$, the leading orders of the biases are given by \begin{align}
\abs{B_{\text{e}}} &\approx \abs{\frac{ a_{1}(\phi_{0}) - \lambda_{1}(\phi_{0})b_{0}(\phi_{0})}{y_{\text{id}}(\phi_{0})}}\Delta,\\
\abs{B_{\text{mit}}} &\approx \abs{\frac{ a_{1}(\phi_{0})}{y_{\text{id}}(\phi_{0})}}\Delta. \end{align} Note that since $\lambda$ is an eigenvalue of the error state, it must be less than $1$ for any given $\Delta$, which means that $\lambda_{1}$ in Eq.~\eqref{lambdaterm} must be positive for small $\Delta$. In contrast, we emphasize that $a_{1}$ and $b_{0}$ in Eqs.~\eqref{dominant} and \eqref{bkterm} can be either positive or negative; as a result, there can be cases where $\abs{B_{\text{e}}}<\abs{B_{\text{mit}}}$. In particular, in Sec.~\ref{prodofcssimul}, we study a case where the magnitude of the bias increases even if one applies VPEM. We note that while an analysis similar to that of Sec.~\ref{S3A} has been conducted in Ref.~\cite{PhysRevX.11.041036}, the possibility of VPEM yielding a larger error than no mitigation has not, to the best of our knowledge, been discussed in previous studies.
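A back-of-the-envelope comparison of the two leading-order expressions makes this point concrete; the coefficients below are made-up numbers chosen so that $\lambda_{1}b_{0}$ partially cancels $a_{1}$.

```python
# Illustrative arithmetic only: leading orders of the biases for a1 != 0,
#   |B_e|   ~ |a1 - lambda1 * b0| / |y_id| * Delta
#   |B_mit| ~ |a1|               / |y_id| * Delta
a1, lambda1, b0, y_id, Delta = 0.4, 1.0, 0.6, 2.0, 0.01

B_e_leading = abs(a1 - lambda1 * b0) / abs(y_id) * Delta    # = 0.001
B_mit_leading = abs(a1) / abs(y_id) * Delta                 # = 0.002
# The tail term partially cancels a1, so here mitigation *increases* the bias.
```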
\iffalse It shows that the difference between the expectation value of dominant eigenvector and the ideal state is upper bounded by the trace distance between the dominant eigenvector and the ideal state. In addition, they find a condition when the first order in the noise strength of the trace distance disappears which implies that the difference of the expectation values is $O(\Delta^{2})$. Meanwhile, our investigation can directly find the leading order in $\Delta$ of bias which is reduced by VPEM.
Lastly, we note that recent studies show that the bias comes from the dominant eigenvector is exponentially smaller than the bias occurring from the tail terms $\lambda_{k}$, $p_{k}$, and $\ket{\psi_{k}}$ \cite{PhysRevX.11.031057,koczor2021dominant}. However we emphasize that their analysis concern a discrete variable (DV) system where the analysis in DV system cannot be directly translated into a CV system that we are considering. In addition, we suggest a practical example where error mitigation is not beneficial because of the difference between the dominant eigenvector and the ideal state. (See Sec.~\ref{prodofcssimul}.) \fi
\subsection{Reference point of the parameter} \label{S3B}
\begin{figure}\label{fig-referencepoint}
\end{figure}
We now introduce another important factor that plays a crucial role in VPEM for quantum metrology, which is the reference point. Recall that, according to Eqs.~\eqref{biase} and \eqref{biasmit}, the bias of both error case and mitigation case can be expressed as \begin{align} \label{biasterms}
B(\phi,\phi_{0},\Delta)
&\approx \frac{x(\phi_{0})-x_{\text{id}}(\phi_{0})+\left[y(\phi_{0})-y_{\text{id}}(\phi_{0})\right]\phi}{y_{\text{id}}(\phi_{0})}. \end{align} Here, $B(\phi,\phi_{0},\Delta)$, $x$, $y$, and $\hat{\rho}$ stand for $B_{\text{e/mit}}$, $x_{\text{e/mit}}$, $y_{\text{e/mit}}$, and $\hat{\rho}_{\text{e/mit}}$, respectively. Because both biases depend on the reference point $\phi_{0}$, one might be able to reduce the bias simply by controlling $\phi_{0}$. In other words, there might be regions or points of $\phi_{0}$ that give a smaller bias than others (see Fig.~\ref{fig-referencepoint}). Note that Eq.~\eqref{biasterms} consists of a zeroth-order term in $\phi$ with coefficient $(x-x_{\text{id}})/y_{\text{id}}$ and a first-order term with coefficient $(y-y_{\text{id}})/y_{\text{id}}$. Since we are considering small $\phi$, one can alleviate the bias by choosing the reference point $\phi_{0}$ such that \begin{align}
\phi^{\text{opt}}_{0}(\Delta_{0}) = \arg~\min_{\phi_{0}}\left[ \frac{x(\phi_{0})-x_{\text{id}}(\phi_{0})}{y_{\text{id}}(\phi_{0})}\right]^{2}. \label{referencepoint} \end{align} In particular, if there exists $\phi^{\text{opt}}_{0}$ such that $(x(\phi^{\text{opt}}_{0})-x_{\text{id}}(\phi^{\text{opt}}_{0}))/y_{\text{id}}(\phi^{\text{opt}}_{0})=0$, the zeroth order in $\phi$ of Eq.~\eqref{biasterms} vanishes and the bias contains only the first order in $\phi$; equivalently, the bias error contains only the $\phi^{2}$ term.
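As a toy illustration of this minimization, consider a hypothetical signal $x_{\text{id}}(\phi_{0})=\cos(N\phi_{0})$ whose noisy version is merely rescaled, $x_{\text{e}}=e^{-\Delta}\cos(N\phi_{0})$; this model is our own assumption, not one of the probes studied below. A grid scan then recovers the optimal reference point at $\cos(N\phi_{0})=0$, independently of $\Delta$.

```python
import numpy as np

# Toy reference-point scan; the cosine signal model is an assumption for
# illustration, with y_id proportional to -N*sin(N*phi_0).
N, Delta = 3, 0.2
phi0_grid = np.linspace(0.05, np.pi / N - 0.05, 2001)

x_id = np.cos(N * phi0_grid)
y_id = -N * np.sin(N * phi0_grid)        # nonzero on this grid
x_e = np.exp(-Delta) * x_id              # noise only rescales the signal

objective = ((x_e - x_id) / y_id) ** 2   # zeroth-order bias squared
phi0_opt = phi0_grid[np.argmin(objective)]
# minimum sits at cos(N*phi0) = 0, i.e. phi0_opt ~ pi/(2N), for any Delta
```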
Now we consider VPEM in quantum metrology based on the analysis of the reference point. For the proper comparison of the bias errors, we compare $(B_{\text{e}})^{2}$ and $(B_{\text{mit}})^{2}$ with an optimal reference point for each case. Otherwise, even if $\langle \psi \vert \hat{A} \vert \psi \rangle = \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$, $(B_{\text{mit}})^{2}$ can be larger than $(B_{\text{e}})^{2}$ because of an inadequate reference point instead of the failure of error mitigation. Therefore, one should consider a reference point when adopting error mitigation to quantum metrology, unlike expectation value estimation~\cite{PhysRevX.11.031057, PhysRevX.11.041036}. Here we note that the optimal reference point for error case $\phi^{\text{opt}}_{0,\text{e}}$ and mitigation case $\phi^{\text{opt}}_{0,\text{mit}}$ could be different.
In general, the optimal reference point depends on the noise strength. Although choosing an optimal reference point seems infeasible for an unknown noise strength~$\Delta$, we can still apply our method in some situations. First, when the optimal point does not depend on $\Delta$, one can choose the optimal point satisfying Eq.~\eqref{referencepoint} regardless of the noise strength. When the optimal point depends on $\Delta$, a possible choice of the reference point is to use some prior knowledge in such a way that \begin{align}
\phi^{\text{opt}}_{0}=\arg \min_{\phi_{0}} \int_{\Delta_{1}}^{\Delta_{2}} p(\Delta)\left[\frac{x(\phi_{0})-x_{\text{id}}(\phi_{0})}{y_{\text{id}}(\phi_{0})}\right]^{2}d\Delta \label{averagedreferencepoint} \end{align} where $p(\Delta)$ is the prior distribution of the noise strength $\Delta$. When the optimal point does not change rapidly with $\Delta$, the above choice can be sufficient.
Lastly, we present three different cases and compare $B_{\text{e}}$ and $B_{\text{mit}}$ when one chooses the optimal reference point for each case. For simplicity, we assume that there exist optimal reference points $\phi^{\text{opt}}_{0,\text{e}}$ and $\phi^{\text{opt}}_{0,\text{mit}}$ that make the zeroth order of the bias in $\phi$ vanish.
\textit{Case 1.~--} The dominant eigenvector is the same as the ideal state, which renders ${\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle=0}$. In this case, if one applies VPEM, the leading order in the noise strength of the bias of the mitigated case $B_{\text{mit}}$ is $\Delta^{n}$. In addition, by choosing $\phi^{\text{opt}}_{0,\text{mit}}$ as a reference point, $B_{\text{mit}}$ can be reduced to $O(\phi\Delta^{n})$, which shows the efficacy of VPEM. This case corresponds to the phase estimation in the interferometer system with a N00N state in the presence of phase diffusion or photon loss, studied in Sec.~\ref{phaseestN00N}, and with the product of a coherent and a squeezed state as a quantum probe in the presence of additive Gaussian noise, studied in Sec.~\ref{prodofcssimul}.
\textit{Case 2.~--} The dominant eigenvector is not close enough to the ideal state, so that ${\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle=O(\phi\Delta)}$. In this case, $B_{\text{e}}$ and $B_{\text{mit}}$ have the same order of bias, $O(\phi\Delta)$; i.e., VPEM cannot effectively reduce the bias. The phase estimation exploiting the product of a coherent and a squeezed state suffering from photon loss corresponds to \textit{Case 2}, as investigated in Sec.~\ref{prodofcssimul}.
\textit{Case 3.~--} $\langle \psi \vert \hat{A} \vert \psi \rangle - \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle=O(\phi\Delta^{n'})$ for $n'\geq 2$. In this case, the bias cannot be suppressed beyond $O(\phi\Delta^{n'})$, no matter how large the order $n$ of VPEM is. Hence, choosing $n>n'$ does not further reduce the leading $\Delta$ order of the bias.
A previous study of phase estimation exploiting a GHZ state in the presence of amplitude damping noise \cite{yamamoto2021error} can be classified as \textit{Case 1}, where the dominant eigenvector is the same as the ideal state; therefore, the bias of the mitigation case $B_{\text{mit}}$ is of order $\Delta^{n}$, and one can benefit from error mitigation. However, we emphasize that this is a special case of our analysis, and we will investigate other cases in the following sections.
\section{Numerical simulation of VPEM for phase estimation scheme}\label{S4} In this section, we apply VPEM to phase estimation in an interferometer, which is one of the most extensively studied quantum metrological tasks. We exploit a N00N state or the product of a coherent and squeezed state as a quantum probe, which are well-known quantum states that enable quantum-enhanced estimation \cite{lee2002quantum,PhysRevLett.100.073601}. Based on our analytic study in Sec.~\ref{S3}, we inspect whether the phase estimation scheme can benefit from VPEM in the presence of various types of noises.
\begin{figure}
\caption{Interferometer system. See the main text for more details.}
\label{fig:Interferometer}
\end{figure}
Let us recall a phase estimation in an interferometer system, which is illustrated in Fig.~\ref{fig:Interferometer}. First, one encodes an unknown phase $\phi$ by a phase shift operator $\hat{\Phi}(\phi)=e^{i\hat{n}_{2}\phi}$ onto a prepared two-mode quantum state $|\psi^{(0)}_{\text{id}}\rangle$, where $\hat{n}_{i}$ is the number operator in the $i$th mode.
After a two-mode balanced beam splitter $\hat{U}_{\text{BS}}$, which renders an ideal output state $\ket{\psi_{\text{id}}(\phi+\phi_{0})} \equiv \hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})|\psi^{(0)}_{\text{id}}\rangle$, one measures the state and estimates $\phi$ using the measurement outcomes. Here we choose the parity operator $\hat{A}=(-1)^{\hat{n}_{1}}$ as a measurement operator. This choice is crucial for applying the error mitigation scheme because the parity operator is both unitary and Hermitian.
In addition, the parity measurement is known to render Heisenberg-scaling precision with appropriate input states \cite{hofmann2009all, anisimov2010quantum, chiruvelli2011parity,seshadreesan2011parity,PhysRevA.87.043833}.
We will consider three kinds of noise: phase diffusion, photon loss, and additive Gaussian noise, which are the dominant noise sources in phase estimation.
Phase diffusion occurs when a random phase $x$ is added during the encoding of $\phi$. Assuming that $x$ follows a Gaussian distribution, the ideal state transforms as \begin{align}
\hat{\rho}_{\text{id}} \rightarrow \hat{\rho}_{\text{e}}=\frac{1}{\sqrt{2\pi\Delta} }\int_{-\infty}^{\infty}e^{-\frac{x^{2}}{2\Delta}}\hat{\rho}_{\text{id}}(\phi_{0}+\phi+x)\,dx, \label{phasediffusion} \end{align} where $\Delta$ is the noise strength that characterizes the degree of phase diffusion.
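For a sinusoidal signal, the Gaussian average in Eq.~\eqref{phasediffusion} damps the amplitude by $e^{-\Delta N^{2}/2}$, which is the factor appearing in the phase-diffusion results below. A quick numerical sanity check (ours) of this identity:

```python
import math

# Averaging sin(N(phi+x)) over Gaussian phase noise x ~ N(0, Delta)
# damps the signal by exp(-Delta*N^2/2); verify by direct quadrature.
def diffused_signal(N, phi, delta, xmax=6.0, steps=20001):
    dx = 2 * xmax / (steps - 1)
    total = 0.0
    for k in range(steps):
        x = -xmax + k * dx
        w = math.exp(-x * x / (2 * delta)) / math.sqrt(2 * math.pi * delta)
        total += w * math.sin(N * (phi + x)) * dx
    return total

N, phi, delta = 5, 0.1, 0.04
numeric = diffused_signal(N, phi, delta)
analytic = math.exp(-delta * N * N / 2) * math.sin(N * phi)
```

The two values agree to high precision, confirming the damping factor used in Sec.~\ref{phaseestN00N}.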
Photon loss transforms an annihilation operator as $\hat{a} \rightarrow \sqrt{\eta}\hat{a}+\sqrt{1-\eta}\hat{e}$, where $\hat{a}$ and $\hat{e}$ correspond to the annihilation operators of the system and the environment, respectively. The loss rate $0 \leq (1-\eta) \equiv \Delta \leq 1$ corresponds to the noise strength. Since a photon-loss channel acting on both modes with the same strength commutes with beam splitter and phase shift operations, our analysis covers photon loss occurring anywhere from state preparation to just before the measurement.
Finally, additive Gaussian noise corresponds to a random displacement $\vec{x}=(x,p)$ drawn from a Gaussian distribution, which transforms a state $\hat{\rho}$ as \begin{align}
\frac{1}{2\pi }\frac{1}{\sqrt{\Delta_{x}\Delta_{p}} }\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-\frac{x^{2}}{2\Delta_{x}}}e^{-\frac{p^{2}}{2\Delta_{p}}}\hat{D}(\vec{x})\hat{\rho}\hat{D}^{\dagger}(\vec{x}) dx dp, \label{additivenoise} \end{align} where $\hat{D}(\vec{x})=e^{(x+ip)\hat{a}^{\dagger}-(x-ip)\hat{a}}$ is a displacement operator. Here $\Delta_{x}$ and $\Delta_{p}$ are the noise strengths.
We consider an additive Gaussian noise occurring on the second mode of the initial quantum state $|\psi^{(0)}_{\text{id}}\rangle$.
\subsection{N00N state} \label{phaseestN00N} \subsubsection{Ideal case} It is known that a N00N state is one of the representative quantum probes which enables a quantum-enhanced phase estimation \cite{PhysRevA.54.R4649,lee2002quantum,Demkowicz-Dobrzanski2015}.
In the ideal case, one exploits $|\psi^{(0)}_{\text{id}}\rangle=(\ket{N}_{1}\ket{0}_{2}+\ket{0}_{1}\ket{N}_{2})/{\sqrt{2}}$ as a quantum probe, and the expectation value of parity over $\ket{\psi_{\text{id}}(\phi+\phi_{0})}=\hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})|\psi^{(0)}_{\text{id}}\rangle$ is \begin{align}
\Tr\left[\hat{A}\hat{\rho}_{\text{id}}\right]=\frac{(-i)^{N}e^{{iN(\phi+\phi_{0})}}+i^{N}e^{{-iN(\phi+\phi_{0})}}}{2}. \end{align} Therefore, $\text{Tr}[\hat{A}\hat{\rho}_{\text{id}}]$ is $\pm\sin{N(\phi+\phi_{0})}$ or $\pm\cos{N(\phi+\phi_{0})}$, depending on $N$. Since we assume that we can control the reference point, we can always find a $\phi'_{0}$ that satisfies $\sin{N(\phi+\phi_{0})}=\cos{N(\phi+\phi'_{0})}$. Thus, without loss of generality, we take the expectation value to be $\sin{N(\phi+\phi_{0})}$. The estimation error for the ideal case is given by \begin{align}
\delta^{2}\phi_{\text{id}} = \frac{1}{y_{\text{id}}^{2}}\frac{\V[\hat{A}]_{\hat{\rho}_{\text{id}}}}{N_{s}}=\frac{1}{N_{s}N^{2}}, \end{align} which shows the Heisenberg scaling.
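As a quick consistency check (ours), the error-propagation formula gives $\delta^{2}\phi_{\text{id}}=1/(N_{s}N^{2})$ at every phase, since $\text{Var}[\hat{A}]=\cos^{2}N\theta$ cancels against the squared slope:

```python
import math

# With <A> = sin(N*theta), parity gives <A^2> = 1, so Var[A] = cos^2(N*theta),
# and the slope is y_id = N*cos(N*theta); the cos^2 factors cancel.
def estimation_error(N, theta, N_s):
    mean_a = math.sin(N * theta)
    var_a = 1.0 - mean_a ** 2            # parity outcomes are +/-1
    slope = N * math.cos(N * theta)
    return var_a / (N_s * slope ** 2)

N, N_s = 5, 1000
vals = [estimation_error(N, th, N_s) for th in (0.0, 0.07, 0.21)]
```

Every value equals $1/(N_{s}N^{2})$, i.e.\ the Heisenberg-scaling error is phase independent in the ideal case.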
\begin{figure*}\label{fig:N00N_Phase}
\end{figure*}
\subsubsection{Phase diffusion} In the presence of phase diffusion, the ideal state transforms to an error state (which can be found using Eq. \eqref{phasediffusion}), \begin{align}
\hat{\rho}_{\text{e}} &=\left(\frac{1+ e^{-\frac{\Delta}{2}N^{2}}}{2}\right) \dyad{\psi_{\text{id}}} + \left(\frac{1- e^{-\frac{\Delta}{2}N^{2}}}{2}\right)\dyad{\psi_{\perp}} \\
&\equiv \lambda \dyad{\psi_{\text{id}}} + \lambda_{\perp}\dyad{\psi_{\perp}}, \end{align} where $\ket{\psi_{\text{id}}}=\ket{\psi_{\text{id}}(\phi+\phi_{0})}$ and $\ket{\psi_{\perp}}\equiv \hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})\left[(\ket{N}_{1}\ket{0}_{2}-\ket{0}_{1}\ket{N}_{2})/\sqrt{2}\right]$ which is orthogonal to $\ket{\psi_{\text{id}}}$. In this case, the dominant eigenvector is the same as the ideal state, which implies that $\langle \psi \vert \hat{A} \vert \psi \rangle = \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$. Therefore, according to the analysis in Sec.~\ref{S3}, one can expect that VPEM can effectively reduce the bias error. Let us demonstrate this by inspecting the expectation values of $\hat{A}$: \begin{align}
&\text{Tr}[\hat{A}\hat{\rho}_{\text{id}}]=\sin{N(\phi+\phi_{0})}\\ &=\sin{N\phi_{0}}+ (N\cos{N\phi_{0}})\phi+O(\phi^{2}),\\
&\text{Tr}[\hat{A}\hat{\rho}_{\text{e}}]=(\lambda-\lambda_{\perp}) \sin{N(\phi+\phi_{0})}\\
&=(\lambda-\lambda_{\perp}) \left[\sin{N\phi_{0}}+ (N\cos{N\phi_{0}})\phi\right]+O(\phi^{2}),\\
&\text{Tr}[\hat{A}\hat{\rho}_{\text{mit}}]=\left(\frac{\lambda^{n}-\lambda^{n}_{\perp}}{\lambda^{n}+\lambda^{n}_{\perp}}\right) \sin{N(\phi+\phi_{0})}\\ &=\left(\frac{\lambda^{n}-\lambda^{n}_{\perp}}{\lambda^{n}+\lambda^{n}_{\perp}}\right) \left[\sin{N\phi_{0}}+ (N\cos{N\phi_{0}})\phi\right]+O(\phi^{2}). \end{align} First, we find that $\text{Tr}[\hat{A}\hat{\rho}_{\text{id}}(0)]=\text{Tr}[\hat{A}\hat{\rho}_{\text{e}}(0)]=\text{Tr}[\hat{A}\hat{\rho}_{\text{mit}}(0)]$, and $\arg \max_{\phi_{0}}y_{\text{id}}(\phi_{0})=0$, which show that the optimal reference points are $\phi^{\text{opt}}_{0,\text{e}}=\phi^{\text{opt}}_{0,\text{mit}}=0$ satisfying Eq.~\eqref{referencepoint}, for both error and mitigation cases regardless of the noise strength $\Delta$. Second, once we choose the optimal reference point, the biases are given by \begin{align}
&B_{\text{e}} = \left(-\frac{N^{2}}{2}\Delta\right)\phi +O(\Delta^{2}), \label{biaseN00NPhase}\\
&B_{\text{mit}} = -2\left( \frac{ N^{2}}{4}\Delta\right)^{n}\phi+O(\Delta^{n+1}), \label{biasmitN00NPhase} \end{align} which shows that VPEM reduces the bias in the small range of $\Delta$ that satisfies $\frac{N^{2}}{4}\Delta \gg (\frac{N^{2}}{4}\Delta)^{n}$.
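The leading orders in Eqs.~\eqref{biaseN00NPhase} and \eqref{biasmitN00NPhase} can be checked numerically. The sketch below (ours) linearizes the estimator at the optimal reference point $\phi_{0}=0$, a simplifying assumption, so the exact biases are $(f-1)\phi$ with signal factors $f_{\text{e}}=\lambda-\lambda_{\perp}$ and $f_{\text{mit}}=(\lambda^{n}-\lambda_{\perp}^{n})/(\lambda^{n}+\lambda_{\perp}^{n})$:

```python
import math

# Exact linearized biases for the N00N state under phase diffusion,
# compared with the leading-order expressions in the text.
def exact_biases(N, delta, n, phi):
    lam = (1 + math.exp(-delta * N * N / 2)) / 2   # dominant eigenvalue
    lam_p = 1 - lam                                # orthogonal eigenvalue
    b_e = (lam - lam_p - 1) * phi
    b_mit = ((lam ** n - lam_p ** n) / (lam ** n + lam_p ** n) - 1) * phi
    return b_e, b_mit

N, n, phi, delta = 5, 2, 0.05, 1e-3
b_e, b_mit = exact_biases(N, delta, n, phi)
lead_e = -(N * N / 2) * delta * phi                # Eq. (biaseN00NPhase)
lead_mit = -2 * ((N * N / 4) * delta) ** n * phi   # Eq. (biasmitN00NPhase)
```

For small $\Delta$ the exact and leading-order values agree to within a few percent, and $|B_{\text{mit}}|\ll|B_{\text{e}}|$ as claimed.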
We support our results with numerical simulations. Fig.~\ref{fig:N00N_Phase} shows simulated bias errors with the N00N state in the presence of phase diffusion. Under the optimal reference point, VPEM effectively reduces the bias regardless of the noise strength and the value of $\phi$. We find that $B_{\text{e}}(\phi,\phi^{\text{opt}}_{0,\text{e}},\Delta)^{2} \geq B_{\text{mit}}(\phi,\phi^{\text{opt}}_{0,\text{mit}},\Delta)^{2}$, as expected from our analysis, since the dominant eigenvector is equal to the ideal state. Here, we emphasize that since $\phi^{\text{opt}}_{0,\text{e}}$ and $\phi^{\text{opt}}_{0,\text{mit}}$ satisfy Eq.~\eqref{referencepoint}, both biases vanish at $\phi=0$. In addition, Fig.~\ref{fig:N00N_Phase} shows the importance of the reference point: if one does not choose it carefully, the error case can have a smaller bias than the mitigation case even though $\langle \psi \vert \hat{A} \vert \psi \rangle = \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$. We find that $B_{\text{e}}(\phi,\phi^{\text{opt}}_{0,\text{e}},\Delta)^{2} \leq B_{\text{mit}}(\phi,\frac{\pi}{30},\Delta)^{2}$ (and likewise for $B_{\text{mit}}(\phi,\frac{\pi}{20},\Delta)^{2}$) when the dominant eigenvalue lies in the range $0.8 \leq \lambda \leq 0.9$, regardless of $\phi$. We emphasize that $\pi/30$ and $\pi/20$ are not optimal reference points.
\begin{figure*}
\caption{(a)-(c) Bias errors (with log scale) from simulation, exploiting N00N state ($N=5)$ as a quantum probe in the presence of photon loss with different noise strengths. Other features of the figure are the same as Fig. \ref{fig:N00N_Phase}.}
\label{fig:N00N_Loss}
\end{figure*} \subsubsection{Photon loss} In the presence of photon loss, a single-mode number state $\dyad{N}$ transforms to $\sum_{k=0}^{N}\binom{N}{k}\eta^{k}(1-\eta)^{N-k}\dyad{k}$, and the component $\dyad{N}{0}$ becomes $\sqrt{\eta}^{N}\dyad{N}{0}$, where $(1-\eta)$ is the loss rate, which we denote as the noise strength $\Delta$. Since we assume that photon loss occurs on each mode independently with the same noise strength, and the loss channel commutes with the beam splitter and phase shift operations, one can easily find that the error state is \begin{align}
\hat{\rho}_{\text{e}}
=\eta^{N} \dyad{\psi_{\text{id}}} + \left(1-\eta^{N}\right)\sum^{2N}_{k=1} p_{k} \dyad{\psi_k}. \end{align} Here, notice that the dominant eigenvector is equal to the ideal state. In addition, \begin{align}
&p_{k}=p_{N+k}=\binom{N}{k-1}\left(\frac{\eta^{k-1}(1-\eta)^{N-k+1}}{2(1-\eta^{N})}\right),\\
&|\psi_{k}\rangle=\hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})\ket{k-1,0}_{1,2}, \\
&|\psi_{k+N}\rangle=\hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})\ket{0,k-1}_{1,2}, \end{align} for $k\in \{1,2,\cdots,N\}$. Note that $\langle \psi_{k} \vert \hat{A} \vert \psi_{k} \rangle=0$ for all $k$. The corresponding expectation values are given by \begin{align}
&\text{Tr}[\hat{A}\hat{\rho}_{\text{id}}]=\sin{N(\phi+\phi_{0})}\\
&= \sin{N\phi_{0}}+ (N\cos{N\phi_{0}})\phi+O(\phi^{2}),\\
&\text{Tr}[\hat{A}\hat{\rho}_{\text{e}}]= (1-\Delta)^{N} \sin{N(\phi+\phi_{0})}\\
&= (1-\Delta)^{N} \left[\sin{N\phi_{0}}+ (N\cos{N\phi_{0}})\phi\right]+O(\phi^{2}),\\
&\text{Tr}[\hat{A}\hat{\rho}_{\text{mit}}] \nonumber \\
&=\left(\frac{(1-\Delta)^{nN}}{\sum_{k=0}^{N}\binom{N}{k}^{n}(1-\Delta)^{nk}\Delta^{n(N-k)}}\right)\sin{N(\phi+\phi_{0})}\\
&= \left(\frac{(1-\Delta)^{nN}}{\sum_{k=0}^{N}\binom{N}{k}^{n}(1-\Delta)^{nk}\Delta^{n(N-k)}}\right) \nonumber \\
&~~~~\times \left[\sin{N\phi_{0}}+ (N\cos{N\phi_{0}})\phi\right]+O(\phi^{2}). \end{align} Again, one can find that the optimal reference points satisfying Eq.~\eqref{referencepoint} are $\phi^{\text{opt}}_{0,\text{e}}=\phi^{\text{opt}}_{0,\text{mit}}=0$. \iffalse the expectation value of the mitigation case is \begin{align}
&\text{Tr}[\hat{A}\hat{\rho}_{\text{mit}}]= \left(\frac{(1-\Delta)^{2N}}{\Delta^{2N}~_{2}F_{1}\left(-N,-N;1;\frac{(1-\Delta)^{2}}{\Delta^{2}}\right)}\right) \\
&~~~~~~~~~~~~~~~~~\times\left[\sin{N\phi_{0}}+ (N\cos{N\phi_{0}})\phi\right]+O(\phi^{2}), \end{align} where $_{2}F_{1}\left(a,b;c;z\right)$ is a hypergeometric series. \fi Especially, for $n=2$, the corresponding biases are \begin{align}
&B_{\text{e}} = \left(-N\Delta\right)\phi+O(\Delta^{2}), \\
&B_{\text{mit}} = \left( -N^{2}\Delta^{2}\right)\phi+O(\Delta^{3}), \end{align} which clearly shows that the bias is reduced by error mitigation. We also demonstrate numerically that error mitigation reduces the bias in the presence of photon loss, as shown in Fig.~\ref{fig:N00N_Loss}. Similarly to the previous case, VPEM reduces the bias error significantly because the dominant eigenvector is the same as the ideal state.
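The photon-loss expressions above can also be verified numerically. The sketch below (ours; it uses $n=2$ and a linearized estimator at $\phi_{0}=0$, a simplifying assumption) checks that the eigenvalue weights sum to one and that the exact biases approach the quoted leading orders $-N\Delta\phi$ and $-N^{2}\Delta^{2}\phi$ as $\Delta\to 0$:

```python
import math

N, n, phi, delta = 5, 2, 0.05, 1e-3
eta = 1 - delta

# p_k from the text, for k = 1, ..., N (and p_{N+k} = p_k)
p = [math.comb(N, k - 1) * eta ** (k - 1) * delta ** (N - k + 1)
     / (2 * (1 - eta ** N)) for k in range(1, N + 1)]
weight_sum = eta ** N + (1 - eta ** N) * 2 * sum(p)

f_e = eta ** N                                   # signal factor, error case
denom = sum(math.comb(N, k) ** n * eta ** (n * k) * delta ** (n * (N - k))
            for k in range(N + 1))
f_mit = eta ** (n * N) / denom                   # signal factor, mitigation
b_e = (f_e - 1) * phi
b_mit = (f_mit - 1) * phi
```

Both leading orders are reproduced to within a few percent at this small $\Delta$.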
\subsection{Product of coherent and squeezed states}\label{prodofcssimul} \subsubsection{Ideal case} We prepare a coherent state $\ket{\alpha_{0}}_{1}$ and a squeezed vacuum state $\ket{r,0}_{2}$ and inject them into a beam splitter, which generates entanglement between the modes.
We exploit the state $\hat{U}_{\text{BS}}\ket{\alpha_{0}}_{1}\ket{r,0}_{2}$ as the quantum probe $|\psi^{(0)}_{\text{id}}\rangle$, which is the two-mode input state in Fig.~\ref{fig:Interferometer}; the ideal state is then ${\ket{\psi_{\text{id}}}=\hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})|\psi^{(0)}_{\text{id}}\rangle}$. For the product of a coherent state and a squeezed state, we study the effectiveness of error mitigation in the presence of photon loss and additive Gaussian noise. We consider $N_{c}=N_{r}=2.5$ with $n=2$. Unlike the previous case, the optimal reference point varies with the noise strength $\Delta$. To choose an appropriate reference point, we set $p(\Delta)$ in Eq.~\eqref{averagedreferencepoint} to be uniform, $p(\Delta)=1/(\Delta_{2}-\Delta_{1})$, in the range $[\Delta_{1},\Delta_{2}]$ such that $\lambda(\Delta_{1})=1$ and $\lambda(\Delta_{2})=0.8$.
\iffalse The expectation of $\hat{A}$ over $\hat{\rho}_{\text{id}}$ is \begin{align}
\Tr\left[\hat{A}\hat{\rho}_{\text{id}}\right]= \frac{\exp\left[-N_{c}\left(\frac{\sqrt{{N_{r}}^{2}+N_{r}}\sin^{2}{\phi}-\cos{\phi}}{N_{r}\sin^{2}{\phi}+1}+1\right)\right] }{\sqrt{N_{r}\sin^{2}{\phi}+1}} \end{align} where $N_{c}\equiv \abs{\alpha_{0}}^{2}$ and $N_{r}\equiv \sinh^{2}r$ are mean photon number of $\ket{\alpha_{0}}_{1}$ and $\ket{r,0}_{2}$, respectively.
\cor{The notable point is that the estimation error depends on $\phi$. Especially for $\phi=0$, the estimation error is \begin{align}
\delta^{2}\phi_{\text{id}} ~ \vert_{\phi=0}=\frac{1}{2N_{c}\sqrt{N_{r}(N_{r}+1)}+2N_{c}N_{r}+N_{c}+N_{r}}, \end{align} which shows the Heisenberg scaling \cite{seshadreesan2011parity}. \cor{(CO: do we need this?)}} \fi
\subsubsection{Photon loss} \begin{figure}\label{fig:contourloss}
\end{figure}
\begin{figure*}
\caption{(a)-(c) Bias errors (with log scale) from simulation using the product of coherent state ($N_{c}=2.5$) and squeezed vacuum state ($N_{r}=2.5$) in the presence of photon loss with different noise strengths. The red-colored regions are where the bias of the error case is smaller than the mitigation one and the blue-colored regions are the opposite. Other features of the figure are the same as Fig. \ref{fig:N00N_Phase}.}
\label{fig:sqz_Loss}
\end{figure*}
In the presence of photon loss, the coherent state $\dyad{\alpha_{0}}$ remains a coherent state with reduced amplitude, $\dyad{\bar{\alpha}_{0}}$, where $\bar{\alpha}_{0}=\sqrt{1-\Delta}\,\alpha_{0}$. The squeezed vacuum state $\dyad{r,0}$, where $r$ is the squeezing parameter, transforms to a squeezed thermal state \cite{serafini2017quantum} $\sum_{k=0}^{\infty} \frac{\bar{N}^{k}}{(\bar{N}+1)^{k+1}} \dyad{\bar{r},k}_{2} $ where $\bar{N}\equiv \frac{-1+\sqrt{1-4N_{r}\{(1-\Delta)^{2}-(1-\Delta)\}}}{2}$ and $\bar{r}\equiv \frac{1}{4}\ln\left[\frac{(1-\Delta)(2N_{r}+2\sqrt{N^{2}_{r}+N_{r}})+1}{(1-\Delta)(2N_{r}-2\sqrt{N^{2}_{r}+N_{r}})+1}\right]$. Here $\ket{\bar{r},k}$ is a squeezed number state with squeezing parameter $\bar{r}$. Again, because we assume that the photon loss occurs on both modes with the same loss rate and the loss channel commutes with the linear optical operations (phase shifts and beam splitters), the error state is \begin{align}
\hat{\rho}_{\text{e}}=\sum_{k=0}^{\infty} \frac{\bar{N}^{k}}{(\bar{N}+1)^{k+1}} \left(\hat{\mathcal{U}} \dyad{\bar{\alpha}_{0}}_{1} \otimes \dyad{\bar{r},k}_{2}\hat{\mathcal{U}}^{\dagger} \right), \end{align} where $\hat{\mathcal{U}}\equiv\hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})\hat{U}_{\text{BS}}$.
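As a consistency check (ours, not in the text), the quoted parameters $\bar{r}$ and $\bar{N}$ can be verified against the standard loss map on quadrature variances, $V \mapsto (1-\Delta)V + \Delta/2$ with vacuum variance $1/2$:

```python
import math

# Quadrature variances of a squeezed vacuum (sinh^2 r = N_r) after loss Delta.
def lossy_squeezed_variances(N_r, delta):
    eta = 1 - delta
    e2r = 1 + 2 * N_r + 2 * math.sqrt(N_r * (N_r + 1))   # e^{2r}
    v_x = eta * (1 / e2r) / 2 + delta / 2
    v_p = eta * e2r / 2 + delta / 2
    return v_x, v_p

# Variances of the squeezed thermal state with the paper's N_bar, r_bar.
def squeezed_thermal_variances(N_r, delta):
    eta = 1 - delta
    n_bar = (-1 + math.sqrt(1 - 4 * N_r * (eta ** 2 - eta))) / 2
    r_bar = 0.25 * math.log(
        (eta * (2 * N_r + 2 * math.sqrt(N_r ** 2 + N_r)) + 1)
        / (eta * (2 * N_r - 2 * math.sqrt(N_r ** 2 + N_r)) + 1))
    return ((2 * n_bar + 1) * math.exp(-2 * r_bar) / 2,
            (2 * n_bar + 1) * math.exp(2 * r_bar) / 2)

v_loss = lossy_squeezed_variances(2.5, 0.1)
v_thermal = squeezed_thermal_variances(2.5, 0.1)
```

The two variance pairs coincide, confirming that the squeezed thermal parameters reproduce the lossy squeezed vacuum exactly.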
\iffalse where \begin{align}
&\bar{r}\equiv \frac{1}{4}\ln\left[\frac{(1-\Delta)(2N_{r}+2\sqrt{N^{2}_{r}+N_{r}})+1}{(1-\Delta)(2N_{r}-2\sqrt{N^{2}_{r}+N_{r}})+1}\right],\\
&\bar{N}\equiv \frac{-1+\sqrt{1-4N_{r}\{(1-\Delta)^{2}-(1-\Delta)\}}}{2},\\
&\hat{\mathcal{U}}\equiv\hat{U}_{\text{BS}}\hat{\Phi}(\phi+\phi_{0})\hat{U}_{\text{BS}},\\
&\bar{\alpha}_{0}=\sqrt{(1-\Delta)\alpha_{0}}, \end{align} and $\ket{\bar{r},k}$ is a squeezed number state with the squeezing parameter $\bar{r}$. \fi
We emphasize that the dominant eigenvector $\ket{\psi}=\hat{\mathcal{U}}\ket{\bar{\alpha}_{0}}_{1}\ket{\bar{r},k=0}_{2}$ is different from the ideal state. According to our analysis in Sec. \ref{S3A}, we can expect that VPEM will not be beneficial since the dominant eigenvector and the ideal state are different.
We numerically show the inefficacy of VPEM. For the numerical simulations, we consider noise strengths in the range $[\Delta_{1}=0,\Delta_{2}=0.146]$ and assume that $\Delta$ follows a uniform distribution in this range. Here $\Delta_{2}$ satisfies $\lambda(\Delta_{2})=0.8$, where $\lambda$ is the dominant eigenvalue of the error state. The optimal reference point $\phi^{\text{opt}}_{0}$ satisfying Eq.~\eqref{referencepoint} depends on the given $\Delta$ for both the error and mitigation cases. The relation between $\Delta$ and $\phi^{\text{opt}}_{0}$ is shown in Fig.~\ref{fig:contourloss}. Therefore, we consider averaged optimal reference points that satisfy Eq.~\eqref{averagedreferencepoint}, namely $\phi^{\text{opt}}_{0,\text{e}}\approx0.2$ and $\phi^{\text{opt}}_{0,\text{mit}}\approx0.087$. Under the chosen reference points, we numerically simulate the bias errors of the $\phi$ estimation exploiting the product of the coherent and the squeezed state in the presence of photon loss. Fig.~\ref{fig:sqz_Loss} shows the ineffectiveness of VPEM-based quantum metrology, as implied by our analysis, because of the difference between the dominant eigenvector and the ideal state. For most values of $\phi$, $B_{\text{e}}(\phi,\phi^{\text{opt}}_{0,\text{e}},\Delta)^{2}$ and $B_{\text{mit}}(\phi,\phi^{\text{opt}}_{0,\text{mit}},\Delta)^{2}$ are similar in magnitude. Moreover, there are combinations of $\Delta$ and $\phi$ for which $B_{\text{mit}}(\phi,\phi^{\text{opt}}_{0,\text{mit}},\Delta)^{2}$ is larger than $B_{\text{e}}(\phi,\phi^{\text{opt}}_{0,\text{e}},\Delta)^{2}$.
\begin{figure}\label{fig:contourdis}
\end{figure}
\begin{figure*}
\caption{(a)-(c) The simulations of the bias errors (with log scale) correspond to the product of coherent state ($N_{c}=2.5$) and squeezed vacuum state ($N_{r}=2.5$) in the presence of additive Gaussian noise with different noise strengths. Other features of the figure are the same as Fig. \ref{fig:N00N_Phase}.}
\label{fig:sqz_Gaussian}
\end{figure*}
\subsubsection{Additive Gaussian noise} For pedagogical purposes, we consider additive Gaussian noise, which might not be directly relevant to experiments. Let us assume that the additive Gaussian noise occurs only on the second mode of the initial quantum probe $\ket{\alpha_{0}}_{1}\ket{r,0}_{2}$. Furthermore, we assume that $\Delta_{x}$ and $\Delta_{p}$ in Eq.~\eqref{additivenoise} are \begin{align}
&\Delta_{x}=\sqrt{\frac{\Delta}{2}}e^{-r},~~\Delta_{p}=\sqrt{\frac{\Delta}{2}}e^{r} \label{standarddisplace} \end{align} where $r$ is the squeezing parameter in $\ket{r,0}_{2}$ and $\Delta$ is the noise strength. We note that additive Gaussian noise is a Gaussian error, so a Gaussian state remains Gaussian after suffering from the noise. Similar to the photon-loss case, the additive Gaussian noise transforms a single-mode squeezed state into a squeezed thermal state \cite{serafini2017quantum} $\sum_{k=0}^{\infty} \frac{\Delta^{k}}{(\Delta+1)^{k+1}} \dyad{r,k}$. Since we consider the additive Gaussian noise only on the second mode of the quantum probe at the state-preparation stage (before the first beam splitter), the error state is \begin{align}
\hat{\rho}_{\text{e}}=\sum_{k=0}^{\infty} \frac{\Delta^{k}}{(\Delta+1)^{k+1}} \bigg[\hat{\mathcal{U}} \dyad{\alpha_{0}}_{1} \otimes \dyad{r,k}_{2}\hat{\mathcal{U}}^{\dagger} \bigg], \end{align} whose dominant eigenvector is the same as the ideal state, which means $\langle \psi \vert \hat{A} \vert \psi \rangle = \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$. In this case, $a_{1}(\phi)$ and $\pdv{a_{1}(\phi)}{\phi}$ are always $0$ regardless of $\phi$; therefore, one can expect that error mitigation can efficiently reduce the bias error.
We numerically show the efficacy of VPEM. For the numerical simulations, we consider noise strengths in the range $[\Delta_{1}=0,\Delta_{2}=0.25]$ and again assume that $p(\Delta)=1/(\Delta_{2}-\Delta_{1})$. Here $\Delta_{2}$ satisfies $\lambda(\Delta_{2})=0.8$. In the additive Gaussian noise case, the optimal reference point that satisfies Eq.~\eqref{referencepoint} again depends on the noise strength (see Fig.~\ref{fig:contourdis}), so we consider averaged optimal reference points that satisfy Eq.~\eqref{averagedreferencepoint}, namely $\phi^{\text{opt}}_{0,\text{e}}\approx 0.338$ and $\phi^{\text{opt}}_{0,\text{mit}}\approx 0.318$. Fig.~\ref{fig:sqz_Gaussian} shows the simulation results with the product of the coherent and squeezed state in the presence of additive Gaussian noise. It shows that even though one adopts the reference points that satisfy Eq.~\eqref{averagedreferencepoint}, VPEM can effectively reduce the bias, because the additive Gaussian noise leaves the dominant eigenvector equal to the ideal state. For most values of $\phi$, one clearly finds that $B_{\text{e}}(\phi,\phi^{\text{opt}}_{0,\text{e}},\Delta)^{2} \geq B_{\text{mit}}(\phi,\phi^{\text{opt}}_{0,\text{mit}},\Delta)^{2}$. In addition, Fig.~\ref{fig:sqz_Gaussian} again illustrates the importance of the reference point: we find that $B_{\text{e}}(\phi,\phi^{\text{opt}}_{0,\text{e}},\Delta)^{2} \leq B_{\text{mit}}(\phi,\frac{\pi}{30},\Delta)^{2}$ even though $\langle \psi \vert \hat{A} \vert \psi \rangle = \langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$.
\section{Discussion} We have investigated the crucial factors that dictate the efficacy of VPEM when one applies the method to quantum metrology. We find that the dominant eigenvector has to be close to the ideal state for VPEM to successfully reduce the bias. More specifically, $\langle \psi \vert \hat{A} \vert \psi \rangle-\langle \psi_{\text{id}} \vert \hat{A} \vert \psi_{\text{id}} \rangle$ determines the reducible amount of bias. In addition, we argue that one should carefully choose an optimal reference point that minimizes the bias error. Otherwise, the bias error of the mitigation case could be larger than that of the error case, even if the dominant eigenvector is equal to the ideal state, because of the inadequate reference point. We emphasize that many practical estimation scenarios giving an additional parameter for the reference point have already been implemented experimentally, such as phase estimation schemes \cite{berni2015ab}. Based on the analysis, we analytically and numerically inspect the phase estimation scheme in the interferometer system. We consider the N00N state and the product of the coherent and the squeezed state as quantum probes in the presence of phase diffusion, photon loss, and additive Gaussian noise. We show that all the phase estimation schemes mentioned above can be explained by our analysis.
Finally, we point out the differences from the recent study~\cite{yamamoto2021error}.
First, while Ref.~\cite{yamamoto2021error} analyzes the bias up to the zeroth order of $\phi$, we consider terms up to the first order of $\phi$; this matters because, under our optimal reference point strategy, the zeroth order in $\phi$ can be made to vanish for both the error case and the mitigation case. Second, we study the cases where VPEM cannot effectively reduce the bias, which, to the best of our knowledge, has not been studied before. Lastly, we present the importance of the reference point. In the previous study, because the optimal reference point is fixed for the error case and the mitigation case, the role of the reference point was not considered. It would be interesting future work to find a strategy for choosing an optimal reference point other than Eq.~\eqref{averagedreferencepoint}. For example, one can develop an adaptive strategy that exploits measurement outcomes to estimate the noise strength and updates the reference point accordingly. This method may give a reference point tailored to the given noise, resulting in a smaller bias than the reference point from Eq.~\eqref{averagedreferencepoint}.
\section*{Acknowledgments} H.K. and H.J. were supported by the National Research Foundation of Korea (NRF) grants funded by the Korean government (Grant~Nos.~NRF-2023R1A2C1006115 and NRF-2022M3E4A1076099) and by the Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (IITP-2021-0-01059 and IITP-2023-2020-0-01606). H.K. was supported by the education and training program of the Quantum Information Research Support Center, funded through the National research foundation of Korea (NRF) by the Ministry of science and ICT (MSIT) of the Korean government (No.2021M3H3A103657313). L.J. and C.O. acknowledge support from the ARO (W911NF-23-1-0077), ARO MURI (W911NF-21-1-0325), AFOSR MURI (FA9550-19-1-0399, FA9550-21-1-0209), AFRL (FA8649-21-P-0781), NSF (OMA-1936118, ERC-1941583, OMA-2137642), NTT Research, and the Packard Foundation (2020-71479). This work is partially supported by the U.S. Department of Energy Office of Science National Quantum Information Science Research Centers. Y. L. acknowledges National Research Foundation of Korea a grant funded by the Ministry of Science and ICT (NRF-2022M3H3A1098237) and KIAS Individual Grant (CG073301) at Korea Institute for Advanced Study.
\end{document}
\begin{document}
\title{Normal lattice of certain metabelian \(p\)-groups \(G\) with \(G/G^\prime\simeq (p,p)\)}
\author{Daniel C. Mayer} \address{Naglergasse 53\\8010 Graz\\Austria} \email{algebraic.number.theory@algebra.at} \urladdr{http://www.algebra.at}
\thanks{Research supported by the Austrian Science Fund (FWF): P 26008-N25}
\subjclass[2000]{Primary 20D15, 20F12, 20F14, secondary 11R37, 11R29, 11R11} \keywords{lattice of normal subgroups, \(p\)-groups of derived length \(2\), power-commutator presentations, central series, two-step centralizers, second Hilbert \(p\)-class field, principalization of \(p\)-classes, quadratic fields}
\date{December 28, 2013}
\begin{abstract} Let \(p\) be an odd prime. The lattice of all normal subgroups and the terms of the lower and upper central series are determined for all metabelian \(p\)-groups with generator rank \(d=2\) having abelianization of type \((p,p)\) and minimal defect of commutativity \(k=0\). It is shown that many of these groups are realized as Galois groups of second Hilbert \(p\)-class fields of an extensive set of quadratic fields which are characterized by principalization types of \(p\)-classes. \end{abstract}
\maketitle
\section{Introduction} \label{s:Intro}
Let \(p\ge 3\) be an odd prime number, and \(G=\langle x,y\rangle\) be a two-generated metabelian \(p\)-group having an elementary bicyclic derived quotient \(G/G^\prime\) of type \((p,p)\).
Assume further that \(G\) is of order \(\lvert G\rvert=p^n\) with \(n\ge 2\), and of nilpotency class \(\mathrm{cl}(G)=m-1\) with \(m\ge 2\). Then \(G\) is of coclass \(\mathrm{cc}(G)=n-m+1=e-1\) with \(e\ge 2\). Denote by \[G=\gamma_1(G)>\gamma_2(G)=G^\prime>\ldots>\gamma_{m-1}(G)>\gamma_m(G)=1\] the (descending) lower central series of \(G\), where \(\gamma_j(G)=\lbrack\gamma_{j-1}(G),G\rbrack\) for \(j\ge 2\), and by \[1=\zeta_0(G)<\zeta_1(G)<\ldots<G^\prime=\zeta_{m-2}(G)<\zeta_{m-1}(G)=G\] the (ascending) upper central series of \(G\), where \(\zeta_j(G)/\zeta_{j-1}(G)=\mathrm{Centre}(G/\zeta_{j-1}(G))\) for \(j\ge 1\).
Let \(s_2=t_2=\lbrack y,x\rbrack\) denote the main commutator of \(G\), such that \(\gamma_2(G)=\langle s_2,\gamma_3(G)\rangle\). By means of the two series \(s_j=\lbrack s_{j-1},x\rbrack\) for \(j\ge 3\) and \(t_\ell=\lbrack t_{\ell-1},y\rbrack\) for \(\ell\ge 3\) of higher commutators and the subgroups \(\Sigma_j=\langle s_j,\ldots,s_{m-1}\rangle\) with \(j\ge 3\) and \(T_\ell=\langle t_\ell,\ldots,t_{e+1}\rangle\) with \(\ell\ge 3\), we obtain the following fundamental distinction of cases.
\begin{enumerate} \item The \textit{uniserial} case of a CF group (\textit{cyclic factors}) of coclass \(\mathrm{cc}(G)=1\) (maximal class), where \(t_3\in\Sigma_3\), \(\gamma_3(G)=\langle s_3,\gamma_4(G)\rangle\), \(e=2\), and \(m=n\). There are two subcases:\\ (1.1) \(t_3=1\in\gamma_m(G)\), where \(G\) contains an abelian maximal subgroup and \(k=0\),\\ (1.2) \(1\ne t_3\in\gamma_{m-k}(G)\), \(1\le k\le m-4\), where all maximal subgroups are non-abelian. \item The \textit{biserial} case of a non-CF or BCF group (\textit{bicyclic or cyclic factors}) of coclass \(\mathrm{cc}(G)\ge 2\), where \(t_3\not\in\Sigma_3\), \(\gamma_3(G)=\langle s_3,t_3,\gamma_4(G)\rangle\), \(e\ge 3\), and \(m<n\). Again there exist two subcases, characterized by the \textit{defect of commutativity} \(k\) of \(G\):\\ (2.1) \(t_{e+1}=1\in\gamma_m(G)\), where \(\Sigma_3\cap T_3=1\) and \(k=0\),\\ (2.2) \(1\ne t_{e+1}\in\gamma_{m-k}(G)\), for some \(k\ge 1\), where \(\Sigma_3\cap T_3\le\gamma_{m-k}(G)\). \end{enumerate}
In this article, we are interested in two-generator metabelian \(p\)-groups \(G=\langle x,y\rangle\) of coclass \(\mathrm{cc}(G)\ge 2\) having the convenient property \(\Sigma_3\cap T_3=1\), resp. \(k=0\), where the product \(\Sigma_3\times T_3\) is direct and coincides with the major part of the \textit{normal lattice} of \(G\), as shown in Figure \ref{fig:NormalLatticeFigure}.
\begin{definition} \label{dfn:Diamond} A pair \((U,V)\) of normal subgroups of a \(p\)-group \(G\), such that \(V<U\le G\) and \((U:V)=p^2\), is called a \textit{diamond} if the quotient \(U/V\) is abelian of type \((p,p)\). \end{definition}
If \((U,V)\) is a diamond and \(U=\langle u_1,u_2,V\rangle\), then the \(p+1\) intermediate subgroups of \(G\) between \(U\) and \(V\) are given by \(\langle u_2,V\rangle\) and \(\langle u_1u_2^{i-2},V\rangle\) with \(2\le i\le p+1\).
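Since \(U/V\) is elementary abelian of rank \(2\), the count of \(p+1\) intermediate subgroups is simply the number of lines through the origin in \(\mathbb{F}_p^2\). The following minimal sketch (in Python, with ad hoc names of our choosing) verifies this count for a sample prime:

```python
# Model U/V as pairs (a, b) mod p; each intermediate group of a diamond
# corresponds to a subgroup of order p, i.e. a line through the origin.
# The generators (0, 1) and (1, i) with 0 <= i <= p - 1 match the list
# <u2, V>, <u1 * u2^i, V> given in the text.
p = 5
lines = set()
for a in range(p):
    for b in range(p):
        if (a, b) != (0, 0):
            # the subgroup generated by (a, b): all scalar multiples mod p
            lines.add(frozenset(((a * t) % p, (b * t) % p) for t in range(p)))
assert len(lines) == p + 1  # exactly p + 1 intermediate subgroups
```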
\section{The normal lattice} \label{s:NormalLattice}
In this section, let \(G=\langle x,y\rangle\) be a metabelian \(p\)-group with two generators \(x,y\), having abelianization \(G/G^\prime\) of type \((p,p)\) and satisfying the independence condition \(\Sigma_3\cap T_3=1\), that is, \(G\) is a metabelian \(p\)-group with defect of commutativity \(k=0\) \cite[\S\ 3.1.1, p. 412, and \S\ 3.3.2, p. 429]{Ma}. We assume that \(G\) is of coclass \(\mathrm{cc}(G)\ge 2\), since the normal lattice of \(p\)-groups of maximal class has been determined by Blackburn \cite{Bl}.
\begin{theorem} \label{thm:NormalLattice} The complete normal lattice of \(G\)
contains the heading diamond \((G,G^\prime)\) and the rectangle \(\bigl((P_{j,\ell},P_{j+1,\ell+1})\bigr)_{3\le j\le m-1,\ 3\le\ell\le e}\) of trailing diamonds, where \(P_{j,\ell}=\Sigma_j\times T_\ell\) for \(3\le j\le m\) and \(3\le\ell\le e+1\). The structure of the normal lattice is visualized in Figure \ref{fig:NormalLatticeFigure}. \end{theorem}
Note that \(P_{j,\ell}=\langle s_j,\ldots,s_{m-1}\rangle\times\langle t_\ell,\ldots,t_e\rangle=\langle s_j,t_\ell,P_{j+1,\ell+1}\rangle\) for \(3\le j\le m-1\), \(3\le\ell\le e\).
\begin{conjecture} \label{cnj:NormalLattice} The complete normal lattice of \(G\) consists exactly of the normal subgroups given in Theorem \ref{thm:NormalLattice}. \end{conjecture}
\begin{corollary} \label{cor:NormalLattice} The total number of normal subgroups of \(G\) is given by \[me-(m+2e)+6+\lbrack me-(2m+3e)+7\rbrack\cdot (p-1),\] in particular, for \(p=3\) it is given by \[3me-(5m+8e)+20.\] \end{corollary}
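The count in Corollary \ref{cor:NormalLattice} can be checked by elementary arithmetic. The sketch below (function names are ours, not notation from the text) confirms that the general formula specializes to the stated expression for \(p=3\):

```python
def normal_subgroup_count(m, e, p):
    """Corollary count: outer vertices plus (p - 1) inner vertices per diamond."""
    outer = m * e - (m + 2 * e) + 6          # heading square + trailing rectangle
    diamonds = m * e - (2 * m + 3 * e) + 7   # heading diamond + trailing diamonds
    return outer + diamonds * (p - 1)

def normal_subgroup_count_p3(m, e):
    """Specialized formula stated for p = 3."""
    return 3 * m * e - (5 * m + 8 * e) + 20

# The two formulas agree for p = 3 over a range of admissible parameters:
assert all(normal_subgroup_count(m, e, 3) == normal_subgroup_count_p3(m, e)
           for m in range(4, 12) for e in range(3, m))
```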
\begin{corollary} \label{cor:CentralSeries} Blackburn's two-step centralizers of \(G\) \cite{Bl} are given by
\begin{equation*} \chi_j(G)= \begin{cases} G^\prime \text{ for } 1\le j\le e-1,\\ \langle y,G^\prime\rangle \text{ for } e\le j\le m-2,\\ G \text{ for } j\ge m-1, \end{cases} \end{equation*}
\noindent in particular, none of the maximal subgroups of \(G\) occurs as a two-step centralizer, when \(e=m-1\).
\begin{enumerate}
\item The factors of the lower central series of \(G\) are given by
\begin{equation*} \gamma_j(G)/\gamma_{j+1}(G)\simeq \begin{cases} (p,p) \text{ for } j=1 \text{ and } 3\le j\le e,\\ (p) \text{ for } j=2 \text{ and } e+1\le j\le m-1. \end{cases} \end{equation*}
\item The terms of the lower central series of \(G\) are given by
\begin{equation*} \gamma_j(G)= \begin{cases} \langle x,y,G^\prime\rangle \text{ for } j=1,\\ \langle s_2,\gamma_3(G)\rangle \text{ for } j=2,\\ P_{j,j} \text{ for } 3\le j\le e,\\ \Sigma_j \text{ for } e+1\le j\le m-1. \end{cases} \end{equation*}
\item The factors of the upper central series of \(G\) are given by
\begin{equation*} \zeta_j(G)/\zeta_{j-1}(G)\simeq \begin{cases} (p,p) \text{ for } 1\le j\le e-2 \text{ and } j=m-1,\\ (p) \text{ for } e-1\le j\le m-2. \end{cases} \end{equation*}
\item The terms of the upper central series of \(G\) are given by
\begin{equation*} \zeta_j(G)= \begin{cases} P_{m-j,e+1-j} \text{ for } 1\le j\le e-2,\\ P_{m-j,3} \text{ for } e-1\le j\le m-3,\\ \langle s_2,\zeta_{m-3}(G)\rangle \text{ for } j=m-2,\\ \langle x,y,\zeta_{m-2}(G)\rangle \text{ for } j=m-1. \end{cases} \end{equation*}
\end{enumerate}
\end{corollary}
\begin{proof}
We prove the invariance of all claimed normal subgroups under inner automorphisms of \(G=\langle x,y\rangle\).
It is well known that the subgroups in the heading diamond are normal, since they contain the commutator subgroup \(G^\prime=\gamma_2(G)\).
We start the proof with the tops of trailing diamonds. For \(g\in P_{j,\ell}\) and \(s\in G^\prime\) we have \(s^{-1}gs=s^{-1}sg=g\), since \(P_{j,\ell}<G^\prime\), for \(j\ge 3\), \(\ell\ge 3\), and \(G\) was assumed to be metabelian. Now, \(P_{j,\ell}\) is the direct product of \(\Sigma_j\) and \(T_\ell\), since we suppose that \(\Sigma_3\cap T_3=1\). So it suffices to show invariance of \(\Sigma_j\) and \(T_\ell\) under conjugation with the generators \(x\) and \(y\) of \(G\). We have \(x^{-1}s_jx=s_j\lbrack s_j,x\rbrack=s_js_{j+1}\in\Sigma_j\) and \(y^{-1}s_jy=s_j\lbrack s_j,y\rbrack=s_j\in\Sigma_j\) for \(j\ge 3\). And similarly we have \(x^{-1}t_\ell x=t_\ell\lbrack t_\ell,x\rbrack=t_\ell\in T_\ell\) and \(y^{-1}t_\ell y=t_\ell\lbrack t_\ell,y\rbrack=t_\ell t_{\ell+1}\in T_\ell\) for \(\ell\ge 3\).
Next we prove invariance of intermediate groups between top and bottom of trailing diamonds. They are of the shape \(\langle t_\ell,P_{j+1,\ell+1}\rangle\) or \(\langle s_jt_\ell^i,P_{j+1,\ell+1}\rangle\) with \(0\le i\le p-1\). For \(t_\ell\), invariance has been shown above. So we investigate \(s_jt_\ell^i\). We have \(x^{-1}s_jt_\ell^ix=x^{-1}s_jx(x^{-1}t_\ell x)^i=s_js_{j+1}t_\ell^i\), where \(s_{j+1}\in P_{j+1,\ell+1}\), and \(y^{-1}s_jt_\ell^iy=y^{-1}s_jy(y^{-1}t_\ell y)^i=s_jt_\ell^it_{\ell+1}^i\), where \(t_{\ell+1}^i\in P_{j+1,\ell+1}\). (Here we tacitly use power conditions such as \(s_j^p\in\Sigma_{j+1}\) for \(j\ge 3\) and \(t_\ell^p\in T_{\ell+1}\) for \(\ell\ge 3\).)
Thus we have proved the invariance of all claimed normal subgroups under inner automorphisms.
The number of all (heading and trailing) diamonds of the normal lattice is \(1+(m-1-2)\cdot (e-2) =1+(m-3)\cdot (e-2) =1+me-2m-3e+6 =me-(2m+3e)+7\).
There are \(p-1\) inner vertices of valence \(2\) in each diamond, which gives a total of\\ \((me-\lbrack 2m+3e\rbrack+7)\cdot (p-1)\) inner vertices.
The remaining (outer) vertices form the heading square and the trailing rectangle with\\ \(4+(m-1+1-2)\cdot (e+1-2) =4+(m-2)\cdot (e-1) =4+me-m-2e+2 =me-(m+2e)+6\) vertices.
Outer and inner vertices together form a lattice of \(me-(m+2e)+6+(me-\lbrack 2m+3e\rbrack+7)\cdot (p-1)\) normal subgroups.
For \(p=3\), this formula yields \(me-m-2e+6+2me-4m-6e+14 =3me-(5m+8e)+20\).
For each \(j\ge 2\), Blackburn's two-step centralizer \(\chi_j(G)\) is defined as the biggest intermediate group between \(G\) and \(G^\prime=\gamma_2(G)\) such that \(\lbrack\gamma_j(G),\chi_j(G)\rbrack\le\gamma_{j+2}(G)\). Since \(\lbrack\gamma_j(G),\gamma_2(G)\rbrack\le\gamma_{j+2}(G)\), for any \(j\ge 2\), \(\chi_j(G)\) certainly contains \(\gamma_2(G)\). Since \(\lbrack s_j,x\rbrack=s_{j+1}\notin\gamma_{j+2}(G)\) for \(2\le j\le m-2\), \(\lbrack t_\ell,y\rbrack=t_{\ell+1}\notin\gamma_{\ell+2}(G)\) for \(2\le\ell\le e-1\), and \(e\le m-1\), neither \(x\) nor \(y\) can be an element of \(\chi_j(G)\) for \(2\le j\le e-1\). However, since \(\lbrack t_e,y\rbrack=t_{e+1}=1\in\gamma_{e+2}(G)\) and \(\lbrack s_e,y\rbrack=1\in\gamma_{e+2}(G)\), we have \(\chi_j(G)=\langle y,\gamma_2(G)\rangle\) for \(e\le j\le m-2\), provided that \(e\le m-2\). Finally, since \(\lbrack s_{m-1},x\rbrack=s_m=1\in\gamma_{m}(G)=\gamma_{m+1}(G)=1\), the two-step centralizers \(\chi_j(G)\) with \(j\ge m-1\) coincide with the entire group \(G\).
The members of the lower central series can be constructed recursively by \(\gamma_j(G)=\lbrack\gamma_{j-1}(G),G\rbrack\). There is a unique ramification generating the series \(\Sigma_3\) and \(T_3\) for \(j=3\), since \(\gamma_3(G)=\lbrack\gamma_2(G),G\rbrack=\lbrack\langle s_2,\gamma_3(G)\rangle,G\rbrack =\langle\lbrack s_2,x\rbrack,\lbrack s_2,y\rbrack,\gamma_4(G)\rangle=\langle s_3,t_3,\gamma_4(G)\rangle\). Otherwise the series \(\Sigma_3\) and \(T_3\) do not mix and we have \(\gamma_j(G)=\lbrack\gamma_{j-1}(G),G\rbrack=\lbrack\langle s_{j-1},t_{j-1},\gamma_j(G)\rangle,G\rbrack\)\\ \(=\langle\lbrack s_{j-1},x\rbrack,\lbrack s_{j-1},y\rbrack,\lbrack t_{j-1},x\rbrack,\lbrack t_{j-1},y\rbrack,\gamma_{j+1}(G)\rangle =\langle s_j,t_j,\gamma_{j+1}(G)\rangle\), since \(\lbrack s_{j-1},y\rbrack=\lbrack t_{j-1},x\rbrack=1\) for \(j\ge 4\). For \(j=e+1\) the bicyclic factors stop, since \(t_{e+1}=\lbrack t_e,y\rbrack=1\), and \(\gamma_{e+1}\) is simply given by \(\Sigma_{e+1}\).
The members of the upper central series can be constructed recursively by \(\zeta_j(G)/\zeta_{j-1}(G)=\mathrm{Centre}(G/\zeta_{j-1}(G))\). All groups \(G\) with the assigned properties have a bicyclic centre \(\zeta_1(G)=\langle s_{m-1},t_e\rangle\), since \(\lbrack s_{m-1},x\rbrack=\lbrack t_e,y\rbrack=1\).
Generally, the equations \(\lbrack s_{m-j},x\rbrack=s_{m-(j-1)},\ \lbrack s_{m-j},y\rbrack=1,\ \lbrack t_{e+1-j},x\rbrack=1,\ \lbrack t_{e+1-j},y\rbrack=t_{e+1-(j-1)}\), whose right sides are elements of \(\zeta_{j-1}(G)\), show that \(s_{m-j}\) and \(t_{e+1-j}\) commute with all elements of \(G\) modulo \(\zeta_{j-1}(G)\). Therefore, we have \(\zeta_j(G)=P_{m-j,e+1-j}\).
However, for \(j=e-1\) the bicyclic factors stop, since \(\lbrack t_{e+1-j},x\rbrack=\lbrack t_2,x\rbrack=\lbrack s_2,x\rbrack=s_3\), which is not contained in \(\zeta_{e-2}(G)\), except for \(e=m-1\). Consequently, \(\zeta_j(G)=P_{m-j,3}\) for \(j\ge e-1\), since it cannot contain \(t_2=s_2\).
\end{proof}
\begin{figure}
\caption{Full normal lattice, including lower and upper central series, of a \(p\)-group \(G\) with \(G/G^\prime\simeq (p,p)\), \(\mathrm{cl}(G)=m-1\), \(\mathrm{cc}(G)=e-1\), \(\mathrm{dl}(G)=2\), \(k(G)=0\).}
\label{fig:NormalLatticeFigure}
\end{figure}
\section{Applications in Algebraic Number Theory} \label{s:NumberTheoryApps}
Let \(K=\mathbb{Q}(\sqrt{D})\) be a quadratic number field with discriminant \(D\) and denote by \(G=\mathrm{Gal}(\mathrm{F}_p^2(K)\vert K)\) the Galois group of the second Hilbert \(p\)-class field \(\mathrm{F}_p^2(K)\) of \(K\), that is, the maximal metabelian unramified \(p\)-extension of \(K\). We recall that coclass and class of \(G\) are given by the equations \(\mathrm{cc}(G)=r=e-1\) and \(\mathrm{cl}(G)=m-1\) in terms of the invariants \(e\) and \(m\). Due to our extensive computations for the papers \cite{Ma0,Ma}, we are able to underpin the present theory of normal lattices by numerical data concerning the \(2\,020\) complex and the \(2\,576\) real quadratic fields with \(3\)-class group of type \((3,3)\) and discriminant in the range \(-10^6<D<10^7\).
Figure \ref{fig:BCFGroups} shows several examples of normal lattices of \(3\)-groups \(G\) with \textit{bicyclic and cyclic factors} of the central series. They are located on coclass trees of coclass graphs \(\mathcal{G}(3,r)\) \cite[p. 189 ff]{Ne}.
Here, the length of the rectangle of trailing diamonds is bigger than the width, \(m-1>e\), the upper central series is different from the lower central series, and the last lower central \(\gamma_{m-1}(G)\) is cyclic, whence the parent \(\pi(G)=G/\gamma_{m-1}(G)\) is of the same coclass. Such groups were called \textit{core groups} in \cite{Ma}. Concerning the principalization type \(\varkappa(K)\) of \(K\) which coincides with the transfer kernel type (TKT) \(\varkappa(G)\) of \(G\), see \cite{Ma1,Ma}. Different TKTs can give rise to equal normal lattices.
\begin{figure}
\caption{\(3\)-groups \(G=\mathrm{Gal}(\mathrm{F}_3^2(K)\vert K)\) with bicyclic and cyclic factors.}
\label{fig:BCFGroups}
\end{figure}
\begin{example} \label{exm:BCFGroups} \(3\)-groups \(G\) of coclass \(3\le\mathrm{cc}(G)\le 4\).
\begin{itemize} \item Coclass \(\mathrm{cc}(G)=4\), class \(\mathrm{cl}(G)=7\):\\ a total of \(14\) complex quadratic fields, e. g.,\\ \(D=-159\,208\) with principalization type F.13,\\ \(D=-249\,371\) with principalization type F.12,\\ \(D=-469\,787\) with principalization type F.11,\\ \(D=-469\,816\) with principalization type F.7,\\ and a single real quadratic field of discriminant\\ \(D=8\,127\,208\) with principalization type F.13,\\ branch groups of depth \(1\), visualized by Figure \ref{fig:BCFGroups}, \(e=5\), \(m=8\). \item Coclass \(\mathrm{cc}(G)=4\), class \(\mathrm{cl}(G)=6\):\\ a single real quadratic field of discriminant\\ \(D=8\,491\,713\) with principalization type d\({}^\ast\).25,\\ mainline group, visualized by Figure \ref{fig:BCFGroups}, \(e=5\), \(m=7\). \item Coclass \(\mathrm{cc}(G)=3\), class \(\mathrm{cl}(G)=5\):\\ two real quadratic fields of discriminant\\ \(D=1\,535\,117\) with principalization type d.23,\\ \(D=2\,328\,721\) with principalization type d.19,\\ branch groups of depth \(1\), visualized by Figure \ref{fig:BCFGroups}, \(e=4\), \(m=6\). \end{itemize}
\end{example}
In Figure \ref{fig:BFGroups} we display numerous examples of normal lattices of \(p\)-groups \(G\) with \textit{bicyclic factors} of the central series, except for the bottleneck \(\gamma_2(G)/\gamma_3(G)\). They are located as vertices on the sporadic part \(\mathcal{G}_0(p,r)\) of coclass graphs \(\mathcal{G}(p,r)\), outside of coclass trees, \cite[Fig. 3.5, p. 439]{Ma}.
Here, the rectangle of trailing diamonds degenerates to a square with \(e=m-1\), the upper central series is the reverse lower central series, and thus the last lower central \(\gamma_{m-1}(G)\) is bicyclic, whence the (generalized) parent \(\tilde\pi(G)=G/\gamma_{m-1}(G)\) is of lower coclass. Such groups were called \textit{interface groups} in \cite{Ma}.
\begin{figure}
\caption{\(p\)-groups \(G=\mathrm{Gal}(\mathrm{F}_p^2(K)\vert K)\) with bicyclic factors only.}
\label{fig:BFGroups}
\end{figure}
\begin{example} \label{exm:BFGroups} \(p\)-groups \(G\) with \(p\in\lbrace 3,5,7\rbrace\).
\begin{itemize} \item \(p=3\), coclass \(\mathrm{cc}(G)=6\), class \(\mathrm{cl}(G)=7\):\\ a single complex quadratic field of discriminant\\ \(D=-423\,640\) with principalization type F.12,\\ sporadic group, visualized by Figure \ref{fig:BFGroups}, \(e=7\), \(m=8\). \item \(p=3\), coclass \(\mathrm{cc}(G)=4\), class \(\mathrm{cl}(G)=5\):\\ a total of \(78\) complex quadratic fields, e. g.,\\ \(D=-27\,156\) with principalization type F.11,\\ \(D=-31\,908\) with principalization type F.12,\\ \(D=-67\,480\) with principalization type F.13,\\ \(D=-124\,363\) with principalization type F.7,\\ and a single real quadratic field of discriminant\\ \(D=8\,321\,505\) with principalization type F.13,\\ sporadic groups, visualized by Figure \ref{fig:BFGroups}, \(e=5\), \(m=6\). \item \(p=3\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=3\):\\ a total of \(936\) complex quadratic fields, e. g.,\\ \(D=-4\,027\) with principalization type D.10,\\ \(D=-12\,131\) with principalization type D.5,\\ and a total of \(140\) real quadratic fields, e. g.,\\ \(D=422\,573\) with principalization type D.10,\\ \(D=631\,769\) with principalization type D.5,\\ sporadic groups, visualized by Figure \ref{fig:BFGroups}, \(e=3\), \(m=4\). \item \(p=5\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=3\): see \cite[Tbl. 3.13, p. 450]{Ma}. \item \(p=7\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=3\): see \cite[Tbl. 3.14, p. 450]{Ma}. \end{itemize}
\end{example}
Figure \ref{fig:SmallBCFGroups} shows many examples of normal lattices of \lq\lq small\rq\rq\ \(p\)-groups \(G\) with \textit{bicyclic and cyclic factors} of the central series. They are located on coclass trees of coclass graphs \(\mathcal{G}(p,r)\) \cite[Fig. 3.6--3.7, pp. 442--443]{Ma}.
\begin{figure}
\caption{Small \(p\)-groups \(G=\mathrm{Gal}(\mathrm{F}_p^2(K)\vert K)\) with bicyclic and cyclic factors.}
\label{fig:SmallBCFGroups}
\end{figure}
\begin{example} \label{exm:SmallBCFGroups} Small \(p\)-groups \(G\) with \(p\in\lbrace 3,5,7\rbrace\).
\begin{itemize} \item \(p=3\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=7\):\\ a total of \(28\) complex quadratic fields, e. g.,\\ \(D=-262\,744\) with principalization type E.14,\\ \(D=-268\,040\) with principalization type E.6,\\ \(D=-297\,079\) with principalization type E.9,\\ \(D=-370\,740\) with principalization type E.8,\\ branch groups of depth \(1\), visualized by Figure \ref{fig:SmallBCFGroups}, \(e=3\), \(m=8\). \item \(p=3\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=6\):\\ two real quadratic fields, e. g.,\\ \(D=1\,001\,957\) with principalization type c.21,\\ mainline groups, visualized by Figure \ref{fig:SmallBCFGroups}, \(e=3\), \(m=7\). \item \(p=3\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=5\):\\ a total of \(383\) complex quadratic fields, e. g.,\\ \(D=-9\,748\) with principalization type E.9,\\ \(D=-15\,544\) with principalization type E.6,\\ \(D=-16\,627\) with principalization type E.14,\\ \(D=-34\,867\) with principalization type E.8,\\ and a total of \(21\) real quadratic fields, e. g.,\\ \(D=342\,664\) with principalization type E.9,\\ \(D=3\,918\,837\) with principalization type E.14,\\ \(D=5\,264\,069\) with principalization type E.6,\\ \(D=6\,098\,360\) with principalization type E.8,\\ branch groups of depth \(1\), visualized by Figure \ref{fig:SmallBCFGroups}, \(e=3\), \(m=6\). \item \(p=3\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=4\):\\ a total of \(54\) real quadratic fields, e. g.,\\ \(D=534\,824\) with principalization type c.18,\\ \(D=540\,365\) with principalization type c.21,\\ mainline groups, visualized by Figure \ref{fig:SmallBCFGroups}, \(e=3\), \(m=5\). \item \(p=5\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=5\): see \cite[Tbl. 3.13, p. 450]{Ma}. \item \(p=7\), coclass \(\mathrm{cc}(G)=2\), class \(\mathrm{cl}(G)=5\): see \cite[Tbl. 3.14, p. 450]{Ma}. \end{itemize}
\end{example}
\section{Final Remarks} \label{s:FinalRemarks}
\begin{itemize}
\item Among the \(2\,020\) complex quadratic fields with \(3\)-class group of type \((3,3)\) and discriminant in the range \(-10^6<D<0\), the dominating part of \(1\,440\), that is \(71.29\,\%\), has a second \(3\)-class group with minimal defect of commutativity \(k=0\). The remaining \(28.71\,\%\) have \(k=1\) and TKTs G.16, G.19 and H.4.
\item Among the \(2\,576\) real quadratic fields with \(3\)-class group of type \((3,3)\) and discriminant in the range \(0<D<10^7\), a modest part of \(273\), i.e., \(10.6\,\%\), has a second \(3\)-class group of coclass at least \(2\). A dominating part of \(222\) among these \(273\) second \(3\)-class groups, that is \(81.3\,\%\), has minimal defect of commutativity \(k=0\), whereas \(18.7\,\%\) have \(k=1\) and TKTs b.10, G.16, G.19 and H.4.
\item It should be pointed out that the power-commutator presentations which we used for proving Theorem \ref{thm:NormalLattice} and its Corollaries are rudimentary, since in fact they consist of commutator relations only. Thus they define an isoclinism family of \(p\)-groups of fixed order, rather than a single isomorphism class of \(p\)-groups.
On the other hand, experience shows that the transfer kernel type (TKT) of a \(p\)-group mainly depends on the power relations. This explains why different TKTs frequently give rise to equal normal lattices.
\end{itemize}
\end{document}
\begin{document}
\title[Norm inflation for NLW] {A remark on norm inflation for nonlinear wave equations }
\author[J.~Forlano and M.~Okamoto] {Justin Forlano and Mamoru Okamoto}
\address{ Justin Forlano\\ Maxwell Institute for Mathematical Sciences\\ Department of Mathematics\\ Heriot-Watt University\\ Edinburgh\\
EH14 4AS\\
United Kingdom \\}
\email{j.forlano@hw.ac.uk}
\address{ Mamoru Okamoto\\ Department of Mathematics \\ Graduate School of Science \\ Osaka University \\ Toyonaka\\
Osaka 560-0043 \\
Japan\\}
\email{okamoto@math.sci.osaka-u.ac.jp} \subjclass{Primary 35L05, 35B30} \date{}
\keywords{nonlinear wave equation; ill-posedness; norm inflation}
\begin{abstract} In this note, we study the ill-posedness of nonlinear wave equations (NLW). Namely, we show that NLW experiences norm inflation at every initial data in negative Sobolev spaces. This result covers a gap left open in a paper of Christ, Colliander, and Tao (2003) and extends the result by Oh, Tzvetkov, and the second author (2019) to non-cubic integer nonlinearities. In particular, for some low dimensional cases, we obtain norm inflation above the scaling critical regularity. We also prove ill-posedness for NLW, via norm inflation at general initial data, in negative regularity Fourier-Lebesgue and Fourier-amalgam spaces. \end{abstract}
\maketitle
\section{Introduction}
We consider the Cauchy problem of the following nonlinear wave equation (NLW): \begin{equation} \begin{cases}\label{NLW1} \partial_t^{2}u-\Delta u = \pm u^{k} \\
(u,\partial_t u)|_{t = 0} = (u_0,u_1), \end{cases} \qquad ( t, x) \in \mathbb{R} \times \mathcal{M}, \end{equation} \noindent where $\mathcal{M}=\mathbb{T}^d$ or $\mathbb{R}^d$ ($d\geq 1$) and $k \geq 2$ is an integer.
Our goal in this paper is to study ill-posedness of \eqref{NLW1} in negative Sobolev spaces. In this regard, we recall the critical regularity associated to \eqref{NLW1} posed on $\mathcal{M}=\mathbb{R}^{d}$.
First, NLW~\eqref{NLW1} has the following scaling symmetry: given $\lambda>0$, if $u$ solves \eqref{NLW1}, then $u_{\lambda}(t,x)=\lambda^{\frac{2}{k-1}}u(\lambda t,\lambda x)$ also solves \eqref{NLW1} with rescaled initial data $\lambda^{\frac{2}{k-1}}(u_0 (\lambda x), u_1 (\lambda x))$. This scaling leaves the $\dot{H}^{s_{\text{scaling}}(d,k)}(\mathbb{R}^d)$-norm invariant, where \begin{align} s_{\text{scaling}} (d,k) :=\frac{d}{2}-\frac{2}{k-1}. \label{scaling} \end{align} Secondly, \eqref{NLW1} is invariant under the Lorentz transformation (conformal symmetry), which gives rise to the critical regularity $s_{\text{conf}}(d,k) := \frac{d+1}4-\frac1{k-1}$; see~\cite{LiSo}. In addition, we need the condition $s \ge 0$ in order for the nonlinearity to make sense as a distribution. Hence, the critical regularity of \eqref{NLW1} is given by \begin{align} \begin{split} s_{\text{crit}}(d,k)&=\max( s_{\text{scaling}}(d,k),s_{\text{conf}}(d,k),0) \\ &= \max\bigg( \frac d2 - \frac{2}{k-1}, \frac{d+1}{4}-\frac{1}{k-1}, 0 \bigg). \end{split} \label{scrit} \end{align} The purpose of the critical regularity for \eqref{NLW1} on $\mathbb{R}^d$ is that we expect (local-in-time) well-posedness in $H^s(\mathbb{R}^d)$ when $s>s_{\text{crit}}(d,k)$ and ill-posedness, due to some instability, when $s<s_{\text{crit}}(d,k)$. This heuristic provided by \eqref{scrit} is also instrumental in the well-posedness theory of \eqref{NLW1} on periodic domains $\mathcal{M}=\mathbb{T}^{d}$, despite the lack of scaling and conformal symmetries in this setting.
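The regularities in \eqref{scrit} are easy to tabulate exactly. The following sketch (with hypothetical helper names; exact rational arithmetic avoids rounding) computes \(s_{\text{crit}}(d,k)\) for sample parameters:

```python
from fractions import Fraction

def s_scaling(d, k):
    # scaling-critical regularity d/2 - 2/(k - 1)
    return Fraction(d, 2) - Fraction(2, k - 1)

def s_conf(d, k):
    # conformal regularity (d + 1)/4 - 1/(k - 1)
    return Fraction(d + 1, 4) - Fraction(1, k - 1)

def s_crit(d, k):
    return max(s_scaling(d, k), s_conf(d, k), Fraction(0))

# Cubic NLW in d = 3: scaling and conformal regularities both equal 1/2.
assert s_crit(3, 3) == Fraction(1, 2)
# Cubic NLW in d = 1: scaling gives -1/2 and conformal gives 0, so s_crit = 0.
assert s_crit(1, 3) == 0
```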
We now survey the well-posedness theory for \eqref{NLW1}, specifically restricting our attention to local-in-time results. Well-posedness of \eqref{NLW1} above the critical regularity $s_{\text{crit}}(d,k)$ was studied in \cite{Kap, LiSo, KeelTao, Tao}. Moreover, ill-posedness of \eqref{NLW1} below the critical regularity has been studied in \cite{Lin1, Lin2, CCT, Leb, BTz1, Xia, Tz1, OOTz}. In particular,
Christ, Colliander, and Tao \cite{CCT} proved norm inflation for \eqref{NLW1} on $\mathbb{R}^d$ when\footnote{They considered the nonlinearity $\pm |u|^{k-1}u$ instead of $\pm u^k$. Moreover, they proved that the solution map fails to be uniformly continuous when $s<0$ and for the defocusing case. See also \cite{LiSo} for the ill-posedness result in the focusing case.}: (i) $k\geq 2$ and \begin{align*} s_{\text{scaling}}(1,k)<s< \frac 12 - \frac 1k, \end{align*} and (ii) for either odd integer $k\geq 3$ or $k\geq k_0+1$ for integer $k_0 >\frac{d}{2}$ and \begin{align*} s\leq -\frac{d}{2}\quad \text{or}\quad 0<s<s_{\text{scaling}}(d,k). \end{align*} Applying the argument in \cite[Corollary 7]{CCT}, which uses the finite speed of propagation for \eqref{NLW1} to deduce norm inflation in dimension $d\geq 2$ from norm inflation in $d=1$, the result of (ii) extends to norm inflation for any $k\geq 2$, $s\leq -\frac{1}{2}$, and $s<s_{\text{scaling}}(1,k)$.
Here, norm inflation (at the trivial initial condition $(u_0,u_1)=(0,0)$) means that given any $\varepsilon>0$, there exists a solution $u_\varepsilon$ to \eqref{NLW1} and $t_\varepsilon \in (0,\varepsilon)$ such that \begin{align}
\| (u_\varepsilon(0), \partial_t u_\varepsilon(0)) \|_{\mathcal{H}^s(\mathcal{M})} < \varepsilon \qquad \text{ and }
\qquad \| u_\varepsilon(t_\varepsilon)\|_{H^s(\mathcal{M})} > \varepsilon^{-1}, \label{NI1} \end{align} where $\mathcal{H}^s (\mathcal{M}) := H^s(\mathcal{M}) \times H^{s-1}(\mathcal{M})$. This phenomenon is a stronger notion of ill-posedness than the discontinuity of the solution map at zero. In particular, the result in \cite{CCT} leaves open the question of norm inflation for NLW~\eqref{NLW1} when \begin{align} -\frac 12 <s< \min (s_{\text{scaling}}(1,k),0). \label{gap} \end{align} In the context of \eqref{NLW1} on $\mathbb{T}^{3}$ for $0<s<s_{\text{scaling}}(3,k)$, Xia~\cite{Xia} generalized \eqref{NI1} to norm inflation based at general initial data (see \eqref{NI2} below). In \cite{OOTz}, Oh, Tzvetkov, and the second author proved norm inflation at general initial data for the cubic NLW ($k=3$) when $d\geq 2$ and $s<0$.\footnote{We point out that this norm inflation result was proved as a basic ingredient for the
main purpose of the paper~\cite{OOTz}; namely, to study the approximation property of solutions to the renormalized cubic NLW on $\mathbb{T}^2$ with rough, random initial data distributed according to the Gaussian free field.} For the particular case $k=3$ and $d\geq 2$, this result extends the norm inflation at zero in \cite{CCT} to norm inflation at general initial data.
Our aim in this paper is to prove norm inflation at general initial data for \eqref{NLW1} in negative Sobolev spaces, thus filling the remaining gap left open in \eqref{gap}. The following is our main result.
\begin{theorem}\label{THM:NI} Given $d \in \mathbb{N}$, let $\mathcal{M} = \mathbb{R}^d$ or $\mathbb{T}^d$. Suppose that $k \ge 2$ is an integer and $s<0$. Fix $(u_0, u_1) \in \mathcal{H}^s(\mathcal{M})$. Then, given any $\varepsilon > 0$, there exist a solution $u_\varepsilon$ to \eqref{NLW1} on $\mathcal{M}$ and $t_\varepsilon \in (0, \varepsilon) $ such that \begin{align}
\| (u_\varepsilon(0), \partial_t u_\varepsilon(0)) - (u_0, u_1) \|_{\mathcal{H}^s(\mathcal{M})} < \varepsilon \qquad \text{and}
\qquad \| u_\varepsilon(t_\varepsilon)\|_{H^s(\mathcal{M})} > \varepsilon^{-1}. \label{NI2} \end{align} \end{theorem}
Theorem~\ref{THM:NI} thus closes the remaining gap in \eqref{gap} and, in the case $s<0$ and $k\neq 3$ (in view of \cite{OOTz}), extends the result in \cite{CCT} to norm inflation based at any initial condition. When $(u_0,u_1)=(0,0)$, Theorem \ref{THM:NI} is reduced to the usual norm inflation at zero initial data stated in \eqref{NI1}. As a corollary to Theorem \ref{THM:NI}, we obtain that the solution map to \eqref{NLW1}: $(u_0,u_1) \in \mathcal{H}^s(\mathcal{M}) \mapsto (u,\partial_t u) \in C([-T,T];\mathcal{H}^s(\mathcal{M}))$ is discontinuous everywhere in $\mathcal{H}^s(\mathcal{M})$, for $s<0$.
We now describe two approaches to proving norm inflation for Cauchy problems. The first is the approach used in \cite{CCT} which is based on studying low-to-high energy transfer in the associated dispersionless (ODE) model and scaling analysis. By avoiding the scaling analysis, Burq and Tzvetkov \cite{BTz1} proved norm inflation as in \eqref{NI1} for the cubic NLW on three-dimensional compact Riemannian manifolds when $0<s<s_{\text{scaling}}(3,3)$. In particular, the argument in \cite{Xia} is also based on this method. The second method is a Fourier analytic approach introduced by Bejenaru and Tao~\cite{BT} and developed further by Iwabuchi and Ogawa \cite{IO}; see also~\cite{Kishimoto, O17}.
Our proof of Theorem~\ref{THM:NI} uses this method and follows the presentation by Oh~\cite{O17}, which we now briefly describe. We begin with a reduction: we may assume the initial data $(u_0,u_1)$ are sufficiently regular by a density argument.
The key idea is to express a solution $u_{\varepsilon}$ to \eqref{NLW1} in terms of a power series expansion in the initial data and to show that one of the terms in the expansion dominates all the others.
More specifically, we write $u_{\varepsilon}$ as the following power series expansion: \begin{align} u_{\varepsilon}= \sum_{j=0}^{\infty} \Xi_{j}(u_{\varepsilon}(0),\partial_t u_{\varepsilon}(0)), \label{psexp} \end{align} where $\{\Xi_{j}\}_{j=0}^{\infty}$ are multilinear operators in the linear solution $S(t)(u_{\varepsilon}(0),\partial_t u_{\varepsilon}(0))$ of (increasing) degree $j(k-1)+1$. They are precisely the successive new terms added to a Picard iteration expansion of $u_{\varepsilon}$. We define the initial data for $u_{\varepsilon}$ by \begin{align} (u_{\varepsilon}(0),\partial_t u_{\varepsilon}(0))=(u_0, u_1)+(\phi_{0,\varepsilon},\phi_{1,\varepsilon}), \label{shiftdata} \end{align}
where the perturbations $(\phi_{0,\varepsilon},\phi_{1,\varepsilon})$ are chosen so that: \begin{enumerate}[(i)] \item $(\phi_{0,\varepsilon},\phi_{1,\varepsilon})$ converges to $(0,0)$ in $\mathcal{H}^{s}(\mathcal{M})$, as $\varepsilon\rightarrow 0$, \item there exist times $t_{\varepsilon}\rightarrow 0$ as $\varepsilon\rightarrow 0$ such that the second Picard iterate $\Xi_{1}(\phi_{0,\varepsilon},\phi_{1,\varepsilon})$ dominates in \eqref{psexp}; namely, \begin{align*}
\| u_{\varepsilon}(t_{\varepsilon})\|_{H^{s}(\mathcal{M})} \gtrsim \big\| \Xi_{1}(\phi_{0,\varepsilon},\phi_{1,\varepsilon})(t_{\varepsilon}) \big\|_{H^{s}(\mathcal{M})} \rightarrow \infty, \end{align*} as $\varepsilon \rightarrow 0$. \end{enumerate}
These ingredients then lead to norm inflation based at $(u_0,u_1)$ as in \eqref{NI2}. The mechanism responsible for the instability in (ii) is the high-to-low transfer of energy, which is specifically exploited by the choice of $(\phi_{0,\varepsilon},\phi_{1,\varepsilon})$.
We note that although we work in rough topologies, the functions $u_{\varepsilon}$ are smooth and hence there is no issue in making sense of the power series expansion \eqref{psexp}.
In \cite{O17}, the operators $\Xi_{j}$ are indexed by trees, which makes it possible to treat the nonlinear estimates directly, without induction.
As it is based on exploiting high-to-low energy transfer in the nonlinearity, the Fourier analytic approach works well in negative Sobolev spaces. Indeed, for the case of nonlinear Schr\"{o}dinger equations (NLS), this method was used in \cite{IO, Kishimoto, O17} to fill a similar gap left in \cite{CCT} of norm inflation for NLS in negative Sobolev spaces.
However, it does rely on the translation invariance of the underlying space $\mathcal{M}$, making it unsuitable for the case of more general domains. See also \cite{M, CDS, AC, CK} for ill-posedness results of NLS. The idea of exploiting a high-to-low energy transfer was used in \cite{BTz} to prove failure of $C^{2}$-smoothness of the solution map for the BBM equation.
For $k\in \{2,3,4\}$ in $d=1$ and $k=2$ in $d=2,3$, Theorem~\ref{THM:NI} yields norm inflation at general initial data above the scaling critical regularity $s_{\text{scaling}}(d,k)$ defined in \eqref{scaling}. This phenomenon of norm inflation above the scaling critical regularity has also been observed for the cubic fractional NLS~\cite{CP} and quadratic NLS~\cite{IO, Kishimoto, Ok}. In this regime, it is essential to exploit resonant interactions in the nonlinearity. In the aforementioned papers, the choice of the initial data $(\phi_{0,\varepsilon},\phi_{1,\varepsilon})$ in \eqref{shiftdata} (with $(u_0,u_1)=(0,0)$) only activates (nearly) resonant contributions in the second Picard iterate $\Xi_{1}(\phi_{0,\varepsilon},\phi_{1,\varepsilon})$. However, for the case of NLW~\eqref{NLW1}, our analysis of the second Picard iterate is more subtle since our choice of perturbation $(\phi_{0,\varepsilon},\phi_{1,\varepsilon})$ requires us to also handle nonresonant contributions. To show that the resonant part is dominant, we need to take a slightly longer existence time. See Proposition \ref{PROP:INSTAB}.
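The low-dimensional cases listed above are precisely those with negative scaling regularity; a short sketch (with a hypothetical helper name) confirms that \(s_{\text{scaling}}(d,k)<0\) exactly for these pairs \((d,k)\):

```python
from fractions import Fraction

def s_scaling(d, k):
    # scaling-critical regularity d/2 - 2/(k - 1)
    return Fraction(d, 2) - Fraction(2, k - 1)

cases = {(1, 2), (1, 3), (1, 4), (2, 2), (3, 2)}
# All listed cases have negative scaling regularity ...
assert all(s_scaling(d, k) < 0 for d, k in cases)
# ... and no other pair (d, k) does, since s_scaling increases in d and k:
assert all(s_scaling(d, k) >= 0
           for d in range(1, 12) for k in range(2, 12)
           if (d, k) not in cases)
```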
We also note that the argument in \cite[Corollary 7]{CCT} is not applicable for deducing norm inflation at general initial data in dimensions $d\geq 2$ from norm inflation at general initial data in dimension $d=1$. Thus, we cannot simply deduce Theorem~\ref{THM:NI} from the corresponding result in one dimension.
\begin{remark}\rm In this paper, we focus on the estimate for the norm of $u$. However, by the same argument as in Section \ref{SEC:3} below, we can show the growth of the norm of $\partial_t u$ in \eqref{NI2} as well as that of $u$: \begin{align*}
\| \partial_t u_\varepsilon (t_\varepsilon) \|_{H^{s-1}} > \varepsilon^{-1}. \end{align*} See Remark~\ref{RMK:dt} for more details. \end{remark}
\begin{remark}\rm By a straightforward modification, the same norm inflation result as in Theorem \ref{THM:NI} holds for the following nonlinear Klein-Gordon equation: \begin{equation} \begin{cases}\label{NLKG} \partial_t^{2}u-\Delta u + u = \pm u^{k} \\
(u,\partial_t u)|_{t = 0} = (u_0,u_1), \end{cases} \qquad ( t, x) \in \mathbb{R} \times \mathcal{M}. \end{equation} See Remark \ref{REM:NLKGlb} for a further discussion. \end{remark}
\begin{remark}\rm Theorem~\ref{THM:NI} completes the ill-posedness theory for NLW~\eqref{NLW1} in negative regularities $s<0$. The picture, however, is not yet complete for positive regularities. In particular, for $d\geq 2$ and $0<s<s_{\text{conf}}(d,k)$, the focusing NLW (corresponding to the $+\,$sign in \eqref{NLW1}) on $\mathbb{R}^d$ has explicit solutions with arbitrarily small $\mathcal{H}^{s}(\mathbb{R}^d)$-norm which blow up in arbitrarily small time; see~\cite{LiSo} and \cite[Exercise 3.67]{TAO}. In contrast, it is not known if there are similarly behaving blow-up solutions in the defocusing case (the $-\,$sign in \eqref{NLW1}) when $s_{\text{scaling}}(d,k)<s<s_{\text{conf}}(d,k)$. \end{remark}
\begin{remark}\rm There are also other approaches to proving ill-posedness results, for instance, the work of Lebeau~\cite{Leb} and Carles and collaborators~\cite{AC, CDS, CK, BC}. The results they obtain are stronger than norm inflation at zero and demonstrate a loss of regularity: there are smooth solutions for which the norm in the second expression in \eqref{NI2} can be replaced by $H^{\sigma}$ for any (or some) $\sigma \in \mathbb{R}$. Kishimoto~\cite{Kishimoto} also proves infinite loss of regularity results using the Fourier analytic approach. It would be of interest to investigate this loss of regularity phenomenon for the NLW~\eqref{NLW1} in negative Sobolev spaces. \end{remark}
While our main interest in this paper is to close the gap in \eqref{gap} and extend previous results to norm inflation at general initial data in $\mathcal{H}^{s}(\mathcal{M})$, our proof admits an easy extension to more general function spaces, and hence yields a result more general than Theorem~\ref{THM:NI}.
We now introduce these spaces, which are the Fourier-Lebesgue and Fourier-amalgam spaces. We use $\mathcal{S}(\mathcal{M})$ to denote the space of Schwartz functions if $\mathcal{M}=\mathbb{R}^d$ or the space of $C^{\infty}$-functions if $\mathcal{M}=\mathbb{T}^{d}$. Given $s\in \mathbb{R}$ and $1\leq q\leq \infty$, we define the Fourier-Lebesgue space $\mathcal{F} L^{s,q}(\mathcal{M})$ as the closure of $\mathcal{S}(\mathcal{M})$ under the following norm: \begin{align*}
\|f \|_{\mathcal{F} L^{s,q}(\mathcal{M})} = \big\| \jb{\xi}^s \widehat f \big\|_{L^{q}(\widehat \mathcal{M})}, \end{align*}
where $\jb{\, \cdot \,}:=(1+|\cdot|^2)^{\frac{1}{2}}$ and $\widehat \mathcal{M}$ denotes the Pontryagin dual of $\mathcal{M}$, i.e.,~ \begin{align} \widehat \mathcal{M} = \begin{cases}
\mathbb{R}^d & \text{if }\mathcal{M} = \mathbb{R}^d, \\
\mathbb{Z}^d & \text{if } \mathcal{M} = \mathbb{T}^d. \end{cases} \label{dual} \end{align}
\noindent When $\widehat \mathcal{M} = \mathbb{Z}^d$, we endow it with the counting measure. We note that $\mathcal{F}L^{s,2}(\mathcal{M})=H^{s}(\mathcal{M})$, and that $\mathcal{F}L^{s,q_1}(\mathbb{T}^d)\subseteq \mathcal{F}L^{s,q_2}(\mathbb{T}^d)$ when $1\leq q_1 \leq q_2\leq \infty$, which follows from the embedding $\ell^{q_1}(\mathbb{Z}^d)\subseteq \ell^{q_2}(\mathbb{Z}^d)$. This last property implies that the spaces $\mathcal{F}L^{s,q}(\mathbb{T}^{d})$ with $q>2$ are wider than $H^{s}(\mathbb{T}^d)$, for any $s\in \mathbb{R}$.
On the Euclidean spaces there is, in general, no relation between the spaces $\mathcal{F}L^{s,q}(\mathbb{R}^d)$ for fixed $s\in \mathbb{R}$. The Fourier-Lebesgue spaces are encapsulated within a wider class of function spaces known as the Fourier-amalgam spaces. Given $s\in \mathbb{R}$ and $1\leq p,q\leq \infty$, we define the Fourier-amalgam space $\ft w^{p,q}_{s}(\mathbb{R}^d)$ as the closure of $\mathcal{S}(\mathbb{R}^d)$ under the norm \begin{align}
\|f\|_{\ft w^{p,q}_{s}(\mathbb{R}^d)}:=\big\| \jb{n}^{s} \| \chi_{n+Q}(\xi)\widehat f(\xi)\|_{L_{\xi}^{p}(\mathbb{R}^d)}\big\|_{\ell^q_{n}(\mathbb{Z}^d)}, \label{fw} \end{align} where $Q:=[-\frac{1}{2},\frac{1}{2})^{d}$ and $\chi_{n+Q}$ is the indicator function on the set $n+Q$, for $n\in \mathbb{Z}^d$. In the special case $p=q$, we have $\ft w^{q,q}_{s}(\mathbb{R}^d)=\mathcal{F}L^{s,q}(\mathbb{R}^d)$. Moreover, when $p=2$, $\ft w^{2,q}_{s}(\mathbb{R}^d)=M^{2,q}_{s}(\mathbb{R}^d)$, where $M^{2,q}_{s}(\mathbb{R}^d)$ is a modulation space. We point out that the Fourier-amalgam spaces $\ft w^{p,q}_{s}(\mathbb{R}^d)$ are the Fourier image of the classical weighted Wiener-amalgam spaces $W(L^p,L_{s}^q)$; see for example \cite{Fei}. Our interest in the Fourier-amalgam spaces rests solely in the Euclidean space setting where $\mathcal{M}=\mathbb{R}^d$, since in the periodic case, we have \begin{align*} \ft w^{p,q}_{s}(\mathbb{T}^d)=M^{2,q}_{s}(\mathbb{T}^d)=\mathcal{F}L^{s,q}(\mathbb{T}^d), \end{align*} for any $1\leq p,q \leq \infty$ and $s\in \mathbb{R}$.
\begin{remark}\rm In \cite{IO}, Iwabuchi and Ogawa use the space $(M^{2,1})_A(\mathbb{R}^d)$ defined through the following norm \begin{align*}
\|f\|_{(M^{2,1})_A (\mathbb{R}^d)}:=\sum_{ n \in A\mathbb{Z}^d}\|\chi_{n+Q_{A}}(\xi) \widehat f(\xi) \|_{L^{2}_{\xi}(\mathbb{R}^d)}, \end{align*} where $Q_{A}:=[-\frac{A}{2},\frac{A}{2})^{d}$, and $A>0$ is a dyadic number, to verify the convergence of the power series expansions. These spaces satisfy $(M^{2,1})_{A}(\mathbb{R}^d) \simeq_{A} (M^{2,1})_1 (\mathbb{R}^d)$. We note that $(M^{2,1})_1(\mathbb{R}^d)=\ft w_{0}^{2,1}(\mathbb{R}^d)$, and the algebra property of $(M^{2,1})_{A}(\mathbb{R}^d)$ can be viewed as a consequence of the algebra property of the spaces $\ft w^{p,1}_{0}(\mathbb{R}^d)$, for $1\leq p \leq \infty$. \end{remark}
As in the $L^{2}$-based case above, we can associate to the NLW~\eqref{NLW1} the scaling critical regularity \begin{align} s_{\text{scaling}}(d,k,q):=d\bigg( 1-\frac{1}{q}\bigg)-\frac{2}{k-1}, \label{scalingFLq} \end{align} in the Fourier-Lebesgue spaces $\mathcal{F}L^{s,q}(\mathcal{M})$. Unfortunately, the Fourier-amalgam spaces do not behave well under scaling. However, H\"{o}lder's inequality implies that the $\ft w^{p,q}_{s}(\mathbb{R}^d)$-norm is weaker than the $\mathcal{F}L^{s,q}(\mathbb{R}^d)$-norm when $p\leq q$, for any $s\in \mathbb{R}$, so we expect \eqref{scalingFLq} to essentially be a scaling critical regularity for \eqref{NLW1} in the Fourier-amalgam spaces $\ft w^{p,q}_{s}(\mathbb{R}^d)$.
There are very few well-posedness results for NLW~\eqref{NLW1} in these spaces. In Fourier-Lebesgue spaces, well-posedness for the quadratic NLW ($k=2$ in \eqref{NLW1}) in $\mathcal{F}L^{s,q}(\mathbb{R}^{2})$ for $2\leq q<\infty$ and $s>1+\frac{3}{2q'}$ follows from \cite[Equation (9)]{GrigT}; in fact, the main results in \cite{Grunrock, GrigN, GrigT} are for the quadratic derivative NLW in $\mathcal{F}L^{s,q}(\mathbb{R}^d)$ for $d=2,3$. The well-posedness study of nonlinear dispersive equations in Fourier-amalgam spaces was recently initiated by the first author and T.~Oh~\cite{FO}, in the context of the one-dimensional cubic NLS on $\mathbb{R}$, and to the best of the authors' knowledge, there are no well-posedness results for NLW~\eqref{NLW1} in Fourier-amalgam spaces. As for ill-posedness results, norm inflation in Fourier-Lebesgue spaces for NLS-type equations has been obtained in \cite{CK, Kishimoto, BC}, and also in modulation spaces in \cite{BC}. For NLW~\eqref{NLW1}, our second main result is norm inflation at general initial data in negative regularity Fourier-Lebesgue and Fourier-amalgam spaces.
\begin{theorem}\label{THM:FLq} Let $d \in \mathbb{N}$ and suppose that $k \ge 2$ is an integer. Given $1\leq p, q\leq \infty$ and $s<0$, we define \begin{align} X^{p,q}_{s}(\mathcal{M}):= \begin{cases}
\ft w^{p,q}_{s}(\mathbb{R}^d) & \text{if} \,\,\, \,\mathcal{M}=\mathbb{R}^d, \\
\mathcal{F}L^{s,q}(\mathbb{T}^d) & \text{if}\,\,\,\, \mathcal{M}=\mathbb{T}^d \,\,\, \text{and}\,\,\, \,p=q.
\end{cases} \label{Xspace} \end{align} Fix $(u_0, u_1) \in X^{p,q}_{s}(\mathcal{M})$. Then, given any $\varepsilon > 0$, there exist a solution $u_\varepsilon$ to \eqref{NLW1} on $\mathcal{M}$ and $t_\varepsilon \in (0, \varepsilon) $ such that \begin{align}
\| (u_\varepsilon(0), \partial_t u_\varepsilon(0)) - (u_0, u_1) \|_{X^{p,q}_{s}(\mathcal{M})} < \varepsilon \qquad \text{and}
\qquad \| u_\varepsilon(t_\varepsilon)\|_{X^{p,q}_{s}(\mathcal{M})} > \varepsilon^{-1}.
\notag \end{align} \end{theorem}
In the body of this paper, we detail the proof of Theorem~\ref{THM:NI}. The proof of Theorem~\ref{THM:FLq} follows from minor modifications of the aforementioned proof and these details are contained in Section~\ref{app:amalgam}. Due to the local-in-time nature of the analysis in this paper, the sign of the nonlinearity in \eqref{NLW1} does not play any role. Hence, we only consider the $+\,$sign in the following. Moreover, in view of the time reversibility of the equation, we focus only on positive times.
\section{Power series expansion indexed by trees} \label{SEC:2}
In this section, we establish local well-posedness of \eqref{NLW1} in the Fourier-Lebesgue space $\mathcal{F}L^{1}(\mathcal{M})$ and justify the associated power series expansion. We define \begin{align} \overrightarrow{\mathcal{F}L}^{s, q}(\mathcal{M}) := \mathcal{F} L^{s,q}(\mathcal{M}) \times \mathcal{F} L^{s-1,q}(\mathcal{M}),
\notag \end{align} and, for convenience, write $\mathcal{F}L^q (\mathcal{M}) := \mathcal{F}L^{0,q}(\mathcal{M})$ and $\overrightarrow{\mathcal{F}L}^q (\mathcal{M}) := \overrightarrow{\mathcal{F}L}^{0,q}(\mathcal{M})$.
Let $S(t)$ denote the linear wave propagator: \begin{align}
S(t)(\vec{u}_0)= S(t)(u_0,u_1)=\cos(t|\nabla|) u_0+\frac{\sin (t|\nabla|)}{|\nabla|} u_1 \label{linsol} \end{align} and let $\mathcal{I}$ denote the $k$-linear Duhamel operator \begin{align} \mathcal{I}[u_1,\ldots, u_k] (t)
:= \int_{0}^{t} \frac{\sin ((t-t')|\nabla|)}{|\nabla |}\bigg[ \prod_{j=1}^{k} u_j(t') \bigg]dt'. \label{forcing} \end{align} Writing $\mathcal{I}^k [u]=\mathcal{I}[u,\ldots, u]$, we have the following Duhamel formulation of \eqref{NLW1}: \begin{align} u(t)= S(t)(\vec{u}_{0})+\mathcal{I}^k[u](t). \label{duhamel} \end{align} We use the convention \begin{align*}
\frac{\sin (t |0|)}{|0|}=t.
\end{align*} For $0\leq t \leq 1$, we have \begin{align}
\| S(t)\vec{u}_0\|_{H^s} \leq \|u_0\|_{H^s}+t\|u_1\|_{H^{s-1}}\leq \|\vec{u}_0\|_{\mathcal{H}^{s}}. \label{linestHs} \end{align}
Next, we recall the following definitions and terminology used in \cite{O17} to describe $k$-ary trees.
\begin{definition} \label{DEF:tree} \rm (i) Given a partially ordered set $\mathcal{T}$ with partial order $\leq$, we say that $b \in \mathcal{T}$ with $b \leq a$ and $b \ne a$ is a child of $a \in \mathcal{T}$ if $b\leq c \leq a$ implies either $c = a$ or $c = b$. In this case, we also say that $a$ is the parent of $b$.
\noindent (ii) A tree $\mathcal{T}$ is a finite partially ordered set, satisfying the following properties\footnote{We do not identify two trees even if there is an order-preserving bijection between them.}: \begin{itemize} \item Let $a_1, a_2, a_3, a_4 \in \mathcal{T}$. If $a_4 \leq a_2 \leq a_1$ and $a_4 \leq a_3 \leq a_1$, then we have $a_2\leq a_3$ or $a_3 \leq a_2$,
\item A node $a\in \mathcal{T}$ is called terminal, if it has no child. A non-terminal node $a\in \mathcal{T}$ is a node with exactly $k$ children,
\item There exists a maximal element $r \in \mathcal{T}$ (called the root node) such that $a \leq r$ for all $a \in \mathcal{T}$,
\item $\mathcal{T}$ consists of the disjoint union of $\mathcal{T}^0$ and $\mathcal{T}^\infty$, where $\mathcal{T}^0$ and $\mathcal{T}^\infty$ denote the collections of non-terminal nodes and terminal nodes, respectively.
\end{itemize} \noindent (iii) Let $\boldsymbol{T}(j)$ denote the set of all trees with $j$ non-terminal nodes. \end{definition}
Note that a given $k$-ary tree $\mathcal{T} \in \pmb{T}(j)$ has $kj+1$ nodes: every node except the root is a child of exactly one of the $j$ non-terminal nodes, each of which has exactly $k$ children. In particular, the numbers of non-terminal and terminal nodes of $\mathcal{T}$ are $j$ and $(k-1)j+1$, respectively, where $j\in \mathbb{N} \cup \{0\}$.
We have the following basic combinatorial property for $k$-ary trees. The proof is a straightforward adaptation of the one in \cite[Lemma 2.3]{O17} for ternary trees ($k=3$).
\begin{lemma}\label{lemma:counting} There exists a constant $C_{0}>0$ such that \begin{equation} \label{tjcnt}
|\pmb{T}(j)| \leq C_{0}^{j}. \end{equation} \end{lemma}
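For context (the explicit count is not needed in the sequel), if the trees in $\pmb{T}(j)$ are counted as plane $k$-ary trees, their number is given by the Fuss--Catalan number
\begin{align*}
\frac{1}{(k-1)j+1}\binom{kj}{j}\leq 2^{kj},
\end{align*}
so that \eqref{tjcnt} then holds with $C_{0}=2^{k}$.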
For fixed $\vec{\phi}\in \overrightarrow{\mathcal{F}L}^{1} (\mathcal{M})$, we associate to a given tree $\mathcal{T}\in \mathbf{T}(j)$ a space-time distribution $\Psi_{\vec{\phi}}(\mathcal{T})\in \mathcal{D}'([0,T]\times \mathcal{M})$ as follows: we replace each non-terminal node by the Duhamel integral operator $\mathcal{I}$, taking its $k$ children as arguments, and we replace each terminal node by the linear solution $S(t)\vec{\phi}$. We then define
\begin{align} \Xi_{j}(\vec{\phi})=\sum_{\mathcal{T}\in \mathbf{T}(j)} \Psi_{\vec{\phi}}(\mathcal{T}). \label{sum} \end{align} For example, \begin{align*} \Xi_{0}(\vec{\phi})=S(t)\vec{\phi} \quad \text{and} \quad \Xi_{1}(\vec{\phi})=\mathcal{I}[ S(t)\vec{\phi}, \ldots, S(t)\vec{\phi}]. \end{align*}
\noindent The multilinear operators $\Xi_{j}$ satisfy the following estimates, which will be used to show the convergence of the power series expansion; they are similar to \cite[Lemma 2.5]{O17}. We use short-hand notations such as $C_T \mathcal{F}L^p = C([0, T]; \mathcal{F}L^p(\mathcal{M}))$ for $T>0$.
\begin{lemma}\label{LEM:Xiestimates} There exists $C>0$ such that the following hold: Given $\vec \phi \in \overrightarrow{\mathcal{F}L}^{1}(\mathcal{M})$, $j\in \mathbb{N}$, $\vec{\psi}\in \overrightarrow{\mathcal{F}L}^{1}(\mathcal{M}) \cap \mathcal{H}^{0}(\mathcal{M})$, and $0<T\leq 1$, we have \begin{align}
\| \Xi_{j}(\vec{\phi})(T)\|_{\mathcal{F}L^{1}} &\leq C^{j}T^{2j}\|\vec{\phi}\|_{\overrightarrow{\mathcal{F}L}^{1}}^{(k-1)j+1}, \label{FL1}\\
\| \Xi_{j}(\vec{\psi})(T)\|_{\mathcal{F}L^{\infty}} &\leq C^{j}T^{2j}\|\vec{\psi}\|_{\overrightarrow{\mathcal{F}L}^{1}}^{(k-1)j-1} \|\vec{\psi}\|_{\mathcal{H}^{0}}^{2}.\label{FLinfty} \end{align} \end{lemma} \begin{proof} For $\vec{\phi}=(\phi_0,\phi_1)$, we have from \eqref{linsol} and $0<t\leq T\leq 1$, \begin{align}
\| S(t)(\vec{\phi})\|_{C_T\mathcal{F}L^{1}}\leq \| \phi_0 \|_{\mathcal{F}L^{1}}+T\|\phi_{1}\|_{\mathcal{F}L^{-1,1}}\leq \|\vec{\phi}\|_{\overrightarrow{\mathcal{F}L}^{1}}. \label{linest} \end{align}
As $|\sin t|\leq |t|$ for every $t\in \mathbb{R}$, \eqref{forcing} and the algebra property of $\mathcal{F}L^{1}(\mathcal{M})$ imply \begin{align}
\| \mathcal{I}[u_1,\ldots, u_k]\|_{C_T \mathcal{F}L^1} \leq \bigg(\int_{0}^{T} (T-t) dt\bigg) \prod_{j=1}^{k}\|u_j\|_{C_T \mathcal{F}L^1}\leq CT^2 \prod_{j=1}^{k}\|u_j\|_{C_T \mathcal{F}L^1}. \label{Iesti} \end{align} For a fixed $\mathcal{T} \in \mathbf{T}(j)$, $\Psi_{\vec{\phi}}\,(\mathcal{T})$ is essentially $j$ iterated compositions of the operator $\mathcal{I}^{k}[S(t)\vec{\phi}]$, with $(k-1)j+1$ terms $S(t)\vec{\phi}$. Hence, \eqref{FL1} follows from \eqref{sum}, \eqref{tjcnt}, \eqref{Iesti}, and \eqref{linest}. The estimate \eqref{FLinfty} follows similarly, using in addition Young's inequality $\|\widehat f \ast \widehat g\|_{L^{\infty}}\leq \|\widehat f\|_{L^{2}}\|\widehat g\|_{L^{2}}$ to place two of the factors in $\mathcal{H}^{0}$. \end{proof}
We now justify the power series expansion for solutions to \eqref{duhamel}.
\begin{lemma}\label{LEM:local}
Let $k\geq 2$ be an integer and $M>0$. Then, for any $0<T\ll \min (M^{-\frac{k-1}{2}},1)$ and $\vec{u}_0 \in \overrightarrow{\mathcal{F}L}^{1}(\mathcal{M})$ with $\| \vec{u}_0 \|_{\overrightarrow{\mathcal{F}L}^{1}}\leq M$, the following holds: \begin{itemize}
\item[(i)] There exists a unique solution $u\in C([0,T];\mathcal{F} L^{1}(\mathcal{M}))$ satisfying $(u,\partial_t u)|_{t=0}=\vec{u}_0$ to \eqref{duhamel}. \item[(ii)] The solution $u$ in \textup{(i)} may be expressed as \begin{align} u= \sum_{j=0}^{\infty}\Xi_{j}(\vec{u}_{0})=\sum_{j=0}^{\infty}\sum_{\mathcal{T}\in \mathbf{T}(j)} \Psi_{\vec{u}_{0}}(\mathcal{T}), \label{uexpand} \end{align} where the series converges absolutely in $C([0,T];\mathcal{F}L^{1}(\mathcal{M}))$. \end{itemize} \end{lemma}
\begin{proof} We begin with (i). We define \begin{align*} \Gamma[u](t):= S(t)(\vec{u}_0)+\mathcal{I}^{k}[u](t). \end{align*} Then, \eqref{linest} and \eqref{Iesti} imply \begin{align*}
\|\Gamma[u]\|_{C_T \mathcal{F}L^{1}}
\leq \| \vec{u}_0 \|_{\overrightarrow{\mathcal{F}L}^{1}}+CT^{2}\|u\|_{C_T \mathcal{F}L^{1}}^{k}. \end{align*}
Thus, for $0<T\leq 1$ such that $CT^{2}M^{k-1}\ll 1$, $\Gamma$ maps the ball $B_{2M}:=\{ v\in C([0,T];\mathcal{F}L^{1}(\mathcal{M})) \, : \, \|v\|_{C_T \mathcal{F}L^{1}}\leq 2M\}$ into itself. In view of the multilinearity of $\mathcal{I}$, we may reduce $T$ further to ensure that $\Gamma$ is in fact a strict contraction on $B_{2M}$. The contraction mapping theorem and an a posteriori continuity argument complete the proof of (i). We move on to verifying (ii). We fix $0<T\leq 1$ such that $CT^{2}M^{k-1}\ll 1$ and fix $\varepsilon>0$. From \eqref{FL1}, we see that the sum in \eqref{uexpand} converges absolutely in $C([0,T];\mathcal{F}L^{1}(\mathcal{M}))$ and hence there exists an integer $J_1 \geq 0$ such that for every $J\geq J_1$, \begin{align}
\| U-U_{J}\|_{C_{T}\mathcal{F}L^1}<\frac{\varepsilon}{3}, \label{U1} \end{align} where \begin{align*} U_J := \sum_{j=0}^{J}\Xi_{j}(\vec{u}_0) \quad \text{and} \quad U:=\sum_{j=0}^{\infty}\Xi_{j}(\vec{u}_0). \end{align*} In particular, $U, U_{J}\in B_{2M}$ for any $J\in \mathbb{N}$. From (i), $\Gamma$ is continuous from $B_{2M}$ into itself and hence there exists an integer $J_2 \geq 0$ such that for every $J\geq J_2$, \begin{align}
\|\Gamma[U_{J}]-\Gamma[U]\|_{C_{T}\mathcal{F}L^1} <\frac{\varepsilon}{3}. \label{U3} \end{align} Now for a fixed integer $J\geq 1$, we consider the difference $U_J-\Gamma[U_J]$. We have \begin{align*} U_J-\Gamma[U_J]& = \sum_{j=1}^{J}\Xi_{j}(\vec{u}_{0})-\sum_{0\leq j_1,\ldots,j_k\leq J}\mathcal{I}[\Xi_{j_1}(\vec{u}_0),\ldots, \Xi_{j_{k}}(\vec{u}_0)] \\ & = -\sum_{\ell=J+1}^{kJ+1}\sum_{\substack{0\leq j_1,\ldots,j_k\leq J \\ j_1+\cdots +j_k=\ell-1}}\mathcal{I}[\Xi_{j_1}(\vec{u}_0),\ldots, \Xi_{j_{k}}(\vec{u}_0)]. \end{align*} Using \eqref{Iesti}, \eqref{FL1}, and crudely estimating the sums, we obtain \begin{align*}
\| U_J-\Gamma[U_J]\|_{C_T \mathcal{F}L^1}
&\leq CT^{2} \sum_{\ell=J+1}^{kJ+1} \sum_{\substack{0\leq j_1,\ldots,j_k\leq J \\ j_1+\cdots +j_k=\ell-1}} \prod_{m=1}^{k} \|\Xi_{j_m}(\vec{u}_0)\|_{C_T \mathcal{F}L^1} \\ & \leq CT^{2}M^{k}J^{k}\sum_{\ell=J+1}^{\infty}(CT^{2}M^{k-1})^{\ell-1} \\ & \leq CM J^{k}(CT^{2}M^{k-1})^{J}. \end{align*} Thus, there exists an integer $J_3 \geq 0$ such that for every $J\geq J_3$, \begin{align}
\| U_{J}-\Gamma[U_{J}]\|_{C_T \mathcal{F}L^1} <\frac{\varepsilon}{3}. \label{U2} \end{align} With $J:=\max_{\ell=1,2,3} J_{\ell}$, \eqref{U1}, \eqref{U3}, and \eqref{U2} imply \begin{align*}
\|U-\Gamma[U]\|_{C_{T}\mathcal{F}L^1}&\leq \|U-U_J\|_{C_{T}\mathcal{F}L^1}+\| U_J-\Gamma[U_J]\|_{C_{T}\mathcal{F}L^1}
+\|\Gamma[U_J]-\Gamma[U]\|_{C_{T}\mathcal{F}L^1} \\ & <\varepsilon. \end{align*} Thus, $U=\Gamma[U]$ and, by uniqueness, we conclude $u=U$. \end{proof}
\section{Norm inflation for NLW} \label{SEC:3}
In this section, we present the proof of Theorem \ref{THM:NI} by establishing the following proposition.
\begin{proposition} \label{PROP:NI} Let $\mathcal{M}=\mathbb{R}^d$ or $\mathbb{T}^d$, $k\geq 2$ be an integer, $s<0$, and fix $u_0,u_1\in \mathcal{S}(\mathcal{M})$. Then, for any $n\in \mathbb{N}$, there exists a smooth solution $u_n$ to the NLW~\eqref{NLW1} and $t_{n}\in (0, \frac{1}{n})$ such that \begin{align}
\|(u_n (0), \partial_t u_{n}(0))-(u_0,u_1)\|_{\mathcal{H}^{s}(\mathcal{M})}<\frac 1n \qquad \text{and} \qquad
\|u_{n}(t_n)\|_{H^{s}(\mathcal{M})}>n. \label{nexplosion} \end{align} \end{proposition}
From density and diagonal arguments, Theorem \ref{THM:NI} follows from Proposition \ref{PROP:NI}. See \cite{Xia, O17} for the details.
Thus, the remaining part of this paper is devoted to the proof of Proposition \ref{PROP:NI}. While the argument closely follows \cite[Section 3]{O17}, we will detail it here in order to make this paper self-contained. In the following, we fix $\vec{u}_0 =(u_0,u_1)$ with $u_0, u_1 \in \mathcal{S}(\mathcal{M})$. In Subsection \ref{SUBSEC:31}, we prove multilinear estimates for each term in the power series expansion. Moreover, by observing high-to-low energy transfer and resonant interaction, we show a crucial lower bound for the first multilinear term. We then present the proof of Proposition \ref{PROP:NI} in Subsection \ref{SUBSEC:32}.
\subsection{Multilinear estimates} \label{SUBSEC:31}
In this subsection, we state the multilinear estimates on $\Xi_j$. Moreover, we show that the first multilinear term $\Xi_1$ is the leading part in the power series expansion in negative Sobolev spaces.
Let $\chi_K$ denote the indicator function of a subset $K \subset \widehat{\mathcal{M}}$, where $\widehat{\mathcal{M}}$ is as in \eqref{dual}. Set $e_1 := (1,0,\dots, 0) \in \widehat{\mathcal{M}}$. Given $n\in \mathbb{N}$, let $N=N(n)\gg 1$ be a large parameter to be chosen later. We define $\vec{\phi}_{n}=(\phi_{0,n}, 0)$ by \begin{align} \widehat \phi_{0,n}=R \chi_{\Omega}, \label{data} \end{align} where $R=R(N)\geq 1$, \begin{align} \Omega= \bigcup_{\eta \in \Sigma} (\eta+Q_{A}), \label{omega} \end{align} $Q_{A}=[-\frac A2, \frac A2)^{d}$, $A\gg 1$, and \begin{align} \Sigma=\{-2Ne_1, -Ne_1, Ne_1,2Ne_1\}. \label{Sig} \end{align} We require $N, R,$ and $A$ to be chosen so that
\begin{align}
\|\vec{u}_0\|_{\overrightarrow{\mathcal{F}L}^{1}}\ll RA^{d} \qquad \text{and} \qquad A\ll N, \label{sizedatau} \end{align} where the last condition ensures that $\Omega$ in \eqref{omega} is a disjoint union. Notice that \eqref{data} and \eqref{omega} imply \begin{align}
\|\vec{\phi}_n\|_{\overrightarrow{\mathcal{F}L}^{1}}
=\|\phi_{0,n}\|_{\mathcal{F}L^{1}} \sim RA^{d} \qquad \text{and} \qquad \|\vec{\phi}_{n}\|_{\mathcal{H}^{s}}\sim RA^{\frac d2}N^{s}, \label{sizedata} \end{align} for any $s\in \mathbb{R}$. We define $\vec{u}_{0,n}:=\vec{u}_0+\vec{\phi}_n$. For each $n\in \mathbb{N}$, Lemma~\ref{LEM:local} implies that there exists a unique solution $u_{n}\in C([0,T];\mathcal{F}L^{1}(\mathcal{M}))$ to \eqref{duhamel} with $(u_{n},\partial_t u_{n})\vert_{t=0}=\vec{u}_{0,n}$ and admitting the power series expansion \begin{align} u_{n}=\sum_{j=0}^{\infty} \Xi_{j}(\vec{u}_{0,n})=\sum_{j=0}^{\infty} \Xi_{j}(\vec{u}_{0}+\vec{\phi}_{n}) \label{powerseries} \end{align} on $[0,T]$, provided \begin{align}
0<T\ll \big( \|\vec{u}_{0}\|_{\overrightarrow{\mathcal{F}L}^{1}}+RA^{d}\big)^{-\frac{k-1}{2}}. \label{localtime} \end{align}
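For the reader's convenience, we record the computation behind \eqref{sizedata}: since $\Omega$ is a disjoint union of four cubes of volume $A^{d}$ on which $|\xi|\sim N$, we have
\begin{align*}
\|\phi_{0,n}\|_{\mathcal{F}L^{1}}=R\int_{\Omega}d\xi \sim RA^{d}
\qquad \text{and} \qquad
\|\phi_{0,n}\|_{H^{s}}^{2}=R^{2}\int_{\Omega}\jb{\xi}^{2s}\, d\xi\sim R^{2}A^{d}N^{2s},
\end{align*}
where, in the periodic case, the integrals are replaced by sums over $\mathbb{Z}^{d}$.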
We now state some key estimates for the multilinear expressions $\Xi_{j}(\vec{u}_{0,n})$. The proofs are essentially the same as in \cite[Lemma 3.2 and Lemma 3.3]{O17} and are included for completeness.
\begin{lemma}\label{LEM:multlin2} For any $s<0$ and $j\in \mathbb{N}$, the following estimates hold: \begin{align}
\|\vec{u}_{0,n}-\vec{u}_{0}\|_{\mathcal{H}^{s}} &\lesssim RA^{\frac{d}{2}}N^{s}, \label{hs1} \\
\|\Xi_{0}(\vec{u}_{0,n})(t)\|_{H^{s}} &\lesssim 1+RA^{\frac{d}{2}}N^{s}, \label{hs2} \\
\|\Xi_{1}(\vec{u}_{0,n})(t)-\Xi_{1}(\vec{\phi}_{n})(t)\|_{H^s} &\lesssim t^{2}\|\vec{u}_{0}\|_{ \mathcal{H}^{0}}R^{k-1}A^{d(k-1)}, \label{hs3} \\
\|\Xi_{j}(\vec{u}_{0,n})(t)\|_{H^s} &\lesssim C^j t^{2j}(RA^{d})^{(k-1)j} ( \|\vec{u}_0\|_{\mathcal{H}^0}+Rf_{s}(A)), \label{hs4} \end{align} where \begin{equation} f_{s}(A):= \begin{cases}
1 & \textup{if} \,\, \,s<-\frac{d}{2}, \\
\left( \log A \right)^{\frac{1}{2}}& \textup{if} \,\,\, s=-\frac{d}{2}, \\
A^{\frac{d}{2}+s} & \textup{if} \,\, \, s>-\frac{d}{2} \end{cases}
\notag \end{equation} and $0<t\leq 1$. \end{lemma}
\begin{proof} The proofs of \eqref{hs1} and \eqref{hs2} follow immediately from $\vec{\phi}_{n}=\vec{u}_{0,n}-\vec{u}_0$, \eqref{sizedata}, and \eqref{linestHs}. By the multilinearity of $\mathcal{I}$, we have \begin{align} \Xi_{1}(\vec{u}_{0,n})(t)-\Xi_{1}(\vec{\phi}_{n})(t)= \sum_{\vec{\psi}_1, \ldots, \vec{\psi}_{k}} \mathcal{I}[ S(t)\vec{\psi}_1,\ldots, S(t)\vec{\psi}_{k}], \label{diff1} \end{align} where the sum is over all choices of $\vec{\psi}_{j}\in \{ \vec{u}_0, \vec{\phi}_{n}\}$ with at least one appearance of $\vec{u}_{0}$. Since $s<0$, \eqref{diff1}, Young's inequality, \eqref{linestHs}, and \eqref{linest} imply \begin{align*}
\|\Xi_{1}(\vec{u}_{0,n})(t)-\Xi_{1}(\vec{\phi}_{n})(t)\|_{H^s} & \leq \|\Xi_{1}(\vec{u}_{0,n})(t)-\Xi_{1}(\vec{\phi}_{n})(t)\|_{L^2} \\
& \leq \sum_{\vec{\psi}_1, \ldots, \vec{\psi}_{k}} \| \mathcal{I}[ S(t)\vec{\psi}_1,\ldots, S(t)\vec{\psi}_{k}] \|_{L^2} \\
& \lesssim t^{2}\|\vec{u}_{0}\|_{\mathcal{H}^{0}} (\|\vec{u}_0\|_{\overrightarrow{\mathcal{F}L}^1}^{k-1}+\|\vec{\phi}_{n}\|_{\overrightarrow{\mathcal{F}L}^1}^{k-1}). \end{align*} Using \eqref{sizedatau} and \eqref{sizedata}, we obtain \eqref{hs3}.
We now prove \eqref{hs4}. By the triangle inequality, we have \begin{align}
\|\Xi_{j}(\vec{u}_{0,n})(t)\|_{H^s} \leq \|\Xi_{j}(\vec{u}_{0,n})(t)-\Xi_{j}(\vec{\phi}_{n})(t)\|_{H^s}+\|\Xi_{j}(\vec{\phi}_{n})(t)\|_{H^{s}}, \label{xiju} \end{align} and thus we reduce to proving estimates for the two terms on the right hand side of the above.
From \eqref{data} and \eqref{omega}, $\supp \mathcal{F} [ S(t)\vec{\phi}_{n}]$ is contained within at most four disjoint cubes of volume approximately $A^{d}$. Thus, for each fixed $\mathcal{T}\in {\bf{T}}(j)$, the support of $\mathcal{F} [ \Psi_{\vec{\phi}_{n}}(\mathcal{T})]$ is contained in at most $4^{(k-1)j+1}$ cubes of volume approximately $A^d$. Hence, for some $c,C>0$, we have \begin{align*}
| \supp\mathcal{F} [ \Xi_{j}(\vec{\phi}_{n})(t)] |\leq C^{j}A^{d}\leq |C^{\frac{j}{d}}Q_{A}|. \end{align*}
Since $s<0$, $\jb{\xi}^{s}$ is decreasing in $|\xi|$; hence, using \eqref{FLinfty}, \eqref{linestHs}, \eqref{linest}, and \eqref{sizedata}, we have \begin{align} \begin{split}
\| \Xi_{j}(\vec{\phi}_n)(t)\|_{H^{s}} & \leq \|\jb{\xi}^{s}\|_{L^2(\supp\mathcal{F} [ \Xi_{j}(\vec{\phi}_{n})(t)])} \| \Xi_{j}(\vec{\phi}_n)(t)\|_{\mathcal{F}L^{\infty}} \\
& \lesssim C^{j}\|\jb{\xi}^{s}\|_{L^2(C^{\frac{j}{d}}Q_A)} t^{2j}(RA^{d})^{(k-1)j-1}(RA^{\frac{d}{2}})^{2} \\ & \lesssim C^{j}t^{2j}(RA^{d})^{(k-1)j}Rf_{s}(A). \end{split} \label{xijphi} \end{align} Meanwhile, by considerations similar to \eqref{diff1}, we have \begin{align} \begin{split}
\|\Xi_{j}(\vec{u}_{0,n})(t)-\Xi_{j}(\vec{\phi}_{n})(t)\|_{H^s} & \leq \|\Xi_{j}(\vec{u}_{0,n})(t)-\Xi_{j}(\vec{\phi}_{n})(t)\|_{L^2} \\
& \leq C^{j}t^{2j}\|\vec{u}_{0}\|_{\mathcal{H}^0}(\|\vec{u}_0 \|_{\overrightarrow{\mathcal{F}L}^{1}}^{(k-1)j}+\|\vec{\phi}_{n}\|_{\overrightarrow{\mathcal{F}L}^{1}}^{(k-1)j})\\
&\leq C^{j}t^{2j}\|\vec{u}_0\|_{\mathcal{H}^0}(RA^{d})^{(k-1)j}. \end{split} \label{xijphi2} \end{align} Thus, \eqref{hs4} follows from \eqref{xiju}, \eqref{xijphi}, and \eqref{xijphi2}. \end{proof}
We now recall the following bounds on convolutions of characteristic functions of cubes: \begin{lemma} For any $a,b,\xi\in \widehat{\mathcal{M}}$ and $A\geq 1$, we have \begin{align} c_{d}A^{d}\chi_{a+b+Q_A}(\xi) \leq\chi_{a+Q_A}\ast \chi_{b+Q_A}(\xi)\leq C_{d}A^{d}\chi_{a+b+Q_{2A}}(\xi). \label{conv} \end{align} \end{lemma}
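For completeness, we sketch the proof of \eqref{conv} in the Euclidean case (the periodic case is analogous): by definition,
\begin{align*}
\chi_{a+Q_A}\ast \chi_{b+Q_A}(\xi)=\big\vert (a+Q_{A})\cap (\xi-b-Q_{A}) \big\vert,
\end{align*}
and, up to null sets, the intersection is empty unless $\xi \in a+b+Q_{2A}$, while for $\xi=a+b+\eta$ with $\eta \in Q_{A}$ its measure is $\prod_{i=1}^{d}(A-|\eta_{i}|)\geq (A/2)^{d}$.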
In the following proposition, we identify the first multilinear term in the Picard expansion as responsible for the instability in Proposition~\ref{PROP:NI}.
\begin{proposition}\label{PROP:INSTAB} Let $k\geq 2$ be an integer and $s<0$. Let $\vec{\phi}_{n}$ be as in \eqref{data}. Then, for $(AN)^{-\frac 12} \ll T\ll A^{-1}$, we have \begin{align}
\| \Xi_{1}(\vec{\phi}_n)(T)\|_{H^{s}} \gtrsim T^{2}R^k A^{d(k-1)} A^{\frac{d}{2}+s}. \label{lowerbound} \end{align} \end{proposition}
\begin{proof}
\noindent To simplify notation, we write \begin{align} \Gamma:= \bigg\{ (\xi_1,\ldots, \xi_k) \in \widehat{\mathcal{M}}^k \, : \, \sum_{j=1}^{k}\xi_{j}=\xi \bigg\} \qquad \text{and} \qquad d\xi_{\Gamma}:=d\xi_{1}\cdots d\xi_{k-1}.
\notag \end{align}
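In the computation below, we use the elementary product-to-sum identity
\begin{align*}
\prod_{m=1}^{k}\cos \theta_{m}=\frac{1}{2^{k}}\sum_{(\varepsilon_1,\ldots, \varepsilon_k)\in \{-1,1\}^{k}}\cos\bigg( \sum_{m=1}^{k}\varepsilon_{m}\theta_{m}\bigg),
\qquad \theta_{1},\ldots,\theta_{k}\in \mathbb{R},
\end{align*}
which follows by induction on $k$ from $\cos a \cos b= \frac{1}{2}\big[\cos (a+b)+\cos (a-b)\big]$.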
Restricting $|\xi|\lesssim A$ and using $A\ll N$, \eqref{data}, and product-to-sum formulas, we have \begin{align*} \mathcal{F}[ \Xi_{1}(\vec{\phi}_n)(T) ](\xi)
&= \int_{0}^{T} \frac{\sin((T-t)|\xi|)}{|\xi|} \mathcal{F} \big[ (S(t)(\phi_{0,n},0))^{k}\big] dt\\
&= \int_{0}^{T} \frac{\sin((T-t)|\xi|)}{|\xi|}\int_{\Gamma} \prod_{m=1}^{k} \cos(t|\xi_m|)\widehat \phi_{0,n}(\xi_m) d\xi_{\Gamma}dt \\ & = R^k
\int_{0}^{T} \frac{ \sin((T-t)|\xi|)}{|\xi|} \int_{\Gamma} \prod_{m=1}^{k} \cos(t|\xi_{m}|)\chi_{\Omega}(\xi_{m}) d\xi_{\Gamma} dt \\ & = \frac{R^k}{2^k} \sum_{\substack{(\eta_1, \dots, \eta_k) \in \Sigma^k \\ \eta_1+ \dots + \eta_k =0}} \sum_{(\varepsilon_1, \dots, \varepsilon_k) \in \{ -1,1\}^k}
\int_{0}^{T} \frac{ \sin((T-t)|\xi|)}{|\xi|} \\ &\quad \times
\int_{\Gamma}\cos \bigg( t \sum_{j=1}^k \varepsilon_j |\xi_j| \bigg) \prod_{j=1}^{k} \chi_{\eta_j + Q_A}(\xi_j) d\xi_{\Gamma} dt. \end{align*} For each fixed $\eta:=(\eta_1,\ldots,\eta_k)\in \Sigma^{k}$ with $\eta_1+\dots+\eta_k=0$, we split the inner summation into two parts: \begin{align*} \sum_{(\varepsilon_1, \dots, \varepsilon_k) \in \{ -1,1\}^k} = \sum_{(\varepsilon_1, \dots, \varepsilon_k) \in S_1 (\eta)} +\sum_{(\varepsilon_1, \dots, \varepsilon_k) \in S_2(\eta) } \end{align*} where \begin{align*}
S_{1}(\eta):= \bigg\{ (\varepsilon_1, \dots, \varepsilon_k) \in \{ -1,1\}^k \, : \, \sum_{j=1}^{k} \varepsilon_{j}|\eta_{j}|=0 \bigg\}, \end{align*}
$S_{2}(\eta) := \{-1,1\}^{k} \setminus S_1(\eta)$, and we denote by $I_{1}(\eta,\xi,T)$ and $I_{2}(\eta,\xi,T)$ the contributions of the summations over $S_{1}(\eta)$ and $S_{2}(\eta)$, respectively, so that
\begin{align} \mathcal{F}& [\Xi_{1}(\vec{\phi}_n)(T) ](\xi) =\frac{R^{k}}{2^k}\sum_{\substack{\eta=(\eta_1, \dots, \eta_k) \in \Sigma^k \\ \eta_1+ \dots + \eta_k =0}} \Big( I_{1}(\eta,\xi,T)+I_{2}(\eta,\xi,T) \Big). \label{decomp} \end{align} Note that the set $S_{1}(\eta)$ is non-empty. For fixed $\xi_{j}\in \eta_{j}+Q_{A}$ and then $(\varepsilon_1, \dots, \varepsilon_k) \in S_1 (\eta)$, \begin{align}
\bigg\vert \sum_{j=1}^{k} \varepsilon_{j}|\xi_{j}| \bigg\vert =
\bigg\vert \sum_{j=1}^{k} \varepsilon_{j}\big(|\xi_{j}|-|\eta_j|\big) \bigg\vert \le
\sum_{j=1}^{k} |\xi_{j} - \eta_j|
\lesssim A. \label{S1prop} \end{align} Then, it follows from \eqref{S1prop} that \begin{align}
\cos \bigg( t \sum_{j=1}^k \varepsilon_j |\xi_j| \bigg) \geq \frac{1}{2} \label{cosbd2} \end{align} for $0<t<T\ll A^{-1}$. Moreover, we have \begin{align}\label{sinbd}
\frac{\sin ((T-t)|\xi|)}{|\xi|} \gtrsim T-t \end{align}
for $0<t<T\ll A^{-1}$ and $|\xi| \lesssim A$. Using \eqref{decomp}, \eqref{conv}, \eqref{cosbd2}, and \eqref{sinbd}, we obtain \begin{align*} I_{1}(\eta,\xi,T) &\gtrsim \sum_{(\varepsilon_1, \dots, \varepsilon_k) \in S_1 (\eta)}\int_{0}^{T} (T-t)dt \int_{\Gamma} \prod_{j=1}^{k} \chi_{\eta_j +Q_{A}}(\xi_j)d\xi_{\Gamma} \\ & \gtrsim T^{2}A^{d(k-1)}\chi_{Q_{A}}(\xi) \end{align*} and hence \begin{align} \frac{R^{k}}{2^{k}}\sum_{\substack{\eta=(\eta_1, \dots, \eta_k) \in \Sigma^k \\ \eta_1+ \dots + \eta_k =0}} I_{1}(\eta,\xi,T) \gtrsim T^{2}R^{k}A^{d(k-1)}\chi_{Q_{A}}(\xi)
\label{I1con} \end{align} for $1\leq A\ll N$ and $0<T\ll A^{-1}$.
We now turn to the contribution from $I_{2}(\eta,\xi,T)$. We observe that for each fixed $\eta=(\eta_1,\ldots,\eta_k)$, $(\varepsilon_1, \dots, \varepsilon_k) \in S_{2}(\eta)$, and $\xi_j \in \eta_{j}+Q_{A}$, \begin{align}
\bigg\vert \sum_{j=1}^{k} \varepsilon_{j}|\xi_j| \bigg\vert \sim N. \label{S2prop} \end{align}
In view of $A\ll N$, the upper bound is obvious. For the lower bound, the reverse triangle inequality yields $||\xi_j|-|\eta_j|| \le |\xi_j-\eta_j| \lesssim A$ for $\xi_j \in \eta_{j}+Q_{A}$ and hence, we have \begin{align*}
\sum_{j=1}^{k} \varepsilon_{j}|\xi_j|
= \sum_{j=1}^{k} \varepsilon_j |\eta_j|
+ \sum_{j=1}^{k} \varepsilon_{j} ( |\xi_j| - |\eta_j|)
=\sum_{j=1}^{k} \varepsilon_j |\eta_j| +\mathcal{O}(A). \end{align*} It follows from \eqref{Sig} that \begin{align*}
\bigg\vert \sum_{j=1}^{k} \varepsilon_j |\eta_j| \bigg\vert \geq N \end{align*} for $(\varepsilon_1, \dots, \varepsilon_k) \in S_2(\eta)$, which verifies \eqref{S2prop}. We therefore have \begin{align*} I_{2}(\eta,\xi,T) = \sum_{(\varepsilon_1, \dots, \varepsilon_k) \in S_2(\eta)}
&\frac{1}{2|\xi|}\int_{\Gamma}\int_{0}^{T}
\bigg[ \sin \bigg( T|\xi|-t\bigg( |\xi|-\sum_{j=1}^{k} \varepsilon_j |\xi_j|\bigg) \bigg) \\
&+\sin\bigg( T|\xi|-t\bigg( |\xi|+\sum_{j=1}^{k} \varepsilon_{j}|\xi_j| \bigg)\bigg) \bigg] dt
\prod_{j=1}^{k}\chi_{\eta_j+Q_A}(\xi_j)d\xi_{\Gamma}. \end{align*} If we further restrict $\xi\in \frac{A}{4}e_1 +Q_{\frac{A}{4}}$ and use \eqref{S2prop},
\eqref{conv}, and $A \ll N$, we obtain \begin{align*}
|I_{2}(\eta,\xi,T)| \lesssim 2^k A^{-1}N^{-1}A^{d(k-1)}\chi_{Q_{kA}}(\xi), \end{align*} which implies \begin{align} \bigg\vert \frac{R^{k}}{2^{k}}\sum_{\substack{\eta=(\eta_1, \dots, \eta_k) \in \Sigma^k \\ \eta_1+ \dots + \eta_k =0}}
I_{2}(\eta,\xi,T)\bigg\vert \lesssim A^{-1}N^{-1}R^{k}A^{d(k-1)}\chi_{Q_{kA}}(\xi). \label{I2con} \end{align}
Returning to \eqref{decomp} and using \eqref{I1con}, \eqref{I2con} and imposing $T^{2}AN\gg \max(1,C)$, we obtain \begin{align}
\| \Xi_{1}(\vec{\phi}_n)(T)\|_{H^{s}}
&\ge \| \jb{\xi}^s {\mathcal{F}} [\Xi_{1}(\vec{\phi}_n)](T)\|_{L^2_\xi (\frac{A}{4}e_1 +Q_{\frac{A}{4}})} \notag\\
&\gtrsim \Big( T^2 - CA^{-1} N^{-1} \Big) R^k A^{d(k-1)} \| \jb{\xi}^{s}\|_{L^{2}_{\xi}( \frac{A}{4}e_1 +Q_{\frac{A}{4}})} \notag\\ &\gtrsim T^2 R^k A^{d(k-1)} A^{\frac{d}{2}+s}, \notag \end{align} which shows \eqref{lowerbound}. \end{proof}
\begin{remark}\rm \label{REM:NLKGlb} The same result as in Proposition \ref{PROP:INSTAB} is valid for \eqref{NLKG}. Indeed, since the linear solution of \eqref{NLKG} is written as \[ S(t)(u_0,u_1)= \cos(t\jb{\nabla}) u_0 + \frac{\sin (t \jb{\nabla})}{\jb{\nabla}} u_1, \]
it suffices to replace $|\xi|$ in the proof with $\jb{\xi}$. More precisely, it follows from $\jb{\xi_j}-|\xi_j| = \frac1{\jb{\xi_j}+|\xi_j|}$ and \eqref{S1prop} that \begin{align} \bigg\vert \sum_{j=1}^{k} \varepsilon_{j} \jb{\xi_{j}} \bigg\vert
&\le \bigg\vert \sum_{j=1}^{k} \varepsilon_{j} |\xi_{j}| \bigg\vert + \sum_{j=1}^{k} (\jb{\xi_j}-|\xi_j|) \notag\\ &\lesssim A, \notag \end{align} for $\eta=(\eta_1,\ldots,\eta_k) \in \Sigma^k$, $(\varepsilon_1, \dots, \varepsilon_k) \in S_1 (\eta)$, and $\xi_{j}\in \eta_{j}+Q_{A}$. Similarly, from \eqref{S2prop}, we have \begin{align}
\bigg\vert \sum_{j=1}^{k} \varepsilon_{j}\jb{\xi_j} \bigg\vert
&\ge \bigg\vert \sum_{j=1}^{k} \varepsilon_{j} |\xi_{j}| \bigg\vert - \sum_{j=1}^{k} (\jb{\xi_j}-|\xi_j|) \notag\\ &\sim N, \notag \end{align} for $\eta=(\eta_1,\ldots,\eta_k) \in \Sigma^k$, $(\varepsilon_1, \dots, \varepsilon_k) \in S_{2}(\eta)$, and $\xi_j \in \eta_{j}+Q_{A}$. Hence, from the same argument as in the proof of Proposition \ref{PROP:INSTAB}, the first multilinear term in the Picard expansion for \eqref{NLKG} satisfies \eqref{lowerbound}. \end{remark}
\subsection{Proof of Proposition \ref{PROP:NI}} \label{SUBSEC:32}
In order to prove Proposition~\ref{PROP:NI}, it suffices to show that, given $n\in \mathbb{N}$, the following conditions hold: \begin{align*}
\textup{(i)} & \quad RA^{\frac d2}N^s \ll \frac 1n, \\
\textup{(ii)} & \quad T^2 R^{k-1}A^{d(k-1)} \ll 1, \\
\textup{(iii)} & \quad T^{2}R^{k}A^{d(k-1)}A^{\frac{d}{2}+s} \gg n,\\
\textup{(iv)} & \quad T^2 R^k A^{d(k-1)}A^{\frac{d}{2}+s} \gg T^{4}R^{2(k-1)+1}A^{2d(k-1)}f_{s}(A), \\
\textup{(v)} & \quad (AN)^{-\frac{1}{2}}\ll T\ll \min \Big( A^{-1}, \frac 1n \Big),\\
\textup{(vi)} & \quad \| \vec{u}_0 \|_{\mathcal{H}^0}\ll RA^{\frac{d}{2}+s}, \,\, A\ll N, \,\, \| \vec{u}_0\|_{\overrightarrow{\mathcal{F}L}^{1}}\ll RA^d
\end{align*} for some particular choices of $R$, $T$, and $N$, all depending on $n$. While we could also allow $A$ to depend on $N$, it turns out that we can choose $A$ independently of $N$ (and hence of $n$), as we see below. Notice that condition (iv) reduces to satisfying \begin{align*} \textup{(iv')} & \quad T^{2}R^{k-1}A^{d(k-1)}f_{s}(A) \ll A^{\frac{d}{2}+s}. \end{align*}
Conditions (ii) and the last condition in (vi) ensure that the power series expansion \eqref{powerseries} is valid on $[0,T]$, where $T$ must satisfy \eqref{localtime}.
We now indicate how establishing (i) through (vi) suffices to prove Proposition~\ref{PROP:NI}.
When (ii) and (vi) hold, it follows from \eqref{data}, \eqref{omega}, \eqref{sizedatau}, \eqref{sizedata}, and Lemma~\ref{LEM:local}, that for each $n\in \mathbb{N}$, there is a unique solution $u_{n} \in C([0,T];\mathcal{F}L^{1}(\mathcal{M}))$ to \eqref{duhamel} with $(u_n , \partial_t u_n)|_{t=0}=\vec{u}_{0,n}$ and such that the power series expansion \eqref{powerseries} converges on $[0,T]$. By \eqref{hs1}, condition (i) ensures the first expression in \eqref{nexplosion}. From \eqref{hs4}, (ii), and (vi), we have \begin{align} \begin{split}
\Bigg\| \sum_{j=2}^{\infty} \Xi_{j}(\vec{u}_{0,n})(T) \Bigg\|_{H^{s}} & \lesssim Rf_{s}(A)\sum_{j=2}^{\infty} (CT^{2}R^{k-1}A^{d(k-1)})^{j} \\ & \lesssim T^{4}R^{2(k-1)+1}A^{2d(k-1)}f_{s}(A). \end{split} \label{tail} \end{align} Then, from Proposition~\ref{PROP:INSTAB} (thus requiring (v)), \eqref{hs2}, \eqref{hs3}, \eqref{tail}, (iii), (iv), and (vi), we have \begin{align*}
\|u_{n}(T)\|_{H^s}& \geq \|\Xi_{1}(\vec{\phi}_{n})(T)\|_{H^s}-\|\Xi_{0}(\vec{u}_{0,n})\|_{H^s} \\
& \hphantom{XX}-\|\Xi_{1}(\vec{u}_{0,n})(T)-\Xi_{1}(\vec{\phi}_{n})(T)\|_{H^s}-\bigg\| \sum_{j=2}^{\infty}\Xi_{j}(\vec{u}_{0,n})(T)\bigg\|_{H^s} \\
&\gtrsim T^{2}R^{k}A^{d(k-1)}A^{\frac{d}{2}+s}- (1+RA^{\frac{d}{2}}N^{s}) -T^2 \|\vec{u}_{0}\|_{\mathcal{H}^{0}}R^{k-1}A^{d(k-1)} \\ & \hphantom{X} -T^{4}R^{2(k-1)+1}A^{2d(k-1)}f_{s}(A) \\ & \sim T^{2}R^{k}A^{d(k-1)}A^{\frac{d}{2}+s} \gg n,
\end{align*} which establishes the second expression in \eqref{nexplosion} and hence Proposition~\ref{PROP:NI}. It remains to show that, given $n\in \mathbb{N}$, we can choose $A$, $R$, and $T$ depending on $N$, and then $N=N(n)\gg 1$, so that conditions (i) through (vi) hold.
\noindent $\bullet$ \textbf{Case 1:} $-\frac1{k-1} \le s<0$.
\noindent We choose \begin{align} A=10, \qquad R=N^{-s-\delta}, \qquad T=N^{\frac {k-1}2( s + \frac \delta 2)}, \notag \end{align} for $0<\delta\ll1 $ sufficiently small so that \begin{align*} -s>\frac{k+1}{2}\delta. \end{align*} Our choice of $A$ ensures that $f_{s}(A)\sim A^{\frac{d}{2}+s}$ so (iv') essentially reduces to verifying (ii). Then, we have \begin{align*} RA^{\frac d2}N^{s}& \sim N^{-\delta}\ll \frac{1}{n}, \\ T^{2}R^{k-1}A^{d(k-1)}& \sim N^{-\frac{k-1}{2} \delta}\ll 1, \\ T^{2}R^{k}A^{d(k-1)} A^{\frac{d}{2}+s} & \sim N^{-s-\frac{k+1}{2}\delta}\gg n, \notag \\ TA &\sim N^{\frac{k-1}{2}(s+\frac{\delta}{2})}\ll \frac 1n, \\ T^{2}AN& \sim N^{(k-1)(s+\frac{1}{k-1}+\frac{\delta}{2})}\gg 1, \end{align*} since $k \ge 2$ and $-\frac{1}{k-1}\leq s<0$.
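For instance, the exponent in the third estimate follows directly from the choices of $T$ and $R$:
\begin{align*}
T^{2}R^{k} = N^{(k-1)(s+\frac{\delta}{2})}\cdot N^{k(-s-\delta)} = N^{-s+(\frac{k-1}{2}-k)\delta} = N^{-s-\frac{k+1}{2}\delta},
\end{align*}
which indeed diverges as $N\to\infty$ since $-s>\frac{k+1}{2}\delta$, while the remaining factor $A^{d(k-1)}A^{\frac{d}{2}+s}$ is an absolute constant for $A=10$.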
\noindent $\bullet$ \textbf{Case 2:} $s <-\frac1{k-1}$.
\noindent This case follows from Case 1 with $s=-\frac1{k-1}$. More precisely, we choose \begin{align} A=10, \qquad R=N^{\frac 1{k-1}-\delta}, \qquad T=N^{-\frac 12 +\frac {k-1}4\delta} \notag \end{align} for $0<\delta < \frac 2{k^2-1}$. Then, we obtain \begin{align*} RA^{\frac d2}N^{s}& \sim N^{s+\frac1{k-1}-\delta}\ll \frac{1}{n}, \\ T^{2}R^{k-1}A^{d(k-1)}& \sim N^{-\frac{k-1}{2} \delta}\ll 1, \\ T^{2}R^{k}A^{d(k-1)} A^{\frac{d}{2}+s} & \sim N^{\frac1{k-1}-\frac{k+1}{2}\delta}\gg n, \notag \\ TA &\sim N^{-\frac 12 + \frac{k-1}{4} \delta}\ll \frac 1n, \\ T^{2}AN& \sim N^{\frac{k-1}2\delta}\gg 1, \end{align*} since $k \ge 2$ and $s < - \frac1{k-1}$.
\begin{remark}\rm \label{RMK:dt} In this remark, we analyse the growth of the Sobolev norm of $\partial_t u$. First, note that the following analogue of Lemma \ref{LEM:Xiestimates} holds: Given $\vec \phi \in \overrightarrow{\mathcal{F}L}^{1}(\mathcal{M})$, $j\in \mathbb{N}$, $\vec{\psi}\in \overrightarrow{\mathcal{F}L}^{1}(\mathcal{M}) \cap \mathcal{H}^{0}(\mathcal{M})$, and $0<T\leq 1$, we have \begin{align}
\| \partial_t \Xi_{j}(\vec{\phi})(T)\|_{\mathcal{F}L^{-1,1}} &\leq C^{j}T^{2j-1}\|\vec{\phi}\|_{\overrightarrow{\mathcal{F}L}^{1}}^{(k-1)j+1}, \label{FL1t}\\
\| \partial_t \Xi_{j}(\vec{\psi})(T)\|_{\mathcal{F}L^{-1,\infty}} &\leq C^{j}T^{2j-1}\|\vec{\psi}\|_{\overrightarrow{\mathcal{F}L}^{1}}^{(k-1)j-1} \|\vec{\psi}\|_{\mathcal{H}^{0}}^{2}.
\notag \end{align} \noindent We use the same notation as in Subsection \ref{SUBSEC:31}. Namely, we fix $\vec{u}_0\in \mathcal{S}(\mathcal{M})\times \mathcal{S}(\mathcal{M})$ and choose $\vec{\phi}_{n}=(\phi_{0,n},0)$, where $\phi_{0,n}$ is given by \eqref{data}. Moreover, for $n\in \mathbb{N}$, we will choose $N(n)\gg 1$, $R=R(N)\geq 1$, and $A\gg 1$ so that
\begin{align}
\|\vec{u}_0\|_{\mathcal{H}^{0}}\ll RA^{\frac{d}{2}+s}, \qquad A\ll N, \qquad \text{and} \qquad\|\vec{u}_0\|_{\overrightarrow{\mathcal{F}L}^{1}}\ll RA^{d}, \label{sizedatau2} \end{align} which is (vi) in Subsection \ref{SUBSEC:32}. We set $\vec{u}_{0,n}:=\vec{u}_0+\vec{\phi}_{n}$ for each $n\in \mathbb{N}$. By \eqref{FL1t}, the solution $u_{n}\in C([0,T];\mathcal{F}L^{1}(\mathcal{M}))$ to \eqref{duhamel} with $(u_{n},\partial_t u_{n})\vert_{t=0}=\vec{u}_{0,n}$ is also in $C^1([0,T];\mathcal{F}L^{-1,1}(\mathcal{M}))$ and admits the power series expansion \begin{align*} \partial_t u_{n}=\sum_{j=0}^{\infty} \partial_t \Xi_{j}(\vec{u}_{0,n})=\sum_{j=0}^{\infty} \partial_t \Xi_{j}(\vec{u}_{0}+\vec{\phi}_{n})
\end{align*} provided \eqref{localtime} holds. Analogous to Lemma~\ref{LEM:multlin2}, we have the following multilinear estimates: \begin{align}
\| \partial_t \Xi_{1}(\vec{u}_{0,n})(t)- \partial_t \Xi_{1}(\vec{\phi}_{n})(t)\|_{H^{s-1}} &\lesssim t\|\vec{u}_{0}\|_{ \mathcal{H}^{0}}R^{k-1}A^{d(k-1)}, \label{hs132} \\
\|\partial_t \Xi_{j}(\vec{u}_{0,n})(t)\|_{H^{s-1}} &\lesssim C^j t^{2j-1}(RA^{d})^{(k-1)j} ( \|\vec{u}_0\|_{\mathcal{H}^0}+Rf_{s}(A)) \label{hs442} \end{align} for $j \in \mathbb{N}$ and $0<t \le 1$. The last estimate \eqref{hs442}, under \eqref{sizedatau2}, implies \begin{align} \begin{split}
\bigg\| \sum_{j=2}^{\infty} \partial_t \Xi_{j}(\vec{u}_{0,n})(T) \bigg\|_{H^{s-1}} &\lesssim T^{3}R^{2(k-1)+1}A^{2d(k-1)}f_{s}(A), \label{remaining2} \end{split} \end{align}
provided $T^{2}(RA^d)^{k-1}\ll 1$. Furthermore, following the proof of Proposition~\ref{PROP:INSTAB}, for integer $k\geq 2$, $s<0$ and $N^{-1}\ll T\ll A^{-1}$, we have \begin{align}
\|\partial_t \Xi_{1}(\vec{\phi}_{n})(T)\|_{H^{s-1}}\gtrsim TR^{k}A^{d(k-1)}A^{-1}A^{\frac{d}{2}+s}. \label{lowerbound2} \end{align} Now, we choose the parameters $A, T, R$ and $N=N(n)$ as in Section~\ref{SUBSEC:32}. In particular, \eqref{lowerbound2} with $A=10$ yields that \begin{align}
\|\partial_t \Xi_{1}(\vec{\phi}_{n})(T)\|_{H^{s-1}} \gtrsim TR^{k}. \label{lowerbound3} \end{align} Hence, from \eqref{hs132}, \eqref{remaining2}, and \eqref{lowerbound3}, we obtain \begin{align*}
\|\partial_t u_{n}(t_{n})\|_{H^{s-1}}\gg n. \end{align*}
\end{remark}
\section{Norm inflation in Fourier-amalgam spaces}\label{app:amalgam} In this section, we detail the modifications to the proof of Theorem~\ref{THM:NI} needed to prove Theorem~\ref{THM:FLq}. We apply the same overall approach as detailed in Sections~\ref{SEC:2} and \ref{SEC:3}. As we saw earlier, the proof of Theorem~\ref{THM:FLq} reduces to proving norm inflation for regular initial data: \begin{proposition} \label{PROP:NI2} Let $d\in \mathbb{N}$, $k\geq 2$ be an integer, $1\leq p,q\leq \infty$ and $s<0$. Fix $u_0,u_1\in \mathcal{S}(\mathcal{M})$. Then, for any $n\in \mathbb{N}$, there exists a smooth solution $u_n$ to the NLW~\eqref{NLW1} and $t_{n}\in (0, \frac{1}{n})$ such that \begin{align*}
\|(u_n (0), \partial_t u_{n}(0))-(u_0,u_1)\|_{X^{p,q}_{s}(\mathcal{M})}<\frac 1n \qquad \text{and} \qquad
\|u_{n}(t_n)\|_{X^{p,q}_{s}(\mathcal{M})}>n,
\end{align*} where the space $X^{p,q}_{s}(\mathcal{M})$ is defined in \eqref{Xspace}. \end{proposition}
In order to prove Proposition~\ref{PROP:NI2}, we only need to make modifications to statements and estimates appearing in Section~\ref{SEC:3}. Namely, we begin with the exact same setup provided by Section~\ref{SEC:2}, in particular, the power series expansion \eqref{uexpand} assured by Lemma~\ref{LEM:local}.
Given $A,R,N$ to be chosen later, depending on $n\in \mathbb{N}$ (if necessary), we set $\vec{\phi}_{n}=(\phi_{0,n}, 0)$ exactly as in \eqref{data}, \eqref{omega} and \eqref{Sig}. With $\mathcal{X}^{p,q}_{s}(\mathcal{M}):=X^{p,q}_{s}(\mathcal{M})\times X^{p,q}_{s-1}(\mathcal{M})$, we have \begin{align}
\| \vec{\phi}_{n}\|_{\mathcal{X}^{p,q}_{s}(\mathcal{M})}\sim RA^{\frac{d}{q}}N^{s}, \label{datasize1} \end{align}
with the understanding that the right hand side above is $RN^s$ if $q=\infty$. While \eqref{datasize1} is obvious when $p=q$ (since $\mathcal{X}^{q,q}_{s}(\mathcal{M})=\overrightarrow{\mathcal{F}L}^{s,q}(\mathcal{M})$), we provide a few details for the case when $p\neq q$ and $\mathcal{M}=\mathbb{R}^{d}$ to give a flavour of the arguments to follow relating to \eqref{fw}. We have $\|\vec{\phi}_{n}\|_{\mathcal{X}^{p,q}_{s}(\mathbb{R}^d)}=\| \phi_{0,n}\|_{\ft w^{p,q}_{s}(\mathbb{R}^d)}$. For fixed $n\in \mathbb{Z}^d$, \begin{align*}
\| \widehat \phi_{0,n}(\xi)\chi_{n+Q}(\xi)\|_{L^p_{\xi}}&=R\, \bigg\| \sum_{\eta \in \Sigma} \chi_{(\eta+Q_A)\cap (n+Q)}(\xi) \bigg\|_{L^{p}_{\xi}} \\
&=R \bigg( \sum_{\eta\in \Sigma} \text{meas}( (\eta+Q_A)\cap (n+Q)) \bigg)^{\frac{1}{p}}. \end{align*} For fixed $n\in \mathbb{Z}^{d}$, the above sum is non-zero if and only if there exists $\mu\in \{\pm1,\pm 2\}$ such that
$|n+\mu Ne_1| \sim A$. Since $A\ll N$, the sets $\{ |n+\mu Ne_1|\sim A\}_{\mu}$ are disjoint and hence, \begin{align*}
\| \phi_{0,n}\|_{\ft w^{p,q}_{s}(\mathbb{R}^d)} \lesssim R \bigg( \sum_{\mu\in \{\pm 1,\pm2\}} \sum_{|n+\mu Ne_1|\sim A} \jb{n}^{sq}\bigg)^{\frac{1}{q}} \sim RA^{\frac{d}{q}}N^{s}. \end{align*} The lower bound follows by bounding below the sum in $\eta$ above by just one particular choice of $\eta$. Similarly, we have \begin{align*}
\|\phi_{0,n}\|_{\ft w^{p,\infty}_{s}(\mathbb{R}^d)} \sim R\max_{\mu\in \{\pm 1,\pm2\}} \max_{|n+\mu Ne_1|\sim A} \jb{n}^{s} \sim RN^{s}. \end{align*}
We have the following analogue of Lemma~\ref{LEM:multlin2}. \begin{lemma}\label{LEM:multlin3} For any $s<0$ and $j\in \mathbb{N}$, the following estimates hold: \begin{align}
\|\vec{u}_{0,n}-\vec{u}_{0}\|_{\mathcal{X}^{p,q}_{s}(\mathcal{M})} &\lesssim RA^{\frac{d}{q}}N^{s}, \notag \\
\|\Xi_{0}(\vec{u}_{0,n})(t)\|_{X^{p,q}_{s}(\mathcal{M})} &\lesssim 1+RA^{\frac{d}{q}}N^{s}, \notag\\
\|\Xi_{1}(\vec{u}_{0,n})(t)-\Xi_{1}(\vec{\phi}_{n})(t)\|_{X^{p,q}_{s}(\mathcal{M})} &\lesssim t^{2}\|\vec{u}_{0}\|_{\mathcal{X}^{p,q}(\mathcal{M})}R^{k-1}A^{d(k-1)}, \label{hs32} \\
\|\Xi_{j}(\vec{u}_{0,n})(t)\|_{X^{p,q}_{s}(\mathcal{M})} &\lesssim C^j t^{2j}(RA^{d})^{(k-1)j} ( \|\vec{u}_0\|_{\mathcal{X}^{p,q}(\mathcal{M})}+Rf_{s,q}(A)), \label{hs42} \end{align} where $0<t\leq 1$ and where we define \begin{equation} f_{s,q}(A):= \begin{cases}
1 & \textup{if} \,\, \,s<-\frac{d}{q}, \\
\left( \log A \right)^{\frac{1}{q}}& \textup{if} \,\,\, s=-\frac{d}{q}, \\
A^{\frac{d}{q}+s} & \textup{if} \,\, \, s>-\frac{d}{q}, \end{cases}
\notag \end{equation} for $1\leq q<\infty$ and $f_{s,\infty}(A)= 1$. \end{lemma}
The proof of Lemma~\ref{LEM:multlin3} follows exactly the same ideas as the proof of Lemma~\ref{LEM:multlin2}. The only noteworthy modification is in showing \eqref{hs32} and \eqref{hs42}. For this, we use the following product estimate: \begin{align}
\| fg\|_{X_{0}^{p,q}(\mathcal{M})} \lesssim \|f\|_{X_{0}^{p,q}(\mathcal{M})}\|g\|_{\mathcal{F}L^{1}(\mathcal{M})}, \label{prodest} \end{align} for any $1\leq p, q\leq \infty$. When $\mathcal{M}=\mathbb{T}^{d}$, \eqref{prodest} follows by Young's inequality, while if $\mathcal{M}=\mathbb{R}^{d}$, \eqref{prodest} follows from the uniform decomposition \begin{align*} \sum_{n\in \mathbb{Z}^{d}}\chi_{n+Q}(\xi)=1, \quad \text{for every} \quad \xi \in \mathbb{R}^d \end{align*}
and two applications of Young's inequality. Similarly, we have the following analogue of Proposition~\ref{PROP:INSTAB}. \begin{proposition}\label{PROP:INSTAB2} Let $k\geq 2$ be an integer, $1\leq p,q \leq \infty$ and $s<0$. Let $\vec{\phi}_{n}$ be as in \eqref{data}. Then, for $(AN)^{-\frac 12} \ll T\ll A^{-1}$, we have \begin{align}
\| \Xi_{1}(\vec{\phi}_n)(T)\|_{X^{p,q}_{s}(\mathcal{M})} \gtrsim T^{2}R^k A^{d(k-1)}A^{\frac{d}{q}+s}.
\notag \end{align} \end{proposition}
Now as in Subsection~\ref{SUBSEC:32}, Lemma~\ref{LEM:multlin3} and Proposition~\ref{PROP:INSTAB2} imply that Proposition~\ref{PROP:NI2} follows provided we can show that given $n\in \mathbb{N}$, we can choose $A,R,T,$ and $N$ depending on $n$ such that the following conditions are satisfied: \begin{align*}
\textup{(i)} & \quad RA^{\frac dq}N^s \ll \frac 1n, \\
\textup{(ii)} & \quad T^2 R^{k-1}A^{d(k-1)} \ll 1, \\
\textup{(iii)} & \quad T^{2}R^{k}A^{d(k-1)}A^{\frac{d}{q}+s} \gg n,\\
\textup{(iv)} & \quad T^2 R^k A^{d(k-1)}A^{\frac{d}{q}+s}\gg T^{4}R^{2(k-1)+1}A^{2d(k-1)}f_{s, q}(A), \\
\textup{(v)} & \quad (AN)^{-\frac{1}{2}}\ll T\ll \min \Big( A^{-1}, \frac 1n \Big),\\
\textup{(vi)} & \quad \| \vec{u}_0 \|_{\mathcal{X}^{p,q}(\mathcal{M})}\ll RA^{\frac{d}{q}+s}, \,\, A\ll N, \,\, \| \vec{u}_0\|_{\overrightarrow{\mathcal{F}L}^{1}}\ll RA^d.
\end{align*} We notice that the only places where the indices $p$ and $q$ appear above are with respect to the parameter $A$ (also in the implicit constants but they do not matter). Therefore, we can take $A=10$, and choose $R=R(N)$ and $T=T(N)$ exactly as in Cases 1 and 2 in Subsection~\ref{SUBSEC:32} above. This completes the proof of Proposition~\ref{PROP:NI2} and hence also the proof of Theorem~\ref{THM:FLq}.
\begin{ackno}\rm The authors would like to thank Tadahiro Oh for suggesting this problem. M.O. would like to thank the School of Mathematics at the University of Edinburgh, where this manuscript was prepared, for its hospitality. J.\,F.~was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh. Both authors also acknowledge support from Tadahiro Oh's ERC starting grant no. 637995 “ProbDynDispEq”. M.O.~was supported by JSPS KAKENHI Grant numbers JP16K17624 and JP20K14342. \end{ackno}
\end{document}
\begin{document}
\title{Short-term forecast of EV charging stations occupancy probability using Big Data streaming analysis} \setlength{\parindent}{1em}
\begin{abstract} The widespread diffusion of electric mobility requires a contextual expansion of the charging infrastructure. An extended collection and processing of information regarding the charging of electric vehicles may turn each electric vehicle charging station into a valuable source of streaming data. Charging point operators may profit from all these data to optimize their operation and planning activities. In such a scenario, big data and machine learning techniques would allow the valorization of real-time data coming from electric vehicle charging stations. \\ This paper presents an architecture able to deal with data streams from a charging infrastructure, with the final aim of forecasting electric charging station availability a set number of minutes ahead of the present time. Both batch data regarding past charges and real-time data streams are used to train a streaming logistic regression model, in order to take into account both recurrent past situations and unexpected current events. The streaming model performs better than a model trained only on historical data.\\ The results highlight the importance of constantly updating the predictive model parameters in order to adapt to changing conditions and always provide accurate forecasts. \end{abstract}
\keywords{electric charging infrastructure \and electric vehicle \and big data streaming architecture \and occupancy status forecast \and streaming logistic regression}
\section{Introduction} \indent The Integrated National Plan for Energy and Climate (PNIEC)\cite{pniec}, published in January 2020 by the Italian Ministry of Economic Development, foresees an intensive spread of electric mobility by 2030. The aim of reaching 4 million battery electric vehicles (EVs) and 2 million plug-in hybrid vehicles, starting from about 70,000 circulating EVs, is highly challenging. The increase of EVs requires a contextual expansion of the charging station network \cite{arera}. The current number of 16,700 charging points is expected to grow to 98,000-130,000, under the scenarios reported by Motus-E in a report published in 2020 \cite{motuse}. The research centre Ricerca sul Sistema Energetico (RSE S.p.A.) confirms this scenario, estimating 31,500 fast charging points and 78,600 slow charging points by 2030, for a total of around 110,000 public charging points \cite{rse}. \\\indent In this context, a more efficient collection and processing of electric charge data will be necessary. Each public charging station is indeed a potential data source, and the exploitation and valorization of all these data can be useful for both charging point operators and final users. Charging point operators could benefit in their planning activity, while final users could receive updated information and forecasts of the future occupancy status of charging stations. \\ \indent Real-time data streams regarding charging station occupancy may be sent to a central system, allowing their integration and processing with big data and machine learning techniques. Possible final objectives could be the identification of the most appropriate locations for new charging stations, the development of smart charging algorithms, the evaluation of the capacity of power distribution systems to handle extra charging loads and the assessment of the market value of the services provided by electric vehicles, such as vehicle-to-grid solutions \cite{smartcity}.
In addition, the management of data coming from electric vehicles and their charging stations plays a crucial role in the operation and planning of future Smart Grids \cite{smartcity}. \\ \indent This paper proposes a big data streaming architecture for providing short-term forecasts of charging station occupancy probabilities. The predictive machine learning algorithm takes into account both recurrent situations linked to the past and actual unexpected events, in order to forecast the occupancy status more accurately. The importance of considering a mix of historical and current conditions has been stressed during the Covid-19 pandemic: it has introduced a multitude of disruptions to daily life, which conventional forecasting models cannot correctly predict. As regards the electric energy sector, this problem arises in the context of electrical consumption forecasts on the distribution grid \cite{covid}. However, mobility restrictions during lock-downs have also impacted the charging habits of EV owners, with inevitable influences on predictions of the occupancy of electric charging stations.
\section{Data and methods} In order to retrieve the occupancy probability of an EV charging station, a classification model can be used. The logistic regression model is one of the most fundamental and widely used classification methods \cite{logregr} and has been selected for a first architectural development. \\ \indent However, a forecast model trained only on historical data can produce large forecasting errors, especially in the case of unexpected events. A prime example is a charging station close to a stadium or an exhibition center: a model trained only with historical data can provide high occupancy probabilities only on days with yearly recurring events, while forecasting a low occupancy probability in all other cases. \\ \indent As a consequence, the idea has been to initialize a Logistic Regression model with historical data related to past charges and to incrementally update the model using real-time data on the actual occupancy of EV charging stations. In this way, both recurrent situations linked to the past and actual unexpected events can be taken into account. \\ \indent The conversion of available data about past charges into continuous data streams has allowed the development and testing of a big data streaming architecture, potentially able to manage real-time data coming from EV charging stations.
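The update mechanism can be sketched in a few lines. The following is a minimal illustration of the idea only, not the actual Spark MLlib implementation used in this work; the class and variable names, the learning rate, and the synthetic data are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class StreamingLogReg:
    """Logistic regression refined with one gradient step per incoming mini-batch."""

    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def partial_fit(self, X, y):
        # One gradient-descent step on the negative log-likelihood
        # of the incoming mini-batch (X: features, y: 0/1 occupancy).
        p = sigmoid(X @ self.w + self.b)
        err = p - y
        self.w -= self.lr * X.T @ err / len(y)
        self.b -= self.lr * err.mean()

    def predict_proba(self, X):
        return sigmoid(X @ self.w + self.b)

# Initialize on a historical batch, then update with "streaming" mini-batches.
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(500, 3))
y_hist = (X_hist[:, 0] > 0).astype(float)   # synthetic occupancy labels
model = StreamingLogReg(n_features=3)
for _ in range(200):                        # batch initialization
    model.partial_fit(X_hist, y_hist)
for _ in range(50):                         # simulated real-time updates
    X_new = rng.normal(size=(20, 3))
    model.partial_fit(X_new, (X_new[:, 0] > 0).astype(float))
```

The same two-phase pattern (batch initialization followed by per-batch updates) is what the \textit{StreamingLogisticRegression} model of Spark MLlib provides natively, as described in the next subsections.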
\subsection{Available data} The data under consideration refer to 1,724 EV charges from a selected charging station. They have the following characteristics: \begin{itemize} \item the charges have been supplied in a period of three consecutive years; \item the charge distributions over the different days of the week (Figure \ref{Fig4}) and the different hours of the day (Figure \ref{Fig5}) indicate a higher supply frequency on working days than on Saturday and, above all, Sunday, and in the hours between 9 and 18; \item the charge duration distributions, in minutes (Figure \ref{Fig6}) and hours (Figure \ref{Fig7}), display the highest frequency for charges lasting less than one hour, with a mean distribution value around 35-40 minutes. \end{itemize}
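These weekly and hourly patterns motivate the cyclical time features used later as model inputs. A minimal sketch of a standard sine/cosine encoding follows; the function name and the encoding convention are illustrative assumptions, not the exact feature pipeline of this work:

```python
import math

def encode_cyclical(value, period):
    """Map a cyclical quantity (e.g. hour of day, month of year) to a
    point on the unit circle, so that period boundaries stay adjacent
    (hour 23 ends up close to hour 0)."""
    angle = 2.0 * math.pi * value / period
    return math.sin(angle), math.cos(angle)

# Hour-of-day and month-of-year features for 23:00 on a December day.
hour_sin, hour_cos = encode_cyclical(23, 24)
month_sin, month_cos = encode_cyclical(12 - 1, 12)  # months mapped to 0..11
```

With a single linear feature, a model would treat hour 23 and hour 0 as maximally distant; the two-component encoding removes this artificial discontinuity.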
\begin{figure}
\caption{Weekly distribution of charges.}
\label{Fig4}
\end{figure}
\begin{figure}
\caption{Distribution of charges in each hour of day.}
\label{Fig5}
\end{figure}
\begin{figure}
\caption{Distribution of charge duration in minutes.}
\label{Fig6}
\end{figure}
\begin{figure}
\caption{Distribution of charge duration in hours.}
\label{Fig7}
\end{figure}
\subsection{Streaming architecture} The whole architecture has been implemented using Apache Spark and the functions from its MLlib library \cite{mllib}. In particular, the \textit{StreamingLogisticRegression} function included in the Spark MLlib library has been selected; the function is natively able to update the initialized model with the arrival of new data streams. The features chosen as model inputs are: \begin{itemize} \item two cyclical variables to represent hours of the day; \item two cyclical variables to represent months of the year; \item a categorical variable to distinguish business days from weekends; \item a categorical variable to distinguish working days from public holidays; \item seven categorical variables to represent the different days of the week. \end{itemize} \indent The year with the largest amount of available data has been used as the training set to initialize the Streaming Logistic Regression model. Since only static, historical data were available, it has been necessary to simulate a continuous data stream in order to test the architecture. This task has been carried out with data from one of the two remaining years and using Apache Kafka. It is important to note that the principal aim was to demonstrate the feasibility of the streaming architecture implementation and not to select the best forecasting model.\\ \indent Figure \ref{Fig8} presents the developed architecture: \begin{itemize} \item the initial dataset is imported into Tableau for preliminary analysis and visualizations; \item test data have been selected and transformed into a continuous simulated data stream, with a time resolution of one minute; data are sent into a Kafka topic by a Kafka Producer; \item training data have been used to initialize a \textit{StreamingLogisticRegression} model. 
Later, a Kafka Consumer reads the streaming data coming into the Kafka topic; this data stream has a dual functionality: on the one hand it allows an incremental update of the initialized model, on the other it is used to extract hour and date in the next 15 minutes and to provide as output the occupancy status forecast of the charging station, from the just updated model; \item occupancy probability of the considered charging station after 15 minutes from the actual time is saved in another Kafka topic, written on InfluxDB and visualized in Grafana. \end{itemize} \begin{figure}
\caption{Developed architecture in order to create a simulated continuous data streaming, to train a \textit{StreamingLogisticRegression} model and to extract the resulting output.}
\label{Fig8}
\end{figure} \section{Results} For all 525,601 minutes in the test set, the actual charge presence or absence has been compared to the occupancy forecasts performed 15 minutes earlier. Figure \ref{Fig9} displays the model predictions and the actual occupancy status for the week of 22-29 September of the selected year. The considered models are the Streaming Logistic Regression model and the classical Logistic Regression model, the latter trained using historical entries only and not updated with real-time data; the actual status is 1 if the charging station is occupied, 0 otherwise. \begin{figure}
\caption{Occupancy probability forecasts obtained with the Streaming Logistic Regression model (in yellow) and with the classical Logistic Regression model (in green), compared to the actual occupancy status (in light-blue). The actual status is 1 if the charging station is occupied, 0 otherwise. The week between 22 and 29 September of the selected year is displayed.}
\label{Fig9}
\end{figure}
In order to extract the occupancy status forecast from the occupancy probability, a standard threshold of 0.5 is usually chosen: a probability higher than 0.5 indicates a charge presence, while a probability lower than 0.5 indicates a charge absence. Again, class 1 stands for a charge presence and class 0 for a charge absence. From a visual analysis of the results it appears that: \begin{itemize} \item the Streaming Logistic Regression model learns a modular pattern from historical data, which is also evident in the classical Logistic Regression results. The occupancy probability indeed decreases during night hours and on Saturdays and Sundays, compared to working days; \item occupancy probabilities from the streaming model are generally lower than those from the batch model. However, when the charging station is actually occupied, the streaming forecasts display a larger increase. This increase is more evident for long charges, when occupancy probabilities reach values above 0.8; \item occupancy probabilities are on average lower than 0.5; therefore, the threshold used to extract the corresponding class should probably be set lower than the standard 0.5. \end{itemize} The three indexes of precision, recall and F1-score allow a formalization of these results. Considering the number of false positives (FP), false negatives (FN), true positives (TP) and true negatives (TN) of the models, precision $p$ and recall $r$ are calculated as follows: \begin{equation} p= \frac{TP}{TP+FP} \quad \quad ; \quad \quad r =\frac{TP}{TP+FN} \end{equation}
F1-score is the harmonic mean of $p$ and $r$: \begin{equation} F_1 = \frac{2}{\frac{1}{r}+\frac{1}{p}} = 2 \cdot \frac{p \cdot r}{p+r} \end{equation}
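The definitions above translate directly into code. A small helper (the function name is illustrative) computing the three indexes from actual and predicted occupancy classes:

```python
def classification_scores(actual, predicted):
    """Precision, recall and F1-score for binary occupancy labels (0/1)."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 4 minutes of actual status vs. forecast classes.
p, r, f1 = classification_scores([1, 1, 0, 0], [1, 0, 1, 0])
# one TP, one FP, one FN: p = r = f1 = 0.5
```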
Figures \ref{Fig10}a and \ref{Fig10}b show the precision/recall curve for the classical and the Streaming Logistic Regression respectively; the sample includes all 525,601 minutes in the test set. The streaming model has a higher precision for all threshold values in the range 0.30-0.50, while the batch model generally presents a better recall. The combination of precision and recall in the F1-score confirms that the Streaming Logistic Regression produces more accurate predictions for all the thresholds considered (Figure \ref{Fig10}c). \begin{figure}
\caption{(a) Precision/recall curve for the classical Logistic Regression model; (b) precision/recall curve for the Streaming Logistic Regression model; (c) F1-score of the classical and Streaming Logistic Regression. Threshold values are in the range 0.30-0.50. The sample includes all 525,601 minutes in the test set.}
\label{Fig10}
\end{figure} \\ Focusing on the Streaming Logistic Regression model, the threshold value producing the best results is 0.30, while a threshold of 0.35 provides a good balance between precision and recall, with both metrics between 0.63 and 0.67. In practice, a model favoring recall over precision should be chosen when it is important to forecast as many charges as possible, at the risk of predicting charges that are never confirmed; conversely, a model favoring precision over recall should be chosen when only reliable charge forecasts are acceptable, at the risk of missing some actual charges.
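The threshold analysis described above amounts to a simple sweep; the helper below is an illustrative NumPy sketch (not part of the Spark implementation) that scores each candidate threshold in the 0.30-0.50 range by F1 and returns the best one:

```python
import numpy as np

def best_threshold(y_true, y_prob, thresholds=np.arange(0.30, 0.501, 0.05)):
    """Sweep candidate thresholds and return the (threshold, F1) pair
    maximizing F1. Class 1 stands for a charge presence."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    best = (float(thresholds[0]), -1.0)
    for th in thresholds:
        y_pred = (y_prob >= th).astype(int)
        tp = np.sum((y_pred == 1) & (y_true == 1))
        fp = np.sum((y_pred == 1) & (y_true == 0))
        fn = np.sum((y_pred == 0) & (y_true == 1))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        if f1 > best[1]:
            best = (float(th), float(f1))
    return best
```

Replacing the F1 criterion with precision or recall reproduces the trade-off discussed above.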
\section{Conclusion} This paper presents a first model prototype to forecast the occupancy probability of EV charging stations. The developed big data streaming architecture is based on Apache Spark, Spark Streaming and Apache Kafka. It receives streaming data from a charging station and outputs the occupancy probability for the next 15 minutes. The selected forecasting model is the Streaming Logistic Regression, initialized on historical data and continuously updated as real-time data streams arrive. \\ \indent The model learns from historical data a modular pattern, with a probability decrease during the night hours and the weekends. The real-time update of the model produces an occupancy probability increase when a charge is actually present; the streaming model therefore provides better predictions than the batch model. The occupancy status is retrieved by fixing a threshold: if the occupancy probability exceeds the threshold, the prediction states that the charging station is occupied, otherwise that it is available. A threshold of 0.35 strikes a balance between precision and recall, both lying in the range 0.63-0.67 in this case. \\ \indent The results highlight the need for further optimization of the Logistic Regression parameters, such as the regularization parameters, the streaming time window, the selected features and the gradient descent step. As regards the choice of the classification model, only the Logistic Regression has been tested so far; it remains to investigate which model is most appropriate for this specific use case, with the Decision Tree Classifier, the Random Forest Classifier and the Gradient-Boosted Tree Classifier as natural candidates \cite{classification}.
\\ \indent Moreover, a web or mobile application could be developed to display on a map the occupancy probabilities in real-time for all available charging stations. Such an application would offer an easier and more immediate tool, supporting EV drivers in choosing the charging station most likely to be free near their destination. \\ \indent Finally, it could be useful to investigate the weights assigned to batch and real-time data and how to calibrate this mix. The ability to take both historical and current conditions into account is indeed the main strength of the proposed architecture, and it will become increasingly important for forecasting models in many fields in a fast and continuously changing world.
\end{document}
\begin{document}
\title{NMR investigations of quantum battery using star-topology spin systems}
\author{Jitendra Joshi}
\email{jitendra.joshi@students.iiserpune.ac.in}
\author{T S Mahesh}
\email{mahesh.ts@iiserpune.ac.in}
\affiliation{Department of Physics and NMR Research Center,\\
Indian Institute of Science Education and Research, Pune 411008, India}
\begin{abstract}
Theoretical explorations have revealed that quantum batteries can exploit quantum correlations to achieve faster charging, thus promising exciting applications in future technologies. Using an NMR architecture, here we experimentally investigate various aspects of a quantum battery with the help of nuclear spin-systems in star-topology configuration. We first carry out a numerical analysis to study how the charging of a quantum battery depends on the relative purity factors of the charger and battery spins. By experimentally characterizing the state of the battery spin undergoing charging, we estimate the battery energy as well as the \textit{ergotropy}, the maximum amount of work that is unitarily available for extraction. The experimental results thus obtained establish the quantum advantage in charging the quantum battery. We propose using the quantum advantage, gained via quantum correlations among the chargers and the battery, as a measure for estimating the size of the correlated cluster. We develop a simple iterative method to realize asymptotic charging that avoids the oscillatory behaviour of charging and discharging. Finally, we introduce a load spin to realize a charger-battery-load circuit and experimentally demonstrate battery energy consumption after storage durations of up to two minutes. \end{abstract}
\maketitle
\section{Introduction} Recent advances in quantum technologies are revolutionizing the world with novel devices such as quantum computers, quantum communication systems, quantum sensors, and a host of other quantum-enhanced applications \cite{nielsen2002quantum,lo1998introduction}. The latest additions include quantum engines \cite{quan2007quantum,goswami2013thermodynamics}, quantum diodes \cite{palacios2018atomically,nakamura1996ingan}, quantum transistors \cite{geppert2000quantum}, as well as the quantum battery, an energy-storing device \cite{andolina2018charger,andolina2019extractable,aaberg2013truly} that is capable of exploiting quantum superpositions \cite{alicki2013entanglement, binder2015quantacell,campaioli2017enhancing,ferraro2018high,andolina2019quantum,campaioli2018quantum}. While quantum batteries open up novel applications, they are also exciting from the point of view of quantum thermodynamics \cite{2016,kosloff2013quantum,deffner2019quantum}, a rapidly emerging field that extends thermodynamical concepts to the quantum regime. It has been theoretically established that quantum batteries can exhibit faster charging in a collective charging scheme that exploits quantum correlations \cite{kamin2020entanglement,binder2015quantacell,campaioli2017enhancing}. Recently, various quantum battery models exhibiting quantum advantages have been introduced \cite{monsel2020energetic,kamin2020non}; they include quantum cavities \cite{pirmoradian2019aging,ferraro2018high,zhang2018enhanced,andolina2019extractable,crescente2020charging,julia2020bounds,mohan2021reverse,niedenzu2018quantum,caravelli2020random}, spin chains \cite{le2018spin,ghosh2020enhancement,PhysRevA.103.033715,rossini2019many,zakavati2021bounds,ghosh2021fast,santos2020stable}, the Sachdev-Ye-Kitaev model \cite{rossini2020quantum,rosa2020ultra}, and quantum oscillators \cite{andolina2019quantum,zhang2019powerful,andolina2018charger,chen2020charging}.
There have also been a few experimental investigations of quantum batteries, such as the cavity-assisted charging of an organic quantum battery \cite{quach2020organic}.
Here we describe an experimental exploration of quantum batteries formed by nuclear spin-systems of different sizes in star-topology configuration. Although one can consider various other configurations, we find the star-topology systems particularly convenient for this purpose, for the reasons mentioned in the review \cite{Mahesh_2021}. Using NMR methods, we study various aspects of the quantum battery by experimentally characterizing its state via quantum state tomography. Thereby we monitor the build-up of battery energy during collective charging and establish the quantum speedup. We also estimate the \textit{ergotropy}, which quantifies the maximum extractable work. By numerically quantifying quantum correlations in terms of entanglement entropy as well as discord, we reconfirm the involvement of correlations in yielding the quantum speedup. We therefore propose using the quantum speedup to estimate the size of the correlated cluster. We find this method to be much simpler than the spatial phase-encoding method \cite{pande2017strong} or the temporal phase-encoding method (e.g., \cite{krojanski2006reduced}). Unlike classical batteries, the charging of a quantum battery is oscillatory, i.e., the quantum battery starts discharging after reaching the maximum charge. Recent theoretical proposals to realize stable non-oscillatory charging were based on either an adiabatic protocol \cite{santos2019stable} or a shortcut to adiabaticity \cite{dou2022highly}. Here we propose and demonstrate a simple iterative procedure to realize asymptotic charging, based on the differential storage times of the charger and battery spins. Finally, we describe the implementation of the Quantum Charger-Battery-Load (QCBL) circuit. A similar circuit has recently been discussed theoretically in Ref. \cite{santos2021quantum}. Using a 38-spin star-system, we experimentally demonstrate the QCBL circuit with battery storage of up to two minutes before discharging the energy onto the load spin.
The article is organised as follows. In Sec. \ref{theory}, we describe the theoretical modeling of the quantum battery and present a numerical analysis of the battery performance in terms of the relative purity factors of the charger and battery spins. In Sec. \ref{sec:expt}, we describe the experimental studies of the quantum battery. The study of quantum advantage and ergotropy is reported in Sec. \ref{sec:qadv}. The proposal to use the quantum advantage as a measure of cluster size is discussed in Sec. \ref{sec:clustersize}. The scheme to avoid oscillatory charging is described in Sec. \ref{sec:asymcharge}. Finally, we describe the implementation of the QCBL circuit in Sec. \ref{sec:qcbl} before summarizing in Sec. \ref{Conclusion}.
\begin{figure}
\caption{A single spin-1/2 particle in an external magnetic field $B_0$ as a quantum battery. The ground state (a) and excited state (b) correspond respectively to uncharged and charged states of the battery. }
\label{fig:qb}
\end{figure}
\section{Theory} \label{theory} \subsection{A nuclear spin-battery} The simplest quantum battery (B) consists of a two-level quantum system, like a spin-1/2 particle placed in a magnetic field (Fig. \ref{fig:qb}). Here, the ground state $\ket{0}$ is modeled as a discharged or empty battery, while the excited state $\ket{1}$ is modeled as the fully charged battery. The spin battery can be charged either directly using an external drive \cite{binder2015quantacell,ferraro2018high} or indirectly via an ancillary spin, called the charger spin (C) \cite{le2018spin,santos2021quantum}. Let us now consider the B-C spin system. Each of the two spins is governed by its local Hamiltonian, $H_B$ and $H_C$ respectively, which for the sake of simplicity are chosen to have zero ground-state energy. Moreover, we assume that the quantum system at the initial time $t = 0$ is in a factorized state \begin{align} \rho_{BC}(0) = \proj{0}_B\otimes\proj{1}_C, \label{eq:rhobc1chargerpure} \end{align} with $\proj{1}_C$ being the excited state of the charger.
We now introduce a coupling Hamiltonian $H_{BC}(t)$ between B and C, in order to transfer as much energy as possible from the charger to the battery over a finite charging duration $\tau$. Under the global Hamiltonian of the system BC \begin{align} \centering H(t) = H_B + H_C+ H_{BC}(t), \end{align} the joint system evolves as \begin{align} \label{unitary} \rho_{BC}(\tau) &= U(\tau) \rho_{BC}(0)U^{\dagger}(\tau) \nonumber \\ &~\mbox{with}~ U(\tau) = Te^{-i\int_0^\tau dt H(t)}, \end{align}
where $T$ is the time-ordering operator. The instantaneous state of battery $\rho_B(\tau) = \mathrm{Tr}_C(\rho_{BC}(\tau))$ is obtained by tracing out the charger. The goal is to maximize the local energy of the battery \begin{align} E_B^\mathrm{max} = E_B(\overline{\tau}) = \mathrm{Tr}(\rho_{B}(\overline{\tau}) H_B), \end{align} with the shortest possible charging time $\overline{\tau}$. For a given maximum energy charged $E_B^\mathrm{max}$, the charging power is defined as \begin{align} P = E_B^\mathrm{max}/\overline{\tau}. \end{align}
We now discuss two charging schemes, parallel and collective \cite{binder2015quantacell,ferraro2018high,le2018spin}, as illustrated in Fig. \ref{fig:battery}. In the parallel charging scheme (Fig. \ref{fig:battery}(a)), each of the $N$ batteries is independently charged to a maximum energy $E_B^\mathrm{max}/N$ by one of the $N$ chargers over a duration $\overline{\tau}_1$. Conversely, in the collective charging scheme (Fig. \ref{fig:battery}(b)), all the batteries together form a battery-pack that is charged to a maximum energy $E_B^\mathrm{max}$ simultaneously by $N$ chargers over a duration $\overline{\tau}_N$. The latter scheme exploits quantum correlations and hence is more efficient \cite{ferraro2018high,binder2015quantacell}. Let $P_1$ and $P_N$ be the charging powers of the parallel and collective charging schemes respectively. The quantum advantage of collective charging is defined as \cite{campaioli2017enhancing} \begin{eqnarray} \label{gamma} \Gamma \equiv \frac{P_N}{P_1} = \frac{E_B^\mathrm{max}/\overline{\tau}_N}{N(E_B^\mathrm{max}/N)/\overline{\tau}_1} = \frac{\overline{\tau}_1}{\overline{\tau}_N}. \end{eqnarray}
We may also characterize the state of the battery during charging in terms of ergotropy, or the maximum work that can be extracted \cite{allahverdyan2004maximal}. Following Refs. \cite{allahverdyan2004maximal,pusz1978passive,campaioli2018quantum}, the ergotropy of a battery at time $\tau$ is given by \begin{eqnarray} \label{eq:ergo} {\cal E}(\rho_{B}(\tau)) = E_B(\rho_{B}(\tau)) - E_B(\rho_{B}^p(\tau)), \end{eqnarray}
where $E_B(\rho) = \mathrm{Tr}(\rho H_B)$ is the energy of the state $\rho$ and $\rho_{B}^{p}(\tau)$ is the passive state corresponding to $\rho_{B}(\tau)$. A passive state, or zero-ergotropy state, is one from which no work can be extracted by unitary operations \cite{allahverdyan2004maximal,pusz1978passive}. To construct the passive state, we first spectrally decompose the state $\rho_{B}(\tau)$ and the Hamiltonian $H_B$ as
\begin{align} \rho_{B}(\tau) &= \sum_{j} r_j \proj{r_j}, ~\mbox{where}~ r_1 \ge r_2 \ge \cdots, \mbox{and} \nonumber \\ H_B &= \sum_k E_k \proj{E_k} ~\mbox{where}~ E_1 \le E_2 \le \cdots. \label{eq:rhobhb}
\end{align} The passive state is diagonal in the energy basis, obtained by pairing the populations $r_j$ in descending order with the energy levels $E_j$ in ascending order, i.e., \begin{align} \rho_{B}^p(\tau) = \sum_j r_j \proj{E_j}. \label{eq:permutedrhob} \end{align} Note that the energy of the passive state is \begin{align} E_B(\rho_{B}^p(\tau)) = \sum_j r_j E_j. \end{align}
For a single spin battery described in Fig. \ref{fig:qb}, the eigenvalues of the instantaneous state are of the form $(1\pm \epsilon)/2$ where $|\epsilon| \le 1$. Therefore, \begin{align} \rho_{B}(\tau) &= \frac{1+\epsilon}{2} \proj{0} + \frac{1-\epsilon}{2} \proj{1} ~~\mbox{and} \nonumber \\ E_B(\rho_{B}(\tau)) &= \hbar\omega_B \frac{1-\epsilon}{2}. \label{eq:rhobeb} \end{align} As long as $\epsilon \ge 0$, the ground state is still more populated than the excited state, the battery remains in a passive state, and the ergotropy ${\cal E}(\rho_{B}(\tau)) = 0$. After sufficient charging, $\epsilon$ becomes negative, and the passive state changes to \begin{align} \rho_{B}^p(\tau) &= \frac{1-\epsilon}{2} \proj{0} + \frac{1+\epsilon}{2} \proj{1}, \nonumber \\ E_B(\rho_{B}^p(\tau)) &= \hbar\omega_B \frac{1+\epsilon}{2}, \nonumber \\ \mbox{and ergotropy}~ {\cal E}(\rho_{B}(\tau)) &= -\epsilon \hbar\omega_B ~~(\epsilon \le 0). \end{align}
For $|\epsilon| \ll 1$ we find that the dimensionless ratio \begin{align} \frac{{\cal E}(\rho_{B}(\tau))}{-\epsilon E_B(\rho_{B}(\tau))} &= \frac{2}{1-\epsilon} \approx 2. \label{eq:ergofactor} \end{align} In the following we describe the topology of the spin-systems used in our experiments.
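The passive-state construction of Eqs. (\ref{eq:ergo}-\ref{eq:permutedrhob}) translates directly into a few lines of linear algebra; the following NumPy sketch (illustrative, not the analysis code used in this work) pairs the populations in descending order with the energy levels in ascending order:

```python
import numpy as np

def ergotropy(rho, H):
    """Ergotropy = E(rho) - E(passive state), following the passive-state
    construction: populations sorted in descending order are paired with
    energy levels sorted in ascending order."""
    r = np.sort(np.linalg.eigvalsh(rho))[::-1]   # r_1 >= r_2 >= ...
    E = np.sort(np.linalg.eigvalsh(H))           # E_1 <= E_2 <= ...
    energy = float(np.real(np.trace(rho @ H)))   # E(rho) = Tr(rho H)
    passive_energy = float(np.sum(r * E))        # energy of the passive state
    return energy - passive_energy
```

For a qubit with $H_B = \mathrm{diag}(0, \hbar\omega_B)$ and populations $(1\pm\epsilon)/2$, this returns $0$ for $\epsilon \ge 0$ and $-\epsilon\hbar\omega_B$ for $\epsilon < 0$, as derived above.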
\begin{figure}
\caption{Two charging schemes: (a) parallel charging scheme where a single battery is charged by an individual charger and (b) the collective charging scheme where a single battery is charged by multiple chargers. }
\label{fig:battery}
\end{figure}
\subsection{Star-topology network} \label{Star-topology network} We now consider the star-topology network, in which a single central battery spin uniformly interacts with a set of $N$ indistinguishable charger spins \cite{Mahesh_2021}, as illustrated in Fig. \ref{fig:NMR} (a). A quantum battery in this configuration has recently been studied theoretically \cite{PhysRevB.104.245418}. The spin-systems with $N = 3, 9, 12, 18,~\&~ 36$ studied in this work are shown in Fig. \ref{fig:NMR} (b-f).
\begin{figure}
\caption{(a) Star-topology configuration showing the central battery spin symmetrically surrounded by charger spins. (b-f) The star-topology nuclear spin-systems studied in this work. The strength $J$ of battery-charger interaction for each system is shown with the molecular structure, while other details are tabulated in (g). Note that all the nuclei considered here (B and C) are spin 1/2 nuclei.}
\label{fig:NMR}
\end{figure}
We consider the local Hamiltonians for the battery and charger to be \begin{align} H_B = \hbar \omega_B(1/2-S_z) ~\mbox{and}~ H_C = \hbar \omega_CI_z. \end{align} Here $S_{x,y,z}$ represent the $x,y,z$-spin operators for the battery spin with Larmor frequency $\omega_B$, $I_{x,y,z} = \sum_{i=1}^N I_{x,y,z}^i$ represent the collective $x,y,z$-spin operators for the chargers with Larmor frequency $\omega_C = \gamma \omega_B$, where $\gamma$ is the relative gyromagnetic ratio. Following Ref. \cite{andolina2019quantum}, we choose the interaction Hamiltonian, \begin{align} H_{BC}(t) &= \hbar 2\pi J \left( S_x I_x + S_y I_y \right), \end{align}
where $J \ll |\omega_{C(B)}|$ is the coupling constant between the battery and the charger spins.
The spin-system is prepared in the thermal equilibrium state, which is in a generalized form of Eq. \ref{eq:rhobc1chargerpure}, i.e., \begin{align} \rho_{BC}(0) &= \rho_B(0) \otimes \rho_C(0), ~\mbox{with} \nonumber \\ \rho_B(0) &= \frac{1+\epsilon}{2} \proj{0} +\frac{1-\epsilon}{2} \proj{1} ~\mbox{and} \nonumber \\ \rho_C(0) &= \left(\frac{1-\gamma\epsilon}{2} \proj{0} +\frac{1+\gamma\epsilon}{2} \proj{1} \right)^{\otimes N}, \label{eq:rhobcrmixed} \end{align} where $\epsilon$ and $\gamma \epsilon$ are the purity factors of the battery and charger spins respectively. Under the high-temperature approximation relevant for NMR conditions, $\epsilon \approx 10^{-5}$.
\begin{figure}
\caption{ (a) Battery energy ${\tt e}_B$ versus charging phase $\theta = 2\pi J \tau$ for different number $N$ of chargers in pure (solid lines) as well as mixed (dashed lines; $\epsilon = 10^{-5}$) state cases. (b) Quantum advantage $\Gamma$ versus $N$ for different purity values $\epsilon$. }
\label{fig:QAdv}
\end{figure}
We evolve the whole system for a duration $\tau$ under the total Hamiltonian in the interaction frame defined by $U_\mathrm{IF}(t) = e^{-i(H_B+H_C) t /\hbar}$. The dimensionless energy of the battery \begin{align} {\tt e}_B(\tau) = E_B(\tau)/\hbar \omega_B = \dmel{1}{\rho_B(\tau)} \end{align} is related to the normalized polarization of the battery \begin{align} m_B(\tau) = \expv{\sigma_z}_{\rho_B(\tau)}/\epsilon ~~\mbox{via}~~ {\tt e}_B(\tau) = \frac{1-m_B(\tau)}{2}. \label{mb2eb} \end{align} For the special case of pure state, i.e., $\epsilon = 1$ and also setting $\gamma=1$, we obtain the state and dimensionless energy as \begin{align} \label{eq:rho} \rho_B(\tau) = \cos^{2}(\sqrt{N}\theta/2) \proj{0} + \sin^{2}(\sqrt{N}\theta/2) \proj{1}
\end{align} \begin{align}
{\tt e}_B(\tau) = \sin^{2}(\sqrt{N}\theta/2)~
\mbox{in terms of}~\theta = 2\pi J \tau.
\label{eq:ebpure}
\end{align}
The energy is maximized for $\overline{\theta} = \pi/\sqrt{N}$ at optimal time \begin{align} \overline{\tau}_N &= \frac{\overline{\theta}}{2\pi J} = \frac{1}{2J\sqrt{N}},~~
\therefore ~~ \Gamma = \frac{\overline{\tau}_1}{\overline{\tau}_N} = \sqrt{N}, \end{align} clearly predicting the quantum speed-up.
The battery energy evolution for various numbers of charger spins is shown in Fig. \ref{fig:QAdv} (a).
Note that the mixed state curves deviate from the pure state curves for $N \ge 3$: while ${\tt e}_B$ exceeds the pure state value of unity, the maximum charging takes a longer duration. The quantum advantage $\Gamma$ versus the number of charger spins for $\gamma=1$ and various values of $\epsilon$ is shown in Fig. \ref{fig:QAdv} (b). In the following we discuss the experimental investigation of the quantum battery.
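For small $N$, the pure-state collective charging can be verified by direct numerical evolution; the sketch below (illustrative, with $J = 1$ in arbitrary units and the battery placed at site 0) builds the star-topology interaction Hamiltonian and reproduces ${\tt e}_B(\theta) = \sin^{2}(\sqrt{N}\theta/2)$, with maximum charging at $\overline{\theta} = \pi/\sqrt{N}$:

```python
import numpy as np
from scipy.linalg import expm

# Single spin-1/2 operators (hbar = 1)
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    """Embed a single-spin operator at position `site` of an n-spin register."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def battery_energy(N, theta, J=1.0):
    """Dimensionless battery energy e_B after a charging phase theta = 2*pi*J*tau,
    for a pure initial state |0>_B (x) |1>^N (battery at site 0)."""
    n = N + 1
    Sx, Sy = embed(sx, 0, n), embed(sy, 0, n)
    Ix = sum(embed(sx, i, n) for i in range(1, n))
    Iy = sum(embed(sy, i, n) for i in range(1, n))
    H = 2 * np.pi * J * (Sx @ Ix + Sy @ Iy)      # H_BC in the interaction frame
    tau = theta / (2 * np.pi * J)
    psi0 = np.zeros(2 ** n, dtype=complex)
    psi0[2 ** (n - 1) - 1] = 1.0                 # battery bit 0, all charger bits 1
    psi = expm(-1j * H * tau) @ psi0
    return float(np.sum(np.abs(psi.reshape(2, -1)[1]) ** 2))  # <1|rho_B|1>
```

The initial state couples only to the symmetric one-excitation-transferred state with an effective Rabi coupling $\pi J\sqrt{N}$, which is why the exact dynamics reduces to the $\sin^{2}$ form above.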
\section{Experiments} \label{sec:expt} \subsection{Establishing quantum advantage \label{sec:qadv}} Our first aim is to establish the quantum advantage described in section \ref{Star-topology network} using various systems shown in Fig. \ref{fig:NMR}. The table containing information about the solvent, the relative gyromagnetic ratio ($\gamma$), and the T$_1$ relaxation time constant for each of the spin systems is shown in Fig. \ref{fig:NMR} (g). All the experiments were carried out in a 500 MHz Bruker NMR spectrometer at an ambient temperature of 298 K. The NMR pulse-sequence for the experiments is shown in Fig. \ref{fig:rootN} (a). Starting from thermal equilibrium state, we energize the charger spins by inverting their populations with the help of a $\pi$ pulse. This is followed by the charging propagator \begin{figure}
\caption{ (a) The NMR pulse sequence for charging quantum battery and measuring its energy. The wide and narrow rectangular pulses correspond to $\pi$ and $\pi/2$ pulses respectively. The shaped pulse in the lowest row corresponds to the pulsed-field-gradient (PFG) which dephases the coherences and retains populations. (b) The dots correspond to experimentally measured battery energy values ${\tt e}_B$ versus normalized charging duration $\tau/\overline{\tau}_N$ for the five spin-systems shown in Fig. \ref{fig:NMR}. Here the solid lines are spline-fits to guide the eye.(c) Quantum advantage $\Gamma$ versus the number $N$ of charger spins showing $\sqrt{N}$ dependence.}
\label{fig:rootN}
\end{figure}
\begin{align}
U_{XY}(\tau/n_0)
&= e^{-i H_{BC}\tau/n_0}
\nonumber \\
&\approx Y\cdot ZZ \cdot Y^\dagger \cdot X\cdot ZZ \cdot X^\dagger. \end{align} Here, $X(Y) = e^{-i(S_{x(y)}+I_{x(y)})\pi/2}$ and $ZZ = e^{-i S_zI_z \theta/n_0} $. Note that for $N \ge 2$, $[S_xI_x,S_yI_y] \neq 0$, and therefore we implement the interaction propagator via $n_0$ iterations of $U_{XY}(\tau/n_0)$, with $n_0$ sufficiently large that $\tau/n_0 \ll 1/(2J)$. Finally, after dephasing spurious coherences with the help of a pulsed-field-gradient (PFG), we apply a $\pi/2$ detection pulse and measure the battery polarization $m_B(\tau)$. During the detection period, we decouple the charger spins using the WALTZ-16 composite pulse sequence \cite{cavanagh1996protein}. \begin{figure}
\caption{Experimentally estimated ratio of the ergotropy to the maximum battery energy (dots) versus the normalized charging duration $\tau/\overline{\tau}_N$; the solid lines are theoretical fits accounting for experimental nonidealities.}
\label{fig:ergotropy}
\end{figure}
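The requirement of a sufficiently large number of iterations $n_0$ can be checked numerically by comparing the exact propagator $e^{-iH_{BC}\tau}$ with the split-step product; the sketch below (illustrative units, $J = 1$, battery at site 0) quantifies the first-order splitting error for the $S_xI_x$ and $S_yI_y$ parts of $H_{BC}$. For $N = 1$ the two parts commute and a single step is already exact, consistent with the remark above:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(op, site, n):
    """Embed a single-spin operator at position `site` of an n-spin register."""
    mats = [I2] * n
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def trotter_error(N, tau, n0, J=1.0):
    """Spectral-norm error of n0 split steps exp(-iA*tau/n0) exp(-iB*tau/n0)
    against exp(-i(A+B)*tau), with A = 2*pi*J*SxIx and B = 2*pi*J*SyIy."""
    n = N + 1
    Sx, Sy = embed(sx, 0, n), embed(sy, 0, n)
    Ix = sum(embed(sx, i, n) for i in range(1, n))
    Iy = sum(embed(sy, i, n) for i in range(1, n))
    A = 2 * np.pi * J * (Sx @ Ix)
    B = 2 * np.pi * J * (Sy @ Iy)
    U_exact = expm(-1j * (A + B) * tau)
    step = expm(-1j * A * tau / n0) @ expm(-1j * B * tau / n0)
    return float(np.linalg.norm(np.linalg.matrix_power(step, n0) - U_exact, 2))
```

The error shrinks roughly as $1/n_0$, the expected first-order Trotter behaviour.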
The experimentally measured battery energy ${\tt e}_B$ estimated from $m_B$ using Eq. \ref{mb2eb} for all five spin-systems shown in Fig. \ref{fig:NMR} are plotted versus normalized charging duration $\tau/\overline{\tau}_N$ in Fig. \ref{fig:rootN} (b). For an ideal pure-state system, we expect the maximum energy storage at $\tau/\overline{\tau}_N = 1$. On the other hand, for mixed state systems with $N \ge 3$, $\tau/\overline{\tau}_N$ slightly overshoots the unit value. However, in practical systems, the charging dynamics is affected by the experimental imperfections such as RF inhomogeneity (RFI), off-set and calibration errors, etc. In spite of these issues, the results shown in Fig. \ref{fig:rootN} (b) for all the systems show a remarkable agreement with the expected maximum charging duration at $\overline{\tau}_N$. The corresponding quantum advantage $\Gamma$ for all the systems are plotted versus the number $N$ of charger spins in Fig. \ref{fig:rootN} (c), where the solid line corresponds to the theoretically expected $\sqrt{N}$ function. Clearly, we observe a significant quantum advantage ranging from about 1.5 to over 6.
We now explain the experimental measurement of ergotropy for the subsystem consisting only the battery spin. To this end, we carry out the complete quantum state tomography \cite{nielsen2002quantum} of the battery spin while tracing out the charger spins using heteronuclear composite pulse decoupling. After reconstructing the density matrix $\rho_B(\tau)$ we use Eqs. (\ref{eq:ergo}-\ref{eq:permutedrhob}) to estimate the ergotropy value. The dots in Fig. \ref{fig:ergotropy} represent the experimentally estimated ratio of ergotropy to maximum energy (see Eq. \ref{eq:ergofactor}) plotted versus the normalized charging time $\tau/\bar{\tau}_{N}$. Here the solid lines are theoretical fits accounting for experimental nonidealities such as RFI, relaxation effects, etc. As explained after Eq. \ref{eq:rhobeb}, the battery spin remains in a passive state and exhibits zero ergotropy until its populations are saturated. Ideally for $\gamma=1$, the saturation occurs at time $1/(4J \sqrt{N})$ (follows from Eq. \ref{eq:rho}), while for $\gamma \ge 1$, it occurs earlier. Once the battery-spin populations begin to invert, the ergotropy ratio starts building up towards the value 2 (see Eq. \ref{eq:ergofactor}) and reaches its maximum at normalized charging time $\tau/\bar{\tau}_N = 1$. Thus, once again we observe the quantum advantage in charging of quantum battery.
\subsection{Determining size of the correlated cluster \label{sec:clustersize}} It has been shown that quantum correlations play a key role in charging a quantum battery via the collective mode \cite{binder2015quantacell}. The same holds true for charging in the star-topology system. In Fig. \ref{fig:entropy}, we plot the entanglement entropy as well as the quantum discord against the normalized charging time $\tau/\overline{\tau}_9$ for a star-system with $N=9$ charger spins. For reference we also show the charging energy ${\tt e}_B$ for both the pure (with $\epsilon = 1$, $\gamma = 1$) and the mixed state (with $\epsilon = 10^{-5}$, $\gamma = 1$). To evaluate the entanglement entropy, we traced out the charger spins and evaluated the von Neumann entropy of the battery state. For evaluating the quantum discord, we used the two-spin reduced state obtained by tracing out all spins except the battery spin and one of the charger spins. We find that the maximum correlation is reached at $\tau/\overline{\tau}_9 = 0.5$, i.e., at half the maximum charging period. Both the entanglement entropy and the discord vanish at the maximum charging period, i.e., at $\tau/\overline{\tau}_9 = 1$, where the spins become uncorrelated \cite{binder2015quantacell,alicki2013entanglement}. Since (i) the quantum advantage is linked to the generation of a correlated state \cite{binder2015quantacell} and (ii) the maximum charging period depends on the size of the correlated cluster, we propose to use $\Gamma^2+1$ as an estimate for the size of the correlated cluster. This is justified by the good agreement between theory and experiment for all five systems investigated in Fig. \ref{fig:rootN} (b) and (c). For example, the experimentally obtained value $\Gamma \approx 6$ for TTSS is consistent with a correlated cluster of 37 spins.
\begin{figure}
\caption{ Numerically calculated battery energy (with pure and mixed states), entanglement entropy (for pure state; $\epsilon = 1$, $\gamma = 1$), and quantum discord (for mixed state; $\epsilon = 10^{-5}$, $\gamma = 1$) versus the normalized charging duration $\tau/\overline{\tau}_9$ for $N=9$ star-system involving a single battery spin and nine charger spins. }
\label{fig:entropy}
\end{figure}
\subsection{Asymptotic charging \label{sec:asymcharge}} We now propose a simple method to avoid oscillatory charging and realize an asymptotic charging that keeps the quantum battery from discharging. The method relies on the differential storage times of the charger and the battery spins, i.e., $T_1^B \gg T_1^C$. It involves iteratively re-energizing the chargers and transferring the charge to the quantum battery after a carefully chosen delay. The scheme for asymptotic charging is described by the pulse-sequence shown in Fig. \ref{fig:Asym_charg} (a). It involves a delay $\Delta$ before energizing the chargers, followed by charging the battery. However, unlike the unitary scheme described in section \ref{sec:qadv}, here the entire process, including the waiting time, the re-energizing of the chargers, and the charging, is iterated. The experimentally measured battery energy ${\tt e}_B$ of the asymptotic charging with the TTSS system is shown by dots in Fig. \ref{fig:Asym_charg} (b), wherein the dashed lines represent fits to the asymptotic charging function ${\tt e}_B(n\Delta) = {\tt e}_B^\Delta(1-e^{-n\Delta/T_\Delta})$. Note that for TTSS, $T_1^B = 115.4$ s, which is much longer than $T_1^C = 3.3$ s (see Fig. \ref{fig:NMR} (g)). The estimated values of the charging time-constant $T_\Delta$ are plotted versus $\Delta$ in the inset of Fig. \ref{fig:Asym_charg} (b). There is clearly an optimal delay $\Delta$ for which we observe maximum charging. Therefore, we monitored the saturation charging, i.e., ${\tt e}_B(20\Delta)$, versus the delay time $\Delta$, as shown in Fig. \ref{fig:Asym_charg} (c). For TTSS, we find that an optimal delay in the range from 7.5 s to 10 s asymptotically achieves over 85\% of the charging obtained with the simple unitary method described in Sec. \ref{sec:qadv}.
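The fits in Fig. \ref{fig:Asym_charg} (b) are ordinary nonlinear least squares; the sketch below fits the asymptotic charging function ${\tt e}_B(n\Delta) = {\tt e}_B^\Delta(1-e^{-n\Delta/T_\Delta})$ to synthetic data (the values $0.85$ and $25$ s are illustrative placeholders, not the experimental parameters):

```python
import numpy as np
from scipy.optimize import curve_fit

def asym_charge(t, e_max, T):
    """Asymptotic charging model e_B(t) = e_max * (1 - exp(-t/T))."""
    return e_max * (1.0 - np.exp(-t / T))

# Synthetic stand-in for the measured points: delay Delta = 7.5 s, n = 1..20
# iterations; the "true" parameters 0.85 and 25 s are illustrative only.
delta = 7.5
t = delta * np.arange(1, 21)
rng = np.random.default_rng(0)
data = asym_charge(t, 0.85, 25.0) + rng.normal(0.0, 0.01, t.size)

# Recover the saturation energy and the charging time-constant
(e_max_fit, T_fit), _ = curve_fit(asym_charge, t, data, p0=(1.0, 10.0))
```

The recovered pair $({\tt e}_B^\Delta, T_\Delta)$ is what is plotted against $\Delta$ in the inset of Fig. \ref{fig:Asym_charg} (b).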
\begin{figure}
\caption{ (a) The NMR pulse sequence for asymptotic charging of a quantum battery. (b) Battery energy ${\tt e}_B$ versus charging duration $n\Delta$ for three values of delay $\Delta$. Here the dashed lines represent the fits to asymptotic charging functions as described in the text. The charging time-constants for these three cases are plotted in the inset. (c) Battery energy at saturation ${\tt e}_B(20 \Delta)$ (after $n=20$ iterations) versus the delay $\Delta$ showing the optimal delay range from 7.5 s to 10 s. Here the dashed line is a spline curve fit to guide the eye.}
\label{fig:Asym_charg}
\end{figure}
\begin{figure}
\caption{(a) The QCBL circuit and its implementation in the 38-spin star-topology system (left) and the NMR pulse sequence for the QCBL circuit (right). Here the dashed lines are spline curve fits to guide the eye. (b) The energy of battery (${\tt e}_B$) and load (${\tt e}_L$) versus discharging parameter $J_{BL}\tau'$. (c) The energy of the load (${\tt e}_L$) extracted from the battery after a storage time $\tau_s$. The dashed line is an exponential fit as discussed in the text. }
\label{fig:CD1}
\end{figure}
\subsection{Quantum Charger-Battery-Load (QCBL) Circuit \label{sec:qcbl}} Now we describe the QCBL circuit consisting of a charger (C), a battery (B), as well as a load (L). Here we again use the TTSS system, and consider all the proton spins together as the charger, the central $^{29}$Si spin as the battery, and the peripheral $^{29}$Si spin as the load. Given the $5\%$ natural abundance of $^{29}$Si, the probability of both the central and one of the four peripheral silicon nuclei being the $^{29}$Si isotope is $0.2\%$. In this system, the strength of the $^{29}$Si-$^{29}$Si interaction is $J_{BL} = 52.4$ Hz. The QCBL circuit and the corresponding spin labeling are illustrated on the left of Fig. \ref{fig:CD1} (a). The NMR pulse-sequence for QCBL is shown on the right side of Fig. \ref{fig:CD1} (a). We first charge the battery (B) as described in Sec. \ref{sec:qadv} and switch off the C-B interactions by decoupling the charger spins throughout. Subsequently, we can introduce a battery storage duration $\tau_s$, after which we apply a Gaussian spin-selective $\pi/2$ pulse on L followed by a PFG (G$_{1z}$). This ensures that there is no residual polarization of the load (L) spin. We then introduce the discharging scheme $U_{XY}(\tau')$ between B and L. Note that the $U_{XY}$ propagator can be exactly implemented in the case of a two-spin interaction. Finally, we measure the polarizations of both B and L spins after destroying the spurious coherences using a second PFG G$_\mathrm{z2}$, and thereby estimate their energies ${\tt e}_B$ and ${\tt e}_L$, respectively. The experimental results for ${\tt e}_B$ and ${\tt e}_L$ are plotted versus $J_\mathrm{BL} \tau'$ in Fig. \ref{fig:CD1} (b). In our experiment, the load spin begins from a maximally mixed state instead of the ground state. Therefore, ${\tt e}_L$ starts with a value around 0.5 before rising towards the maximum value of 1.0 at $J_\mathrm{BL} \tau' =0.5$. 
At this value of $J_\mathrm{BL} \tau'$, we vary the battery storage time $\tau_s$ and monitor the load energy ${\tt e}_L$. The results are shown in Fig. \ref{fig:CD1} (c). As expected, the data fit an exponential decay function $e^{-\tau_s/T_s}$ (dashed line in Fig. \ref{fig:CD1} (c)) with an estimated battery storage time-constant $T_s \approx 200$ s. This completes the demonstration of the QCBL circuit.
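For intuition about why full transfer occurs at $J_\mathrm{BL}\tau'=0.5$, one can evolve an idealized two-spin flip-flop (XY) Hamiltonian numerically. This is only a sketch assuming an isolated B--L pair with no relaxation, not a pulse-level simulation of the experiment:

```python
import numpy as np
from scipy.linalg import expm

J = 52.4                                        # Hz, the B-L coupling J_BL
sx = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
sy = np.array([[0, -0.5j], [0.5j, 0]])

# XY (flip-flop) Hamiltonian H = 2*pi*J (IxSx + IySy) for the B-L pair
H = 2 * np.pi * J * (np.kron(sx, sx) + np.kron(sy, sy))

def load_population(tau):
    # population transferred to L when B starts excited and L in the ground state
    psi0 = np.zeros(4, dtype=complex)
    psi0[2] = 1.0                               # basis |b l>: index 2 is |10>
    psi = expm(-1j * H * tau) @ psi0
    return abs(psi[1]) ** 2                     # weight on |01>
```

The transferred population is $\sin^2(\pi J_\mathrm{BL}\tau')$, which is maximal at $J_\mathrm{BL}\tau' = 0.5$, consistent with the observed maximum of ${\tt e}_L$.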
\section{Summary and outlook} \label{Conclusion} Considering the potential applications of quantum technologies, it is of great interest to study energy storage and usage at the quantum level. In this context, there is significant contemporary interest in studying quantum batteries. We investigated various aspects of quantum batteries using nuclear spin systems of star-topology molecules in the NMR architecture. We first theoretically compared the efficiency of the collective charging scheme (involving quantum correlations) with that of the parallel (classical) scheme.
Using NMR methods, we experimentally studied the collective charging scheme in a variety of spin systems, each having a single battery spin and a set of charger spins whose number $N$ ranged between 3 and 36. By measuring the polarization of the battery spin, we estimated the battery energy and thereby established the quantum advantage $\Gamma = \sqrt{N}$ of the collective charging scheme.
An important parameter characterizing a quantum battery is ergotropy, which quantifies the maximum amount of work that can be extracted from a quantum system via unitary methods. For each spin system, we performed experimental quantum state tomography and estimated the ergotropy of the battery spin and its evolution during charging. We observed the $\sqrt{N}$ quantum advantage in ergotropy as well.
By numerically evaluating entanglement entropy and quantum discord for star systems, we reconfirmed the established fact that the quantum advantage is realized via quantum correlations. Accordingly, we proposed using $\Gamma^2+1$ as an estimate for the size of the correlated cluster. In particular, for a 37-spin system, we obtained an experimental value of $\Gamma \approx 6$, which matched well with the expected cluster size.
We then addressed the issue of oscillatory charging wherein the battery starts discharging after overshooting the optimal charging duration. To this end, we proposed a simple asymptotic charging method that involves iteratively re-energizing the charger with a suitable delay. We experimentally demonstrated asymptotic charging and determined the optimal delay range.
Finally, we introduced a load spin to which the battery can deposit its energy after a suitable storage time, thus realizing the complete charger-battery-load circuit. Using a 38-spin system, we showed that the battery spin can store energy for up to two minutes and still transfer the stored energy to the load spin.
We believe this work paves the way for further methodology developments towards the practical aspects of quantum batteries. Such developments may also contribute towards better understanding of quantum thermodynamics and its applications. One may also envisage an advanced circuit involving multiple elements such as quantum diodes, quantum transistors, and quantum heat engines, in addition to quantum batteries.
\end{document}
\begin{document}
\tikzset{Rightarrow/.style={double equal sign distance,>={Implies},->}, triple/.style={-,preaction={draw,Rightarrow}}, quadruple/.style={preaction={draw,Rightarrow,shorten >=0pt},shorten >=1pt,-,double,double distance=0.2pt}}
\begin{abstract}
Given a span of $\infty$-categories one of whose legs is a right fibration and the other a cofibration, we construct an $\infty$-category, the exit path $\infty$-category of the span. This gives access to stratified geometry by means only of unstratified objects. \end{abstract}
\title{Linked spaces and exit paths}
\section*{Introduction}
This paper proposes a new, simplified, and `model-independent' approach to stratified geometry. More precisely, we advocate a particular level of resolution at which to consider a stratified space that is good enough for many purposes: a {\it linked space} is a triple $M,L,N$ of spaces together with a fibration $\pi$ and a cofibration $\iota$ as in the diagram \[
\begin{tikzcd}
& L \ar[dl,two heads,"{\pi}"'] \ar[dr,hook,"{\iota}"]\\
M && N
\end{tikzcd} \] To any such span we attach an $\infty$-category $\EEx$. Here, $M$ and $N$ model two strata, the former lower than the latter, $L$ is their link, and $\EEx$ is the exit path $\infty$-category \`a la Lurie--MacPherson--Treumann. This covers depth $1$, and a similar approach to higher depth is possible (cf.\ \Cref{rmk:exit shuffles}), as is a natural definition of maps of linked spaces. These will appear elsewhere in a treatment geared towards applications in factorisation homology and functorial field theory, but can be readily guessed from the depth-$1$ construction presented here.
We will mention only a few works from the vast literature related to stratified space theory. Some categorical considerations, e.g., in the context of homotopically stratified spaces \`a la Quinn \cite{quinn1988homotopically}, and in sister contexts of varying degrees of generality, appeared in \cite{miller2009popaths,treumann2009exit,woolf2009fundamental}, \cite[App.\ A]{luriehigheralgebra}. The more recent conically-smooth variety (\cite{ayala2017local}), which includes Whitney-stratified spaces (\cite{nocera2021whitney}) and thus in particular algebraic and analytic varieties, has seen categorical treatment in a different direction, yet our main construction was originally inspired by a specific result in this context: \cite[Lemma 3.3.5]{ayala2017stratified}, which identifies the space of paths (in the exit path $\infty$-category) that start and end in a given pair of strata as the link of that pair. In Quinn's context, a philosophically similar result is \cite[Theorem 6.3]{miller2013strongly}, which characterises equivalences of homotopically stratified spaces by probing on strata and (pairwise) homotopy links only. From this point of view, we show here that, conversely, the strata and the links are enough to construct the exit path $\infty$-category. We should mention that it may be beneficial to explore connections to the recent stratified homotopy theory(ies) of \cite{haine2018,nand2019simplicial,douteau2021homotopy,douteau2021homotopylinks}.
Our approach is quite different methodically and in intent, but completely compatible in effect. It affords a number of related luxuries, e.g.: \begin{enumerate}
\item By asking for strata and links only, we escape the need to introduce stratified paths and higher paths in order to consider the homotopy theory of stratified spaces, as would otherwise be necessary.
\item Since we do not need spaces stratified by maps to Alexandrov posets, we bypass any and all impositions of regularity or smoothness, besides the conditions on the link maps mentioned above.
\item Concerning accommodation of examples present in the literature, we remain happily agnostic about the particular definition of stratified space that applies. \end{enumerate} We should note with respect to point (1) that stratified paths \emph{do} appear in our construction, but are induced directly by the datum of a linked space. That the conditions imposed on $\pi$ and $\iota$ result directly in the `exit path simplicial set' (\Cref{dfn:exit of span}), defined for any span of simplicial sets as soon as $\iota$ is a cofibration, becoming an $\infty$-category (\Cref{thm:exit path category of linked space}) explains, in a sense, their incidence in known theory. In fact, we prove a slightly more general statement:
\begin{theorem*}[\ref{thm:exit path category of linked space}]
Let $\mathscr{M}$, $\mathscr{L}$, $\mathscr{N}$ be $\infty$-categories, $\pi\colon \mathscr{L}\to\mathscr{M}$ a right fibration, and $\iota\colon \mathscr{L}\to\mathscr{N}$ a cofibration. Then $\EEx\left(\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xrightarrow{\iota}\mathscr{N}\right)$ is an $\infty$-category. \end{theorem*}
Examples are produced whenever spans consisting of a fibration and/or a cofibration are given. We discuss here only one novel infinite-dimensional example involving Grassmannians (\Cref{ex:grassmann}). Aside from it, we briefly discuss bordisms and defects (submanifolds) in \Cref{ex:bordisms,ex:defects}.
Our methods are elementary. We employ only some standard notions concerning $\infty$-groupoids and $\infty$-categories (\cite[Part I]{kerodon}). The technical difficulty in the adjoining of non-invertible paths is combinatorial, and some auxiliary definitions we use to overcome it, although interesting in themselves, do not play a major role in practice: \emph{one knows the `exit index' of an exit path (\Cref{dfn:induced exit paths}) when one sees one.} An amusing byproduct, which appears to be new, is an interpretation of the Eilenberg--Zilber maps, incarnated here as shuffles, as exit path parametrisations (\Cref{constr:exit shuffles} ff.).
\subsection*{Acknowledgments} We thank K.\ İ.\ Berktav and A.\ S.\ Cattaneo for useful conversations. This research was supported by the NCCR SwissMAP, funded by the Swiss National Science Foundation. We acknowledge partial support of SNSF Grant No.\ 200020\verb|_|192080.
\subsection*{Conventions} The set $\mathbb{N}$ of natural numbers includes zero. We denote by $\bm{\Delta}$ the simplex category, and its objects by $[n]$, $n\in\mathbb{N}$. Unless stated otherwise, $\Delta[n]=\Hom_{\bm{\Delta}}(-,[n])$ is the standard $n$-simplex, and we employ the Yoneda Lemma without mention. Coface and codegeneracy maps we simply call face and degeneracy maps. We say {\it $\infty$-category} to mean a quasicategory, and {\it $\infty$-groupoid} to mean a Kan complex. Cartesian products of simplicial sets are defined degreewise. Given two simplicial sets $\mathscr{C},\mathscr{D}$, we write $\mathscr{C}^{\mathscr{D}}=\mathrm{Fun}(\mathscr{D},\mathscr{C})$ for the simplicial set whose set $\left(\mathscr{C}^{\mathscr{D}}\right)_k$ of $k$-simplices is the set of maps $\mathscr{D}\times\Delta[k]\rightarrow\mathscr{C}$ of simplicial sets, together with the obvious simplicial maps. A {\it cofibration} of simplicial sets is a monomorphism.
\section{Non-invertible paths}
Let $\mathscr{M}$, $\mathscr{L}$ and $\mathscr{N}$ be $\infty$-groupoids. We wish to construct an $\infty$-category that interprets $\mathscr{L}$ as the space of \emph{non-invertible} paths from $\mathscr{M}$ to $\mathscr{N}$, without modifying the paths of $\mathscr{M}$ and $\mathscr{N}$, and such that vertices remain exactly those of $\mathscr{M}\amalg\mathscr{N}$. To this end, we first need maps $ \mathscr{L}\rightarrow\mathscr{M}, \mathscr{N}, $ which play the respective roles of source and target. For the sake of clarity, we have separated the construction into two steps: first, in this section, we discuss the `space' of non-invertible paths, and then adjoin it to $\mathscr{M}\amalg\mathscr{N}$ in \Cref{sec:exit paths}.
\begin{definition}\label{dfn:paths starting in a subspace}
Let $\iota\colon \mathscr{L}\rightarrow\mathscr{N}$ be a map of simplicial sets. We call the simplicial set
\(
\mathscr{P}\coloneqq\mathscr{P}_{\iota}\coloneqq\mathscr{L}\times_{\mathscr{N}^{\{0\}}}\mathscr{N}^{\Delta[1]}
\)
the {\it mapping cocylinder} of $\iota$. \end{definition}
\begin{remark*}
\Cref{dfn:paths starting in a subspace} is a variation on the under-$\infty$-category construction, and reduces to it if $\mathscr{L}=\pt$ is the constant singleton, in that there is an equivalence
\(
\iota(\pt)/\mathscr{N}\simeq \pt\times_{\mathscr{N}^{\{0\}}}\mathscr{N}^{\Delta[1]}.
\)
Note that otherwise the coslice $\iota/\mathscr{N}$ does not model a space of paths starting in $\mathscr{L}$: its simplices, as simplices of $\mathscr{N}$, are higher-dimensional than required to begin with. \end{remark*}
\begin{lemma}\label{lem:mapping cocylinder}
Let $\mathscr{P}$ be as in \Cref{dfn:paths starting in a subspace}. If $\mathscr{N}$ is an $\infty$-groupoid, then $\mathscr{P}\simeq\mathscr{L}$. \end{lemma} \begin{proof}
We first observe that the source evaluation $\mathscr{N}^{\Delta[1]}\rightarrow\mathscr{N}^{\{0\}}$ is a Kan fibration.
Now, each fibre $\mathscr{N}^{\Delta[1]}_p\simeq p/\mathscr{N},$ $p\in\mathscr{N}_0$, is contractible by virtue of being an under-$\infty$-groupoid (\cite[018Y]{kerodon}). This verifies condition (4) of \cite[00X2]{kerodon}, which implies that $\mathscr{N}^{\Delta[1]}\rightarrow\mathscr{N}^{\{0\}}$ is an equivalence, or equivalently (by the same cited result), a trivial Kan fibration. As trivial Kan fibrations pull back to trivial Kan fibrations, the natural map $\mathbf{s}\colon \mathscr{P}\rightarrow \mathscr{L}$ is one such. As it is in particular a Kan fibration, the same result implies that $\mathbf{s}$ is an equivalence. \end{proof}
The mapping cocylinder appears in classical topology as follows: in the analogous construction with spaces $L,N$ and $\iota$ a continuous map, the natural map $P_\iota\rightarrow N$ is a fibration replacement for $\iota$ in view of a homotopy equivalence $L\simeq P_\iota$.
\begin{remark}\label{rmk:induced source and target maps for non-invertible paths}
There are two induced maps $\pi,\tau\colon \mathscr{P}\rightarrow\mathscr{M},\mathscr{N}$ defined as the compositions in the diagram
\begin{equation*}
\begin{tikzcd}
\mathscr{P} \ar[dd,bend right=40,dashed,"{\pi}"']\ar[d,"{\mathbf{s}}"] \ar[r]\ar[rr,bend left,dashed,"{\tau}"]\pbarrow & \mathscr{N}^{\Delta[1]} \ar[r]\ar[d] & \mathscr{N}^{\{1\}} \\
\mathscr{L} \ar[d,"{\pi}"] \ar[r,"{\iota}"] & \mathscr{N}^{\{0\}} \\
\mathscr{M}
\end{tikzcd}
\end{equation*}
where the map $\mathscr{N}^{\Delta[1]}\rightarrow\mathscr{N}^{\{1\}}$ is given by precomposition with ${\{1\}}\times\Delta[k]\hookrightarrow\Delta[{1}]\times\Delta[k]$. \end{remark}
Ideally, one would adjoin $\mathscr{P}$, using $\pi\colon\mathscr{L}\rightarrow\mathscr{M}$, to $\mathscr{M}\amalg\mathscr{N}$ as the space of non-invertible paths from $\mathscr{M}$ to $\mathscr{N}$, by employing $\pi,\tau$ of \Cref{rmk:induced source and target maps for non-invertible paths} as source and target maps, respectively. Unfortunately, for combinatorial reasons, $\mathscr{P}$ does not lend itself to this directly. Instead, we will extract data out of it that does.
First, let us delineate the problem in order to motivate the construction to follow.
\begin{remark}\label{rmk:shuffles} A vertex of $\mathscr{P}$ is a path of $\mathscr{N}$ that starts at a point in $\iota(\mathscr{L})$. One may coherently view this as a path which starts in $\mathscr{M}$, by projecting its source down to $\mathscr{M}$ via $\pi$, and which, analogously, ends in $\mathscr{N}$ via $\tau$. For higher morphisms, however, a direct generalisation requires unnatural choices: for instance, a $1$-morphism in $\mathscr{P}$ may be depicted as \begin{equation}\label{eq:1-morphism mapping cocylinder}
{\small \begin{tikzcd}
\bullet \ar[r] & \bullet\\
\bullet\ar[ur,dashed,color=blue] \ar[u] \ar[r] &\bullet \ar[u]
\end{tikzcd}
} \end{equation} where the bottom edge is in $\iota(\mathscr{L})$, and the top edge is in $\mathscr{N}$. (We depict the $\Delta[1]$-coordinate in a $k$-morphism of $\mathscr{N}^{\Delta[1]}$, i.e., in a map $\Delta[1]\times\Delta[k]\rightarrow\mathscr{N}$ of simplicial sets, as the upwards vertical coordinate.) Two of the (non-degenerate) $2$-simplices of $\mathscr{N}$ we may extract are \begin{equation}\label{eq:triangles from square}
{\small \begin{tikzcd}
& \bullet \\
\bullet \ar[ur,color=blue]\ar[r] & \bullet\ar[u]
\end{tikzcd}} \end{equation} and \begin{equation}\label{eq:triangles from square 2}
{\small \begin{tikzcd}
\bullet \ar[r] & \bullet\\
\bullet \ar[u]\ar[ur,color=blue]
\end{tikzcd}} \end{equation} corresponding to two $(1,1)$-shuffles $\Delta[2]\rightarrow\Delta[1]\times\Delta[1]$ (\`a la Eilenberg--Mac Lane--Zilber \cite[]{eilenberg1953products,eilenberg1953groups}; see also \cite[00RF]{kerodon}).\footnote{Triangle \eqref{eq:triangles from square} is given by the $2$-simplex of $\Delta[1]\times\Delta[1]$ defined by $([2]\rightarrow[1],[2]\rightarrow[1])=((0,1\mapsto 0; 2\mapsto 1),(0\mapsto 0;1,2\mapsto 1))$ in $\bm{\Delta}$. Triangle \eqref{eq:triangles from square 2} is given by $((0\mapsto0;1,2\mapsto1),(0,1\mapsto0;2\mapsto1))$. The hypotenuse in both triangles is the edge $\left([1]\xrightarrow{\id}[1],[1]\xrightarrow{\id}[1]\right)\in\left(\Delta[1]\times\Delta[1]\right)_1$.} If we were to add \eqref{eq:1-morphism mapping cocylinder} as a $2$-morphism to $\mathscr{M}\amalg\mathscr{N}$, say with source edge the bottom one, then we would have to choose the hypotenuse of the triangle \eqref{eq:triangles from square} as the target edge, and the vertical edge as the intermediate $\overline{12}$-edge. But we may equally well make the analogous choice with triangle \eqref{eq:triangles from square 2}, declaring the left vertical edge the source. The problem is that {\it both} types of triangles are required for composition: if we wish later to concatenate, say, a path in $\mathscr{M}$ with a (non-invertible) $1$-morphism in $\mathscr{P}$, then we need (assuming there is a lift to $\mathscr{L}$) a triangle of the first type. Similarly, if we wish to concatenate a non-invertible $1$-morphism with a path in $\mathscr{N}$, we need a triangle of the second type. \end{remark}
\begin{construction}[exit shuffles]\label{constr:exit shuffles}
Any pair $1\leq j\leq k$ of natural numbers determines a $(1,k-1)$-shuffle $\mathcal{S}^k_{j}=\mathcal{S}_j\colon\Delta[k]\rightarrow \Delta[1]\times\Delta[{k-1}]$ by setting
\begin{equation*}
\mathcal{S}_{j}=\begin{cases}
\begin{bmatrix}
0 & 0 & \cdots & 0 & 1_{j} & 1_{j+1} & 1 & \cdots & 1 \\
0 & 1 & \cdots & j-1 & j-1 & j & j+1 & \cdots & k-1
\end{bmatrix}, & j<k \\
\begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & 0 & 1 \\
0 & 1 & 2 & \cdots & k-2 & k-1 & k-1
\end{bmatrix}, & j=k
\end{cases}
\end{equation*}
in path notation, where the subscript $j$ indicates the column number, with column count starting at $0$. This is the non-degenerate element of $\left(\Delta[1]\times\Delta[{k-1}]\right)_k$ defined by the poset map $[k]\rightarrow[1]\times[k-1]$ given by
\[
i\mapsto
\begin{cases}
(0,i), & i<j\\
(1,i-1), & i\geq j.
\end{cases}
\]
We call $\mathcal{S}_j$ an {\it exit shuffle}, and $j$ its {\it exit index}. It has multiple left inverses, but we will use a particular one, $\mathcal{C}^k_j=\mathcal{C}_j$, defined to be postcomposition with the poset map $[1]\times[k-1]\rightarrow[k]$ given by
\begin{align*}
(0,i)&\mapsto
\begin{cases}
i, &i<j\\
j-1, &i\geq j
\end{cases}
\\
(1,i)&\mapsto
\begin{cases}
j, & i<j\\
i+1, & i\geq j
\end{cases}
.
\end{align*}
This choice for $\mathcal{C}$ is justified by \Cref{lem:meaning of exit index,lem:two classes} below. \end{construction}
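Since $\mathcal{S}_j$ and $\mathcal{C}_j$ are given by explicit vertex formulas, the left-inverse property can be checked mechanically. A small sketch (our own verification, not part of the construction):

```python
def S(j, k):
    # exit shuffle S_j^k : [k] -> [1] x [k-1], as a function on vertices
    return lambda m: (0, m) if m < j else (1, m - 1)

def C(j, k):
    # the chosen left inverse C_j^k : [1] x [k-1] -> [k]
    return lambda e, m: (m if m < j else j - 1) if e == 0 else (j if m < j else m + 1)

for k in range(1, 8):
    for j in range(1, k + 1):
        s, c = S(j, k), C(j, k)
        # C_j S_j = id on [k]
        assert all(c(*s(m)) == m for m in range(k + 1))
        # S_j is monotone into the product order, hence a poset map
        assert all(s(m)[0] <= s(m + 1)[0] and s(m)[1] <= s(m + 1)[1]
                   for m in range(k))
```

Note the monotonicity check is componentwise, since the order on $[1]\times[k-1]$ is the product order, not the lexicographic one.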
\begin{definition}
\label{dfn:induced exit paths}
Let $\iota\colon\mathscr{L}\rightarrow\mathscr{N}$ be a map of simplicial sets.
For $k\geq1$, we define
$$
\Phat_{k-1}\subset \mathscr{N}_{k}\times\{1,\dots,k\}
$$
to be the subset consisting of pairs $(\gamma,j)$ such that in the diagram
\begin{equation*}
\begin{tikzcd}
\Delta[k]\ar[dr,bend right=10,"{\mathcal{S}_j}"',hook]\ar[rr,"{\gamma}"] & & \mathscr{N}\\
& \Delta[{1}]\times\Delta[{k-1}]\ar[ul,bend right=10,"{\mathcal{C}_j}"',two heads]\ar[ur,dashed,"{\Gamma=\gamma\circ\mathcal{C}_j}"']
\end{tikzcd}
\end{equation*}
the arrow $\Gamma$ lifts to the mapping cocylinder, i.e., it is in the image of the natural map $\mathscr{P}\rightarrow\mathscr{N}^{\Delta[1]}$. We call a pair $(\gamma,j)\in\Phat_*$ an {\it exit path of index $j$}.
\end{definition}
\begin{remark}[exit indices at depth $1$]\label{rmk:exit shuffles}
In terms of ordinary stratified geometry, \Cref{constr:exit shuffles} corresponds to the following phenomenon: a stratified $k$-simplex or $k$-chain $\Delta^{k}\rightarrow X$ of $X$ is a map of stratified spaces, where $\Delta^{k}=\overline{C}^k(\pt)$ is the $k$-fold closed cone on the point. The closed cone $\overline{C}(Y)$ of a stratified space $Y\rightarrow\mathcal{P}=\mathcal{P}_Y$, where $\mathcal{P}$ is the stratifying poset (equipped with the Alexandrov topology so that downward-closed subsets are closed) has $$\pt\coprod_{\{0\}\times Y}[0,1]\times Y$$ as its underlying space, and $$\mathcal{P}_{\overline{C}(Y)}=\mathcal{P}_{{Y}}^{\triangleleft},$$ i.e., $\mathcal{P}_{Y}$ with a minimal element adjoined, as its stratifying poset, together with the obvious stratification $\overline{C}(Y)\rightarrow\mathcal{P}_{Y}^{\triangleleft}$. Now, the stratified map $\Delta^{k}\rightarrow X$ comes with a commutative topological square
\begin{equation*}
\begin{tikzcd}
\Delta^{k} \ar[d] \ar[r,"{f}"] & X \ar[d]\\
\mathcal{P}_{\Delta^k} \ar[r,"{s_f}"'] & \mathcal{P}_X
\end{tikzcd}
.
\end{equation*}
Clearly we have $\mathcal{P}_{\Delta^k}\simeq[k]$ as posets. If $\mathcal{P}_X\simeq\{a\prec b\}$, then the poset map $s_f$ is determined by a unique minimal `exit index' $j\in[k]$. Namely, let $j=0$ if $s_f$ is constant, or else let $j$ be the smallest number such that $s_f(j-1\prec j)=a\prec b$ (referring to $s_f$ applied to an arrow). This is well-defined since $[k]$ is connected. As we do not refer to stratified paths explicitly, however, the different levels (indices) at which a path may exit (from the stratum $X_a$) yield for us different sorts of non-invertible paths. Note also that we do not consider exit shuffles of index $0$, as the corresponding $k$-chains are completely contained within the smooth manifold $X_b$, and similarly we do not consider `$j=k+1$', i.e., paths contained within $X_a$. (Besides, these indices do not determine shuffles in the ordinary sense.) The generalisation, using multiple exit indices, of the depth-$1$ \Cref{constr:exit shuffles} to higher depth is immediate from this analogy, albeit notationally heavy. \end{remark}
The aim of \Cref{dfn:induced exit paths} is three-fold.
\begin{itemize}
\item It helps group elements of $\Phat_\ast$ into three classes (\Cref{dfn:vertical bottom top}), which will play different roles.
\item It `fixes orientation', in the sense that the faces of $\gamma$ that touch $\mathscr{L}$ are directed \emph{away} from $\mathscr{L}$ due to the orientation of the accompanying $\Gamma\in\mathscr{P}_{*}$. This precludes `paths from $\mathscr{N}$ to $\mathscr{M}$', i.e., enter paths. The orientation depends on the exit index, so:
\item Unequal pairs $(\gamma,j)\neq(\gamma,j')\in\Phat_*$ that share their first coordinate play different roles, and this is indispensable, as will become clear.
\end{itemize}
\begin{definition}\label{dfn:vertical bottom top}
Let $k\geq1$, $(\gamma,j)\in\Phat_{k-1}$, and let $d_i=\partial^*_i$ be a face map.
Then $d_i(\gamma)$ is either
\begin{itemize}
\item {\it vertical} if it does not factor as follows:
\[
\begin{tikzcd}
\Delta[{k-1}]\ar[dr,dotted,"{\nexists}"]\ar[r,hook,"{\partial_i}"] & \Delta[{k}]\ar[dr,"{\mathcal{S}_j}"]\ar[rr,"{\gamma}"] && \mathscr{N}\\
& ({\{0\}}\amalg\{1\})\times\Delta[{k-1}]\ar[r,hook] & \Delta[{1}]\times\Delta[{k-1}]\ar[ur,"{\Gamma=\gamma\circ\mathcal{C}_j}"']
\end{tikzcd}
;
\]
\item or {\it low} if it factors as follows:
\[
\begin{tikzcd}
\Delta[{k-1}]\ar[dr,dotted,"{\exists}"]\ar[r,hook,"{\partial_i}"] & \Delta[{k}]\ar[dr,"{\mathcal{S}_j}"]\\
& \{0\}\times\Delta[{k-1}]\ar[r,hook] & \Delta[{1}]\times\Delta[{k-1}]
\end{tikzcd}
;
\]
\item or {\it upper} if it factors as follows:
\[
\begin{tikzcd}
\Delta[{k-1}]\ar[dr,dotted,"{\exists}"]\ar[r,hook,"{\partial_i}"] & \Delta[{k}]\ar[dr,"{\mathcal{S}_j}"]\\
& \{1\}\times\Delta[{k-1}]\ar[r,hook] & \Delta[{1}]\times\Delta[{k-1}]
\end{tikzcd}
.
\]
\end{itemize} \end{definition}
In the exit path $\infty$-category (\Cref{dfn:exit of span}), vertical faces will remain non-invertible, low faces will become simplices in $\mathscr{M}$, and upper faces in $\mathscr{N}$. Writing `$d_i(\gamma)$ is vertical', etc., is slightly abusive, since whether a face is vertical, low or upper depends strongly on the exit index. This should not cause any confusion because we do not use these adjectives in any other context. We took them from \cite{eilenberg1950semi}, where they were used in a similar context.
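Whether $d_i(\gamma)$ is vertical, low or upper in fact depends only on the pair $(j,i)$, since the factorisation conditions involve only the composite $\mathcal{S}_j\circ\partial_i$. The following sketch (our own check, not part of the text) classifies faces and confirms that the only low face is $d_k$ at exit index $j=k$, and the only upper face is $d_0$ at $j=1$:

```python
def S(j, k):
    # exit shuffle S_j^k : [k] -> [1] x [k-1] on vertices
    return lambda m: (0, m) if m < j else (1, m - 1)

def face(i, k):
    # coface map d_i : [k-1] -> [k]
    return lambda m: m if m < i else m + 1

def classify(k, j, i):
    # first coordinates of S_j(d_i(m)) over all vertices m of Delta[k-1]
    firsts = {S(j, k)(face(i, k)(m))[0] for m in range(k)}
    if firsts == {0}:
        return 'low'
    if firsts == {1}:
        return 'upper'
    return 'vertical'

for k in range(1, 8):
    for j in range(1, k + 1):
        for i in range(k + 1):
            expected = ('low' if (j, i) == (k, k) else
                        'upper' if (j, i) == (1, 0) else 'vertical')
            assert classify(k, j, i) == expected
```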
\begin{definition}\label{dfn:bemol exit index of face}
Let $k\geq1$. For $\partial_i$ and $\mathcal{S}_j$ as in \Cref{dfn:vertical bottom top}, and for $\sigma_i$ a degeneracy, we write
\begin{align*}
\text{\(
\flat^k_{j,i}=\flat_{j,i}\in[k-1]
\)
(resp.\ \(\sharp^k_{j,i}=\sharp_{j,i}\in[k]\))}
\end{align*}
for the smallest number whose image under
\begin{align*}
\text{$\mathcal{S}_j\partial_i\colon[k-1]\rightarrow[1]\times[k-1]$ (resp.\ under $\mathcal{S}_j\sigma_i\colon[k+1]\rightarrow[1]\times[k-1]$)}
\end{align*}
has first coordinate $1$. We leave $\flat^k_{k,k}$ undefined. \end{definition}
For instance, for $k=5$, $j=2$, $i\geq2$, we have $\flat=2$, but for $i<2$ (with $k$, $j$ unchanged), we have $\flat=1$; in general $\flat\in\{j,j-1\}$, depending on the relative positions of $i$ and $j$ in $\{0,\dots,k\}$. We will determine $\flat$ and $\sharp$ explicitly in the proof after \Cref{dfn:exit of span}: see \eqref{eq:formula for flat} and \eqref{eq:formula for sharp}.
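The numbers $\flat_{j,i}$ can be computed directly from \Cref{dfn:bemol exit index of face}; the sketch below (ours, not part of the text) reproduces the examples just given and the containment $\flat\in\{j,j-1\}$:

```python
def S(j, k):
    # exit shuffle S_j^k : [k] -> [1] x [k-1] on vertices
    return lambda m: (0, m) if m < j else (1, m - 1)

def face(i, k):
    # coface map d_i : [k-1] -> [k]
    return lambda m: m if m < i else m + 1

def flat(k, j, i):
    # smallest m in [k-1] whose image under S_j . d_i has first coordinate 1
    s, d = S(j, k), face(i, k)
    return next(m for m in range(k) if s(d(m))[0] == 1)

assert flat(5, 2, 2) == 2 and flat(5, 2, 4) == 2   # i >= j gives flat = j
assert flat(5, 2, 0) == 1 and flat(5, 2, 1) == 1   # i <  j gives flat = j - 1
assert all(flat(k, j, i) in (j, j - 1)
           for k in range(2, 8) for j in range(1, k + 1)
           for i in range(k + 1) if (j, i) != (k, k))
```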
\begin{lemma}\label{lem:meaning of exit index}\
\begin{itemize}
\item Let $k\geq2$ and assume $(j,i)\neq(k,k)$. The composition
\begin{equation*}
\begin{tikzcd}
\Delta[{1}]\times\Delta[{k-2}] \ar[r,"{\mathcal{C}_{\flat}}"] & \Delta[{k-1}]\ar[r,"{\partial_i}"] & \Delta[{k}]\ar[r,"{\mathcal{S}_j}"] & \Delta[{1}]\times\Delta[{k-1}],
\end{tikzcd}
\end{equation*}
where $\flat=\flat^k_{j,i}$, preserves the first coordinate.
\item The composition
\begin{equation*}
\begin{tikzcd}
\Delta[{1}]\times\Delta[{k}] \ar[r,"{\mathcal{C}_{\sharp}}"] & \Delta[{k+1}]\ar[r,"{\sigma_i}"] & \Delta[{k}]\ar[r,"{\mathcal{S}_j}"] & \Delta[{1}]\times\Delta[{k-1}],
\end{tikzcd}
\end{equation*}
where $\sharp=\sharp^k_{j,i}$, preserves the first coordinate. \end{itemize} \end{lemma} \begin{proof}
This is a direct check. \end{proof}
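The direct check can also be delegated to a machine. The sketch below (ours) verifies both bullet points over small ranges of $k$; in the first check we skip $(j,i)=(1,0)$, where $\flat=0$ and $\mathcal{C}_\flat$ is not defined (that face is upper rather than vertical, so the case does not arise in \Cref{lem:two classes}):

```python
def S(j, k):
    return lambda m: (0, m) if m < j else (1, m - 1)    # S_j^k : [k] -> [1] x [k-1]

def C(j, k):
    # chosen left inverse C_j^k : [1] x [k-1] -> [k]
    return lambda e, m: (m if m < j else j - 1) if e == 0 else (j if m < j else m + 1)

def face(i, k):
    return lambda m: m if m < i else m + 1              # d_i : [k-1] -> [k]

def degen(i, k):
    return lambda m: m if m <= i else m - 1             # s_i : [k+1] -> [k]

def flat(k, j, i):
    return next(m for m in range(k) if S(j, k)(face(i, k)(m))[0] == 1)

def sharp(k, j, i):
    return next(m for m in range(k + 2) if S(j, k)(degen(i, k)(m))[0] == 1)

# first bullet: S_j . d_i . C_flat preserves the first coordinate
for k in range(2, 7):
    for j in range(1, k + 1):
        for i in range(k + 1):
            if (j, i) in ((k, k), (1, 0)):
                continue
            f = flat(k, j, i)
            for e in (0, 1):
                for m in range(k - 1):
                    assert S(j, k)(face(i, k)(C(f, k - 1)(e, m)))[0] == e

# second bullet: S_j . s_i . C_sharp preserves the first coordinate
for k in range(1, 6):
    for j in range(1, k + 1):
        for i in range(k + 1):
            sh = sharp(k, j, i)
            for e in (0, 1):
                for m in range(k + 1):
                    assert S(j, k)(degen(i, k)(C(sh, k + 1)(e, m)))[0] == e
```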
\begin{lemma}\label{lem:two classes}
Let $k\geq2$. If $(\gamma,j)\in\Phat_{k-1}$ and $d_i(\gamma)$ is vertical, then
\[
d_i(\gamma,j)\coloneqq \left(d_i\gamma,\flat^{k}_{j,i}\right)\in\Phat_{k-2}.
\]
\end{lemma} To illustrate, for $k=3$, \begin{equation}\label{eq:figure 1 vertical face}
{\small
\begin{tikzcd}
& {\bullet}\ar[dr,dash] & \\
\bullet\ar[ur,dash] & & {\color{blue}\bullet}\\
& {\color{blue}\bullet}\ar[dr,color=blue]\ar[uu,dash]\ar[from=ul, to=ur,dash,crossing over]\ar[ur,color=blue] & \\
{\bullet}\ar[rr,dash]\ar[ur,dash]\ar[uu,dash] & & {\color{blue}\bullet}\ar[uu,color=blue]
\end{tikzcd}
} \end{equation} is a vertical face of exit index $\flat=2=j-1$, where $(\gamma,3)$ itself, the `lower right' tetrahedron, is omitted. Similarly, \begin{equation}\label{eq:figure 2 vertical face}
{\small \begin{tikzcd}[column sep=normal, row sep=normal]
& {\color{blue}\bullet}\ar[dr,color=blue] & \\
\bullet\ar[ur,dash] & & {\color{blue}\bullet}\\
& \bullet\ar[dr,dash]\ar[uu,dash]\ar[from=dl,to=ur,crossing over,color=blue,bend right=25]\ar[from=dl,to=uu,crossing over,color=blue,bend right=10]\ar[from=ul, to=ur,dash,crossing over] & \\
{\color{blue}\bullet}\ar[rr,dash]\ar[ur,dash]\ar[uu,dash] & & \bullet\ar[uu,dash]
\end{tikzcd}
} \end{equation} is a vertical face of index $\flat=1=j$, where $(\gamma,1)$ is the upper left tetrahedron.\footnote{In these pictures, the boundary simplices are oriented clockwise.} \begin{proof}[Proof of \Cref{lem:two classes}]
It suffices to consider the diagram
\begin{equation}\label{eq:diagram of a face}
\begin{tikzcd}
\Delta[{k-1}] \ar[ddrr,hook,"{\mathcal{S}_\flat}"', bend right=10] \ar[r,hook,"{\partial_i}"] & \Delta[{k}] \ar[dr,hook,"{\mathcal{S}_j}"', bend right=10] \ar[rr,"{\gamma}"] & & \mathscr{N}\\
&& \Delta[{1}]\times\Delta[{k-1}] \ar[ul,two heads,"{\mathcal{C}_j}"',bend right=10] \ar[ur,dashed,"{\Gamma}"] & \\
&& \Delta[1]\times\Delta[{k-2}]\ar[uull,two heads,"{\mathcal{C}_\flat}",bend right=10] \ar[u,dashed,"{d'}"'] \ar[uur,dashed,"{\Gamma'}"',bend right] &
\end{tikzcd}
,
\end{equation}
which commutes by construction. \Cref{lem:meaning of exit index} implies in particular that the restriction of $d'=\mathcal{S}_j\partial_i\mathcal{C}_\flat$ to $\{0\}\times\Delta[{k-2}]$ factors through $\{0\}\times\Delta[{k-1}]$, which implies that $\Gamma'=\Gamma d'$ lifts to $\mathscr{P}_{k-2}$, as desired. Note that the case $j=i=k$ is precluded by verticality.
\end{proof}
\begin{remark}\label{rmk:who descends in general}
\Cref{lem:two classes} does not promote to an if-and-only-if statement. Low faces also descend to $\Phat_{k-2}$, but in a different way. Upper faces may or may not. These facts will play no role below. \end{remark}
We close this section by noting the completely analogous fact for degeneracies.
\begin{lemma}\label{lem:degenerate exit} Let $k\geq1$.
If $(\gamma,j)\in\Phat_{k-1}$, then
\(
s_i(\gamma,j)\coloneqq\left(s_i\gamma,\sharp^k_{j,i}\right)\in\Phat_{k}.
\) \end{lemma}
\section{Exit paths}\label{sec:exit paths}
\begin{remark}\label{rmk:iota cofibration}
If $\iota\colon\mathscr{L}\hookrightarrow\mathscr{N}$ is a cofibration, then an exit path $(\gamma,j)\in\Phat_{k-1}$ determines a canonical $(k-1)$-simplex $\Delta[{k-1}]\rightarrow\mathscr{L}$ of $\mathscr{L}$: namely (recall \Cref{dfn:induced exit paths}), the restriction of $\Gamma=\gamma\circ\mathcal{C}_j$ along $\{0\}\times\Delta[{k-1}]\hookrightarrow\Delta[{1}]\times\Delta[{k-1}]$ then factors \emph{uniquely} through $\mathscr{L}$. \end{remark}
We are now ready to give the main construction of this paper.
\begin{definition}\label{dfn:exit of span}
Let a span \[\mathfrak{S}=\left(\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xhookrightarrow{\iota}\mathscr{N}\right)\] of simplicial sets be given, where $\iota$ is a cofibration. We define a new simplicial set, $\EEx=\EEx(\mathfrak{S})$, as follows:
\begin{itemize}
\item $\EEx_0=\mathscr{M}_0\amalg\mathscr{N}_0$.
\item
\(
\EEx_k=\mathscr{M}_k\amalg \Phat_{k-1} \amalg\mathscr{N}_k
\)
for $k\geq1$.
\item Face and degeneracy maps restricted to $\mathscr{M}_k$ and $\mathscr{N}_k$ are those of $\mathscr{M}$ and $\mathscr{N}$.
\item For $k=1$ and $(\gamma,1)\in\Phat_0\subset\mathscr{N}_1$, we set\footnote{(noting $\mathcal{S}=\id$, $\mathcal{C}=\id$ if $k=1$ (\Cref{constr:exit shuffles}), and using \Cref{rmk:iota cofibration,rmk:induced source and target maps for non-invertible paths})}
\begin{align*}
d_1(\gamma,1)&=\pi (d_1\gamma)\in\mathscr{M}_0,\\
d_0(\gamma,1)&=\tau (d_0\gamma)\in\mathscr{N}_0.
\end{align*}
\item For $k\geq2$, $(\gamma,j)\in\Phat_{k-1}$, and $d_i$ a face map:
\begin{itemize}
\item if $d_i\gamma$ is vertical,\footnote{(\Cref{dfn:vertical bottom top})} then we set
\(
d_i(\gamma,j)=\left(d_i\gamma,\flat_{j,i}\in\Phat_{k-2}\right).\footnote{(\Cref{lem:two classes})}
\)
\item if $d_i\gamma$ is low, then we set $d_i(\gamma,j)=\pi (d_i\gamma)\in\mathscr{M}_{k-1}$.
\item if $d_i\gamma$ is upper, then we set $d_i(\gamma,j)=\tau (d_i\gamma)\in\mathscr{N}_{k-1}$.
\end{itemize}
\item For $k\geq1$, $(\gamma,j)\in\Phat_{k-1}$, and $s_i$ a degeneracy: \(
s_i(\gamma,j)\coloneqq\left(s_i\gamma,\sharp^k_{j,i}\right)\in\Phat_k
\).\footnote{(\Cref{lem:degenerate exit})}
\end{itemize} \end{definition}
\begin{proof}[Proof that $\EEx$ is a simplicial set]
We verify the simplicial identities. Below, $(\gamma,e)\in\Phat_{k-1}$, and we assume $k\geq2$, or $k\geq3$ where a third index is needed.
\noindent\underline{$d_id_j=d_{j-1}d_i$ for $i<j$:} We start by showing that
\begin{equation}\label{eq:flat identity}
\flat^{k-1}_{\flat^k_{e,j},i}=\flat^{k-1}_{\flat^k_{e,i},j-1}.
\end{equation}
It helps to distinguish the cases
\begin{equation}\label{eq:three cases simplicial}
\text{(1) $e\leq i<j$, (2) $i< e\leq j$, and (3) $i<j<e$.}
\end{equation}
We have (by a direct check)
\begin{equation}\label{eq:formula for flat}
\flat^k_{e,j}=
\begin{cases}
e, & j\geq e\\
e-1, & j<e
\end{cases}
\end{equation}
and thus if (1), then $\mathrm{L}\coloneqq\flat^{k-1}_{\flat^k_{e,j},i}=\flat^{k-1}_{e,i}=e$ and $\mathrm{R}\coloneqq\flat^{k-1}_{\flat^k_{e,i},j-1}=\flat^{k-1}_{e,j-1}=e$.
If (2), then $\mathrm{L}=\flat^{k-1}_{e,i}=e-1$ and $\mathrm{R}=\flat^{k-1}_{e-1,j-1}=e-1$.
Finally, if (3), then $\mathrm{L}=\flat^{k-1}_{e-1,i}=e-2$ and $\mathrm{R}=\flat^{k-1}_{e-1,j-1}=e-2$. Note that in case (2) we have $e\geq1$, and in case (3) we have $e\geq2$, so that the expressions make sense. This finishes the verification when all faces of $(\gamma,e)$ involved are vertical. Otherwise, \Cref{lem:meaning of exit index} and Diagram \eqref{eq:diagram of a face} imply the statement; whenever the case excluded in \Cref{lem:meaning of exit index} occurs, the face in question is low. We give this argument once here and do not repeat it in the verification of the other simplicial identities below: consider the diagram
\begin{equation}\label{eq:simplicial composition diagram}
\begin{tikzcd}[row sep=large ]
\Delta[{k-2}]\ar[d,hook,bend right,"{\mathcal{S}_{\flat'}}"']\ar[r,hook,"{\partial_i}"] & \Delta[{k-1}]\ar[d,hook,bend right,"{\mathcal{S}_{\flat}}"']\ar[r,hook,"{\partial_j}"] & \Delta[{k}]\ar[d,hook,bend right,"{\mathcal{S}_{e}}"'] \ar[r,"{\gamma}"] & \mathscr{N}\\
\Delta[{1}]\times\Delta[{k-3}]\ar[u,two heads,"{\mathcal{C}_{\flat'}}"',bend right] \ar[r,dashed] & \Delta[{1}]\times\Delta[{k-2}]\ar[u,two heads,"{\mathcal{C}_{\flat}}"',bend right]\ar[r,dashed] & \Delta[{1}]\times\Delta[{k-1}]\ar[u,two heads,"{\mathcal{C}_{e}}"',bend right]\ar[ur,dashed]
\end{tikzcd}
.
\end{equation}
Without loss of generality, say $d_i(d_j(\gamma))=(\partial_j\partial_i)^*\gamma$ is low, so we need to show that so is $d_{j-1}d_i(\gamma)$. That $\mathcal{S}_{\flat}\partial_{i}$ factors through $\{0\}\times\Delta[{k-2}]$ is equivalent, by \Cref{lem:meaning of exit index}, to $\mathcal{S}_e\partial_j\mathcal{C}_{\flat}\mathcal{S}_{\flat}\partial_{i}$ factoring through it. Now, $\mathcal{S}_e\partial_j\mathcal{C}_{\flat}\mathcal{S}_{\flat}\partial_{i}=\mathcal{S}_e\partial_j\partial_i$ by the construction of $\mathcal{C}_\flat$, and similarly $\mathcal{S}_e\partial_j\partial_i=\mathcal{S}_e\partial_j\partial_i\mathcal{C}_{\flat'}\mathcal{S}_{\flat'}$. Together with the same calculation for $\partial_i$ and $\partial_j$ replaced respectively by $\partial_{j-1}$ and $\partial_i$ in Diagram \eqref{eq:simplicial composition diagram}, we see that
\begin{equation}\label{eq:simplicial factorisation check}
\mathcal{S}_e\partial_j\mathcal{C}_{\flat}\mathcal{S}_{\flat}\partial_{i}=\mathcal{S}_e\partial_j\partial_i\mathcal{C}_{\flat'}\mathcal{S}_{\flat'} \quad \text{and} \quad \mathcal{S}_{e}\partial_i\mathcal{C}_{\flat}\mathcal{S}_{\flat}\partial_{j-1}=\mathcal{S}_{e}\partial_i\partial_{j-1}\mathcal{C}_{\flat'}\mathcal{S}_{\flat'}.
\end{equation}
The indices $\flat^{(')}$ in the two equations are a priori \emph{not} the same (as they are calculated for different pairs of indices themselves), but we just showed above in \Cref{eq:flat identity} that the primed flats on the right hand sides do coincide. Combined with the same simplicial identity for $\mathscr{N}$, this means that the right hand sides in \eqref{eq:simplicial factorisation check} agree, which implies the statement.
\noindent\underline{$d_is_j=s_{j-1}d_i$ for $i<j$:} Similarly, we first show
\begin{equation}\label{eq:second simplicial identity}
\mathrm{L}=\flat^{k-1}_{\sharp^{k}_{e,j},i}=\sharp^{k-1}_{\flat^{k}_{e,i},j-1}=\mathrm{R},
\end{equation}
using the cases (1)--(3) from \eqref{eq:three cases simplicial}.
Note that
\begin{equation}\label{eq:formula for sharp}
\sharp^{k}_{e,j}=
\begin{cases}
e, & j\geq e\\
e+1, & j<e
\end{cases}
\end{equation}
which, together with \eqref{eq:formula for flat}, implies that if (1), then $\mathrm{L}=\flat^{k-1}_{e,i}=e$ and $\mathrm{R}=\sharp^{k-1}_{e,j-1}=e$.
If (2), then $\mathrm{L}=\flat^{k-1}_{e,i}=e-1$ and $\mathrm{R}=\sharp^{k-1}_{e-1,j-1}=e-1$.
Finally, if (3), then $\mathrm{L}=\flat^{k-1}_{e+1,i}=e$ and $\mathrm{R}=\sharp^{k-1}_{e-1,j-1}=e$. Now, \Cref{lem:meaning of exit index} and Diagrams \eqref{eq:diagram of a face} and \eqref{eq:simplicial composition diagram} (mutatis mutandis; e.g., using \eqref{eq:second simplicial identity} instead of \eqref{eq:flat identity} for \eqref{eq:simplicial factorisation check}) again finish the verification, analogously to the above. We no longer mention this below.
\noindent\underline{$d_is_j=\id$ for $i=j$ or $i=j+1$:} We show
\begin{equation*}
\mathrm{L}=
\flat^{k-1}_{\sharp^k_{e,j},i}=e.
\end{equation*}
If $e\leq j$, then $\mathrm{L}=\flat^{k-1}_{e,i}=e$. If $i=j$ and $j<e$, then $\mathrm{L}=\flat^{k-1}_{e+1,i}=e$. If $i=j+1$ and $e\geq i$, then $\mathrm{L}=\flat^{k-1}_{e+1,i}=e$. This covers all cases.
\noindent\underline{$d_is_j=s_jd_{i-1}$ for $i>j+1$:} We show
\begin{equation*}
\mathrm{L}=\flat^{k-1}_{\sharp^{k}_{e,j},i}=\sharp^{k-1}_{\flat^{k}_{e,i-1},j}=\mathrm{R}.
\end{equation*}
If $e\leq j$, then $\mathrm{L}=\flat^{k-1}_{e,i}=e=\sharp^{k-1}_{e,j}=\mathrm{R}$. If $j+1\leq e < i-1$, then $\mathrm{L}=\flat^{k-1}_{e+1,i}=e+1=\sharp^{k-1}_{e,j}=\mathrm{R}$. If $e=i-1$, then $\mathrm{L}=\flat^{k-1}_{e+1,i}=e+1=\sharp^{k-1}_{e,j}=\mathrm{R}$. If $e=i$, then $\mathrm{L}=\flat^{k-1}_{e+1,i}=e=\sharp^{k-1}_{e-1,j}=\mathrm{R}$. Finally, if $e>i$, both sides are again equal to $e$.
\noindent\underline{$s_is_j=s_{j+1}s_i$ for $i\leq j$:} Finally, we show
\begin{equation*}
\mathrm{L}=\sharp^{k-1}_{\sharp^{k}_{e,j},i}=\sharp^{k-1}_{\sharp^{k}_{e,i},j+1}=\mathrm{R}.
\end{equation*}
Similarly to the first identity above, it helps to distinguish the cases
\begin{equation*}
\text{(1) $e\leq i\leq j$, (2) $i<e\leq j$, and (3) $i\leq j<e$.}
\end{equation*}
If (1), then $\mathrm{L}=\sharp^{k-1}_{e,i}=e=\sharp^{k-1}_{e,j+1}=\mathrm{R}$. If (2), then $\mathrm{L}=\sharp^{k-1}_{e,i}=e+1=\sharp^{k-1}_{e+1,j+1}=\mathrm{R}$. If (3), then $\mathrm{L}=\sharp^{k-1}_{e+1,i}=e+2=\sharp^{k-1}_{e+1,j+1}=\mathrm{R}$. \end{proof}
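The index bookkeeping in the proof above is easy to replay mechanically. The following sketch (not part of the paper; it simply transcribes the closed forms \eqref{eq:formula for flat} and \eqref{eq:formula for sharp} into plain Python) checks all five identities over a window of small indices; the superscripts $k$ only bound the index ranges and drop out of the closed forms.

```python
# Sanity check of the exit-index identities: the closed forms for
# \flat_{e,j} and \sharp_{e,j}, verified against all five
# simplicial identities over a window of small indices.

def flat(e, j):   # exit index of the j-th face (vertical case)
    return e if j >= e else e - 1

def sharp(e, j):  # exit index of the j-th degeneracy
    return e if j >= e else e + 1

K = 8
for e in range(1, K + 1):
    for i in range(K + 1):
        for j in range(K + 1):
            if i < j:            # d_i d_j = d_{j-1} d_i
                assert flat(flat(e, j), i) == flat(flat(e, i), j - 1)
            if i < j:            # d_i s_j = s_{j-1} d_i
                assert flat(sharp(e, j), i) == sharp(flat(e, i), j - 1)
            if i in (j, j + 1):  # d_i s_j = id
                assert flat(sharp(e, j), i) == e
            if i > j + 1:        # d_i s_j = s_j d_{i-1}
                assert flat(sharp(e, j), i) == sharp(flat(e, i - 1), j)
            if i <= j:           # s_i s_j = s_{j+1} s_i
                assert sharp(sharp(e, j), i) == sharp(sharp(e, i), j + 1)
print("all index identities verified")
```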
\begin{theorem}\label{thm:exit path category of linked space}
If $\mathscr{M},\mathscr{L},\mathscr{N}$ are $\infty$-categories, $\pi\colon\mathscr{L}\rightarrow\mathscr{M}$ is a right fibration, and $\iota\colon\mathscr{L}\rightarrow\mathscr{N}$ is a cofibration, then $\EEx\left(\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xrightarrow{\iota}\mathscr{N}\right)$ is an $\infty$-category. \end{theorem}
\begin{proof} We directly check the weak Kan property, first giving a verbose proof for inner $2$- and $3$-horns before the general case, which is analogous. The main idea is that given a horn with non-invertible faces, we can lift those in $\mathscr{M}$ to $\mathscr{N}$ along $\pi$ and take a filler therein, which, coupled with an appropriate exit index, lifts the original horn. We will sometimes not distinguish $\mathscr{L}$ from its image $\iota(\mathscr{L})$ in notation.
\noindent\underline{$2$-horns.} Let $h\colon \Lambda^2_1\to \EEx$ be given. The only two non-trivial cases occur when at least one of the edges \[
h|_{ij}\colon \{i<j\}=\Delta[1]\hookrightarrow\Lambda^2_1\xrightarrow{h} \EEx \] that constitute the horn lies in $\Phat_0$. \begin{enumerate}
\item First, say \[
h|_{01}=(h_{01},1)\in\Phat_0. \]
Then we necessarily have $h|_{12}\in \mathscr{N}_1$ as by construction the endpoint $d_{0}(h|_{01})=\tau(d_{0}h_{01})$, which must be the initial point of $h|_{12}$, lies in $\mathscr{N}_0$. Now the horn \[
h_{01}\cup h|_{12}\colon \Lambda^2_1\to \mathscr{N} \] has a filler $H\colon \Delta[2]\to \mathscr{N}$. But then $(H,1)\in\Phat_1$ fills $h$: the composition \( \mathcal{S}_{1}\circ \partial_0 \colon[1]\hookrightarrow[2]\hookrightarrow[1]\times[1] \) is \( 0\mapsto 1\mapsto (1,0)\), \(1\mapsto 2\mapsto (1,1) \) which means $d_{0}H$ is upper, so that \[
d_0(H,1)=d_{0}H=h|_{12}\in \mathscr{N}_1. \] Similarly, \( \mathcal{S}_1\circ\partial_2(0)=\mathcal{S}_1(0)=(0,0)\), \( \mathcal{S}_1\circ\partial_2(1)=\mathcal{S}_1(1)=(1,0) \) so that $d_2H$ is vertical, yielding \[ d_2(H,1)=(d_2H,\flat_{1,2})=(h_{01},1)\in\Phat_0 \] using \eqref{eq:formula for flat}. This shows that $(H,1)$ is a filler for $h$. \item Let us assume, more interestingly, that \[
h|_{12}=(h_{12},1)\in\Phat_0. \]
Necessarily, $h|_{01}\in\mathscr{M}_1$, as the initial point $d_{1}(h|_{12})=\pi(d_{1}h_{12})$, which must coincide with the endpoint of $h|_{01}$, is by construction in $\mathscr{M}_0$. The induced lifting problem \begin{equation*}
\begin{tikzcd}
\{1\}=\Lambda^{1}_{0}\ar[d,hook]\ar[r,"{d_{1}h_{12}}"] & \mathscr{L}\ar[d,"{\pi}"]\\
\Delta[1]\ar[r,"{h|_{01}}"']\ar[ur,dotted,"{H_{01}}"description] & \mathscr{M}
\end{tikzcd} \end{equation*} admits by assumption a solution $H_{01}$. We thus have an induced horn \[
\iota(H_{01})\cup h_{12}\colon\Lambda^2_{1}\to \mathscr{N} \] with, say, $H\in\mathscr{N}_2$ a filler. But now $(H,2)\in\Phat_1$ fills $h$: we have that $d_{0}H$ is vertical as \(
\mathcal{S}_2\circ\partial_0\colon [1]\to[1]\times[1]\) sends \(0\mapsto (0,1) \); \(1\mapsto (1,1),\) so \[
d_{0}(H,2)=(d_{0}H,\flat_{2,0})=(h_{12},1)\in\Phat_0. \] Similarly, \(
\mathcal{S}_{2}\circ \partial_2\colon 0\mapsto (0,0) \); \(1\mapsto (0,1)\) means $d_{2}H$ is low and so \[
d_{2}(H,2)=\pi(d_{2}H)=\pi(H_{01})=h|_{01}\in \mathscr{M}_1. \] This shows that $(H,2)$ is a filler for $h$. \end{enumerate}
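The $2$-horn computations above can be replayed by a short script (not part of the paper). It assumes our reading of \Cref{constr:exit shuffles}, namely $\mathcal{S}_e(j)=(0,j)$ for $j<e$ and $(1,j-1)$ otherwise, and classifies a face by the rows it hits: low if entirely in row $0$, upper if entirely in row $1$, vertical otherwise, with exit index the number of row-$0$ vertices.

```python
# Face classification via exit shuffles (assumes the reading
# S_e(j) = (0, j) for j < e and (1, j - 1) otherwise).

def shuffle(e, k):
    return [(0, j) if j < e else (1, j - 1) for j in range(k + 1)]

def face_type(e, k, i):
    rows = [r for j, (r, _) in enumerate(shuffle(e, k)) if j != i]
    if all(r == 0 for r in rows):
        return "low"
    if all(r == 1 for r in rows):
        return "upper"
    return ("vertical", rows.count(0))  # exit index of the face

# The 2-horn checks above: for (H, 1), d_0 H is upper and d_2 H is
# vertical with exit index 1; for (H, 2), d_0 H is vertical with
# exit index 1 and d_2 H is low.
assert face_type(1, 2, 0) == "upper"
assert face_type(1, 2, 2) == ("vertical", 1)
assert face_type(2, 2, 0) == ("vertical", 1)
assert face_type(2, 2, 2) == "low"

# Vertical faces have exit index \flat_{e,i} (cf. the formula for
# \flat in the proof of the simplicial identities).
for k in range(2, 8):
    for e in range(1, k + 1):
        for i in range(k + 1):
            t = face_type(e, k, i)
            if isinstance(t, tuple):
                assert t[1] == (e if i >= e else e - 1)
```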
\noindent\underline{$3$-horns.} Let first \[h\colon \Lambda^3_1\to\EEx\] be given, which misses the $023$-face. The non-trivial cases to check occur when $h$ is not wholly contained within $\mathscr{M}$ or $\mathscr{N}$. Suppose \begin{enumerate}
\item that the $013$-face
\[
h|_{013}\colon \Delta[2]=\{0<1<3\}\hookrightarrow\Lambda^3_1\to\EEx\footnote{We will continue using slightly abusive notation like $\Delta[2]=[0<1<3]$ which is similar to $\Delta[2]=\Delta\{0<1<3\}$ or $\Delta^{\{0<1<3\}}$ since it is suggestive, commonplace, and should cause no confusion.}
\]
is in $\mathscr{M}_2$. Then if any other non-degenerate sub-$2$-simplex of $h$ is also low,\footnote{We call a sub-simplex $(\Delta[\ell]\hookrightarrow\Lambda^{k}_{i}\to\EEx)\in\EEx_{\ell}$, $1\leq\ell<k$, of a horn \emph{low} if it is in $\mathscr{M}_{\ell}$, \emph{vertical} if in $\Phat_{\ell-1}$, and \emph{upper} if in $\mathscr{N}_{\ell}$. Similarly when $\ell=0$ with \emph{low/upper}.} so must all others, which would yield a non-case as $h$ would lie entirely within $\mathscr{M}$. But since no other sub-$2$-simplex of $h$ can be upper while $h|_{013}$ is low, we may assume that \emph{all other} non-degenerate sub-$2$-simplices of $h$ are vertical. Now, $h|_{123}=(h_{123},e')\in\Phat_{1}$ must be vertical, with its $13$-edge, common with the assumed low face $h|_{013}$, itself necessarily low. But then the vertex $h|_{2}\in\mathscr{N}_0$ must be upper, which is absurd: there is no exit index $e'\in\{1,2\}$ such that the exit shuffle $\mathcal{S}_{e'}\colon\{1<2<3\}\cong[2]\to[1]\times[1]$ sends $1$ and $3$ to $\{0\}\times[1]$ while simultaneously sending $2$ to $\{1\}\times[1]$, since $2<3$ implies $\mathcal{S}_{e'}(2)\leq\mathcal{S}_{e'}(3)$, so an upper vertex cannot precede a low one. We conclude that $h|_{013}$ cannot be in $\mathscr{M}_2$ if $h$ is not already wholly within $\mathscr{M}$. \emph{So, this is a non-case.}
\item that the $123$-face
\[
h|_{123}\colon \Delta[2]=\{1<2<3\}\hookrightarrow\Lambda^3_1\to\EEx
\]
is in $\mathscr{M}_2$. All other $2$-faces being vertical similarly implies that the vertex $h|_{0}$ is upper, which gives a contradiction in the same way. We conclude that \emph{this is also a non-case.}
\item that the $012$-face
\[
h|_{012}\colon\Delta[2]=\{0<1<2\}\hookrightarrow\Lambda^3_{1}\to\EEx
\]
is in $\mathscr{M}_2$. We may similarly assume all other $2$-faces are vertical, and so in particular $h|_{3}$ upper, which here does not give a contradiction as $3\in \Lambda^3_{1}\subset\Delta[3]$ is final. We obtain that $h|_{013}=(h_{013},2),h|_{123}=(h_{123},2)\in\Phat_{1}$ both have exit index $2$, as they have a single low edge each (the $01$- and $12$-edges, respectively). Now, we have an induced horn in $\mathscr{L}$ given by $h|_{01}\cup h|_{12}\colon\Lambda^2_{1}\to\mathscr{L},$ which constitutes the lifting problem
\begin{equation*}
\begin{tikzcd}[column sep=large]
\Lambda^2_1\ar[d,hook]\ar[r,"{h_{01}\cup h_{12}}"] & \mathscr{L}\ar[d,"{\pi}"] \\
\Delta[2]\ar[ur,dotted,"{H_{012}}"description]\ar[r,"{h|_{012}}"'] & \mathscr{M}
\end{tikzcd}
\end{equation*}
which admits a solution $H_{012}$. This yields an induced horn
\[H_{012}\cup h_{013}\cup h_{123}\colon\Lambda^3_{1}\to\mathscr{N}\]
which admits a filler $H$. Now, $(H,3)\in\Phat_{2}$ fills $h$: the face $d_3H$ is low since $\mathcal{S}_3\circ\partial_3\colon[2]\hookrightarrow[3]\hookrightarrow[1]\times[2]$ is $i\mapsto (0,i)$, which implies
\[
d_{3}(H,3)=\pi(d_3H)=\pi(H_{012})=h|_{012},
\]
as desired. The remaining faces $d_{i}H$, $i\neq 3$, are vertical since $\mathcal{S}_{3}\circ \partial_i$ hits $\{0\}\times[2]$ at the vertices below $3$ and $\{1\}\times[2]$ at the vertex $3$. By construction we have
\[
d_{0}(H,3)=(d_{0}H,\flat_{3,0})=(h_{123},2)=h|_{123}
\]
and similarly $d_{2}(H,3)=h|_{013}$, using $\flat_{3,i}=2$, as desired; $d_{1}(H,3)$ fills the missing $023$-face.
\item \label{RVJG15G} that $h$ has an upper $2$-face. Similarly to the above, we may argue that it cannot have a low $2$-face, and if it had any \emph{other} upper $2$-face, it would be contained entirely within $\mathscr{N}$, where the lifting problem is trivial, and so we may assume that the remaining $2$-faces are all vertical. Again similarly to the above, the only case that is not a non-case is when the $123$-face
\[
h|_{123}\colon\Delta[2]=\{1<2<3\}\hookrightarrow\Lambda^3_1\to\EEx
\]
is upper, as the other cases contradict the partial order on $[1]\times[2]$; in particular, the vertex $h|_{0}$ is low. Moreover, any vertical $2$-face of $h$ must have exit index $1$, for otherwise it would have a low edge, which is impossible as there is only a single low vertex. Now, $h$ is in this case given by a horn $h_{012}\cup h_{013}\cup h_{123}=\widetilde{h}\colon\Lambda^3_1\to\mathscr{N}$ with $\widetilde{h}|_{0}$ in $\iota(\mathscr{L})$. Taking a filler $H$ of $\widetilde{h}$ in $\mathscr{N}$, we see that $(H,1)$ fills $h$: the faces $d_{1/2/3}H$ are vertical since $\mathcal{S}_1\colon[3]\to[1]\times[2]$ sends $0\mapsto(0,0)$; $1\leq i\mapsto (1,i-1)$, which means \[d_{i}(H,1)=(d_{i}H,\flat_{1,i})=(d_{i}H,1),\] for $i\geq 1$, as desired. On the other hand, $\mathcal{S}_1\circ \partial_0$ has image inside $\{1\}\times[2]$, so $d_{0}H$ is upper. We obtain $d_{0}(H,1)=\tau(d_{0}H)=h|_{123}\in\mathscr{N}_2$, as desired.
\item \label{CO7D31L} finally that all $2$-faces of $h$ are vertical. Let us rule out a few possibilities by pigeon-holing arguments: the presence of three (out of the four in total) low resp.\ upper vertices implies that there is a low resp.\ upper face, which means we must have two low and two upper vertices each. Now, $h|_{0}$ and $h|_{1}$ must be low, and $h|_{2}$ and $h|_{3}$ upper. For if $h|_{0}$ were upper and $h|_{i}$ ($i\geq 1$) low, the edge $h|_{0i}$ would be a path $\mathscr{N}\to\mathscr{M}$, which is excluded by construction, and similarly if $h|_{1}$ were upper, taking $i\geq 2$. Therefore, the $2$-faces
\[
h|_{012},\ h|_{013},
\]
namely those that contain both $h|_{0}$ and $h|_{1}$, have exit index $2$, while those which contain only one low vertex have index $1$. Of the latter type there is only one:
\[
h|_{123}.
\]
Although $h|_{023}$ is missing, it would have had to have index $1$ by the same argument. Therefore, adopting the notation from Case \eqref{RVJG15G}, we may take a lift $H\colon\Delta[3]\to\mathscr{N}$ of $\widetilde{h}\colon\Lambda^3_1\to\mathscr{N}$ such that the restriction to $\{0\}\times\Delta[2]$ of the composition $H\circ\mathcal{C}_{2}\colon\Delta[1]\times\Delta[2]\to\mathscr{N}$ still factors through $\mathscr{L}$, independently of the choice of $H$. Indeed, $(H,2)\in\Phat_{2}$ fills $h$: since $\flat_{2,2/3}=2$ and $\flat_{2,0/1}=1$, we have $d_{2/3}(H,2)=h|_{013/012}$, $d_{0}(H,2)=h|_{123}$. Note also that the exit index being $2$ excludes low or upper faces in this dimension.
\end{enumerate} Let now \[h\colon\Lambda^3_2\to\EEx\] be given, which misses the $013$-face. The non-trivial cases occur when \begin{enumerate}
\item $h$ has a low face. As in the case of the horn $\Lambda^3_1$, we may exclude all cases except the one where the low face is $h|_{012}\in\mathscr{M}_2$ and all other $2$-faces are again necessarily vertical, with sole upper vertex $h|_{3}$. Now, the faces $h|_{123},h|_{023}\in\Phat_{1}$ necessarily have exit index $2$, $h|_{ijk}=(h_{ijk},2)$, and their source edges $h_{12}, h_{02}\in\mathscr{L}_1$ lift two edges of the low face -- that is, we have an intermediate lifting problem of type
\begin{equation*}
\begin{tikzcd}[column sep=large]
\Lambda^2_{2}\ar[d,hook]\ar[r,"{h_{12}\cup h_{02}}"] & \mathscr{L}\ar[d,"{\pi}"] \\
\Delta[2]\ar[ur,dotted,"{H_{012}}"description] \ar[r,"{h|_{012}}"'] & \mathscr{M}
\end{tikzcd}
\end{equation*}
with solution $H_{012}$. (This is the first case where we see that it is not enough for $\pi$ to merely be an inner fibration.) This yields a horn
\[
H_{012}\cup h_{023}\cup h_{123}\colon\Lambda^3_{2}\to\mathscr{N},
\]
which admits a filler $H\in\mathscr{N}_3$. We observe that the restriction to $\{0\}\times\Delta[2]$ of $H \circ \mathcal{C}_{3}\colon \Delta[1]\times\Delta[2]\to\mathscr{N}$, which is precisely $H_{012}$, factors through $\mathscr{L}$ by construction. Indeed, $(H,3)$ fills $h$: $\mathcal{S}_{3}\circ\partial_{3}\colon[2]\to[1]\times[2]$ sends $i\mapsto(0,i)$, so $d_{3}H=H_{012}$ is low:
\[
d_{3}(H,3)=\pi(H_{012})=h|_{012}\in\mathscr{M}_2,
\]
as desired. In contrast, $\mathcal{S}_{3}\circ\partial_{i}$ for any $i<3$ clearly hits both $(0,-)$ and $(1,-)$, and so $d_{0/1/2}H$ are vertical. Using $\flat_{3,i}=2$ for $i<3$, we see, for $i\in\{0,1\}$,
\[
d_{i}(H,3)=(h_{0\dots \widehat{i}\dots 3},2)=h|_{0\dots \widehat{i}\dots 3},
\]
as desired; $d_{2}(H,3)$ fills the missing $013$-face.
\item $h$ has an upper face. Analogously, we can assume that the upper face is $h|_{123}\in\mathscr{N}$, whence the sole low vertex is $h|_{0}$ and all other faces are vertical with exit index necessarily $1$. Exactly as in Case \eqref{RVJG15G} above, we have a horn $\widetilde{h}\colon\Lambda^3_2\to\mathscr{N}$ with $\widetilde{h}|_{0}$ in $\iota(\mathscr{L})$, and with filler, say, $H\in\mathscr{N}_{3}$. We observe that $(H,1)$ again fills $h$. The check is exactly as in that case.
\item all faces of $h$ are vertical. Analogously to Case \eqref{CO7D31L} above,
\[
h|_{012},
\]
necessarily has exit index $2$, and
\[
h|_{123},\ h|_{023}
\]
have exit index $1$. The missing face $h|_{013}$ would have had to have index $2$ by the same argument. We may thus choose a filler $H\in\mathscr{N}_3$ of $\widetilde{h}\colon\Lambda^3_2\to\mathscr{N}$ and analogously observe that $(H,2)\in\Phat_2$ fills $h$. \end{enumerate}
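The exit indices appearing in the two $3$-horn cases can likewise be checked by a short computation (not part of the paper; it assumes the reading $\mathcal{S}_e(j)=(0,j)$ for $j<e$ and $(1,j-1)$ otherwise of \Cref{constr:exit shuffles}):

```python
# Classify the faces of a filler (H, e) for e = 1, 2, 3 and k = 3,
# matching the case analysis for the two varieties of inner 3-horn.

def shuffle(e, k):
    return [(0, j) if j < e else (1, j - 1) for j in range(k + 1)]

def face_type(e, k, i):
    rows = [r for j, (r, _) in enumerate(shuffle(e, k)) if j != i]
    if all(r == 0 for r in rows):
        return "low"
    if all(r == 1 for r in rows):
        return "upper"
    return ("vertical", rows.count(0))  # exit index of the face

# (H, 3): d_3 H is low, the remaining faces vertical with index 2.
assert face_type(3, 3, 3) == "low"
assert all(face_type(3, 3, i) == ("vertical", 2) for i in (0, 1, 2))

# (H, 1): d_0 H is upper, the remaining faces vertical with index 1.
assert face_type(1, 3, 0) == "upper"
assert all(face_type(1, 3, i) == ("vertical", 1) for i in (1, 2, 3))

# (H, 2): all faces vertical, with \flat_{2,0/1} = 1, \flat_{2,2/3} = 2.
assert face_type(2, 3, 0) == ("vertical", 1)
assert face_type(2, 3, 1) == ("vertical", 1)
assert face_type(2, 3, 2) == ("vertical", 2)
assert face_type(2, 3, 3) == ("vertical", 2)
```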
\noindent\underline{Horns of arbitrary dimension.} Let \[
h\colon\Lambda^n_{i}\to\EEx \] be given, with $0<i<n$. We will adopt the notation and results from the cases of inner $2$- and $3$-horns treated above. Suppose \begin{enumerate}
\item $h$ has a low face, which is necessarily $h|_{0\dots n-1}\in\mathscr{M}_{n-1}$; w.l.o.g., $h|_{n}\in\mathscr{N}_0$ is the sole upper vertex, and all other faces are vertical. Let us write $h|_{\widehat{j}}$ for $h|_{0\dots\widehat{j}\dots n}=h\circ \partial_j\colon\Delta[n-1]\to\EEx$ when that makes sense ($j\neq i$). We have that each vertical face
\[
h|_{\widehat{k}}\in\Phat_{n-2}\subset\EEx_{n-1},\ i\neq k< n
\]
has the common $(n-2)$-face $h|_{\widehat{k}\widehat{n}}$ with the low $h|_{\widehat{n}}$, which therefore gives a lift
\[
h_{\widehat{k}\widehat{n}}\in\mathscr{L}_{n-2}
\]
to $\mathscr{L}$ thereof, where we wrote
\[
h|_{\widehat{k}}=(h_{\widehat{k}},e)
\]
and $(h_{\widehat{k}})_{\widehat{n}}=\iota(h_{\widehat{k}\widehat{n}})$. As each $h|_{\widehat{k}}$ itself has a low face, its exit index is necessarily maximal, i.e.,
\[e=n-1.\]
Now, we obtain the intermediate lifting problem
\begin{equation}\label{3900AFH}
\begin{tikzcd}
\Lambda^{n-1}_{i}\ar[d,hook]\ar[r,"{\bigcup_{i\neq k\in[n-1]}h|_{\widehat{k}\widehat{n}}}"] &[5em] \mathscr{L}\ar[d,"{\pi}"]\\
\Delta[n-1]\ar[ur,dotted,"{H_{\widehat{n}}}"description]\ar[r,"{h|_{\widehat{n}}}"'] & \mathscr{M}
\end{tikzcd}
\end{equation}
with solution $H_{\widehat{n}}$. (It is imperative here that $\pi$ be a right fibration and not just an inner one, since $i=n-1$ is allowed.) This yields the horn
\[
\iota(H_{\widehat{n}})\cup\bigcup_{i\neq k\in[n-1]}\iota(h|_{\widehat{k}\widehat{n}})\colon \Lambda^n_{i}\to\mathscr{N}
\]
which has a filler $H\in\mathscr{N}_{n}$.
In fact, $(H,n)$ fills $h$: the restriction of $H\circ \mathcal{C}_{n}\colon\Delta[1]\times\Delta[n-1]\to\mathscr{N}$ to $\{0\}\times\Delta[n-1]$ is $\iota(H_{\widehat{n}})$, which factors through $\mathscr{L}$ by construction. Further, $\mathcal{S}_{n}\circ\partial_n\colon[n-1]\to[1]\times[n-1]$ sends $n>j\mapsto(0,j)$, so $d_{n}H$ is low, whence
\[
d_{n}(H,n)=\pi(H_{\widehat{n}})=h|_{\widehat{n}},
\]
as desired. Further, when $k<n$, we have $\mathcal{S}_n\circ\partial_k$ hitting both $\{0\}\times[n-1]$ and $\{1\}\times[n-1]$, so each $d_{k}H$ is vertical. Using $\flat_{n,k}=n-1$ for $k<n$, we have
\[
d_k(H,n)=(h_{\widehat{k}},n-1)=h|_{\widehat{k}},
\]
also as desired.
\item $h$ has an upper face, which is necessarily $h|_{\widehat{0}}\in\mathscr{N}_{n-1}$; w.l.o.g., $h|_{0}\in\mathscr{M}_0$ is the sole low vertex, and all other faces $h|_{\widehat{k}}=(h_{\widehat{k}},1)\in\Phat_{n-2}$ are vertical with exit index necessarily minimal.
Now, $h$ is given by a horn $\widetilde{h}\colon\Lambda^{n}_{i}\to\mathscr{N}$ with $\widetilde{h}|_{0}\in\iota(\mathscr{L}_0)$. Taking a filler $H$ of $\widetilde{h}$, we see that $(H,1)$ fills $h$: the restriction of $\mathcal{C}_{1}\colon[1]\times[n-1]\to[n]$ to $\{0\}\times[n-1]$ hits only $0$, so $H\circ\mathcal{C}_1$ factors through the mapping cocylinder $\mathscr{P}$ by construction, independently of the choice of filler $H$. Further, $\mathcal{S}_1\colon[n]\to[1]\times[n-1]$ sends only $0$ to $\{0\}\times[n-1]$ while $\mathcal{S}_1\circ\partial_0$ factors through $\{1\}\times[n-1]$. This means $d_{0}H$ is upper, so \[d_{0}(H,1)=h|_{\widehat{0}},\] as desired, and finally \[d_{k}(H,1)=(d_{k}H,\flat_{1,k})=(h_{\widehat{k}},1)=h|_{\widehat{k}}\] for every $k\geq 1$, also as desired.
\item all faces of $h$ are vertical. Then $h|_{0}\in\mathscr{M}_0$ is low and $h|_{n}\in\mathscr{N}_0$ upper, and moreover there must exist an index \(1\leq e\leq n\) such that
\begin{center}
$h|_{j}\in\mathscr{M}_0$ for $j<e$ and $h|_{j}\in\mathscr{N}_0$ for $j\geq e$
\end{center}
(we had $e=2$ for $3$-horns of both varieties discussed above) for otherwise there would exist a pair $0<j<j'<n$ such that $h|_{j}\in\mathscr{N}_{0}$ while $h|_{j'}\in\mathscr{M}_0$, which is absurd since the edge $h|_{jj'}$ would be of type $\mathscr{N}\to\mathscr{M}$. In fact, $e=1$ resp.\ $e=n$ are impossible, since then $h|_{\widehat{0}}$ resp.\ $h|_{\widehat{n}}$ would be low resp.\ upper; so
\[ 1<e<n.\]
(There is no $2$-horn both of whose faces are vertical, so we may assume $n\geq 3$.) Now, we claim that the exit indices of the faces $h|_{\widehat{j}}\in\Phat_{n-2}$, $j\neq i$, are determined by this $e$:
\begin{equation}\label{AFNUJCI}
h|_{\widehat{j}}=\begin{cases}
(h_{\widehat{j}},e), & j\geq e,\\
(h_{\widehat{j}},e-1), & j<e.
\end{cases}
\end{equation}
Indeed, $\mathcal{C}^{n-1}_{\ell}(\{0\}\times[n-2])=\{0,\dots,\ell-1\}$ for any $1\leq \ell\leq n-1$ implies that if $j\geq e$, then $h_{\widehat{j}}\circ\mathcal{C}^{n-1}_{e}$ factors through $\mathscr{P}$, as does $h|_{\widehat{j}}\circ \mathcal{C}^{n-1}_{e-1}$ if $j<e$. Conversely, suppose $h|_{\widehat{j}}$ has index $e'$: $(h|_{\widehat{j}})|_{0,\dots,e-1}$ must be low, which implies, by the definition of $\mathcal{S}_{e'}$, that $e'\geq e$, and since there are no further low vertices, we have $e'\leq e$.
Now, as $h$ induces (or is rather underlain by) a horn $\widetilde{h}\colon \Lambda^{n}_i\to\mathscr{N}$, we may choose a filler $H\in\mathscr{N}_n$ and claim that $(H,e)$ in turn is a filler for $h$. In order to ensure that $H\circ \mathcal{C}_{e}\colon\Delta[1]\times\Delta[n-1]\to\mathscr{N}$ factors through $\mathscr{P}$, it suffices to observe that the missing face $h|_{\widehat{i}}$ cannot be low, for then the choice of filler $H$ does not affect the factorisation property (in that $\widetilde{h}$ needs filling only away from $\iota(\mathscr{L})$). Indeed, the only such case would be $i=n$, but $h$ is inner.
Finally, we check the exit indices of the faces of $H$: since $1<e<n$, no face of $H$ is low or upper, and \eqref{eq:formula for flat} implies $d_{j}(H,e)=h|_{\widehat{j}}$ due to \eqref{AFNUJCI}.
\end{enumerate} \end{proof}
\begin{definition}
We call a span $\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xrightarrow{\iota}\mathscr{N}$ of $\infty$-groupoids resp.\ $\infty$-categories, with $\pi$ a Kan resp.\ right fibration, and $\iota$ a cofibration, a \emph{linked $\infty$-groupoid} or {\it linked space} resp.\ \emph{linked $\infty$-category}, of {\it depth $1$}. We call $\EEx$ its \emph{exit path $\infty$-category}. \end{definition}
Compatibility with \cite[Lemma 3.3.5]{ayala2017stratified} can be seen in our context from the construction of $\EEx$ and from a variant of \Cref{lem:mapping cocylinder}; this will appear in future work. (The definition of the exit path $\infty$-category in \cite{ayala2017stratified} coincides up to equivalence with that in \cite[App.\ A]{luriehigheralgebra} by another result of the former work.)
We finally discuss a few examples that motivated this paper. We keep the discussion very brief as they will be subject of future work aimed at certain applications.
Any Kan (right) fibration $\pi$, or any cofibration $\iota$ alone gives an example with a trivial choice for the other leg: the identity cofibration or the final fibration to the point: \[\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xrightarrow{=}\mathscr{L}\] or \[\ast\leftarrow\mathscr{L}\xrightarrow{\iota}\mathscr{N}.\] Any $\infty$-category $\mathscr{X}$ gives a linked $\infty$-category \[\emptyset\leftarrow\emptyset\to\mathscr{X}\] with $\EEx(\emptyset\leftarrow\emptyset\to\mathscr{X})\simeq\mathscr{X}$. The other trivial construction \[\ast\leftarrow\mathscr{X}\xrightarrow{=}\mathscr{X}\] corresponds to taking the open cone of $\mathscr{X}$ -- literally in the ordinary stratified setting for $\mathscr{X}=\mathrm{Exit}(X)$, recalling that $\mathrm{Exit}(C(X))\simeq\mathrm{Exit}(X)^{\triangleleft}$ -- in that \[\ast\in\EEx\left(\ast\leftarrow\mathscr{X}\xrightarrow{=}\mathscr{X}\right)\simeq \mathscr{X}^{\triangleleft}\] is initial. This equivalence is an immediate consequence of the general result (to appear) that $\Hom_{\EEx}(p,q)$ for $p\in M$ and $q\in N$ is equivalent to the space of paths in $N$ that start in the fibre $\iota(L_{p})$ of $\pi$ at $p$ embedded into $N$ and end at $q$.
\begin{example}[Bordisms]\label{ex:bordisms} Since we only explicitly treated depth $1$, we restrict ourselves to manifolds with boundary, but the higher-depth treatment of corners is analogous. The linked space corresponding to a (smooth) manifold with boundary $(M,\partial M)$ has lower stratum $\partial M$, higher stratum $M^\circ=M\setminus\partial M$, link $L=\partial M$, $\pi=\id_{\partial M}$, and $\iota\colon\partial M\hookrightarrow M^\circ$ given by the flow along a nowhere-vanishing inward-pointing vector field along the boundary for a chosen nonzero time. An equivalent way to pick $\iota$ is to consider a tubular neighbourhood of the boundary diffeomorphic (via such a vector field) to $\partial M\times[0,1)\hookrightarrow M$, whose restriction $\partial M\times(0,1)\hookrightarrow M^\circ$ to positive time hits the interior, and take $\iota$ to be the restriction to $\partial M\times\{\frac{1}{2}\}$. \end{example}
\begin{example}[Defects]\label{ex:defects} With a smooth submanifold $N\subset M$ of positive codimension we may associate a linked space with lower stratum $N$ and higher stratum $M\setminus N$. The link is given by the sphere bundle of the normal bundle of $N$, with the obvious maps $\pi$ and $\iota$. For instance, the link of $\mathbb{R}\subset\mathbb{R}^3$ is an open (infinite) cylinder, whereas the link of $S^1\subset\mathbb{R}^3$ is a torus. \end{example}
\begin{example}[Depth-$1$ stratified Grassmannians]\label{ex:grassmann}
For $n,k\in\mathbb{N}$, consider the span
\[
\begin{tikzcd}
& B\OO(n)\times B\OO(k)\ar[dl,two heads,"{\pi}"']\ar[dr,hook,"{\boxplus}"]\\
B\OO(n) & & B\OO(n+k)
\end{tikzcd}
\]
where $\pi$ is the coordinate projection and $\boxplus$ is induced by direct-summing of vector spaces and the choice of a pairing function (bijection) $\mathbb{N}\times\mathbb{N}\cong\mathbb{N}$. This gives sub-$\infty$-categories of the stratified Grassmannian of \cite{ayala2020factorization} as treated in \cite{tetik2022stratified}.\footnote{We slightly deviated from the map $\boxplus$ used in \cite{tetik2022stratified} by using a pairing function, but only up to an equivalence induced by it.} A higher-depth treatment can reconstruct the full $\infty$-category, but we leave this for future work. \end{example}
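As a side remark, one explicit choice for the pairing function above is the classical Cantor pairing $(m,n)\mapsto\tfrac{(m+n)(m+n+1)}{2}+n$, which enumerates $\mathbb{N}\times\mathbb{N}$ along anti-diagonals. The following Python sketch is purely illustrative (any bijection $\mathbb{N}\times\mathbb{N}\cong\mathbb{N}$ would do for the construction) and checks bijectivity on an initial segment:

```python
def cantor_pair(m: int, n: int) -> int:
    """Classical Cantor pairing: enumerates N x N along anti-diagonals."""
    return (m + n) * (m + n + 1) // 2 + n

def cantor_unpair(z: int) -> tuple:
    """Inverse of cantor_pair: locate the anti-diagonal, then the offset."""
    w = int(((8 * z + 1) ** 0.5 - 1) / 2)
    while w * (w + 1) // 2 > z:            # guard against float rounding
        w -= 1
    while (w + 1) * (w + 2) // 2 <= z:
        w += 1
    n = z - w * (w + 1) // 2
    return (w - n, n)
```

Since anti-diagonal $s$ is sent onto $\{s(s+1)/2,\dots,s(s+1)/2+s\}$, the pairs with $m+n<S$ map onto an initial segment of $\mathbb{N}$.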
\printbibliography
\begin{comment} \begin{definition}\label{dfn:simplices that factor through cylinders}
Let $\iota\colon\mathscr{L}\rightarrow\mathscr{N}$ be a map of simplicial sets. For each $k\geq1$, we write $\Phat_{k-1}\subset\mathscr{N}_k$ for the set of $k$-simplices $\gamma$ of $\mathscr{N}$, such that:
there exists a pair $(\Gamma,\alpha)\in \mathscr{P}_{k-1}\times(\Delta^1\times\Delta^{k-1})_{k}$, called a {\it factorising pair}, such that
\begin{equation}\label{item:1:simplices that factor through cylinders}
\begin{tikzcd}
\Delta^k\ar[dr,"{\alpha}"']\ar[rr,"{\gamma}"] && \mathscr{N}\\ & \Delta^1\times\Delta^{k-1}\ar[ur,"{\Gamma}"']
\end{tikzcd}
\end{equation}
commutes, and such that:
there is \emph{no} factorisation of $\alpha$ as in the diagram
\begin{equation}\label{item:simplices that factor through cylinders}
\begin{tikzcd}
\Delta^k\ar[d,dotted,"{\nexists}"']\ar[dr,"{\alpha}"]\\
\Delta^{\{1\}}\times\Delta^{k-1}\ar[r,hook]& \Delta^1\times\Delta^{k-1}
\end{tikzcd}
.
\end{equation}
Here, $\Gamma\colon\Delta^1\times\Delta^{k-1}\rightarrow\mathscr{N}$ is understood to be the image of $\Gamma$ under the natural map $\mathscr{P}\rightarrow\mathscr{N}^{\Delta^1}$. \end{definition}
In words, $\Phat_{k-1}$ is the set of $k$-simplices of $\mathscr{N}$ that lie in a $k$-dimensional cylinder whose base lies entirely in $\mathscr{L}$ (i.e., in $\iota(\mathscr{L})$), and which are not entirely contained within the top face of the cylinder.
We should note that a pair $(\Gamma,\alpha)$ as in \Cref{dfn:simplices that factor through cylinders} is by no means uniquely determined by $\gamma$. In fact, neither coordinate fixes the other. Also, observe that Condition \eqref{item:simplices that factor through cylinders} is satisfied for all $\alpha$ that can partake in Condition \eqref{item:1:simplices that factor through cylinders} once it is satisfied for one. \begin{remark}[ad \eqref{item:1:simplices that factor through cylinders}]\label{rmk:ad condition 1}
The aim of Condition \eqref{item:1:simplices that factor through cylinders} of \Cref{dfn:simplices that factor through cylinders} is two-fold. First, bookkeeping: it helps group elements of $\Phat_*$ into three classes (\Cref{dfn:vertical bottom top}), which will play different roles. The second aim is to `fix orientation', in the sense that, for instance, the edges of $\gamma$ that touch $\mathscr{L}$ are necessarily directed {\it away} from $\mathscr{L}$ due to the construction of the path $\Gamma\in\mathscr{P}_{k-1}$ that accompanies it (cf.\ the arrows in triangles \eqref{eq:1-morphism mapping cocylinder} and \eqref{eq:triangles from square}). If we asked only that $\gamma$ have some vertex in $\mathscr{L}$, then a path starting in $\mathscr{N}\setminus\iota(\mathscr{L}_0)$ but ending in $\iota(\mathscr{L})$ would be allowed in $\Phat$, which would defeat its purpose. \end{remark} \begin{remark}[ad \eqref{item:simplices that factor through cylinders}] \label{rmk:ad condition 2}
If $\gamma$ is non-degenerate, Condition \eqref{item:simplices that factor through cylinders} of \Cref{dfn:simplices that factor through cylinders} is automatically satisfied, since such a factorisation would give a factorisation $\Delta^k\rightarrow\Delta^{k-1}\rightarrow\mathscr{N}$. This is another way of saying that the image of a $(1,k-1)$-shuffle (see \Cref{rmk:shuffles}) contains at least one vertex with first coordinate $0\in\Delta^1$ (by which we mean the vertex $[0]\hookrightarrow[1]$, $0\mapsto 0$) and at least one with first coordinate $1\in\Delta^1$.
For our purposes this would be too coarse on its own, as explained in \Cref{rmk:ad condition 1}.
\end{remark}
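The vertex-coordinate observation of \Cref{rmk:ad condition 2} can be checked mechanically in low dimensions. Modelling the non-degenerate $k$-simplices of $\Delta^1\times\Delta^{k-1}$, i.e.\ the $(1,k-1)$-shuffles, as monotone unit-step paths from $(0,0)$ to $(1,k-1)$ (a standard identification), every shuffle visits a vertex with first coordinate $0$ and one with first coordinate $1$. The Python sketch below is only a sanity check under this path model, not part of the development:

```python
from math import comb

def shuffles(k):
    """(1, k-1)-shuffles as monotone unit-step paths (0,0) -> (1, k-1)."""
    paths = []
    def extend(path):
        a, b = path[-1]
        if (a, b) == (1, k - 1):
            paths.append(path)
            return
        if a < 1:
            extend(path + [(a + 1, b)])     # step in the Delta^1 direction
        if b < k - 1:
            extend(path + [(a, b + 1)])     # step in the Delta^{k-1} direction
    extend([(0, 0)])
    return paths
```

There are $\binom{k}{1}=k$ such shuffles, one for each position of the single vertical step.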
As alluded to above, a face of an element of $\Phat_{k-1}$ is naturally of one of two types: it is either in $\Phat_{k-2}$ or not. Since we will treat faces that differ in this manner separately, it will be helpful to characterise them. In fact, we will need to be slightly more discerning (cf.\ \Cref{rmk:who descends in general}):
\begin{lemma}\label{lem:two classes}
Let $k\geq1$. If $\gamma\in\Phat_{k-1}$ and $d_i(\gamma)$ is vertical, then $d_i(\gamma)\in\Phat_{k-2}$. \end{lemma}
\begin{example}
The horizontal edge of triangle \eqref{eq:triangles from square 2} of \Cref{rmk:shuffles} is in general not in $\Phat_0$, because it does not touch $\mathscr{L}$, which precludes both Conditions \eqref{item:1:simplices that factor through cylinders} and \eqref{item:simplices that factor through cylinders}. In contrast, the vertical edge and the hypotenuse of either triangle descend to $\Phat_0$. Similarly, the horizontal edge of triangle \eqref{eq:triangles from square} descends, albeit in a different way. \end{example}
A factorising pair for a vertical face $\gamma'\coloneqq d_i(\gamma)$ as in \Cref{lem:two classes} is not unique, even if one only uses a chosen factorising pair $(\Gamma,\alpha)$ for $\gamma$. Before we give a proof, we illustrate some possible choices for $k=3$. (When $k=1$, the statement is vacuously true as there are no vertical faces, and we leave the case $k=2$ to the reader.) There is an easy case where there is a canonical choice, and one where there are multiple choices. In the former, the composition \[
\begin{tikzcd}
\Delta^{k-1}\ar[r,hook,"{\partial_i}"] & \Delta^k\ar[r,"{\alpha}"] & \Delta^1\times\Delta^{k-1}
\end{tikzcd} \] lies entirely within a vertical face of $\Delta^{1}\times\Delta^{k-1}$, i.e., there is a face map $\partial^Z\colon \Delta^{k-2}\hookrightarrow\Delta^{k-1}$ ($Z$ for {\it Z}ylinder) and a factorisation $\alpha'$ as in the diagram \[
\begin{tikzcd}
\Delta^{k-1}\ar[d,dotted,"{\alpha'}"]\ar[r,"{\alpha\circ\partial_i}"] & \Delta^{1}\times\Delta^{k-1}\\
\Delta^{1}\times\Delta^{k-2}\ar[ur,hook,"{\id\times\partial^Z}"']
\end{tikzcd}
. \] Then, the vertical face itself is a cylinder of one lower dimension, and a factorising pair for $\gamma'$ is provided by \(
\left((\id\times\partial^Z)^*\Gamma, \alpha'\right). \) For $k=3$ and $\partial^Z=\partial^Z_0$ with respect to clockwise orientation on the triangles, the figure
\begin{equation}\label{eq:inside vertical face of cylinder}
{\tiny \begin{tikzcd}
& {\bullet}\ar[dr,dash] & \\
\bullet\ar[ur,dash] & & {\color{blue}\bullet}\\
& {\color{blue}\bullet}\ar[dr,color=blue]\ar[uu,dash]\ar[from=ul, to=ur,dash,crossing over]\ar[ur,color=blue] & \\
{\bullet}\ar[rr,dash]\ar[ur,dash]\ar[uu,dash] & & {\color{blue}\bullet}\ar[uu,color=blue]
\end{tikzcd}
\quad \leadsto \quad
\begin{tikzcd}
{\color{green}\bullet}\ar[r,color=green] & {\color{blue}\bullet}\\
{\color{blue}\bullet}\ar[u,color=green]\ar[ur,color=blue]\ar[r,color=blue] & {\color{blue}\bullet} \ar[u,color=blue]
\end{tikzcd}
={\color{blue}\gamma'}\in\Phat_1
} \end{equation} depicts this situation, with $\gamma$ itself omitted.
The generic situation is depicted by the following figure: \begin{equation}
{\tiny \begin{tikzcd}[column sep=normal, row sep=normal]
& {\color{blue}\bullet}\ar[dr,color=blue] & \\
\bullet\ar[ur,dash] & & {\color{blue}\bullet}\\
& \bullet\ar[dr,dash]\ar[uu,dash]\ar[from=dl,to=ur,crossing over,color=blue,bend right=25]\ar[from=dl,to=uu,crossing over,color=blue,bend right=10]\ar[from=ul, to=ur,dash,crossing over] & \\
{\color{blue}\bullet}\ar[rr,dash]\ar[ur,dash]\ar[uu,dash] & & \bullet\ar[uu,dash]
\end{tikzcd}
\quad \leadsto\quad
\begin{tikzcd}
{\color{blue}\bullet} \ar[r,color=blue] & {\color{blue}\bullet}\\
{\color{blue}\bullet}\ar[ur,color=blue] \ar[u,color=blue] \ar[r,color=green] &{\color{green}\bullet} \ar[u,color=green]
\end{tikzcd}
\quad \text{or}\quad
\begin{tikzcd}
{\color{blue}\bullet} \ar[r,color=blue] & {\color{blue}\bullet}\\
{\color{green}\bullet} \ar[u,color=green] &{\color{blue}\bullet} \ar[u,color=blue]\ar[ul,color=blue]\ar[l,color=green]
\end{tikzcd}
} \end{equation} Here, the first square uses the (lower half of the) front face $(\id\times\partial^Z_1)^*\Gamma$, and the second the (upper half of the) left outer face $(\id\times\partial^Z_2)^*\Gamma$. Either choice will do. Similar considerations apply in higher dimensions.
We now give a proof that treats both cases simultaneously. The construction involves a non-canonical choice, but this is immaterial for our purposes.
\begin{proof}[Proof of \Cref{lem:two classes}]
Let $(\Gamma,\alpha)\in\mathscr{P}_{k-1}\times\left(\Delta^1\times\Delta^{k-1}\right)_k$ be a factorising pair for $\gamma\in\Phat_{k-1}$, and let $\gamma'\coloneqq d_i(\gamma)$ be a vertical face. As the case $k\leq2$ was discussed above, assume $k\geq3$. By means of
\[
\alpha_i\coloneqq\alpha\circ\partial_i\colon\Delta^{k-1}\rightarrow\Delta^1\times\Delta^{k-1},
\]
we may define a factorising pair
\[
(\Gamma',\alpha')\in\mathscr{P}_{k-2}\times\left(\Delta^1\times\Delta^{k-2}\right)_{k-1}
\]
for $\gamma'$ by completing the diagram
\begin{equation*}
\begin{tikzcd}
\Delta^{k-1}\ar[d,dotted,"{\alpha'}"']\ar[rr,"{\gamma'=\alpha_i^*\Gamma}"] & & \mathscr{N}\\
\Delta^{1}\times\Delta^{k-2}\ar[r,dotted,"{c}"']\ar[rru,dashed,"{\Gamma'}"']\ar[from=u,to=r,equal,crossing over] & \Delta^{k-1}\ar[r,"{\alpha_i}"'] & \Delta^1\times\Delta^{k-1}\ar[u,"{\Gamma}"']
\end{tikzcd}
.
\end{equation*}
Now, an $\alpha'$ as desired must be non-degenerate, and thus determined by a $(1,k-2)$-shuffle. We pick one for which the definition of the left-inverse $c$ is particularly simple. Set
\[
\alpha'=\begin{bmatrix}
0 & 0 & 0 & \cdots & 0 & 0 & 1 \\
0 & 1 & 2 & \cdots & k-3 & k-2 & k-2
\end{bmatrix}
\]
in path notation.\footnote{This determines the element of $\left(\Delta^1\times\Delta^{k-2}\right)_{k-1}$ defined by the two degenerate maps $[k-1]\rightarrow[1]$, $[k-1]\rightarrow[k-2]$ in $\bm{\Delta}$ given by $(k-2\geq j\mapsto 0; k-1\mapsto 1)$, $(k-2\geq j\mapsto j; k-1\mapsto k-2)$. This specifies the `lower right' non-degenerate simplex, such as triangle \eqref{eq:triangles from square}, as opposed to triangle \eqref{eq:triangles from square 2}.} Clearly, $\alpha'$ satisfies Condition \eqref{item:simplices that factor through cylinders}. We define $c$ to be postcomposition with the map $[1]\times[k-2]\rightarrow[k-1]$ in $\bm{\Delta}$ given by $$(0,j)\mapsto j,\ (1,j)\mapsto k-1.$$ That $c\circ\alpha'=\id_{\Delta^{k-1}}$ is easily checked. Defining $\Gamma'$ as in the diagram, we see that $(\Gamma',\alpha')$ is a factorising pair, since $\Gamma'\circ\alpha'=\Gamma\circ\alpha_i\circ\id=\gamma'$ verifies Condition \eqref{item:1:simplices that factor through cylinders}. \end{proof}
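As a sanity check on the proof above, the identity $c\circ\alpha'=\id_{\Delta^{k-1}}$ can be verified on vertices for small $k$. The Python sketch below encodes $\alpha'$ as read off from the path notation and $c$ by the displayed formula; it is an illustration only, working with vertex maps rather than full simplicial maps:

```python
def alpha_prime(k):
    """Vertex map [k-1] -> [1] x [k-2] read off the path notation:
    j -> (0, j) for j <= k-2, and k-1 -> (1, k-2)."""
    return {j: (0, j) if j <= k - 2 else (1, k - 2) for j in range(k)}

def c(k, v):
    """Postcomposition map [1] x [k-2] -> [k-1]: (0, j) -> j, (1, j) -> k-1."""
    eps, j = v
    return j if eps == 0 else k - 1
```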
We finish this section by noting the analogous fact in the degenerate direction. Its proof also involves a choice (that of a face) that will play no role.
\begin{lemma}\label{lem:degenerate vertical}
Let $k\geq1$, $\gamma\in\mathscr{P}^{\Delta}_{k-1}$, and $s_i=\sigma_i^*$ a degeneracy. Then $s_i(\gamma)\in\Phat_{k}$. \end{lemma}
The idea of the proof is to embed $s_i(\gamma)$ into a vertical face as indicated in the cylinder from \eqref{eq:inside vertical face of cylinder}.
\begin{proof}[Proof of \Cref{lem:degenerate vertical}]
Consider the face $\partial_k\colon\Delta^{k-1}\hookrightarrow\Delta^{k}$, which specifies the vertical face $\id\times\partial_k\colon\Delta^{1}\times\Delta^{k-1}\hookrightarrow\Delta^{1}\times\Delta^{k}$. The degeneracy $\sigma_{k}\colon\Delta^k\rightarrow\Delta^{k-1}$ satisfies $\sigma_k\circ\partial_k=\id$. Consider now the diagram
\begin{equation*}
\begin{tikzcd}
\Delta^{k+1} \ar[drrr,dashed, bend right=50,"{\alpha'}"] \ar[r,"{\sigma_i}"] & \Delta^k \ar[d,"{\alpha}"] \ar[r,"{\gamma}"] & \mathscr{N} \\
& \Delta^1\times\Delta^{k-1} \ar[ur,"{\Gamma}"] \ar[rr,hook,bend right=15,"{\id\times\partial_k}"] && \Delta^1\times\Delta^{k} \ar[ll,two heads,bend right=15,"{\id\times\sigma_k}"] \ar[ul,dashed,"{\Gamma'}"']
\end{tikzcd}
\end{equation*}
for $(\Gamma,\alpha)$ a factorising pair for $\gamma$. The pair $(\Gamma',\alpha')$ factorises $s_i(\gamma)$. \end{proof}
\section{Exit paths}
\begin{definition}\label{dfn:exit}
Given a span $$\mathfrak{S}=(\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xrightarrow{\iota}\mathscr{N})$$ of simplicial sets, we define a simplicial set $\EEx=\EEx(\mathfrak{S})$ as follows:
\begin{itemize}
\item $\EEx_0=\mathscr{M}_0\amalg\mathscr{N}_0$.
\item
\(
\EEx_k=\mathscr{M}_k\amalg \Phat_{k-1} \amalg\mathscr{N}_k
\)
for $k\geq1$ (\Cref{dfn:simplices that factor through cylinders}).
\item Face and degeneracy maps restricted to $\mathscr{M}_k$ and $\mathscr{N}_k$ are those of $\mathscr{M}$ and $\mathscr{N}$.
\item For $k\geq1$ and $\gamma\in\Phat_{k-1}$, let $d_i$ be a face map. If $d^{\mathscr{N}}_i(\gamma)\in\mathscr{N}_{k-1}$ is
\begin{itemize}
\item vertical (\Cref{dfn:vertical bottom top}), then (\Cref{lem:two classes})
$$
d_i(\gamma)\in\Phat_{k-2}\subset\EEx_{k-1}.
$$
\item bottom, then let $(\Gamma,\alpha)$ be a factorising pair for $\gamma$. We have that $d^{\mathscr{N}}_i(\gamma)=(\alpha\circ\partial_i)^*\Gamma\in\mathscr{L}_{k-1}$ by construction (of $\mathscr{P}$), and so we set (\Cref{rmk:induced source and target maps for non-invertible paths})
$$
d_i(\gamma)=\pi(d^{\mathscr{N}}_i(\gamma))\in\mathscr{M}_{k-1}\subset\EEx_{k-1}.
$$
\item top, then similarly we set
\[
d_i(\gamma)=\tau(d_i^{\mathscr{N}}(\gamma))\in\mathscr{N}_{k-1}\subset\EEx_{k-1}.
\]
\end{itemize}
\item For $k\geq1$ and $\gamma\in\Phat_{k-1}$, let $s_i$ be a degeneracy map. Then (\Cref{lem:degenerate vertical})
\[
s_i(\gamma)\in\Phat_k\subset\EEx_{k+1}.
\]
\end{itemize} \end{definition}
\begin{proof}[Proof that $\EEx$ is a simplicial set]
We check the simplicial identities. Let natural numbers $0\leq i,j\leq n$ be given. First, assume $i<j$.
\begin{itemize}
\item ($d_id_j=d_{j-1}d_i$) Let $\gamma\in\Phat_{n-1}\subset\EEx_n$. If $d^{\mathscr{N}}_j(\gamma)$ is vertical
\end{itemize} \end{proof}
\begin{theorem}\label{thm:exit path category of linked space}
If $\mathscr{M},\mathscr{L},\mathscr{N}$ are $\infty$-groupoids, $\pi\colon\mathscr{L}\rightarrow\mathscr{M}$ a Kan fibration, and $\iota\colon\mathscr{L}\rightarrow\mathscr{N}$ a cofibration, then $\EEx\left(\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xrightarrow{\iota}\mathscr{N}\right)$ is an $\infty$-category. \end{theorem}
\begin{definition}
We call a span $\mathscr{M}\xleftarrow{\pi}\mathscr{L}\xrightarrow{\iota}\mathscr{N}$ of $\infty$-groupoids, with $\pi$ a Kan fibration, and $\iota$ a cofibration, a {\it linked $\infty$-groupoid} or {\it linked space}, of {\it depth $1$}. We call $\EEx$ its {\it exit path $\infty$-category}. \end{definition}
\begin{proof}[Proof of \Cref{thm:exit path category of linked space}]
\end{proof}
\subsection{Examples}
\begin{example}[Bordisms]
\end{example}
\begin{example}[Hypersurfaces]
\end{example}
\begin{example}[Depth-1 stratified Grassmannians]
For $n,k\in\mathbb{N}$, consider the span
\[
\begin{tikzcd}
& B\OO(n)\times B\OO(k)\ar[dl,two heads,"{\pi}"']\ar[dr,hook,"{\boxplus}"]\\
B\OO(n) & & B\OO(n+k)
\end{tikzcd}
\]
where $\pi$ is the coordinate projection and $\boxplus$ is induced by direct-summing of vector spaces and a pairing function (bijection) $\mathbb{N}\times\mathbb{N}\cong\mathbb{N}$. (Cf.\ \cite{tetik2022stratified}; we deviated slightly from the map $\boxplus$ used therein.) \end{example}
\end{comment}
\end{document}
\begin{document}
\begin{abstract} If $V$ is an irreducible algebraic variety over a number field $K$, and $L$ is a field containing $K$, we say that $V$ is {\em diophantine-stable} for $L/K$ if $V(L) = V(K)$. We prove that if $V$ is either a simple abelian variety, or a curve of genus at least one, then under mild hypotheses there is a set $S$ of rational primes with positive density such that for every $\ell \in S$ and every $n \ge 1$, there are infinitely many cyclic extensions $L/K$ of degree $\ell^n$ for which $V$ is diophantine-stable. We use this result to study the collection of finite extensions of $K$ generated by points in $V(\bar{K})$. \end{abstract}
\title{Diophantine stability}
\tableofcontents
\part{Introduction, conjectures and results} \label{part1} \section{Introduction} \label{intro} Throughout Part \ref{part1} (\S\ref{intro} through \S\ref{uncpf}) we fix a number field $K$.
\subsection{Diophantine stability}
For any field $K$, we denote by $\bar{K}$ a fixed separable closure of $K$, and by $G_K$ the absolute Galois group $\mathrm{Gal}(\bar{K}/K)$.
\begin{defn} Suppose $V$ is an irreducible algebraic variety over $K$. If $L$ is a field containing $K$, we say that $V$ is {\em diophantine-stable} for $L/K$ if $V(L) = V(K)$.
If $\ell$ is a rational prime, we say that $V$ is {\em $\ell$-diophantine-stable} over $K$ if for every positive integer $n$, and every finite set $\Sigma$ of places of $K$, there are infinitely many cyclic extensions $L/K$ of degree $\ell^n$, completely split at all places $v \in \Sigma$, such that $V(L) = V(K)$. \end{defn}
The main results of this paper are the following two theorems.
\begin{thm} \label{static-avs} Suppose $A$ is a simple abelian variety over $K$ and all $\bar{K}$-endo\-morphisms of $A$ are defined over $K$. Then there is a set $S$ of rational primes with positive density such that $A$ is $\ell$-diophantine-stable over $K$ for every $\ell \in S$. \end{thm}
\begin{thm} \label{static-curves} Suppose $X$ is an irreducible curve over $K$, and let $\tilde{X}$ be the normalization and completion of $X$. If $\tilde{X}$ has genus $\ge 1$, and all $\bar{K}$-endomorphisms of the jacobian of $\tilde{X}$ are defined over $K$, then there is a set $S$ of rational primes with positive density such that $X$ is $\ell$-diophantine-stable over $K$ for every $\ell \in S$. \end{thm}
\begin{rems} (1) Note that our assumptions on $A$ imply that $A$ is absolutely simple. It is natural to ask whether the assumption on $\mathrm{End}(A)$ is necessary, and whether the assumption that $A$ is simple is necessary. See Remark \ref{nonsimple} for more about the latter question.
(2) The condition on the endomorphism algebra in Theorems \ref{static-avs} and \ref{static-curves} can always be satisfied by enlarging $K$.
(3) For each $\ell \in S$ in Theorem \ref{static-avs} and each $n \ge 1$, Theorem \ref{quantthm} below gives a quantitative lower bound for the number of cyclic extensions of degree $\ell^n$ and bounded conductor for which $A$ is $\ell$-diophantine-stable. \end{rems}
We will deduce Theorem \ref{static-curves} from Theorem \ref{static-avs} in \S\ref{proof1} below, and prove the following consequences in \S\ref{uncpf}. Corollary \ref{uncount} is proved by applying Theorem \ref{static-curves} repeatedly to the modular curve $X_0(p)$, and Corollary \ref{shlap} by applying Theorem \ref{static-curves} repeatedly to an elliptic curve over $\mathbb{Q}$ of positive rank and using results of Shlapentokh.
\begin{cor} \label{uncount} Let $p\ge 23$ and $p\ne 37, 43, 67, 163$. There are uncountably many pairwise non-isomorphic subfields $L$ of $\bar{\mathbb{Q}}$ such that no elliptic curve defined over $L$ possesses an $L$-rational subgroup of order $p$. \end{cor}
\begin{cor} \label{shlap} For every prime $p$, there are uncountably many pairwise non-isomorphic totally real fields $L$ of algebraic numbers in $\Q_p$ over which the following two statements both hold: \begin{enumerate} \item There is a diophantine definition of ${\mathbb{Z}}$ in the ring of integers $\mathcal{O}_L$ of $L$. In particular, Hilbert's Tenth Problem has a negative answer for $\mathcal{O}_L$; i.e., there does not exist an algorithm to determine whether a polynomial (in many variables) with coefficients in $\mathcal{O}_L$ has a solution in $\mathcal{O}_L$.
\item There exists a first-order definition of the ring $\mathbb{Z}$ in $L$. The first-order theory for such fields $L$ is undecidable. \end{enumerate} \end{cor}
\subsection{Fields generated by points on varieties} Our original motivation for Theorem \ref{static-curves} was to understand, given a variety $V$ over $K$, the set of (necessarily finite) extensions of $K$ generated by a single $\bar{K}$-point of $V$. More precisely, we make the following definition.
\begin{defn} Suppose $V$ is a variety defined over $K$. A finite extension $L/K$ is {\em generated over $K$ by a point of $V$} if (any of) the following equivalent conditions hold: \begin{itemize} \item There is a point $x\in V(L)$ such that $x \notin V(L')$ for any proper subextension $L'/K$. \item There is an $x \in V(\bar{K})$ such that $L=K(x)$. \item There is an open subvariety $W \subset V$, an embedding $W \hookrightarrow {\mathbb A}^N$ defined over $K$, and a point in the image of $W$ whose coordinates generate $L$ over $K$. \end{itemize} If $V$ is a variety over $K$ we will say that {\em $L/K$ belongs to $V$} if $L/K$ is generated by a point of $V$ over $K$. Denote by $\mathcal{L}(V;K)$ the set of finite extensions of $K$ belonging to $V$, that is: $$ \mathcal{L}(V;K) := \{K(x)/K : x \in V({\bar{K}})\}. $$ \end{defn}
For example, if $V$ contains a curve isomorphic over $K$ to an open subset of $\mathbb{P}^1$, then it follows from the primitive element theorem that every finite extension of $K$ belongs to $V$. It seems natural to us to conjecture the converse. We prove this conjecture for irreducible curves. Specifically:
\begin{thm} \label{curves} Let $X$ be an irreducible curve over $K$. Then the following are equivalent: \begin{enumerate} \item all but finitely many finite extensions $L/K$ belong to $X$, \item $X$ is birationally isomorphic (over $K$) to the projective line. \end{enumerate} \end{thm}
Theorem \ref{curves} is a special case of Theorem \ref{genus0} below, taking $Y = \mathbb{P}^1$.
More generally, one can ask to what extent $\mathcal{L}(X;K)$ determines the curve $X$.
\begin{question} \label{ques} Let $X$ and $Y$ be irreducible smooth projective curves over a number field $K$. If ${\mathcal{L}}(X;K) = {\mathcal{L}}(Y;K)$, are $X$ and $Y$ necessarily isomorphic over ${\bar{K}}$? \end{question}
With $\bar{K}$ replaced by $K$ in Question \ref{ques}, the answer is ``no''. A family of counterexamples found by Daniel Goldstein and Zev Klagsbrun is given in Proposition \ref{GK} below. However, Theorem \ref{genus0} below shows that a stronger version of Question \ref{ques} has a positive answer if $X$ has genus zero.
We will write $\mathcal{L}(X;K) \approx \mathcal{L}(Y;K)$ to mean that $\mathcal{L}(X;K)$ and $\mathcal{L}(Y;K)$ agree up to a finite number of elements, i.e., the symmetric difference $$ {\mathcal{L}}(X;K) \cup {\mathcal{L}}(Y;K) - {\mathcal{L}}(X;K) \cap {\mathcal{L}}(Y;K) $$ is finite.
We can also ask Question \ref{ques} with ``$=$'' replaced by ``$\approx$''. Lemma \ref{binv} below shows that up to ``$\approx$'' equivalence, $\mathcal{L}(X;K)$ is a birational invariant of the curve $X$.
\begin{thm} \label{genus0} Suppose $X$ and $Y$ are irreducible curves over $K$, and $Y$ has genus zero. Then $\mathcal{L}(X;K) \approx \mathcal{L}(Y;K)$ if and only if $X$ and $Y$ are birationally isomorphic over $K$. \end{thm}
Theorem \ref{genus0} will be proved in \S\ref{curvesect}.
\subsection{Growth of Mordell-Weil ranks in cyclic extensions} Fix an abelian variety $A$ over $K$. Theorem \ref{static-avs} produces a large number of cyclic extensions $L/K$ such that $\mathrm{rank}(A(L)) = \mathrm{rank}(A(K))$. For fixed $m \ge 2$, it is natural to ask how ``large'' is the set $$ \mathcal{S}_m(A/K) := \{\text{$L/K$ cyclic of degree $m$} : \mathrm{rank}(A(L)) > \mathrm{rank} (A(K))\}. $$ In \S\ref{quant} we use the proof of Theorem \ref{static-avs} to give quantitative information about the size of $\mathcal{S}_{\ell^n}(A/K)$ for prime powers $\ell^n$.
Conditional on the Birch and Swinnerton-Dyer Conjecture, $\mathcal{S}_m(A/K)$ is closely related to the collection of $1$-dimensional characters $\chi$ of $K$ of order dividing $m$ such that the $L$-function $L(A,\chi; s)$ of the abelian variety $A$ twisted by $\chi$ has a zero at the central point $s= 1$. There is a good deal of literature on the statistics of such zeroes, particularly in the case where $A=E$ is an elliptic curve over ${\mathbb{Q}}$. For $\ell$ prime let $$
N_{E, \ell}(x):= |\{\text{Dirichlet characters $\chi$ of order $\ell$ : $\mathrm{cond}(\chi) \le x$ and $L(E,\chi,1) = 0$}\}|. $$ David, Fearnley and Kisilevsky \cite{DFK} conjecture that $ \lim_{x \to \infty}N_{E,\ell}(x) $ is infinite for $\ell \le 5$, and finite for $\ell \ge 7$. More precisely, the Birch and Swinnerton-Dyer Conjecture would imply $$ \log N_{E,2}(x) \sim \log(x), $$ and David, Fearnley and Kisilevsky \cite{DFK} conjecture that as $x \to \infty$, $$ \log N_{E, 3}(x) \sim {\textstyle\frac{1}{2}}\log(x),\quad \log N_{E, 5}(x) \ll_{\epsilon} \epsilon\log(x) ~\text{for all $\epsilon > 0$}. $$
Examples with $L(E,\chi,1) = 0$ for $\chi$ of large order $\ell$ seem to be quite rare over ${\mathbb{Q}}$. Fearnley and Kisilevsky \cite{FK} provide examples when ${\ell}=7$ and one example with $\ell=11$ (the curve $E : y^2 + xy = x^3 + x^2 -32x + 58$ of conductor $5906$, with $\chi$ of conductor $23$).
In contrast, working over more general number fields there can be a large supply of cyclic extensions $L/K$ in which the rank grows. We will say that a cyclic extension $L/K$ is of {\em dihedral type} if there are subfields $k \subset K_0 \subset K$ and $L_0 \subset L$ such that $[K_0:k] = 2$, $L_0/k$ is Galois with dihedral Galois group, and $KL_0 = L$. The rank frequently grows in extensions of dihedral type, as can be detected for parity reasons, and sometimes buttressed by Heegner point constructions. See \cite[\S2, \S3]{growth} and \cite[Theorem B]{alc}. This raises the following natural question.
\begin{question} \label{1.12} Suppose $V$ is either an abelian variety or an irreducible curve of genus at least one over $K$. Is there a bound $M(V)$ such that if $L/K$ is cyclic of degree $\ell > M(V)$ and not of dihedral type, then $V(L) = V(K)$? \end{question}
A positive answer to Question \ref{1.12} for abelian varieties implies a positive answer for irreducible curves of positive genus, exactly as Theorem \ref{static-curves} follows from Theorem \ref{static-avs} (see \S\ref{proof1}).
\subsection{Outline of the paper} In \S\ref{curvesect} we prove Theorem \ref{genus0}. The rest of Part \ref{part1} is devoted to deducing Theorem \ref{static-curves} from Theorem \ref{static-avs}, and deducing Corollary \ref{uncount} from Theorem \ref{static-curves}. The heart of the paper is Part \ref{part2} (sections \ref{twists} through \ref{hyps}), where we prove Theorem \ref{static-avs}. In \S\ref{quant} we give quantitative information about the number of extensions $L/K$ relative to which our given abelian variety is diophantine-stable.
Here is a brief description of the strategy of the proof of Theorem \ref{static-avs} in the case when $\mathrm{End}(A) = \mathbb{Z}$ and $n = 1$. (For a more thorough description see \S\ref{intro2}, the introduction to Part \ref{part2}.) The strategy in the general case is similar, but must deal with the complexities of the endomorphism ring of $A$. If $L/K$ is a cyclic extension of degree $\ell$, we show (Proposition \ref{tower}) that $\mathrm{rank}(A(L)) = \mathrm{rank}(A(K))$ if and only if a certain Selmer group we call $\mathrm{Sel}(L/K,A[\ell])$ vanishes. The Selmer group $\mathrm{Sel}(L/K,A[\ell])$ is a subgroup of $H^1(K,A[\ell])$ cut out by local conditions $\mathcal{H}_\ell(L_v/K_v) \subset H^1(K_v,A[\ell])$ for every place $v$, that depend on the local extension $L_v/K_v$. Thus finding $L$ with $A(L) = A(K)$ is almost the same as finding $L$ with ``good local conditions'' so that $\mathrm{Sel}(L/K,A[\ell]) = 0$.
If $v$ is a prime of $K$, not dividing $\ell$, where $A$ has good reduction, we call $v$ ``critical'' if $\dim_{\F_\ell}A[\ell]/(\mathrm{Fr}_v-1)A[\ell] = 1$, and ``silent'' if $\dim_{\F_\ell}A[\ell]/(\mathrm{Fr}_v-1)A[\ell] = 0$. If $v$ is a critical prime, then the local condition $\mathcal{H}_\ell(L_v/K_v)$ only depends on whether $L/K$ is ramified at $v$ or not. If $v$ is a silent prime, then $\mathcal{H}_\ell(L_v/K_v) = 0$ and does not depend on $L$ at all. Given a sufficiently large supply of critical primes, we show (Propositions \ref{goodp} and \ref{l7.13}) how to choose a finite set $\Sigma_c$ of critical primes so that if $\Sigma_s$ is any finite set of silent primes, $L/K$ is completely split at all primes of bad reduction and all primes above $\ell$, and the set of primes ramifying in $L/K$ is $\Sigma_c \cup \Sigma_s$, then $\mathrm{Sel}(L/K,A[\ell]) = 0$.
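To make the critical/silent dichotomy computational: $A[\ell]/(\mathrm{Fr}_v-1)A[\ell]$ is the cokernel of $\mathrm{Fr}_v-1$ acting on $A[\ell]\cong\F_\ell^{2g}$, so its dimension is $2g-\mathrm{rank}_{\F_\ell}(\mathrm{Fr}_v-1)$. The following Python sketch classifies a prime from a matrix of Frobenius mod $\ell$; the matrices used in the tests are hypothetical examples, not data attached to any particular abelian variety:

```python
def rank_mod(M, ell):
    """Rank over F_ell of an integer matrix, by Gaussian elimination."""
    M = [[x % ell for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for col in range(cols):
        piv = next((i for i in range(r, rows) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        inv = pow(M[r][col], -1, ell)        # modular inverse (Python 3.8+)
        M[r] = [(x * inv) % ell for x in M[r]]
        for i in range(rows):
            if i != r and M[i][col]:
                f = M[i][col]
                M[i] = [(a - f * b) % ell for a, b in zip(M[i], M[r])]
        r += 1
    return r

def classify(frob, ell):
    """'silent' / 'critical' / 'neither', from dim of coker(Fr_v - 1)."""
    d = len(frob)
    m = [[(frob[i][j] - (i == j)) % ell for j in range(d)] for i in range(d)]
    coker = d - rank_mod(m, ell)
    return {0: "silent", 1: "critical"}.get(coker, "neither")
```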
The existence of critical primes and silent primes for a set of rational primes $\ell$ with positive density is Theorem \ref{main} of the Appendix by Michael Larsen. We are very grateful to Larsen for providing the Appendix, and to Robert Guralnick, with whom we consulted and who patiently explained much of the theory to us. We also thank Daniel Goldstein and Zev Klagsbrun for Proposition \ref{GK} below.
\section{Fields generated by points on varieties} \label{curvesect}
Recall that for a variety $V$ over $K$ we have defined $$ \mathcal{L}(V;K) := \{K(x)/K : x \in V({\bar{K}})\}. $$
\subsection{Brauer-Severi varieties}
Suppose that $X$ is a variety defined over $K$ and isomorphic over $\bar{K}$ to $\mathbb{P}^n$, i.e., $X$ is an $n$-dimensional Brauer-Severi variety. Let $\mathrm{Br}(K) := H^2(G_K,\bar{K}^\times)$ denote the Brauer group of $K$. As a twist of $\mathbb{P}^n$, $X$ corresponds to a class in $H^1(G_K,\mathrm{Aut}_{\bar{K}}(\mathbb{P}^n))$, so using the map $$ H^1(G_K,\mathrm{Aut}_{\bar{K}}(\mathbb{P}^n)) = H^1(G_K,\mathrm{PSL}_{n+1}(\bar{K})) \hookrightarrow H^2(G_K,\boldsymbol{\mu}_{n+1}) = \mathrm{Br}(K)[n+1] $$ $X$ determines (and is determined up to $K$-isomorphism by) a class $$ c_X \in \mathrm{Br}(K)[n+1]. $$ For every place $v$ of $K$, let $\mathrm{inv}_v : \mathrm{Br}(K) \to \mathrm{Br}(K_v) \to \mathbb{Q}/\mathbb{Z}$ denote the local invariant.
\begin{prop} \label{b-s} Suppose that $X$ is a Brauer-Severi variety over $K$, and let $c_X \in \mathrm{Br}(K)$ be the corresponding Brauer class. If $L$ is a finite extension of $K$ then the following are equivalent: \begin{enumerate} \item $X(L)$ is nonempty, \item $L \in \mathcal{L}(X;K)$, \item $[L_w:K_v]\mathrm{inv}_v(c_X) = 0$ for every $v$ of $K$ and every $w$ of $L$ above $v$. \end{enumerate} \end{prop}
\begin{proof} Let $n := \dim(X)$, and suppose $X(L)$ is nonempty. Then $X$ is isomorphic over $L$ to $\mathbb{P}^n$. If $K \subset F \subset L$ then the Weil restriction of scalars $\mathrm{Res}^F_K X$ is a variety of dimension $n[F:K]$, and there is a natural embedding $$ \mathrm{Res}^F_K X \longrightarrow \mathrm{Res}^L_K X. $$ If we define $W := \mathrm{Res}^L_K X - \cup_{K \subset F \subsetneq L} \mathrm{Res}^F_K X$ then $W$ is a (nonempty) Zariski open subvariety of the rational variety $\mathrm{Res}^L_K X$, so in particular $W(K)$ is nonempty. But taking $K$ points in the definition of $W$ shows that $$ W(K) = (\mathrm{Res}^L_K X)(K) - \cup_{K \subset F \subsetneq L} (\mathrm{Res}^F_K X)(K)
= X(L) - \cup_{K \subset F \subsetneq L} X(F). $$ Thus $X(L)$ properly contains $\cup_{K \subset F \subsetneq L} X(F)$, so $L \in \mathcal{L}(X;K)$ and (i) $\Rightarrow$ (ii).
If $v$ is a place of $K$ and $w$ is a place of $L$ above $v$, then (see for example \cite[Proposition 2, \S1.3]{cfs}) \begin{equation} \label{e22} \mathrm{inv}_w(\mathrm{Res}_L(c_X)) = [L_w:K_v]\mathrm{inv}_v(c_X). \end{equation} If $L \in \mathcal{L}(X;K)$, then by definition $X(L)$ is nonempty, so $X$ is isomorphic over $L$ to $\mathbb{P}^n$ and $\mathrm{Res}_L(c_X) = 0$. Thus \eqref{e22} shows that (ii) $\Rightarrow$ (iii).
Finally, if (iii) holds then $\mathrm{inv}_w(\mathrm{Res}_L(c_X)) = 0$ for every $w$ of $L$ by \eqref{e22}, so $\mathrm{Res}_L(c_X) = 0$ (see for example \cite[Corollary 9.8]{cft}). Hence $X$ is isomorphic over $L$ to $\mathbb{P}^n$, so $X(L)$ is nonempty and we have (iii) $\Rightarrow$ (i). \end{proof}
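Condition (iii) is purely local and mechanical to verify once the local invariants and local degrees are known. The following sketch (plain Python with exact rational arithmetic; the place labels and invariant data are hypothetical, chosen to model a conic ramified at two places) is only an illustration of the arithmetic in $\mathbb{Q}/\mathbb{Z}$, not part of the proof.

```python
from fractions import Fraction

def splits_brauer_class(invariants, local_degrees):
    """Condition (iii): [L_w:K_v] * inv_v(c_X) = 0 in Q/Z for every
    place v of K and every place w of L above v."""
    for v, inv in invariants.items():
        for deg in local_degrees.get(v, [1]):
            if (deg * inv) % 1 != 0:   # nonzero class in Q/Z survives
                return False
    return True

# Hypothetical data: a 1-dimensional Brauer-Severi variety (conic) with
# inv = 1/2 at the places labelled '2' and 'infty', and 0 elsewhere.
inv_X = {'2': Fraction(1, 2), 'infty': Fraction(1, 2)}

# A quadratic L in which neither ramified place splits: each has a single
# place above it, of local degree 2, so the class dies over L.
print(splits_brauer_class(inv_X, {'2': [2], 'infty': [2]}))    # True
# A quadratic L in which '2' splits (two places of local degree 1).
print(splits_brauer_class(inv_X, {'2': [1, 1], 'infty': [2]}))  # False
```

Exact `Fraction` arithmetic is used so that the reduction modulo $\mathbb{Z}$ is tested exactly rather than in floating point.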
\begin{cor} \label{bscor} If $X$ and $Y$ are Brauer-Severi varieties, then $\mathcal{L}(X;K) = \mathcal{L}(Y;K)$ if and only if $\mathrm{inv}_v(c_X)$ and $\mathrm{inv}_v(c_Y)$ have the same denominator for every $v$. \end{cor}
\begin{proof} This follows directly from the equivalence (ii) $\Leftrightarrow$ (iii) of Proposition \ref{b-s}. \end{proof}
\subsection{Curves} For this subsection $X$ will be a curve over $K$, and we will prove Theorem \ref{genus0}.
\begin{lem} \label{binv} Suppose $X$ and $Y$ are curves defined over $K$ and birationally isomorphic over $K$. Then $\mathcal{L}(X;K) \approx \mathcal{L}(Y;K)$. \end{lem}
\begin{proof} If $X$ and $Y$ are birationally isomorphic, then there are Zariski open subsets $U_X \subset X$, $U_Y \subset Y$ such that $U_X \cong U_Y$ over $K$. Let $T$ denote the finite variety $X-U_X$. Then $$ \mathcal{L}(X;K) = \mathcal{L}(U_X;K) \cup \mathcal{L}(T;K), $$ and $\mathcal{L}(T;K)$ is finite. Therefore $\mathcal{L}(X;K) \approx \mathcal{L}(U_X;K)$, and similarly for $Y$, so $$ \mathcal{L}(X;K) \approx \mathcal{L}(U_X;K) = \mathcal{L}(U_Y;K) \approx \mathcal{L}(Y;K). $$ \end{proof}
Recall the statement of Theorem \ref{genus0}:
\begin{thmg0} Suppose $X$ and $Y$ are irreducible curves over $K$, and $Y$ has genus zero. Then ${\mathcal{L}}(X;K) \approx {\mathcal{L}}(Y;K)$ if and only if $X$ and $Y$ are birationally isomorphic over $K$. \end{thmg0}
\begin{proof}[Proof of Theorem \ref{genus0}] The `if' direction is Lemma \ref{binv}. Suppose now that $X$ and $Y$ are not birationally isomorphic over $K$; we will show that $\mathcal{L}(X;K) \not\approx \mathcal{L}(Y;K)$.
Replacing $X$ and $Y$ by their normalizations and completions (and using Lemma \ref{binv} again), we may assume without loss of generality that $X$ and $Y$ are both smooth and projective.
\noindent{\em Case 1: $X$ has genus zero.} In this case $X$ and $Y$ are one-dimensional Brauer-Severi varieties, so we can apply Proposition \ref{b-s}. Let $c_X, c_Y \in \mathrm{Br}(K)[2]$ be the corresponding Brauer classes. Since $X$ and $Y$ are not isomorphic, there is a place $v$ such that (switching $X$ and $Y$ if necessary) $\mathrm{inv}_v(c_X) = 0$ and $\mathrm{inv}_v(c_Y) = 1/2$. Let $T$ be the (finite) set of places $u$ of $K$ different from $v$ where $\mathrm{inv}_u(c_X)$ and $\mathrm{inv}_u(c_Y)$ are not both zero. If $L/K$ is a quadratic extension in which $v$ splits, but no place in $T$ splits, then by Proposition \ref{b-s} we have $L \in \mathcal{L}(X;K)$ but $L \notin \mathcal{L}(Y;K)$. There are infinitely many such $L$, so $\mathcal{L}(X;K) \not\approx \mathcal{L}(Y;K)$.
\noindent{\em Case 2: $X$ has genus at least one.} Let $K'/K$ be a finite extension large enough so that all $\bar{K}$-endomorphisms of the jacobian of $X$ are defined over $K'$, and $Y(K')$ is nonempty. By Theorem \ref{static-curves} applied to $X/K'$ we can find infinitely many nontrivial cyclic extensions $L/K'$ such that $X(L) = X(K')$, so in particular $L \notin {\mathcal{L}}(X;K)$. But $Y(L)$ is nonempty, so $L \in \mathcal{L}(Y;K)$ by Proposition \ref{b-s}. Since there are infinitely many such $L$, we conclude that $\mathcal{L}(X;K) \not\approx \mathcal{L}(Y;K)$. \end{proof}
\subsection{Principal homogeneous spaces for abelian varieties}
The following prop\-osition was suggested by Daniel Goldstein and Zev Klagsbrun. It shows that the answer to Question \ref{ques} is ``no'' if $\bar{K}$ is replaced by $K$. To see this, suppose that $A$ is an elliptic curve, and $\mathbf{a}, \mathbf{a}' \in H^1(K,A)$ generate the same cyclic subgroup, but there is no $\alpha \in \mathrm{Aut}_K(A)$ such that $\mathbf{a}' = \alpha\mathbf{a}$. Then the corresponding principal homogeneous spaces $X, X'$ are not isomorphic over $K$, but Proposition \ref{GK} shows that $\mathcal{L}(X;K) = \mathcal{L}(X';K)$.
\begin{prop} \label{GK} Fix an abelian variety $A$, and suppose $X$ and $X'$ are principal homogeneous spaces over $K$ for $A$ with corresponding classes $\mathbf{a}, \mathbf{a}' \in H^1(K,A)$. If the cyclic subgroups $\mathbb{Z}\mathbf{a}$ and $\mathbb{Z}\mathbf{a}'$ are equal, then $\mathcal{L}(X;K) = \mathcal{L}(X';K)$. \end{prop}
\begin{proof} Fix $n$ such that $n\mathbf{a} = 0$. The short exact sequence $$ 0 \to A[n] \to A(\bar{K}) \to A(\bar{K}) \to 0 $$ leads to the descent exact sequence $$ 0 \longrightarrow A(K)/nA(K) \longrightarrow H^1(K,A[n]) \longrightarrow H^1(K,A)[n] \longrightarrow 0, $$ and it follows that $\mathbf{a}$ can be represented by a cocycle $\sigma \mapsto a_\sigma$ with $a_\sigma \in A[n]$. Since $\mathbf{a}$ and $\mathbf{a}'$ generate the same subgroup, for some $m \in (\mathbb{Z}/n\mathbb{Z})^\times$ we can represent $\mathbf{a}'$ by $\sigma \mapsto a'_\sigma$ with $a'_\sigma = ma_\sigma$.
There are isomorphisms $\phi : A \to X$, $\phi' : A \to X'$ defined over $\bar{K}$ such that if $P \in A(\bar{K})$ and $\sigma \in G_K$, then $$ \phi(P)^\sigma = \phi(P^\sigma + a_\sigma), \qquad \phi'(P)^\sigma = \phi'(P^\sigma + a'_\sigma). $$ In particular, if $\sigma \in G_K$ then $$ \phi(P)^\sigma = \phi(P) \iff P^\sigma - P = -a_\sigma, $$ so \begin{equation} \label{three} \text{$K(\phi(P))$ is the fixed field of the subgroup $\{\sigma \in G_K : P^\sigma - P = -a_\sigma\}$} \end{equation} and similarly with $\phi$ and $\mathbf{a}$ replaced by $\phi'$ and $\mathbf{a}'$.
Suppose $L \in \mathcal{L}(X;K)$. Then we can fix $P \in A(\bar{K})$ such that $K(\phi(P)) = L$. In other words, by \eqref{three} we have \begin{equation} \label{four} G_L = \{\sigma \in G_K : P^\sigma - P = -a_\sigma\}. \end{equation} Since the set $\{P^\sigma - P + a_\sigma : \sigma \in G_K\}$ is finite and $m$ is relatively prime to $n$, we can choose $r \in \mathbb{Z}$ with $r \equiv m \pmod{n}$ such that $\{P^\sigma - P + a_\sigma : \sigma \in G_K\} \cap A[r] = 0$. Then by \eqref{four} \begin{multline*} \{\sigma \in G_K : (rP)^\sigma - rP = -a'_\sigma\}
= \{\sigma \in G_K : (rP)^\sigma - rP = -r a_\sigma\} \\
= \{\sigma \in G_K : P^\sigma - P = -a_\sigma\} = G_L, \end{multline*} so \eqref{three} applied to $\phi'$ and $\mathbf{a}'$ shows that $K(\phi'(rP)) = L$, i.e., $L \in \mathcal{L}(X';K)$. Thus $\mathcal{L}(X;K) \subset \mathcal{L}(X';K)$, and reversing the roles of $X$ and $X'$ shows that we have equality. \end{proof}
It seems natural to ask the following question about a possible converse to Proposition \ref{GK}.
\begin{question} \label{q2.7} Suppose that $A$ is an abelian variety, and $X, X'$ are principal homogeneous spaces for $A$ over $K$ with corresponding classes $\mathbf{a}, \mathbf{a}' \in H^1(K,A)$. If $\mathcal{L}(X;K) = \mathcal{L}(X';K)$, does it follow that $\mathbf{a}$ and $\mathbf{a}'$ generate the same $\mathrm{End}_K(A)$-submodule of $H^1(K,A)$? \end{question}
\begin{exa} Let $E$ be the elliptic curve 571A1 : $y^2 +y = x^3 - x^2 - 929x -10595$, with $\mathrm{End}_\mathbb{Q}(E) = \mathrm{End}_{\bar{\mathbb{Q}}}(E) = \mathbb{Z}$. Then the Shafarevich-Tate group $\mbox{\cyrr Sh}(E/\mathbb{Q}) \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$, and the three nontrivial elements (which generate distinct cyclic subgroups of $H^1(\mathbb{Q},E)$) are represented by the principal homogeneous spaces \begin{align*} &X_1 : y^2 = -19x^4 + 112x^3 - 142x^2 - 68x - 7\\ &X_2 : y^2 = -16x^4 - 82x^3 - 52x^2 + 136x - 44\\ &X_3 : y^2 = -x^4 - 26x^3 - 148x^2 + 274x - 111. \end{align*} Let $d_1 = 17$, $d_2 = 41$, and $d_3 = 89$. A computation in Sage \cite{sage} shows that $\mathbb{Q}(\sqrt{d_i}) \in \mathcal{L}(X_j;\mathbb{Q})$ if and only if $i = j$, so the sets $\mathcal{L}(X_j;\mathbb{Q})$ are distinct. \end{exa}
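One elementary ingredient of a verification like the one cited above is deciding whether an element $a + b\sqrt{d}$ of $\mathbb{Q}(\sqrt{d})$ is a square there: writing $a + b\sqrt{d} = (u + v\sqrt{d})^2$ with $b \ne 0$ forces $u^2 = (a \pm s)/2$, where $s^2 = a^2 - db^2$ must itself be a rational square. A minimal sketch of that test (plain Python with exact rationals; this is an illustration, not the Sage script cited above) follows.

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(q):
    """Is the rational number q a square in Q?"""
    q = Fraction(q)
    if q < 0:
        return False
    n, d = q.numerator, q.denominator   # already in lowest terms
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

def is_square_in_quad_field(a, b, d):
    """Is a + b*sqrt(d) a square in Q(sqrt(d))?  (d a nonsquare integer)"""
    a, b = Fraction(a), Fraction(b)
    if b == 0:
        # (u + 0*sqrt(d))^2 = u^2  or  (0 + v*sqrt(d))^2 = d*v^2
        return is_rational_square(a) or is_rational_square(a / d)
    norm = a * a - d * b * b            # N(a + b*sqrt(d))
    if not is_rational_square(norm):
        return False
    s = Fraction(isqrt(norm.numerator), isqrt(norm.denominator))
    return is_rational_square((a + s) / 2) or is_rational_square((a - s) / 2)

# (2 + sqrt(5))^2 = 9 + 4*sqrt(5):
print(is_square_in_quad_field(9, 4, 5))   # True
print(is_square_in_quad_field(1, 1, 5))   # False
```

Applied to a quartic model $y^2 = q(x)$, one would evaluate $q$ at candidates $x \in \mathbb{Q}(\sqrt{d_i})$ and run this test on the result; the actual computation requires a genuine point search, which the sketch does not attempt.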
\section{Theorem \ref{static-avs} implies Theorem \ref{static-curves}} \label{proof1}
In this section we deduce Theorem \ref{static-curves} from Theorem \ref{static-avs}.
\begin{lem} \label{preimp} The conclusion of Theorem \ref{static-curves} depends only on the birational equivalence class of $X$ over $K$. More precisely, if $X$, $Y$ are irreducible curves over $K$, birationally isomorphic over $K$, and $\ell$ is sufficiently large (depending on $X$ and $Y$), then $$ \text{$X$ is $\ell$-diophantine-stable over $K$ $\iff$ $Y$ is $\ell$-diophantine-stable over $K$}. $$ \end{lem}
\begin{proof} It suffices to prove the lemma in the case that $Y$ is a dense open subset of $X$. This is because any two $K$-birationally equivalent curves contain a common open dense subvariety.
Let $T := X-Y$. Then $T = \coprod_{i \in I} \mathrm{Spec}(K_i)$ for some finite index set $I$ and number fields $K_i$ containing $K$. Let $\delta = \max\{[K_i:K] : i \in I\}$. Then for every cyclic extension $L/K$ of prime-power degree $\ell^n$ with $\ell > \delta$, we have $L \cap K_i = K$ for all $i \in I$, so $T(L) = T(K)$ and $ X(L) = X(K) \iff Y(L) = Y(K). $ \end{proof}
It suffices, then, to prove Theorem \ref{static-curves} for irreducible projective smooth curves $X$.
\begin{lem} \label{preimp2} Suppose $f:X \to Y$ is a nonconstant map (defined over $K$) of irreducible curves over $K$. If $\ell$ is sufficiently large (depending on $X$, $Y$, and $f$), and $Y$ is $\ell$-diophantine-stable over $K$, then $X$ is $\ell$-diophantine-stable over $K$. \end{lem}
\begin{proof} By Lemma \ref{preimp} we may assume that $f:X \to Y$ is a morphism of finite degree, say $d$, of smooth projective curves. Let $L/K$ be a cyclic extension of degree $\ell^n$ with $\ell > d$ such that $Y(L) = Y(K)$. We will show that $X(L) = X(K)$.
Consider a point $x\in X(L)$, and let $y := f(x) \in Y(L)=Y(K)$. Form the fiber, i.e., the zero-dimensional scheme $T:=f^{-1}(y)$. Then $x\in T(L)$. As in the proof of Lemma \ref{preimp}, the reduction of the scheme $T$ is a disjoint union of spectra of number fields of degree at most $d$ over $K$. Since $\ell > d$, we have $T(L)=T(K)$ and hence $ x\in X(K)$. \end{proof}
\begin{lem} \label{lemimp} Theorem \ref{static-avs} $\implies$ Theorem \ref{static-curves}. \end{lem}
\begin{proof} Let $\tilde{X}$ be the completion and normalization of $X$. Let $D$ be a $K$-rational divisor on $\tilde{X}$ of nonzero degree $d$, and define a nonconstant map over $K$ from $\tilde{X}$ to its jacobian $J(\tilde{X})$ by $x \mapsto D - d \cdot [x]$. Let $A$ be a simple abelian variety quotient of $J(\tilde{X})$ defined over $K$, and let $Y \subset A$ be the image of $\tilde{X}$. Theorem \ref{static-avs} applied to $A$ shows that there is a set $S$ of primes, with positive density, such that $A$ (and hence $Y$ as well) is $\ell$-diophantine-stable over $K$ for every $\ell\in S$. It follows from Lemmas \ref{preimp} and \ref{preimp2} that (for $\ell$ sufficiently large) $X$ is $\ell$-diophantine-stable over $K$ for every $\ell\in S$ as well, i.e., the conclusion of Theorem \ref{static-curves} holds for $X$. \end{proof}
\section{Infinite extensions} \label{uncpf}
In this section we will prove Corollaries \ref{uncount} and \ref{shlap}.
\begin{thm} \label{unco} Suppose $V$ is either a simple abelian variety over $K$ as in Theorem \ref{static-avs} or an irreducible curve over $K$ as in Theorem \ref{static-curves}. For every finite set $\Sigma$ of places of $K$, there are uncountably many pairwise non-isomorphic extensions $L$ of $K$ in $\bar{K}$ such that all places in $\Sigma$ split completely in $L$, and $V(L) = V(K)$. \end{thm}
\begin{proof} Let $$ {\mathcal N}:= (n_1, n_2, n_3, \dots) $$ be an arbitrary infinite sequence of positive integers. Using Theorem \ref{static-curves}, choose a prime $\ell_1$ and a Galois extension $K_1/K$, completely split at all $v\in\Sigma$, that is cyclic of degree $\ell_1^{n_1}$ and such that $V(K_1) = V(K)$. Continue inductively, using Theorem \ref{static-curves}, to choose an increasing sequence of primes $\ell_1 < \ell_2 < \ell_3 < \cdots$ and a tower of fields $K \subset K_1 \subset K_2 \subset K_3 \subset \cdots$ such that $K_i/K_{i-1}$ is cyclic of degree $\ell_i^{n_i}$, completely split at all places above $\Sigma$, and $V(K_i) = V(K)$ for every $i$. Let $K_{\mathcal N}:= \cup_{i\ge 1}K_i \subset \bar{K}$.
We have that $V(K_{\mathcal N}) = V(K)$ for every $\mathcal N$. We claim further that no matter what choices are made for the $\ell_i$, the construction $$ {\mathcal N} \mapsto K_{\mathcal N} $$ establishes an {\it injection} of the (uncountable) set of sequences ${\mathcal N}$ of positive integers into the set of subfields of $\bar{K}$. To see this, observe that by writing a subfield $F \subset \bar{K}$ as a union of finite extensions of $K$, one can define the degree $[F:K]$ as a formal product $\prod_p p^{a_p}$ over all primes $p$, with $a_p \le \infty$ (i.e., a supernatural number). Then $[K_{\mathcal N}:K] = \prod_i \ell_i^{n_i}$, and since the $\ell_i$ are increasing, this formal product determines the sequence $\mathcal N$. Therefore there are uncountably many such fields $K_{\mathcal N}$, and they are pairwise non-isomorphic. \end{proof}
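The injectivity step at the end of the proof is purely combinatorial: because the primes $\ell_i$ strictly increase, the exponent map of the supernatural number $\prod_i \ell_i^{n_i}$ recovers the sequence $\mathcal{N}$. A minimal sketch of this bookkeeping (plain Python, illustrative only; the starting prime and trial-division primality test are arbitrary choices, not part of the proof):

```python
from math import isqrt

def next_prime(n):
    # smallest prime > n, by trial division (fine for illustration)
    k = n + 1
    while any(k % p == 0 for p in range(2, isqrt(k) + 1)):
        k += 1
    return k

def degree_profile(ns, above=2):
    """Exponent map of the supernatural number prod l_i^{n_i},
    built from a strictly increasing sequence of primes l_1 < l_2 < ..."""
    profile, l = {}, above
    for n in ns:
        l = next_prime(l)
        profile[l] = n
    return profile

def recover_sequence(profile):
    # the l_i increase, so sorting the primes recovers (n_1, n_2, ...)
    return [profile[l] for l in sorted(profile)]

ns = [3, 1, 4, 1]
print(recover_sequence(degree_profile(ns)) == ns)   # True
```

Distinct (finite truncations of) sequences thus yield distinct degree profiles, mirroring the injection ${\mathcal N} \mapsto K_{\mathcal N}$ in the proof.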
Recall the statement of Corollary \ref{uncount}:
\begin{coruncount} Let $p\ge 23$ and $p\ne 37, 43, 67, 163$. There are uncountably many pairwise non-isomorphic subfields $L$ of $\bar{\mathbb{Q}}$ such that no elliptic curve defined over $L$ possesses an $L$-rational subgroup of order $p$. \end{coruncount}
\begin{proof}[Proof of Corollary \ref{uncount}] By \cite{M}, if $p$ is a prime satisfying the hypotheses of the corollary, then the modular curve $X:=X_0(p)$ defined over ${\mathbb{Q}}$ only has two rational points, namely the cusps $\{0\}$ and $\{\infty\}$, and the genus of $X$ is greater than zero. Since the jacobian of $X$ is semistable, its endomorphisms are all defined over $\mathbb{Q}$ (see \cite{R}). Thus the hypotheses of Theorem \ref{static-curves} hold with $K := \mathbb{Q}$, and Theorem \ref{unco} produces uncountably many subfields $L$ of $\bar{\mathbb{Q}}$ such that $X_0(p)$ has no non-cuspidal $L$-rational points. \end{proof}
\begin{cor} \label{pscor} For every prime $p$, there are uncountably many pairwise non-isomorphic fields $L \subset \bar{\mathbb{Q}}$ such that \begin{enumerate} \item $L$ is totally real, \item $p$ splits completely in $L$, \item there is an elliptic curve $E$ over $\mathbb{Q}$ such that $E(L)$ is a finitely generated infinite group. \end{enumerate} \end{cor}
\begin{proof} Fix any elliptic curve $E$ over $\mathbb{Q}$ with positive rank, and without complex multiplication. Apply Theorem \ref{unco} to $E$ with $\Sigma = \{\infty,p\}$. \end{proof}
Recall the statement of Corollary \ref{shlap}:
\begin{corshlap} For every prime $p$, there are uncountably many pairwise non-isomorphic totally real fields $L$ of algebraic numbers in $\mathbb{Q}_p$ over which the following two statements both hold: \begin{enumerate} \item There is a diophantine definition of ${\mathbb{Z}}$ in the ring of integers $\mathcal{O}_L$ of $L$. In particular, Hilbert's Tenth Problem has a negative answer for $\mathcal{O}_L$; i.e., there does not exist an algorithm to determine whether a polynomial (in many variables) with coefficients in $\mathcal{O}_L$ has a solution in $\mathcal{O}_L$.
\item There exists a first-order definition of the ring $\mathbb{Z}$ in $L$. The first-order theory for such fields $L$ is undecidable. \end{enumerate} \end{corshlap}
\begin{proof}[Proof of Corollary \ref{shlap}] The corollary follows directly from Corollary \ref{pscor} and results of Shlap\-entokh, as follows. Suppose $L$ is an infinite extension of $\mathbb{Q}$ satisfying Corollary \ref{pscor}(i,ii,iii). Assertion (i) follows from Corollary \ref{pscor}(i,iii) and \cite[Main Theorem A]{Shlapentokh}. Since $p$ splits completely in $L$, the prime $p$ is $q$-bounded (for every rational prime $q$) in the sense of \cite[Definition 4.2]{Shlap2}, so assertion (ii) follows from Corollary \ref{pscor}(ii,iii) and \cite[Theorem 8.5]{Shlap2}. \end{proof}
\part{Abelian varieties and diophantine stability} \label{part2}
\section{Strategy of the proof} \label{intro2}
\begin{notation} For sections \ref{twists} through \ref{hyps} fix a simple abelian variety $A$ defined over an arbitrary field $K$ (in practice $K$ will be a number field or one of its completions). Let $\mathcal{R}$ denote the center of $\mathrm{End}_K(A)$, and $\mathcal{M} := \mathcal{R} \otimes \mathbb{Q}$. Since $A$ is simple, $\mathcal{M}$ is a number field and $\mathcal{R}$ is an order in $\mathcal{M}$. Fix a rational prime $\ell$ that does not divide the discriminant of $\mathcal{R}$, and fix a prime $\lambda$ of $\mathcal{M}$ above $\ell$. In particular $\ell$ is unramified in $\mathcal{M}/\mathbb{Q}$. Denote by $\mathcal{M}_\lambda$ the completion of $\mathcal{M}$ at $\lambda$. \end{notation}
In the following sections we develop the machinery that we need to prove Theorem \ref{static-avs}. Here is a description of the strategy of the proof.
The standard method---perhaps the only fully proved method---of finding upper bounds for Mordell-Weil ranks is the {\em method of descent}, which seems to have been present already in some arguments of Fermat and has been elaborated and refined ever since. These days ``descent'' is done via computation of {\em Selmer groups}. To check for diophantine stability we will be considering the relative theory; that is, how things change when passing from our base field $K$ to $L$, a cyclic extension of prime power degree $\ell^n$ over $K$. The Galois group $\mathrm{Gal}(L/K)$ acts on the finite dimensional $\mathbb{Q}$-vector space $A(L)\otimes \mathbb{Q}$. Diophantine stability here requires that the action be trivial; i.e., it requires that for every Galois character $\chi_i:G_K \to {\mathbb{C}}^*$ of order $\ell^i$ ($0<i\le n$) that cuts out a nontrivial sub-extension $L_i/K$ of $L/K$, the $\chi_i$-component of the $\mathrm{Gal}(L/K)$-representation $A(L)\otimes {\mathbb{C}}$ vanishes. Since this representation is defined over ${\mathbb{Q}}$, if, for $i >0$, the $\chi_i$-part of $A(L)\otimes {\mathbb{C}}$ vanishes, then \begin{equation} \label{i1} A(L_i)\otimes {\mathbb{Q}} =A(L_{i-1})\otimes {\mathbb{Q}}. \end{equation}
Sections \ref{twists} and \ref{lflc} below prepare for a discussion of a certain relevant relative Selmer group, denoted $\mathrm{Sel}(L_i/K, A[\lambda])$ and defined in Section \ref{sgss}, which has the property that its vanishing implies \eqref{i1}. More precisely, Proposition \ref{tower} below gives: $$ \mathrm{rank}_\mathbb{Z} A(L) \le \mathrm{rank}_\mathbb{Z} A(K) +
\mathrm{rank}_{\mathbb{Z}}(\mathcal{R})\sum_{i=1}^n\phi(\ell^i)\cdot \dim_{\mathcal{R}/\lambda}\mathrm{Sel}(L_i/K, A[\lambda]). $$ The key to the technique we adopt is that for all cyclic $\ell^n$-extensions $L/K$ (for fixed $\ell$), the corresponding relative Selmer groups $\mathrm{Sel}(L/K, A[\lambda])$ are canonically `tied together' as finite dimensional subspaces of a single (infinite dimensional) $\mathcal{R}/\lambda$-vector space, namely $H^1(G_K, A[\lambda])$. The subspace $\mathrm{Sel}(L/K, A[\lambda])$ of $H^1(G_K, A[\lambda])$ is determined by specific local conditions at all places $v$ of $K$, these local conditions in turn being determined by $A/K_v$ and $L_v/K_v$ where $L_v$ is the completion of $L$ at any prime of $L$ above $v$. Even more specifically, $\mathrm{Sel}(L/K, A[\lambda])$ is determined by $A/K$ and the collection of local extensions $L_v/K_v$ for $v$ primes of $K$; moreover, an `artificial Selmer subgroup' of $H^1(G_K, A[\lambda])$ can be defined corresponding to any collection of local extensions $L_v/K_v$ even if this collection doesn't come from a global $L/K$.
Nevertheless, when passing from one global extension $L/K$ to another $L'/K$ of the same degree, one needs only change the local conditions that determine $\mathrm{Sel}(L/K, A[\lambda])$ at a finite set of primes $S$ to obtain the local conditions that determine $\mathrm{Sel}(L'/K, A[\lambda])$. Our aim, of course, is to find a large supply of extensions $L/K$ with $\mathrm{Sel}(L/K, A[\lambda]) = 0$. We do this by starting with an arbitrary $L/K$ and then constructing inductively appropriate finite sets $\Sigma$, with changes of local conditions at the primes in $\Sigma$ corresponding to extensions $L'/K$ such that $\mathrm{Sel}(L_i'/K, A[\lambda]) = 0$ for all $i$.
For this, it is essential that we are supplied with what we call {\em critical primes} and {\em silent primes}.
\noindent {\em Enough critical primes:} Critical primes are judiciously chosen primes $v$ for which a change of local condition at $v$ lowers the dimension of the corresponding Selmer group by $1$. They are primes $v$ of good reduction for $A$ and such that $\ell$ divides the order of the multiplicative group of the residue field of $v$ (no problem finding primes of this sort) and such that the action of the Frobenius element at $v$ on the vector space $A[\lambda]$ has a one-dimensional fixed space. Here---given some other hypotheses that will obtain when $\ell \gg 0$---we make use of global duality to guarantee that between the strictest local condition at $v$ and the most relaxed local condition at $v$, the corresponding Selmer groups differ in size by one dimension. Moreover, we engineer our choice of prime $v$ so that the localization map from $\mathrm{Sel}(L/K, A[\lambda])$ onto the one-dimensional Selmer local condition at $v$ is surjective. In this set-up, any change of local condition subgroup at $v$ will define an `artificial global Selmer group' of dimension $\dim_{\mathcal{R}/\lambda} \mathrm{Sel}(L/K, A[\lambda]) - 1$.
Iterating this process a finite number of times leads us to a modification of the initial local conditions at finitely many critical primes, such that the artificially constructed Selmer group is zero. This is proved in Proposition \ref{l7.13}.
\noindent {\em Enough silent primes:} For $\ell \gg 0$, silent primes are primes $v$ of good reduction for $A$ such that $\ell$ divides the order of the multiplicative group of the residue field of $v$, and such that the Frobenius element at $v$ has {\em no} nonzero fixed vectors in its action on $A[\lambda]$. For these primes the local cohomology group vanishes, so changing the local extension $L'_v/K_v$ at such primes doesn't change the local condition, hence doesn't change the Selmer group. By making use of silent primes, we can ensure that we have infinitely many collections of local data such that the corresponding (artificial) Selmer group is zero. In addition, Larsen in his appendix requires the existence of silent primes in order to prove the existence of critical primes.
In the description above, we chose a finite collection of local extensions $L'_v/K_v$ with specified properties for the construction of our Selmer group, one place $v$ at a time, so as to keep lowering the dimension. At the end of this process, we need to have a {\em global} extension $L'/K$ corresponding to our collection of local extensions $\{L'_v/K_v\}_v$. The existence of such an $L'$ is given by Lemma \ref{l7.15}.
In the appendix, Michael Larsen proves a general theorem (Theorem \ref{main}) guaranteeing the existence of sufficiently many critical and silent primes in the general context of Galois representations on $A[\lambda]$ for $A$ a simple abelian variety over a number field.
\section{Twists of abelian varieties} \label{twists}
Keep the notation from the beginning of \S\ref{intro2}. In this section we recall results from \cite{mrs} about twists of abelian varieties. We will use these twists in \S\ref{lflc} and \S\ref{sgss} to define the relative Selmer groups $\mathrm{Sel}(L/K,A[\lambda])$ described in \S\ref{intro2}.
Fix for this section a cyclic extension $L/K$ of degree $\ell^n$ with $n \ge 0$. Let $G := \mathrm{Gal}(L/K)$. If $n \ge 1$ (i.e., if $L \ne K$), let $L'$ be the (unique) subfield of $L$ of degree $\ell^{n-1}$ over $K$ and $G' := \mathrm{Gal}(L'/K) = G/G^{\ell^{n-1}}$.
\begin{defn} \label{twistdef} Define an ideal $\mathcal{I}_{L} \subset \mathcal{R}[G]$ by $$ \mathcal{I}_{L} := \begin{cases} \ker(\mathcal{R}[G] \longrightarrow \mathcal{R}[G']) & \text{if $n \ge 1$}, \\ \mathcal{R}[G] & \text{if $n = 0$}. \end{cases} $$ Then $\mathrm{rank}_\mathcal{R}(\mathcal{I}_{L}) = \varphi(\ell^n)$, where $\varphi$ is the Euler $\varphi$-function, and we define the {\em $L/K$-twist} $A_{L}$ of $A$ to be the abelian variety $\mathcal{I}_{L} \otimes A$ of dimension $\varphi(\ell^n)\dim(A)$ as defined in \cite[Definition 1.1]{mrs}. Concretely, if $n \ge 1$ then $$ A_{L} := \ker(\mathrm{Res}^L_K A \longrightarrow \mathrm{Res}^{L'}_K A). $$ Here $\mathrm{Res}^L_K A$ denotes the Weil restriction of scalars of $A$ from $L$ to $K$, and the map is obtained by identifying $\mathrm{Res}^L_K A = \mathrm{Res}^{L'}_K \mathrm{Res}^L_{L'} A$ and using the canonical map $\mathrm{Res}^L_{L'} A \to A$. If $n = 0$, we simply have $A_{K} = A$. \end{defn}
See \cite[\S3]{alc} or \cite{mrs} for a discussion of $A_{L}$ and its properties.
\begin{defn} \label{anotherdef} With notation as above, let $\mathbf{N}_{L/L'} := \sum_{\sigma\in\mathrm{Gal}(L/L')}\sigma \in \mathcal{R}[G]$ if $n \ge 1$ and $\mathbf{N}_{L/L'} = 0$ if $n = 0$, and define $$ R_{L} := \mathcal{R}[G]/\mathbf{N}_{L/L'}\mathcal{R}[G] $$ so $\mathrm{rank}_\mathcal{R} R_{L} = \varphi(\ell^n)$.
Fixing an identification $G \xrightarrow{\sim} \boldsymbol{\mu}_{\ell^n}$ of $G$ with the group of $\ell^n$-th roots of unity in $\bar{\mathcal{M}}$ induces an inclusion $$ R_{L} \hookrightarrow \mathcal{M}(\boldsymbol{\mu}_{\ell^n}) $$ that identifies $R_{L}$ with an order in $\mathcal{M}(\boldsymbol{\mu}_{\ell^n})$. Since $\ell$ is unramified in $\mathcal{M}/\mathbb{Q}$ we have that $\lambda$ is totally ramified in $\mathcal{M}(\boldsymbol{\mu}_{\ell^n})/\mathcal{M}$, and we let $\lambda_{L}$ denote the (unique) prime of $R_{L}$ above $\lambda$. \end{defn}
Note that $\mathcal{I}_{L}$ is the annihilator of $\mathbf{N}_{L/L'}$ in $\mathcal{R}[G]$, so $\mathcal{I}_{L}$ is an $R_{L}$-module. The following proposition summarizes some of the properties of $A_{L}$ proved in \cite{mrs} that we will need.
\begin{prop} \label{ros} \begin{enumerate} \item The natural action of $G$ on $\mathrm{Res}^L_K(A)$ induces an inclusion $R_{L} \subset \mathrm{End}_K(A_{L})$. \item For every commutative $K$-algebra $D$, and every Galois extension $F$ of $K$ containing $L$, there is a natural $R_{L}[\mathrm{Gal}(F/K)]$-equivariant isomorphism $$ A_{L}(D \otimes_K F) \cong \mathcal{I}_{L} \otimes_\mathcal{R} A(D \otimes_K F), $$ where $R_{L}$ acts on $A_{L}$ via the inclusion of (i) and on $\mathcal{I}_{L} \otimes A(D \otimes_K F)$ by multiplication on $\mathcal{I}_{L}$, and $\gamma \in \mathrm{Gal}(L/K)$ acts on $\mathcal{I}_{L} \otimes A(D \otimes_K F)$ as $\gamma^{-1} \otimes (1 \otimes\gamma)$. \item For every ideal $\mathfrak{b}$ of $\mathcal{R}$, the isomorphism of (ii) induces an isomorphism of $R_{L}[G_K]$-modules $$ A_{L}[\mathfrak{b}] \cong \mathcal{I}_{L} \otimes_\mathcal{R} A[\mathfrak{b}]. $$ \item For every commutative $K$-algebra $D$, the isomorphism of (ii) induces an isomorphism of $\mathcal{R}$-modules $$ A_{L}(D) \cong \mathcal{I}_{L} \otimes_{\mathcal{R}[G]} A(D \otimes_K L) $$ where $\gamma \in \mathrm{Gal}(L/K)$ acts on $D \otimes_K L$ as $1 \otimes \gamma$. \end{enumerate} \end{prop}
\begin{proof} The first assertion is \cite[Theorem 5.5]{mrs}, and the second is \cite[Lemma 1.3]{mrs}. Then (iii) follows from (ii) by taking $D := K$ and $F := \bar{K}$ (see \cite[Theorem 2.2]{mrs}), and (iv) follows from (ii) by setting $F := L$ and taking $\mathrm{Gal}(L/K)$ invariants of both sides (see \cite[Theorem 1.4]{mrs}). \end{proof}
\begin{cor} \label{roscor} The isomorphism of Proposition \ref{ros}(iii) induces an isomorphism of $\mathcal{R}[G_K]$-modules $$ A_{L}[\lambda_{L}] \cong A[\lambda]. $$ \end{cor}
\begin{proof} Fix a generator $\gamma$ of $G$, and let $\bar\gamma$ denote its projection to $R_{L}$. Then $\lambda_{L}$ is generated by $\lambda$ and $\bar\gamma-1$, so Proposition \ref{ros}(iii) shows that $$ A_{L}[\lambda_{L}] = A_{L}[\lambda][\bar\gamma - 1] = (\mathcal{I}_{L} \otimes A[\lambda])[\bar\gamma-1]. $$ If $L = K$ there is nothing to prove. If $L \ne K$ then $\mathcal{I}_{L}$ is defined by the exact sequence \begin{equation} \label{e5.5} 0 \longrightarrow \mathcal{I}_{L} \longrightarrow \mathcal{R}[G] \longrightarrow \mathcal{R}[G'] \longrightarrow 0. \end{equation} Tensoring the free $\mathcal{R}$-modules of \eqref{e5.5} with $A[\lambda]$ and taking the kernel of $\gamma-1$ gives \begin{equation} \label{e5.6} 0 \longrightarrow A_{L}[\lambda_{L}] \longrightarrow (\mathcal{R}[G]\otimes A[\lambda])[\gamma-1] \longrightarrow \mathcal{R}[G']\otimes A[\lambda]. \end{equation} Explicitly, $$ \textstyle (\mathcal{R}[G]\otimes A[\lambda])[\gamma-1] = \{\sum_{g \in G} g \otimes a : a \in A[\lambda]\} \cong A[\lambda], $$ and this is in the kernel of the right-hand map of \eqref{e5.6}, so the corollary follows. \end{proof}
\section{Local fields and local conditions} \label{lflc}
In this section we use the twists $A_L$ of \S\ref{twists} to define the local conditions that will be used in \S\ref{sgss} to define our relative Selmer groups $\mathrm{Sel}(L/K,A[\lambda])$.
Let $A$, $\mathcal{R}$, $\ell$, and $\lambda$ be as in \S\ref{twists}, and keep the rest of the notation of \S\ref{intro2} and \S\ref{twists} as well. For this section we restrict to the case where $K$ is a local field of characteristic zero, i.e., a finite extension of some $\mathbb{Q}_p$ or of $\mathbb{R}$. Fix for this section a cyclic extension $L/K$ of $\ell$-power degree, and let $G := \mathrm{Gal}(L/K)$.
\begin{defn} \label{localdef} Define $\mathcal{H}_{\lambda}(L/K) \subset H^1(K,A[\lambda])$ to be the image of the composition $$ A_{L}(K)/\lambda_{L} A_{L}(K) \hookrightarrow H^1(K,A_{L}[\lambda_{L}]) \cong H^1(K,A[\lambda]) $$ where $\lambda_{L}$ is as in Definition \ref{anotherdef}, the first map is the Kummer map, and the second map is the isomorphism of Corollary \ref{roscor}. (This Kummer map depends on the choice of a generator of $\lambda_{L}/\lambda_{L}^2$, but its image is independent of this choice.) When $L= K$, $\mathcal{H}_{\lambda}(K/K)$ is just the image of the Kummer map $$ A(K)/\lambda A(K) \hookrightarrow H^1(K,A[\lambda]) $$ and we will denote it simply by $\mathcal{H}_{\lambda}(K)$. We suppress the dependence on $A$ from the notation when possible, since $A$ is fixed throughout this section. \end{defn}
If $K$ is nonarchimedean of residue characteristic different from $\ell$, and $A/K$ has good reduction, we define $$ \HS{\ur}(K,A[\lambda]) := H^1(K^\mathrm{ur}/K,A[\lambda]), $$ the unramified subgroup of $H^1(K,A[\lambda])$.
\begin{lem} \label{xlem} Suppose $K$ is nonarchimedean of residue characteristic different from $\ell$. \begin{enumerate} \item We have $\dim_{\F_\ell}(\mathcal{H}_{\lambda}(L/K)) = \dim_{\F_\ell}A(K)[\lambda]$. \item If $A$ has good reduction and $\phi \in G_K$ is an automorphism that restricts to Frobenius in $\mathrm{Gal}(K^\mathrm{ur}/K)$, then $$ \dim_{\F_\ell}(\mathcal{H}_{\lambda}(L/K)) = \dim_{\F_\ell}A[\lambda]/(\phi-1)A[\lambda]. $$ \end{enumerate} \end{lem}
\begin{proof} Suppose $K$ is nonarchimedean of residue characteristic different from $\ell$. Then $A_L(K)$ has a subgroup of finite index that is $\ell$-divisible, so $$ A_L(K)/\lambda_L A_L(K) \cong A_L(K)_{\mathrm{tors}}/\lambda_L A_L(K)_{\mathrm{tors}}
\cong A_L(K)[\lambda_L] \cong A(K)[\lambda] $$ where the second isomorphism is non-canonical and the third is Corollary \ref{roscor}. Since $\mathcal{H}_{\lambda}(L/K) \cong A_L(K)/\lambda_L A_L(K)$ by definition, this proves (i).
If further $A$ has good reduction then $A[\lambda] \subset A(K^\mathrm{ur})$. If $\phi$ is a Frobenius automorphism in $\mathrm{Gal}(K^\mathrm{ur}/K)$, then $A(K)[\lambda] = A[\lambda]^{\phi=1}$, so $$ \dim_{\F_\ell}A(K)[\lambda] = \dim_{\F_\ell}A[\lambda]^{\phi=1} = \dim_{\F_\ell}(A[\lambda]/(\phi-1)A[\lambda]). $$ Now (ii) follows from (i). \end{proof}
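\begin{rem} The dimension count at the end of the proof is simply rank--nullity: $\phi-1$ is an $\F_\ell$-linear endomorphism of the finite-dimensional space $A[\lambda]$, so $$ \dim_{\F_\ell}A[\lambda]^{\phi=1} = \dim_{\F_\ell}\ker(\phi-1) = \dim_{\F_\ell}A[\lambda] - \dim_{\F_\ell}(\phi-1)A[\lambda] = \dim_{\F_\ell}\bigl(A[\lambda]/(\phi-1)A[\lambda]\bigr). $$ \end{rem}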
\begin{lem} \label{urrem} Suppose $K$ is nonarchimedean of residue characteristic different from $\ell$, $A/K$ has good reduction, and $L/K$ is unramified. \begin{enumerate} \item If $\phi \in G_K$ is an automorphism that restricts to Frobenius in $\mathrm{Gal}(K^\mathrm{ur}/K)$, then evaluation of cocycles at $\phi$ induces an isomorphism $$ \HS{\ur}(K,A[\lambda]) \xrightarrow{\sim} A[\lambda]/(\phi-1)A[\lambda]. $$ \item The twist $A_{L}$ has good reduction, and $\mathcal{H}_{\lambda}(L/K) = \HS{\ur}(K,A[\lambda])$. In particular under these assumptions $\mathcal{H}_{\lambda}(L/K)$ is independent of $L$. \end{enumerate} \end{lem}
\begin{proof} This is well-known. For (i), see for example \cite[Lemma 1.3.2(i)]{eulersys}. That $A_{L}$ has good reduction when $L/K$ is unramified follows from the criterion of N{\'e}ron-Ogg-Shafarevich and Proposition \ref{ros}(iii). Since $A_{L}$ has good reduction and $L/K$ is unramified, we have $\mathcal{H}_\lambda(L/K) \subset \HS{\ur}(K,A[\lambda])$, and further $$ \dim_{\F_\ell}\mathcal{H}_\lambda(L/K) = \dim_{\F_\ell}(A[\lambda]/(\phi-1)A[\lambda])
= \dim_{\F_\ell}\HS{\ur}(K,A[\lambda]) $$ using Lemma \ref{xlem}(ii) for the first equality, and (i) for the second. This proves (ii). \end{proof}
\begin{lem} \label{ros2} Suppose $K$ is nonarchimedean of residue characteristic different from $\ell$, $A/K$ has good reduction, and $L/K$ is nontrivial and totally ramified. Let $L_1$ be the unique cyclic extension of $K$ of degree $\ell$ in $L$. Then the map $$ A_{L}(K)/\lambda_{L} A_{L}(K) \to A_{L}(L_1)/\lambda_{L} A_{L}(L_1) $$ induced by the inclusion $A_{L}(K) \subset A_{L}(L_1)$ is the zero map. \end{lem}
\begin{proof} Since $A/K$ has good reduction and the residue characteristic is different from $\ell$, we have that $K(A[\ell^\infty])/K$ is unramified. Since $L/K$ is totally ramified, $L \cap K(A[\ell^\infty]) = K$. Hence $A(L)[\ell^\infty] = A(K)[\ell^\infty]$, so by Proposition \ref{ros}(iii), \begin{multline} \label{invptor0} A_{L}(K)[\ell^\infty] = (\mathcal{I}_{L} \otimes A[\ell^\infty])^{G_K}
= ((\mathcal{I}_{L} \otimes A[\ell^\infty])^{G_L})^{G} \\
= (\mathcal{I}_{L} \otimes (A(L)[\ell^\infty]))^G
= (\mathcal{I}_{L} \otimes (A(K)[\ell^\infty]))^G. \end{multline} As in the proof of Corollary \ref{roscor}, tensoring the exact sequence \eqref{e5.5} with $A(K)[\ell^\infty]$ and taking $G$ invariants gives an exact sequence $$ 0 \longrightarrow (\mathcal{I}_{L} \otimes A(K)[\ell^\infty])^G \longrightarrow (\mathcal{R}[G] \otimes A(K)[\ell^\infty])^G
\longrightarrow (\mathcal{R}[G'] \otimes A(K)[\ell^\infty])^G. $$ Since $G$ acts trivially on $A(K)[\ell^\infty]$, we have $$ (\mathcal{R}[G] \otimes A(K)[\ell^\infty])^G = \{\textstyle\sum_{g\in G} g \otimes a : a \in A(K)[\ell^\infty]\}. $$ The map to $\mathcal{R}[G'] \otimes A(K)[\ell^\infty]$ sends $\sum_{g\in G} g \otimes a$ to $\ell\sum_{g \in G'} g \otimes a$, which is zero if and only if $a \in A(K)[\ell]$. Therefore $$ (\mathcal{I}_{L} \otimes (A(K)[\ell^\infty]))^G = \{\textstyle\sum_{g \in G} g \otimes a : a \in A(K)[\ell]\}, $$ and combining this with \eqref{invptor0} gives \begin{equation} \label{invptor} A_{L}(K)[\ell^\infty] = \{\textstyle\sum_{g \in G} g \otimes a : a \in A(K)[\ell]\}. \end{equation} An identical calculation shows that \begin{equation} \label{invptor3} A_{L}(L_1)[\ell] = \{\textstyle\sum_{i = 0}^{\ell^n-1} (\gamma^i \otimes a_i) :
\text{$a_i \in A(K)[\ell]$ and $a_i = a_j$ if $i \equiv j \hskip -5pt\pmod{\ell}$}\}. \end{equation} If $a \in A(K)[\ell]$, then using the identification \eqref{invptor3} we have $\sum_{i=0}^{\ell^n-1}(\gamma^i \otimes ia) \in A_{L}(L_1)[\ell]$, and $$ (\gamma-1)\sum_{i=0}^{\ell^n-1}(\gamma^i \otimes ia) = -\sum_{i=0}^{\ell^n-1}\gamma^i \otimes a. $$ Taken together with \eqref{invptor}, this proves that $$ A_{L}(K)[\ell^\infty] \subset (\gamma-1) A_{L}(L_1) \subset \lambda_L A_{L}(L_1). $$ Now the lemma follows, because the natural map $$ A_{L}(K)[\ell^\infty] \twoheadrightarrow A_{L}(K)/\lambda_{L} A_{L}(K) $$ is surjective (since the residue characteristic of $K$ is different from $\ell$). \end{proof}
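\begin{rem} As a sanity check of the key identity in the proof, take $\ell = 3$ and $n = 1$, so that $G = \{1,\gamma,\gamma^2\}$. For $a \in A(K)[3]$ we have $2a = -a$, and, using $\gamma^3 = 1$, $$ (\gamma-1)\bigl(\gamma \otimes a + \gamma^2 \otimes 2a\bigr) = \gamma^2 \otimes a + 1 \otimes 2a - \gamma \otimes a - \gamma^2 \otimes 2a = -(1+\gamma+\gamma^2)\otimes a, $$ as the displayed formula predicts. \end{rem}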
\begin{prop} \label{gl} Suppose $A/K$ has good reduction, $K$ is nonarchimedean of resi\-due characteristic different from $\ell$, and $L/K$ is nontrivial and totally ramified. \begin{enumerate} \item If $K \subsetneq L' \subseteq L$ then $\mathcal{H}_{\lambda}(L'/K) = \mathcal{H}_{\lambda}(L/K)$. \item $\HS{\ur}(K,A[\lambda]) \cap \mathcal{H}_{\lambda}(L/K) = 0$. \end{enumerate} \end{prop}
\begin{proof} Let $L_1$ be the cyclic extension of $K$ of degree $\ell$ in $L$. In the commutative diagram $$ \xymatrix@R=15pt{ A_{L}(L_1)/\lambda_{L} A_{L}(L_1) \ar@{^(->}[r]
& H^1(L_1,A_{L}[\lambda_{L}]) \ar^-{\sim}[r] & H^1(L_1,A[\lambda]) \\ A_{L}(K)/\lambda_{L} A_{L}(K) \ar[u] \ar@{^(->}[r]
& H^1(K,A_{L}[\lambda_{L}]) \ar[u] \ar^-{\sim}[r] & H^1(K,A[\lambda]) \ar[u] } $$ the left-hand vertical map is zero by Lemma \ref{ros2}, so by definition of $\mathcal{H}_{\lambda}(L/K)$ we have \begin{equation} \label{hk} \mathcal{H}_{\lambda}(L/K) \subset \ker(H^1(K,A[\lambda]) \to H^1(L_1,A[\lambda])). \end{equation} Since the inertia group acts trivially on $A[\lambda]$, we have $A[\lambda]^{G_L} = A[\lambda]^{G_{L_1}} = A[\lambda]^{G_{K}}$, so \begin{multline} \label{kd} \ker(H^1(K,A[\lambda]) \to H^1(L_1,A[\lambda])) = H^1(L_1/K,A[\lambda]^{G_{L_1}}) \\
= H^1(L_1/K,A[\lambda]^{G_{K}})
= \mathrm{Hom}(\mathrm{Gal}(L_1/K),A(K)[\lambda]). \end{multline} We have (using Lemma \ref{xlem}(i) for the first equality) \begin{equation} \label{kd2} \dim_{\F_\ell}\mathcal{H}_{\lambda}(L/K) = \dim_{\F_\ell}A(K)[\lambda]
= \dim_{\F_\ell}\mathrm{Hom}(\mathrm{Gal}(L_1/K),A(K)[\lambda]). \end{equation} Combining \eqref{hk}, \eqref{kd}, and \eqref{kd2} shows that the inclusion \eqref{hk} must be an equality. This proves (i), because the kernel in \eqref{hk} depends only on $L_1$. Assertion (ii) follows from \eqref{hk} and the fact that (since $L_1/K$ is totally ramified) the restriction map $$ \HS{\ur}(K,A[\lambda]) \hookrightarrow \HS{\ur}(L_1,A[\lambda]) \subset H^1(L_1,A[\lambda]) $$ is injective. \end{proof}
\begin{rem} The proof of Proposition \ref{gl} shows that if $A$ has good reduction, and $L/K$ is a ramified cyclic extension of degree $\ell$, then $\mathcal{H}_{\lambda}(L/K)$ is the ``$L$-transverse'' subgroup of $H^1(K,A[\lambda])$, as defined in \cite[Definition 1.1.6]{kolysys}. \end{rem}
\section{Selmer groups and Selmer structures} \label{sgss}
In this section we use the definitions of \S\ref{twists} and \S\ref{lflc} to define the relative Selmer groups $\mathrm{Sel}(L/K,A[\lambda])$ described in \S\ref{intro2}.
Keep the notation of the previous sections, except that from now on $K$ is a number field. If $v$ is a place of $K$ we will denote by $L_v$ the completion of $L$ at some fixed place above $v$. We will write $A_L$, $R_L$, $\mathcal{I}_L$, and $\lambda_L$ for the objects defined in \S\ref{twists} using the extension $L/K$, and $A_{L_v}$, $R_{L_v}$, $\mathcal{I}_{L_v}$, and $\lambda_{L_v}$ for the ones corresponding to the extension $L_v/K_v$.
\begin{defn} \label{globalsd} If $L/K$ is a cyclic extension of $\ell$-power degree, we define the $\lambda$-Selmer group $\mathrm{Sel}(L/K,A[\lambda]) \subset H^1(K,A[\lambda])$ by $$ \mathrm{Sel}(L/K,A[\lambda]) := \{c \in H^1(K,A[\lambda]) : \text{$\mathrm{loc}_v(c) \in \mathcal{H}_{\lambda}(L_v/K_v)$ for every $v$}\}. $$ Here $\mathrm{loc}_v : H^1(K,A[\lambda]) \to H^1(K_v,A[\lambda])$ is the localization map, $K_v$ is the completion of $K$ at $v$, and $L_v$ is the completion of $L$ at any place above $v$. When $L = K$ this is the standard $\lambda$-Selmer group of $A/K$, and we denote it by $\mathrm{Sel}(K,A[\lambda])$. \end{defn}
\begin{rem} The Selmer group $\mathrm{Sel}(L/K,A[\lambda])$ defined above consists of all classes $c \in H^1(K,A[\lambda])$ such that for every $v$, the localization $\mathrm{loc}_v(c)$ lies in the image of the composition of the upper two maps in the diagram \begin{equation} \label{classsel} \raisebox{35pt}{ \xymatrix@R=15pt{ A_{L_v}(K_v)/\lambda_{L_v}A_{L_v}(K_v) \ar@{^(->}[r] & H^1(K_v,A_{L_v}[\lambda_{L_v}])
\ar_{\cong}[d] \\ & H^1(K_v,A[\lambda]) \\ A_{L}(K_v)/\lambda_{L}A_{L}(K_v) \ar@{^(->}[r] & H^1(K_v,A_{L}[\lambda_{L}])
\ar^{\cong}[u] }} \end{equation} On the other hand, the classical $\lambda_{L}$-Selmer group of $A_{L}$ is the set of all $c$ in $H^1(K,A[\lambda])$ such that for every $v$, $\mathrm{loc}_v(c)$ is in the image of the composition of the {\em lower} two maps. Our methods apply directly to the Selmer groups $\mathrm{Sel}(L/K,A[\lambda])$, but for our applications we are interested in the classical Selmer group. The following lemma shows that these two definitions give the same Selmer groups. \end{rem}
\begin{lem} \label{scl} The isomorphism of Proposition \ref{ros}(iii) identifies $\mathrm{Sel}(L/K,A[\lambda])$ with the classical $\lambda_{L}$-Selmer group of $A_{L}$. \end{lem}
\begin{proof} We will show that for every place $v$, the image of the composition of the upper maps in \eqref{classsel} coincides with the image of the composition of the lower maps, and then the lemma follows from the definitions of the respective Selmer groups. We will do this by constructing a vertical isomorphism on the left-hand side of \eqref{classsel} that makes the diagram commute.
Let $G := \mathrm{Gal}(L/K)$ and $G_v := \mathrm{Gal}(L_v/K_v)$. The choice of place of $L$ above $v$ induces an isomorphism \begin{equation} \label{is2} \mathcal{R}[G] \otimes_{\mathcal{R}[G_v]} A(L_v) \xrightarrow{\sim} A(K_v \otimes_K L). \end{equation} Using Proposition \ref{ros}(iv) and \eqref{is2} we have \begin{multline} \label{is3} A_{L}(K_v) = \mathcal{I}_L \otimes_{\mathcal{R}[G]} A(K_v \otimes_K L) \\
= \mathcal{I}_L \otimes_{\mathcal{R}[G]} (\mathcal{R}[G] \otimes_{\mathcal{R}[G_v]} A(L_v))
= \mathcal{I}_L \otimes_{\mathcal{R}[G_v]} A(L_v). \end{multline}
Suppose first that $L_v = K_v$, so $A_{L_v} = A$ in \eqref{classsel}. Tensoring \eqref{is3} with $R_L/\lambda_L$ gives $$ A_{L}(K_v)/\lambda_{L}A_{L}(K_v) \cong A(K_v) \otimes_\mathcal{R} \mathcal{I}_L/\lambda_L\mathcal{I}_L
\cong A(K_v)/\lambda A(K_v) $$ and inserting this isomorphism into \eqref{classsel} gives a commutative diagram. This proves the lemma in this case.
Now suppose $L_v \ne K_v$. The inclusion $\mathcal{R}[G_v] \hookrightarrow \mathcal{R}[G]$ induces an isomorphism \begin{equation} \label{is1} \mathcal{R}[G] \otimes_{\mathcal{R}[G_v]} \mathcal{I}_{L_v} \xrightarrow{\sim} \mathcal{I}_{L} \end{equation} (using here that $L_v \ne K_v$). Using Proposition \ref{ros}(iv) (with $K_v$ in place of $K$, and $D = K_v$) and \eqref{is1} we have \begin{multline*} \mathcal{I}_L \otimes_{\mathcal{R}[G_v]} A(L_v)
= (\mathcal{R}[G] \otimes_{\mathcal{R}[G_v]} \mathcal{I}_{L_v}) \otimes_{\mathcal{R}[G_v]} A(L_v) \\
= \mathcal{R}[G] \otimes_{\mathcal{R}[G_v]} A_{L_v}(K_v)
= R_L \otimes_{R_{L_v}} A_{L_v}(K_v) \end{multline*} since $\mathcal{R}[G_v]$ acts on $A_{L_v}$ through $R_{L_v}$. Combining this with \eqref{is3} gives the first equality of \begin{multline*} A_{L}(K_v)/\lambda_{L}A_{L}(K_v) = A_{L_v}(K_v) \otimes_{R_{L_v}} (R_{L}/\lambda_{L}) \\
= A_{L_v}(K_v) \otimes_{R_{L_v}} (R_{L_v}/\lambda_{L_v})
= A_{L_v}(K_v)/\lambda_{L_v}A_{L_v}(K_v) \end{multline*} and the second follows from the natural isomorphism $R_{L_v}/\lambda_{L_v} \cong R_L/\lambda_L$ (again using that $L_v \ne K_v$). As in the previous case, inserting this isomorphism into \eqref{classsel} gives a commutative diagram and completes the proof of the lemma. \end{proof}
\begin{prop} \label{tower} Suppose $L/K$ is a cyclic extension of degree $\ell^n$. Then $$ \mathrm{rank}_\mathbb{Z}(A(L)) \le \mathrm{rank}_\mathbb{Z}(A(K)) + \mathrm{rank}_\mathbb{Z}(\mathcal{R}) \;
\sum_{i=1}^{n} \varphi(\ell^i)\dim_{\mathcal{R}/\lambda}(\mathrm{Sel}(L_i/K,A[\lambda])) $$ where $L_i$ is the extension of $K$ of degree $\ell^i$ in $L$. \end{prop}
\begin{proof} There is an isogeny $$ \bigoplus_{i=0}^n A_{L_i} \longrightarrow \mathrm{Res}^L_K A $$ defined over $K$ (see for example \cite[Theorem 3.5]{alc} or \cite[Theorem 5.2]{mrs}). Since $A_{L_0} = A$ and $(\mathrm{Res}^{L}_K A)(K) = A(L)$, taking $K$-points yields \begin{equation} \label{ri} \mathrm{rank}_\mathbb{Z} A(L) = \mathrm{rank}_\mathbb{Z} A(K) + \sum_{i=1}^{n} \mathrm{rank}_\mathbb{Z} A_{L_i}(K). \end{equation}
For every $i$, by Lemma \ref{scl} the Kummer map gives an injection $$ A_{L_i}(K) \otimes (R_{L_i}/\lambda_{L_i}) \hookrightarrow \mathrm{Sel}(L_i/K,A[\lambda]). $$ For every $i$ the natural map $\mathcal{R} \to R_{L_i}$ induces an isomorphism $\mathcal{R}/\lambda \to R_{L_i}/\lambda_{L_i}$, and $\mathrm{rank}_\mathbb{Z}(R_{L_i}) = \varphi(\ell^i) \, \mathrm{rank}_\mathbb{Z}(\mathcal{R})$, so \begin{align*} \mathrm{rank}_\mathbb{Z} A_{L_i}(K) &= \mathrm{rank}_\mathbb{Z}(R_{L_i}) \; \mathrm{rank}_{R_{L_i}}(A_{L_i}(K)) \\
&\le \varphi(\ell^i) \; \mathrm{rank}_\mathbb{Z}(\mathcal{R}) \; \dim_{\mathcal{R}/\lambda}(A_{L_i}(K) \otimes (R_{L_i}/\lambda_{L_i})) \\
&\le \varphi(\ell^i) \; \mathrm{rank}_\mathbb{Z}(\mathcal{R}) \; \dim_{\mathcal{R}/\lambda}(\mathrm{Sel}(L_i/K,A[\lambda])). \end{align*} Combined with \eqref{ri} this proves the inequality of the proposition. \end{proof}
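\begin{rem} To illustrate the shape of the bound, take $\ell = 3$ and $n = 2$, so that $\varphi(3) = 2$ and $\varphi(9) = 6$; then with $L_2 = L$ the proposition reads $$ \mathrm{rank}_\mathbb{Z}(A(L)) \le \mathrm{rank}_\mathbb{Z}(A(K)) + \mathrm{rank}_\mathbb{Z}(\mathcal{R})\bigl(2\dim_{\mathcal{R}/\lambda}\mathrm{Sel}(L_1/K,A[\lambda]) + 6\dim_{\mathcal{R}/\lambda}\mathrm{Sel}(L/K,A[\lambda])\bigr). $$ \end{rem}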
\section{Twisting to decrease the Selmer rank} \label{decrease} \label{pav}
In this section we carry out the main argument of the proof of Theorem \ref{static-avs}. Namely, we show how to choose good local conditions on the fields $L$ so that the corresponding relative Selmer groups $\mathrm{Sel}(L/K,A[\lambda])$ vanish.
Let $A/K$, $\ell^n$, and $\lambda$ be as in the previous sections. Let $\mathcal{E} := \mathrm{End}_K(A)$, and recall that $\mathcal{R}$ is the center of $\mathcal{E}$. We will abbreviate $\mathbb{F}_\lambda := \mathcal{R}/\lambda$ and $\mathcal{E}/\lambda := \mathcal{E} \otimes_\mathcal{R} \mathbb{F}_\lambda$, so in particular $A[\lambda]$ is an $\mathcal{E}/\lambda$-module. Fix a polarization of $A$, and let $\alpha \mapsto \alpha^\dagger$ denote the Rosati involution of $\mathcal{E}$ corresponding to this polarization.
\begin{defn} The ring $M_d(\mathbb{F}_\lambda)$ of $d \times d$ matrices with entries in $\mathbb{F}_\lambda$ has a unique (up to isomorphism) simple left module, namely $\mathbb{F}_\lambda^d$ with the natural action. If $R$ is any ring isomorphic to $M_d(\mathbb{F}_\lambda)$, $W$ is a simple left $R$-module, and $V$ is a finitely generated left $R$-module, then $V \cong W^r$ for some $r$ and we call $r$ the {\em length} of $V$. In particular, taking $R = \mathcal{E}/\lambda$ (which is isomorphic to $M_d(\mathbb{F}_\lambda)$ by assumption \eqref{p7} below), $$ \mathrm{length}_{\mathcal{E}/\lambda}V = \frac{1}{d} \dim_{\mathbb{F}_\lambda}V. $$ \end{defn}
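For example, if $d = 2$, so that the ring in question is isomorphic to $M_2(\mathbb{F}_\lambda)$ with simple module $W = \mathbb{F}_\lambda^2$, then a module $V$ of $\mathbb{F}_\lambda$-dimension $6$ satisfies $V \cong W^3$ and has length $3 = \frac{1}{2}\dim_{\mathbb{F}_\lambda}V$.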
For this section we assume in addition that: \begin{align} \label{p5}\tag{H.1} &\text{$\ell \ge 3$ and $\ell$ does not divide the degree of our fixed polarization,} \\ \label{p7}\tag{H.2} &\text{there are isomorphisms $\mathcal{E} \otimes_\mathcal{R} \mathcal{M}_\lambda \cong M_d(\mathcal{M}_\lambda)$, $\mathcal{E}/\lambda \cong M_d(\mathbb{F}_\lambda)$ for some $d$,} \\ \label{p6}\tag{H.3} &\text{$A[\lambda]$ and $A[\lambda^\dagger]$ are irreducible $\mathcal{E}[G_K]$-modules,}\\ \label{p4}\tag{H.4} &\text{$H^1(K(A[\lambda])/K,A[\lambda]) = 0$ and $H^1(K(A[\lambda^\dagger])/K,A[\lambda^\dagger]) = 0$,}\\ \label{p3}\tag{H.5} &\text{there is no abelian extension of degree $\ell$ of $K(\boldsymbol{\mu}_\ell)$ in $K(\boldsymbol{\mu}_\ell,A[\lambda])$,}\\ \label{p1}\tag{H.6} &\text{there is a $\tau_0 \in G_{K(\boldsymbol{\mu}_{\ell})}$ such that $A[\lambda]/(\tau_0-1)A[\lambda] = 0$,} \\ \label{p2}\tag{H.7} &\text{there is a $\tau_1 \in G_{K(\boldsymbol{\mu}_{\ell})}$ such that $\mathrm{length}_{\mathcal{E}/\lambda}(A[\lambda]/(\tau_1-1)A[\lambda]) = 1$}. \end{align}
We will show in \S\ref{hyps} below, using results of Serre, that almost all $\ell$ satisfy \eqref{p5} through \eqref{p3}. If $K$ is sufficiently large, then it follows from results of Larsen in the Appendix that \eqref{p1} and \eqref{p2} hold for a set of primes $\ell$ of positive density.
Suppose $U$ is a finitely generated subgroup of $K^\times$, and consider the following diagram:
\begin{equation} \label{Udiag} \raisebox{70pt}{ \xymatrix@C=45pt@R=30pt@!C0{ & K(\boldsymbol{\mu}_{\ell^n},U^{1/\ell^n},A[\lambda]) \\ K(\boldsymbol{\mu}_{\ell^n},U^{1/\ell^n}) \ar@{-}[ur] \\ &&& K(\boldsymbol{\mu}_{\ell},A[\lambda]) \ar@{-}[uull]\\ &&K(\boldsymbol{\mu}_\ell) \ar@{-}[ur] \ar@{-}[uull] && K(A[\lambda]) \ar@{-}[ul]\\ &&&K \ar@{-}[ur]\ar@{-}[ul] }} \end{equation}
\begin{lem} \label{6.5a} If $U$ is a finitely generated subgroup of $K^\times$, then in the diagram \eqref{Udiag} we have $$ K(\boldsymbol{\mu}_{\ell^n},U^{1/\ell^n}) \cap K(\boldsymbol{\mu}_{\ell},A[\lambda]) = K(\boldsymbol{\mu}_{\ell}). $$ \end{lem}
\begin{proof} Let $F := K(\boldsymbol{\mu}_{\ell^n},U^{1/\ell^n}) \cap K(\boldsymbol{\mu}_{\ell},A[\lambda])$. Then $F/K(\boldsymbol{\mu}_\ell)$ is a Galois $\ell$-exten\-sion, so if $F \ne K(\boldsymbol{\mu}_\ell)$ then $F$ contains a cyclic extension $F'/K(\boldsymbol{\mu}_\ell)$ of degree $\ell$. But since $F' \subset K(\boldsymbol{\mu}_{\ell},A[\lambda])$, this is impossible by \eqref{p3}. This proves the lemma. \end{proof}
\begin{lem} \label{6.5b} If $U$ is a finitely generated subgroup of $K^\times$, then the restriction map $$ H^1(K,A[\lambda]) \longrightarrow H^1(K(\boldsymbol{\mu}_{\ell^n},U^{1/\ell^n},A[\lambda]),A[\lambda]) $$ is injective. \end{lem}
\begin{proof} Let $F := K(\boldsymbol{\mu}_{\ell^n},U^{1/\ell^n})$. Restriction gives a composition \begin{equation} \label{sg} \mathrm{Gal}(F(A[\lambda])/F) \xrightarrow{\sim} \mathrm{Gal}(K(\boldsymbol{\mu}_\ell,A[\lambda])/K(\boldsymbol{\mu}_\ell))
\hookrightarrow \mathrm{Gal}(K(A[\lambda])/K) \end{equation} where the first map is an isomorphism by Lemma \ref{6.5a}, and the second map is injective with cokernel of order prime to $\ell$. The restriction map in the lemma is the composition of two restriction maps $$ H^1(K,A[\lambda])
\map{\;f_1\;} H^1(F, A[\lambda])
\map{\,f_2\;} H^1(F(A[\lambda]),A[\lambda]). $$ By \eqref{sg} and \eqref{p4}, we have $$ \ker(f_2) = H^1(F(A[\lambda])/F,A[\lambda]) = H^1(K(A[\lambda])/K,A[\lambda]) = 0. $$ Further, $$ \ker(f_1) = H^1(F/K,A(F)[\lambda]). $$ If $\tau_0 \in \mathrm{Gal}(K(\boldsymbol{\mu}_\ell,A[\lambda])/K(\boldsymbol{\mu}_\ell))$ is as in \eqref{p1}, then by \eqref{sg} we can find $\tau_0' \in \mathrm{Gal}(F(A[\lambda])/F)$ that restricts to $\tau_0$. But then $\tau_0'$ has no nonzero fixed points in $A[\lambda]$. Hence $A(F)[\lambda] = 0$, so $\ker(f_1) = 0$ as well and the proof is complete. \end{proof}
\begin{lem} \label{cocyclem} Suppose $F$ is a Galois extension of $K$ containing $K(A[\lambda])$, and $c$ is a cocycle representing a class in $H^1(K,A[\lambda])$ whose restriction to $F$ is nonzero. If $\sigma \in G_K$ and $(\sigma-1)A[\lambda] \ne A[\lambda]$, then the restriction of $c$ to $G_F$ induces a nonzero homomorphism $$ G_F \longrightarrow A[\lambda]/(\sigma-1)A[\lambda]. $$ \end{lem}
\begin{proof} Since $G_F$ acts trivially on $A[\lambda]$, the restriction of $c$ to $G_F$ is a (nonzero, by assumption) homomorphism $f : G_F^\mathrm{ab} \to A[\lambda]$. Recall that $\mathcal{E} := \mathrm{End}_K(A)$, and let $D \subset A[\lambda]$ denote the $\mathcal{E}$-module generated by the image of $f$. Since $c$ is the restriction of a cocycle defined on all of $G_K$, the homomorphism $f$ is $G_K$-equivariant, and in particular $D$ is a nonzero $\mathcal{E}[G_K]$-submodule of $A[\lambda]$. By \eqref{p6} it follows that $D = A[\lambda]$. But $(\sigma-1)A[\lambda]$ is a proper $\mathcal{E}$-stable submodule of $A[\lambda]$, so the image of $f$ cannot be contained in $(\sigma-1)A[\lambda]$. \end{proof}
Recall we have fixed a polarization of $A$ of degree prime to $\ell$ (by \eqref{p5}), and $\alpha \mapsto \alpha^\dagger$ is the corresponding Rosati involution of $\mathcal{E}$. The polarization induces a nondegenerate pairing $A[\ell] \times A[\ell] \to \boldsymbol{\mu}_\ell$, which restricts to a nondegenerate pairing \begin{equation} \notag A[\lambda] \times A[\lambda^\dagger] \to \boldsymbol{\mu}_\ell \end{equation} and induces an isomorphism \begin{equation} \label{dualiso} A[\lambda^\dagger] \cong \mathrm{Hom}(A[\lambda], \boldsymbol{\mu}_\ell). \end{equation} Note that if conditions \eqref{p5} through \eqref{p2} hold for $\lambda$, then they also hold for $\lambda^\dagger$ (with the same $\tau_0$ and $\tau_1$).
\begin{defn} If $\mathfrak{a}$ is an ideal of $\O_K$, define relaxed-at-$\mathfrak{a}$ and strict-at-$\mathfrak{a}$ Selmer groups to be, respectively, \begin{align*} \mathrm{Sel}(K,A[\lambda])^\mathfrak{a} &:= \{c \in H^1(K,A[\lambda]) : \text{$\mathrm{loc}_v(c) \in \mathcal{H}_{\lambda}(K_v)$ for every $v \nmid \mathfrak{a}$}\},\\ \mathrm{Sel}(K,A[\lambda])_\mathfrak{a} &:= \{c \in \mathrm{Sel}(K,A[\lambda])^\mathfrak{a} : \text{$\mathrm{loc}_v(c) = 0$ for every $v \mid \mathfrak{a}$}\}, \end{align*} and similarly with $\lambda$ replaced by $\lambda^\dagger$. Note that $$ \mathrm{Sel}(K,A[\lambda])_\mathfrak{a} \subset \mathrm{Sel}(K,A[\lambda]) \subset \mathrm{Sel}(K,A[\lambda])^\mathfrak{a}. $$ \end{defn}
\begin{defn} \label{Qdef} From now on let $\Sigma$ be a finite set of places of $K$ containing all places where $A$ has bad reduction, all places dividing $\ell\infty$, and large enough so that the primes in $\Sigma$ generate the ideal class group of $K$. Define $$ \mathcal{O}_{K,\Sigma} := \{x \in K : \text{$x \in \mathcal{O}_{K_v}$ for every $v \notin \Sigma$}\}, $$ the ring of $\Sigma$-integers of $K$. Define sets of primes $\mathcal{P} \subset \mathcal{Q}$ by \begin{align*} \mathcal{Q} &:= \{\mathfrak{p} \notin \Sigma : \mathbf{N}\mathfrak{p} \equiv 1 \pmod{\ell^n}\} \\ \mathcal{P} &:= \{\mathfrak{p}\in\mathcal{Q} : \text{the inclusion $K^\times \hookrightarrow K_\mathfrak{p}^\times$
sends $\mathcal{O}_{K,\Sigma}^\times$ into $(\mathcal{O}_{K_\mathfrak{p}}^\times)^{\ell^n}$}\}. \end{align*} Note that the action of $\mathcal{E}$ on $A[\lambda]$ makes $\HS{\ur}(K_\mathfrak{p},A[\lambda])$ an $\mathcal{E}$-module. Define partitions of $\mathcal{P}, \mathcal{Q}$ into disjoint subsets $\mathcal{P}_i, \mathcal{Q}_i$ for $i \ge 0$ by $$ \mathcal{Q}_i := \{\mathfrak{p}\in\mathcal{Q} : \mathrm{length}_{\mathcal{E}/\lambda}\HS{\ur}(K_\mathfrak{p},A[\lambda]) = i\}, \quad \mathcal{P}_i := \mathcal{Q}_i \cap \mathcal{P} $$ and if $\mathfrak{a}$ is an ideal of $\mathcal{O}_K$, let $\mathcal{P}_1(\mathfrak{a})$ be the subset of all $\mathfrak{p} \in \mathcal{P}_1$ such that the localization maps $$ \mathrm{Sel}(K,A[\lambda])_{\mathfrak{a}} \map{\mathrm{loc}_\mathfrak{p}} \HS{\ur}(K_\mathfrak{p},A[\lambda]), \quad
\mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}} \map{\mathrm{loc}_\mathfrak{p}} \HS{\ur}(K_\mathfrak{p},A[{\lambda^\dagger}]) $$ are both nonzero.
Note that by Lemma \ref{urrem}(i) and \eqref{dualiso}, if $\mathfrak{p}\in\mathcal{Q}_i$ then $\mathrm{length}_{\mathcal{E}/\lambda^\dagger}\HS{\ur}(K_\mathfrak{p},A[\lambda^\dagger]) = i$ as well. \end{defn}
In the language of the Introduction and \S\ref{intro2}, the {\em critical primes} are the primes in $\mathcal{Q}_1$ and the {\em silent primes} are the primes in $\mathcal{Q}_0$.
\begin{prop} \label{goodp} \begin{enumerate} \item The sets $\mathcal{P}_0$ and $\mathcal{P}_1$ have positive density. \item Suppose $\mathfrak{a}$ is an ideal of $\O_K$ such that both $\mathrm{Sel}(K,A[\lambda])_\mathfrak{a}$ and $\mathrm{Sel}(K,A[\lambda^\dagger])_\mathfrak{a}$ are nonzero. Then $\mathcal{P}_1(\mathfrak{a})$ has positive density, and if $\mathfrak{p}\in\mathcal{P}_1(\mathfrak{a})$ then \begin{align*} \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(K,A[\lambda])_{\mathfrak{a}\mathfrak{p}} &= \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(K,A[\lambda])_{\mathfrak{a}} - 1, \\ \mathrm{length}_{\mathcal{E}/\lambda^\dagger}\mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}\mathfrak{p}}
&= \mathrm{length}_{\mathcal{E}/\lambda^\dagger}\mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}} - 1. \end{align*} \end{enumerate} \end{prop}
\begin{proof} Let $\tau_0, \tau_1$ be as in \eqref{p1} and \eqref{p2}. By Lemma \ref{6.5a}, $$ K(\boldsymbol{\mu}_\ell,A[\lambda]) \cap K(\boldsymbol{\mu}_{\ell^n},(\mathcal{O}_{K,\Sigma}^\times)^{1/\ell^n}) = K(\boldsymbol{\mu}_\ell), $$ so for $i = 0$ or $1$ we can choose $\sigma_i \in G_K$ such that \begin{align} \label{si1} &\text{$\sigma_i = \tau_i$ on $A[\lambda]$},\\ \label{si2} &\text{$\sigma_i = 1$ on $K(\boldsymbol{\mu}_{\ell^n},(\mathcal{O}_{K,\Sigma}^\times)^{1/\ell^n})$}. \end{align} Fix $i = 0$ or $1$, and suppose that $\mathfrak{p}$ is a prime of $K$ whose Frobenius conjugacy class in $\mathrm{Gal}(K(\boldsymbol{\mu}_{\ell^n},(\mathcal{O}_{K,\Sigma}^\times)^{1/\ell^n},A[\lambda])/K)$ is the class of $\sigma_i$. Since Frobenius fixes $\boldsymbol{\mu}_{\ell^n}$ and $(\mathcal{O}_{K,\Sigma}^\times)^{1/\ell^n}$ by \eqref{si2}, both $\boldsymbol{\mu}_{\ell^n}$ and $(\mathcal{O}_{K,\Sigma}^\times)^{1/\ell^n}$ are contained in $K_\mathfrak{p}^\times$. Hence $\mathbf{N}\mathfrak{p} \equiv 1 \pmod{\ell^n}$, and the inclusion $K^\times \hookrightarrow K_\mathfrak{p}^\times$ sends $\mathcal{O}_{K,\Sigma}^\times$ into $(\mathcal{O}_{K_\mathfrak{p}}^\times)^{\ell^n}$, so by definition $\mathfrak{p}\in\mathcal{P}$.
By \eqref{si1} and Lemma \ref{urrem}, evaluation of cocycles on a Frobenius element for $\mathfrak{p}$ in $G_K$ induces an isomorphism \begin{equation} \label{last} \mathcal{H}_{\lambda}(K_\mathfrak{p}) = \HS{\ur}(K_\mathfrak{p},A[\lambda]) \map{~\sim~} A[\lambda]/(\tau_i-1)A[\lambda] \end{equation} and similarly for $\lambda^\dagger$. Thus $\mathfrak{p}\in\mathcal{P}_i$, so the Cebotarev Theorem shows that $\mathcal{P}_0$ and $\mathcal{P}_1$ have positive density. This is (i).
Fix an ideal $\mathfrak{a}$ of $\mathcal{O}_K$ and suppose that $c$ and $d$ are cocycles representing nonzero elements of $\mathrm{Sel}(K,A[\lambda])_\mathfrak{a}$ and $\mathrm{Sel}(K,A[\lambda^\dagger])_\mathfrak{a}$, respectively. Let $$ F := K(\boldsymbol{\mu}_{\ell^n},(\mathcal{O}_{K,\Sigma}^\times)^{1/\ell^n},A[\lambda]), $$ and let $\sigma_1$ be as above. By Lemmas \ref{6.5b} and \ref{cocyclem}, the restrictions of $c$ and $d$ to $G_F$ induce nonzero homomorphisms $$ \tilde{c} : G_F \longrightarrow A[\lambda]/(\sigma_1-1)A[\lambda], \quad \tilde{d} : G_F \longrightarrow A[\lambda^\dagger]/(\sigma_1-1)A[\lambda^\dagger]. $$ Let $Z_c$ be the subset of all $\gamma \in G_F$ such that $c(\gamma) = -c(\sigma_1)$ in $A[\lambda]/(\sigma_1-1)A[\lambda]$, and similarly for $Z_d$ with $\lambda$ replaced by $\lambda^\dagger$. Since $\tilde{c}$ and $\tilde{d}$ are nonzero, $Z_c$ and $Z_d$ each have Haar measure at most $1/\ell$ in $G_F$, so $Z_c \cup Z_d \ne G_F$ (this is where we use that $\ell \ge 3$ in assumption \eqref{p5}).
Thus we can find $\gamma \in G_F$ with $\gamma \notin Z_c \cup Z_d$. Since $\gamma$ acts trivially on $A[\lambda]$, the cocycle relation gives $c(\gamma\sigma_1) = c(\gamma) + c(\sigma_1)$, and hence $$ c(\gamma\sigma_1) \notin (\sigma_1-1)A[\lambda] = (\gamma\sigma_1-1)A[\lambda] $$ and similarly for $d$. Let $N$ be a Galois extension of $K$ containing $F$ and such that the restrictions of $c$ and $d$ to $G_F$ factor through $\mathrm{Gal}(N/F)$. If $\mathfrak{p}$ is a prime whose Frobenius conjugacy class in $\mathrm{Gal}(N/K)$ is the class of $\gamma\sigma_1$, then $\mathrm{loc}_\mathfrak{p}(c) \ne 0$ and $\mathrm{loc}_\mathfrak{p}(d) \ne 0$, so $\mathfrak{p}\in\mathcal{P}_1(\mathfrak{a})$. Now the Cebotarev Theorem shows that $\mathcal{P}_1(\mathfrak{a})$ has positive density.
If $\mathfrak{p}\in\mathcal{P}_1(\mathfrak{a})$ then we have exact sequences of $\mathcal{E}/\lambda$ and $\mathcal{E}/\lambda^\dagger$-modules \begin{gather*} 0 \longrightarrow \mathrm{Sel}(K,A[\lambda])_{\mathfrak{a}\mathfrak{p}} \longrightarrow \mathrm{Sel}(K,A[\lambda])_{\mathfrak{a}} \map{\mathrm{loc}_\mathfrak{p}} \HS{\ur}(K_\mathfrak{p},A[{\lambda}]) \longrightarrow 0 \\ 0 \longrightarrow \mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}\mathfrak{p}} \longrightarrow \mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}}
\map{\mathrm{loc}_\mathfrak{p}} \HS{\ur}(K_\mathfrak{p},A[{\lambda^\dagger}]) \longrightarrow 0 \end{gather*} where the right-hand maps are surjective because they are nonzero and (by \eqref{last}) the target modules are simple. This completes the proof of (ii). \end{proof}
\begin{defn} \label{updef} Suppose $T$ is a finite set of primes of $K$, disjoint from $\Sigma$. We will say that an extension $L/K$ is {\em $T$-ramified and $\Sigma$-split} if every $\mathfrak{p}\in T - \mathcal{Q}_0$ is totally ramified in $L/K$, every $\mathfrak{p} \notin T$ is unramified in $L/K$, and every $v \in \Sigma$ splits completely in $L/K$. \end{defn}
The primes in $\mathcal{Q}_0$ are the silent primes referred to in the Introduction and \S\ref{intro2}. The local Selmer conditions at these primes are zero, so we need no condition on their splitting behavior in Definition \ref{updef}.
\begin{lem} \label{l7.15} Suppose $T$ is a nonempty finite subset of $\mathcal{P}$, and let $T_0 := T \cap \mathcal{P}_0$. For each $\mathfrak{p} \in T_0$ fix $e_\mathfrak{p}$ with $0 \le e_\mathfrak{p} \le n$. If $T = T_0$ assume in addition that some $e_\mathfrak{p} = n$. Then there is a cyclic extension $L/K$ of degree $\ell^n$ that is $T$-ramified and $\Sigma$-split, and such that if $\mathfrak{p} \in T_0$ then the ramification degree of $\mathfrak{p}$ in $L/K$ is $\ell^{e_\mathfrak{p}}$. \end{lem}
\begin{proof} Suppose $\mathfrak{p}\in\mathcal{P}$. Let $\mathbb{A}_K^\times$ denote the group of ideles of $K$, and let $K(\mathfrak{p})$ be the abelian extension of $K$ corresponding by global class field theory to the subgroup $$ Y := K^\times (\mathcal{O}_{K_\mathfrak{p}}^\times)^{\ell^n}
\prod_{v\in\Sigma}K_v^\times\prod_{v\notin\Sigma\cup\{\mathfrak{p}\}}\mathcal{O}_{K_v}^\times
\subset \mathbb{A}_K^\times. $$ Class field theory tells us that the inertia (resp., decomposition) group of a place $v$ in $\mathrm{Gal}(K(\mathfrak{p})/K)$ is the image of $\mathcal{O}_{K_v}^\times$ (resp., $K_v^\times$) in $\mathbb{A}_K^\times/Y$. If $v \nmid p$ then $\mathcal{O}_{K_v}^\times \subset Y$, so $K(\mathfrak{p})/K$ is unramified outside of $\mathfrak{p}$. If $v \in \Sigma$ then $K_v^\times \subset Y$, so every $v \in \Sigma$ splits completely in $K(\mathfrak{p})/K$. Since $\Sigma$ was chosen large enough to generate the ideal class group of $K$, the natural map $\mathcal{O}_{K_\mathfrak{p}}^\times \to \mathbb{A}^\times_K/Y$ is surjective, so $K(\mathfrak{p})/K$ is totally ramified at $\mathfrak{p}$. It follows from the definition of $\mathcal{P}$ that $\mathrm{Gal}(K(\mathfrak{p})/K) \cong \mathbb{A}_K^\times/Y$ is cyclic of order $\ell^n$. Now we can find an extension that is $T$-ramified and $\Sigma$-split, with the desired ramification degree at primes in $T_0$, inside the compositum of the fields $K(\mathfrak{p})$ for $\mathfrak{p}\in T$. \end{proof}
\begin{lem} \label{sames} Suppose $T$ is a finite subset of $\mathcal{P}$, and $L/K$ is a cyclic extension of degree $\ell^n$ that is $T$-ramified and $\Sigma$-split. If $K \subsetneq L' \subset L$ then $\mathrm{Sel}(L'/K,A[\lambda]) =\mathrm{Sel}(L/K,A[\lambda])$. \end{lem}
\begin{proof} We will show that $\mathcal{H}_\lambda(L'_v/K_v) = \mathcal{H}_\lambda(L_v/K_v)$ for every $v$. If $v \in \Sigma$ this holds because $L_v' = L_v = K_v$. If $v \in T - \mathcal{P}_0$ this holds by Proposition \ref{gl}(i). If $v \notin \Sigma \cup T$ this holds by Lemma \ref{urrem}(ii). Finally, if $v \in T \cap \mathcal{P}_0$ then $\mathcal{H}_\lambda(L'_v/K_v) = \mathcal{H}_\lambda(L_v/K_v) = 0$ by Lemmas \ref{xlem}(ii) and \ref{urrem}(i). Thus the two Selmer groups coincide in $H^1(K,A[\lambda])$. \end{proof}
In the terminology of the Introduction and \S\ref{intro2}, we next use critical primes (those in $\mathcal{P}_1$) to decrease the rank of the Selmer group, while the silent primes (those in $\mathcal{P}_0$) have no effect on the rank.
\begin{prop} \label{l7.13} Let $r := \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(K,A[\lambda])$, $r^\dagger := \mathrm{length}_{\mathcal{E}/\lambda^\dagger}\mathrm{Sel}(K,A[\lambda^\dagger])$, and suppose that $t \le \min\{r,r^\dagger\}$. \begin{enumerate} \item There is a set of primes $T \subset \mathcal{P}_1$ of cardinality $t$ such that $$ \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(K,A[\lambda])_\mathfrak{a} = r-t, \quad
\mathrm{length}_{\mathcal{E}/\lambda^\dagger}\mathrm{Sel}(K,A[\lambda^\dagger])_\mathfrak{a} = r^\dagger-t, $$ where $\mathfrak{a} := {\prod_{\mathfrak{p}\in T}\mathfrak{p}}$. \item If $T$ is as in (i), $T_0$ is a finite subset of $\mathcal{Q}_0$, and $L/K$ is a cyclic extension of $K$ of degree $\ell^n$ that is $(T_0 \cup T)$-ramified and $\Sigma$-split, then $$ \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(L/K,A[\lambda]) = r-t,
\quad \mathrm{length}_{\mathcal{E}/\lambda^\dagger}\mathrm{Sel}(L/K,A[\lambda^\dagger]) = r^\dagger-t. $$ \end{enumerate} \end{prop}
\begin{proof} We will prove (i) by induction on $t$. When $t = 0$ there is nothing to check.
Suppose $T$ satisfies the conclusion of (i) for $t$, and $t < \min\{r,r^\dagger\}$. Let $\mathfrak{a} := {\prod_{\mathfrak{p}\in T}\mathfrak{p}}$. Then we can apply Proposition \ref{goodp}(ii) to choose $\mathfrak{p} \in \mathcal{P}_1(\mathfrak{a})$ so that $$ \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(K,A[\lambda])_{\mathfrak{a}\mathfrak{p}} = r-t-1, \quad
\mathrm{length}_{\mathcal{E}/\lambda^\dagger}\mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}\mathfrak{p}} = r^\dagger-t-1. $$ Then $T \cup \{\mathfrak{p}\}$ satisfies the conclusion of (i) for $t+1$.
Now suppose that $T$ is such a set, and $\mathfrak{a} := {\prod_{\mathfrak{p}\in T}\mathfrak{p}}$. Consider the exact sequences \begin{equation} \label{gdd} \raisebox{19pt}{ \xymatrix@C=12pt@R=7pt{ 0 \ar[r] & \mathrm{Sel}(K,A[\lambda]) \ar[r] & \mathrm{Sel}(K,A[\lambda])^{\mathfrak{a}} \ar^-{\oplus \mathrm{loc}_\mathfrak{p}}[rr]
&& \dirsum{\mathfrak{p}\in T}H^1(K_\mathfrak{p},A[\lambda])/\mathcal{H}_{\lambda}(K_\mathfrak{p}) \\ 0 \ar[r] & \mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}} \ar[r] & \mathrm{Sel}(K,A[\lambda^\dagger]) \ar^{\oplus \mathrm{loc}_\mathfrak{p}}[rr]
&& \dirsum{\mathfrak{p}\in T}\mathcal{H}_{\lambda^\dagger}(K_\mathfrak{p}). }} \end{equation} Using \eqref{dualiso} to identify $A[\lambda^\dagger]$ with the dual of $A[\lambda]$, the local conditions that define the Selmer groups $\mathrm{Sel}(K,A[\lambda])$ and $\mathrm{Sel}(K,A[\lambda^\dagger])$ (resp.\ $\mathrm{Sel}(K,A[\lambda])^{\mathfrak{a}}$ and $\mathrm{Sel}(K,A[\lambda^\dagger])_{\mathfrak{a}}$) are dual Selmer structures in the sense of \cite[\S2.3]{kolysys}. Thus we can use global duality (see for example \cite[Theorem 2.3.4]{kolysys}) to conclude that the images of the two right-hand maps in \eqref{gdd} are orthogonal complements of each other under the sum of the local Tate pairings. By our choice of $T$ the lower right-hand map is surjective, so the upper right-hand map is zero, i.e., \begin{equation} \label{siss} (\oplus_{\mathfrak{p}\in T} \mathrm{loc}_\mathfrak{p})(\mathrm{Sel}(K,A[\lambda])^{\mathfrak{a}}) \subset \dirsum{\mathfrak{p}\in T}\mathcal{H}_\lambda(K_\mathfrak{p}). \end{equation} Let $T_0$ be a finite subset of $\mathcal{Q}_0$, let $\mathfrak{b} := \prod_{\mathfrak{p}\in T_0}\mathfrak{p}$, and suppose $L$ is a cyclic extension that is $(T_0 \cup T)$-ramified and $\Sigma$-split. By definition (and Lemma \ref{urrem}(ii)), $\mathrm{Sel}(L/K,A[\lambda])$ is the kernel of the map $$ \mathrm{Sel}(K,A[\lambda])^{\mathfrak{a}\mathfrak{b}} \map{\oplus_{\mathfrak{p}\in T_0 \cup T} \mathrm{loc}_\mathfrak{p}}
\dirsum{\mathfrak{p}\in T_0 \cup T}H^1(K_\mathfrak{p},A[\lambda])/\mathcal{H}_\lambda(L_\mathfrak{p}/K_\mathfrak{p}). $$ We have $\mathcal{H}_\lambda(K_\mathfrak{p}) = \mathcal{H}_\lambda(L_\mathfrak{p}/K_\mathfrak{p}) = 0$ for every $\mathfrak{p}\in\mathcal{Q}_0$ by Lemmas \ref{xlem}(ii) and \ref{urrem}(i) and the definition of $\mathcal{Q}_0$, so in fact $\mathrm{Sel}(L/K,A[\lambda])$ is the kernel of the map \begin{equation} \label{siss2} \mathrm{Sel}(K,A[\lambda])^\mathfrak{a} \map{\oplus_{\mathfrak{p}\in T} \mathrm{loc}_\mathfrak{p}} \dirsum{\mathfrak{p}\in T}H^1(K_\mathfrak{p},A[\lambda])/\mathcal{H}_\lambda(L_\mathfrak{p}/K_\mathfrak{p}). \end{equation} By Proposition \ref{gl}(ii), $\mathcal{H}_\lambda(K_\mathfrak{p}) \cap \mathcal{H}_\lambda(L_\mathfrak{p}/K_\mathfrak{p}) = 0$ for every $\mathfrak{p} \in \mathcal{P}_1$. Combining \eqref{siss} and \eqref{siss2} shows that $\mathrm{Sel}(L/K,A[\lambda]) = \mathrm{Sel}(L/K,A[\lambda])_\mathfrak{a}$, so by our choice of $T$ we have $\mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(L/K,A[\lambda]) = r-t$. The proof for $\lambda^\dagger$ is the same. \end{proof}
\begin{thm} \label{ranks} Suppose that \eqref{p5} through \eqref{p2} all hold, and $n \ge 1$. Then for every finite set $\Sigma$ of primes of $K$, there are infinitely many cyclic extensions $L/K$ of degree $\ell^n$, completely split at all places in $\Sigma$, such that $A(L) = A(K)$. \end{thm}
\begin{proof} Enlarge $\Sigma$ if necessary so that the conditions of Definition \ref{Qdef} are satisfied. We may also assume without loss of generality that $$ \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(K,A[\lambda]) \le \mathrm{length}_{\mathcal{E}/\lambda^\dagger}\mathrm{Sel}(K,A[\lambda^\dagger]) $$ (if not, we can simply switch $\lambda$ and $\lambda^\dagger$; all the properties we require for $\lambda$ hold equivalently for $\lambda^\dagger$, using the isomorphism \eqref{dualiso}). Apply Proposition \ref{l7.13}(i) with $t := \mathrm{length}_{\mathcal{E}/\lambda}\mathrm{Sel}(K,A[\lambda])$ to produce a finite set $T \subset \mathcal{P}_1$.
Now suppose that $T_0$ is a finite subset of $\mathcal{Q}_0$. If $L/K$ is cyclic of degree $\ell^n$, $(T_0 \cup T)$-ramified and $\Sigma$-split, then Proposition \ref{l7.13} shows $\mathrm{Sel}(L/K,A[\lambda]) = 0$. Further, Lemma \ref{sames} shows that $\mathrm{Sel}(L'/K,A[\lambda]) = 0$ if $K \subsetneq L' \subset L$, so by Proposition \ref{tower} we have $\mathrm{rank}(A(L)) = \mathrm{rank}(A(K))$.
Since $\mathcal{P}_0$ has positive density (Proposition \ref{goodp}(i)), there are infinitely many finite subsets $T_0$ of $\mathcal{P}_0 \subset \mathcal{Q}_0$. For each such $T_0$, Lemma \ref{l7.15} shows that there is a cyclic extension $L/K$ of degree $\ell^n$ that is $(T_0 \cup T)$-ramified and $\Sigma$-split, and totally ramified at all primes in $T_0$ as well. These fields are all distinct, so we have infinitely many different $L$ with $\mathrm{rank}(A(L)) = \mathrm{rank}(A(K))$.
Now suppose that the set $T_0$ in the construction above contains primes $\mathfrak{p}_1, \mathfrak{p}_2$ with different residue characteristics. In particular $L/K$ is totally ramified at $\mathfrak{p}_1$ and $\mathfrak{p}_2$. If $A(L) \ne A(K)$, then (since $\mathrm{rank}(A(L)) = \mathrm{rank}(A(K))$) there is a prime $p$ and a point $x \in A(L)$ such that $x \notin A(K)$ but $px \in A(K)$. It follows that the extension $K(x)/K$ is unramified outside of $\Sigma$ and primes above $p$. In particular $K \subset K(x) \subset L$, but $K(x)/K$ cannot ramify at both $\mathfrak{p}_1$ and $\mathfrak{p}_2$, so we must have $K(x) = K$, i.e., $x \in A(K)$. This contradiction shows that $A(L) = A(K)$ for all such $T_0$, and this proves the theorem. \end{proof}
\section{Proof of Theorem \ref{static-avs}} \label{hyps}
\begin{prop} \label{class} Conditions \eqref{p5} through \eqref{p3} hold for all sufficiently large $\ell$. \end{prop}
\begin{proof} This is clear for \eqref{p5}.
Recall that $\lambda$ was chosen not to divide the discriminant of $\mathcal{R}$, so $\mathcal{R}_\lambda$ is the ring of integers of $\mathcal{M}_\lambda$. Since $A$ is simple, $\mathcal{E} \otimes \mathbb{Q}$ is a central simple division algebra over $\mathcal{M}$, of some degree $d$. By the general theory of such algebras (see for example \cite[Proposition in \S18.5]{pierce}), for all but finitely many primes $\lambda$ of $\mathcal{M}$ we have $$\mathcal{E} \otimes_\mathcal{R} \mathcal{M}_\lambda \cong M_d(\mathcal{M}_\lambda).$$ If in addition $\lambda$ does not divide the index of $\mathcal{E}$ in a fixed maximal order of $\mathcal{E} \otimes_\mathcal{R} \mathcal{M}$, then $$ \text{$\mathcal{E} \otimes_\mathcal{R} \mathcal{R}_\lambda$ is a maximal order in $\mathcal{E} \otimes_\mathcal{R} \mathcal{M}_\lambda$.} $$ By \cite[Proposition 3.5]{auslander}, every maximal order in $M_d(\mathcal{M}_\lambda)$ is conjugate to $M_d(\mathcal{R}_\lambda)$, so for such $\lambda$ we have $$ \mathcal{E}/\lambda := \mathcal{E} \otimes_\mathcal{R} \mathbb{F}_\lambda
\cong M_d(\mathcal{R}_\lambda) \otimes_\mathcal{R} \mathbb{F}_\lambda = M_d(\mathbb{F}_\lambda) $$ which is \eqref{p7}.
Condition \eqref{p6} holds for large $\ell$ by Corollary \ref{irredcor} of the Appendix.
Let $B \subset \mathrm{Gal}(K(A[\lambda])/K)$ denote the subgroup acting as scalars on $A[\lambda]$. Then $B$ is a normal subgroup and we have the inflation-restriction exact sequence \begin{equation} \label{irs} H^1(K(A[\lambda])^B/K,A[\lambda]^B) \longrightarrow H^1(K(A[\lambda])/K,A[\lambda])
\longrightarrow H^1(B,A[\lambda]). \end{equation} Since $B$ has order prime to $\ell$, $H^1(B,A[\lambda]) = 0$. Serre \cite[Th\'eor\`eme of \S 5]{Vigneras_na} shows that $B$ is nontrivial for all sufficiently large $\ell$. When $B$ is nontrivial, $A[\lambda]^B = 0$, so the left-hand term in \eqref{irs} vanishes and \eqref{p4} holds.
Let $\Gamma$ denote the image of $\mathrm{Gal}(K(\boldsymbol{\mu}_\ell,A[\lambda])/K(\boldsymbol{\mu}_\ell))$ in $\mathrm{Aut}(A[\lambda])$. Then \cite[Theorem 0.2]{LPfsag} shows that there are normal subgroups $\Gamma_3 \subset \Gamma_2 \subset \Gamma_1$ of $\Gamma$ such that $\Gamma_3$ is an $\ell$-group, $\Gamma_2/\Gamma_3$ has order prime to $\ell$, $\Gamma_1/\Gamma_2$ is a direct product of finite simple groups of Lie type in characteristic $\ell$, and $[\Gamma:\Gamma_1]$ is bounded independently of $\ell$. By Faltings' theorem (see for example the proof of \eqref{p6} referenced above) $\Gamma$ acts semisimply on $A[\lambda]$ for sufficiently large $\ell$, and then $\Gamma_3$ must be trivial. It follows that if $\ell$ is sufficiently large then $\Gamma$ has no cyclic quotient of order $\ell$, i.e., \eqref{p3} holds. \end{proof}
\begin{thm}[Larsen] \label{class3} Suppose that all $\bar{K}$-endomorphisms of $A$ are defined over $K$. Then the conditions \eqref{p1} and \eqref{p2} hold simultaneously for a set of primes $\ell$ of positive density. \end{thm}
\begin{proof} This is Theorem \ref{main} of the Appendix. \end{proof}
\begin{proof}[Proof of Theorem \ref{static-avs}] If all $\bar{K}$-endomorphisms of $A$ are defined over $K$, then by Proposition \ref{class} and Theorem \ref{class3} there is a set $S$ of rational primes with positive density such that our hypotheses \eqref{p5} through \eqref{p2} hold simultaneously for all $\ell \in S$. Thus Theorem \ref{static-avs} follows from Theorem \ref{ranks}. \end{proof}
\begin{proof}[Proof of Theorem \ref{static-curves}] Lemma \ref{lemimp} showed that Theorem \ref{static-curves} follows from Theorem \ref{static-avs}. \end{proof}
\begin{rem} \label{nonsimple} It is natural to try to strengthen Theorem \ref{static-avs} by removing the assumption that $A$ is simple. This generalization can be reduced to the problem, given a finite collection of abelian varieties, of finding many cyclic extensions for which they are all simultaneously diophantine-stable.
Precisely, suppose that $A_1,\ldots, A_m$ are pairwise non-isogenous absolutely simple abelian varieties, $\ell$ is a rational prime, and $\lambda_i$ is a prime ideal of the center of $\mathrm{End}(A_i)$ above $\ell$ for each $i$. Suppose $\ell$ is large enough so that \eqref{p5} through \eqref{p3} hold for every $A_i$.
If the results of the Appendix could be extended to show that for every $j$ there is an element $\tau_j \in G_{K(\boldsymbol{\mu}_\ell)}$ such that $$ \text{$A_i[\lambda_i]/(\tau_j-1)A_i[\lambda_i]$ \;is\;} \begin{cases} \text{zero if $i \ne j$}, \\ \text{a nonzero simple $\mathrm{End}(A_j)/\lambda_j$-module if $i = j$,} \end{cases} $$ then the methods of \S\ref{decrease} above would show that there is a set $S$ of rational primes with positive density such that for every $\ell \in S$ and every $n \ge 1$ there are infinitely many cyclic extensions $L/K$ of degree $\ell^n$ such that every $A_i$ is diophantine-stable for $L/K$. Using the argument at the end of the proof of Theorem \ref{ranks} it would follow that $S$ can be chosen so that the same result holds for every abelian variety isogenous over $K$ to $\prod_i A_i^{d_i}$. \end{rem}
\section{Quantitative results} \label{quant}
Fix a simple abelian variety $A/K$ such that $\mathrm{End}_K(A) = \mathrm{End}_{\bar K}(A)$, and an $\ell$ such that our hypotheses \eqref{p5} through \eqref{p2} all hold. The proof of Theorem \ref{static-avs}, and more precisely Theorem \ref{ranks}, makes it possible to quantify how many cyclic $\ell^n$-extensions $L/K$ are being found with $A(L) = A(K)$. For simplicity we will take $n = 1$, and count cyclic $\ell$-extensions. Keep the notation of the previous sections.
For real numbers $X > 0$, define \begin{align*} \mathcal{F}_K(X) &:= \{\text{cyclic extensions $L/K$ of degree $\ell$} : \mathbf{N}\mathfrak{d}_{L/K} < X\}, \\ \mathcal{F}^0_K(X) &:= \{L \in \mathcal{F}_K(X) : A(L) = A(K)\}, \end{align*} where $\mathbf{N}\mathfrak{d}_{L/K}$ denotes the absolute norm of the relative discriminant of $L/K$. For $\mathfrak{p} \notin \Sigma$ let $\mathrm{Fr}_\mathfrak{p} \in G_K$ denote a Frobenius automorphism for $\mathfrak{p}$. It follows from Definition \ref{Qdef} and Lemma \ref{urrem}(i) that $$ \mathcal{Q}_0 := \{\mathfrak{p} \notin \Sigma : \text{$\mathrm{Fr}_\mathfrak{p} = 1$ on $\boldsymbol{\mu}_\ell$ and $\mathrm{Fr}_\mathfrak{p}$ has no nonzero fixed
points in $A[\lambda]$}\}, $$ and let $$
\delta := \frac{|\{\sigma \in \mathrm{Gal}(K(\boldsymbol{\mu}_\ell,A[\lambda])/K(\boldsymbol{\mu}_\ell)) : \text{$\sigma$ has no nonzero fixed points in $A[\lambda]$}\}|}
{[K(\boldsymbol{\mu}_\ell,A[\lambda]):K(\boldsymbol{\mu}_\ell)]}. $$ The proof of Proposition \ref{goodp}(i) shows that $\mathcal{Q}_0$ has density $\delta/[K(\boldsymbol{\mu}_\ell):K]$, and \eqref{p1} and \eqref{p2} show that $0 < \delta < 1$.
\begin{thm}[Wright \cite{wright}] There is a positive constant $C$ such that $$
|\mathcal{F}_K(X)| \sim C X^{1/(\ell-1)} \log(X)^{(\ell-1)/[K(\boldsymbol{\mu}_\ell):K]-1} $$ as $X \to \infty$. \end{thm}
The main result of this section is the following.
\begin{thm} \label{quantthm} As $X \to \infty$ we have $$
|\mathcal{F}^0_K(X)| \gg X^{1/(\ell-1)}\log(X)^{(\ell-1)\delta/[K(\boldsymbol{\mu}_\ell):K]-1}. $$ \end{thm}
\begin{exa} Suppose $E$ is a non-CM elliptic curve, and $\ell$ is large enough so that the Galois representation $G_K \to \mathrm{Aut}(E[\ell]) = \mathrm{GL}_2(\mathbb{Z}/\ell\mathbb{Z})$ is surjective. Then $[K(\boldsymbol{\mu}_\ell):K] = \ell-1$, and an elementary calculation shows that the number of elements of $\mathrm{SL}_2(\mathbb{Z}/\ell\mathbb{Z})$ with nonzero fixed points is $\ell^2$. Thus $\delta = 1 - \ell/(\ell^2-1)$ so in this case $$
|\mathcal{F}_K(X)| \sim C X^{1/(\ell-1)}, \quad |\mathcal{F}^0_K(X)| \gg X^{1/(\ell-1)}/\log(X)^{\ell/(\ell^2-1)}. $$ \end{exa}
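The count of elements with nonzero fixed points can be double-checked by brute force for small $\ell$ (an illustrative script, not part of the paper; it uses the fact that $M$ fixes a nonzero vector exactly when $\det(M - I) \equiv 0 \pmod{\ell}$):

```python
from itertools import product

def count_fixed(l):
    """Count elements of SL_2(Z/lZ) that have a nonzero fixed vector."""
    count = 0
    for a, b, c, d in product(range(l), repeat=4):
        if (a * d - b * c) % l != 1:
            continue  # not in SL_2(Z/lZ)
        # M = [[a, b], [c, d]] fixes a nonzero vector iff det(M - I) == 0 mod l
        if ((a - 1) * (d - 1) - b * c) % l == 0:
            count += 1
    return count

for l in (2, 3, 5, 7):
    assert count_fixed(l) == l * l  # matches the claimed count l^2
```

For these $\ell$ the count is exactly $\ell^2$, in agreement with the elementary calculation cited above (equivalently: given $\det M = 1$, the condition $\det(M-I)=0$ forces $\operatorname{tr} M = 2$, and there are $\ell^2$ such elements).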
The rest of this section is devoted to a proof of Theorem \ref{quantthm}.
\begin{lem} \label{lem10.3} There is a finite subset $T_1 \subset \mathcal{Q}_0$ such that the natural map $$ \mathcal{O}_{K,\Sigma}^\times/(\mathcal{O}_{K,\Sigma}^\times)^\ell
\longrightarrow \prod_{v \in T_1}\mathcal{O}_{K_v}^\times/(\mathcal{O}_{K_v}^\times)^\ell $$ is injective. \end{lem}
\begin{proof} Suppose $u \in \mathcal{O}_{K,\Sigma}^\times$ and $u \notin (K^\times)^\ell$. Then $u \notin (K(\boldsymbol{\mu}_\ell)^\times)^\ell$, so by Lemma \ref{6.5a} and \eqref{p1} we can choose $\sigma \in G_K$ such that $\sigma = 1$ on $\boldsymbol{\mu}_\ell$, $\sigma$ has no nonzero fixed points in $A[\lambda]$, and $\sigma$ does not fix $u^{1/\ell}$. If $v \notin\Sigma$ and the Frobenius of $v$ on $K(\boldsymbol{\mu}_\ell,A[\lambda],(\mathcal{O}_{K,\Sigma}^\times)^{1/\ell})$ is in the conjugacy class of $\sigma$, then $v \in \mathcal{Q}_0$ and $u \notin (\mathcal{O}_{K_v}^\times)^\ell$. Taking a collection of such $v$ as $u$ varies gives a suitable set $T_1$. \end{proof}
Recall that $\mathbb{A}_K^\times$ denotes the group of ideles of $K$. Fix a set $T_1$ as in Lemma \ref{lem10.3}.
\begin{lem} \label{lem10.4} The natural composition $$ \mathrm{Hom}(G_K,\boldsymbol{\mu}_\ell) \longrightarrow \mathrm{Hom}(\mathbb{A}_K^\times,\boldsymbol{\mu}_\ell) \longrightarrow
\prod_{v \in \Sigma}\mathrm{Hom}(K_v^\times,\boldsymbol{\mu}_\ell)
\prod_{v \notin\Sigma\cup T_1} \mathrm{Hom}(\mathcal{O}_{K_v}^\times,\boldsymbol{\mu}_\ell) $$ is surjective. \end{lem}
\begin{proof} By class field theory and our assumption that the primes in $\Sigma$ generate the ideal class group of $K$, we have an isomorphism $$ \mathrm{Hom}(G_K,\boldsymbol{\mu}_\ell) \cong
\mathrm{Hom}\biggl(\bigl(\prod_{v \in \Sigma}K_v^\times
\prod_{v \in T_1} \mathcal{O}_{K_v}^\times
\prod_{v \notin\Sigma\cup T_1} \mathcal{O}_{K_v}^\times\bigr)/\mathcal{O}_{K,\Sigma}^\times ,\boldsymbol{\mu}_\ell\biggr). $$ Now the lemma follows by a simple argument using Lemma \ref{lem10.3}; see for example \cite[Lemma 6.6(ii)]{KMR}. \end{proof}
As in the proof of Theorem \ref{ranks}, we can use Proposition \ref{l7.13} to fix a finite set $T \subset \mathcal{P}_1$ such that for every finite set $T_0 \subset \mathcal{Q}_0$, and every cyclic $\ell$-extension $L/K$ that is \begin{itemize} \item $(T_0 \cup T)$-ramified and $\Sigma$-split, \item ramified at two primes in $T_0$ of different residue characteristics, \end{itemize} we have $A(L) = A(K)$.
\begin{defn} Fix two primes $\mathfrak{p}_1,\mathfrak{p}_2 \in \mathcal{P}_0 - T_1$ of different residue characteristics, and let $T' := T \cup \{\mathfrak{p}_1,\mathfrak{p}_2\}$. For every finite subset $T_0$ of $\mathcal{Q}_0 - T_1$, let $\mathcal{C}(T_0) \subset \mathrm{Hom}(G_K,\boldsymbol{\mu}_\ell)$ be the subset of characters $\chi$ satisfying, under the class field theory surjection of Lemma \ref{lem10.4}, \begin{itemize} \item
$\chi|_{K_v^\times} = 1$ if $v \in \Sigma$, \item
$\chi|_{\mathcal{O}_{K_v}^\times} \ne 1$ if $v \in T' \cup T_0$, \item
$\chi|_{\mathcal{O}_{K_v}^\times} = 1$ if $v \notin \Sigma \cup T' \cup T_0 \cup T_1$. \end{itemize} \end{defn}
\begin{lem} \label{lem10.6} Let $\alpha$ be the (surjective) composition of maps in Lemma \ref{lem10.4}. Then for every finite subset
$T_0 \subset \mathcal{Q}_0-T_1$ we have $|\mathcal{C}(T_0)| = |\ker(\alpha)| (\ell-1)^{|T'|}(\ell-1)^{|T_0|}$. \end{lem}
\begin{proof} This is clear from the surjectivity of $\alpha$. \end{proof}
\begin{lem} \label{lem10.7} Suppose $T_0$ is a finite subset of $\mathcal{Q}_0-T_1$, and $\chi \in \mathcal{C}(T_0)$. Let $L$ be the fixed field of the kernel of $\chi$. Then: \begin{enumerate} \item $A(L) = A(K)$, \item the discriminant of $L/K$ is $\prod_{\mathfrak{p} \in T' \cup T_0}\mathfrak{p}^{\ell-1}$. \end{enumerate} \end{lem}
\begin{proof} The first assertion follows from the definition of $T$ above. For the second, by definition of $\mathcal{C}(T_0)$ we have that $L/K$ is cyclic of degree $\ell$, totally tamely ramified at $\mathfrak{p} \in T' \cup T_0$ and unramified elsewhere. \end{proof}
\begin{proof}[Proof of Theorem \ref{quantthm}] Define a function $f$ on ideals of $K$ by $$ f(\mathfrak{a}) := \begin{cases}
(\ell-1)^{|T_0|} & \text {if $T_0$ is a finite subset of $\mathcal{Q}_0-T_1$ and $\mathfrak{a} = \prod_{\mathfrak{p}\in T_0}\mathfrak{p}$}, \\ 0 & \text{if $\mathfrak{a}$ is not a squarefree product of primes in $\mathcal{Q}_0-T_1$}. \end{cases} $$ Then $\sum_\mathfrak{a} f(\mathfrak{a})\mathbf{N}\mathfrak{a}^{-s} = \prod_{\mathfrak{p}\in\mathcal{Q}_0-T_1}(1+(\ell-1)\mathbf{N}\mathfrak{p}^{-s})$, so $$ \log\biggl(\sum_\mathfrak{a} f(\mathfrak{a})\mathbf{N}\mathfrak{a}^{-s}\biggr) \approx (\ell-1)\sum_{\mathfrak{p}\in\mathcal{Q}_0-T_1}\mathbf{N}\mathfrak{p}^{-s}
\approx \frac{(\ell-1)\delta}{[K(\boldsymbol{\mu}_\ell):K]}\log\frac{1}{s-1} $$ where ``$\approx$'' means that the two sides are holomorphic on $\Re(s) > 1$ and their difference approaches a finite limit as $\Re(s)\to 1^+$. Therefore by a variant of the Ikehara Tauberian Theorem (see for example \cite[p.\ 322]{wintner}) we conclude that there is a constant $D$ such that $$ \sum_{\mathbf{N}\mathfrak{a} < X} f(\mathfrak{a}) \sim DX \log(X)^{(\ell-1)\delta/[K(\boldsymbol{\mu}_\ell):K] - 1}. $$ By Lemmas \ref{lem10.6} and \ref{lem10.7}, for every $\mathfrak{a}$ the number of cyclic $\ell$-extensions $L/K$ of discriminant $(\mathfrak{a}\prod_{\mathfrak{p}\in T'}\mathfrak{p})^{\ell-1}$ with $A(L) = A(K)$ is at least $f(\mathfrak{a})$, and the theorem follows. \end{proof}
\small\noindent {\sc Department of Mathematics, Indiana University, Bloomington, IN 47405, USA} {\em E-mail address:} {\href{mailto:larsen@math.indiana.edu}{\tt larsen@math.indiana.edu}}
\end{document}
\begin{document}
\title{Randomized Solutions to Convex Programs with Multiple Chance Constraints\thanks{This
manuscript is the preprint of a paper submitted to the SIAM Journal on Optimization and it is
subject to SIAM copyright. SIAM maintains the sole rights of distribution or publication of the work
in all forms and media.
If accepted, the copy of record will be available at http://www.siam.org.}}
\begin{abstract} The scenario-based optimization approach (`scenario approach') provides an intuitive way of approximating the solution to chance-constrained optimization programs, based on finding the optimal solution under a finite number of sampled outcomes of the uncertainty (`scenarios'). A key merit of this approach is that it neither requires explicit knowledge of the uncertainty set, as in robust optimization, nor of its probability distribution, as in stochastic optimization. The scenario approach is also computationally efficient because it only requires the solution to a convex optimization program, even if the original chance-constrained problem is non-convex. Recent research has obtained a rigorous foundation for the scenario approach, by establishing a direct link between the number of scenarios and bounds on the constraint violation probability. These bounds are tight in the general case of an uncertain optimization problem with a single chance constraint.
This paper shows that the bounds can be improved in situations where the chance constraints have a limited `support rank', meaning that they leave a linear subspace unconstrained. Moreover, it shows that a combination of multiple chance constraints, each with an individual probability level, is also admissible. As a consequence of these results, the number of scenarios can be reduced from that prescribed by the existing theory for problems with the indicated structural property. This leads to an improvement in the objective value and a reduction in the computational complexity of the scenario approach. The proposed extensions have many practical applications, in particular high-dimensional problems such as multi-stage uncertain decision problems or design problems of large-scale systems.\end{abstract}
\noindent\textbf{Key words:} Uncertain Optimization, Chance Constraints, Randomized Methods, Convex Optimization, Scenario Approach, Multi-Stage Decision Problems.
\section{Introduction}\label{Sec:Intro}
Optimization is ubiquitous in modern problems found in engineering, logistics, and other sciences. A common pattern is that a decision or design variable $x\in\BRd$ has to be selected from a subset of $\BRd$, as described by constraints $f_i:\BRd\to\BR$, and its quality is measured against some objective or cost function $f_0:\BRd\to\BR$: \begin{subequations}\label{Equ:OptProb}\begin{align}
\min_{x\in\BRd}\quad &f_0(x)\ec\\
\st\quad&f_{i}(x)\leq 0\qquad\fa i=1,2,\hdots,N\ef \end{align}\end{subequations}
\subsection{Chance-Constrained Optimization}
Unfortunately, in many practical applications the underlying problem data is uncertain. This uncertainty shall be represented with an abstract variable $\den\in\Delta$, where $\Delta$ is an uncertainty set whose nature is not specified. The uncertainty may affect the objective function $f_0$ and/or the constraints $f_i$. Thus for a particular decision $x$ it becomes uncertain what objective value is achieved and/or whether the constraints are indeed satisfied. The second situation represents a particular challenge, as good solutions are usually located on the boundary of the feasible set.
This gives rise to a trade-off problem between the (uncertain) objective value and the robustness of the chosen decision to a constraint violation. A large variety of approaches addressing this issue have been proposed in the areas of robust and stochastic optimization \cite{BaiEtAl:1997,BenTalNem:1998,BirgLouv:1997,KallMay:2011,KouvYu:1997, MulVan:1995,Prekopa:1995,Shapiro:2009}, with the preferred method of choice depending on the requirements of the application at hand.
In many practical applications, $\den$ can be assumed to be of a stochastic nature. In this case, the formulation of \emph{chance constraints}, where the decision variable $x$ has to be feasible with at least probability $(1-\ep)$ for $\ep\in(0,1)$, has proven to be an appropriate concept for handling the uncertainty in the constraints. However, chance-constrained optimization problems are usually very difficult to solve.
The \emph{scenario approach}, as explained below, represents an attractive method for finding an `approximate solution' to stochastic programs, since it is both intuitive and computationally efficient.
\subsection{The Scenario Approach}\label{Sec:SCP}
Recent contributions \cite{Cala:2009,CalaCamp:2005,Cala:2010,CampGar:2008,CampGar:2011} have revealed the theoretical links between the scenario approach and the solution to an optimization problem with a linear objective function and a single chance constraint ($\SCP$): \begin{subequations}\label{Equ:SCP}\begin{align}
\min_{x\in\BX}\quad &c\tp x\ec\\
\st\quad&\Pr\bigl[f(x,\den)\leq 0\bigr]\geq(1-\ep)\ef \end{align}\end{subequations} Here $\BX\subset\BRd$ is a compact and convex set, $c\tp$ denotes the transpose of a vector $c\in \BRd$, $\Pr[\cdot]$ is the probability measure on the uncertainty set $\Delta$, $f:\BRd\times\Delta \to\BR$ is a convex function in its first argument $x\in\BRd$ for $\Pr$-almost every uncertainty $\den\in\Delta$, and $\ep$ is some value in the open real interval $(0,1)$.
The chance constraint (\ref{Equ:SCP}b) is interpreted as follows. For any given $x\in\BRd$, the left-hand side represents the probability of the event that $x$ indeed belongs to the feasible set. Written more properly, \begin{equation}\label{Equ:ProbMeas}
\Pr\bigl[f(x,\den)\leq 0\bigr]:=\Pr\bigl\{\den\in\Delta\:\big|\:f(x,\den)\leq 0\bigr\}\ec \end{equation} however the left-hand side notation is kept throughout for brevity. Note that $x$ is considered to be a \emph{feasible point} of the chance constraint (\ref{Equ:SCP}b) if this probability is at least $(1-\ep)$.
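For intuition, the feasibility of a fixed $x$ can be checked empirically by Monte Carlo sampling, as in the following minimal sketch (the constraint function and the distribution of $\den$ are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.1                 # illustrative probability level
x = np.array([0.4, 0.4])  # fixed candidate decision

# 10^4 sampled uncertainties; each coordinate of delta ~ N(1, 0.1^2)
deltas = 1.0 + 0.1 * rng.standard_normal((10_000, 2))

# f(x, delta) = delta^T x - 1; estimate Pr[f(x, delta) > 0]
violation = float(np.mean(deltas @ x - 1.0 > 0.0))
feasible = violation <= eps  # empirical version of the chance constraint
```

With these (invented) numbers the empirical violation probability is far below $\ep$, so $x$ would be declared feasible; of course such a check requires the ability to sample $\den$, which is exactly the standing assumption of the scenario approach.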
\vspace*{0.15cm} \begin{remark}[Problem Formulation]\label{Rem:Generality}
The formulation of the $\SCP$ encompasses a vast range of problems, namely any uncertain
optimization problem that becomes convex once the value of $\den$ is fixed.
(a) Any uncertain convex objective function $f(\cdot,\den)$ can be included by an epigraph
reformulation, with the new objective being a scalar and hence linear \cite[Sec.\,3.1.7]
{BoydVan:2004}.
(b) Joint chance constraints, where $x$ must satisfy multiple convex constraints simultaneously
with probability $(1-\ep)$, are covered since the intersection of convex sets is convex.
(c) Additional deterministic, convex constraints can be included by intersection with the
compact set $\BX$. \end{remark} \vspace*{0.15cm}
The characterization of the feasible set of a chance constraint requires exact knowledge of the probability distribution of $\den$. Moreover, the feasible set is non-convex and difficult to express explicitly, except for very special cases \cite{BirgLouv:1997,KallMay:2011,Prekopa:1995,Shapiro:2009}. This makes the $\SCP$, in full generality and especially in higher dimensions $d$, an extremely difficult problem to solve.
The scenario approach can be used to find an \emph{approximate solution} to the $\SCP$, which is considered to be any point in $\BX$ that is feasible for the chance constraint with some given (very high) \emph{confidence} $(1-\theta)\in (0,1)$. This problem is usually much easier, because an approximate solution can be chosen (with high confidence) in a low-violation region of the decision space. The resulting objective value may then be poor, however, in which case the approximate solution shall be called `\emph{conservative}'. Clearly, it is of major interest to find approximate solutions that are the least conservative (\ie with an objective value as low as possible), and this is the goal of the scenario approach.
The basic idea of the scenario approach is to draw a specific number $K\in\BN$ of samples (`\emph{scenarios}') from the uncertainty $\den$, and to take the optimal solution that is feasible under all of these scenarios (`\emph{scenario solution}') as an approximate solution. Computing the scenario solution involves a deterministic optimization program (`\emph{scenario program}'), which is obtained by replacing the chance constraint (\ref{Equ:SCP}b) with the $K$ sampled deterministic constraints.
By construction, the scenario program is a deterministic, convex optimization program that can be solved efficiently by standard algorithms \cite{BoydVan:2004,LuenYe:2008,NocWri:2006}. Moreover, the scenario approach is distribution-free in the sense that it does not rely on a particular mathematical model for the distribution of $\den$, or even its support set $\Delta$. In fact, both may be unknown; the only requirements are stated in the following assumption.
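For a concrete (invented) instance with linear data, the scenario program reduces to an ordinary linear program. The sketch below uses \texttt{scipy.optimize.linprog} merely as a stand-in for any LP solver; the uncertainty model and all constants are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, K = 2, 100                       # decision dimension, number of scenarios
c = np.array([-1.0, -1.0])          # linear objective c^T x

# K sampled constraints a(delta_k)^T x <= 1 (invented uncertainty model)
A = 1.0 + 0.1 * rng.standard_normal((K, d))
b = np.ones(K)

# Scenario program: a deterministic LP over the compact set X = [-1, 1]^d
res = linprog(c, A_ub=A, b_ub=b, bounds=[(-1.0, 1.0)] * d)
x_sc = res.x                        # the scenario solution
```

The point $x_{\mathrm{sc}}$ is feasible for all $K$ sampled constraints by construction; the theory reviewed below then bounds the probability with which it violates the underlying chance constraint.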
\vspace*{0.15cm} \begin{assumption}[Uncertainty]\label{Ass:Uncertainty}
(a) The uncertainty $\den$ is a random variable with (possibly unknown) probability measure
$\Pr$ and support set $\Delta$.
(b) A sufficient number of independent random samples from $\den$ can be obtained. \end{assumption} \vspace*{0.15cm}
Note that Assumption \ref{Ass:Uncertainty} is fairly general. It could even be argued that the scenario approach is at the heart of any robust and stochastic optimization method, because either the uncertainty set $\Delta$ or the probability distribution of $\den$ is usually constructed based on some (necessarily finite) experience of the uncertainty.
Tight bounds for the proper choice of the sample size $K$ are established in \cite{Cala:2010, CampGar:2008}, which link it directly to the probability with which the scenario solution violates the chance constraint (\ref{Equ:SCP}b). Moreover, \cite{Cala:2010,CampGar:2011} show that the theory can be extended to the case where $R\leq K$ sampled constraints are discarded \emph{a posteriori}, that is after observing the outcomes of the $K$ samples. While this increases the complexity of the scenario approach (in terms of data requirements and computation), it can be used to improve the objective value achieved by the scenario solution. In fact, the scenario solution can be shown to converge to the exact solution of \eqref{Equ:SCP} as the number of discarded constraints is increased, provided that some mild technical assumptions hold, \cf \cite[Sec.\,4.4]{CampGar:2011}.
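These sample-size bounds can be evaluated numerically. The sketch below searches for the smallest $K$ with $\sum_{i=0}^{d-1}\binom{K}{i}\ep^i(1-\ep)^{K-i}\le\theta$; this condition is restated here from memory and should be checked against \cite{CampGar:2008}, and the helper names are our own:

```python
from math import comb

def binomial_tail(K, d, eps):
    """sum_{i=0}^{d-1} C(K, i) * eps^i * (1 - eps)^(K - i)."""
    return sum(comb(K, i) * eps**i * (1.0 - eps)**(K - i) for i in range(d))

def scenario_sample_size(d, eps, theta):
    """Smallest K for which the binomial tail drops below theta."""
    K = d  # the tail is < 1 at K = d and decreases as K grows
    while binomial_tail(K, d, eps) > theta:
        K += 1
    return K
```

For instance, $d = 2$, $\ep = 0.1$, $\theta = 10^{-3}$ requires on the order of $10^{2}$ scenarios under this condition.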
\subsection{Novel Contributions}
From a practical point of view, the strongest appeal of the scenario approach is the ease of its application and its low computational complexity. It becomes particularly attractive for uncertain optimization problems in higher dimensions, as these occur frequently in fields such as engineering or logistics. In these cases, an uncertain constraint will often not involve all decision variables simultaneously, as allowed in the general case of (\ref{Equ:SCP}b). Instead, multiple uncertain constraints may be present, each of them involving only a subset of the decision variables.
\vspace*{0.15cm} \begin{example}[Multi-Stage Decision Problems]\textnormal{ An important example is that of uncertain \emph{multi-stage decision problems} \cite[Cha.\,7]{BirgLouv:1997}, \cite[Cha.\,8]{KallMay:2011}, \cite[Cha.\,13]{Prekopa:1995}, \cite[Cha.\,3]{Shapiro:2009}, which occur in many fields such as production planning, portfolio optimization, or control theory. The basic setting is that some \emph{decision} (\eg on production quantities, buy/sell orders, or control inputs) has to be taken repeatedly at a finite number of time steps. Each decision affects the \emph{state} of the system (\eg inventory level, portfolio, or state variable) at the subsequent time step. Besides the decision, the state is also subject to uncertain influences (\eg customer demand, price fluctuations, or dynamic disturbances). If constraints on the state variables are present (\eg service levels, value at risk, or safety regions), this adds multiple uncertain constraints (one for the state at each time step) to the overall decision problem. In addition, deterministic constraints may be imposed on the decision variables. The special structure of such a problem is that a constraint on the state at some time step involves only the decisions made prior to this time step, while the later decisions are not involved.} \end{example} \vspace*{0.15cm}
This paper extends the theory of the scenario approach to problems in which one or more chance constraints involve only a subset of the decision variables. More precisely, each chance constraint may affect only a certain subspace of the decision space, whose dimension will be called its `\emph{support rank}'. Other constraints, either deterministic or uncertain, cover the directions that are left unconstrained, so that the solution remains bounded.
The main result of this paper is that an uncertain constraint with a lower support rank can only contribute a smaller number of \emph{support constraints} \cite{Cala:2010,CalaCamp:2005,CampGar:2008}, and therefore its associated sample size can be reduced. This leads to a subtle shift from the idea of a `\emph{problem dimension}' in the existing theory to that of a `\emph{support dimension}' of a particular chance constraint. Moreover, it requires an extension of the existing theory to cope with multiple chance constraints in the uncertain optimization program. Finally, the approach of constraint removal \emph{a posteriori} carries over almost unchanged to this extended setting.
From a practical point of view, these extensions improve on the merits of the scenario approach for problems with the structure described above. In particular, the lower sample sizes reduce the computational complexity of the scenario approach and simultaneously improve the objective value of the scenario solution. At the same time, the feasibility guarantees for the scenario solution remain as strong as before. Hence the extensions of this paper, when applicable, offer only advantages over the existing results on the scenario approach.
\subsection{Organization of the Paper}
Section \ref{Sec:Problem} contains the problem statement. Section \ref{Sec:Stucture} introduces some background on its properties, and states the rigorous definitions for the `\emph{support dimension}' and the `\emph{support rank}' of a chance constraint. Section \ref{Sec:ScenSol} contains the main results of this paper, which give the improved sample bounds in the presence of a single (or multiple) chance constraint(s) of limited support rank. Section \ref{Sec:DiscardConstr} extends this theory to the sampling-and-discarding procedure, which can be used to improve the objective value of the scenario solution, at the price of larger data requirements and an increased computational complexity. Section \ref{Sec:Example} presents a brief numerical example that demonstrates the application of the presented theory, as well as its potential benefits when compared to existing results.
\section{Problem Formulation}\label{Sec:Problem}
This section introduces the generalized problem formulation with multiple chance constraints, the corresponding scenario program, and some basic terminology.
\subsection{Stochastic Program with Multiple Chance Constraints}\label{Sec:MCP}
Consider the following extension of the $\SCP$ to an optimization problem with linear objective function and multiple chance constraints ($\MCP$): \begin{subequations}\label{Equ:MCP}\begin{align}
\min_{x\in\BX}\quad &c\tp x\ec\\
\st\quad&\Pr\bigl[f_{i}(x,\den)\leq 0\bigr]\geq(1-\ep_{i})
\qquad\fa i\in\BN_{1}^{N}\ec \end{align}\end{subequations} where $i$ is the chance constraint index in $\BN_{1}^{N}:=\{1,2,...,N\}$. The remarks for the $\SCP$ in Section \ref{Sec:SCP} apply analogously; in particular the following key assumption is made.
\vspace*{0.15cm} \begin{assumption}[Convexity]\label{Ass:Convexity}
The constraint functions $f_i:\BRd\times\Delta\to\BR$ of all chance constraints $i\in
\BN_{1}^{N}:=\{1,...,N\}$ are convex in their first argument $x\in\BRd$ for $\Pr$-almost every
$\den\in\Delta$. \end{assumption} \vspace*{0.15cm}
Other than Assumption \ref{Ass:Convexity}, the dependence of the functions $f_i(x,\den)$ on the uncertainty $\den$ is completely generic.
The use of `$\min$' instead of `$\inf$' in (\ref{Equ:MCP}a) is justified by the fact that the feasible set of a single chance constraint is closed under fairly general assumptions \cite[Thm.\,2.1]{KallMay:2011}. This implies that the feasible set of the $\MCP$ is compact, due to the presence of $\BX$, and the infimum is indeed attained.
It remains a standing assumption that the $\sigma$-algebra of $\Pr$-measurable sets in $\Delta$ is large enough to contain all sets whose probability is measured in this paper, like the ones in (\ref{Equ:MCP}b), \cf \cite[p.\,4]{CampGar:2008}.
In order to avoid technical issues, which are of little relevance for most practical applications, the following is assumed, \cf \cite[Ass.\,1]{CampGar:2008}.
\vspace*{0.15cm} \begin{assumption}[Existence and Uniqueness]\label{Ass:Uniqueness}
(a) Problem \eqref{Equ:MCP} admits at least one feasible point. By the compactness of
$\BX$, this implies that there exists at least one optimal point of \eqref{Equ:MCP}.
(b) If there are multiple optimal points of \eqref{Equ:MCP}, a unique one is selected by
the help of a \emph{tie-break rule} (\eg the lexicographic order on $\BRd$). \end{assumption} \vspace*{0.15cm}
In principle, an approximate solution to the $\MCP$ can be obtained by the classic scenario approach. Namely, a $\SCP$ can be set up with the same objective function (\ref{Equ:SCP}a) as the $\MCP$, and a chance constraint (\ref{Equ:SCP}b) defined by \begin{equation}\label{Equ:SingleReform}
f(x,\den):=\max\bigl\{f_1(x,\den),\hdots,f_N(x,\den)\bigr\}\qquad\text{and}\qquad
\ep:=\min\bigl\{\ep_{1},\ep_{2},...,\ep_{N}\bigr\}\ef \end{equation} Note that $f(x,\den)$ is convex in $x$ for almost every $\den$, since the pointwise maximum of convex functions is convex. Any feasible point of this $\SCP$ is also a feasible point of the $\MCP$, and hence an approximate solution to the $\SCP$ with confidence $(1-\theta)$ is also an approximate solution to the $\MCP$ with confidence $(1-\theta)$.
However, this procedure introduces a considerable amount of conservatism, because it requires the scenario solution to simultaneously satisfy \emph{all} constraints $i=1,...,N$ with the \emph{highest} of all probabilities $(1-\ep_{i})$. Clearly, this conservatism becomes more severe if the number of chance constraints $N$ is large and there is a great variation in the values of $\ep_{i}$.
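As a minimal sketch of the reformulation \eqref{Equ:SingleReform}, with two hypothetical convex constraint functions (illustrative only, not tied to any particular application):

```python
# Illustrative sketch of the single-constraint reformulation: two
# hypothetical uncertain constraints f1, f2 (both convex, here linear,
# in x) are folded into one via the pointwise maximum.
def f1(x, de):
    return de * x[0] - 1.0

def f2(x, de):
    return x[1] - de

def f(x, de):
    # the pointwise max of convex functions is again convex in x
    return max(f1(x, de), f2(x, de))

# the single SCP must enforce the most stringent violation level
eps1, eps2 = 0.05, 0.10
eps = min(eps1, eps2)

print(f((1.0, 0.0), 2.0))  # -> 1.0 (here f1 attains the maximum)
print(eps)                 # -> 0.05
```

The conservatism discussed above is visible here: both folded constraints are forced to hold with the probability level $1-0.05$, even though the second one only requires $1-0.10$.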
\subsection{The Extended Scenario Approach}\label{Sec:MSP}
The extended scenario approach of this paper can be used to compute an approximate solution of the $\MCP$, which is a feasible point of every chance constraint $i=1,...,N$ with a given confidence probability of $(1-\theta_{i})$. The key difference from the classic scenario approach is that each chance constraint $i\in \BN_{1}^{N}$ is sampled separately, and with an individual sample size $K_{i}\in\BN$.
Let the \emph{random samples} pertaining to constraint $i$ be denoted by $\de^{(i,\kappa_{i})}$, where $\kappa_{i}\in\{1,...,K_{i}\}$; for brevity, they are also collected in the \emph{multi-sample} $\om^{(i)}:=\{\de^{(i,1)},...,\de^{(i,K_{i})}\}$. The collection of all samples is combined in an overall multi-sample $\om:=\{\om^{(1)},...,\om^{(N)}\}$, with the total number of samples given by $K:=\sum_{i=1}^{N}K_{i}$. All of these samples can be considered `identical copies' of the random uncertainty $\den$, in the sense that they are themselves random variables and satisfy the following key assumption.
\begin{assumption}[Independence and Identical Distribution]\label{Ass:Independence}
The sampling procedure is designed such that all random samples, together with the
actual random uncertainty,
\begin{equation*}
\bigcup_{i\in\BN_{1}^{N}}\bigl\{\de^{(i,1)},...,\de^{(i,K_{i})}\bigr\}
\cup\bigl\{\den\bigr\}
\end{equation*}
form a set of \emph{independent and identically distributed (\iid)} random variables. \end{assumption} \vspace*{0.15cm}
The multi-sample $\om$ is an element of $\Delta^{K}$, the $K$-th product of the uncertainty set $\Delta$, and it is distributed according to $\Pb^{K}$, the $K$-th product of the measure $\Pb$. The scenario program for multiple chance constraints ($\MSP[\om^{(1)},...,\om^{(N)}]$) is constructed as follows: \begin{subequations}\label{Equ:MSP}\begin{align}
\min_{x\in\BX}\quad & c\tp x\ec\\
\st\quad & f_i\bigl(x,\de^{(i,\kappa_{i})}\bigr)\leq 0
\qquad\fa\kappa_{i}\in\BN_{1}^{K_{i}},\:\:\fa i\in\BN_{1}^{N}\ef \end{align}\end{subequations} In problem \eqref{Equ:MSP}, the objective function of the $\MCP$ is minimized, while $x$ is forced to satisfy every sampled instance of constraint $i\in\BN_{1}^{N}$, obtained by substituting the samples $\de^{(i,\kappa_{i})}$ into the corresponding constraint function. Clearly, the solution to problem \eqref{Equ:MSP} is itself a random variable, as it depends on the random multi-sample $\om$. For this reason, the scenario approach is a \emph{randomized method} for finding an approximate solution to the $\MCP$.
Of course, the $\MSP$ is actually solved for the observations of the random samples, leading to its deterministic instance ($\DSP[\bom^{(1)},...,\bom^{(N)}]$): \begin{subequations}\label{Equ:DSP}\begin{align}
\min_{x\in\BX}\quad & c\tp x\ec\\
\st\quad & f_i\bigl(x,\bde^{(i,\kappa_{i})}\bigr)\leq 0
\qquad\fa\kappa_{i}\in\BN_{1}^{K_{i}},\:\:\fa i\in\BN_{1}^{N}\ef \end{align}\end{subequations} Note that \eqref{Equ:DSP} arises from \eqref{Equ:MSP} by replacing the \emph{(random) samples} $\de^{(i,\kappa_{i})}$, $\om^{(i)}$, $\om$ with their \emph{(deterministic) outcomes} $\bde^{(i,\kappa_{i})}$, $\bom^{(i)}$, $\bom$. Throughout the paper, these outcomes are indicated by a bar, to distinguish them from the corresponding random variables. By Assumption \ref{Ass:Convexity}, the $\DSP$ constitutes a convex program that can be solved efficiently by a suitable algorithm for convex optimization, \cf \cite{BoydVan:2004,LuenYe:2008,NocWri:2006}.
Note that \eqref{Equ:MSP} remains important for analyzing the (probabilistic) properties of the (random) scenario solution. In fact, the subsequent theory is mainly concerned with showing that, with a very high confidence, the scenario solution is a feasible point of the chance constraints (\ref{Equ:MCP}b), provided that the sample sizes $K_{1},...,K_{N}$ are appropriately selected.
\subsection{Randomized Solution and Violation Probability}\label{Sec:RandSol}
In order to avoid unnecessary complications, the following technical assumption ensures that there always exists a feasible solution to the $\MSP$, \cf \cite[p.\,3]{CampGar:2008}.
\vspace*{0.15cm} \begin{assumption}[Feasibility]\label{Ass:Feasibility}
(a) For any number of samples $K_{1},...,K_{N}$, the $\MSP$ admits a feasible solution almost
surely.
(b) For the sake of notational simplicity, any $\Pr$-null set for which (a) may not hold is
assumed to be removed from $\Delta$. \end{assumption} \vspace*{0.15cm}
Assumption \ref{Ass:Feasibility} can be taken for granted in the majority of practical problems. When it does not hold in a particular case, a generalization of the presented theory accounting for the infeasible case can be developed along the lines of \cite{Cala:2010}.
Hence the existence of a solution to $\DSP$ is ensured, and uniqueness holds by Assumption \ref{Ass:Convexity} and by carry-over of the tie-break rule of Assumption \ref{Ass:Uniqueness}(b), see \cite[Thm.\,10.1,\,7.1]{Rocka:1970}. Therefore the \emph{solution map} \begin{equation}\label{Equ:SolMap}
\bxo:\Delta^{K}\to\BX \end{equation} is well-defined, returning the unique optimal point $\bxo(\bom^{(1)},...,\bom^{(N)})$ of the $\DSP$ for a given outcome of the multi-samples $\{\bom^{(1)},...,\bom^{(N)}\}\in \Delta^{K}$. The solution map can also be applied to the $\MSP$, for which it is denoted by $\xo:\Delta^{K}\to\BX$. Now $\xo(\om^{(1)},...,\om^{(N)})$ represents a random vector of unknown probability distribution, which is also referred to as the \emph{scenario solution}. In fact, its distribution is a complicated function of the geometry and the parameters of the problem.
Note that there are two levels of randomness present in the analysis. The first is introduced by the random samples in $\om$, which affect the choice of the scenario solution. The second is the actual random uncertainty $\den$, which determines whether or not the scenario solution is feasible with respect to the chance constraints (\ref{Equ:MCP}b). For this reason, the scenario approach presented here is also called a \emph{double-level-of-probability approach} \cite[Rem.\,2.3]{Cala:2009}.
To highlight the two probability levels more clearly, suppose first that the multi-sample $\bom$ has already been observed, so that the scenario solution $\bxo(\bom^{(1)},...,\bom^{(N)})$ is fixed. Then for each chance constraint $i=1,...,N$ in (\ref{Equ:MCP}b), the \emph{a posteriori violation probability} $\Vb_{i}(\bom^{(1)},...,\bom^{(N)})$ is given by \begin{equation}\label{Equ:ViolPost}
\Vb_{i}\bigl(\bom^{(1)},...,\bom^{(N)}\bigr):=
\Pb\bigl[f_{i}\bigl(\bxo(\bom^{(1)},...,\bom^{(N)}),\den\bigr)>0\bigr]\ef \end{equation} In particular, each $\Vb_{i}$ has a deterministic, yet generally unknown, value in $[0,1]$. If the multi-sample $\om$ has not yet been observed, the scenario solution $\xo(\om^{(1)},...,\om^{(N)})$ is a random vector and so the \emph{a priori violation probability} \begin{equation}\label{Equ:ViolPrior}
\V_{i}\bigl(\om^{(1)},...,\om^{(N)}\bigr):=
\Pb\bigl[f_{i}\bigl(\xo(\om^{(1)},...,\om^{(N)}),\den\bigr)>0\bigr] \end{equation} becomes itself a random variable on $(\Delta^{K},\Pb^{K})$, with support $[0,1]$. Hence the goal is to choose appropriate sample sizes $K_{1},...,K_{N}$ which ensure that $\V_{i}(\om^{(1)},...,\om^{(N)})\leq\ep_{i}$ for all $i=1,...,N$, with a sufficiently high confidence $(1-\theta_{i})$. Before these results are derived, however, some structural properties of scenario programs and technical lemmas need to be discussed.
\section{Structural Properties of the Constraints}\label{Sec:Stucture}
In this section, a structural property of a chance constraint is introduced which yields a reduction in the number of samples below the levels given by the existing theory \cite{CalaCamp:2005,Cala:2010,CampGar:2008}. This property relates to the new concept of the \emph{support dimension} or, in a form that is more easily checked for many practical instances, the \emph{support rank}.
\subsection{Support Constraints}\label{Sec:SupConstr}
The concept of a \emph{support constraint} carries over from the $\SCP$ case, \cf \cite[Def.\,4]{CalaCamp:2005}. An illustration is given in Figure \ref{Fig:SupConstr}.
\vspace*{0.15cm} \begin{definition}[Support Constraint]\label{Def:SupConstr}
Consider the $\DSP$ for some outcome of the multi-sample $\bom$.
(a) For some $i\in \BN_{1}^{N}$ and $\kappa_{i}\in\BN_{1}^{K_{i}}$, constraint $f_{i}(x,\bde^
{(i,\kappa_{i})})\leq 0$ is a \emph{support constraint} of \eqref{Equ:DSP} if its removal from
the problem entails a change in the optimal solution:
\begin{equation*}
\bxo\bigl(\bom^{(1)},...,\bom^{(N)}\bigr)\neq
\bxo\bigl(\bom^{(1)},...,\bom^{(i-1)},\bom^{(i)}\setminus\{\bde^{(i,\kappa_{i})}\},
\bom^{(i+1)},...,\bom^{(N)}\bigr)\ef
\end{equation*}
In this case the sample $\bde^{(i,\kappa_{i})}$ is also said `to \emph{generate} this support
constraint.'
(b) For each $i\in \BN_{1}^{N}$, the indices $\kappa_{i}$ of all samples that generate a support
constraint of the $\DSP$ are included in the set $\bSc_{i}$.
Moreover, the tuples $(i,\kappa_{i})$ of all support constraints of the $\DSP$ are collected in
the \emph{support (constraint) set} $\bSc$.
With some abuse of this notation, $\bSc=\bigcup_{i=1}^{N}\bSc_{i}$. \end{definition} \vspace*{0.15cm}
Definition \ref{Def:SupConstr}(a) can be stated equivalently in terms of the objective function: a sampled constraint is a support constraint if and only if the optimal objective function value (or its preference by the tie-break rule) is strictly larger than if the constraint were removed. To be precise, in Definition \ref{Def:SupConstr}(b) the set $\bSc$ may also have to account for $\BX$ as an additional support constraint. This minor subtlety is tacitly understood in the sequel.
\begin{figure}
\caption{Illustration of Definition \ref{Def:SupConstr} in $\BR^{2}$. The arrow indicates the
optimization direction, the bold lines are the \emph{support constraints} of the respective
configuration.}
\label{Fig:SupConstr}
\end{figure}
In the stochastic setting of the $\MSP[\om^{(1)},...,\om^{(N)}]$, whether or not a particular random sample $\de^{(i,\kappa_{i})}$ generates a support constraint becomes a random event, which can be associated with a certain probability. Similarly, the support constraint set $\Sc$, and its subsets $\Sc_{1},...,\Sc_{N}$ contributed by the various chance constraints, are naturally random sets.
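Definition \ref{Def:SupConstr}(a) suggests a direct leave-one-out test for support constraints. A minimal sketch on a toy one-dimensional program $\min x$ s.t. $x\geq\de$ for every sample (a hypothetical example chosen because its optimal solution is simply the largest sample):

```python
import random
random.seed(0)

def solve(samples):
    # DSP of the toy program  min x  s.t.  x >= de  for every sample de:
    # the optimal solution is the largest sampled value.
    return max(samples)

samples = [random.random() for _ in range(10)]
x_star = solve(samples)

# leave-one-out test of Definition (Support Constraint): a sample is a
# support constraint iff removing it changes the optimal solution
support = [k for k in range(len(samples))
           if solve(samples[:k] + samples[k + 1:]) != x_star]
print(len(support))  # -> 1: only the maximal sample supports the solution
```

For this toy program the support set almost surely has cardinality one, consistent with the dimension bound $|\bSc|\leq d=1$ discussed below.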
\subsection{Support Dimension}\label{Sec:SupDim}
The link between the sample sizes $K_{1},...,K_{N}$ and the corresponding violation probability of the scenario solution depends decisively on the `dimensions' of the problem. The following lower bounds represent a mild technical condition, \cf \cite[Thm.\,3.3]{Cala:2010} and \cite[Def.\,2.3]{CampGar:2008}.
\vspace*{0.15cm} \begin{assumption}\label{Ass:SampleSize}
The sample sizes satisfy $K_{1},...,K_{N}\geq d$. \end{assumption} \vspace*{0.15cm}
In the existing literature, the dimension of the $\SCP$ has been characterized by \emph{Helly's dimension}, \cf \cite[Def.\,3.1]{Cala:2010}. In this paper, there is a subtle shift from the problem dimension to the dimension of chance constraint $i$ in the $\MCP$, embodied by its \emph{support dimension}.
\vspace*{0.15cm} \begin{definition}[Support Dimension]\label{Def:SupDim}
(a) Denote by $|\Sc|$ the (random) cardinality of the set $\Sc$. \emph{Helly's dimension} is the
smallest integer $\sd$ that satisfies
\begin{equation*}
\underset{\omega\in\Delta^{K}}{\esssup}\:|\Sc|\leq\sd\ef
\end{equation*}
(b) The \emph{support dimension} of a chance constraint $i\in\BN_{1}^{N}$ in the $\MSP$ is the
smallest integer $\sdi$ that satisfies
\begin{equation*}
\underset{\omega\in\Delta^{K}}{\esssup}\:|\Sc_{i}|\leq\sdi\ef
\end{equation*} \end{definition} \vspace*{0.15cm}
From a basic argument using Helly's Theorem, the number of support constraints $|\Sc|$ of any (feasible) convex optimization problem in $\BRd$ is upper bounded by the dimension of the decision space $d$, \cf \cite[Thm.\,2]{CalaCamp:2005}. This result implies that finite integers $\sd$ and $\sd_{1},...,\sd_{N}$ matching Definition \ref{Def:SupDim} always exist, so that the concepts of `Helly's dimension' and `support dimension' are indeed well-defined. Moreover, the result provides immediate upper bounds on the support dimension of each chance constraint $i\in\BN_{1}^{N}$ in \eqref{Equ:MSP}, namely $\sdi\leq\sd\leq d$.
It turns out that the support dimension $\sdi$ directly relates to the minimum sample size $K_{i}$ that is required for a given violation level $\ep_{i}$ and residual probability $\theta_{i}$. The basic mechanism is illustrated by the proposition below, for the simpler case of a \emph{single-level-of-probability} problem, \cf \cite[Thm.\,1]{CalaCamp:2005}.
\vspace*{0.15cm} \begin{proposition}[Probability Bound]\label{The:ProbBound}
Consider a particular constraint $i\in\BN_{1}^{N}$ in the $\MSP[\om^{(1)},...,\om^{(N)}]$ with
some fixed sample size $K_{i}$, and let $\hat{\sd}_{i}$ be an upper bound for its support
dimension $\sdi$. Then the following holds:
\begin{equation}\label{Equ:ProbBound1}
\Pb^{K+1}\bigl[f_{i}\bigl(\xo(\om^{(1)},...,\om^{(N)}),\den\bigr)>0\bigr]\leq
\frac{\hat{\sd}_{i}}{K_{i}+1}\ef
\end{equation} \end{proposition}
\vspace*{-0.3cm} \begin{proof}
Consider $\MSP':=\MSP[\om^{(1)},...,\om^{(i-1)},\om^{(i)}\cup\{\den\},\om^{(i+1)},
...,\om^{(N)}]$ and let $\Sc_{i}'\subset\{1,...,K_{i},K_{i}+1\}$ denote the set of support
constraints generated by samples from $\om^{(i)}\cup\{\den\}$, where $(K_{i}+1)\in\Sc_{i}'$
stands for $\den$ generating a support constraint.
Note that the event where $f_{i}\bigl(\xo(\om^{(1)},...,\om^{(N)}),\den\bigr)>0$ is contained
in the event that $\den$ generates a support constraint of $\MSP'$: if the solution obtained
without the constraint `$f_{i}(\,\cdot\,,\den)\leq 0$' violates it, then its removal from
$\MSP'$ changes the solution, so it is a support constraint by Definition \ref{Def:SupConstr}.
Hence condition \eqref{Equ:ProbBound1} follows once it is shown that
\begin{equation}\label{Equ:ProbBound2}
\Pb^{K+1}\bigl[(K_{i}+1)\in\Sc_{i}'\bigr]\leq\frac{\hat{\sd}_{i}}{K_{i}+1}\ef
\end{equation}
To analyze the event $(K_{i}+1)\in\Sc_{i}'$, observe that by Assumption \ref{Ass:Independence}
all samples in $\om^{(i)}\cup\{\den\}$ are \iid, whence all sampled instances of constraint $i$
in (\ref{Equ:MSP}b) along with `$f_{i}(\,\cdot\,,\den)\leq 0$' are probabilistically identical.
In particular, they are all equally likely to become a support constraint of $\MSP'$.
Hence if the number of support constraints $|\Sc_{i}'|$ were known, then
\begin{equation*}
\Pb^{K+1}\bigl[(K_{i}+1)\in\Sc_{i}'\bigr]=\frac{|\Sc_{i}'|}{K_{i}+1}\ef
\end{equation*}
Even though $|\Sc_{i}'|$ is a random variable, by Definition \ref{Def:SupDim}(b) $|\Sc_{i}'|\leq
\sdi$ holds almost surely, and by assumption $\sdi\leq\hat{\sd}_{i}$.
This immediately yields \eqref{Equ:ProbBound2}, and hence \eqref{Equ:ProbBound1}. \end{proof}
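Proposition \ref{The:ProbBound} can be checked empirically on the same hypothetical toy program $\min x$ s.t. $x\geq\de$ with $\de\sim\mathrm{Uniform}(0,1)$, whose support dimension is $1$; by exchangeability, the probability that a fresh sample violates the scenario solution is exactly $1/(K+1)$:

```python
import random
random.seed(1)

# Empirical check of the probability bound on the toy program
# min x s.t. x >= de (support dimension 1): the chance that a fresh
# sample violates the scenario solution should equal 1/(K+1).
K, trials = 19, 40_000
hits = 0
for _ in range(trials):
    x_star = max(random.random() for _ in range(K))  # scenario solution
    hits += random.random() > x_star                 # fresh sample violates?
freq = hits / trials
print(freq)  # close to 1/(K+1) = 0.05
```

This is the single-level-of-probability mechanism of the proof in miniature: each of the $K+1$ exchangeable samples is equally likely to be the (unique) support constraint.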
\subsection{The Support Rank}\label{Sec:SupRank}
In many practical cases, the support dimension $\sdi$ of a chance constraint $i\in\BN_{1}^{N}$ in the $\MSP$ is not known exactly. Then it has to be replaced by some upper bound. As argued above, the existing upper bound is given by the dimension $d$ of the decision space. However, this bound may not be tight in the case where the constraints satisfy a certain structural property, namely when they have a limited \emph{support rank}.
Intuitively speaking, the support rank is the dimension $d$ of the decision space less the maximal dimension of an (almost surely) \emph{unconstrained subspace}. The latter is understood as a linear subspace of $\BRd$ that cannot be constrained by the sampled instances of constraint $i$, for almost every value of the multi-sample $\om^{(i)}$.
Before the support rank is introduced in a rigorous manner, three examples of constraint classes with bounded support rank are described, in order to equip the reader with the necessary intuition behind this concept. They also show that very common constraint classes possess this property, and that in practical problems it can often be spotted easily.
\vspace*{0.15cm} \begin{example}\label{Exa:SupportRank}
\textnormal{For each of the following cases, a visual illustration can be found in Figure
\ref{Fig:SupportRank}.}
(a) Single Linear Constraint. \textnormal{Suppose some chance constraint $i\in\BN_{1}^{N}$ of
(\ref{Equ:MCP}b) takes the linear form
\begin{equation}\label{Equ:ExaLinConstr1}
f_{i}(x,\den)\equiv a\tp x-b(\den)\ec
\end{equation}
where $a\in\BR^{d}$, and $b:\Delta\to\BR$ is a scalar depending on the uncertainty in a generic
way.
Note that these constraints in the $\MSP$ are unable to constrain any direction in the subspace
orthogonal to the span of $a$, $\spn\{a\}^{\perp}$, regardless of the outcome of the
multi-sample $\om^{(i)}$.
Hence the support rank $\alpha$ of the chance constraint \eqref{Equ:ExaLinConstr1} is equal to
$1$.}
(b) Multiple Linear Constraints. \textnormal{As a generalization of case (a), suppose that some
chance constraint $i\in\BN_{1}^{N}$ of (\ref{Equ:MCP}b) is given by
\begin{equation}\label{Equ:ExaLinConstr2}
f_{i}(x,\den)\equiv A(\den) x-b(\den)\ec
\end{equation}
where $A:\Delta\to\BR^{r\times d}$ and $b:\Delta\to\BR^{r}$ represent a matrix and a vector that
depend on the uncertainty $\den$. Moreover, suppose that the uncertainty enters the matrix
$A(\den)$ in such a way that the dimension of the linear span of its rows $A_{j,\cdot}(\den)$,
for $j=1,...,r$, satisfies
\begin{equation*}
\dim\spn\bigl\{A_{j,\cdot}(\den)\:\big|\:j\in\BN_{1}^{r},\:\den\in\Delta\bigr\}\leq\beta<d\ef
\end{equation*}
Note that these constraints in the $\MSP$ are unable to constrain any direction in
$\spn\bigl\{A_{j,\cdot}(\den)\:\big|\:j\in\BN_{1}^{r},\:\den\in\Delta\bigr\}^{\perp}$, regardless of
the outcome of the multi-sample $\om^{(i)}$.
Hence the support rank of the chance constraint \eqref{Equ:ExaLinConstr2} is at most $\beta$.}
(c) Quadratic Constraint. \textnormal{For a nonlinear example, consider the case where some
chance constraint $i\in\BN_{1}^{N}$ of (\ref{Equ:MCP}b) is given by
\begin{equation}\label{Equ:ExaLinConstr3}
f_{i}(x,\den)\equiv\bigl(x-x_{c}(\den)\bigr)\tp Q\bigl(x-x_{c}(\den)\bigr) - r(\den)\ec
\end{equation}
where $Q\in\BR^{d\times d}$ is positive semi-definite with $\rnk Q=\gamma<d$, and $x_{c}:\Delta
\to\BRd$, $r:\Delta\to\BR_{+}$ represent a vector and scalar that depend on the uncertainty.
Note that these constraints in the $\MSP$ are unable to constrain any direction in the null
space of the matrix $Q$, regardless of the outcome of the multi-sample $\om^{(i)}$.
Since this null space has dimension $d-\gamma$, the support rank of the chance constraint
\eqref{Equ:ExaLinConstr3} is equal to $\gamma$.} \end{example} \vspace*{0.15cm}
To introduce the support rank in a rigorous manner, pick a chance constraint $i\in\BN_{1}^{N}$ of the $\MCP$. For each point $x\in\BX$ and each uncertainty $\de\in\Delta$, denote the corresponding level set of $f_{i}:\BRd\times\Delta\to\BR$ by \begin{equation}\label{Equ:LevelSets}
F_{i}(x,\de):=\bigl\{\xi\in\BRd\:\big|\:f_{i}(x+\xi,\de)=f_{i}(x,\de)\bigr\}\ef \end{equation}
\begin{figure}
\caption{Illustration of Example \ref{Exa:SupportRank} in $\BR^{3}$. The arrows indicate the
dimension of the \emph{unconstrained subspace}, equal to $3$ minus the respective \emph{support
rank} $\alpha$, $\beta$, or $\gamma$. }
\label{Fig:SupportRank}
\end{figure}
Let $\CL$ be the collection of all linear subspaces in $\BRd$. To qualify as unconstrained, a subspace must be contained in every level set $F_{i}(x,\de)$: \begin{equation}\label{Equ:SubSpace}
\CL_{i}:=\bigcap_{\de\in\Delta}\bigcap_{x\in\BRd}
\bigl\{L\in\CL\:\big|\:L\subset F_{i}(x,\de)\bigr\}\ef \end{equation} Introduce `$\preceq$' as the partial order on $\CL_{i}$ defined by set inclusion; \ie for any two subspaces $L,L'\in\CL_{i}$, $L\preceq L'$ if and only if $L\subseteq L'$. Then the following concepts are well-defined, as shown in Proposition \ref{The:UnconstrSub} below.
\vspace*{0.15cm} \begin{definition}[Unconstrained Subspace, Support Rank]\label{Def:SupportRank}
(a) The \emph{unconstrained subspace} $L_{i}$ of chance constraint $i\in\BN_{1}^{N}$ is the
unique maximal element in $\CL_{i}$, in the sense that $L\preceq L_{i}$ for all $L\in\CL_{i}$.
(b) The \emph{support rank} $\sri\in\BN_{0}^{d}$ of chance constraint $i\in\BN_{1}^{N}$ is equal
to $d$ minus the dimension of $L_{i}$,
\begin{equation*}
\sri:=d-\dim L_{i}\ef
\end{equation*}
\vspace*{-0.5cm} \end{definition} \vspace*{0.15cm}
It is a minor technicality in Definition \ref{Def:SupportRank} that any $\Pb$-null set that adversely influences the dimension of the unconstrained subspace can be removed from $\Delta$; this is tacitly understood.
Observe that if $\CL_{i}$ contains only the trivial subspace, then the support rank equals the full dimension $d$ of the decision space. On the other hand, if $\CL_{i}$ contains more than the trivial subspace, then the support rank becomes strictly less than $d$.
\vspace*{0.15cm} \begin{proposition}[Well-Definedness of Unconstrained Subspace]\label{The:UnconstrSub}
The collection $\CL_{i}$ contains a unique maximal element $L_{i}$ in the set-inclusion sense,
\ie $L_{i}$ contains all other elements of $\CL_{i}$ as subsets. \end{proposition} \vspace*{0.15cm}
\begin{proof}
First, note that $\CL_{i}$ is always non-empty, because for every $x\in\BX$ and every $\de\in
\Delta$ the level set $F_{i}(x,\de)$ includes the origin by its definition in
\eqref{Equ:LevelSets}. Therefore $\CL_{i}$ contains (at least) the trivial subspace $\{0\}$.
Second, since every chain in $\CL_{i}$ has an upper bound in $\CL_{i}$ (namely the union of its
elements, which is again a linear subspace and is contained in every level set), \emph{Zorn's
Lemma} (or the \emph{Axiom of Choice}, \cf \cite[p.\,50]{Boll:1999}) implies that $\CL_{i}$ has
at least one maximal element in the `$\preceq$'-sense.
Third, in order to prove that the maximal element is unique, suppose that $L_{i}^{(1)},
L_{i}^{(2)}$ are two maximal elements of $\CL_{i}$.
It will be shown that the sum $L_{i}^{(1)}+L_{i}^{(2)}\in\CL_{i}$, so that
$L_{i}^{(1)}\neq L_{i}^{(2)}$ would contradict their maximality.
According to \eqref{Equ:SubSpace}, it must be shown that $L_{i}^{(1)}+L_{i}^{(2)}
\subset F_{i}(x,\de)$ for any fixed values $x\in\BX$ and $\de\in\Delta$. To see this, pick
\begin{equation*}
\xi\in L_{i}^{(1)}+L_{i}^{(2)}\quad\Longrightarrow\quad\xi=\xi^{(1)}+\xi^{(2)}
\quad\text{for some}\enspace\xi^{(1)}\in L_{i}^{(1)},\:\xi^{(2)}\in L_{i}^{(2)}\ef
\end{equation*}
Then apply \eqref{Equ:LevelSets} twice to obtain
\begin{equation*}
f_{i}(x+\xi^{(1)}+\xi^{(2)},\de)=f_{i}(x+\xi^{(1)},\de)=f_{i}(x,\de)\ec
\end{equation*}
because $\xi^{(2)}\in L_{i}^{(2)}$ and $\xi^{(1)}\in L_{i}^{(1)}$. \end{proof} \vspace*{0.15cm}
\subsection{The Support Rank Lemma}\label{Sec:RankLemma}
The following lemma provides the link between the support rank of a chance constraint and its support dimension.
\vspace*{0.15cm} \begin{lemma}[Support Rank]\label{The:RankLemma}
Suppose that a chance constraint $i\in\BN_{1}^{N}$ has the support rank $\sri\in\BN_{1}^{d}$.
Then its support dimension in the $\MSP$ is bounded by $\sdi\leq\sri$. \end{lemma} \vspace*{0.15cm}
\begin{proof}
Without loss of generality, the proof is given for the first chance constraint $i=1$.
Pick any outcome of the multi-sample $\bom\in\Delta^{K}$ (outside any $\Pr^{K}$-null set for
which the support rank condition may not hold).
By the assumption, there exists a linear subspace $L_{1}\subset\BRd$ of dimension $d-\sr_{1}$
for which
\begin{equation*}
f_{1}(x+\xi,\de)=f_{1}(x,\de)\qquad\fa x\in\BX,\:\:\fa\xi\in L_{1},\:\:\fa\de\in\Delta\ef
\end{equation*}
The orthogonal complement of $L_{1}$, $L_{1}^{\perp}$, is also a linear subspace of $\BRd$ with
dimension $\sr_{1}$, and every vector in $\BRd$ can be uniquely written as the orthogonal sum of
vectors in $L_{1}$ and $L_{1}^{\perp}$, \cf \cite[p.\,135]{Boll:1999}.
For the sake of a contradiction, suppose that $i=1$ contributes more than $\sr_{1}$ support
constraints to the resulting $\DSP$, \ie $|\bSc_{1}|\geq\sr_{1}+1$.
For any $\kappa_{1}\in\bSc_{1}$, let
\begin{equation*}
\bxo_{\kappa_{1}}:=
\bxo\bigl(\bom^{(1)}\setminus\{\bde^{(1,\kappa_{1})}\},\bom^{(2)},...,\bom^{(N)}\bigr)
\end{equation*}
be the solution obtained if this support constraint is omitted.
By Definition \ref{Def:SupConstr}, if a support constraint is omitted from $\DSP$, its solution
moves away from $\bxo_{0}$, \ie $\bxo_{0}\neq\bxo_{\kappa_{1}}$ for all $\kappa_{1}\in\bSc_{1}$.
Denote the collection of all solutions by
\begin{equation*}
X:=\bigl\{\bxo_{\kappa_{1}}\:\big|\:\kappa_{1}\in\bSc_{1}\bigr\}\cup\{\bxo_{0}\}\ec
\end{equation*}
so that $|X|\geq\sr_{1}+2$.
Observe that each $\bxo_{\kappa_{1}}$ is feasible with respect to all constraints of the $\DSP$,
except for the one generated by $\de^{(1,\kappa_{1})}$, which is necessarily violated according
to Definition \ref{Def:SupConstr}.
Since $\BRd$ is the orthogonal direct sum of $L_{1}$ and $L_{1}^{\perp}$, for each point in $X$
there is a unique orthogonal decomposition of
\begin{equation*}
\bxo_{\kappa_{1}}=v_{\kappa_{1}}+w_{\kappa_{1}}\ec\qquad\text{where}\enspace v_{\kappa_{1}}
\in L_{1},\enspace w_{\kappa_{1}}\in L_{1}^{\perp}\ec
\end{equation*}
for each $\kappa_{1}\in\bSc_{1}\cup\{0\}$.
Consider the set
\begin{equation*}
W:=\bigl\{w_{\kappa_{1}}\:\big|\:\kappa_{1}\in\bSc_{1}\cup\{0\}\bigr\}\ef
\end{equation*}
By the hypothesis, $W$ contains at least $\sr_{1}+2$ distinct points in the
$\sr_{1}$-dimensional subspace $L_{1}^{\perp}$.
According to Radon's Theorem \cite[p.\,151]{Ziegler:2007}, $W$ can be split into two disjoint
subsets, $W_{A}$ and $W_{B}$, such that there exists a point $\tilde{w}$ in the intersection of
their convex hulls:
\begin{equation}\label{Equ:WInConvHull1}
\tilde{w}\in \conv\bigl\{W_{A}\bigr\}\cap \conv\bigl\{W_{B}\bigr\}\ef
\end{equation}
Split the indices in $\bSc_{1}\cup\{0\}$ correspondingly into $I_{A}$ and $I_{B}$, and observe
that every $w_{A}\in W_{A}$ satisfies the constraints in $I_{B}$ (indeed, $f_{1}(w_{\kappa_{1}},
\cdot\,)=f_{1}(\bxo_{\kappa_{1}},\cdot\,)$ because $v_{\kappa_{1}}\in L_{1}$, and each
$\bxo_{\kappa_{1}}$ with $\kappa_{1}\in I_{A}$ is feasible for every constraint indexed by
$I_{B}$):
\begin{equation*}
f_{1}\bigl(w_{A},\bde^{(1,\kappa_{1})}\bigr)\leq 0\quad\fa\kappa_{1}\in I_{B}
\qquad\Longrightarrow\qquad
f_{1}\bigl(\tilde{w},\bde^{(1,\kappa_{1})}\bigr)\leq 0\quad\fa\kappa_{1}\in I_{B}\ef
\end{equation*}
The last implication follows because $\tilde{w}\in\conv\{W_{A}\}$ and $f_{1}(\,\cdot\,,
\bde^{(1,\kappa_{1})})$ is convex.
Similarly, every point $w_{B}\in W_{B}$ satisfies the constraints in $I_{A}$:
\begin{equation*}
f_{1}\bigl(w_{B},\bde^{(1,\kappa_{1})}\bigr)\leq 0\quad\fa\kappa_{1}\in I_{A}
\qquad\Longrightarrow\qquad
f_{1}\bigl(\tilde{w},\bde^{(1,\kappa_{1})}\bigr)\leq 0\quad\fa\kappa_{1}\in I_{A}\ef
\end{equation*}
Combining both statements thus yields
\begin{equation}\label{Equ:ConstrSatisfaction}
f_{1}(\tilde{w},\bde^{(1,\kappa_{1})})\leq 0\qquad\fa\kappa_{1}\in\bSc_{1}\ef
\end{equation}
According to \eqref{Equ:WInConvHull1}, $\tilde{w}$ can be expressed as a convex combination of
elements in $W_{A}$ or $W_{B}$.
Splitting the points in $X$ into $X_{A}$ and $X_{B}$ correspondingly and applying the same
convex combination yields some
\begin{equation}\label{Equ:XInConvHull}
\tilde{x}\in\conv\bigl\{X_{A}\bigr\}\cap\conv\bigl\{X_{B}\bigr\}\ec
\end{equation}
and thereby also some $\tilde{v}\in L_{1}$ with $\tilde{x}=\tilde{v}+\tilde{w}$.
To establish the contradiction two things remain to be verified: first that $\tilde{x}$ is
feasible with respect to all constraints, and second that it has a lower cost (or a better
tie-break value) than $\bxo_{0}$.
For the first, $\tilde{x}\in\BX$ because all points of $X$ lie in $\BX$ and $\tilde{x}\in
\conv\{X\}$. Moreover, thanks to \eqref{Equ:ConstrSatisfaction},
\begin{equation*}
f_{1}\bigl(\tilde{x},\bde^{(1,\kappa_{1})}\bigr)=f_{1}\bigl(\tilde{w},\bde^{(1,\kappa_{1})}
\bigr)\leq 0\qquad\fa\kappa_{1}\in\bSc_{1}\ef
\end{equation*}
For the second, pick the set from $X_{A}$ and $X_{B}$ that does not contain $\bxo_{0}$; without
loss of generality, say this is $X_{A}$.
By construction, all elements of $X_{A}$ have a strictly lower objective function value (or at
least a better tie-break value) than $\bxo_{0}$.
By linearity this also holds for all points in $\conv\{X_{A}\}$, where $\tilde{x}$ lies
according to \eqref{Equ:XInConvHull}. \end{proof}
\vspace*{0.15cm} \begin{remark}[Support Rank versus Support Dimension]\label{Rem:SuppDim}
While the support rank $\sri$ is a property of chance constraint $i$ alone, the support
dimension $\sdi$ may depend on the overall setup of the $\MSP$.
The support dimension $\sdi$ constitutes the relevant basis for selecting the sample size
$K_{i}$. However, it may be difficult to determine for practical problems, as it may depend on
the interactions of multiple chance constraints (see Example \ref{Exa:SuppDim} below).
The support rank $\sri$ provides an easier-to-handle upper bound on $\sdi$, which can be used in
place of $\sdi$ for selecting $K_{i}$. \end{remark} \vspace*{0.15cm}
\begin{example}[Upper Bounding of Support Dimension]\label{Exa:SuppDim}
\textnormal{To illustrate the statements in Remark \ref{Rem:SuppDim}, consider a small example
of \eqref{Equ:MCP} in dimension $d=3$.
Let $\BX=[-1,1]^{3}$ be the unit cube, $c\tp=[0\,1\,1]$ with a lexicographic tie-break rule, and
two chance constraints $i=1,2$. Both constraints affect only the first and second coordinates
$x_{1}$ and $x_{2}$, leaving the choice of $x_{3}=-1$ for the third coordinate.
For $i=1$, the constraints are parallel hyperplanes constraining $x_{1}$ from below, where the
lower bound is given by the first uncertainty $\de_{1}$:
\begin{equation*}
f_{1}(x,\de) = -x_{1}+\de_{1}\ef
\end{equation*}
For $i=2$, the constraints are V-shaped, with the vertex located at $x_{1}=-\de_{2}$
and $x_{2}=-1$:
\begin{equation*}
f_{2}(x,\de) = \bigl|x_{1}+\de_{2}\bigr| - x_{2} - 1\ef
\end{equation*}
Both uncertainties $\de:=\{\de_{1},\de_{2}\}$ are uniformly distributed on the interval $[0,1]$.
The setup is illustrated in Figure \ref{Fig:SuppDim}.}
\textnormal{In this case, the support dimensions are $\sd_{1}=1$, $\sd_{2}=1$ and the support
ranks are $\sr_{1}=1$, $\sr_{2}=2$ for the constraints $i=1,2$. Notice that for $i=2$ the
support rank is strictly greater than its support dimension, due to the presence of constraint
$1$. Hence there is some conservatism in the upper bound, although both bounds are better than
the existing upper bound by the dimension of the decision space $d=3$ \cite[Thm.\,2]{CalaCamp:2005}.} \end{example} \vspace*{0.15cm}
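The support ranks in this example can also be verified numerically. The following minimal Python sketch (illustrative only; the function and variable names are ours) checks along which coordinate directions $f_{1}$ and $f_{2}$ are invariant: $f_{1}$ is invariant along $e_{2}$ and $e_{3}$, so $\dim L_{1}=2$ and $\sr_{1}=3-2=1$, while $f_{2}$ is invariant along $e_{3}$ only, so $\dim L_{2}=1$ and $\sr_{2}=3-1=2$.

```python
def f1(x, d1):
    # f_1(x, delta) = -x_1 + delta_1 (depends on x_1 only)
    return -x[0] + d1

def f2(x, d2):
    # f_2(x, delta) = |x_1 + delta_2| - x_2 - 1 (depends on x_1 and x_2)
    return abs(x[0] + d2) - x[1] - 1

x = (0.2, -0.3, 0.5)
# f1 is invariant along e2 and e3, so dim(L_1) = 2 and sr_1 = 3 - 2 = 1:
assert f1((x[0], 9.0, 9.0), 0.4) == f1(x, 0.4)
# f2 is invariant along e3 only, so dim(L_2) = 1 and sr_2 = 3 - 1 = 2:
assert f2((x[0], x[1], 9.0), 0.4) == f2(x, 0.4)
assert f2((x[0], 9.0, x[2]), 0.4) != f2(x, 0.4)
```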
\begin{figure}
\caption{Illustration of Example \ref{Exa:SuppDim}. The plot shows a projection on the $x_{1},
x_{2}$-plane for $x_{3}=-1$. The unit box $\BX$ is depicted by a dotted line. Two (possible)
samples are shown for the linear constraint $i=1$ ($x_{1}\geq\de_{1}$) and for the V-shaped
constraint $i=2$ ($x_{2}\geq\bigl|x_{1}+\de_{2}\bigr|-1$).}
\label{Fig:SuppDim}
\end{figure}
\section{Feasibility of the Scenario Solution}\label{Sec:ScenSol}
In the first part of this section, it is shown that for a proper choice of the sample sizes $K_{1},...,K_{N}$ the scenario solution $\xo\bigl(\om^{(1)},...,\om^{(N)}\bigr)$ is an approximate solution of the $\MCP$ (\ie it is a feasible point of each chance constraint $i=1,...,N$ in (\ref{Equ:MCP}b) with a high confidence $(1-\theta_{i})$). In the second part of this section, an explicit formula for computing the sample sizes $K_{1},..., K_{N}$ for given residual probabilities $\theta_{i}$ is provided.
\subsection{The Sampling Theorem}\label{Sec:MainThe}
Denote by $\B(\cdot\,;\cdot,\cdot)$ the binomial distribution function, which can equivalently be written as an incomplete beta function ratio, \cf \cite[Eqs.\,26.5.3,\,26.5.7]{Abramowitz:1970}: \begin{equation}\label{Equ:BinDistr}
\B(\ep;n,K):=\sum_{j=0}^{n}{K\choose j}\ep^{j}(1-\ep)^{K-j}\ef \end{equation}
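For concreteness, \eqref{Equ:BinDistr} can be evaluated directly as a finite sum. The following minimal Python sketch (illustrative only, with names chosen here) also exhibits the monotone decrease of $\B$ in $K$ for fixed $\ep$ and $n$, which underlies the sample-size computations of Section \ref{Sec:Chernoff}.

```python
from math import comb

def B(eps, n, K):
    """Distribution function B(eps; n, K) of (Equ:BinDistr):
    sum_{j=0}^{n} C(K, j) * eps^j * (1 - eps)^(K - j)."""
    return sum(comb(K, j) * eps ** j * (1 - eps) ** (K - j) for j in range(n + 1))

# For fixed eps and n, B decreases monotonically in K, which is what allows
# the smallest admissible sample size to be found by bisection later on.
assert B(0.1, 1, 100) < B(0.1, 1, 50)
```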
\vspace*{0.15cm} \begin{theorem}[Sampling Theorem]\label{The:Sampling}
Consider problem \eqref{Equ:MSP} under Assumptions \ref{Ass:Convexity}, \ref{Ass:Uniqueness},
\ref{Ass:Independence}, \ref{Ass:Feasibility}, \ref{Ass:SampleSize}. Then
\begin{equation}\label{Equ:Sampling}
\Pb^{K}\bigl[\V_{i}(\om^{(1)},...,\om^{(N)})>\ep_{i}\bigr]\leq\B(\ep_{i};\sri-1,K_{i})\ec
\end{equation}
for each chance constraint $i\in\BN_{1}^{N}$, whose support rank is $\sri$. \end{theorem} \vspace*{0.15cm}
\begin{proof}
The result is an extension of \cite[Thm.\,2.4]{CampGar:2008} for the classic scenario approach,
which is also used as a basis for this proof.\footnote{The authors thank an anonymous reviewer
for his/her helpful suggestions on simplifying the proof.}
Without loss of generality, consider the first chance constraint $i=1$; the result for the other
chance constraints $i=2,...,N$ follows analogously.
Consider the conditional probability
\begin{equation}\label{Equ:CondProb}
\Pb^{K}\bigl[\V_{1}(\om^{(1)},...,\om^{(N)})>\ep_{1}\:\big|\:
\om^{(2)},...,\om^{(N)}\bigr]\ec
\end{equation}
\ie the probability of drawing $\om^{(1)}$ such that $\xo(\om^{(1)},...,\om^{(N)})$ has a
probability of violating `$f_{1}(\,\cdot\,,\den)\leq 0$' that is higher than $\ep_{1}$, given
fixed values for the other samples $\om^{(2)},...,\om^{(N)}$.
Clearly, the quantity in \eqref{Equ:CondProb} generally depends on the multi-samples $\om^{(2)},
...,\om^{(N)}$. However, for $\Pb^{K_{2}+...+K_{N}}$-almost every value of these multi-samples
\eqref{Equ:CondProb} can be bounded by
\begin{equation}\label{Equ:CondProbBound}
\Pb^{K}\bigl[\V_{1}(\om^{(1)},...,\om^{(N)})>\ep_{1}\:\big|\:\om^{(2)},...,\om^{(N)}\bigr]
\leq \B(\ep_{1};\sr_{1}-1,K_{1})\ef
\end{equation}
Indeed, by Assumption \ref{Ass:Convexity}, for $\Pb^{K_{2}+...+K_{N}}$-almost every
$\om^{(2)},...,\om^{(N)}$ the function $\tilde{f}:\BRd\to\BR$ defined by
\begin{equation*}
\tilde{f}(x)\equiv \max_{i\in\BN_{2}^{N}}\max_{\kappa_{i}\in\BN_{1}^{K_{i}}}
f_{i}\bigl(x,\de^{(i,\kappa_{i})}\bigr)
\end{equation*}
is convex, as it is the point-wise maximum of convex functions.
Then all sampled constraints of $i=2,...,N$ can be expressed as the deterministic convex
constraint `$\tilde{f}(x)\leq 0$', which can be considered as part of the convex set $\BX$.
Thus for $\Pb^{K_{2}+...+K_{N}}$-almost every $\om^{(2)},...,\om^{(N)}$ the problem takes the
form of a classic $\SCP$, to which the results of \cite{CampGar:2008} apply.
In particular, \cite[Thm.\,2.4]{CampGar:2008} yields \eqref{Equ:CondProbBound} for
$\Pb^{K_{2}+...+K_{N}}$-almost every $\om^{(2)},...,\om^{(N)}$.
The difference from using the support rank $\sr_{1}$ in place of the optimization dimension $d$
in \cite[Thm.\,2.4]{CampGar:2008} is minor.
The key fact is that $\sr_{1}$ provides an upper bound for the number of support constraints
contributed by constraint $1$, according to Lemma \ref{The:RankLemma}, and hence it can replace
$d$ in \cite[Prop.\,2.2]{CampGar:2008} and all subsequent results.
The final result is obtained by deconditioning the probability in \eqref{Equ:CondProb}:
\begin{subequations}\begin{align*}
\Pb^{K}&\bigl[\V_{1}(\om^{(1)},...,\om^{(N)})>\ep_{1}\bigr]=\\
&=\int_{\om^{(2)},...\om^{(N)}}
\Pb^{K}\bigl[\V_{1}(\om^{(1)},...,\om^{(N)})>\ep_{1}\:\big|\:\om^{(2)},...,\om^{(N)}\bigr]
\Pb^{K_{2}}\bigl[\di\om^{(2)}\bigr]...\Pb^{K_{N}}\bigl[\di\om^{(N)}\bigr]\\
&\leq\int_{\om^{(2)},...\om^{(N)}}
\B(\ep_{1};\sr_{1}-1,K_{1})
\Pb^{K_{2}}\bigl[\di\om^{(2)}\bigr]...\Pb^{K_{N}}\bigl[\di\om^{(N)}\bigr]\\
&=\B(\ep_{1};\sr_{1}-1,K_{1})\ec
\end{align*}\end{subequations}
based on \cite[pp.\,183,222]{Shir:1996}, where the third line uses \eqref{Equ:CondProbBound}. \end{proof}
\subsection{Explicit Bounds on the Sample Sizes}\label{Sec:Chernoff}
Formula \eqref{Equ:Sampling} in Theorem \ref{The:Sampling} ensures that, with a \emph{confidence level} of $1-\B(\ep_{i};\sr_{i}-1,K_{i})$, the violation probability satisfies $\V_{i}(\om^{(1)},...,\om^{(N)})\leq\ep_{i}$. However, in practical applications a given confidence level $(1-\theta_{i})\in (0,1)$ is often imposed, while an appropriate sample size $K_{i}$ has to be identified.
The most accurate way of finding this sample size is by observing that $\B(\ep_{i};\sr_{i}-1,K_{i})$ is a monotonically decreasing function in $K_{i}$ and applying a numerical procedure (\eg regula falsi) for computing the smallest sample size that ensures $\B(\ep_{i};\sr_{i}-1,K_{i})\leq\theta_{i}$. The resulting $K_{i}$ shall be referred to as the \emph{implicit bound} on the sample size.
For a qualitative analysis of the behavior of this implicit bound as $\ep_{i}$ and $\theta_{i}$ vary (and also for a good initialization of the regula falsi procedure), it is useful to derive an \emph{explicit bound} on the sample size $K_{i}$. Since formula \eqref{Equ:Sampling} cannot be readily inverted, the beta distribution function must first be controlled by some upper bound, which is then inverted.
A straightforward approach is to use a Chernoff bound \cite{Cher:1952}, as shown in \cite[Rem.\,2.3]{Cala:2009} and \cite[Sec.\,5]{Cala:2010}. This provides a simple explicit formula for $K_{i}$: \begin{equation}\label{Equ:ExpBound1}
K_{i}\geq\displaystyle\frac{2}{\ep_{i}}
\left[\log\Bigl(\frac{1}{\theta_{i}}\Bigr)+\sr_{i}-1\right]\ec \end{equation} where $\log(\cdot)$ denotes the natural logarithm. As shown in \cite[Cor.\,1]{AlamoEtAl:2010}, this can be further improved to a better, albeit more complicated bound for $K_{i}$: \begin{equation}\label{Equ:ExpBound2}
K_{i}\geq\displaystyle\frac{1}{\ep_{i}}
\left[\log\Bigl(\frac{1}{\theta_{i}}\Bigr)+
\sqrt{2(\sr_{i}-1)\log\Bigl(\frac{1}{\theta_{i}}\Bigr)}+\sr_{i}-1\right]\ef \end{equation}
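The implicit bound and the explicit bounds \eqref{Equ:ExpBound1} and \eqref{Equ:ExpBound2} can be sketched as follows (a minimal Python illustration; the bracketing-plus-bisection search stands in for the regula falsi procedure mentioned above, and all names are ours):

```python
from math import ceil, comb, log, sqrt

def B(eps, n, K):
    # distribution function of (Equ:BinDistr)
    return sum(comb(K, j) * eps ** j * (1 - eps) ** (K - j) for j in range(n + 1))

def implicit_K(eps, theta, sr):
    """Smallest K with B(eps; sr - 1, K) <= theta, via bracketing and bisection."""
    hi = 1
    while B(eps, sr - 1, hi) > theta:   # grow an upper bracket
        hi *= 2
    lo = hi // 2 + 1                    # B at hi // 2 still exceeds theta
    while lo < hi:                      # bisection on the monotone tail
        mid = (lo + hi) // 2
        if B(eps, sr - 1, mid) <= theta:
            hi = mid
        else:
            lo = mid + 1
    return lo

def chernoff_K(eps, theta, sr):
    # explicit Chernoff-type bound (Equ:ExpBound1)
    return ceil((2 / eps) * (log(1 / theta) + sr - 1))

def refined_K(eps, theta, sr):
    # improved explicit bound (Equ:ExpBound2)
    return ceil((1 / eps) * (log(1 / theta)
                             + sqrt(2 * (sr - 1) * log(1 / theta)) + sr - 1))
```

For instance, with $\ep_{i}=0.1$, $\theta_{i}=10^{-6}$ and $\sri=2$, the implicit bound is considerably smaller than both explicit bounds, with \eqref{Equ:ExpBound2} lying between the implicit bound and \eqref{Equ:ExpBound1}.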
\section{The Sampling-and-Discarding Approach}\label{Sec:DiscardConstr}
The sampling-and-discarding approach has previously been proposed for the classic scenario approach \cite{Cala:2010,CampGar:2011}; this section describes its extension to problems with multiple chance constraints.
The fundamental goal is to reduce the objective value of the scenario solution, while maintaining the same confidence levels for feasibility with respect to the chance constraints (see Section \ref{Sec:SCP}). To this end, the sample sizes $K_{i}$ are deliberately increased above the bounds derived in Section \ref{Sec:ScenSol}, in exchange for allowing a certain number $R_{i}$ of sampled constraints to be discarded \emph{a posteriori}, \ie after the outcomes of the samples have been observed.
In this section, first the possible procedures for discarding constraints are recalled. Second, the main result on the sampling-and-discarding approach for the $\MCP$ is stated. It provides an implicit formula for the selection of appropriate sample-and-discarding pairs $(K_{i},R_{i})$, which may again vary for different chance constraints $i=1,...,N$. Third, explicit bounds for the choice of pairs $(K_{i},R_{i})$ are provided.
\subsection{Constraint Discarding Procedure}\label{Sec:DiscardProc}
For each chance constraint of the $\MCP$, if $R_{i}\geq 0$ sampled constraints are to be discarded a posteriori, the discarding procedure is performed by a pre-defined \emph{(sample) removal algorithm}.
\vspace*{0.15cm} \begin{definition}[Removal Algorithm]\label{Def:RemAlg}
For each chance constraint $i=1,...,N$, the \emph{(sample) removal algorithm}
$\CA_{i}^{(K_{i},R_{i})}:\Om\to\tOmi$ is a deterministic function on the overall multi-sample
$\om\in\Om$.
It returns a subset of samples $\tom^{(i)}\in\tOmi$, in which $R_{i}$ out of the $K_{i}$ samples
in $\om^{(i)}\in\Omi$ have been removed. \end{definition} \vspace*{0.15cm}
Obviously, the algorithm should aim at improving the objective value from $\MSP[\om^{(1)},...,\om^{(N)}]$ to $\MSP[\tom^{(1)},...,\tom^{(N)}]$ as much as possible. Various possible removal algorithms are described in \cite[Sec.\,5.1]{Cala:2010}, and further references are found in \cite[Sec.\,2]{CampGar:2011}. Brief descriptions of the most important removal algorithms are listed below.
\vspace*{0.15cm} \begin{example}\label{Exa:ConstrRem}
(a) Optimal Constraint Removal. \textnormal{The best improvement of the objective function value
is achieved by solving the reduced problem for all possible ways of removing $R_{i}$ of the
$K_{i}$ samples.
However, a major drawback of this removal algorithm is its combinatorial complexity. Therefore
the algorithm becomes computationally intractable for larger values of $R_{i}$, in particular
when samples have to be removed for multiple constraints.}
(b) Greedy Constraint Removal. \textnormal{Starting by solving the $\MSP[\om^{(1)},...,
\om^{(N)}]$ with all $K_{i}$ samples, the $R_{i}$ samples are removed in $R_{i}$ sequential
steps. In each step, a single sample is removed by the optimal constraint removal procedure.
Between multiple constraints $i$, the removal algorithm can either proceed in a fixed order or
again in a greedy fashion. For most practical problems this algorithm can be expected to work
almost as well as (a), while carrying a much lower computational burden.}
(c) Marginal Constraint Removal. \textnormal{The $R_{i}$ samples are removed in $R_{i}$
sequential steps, where the removed sample in each step is selected according to the highest
Lagrange multiplier.
Compared to the greedy constraint removal, the decision is thus based on the highest marginal
cost improvement \cite[Cha.\,5]{BoydVan:2004}, instead of the highest total cost improvement.
In the case of multiple constraints $i$, the removal algorithm can either handle them all
together, or proceed sequentially.} \end{example} \vspace*{0.15cm}
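As a toy illustration of the greedy scheme (b), consider the hypothetical one-dimensional scenario program $\min x$ subject to the sampled constraints $x\geq\de^{(\kappa)}$ (this example is ours, not taken from the cited references); its solution is the maximum of the kept samples, so each greedy step discards the currently active sample.

```python
def greedy_removal(samples, R):
    """Greedy removal for a toy 1-D scenario program: minimize x subject to
    x >= delta_k for all kept samples, so the solution is max(kept).
    In each of R sequential steps, the single sample whose removal improves
    the objective the most is discarded."""
    kept = list(samples)
    for _ in range(R):
        # objective value after removing candidate sample k, for every k
        best_obj, best_k = min(
            (max(kept[:k] + kept[k + 1:]), k) for k in range(len(kept))
        )
        kept.pop(best_k)
    return kept, max(kept)

kept, x_star = greedy_removal([0.3, 0.9, 0.1, 0.7, 0.5], R=2)
# discards 0.9 and 0.7; the relaxed solution is x_star = 0.5, and both
# discarded samples exceed it, i.e. they are violated by the relaxed solution
```

Note that both discarded samples are violated by the relaxed solution, which is exactly the property required of removal algorithms in the discarding theory below.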
The existing theory for the $\SCP$ \cite[Sec.\,4.1.1]{Cala:2010} and \cite[Ass.\,2.2]{CampGar:2011} assumes that all of the removed constraints are violated by the relaxed scenario solution.
\vspace*{0.15cm} \begin{assumption}[Violation of Discarded Constraints]\label{Ass:DiscViol}
Every chance constraint $i\in\BN_{1}^{N}$ with $R_{i}>0$ satisfies the following condition: for
almost every $\om\in\Om$, each of the constraints discarded by the removal algorithm
$\CA_{i}^{(K_{i},R_{i})}(\om)$ is violated by the solution of the reduced problem, \ie
\begin{equation}\label{Equ:DiscViol}
f_{i}\bigl(\xo(\tom^{(1)},...,\tom^{(N)}),\de^{(i,\kappa_{i})}\bigr)>0\qquad
\fa\de^{(i,\kappa_{i})}\in\bigl(\om^{(i)}\setminus\tom^{(i)}\bigr)\ef
\end{equation} \end{assumption} \vspace*{-0.2cm}
While Assumption \ref{Ass:DiscViol} is sufficient for the $\MCP$ as well, it may turn out to be too restrictive for some problem instances. In fact, due to the interplay of multiple chance constraints, it may not be possible to find $R_{i}$ constraints that are violated by the relaxed scenario solution (this situation may also occur for a single chance constraint, in the presence of a deterministic constraint set $\BX$). In this case, the \emph{monotonicity property}, as introduced below, provides a possible alternative.
\vspace*{0.15cm} \begin{definition}[Monotonicity Property]\label{Def:Monotonicity}
A chance constraint $i\in\BN_{1}^{N}$ is called \emph{monotonic} if for all $K_{i}\in\BN$ and
almost every $\om^{(i)}\in\Omi$ the following condition holds:
Every point in the feasible set of sampled instances of chance constraint $i$,
\begin{equation}\label{Equ:DefMono1}
\BX_{i}(\om^{(i)}):=\bigl\{\xi\in\bBR^{d}\:\big|\:f_{i}(\xi,\de^{(i,\kappa_{i})})\leq 0
\:\:\:\fa\kappa_{i}\in\BN_{1}^{K_{i}}\bigr\}\ec
\end{equation}
where $\bBR:=\BR\cup\{\pm\infty\}$, is violated by a new sampled constraint only if the optimal
point in $\BX_{i}(\om^{(i)})$,
\begin{equation}\label{Equ:DefMono2}
\xo_{i}(\om^{(i)}):=\argmin\bigl\{c\tp\xi\:\big|\:\xi\in\BX_{i}(\om^{(i)})\bigr\}
\end{equation}
is also violated.
In other words, for every $\xi\in\BX_{i}(\om^{(i)})$ and almost every $\den\in\Delta$,
\begin{equation}\label{Equ:DefMono3}
f_{i}\bigl(\xi,\den\bigr)>0\qquad\Longrightarrow\qquad
f_{i}\bigl(\xo_{i}(\om^{(i)}),\den\bigr)>0\ef
\end{equation} \end{definition} \vspace*{-0.3cm}
\begin{assumption}[Monotonicity of Chance Constraints]\label{Ass:Monotonicity}
Every chance constraint $i\in\BN_{1}^{N}$ enjoys the \emph{monotonicity property}. \end{assumption} \vspace*{0.15cm}
Definition \ref{Def:Monotonicity} is easy to check for most practical problems, without involving any calculations. The following example illustrates the intuition behind this concept.
\vspace*{0.15cm} \begin{example}[Monotonic Chance Constraints]\label{Exa:Monotonicity}
\textnormal{Consider an $\MSP$ in $d=2$ dimensions, where $\BX=[-100,100]^{2}\subset\BR^{2}$ and
$c=[\:0\:\:1\:]\tp$, $\den=[\den_{1}\:\den_{2}\:\den_{3}]$ belongs to $\Delta=\{-1,1\}\times
[-1,1]\times [-1,1]$, and there are $N=2$ chance constraints.}
(a) Monotonic Chance Constraint. \textnormal{Let the first chance constraint $i=1$ be of the
linear form}
\begin{equation*}
\begin{bmatrix}
\de^{(1,\kappa_{1})}_{1} &1
\end{bmatrix}
x-\de^{(1,\kappa_{1})}_{2}\leq 0\qquad
\fa\kappa_{1}=1,...,K_{1}\ef
\end{equation*}
\textnormal{Observe that for any number $K_{1}\in\BN$ and all possible sample values
$\om^{(1)}$, an additional sample $\den$ either cuts off no point from $\BX_{1}(\om^{(1)})$,
or the point $\xo_{1}(\om^{(1)})$ becomes infeasible.
This fact is illustrated in Figure \ref{Fig:Monotonicity}(a).
Therefore chance constraint $i=1$ enjoys the monotonicity property.}
(b) Non-Monotonic Chance Constraint. \textnormal{Let the second chance constraint $i=2$ be of
the linear form}
\begin{equation*}
\begin{bmatrix}
\de^{(2,\kappa_{2})}_{2} &1
\end{bmatrix}
x-\de^{(2,\kappa_{2})}_{3}\leq 0\qquad
\fa\kappa_{2}=1,...,K_{2}\ec
\end{equation*}
\textnormal{Observe that for any number $K_{2}$ there exist sample values $\om^{(2)}$ that make
it possible for a new sample $\den$ to cut off some previously feasible point from
$\BX_{2}(\om^{(2)})$, without rendering the point $\xo_{2}(\om^{(2)})$ infeasible.
A possible configuration of this type is depicted in Figure \ref{Fig:Monotonicity}(b).
Therefore chance constraint $i=2$ does not enjoy the monotonicity property.} \end{example} \vspace*{0.15cm}
\begin{figure}
\caption{Illustration of Example \ref{Exa:Monotonicity}: (a) a monotonic chance constraint,
where any new sample that cuts off a feasible point also renders $\xo_{1}(\om^{(1)})$
infeasible; (b) a non-monotonic chance constraint, where a new sample may cut off a previously
feasible point while $\xo_{2}(\om^{(2)})$ remains feasible.}
\label{Fig:Monotonicity}
\end{figure}
The usefulness of the monotonicity property is based on the following result, whose proof is a straightforward consequence of Definition \ref{Def:Monotonicity} and is therefore omitted.
\vspace*{0.15cm} \begin{lemma}\label{The:Monotonicity}
Let $K_{i}\in\BN$ and $R_{i}\leq K_{i}$.
Suppose chance constraint $i\in\BN_{1}^{N}$ of $\MCP$ is monotonic and the removal algorithm
$\CA_{i}^{(K_{i},R_{i})}$ is sequential. Then for almost every $\om^{(i)}\in\Delta^{K_{i}}$ the
following holds:\\
(a) With probability one every point $\xi$ in the set $\BX_{i}(\om^{(i)})$ has a violation
probability less than or equal to that of the cost-minimal point $\xo_{i}(\om^{(i)})$:
\begin{equation}\label{Equ:Monotonicity}
\Pb\bigl[f_{i}(\xi,\den)>0\bigr]\leq
\Pb\bigl[f_{i}(\xo_{i}(\om^{(i)}),\den)>0\bigr]
\qquad\fa\xi\in\BX_{i}(\om^{(i)})\ef
\end{equation}
(b) The final solution $\xo_{i}(\tom^{(i)})$, where $\tom^{(i)}=
\CA_{i}^{(K_{i},R_{i})}(\om)$, violates all $R_{i}$ removed constraints. \end{lemma}
\subsection{The Discarding Theorem}\label{Sec:DiscTheorem}
For the sampling-and-discarding approach, the following result holds for the $\MCP$.
\vspace*{0.15cm} \begin{theorem}[Discarding Theorem]\label{The:Discarding}
Consider the problem \eqref{Equ:MCP} under Assumptions \ref{Ass:Convexity},
\ref{Ass:Uniqueness}, \ref{Ass:Independence}, \ref{Ass:Feasibility}, \ref{Ass:SampleSize}, and
either \ref{Ass:DiscViol} or \ref{Ass:Monotonicity}.
Let $\CA_{i}^{(K_{i},R_{i})}$ be sample removal algorithms for each of its chance constraints
$i=1,...,N$, some of which may be trivial (\ie $R_{i}=0$).
Then it holds that
\begin{equation}\label{Equ:Discarding}
\Pb^{K}\bigl[\V_{i}(\tom^{(1)},...,\tom^{(N)})>\ep_{i}\bigr]\leq
\displaystyle{R_{i}+\sri-1 \choose R_{i}}\B(\ep_{i};R_{i}+\sri-1,K_{i})\ec
\end{equation}
where $\sri$ denotes the support rank of chance constraint $i$ and $\B(\cdot;\cdot,\cdot)$ the
binomial distribution function \eqref{Equ:BinDistr}. \end{theorem} \vspace*{0.15cm}
\begin{proof}
Here the $\MCP$ case is reduced to the $\SCP$ case, for which a detailed proof is available
in \cite[Sec.\,5.1]{CampGar:2011}.
First, suppose that Assumption \ref{Ass:DiscViol} holds. The proof in \cite[Sec.\,5.1]{CampGar:2011} works analogously for an arbitrary chance constraint $i\in\BN_{1}^{N}$, given
that an upper bound of the violation distribution is readily available from Theorem
\ref{The:Sampling}.
Second, suppose that Assumption \ref{Ass:Monotonicity} holds.
In this case the proof in \cite[Sec.\,5.1]{CampGar:2011} can be applied directly to the $\SCP$
which arises from the $\MCP$ if all chance constraints other than a particular
$i\in\BN_{1}^{N}$ are omitted (and also $\BX$ is omitted).
In particular, \eqref{Equ:Discarding} holds for the scenario solution of this $\SCP$, using
Lemma \ref{The:Monotonicity}(b).
Given that the chance constraint is monotonic and by virtue of Lemma \ref{The:Monotonicity}(a),
\eqref{Equ:Discarding} also holds for any point in $\BX_{i}(\om^{(i)})$, in particular for the
scenario solution of the $\MCP$. \end{proof} \vspace*{0.15cm}
The work of \cite{CampGar:2011} already provides an excellent account of the merits of the sampling-and-discarding approach, which does not require a restatement here. However, it should be emphasized that the scenario solution converges to the true solution of the $\MCP$ as the number of discarded constraints increases, provided that the constraints are removed by the optimal procedure of Example \ref{Exa:ConstrRem}(a).
\subsection{Explicit Bounds on the Sample-and-Discarding Pairs}\label{Sec:ExpDiscBounds}
Similar to Section \ref{Sec:ScenSol}, explicit bounds on the sample size $K_{i}$ can also be derived for the sampling-and-discarding approach, assuming the number of discarded constraints $R_{i}$ to be fixed. The technical details, using Chernoff bounds \cite{Cher:1952}, are worked out in \cite[Sec.\,5]{Cala:2010}. The resulting explicit bound is indicated here for the sake of completeness, \begin{equation}\label{Equ:ExpBoundSamp}
K_{i}\geq\displaystyle\frac{2}{\ep_{i}}\log\biggl(\frac{1}{\theta_{i}}\biggr)+
\frac{4}{\ep_{i}}\bigl(R_{i}+\sri-1\bigr)\ec \end{equation} where $\log(\cdot)$ denotes the natural logarithm.
Similarly, explicit bounds on the number of discarded constraints $R_{i}$ can be obtained, assuming the sample size $K_{i}$ to be fixed: \begin{equation}\label{Equ:ExpBoundDisc}
R_{i}\leq\ep_{i}K_{i}-\sri+1-\displaystyle\sqrt{2\ep_{i}K_{i}
\log\Bigl(\frac{(\ep_{i}K_{i})^{\sri-1}}{\theta_{i}}\Bigr)}\ef \end{equation} The technical details of this are found in \cite[Sec.\,4.3]{CampGar:2011}.
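The explicit bounds \eqref{Equ:ExpBoundSamp} and \eqref{Equ:ExpBoundDisc} translate directly into code; the following minimal Python sketch (illustrative only, with names chosen here) computes the sample-size bound for a fixed $R_{i}$ and the discarding bound for a fixed $K_{i}$.

```python
from math import ceil, floor, log, sqrt

def sample_size_bound(eps, theta, sr, R):
    """Explicit lower bound (Equ:ExpBoundSamp) on the sample size K_i,
    for a fixed number R of discarded constraints."""
    return ceil((2 / eps) * log(1 / theta) + (4 / eps) * (R + sr - 1))

def discard_bound(eps, theta, sr, K):
    """Explicit upper bound (Equ:ExpBoundDisc) on the number R_i of
    discarded constraints, for a fixed sample size K."""
    return floor(eps * K - sr + 1
                 - sqrt(2 * eps * K * log((eps * K) ** (sr - 1) / theta)))

# e.g. with eps = 0.1, theta = 1e-6, sr = 2:
K = sample_size_bound(0.1, 1e-6, 2, R=10)   # K grows linearly in R
R = discard_bound(0.1, 1e-6, 2, K=2000)     # R approaches eps*K for large K
```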
\section{Example: Minimal Diameter Cuboid}\label{Sec:Example}
The following academic example has been selected to highlight the strengths of the extensions to the scenario approach presented in this paper.
\subsection{Problem Statement}
Let $\den$ be a random point in $\Delta\subset\BRn$, whose distribution and support set are unknown, but sampled values can be obtained. The objective in this example is to construct the Cartesian product $C$ of closed intervals in $\BRn$ (`$n$-cuboid') of minimal $n$-diameter $W$, which is large enough to contain the point $\den$ in its $i$-th coordinate with probability $(1-\ep_{i})$. The setting is illustrated in Figure \ref{Fig:ExampleRP}.
Let $z\in\BRn$ denote the center point of the cuboid and $t\in\BRn_{+}$ the interval widths in each dimension, so that \begin{equation}\label{Equ:CuboidDef}
C=\bigl\{\xi\in\BRn\:\big|\:|\xi_{i}-z_{i}|\leq t_{i}/2\:\:\:\fa i\in\BN_{1}^{n}\bigr\}\ef \end{equation} Then the corresponding stochastic program reads as follows: \begin{subequations}\label{Equ:ExaNonConvProg}\begin{align}
\min_{z\in\BRn,t\in\BRn_{+}}\quad&\|t\|_{2}\ec\\
\st\quad&\Pr\bigl[z_{i}-t_{i}/2\leq\den_{i}\leq z_{i}+t_{i}/2\bigr]\geq(1-\ep_{i})
\qquad\fa i\in\BN_{1}^{n}\ef \end{align}\end{subequations} Since the objective function is not linear, \eqref{Equ:ExaNonConvProg} has to be reformulated (see Remark \ref{Rem:Generality}(a)) as \begin{subequations}\label{Equ:ExaConvProg}\begin{align}
&\min_{z\in\BRn,t\in\BRn_{+},T\in\BR}\:T\ec\hspace{8.9cm}\\
&\quad\st\quad\|t\|_{2}\leq T\ec\\
&\quad\pst\quad\Pr\Bigl[\max\bigl\{z_{i}-t_{i}/2-\den_{i},-z_{i}-t_{i}/2+\den_{i}\bigr\}\leq 0
\Bigr]\geq(1-\ep_{i})\quad\fa i\in\BN_{1}^{n}\:. \end{align}\end{subequations}
Note that \eqref{Equ:ExaConvProg} takes the form of a $\MCP$, for a $d=2n+1$ dimensional search space and $N=n$ chance constraints: the objective function (\ref{Equ:ExaConvProg}a) is linear; constraint (\ref{Equ:ExaConvProg}b) is deterministic and convex; and each of the chance constraints in (\ref{Equ:ExaConvProg}c) is convex in $z,t$ for any fixed value of the uncertainty $\den\in \Delta$.
Here each of the chance constraints $i=1,...,n$ depends on exactly two of the decision variables $[z;t;T]\in\BR^{2n+1}$, namely $z_{i}$ and $t_{i}$ (see Remark \ref{Rem:Generality}(c)). The convex and compact set $\BX$ is constructed from the positivity constraints on $t$, the deterministic and convex constraint (\ref{Equ:ExaConvProg}b), and some artificial bounds assumed on all variables. Existence of a feasible solution, and hence Assumption \ref{Ass:Uniqueness}, holds automatically from the problem setup.
\begin{figure}
\caption{Illustration of the numerical example for $n=2$. The point $\den\in\Delta$ appears
at random in $\BR^{2}$, according to some unknown distribution; the points drawn here are $166$
\iid samples of $\den$.
The objective is to construct the smallest product of two closed intervals (`2-cuboid'), drawn
here as the shaded rectangle, such that the probability of failing to contain the realization of
$\den$ is smaller than $\ep_{1}$ and $\ep_{2}$ in dimension $1$ and $2$, respectively.
}
\label{Fig:ExampleRP}
\end{figure}
\subsection{Solution via Scenario Approach}
By inspection, each chance constraint $i=1,...,n$ has support rank $\sri=2$, because it involves only the two variables $z_{i}$ and $t_{i}$. For a fixed confidence level, \eg $\theta=10^{-6}$, the implicit sample sizes $K_{1},...,K_{n}$ in \eqref{Equ:Sampling} can be computed for given values of $n$ and $\ep_{1},...,\ep_{n}\in (0,1)$ by a bisection-based algorithm (see Section \ref{Sec:Chernoff}). For simplicity, the levels $\ep_{1}=...=\ep_{n}$ are selected to be equal, and since $\sr_{1}=...=\sr_{N}=2$, the implicit sample sizes $K_{1}=...=K_{n}$ are also identical.
Given the outcomes of all multi-samples, the $\DSP$ is easily solved by the smallest $n$-cuboid that contains all sampled points; see also Figure \ref{Fig:ExampleRP}. In other words, here the $\DSP$ has an analytic solution.
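This analytic solution can be written down directly: the coordinate-wise minima and maxima of the sampled points determine the cuboid. A short Python sketch (illustrative only; names are ours):

```python
def cuboid_dsp(samples):
    """Analytic DSP solution for the minimal-diameter n-cuboid: the smallest
    axis-aligned box containing all sampled points. Returns the center z and
    the interval widths t per coordinate."""
    n = len(samples[0])
    lo = [min(p[i] for p in samples) for i in range(n)]
    hi = [max(p[i] for p in samples) for i in range(n)]
    z = [(l + h) / 2 for l, h in zip(lo, hi)]
    t = [h - l for l, h in zip(lo, hi)]
    return z, t

z, t = cuboid_dsp([(0.0, 1.0), (2.0, 3.0), (1.0, 0.0)])
# z = [1.0, 1.5], t = [2.0, 3.0]
```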
Table \ref{Tab:SampleSize}(a) summarizes the implicit sample sizes required for guaranteeing various chance constraint levels $\ep_{i}$ in various dimensions $n$ (all with $\theta=10^{-6}$). These sample sizes are also compared to those from the classic scenario approach, based on a reformulation of \eqref{Equ:ExaConvProg} as an $\SCP$ according to the procedure outlined in Section \ref{Sec:MCP}.
Observe from Table \ref{Tab:SampleSize} that the $\SCP$-based sample sizes are always larger than those obtained via the extensions of the $\MCP$ theory. This effect becomes more pronounced as the dimension $n$ of the optimization space grows. The reason is that the support dimension of each chance constraint remains constant for all $n$, whereas Helly's dimension grows linearly with $n$ (here the dimension of the $\SCP$ reformulation is $d=2n+1$). The marginal growth of the $\MCP$ sample size, despite the support rank $\sr_{i}=2$ being constant, results from the confidence level $\theta$ being (evenly) distributed among the chance constraints, \ie $\theta_{i}=\theta/n$ for all $i=1,...,n$.
\begin{table}[H]
\renewcommand\arraystretch{1.1}
\centering
\subfloat[b][$\MCP$-based Scenario Approach.]{
\begin{tabular}{cr|rrrrrrr}
\multicolumn{2}{c|}{\esp\esp\multirow{2}{1.4cm}{sample size $K_{i}$}}
&\multicolumn{7}{c}{cuboid dimension $n=$}\\
& & 2 & 3 & 5 & 10 & 50 & 100 & 500\\
\hline
\multirow{4}{*}{$\ep_{i}=$}
& 1\% & 1,734 & 1,777 & 1,831 & 1,903 & 2,072 & 2,144 & 2,311\\
& 5\% & 341 & 349 & 360 & 374 & 407 & 421 & 454\\
& 10\% & 166 & 170 & 176 & 182 & 199 & 205 & 221\\
& 25\% & 62 & 63 & 65 & 67 & 73 & 76 & 82
\end{tabular}}\\
\subfloat[b][$\SCP$-based Scenario Approach.]{
\begin{tabular}{cr|rrrrrrr}
\multicolumn{2}{c|}{\esp\esp\multirow{2}{1.4cm}{sample size $K_{i}$}}
&\multicolumn{7}{c}{cuboid dimension $n=$}\\
& & \parbox[l][4mm]{11mm}{
2\:\:\:} & \parbox[l][4mm]{11mm}{
3\:\:\:} &
\parbox[l][4mm]{11mm}{
5\:\:\:} & \parbox[l][4mm]{11mm}{
10\:\:} &
\parbox[l][4mm]{11mm}{
50\:\:} & \parbox[l][4mm]{11mm}{
100\:} &
\parbox[l][4mm]{11mm}{
500\:}\\
\hline
\multirow{4}{*}{$\ep_{i}=$}
& 1\% & 2,334 & 2,722 & 3,431 & 5,020 & 15,588 & 27,535 & 115,786\\
& 5\% & 459 & 536 & 677 & 992 & 3,095 & 5,477 & 23,093\\
& 10\% & 225 & 263 & 332 & 488 & 1,533 & 2,719 & 11,506\\
& 25\% & 84 & 99 & 125 & 186 & 595 & 1,063 & 4,550
\end{tabular}}
\caption{Implicit sample sizes $K_{1}=...=K_{n}$ for the $\MCP$-based and the $\SCP$-based
scenario approach, assuming a confidence level of $\theta=10^{-6}$, for varying problem
dimension $n$ and chance constraint levels $\ep_{1}=...=\ep_{n}$.\label{Tab:SampleSize}}
\renewcommand\arraystretch{1.0} \end{table}
The larger sample size of the $\SCP$-based approach, as compared to the $\MCP$-based approach, implies higher data requirements and higher computational efforts, but it also increases the conservatism of the scenario solution. The latter effect is quantified in Table \ref{Tab:RelObjValue}, showing the relative excess of the (average) objective function values of the $\SCP$-based solutions over those of the $\MCP$-based solutions. Note that the objective values achieved by the $\SCP$-based approach are always higher than those achieved by the $\MCP$-based approach, with the effect becoming increasingly significant as the dimension $n$ of the decision space grows larger.
\begin{table}[H]
\renewcommand\arraystretch{1.1}
\centering
\begin{tabular}{cr|rrrrrrr}
\multicolumn{2}{c|}{\multirow{2}{1.6cm}{\,\, relative obj. value}}
&\multicolumn{7}{c}{cuboid dimension $n=$}\\
& & \parbox[l][4mm]{11mm}{
2\:\:\:} & \parbox[l][4mm]{11mm}{
3\:\:\:} &
\parbox[l][4mm]{11mm}{
5\:\:\:} & \parbox[l][4mm]{11mm}{
10\:\:} &
\parbox[l][4mm]{11mm}{
50\:\:} & \parbox[l][4mm]{11mm}{
100\:} &
\parbox[l][4mm]{11mm}{
500\:}\\
\hline
\multirow{4}{*}{$\ep_{i}=$}
& 1\% & 2.4\% & 3.4\% & 5.0\% & 7.5\% & 14.8\% & 18.4\% & 26.9\%\\
& 5\% & 3.3\% & 4.6\% & 6.6\% & 9.8\% & 19.3\% & 23.8\% & 34.4\%\\
& 10\% & 3.9\% & 5.4\% & 7.6\% & 11.5\% & 22.2\% & 27.4\% & 39.3\%\\
& 25\% & 5.0\% & 7.2\% & 10.1\% & 15.1\% & 28.5\% & 34.7\% & 49.1\%
\end{tabular}
\caption{Objective function value of $\SCP$-based scenario solution as a percentage increase
over the $\MCP$-based scenario solution, based on the sample sizes in Table \ref{Tab:SampleSize}
and a multivariate standard normal distribution for $\den$. Each of the indicated values
represents an average over one million simulation runs.\label{Tab:RelObjValue}}
\renewcommand\arraystretch{1.0} \end{table}
\begin{appendix}
\begin{comment}
\section{Notation}\label{Sec:Notation}
$\BN=\{0,1,2,...\}$ denotes the set of natural numbers and 0, $\BZ=\{...,-1,0,1,...\}$ the set of integral numbers. Subscripts and superscripts indicate that certain intervals of these sets are taken, \eg $\BN_{m}^{n}=\{m,m+1,...,n\}$. $\BR$ denotes the set of real numbers and $\BR_{+}$, $\BR_{0+}$ the set of positive, non-negative real numbers. $\bBR:=\BR\cup\{-\infty,+\infty\}$ also includes the infinities as elements, a common notation used in convex analysis \cite[Sec.\,4]{Rocka:1970}, and related notations such as $\bBR_{0+}$ or $\bBR^{n}$ are then self-explanatory. If $A\in\BR^{m\times n}$ and $b\in\BR^{n}$ are a real matrix and a real vector, then $A_{i,\cdot}$ and $A_{\cdot,j}$ denote the $i$-th row and the $j$-th column of $A$ respectively, and $b_{i}$ is the $i$-th element of $b$.
In $\BR^{n}$ and for $S\subset\BR^{n}$, $\conv(S)$ denotes the convex hull of $S$, $\spn(S)$ the linear hull of $S$, and $\dim(S)$ the dimension of the smallest linear subspace containing $S$. If $S,T\subset\BR^{n}$ are sets, their Minkowski sum is defined as \begin{equation*}
S\oplus T:=\bigl\{s+t\in\BR^{n}\:\big|\: s\in S,\:t\in T\}\ec \end{equation*} and their Pontryagin difference as \begin{equation*}
S\ominus T:=\bigl\{v\in\BR^{n}\:\big|\:v+t\in S\:\:\forall\:t\in T\}\enspace. \end{equation*} In general $(S\ominus T)\oplus T\neq S$, even if both $S$ and $T$ are compact and convex (in fact $(S\ominus T)\oplus T\subseteq S$, see \cite{KolmGilb:1998}). However equality holds if $T$ equals to a singleton $\{w\}$, in which case the notation is simplified to $(S\ominus w)\oplus w=S$.\\ For any linear subspace $L$ of $\BR^{n}$, its orthogonal complement is written as $L^{\perp}$. Now the Minkowski sum can also be used to denote the orthogonal direct sum $\BR^{n}=L\oplus L^{\perp}$ without ambiguity. \end{comment}
\section{Probability Distributions}\label{Sec:ProbDistr}
Several basic probability-related functions are used throughout this paper. The \emph{Binomial Distribution Function} \cite[p.\,26.1.20]{Abramowitz:1970} \begin{equation}\label{Equ:BinDistr}
\Phi(x;K,\ep):=\sum_{j=0}^{x}{K\choose j}\ep^{j}(1-\ep)^{K-j} \end{equation} expresses the probability of seeing at most $x\in\BN_{0}^{K}$ successes in $K\in\BN$ independent Bernoulli trials, where the probability of success is $\ep\in(0,1)$ per trial. The (real) \emph{Beta Function} \cite[p.\,6.2.1]{Abramowitz:1970} \begin{equation}\label{Equ:BetaFct1}
\B(a,b):=\int_{0}^{1}\xi^{a-1}(1-\xi)^{b-1}\di\xi \end{equation} is defined for any parameters $a,b\in\BR_{+}$; it also satisfies the identity \cite[p.\,6.2.2]{Abramowitz:1970} \begin{equation}\label{Equ:BetaFct2}
\B(a,b)=\B(b,a)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}\ec \end{equation} where $\Gamma:\BR_{+}\to\BR_{+}$ denotes the (real) \emph{Gamma Function} with $\Gamma(n+1)=n!$ for any $n\in\BN_{0}^{\infty}$ \cite[p.\,6.1.5]{Abramowitz:1970}. The corresponding \emph{Incomplete Beta Function} \cite[p.\,6.6.1]{Abramowitz:1970} is then given by \begin{equation}\label{Equ:IncBetaFct1}
\B(\ep;a,b):=\int_{0}^{\ep}\xi^{a-1}(1-\xi)^{b-1}\di\xi
=\int_{1-\ep}^{1}\xi^{b-1}(1-\xi)^{a-1}\di\xi\ec \end{equation} where the last equality follows by a simple substitution. An important identity is obtained from \cite[pp.\,3.1.1,\,6.6.2,\,26.5.7]{Abramowitz:1970}, \begin{equation}\label{Equ:IncBetaFct2}
\B(\ep;a,b)=\B(a,b)\sum_{j=a}^{a+b-1}{a+b-1 \choose j}\ep^{j}(1-\ep)^{a+b-1-j}\ec \end{equation} which can be written more compactly by use of the binomial distribution \eqref{Equ:BinDistr}, see for instance \cite[p.\,3437]{Cala:2010}: \begin{equation}\label{Equ:IncBetaFct3}
\B(\ep;a,b)=\frac{1}{b}{a+b-1\choose b}^{-1}\Phi(b-1;a+b-1,1-\ep)\ef \end{equation}
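As a quick numerical sanity check of \eqref{Equ:IncBetaFct3} (a sketch of ours, for integer parameters $a,b$), one can compare the closed form against a direct integration of \eqref{Equ:IncBetaFct1}:

```python
from math import comb

def binom_cdf(x, K, p):
    """Phi(x; K, p): the binomial distribution function."""
    return sum(comb(K, j) * p**j * (1 - p)**(K - j) for j in range(x + 1))

def inc_beta_closed(eps, a, b):
    """B(eps; a, b) via the closed-form identity, for integers a, b >= 1."""
    return binom_cdf(b - 1, a + b - 1, 1 - eps) / (b * comb(a + b - 1, b))

def inc_beta_quad(eps, a, b, n=100_000):
    """Midpoint-rule integration of xi^(a-1) (1-xi)^(b-1) over [0, eps]."""
    h = eps / n
    return h * sum(((i + 0.5) * h) ** (a - 1) * (1 - (i + 0.5) * h) ** (b - 1)
                   for i in range(n))
```

For instance, with $a=3$, $b=5$, $\ep=0.4$ the two computations agree to many digits.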
\end{appendix}
\end{document}
\begin{document}
\begin{abstract}
Under suitable technical assumptions, a description is given for the generators of $s$-residual intersections of an ideal $I$ in terms of lower residual intersections, if $s \geq \mu(I)-2$. This implies that $s$-residual intersections can be expressed in terms of links, if $\mu(I) \leq \height(I)+3$ and some other hypotheses are satisfied. \end{abstract} \title{Generators of Residual Intersections} \section{Introduction} In this paper, we give formulas for the generators of residual intersections under suitable assumptions. Let $(R,m)$ be a Noetherian local ring, and let $I$ be a proper $R$-ideal. The ideal $J$ is an $s$-\textit{residual intersection} of $I$ when $J = \mathfrak{a}:I$, where $\mathfrak{a} = (a_1,\dots,a_s) \subsetneq I$ and $\height J \geq s$. The residual intersection is called \textit{geometric} if $\height(I+J) \geq s+1.$ This is a generalization of linkage. Two proper ideals, $I$ and $J$, are said to be \textit{linked} if $I = \mathfrak{a}:J$ and $J = \mathfrak{a}:I$ for some $R$-ideal $\mathfrak{a}$ generated by a length $g$ regular sequence. The ideals $I$ and $J$ are said to be \textit{geometrically linked} if $\height(I+J) = g+1$.
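To fix ideas, a standard minimal instance of a geometric link (our illustration, not used in the sequel):

```latex
% In $R = k[x,y]_{(x,y)}$ take $I = (x)$, $J = (y)$ and $\mathfrak{a} = (xy)$,
% which is generated by a regular sequence of length $g = 1$. Then
\[
  (xy):(x) = (y), \qquad (xy):(y) = (x), \qquad
  \height\bigl((x)+(y)\bigr) = 2 = g+1,
\]
% so $I$ and $J$ are geometrically linked.
```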
Linkage is a well-researched and well-understood concept, in particular when it comes to its connection to the Cohen-Macaulay and Gorenstein properties. In 1974, Peskine and Szpiro showed that if $R$ is a local Gorenstein ring and $I$ and $J$ are linked with $J = \mathfrak{a}:I$, then $R/J$ is Cohen-Macaulay if and only if $R/I$ is Cohen-Macaulay. They also showed that in this case the canonical module of $R/I$ is $J/\mathfrak{a}$ \cite{PS}.
There have been attempts to find similar results for residual intersections. If $R$ is a local Gorenstein ring and $I$ is a proper $R$-ideal such that $R/I$ is Cohen-Macaulay, for an $s$-residual intersection $J$ of $I$, $R/J$ is not necessarily Cohen-Macaulay. However, there is a relationship. The first work exploring this relationship was a 1983 paper by Huneke, where he found that if $I$ is strongly Cohen-Macaulay and satisfies the $G_s$ condition then $R/J$ is Cohen-Macaulay \cite{H}. Papers by Herzog, Vasconcelos and Villarreal \cite{HVV} and by Huneke and Ulrich \cite{HU} explored how to weaken the strongly Cohen-Macaulay assumption. In 1994, Ulrich generalized this work and found settings where one can compute the canonical module of $R/J$ \cite{U}. There is still new research coming out on the computation of the canonical module of $R/J$, on when $R/J$ is Cohen-Macaulay and, more recently, on the relationship between residual intersections and the Gorenstein properties of Rees algebras \cite{HN, CNT, EU}.
However, the actual computation of residual intersections is not as well understood. Previously, it has been known how to compute an $s$-residual intersection in the following special cases: when, in the notation of the above definition, $I$ is a complete intersection \cite{HU}; when $I$ is a perfect ideal of height two \cite{H}; when $R/I$ is Gorenstein and $I$ is of height three \cite{KU}; and for certain $(\height(I)+1)$-residual intersections \cite{KMU}. Using their more general results, Bouca and Hassanzadeh gave a formula to compute an $s$-residual intersection of $I$ when $I$ is an almost complete intersection \cite{BH}.
In this paper, we study how to compute residual intersections in special cases. Under suitable technical assumptions, we are able to express $s$-residual intersections, for $s \geq \mu(I)-2$, in terms of $(\mu(I)-2)$-residual intersections (Theorem 2.5). Then we show how this implies that $s$-residual intersections can be expressed in terms of links, if $\mu(I) \leq \height(I)+3$ and some other hypotheses are satisfied (Corollary 3.1, Corollary 3.2, Corollary 3.3 and Corollary 3.5).
\section{Main Theorem}
We will be working in settings where the conditions of $G_s$ and weakly $s$-residually $S_2$ are relevant, so we must define them. Let $R$ be a Noetherian ring, $I$ an $R$-ideal, and $s$ an integer. When we say $I$ satisfies $G_s$, we mean that $\mu(I_p) \leq \dim R_p$ for all $p \in V(I)$ such that $\dim R_p \leq s-1$. An ideal $I$ is said to be \textit{weakly s-residually} $S_2$ if for every $i$ with $\height (I) \leq i \leq s$ and every geometric $i$-residual intersection $J$ of $I$, the ring $R/J$ is $S_2$.
The condition of being weakly $s$-residually $S_2$ is satisfied by a number of ideals. If $R$ is a local Cohen-Macaulay ring and $I$ is a generically complete intersection ideal, an almost complete intersection ideal and almost Cohen-Macaulay, then by \cite[Theorem 3.3]{HVV} $I$ satisfies $AN_s^-$ for every $s$, which is a stronger condition than being $s$-residually $S_2$. If $R$ is a Gorenstein local ring and $I$ is an almost complete intersection ideal and Cohen-Macaulay, then by \cite[page 259]{AH} $I$ is strongly Cohen-Macaulay and thus by \cite[Theorem 4.5]{CNT} $I$ satisfies $AN_s$ for every $s$, which is a stronger condition than being $s$-residually $S_2$. If $R$ is a Gorenstein local ring and $I$ is licci, then by \cite[5.3]{HU} $I$ satisfies $AN_s$ for every $s$. Examples of licci ideals include perfect ideals of grade 2 (\cite{A}, \cite{G}) and perfect Gorenstein ideals of grade 3 \cite{W}. One should note that if $R$ is a local Cohen-Macaulay ring and $I$ is a strongly Cohen-Macaulay ideal, then by \cite[Theorem 4.5]{CNT} $I$ satisfies $AN_s$. Another set of examples of $s$-residually $S_2$ ideals are ideals generated by submaximal minors of generic symmetric matrices \cite{K}. The defining ideals of general projections also produce large classes of ideals that are $s$-residually $S_2$ \cite[Section 5]{CEU}.
Before we can begin the proof of our main theorem we must prove a few preliminary lemmas, the first of which is a quick computation for a special case of residual intersections.
\begin{lem}Let $(R,m,k)$ be a Noetherian local ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal and $\mathfrak{a} \subseteq I$. If $\mu(I/\mathfrak{a}) \leq 1$, then $\Fitt_0(I/\mathfrak{a}) = \mathfrak{a}:I$.\end{lem} \begin{proof} Since $\mu(I/\mathfrak{a}) \leq 1$, there exists an $x \in I$ such that $(x)+\mathfrak{a} = I$. Let $\mathfrak{a}:I = (b_1,\dots,b_n)$, and let $A$ be the $1 \times n$ matrix $A = [b_1,\dots,b_n]$. Since $I = (x)+\mathfrak{a}$, the kernel of the map $R \xrightarrow{x} I/\mathfrak{a}$ is $\mathfrak{a}:x = \mathfrak{a}:I$, so the following sequence is exact: $$R^n \xrightarrow{A} R \xrightarrow{x} I/\mathfrak{a} \rightarrow 0. $$
So $\Fitt_0(I/\mathfrak{a}) = I_1(A) = \mathfrak{a}:I$. \end{proof}
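A small worked instance of the lemma (our illustration):

```latex
% Let $R = k[[u,v]]$, $I = (u,v)$ and $\mathfrak{a} = (u)$, so that
% $I/\mathfrak{a}$ is generated by the image of $v$ and $\mu(I/\mathfrak{a}) = 1$.
% The kernel of $R \xrightarrow{v} I/\mathfrak{a}$ is $(u):v = (u)$, giving
\[
  R \xrightarrow{[u]} R \xrightarrow{v} I/\mathfrak{a} \rightarrow 0,
  \qquad\text{hence}\qquad
  \Fitt_0(I/\mathfrak{a}) = I_1([u]) = (u) = (u):(u,v) = \mathfrak{a}:I.
\]
```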
We make use of \cite[Lemma 1.3]{U}, and thus will restate it here for the convenience of the reader:
\begin{lem}\cite[Lemma 1.3]{U} Let $(R,m,k)$ be a Noetherian ring with $|k| = \infty$. Let $M$ be a finitely generated $R$-module, consider (not necessarily distinct) prime ideals $p_1,\dots,p_n$ of $R$, and submodules $N_1, \dots, N_m$ of $M$. Then there exists $x \in M$ such that for every $1 \leq i \leq n$ and $1 \leq j \leq m$, $\mu((M/N_j+(x))_{p_i}) = \max\{0,\mu((M/N_j)_{p_i}) - 1 \}$.
\end{lem}
The following lemma, while not critical to the proof of our main theorem, establishes the setting in which the main theorem is relevant.
\begin{lem} Let $(R,m,k)$ be a Noetherian local ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal and $\mathfrak{a} \subsetneq I$. Suppose $I$ satisfies $G_s$, $\mathfrak{a} : I$ is an $s$-residual intersection and $\mu((\mathfrak{a}+mI)/mI) = t \leq s$. Then there exists a generating sequence $a_1,\dots,a_s$ of $\mathfrak{a}$ such that for every subset $\{\nu_1,\dots,\nu_i\} \subseteq \{1,\dots,s\}$, the following are true:
\begin{enumerate}
\item $\height(a_{\nu_1},\dots,a_{\nu_i}):I \geq i$ for $0 \leq i \leq s$;
\item $\mu((\mathfrak{a}/(a_{\nu_1},\dots,a_{\nu_i}))_p) \leq \dim R_p - i$ whenever $p \in V(I)$ and $i \leq \dim R_p \leq s-1$;
\item $\mu(((a_1,\dots,a_t) + mI)/mI) = t$. \end{enumerate} \end{lem}
\begin{proof} By induction on $0 \leq l \leq s$, we are going to construct elements $a_1,\dots,a_s$ such that for every subset $\{\nu_1,\dots,\nu_i\} \subseteq \{1,\dots,l\}$, the following are true: \begin{enumerate}[label=(\roman*)]
\item $\mu(\mathfrak{a}/(a_1,\dots,a_l)) = \max\{0, s-l\};$
\item $(\mathfrak{a}/(a_{\nu_1},\dots,a_{\nu_i}))_p = 0$ where $p \in \Spec(R)$ with $\dim R_p \leq i-1;$
\item $\mu((\mathfrak{a}/(a_{\nu_1},\dots,a_{\nu_i}))_p) \leq \dim R_p - i$ whenever $p \in V(I)$ and $i \leq \dim R_p \leq s-1;$
\item $\mu\big(\mathfrak{a}/(\mathfrak{a}\cap mI + (a_1,\dots,a_l))\big) = \max\{0,t-l\}.$ \end{enumerate}
When $l = 0$, this is clear as $I$ satisfies $G_s$ and $I_p = \mathfrak{a}_p$ for all $p \in \Spec(R)$ such that $\dim R_p \leq s-1$.
Let $1 \leq l \leq s$ and assume $a_1,\dots,a_{l-1}$ have already been constructed. To obtain $a_l$, we wish to apply \cite[Lemma 1.3]{U} to the module $M = \mathfrak{a}$ and a finite family, $\mathcal{M}$, of submodules of the form $N = (a_{\nu_1},\dots,a_{\nu_i})$ for every subset $\{\nu_1,\dots,\nu_i\} \subseteq \{1,\dots,l-1\}$ and of the form $N = \mathfrak{a}\cap mI + (a_1,\dots,a_{l-1})$.
For $0 \leq j \leq s$ consider the $j$th Fitting ideals of the $R$-modules $\mathfrak{a}/N$, $F_j = \Fitt_j(\mathfrak{a}/N)$, which define the loci $V(F_j) = \{ p \in \Spec(R) \; | \; \mu((\mathfrak{a}/N)_p) > j\}$. Now, let $\mathcal{Q}$ be the finite subset of $\Spec(R)$ consisting of the maximal ideal $m$, all minimal primes in $V(F_0)$, and all minimal prime ideals in $V(I+F_j)$, for $0\leq j \leq s$ and every $N \in \mathcal{M}$. By \cite[Lemma 1.3]{U}, there exists $a_l \in \mathfrak{a}$ such that $\mu((\mathfrak{a}/N+(a_l))_p) = \max\{0,\mu((\mathfrak{a}/N)_p)-1\}$ for every $p \in \mathcal{Q}$ and every $N \in \mathcal{M}$.
Note that $a_1, \dots, a_l$ satisfy (i) and (iv) as $m \in \mathcal{Q}$ and, by induction, $\mu(\mathfrak{a}/(a_1,\dots,a_{l-1})) = \max\{0, s-(l-1)\}$ and $\mu\Big(\mathfrak{a}/\big(\mathfrak{a}\cap mI + (a_1,\dots,a_{l-1})\big)\Big) = \max\{0,t-(l-1)\}$. For (ii) and (iii), it suffices to check for factor modules of the form $\mathfrak{a}/(a_{\nu_1},\dots,a_{\nu_i},a_l) = \mathfrak{a}/(N+(a_l))$ for $N \in \mathcal{M}\backslash \{\mathfrak{a}\cap mI + (a_1,\dots,a_{l-1})\}$. Write $F_j = \Fitt_j(\mathfrak{a}/N)$.
To prove (ii), let $p \in \Spec(R)$ with $\dim R_p \leq i$. If $p \notin V(F_0)$, then $(\mathfrak{a}/N)_p = 0$ and we are done. So, assume that $F_0 \subseteq p$. Furthermore, by induction and (ii) applied to $\mathfrak{a}/N$, $\height F_0 \geq i$, which implies that $\dim R_p = i$ and thus $p$ is minimal in $V(F_0)$. Hence, $p \in \mathcal{Q}$. So, by our choice of $a_l$ we conclude that $\mu((\mathfrak{a}/N+(a_l))_p) = \max \{0, \mu((\mathfrak{a}/N)_p)-1\}$. If $p \notin V(\mathfrak{a})$, then $\mu((\mathfrak{a}/N)_p)\leq 1$. If $p \in V(\mathfrak{a})$, then $p \in V(I)$ since for $p \in \Spec(R)$ with $\dim R_p \leq s-1$, $I_p = \mathfrak{a}_p$. Thus, by induction and (iii) applied to $\mathfrak{a}/N$, $\mu((\mathfrak{a}/N)_p) = 0$. In either case, $\mu((\mathfrak{a}/N+(a_l))_p) = 0$ which shows (ii).
To prove (iii), let $p \in V(I)$ with $i+1 \leq \dim R_p \leq s-1$. Write $j = \dim R_p - i - 1$. Notice $0 \leq j \leq s-i - 2$. If $p \notin V(I+F_j)$, then $p \notin V(F_j)$ and hence $\mu((\mathfrak{a}/N)_p) \leq j$ so we are done. So, assume $I+F_j \subseteq p$. We will show $\height(I+F_j) \geq j+i+1$ for $0 \leq j \leq s-i-2$. Let $p'$ be in $ V(I+F_j)$. Since $p' \in V(F_j)$, $\mu((\mathfrak{a}/N)_{p'}) \geq j+1$. Since $p'$ is also in $V(I)$, $\mu((\mathfrak{a}/N)_{p'}) \leq \dim R_{p'}-i$, by induction and (iii) applied to $\mathfrak{a}/N$. Thus, $\dim R_{p'} \geq j+i+1$ and we are done. Therefore $j+i+1 = \dim R_p \geq \height (I+F_j) \geq j+i+1$, which shows that $p$ is minimal in $V(I+F_j)$ and thus $p \in \mathcal{Q}$. Now our choice of $a_l$ implies that $\mu((\mathfrak{a}/N+(a_l))_p) = \max\{0,\mu(\mathfrak{a}/N)_p-1\}$. By induction and (iii) applied to $\mathfrak{a}/N$, $\mu((\mathfrak{a}/N)_p) \leq \dim R_p - i$. Thus, $\mu((\mathfrak{a}/N+(a_l))_p) \leq \max \{ 0, \dim R_p-i-1 \} = \dim R_p -i -1$. \end{proof}
Note that condition (1) in Lemma 2.3 is equivalent to $(a_{\nu_1},\dots,a_{\nu_i}):I$ being an $i$-residual intersection for every subset $\{\nu_1,\dots,\nu_i\} \subseteq \{1,\dots,s\}$ and $0\leq i \leq s$. Meanwhile, condition (2) implies that $(a_{\nu_1},\dots,a_{\nu_i}):I$ is also geometric for every subset $\{\nu_1,\dots,\nu_i\} \subseteq \{1,\dots,s\}$ and $0\leq i \leq s-1$.
The following lemma is critical to proving the main theorem. Note that the proof applies the same technique as the previous lemma. This is a common technique, for instance it was used in \cite{U}.
\begin{lem} Let $I$ and $\mathfrak{a}$ be as in Lemma 2.3 and suppose $a_1,\dots,a_s$ have been selected as in Lemma 2.3 and that $t < s$. Then, we can select an $x \in I$ such that for $\mathfrak{a}' = (a_1,\dots,a_{s-1},x)$, $\mu((\mathfrak{a}'+mI)/mI) = \mu(((a_1,\dots,a_t,x) + mI)/mI) = t+1$, and for any $\{a_{\nu_1},\dots,a_{\nu_i}\} \subseteq \{a_1,\dots,a_{s-1},x\}$, the following hold:
\begin{enumerate}
\item $\height(a_{\nu_1},\dots,a_{\nu_i}):I \geq i$ for $0 \leq i \leq s$;
\item $\mu((\mathfrak{a}/(a_{\nu_1},\dots,a_{\nu_i}))_p) \leq \dim R_p - i$ whenever $p \in V(I)$ and $i \leq \dim R_p \leq s-1$. \end{enumerate}
\end{lem}
\begin{proof} Note that $a_1,\dots,a_{s-1}$ satisfy the following conditions for every subset $\{\nu_1,\dots,\nu_i\} \subseteq \{1,\dots,s-1\}$:
\begin{enumerate}[label=(\Roman*)]
\item $\mu(I/(mI + (a_1,\dots,a_{t}))) = n-t;$
\item $(I/(a_{\nu_1},\dots,a_{\nu_i}))_p = 0$ where $p \in \Spec(R)$ with $\dim R_p \leq i-1;$
\item $\mu((I/(a_{\nu_1},\dots,a_{\nu_i}))_p) \leq \dim R_p - i$ whenever $p \in V(I)$ and $i \leq \dim R_p \leq s-1.$ \end{enumerate}
We construct $x$ by applying \cite[Lemma 1.3]{U} to the module $M = I$ and a finite family, $\mathcal{M}$, of submodules of the form $N = (a_{\nu_1},\dots,a_{\nu_i})$ for every subset $\{\nu_1,\dots,\nu_i\} \subseteq \{1,\dots,s-1\}$ and of the form $N = mI + (a_1,\dots,a_t)$. For $0 \leq j \leq s$ consider the $j$th Fitting ideals of the $R$-modules $I/N$, $F_j = \Fitt_j(I/N)$, which define the loci $V(F_j) = \{ p \in \Spec(R) \; | \; \mu((I/N)_p) > j\}$. Now, let $\mathcal{Q}$ be the finite subset of $\Spec(R)$ consisting of the maximal ideal $m$, all minimal primes in $V(F_0)$, and all minimal prime ideals in $V(I+F_j)$, for $0\leq j \leq s$ and every $N \in \mathcal{M}$. By \cite[Lemma 1.3]{U}, there exists $x \in I$ such that $\mu((I/N+(x))_p) = \max\{0,\mu((I/N)_p)-1\}$ for every $p \in \mathcal{Q}$ and every $N \in \mathcal{M}$.
We show that $a_1,\dots,a_{s-1},x$ satisfy the following conditions for every subset $\{a_{\nu_1},\dots,a_{\nu_i}\} \subseteq \{a_1,\dots,a_{s-1},x\}$:
\begin{enumerate}[label=(\roman*)]
\item $\mu(I/(mI + (a_1,\dots,a_{t},x))) = n-t-1;$
\item $(I/(a_{\nu_1},\dots,a_{\nu_i}))_p = 0$ where $p \in \Spec(R)$ with $\dim R_p \leq i-1;$
\item $\mu((I/(a_{\nu_1},\dots,a_{\nu_i}))_p) \leq \dim R_p - i$ whenever $p \in V(I)$ and $i \leq \dim R_p \leq s-1.$ \end{enumerate}
Item (i) is clear as $m \in \mathcal{Q}$, $mI+(a_1,\dots,a_{t}) \in \mathcal{M}$ and $\max\{0,\mu(I/mI+(a_1,\dots,a_{t}))-1\} = \max \{0,n-t-1\} = n-t-1$. For (ii) and (iii), it suffices to check factor modules of the form $I/(a_{\nu_1},\dots,a_{\nu_{i-1}},x) = I/N+(x)$ for $N \in \mathcal{M}\backslash \{mI + (a_1,\dots,a_{t})\}$. Write $F_j = \Fitt_j(I/N)$.
To prove (ii), let $p \in \Spec(R)$ with $\dim R_p \leq i-1$. If $p \notin V(F_0)$, then $(I/N)_p = 0$ and we are done. So, assume that $p \in V(F_0)$. Furthermore, by (II) applied to $I/N$, $\height F_0 \geq i-1$, which implies that $\dim R_p = i-1$ and thus $p$ is minimal in $V(F_0)$. Hence, $p \in \mathcal{Q}$. So, by our choice of $x$ we conclude that $\mu((I/N+(x))_p) = \max \{0, \mu((I/N)_p)-1\}$. If $p \notin V(I)$, then $\mu((I/N)_p)\leq 1$. If $p \in V(I)$ by (III) applied to $I/N$, $\mu((I/N)_p) = 0$. In either case, $\mu((I/N+(x))_p) = 0$ which shows (ii).
To prove (iii), let $p \in V(I)$ with $i \leq \dim R_p \leq s-1$. Write $j = \dim R_p - i$. Notice $0 \leq j \leq s-i - 1$. If $p \notin V(I+F_j)$, then $p \notin V(F_j)$ and hence $\mu((I/N)_p) \leq j$, and we are done. So, assume $p \in V(I+F_j)$. We will show $\height(I+F_j) \geq j+i$ for $0 \leq j \leq s-i$. Suppose there exists $p' \in V(I+F_j)$ with $\dim R_{p'} \leq j+i-1 \leq s-1$. Since $p' \in V(F_j)$, $\mu((I/N)_{p'}) \geq j+1$. Since $p'$ is also in $V(I)$, $\mu((I/N)_{p'}) \leq \dim R_{p'} -i + 1$ by (III). Thus, $\dim R_{p'} \geq j+i$, which is a contradiction. Therefore $j+i = \dim R_p \geq \height (I+F_j) \geq j+i$, which shows that $p$ is minimal in $V(I+F_j)$ and thus $p \in \mathcal{Q}$. Now our choice of $x$ implies that $\mu((I/N+(x))_p) = \max\{0,\mu((I/N)_p)-1\}$. By (III) applied to $I/N$, $\mu((I/N)_p) \leq \dim R_p - i + 1$. Thus, $\mu((I/N+(x))_p) \leq \max \{ 0, \dim R_p - i \} = \dim R_p -i$.\end{proof} Now, we are going to prove our main theorem.
\begin{thm}Let $(R,m,k)$ be a local Cohen-Macaulay ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal and $\mathfrak{a} \subsetneq I$. Let $\mu(I) = n$ and $s \geq n-2$. Suppose $I$ satisfies $G_s$ and is weakly $(s-2)$-residually $S_2$, and $\mathfrak{a} : I$ is an $s$-residual intersection. For any generating sequence $a_1, \dots, a_s$ of $\mathfrak{a}$ as in Lemma 2.3 one has $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I.$$ \end{thm}
\begin{proof} We induct on $s$. When $s = n-2$, this is clear. So, suppose $s \geq n-1$ and that the theorem is true for $s-1$. We use reverse induction on $t = \mu((\mathfrak{a}+mI)/mI)$. Begin with $t = n-1$, which implies that $\mu(I/\mathfrak{a}) = 1$, and thus applying Lemma 2.1, we have that $\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a})$. Finally, noting that $$\sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I \subseteq \mathfrak{a}:I$$
we have, $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I. $$
Now, suppose $t < n-1$ and that the theorem is true for $s$ and $t+1$. By Lemma 2.4, we can construct $\mathfrak{a}'$ generated by $a_1,\dots,a_{s-1},x$ such that $\mu((\mathfrak{a}' + mI)/mI) = t+1$ and the generators satisfy the hypothesis of our theorem. By our choices of $a_1,\dots,a_t,x$, we can select $n-t-1$ general elements $x_{t+1},\dots,x_{n-1}$ such that $I = (x_1,\dots,x_n)$ where $x_i = a_i$ for $i \leq t$ and $x_{n} = x$.
Let $J_{s-1} = (a_1,\dots,a_{s-1}):I$ and $\mathfrak{a}_{s-1} = (a_1,\dots,a_{s-1})$. Let $\bar{\;}$ denote images in $\overline{R} = R/J_{s-1}$. Applying \cite[Corollary 3.6.a, Proposition 3.3.b]{CEU}, we get that $\overline{a_s}$ and $\overline{x}$ are non-zerodivisors on $\overline{R}$. In particular, $\grade(\overline{I}) > 0$. As $\overline{x}$ and $\overline{a_s}$ are non-zerodivisors, we have that \begin{equation}
x(\mathfrak{a}:I)+J_{s-1} = a_s(\mathfrak{a}':I)+J_{s-1}. \end{equation} We claim that, similarly, \begin{equation} \begin{split}
a_s\Bigg(\Fitt_0(I/\mathfrak{a}') + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},x):I \Bigg) + J_{s-1} = \\
x \Bigg(\Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},a_s):I \Bigg) + J_{s-1}. \end{split} \end{equation}
To prove this, first we show that $x(\Fitt_0(I/\mathfrak{a})) + J_{s-1} = a_s(\Fitt_0(I/\mathfrak{a}')) + J_{s-1}$. Letting $a_s = \sum_{i=1}^{n}c_ix_i$, we can define $n \times s$ matrices $B$ and $B'$ where $[a_1,\dots,a_s] = [x_1,\dots,x_{n}]B$ and $[a_1,\dots,a_{s-1},x] = [x_1,\dots,x_{n}]B'$ such that the only difference between them is that the last column of $B$ is $[c_1,\dots,c_{n}]^t$ and the last column of $B'$ is $[0,\dots,0,1]^t$. Consider an $n \times k$ matrix $A$ such that $R^k \xrightarrow{A} R^{n} \xrightarrow{[x_1,\dots,x_{n}]} I \rightarrow 0$ is exact.
Notice that $$R^{k+s} \xrightarrow{[A|B]} R^{n} \xrightarrow{[x_1,\dots,x_{n}]} I/\mathfrak{a} \rightarrow 0$$ is exact and $$R^{k+s} \xrightarrow{[A|B']} R^{n} \xrightarrow{[x_1,\dots,x_{n}]} I/\mathfrak{a}' \rightarrow 0$$ is exact. As $\Fitt_0(I/\mathfrak{a}) = I_{n}([A|B])$ and $\Fitt_0(I/\mathfrak{a}') = I_{n}([A|B'])$, we can reduce down to looking at $n \times (n+1)$ matrices $D$ and $D'$ where the only difference is that the last column of $D$ is $[c_1,\dots,c_{n}]^t$ and the last column of $D'$ is $[0,\dots,0,1]^t$. Let $E$ be $D$ with the last column removed. Equivalently, $E$ is $D'$ with the last column removed. Let $M_{i,j}$ be the cofactors of $E$. So we have the following,
$$I_{n}(D) = \Bigg(\det(E),\sum_{i=1}^{n}c_iM_{i,1},\dots,\sum_{i=1}^{n}c_iM_{i,n}\Bigg)$$ and
$$I_{n}(D')= (\det(E),M_{n,1},\dots,M_{n,n}).$$
Note that $\overline{[x_1,\dots,x_{n}]E}=0$ which gives that $\det(\overline{E})\:\overline{x_{n}}=0$. Since $\overline{x_{n}}$ is a non-zerodivisor, $\det(\overline{E}) = 0$. Thus $$x\;I_{n}(D) + J_{s-1} = \Bigg(\sum_{i=1}^{n}c_ix_{n}M_{i,1},\dots,\sum_{i=1}^{n}c_ix_{n}M_{i,n}\Bigg) + J_{s-1}$$ and
$$a_{s}\;I_{n}(D') + J_{s-1}= \Bigg(\sum_{i=1}^{n}c_ix_i M_{n,1},\dots,\sum_{i=1}^{n}c_ix_i M_{n,n}\Bigg) + J_{s-1}.$$
Let $N$ be $\overline{[A|B]}$ (equivalently $\overline{[A|B']}$) with the last column removed. Notice $$\overline{R}^{k+s-1} \xrightarrow{N} \overline{R}^{n} \xrightarrow{[x_1,\dots,x_{n}]} I/\mathfrak{a}_{s-1}\xrightarrow{} 0 $$ is exact. By \cite[Corollary 3.4]{CEU}, $I/\mathfrak{a}_{s-1} \cong \overline{I}$. Since $\grade(\overline{I})>0$, $I_{n}(N) \subseteq \ann_{\overline{R}}I/\mathfrak{a}_{s-1} = \ann_{\overline{R}} \overline{I} = 0$. For any fixed $j$, $[\overline{M_{1,j}},\dots,\overline{M_{n,j}}]N = 0$, as all entries are determinants of $n \times n$ sub-matrices of $N$ and thus $0$. Note that $[\overline{x_1},\dots,\overline{x_{n}}]\overline{N}=0$. As $\grade(\overline{I}) > 0$, the kernel of the dual of $N$ has rank $1$. Thus, for any fixed $j$, $[\overline{x_1},\dots,\overline{x_{n}}]$ and $[\overline{M_{1,j}},\dots,\overline{M_{n,j}}]$ are linearly dependent. Thus $\overline{x_{n}M_{i,j}}=\overline{x_iM_{n,j}}$, which gives $x(I_{n}(D)) + J_{s-1} = a_{s}(I_{n}(D')) + J_{s-1}$. It follows that \begin{equation}
x(\Fitt_0(I/\mathfrak{a})) + J_{s-1} = a_s(\Fitt_0(I/\mathfrak{a}')) + J_{s-1}. \end{equation}
Now we show a similar relationship holds between the sums of $(n-2)$-residual intersections. First, note that letting $\{\nu_1,\dots,\nu_{n-3}\} \subseteq \{1,\dots,s-1\}$, we have $(a_{\nu_1},\dots,a_{\nu_{n-3}}):I \subseteq J_{s-1}$. Recall $a_s$ and $x$ are non-zerodivisors on $R/(a_{\nu_1},\dots,a_{\nu_{n-3}}):I$. Thus, $$a_s((a_{\nu_1},\dots,a_{\nu_{n-3}},x):I)+J_{s-1} = x((a_{\nu_1},\dots,a_{\nu_{n-3}},a_s):I)+J_{s-1}.$$ So, we have
\begin{equation} \begin{split}
a_s\Bigg(\sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},x):I \Bigg) + J_{s-1} =\\
x \Bigg(\sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},a_s):I \Bigg) + J_{s-1}. \end{split} \end{equation}
Thus, equation (2) holds by (3) and (4).
By induction on $s$, we have
$$J_{s-1} = \Fitt_0(I/\mathfrak{a}_{s-1}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I.$$
Likewise, by decreasing induction on $t$, we have
$$\mathfrak{a}':I = \Fitt_0(I/\mathfrak{a}') + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_{s-1},x\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I,$$
so \begin{equation}
\mathfrak{a}':I + J_{s-1} = \Fitt_0(I/\mathfrak{a}') + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},x):I \: + J_{s-1}. \end{equation}
Putting equations (1), (2) and (5) together, we have: \begin{eqnarray*} x(\mathfrak{a}:I)+J_{s-1} &=& a_s(\mathfrak{a}':I)+J_{s-1}\\ &=& a_s\Bigg(\Fitt_0(I/\mathfrak{a}') + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},x):I \Bigg) + J_{s-1} \\ &=& x \Bigg(\Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},a_s):I \Bigg) + J_{s-1}. \end{eqnarray*}
As $\overline{x}$ is a non-zerodivisor, $$\mathfrak{a}:I \: + J_{s-1} = \Fitt_0(I/\mathfrak{a})+ \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},a_s):I+J_{s-1}.$$
What we have left to do is ``get rid of'' $J_{s-1}$ on both sides. First, note $J_{s-1} \subseteq \mathfrak{a}:I$. Moreover, since $I/\mathfrak{a}_{s-1}$ maps onto $I/\mathfrak{a}$, $\Fitt_0(I/\mathfrak{a}_{s-1}) \subseteq \Fitt_0(I/\mathfrak{a})$. Thus, we have: \begin{eqnarray*} \mathfrak{a}:I &=& \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-3}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-3}},a_s):I + \\ &&\sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_{s-1}\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I\\ &=& \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I. \end{eqnarray*} \end{proof}
\section{Applications}
We begin our applications by showing that the main theorem provides alternative proofs of some already known results. Our first corollary was previously proven in \cite{HU}.
\begin{cor}
Let $(R,m,k)$ be a local Cohen-Macaulay ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal and $\mathfrak{a} \subsetneq I$. Suppose $I$ is a complete intersection. Then $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \mathfrak{a}.$$ \end{cor}
\begin{proof} This follows from Lemma 2.3 and Theorem 2.5. \end{proof}
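As a concrete illustration (this example is ours, supplied for concreteness, not taken from \cite{HU}; $I_2(-)$ denotes the ideal of $2\times 2$ minors), take $R = k[x,y]_{(x,y)}$, $I = (x,y)$, and $\mathfrak{a} = (x^2,y^2)$. Writing $[x^2 \;\, y^2] = [x \;\, y]B$ with $B = \mathrm{diag}(x,y)$ and appending the Koszul syzygy $(-y,x)^t$ of $(x,y)$ yields a presentation matrix of $I/\mathfrak{a}$, so

```latex
$$\Fitt_0(I/\mathfrak{a})
  = I_2\begin{pmatrix} x & 0 & -y \\ 0 & y & x \end{pmatrix}
  = (x^2,\, xy,\, y^2),$$
$$\mathfrak{a}:I = (x^2,y^2):(x,y) = (x^2,\, xy,\, y^2)
  = \Fitt_0(I/\mathfrak{a}) + \mathfrak{a},$$
```

in agreement with the corollary.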
The following corollary was also proven in \cite[Corollary 5.5]{BH}, without the generic complete intersection assumption but with a stronger assumption on the depth of $R/I$.
\begin{cor}
Let $(R,m,k)$ be a local Cohen-Macaulay ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal and $\mathfrak{a} \subsetneq I$. Suppose $I$ is an almost complete intersection and generically a complete intersection and that $\depth R/I \geq \dim R/I - 1$. Let $\mathfrak{a} : I$ be an $s$-residual intersection. Then $\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \mathfrak{a}$. \end{cor}
\begin{proof} We may assume that $s \geq \height(I)$ and that $\mu(I) = \height(I)+1$. Let $g = \height(I)$. Since $I$ is generically a complete intersection and an almost complete intersection, $I$ is generated by a $d$-sequence. Thus, we can apply \cite[Theorem 3.4.c]{HVV} to get that $I$ satisfies $AN_s^-$. Thus, we can apply Lemma 2.3 and Theorem 2.5. So, we have $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{g-1}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{g-1}}):I.$$ Note that, since $g-1 < \height(I)$, $(a_{\nu_1},\dots,a_{\nu_{g-1}}):I = (a_{\nu_1},\dots,a_{\nu_{g-1}})$. Thus, $$\sum_{\{a_{\nu_1},\dots,a_{\nu_{g-1}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{g-1}}):I = \mathfrak{a}.$$ \end{proof}
Now, we turn to applications of our theorem to previously unknown cases.
\begin{cor} Let $(R,m,k)$ be a local Gorenstein ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal such that $R/I$ is Cohen-Macaulay and let $\mathfrak{a} \subsetneq I$. Let $\height(I) = g$, $\mu(I) = g+2$, $s \geq g$, and assume $I$ satisfies $G_s$. Let $\mathfrak{a} : I$ be an $s$-residual intersection. For any generating sequence $a_1, \dots, a_s$ of $\mathfrak{a}$ as in Lemma 2.3 one has $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{g}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{g}}):I$$
where each $(a_{\nu_1},\dots,a_{\nu_{g}}):I$ is a link. \end{cor}
\begin{proof} Note that, as $R$ is Gorenstein, $\mu(I) = g+2$, and $R/I$ is Cohen-Macaulay, by \cite{AH} we have that $I$ is strongly Cohen-Macaulay. Since $I$ is strongly Cohen-Macaulay and satisfies $G_s$, $I$ satisfies $AN_s$ \cite{H}. Thus, we can apply Lemma 2.3 and Theorem 2.5 to get the equality. Further, as $R/I$ is Cohen-Macaulay, $I$ is unmixed. Since $I$ is unmixed and $R$ is Gorenstein, by \cite{PS} we have that each $(a_{\nu_1},\dots,a_{\nu_{g}}):I$ is a link. \end{proof}
Here, we restate \cite[Corollary 2.18]{KMU} in the way in which we shall use it.
\begin{thm} Let $R$ be a local Gorenstein ring, let $I$ be an ideal of height $g>0$ which is generically a complete intersection, and assume that $R/I$ is Gorenstein. Let $\mathfrak{a}:I$ be a $(g+1)$-residual intersection of $I$. Then for any generating sequence $a_1,\dots,a_{g+1}$ of $\mathfrak{a}$ such that any length $g$ subsequence is a regular sequence, one has that $$\mathfrak{a}:I = \sum_{\{a_{\nu_1},\dots,a_{\nu_{g}}\} \subseteq \{a_1,\dots,a_{g+1}\}} (a_{\nu_1},\dots,a_{\nu_{g}}):I$$
if and only if $\,\Ext_R^1(I/I^2,R/I) = 0$. \end{thm}
Now, we state our final application.
\begin{cor}
Let $(R,m,k)$ be a local Gorenstein ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal such that $R/I$ is Gorenstein and let $\mathfrak{a} \subsetneq I$. Suppose $I$ is of height $g > 0$, $\mu(I) = g+3$, $\Ext^1_{R/I}(I/I^2,R/I) = 0$, and for some $s \geq g+1$, $I$ is $G_{s}$ and weakly $(s-2)$-residually $S_2$. Let $\mathfrak{a}:I$ be an $s$-residual intersection. For any generating sequence $a_1, \dots, a_s$ of $\mathfrak{a}$ as in Lemma 2.3 one has $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{g}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{g}}):I$$
where each $(a_{\nu_1},\dots,a_{\nu_{g}}):I$ is a link. \end{cor}
\begin{proof} This follows from Lemma 2.3 and Theorem 2.5 and Theorem 3.4. \end{proof}
Let $R$ be a Noetherian local ring, let $I$ be generically a complete intersection ideal, and set $A = R/I$; then $$\Ext^1_{R/I}(I/I^2,R/I) = T^2(A/R,A).$$ When $T^2(A/R,A)=0$, we say that $R$ is non-obstructed. If $R$ is regular, this implies that there are no obstructions to lifting infinitesimal deformations. One should note that by \cite{B}, if $I$ is a licci ideal then $T^2(A/R,A) = 0$.
\section{Another Approach}
Given that our main theorem requires two conditions, that the ideal $I$ is both $G_s$ and weakly $(s-2)$-residually $S_2$, a natural question arises: can we weaken either condition? Approaching the question of computing the generators of residual intersections using the methods of Bouca and Hassanzadeh \cite{BH}, a partial answer arises. It is possible to eliminate the $G_s$ condition; however, with this method the depth condition is strengthened from $I$ being weakly $(s-2)$-residually $S_2$ to $I$ satisfying the $SD_1$ condition. It has yet to be determined whether it is possible to eliminate the $G_s$ condition without strengthening the depth condition.
For clarity, we recall the definition of $SD_k$ here.
\begin{defn} Let $(R,m)$ be a Noetherian local ring of dimension $d$ and $I = (x_1,\dots,x_n)$ be an ideal of grade $g$. Let $k$ be an integer. We say that $I$ satisfies $SD_k$ if
$$\depth(H_i(x_1,\dots,x_n;R)) \geq \min\{d-g, d-n+i+k \}$$
for all $i \geq 0$. \end{defn}
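For orientation (our remark, not from \cite{BH}): if $I$ is strongly Cohen-Macaulay, so that every nonvanishing Koszul homology module $H_i(x_1,\dots,x_n;R)$ has depth $d-g$, then

```latex
$$\depth(H_i(x_1,\dots,x_n;R)) = d-g \;\geq\; \min\{d-g,\; d-n+i+k\}
\quad \text{for all } i \geq 0,$$
```

for every integer $k$, so strongly Cohen-Macaulay ideals satisfy $SD_k$ for all $k$; this is how the $SD_1$ hypothesis is verified in the corollaries below.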
In the case where $s$ is small, the $G_s$ and weakly $(s-2)$-residually $S_2$ conditions are very weak, especially compared to the $SD_1$ condition. Thus, the methods of Section 3 have an advantage in this regime.
Before we go over how to use the methods of Bouca and Hassanzadeh to derive a result analogous to our main theorem, we will restate a few of the relevant results and definitions from \cite{BH}.
The most relevant of these definitions is the definition of an ideal that Bouca and Hassanzadeh call the disguised residual intersection. While they have an alternate construction for this ideal, we will use an equivalent definition.
\begin{defn}\cite[Theorem 4.9]{BH} \; Let $R$ be a Noetherian ring, $I = (x_1, \dots, x_n)$ and $\mathfrak{a} = (a_1,\dots, a_s) \subseteq I$ be ideals of $R$. Let $B = (c_{ij})$ be an $n\times s$ matrix such that $[a_1,\dots,a_s] = [x_1,\dots,x_n] B$. Let $e_1,\dots,e_n$ be the basis of $K_1(x_1,\dots,x_n; R)$ as an $R$-module. Let $\zeta_j = \sum_{i=1}^n c_{ij}e_i$ and $\Gamma_{\bullet}$ be the $R$-subalgebra of $K_\bullet(x_1,\dots,x_n; R)$ generated by $\{\zeta_1, \dots, \zeta_s \}$. Let $Z_\bullet = Z_\bullet(x_1,\dots,x_n; R)$ be the $R$-subalgebra of Koszul cycles. Then the disguised residual intersection is the ideal satisfying $\;\Kitt(\mathfrak{a},I)\cdot e_1\wedge\dots\wedge e_n = \langle \Gamma_\bullet \cdot Z_\bullet \rangle_n$.
\end{defn}
One should note that in \cite[Section 4.2]{BH}, it was shown that the disguised residual intersection does not depend on any choice of generators or the matrix $B$.
\begin{thm} \cite[Theorem 4.23]{BH} Let $R$ be a Noetherian ring, and keep the same notation as in Definition 4.1. Let $\grade(I) = g$. Let $\tilde{H}_\bullet$ be the $R$-subalgebra of $K_\bullet(x_1,\dots,x_n; R)$ generated by lifts of the generators of the Koszul homology modules to Koszul cycles. Then
$$\Kitt(\mathfrak{a},I)\cdot e_1\wedge\dots\wedge e_n = \mathfrak{a}\cdot e_1\wedge\dots\wedge e_n + \sum_{i = \max \{0, n-s\}}^{n-g}\Gamma_{n-i} \cdot \tilde{H}_i. $$
\end{thm}
\begin{prop} \cite[Proposition 4.19]{BH} Let $R$ be a commutative ring and keep the notation of Definition 4.1. Then, we have
$$\langle \Gamma_\bullet \cdot \langle Z_1(x_1,\dots,x_n;R)\rangle\rangle_n = \Fitt_0(I/\mathfrak{a})\cdot e_1\wedge\dots\wedge e_n.$$ \end{prop}
\begin{rem} \cite[Remark 4.24]{BH} With regards to inclusions, $\Fitt_0(I/\mathfrak{a}) \subseteq \Kitt(\mathfrak{a},I) \subseteq \mathfrak{a}:I$. \end{rem}
\begin{thm} \cite[Theorem 5.1]{BH} Let $R$ be a local Cohen-Macaulay ring and $I$ be an ideal of height $g \geq 2$. Assume that $I$ satisfies $SD_1$. Then any $s$-residual intersection $J = \mathfrak{a}:I$ coincides with the disguised residual intersection. \end{thm}
Now we will derive a result analogous to our main theorem.
\begin{thm} Let $R$ be a local Cohen-Macaulay ring and $I$ be an ideal of height $g \geq 2$, with $\mu(I) = n$ and which satisfies the $SD_1$ condition. Let $J = \mathfrak{a}:I$ be an $s$-residual intersection with $s \geq n-2$. Then for any generating set $a_1, \dots, a_s$ of $\mathfrak{a}$, one has $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I.$$ \end{thm}
\begin{proof} Throughout this proof, we use the notation of Theorem 4.2 and Theorem 4.3. Note that the statement is clear when $s = n-2$. So, assume that $s \geq n-1$. If $s = n-1$, then by Theorem 4.3 and Theorem 4.6, $$(\mathfrak{a}:I)\cdot e_1\wedge\dots\wedge e_n = \mathfrak{a}\cdot e_1\wedge\dots\wedge e_n+\Gamma_{n-1}\cdot\tilde{H}_1 + \sum_{i=2}^{n-g}\Gamma_{n-i}\cdot\tilde{H}_i.$$
Similarly, if $s \geq n$, then by Theorem 4.3 and Theorem 4.6, $$(\mathfrak{a}:I) \cdot e_1\wedge\dots\wedge e_n = \mathfrak{a} \cdot e_1\wedge\dots\wedge e_n +\Gamma_{n}\cdot\tilde{H}_0 +\Gamma_{n-1}\cdot\tilde{H}_1 + \sum_{i=2}^{n-g}\Gamma_{n-i}\cdot\tilde{H}_i.$$
It is clear that $\Gamma_{n}\cdot\tilde{H}_0 \subseteq \Fitt_0(I/\mathfrak{a}) \cdot e_1\wedge\dots\wedge e_n$ and Proposition 4.4 implies that $\Gamma_{n-1}\cdot\tilde{H}_1 \subseteq \Fitt_0(I/\mathfrak{a}) \cdot e_1\wedge\dots\wedge e_n$. As $\Fitt_0(I/\mathfrak{a}) \subseteq \mathfrak{a}:I$, we have that if $s \geq n-1$, $$(\mathfrak{a}:I) \cdot e_1\wedge\dots\wedge e_n = \mathfrak{a} \cdot e_1\wedge\dots\wedge e_n + \Fitt_0(I/\mathfrak{a}) \cdot e_1\wedge\dots\wedge e_n + \sum_{i=2}^{n-g}\Gamma_{n-i}\cdot\tilde{H}_i.$$
Applying elementary properties of the exterior algebra and Theorem 4.3, one can see that
$$\mathfrak{a} \cdot e_1\wedge\dots\wedge e_n + \sum_{i=2}^{n-g}\Gamma_{n-i}\cdot\tilde{H}_i = \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_s\}} \Kitt((a_{\nu_1},\dots,a_{\nu_{n-2}}),I) \cdot e_1\wedge\dots\wedge e_n.$$
Note that, by Remark 4.5, $$ \Kitt((a_{\nu_1},\dots,a_{\nu_{n-2}}),I) \subseteq (a_{\nu_1},\dots,a_{\nu_{n-2}}):I \subseteq \mathfrak{a}:I. $$
So, putting it all together, we have that $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{n-2}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{n-2}}):I.$$
\end{proof}
One should note that, unlike in our main result, the colon ideals in Theorem 4.7 are not necessarily $(n-2)$-residual intersections.
Now, we will state corollaries that are analogous to the results of Section 3. We begin with a corollary analogous to Corollary 3.2. Note that Corollary 3.2 requires that $I$ be generically a complete intersection, but only requires $I$ to be almost Cohen-Macaulay rather than Cohen-Macaulay.
\begin{cor} \cite[Corollary 5.5]{BH} Let $R$ be a local Cohen-Macaulay ring and let $I$ be an almost complete intersection ideal which is Cohen-Macaulay. Let $J = \mathfrak{a}:I$ be an $s$-residual intersection. Then $\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a})+\mathfrak{a}.$ \end{cor}
The next result is more general than Corollary 3.3, as it does not require the $G_s$ condition.
\begin{cor} Let $R$ be a local Gorenstein ring and let $I$ be a proper $R$-ideal such that $R/I$ is Cohen-Macaulay and let $\mathfrak{a} \subsetneq I$. Let $\height(I) = g \geq 2$, $\mu(I) = g+2$ and $s \geq g$. Let $\mathfrak{a}:I$ be an $s$-residual intersection. For any generating sequence $a_1,\dots,a_s$ of $\mathfrak{a}$ one has $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{g}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{g}}):I.$$ \end{cor} \begin{proof} Note that, as $R$ is Gorenstein, $\mu(I) = g+2$, and $R/I$ is Cohen-Macaulay, by \cite{AH} we have that $I$ is strongly Cohen-Macaulay, and thus satisfies $SD_1$. So, we can apply Theorem 4.7. \end{proof}
The next result is analogous to Corollary 3.5. Note that in Corollary 3.5 we require that $I$ is $G_s$ and weakly $(s-2)$-residually $S_2$ rather than $I$ being generically a complete intersection and satisfying condition $SD_1$.
\begin{cor}
Let $(R,m,k)$ be a local Gorenstein ring with $|k| = \infty$. Let $I$ be a proper $R$-ideal such that $R/I$ is Gorenstein and let $\mathfrak{a} \subsetneq I$. Suppose $I$ is of height $g \geq 2$, $\mu(I) = g+3$, $\Ext^1_{R/I}(I/I^2,R/I) = 0$, and $I$ is generically a complete intersection and satisfies the condition $SD_1$. Let $\mathfrak{a}:I$ be an $s$-residual intersection. For any generating sequence $a_1, \dots, a_s$ of $\mathfrak{a}$ such that any length $g$ subsequence is a regular sequence, one has $$\mathfrak{a}:I = \Fitt_0(I/\mathfrak{a}) + \sum_{\{a_{\nu_1},\dots,a_{\nu_{g}}\} \subseteq \{a_1,\dots,a_s\}} (a_{\nu_1},\dots,a_{\nu_{g}}):I$$
where each $(a_{\nu_1},\dots,a_{\nu_{g}}):I$ is a link. \end{cor}
\begin{proof} This follows from Theorem 4.7 and Theorem 3.4. \end{proof}
It should be noted that in the setting of both Corollary 4.9 and Corollary 4.10 we can always select generators of $\mathfrak{a}$ such that any length $g$ subsequence is a regular sequence.
Now we shall move on to one final notable comparison between the two methods. The following is a partial proof of Conjecture 5.8 from Bouca and Hassanzadeh \cite{BH}, proving that it is true in the case where $I$ satisfies $G_s$ and is weakly $(s-2)$-residually $S_2$. It should be noted that, by applying the strategy of Theorem 4.7, this would rederive the results of Section 3. However, the methods used in Section 3 are more elementary, as they do not require the notion of disguised residual intersections.
\begin{thm} Let $R$ be a Cohen-Macaulay ring. Then $\mathfrak{a}:I = \Kitt(\mathfrak{a},I)$ whenever $\mathfrak{a}:I$ is an $s$-residual intersection and $I$ satisfies $G_s$ and is weakly $(s-2)$-residually $S_2$. \end{thm}
\begin{proof} For all $i \leq s$, let $\mathfrak{a}_i = (a_1,\dots,a_i)$ and $J_i = \mathfrak{a}_i:I$. As $I$ is $G_s$, we can select generators $a_1,\dots,a_s$ of $\mathfrak{a}$ such that for all $i<s$, $(a_1,\dots,a_i):I$ is a geometric $i$-residual intersection \cite[Corollary 1.6]{U} and $I \cap J_i = \mathfrak{a}_i$ \cite[Corollary 3.6]{CEU}. By \cite[Corollary 3.6, Lemma 2.4]{CEU}, $a_i$ is a non-zerodivisor in $R/J_{i-1}$ and, letting $\overline{\cdot}$ represent images in $R/J_{i-1}$, $\overline{J_i} = (\overline{a_i}):\overline{I}$.
Let $\overline{\cdot}$ represent images in $R/J_i$. We will first show, by induction on $i$ for $0 \leq i < s$, that $\Kitt(\mathfrak{a},I)$ surjects onto $\Kitt(\overline{\mathfrak{a}},\overline{I})$. Let $x_1,\dots,x_n$ be a generating set of $I$ and let $H_j(x_1,\dots,x_n;R)$ denote the $j$th homology of $K_\bullet(x_1,\dots,x_n;R)$.
Suppose $i = 0$. Then $I \cap J_i = 0$ and by \cite[Lemma 1.4]{H}, $H_j(x_1,\dots,x_n;R)$ surjects onto $H_j(x_1,\dots,x_n;\overline{R})$ for all $j$. Combining this with Theorem 4.3 gives us that $\Kitt(\mathfrak{a},I)$ surjects onto $\Kitt(\overline{\mathfrak{a}},\overline{I})$.
Now, suppose $i>0$. Let $\cdot '$ represent images in $R/J_{i-1}$. By the inductive hypothesis, $\Kitt(\mathfrak{a},I)$ surjects onto $\Kitt(\mathfrak{a}',I')$. Note that $a_i$ is a non-zerodivisor in $R'$ and let $\cdot ''$ be the image in $R/((a_i)+J_{i-1})$. By \cite[Theorem 4.27]{BH}, $\Kitt(\mathfrak{a}',I')$ surjects onto $\Kitt(\mathfrak{a}'',I'')$. Note that $J_i'' = 0:I''$ and that $I'' \cap J_i'' = 0$, since the image of $I \cap J_i = \mathfrak{a}_i$ vanishes in $R''$; hence, arguing as in the case $i = 0$, $\Kitt(\mathfrak{a}'',I'')$ surjects onto $\Kitt(\overline{\mathfrak{a}},\overline{I})$, thus we are done.
Now we prove that $\Kitt(\mathfrak{a},I) = \mathfrak{a}:I$. Let $\height(I)=g$. Note that if $s \leq g$, then by \cite[Proposition 5.10]{BH}, we are done.
So, assume $s \geq g+1$. Let $\overline{\cdot}$ represent images in $R/J_{s-1}$. Note that $\overline{\mathfrak{a}} = (\overline{a_s})$. Thus $\Kitt(\overline{\mathfrak{a}},\overline{I}) = \Kitt((\overline{a_s}),\overline{I})$. Since $a_s$ is a non-zerodivisor on $\overline{R}$, we have that $\grade((\overline{a_s}):\overline{I}) \geq 1$ and thus, by \cite[Remark 5.11]{BH}, $\Kitt((\overline{a_s}),\overline{I}) = (\overline{a_s}):\overline{I}$. Also note that $(\overline{a_s}):\overline{I} = \overline{\mathfrak{a}:I}$. So, putting this all together gives us that $\Kitt(\overline{\mathfrak{a}},\overline{I}) = \overline{\mathfrak{a}:I}$.
Since $\Kitt(\mathfrak{a},I)$ surjects onto $\Kitt(\overline{\mathfrak{a}},\overline{I})$, and by Remark 4.5, $\Kitt(\mathfrak{a},I) \subseteq \mathfrak{a}:I = \mathfrak{a}:I + J_{s-1}$, we conclude that $\Kitt(\mathfrak{a},I) = \mathfrak{a}:I$.
\end{proof}
\end{document}
\begin{document}
\title{Classical Ising model test for quantum circuits} \author{Joseph Geraci} \affiliation{Department of Mathematics, University of Toronto, Toronto, Ontario M5S 2E4, Canada} \affiliation{Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, CA 90089} \altaffiliation{Current address: Ontario Cancer Institute, MaRS-TMDT, Toronto, Ontario M5G 1L7, Canada} \author{Daniel A. Lidar} \affiliation{Departments of Chemistry, Electrical Engineering, and Physics, Center for Quantum Information Science and Technology, University of Southern California, Los Angeles, CA 90089}
\begin{abstract} We exploit a recently constructed mapping between quantum circuits and graphs in order to prove that circuits corresponding to certain planar graphs can be efficiently simulated classically. The proof uses an expression for the Ising model partition function in terms of quadratically signed weight enumerators (QWGTs), which are polynomials that arise naturally in an expansion of quantum circuits in terms of rotations involving Pauli matrices. We combine this expression with a known efficient classical algorithm for the Ising partition function of any planar graph in the absence of an external magnetic field, and the Robertson-Seymour theorem from graph theory. We give as an example a set of quantum circuits with a small number of non-nearest neighbor gates which admit an efficient classical simulation. \end{abstract}
\maketitle
\section{Introduction}
From its early days quantum computing was perceived as a means to efficiently simulate physics problems \cite{Feynman1,Lloyd:96}, and a host of results have been derived along these lines for quantum \cite{Wiesner:96,Meyer:97,Boghosian:97a,Abrams:97,Zalka:98,Lidar:98RC,Ortiz:00,Terhal:00,Freedman:00,WuByrdLidar:01,Aspuru-Guzik:05,Cirac:02,Cirac:08} and classical systems \cite{Lidar:PRE97a,Yepez:01,Meyer:02,Georgeot:01a,Georgeot:01b,Terraneo:03,Lidar:04,dorit-tutte,JOE,Bravyi:07,aspuru-guzik1,aspuru-guzik2}. A natural problem relating quantum computation and statistical mechanics is to understand for which instances quantum computers provide a speedup over their classical counterparts for the evaluation of partition functions \cite{Lidar:PRE97a,Lidar:04}. For the \emph{Potts model}, results obtained in \cite{dorit-tutte} provide insight into this problem when the evaluation is an additive approximation. We provided a class of examples for which there is a quantum speedup when one seeks an exact evaluation of the Potts partition function \cite{JOE}.
In this work we address the connection between quantum computing and classical statistical mechanics from the opposite perspective. Namely, we seek to find restrictions on the power of quantum computing, by employing known results about efficiently simulatable problems in statistical mechanics. Specifically, we restrict our attention to the \emph{Ising model} partition function $Z$, and use a mapping between graph instances of the Ising model and quantum circuits introduced in \cite{JOE2}, to identify a certain class of quantum circuits which have an efficient classical simulation.
Restricted classes of quantum circuits which can be efficiently simulated classically have been known since the Gottesman-Knill theorem \cite{Nielsen:book}. This theorem states that a quantum circuit using only the following elements can be simulated efficiently on a classical computer: (1) preparation of qubits in computational basis states, (2) quantum gates from the Clifford group (Hadamard, controlled-NOT gates, and Pauli gates), and (3) measurements in the computational basis. Such \textquotedblleft stabilizer circuits\textquotedblright\ on $n$ qubits can be simulated in $O(n\log n)$ time using the graph state formalism \cite{anders:022334}. Other early results include Ref. \cite{valiant}, where the notion of matchgates was introduced and the problem of efficiently simulating a certain class of quantum circuits was reduced to the problem of evaluating the Pfaffian. This was subsequently shown to correspond to a physical model of noninteracting fermions in one dimension, and extended to noninteracting fermions with arbitrary pairwise interactions \cite{Knill:01,Terhal:01,Terhal:05} (see further generalizations in Refs. \cite{Bravyi:05,jozsa-2008}), and Lie-algebraic generalized mean-field Hamiltonians \cite{somma:190501}. Criteria for efficient classical simulation of quantum computation can also be given in terms of upper bounds on the amount of entanglement generated in the course of the quantum evolution \cite{vidal}.
A result that is more directly related to the one we shall present in this work is given in Ref. \cite{Bravyi:07}, but within the measurement-based quantum computation (MQC) paradigm. MQC relies on the preparation of a multi-qubit entangled resource state known as the cluster state. It is known that MQC with access to cluster states is universal for quantum computation. Reference \cite{Bravyi:07} considers \emph{planar code states} which are closely related to cluster states in that a sequence of Pauli-measurements applied to the two-dimensional cluster state can result in a planar code state. MQC with planar code states consists of a sequence of measurements $ \{M_{1},M_{2},\dots ,M_{n},M\}$ where the $M_{i}$ are one-qubit measurements and $M$ is a final measurement done on the remaining qubits in some basis which depends on the results of the $M_{i}$. Reference \cite{Bravyi:07} demonstrates that planar code states are not a sufficient resource for universal quantum computation (and can be classically simulated). This fact is attributed to the exact solvability of the Ising partition function on planar graphs. Our results complement the work in \cite{Bravyi:07}, as they are provided in terms of the circuit model, and generalize to Ising model instances that correspond to graphs which are not necessarily subgraphs of a two-dimensional grid.
Other conceptually related work uses the connection between graphs and quantum circuits and the formalism of tensor network contractions, to show that any polynomial-sized quantum circuit of $1$- and $2$- qubit gates, which has log depth and in which the $2$-qubit gates are restricted to act at bounded range, may be classically efficiently simulated \cite {Markov:tensor,jozsa-2008,yoran:170503}. A tensor network is a product of tensors associated with vertices of some graph $G$ such that every edge of $ G $ represents a summation (contraction) over a matching pair of indexes. We also use a relationship between quantum circuits and graphs but whose construction is quite different \cite{JOE2}. Also, Ref. \cite{Bravyi:08} connects matchgates and tensor network contractions to notions of efficient simulation.
Finally, other closely related work was recently reported in \cite{Nest:08} (see also \cite{nest:117207,nest:110501,Cuevas:08}), which addresses the classical simulatability of quantum circuits. Their results use a connection to the partition function of spin models, as do we, and they too provide a mapping between classical spin models and quantum circuits. Specifically pertinent to our work is the fact that they give criteria for the simulatability of quantum circuits, using the 2D Ising model. That is, circuits consisting of single qubit gates of the form $e^{i\theta \sigma _{x}}$ and nearest-neighbor gates of the form $e^{i\phi \sigma _{z}\otimes \sigma _{z}}$ are classically efficiently simulable. We shall discuss how the nearest-neighbor restrictions can be lifted while retaining efficient classical simulatability.
The structure of this paper is as follows. We begin with a brief review of the Ising model in Section \ref{sec:Ising}, where we define the Ising partition function $Z$. In Section \ref{sec2} we review quadratically signed weight enumerators (QWGT's) and their relationship to quantum circuits, and review the relationship between QWGT's and $Z$. In Section \ref{sec:mapping} we introduce an ansatz that allows one to associate graph instances of the Ising model with circuit instances of the quantum circuit model. In this section we derive a key result: an explicit connection between the partition function for the Ising model on a graph, and a matrix element of the unitary representing a quantum circuit which is related to this graph via the graph's incidence matrix [Eq.~(\ref{eq:element})]. We then present our main result in Section \ref{sec:proof}: a theorem on efficiently simulatable quantum circuits. The proof depends on the fact that there are algorithms for the efficient evaluation of $Z$ for planar instances of the Ising model. We also discuss the relation to previous work. In Section \ref{nextstep} we present a discussion and some suggestions for future work, including the possibility of a quantum algorithm for the additive approximation of $Z$. We conclude in Section \ref{sec:conc}. The Appendix gives a review of pertinent concepts from graph theory, and additional details, including some proofs.
\section{Ising Spin Model}
\label{sec:Ising}
We briefly introduce the Ising spin model accompanied by some notation and definitions. Let $G=(V,E)$ be a finite, arbitrary undirected graph with $|V|$ vertices and $|E|$ edges. In the Ising model each vertex $i$ is occupied by a classical spin $\sigma _{i}=\pm 1$, and each edge $(i,j)\in E$ represents a bond $J_{ij}$ (interaction energy between spins $i$ and $j$).
\begin{mydefinition} \label{def:Ising}An \emph{instance} of the Ising problem is the data $\Delta \equiv (G,\{J_{ij}\})$, i.e., $\Delta $ represents a weighted graph. \end{mydefinition}
The Hamiltonian of the spin system is \begin{equation} H_{\Delta }(\sigma )=-\sum_{(i,j)\in E}J_{ij}\sigma _{i}\sigma _{j}. \label{Ham} \end{equation}
A spin configuration $\sigma =\{\sigma _{i}\}_{i=1}^{|V|}$ is a particular assignment of spin values for all $|V|$ spins. A bond with $J_{ij}>0$ is called ferromagnetic, and a bond with $J_{ij}<0$ is called antiferromagnetic. The probability of the spin configuration $\sigma $ in thermal equilibrium for a system in contact with a heat reservoir at temperature $T$, is given by the Gibbs distribution: $P_{\Delta }(\sigma )={\frac{1}{Z_{\Delta }}}W_{\Delta }(\sigma )$, where the Boltzmann weight is $W_{\Delta }(\sigma )=\exp [-\beta H_{\Delta }(\sigma )]$, $\beta =1/kT$ is the inverse temperature in energy units, $k$ is the Boltzmann constant, and $Z_{\Delta }$ is the partition function: \begin{equation} Z_{\Delta }(\beta )\equiv \sum_{\sigma }\exp [-\beta H_{\Delta }(\sigma )]. \label{eq:Z} \end{equation} (Unless there is a risk of confusion we will from now on write $Z$ in place of $Z_{\Delta }(\beta )$ in order to simplify our notation.) Computation of the partition function is the canonical problem of statistical mechanics, since once $Z$ is known one can compute all thermodynamic quantities, such as the magnetization and heat capacity, by taking derivatives of $F=-kT\log Z$ (the free energy) with respect to appropriate thermodynamic variables \cite{Reichl:book}.
In this work we restrict our attention to the case $J_{ij}\in \{-J,0,J\}$, with $J>0$, which already gives rise to the full complexity of spin glass models and the associated computational hardness \cite{Parisi:book}. For example, with the above restriction the problem of computing the partition function in the three-dimensional spin-glass is NP-hard \cite{Barahona}. \footnote{ A problem is called NP-hard if the existence of a polynomial-time algorithm for its solution implies the existence of such an algorithm for all NP-complete problems.}
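As a sanity check on the definition of $Z_\Delta$, Eq.~(\ref{eq:Z}) can be evaluated directly for tiny instances (this sketch is ours; the function and parameter names are illustrative, and the brute-force enumeration is exponential in $|V|$, unlike the efficient planar-graph algorithms discussed below):

```python
import itertools
import math

def ising_partition_function(n_spins, bonds, beta):
    """Brute-force evaluation of Z = sum_sigma exp(-beta * H(sigma)) for
    H(sigma) = -sum_{(i,j) in E} J_ij * sigma_i * sigma_j.
    `bonds` maps edges (i, j) to couplings J_ij.  Exponential in n_spins."""
    Z = 0.0
    for sigma in itertools.product((-1, 1), repeat=n_spins):
        energy = -sum(J * sigma[i] * sigma[j] for (i, j), J in bonds.items())
        Z += math.exp(-beta * energy)
    return Z

# A single ferromagnetic bond: Z = 2e^{beta J} + 2e^{-beta J} = 4 cosh(beta J).
print(ising_partition_function(2, {(0, 1): 1.0}, beta=0.5))
```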
\section{Quadratically Signed Weight Enumerators and their Relation to the Ising Partition Function}
\label{sec2}
Quadratically Signed Weight Enumerators (QWGTs) were introduced by Knill and Laflamme in Ref. \cite{Laflamme}.
\begin{mydefinition} A quadratically signed weight enumerator is a bivariate polynomial of the form \begin{equation}
S(A,B,x,y)=\sum_{b\in \ker A}(-1)^{b^{t}Bb}x^{|b|}y^{n-|b|}, \label{eq:S} \end{equation} where $A$ and $B$ are $0,1$-matrices with $B$ of dimension $n\times n$ and $A$ of dimension $m\times n$. The variable $b$ in the summand ranges over $0,1$-column vectors of dimension $n$ satisfying $Ab=0$ (i.e., in the kernel, or nullspace, of $A$), $b^{t}$ is the transpose of $b$, and $|b|$ is the Hamming weight of $b$ (the number of ones in the vector $b$). All calculations involving $A$, $B$, or $b$ are done modulo $2$. \end{mydefinition}
Note that the evaluation of a QWGT, given that $x$ and $y$ are natural numbers, is in general $\#$P-hard, since it includes the evaluation of the weight enumerator polynomial of a classical linear code \cite{Welsh:book}.
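For small $n$ the sum in Eq.~(\ref{eq:S}) can be evaluated by direct enumeration (our illustrative sketch; the function name is ours, and the exponential cost in $n$ is consistent with the $\#$P-hardness just noted):

```python
import itertools

import numpy as np

def qwgt(A, B, x, y):
    """Evaluate S(A,B,x,y) = sum_{b in ker A} (-1)^{b^t B b} x^|b| y^(n-|b|)
    by enumerating all 2^n candidate 0,1-vectors.  Arithmetic involving
    A, B, and b is mod 2, as in the definition."""
    A = np.asarray(A) % 2
    B = np.asarray(B) % 2
    n = B.shape[0]
    total = 0
    for bits in itertools.product((0, 1), repeat=n):
        b = np.array(bits)
        if np.any(A.dot(b) % 2):              # keep only b in ker A
            continue
        sign = (-1) ** int(b.dot(B).dot(b) % 2)
        w = int(b.sum())                       # Hamming weight |b|
        total += sign * x**w * y**(n - w)
    return total

# A = 0 (kernel is all of {0,1}^2), B = I_2: S = y^2 - 2xy + x^2 = (x - y)^2.
print(qwgt([[0, 0]], [[1, 0], [0, 1]], 2, 5))  # (2 - 5)^2 = 9
```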
\subsection{QWGTs from Quantum Circuits}
\label{sec:QWGT-QC}
We shall now review in some detail how QWGT's were arrived at in Ref. \cite {Laflamme} by considering expansions of quantum circuits. Let $\Omega $ be a quantum circuit formed by a temporal ordering of $N$ gates $g_{k}$, and let $ U(\Omega )=\prod\nolimits_{k=N}^{1}g_{k}=g_{N}\cdots g_{1}$ be the corresponding unitary operator. Note that a universal gate set can be achieved by allowing arbitrary rotations about tensor products of Pauli operators, i.e., each of the $N$ gates $g_{k}$ can be represented as
\begin{equation} e^{-i\sigma _{b}\theta /2}=\cos \left( \frac{\theta }{2}\right) I-i\sin \left( \frac{\theta }{2}\right) \sigma _{b}, \label{eq:G} \end{equation} where \begin{equation} \sigma _{b}=\bigotimes_{i=1}^{n}\sigma _{b_{i}}^{(i)}, \end{equation} with $n$ being the number of qubits, such that the Pauli matrices are \begin{eqnarray*} \sigma _{00} &=&I=\left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) ,\quad \sigma _{01}=\sigma _{X}=\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array} \right) , \\ \sigma _{11} &=&\sigma _{Y}=\left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) ,\quad \sigma _{10}=\sigma _{Z}=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) . \end{eqnarray*} Here $b_{i}\in \{00,01,10,11\}$, $b=\{b_{i}\}_{i=1}^{n}$ is a binary vector whose length is $2n$, and the superscript $(i)$ represents the qubit which is operated on by the corresponding Pauli matrix. A circuit constructed using gates of the form (\ref{eq:G}) may be approximated efficiently to accuracy $O(\epsilon /N)$ with $\mathrm{polylog}(N/\epsilon )$ overhead using a standard gate set, such as controlled-NOT\ with single-qubit gates, and there is a classical algorithm that computes such approximations efficiently \cite{Laflamme,Kitaev:96}. A universal set of one- and two-qubit gates can be obtained from $g_{k}$'s as in Eq.~(\ref{eq:G}) from the rotations with $\cos (\theta /2)=3/5$ (i.e., $\theta =2\arcsin (4/5)$) around operators of weight at most two (the weight of $\sigma _{b}$ is the number of non-zero pairs of bits in $b$) (Theorem 3.3, case (e), of \cite {Adleman:97}). Letting \begin{equation} \cos \left( \frac{\theta }{2}\right) =\frac{\alpha }{\gamma },\quad \sin \left( \frac{\theta }{2}\right) =\frac{\alpha ^{\prime }}{\gamma }, \label{eq:theta} \end{equation} so that $\gamma =\sqrt{\alpha ^{2}+\alpha ^{\prime 2}}$, we rewrite Eq.~(\ref {eq:G}) as \begin{equation} g_{k}=\frac{1}{\gamma }\left( \alpha I-i\alpha ^{\prime }\sigma _{b_{k}}\right) . 
\label{eq:Gk} \end{equation}
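As a quick numerical sanity check (a NumPy sketch; the three-qubit operator $\sigma_Z\otimes\sigma_X\otimes\sigma_Y$ and the angle are arbitrary illustrative choices), one can verify that $\sigma_b^2=I$, which is exactly what makes the cosine/sine form in Eq.~(\ref{eq:G}) equal to $e^{-i\sigma_b\theta/2}$, and that the resulting gate is unitary:

```python
import numpy as np
from functools import reduce

# Pauli matrices keyed by the bit pairs b_i used in the text
PAULI = {
    (0, 0): np.eye(2, dtype=complex),
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),    # X
    (1, 1): np.array([[0, -1j], [1j, 0]]),                # Y
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),   # Z
}

def sigma_b(pairs):
    """Tensor product of Pauli factors for a list of bit pairs."""
    return reduce(np.kron, (PAULI[p] for p in pairs))

theta = 2 * np.arcsin(4 / 5)              # the rotation angle quoted later in the text
sb = sigma_b([(1, 0), (0, 1), (1, 1)])    # sigma_Z (x) sigma_X (x) sigma_Y
gate = np.cos(theta / 2) * np.eye(8) - 1j * np.sin(theta / 2) * sb
```

Since $\sigma_b$ squares to the identity, its exponential truncates to exactly the two terms above.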
The gate set is still universal if $U(\Omega )$ is expressed as a product of \emph{real} gates \cite{Bernstein:93}, i.e., if each gate $g_{k}$ contains an odd number of $\sigma _{Y}$'s, so that $i\sigma _{b_{k}}$ in Eq. (\ref {eq:Gk}) is a real-valued matrix. Following Ref. \cite{Laflamme}, we adopt this convention, so that from now on $b_{k}$ is a binary vector of length $2n$, subject to the restriction that each $b_{k}$ contains an odd number of $11$'s. Moreover, the gate set is still universal if we assume that the orientation (the sign of $\theta $) is positive if the number of $ \sigma _{Y}$'s is $1\ (\mathrm{mod}\ 4)$ and negative otherwise \cite{Laflamme}. This means that we can replace Eq.~(\ref{eq:Gk}) by \begin{equation} g_{k}=\frac{1}{\gamma }\left( \alpha I\pm i\alpha ^{\prime }\sigma _{b_{k}}\right) , \end{equation} with the sign determined by the number of $\sigma _{Y}$'s in $\sigma _{b_{k}} $. Then, by defining
\begin{equation}
\tilde{\sigma}_{b_{k}}=(-i)^{|b_{k}|_{Y}}\sigma _{b_{k}}, \end{equation}
where $|b_{k}|_{Y}$ is the (always odd) number of $\sigma _{Y}$'s occurring in $\sigma _{b_{k}}$, we may write \begin{equation} g_{k}=\frac{1}{\gamma }(\alpha I+\alpha ^{\prime }\tilde{\sigma}_{b_{k}}), \label{gates} \end{equation} which is the desired representation of real-valued gates \cite{Laflamme}.
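A small check (a NumPy sketch; the operator choice is an arbitrary example) that $\tilde{\sigma}_{b_{k}}$ is real whenever the number of $\sigma_Y$ factors is odd, and that the gate (\ref{gates}) is then a real orthogonal matrix, here with $(\alpha,\alpha',\gamma)=(3,4,5)$:

```python
import numpy as np
from functools import reduce

PAULI = {
    (0, 0): np.eye(2, dtype=complex),
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),    # X
    (1, 1): np.array([[0, -1j], [1j, 0]]),                # Y
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),   # Z
}

def sigma_tilde(pairs):
    """(-i)^{#Y} times the Pauli tensor product: real-valued when #Y is odd."""
    n_y = pairs.count((1, 1))
    return (-1j) ** n_y * reduce(np.kron, (PAULI[p] for p in pairs))

st = sigma_tilde([(1, 0), (0, 1), (1, 1)])   # Z (x) X (x) Y: one Y, so odd
alpha, alpha_p, gamma = 3.0, 4.0, 5.0        # cos(theta/2) = 3/5, sin(theta/2) = 4/5
g = (alpha * np.eye(8) + alpha_p * st) / gamma
```

For odd $|b_k|_Y$ one also has $\tilde\sigma^t=-\tilde\sigma$ and $\tilde\sigma\tilde\sigma^t=I$, which is why $g$ comes out orthogonal.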
Now define $C$ to be the block diagonal matrix whose blocks consist of
\begin{equation} c=\left( \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right) , \end{equation} i.e., \begin{equation} C=\bigoplus_{i=1}^{n}c. \label{eq:C} \end{equation} Then the property that $b$ has an odd number of $11$'s is given by $b^{t}Cb=1 $. In addition, we have the multiplication rule
\begin{equation} \tilde{\sigma}_{b_{1}}\tilde{\sigma}_{b_{2}}=(-1)^{b_{1}^{t}Cb_{2}}\tilde{ \sigma}_{b_{1}\oplus b_{2}} \label{eq:brule} \end{equation} where the addition in the subscript is bit by bit modulo $2$.
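The multiplication rule (\ref{eq:brule}) can be verified exhaustively on small systems; the sketch below (NumPy, checking all pairs of one- and two-qubit labels) is an illustration, not part of the original derivation:

```python
import numpy as np
from functools import reduce
from itertools import product

PAULI = {
    (0, 0): np.eye(2, dtype=complex),
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),    # X
    (1, 1): np.array([[0, -1j], [1j, 0]]),                # Y
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),   # Z
}

def sigma_tilde(b):
    """sigma-tilde for a flat 0/1 vector b of length 2n (consecutive pairs)."""
    pairs = [tuple(int(x) for x in b[i:i + 2]) for i in range(0, len(b), 2)]
    return (-1j) ** pairs.count((1, 1)) * reduce(np.kron, (PAULI[p] for p in pairs))

def rule_holds(n):
    """Check sigma~_{b1} sigma~_{b2} = (-1)^{b1^t C b2} sigma~_{b1 xor b2}."""
    c = np.array([[0, 1], [0, 0]])
    C = np.kron(np.eye(n, dtype=int), c)
    for b1 in product([0, 1], repeat=2 * n):
        for b2 in product([0, 1], repeat=2 * n):
            v1, v2 = np.array(b1), np.array(b2)
            lhs = sigma_tilde(v1) @ sigma_tilde(v2)
            rhs = (-1) ** int(v1 @ C @ v2) * sigma_tilde(v1 ^ v2)
            if not np.allclose(lhs, rhs):
                return False
    return True
```

The rule holds for all labels, not just those with an odd number of $11$'s.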
Let $H$ be the $(2n\times N)$ matrix whose columns are the $b_{k}$: \begin{equation} H=(b_{1}\text{ }b_{2}\text{ }\cdots \text{ }b_{N}). \label{eq:H} \end{equation} $H$ is a linear-size, bijective representation of the quantum circuit, where each column represents a gate and each pair of rows represents a qubit.
\begin{mydefinition} A matrix $H$ which is constructed according to Eq.~(\ref{eq:H}) is called the \textquotedblleft $H$-matrix representation\textquotedblright\ of the quantum circuit $\Omega $. \end{mydefinition}
Using the rule (\ref{eq:brule}) we then have the following expansion \cite {Laflamme}:
\begin{eqnarray} U(\Omega ) &=&\prod_{k=N}^{1}g_{k} \notag \\ &=&\prod_{k=N}^{1}\frac{1}{\gamma }(\alpha I+\alpha ^{\prime }\tilde{\sigma} _{b_{k}}) \notag \\
&=&\frac{1}{\gamma ^{N}}\sum_{a}(-1)^{Q_{aa}}\alpha ^{N-|a|}(\alpha ^{\prime })^{|a|}\tilde{\sigma}_{Ha}, \label{eq:expand} \end{eqnarray} where \begin{equation} Q_{aa^{\prime }}\equiv a^{t}Qa^{\prime } \end{equation} and where the $N\times N$ strictly lower-triangular matrix $Q$ is defined by \begin{equation} Q\equiv \mathrm{lwtr}(H^{t}CH), \label{eq:Q} \end{equation} and where $a$ ranges over all binary column vectors of length $N$; the component $a_{k}=1$ marks the gates that contribute their $\tilde{\sigma}_{b_{k}}$ term, each such gate carrying a factor $\alpha ^{\prime }$. (Thus $ Q_{aa^{\prime }}$ is not a matrix element of $Q$; we use this notation merely for convenience.)
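The expansion can be checked numerically on a toy two-qubit, two-gate circuit (a sketch: the $H$ below is an arbitrary valid choice, and the convention used is that $a_k=1$ marks the gates contributing their $\tilde\sigma_{b_k}$ term, each carrying a factor $\alpha'$, with the sign taken from the strictly lower-triangular part of $H^tCH$):

```python
import numpy as np
from functools import reduce
from itertools import product

PAULI = {
    (0, 0): np.eye(2, dtype=complex),
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),    # X
    (1, 1): np.array([[0, -1j], [1j, 0]]),                # Y
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),   # Z
}

def sigma_tilde(b):
    pairs = [tuple(int(x) for x in b[i:i + 2]) for i in range(0, len(b), 2)]
    return (-1j) ** pairs.count((1, 1)) * reduce(np.kron, (PAULI[p] for p in pairs))

# Toy 2-qubit circuit: columns of H are the b_k (each with an odd number of Y's)
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 1]])
n, N = H.shape[0] // 2, H.shape[1]
C = np.kron(np.eye(n, dtype=int), np.array([[0, 1], [0, 0]]))
Q = np.tril(H.T @ C @ H, k=-1) % 2   # strictly lower-triangular sign bookkeeping

alpha, alpha_p, gamma = 3.0, 4.0, 5.0
gates = [(alpha * np.eye(2 ** n) + alpha_p * sigma_tilde(H[:, k])) / gamma
         for k in range(N)]
U_prod = reduce(lambda acc, g: g @ acc, gates)     # U = g_N ... g_1

# Expansion: a_k = 1 marks the gates contributing sigma-tilde (factor alpha')
U_sum = np.zeros_like(U_prod)
for a in product([0, 1], repeat=N):
    a = np.array(a)
    term = (-1) ** int(a @ Q @ a) * alpha ** (N - a.sum()) * alpha_p ** a.sum()
    U_sum = U_sum + term * sigma_tilde(H @ a % 2)
U_sum = U_sum / gamma ** N
```

The sign arises because commuting the $\tilde\sigma_{b_j}$ factors past each other in descending gate order picks up $(-1)^{b_j^tCb_k}$ for every pair $j>k$ in the chosen subset.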
In order to make contact with the partition function of the Ising model, we shall be interested in the matrix element $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $, where $\rvert \mathbf{0}\rangle =\otimes _{i=1}^{n}\rvert 0_{i}\rangle $, and $\rvert 0_{i}\rangle $ is the $+1$ eigenvector of $\sigma _{Z}^{(i)}$. For this matrix element to be non-zero no qubit can be flipped, i.e., $U(\Omega )$ cannot contain any $\sigma _{X}$ or $\sigma _{Y}$ factors. When taking the same matrix element of the right-hand side of Eq.~(\ref{eq:expand}) we have $\langle \mathbf{0}\lvert \tilde{\sigma}_{Ha}\rvert \mathbf{0}\rangle $, and similarly, for this to be non-zero $\tilde{\sigma}_{Ha}$ cannot have $\sigma _{X}$ or $\sigma _{Y}$ factors. This is enforced by summing only over those binary vectors $a$ such that $CHa=0$ \cite{Laflamme}. Thus: \begin{equation} \langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle =\frac{1}{\gamma
^{N}}\sum_{a\in \ker CH}(-1)^{Q_{aa}}\alpha ^{N-|a|}(\alpha ^{\prime })^{|a|}. \label{eq:0U0} \end{equation} A glance at the QWGT expression (\ref{eq:S})\ reveals a striking similarity to the latter matrix element.
\subsection{Example}
As a simple example meant to illustrate the correspondence between the $H$-matrix representation of a quantum circuit $\Omega $ and the actual operation of the circuit, consider
\begin{equation*} H=\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 1 & 0 \end{array} \right] \end{equation*} This matrix represents a circuit $\Omega $ comprising three gates (three columns) acting on three qubits (two rows per qubit), \begin{equation*} U(\Omega )=g_{3}g_{2}g_{1}, \end{equation*} with the following unitaries: \begin{eqnarray*} g_{1} &=&e^{-i\frac{\theta }{2}\sigma _{Z}^{(1)}\sigma _{X}^{(2)}\sigma _{Y}^{(3)}}=\frac{1}{\gamma }(\alpha I-i\alpha ^{\prime }\sigma _{Z}^{(1)}\sigma _{X}^{(2)}\sigma _{Y}^{(3)}), \\ g_{2} &=&e^{-i\frac{\theta }{2}\sigma _{Z}^{(1)}\sigma _{Z}^{(2)}\sigma _{Y}^{(3)}}=\frac{1}{\gamma }(\alpha I-i\alpha ^{\prime }\sigma _{Z}^{(1)}\sigma _{Z}^{(2)}\sigma _{Y}^{(3)}), \\ g_{3} &=&e^{-i\frac{\theta }{2}\sigma _{Y}^{(1)}\sigma _{Z}^{(2)}\sigma _{Z}^{(3)}}=\frac{1}{\gamma }(\alpha I-i\alpha ^{\prime }\sigma _{Y}^{(1)}\sigma _{Z}^{(2)}\sigma _{Z}^{(3)}). \end{eqnarray*} The superscripts represent which qubit is being acted upon and we have omitted the tensor product symbols. The Pauli operators can be read off from the corresponding column entries in $H$; thus the entry $(1$ $0)^{t}$ in the top position of the first column of $H$ represents the $\sigma _{Z}^{(1)}$ Pauli matrix in $g_{1}$, etc.
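The read-off described above can be sketched in a few lines (the helper name is hypothetical):

```python
import numpy as np

# The example H-matrix from the text: three gates (columns) on three qubits
H = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [0, 1, 1],
    [1, 0, 0],
    [1, 1, 1],
    [1, 1, 0],
])

NAMES = {(0, 0): "I", (0, 1): "X", (1, 1): "Y", (1, 0): "Z"}

def column_to_paulis(col):
    """Read off the Pauli string of one gate from a column of H."""
    return [NAMES[tuple(col[i:i + 2])] for i in range(0, len(col), 2)]

gates = [column_to_paulis(H[:, k]) for k in range(H.shape[1])]
```

Each column reproduces the Pauli strings of $g_1$, $g_2$, $g_3$ listed above.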
\subsection{QWGTs and the Ising Partition Function}
In Ref. \cite{Lidar:04} it was shown that the Ising partition function can be expressed in terms of a QWGT. Let $A$ be the incidence matrix of the graph $G=(V,E)$, i.e., \begin{equation} A_{v,(i,j)}=\left\{ \begin{array}{ll} 1 & \mbox{$(v\in \{i,j\} \:\: {\rm and}\:\: (i,j)\in E)$} \\ 0 & \mbox{${\rm else}$} \end{array} \right. . \end{equation} Let us associate a binary vector \begin{equation} w=(w_{12},w_{13},\dots ) \end{equation}
of length $|E|$ with the bond distribution $\{J_{ij}=\pm J\}$, by letting
\begin{equation} w_{ij}={\frac{1-J_{ij}/J}{2}}, \end{equation} so that $w$ specifies whether edge $(i,j)$ supports a ferromagnetic ($ w_{ij}=0$) or antiferromagnetic ($w_{ij}=1$) bond. Thus we can give an equivalent definition of an instance of the Ising model (recall Definition \ref{def:Ising}) as the data $\Delta \equiv (G,w)$.
Let \begin{equation} \lambda =\tanh (\beta J), \label{eq:l} \end{equation}
and define the $|E|\times |E|$ matrix \begin{equation} B=\mathrm{dg}(w)=\left\{ \begin{tabular}{ll} $w$ & on the diagonal \\ $0$ & elsewhere \end{tabular} \right. . \end{equation} Writing the instance data as $\Delta \equiv (G,w)$ we then have (Theorem 2 of \cite{Lidar:04}): \begin{eqnarray}
Z_{\Delta }(\lambda ) &=&\frac{2^{|V|}}{(1-\lambda ^{2})^{|E|/2}}\sum_{a\in
\ker A}(-1)^{a^{t}Ba}\lambda ^{|a|} \notag \\
&=&\frac{2^{|V|}}{(1-\lambda ^{2})^{|E|/2}}S(A,\mathrm{dg}(w),\lambda ,1), \label{Z} \\
&=&\frac{2^{|V|}}{(1-\lambda ^{2})^{|E|/2}}\sum_{a\in \ker A}(-1)^{a\cdot w}\lambda ^{|a|} \label{eq:Z1} \end{eqnarray}
where $a$ in the sums ranges over all $0-1$ vectors of length $|E|$ satisfying $Aa=0$, where $a^{t}Ba=\sum_{i}a_{i}w_{i}a_{i}=a\cdot w$ (since $ a_{i}=0$ or $1$) was used in the second equality, and where the QWGT\ definition (\ref{eq:S}) was used in the last equality.
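Equation (\ref{eq:Z1}) can be confirmed by brute force on a small instance; the sketch below (NumPy; the triangle graph, the inverse temperature, and the single antiferromagnetic bond are arbitrary illustrative choices) compares the spin sum against the even-subgraph expansion:

```python
import numpy as np
from itertools import product

# Triangle graph: vertices {0,1,2}, edges as pairs
edges = [(0, 1), (0, 2), (1, 2)]
V, E = 3, len(edges)
A = np.zeros((V, E), dtype=int)
for k, (i, j) in enumerate(edges):
    A[i, k] = A[j, k] = 1

w = np.array([0, 1, 0])          # one antiferromagnetic bond (illustrative choice)
beta, J = 0.7, 1.0
lam = np.tanh(beta * J)
Jij = J * (1 - 2 * w)            # J_ij = +J if w_ij = 0, -J if w_ij = 1

# Brute-force partition function over all spin configurations
Z_brute = 0.0
for s in product([1, -1], repeat=V):
    energy = sum(Jij[k] * s[i] * s[j] for k, (i, j) in enumerate(edges))
    Z_brute += np.exp(beta * energy)

# Even-subgraph (kernel of A over GF(2)) expansion, Eq. (Z)
S = 0.0
for a in product([0, 1], repeat=E):
    a = np.array(a)
    if (A @ a % 2 == 0).all():               # a encodes an even subgraph
        S += (-1) ** int(a @ w) * lam ** a.sum()
Z_qwgt = 2 ** V / (1 - lam ** 2) ** (E / 2) * S
```

For the triangle only the empty subgraph and the full cycle survive, so the sum collapses to $1\pm\lambda^3$ depending on the parity of $a\cdot w$.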
This establishes the link between QWGTs and the Ising model partition function. Because of the similarity to the matrix element $\langle \mathbf{0} \lvert U(\Omega )\rvert \mathbf{0}\rangle $, we expect to be able to relate the partition function to quantum circuits, via QWGTs. We take this up in the next section.
\begin{mydefinition} An even subgraph of a graph $G$ (or equivalently an Eulerian subgraph) is any subgraph of $G$ whose vertices all have even degree. Equivalently, each connected component of such a subgraph is a closed walk in $G$ that passes through each of its edges exactly once. \label{def:even} \end{mydefinition}
Now, note that the sum in \begin{equation} S(A,\mathrm{dg}(w),\lambda ,1)=\sum_{a\in \ker A}(-1)^{a\cdot w}\lambda
^{|a|} \end{equation} is over vectors that are in the kernel (nullspace) of $A$, which here means that only subgraphs with an even number of bonds at every vertex are allowed, i.e., the sum is taken over all even subgraphs or, equivalently, all Eulerian subgraphs.
In this work we will sometimes refer to even or Eulerian subgraphs as cycles.
\section{Connecting the Ising Model Partition Function to the Quantum Circuit Matrix Element}
\label{sec:mapping}
Our goal in this section is to connect the partition function $Z$ to $ \langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $. To do so we will use a mapping found and described in detail in Ref. \cite{JOE2}.
\subsection{A circuit ansatz}
\label{sec:ansatz}
Focusing on the representation of the quantum circuit given in Eq.~(\ref {eq:expand}) and of the partition function given in Eq.~(\ref{eq:Z1}), we begin by asking ourselves if there exists some ansatz for the gate set $ g_{k} $ such that
\begin{equation}
U(\Omega )=\prod_{k=N}^{1}g_{k}\overset{?}{\propto }\sum_{a}(-1)^{a\cdot w}\lambda ^{|a|}\tilde{\sigma}_{Ha}, \label{eq:U1} \end{equation} where $\lambda =\tanh (\beta J)$. If such a form were possible, the two representations would be closely linked. Indeed, we can almost get this form. Let us take as an ansatz \begin{equation} g_{k}=\frac{1}{\sqrt{\lambda ^{2}+1}}(I+\lambda \tilde{\sigma}_{b_{k}}), \label{ansatz} \end{equation} i.e., the special case of Eq.~(\ref{gates}) with $\lambda =\alpha ^{\prime }/\alpha $, or \begin{equation} \tanh (\beta J)=\tan (\theta /2). \label{eq:abl} \end{equation} Note that since the inverse temperature $\beta $ and the bond strength $J$ are both positive, Eq.~(\ref{eq:abl}) restricts $(\theta /2)\,\mathrm{mod}\, 2\pi $ to be in the range $(0,\pi /2)\cup (\pi ,3\pi /2)$, or $\theta \,\mathrm{mod}\,4\pi $ to be in the range \begin{equation} R\equiv (0,\pi )\cup (2\pi ,3\pi ) \label{eq:range} \end{equation} (the range for which $\tan (\theta /2)>0$). Fortunately, this includes the case $\theta =2\arcsin (4/5)\approx 1.85\in R$ (i.e., $\lambda =4/3$), which, as noted above, allows a universal set of one- and two-qubit gates to be obtained.\footnote{ These observations were used in Ref. \cite{JOE2} to show that finding additive approximations of the signed generating function of Eulerian subgraphs over hypergraphs is BQP-complete.} Thus, we have not restricted the generality of the class of quantum circuits so far. On the other hand, most $\theta \in R$ do not correspond to universal quantum circuits.
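A quick numerical check of the angle quoted above (a sketch):

```python
import numpy as np

# theta = 2 arcsin(4/5), i.e., cos(theta/2) = 3/5 and sin(theta/2) = 4/5
theta = 2 * np.arcsin(4 / 5)
lam = np.tan(theta / 2)   # = alpha'/alpha = 4/3
```
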
Next, we obtain from Eq.~(\ref{eq:expand}):
\begin{eqnarray} U(\Omega ) &=&\prod_{k=N}^{1}\frac{1}{\sqrt{\lambda ^{2}+1}}(I+\lambda \tilde{\sigma}_{b_{k}}) \notag \\
&=&\frac{1}{(\lambda ^{2}+1)^{N/2}}\sum_{a}(-1)^{Q_{aa}}\lambda ^{|a|}\tilde{ \sigma}_{Ha}. \label{U} \end{eqnarray} After taking matrix elements $\langle \mathbf{0}\lvert \cdot \rvert \mathbf{0 }\rangle $, we have, recalling Eq.~(\ref{eq:0U0}): \begin{equation} \langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle =\frac{1}{
(\lambda ^{2}+1)^{N/2}}\sum_{a\in \ker CH}(-1)^{Q_{aa}}\lambda ^{|a|}. \label{0U0} \end{equation} Comparing Eqs.~(\ref{eq:Z1})\ and (\ref{0U0}), while using $\lambda =\alpha ^{\prime }/\alpha $, we see that a sufficient condition for them to be equal is to identify the incidence matrix $A$ with $CH$ via \begin{equation} \tilde{A}=CH, \label{A=CH} \end{equation} where $\tilde{A}$ has twice as many rows as the incidence matrix $A$ (each even row of $\tilde{A}$ is a zero row, and its $(2i-1)$-th row equals the $i$-th row of $A$), and to equate the exponents, i.e., find a bond distribution $w$ which solves \begin{equation} a\cdot w\,\mathrm{mod}\,2=Q_{aa}\quad \forall a\in \ker A \label{eq:w} \end{equation} where $Q=\mathrm{lwtr}(H^{t}\tilde{A})$ [recall Eq.~(\ref{eq:Q})]. For then
\begin{eqnarray} \langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle &=&\frac{1}{
(\lambda ^{2}+1)^{N/2}}\sum_{a\in \ker A}(-1)^{a\cdot w}\lambda ^{|a|} \notag \\
&=&\frac{(1-\lambda ^{2})^{\frac{|E|}{2}}}{(1+\lambda ^{2})^{\frac{|E|}{2}
}2^{|V|}}Z_{\Delta }(\lambda ). \label{eq:element} \end{eqnarray}
\emph{Equation }(\ref{eq:element})\emph{\ is a key result of this paper, as it establishes the equivalence between quantum circuits and the Ising model, for bond distributions }$w$\emph{\ that satisfy Eq. }(\ref{eq:w})\emph{, and }$\lambda $\emph{'s that satisfy Eq. }(\ref{eq:abl})\emph{. }
It has two consequences. First, if we are able to determine $\langle \mathbf{ 0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $, then we are able to determine the partition function $Z_{\Delta }(\lambda )$. Note that estimating $ \langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ in general is BQP-complete \cite{jordan} and thus something one could do with a universal quantum computer. Alternatively, if we had a way of classically computing $ Z_{\Delta }(\lambda )$, then we would be able to classically simulate the quantum circuit $\Omega $ (if it were solving a decision problem) \cite{z2}. This latter alternative is the one we focus on in this paper.
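Equation (\ref{eq:element}) can be verified end to end on a toy instance. The sketch below is an illustration under explicit assumptions: a ferromagnetic triangle, one gate per edge (with $\sigma_Y$ on one endpoint and $\sigma_X$ on the other, an arbitrary valid construction), gates normalized so that the $\tilde\sigma$ term carries the weight $\lambda$, and the all-zero bond distribution, which happens to solve Eq.~(\ref{eq:w}) for this instance:

```python
import numpy as np
from functools import reduce
from itertools import product

PAULI = {
    (0, 0): np.eye(2, dtype=complex),
    (0, 1): np.array([[0, 1], [1, 0]], dtype=complex),    # X
    (1, 1): np.array([[0, -1j], [1j, 0]]),                # Y
    (1, 0): np.array([[1, 0], [0, -1]], dtype=complex),   # Z
}

def sigma_tilde(b):
    pairs = [tuple(int(x) for x in b[i:i + 2]) for i in range(0, len(b), 2)]
    return (-1j) ** pairs.count((1, 1)) * reduce(np.kron, (PAULI[p] for p in pairs))

# Triangle graph: one gate per edge; sigma_Y on qubit i, sigma_X on qubit j
edges = [(0, 1), (0, 2), (1, 2)]
V = 3
H = np.zeros((2 * V, len(edges)), dtype=int)
for k, (i, j) in enumerate(edges):
    H[2 * i:2 * i + 2, k] = (1, 1)
    H[2 * j:2 * j + 2, k] = (0, 1)

beta, J = 0.5, 1.0
lam = np.tanh(beta * J)

# Left-hand side: <0|U|0> from the explicit gate product
gates = [(np.eye(2 ** V) + lam * sigma_tilde(H[:, k])) / np.sqrt(1 + lam ** 2)
         for k in range(len(edges))]
U = reduce(lambda acc, g: g @ acc, gates)
lhs = U[0, 0].real

# Right-hand side: rescaled brute-force Ising partition function (all +J bonds)
Zp = sum(np.exp(beta * J * sum(s[i] * s[j] for i, j in edges))
         for s in product([1, -1], repeat=V))
E = len(edges)
rhs = (1 - lam ** 2) ** (E / 2) / ((1 + lam ** 2) ** (E / 2) * 2 ** V) * Zp
```

Both sides reduce to $(1+\lambda^3)/(1+\lambda^2)^{3/2}$ for this instance.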
\subsection{Circuit-Ising model compatibility}
The connections we established in the previous subsection between quantum circuits and the partition function imply certain restrictions. We flesh these out in the present subsection.
First, since we wish to work only with the physically relevant range of positive temperatures and positive $J$, we restrict the gate angles $\theta $ from now on to lie in $R$. Formally:
\begin{mydefinition} A gate angle $\theta $ [Eq.~(\ref{eq:theta})] for a gate $g_{k}$ [Eq.~(\ref {gates})] is said to be \textquotedblleft $\lambda $-compatible \textquotedblright\ if\ $\theta \in R$, where the range $R$ is defined in Eq.~(\ref{eq:range}). \end{mydefinition}
Next, we note that Eq.~(\ref{A=CH}) gives rise to a compatibility relation between circuits and graphs:
\begin{mydefinition} A quantum circuit $\Omega $, constructed with $\lambda $-compatible angles, is \textquotedblleft $G$-compatible\textquotedblright\ with a graph $G$ if the $H$-matrix representation of $\Omega $ satisfies Eq.~(\ref{A=CH}), where $A$ is the incidence matrix of $G$, and $C$ is defined in Eq.~(\ref{eq:C}). \end{mydefinition}
When we take a $G$-compatible circuit and plug its $H$-matrix into Eq.~(\ref {eq:w})\ we are not guaranteed that there exists a solution $w$. Hence we need an appropriate restriction of the class of $G$-compatible circuits:
\begin{mydefinition} A quantum circuit $\Omega $ is \textquotedblleft $Gw$-compatible \textquotedblright\ if it is $G$-compatible and if the solution set of Eq.~( \ref{eq:w}) is non-empty. \end{mydefinition}
We need a similar notion for the bond distributions:
\begin{mydefinition} A bond distribution $w$ is \textquotedblleft $G\Omega $-compatible \textquotedblright\ with a graph $G$ and circuit $\Omega $ if it satisfies Eq.~(\ref{eq:w}). \end{mydefinition}
Note that in this last definition the circuit $\Omega $ must be $Gw$-compatible, for otherwise we are not guaranteed that the solution set of Eq.~(\ref{eq:w}) is non-empty. Note further that, as these definitions imply, Eqs.~(\ref{A=CH})-(\ref{eq:element}) describe a connection between quantum circuits and instances of the Ising model over given graphs. Namely, any $H$ which solves Eq.~(\ref{A=CH}) is a matrix representation of a circuit $\Omega $ which belongs to a class defined by the incidence matrix $ A $ of a given graph $G$. In addition, we can populate the edges of $G$ with weights from the bond distribution $w$ provided $w$ is compatible. Thus:
\begin{mydefinition} \label{def:Omega_G}Let $\Gamma$ be any set of graphs for which a solution to Eq.~(\ref{eq:w}) exists. Then $\Omega _{\Gamma w}$ is the set of circuits which are $Gw$-compatible $\forall G \in \Gamma$. \end{mydefinition}
\begin{mydefinition} \label{def:I_G}$I(\Omega _{\Gamma w})$ is the class of Ising model instances $\{\Delta (G,w)\}_{w}$ whose graph is $G \in \Gamma$ and whose bond distributions $\{w\}$ are $G\Omega$-compatible $\forall G \in \Gamma$. \end{mydefinition}
Equation (\ref{eq:w}) is a system of linear equations over $GF(2)$. The number of equations is equal to the number of even subgraphs of the given graph, i.e., the total number of elements in the set $\ker (CH)$, and the number of unknowns is equal to the number of edges. However, in spite of the fact that the number of elements in $\ker (CH)$ scales exponentially in the number of vertices, it turns out that finding a $w$ which solves Eq. (\ref {eq:w}) can be done efficiently [see Eq. (\ref{eq:w-sol}) below]. Let us further stress that Eq.~(\ref{eq:w}) is only a sufficient condition for the equality of Eqs.~(\ref{eq:Z1})\ and (\ref{0U0}), and does not capture the whole set of possible graph instances that our scheme can handle. We define our instances via this condition because it simplifies the analysis and it allows us to extract information about an interesting set of quantum circuits which may be classically simulated. We discuss more general sufficient conditions in Section \ref{nexta}, but leave the development of a complete understanding of the actual graph instances that our mapping can handle, and in particular finding necessary conditions for the equality of Eqs.~(\ref{eq:Z1})\ and (\ref{0U0}), as a problem for future study.
\section{Circuits Corresponding to Certain Planar Graphs have an Efficient Classical Simulation}
\label{sec:proof}
Let us recap the general idea we have developed so far. At the basis of our construction are an inverse temperature $\beta $, bond strength $J$, and a given graph $G$. We use this graph to first identify a compatible class of quantum circuits $\Omega _{G}$ (Definition \ref{def:Omega_G}). This class is restricted to a subclass $\Omega _{Gw}$ of circuits for which there exist solutions to Eq.~(\ref{eq:w}). Such solutions are used to assign weights to the graph's edges (a bond distribution), which yields a class of Ising model instances compatible with $G$ and $\Omega _{G}$ (Definition \ref{def:I_G}). In other words, we go from the unweighted graph to a class of compatible circuits, and from there back to the graph, which is now populated by a class of compatible Ising models. Each circuit is also parametrized by an angle $\theta $, and when we vary $\theta $ in the range $R$ [Eq.~(\ref {eq:range})] we also vary over $\beta $ and $J$, via $\tan (\theta /2)=\tanh (\beta J)$. However, not all values of $\theta $ correspond to universal circuits. Conversely, not every circuit need correspond to a physical (positive) temperature.
In more detail, we identify the class of quantum circuits $\Omega _{G}$ compatible with $G$ (whose incidence matrix is $A$) by solving Eq.~(\ref {A=CH}) for the matrices $H$ representing each (or some) $\Omega \in \Omega _{G}$, and then find the subset $\Omega _{Gw}$ for which the solution set to Eq.~(\ref{eq:w}) is non-empty. We then look for a bond distribution $w$ that satisfies Eq.~(\ref{eq:w}) for a given $H$. Every such $w$ defines an Ising model instance $\Delta (G,w)$ that is compatible with $G$ and the corresponding $\Omega \in \Omega _{G}$. We are guaranteed that provided such a bond distribution $w$ exists, the partition function for the corresponding Ising model is proportional to the matrix element $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ [Eq.~(\ref{eq:element})]. In other words, any bond distribution $w$ that satisfies Eq.~(\ref{eq:w}) induces a direct connection between quantum computation and the Ising model on a graph with that same bond distribution.
It is important to emphasize that Eq.~(\ref{eq:w}) will not always have a solution $w$. Whether or not this is the case is entirely determined by the given graph $G$, since $G$, via its incidence matrix $A$, determines the class of $G$-compatible circuits $\Omega _{G}$ [i.e., the matrices $H$ that solve Eq.~(\ref{A=CH})], and together they determine $a$ and $Q_{aa}$ that go into Eq.~(\ref{eq:w}), which $w$ needs to solve. Thus, it makes sense to define a class of graphs for which there exists a solution $w$ to Eq.~(\ref {eq:w}).
\begin{mydefinition} \label{def:Theta} $\Theta$ is the set of graphs for which a solution to Eq.~( \ref{eq:w}) exists. \end{mydefinition}
In order to characterize $\Theta $ we require some basic ideas from graph theory, such as obstruction sets and downward closure. These are reviewed in Appendix \ref{app:graphs}. We shall prove:
\begin{mylemma} \label{finite-obs} The obstruction set for $\Theta $ is finite. \end{mylemma}
This will come as a consequence of $\Theta $'s downward closure and the Robertson-Seymour theorem -- Theorem \ref{th:RS}. This is proved in Appendix \ref{proofs}. We shall also prove there that there are no solutions to Eq.~( \ref{eq:w}) for the graphs $K_{4}$ (the complete graph on four vertices) and $\bar{K}_{3,3}$, where $\bar{K}_{3,3}$ is $K_{3,3}$ with one edge missing (which edge does not matter: whichever edge is deleted, one can relabel the edges and vertices to obtain exactly the same incidence structure). Formally:
\begin{mylemma} \label{lem:obs} The obstruction set for $\Theta $ includes $\bar{K}_{3,3}$ and $K_{4}$. \end{mylemma}
The proof is described in Appendix \ref{proofs}. See Fig.~\ref{graphs} in Appendix \ref{app:graphs} for a pictorial representation of the graphs mentioned in Lemma~\ref{lem:obs}. As a consequence we will find that \emph{ all graphs in }$\Theta $ \emph{are planar}.
We shall clarify this conclusion and the previous lemma in the next subsection, but given their validity, a $G\Omega $-compatible bond distribution $w$ necessarily corresponds to a \emph{planar} graph $G$. From this it follows that its partition function can be efficiently computed classically, and hence the corresponding class of quantum circuits, i.e., $ \Omega _{\Theta w}$, also has an efficient classical simulation. Let us be precise about what we mean by \textquotedblleft classically efficiently simulatable\textquotedblright\ (CES).
\begin{mydefinition} \label{def:CES}A uniform family $\mathcal{G}_{n}=\{\Omega _{i}\}$ of $n$-qubit quantum circuits is \textquotedblleft classically efficiently simulatable\textquotedblright\ (CES)\ if the matrix element $|\langle \mathbf{0}\lvert U(\Omega _{i})\rvert \mathbf{0}\rangle |$ of each circuit in $\mathcal{G}_{n}$ can be obtained to $k$ digits of precision in time $\mathrm{poly}(n,k)$ by classical means \cite{z2}. \end{mydefinition}
This definition is a modified version of the one given in Ref. \cite {jozsa-2008}, which also includes a discussion on how it can be weakened.
When we collect the observations above we arrive at an efficient classical test for whether a given quantum circuit is CES. This is summarized in Theorem \ref{th}, which is our main result and the subject of the remainder of the paper:
\begin{mytheorem} \textbf{\emph{(Circuits Corresponding to Certain Planar Graphs have an Efficient Classical Simulation)}}\newline \label{th} The class of quantum circuits $\Omega _{\Theta w}$ is CES. Whether a given graph $G$ is in $\Theta $ can be decided efficiently. \end{mytheorem}
The theorem comprises two parts. In the first we characterize an entire class of CES\ quantum circuits. The proof we offer below is not constructive, i.e., we prove that there \emph{exists} an efficient classical simulation of the class of quantum circuits $\Omega _{\Theta w}$, and also provide a test of non-membership in $\Omega _{\Theta w}$ for a given quantum circuit. In the second part we give an explicit construction which decides whether a given graph belongs to the set of graphs resulting in CES circuits. To illustrate this part, we discuss a class of graphs (whose corresponding circuits form a subset of $\Omega _{\Theta w}$) in Section \ref{sparsegraphs}, for which we can explicitly find the $G\Omega $-compatible bond distribution. This class is highly restricted in that the number of even subgraphs only grows polynomially in the number of vertices, whereas in general, even with the restriction to planar graphs, the number of even subgraphs grows exponentially. Nonetheless, the class of quantum circuits that one obtains under this restriction is interesting in light of some new results about the classical simulatability of quantum circuits \cite{Nest:08,jozsa-2008}.
For the benefit of the reader we summarize the scheme of the first claim of the proof informally. This will also serve to summarize again the mapping between quantum circuits and graphs.
\begin{enumerate} \item \textbf{Given:} any subset $\Gamma $ of $\Theta $. Every $G\in \Gamma $ has a $G\Omega $-compatible bond distribution for some quantum circuit $ \Omega $, by assumption.
\item Take the incidence matrices $CH$ of the graphs in $\Gamma$ and transform them into the \emph{H-matrix representations} of the corresponding quantum circuits. The following constraint must be respected: Every column must have one $Y$-operation and can have at most one $X$-operation. [This constraint comes from the fact that $CH$ should be an incidence matrix for a graph, where $C$ is the block-diagonal matrix defined in Eq.~(\ref{eq:C}). Without it one has a correspondence between quantum circuits and hypergraphs \cite{JOE2}. Indeed, if the incidence matrix has more than two ones per column then one has a hypergraph.]
\item Thus $\Gamma$ corresponds to a set of quantum circuits $\Omega_{\Gamma w}$, i.e., every quantum circuit $\Omega \in \Omega_{\Gamma w}$ is \emph{ Gw-compatible} for some $G \in \Gamma$.
\item Show that our mapping from circuits to graphs defines a \textquotedblleft downward closed set\textquotedblright\ of graphs, which means that we may apply the Robertson-Seymour Theorem \cite{RS}. This theorem guarantees that there is a finite set of graphs (obstruction set) for which we can test whether or not $G$ has any members of this set as a graph minor \cite{Graph} (at most cubic complexity in the number of quantum gates in $\Omega$).
\item Define $\Theta $, via this obstruction set, i.e., a graph is a member of $\Theta $ if it does not have $K_{4}$ and $\bar{K}_{3,3}=K_{3,3}$ \emph{ with one edge deleted} as minors (there may be other forbidden minors). One has a set of circuits which correspond to the graphs which have a satisfying bond distribution $w$ for equation (\ref{eq:w}). We call the corresponding class of quantum circuits $\Omega _{\Theta w}$. (This specific obstruction set has been tested with mathematical software. See Appendix \ref{app:algo}.)
\item Due to the fact that these graphs are planar, the partition function $ Z $ of any graph in $\Theta$ can be computed efficiently by a classical computer \cite{Welsh:book}.
\item Using equation (\ref{eq:element}), show that knowledge of $Z$ can be used to determine the outcome of a quantum circuit $\Omega \in \Omega_{\Gamma w}$ for a decision problem.
\item \textbf{Conclude:} Families of quantum circuits in $\Omega _{\Theta w}$ which solve a decision problem can be classically simulated. \end{enumerate}
Since $\Gamma $ is a subset of $\Theta $, any subset $\Omega _{\Gamma w}$ of $\Omega _{\Theta w}$ is CES. \newline
Conversely, we have a test for non-membership in the set $\Omega _{\Theta w}$ :
\begin{enumerate} \item \textbf{Input} a quantum circuit $\Omega$.
\item Transform $\Omega $ into a matrix whose columns represent Pauli operations (that are to be exponentiated) and each pair of whose rows represents a qubit being acted upon, as described in Section \ref{sec:QWGT-QC}. This matrix is called $H$ [Eq.~(\ref{eq:H})] and is in 1-to-1 correspondence with $\Omega $. As above, the following constraint must be respected: Every column must have one $Y$-operation and can have at most one $X$-operation.
\item After the above transformation, construct a corresponding incidence matrix $CH$ of a graph $G$.
\item Check the graph $G$ for the minors $\bar{K}_{3,3}$ and $K_4$.
\item \textbf{Conclude:} If either of these is a minor of $G$ then reject $ \Omega$. \end{enumerate}
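For intuition, minor containment can be brute-forced on very small graphs by exploring all edge deletions and contractions; the sketch below tests for a $K_4$ minor only (the $\bar K_{3,3}$ check is analogous) and is exponential, whereas practical tests use the cubic-time graph-minor algorithms mentioned above:

```python
from itertools import combinations

def has_k4_minor(edge_list):
    """Brute-force K4-minor test via edge deletions/contractions (tiny graphs only)."""

    def clean(es):
        # drop self-loops created by contractions
        return frozenset(e for e in es if len(e) == 2)

    def has_k4_subgraph(es):
        verts = set()
        for e in es:
            verts |= e
        for quad in combinations(sorted(verts), 4):
            if {frozenset(p) for p in combinations(quad, 2)} <= es:
                return True
        return False

    seen = set()

    def rec(es):
        es = clean(es)
        if es in seen:
            return False
        seen.add(es)
        if has_k4_subgraph(es):
            return True
        for e in es:
            if rec(es - {e}):                       # delete edge e
                return True
            u, v = sorted(e)
            merged = frozenset(frozenset(v if x == u else x for x in f)
                               for f in es - {e})   # contract e: relabel u -> v
            if rec(merged):
                return True
        return False

    return rec(frozenset(frozenset(e) for e in edge_list))
```

A triangle and $K_4$ minus an edge are series-parallel, so they have no $K_4$ minor, while $K_4$ itself trivially does.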
\subsection{Ordering Lemma}
The following lemma allows us to introduce an ordering on the elements in $ \Theta$.
\begin{mylemma} \label{graphK} If a graph $G$ is a member of $\Theta$, then so is $ G\setminus e_{j}$ or $G/e_{j}$, i.e., the deletion or contraction of an arbitrary edge $e_{j}$ from a graph in $\Theta$ is also in $\Theta$. \end{mylemma}
The proof of this lemma is technical and is given in Appendix \ref{proofs}. Lemma \ref{graphK} implies that $\Theta $ is a \emph{downwardly closed set with respect to the minor ordering}. Hence we can apply the Robertson-Seymour theorem \cite{RS}, Theorem \ref{th:RS}, which states that \emph{any graph may be tested for membership in a given downwardly closed set of graphs by just searching the graph for a finite set of minors.} The complexity of doing this, given knowledge of the minors one is looking for, can be shown to be cubic in the number of edges. We implemented this to test Eq.~(\ref{eq:w}) for non-planar solutions, as we describe next.
\subsection{Equation (\protect\ref{eq:w}) implies planarity}
Using mathematical software we demonstrated that the mapping between graphs and circuits described above, with the sufficient condition given by Eq.~( \ref{eq:w}), cannot be satisfied for $K_{5}$ and $K_{3,3}$. That is, $K_{5}$ and $K_{3,3}$ are forbidden minors for $\Theta $. The algorithm we implemented to check this is described in Appendix \ref{app:algo}. However, a finite graph is planar if and only if it does not have $K_{5}$ or $K_{3,3}$ as minors (Wagner's theorem; see Appendix \ref{app:graphs}). As stated earlier, we thus have
\begin{mylemma} \label{lem:planar}All graphs in $\Theta $ are planar. \end{mylemma}
We remark that we have been able to find examples of planar graphs for which there do exist solutions $w$ to Eq.~(\ref{eq:w}), e.g., $K_{2,3}$. As we explain below, this means that $\Theta $ includes planar graphs which are not outerplanar.
\subsection{Knowledge of the matrix element determines output to a decision problem}
The standard way in which a quantum circuit $U$ solves a decision problem, is to measure, say, the first qubit, and decide the problem according to this measurement outcome. In Ref. \cite{z2} it was shown that for every such decision problem, there exists another quantum circuit $U^{\prime }$ such that the evaluation of $\langle 0|U^{\prime }|0\rangle $ is equivalent to the decision problem solved by applying $U$ and measuring the first qubit. In this sense we have:
\begin{mylemma} \label{lem:decision}Knowledge of $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ suffices to determine the output of a quantum circuit which is being used to solve a decision problem. \end{mylemma}
For a proof see, e.g., Ref. \cite{z2}.
\subsection{Proof of Theorem \protect\ref{th}}
Collecting everything, we now prove our main theorem. We first need one more technical lemma:
\begin{mylemma} \label{quad-form}\emph{A quadratic form }$x^{t}Ax$ \emph{over GF(2)\ is linear in }$x$ \emph{(equal to }$x^{t}\mathrm{diag}(A)$\emph{) iff }$A$\emph{ \ is symmetric.} \end{mylemma}
Here $\mathrm{diag}(A)$ denotes a vector comprising the diagonal of $A$. The proof of this Lemma is presented in Appendix \ref{proofs}.
\begin{proof}[Proof of Theorem \protect\ref{th}] We start from the second claim of the Theorem, namely we prove that we can efficiently decide whether a given graph belongs to the set $\Theta $, and that we can find some $w$ if it does belong. Let $G$ be a given graph and let $K$ be the matrix whose columns are a basis of $\mathrm{Ker}(A)$, where $A$ is the incidence matrix of $G$. This means that any $a\in \mathrm{ Ker}(A)$ may be written as $a=Kx$ where $x$ is an arbitrary $m=\mathrm{dim}( \mathrm{Ker}(A))$-dimensional binary vector. Using this we may rewrite Eq. ( \ref{eq:w}) over GF(2) as \begin{equation} x^{t}K^{t}QKx=(Kx)^{t}w. \label{eq:w-new} \end{equation} Since the right-hand side is linear in $x$ for all $x$, the left-hand side must also be linear in $x$. It follows by Lemma \ref{quad-form} that $K^{t}QK $ is symmetric, and moreover that the quadratic form $x^{t}K^{t}QKx$ can be written as $x^{t}\mathrm{diag}(K^{t}QK)$. Thus, solving Eq. (\ref{eq:w}) for $w$ is equivalent to solving the linear system $x^{t}K^{t}w+x^{t}\mathrm{diag }(K^{t}QK)=0$, or \begin{equation} x^{t}(K^{t}w+\mathrm{diag}(K^{t}QK))=0. \end{equation} Since this equation must be true for all $x$, it follows that $K^{t}w+ \mathrm{diag}(K^{t}QK)=0$ and hence that $w$ is the solution to \begin{equation} K^{t}w=\mathrm{diag}(K^{t}QK). \label{eq:w-sol} \end{equation} Since $K$ is efficiently constructable, $w$ can also be found efficiently using standard methods for solving linear equations over GF(2).
Now for the first claim of the theorem, which states that the circuits corresponding to the graphs in $\Theta $ are CES. $\Theta $ is a downwardly closed set of graphs which generates a set of compatible quantum circuits $\Omega _{\Theta w}$ via the mapping described above. In turn we have the corresponding Ising model instances $I(\Omega _{\Theta w})$ and, by assumption, the \emph{G$\Omega $-compatible} bond distributions $w$ for the circuits $\Omega \in \Omega _{\Theta w}$ and graphs $G\in \Theta $. Lemma \ref{lem:planar} states that these instances are planar. It follows that they are CES, i.e., we may compute the Ising partition function for any of these instances efficiently with a classical computer. This is due to a result by Kasteleyn, who gave a classical algorithm for the exact evaluation of the Ising partition function of any planar graph in the absence of an external magnetic field \cite{K}. According to our definition of CES quantum circuits (Definition \ref{def:CES}), all we need is to be able to obtain the evaluation in a time polynomial in the number of qubits (which translates to the number of vertices) and the desired number of bits of precision of $Z_{\Delta }(\lambda )$, which is achieved by the algorithm given in \cite{K}. Now, since $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle \propto Z_{\Delta }(\lambda )$ [Eq.~(\ref{eq:element})], and for any graph in $\Theta $ we have an efficient way of classically determining $Z_{\Delta }(\lambda )$, we are able to determine the matrix element $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ for any quantum circuit in $\Omega _{\Theta w}$ efficiently. It follows from Lemma \ref{lem:decision} that any quantum circuit in $\Omega _{\Theta w}$ which solves a decision problem is CES. \end{proof}
We remark that this technique can be used to prove that quantum circuits which correspond to non-planar classes of graphs for which the Ising partition function has efficient classical evaluation schemes, e.g., graphs of bounded tree width, are CES. We suspect that some of the results obtained in Ref. \cite{Markov:tensor} may be reproduced in this way.
We further remark that, thanks to efficient planarity-testing algorithms, given a quantum circuit one can test whether it belongs to the class $\Omega _{\Theta w}$ of CES quantum circuits. For example, a quick necessary condition follows from a corollary of Euler's formula: every simple connected planar graph satisfies $|E|\leq 3|V|-6$, where $|E|$ is the number of edges and $|V|$ is the number of vertices. (This follows from a handshaking-style double count of face--edge incidences combined with the famous relation $|F|-|E|+|V|=2$, where $|F|$ is the number of faces \cite{Graph}.) Examining the close relationship between the circuit representation $H$ and the incidence matrix $CH$, one obtains the restriction \begin{equation} \mathrm{number\hspace{2pt}of\hspace{2pt}gates}\leq 3(\mathrm{number\hspace{2pt}of\hspace{2pt}qubits})-6, \label{euler} \end{equation} provided that the universal gate set consists of rotations about products of Pauli operations. A circuit $\Omega $ violating Eq.~(\ref{euler}) cannot generate a planar graph via the mapping we have described; if Eq.~(\ref{euler}) holds, a full planarity test decides whether the generated graph $G$ is planar. Provided $G$ is planar and Eq.~(\ref{eq:w}) has a non-trivial solution $w$, it follows that $\Omega $ is $Gw$-compatible, and hence CES by planarity of $G$.
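As a sanity check on the edge-count criterion, the following Python sketch (the function name is ours) evaluates the bound $|E|\leq 3|V|-6$. Keep in mind that the bound is only a necessary condition for planarity: its failure certifies non-planarity, while passing it is inconclusive, as $K_{3,3}$ shows.

```python
def passes_euler_bound(num_vertices, num_edges):
    """Necessary condition for planarity derived from Euler's formula:
    a simple connected planar graph with V >= 3 satisfies E <= 3V - 6.
    Failing the bound certifies non-planarity; passing it is inconclusive."""
    if num_vertices < 3:
        return True
    return num_edges <= 3 * num_vertices - 6

# K5: V = 5, E = 10 > 3*5 - 6 = 9, so K5 cannot be planar.
# K33: V = 6, E = 9 <= 3*6 - 6 = 12, so the bound passes even though
# K33 is non-planar -- the test is necessary but not sufficient.
```

In circuit terms the same check reads: number of gates versus $3(\mathrm{number\ of\ qubits})-6$.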
\section{Further characterization of the class of CES\ quantum circuits $ \Omega _{\Theta w}$}
\label{sparsegraphs} Our motivation in this section is to present a result on CES quantum circuits which allows a comparison with the recent results of \cite{Nest:08} and \cite{jozsa-2008}. Both papers present results that depend on restricting quantum gates to nearest-neighbor qubit operations in one dimension. Via our construction we derive a similar result, but show that the restriction to nearest-neighbor operations and one dimension can be lifted. We begin with a simple example.
\subsection{Graphs in $\Theta $ are compatible with CES circuits which include non-nearest-neighbor operations}
Recent work in Ref. \cite{Nest:08} demonstrates that any circuit built out of $X$-rotations and nearest-neighbor $Z\otimes Z$ rotations can be efficiently simulated. Now assume that one is restricted to a class of planar graphs, $\Theta _{p}$, for which the number of even subgraphs scales polynomially with the number of vertices. This restriction is not necessary and is introduced merely for simplicity. Let us call the corresponding set of quantum circuits (under the mapping presented above) $\Omega _{p}$. Upon inspection of the incidence matrix of a typical graph in $\Theta _{p}$, one sees that even though the majority of edges join nearest-neighbor vertices, several do not, no matter how one labels the vertices. For example, consider the graph depicted in Fig.~\ref{pc}.
\begin{figure}
\caption{A graph with only one cycle.}
\label{pc}
\end{figure}
The incidence matrix is given by \begin{equation*} \left [ \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{array} \right ] \end{equation*} and one possible circuit representation $H$ is given by \begin{equation*} \left [ \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 \end{array} \right ] \end{equation*}
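The cycle structure of this example can be made concrete by computing the GF(2) null space of its incidence matrix. The following Python sketch (the helper name `gf2_nullspace` is ours) does so by row reduction; for the "necklace" graph of Fig.~\ref{pc} the cycle space is one-dimensional, spanned by the indicator vector of the single cycle (edges 2, 3, 5, and 6).

```python
def gf2_nullspace(M):
    """Return a basis of the null space of M over GF(2), as lists of bits."""
    rows, cols = len(M), len(M[0])
    A = [list(r) for r in M]
    pivot_col_of_row = []
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i][c]), None)
        if pivot is None:
            continue  # c is a free column
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(rows):
            if i != r and A[i][c]:
                A[i] = [x ^ y for x, y in zip(A[i], A[r])]
        pivot_col_of_row.append(c)
        r += 1
    pivots = set(pivot_col_of_row)
    basis = []
    # One basis vector per free column: set it to 1, read pivot entries off the RREF.
    for free in (c for c in range(cols) if c not in pivots):
        v = [0] * cols
        v[free] = 1
        for i, pc in enumerate(pivot_col_of_row):
            v[pc] = A[i][free]
        basis.append(v)
    return basis

# Incidence matrix of the graph of Fig. pc (rows: vertices, columns: edges).
A_pc = [
    [1, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 0, 1],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 1, 1, 1, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 1],
]
```

The nullity $E-V+1=6-6+1=1$ matches the single cycle visible in the figure.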
Note that the fifth and sixth columns correspond to gates of the form \begin{equation*} e^{-i\theta (\sigma _{X}^{(4)}\otimes \sigma _{Y}^{(6)})}\hspace{0.6cm}\mathrm{and}\hspace{0.6cm}e^{-i\theta (\sigma _{Y}^{(2)}\otimes \sigma _{X}^{(6)})} \end{equation*} respectively. The superscripts indicate which qubit is being operated on, and thus one can clearly see that \emph{non-nearest-neighbor interactions are possible}. This example demonstrates that our construction may extend the results in \cite{Nest:08}, for example by linking together graphs like the \textquotedblleft necklace\textquotedblright\ shown. Note, however, that the nearest-neighbor restriction is only relieved slightly, as most of the interactions remain nearest-neighbor. Taking this example as motivation, what follows is a more general construction which we use to pursue a better understanding of CES circuits.
\subsection{A class of CES\ circuits with non-nearest neighbor gates}
We now define a simple subclass of planar graphs which have a polynomial (in the number of vertices) number of even subgraphs. Because of planarity, this class of graphs corresponds to CES quantum circuits. Note, however, that this class is by no means an exhaustive characterization of all planar graphs with a polynomial number of even subgraphs.
\begin{mydefinition} A basis of the null space of the incidence matrix $CH$ is referred to as a cycle basis. \end{mydefinition}
\begin{mydefinition} Let $\Theta _{pc}$ be the set of planar graphs with $V$ vertices and $E=V+O(\log V^{k})$ edges, where $k\in \mathbb{R}^{+}$. \end{mydefinition}
\begin{myproposition} \label{polyinV} $\Theta _{pc}$ has a polynomial, in $V$, number of even subgraphs. \end{myproposition}
\begin{proof} A cycle basis consists of a set of connected even subgraphs of a given graph in $\Theta _{pc}$. The dimension of the null space (i.e., the number of elements of the cycle basis) equals the number of edges minus the rank of $CH$. Recall also that the rank of the incidence matrix equals the number of vertices minus the number of connected components (which is one in our case). Thus, asymptotically, one has \begin{equation} \mathrm{nullity}=V+O(k\log V)-\mathrm{rank}(CH)=O(\log V^{k}). \end{equation} Now, the null space contains all GF(2) linear combinations of the basis elements, and therefore we are left with $2^{O(k\log V)}=O(V^{k})$ elements, as claimed. \end{proof}
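The counting behind Proposition \ref{polyinV} can be checked by brute force on a small example. The sketch below (names and the toy graph are ours) enumerates all edge subsets and counts those in which every vertex has even degree; for a connected graph this count is exactly $2^{E-V+1}$.

```python
from itertools import product

def count_even_subgraphs(num_vertices, edges):
    """Count edge subsets in which every vertex has even degree (brute force).
    For a connected graph on V vertices with E edges this equals 2**(E - V + 1)."""
    count = 0
    for keep in product((0, 1), repeat=len(edges)):
        degree = [0] * num_vertices
        for bit, (u, v) in zip(keep, edges):
            if bit:
                degree[u] += 1
                degree[v] += 1
        if all(d % 2 == 0 for d in degree):
            count += 1
    return count

# Toy "cycle strung on a tree" graph: a triangle with a pendant path,
# V = 5 and E = 5, so the cycle space has dimension E - V + 1 = 1,
# giving 2 even subgraphs (the empty subgraph and the triangle).
toy_edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
```

The brute-force count is exponential in $E$, of course; it serves only to confirm the $2^{\mathrm{nullity}}$ formula on small instances.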
One can picture graphs in $\Theta _{pc}$ as sparse graphs consisting of cycles (even subgraphs) strung together along trees without too many branching points. This is due to the relationship between $E$ and $V$ given above: a branch without a cycle always adds an additional vertex (one edge has two endpoints), and the only way the relationship between vertices and edges can be satisfied is if the number of branches is kept smaller than the number of cycles. That is, there must be \emph{more cycles than edges that do not terminate at a cycle}. Further, the incidence matrix of these structures, like the example in Fig.~\ref{pc}, will have columns consisting of consecutive (nearest-neighbor) \textquotedblleft 1's\textquotedblright\ in the majority of positions. This rule is broken when a tree branches and when one runs into a cycle, which, by Proposition \ref{polyinV}, can only happen $O(\log V^{k})$ times. As $E$ is the number of gates and $V$ is the number of qubits in the corresponding quantum circuit, we have just proven:
\begin{mycorollary} A quantum circuit consisting of gates of the form $e^{-i\theta (X^{(i)}\otimes Y^{(j)})}$, which act on nearest neighbor qubits except for $
O(k\log (\#\mathrm{of}\hspace{0.08cm}\mathrm{qubits}))$ gates, which can act on qubits $i$ and $j$ such that $|i-j|\geq 2$, is CES. \end{mycorollary}
Note: $Z$ operations may be included in the exponent of this operator at any position not occupied by the $X$ and $Y$ operations.
The important point in the last corollary is that we allow non-nearest-neighbor gates, thus extending the results of \cite{Nest:08} and \cite{Jozsa:08}, where only nearest-neighbor CES quantum circuits were considered.
\subsection{$\Theta $ includes some but not all outerplanar graphs}
So far we have stressed the special role of $K_{3,3}$ and $K_{5}$, which led us to the conclusion that all graphs in $\Theta $ are planar. However, it turns out that we can be more specific, since we have also been able to show that $K_{4}$ and $\bar{K}_{3,3}$ are forbidden minors for $\Theta $ (Lemma \ref{lem:obs}; see Appendix \ref{app:algo} for a description of the proof, using mathematical software). In other words, there does not exist a solution to Eq.~(\ref{eq:w}) for the graphs $K_{4}$ and $\bar{K}_{3,3}$. These graphs play a role in characterizing the set of \emph{outerplanar graphs} \cite{Graph}, which we define next:
\begin{mydefinition} For any planar graph, there are regions bounded by the cycles of the graph and an unbounded region outside of all the cycles. An outerplanar graph is a planar graph for which every vertex is within the unbounded region when it is embedded in the plane such that no edges intersect. \end{mydefinition}
For example, the graph in Fig.~\ref{pc} is outerplanar. More informally, a graph is outerplanar if it can be embedded in the plane such that all vertices lie on the outer (exterior) face. A graph $G$ is outerplanar iff $ K_{1}+G$ (a new vertex is connected to all vertices of $G$) is planar \cite {Wiegers:06}. The characterization of relevance to us is the following analog of Kuratowski's theorem for planar graphs (described in Appendix~\ref {app:graphs}):
\begin{mytheorem}[Chartrand \& Harary \protect\cite{Chartrand:67}] \label{th:OP}A graph is outerplanar if and only if it has no subgraph homeomorphic to $K_{4}$ or $K_{2,3}$. \end{mytheorem}
In other words, $K_{2,3}$ is a forbidden minor for outerplanar graphs, where $K_{2,3}$ is like $K_{3,3}$ except that one side of the bipartite graph has two vertices instead of three.
\begin{myproposition} \label{prop:K33-1}If a graph is outerplanar then it does not have $\bar{K} _{3,3}$ as a minor. \end{myproposition}
\begin{proof} Assume that $\bar{K}_{3,3}$ is a minor of some $G\in \mathrm{Outerplanar}$. Then $G$ has $K_{2,3}$ as a minor, since $K_{2,3}$ is a minor of $\bar{K} _{3,3}$. (This is easy to see: just contract one edge and delete another.) But by Theorem \ref{th:OP} outerplanar graphs cannot have $K_{2,3}$ as a minor, which is a contradiction. Thus $G$ cannot be outerplanar. \end{proof}
Note that the converse is not necessarily true, i.e., not all graphs which do not have $\bar{K}_{3,3}$ as a minor are outerplanar.
\begin{myproposition} $\mathrm{Outerplanar\hspace{0.08cm}Graphs}\cap \Theta \neq \emptyset$ and $ \mathrm{Outerplanar\hspace{0.08cm}Graphs}\neq \Theta $. \end{myproposition}
\begin{proof} By Lemma \ref{lem:obs}, if $G\in \Theta $ then $G$ cannot have $K_{4}$ or $ \bar{K}_{3,3}$ as minors. By Theorem \ref{th:OP} and Proposition \ref {prop:K33-1}, if $G^{\prime }$ is an outerplanar graph then it cannot have $ K_{4}$ or $\bar{K}_{3,3}$ as minors either. This suggests that $\Theta $ may have graphs in common with the set of outerplanar graphs. We have verified, using mathematical software, that the intersection is indeed nonempty. For example, we have found that certain trees with cycles, which are outerplanar by construction, are in $\Theta $. Moreover, we have verified using mathematical software that $K_{2,3}\in \Theta $, i.e., it is not a forbidden minor for the existence of a solution $w$. But $K_{2,3}$ is not outerplanar, hence there are graphs in $\Theta $ which are not outerplanar (in particular, all subdivisions of $K_{2,3}$). \end{proof}
This is interesting since some problems are NP-complete for subclasses of planar graphs but solvable in polynomial time for outerplanar graphs. Examples include computing the chromatic number and deciding the existence of a Hamiltonian path or Hamiltonian circuit. Another example is the page number, which equals one for outerplanar graphs: the vertices can be embedded on a line which divides the plane into two half-planes, with all edges drawn in one of the half-planes without crossing. In this sense, outerplanar graphs are \textquotedblleft easy\textquotedblright\ computationally. The fact that $\Theta $ includes non-outerplanar graphs thus suggests that it may include interesting computational problems.
\section{Discussion and Future Directions}
\label{nextstep}
In this section we briefly discuss two possible future directions for research.
\subsection{General condition for the bond distribution}
\label{nexta} So far we have assumed the sufficient condition (\ref{eq:w}) in order to obtain the desired equality between the partition function and the circuit matrix element $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ [Eqs.~(\ref{eq:Z1})\ and (\ref{0U0})]. We have also shown that, under Eq.~(\ref{eq:w}), a satisfying bond distribution $w$ can always be efficiently tested for and computed. Let us now relax the constraint of Eq.~(\ref{eq:w}) by considering a more general way in which the desired proportionality, \begin{equation}
\sum_{a\in \ker A}(-1)^{a\cdot w}\lambda ^{|a|}\propto \sum_{a\in \ker CH}(-1)^{a^{t}\mathrm{lwtr}(H^{t}CH)a}\lambda ^{|a|}, \label{equal} \end{equation} can be obtained. Indeed, Eq.~(\ref{eq:w}) is clearly not a necessary condition. The following construction demonstrates that it is likely that the number of cases which \emph{do not have a solution $w$} for the bond distribution is much smaller than the case we analyzed given by Eq.~(\ref {eq:w}).
Note that in Eq.~(\ref{equal}) the powers of the $\lambda $'s are the weights of the null vectors $a$, that is the number of ones in $a$. Thus it is possible for an equality to occur for a given term in the sum in the two sides of Eq.~(\ref{equal}) for different $a$'s, as long as the weights of the $a$'s are equal. This gives us the constraint for the following. One can organize all the $a$'s in bins in terms of weights from $1$ to $|E|=N$. Let us now take bin $r$, i.e., the set of vectors of weight $r$. Let $ a_{r1},\dots ,a_{rn}$ be all the null vectors of $CH$ of weight $r$. Then, if for all $r$
\begin{eqnarray} \{a_{rj_{1}}^{t}\mathrm{lwtr}(H^{t}CH)a_{rj_{1}} &=&a_{rj_{2}}\cdot w\}\wedge \notag \\ \{a_{rj_{2}}^{t}\mathrm{lwtr}(H^{t}CH)a_{rj_{2}} &=&a_{rj_{3}}\cdot w\}\wedge \cdots \wedge \notag \\ \{a_{rj_{n}}^{t}\mathrm{lwtr}(H^{t}CH)a_{rj_{n}} &=&a_{rj_{1}}\cdot w\}, \label{conj} \end{eqnarray} where $\{j_{1},...,j_{n}\}$ is any permutation of the numbers $\{1,...,n\}$, then Eq.~(\ref{equal}) is satisfied. Clearly, Eq.~(\ref{eq:w}) is a special case of this more general condition. This demonstrates that a satisfying bond distribution $w$ may exist for a given graph even if the sufficient condition (\ref{eq:w}) cannot be satisfied. Loosely, this is due to the fact that there are many conjunctive statements of the form (\ref{conj}) that can be satisfied. In fact it seems likely that non-planar graphs (i.e., those containing $K_{5}$ or $K_{3,3}$ as minors) may also have a satisfying $w$ according to Eq.~(\ref{conj}). We leave this as a problem for future investigation. An important point concerning the more general condition (\ref{conj}) is that we do not know whether it can be efficiently tested for a satisfying bond distribution $w$, let alone solved for such a $w$.
\subsection{Computing the Ising partition function}
As mentioned in Section \ref{sec:ansatz}, Eq. (\ref{eq:element}) has two consequences, and our focus in this paper has been on the ability to find CES circuits using known results about the hardness of computing partition functions. Let us now briefly consider the other consequence, namely the fact that if we are able to determine $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $, then we are able to determine the partition function $Z_{\Delta }(\lambda )$.
A fully-polynomial randomized approximation scheme (fpras) for the fully-ferromagnetic Ising partition function was presented in \cite{Jerrum:90}. It is well known that the existence of an fpras for the non-ferromagnetic Ising model would imply $NP=RP$ (randomized polynomial time), which would be quite unexpected \cite{Welsh:book}. It should therefore come as no surprise that no fpras for this problem has been found, even with quantum resources. However, \emph{additive} approximation schemes seem likely, and in fact one was given in \cite{dorit-tutte} for the related Potts model partition function, although the instances it accounts for are not known to be BQP-complete; their hardness is in fact unknown.
Equation (\ref{eq:element}) precisely relates a matrix element of a quantum circuit to the value of the partition function of the Ising model on a corresponding graph instance. This means that if we could approximate the matrix element, we would have an approximation for the Ising partition function. It is well known, via the Hadamard test, that estimating this matrix element to polynomial precision is BQP-complete. (See Ref. \cite{jordan} for a description of the Hadamard test.) Specifically, by making $O(1/\epsilon ^{2})$ measurements, one can obtain either $\mathrm{Re}\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ or $\mathrm{Im}\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ to precision $\epsilon $, but one must keep in mind that this approximation is an additive one. This means that with some probability of success bounded below (say, by $3/4$) the approximation returns $m$ such that \begin{equation} \langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle -\delta \cdot p<m<\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle +\delta \cdot p, \label{eq:approx} \end{equation} where $p$ is a polynomially small parameter and $\delta $ is the approximation scale of the problem. Note that if $\delta =O(\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle )$ then the approximation is an fpras \cite{dorit-tutte}. Equations (\ref{eq:element}) and (\ref{eq:approx}) taken together quantify how a measurement of $\langle \mathbf{0}\lvert U(\Omega )\rvert \mathbf{0}\rangle $ yields an approximation of the partition function of the Ising model instance $\Delta $ corresponding to the circuit $\Omega $.
\section{Conclusions}
\label{sec:conc}
We have provided a construction that allows one to determine whether a given quantum circuit belongs to a class of quantum circuits which are classically efficiently simulatable (CES). This was done by examining the corresponding graph instances of the classical Ising model, using a mapping previously introduced in \cite{JOE2}. We then concluded that any class of quantum circuits which solve decision problems and are restricted to certain planar graph instances is CES. Our main result is stated in Theorem \ref{th}, which characterizes the class of CES circuits via the set of planar graphs $\Theta $. We have given a partial characterization of $\Theta $ by showing that its obstruction set includes $\bar{K}_{3,3}$ and $K_{4}$ (and hence, by downward closure, also $K_{3,3}$ and $K_{5}$). An interesting open problem is to give a complete characterization of the obstruction set; we know from the Robertson--Seymour theorem that this set is finite, since we have proved that $\Theta $ is downwardly closed.
Our mapping can also be used to construct a quantum algorithm for the additive approximation of the partition function. However, there are two issues. The instances we are able to handle are constrained by our use of equation (\ref{eq:w}) which does not capture all the ways a certain bond distribution may satisfy equation (\ref{equal}), but which simplifies our analysis greatly. The other issue is the fact that our mapping may fail to provide information about the bond distribution of a given graph.
An open problem is to obtain a better understanding of what the complexity of finding the bond distribution for a particular graph instance is. This understanding will have consequences in our knowledge of where BQP is in the complexity hierarchy, as we will be able to relate the simulatability of universal quantum circuit families with the complexity of finding bond distributions. On the other hand, it is possible that the complexity of finding bond distributions is somehow incorporated in the power of the quantum circuit that corresponds to the graph instance of the Ising model, in the sense that the circuit corresponding to a planar graph (under our mapping) may not be CES, because the effort of obtaining the bond distribution via Eq. (\ref{conj}) blocks such a simulation.
\begin{acknowledgments} This material is based upon work supported by the National Science Foundation (NSF) under Grant No. PHY-0802678, and by the Army Research Office under grant W911NF-05-1-0440. DAL thanks the Institute for Quantum Information (IQI) at Caltech where part of this work was done. IQI is supported by the NSF under Grant No. PHY-0803371. \end{acknowledgments}
\appendix
\label{sec:app}
\section{Essential elements from Graph Theory}
\label{app:graphs}
Here we review some essential definitions and theorems from graph theory needed for the results presented in this work. A good reference for these concepts is the Wikipedia article on planar graphs, or Ref. \cite{Graph}.
\begin{mydefinition} A subgraph $H$ of a given graph $G$ is called a \emph{minor} (or child) of $ G $ if it is isomorphic to a graph that can be obtained from $G$ via a sequence of edge deletions, edge contractions, or deletion of isolated vertices. Edge contraction is the process of removing an edge and combining its two endpoints into a single vertex. Edge deletion removes an edge without removing its vertices. \end{mydefinition}
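The two minor operations are easy to state operationally. As a minimal sketch (function names and the edge-list representation are ours), the following Python routines perform edge deletion and edge contraction on a simple graph given as a list of vertex pairs:

```python
def delete_edge(edges, e):
    """Edge deletion: remove e, keeping its endpoints as (possibly isolated) vertices."""
    return [f for f in edges if f != e and f != (e[1], e[0])]

def contract_edge(edges, e):
    """Edge contraction: remove e = (u, v) and merge v into u,
    dropping self-loops and parallel edges created by the merge."""
    u, v = e
    result = []
    for a, b in edges:
        if {a, b} == {u, v}:
            continue                      # the contracted edge itself disappears
        a = u if a == v else a            # redirect endpoints of v to u
        b = u if b == v else b
        if a != b:                        # drop self-loops
            f = (min(a, b), max(a, b))
            if f not in result:           # keep the graph simple
                result.append(f)
    return result

triangle = [(0, 1), (1, 2), (0, 2)]
```

Contracting any edge of a triangle yields a single edge: the contracted edge vanishes and the two remaining edges become parallel, so one is dropped.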
\begin{mydefinition} The set of graphs $S$ is \emph{downwardly closed with respect to minor ordering} if whenever $G$ is a member of $S$, then so is any minor of $G$. \end{mydefinition}
A trivial consequence of the definition of downwardly closed sets is that every such set has an obstruction set:
\begin{mycorollary} An \emph{obstruction set} for a downwardly closed set $S$ is a set of graphs, also called \emph{forbidden minors}, with the property that a graph belongs to $S$ if and only if it has no minor in the obstruction set. \end{mycorollary}
In other words, if while taking minors of $G$ one encounters a graph that violates the property for which downward closure is being tested, such a minor is called forbidden, and belongs to the obstruction set. Now comes a seminal theorem due to Robertson and Seymour \cite{RS}:
\begin{mytheorem} \label{th:RS}(Robertson--Seymour) Every downwardly closed set of graphs (possibly infinite) has a finite obstruction set, i.e., a finite set of forbidden minors. \end{mytheorem}
For our purposes an important example is the set of planar graphs:
\begin{mydefinition} A \emph{planar graph} is a graph which can be embedded in the plane, i.e., it can be drawn on the plane in such a way that its edges may intersect only at their endpoints, i.e., edges never cross. A \emph{nonplanar graph} is a graph which cannot be drawn in the plane without edge intersections. \end{mydefinition}
There are two particularly important nonplanar graphs, denoted $K_{5}$ (the complete graph on five vertices; complete means that each pair of vertices are connected by an edge) and $K_{3,3}$ (the complete bipartite graph on six vertices, three of which connect to each of the other three). They are depicted in Fig.~\ref{graphs}.
\begin{figure}
\caption{The various graphs playing a role in defining the obstruction set for $\Theta$ and outerplanar graphs.}
\label{graphs}
\end{figure}
Planarity is characterized by Wagner's Theorem \cite{Graph}:
\begin{mytheorem}[Wagner] \label{th:Wag} A finite graph is planar if and only if it does not have $ K_{5}$ or $K_{3,3}$ as a minor. \end{mytheorem}
In other words, if one deletes or contracts the edges of a graph and finds one of these minors, then the graph is not planar. Hence $K_{5}$ and $K_{3,3}$ are forbidden minors (they form an obstruction set) for planarity.
For completeness we note that an alternative characterization can be given in terms of the concept of a subdivision of a graph:
\begin{mydefinition} A \emph{subdivision} of a graph results from inserting vertices into edges. \end{mydefinition}
Thus, while deletion and contraction of edges shrinks a graph down, subdivision builds it up.
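A subdivision step is the exact operational converse of a contraction. As a small illustrative sketch (function name and representation are ours, matching the minor-operation helpers one might write for deletion and contraction), inserting a new vertex into an edge looks like this:

```python
def subdivide_edge(edges, e, new_vertex):
    """Subdivision step: replace edge e = (u, v) by a path u - new_vertex - v."""
    u, v = e
    rest = [f for f in edges if f != e and f != (v, u)]
    return rest + [(u, new_vertex), (new_vertex, v)]

# Subdividing one edge of a triangle (C3) yields a four-cycle (C4).
triangle = [(0, 1), (1, 2), (0, 2)]
square = subdivide_edge(triangle, (0, 2), 3)
```

Repeated subdivision never changes the topological type of the graph, which is why homeomorphism is defined through it.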
\begin{mydefinition} Two graphs $G$ and $G^{\prime }$ are \emph{homeomorphic} if there is an isomorphism from some subdivision of $G$ to some subdivision of $G^{\prime }$ . \end{mydefinition}
\begin{mytheorem}[Kuratowski] A finite graph is planar if and only if it has no subgraph homeomorphic to $ K_{5}$ or $K_{3,3}$. \end{mytheorem}
In other words, a finite graph is planar if and only if it does not contain a subgraph that is isomorphic to a subdivision of $K_{5}$ or $K_{3,3}$.
In this work we give a criterion for graph membership (in a certain set $\Theta $, defined in Definition \ref{def:Theta}) based on the existence of a solution for the system of linear equations over GF(2) defined by Eq.~(\ref{eq:w}). We want this membership to be minor-monotone, in the sense that if $G$ is a member, then so is every minor of $G$; in other words, we are testing for downward closure. The Robertson--Seymour theorem guarantees that membership in our set $\Theta $ is not obstructed by an \emph{infinite} set of graphs. In fact the situation is far better: membership of a graph in a fixed downwardly closed set can be checked by running a polynomial-time algorithm for each element of the obstruction set (if it is known), since searching for a fixed minor in a given graph only requires cubic time \cite{Graph}. Indeed, checking whether a graph is planar can be done in linear time.
For completeness we include the definition of a \emph{hypergraph} as our correspondence between quantum circuits and graphs is actually a mapping between circuits and hypergraphs \cite{JOE2}.
\begin{mydefinition} A hypergraph is a generalization of a graph where edges are replaced by \emph{hyperedges}. Let $V=\{v_{1},v_{2},\dots ,v_{k}\}$ be the set of vertices and let $E=\{e_{1},e_{2},\dots ,e_{n}\}$ be the set of hyperedges. Each $e_{i}=\{v_{i1},v_{i2},\dots ,v_{im}\}$ is a collection of vertices where each $v_{ij}\in V$. \end{mydefinition}
Thus the main difference from ordinary graphs is that hyperedges consist of arbitrary collections of vertices rather than exactly two, so graphs are special cases of hypergraphs. As shown in Ref. \cite{JOE2}, the existence of hyperedges is what gives us access to the universal gate set presented in \cite{Laflamme}, via the two assumptions enumerated at the end of Section \ref{sec:mapping}. However, it is the circuits that correspond to ordinary graphs that are of interest here, as our results depend on information about the Ising partition function defined on ordinary graphs.
\section{Proof of Lemmas}
\label{proofs}
Here we present the proofs of Lemmas \ref{finite-obs}, \ref{graphK}, and \ref{quad-form}, which we repeat for convenience:
\noindent \textbf{Lemma \ref{graphK}} \emph{If a graph $G$ is a member of $ \Theta$, then so is $G\setminus e_{j}$ or $G/e_{j}$, i.e., the deletion or contraction of an arbitrary edge $e_{j}$ from a graph in $\Theta$ is also in $\Theta$.}
\begin{proof} Assume that $G\in \Theta $. Recall that a graph $G$ is an element of $\Theta $ if there exists some solution $w$ to the set of linear equations over $ GF(2)$ \begin{equation} A^{(G)}w=\alpha ^{(G)} \end{equation} where $A^{(G)}$ is the matrix whose rows are elements, $a_{i}$, of the nullspace of the incidence matrix of the graph $G$ (given as the Ising instance), and $\alpha ^{(G)}$ is the vector whose entries are the $a_{i}^{t} \mathrm{lwtr}(H^{t}CH)a_{i}$. These null elements, $a_{i}$, correspond to the even subgraphs of $G$ and will be referred to as cycles (recall Definition~\ref{def:even}). From elementary linear algebra we know that a solution exists if $\alpha ^{(G)}$ may be written as a linear combination of columns of $A^{(G)}$ or in other words if \begin{equation}
\mathrm{Rank}[A^{(G)}|\alpha ^{(G)}]=\mathrm{Rank}[A^{(G)}]. \end{equation} We must demonstrate that after we either delete or contract an edge, and arrive at the subgraph $G^{\prime }$, we have \begin{equation}
\mathrm{Rank}[A^{(G^{\prime })}|\alpha ^{(G^{\prime })}]=\mathrm{Rank} [A^{(G^{\prime })}]. \end{equation}
\begin{widetext} We shall demonstrate this with an edge deletion as the case of a contraction is similar. We begin with a given graph $G$ which is a member of $\Theta $. We have \begin{equation} ({G^{\prime }}^{t}CG^{\prime })^{(G)}=\left[ \begin{array}{cccc} G_{11}^{\prime } & G_{21}^{\prime } & \cdots & G_{v1}^{\prime } \\ G_{12}^{\prime } & G_{22}^{\prime } & \cdots & G_{v2}^{\prime } \\ \vdots & \vdots & \vdots & \vdots \\ G_{1s}^{\prime } & G_{2s}^{\prime } & \cdots & G_{vs}^{\prime } \\ \vdots & \vdots & \vdots & \vdots \\ G_{1N}^{\prime } & G_{2N}^{\prime } & \cdots & G_{vN}^{\prime } \end{array} \right] \left[ \begin{array}{cccccc} G_{21}^{\prime } & G_{22}^{\prime } & \cdots & G_{2s}^{\prime } & \cdots & G_{2N}^{\prime } \\ 0 & 0 & \cdots & 0 & \cdots & 0 \\ G_{41}^{\prime } & G_{42}^{\prime } & \cdots & G_{4s}^{\prime } & \cdots & G_{4N}^{\prime } \\ 0 & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ G_{v1}^{\prime } & G_{v2}^{\prime } & \cdots & G_{vs}^{\prime } & \cdots & G_{vN}^{\prime } \\ 0 & 0 & \cdots & 0 & \cdots & 0 \end{array} \right] \end{equation}
Using Einstein notation this equals \begin{equation} \left[ \begin{array}{cccccc} G^{\prime }_{2i-1,1}{G^{\prime }}^{2i,1} & G^{\prime }_{2i-1,1}{G^{\prime }} ^{2i,2} & \cdots & G^{\prime }_{2i-1,1}{G^{\prime }}^{2i,s} & \cdots & G^{\prime }_{2i-1,1}{G^{\prime }}^{2i,N} \\ G^{\prime }_{2i-1,2}{G^{\prime }}^{2i,1} & G^{\prime }_{2i-1,2}{G^{\prime }} ^{2i,2} & \cdots & G^{\prime }_{2i-1,2}{G^{\prime }}^{2i,s} & \cdots & G^{\prime }_{2i-1,2}{G^{\prime }}^{2i,N} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ G^{\prime }_{2i-1,N}{G^{\prime }}^{2i,1} & G^{\prime }_{2i-1,N}{G^{\prime }} ^{2i,2} & \cdots & G^{\prime }_{2i-1,N}{G^{\prime }}^{2i,s} & \cdots & G^{\prime }_{2i-1,N}{G^{\prime }}^{2i,N} \end{array} \right] \label{HCH} \end{equation}
Keep in mind that deleting an edge from $G$ corresponds to losing a column from $CH$, say the $s^{\mathrm{th}}$ column of matrix (\ref{HCH}). Taking the lower triangular portion of matrix (\ref{HCH}) and calculating, we find that the $m^{\mathrm{th}}$ element of the vector $\alpha ^{(G)}$ is \begin{eqnarray} \alpha _{m}^{(G)} &=&a_{m2}[a_{m1}G_{2i-1,2}^{\prime }{G^{\prime }} ^{2i,1}]+a_{m3}[a_{m1}G_{2i-1,3}^{\prime }{G^{\prime }} ^{2i,1}+a_{m2}G_{2i-1,3}^{\prime }{G^{\prime }}^{2i,2}]+\cdots \notag \\ &+&a_{mk}[a_{m1}G_{2i-1,k}^{\prime }{G^{\prime }}^{2i,1}+\cdots +a_{m(k-1)}G_{2i-1,k}^{\prime }{G^{\prime }}^{2i,k-1}]+\cdots \notag \\ &+&a_{mN}[a_{m1}G_{2i-1,N}^{\prime }{G^{\prime }}^{2i,1}+\cdots +a_{m(N-1)}G_{2i-1,N}^{\prime }{G^{\prime }}^{2i,N-1}] \end{eqnarray} where $a_{mi}$ is the $i^{\mathrm{th}}$ element of the $m^{\mathrm{th}}$ null vector of $CH$, i.e., the matrix element $A_{m,i}$.
Now, let \begin{eqnarray} \xi _{m}^{s} &=&a_{m(s+1)}a_{ms}G_{2i-1,s+1}^{\prime }{G^{\prime }} ^{2i,s}+a_{m(s+2)}a_{ms}G_{2i-1,s+2}^{\prime }{G^{\prime }}^{2i,s}+\cdots +a_{mN}a_{ms}G_{2i-1,N}^{\prime }{G^{\prime }}^{2i,s}. \end{eqnarray} \end{widetext} This is the portion of $\alpha _{m}^{(G)}$ that would vanish if we were to omit the edge that corresponds to the $s^{\mathrm{th}}$ column of the matrix (\ref{HCH}). This means that if we remove this edge, we will end up with the subgraph $G^{\prime }$ and we can write \begin{equation} \alpha _{m}^{(G)}=\alpha _{m}^{({G^{\prime })}}+\xi _{m}^{s}. \end{equation} This equation is saying that the $m^{\mathrm{th}}$ entry of the right hand side of \begin{equation} A^{(G)}w=\alpha ^{(G)} \end{equation} is given by the $m^{\mathrm{th}}$ entry of the right hand side of \begin{equation} A^{({G^{\prime })}}w=\alpha ^{({G^{\prime })}} \end{equation} (the corresponding system of equations for the graph $G^{\prime }$) plus the term $\xi _{m}^{s}$.
From the assumption that $G\in \Gamma _{w}$ and by construction we have \begin{equation} \alpha ^{(G)}=\left[ \begin{array}{ccc} \alpha _{1}^{({G^{\prime })}} & + & \xi _{1}^{s} \\ \alpha _{2}^{({G^{\prime })}} & + & \xi _{2}^{s} \\ \vdots & \vdots & \vdots \\ \alpha _{K}^{({G^{\prime })}} & + & \xi _{K}^{s} \end{array} \right] =\sum_{i}\delta _{i}c_{i} \end{equation}
where the $\delta _{i}$ are coefficients in $GF(2)$ and the $c_{i}$ are columns of $A^{(G)}$, i.e., $\mathrm{Rank}[A^{(G)}|\alpha ^{(G)}]=\mathrm{ Rank}[A^{(G)}].$ Thus we have \begin{equation} \alpha ^{({G^{\prime })}}=\sum_{i}\delta _{i}c_{i}-\xi ^{s}. \end{equation} How does the matrix $A$ change as we go from $G\longrightarrow {G^{\prime }}$ by this edge deletion? If the edge is a \emph{dangling edge}, i.e., not part of a cycle, then we lose a column (column $s$) but if the edge deletion causes the breaking of $M$ cycles, then $A$ will lose $M$ rows (in addition to column $s$), as the rows encode the cycle structure of the graph. In this case, the dimension (or length) of $\alpha ^{(G^{\prime })}$ will be $M$ less than the dimension of $\alpha ^{(G)}$ and the $c_{i}$ will also be shorter by $M$ entries. We call these shorter $c_{i}$, $c_{i}^{\prime }$. Further, and most importantly, $\xi ^{s}$ will vanish, as mentioned. After taking this into consideration we now can conclude that \begin{equation} \alpha ^{(G^{\prime })}=\sum_{i\neq s}\delta _{i}c_{i}^{\prime } \end{equation} where $A^{(G^{\prime })}=[c_{1}^{\prime }c_{2}^{\prime }\cdots c_{N-1}^{\prime }].$ Thus, \begin{equation}
\mathrm{Rank}[A^{(G^{\prime })}|\alpha ^{(G^{\prime })}]=\mathrm{Rank} [A^{(G^{\prime })}]. \end{equation}
The proof for edge contractions is similar. The main difference is that an edge contraction does not cause the loss of a cycle except when the edge in question belongs to a cycle of length three. Thus in general the contraction case is simpler, and when cycles of length three are involved the proof carries over in the same way. \end{proof}
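The augmented-rank criterion used throughout the proof, $\mathrm{Rank}[A^{(G)}|\alpha^{(G)}]=\mathrm{Rank}[A^{(G)}]$, is equivalent to the solvability of $Aw=\alpha$ over GF(2) and is easy to test mechanically. The following sketch illustrates the test with a mod-2 Gaussian elimination; the matrix below is a toy example, not one of the paper's incidence-derived matrices.

```python
def gf2_rank(rows):
    """Rank of a binary matrix (list of row lists) over GF(2)."""
    rows = [r[:] for r in rows]                  # work on a copy
    rank, ncols = 0, len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:       # eliminate column entry mod 2
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def solvable(A, alpha):
    """Rank[A|alpha] == Rank[A], i.e. A w = alpha has a GF(2) solution."""
    aug = [row + [b] for row, b in zip(A, alpha)]
    return gf2_rank(aug) == gf2_rank(A)

# alpha equal to the mod-2 sum of the first two columns lies in the column space
A = [[1, 0, 1],
     [1, 1, 0],
     [0, 1, 1]]
print(solvable(A, [1, 0, 1]))  # True: alpha = c_1 + c_2
print(solvable(A, [1, 0, 0]))  # False: outside the column space
```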
Thus the set of graphs $\Theta$ is downwardly closed.
\noindent \textbf{Lemma \ref{finite-obs}} \emph{The obstruction set for $ \Theta $ is finite.}
\begin{proof} The set of graphs $\Theta $ is \emph{downwardly closed} by the above lemma. One may then apply the Robertson-Seymour Theorem (Theorem \ref{th:RS}) and immediately conclude that the number of forbidden minors of $\Theta $ is finite. \end{proof}
\noindent \textbf{Lemma \ref{quad-form}} \emph{A quadratic form }$x^{t}Ax$ \emph{over GF(2)\ is linear in }$x$\emph{\ (equal to }$x^{t}\mathrm{diag}(A)$ ) \emph{iff }$A$\emph{\ is symmetric.}
\begin{proof} Let $x$ be an $m$-dimensional column vector and $A$ an $m\times m$ matrix, both over GF(2). Consider the quadratic form $x^{t}Ax= \sum_{ij}x_{i}A_{ij}x_{j}$ and assume that $A$ is symmetric: $A=A^{t}$. Then \begin{eqnarray} x^{t}Ax &=&\sum_{i<j}A_{ij}x_{i}x_{j}+\sum_{i}A_{ii}x_{i}^{2}+ \sum_{i>j}A_{ij}x_{i}x_{j} \notag \\ &=&\sum_{i<j}A_{ij}x_{i}x_{j}+\sum_{i}A_{ii}x_{i}+\sum_{j>i}A_{ij}x_{j}x_{i}, \end{eqnarray} where in the second line we used $x_{i}^{2}=x_{i}$ [true over GF(2)], exchanged $i$ and $j$ in the third summand, and used $A_{ij}=A_{ji}$. The first and third summands are equal and hence add up to zero over GF(2). We are left with the second, linear, term, i.e., $x^{t}Ax=x^{t}\mathrm{diag}(A)$, where $\mathrm{diag}(A)$ denotes a vector comprising the diagonal of $A$.
Next, assume that $A$ is not symmetric. Then there exists a pair of indices $ i^{\prime }<j^{\prime }$ such that $A_{i^{\prime }j^{\prime }}\neq A_{j^{\prime }i^{\prime }}$, i.e., $A_{i^{\prime }j^{\prime }}+A_{j^{\prime }i^{\prime }}=1$. As above, we have: $x^{t}Ax=\sum_{i<j}A_{ij}x_{i}x_{j}+ \sum_{i}A_{ii}x_{i}+\sum_{j>i}A_{ji}x_{j}x_{i}$. Consider the index pair $ (i^{\prime },j^{\prime })$ in this sum:\ \begin{equation} A_{i^{\prime }j^{\prime }}x_{i^{\prime }}x_{j^{\prime }}+A_{j^{\prime }i^{\prime }}x_{j^{\prime }}x_{i^{\prime }}=(A_{i^{\prime }j^{\prime }}+A_{j^{\prime }i^{\prime }})x_{i^{\prime }}x_{j^{\prime }}=x_{i^{\prime }}x_{j^{\prime }}. \end{equation} Thus the quadratic form contains at least one non-linear (quadratic) term $ x_{i^{\prime }}x_{j^{\prime }}$. \end{proof}
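The cancellation argument above is easy to probe by brute force over GF(2). The following sketch draws a random symmetric binary matrix and compares the quadratic form with the linear one on all inputs; it is illustrative only, with arbitrary size and seed.

```python
import itertools
import random

def quad_form(x, A):
    """x^t A x over GF(2)."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n)) % 2

def linear_form(x, A):
    """x^t diag(A) over GF(2)."""
    return sum(x[i] * A[i][i] for i in range(len(x))) % 2

random.seed(1)
n = 4
S = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
for i in range(n):
    for j in range(i):
        S[i][j] = S[j][i]                 # symmetrize: S = S^t

# For symmetric A the quadratic form collapses to the linear one on all 2^n inputs:
assert all(quad_form(x, S) == linear_form(x, S)
           for x in itertools.product([0, 1], repeat=n))
print("quadratic form is linear for symmetric A")
```

For a non-symmetric matrix such as $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ the forms differ on $x=(1,1)^t$, in line with the second half of the proof.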
\section{Algorithm for Minor Testing}
\label{app:algo}
Note that all calculations are done modulo 2.
\textbf{Input:} A graph $G$ for which we wish to determine whether there exists an edge interaction $w$ satisfying Eq.~(\ref{eq:w}). Specifically, we considered $G=K_{3,3}$, $K_{5}$, $\bar{K}_{3,3}$ ($K_{3,3}$ with one edge deleted) and $K_{4}$.
\textbf{Output:} A binary vector $w$ that is a satisfying bond distribution or a null vector (indicating no such bond distribution).
\begin{enumerate} \item From the incidence matrix $A$ of $G$ obtain the following items:
\begin{enumerate} \item All vectors $a_{i}$ belonging to the null space $\mathcal{L}$ of the incidence matrix. These row vectors form a matrix $M$.
\item Construct a matrix representation $H$ of the possible corresponding quantum circuits (under the mapping presented earlier). From $H$ construct $Q=\mathrm{lwtr}(H^{t}CH)$. This matrix will have variables $z_{i}$ corresponding to all the possible ways that one can include or omit $Z$ operations (changing these affects the types of edge interactions that one obtains, if any). \end{enumerate}
\item Form the vector $B$ whose $i^{\mathrm{th}}$ entry is $B_{i}=a_{i}^{t}Qa_{i}$, where the $a_{i}$ are the elements of the null space $\mathcal{L}$. This is the left-hand side of Eq.~(\ref{eq:w}). Each entry of $B$ consists of linear equations whose variables $z_{i}$ represent the presence or absence of a $Z$ operation in a quantum circuit that corresponds to $G$.
\item Form a matrix $W$ whose rows are all possible bonds.
\item Produce a matrix $D$ whose $i^{\mathrm{th}}$ row $D_{i}$ is equal to $ MW_{i}$. These are all possible values of the right hand side of Eq.~(\ref {eq:w}). (Note that due to symmetry there will be many repeats, so that the total number of possible bonds to check is far fewer than all possible bonds.)
\item Attempt to solve the system of linear equations $
B_{k}=D_{1,k},B_{k}=D_{2,k},\dots ,B_{k}=D_{|L|,k}$ over $GF(2)$ (where $k$ runs from $1$ to the number of rows of $D$) for the variables $z_{i}$. A solution for some $k$ gives information for a specific circuit representation $H$. If no solution for the $z_{i}$ exists, then there is no satisfying bond distribution $w$ for Eq.~(\ref{eq:w}). This is indeed the case for $K_{3,3}$, $K_{5}$, $\bar{K}_{3,3}$ and $K_{4}$. If there is a solution for some fixed $k$, continue.
\item Take this specific $H$ (i.e., this $H$ has no variables and corresponds to a specific circuit given by the solution of the $z_{i}$ above) and now form $B_{i}=a_{i}^{t}(\mathrm{lwtr}H^{t}CH)a_{i}$ again. This time however, $B$ contains no variables and is a numerical vector. Thus, one now has the binary vector $B$ and the matrix $M$.
\item Solve the linear system $Mw=B$ and output the edge interaction $w$. \end{enumerate}
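Steps 5--7 of the algorithm reduce to solving linear systems over GF(2). A minimal mod-2 Gaussian elimination for the final step $Mw=B$ might look as follows; $M$ and $B$ here are toy values, not the matrices obtained from an actual incidence matrix.

```python
def gf2_solve(M, B):
    """One solution w of M w = B over GF(2), or None if inconsistent."""
    rows = [r[:] + [b] for r, b in zip(M, B)]     # augmented matrix [M|B]
    n = len(M[0])
    pivots, rank = [], 0
    for col in range(n):
        p = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if p is None:
            continue                               # free variable
        rows[rank], rows[p] = rows[p], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[rank])]
        pivots.append(col)
        rank += 1
    if any(r[-1] and not any(r[:-1]) for r in rows[rank:]):
        return None                                # 0 = 1: no solution
    w = [0] * n                                    # free variables set to 0
    for r, col in zip(rows, pivots):
        w[col] = r[-1]
    return w

M = [[1, 1, 0],
     [0, 1, 1]]
B = [1, 0]
w = gf2_solve(M, B)
print(w)  # [1, 0, 0]: indeed M w = B mod 2
```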
Applying this algorithm, we verified using mathematical software that the obstruction set for $\Theta $ includes $K_{3,3}$, $K_{5}$, $\bar{K}_{3,3}$ and $K_{4}$, i.e., Lemma \ref{lem:obs}.
\end{document}
\begin{document}
\title{Multiphoton-state-assisted entanglement purification of material qubits}
\author{J\'ozsef Zsolt Bern\'ad} \email{Zsolt.Bernad@physik.tu-darmstadt.de} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289, Germany} \author{Juan Mauricio Torres} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289, Germany} \author{Ludwig Kunz} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289, Germany} \author{Gernot Alber} \affiliation{Institut f\"{u}r Angewandte Physik, Technische Universit\"{a}t Darmstadt, D-64289, Germany}
\date{\today}
\begin{abstract} We propose an entanglement purification scheme based on material qubits and ancillary coherent multiphoton states. We consider a typical QED scenario where material qubits implemented by two-level atoms fly sequentially through a cavity and interact resonantly with a single-mode of the radiation field. We explore the theoretical possibilities of realizing a high-fidelity two-qubit quantum operation necessary for the purification protocol with the help of a postselective balanced homodyne photodetection. We demonstrate that the obtained probabilistic quantum operation can be used as a bilateral operation in the proposed purification scheme. It is shown that the probabilistic nature of this quantum operation is counterbalanced in the last step of the scheme where qubits are not discarded after inadequate qubit measurements. As this protocol requires present-day experimental setups and generates high fidelity entangled pairs with high repetition rates, it may offer interesting perspectives for applications in quantum information theory. \end{abstract}
\pacs{03.67.Bg, 03.67.Lx, 42.50.Ct, 42.50.Ex} \maketitle
\section{Introduction}
Entanglement purification \cite{Bennett1,Deutsch} is an important protocol which overcomes detrimental effects of noisy channels and generates high fidelity pure entangled states from a large number of not-too-low fidelity states. The controlled-NOT gate lies at the core of the protocol and it was experimentally demonstrated earlier than the proposal for the entanglement purification \cite{Monroe}. First experimental implementations were carried out more than a decade ago using photonic qubits \cite{Zeilinger} and material qubits \cite{Wineland}. The purification protocol has found application in proposals for quantum repeaters \cite{Sangouard}, which enable long distance quantum key distribution \cite{Ekert} and quantum teleportation \cite{Bennett2}. The quantum repeater proposed in Ref. \cite{Briegel} has three sequentially applied building blocks: in the first step entanglement is generated between neighboring nodes; in the second step entanglement purification is carried out over the ensemble of low-fidelity entangled pairs; in the last step the entanglement swapping procedure transforms the entangled states of neighboring stations into entangled states of second-neighboring stations, thus increasing the distance of shared entanglement. A specific class of quantum repeaters is based on hybrid protocols combining continuous and discrete variables \cite{vanLoock1, vanLoock2, vanLoock3}. We have already discussed two building blocks of a hybrid quantum repeater scheme \cite{Bernad1, Bernad2, Torres} based on coherent multiphoton states and resonant matter-field interactions, which have advantages in the photonic postselection measurements \cite{Bernad1}. Additionally, multiphoton coherent states can be produced with high repetition rates and they have high transmission rates in the channels connecting the quantum nodes.
For example, in long distance quantum key distribution scenarios coherent states with both low \cite{Korzh} and high mean photon numbers \cite{Joquet} have already been successfully applied. Recently, an entanglement purification scheme has been proposed in the context of the hybrid quantum repeater using chains of atoms, optical cavities and far-off resonant matter-field interactions \cite{Gonta}. The difficulty in realizing such a scheme is due to the long interaction times or the large number of photons involved in such a QED scenario. While single-mode fields with high mean photon numbers are not an experimental issue, the justification of far-off resonant matter-field interactions requires a significant difference between the frequency of the material transition and the frequency of the single-mode field, and this difference has to be increased further with the increase of the mean photon number in the cavity.
In this paper we discuss entanglement purification schemes which are based on resonant interactions between flying material qubits and a single-mode cavity field \cite{Haroche}. At the core of our scheme is the one-atom maser which has been experimentally investigated over the last few decades \cite{Rempe}. Our motivation is to augment our previous work with the missing entanglement purification protocol. Thus we require that the chosen scheme, though not the only possibility to realize an entangling quantum operation \cite{Cirac}, must be compatible with the architecture of a hybrid quantum repeater based on coherent multiphoton states and resonant matter-field interactions. We focus on resonant matter-field interactions between material qubits and a single-mode cavity prepared initially in a coherent state. The two material qubits fly sequentially through the cavity and interact with the single-mode field, resulting in a joint quantum state which, after a successful postselective balanced homodyne photodetection, yields an entangling two-qubit quantum operation. We demonstrate that this probabilistic quantum operation can replace the controlled-NOT gate in the purification schemes of Refs. \cite{Bennett1} and \cite{Deutsch}. Furthermore, in our schemes the qubits do not have to be discarded after inadequate qubit measurement results. There is a specific Bell diagonal state which is generated in hybrid quantum repeaters and is thus a good example for the purification scheme of Ref. \cite{Deutsch}. We discuss the performance of our proposed purification protocols in this specific scenario. Furthermore, we also investigate the role of spontaneous decay in the material qubits and the damping of the cavity field mode. Thus we present a truly microscopic model of this QED scenario.
This paper is organized as follows. In Sec. \ref{Model} we introduce the theoretical model. In Sec. \ref{Entmodel} we determine the form of the two-qubit quantum operation which is generated by a postselective balanced homodyne photodetection. Numerical results are presented for the success probability of obtaining this quantum operation. These results are employed in Sec. \ref{Purify} to realize entanglement purification. In Sec. \ref{loss} we study the role of spontaneous decay and cavity losses and their effect on the purification schemes. Details of the relevant photon states of the unitary model are collected in the Appendix.
\section{Model} \label{Model}
In this section we discuss a QED model consisting of a single-mode cavity in which two atoms, implementing the material qubits, are injected sequentially such that at most one atom at a time is present inside the cavity. The field inside the cavity is prepared initially in a coherent state and after both interactions the state of the field gets correlated with the state of the qubits. This scenario, illustrated in Fig.~\ref{fig:setup}, is motivated by the progress in atom-cavity implementations, where, with the help of cutting-edge technology, all the relevant parameters which justify our setup are well under control \cite{Haroche,Haroche2}. We present the solution to this model and discuss its properties with the help of the coherent state approximation \cite{Gea-Banacloche}.
Let us consider two qubits $A_1$ and $A_2$ with ground states $\ket{0}^\ell$ and excited states $\ket{1}^\ell$ ($\ell \in\{A_1,A_2\}$). These qubits pass through a cavity in sequence and interact with a single-mode radiation field which is in resonance with the qubits' transition frequency. This corresponds to the well-known resonant Jaynes-Cummings-Paul interaction \cite{Jaynes,Paul}. Due to the resonance condition we are going to work in a time-independent interaction picture with respect to the free energy of the cavity and the two qubits. In the dipole and rotating-wave approximation the Hamiltonian accounting for the dynamics of qubits and field is given by ($\hbar=1$) \begin{align}
\hat H^\ell=g \left(\hat{a} \hat{\sigma}^\ell_+ + \hat{a}^{\dagger} \hat{\sigma}^\ell_- \right),
\quad \ell \in \{A_1,A_2\}.
\label{} \end{align} We have considered the raising and lowering operators $\hat{\sigma}^\ell_+=\ket{1} \bra{0}^\ell$ and $\hat{\sigma}^\ell_-=\ket{0}\bra{1}^\ell$, and the vacuum Rabi splitting $2g$ for the dipole transition. Furthermore, $\hat{a}$ ($\hat{a}^\dagger$) is the destruction (creation) operator of the field mode.
We are interested in the situation where there are no initial correlations between the field and the qubits. Therefore, we choose an initial state of the form \begin{equation} \ket{\Psi_0}= \left (c_{00} \ket{00} + c_{01} \ket{01} + c_{10} \ket{10} + c_{11} \ket{11} \right) \ket{\alpha}, \label{initial} \end{equation} with the qubits set in an arbitrary state in the basis $\ket{ij}=\ket{i}^{A_1}\ket j^{A_2}$ ($i,j \in \{0,1\}$),
and the field is in a coherent state \begin{equation}
\label{chst}
\ket{\alpha}=\sum_{n=0}^\infty
e^{-\frac{|\alpha|^2}{2}}
\frac{\alpha^n}{\sqrt{n!}}
\ket n,
\quad\alpha=\sqrt{\bar n}\,e^{i\phi} \end{equation} written in terms of the photon-number states $\ket{n}$ ($n\in {\mathbb N}_0$) and with the phase $\phi$. As stated before, we are interested in the simplest scenario where the two qubits interact independently and sequentially with the field. Therefore, the evolution operator $\hat{U}(t)$ can be written as a product of separate evolution operators and the temporal state vector can be evaluated as \begin{align}
\ket{\Psi(\tau)}=
\hat U(\tau)\ket{\Psi_0},\quad
\hat U(\tau)= e^{-i \hat H^{A_2}\tau} e^{-i \hat H^{A_1}\tau},
\label{evolution} \end{align} where we considered equal interaction times.
Solving for the state vector is not a complicated task, as it is based on the well-known solutions of the resonant Jaynes-Cummings-Paul model (see for example \cite{Schleich}). The result is a time-dependent quantum state $\ket{\Psi(\tau)}$ of the tripartite system that can be expressed in the following form \begin{eqnarray}
\ket{\Psi(\tau)}&=&\ket{00}\ket{g_{00}(\tau)}
+\ket{01}\ket{g_{01}(\tau)}
+\ket{10}\ket{g_{10}(\tau)} \nonumber \\
&+&\ket{11}\ket{g_{11}(\tau)},
\label{psi} \end{eqnarray} where the unnormalized field states $\ket{g_{ij}(\tau)}$ are presented in Appendix \ref{App}.
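The sequential evolution of Eq. \eqref{evolution} can also be cross-checked numerically: the resonant Jaynes-Cummings-Paul propagator acts as a rotation on each doublet $\{\ket{1}\ket{n},\ket{0}\ket{n+1}\}$ with mixing angle $g\tau\sqrt{n+1}$. Below is a minimal Python sketch in a truncated Fock basis; the truncation and parameter values are illustrative, not the paper's numerics.

```python
import cmath
import math

def jc_step(state, qubit, g, tau):
    """Resonant JC evolution exp[-i g tau (a s+ + a^dag s-)] for one qubit.
    state: dict {(q1, q2, n): amplitude}; qubit: 0 (A1) or 1 (A2)."""
    out = {}
    def put(key, val):
        out[key] = out.get(key, 0) + val
    for (q1, q2, n), amp in state.items():
        q = (q1, q2)[qubit]
        ket = lambda qq, m: (qq, q2, m) if qubit == 0 else (q1, qq, m)
        if q == 1:                        # doublet |1,n> <-> |0,n+1>
            th = g * tau * math.sqrt(n + 1)
            put(ket(1, n), amp * math.cos(th))
            put(ket(0, n + 1), -1j * amp * math.sin(th))
        elif n > 0:                       # doublet |0,n> <-> |1,n-1>
            th = g * tau * math.sqrt(n)
            put(ket(0, n), amp * math.cos(th))
            put(ket(1, n - 1), -1j * amp * math.sin(th))
        else:                             # |0,0> is invariant
            put((q1, q2, n), amp)
    return out

nbar, nmax, g, tau = 25.0, 80, 1.0, 2.0
alpha = math.sqrt(nbar)
# initial state |00>|alpha>, coherent state truncated at nmax photons
state = {(0, 0, n): cmath.exp(-nbar / 2) * alpha**n / math.sqrt(math.factorial(n))
         for n in range(nmax)}
state = jc_step(jc_step(state, 0, g, tau), 1, g, tau)   # qubit A1, then A2
norm = sum(abs(a)**2 for a in state.values())
print(round(norm, 6))   # unitarity: the total norm stays 1
```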
\begin{figure}
\caption{A cavity-QED setup for a probabilistic two-qubit quantum operation. Two qubits $A_1$ and $A_2$ fly sequentially through a cavity
and they interact resonantly with a single-mode field. The field is initially prepared in a coherent state $\ket{\alpha}$.
After both qubits have passed through the cavity, the field state is postselected by a balanced homodyne detection, which is depicted as a detector outside the cavity.
Provided that we are successful, the resulting two-qubit quantum operation is applied in the entanglement purification schemes in Sec. \ref{Purify}.}
\label{fig:setup}
\end{figure}
In order to obtain a better understanding of the field states we concentrate on the case of large mean photon number $\bar n \gg 1$ and interaction times $\tau$ such that the Rabi frequency $g \sqrt{n}$ can be linearised around $\bar n$. This procedure can be justified for short interaction times $\tau$ that fulfill the condition $g\tau\ll \sqrt{\bar n}$. This corresponds to a time scale well below the well-known revival phenomena of the population inversion in the Jaynes-Cummings-Paul model \cite{Cummings,Eberly}. Thus one can find that the state of Eq. \eqref{psi} can be approximated by \begin{equation} \ket{\Psi(\tau)}\approx \ket{\psi_-} \ket{\alpha e^{-i\frac{g}{\sqrt{\bar n}}\tau}} + \ket{\psi_+} \ket{\alpha e^{i\frac{g}{\sqrt{\bar n}}\tau}} + \ket{\psi_\star} \ket{\alpha}
\label{psi_mpn} \end{equation} with the two-qubit unnormalized states \begin{align}
\ket{\psi_\star}&=
\frac{c_{01}-c_{10}}{\sqrt2}\ket{\Psi^-}
+
\frac{c_{00}e^{i\phi}-c_{11}e^{-i\phi}}{\sqrt2}\ket{\Phi^-_{\phi}},
\label{Malpha}
\\
\ket{\psi_\pm}&=
\frac{c_{00}e^{i\phi} + c_{11} e^{-i \phi} \mp c_{01}\mp c_{10} }{2 e^{\mp ig\sqrt{\bar n}\tau}} \frac{\ket{\Phi^+_{\phi}}\mp\ket{\Psi^+} }{\sqrt2}
\label{Mmp} \end{align} which have been written in terms of the Bell states \begin{align}
\ket{\Psi^\pm}&=\frac{1}{\sqrt2}\left(\ket{01}\pm\ket{10}\right), \\
\ket{\Phi^\pm_\phi}&=\frac{1}{\sqrt2}\left(e^{-i\phi}\ket{00}\pm e^{i\phi}\ket{11}\right),
\quad \ket{\Phi^\pm}=\ket{\Phi^\pm_0}.
\label{Bellstates} \end{align} One can note that the state in Eq. \eqref{psi_mpn} involves only three coherent states: two of them are $\ket{\alpha e^{\pm ig\tau/\sqrt{\bar n}}}$ that rotate with frequencies of opposite sign and a third one, $\ket{\alpha}$, which corresponds to the initial coherent state. The approximation in Eq. \eqref{psi_mpn} makes evident that a postselective field measurement can be used to prepare an entangled two-qubit state. Of course the simplest non-trivial situation is when the three coherent states are nearly orthogonal. For this purpose we consider the overlaps between $\ket{\alpha}$ and $\ket{\alpha e^{\pm ig \tau/\sqrt{\bar n}}}$ that yield \begin{equation}
\langle \alpha \ket{\alpha e^{\pm i\frac{g}{\sqrt{\bar n}}\tau}}=
\exp{\left[-\bar n \left(1-e^{\pm i\frac{g}{\sqrt{\bar n}}\tau}\right)\right]}
\approx e^{- \frac{g^2\tau^2}{2}}.
\label{overlap} \end{equation} The last approximation holds for $g\tau \ll \sqrt{\bar n}$ and shows that the overlap nearly vanishes for interaction times $g\tau>\sqrt2$. It can be shown that the overlap between the other two states vanishes faster in time. Therefore we consider interaction times that fulfill the condition \begin{equation}
\sqrt 2<g\tau\ll\sqrt{\bar n}.
\label{taucondition} \end{equation} We emphasize that the first inequality is to ensure orthogonal field states, while the second inequality sets a bound in time where the coherent state approximation is valid. We close this section by pointing out the interesting fact that a similar result to Eq. \eqref{psi_mpn} can be obtained by choosing a setup where the two qubits interact simultaneously with a single-mode field for a time $\tau$. In our previous works \cite{Torres,Torres2} we have shown that in the coherent state approximation the two-atom Tavis-Cummings model results in a solution where the two-qubit state $\ket{\psi_\star}$ is paired up with $\ket\alpha$.
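Eq. \eqref{overlap} follows from the standard coherent-state overlap $\braket{\alpha}{\beta}=\exp(-|\alpha|^2/2-|\beta|^2/2+\alpha^*\beta)$, and the Gaussian approximation $e^{-g^2\tau^2/2}$ can be probed numerically. The parameter values below are illustrative, chosen to satisfy $\sqrt2 < g\tau \ll \sqrt{\bar n}$.

```python
import cmath
import math

def coherent_overlap(alpha, beta):
    """<alpha|beta> = exp(-|alpha|^2/2 - |beta|^2/2 + conj(alpha)*beta)."""
    return cmath.exp(-abs(alpha)**2 / 2 - abs(beta)**2 / 2
                     + alpha.conjugate() * beta)

nbar, g, tau = 100.0, 1.0, 2.0            # g*tau = 2 obeys sqrt2 < g*tau << sqrt(nbar)
alpha = math.sqrt(nbar)                   # phi = 0
beta = alpha * cmath.exp(1j * g * tau / math.sqrt(nbar))

exact = abs(coherent_overlap(alpha, beta))   # |exp(-nbar(1 - e^{i g tau/sqrt(nbar)}))|
approx = math.exp(-(g * tau)**2 / 2)         # short-time Gaussian approximation
print(exact, approx)                         # both close to exp(-2) ~ 0.135
```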
\section{Entangling quantum operation} \label{Entmodel} \subsection{Postselection by projection onto $\ket\alpha$}
Our next task is to determine a field measurement which is capable of conditionally realizing an entangling two-qubit quantum operation. Eq. \eqref{overlap} shows that the overlaps between the coherent states approximately vanish for interaction times $g \tau>\sqrt2$. Thus a postselective measurement on the field states can generate three two-qubit quantum operations which, applied to the initial state in Eq. \eqref{initial}, result in the states of Eqs. \eqref{Malpha} and \eqref{Mmp}. However, only the two-qubit state in Eq. \eqref{Malpha} is a good candidate for an entanglement purification scheme. The reason is that the states $\ket{\psi_{\pm}}$ are separable states. Only the state $\ket{\psi_\star}$ has the potential to be entangled. In order to postselect the state $\ket{\psi_\star}$ one has to implement the following quantum operation for any initial two-qubit state $\ket{\psi}$ \begin{align}
\hat W_1{(\phi,\bar n)}\ket\psi
=\bra{\alpha}\hat U(\tau)\ket{\psi}\ket\alpha
\approx
\ket{\psi_\star}.
\label{W1} \end{align} The operation is performed by first letting the qubits interact with the field, as depicted in Fig. \ref{fig:setup}. This is described by the evolution $\hat U(\tau)\ket{\psi}\ket\alpha$, with the evolution operator in Eq. \eqref{evolution}. After the interaction, the state of the field is projected onto the coherent state $\ket\alpha$.
For appropriate values of $\bar n=|\alpha|^2$ and $\tau$ (see Eq. \eqref{taucondition}), this operation approaches the quantum operation $\hat W_1{(\phi,\bar n)}\to\hat M_\phi$ that can be represented as the sum of two projectors on Bell states:
\begin{equation}
\hat{M}_\phi=\ketbra{\Psi^-}{\Psi^-}+\ketbra{\Phi^-_\phi}{\Phi^-_\phi}, \quad \hat M=\hat M_0.
\label{Qgate} \end{equation} In particular, its action on the atomic states of the standard basis can be listed as \begin{eqnarray}
\hat M_\phi\ket{01}=\tfrac{1}{\sqrt2}\ket{\Psi^-},\quad
\hat M_\phi\ket{10}=-\tfrac{1}{\sqrt2}\ket{\Psi^-},
\nonumber \\
\hat M_\phi\ket{00}=\tfrac{e^{i\phi}}{\sqrt2}\ket{\Phi^-_\phi},\quad
\hat M_\phi\ket{11}=-\tfrac{e^{-i\phi}}{\sqrt2}\ket{\Phi^-_\phi}.
\nonumber \end{eqnarray} This entanglement generating property of $\hat M_\phi$ allows us to use it as a bilateral operation in entanglement purification schemes of Refs. \cite{Bennett1} and \cite{Deutsch} as we will show in Sec. \ref{Purify}. A practical question is how to realize the postselective measurement of the field. In the next subsection we investigate this issue by means of balanced homodyne photodetection \cite{Lvovsky}.
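The listed action of $\hat M_\phi$ (here for $\phi=0$) can be verified with a few lines of Python; the $4\times4$ matrix below is built in the basis $\{\ket{00},\ket{01},\ket{10},\ket{11}\}$. This is an illustrative numerical check, not part of the physical scheme.

```python
import math

s = 1 / math.sqrt(2)
psi_m = [0, s, -s, 0]          # |Psi^->
phi_m = [s, 0, 0, -s]          # |Phi^-_phi> with phi = 0

# M = |Psi^-><Psi^-| + |Phi^-><Phi^-| as a 4x4 real matrix
M = [[psi_m[i] * psi_m[j] + phi_m[i] * phi_m[j] for j in range(4)]
     for i in range(4)]

def apply(M, v):
    """Matrix-vector product in the two-qubit standard basis."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# The separable input |01> is mapped onto a Bell state: M|01> = |Psi^->/sqrt(2)
out = apply(M, [0, 1, 0, 0])
print(out)  # [0.0, 0.5, -0.5, 0.0], i.e. |Psi^-> / sqrt(2)
```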
\subsection{Postselection by balanced homodyne photodetection}
\begin{figure*}
\caption{ Left panel: Husimi Q-function of the field state defined in Eq. \eqref{Husimi} after the interaction between the cavity and the qubits as depicted in Fig. \ref{fig:setup}. Right panel: The corresponding probability distribution $P(p)$ for the $p$ quadrature defined in Eq. \eqref{probhomod} in full line and the weighted $p$ quadrature distribution of the initial coherent state $\ket{\alpha}$ as a reference in dashed line. The interaction time is $\tau=2/g$. The initial tripartite state in Eq. \eqref{initial} is considered to be $\ket{00} \ket{\alpha}$ with $\alpha =10$.
}
\label{fig:QA}
\end{figure*}
In the following we return to our exact calculations in Eq. \eqref{psi} and show that this quantum operation can be probabilistically implemented with fidelity close to unity by measuring the state of the field with a balanced homodyne photodetection. First let us study the evolution of the field in phase space with dimensionless coordinates $x$ and $p$. We use for this purpose the Husimi Q-function, defined as \begin{equation}
Q(\beta,\tau) = \frac{1}{\pi} \bra{\beta} \hat{\rho}^F(\tau)\ket{\beta}, \quad \beta=\frac{x+i\,p}{\sqrt2},
\label{Husimi} \end{equation} where we have introduced the reduced density matrix of the field state $\hat{\rho}^F(\tau) = \mathrm{Tr}_{A_1,A_2}\left\{\ket{\Psi(\tau)}\bra{\Psi(\tau)}\right\}$ with $\mathrm{Tr}_{A_1,A_2}$ being the partial trace over the qubits. Figure \ref{fig:QA} shows the $Q$ function of the field after the interactions with the two qubits. The initial field state is characterized by $\alpha=10$, i.e., $\phi=0$. Although the results can be extended to arbitrary values of $\phi$, for the sake of simplicity here and in the rest of the paper we consider $\phi=0$. It can be noted that the $Q$ function is composed of three spots, each of which corresponds to a coherent state in Eq. \eqref{psi_mpn}. During the first interaction the initial coherent state splits into two spots that evolve with frequencies of opposite sign. When qubit $A_2$ interacts with the field emerged after the interaction with qubit $A_1$, both spots split up again. Due to the fact that the interaction time for both of the qubits is equal, the spots moving backwards meet again at the initial position. Furthermore, the state at the initial position is close to a coherent state while the two other spots are deformed due to the nonlinear dependence of the Rabi frequencies on the photon-number. It is an interesting feature that the initial coherent state is almost restored and this makes the central contribution to the field state an attractive candidate to be measured. Provided that we are successful in this measurement we generate the two-qubit quantum operation in Eq. \eqref{Qgate}.
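For a field that is exactly in a coherent state $\ket{\alpha_0}$, the Husimi function of Eq. \eqref{Husimi} reduces to the Gaussian $Q(\beta)=e^{-|\beta-\alpha_0|^2}/\pi$ with peak value $1/\pi$; the three spots in Fig. \ref{fig:QA} are deformations of such Gaussians. A minimal numerical sketch, with illustrative evaluation points:

```python
import cmath
import math

def husimi_Q_coherent(beta, alpha):
    """Q(beta) = |<beta|alpha>|^2 / pi for a pure coherent state |alpha>."""
    overlap = cmath.exp(-abs(alpha)**2 / 2 - abs(beta)**2 / 2
                        + beta.conjugate() * alpha)
    return abs(overlap)**2 / math.pi

alpha = 10 + 0j                                 # the initial field state of Fig. 2
print(husimi_Q_coherent(alpha, alpha))          # peak value 1/pi ~ 0.318
print(husimi_Q_coherent(alpha + 3, alpha))      # exp(-9)/pi, three widths away
```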
In the next step we focus on the postselective field measurement. We briefly recapitulate the basic features which lead to a quadrature measurement of the field with the help of a balanced homodyne measurement \cite{Lvovsky,Torres}. The field state, subject to detection, is superposed with a strong local coherent state, i.e., one with a high mean photon number, at a $50\%$ reflecting beam splitter, and the modes emerging from the beam splitter are measured by two photodetectors. In our scheme we consider ideal photodetectors. The measured signal is the difference of the photon numbers of the two photodetectors. Dividing the measured signal by the square root of two times the local coherent state's mean photon number results in a signal which corresponds to a projective measurement of a quadrature operator $\ket{x_\theta}\bra{x_\theta}$ on the field state. The eigenvalue equation of the quadrature $\ket{x_\theta}$ reads \begin{equation} \hat x_\theta\ket{x_\theta}= \frac{1}{\sqrt{2}}\left(\hat{a} e^{-i\theta}+\hat{a}^\dagger e^{i\theta}\right)\ket{x_\theta}=x_\theta \ket{x_\theta}, \label{quadrature} \end{equation} where $\theta$ is the phase of the local oscillator. $\hat{a}$ and $\hat{a}^\dagger$ are the annihilation and creation operators of the single-mode field to be measured. Here, we assume that the field state emerging in the cavity can be perfectly transferred to this single-mode field. Due to the phase space structure seen in Fig.~\ref{fig:QA} it is reasonable to select the phase of the local oscillator to be $\theta=\frac{\pi}{2}$, i.e., the coordinate $p=x_{\pi/2}$. The reason is that the field contribution paired with the two-qubit quantum operation is the farthest from the other field contributions in this particular quadrature measurement. We remark that the results can be extended to arbitrary $\phi\neq 0$ by shifting the phase of the quadrature to be measured, i.e., $x_{\phi+\pi/2}$.
In order to postselect the two-qubit state $\ket{\psi_\star}$, one has to project the field state with the projector $\ketbra{p}{p}$ restricted to the interval $p\in [-2,2]$. This window selects only the middle contribution in phase space, which corresponds to the coherent state $\ket{\alpha}$, and also yields the highest probability of postselecting the two-qubit state $\ket{\psi_\star}$. In this case the postselected two-qubit quantum operation takes the form \begin{align}
\hat W_2{(p,\bar n)}\ket\psi
&=\bra{p}\hat U(\tau)\ket{\psi}\ket\alpha
\approx\ket{\psi_\star},
\nonumber\\
p&\in[-2,2],\quad \alpha=\sqrt{\bar n}.
\label{W2} \end{align} The probability for such an event is given by \begin{align}
P_H= \int_{-2}^2P(p)dp,\quad P(p)= \Tr{\ketbra{p}{p}\,\ketbra{\Psi(\tau)}{\Psi(\tau)}}, \label{probhomod} \end{align} which is obtained by integrating the probability distribution of the field $P(p)$ in the $p$ quadrature. In the limit of high mean photon numbers, this can be approximated by integrating the function \begin{equation}
P_H\approx \int_{-2}^2 \frac{|\braket{p}{\alpha}|^2}{|\braket{\psi_\star}{\psi_\star}|^2}dp
=\frac{{\rm erf}(2)}{|\braket{\psi_\star}{\psi_\star}|^2},
\label{} \end{equation} with the error function ${\rm erf}(2)=0.995322$ \cite{AS}. For large mean photon numbers $\bar n$ and with the interaction time fulfilling condition \eqref{taucondition} the postselected two-qubit quantum operation approaches the quantum operation $\hat M$ in Eq. \eqref{Qgate}.
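The value of ${\rm erf}(2)$ can be cross-checked numerically: for a coherent state with real amplitude, the $p$-quadrature distribution $|\braket{p}{\alpha}|^2$ is a normalized Gaussian with mean $0$ and variance $1/2$, so the integral over the postselection window $[-2,2]$ reduces to ${\rm erf}(2)$. A minimal sketch (not from the paper; the normalization $|\braket{\psi_\star}{\psi_\star}|^2$ is omitted here):

```python
import math

# p-quadrature distribution of a coherent state with real amplitude:
# a Gaussian with mean 0 and variance 1/2, i.e. P(p) = exp(-p^2)/sqrt(pi).
def P(p):
    return math.exp(-p * p) / math.sqrt(math.pi)

# Integrate over the postselection window p in [-2, 2] (midpoint rule).
n = 100_000
h = 4.0 / n
P_H = sum(P(-2.0 + (k + 0.5) * h) for k in range(n)) * h

assert abs(P_H - math.erf(2.0)) < 1e-8   # erf(2) = 0.995322...
```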
In the right panel of Fig. \ref{fig:QA} we have plotted as a solid line the distribution $P(p)$, rotated $90$ degrees clockwise to allow a better comparison with the $Q$ function in the left panel. We have also plotted as a dashed line the distribution
$|\braket{p}{\alpha}|^2/|\braket{\psi_\star}{\psi_\star}|^2$ to compare with $P(p)$ and show the difference between the coherent state $\ket{\alpha}$ and the field state $\hat{\rho}^F(\tau)$ emerged after the matter-field interactions. In the case of the coherent state the integration over all relevant quadrature values $p \in [-2,2]$ results in an almost perfect projection onto the coherent state. However, in the case of the field state $\hat{\rho}^F(\tau)$ this projection is only achieved for certain interaction times and large mean photon numbers.
\begin{figure}
\caption{The fidelity $F_\star$
of the two-qubit state in Eq. \eqref{W2} with respect to $\ket{\psi_\star}$ after a successful projective measurement on the quadrature
$\ket{p}$, with $p=0$. The initial state of the two qubits is set to $\ket{00}$ and
the initial coherent state was taken with real $\alpha=\sqrt{\bar n}$. Four curves are presented for different values of the mean photon number $\bar n \in \{10,50,100,200\}$ as described in the legend.}
\label{fig:fid1}
\end{figure}
In order to see how well the state $\ket{\psi_\star}$ can be generated, we consider the fidelity \begin{equation}
F_\star = |\bra{\psi_\star} \hat W_2{(p,\bar n)}\ket{\psi}|^2
\label{fstar} \end{equation} after a successful projective measurement on the quadrature $\ket{p}$. Figure \ref{fig:fid1} shows the fidelity $F_\star$ as a function of the interaction time $\tau$ for different values of the mean photon number $\bar n$. The quadrature measurement was always taken at the middle of the distribution, $p=0$, and the qubits were initially in the state $\ket{00}$, i.e., both in the ground state. The fidelity increases as a function of time until it reaches its maximum value around $\tau g=2$. This is the time required for the coherent states of the field to become distinguishable. Afterwards the fidelity drops again as the coherent state approximation breaks down with increasing time. However, this decrease in fidelity is slower for larger values of the mean photon number $\bar n$, in agreement with the limits of the interaction time given in Eq. \eqref{taucondition}.
\section{Entanglement purification} \label{Purify}
In this section it is demonstrated how the two-qubit quantum operation in Eq. \eqref{Qgate} can be used for implementing entanglement purification schemes. The basic idea of entanglement purification is to increase the degree of entanglement of a qubit pair at the expense of another qubit pair. Therefore, the protocol can be assumed to start with
a product state of two entangled qubit pairs \begin{equation} \hat{\boldsymbol{\rho}}=\hat{\rho}^{A_1,B_1} \otimes \hat\rho^{A_2,B_2}, \label{InitPur} \end{equation} where $A$ and $B$ are two spatially separated quantum systems. The task has to be accomplished by applying local quantum operations and measurements on sides $A$ and $B$ separately. The measurement procedure leads to the destruction of one of the pairs, say $\hat\rho^{A_2,B_2}$. The final result is a qubit pair $\hat \rho'^{A_1,B_1}$ with a higher fidelity with respect to a maximally entangled state, typically chosen to be the Bell state $\ket{\Psi^-}$. Provided one has a large number of qubit pairs, the iteration of the protocol leads to the distillation of a maximally entangled state. In the following, we discuss two of the most well-known protocols \cite{Bennett1,Deutsch} and present alternative versions using the quantum operation of Sec. \ref{Entmodel}.
{\it Scheme 1.} The first method presented here is based on the pioneering work of Ref. \cite{Bennett1} where the entanglement purification protocol distills the entangled state $\ket{\Psi^-}$ from a large ensemble of states $\hat{\rho}$ with the property $\bra{\Psi^-} \hat{\rho} \ket{\Psi^-}>\frac{1}{2}$. The protocol for two qubit pairs can be summarized in five steps:\\ ({\bf B1}) Transform both $\hat{\rho}$ into the Werner form. \\ ({\bf B2}) Apply $\hat{\sigma}_y^{A_1}$ and $\hat{\sigma}_y^{A_2}$ (Pauli spin operators).\\
({\bf B3}) Perform the bilateral operation $\hat U_{\rm CNOT}^{A_1\rightarrow A_2}\otimes\hat U_{\rm CNOT}^{B_1\rightarrow B_2}$.\\
({\bf B4}) Measure the target pair ($A_2,B_2$). \\ ({\bf B5}) If the measurement result is either $\ket{00}$ or $\ket{11}$, perform a $\hat{\sigma}_y^{A_1}$ rotation; otherwise discard pair ($A_1,B_1$).
These steps are applied to a whole ensemble and result in halving the number of pairs and yielding a new ensemble with bipartite states $\hat{\rho}'$. The fidelity of the pairs in the new ensemble $F'=\bra{\Psi^{-}} \hat{\rho}' \ket{\Psi^{-}}$ is larger than the fidelity of the pairs in the processed ensemble $F=\bra{\Psi^{-}} \hat{\rho} \ket{\Psi^{-}}$ provided that initially $\bra{\Psi^-} \hat{\rho} \ket{\Psi^-}>\frac{1}{2}$. Now, these steps are repeated from the beginning and this iteration leads to the purification of $\ket{\Psi^-}$. The requirement for the initial state $F=\bra{\Psi^-} \hat{\rho} \ket{\Psi^-}>\frac{1}{2}$ can be overcome by a certain filtering operation, aimed to exploit entanglement in a different way \cite{Horodecki}.
Let us briefly recapitulate step ({\bf B1}) due to its use in our subsequent discussions. A general bipartite state can be converted to the Werner state \begin{eqnarray}
\hat{\rho}_W(F) &=& F \ket{\Psi^{-}}\bra{\Psi^{-}} + \frac{1-F}{3} \ket{\Psi^{+}}\bra{\Psi^{+}} \nonumber \\
&+& \frac{1-F}{3} \ket{\Phi^{-}}\bra{\Phi^{-}}+ \frac{1-F}{3} \ket{\Phi^{+}}\bra{\Phi^{+}},
\label{wernerstate} \end{eqnarray} with the help of a linear projection \cite{Werner}, also called the twirling operation. It has also been shown that $12$ local random unitary operations from the $SU(2)$ group are necessary and sufficient to bring any two-qubit state $\hat{\rho}$ into a Werner state \cite{Bennett3}; four operations are needed to bring $\hat \rho$ into a state $\hat \rho_{\rm BD}$ which is diagonal in the Bell basis, and in turn three more operations transform $\hat \rho_{\rm BD}$ into a Werner state $\hat\rho_W$ (we will omit the dependence on $F$ when no ambiguity arises). This can be written as \begin{eqnarray}
\hat \rho_W= \frac{1}{3} \sum_{j=1}^3\hat B^\dagger_j\hat\rho_{\rm BD}\hat B_j,\quad
\hat \rho_{\rm BD}=\frac{1}{4}\sum_{j=1}^4 \hat B_j^\dagger \hat B_j^\dagger\hat \rho \hat B_j \hat B_j, \nonumber \\
\label{twirlingop} \end{eqnarray} where we have used the $4$ unitary transformations \begin{align}
\hat B_j&=\hat b_j^A\otimes \hat b_j^B,\quad
\hat b_1^\ell=\frac{\hat{\mathbb{I}}^\ell+i\hat\sigma_x^\ell}{\sqrt2},\quad
\hat b_2^\ell=\frac{\hat{\mathbb{I}}^\ell-i\hat\sigma_y^\ell}{\sqrt2},\quad
\nonumber\\
\hat b_3^\ell&=\ketbra{1}{1}^\ell+i\ketbra{0}{0}^\ell,\quad \hat b_4^\ell=\hat{\mathbb{I}}^\ell,
\quad \ell\in\{A,B\}.
\label{Bs} \end{align} which are expressed in terms of the local unitary transformations $\hat b_j$ acting on a single qubit, the Pauli spin operators $\hat\sigma_x$ and $\hat\sigma_y$, and the identity map $\hat{\mathbb{I}}$. All three states have the same fidelity with respect to $\ket{\Psi^-}$, i.e., \begin{equation}
F=
\bra{\Psi^-}\hat \rho\ket{\Psi^-}=
\bra{\Psi^-}\hat \rho_{\rm BD}\ket{\Psi^-}=
\bra{\Psi^-}\hat \rho_W\ket{\Psi^-}.
\label{} \end{equation}
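Both twirling maps can be verified numerically. The following sketch (an illustration, not part of the derivation; basis ordering $\ket{0}=(1,0)^T$ assumed) builds the unitaries $\hat B_j$ of Eq. \eqref{Bs}, applies the two sums of Eq. \eqref{twirlingop} to a random two-qubit density matrix, and checks that the first map yields a Bell-diagonal state and the second a Werner state, with the fidelity $F$ preserved throughout:

```python
import numpy as np

# Single-qubit operators, basis |0> = (1,0)^T.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
b = [(I2 + 1j * sx) / np.sqrt(2),   # b_1
     (I2 - 1j * sy) / np.sqrt(2),   # b_2
     np.diag([1j, 1.0]),            # b_3 = |1><1| + i|0><0|
     I2]                            # b_4
B = [np.kron(bj, bj) for bj in b]   # B_j = b_j (side A) x b_j (side B)

# Random two-qubit density matrix.
rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# The two twirls of Eq. (twirlingop).
rho_bd = sum(Bj.conj().T @ Bj.conj().T @ rho @ Bj @ Bj for Bj in B) / 4
rho_w = sum(B[j].conj().T @ rho_bd @ B[j] for j in range(3)) / 3

# Bell basis columns, ordered (Psi-, Psi+, Phi-, Phi+).
s = 1 / np.sqrt(2)
bell = np.column_stack([s * np.array([0, 1, -1, 0]),
                        s * np.array([0, 1, 1, 0]),
                        s * np.array([1, 0, 0, -1]),
                        s * np.array([1, 0, 0, 1])]).astype(complex)

rb = bell.conj().T @ rho_bd @ bell
assert np.allclose(rb, np.diag(np.diag(rb)))      # Bell diagonal
F = rb[0, 0].real
assert np.isclose(F, (bell[:, 0].conj() @ rho @ bell[:, 0]).real)  # F preserved
pw = np.diag(bell.conj().T @ rho_w @ bell).real
assert np.isclose(pw[0], F) and np.allclose(pw[1:], (1 - F) / 3)   # Werner form
```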
In our scheme we consider that each qubit pair flies through cavities on sides $A$ and $B$, and after two sequential interactions of the qubits with the single-mode fields two postselective field measurements are performed. This method generates two probabilistic two-qubit quantum operations on the two pairs on sides $A$ and $B$, as shown in Sec. \ref{Model}. These quantum operations replace the controlled-NOT operations used in the original purification procedure. Our alternative version of the protocol ({\bf aB}) requires the following four steps:
\noindent ({\bf aB1}) We assume that every spatially separated pair is entangled and is brought into the Werner form by local random unitary operations. This is equivalent to ({\bf B1}). We denote the four-qubit state by $\hat{\boldsymbol{\rho}}$. Therefore, the four-qubit input state reads \begin{equation} \hat{\boldsymbol{\rho}}=\hat{\rho}_W^{A_1,B_1} \otimes \hat\rho_{W}^{A_2,B_2} \label{Werner1} \end{equation} with the Werner state defined in Eq. \eqref{wernerstate}.\\ ({\bf aB2}) We now apply the two-qubit quantum operations, which results in the state \begin{equation} \hat{\boldsymbol{\rho}}'=
\frac{\hat Q \hat{\boldsymbol{\rho}} \hat Q^\dagger }{\mathrm{Tr} \left\{\hat Q^\dagger \hat Q \hat{\boldsymbol{\rho}}\right\}} ,\quad \hat Q=\hat{M}^{A_1,A_2}\otimes \hat{M}^{B_1,B_2}, \label{map} \end{equation} where $\hat M^{\ell_1,\ell_2}$ is the matrix $\hat M$ in Eq. \eqref{Qgate} acting on qubits $\ell_1$ and $\ell_2$, with $\ell\in\{A,B\}$. The success probability to obtain this state is given by the normalization factor \begin{equation}
\Tr{\hat Q^\dagger \hat Q \hat{\boldsymbol{\rho}}}=\frac{5 - 4 F + 8 F^2}{18}.
\label{exprob} \end{equation} ({\bf aB3}) One of the pairs is now locally measured, for instance ($A_2,B_2$). We remark that in our scheme the two qubit pairs can be treated symmetrically. \\ ({\bf aB4}) Depending on the four possible measurement events we use the following strategy: in the cases when both qubits are in $\ket{0}$ or $\ket{1}$ we apply a local unitary transformation $\hat \sigma_x^{A_1}$ on the unmeasured pair; otherwise do nothing. This step is fundamentally different from ({\bf B5}) because there are no inadequate measurement results and we do not have to discard the unmeasured pair.
It is interesting to note that the success probability of the protocol is determined by step ({\bf aB2}), in contrast to the original scheme of Ref. \cite{Bennett1}, where this probability is set by the selective measurement on the qubit pairs in step ({\bf B5}). Provided that we are successful in the photonic postselection we generate a bipartite state $ \hat \rho'$ with fidelity \begin{equation}
\bra{\Psi^{-}} \hat{\rho}' \ket{\Psi^{-}}=F'=\frac{1 - 2 F + 10 F^2}{5 - 4 F + 8 F^2}.
\label{newfid} \end{equation} This is exactly the same expression as obtained in Ref. \cite{Bennett1}, and our scheme has a success probability $P=(5 - 4 F + 8 F^2)/18$. The dependence of both the new fidelity $F'$ and the success probability $P$ on $F$ is shown in Fig. \ref{probfid}.
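The purifying character of the map in Eq. \eqref{newfid} is easy to check numerically: its fixed points are $F=1/4$, $1/2$ and $1$, and iterating from any $F>1/2$ increases the fidelity monotonically towards $1$. A minimal sketch (illustrative, not from the paper's code):

```python
def F_new(F):
    # Eq. (newfid): output fidelity of protocol (aB)
    return (1 - 2 * F + 10 * F**2) / (5 - 4 * F + 8 * F**2)

def P_succ(F):
    # Eq. (exprob): success probability of the photonic postselection
    return (5 - 4 * F + 8 * F**2) / 18

F = 0.7
for _ in range(40):
    assert F_new(F) > F      # strictly purifying for 1/2 < F < 1
    F = F_new(F)
assert F > 0.9999            # converges towards unit fidelity
assert abs(P_succ(0.7) - 0.34) < 1e-9
```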
Let us consider now an input four-qubit state with different fidelities \begin{equation} \hat{\boldsymbol{\rho}}=\hat{\rho}_W^{A_1,B_1}(F_1) \otimes \hat\rho_{W}^{A_2,B_2}(F_2). \label{Werner2} \end{equation} Applying the purification protocol we obtain the following fidelity \begin{equation}
F'=\frac{1-F_1-F_2+10F_1F_2}{5-2F_1-2F_2+8F_1F_2} \end{equation} with success probability \begin{equation}
P=\frac{5-2F_1-2F_2+8F_1F_2}{18}. \end{equation} If one chooses $F_1=0.4$ and $F_2=0.75$, then the purification protocol generates a bipartite state with fidelity $F'=0.558$. In general this means that the pairs of the ensemble may have different fidelities; the only condition for a successful purification is that the average fidelity of the ensemble is larger than $0.5$.
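The quoted example is plain arithmetic and can be reproduced directly (a sketch; variable names are illustrative):

```python
F1, F2 = 0.4, 0.75
num = 1 - F1 - F2 + 10 * F1 * F2
den = 5 - 2 * F1 - 2 * F2 + 8 * F1 * F2
F_out = num / den            # output fidelity F'
P = den / 18                 # success probability

assert abs(F_out - 0.5588) < 5e-4   # the quoted value F' = 0.558
assert (F1 + F2) / 2 > 0.5          # average input fidelity exceeds 1/2
```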
{\it Scheme 2.} Now we turn our attention to the method in Ref. \cite{Deutsch} which is conceptually similar to Ref. \cite{Bennett1} and operates not on Werner states but on states diagonal in the Bell basis \begin{align}
\hat{\rho}_{\rm BD}(F,F_1,F_2,F_3) &= F \ket{\Psi^{-}}\bra{\Psi^{-}}
+F_1 \ket{\Phi^{-}}\bra{\Phi^{-}}
\nonumber \\&+
F_2 \ket{\Phi^{+}}\bra{\Phi^{+}}
+ F_3 \ket{\Psi^{+}}\bra{\Psi^{+}} \end{align} with $F+F_1+F_2+F_3=1$. In the case when we start initially with an arbitrary state, then a twirling operation with four unitary operators (see Eq. \eqref{twirlingop}) is required in order to bring this state in a Bell diagonal form. We remark that this scheme purifies state $\ket{\Phi^+}$, therefore increasing the value of $F_2$. The protocol for two qubit pairs can be summarized in four steps:\\ ({\bf D1}) Apply the unitary operation $\hat b_1^{\dagger A_1}\otimes\hat b_1^{\dagger A_2}\otimes\hat b_1^{B_1}\otimes\hat b_1^{B_2}$, see Eq. \eqref{Bs} .\\
({\bf D2}) Perform the bilateral operation $\hat U_{\rm CNOT}^{A_1\rightarrow A_2}\otimes\hat U_{\rm CNOT}^{B_1\rightarrow B_2}$.\\ ({\bf D3}) Measure the target pair ($A_2,B_2$). \\
({\bf D4}) If the measurement result is either $\ket{00}$ or $\ket{11}$ then the unmeasured pair is kept; otherwise it is discarded.
In our alternative scheme ({\bf aD}) we purify again with respect to $\ket{\Psi^-}$. Provided that an ensemble of Bell diagonal states is generated among the flying qubits we proceed with the following four steps:\\
({\bf aD1}) To the four-qubit input state \begin{equation} \hat{\boldsymbol{\rho}}=\hat{\rho}_{\rm BD}^{A_1,B_1}(F,F_1,F_2,F_3) \otimes \hat\rho_{\rm BD}^{A_2,B_2}(F,F_1,F_2,F_3) \end{equation} we directly apply the two-qubit quantum operation of Eq. \eqref{map}. We obtain a $\hat{\boldsymbol{\rho}}'$ with success probability $(F+F_1)^2/2+(F_2+F_3)^2/2$.\\ ({\bf aD2}) The same as ({\bf aB3}).\\ ({\bf aD3}) The same as ({\bf aB4}).\\ ({\bf aD4}) Apply the rotation $\hat b_3^{A_1}\otimes \hat b_3^{B_1}$.
Provided that we are successful we obtain the following Bell diagonal state: \begin{equation}
\hat{\rho}'_{\rm BD}\left(\frac{F^2+F_1^2}{D},\frac{2F_2F_3}{D},\frac{2FF_1}{D},\frac{F_2^2+F_3^2}{D}\right) \end{equation} with $D=(F+F_1)^2+(F_2+F_3)^2$. Our step ({\bf aD4}) is analogous to the step ({\bf D1}) and flips the Bell states $\ket{\Phi^\pm}$ while leaving $\ket{\Psi^\pm}$ invariant. This map redistributes the fidelities in order to obtain a purification by iteration. This result is similar to the one obtained in Ref. \cite{Deutsch}, with the only difference that protocol {\bf aD} purifies with respect to the state $\ket{\Psi^-}$ while protocol {\bf D} purifies with respect to the state $\ket{\Phi^{+}}$.
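Two quick sanity checks on the output coefficients above (a sketch with illustrative input values): they sum to one for any valid Bell-diagonal input, and the $\ket{\Phi^\pm}$ populations remain zero whenever they start at zero, since the corresponding output entries $2F_2F_3/D$ and $2FF_1/D$ vanish for $F_1=F_2=0$:

```python
def aD_coeffs(F, F1, F2, F3):
    # Output coefficients (Psi-, Phi-, Phi+, Psi+) of protocol (aD)
    D = (F + F1)**2 + (F2 + F3)**2
    return ((F**2 + F1**2) / D, 2 * F2 * F3 / D,
            2 * F * F1 / D, (F2**2 + F3**2) / D)

# normalized for a generic Bell-diagonal input
assert abs(sum(aD_coeffs(0.6, 0.1, 0.2, 0.1)) - 1) < 1e-12

# if the Phi populations start at zero they stay at zero
out = aD_coeffs(0.7, 0.0, 0.0, 0.3)
assert out[1] == 0.0 and out[2] == 0.0
assert abs(sum(out) - 1) < 1e-12
```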
An interesting feature arises when one considers the following special Bell diagonal state \begin{equation}
\hat{\rho}_\Psi(F)=F \ket{\Psi^-}\bra{\Psi^-}+(1-F) \ket{\Psi^+}\bra{\Psi^+}
\label{nWerner} \end{equation} which is naturally generated in proposals for hybrid quantum repeaters \cite{vanLoock1, Bernad1,Gonta2}. The four-qubit state reads \begin{equation}
\hat{\boldsymbol{\rho}}= \hat{\rho}^{A_1,B_1}_\Psi(F) \otimes \hat{\rho}^{A_2,B_2}_\Psi(F). \end{equation} The step ({\bf aD4}) in our protocol is actually not required for this type of initial state, as we never populate the states $\ket{\Phi^\pm}$. However, this step will prove to be of crucial importance when applying a noisy version of the operation $\hat M$ of Eq. \eqref{map}, such as our proposed cavity-QED version in Eq. \eqref{W2}.
Thus our protocol ({\bf aD}) yields the bipartite state \begin{eqnarray} \hat\rho_\Psi\left(\frac{F^2}{1-2F+2F^2}\right), \label{newfid2} \end{eqnarray} with success probability \begin{equation}
P=\frac{1-2F+2F^2}{2}.
\label{probaD} \end{equation}
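Near $F=1$ the map of Eq. \eqref{newfid2} satisfies $1-F' \approx (1-F)^2$, i.e., it converges quadratically, which is why only a few iterations are needed. A short numerical sketch (illustrative starting value $F=0.7$):

```python
def F_aD(F):
    # Eq. (newfid2): fidelity map of protocol (aD) for the state rho_Psi(F)
    return F**2 / (1 - 2 * F + 2 * F**2)

def P_aD(F):
    # Eq. (probaD): success probability
    return (1 - 2 * F + 2 * F**2) / 2

F = 0.7
fids = []
for _ in range(5):
    F = F_aD(F)
    fids.append(F)

# quadratic convergence: 1 - F' ~ (1 - F)^2 near F = 1
assert fids[0] > 0.84 and fids[-1] > 0.999999
assert abs(P_aD(0.7) - 0.29) < 1e-9
```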
\begin{figure}
\caption{Top panel: The success probability of the entanglement purification protocols. Bottom panel: The achieved new fidelities after a successfully applied protocol.
Both figures are shown for the initial state in Eq. \eqref{nWerner}. The plots show in full line the results of protocol ({\bf aB})
and in dashed line the results of protocol ({\bf aD}).
}
\label{probfid}
\end{figure}
In the case of different input fidelities \begin{equation}
\hat{\boldsymbol{\rho}}= \hat{\rho}^{A_1,B_1}_\Psi(F_1) \otimes \hat{\rho}^{A_2,B_2}_\Psi(F_2) \end{equation} we obtain the bipartite state \begin{eqnarray} \hat\rho_\Psi\left(\frac{F_1F_2}{(1-F_1)(1-F_2)+F_1F_2}\right) \end{eqnarray} with success probability \begin{equation}
P=\frac{1-F_1-F_2+2F_1F_2}{2}. \end{equation}
\begin{figure}
\caption{Top panel: The fidelity of the states achieved after $N$ successful iterations and based on Eqs. \eqref{newfid} (normal dots) and \eqref{newfid2} (thicker dots).
Bottom panel: The average number of qubit pairs $\overline{N}_{Q}$ required to reach the final fidelity of $0.99$ as a function of the initial fidelity $F$.
As an initial condition we considered the special Bell diagonal state given in Eq. \eqref{nWerner}. The plots show that
the purification scheme ({\bf aD}) (thicker dots) outperforms the purification scheme ({\bf aB}) (normal dots).}
\label{itdependence}
\end{figure}
In Fig. \ref{probfid} we compare the fidelity and success probability obtained from both of our protocols, ({\bf aB}) in solid line and ({\bf aD}) in dashed line, for initial states of the form of Eq. \eqref{nWerner}. The function $F'(F)$ shown in dashed line has a more concave shape than its solid-line counterpart. This means that fewer iterations are required to attain almost unit fidelity, as shown in the top panel of Fig. \ref{itdependence}. The success probability of protocol ({\bf aD}) is slightly lower than that of ({\bf aB}), as shown in the bottom panel of Fig. \ref{probfid}. However, the average number of qubit pairs needed for the purification is more sensitive to the number of iterations required, as shown in the bottom panel of Fig. \ref{itdependence}. These numbers were obtained with the help of the fidelity-dependent probabilities in Eqs. \eqref{exprob} and \eqref{probaD}. Thus protocol ({\bf aD}) is more efficient than protocol ({\bf aB}), and it is also the most robust against noisy implementations, as demonstrated in the subsequent section.
\section{Effects of a one-atom maser on the purification scheme} \label{loss}
\begin{figure}
\caption{
The achieved new fidelities with respect to the Bell state $\ket{\Psi^-}$ after several successful iterations and for
different values of the mean photon number $\bar n$. Top panel: Protocol ({\bf aB}).
Bottom panel: Protocol ({\bf aD}). Both figures are considered for the same initial state of Eq. \eqref{nWerner} with fidelity $F=0.7$.
We employ the two-qubit quantum operation $\hat W_2(0.5,\bar n)$
in Eq. \eqref{W2} with different values of the mean photon number $\bar n$ and use non-identical interaction times for system $A_1$ and $A_2$, i.e., $\tau_{A_2}/\tau_{A_1}=1.1$.
}
\label{fiditer}
\end{figure}
In this section we analyze the physical boundaries of our model proposed in Sec. \ref{Model} in the application of the entanglement purification protocols of Sec. \ref{Purify}. We consider the two-qubit quantum operation $\hat W_2(p,\bar n)$ of Eq. \eqref{W2} as an approximation of the entangling two-qubit quantum operation $\hat M$ of Eq. \eqref{Qgate} which is the core element in our protocol. The value of the quadrature $p$ is obtained by a balanced-homodyne measurement of the field. The approximation becomes more accurate with increasing values of the mean photon number $\bar n$ of the initial coherent state and provided that the interaction time $\tau$ for each atom fulfills the condition of Eq. \eqref{taucondition}. In the experimental setting of S. Haroche \cite{Hagley,Nogues} the interaction time $\tau$ is not equal for each atom. However, it can be shown that for any two atoms $A_1$, $A_2$ the interaction times fulfill
the inequality $|\tau_{A_1}/\tau_{A_2}-1|\leqslant 0.01$. This is achieved by Doppler-selective optical pumping techniques \cite{Hagley,Nogues} that significantly reduce the width of the velocity distribution, which directly affects the matter-field interaction times.
In Fig. \ref{fiditer}, we have plotted the fidelities as a function of the number of iterations $N$ for the protocols presented in Sec. \ref{Purify}, where we employ $\hat W_2(0.5,\bar n)$ in place of $\hat M$. Additionally, we have taken into account that the first (second) atom interacts for a time $2/g$ ($2.2/g$) with the field, in order to demonstrate the stability with respect to deviations in the interaction times. The results for protocol ({\bf aB}) are shown in the top panel and for ({\bf aD}) in the bottom panel, for initial states of the form of Eq. \eqref{nWerner}.
With these initial states, step ({\bf aD4}) is unnecessary when using the perfect two-qubit quantum operation $\hat M$. In contrast, this step plays a crucial role with the operation $\hat W_2$ and initial states of Eq. \eqref{nWerner}. We ran simulations (not shown here) without step ({\bf aD4}) and found that the fidelity drops after a few iterations due to the noisy quantum operation $\hat W_2(p,\bar n)$ populating the other Bell states $\ket{\Phi^\pm}$. We have considered an ensemble of qubit pairs with moderate input fidelity $F=0.7$. We see that protocol ({\bf aD}) attains high fidelities quite rapidly, in $N=5$ iterations, outperforming protocol ({\bf aB}) in regard to the average number of qubit pairs needed for the purification. Taking into account the success probability of protocol ({\bf aD}), one would require on average $2600$ qubit pairs to obtain a final fidelity of $0.999999$. This simple analysis suggests that the core mechanism of our purification scheme is feasible, given that in typical experiments $35700$ atomic samples are sent through a cavity \cite{Haroche2}. For the interaction times we have chosen realistic parameters based on Ref. \cite{Haroche2}, which reports an interaction time of $24\, \mu s$ ($6\, mm$ waist / $250\, m s^{-1}$), atoms separated by $70\, \mu s$ time intervals, and a coupling strength of $g=2 \pi \times 51$ kHz.
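The resource count above can be reproduced from the ideal recursions of Eqs. \eqref{newfid2} and \eqref{probaD}: each successful iteration consumes two pairs of the previous level and succeeds with probability $P$, so a rough expectation-value estimate multiplies by $2/P$ per level (a sketch that ignores statistical fluctuations and the noise of $\hat W_2$):

```python
def F_aD(F):
    return F**2 / (1 - 2 * F + 2 * F**2)   # Eq. (newfid2)

def P_aD(F):
    return (1 - 2 * F + 2 * F**2) / 2      # Eq. (probaD)

F, pairs = 0.7, 1.0
for _ in range(5):                 # N = 5 iterations
    pairs *= 2.0 / P_aD(F)         # two input pairs per trial, success prob P
    F = F_aD(F)

assert F > 0.999999                # final fidelity of 0.999999 is reached
assert 2500 < pairs < 2700         # on the order of 2600 qubit pairs
```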
Now we turn our attention to the effects of photonic losses in our protocol, described by the cavity damping rate $\kappa$ and the spontaneous emission rate $\gamma$ of the atoms. State-of-the-art microwave cavities present very low values of $\kappa$ \cite{Haroche2}. However, our protocol requires the cavity field to leak out in order to implement a balanced homodyne photodetection. For this reason it would be more favorable to use a cavity with a Q-factor between the current technology and previous realizations, which had ratios of roughly $g/\kappa=60$ \cite{Haroche}. In such a case $3\, ms$ is enough to empty the cavity ($e^{-3}\approx 0.05$), and measuring a single quadrature takes $1\, \mu s$ \cite{Hansen} or $5.5\, ns$ \cite{Cooper}. Considering these time scales and the fact that the Rydberg atoms used in the experiments of S. Haroche have a ratio of $g/\gamma=3000$, atomic spontaneous decay is not expected to have a major destructive effect on one step of our protocol. Nevertheless, it could play a role during the iteration procedure, and therefore the coherence of the qubits must be preserved for the purification procedure to work. We do not elaborate on this here, but merely estimate that if the realization of one iteration is dominated by the cavity leakage time of $3\, ms$, then protocols beyond $N=10$ iterations are sensitive to the spontaneous decay of the atoms.
In the presence of losses the ideal two-qubit quantum operation $\hat{\rho} \rightarrow \hat{M} \hat{\rho} \hat{M}$ has to be replaced by a more general quantum operation $\hat{\rho} \rightarrow \mathcal{E}\hat{\rho}$ which depends on $\kappa$ and $\gamma$. In the following we investigate numerically the effect of this general quantum operation on the entanglement purification protocol. We consider a Markovian description in which the evolution of an initial density matrix $\hat\varrho_0$, describing both atoms and the cavity, is given by \begin{equation}
\hat\varrho(\tau,\tau_f)={\mathcal V}(\tau,\tau_f)\hat\varrho_0,\quad {\mathcal V}(\tau,\tau_f)=
e^{{\mathcal L}^{A_2}\tau}e^{{\mathcal L}_{f}\tau_f}
e^{{\mathcal L}^{A_1}\tau}. \label{lossmodel} \end{equation} The evolution operator ${\mathcal V}(\tau,\tau_f)$ is generated by the Liouvillians \begin{equation}
{\mathcal L}^\ell\hat\varrho=i\left[\hat \varrho,\hat H^\ell\right]+
{\mathcal L}_f
\hat\varrho
, \quad {\mathcal L}_f={\mathcal L}_{\rm HO}+{\mathcal L}_{\rm at}^{A_1}+{\mathcal L}_{\rm at}^{A_2},
\label{Liouvillian1} \end{equation} with $\ell\in\{A_1,A_2\}$, that have been written in terms of the dissipators \begin{align}
{\mathcal L}_{\rm HO}\hat\varrho&=
\tfrac{1}{2}\kappa(n_T+1)
\left(2\hat a \hat\varrho\hat a^\dagger-\hat a^\dagger \hat a\hat\varrho-\hat\varrho\hat a^\dagger \hat a\right)
\nonumber\\
&\qquad\,\,\,\,+
\tfrac{1}{2}\kappa n_T
\left(2\hat a^\dagger \hat\varrho\hat a-\hat a\hat a^\dagger \hat\varrho-\hat\varrho\hat a\hat a^\dagger\right),
\nonumber\\
{\mathcal L}_{\rm at}^\ell\hat\varrho&=\tfrac{1}{2}\gamma
\left(2\hat\sigma_-^\ell\hat\varrho\hat\sigma_+^\ell
-\hat\sigma_+^\ell\hat\sigma_-^\ell\hat\varrho-\hat\varrho\sigma_+^\ell\hat\sigma_-^\ell \right)
\label{} \end{align} which describe the losses of the cavity and spontaneous emission of the atoms respectively. The evolution operator ${\mathcal V}(\tau,\tau_f)$ reflects the fact that at all times the dissipation mechanisms are active in the system, while the interaction only happens first for time $\tau$ between cavity and atom $A_1$ and for the same amount of time $\tau$ between cavity and atom $A_2$. In between the interactions there is a time of free evolution $\tau_f$ where only dissipation effects take place. In typical microwave experiments, the average number of thermal photons $ n_T$ is equal to $0.05$ towards which the field evolves with rate $\kappa$ (see Ref. \cite{Haroche2}).
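The cavity dissipator ${\mathcal L}_{\rm HO}$ implies the standard relaxation law $\langle \hat n\rangle(t) = n_T + (\langle \hat n\rangle_0 - n_T)\, e^{-\kappa t}$, which provides a useful consistency check for any numerical implementation. A minimal numpy sketch in a truncated Fock space (all parameter values illustrative, not taken from the experiments cited here):

```python
import numpy as np

N, kappa, nT = 30, 1.0, 0.05              # truncation, decay rate, thermal photons
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator in Fock basis
ad = a.conj().T
n_op = ad @ a

def L_HO(rho):
    # cavity dissipator with thermal occupation nT, as in the master equation
    down = 2 * a @ rho @ ad - n_op @ rho - rho @ n_op
    up = 2 * ad @ rho @ a - a @ ad @ rho - rho @ a @ ad
    return 0.5 * kappa * (nT + 1) * down + 0.5 * kappa * nT * up

# start in the Fock state |3> and integrate d rho/dt = L_HO(rho) (Euler steps)
rho = np.zeros((N, N), dtype=complex)
rho[3, 3] = 1.0
dt, T = 1e-3, 5.0
for _ in range(int(T / dt)):
    rho = rho + dt * L_HO(rho)

nbar = np.trace(n_op @ rho).real
expected = nT + (3 - nT) * np.exp(-kappa * T)   # relaxation towards nT
assert abs(nbar - expected) < 1e-2
assert abs(np.trace(rho).real - 1) < 1e-8       # dissipator preserves the trace
```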
The initial condition is taken to be the same as in Eq. \eqref{initial}.
In order to efficiently compute the dynamics for high photon numbers, we evaluate the quantum operation \begin{align} {\mathcal E}(\tau,\tau_f,p)\hat\rho &=\bra p\left({\mathcal V}(\tau,\tau_f)\hat \rho\ketbra{\alpha}{\alpha}\right)\ket p \nonumber\\ &=\sum_{i,j,k,l=0}^3{\mathcal E}_{k,l,i,j} \rho_{i,j}\ketbra{\varphi_k}{\varphi_l}, \label{Channel} \end{align} where $\ket{\varphi_i}\in\{\ket{00},\ket{01},\ket{10},\ket{11}\}$ and $\rho_{i,j}=\bra{\varphi_i}\hat \rho \ket{\varphi_j}$. One can find that the entries of ${\mathcal E}={\mathcal E}(\tau,\tau_f,p)$ are given by \begin{align} {\mathcal E}_{k,l,i,j}=\Tr{\ketbra{\varphi_l}{\varphi_k}\otimes\ketbra{p}{p} {\mathcal V}(\tau,\tau_f) \ketbra{\alpha}{\alpha}\otimes \ketbra{\varphi_i}{\varphi_j}}. \label{ChannelEntries} \end{align} The quantum operation in Eq. \eqref{Channel} is the noisy analog of the quantum operation in Eq. \eqref{W2}. Ideally, when $\bar n \gg 1$ and the decay constants tend to zero, ${\mathcal E}\hat\varrho\to\hat M \hat\varrho \hat M$. Like the quantum operation $\hat M \cdot \hat M$, the map ${\mathcal E}$ does not preserve the trace. For the quantum purification protocol, the noisy analog of Eq. \eqref{map} takes the following form \begin{equation} \hat{\boldsymbol{\rho}}'= \frac{{\mathcal E}^{A_1,A_2}{\mathcal E}^{B_1,B_2}\hat{\boldsymbol{\rho}}}{\Tr{{\mathcal E}^{A_1,A_2}{\mathcal E}^{B_1,B_2}\hat{\boldsymbol{\rho}}}}. \label{} \end{equation}
\begin{figure}
\caption{
The achieved fidelities with respect to the Bell state $\ket{\Psi^-}$ after several successful iterations and for different values of the cavity decay rate $\kappa$.
Top panel: Protocol ({\bf aB}). Bottom panel: Protocol ({\bf aD}). Both figures are considered for the same initial state of Eq. \eqref{nWerner} with fidelity $F=0.7$.
We employ the quantum operation ${\mathcal E}(2/g,3/g,0.15)$ in Eq. \eqref{Channel} for a mean photon number $\bar n =500$. The crosses
show the ideal two-qubit quantum operation $\hat{\rho} \rightarrow \hat M \hat{\rho} \hat M $. Spontaneous decay rate was set to
$g/\gamma=3000$ and we considered an average thermal photon number $n_T=0.1$.}
\label{fiditerloss}
\end{figure}
In Fig. \ref{fiditerloss} we plot the achieved fidelity after several successful iterations using different values of the decay constant $\kappa$ and an average thermal photon number $n_T=0.1$. We have numerically evaluated the quantum operation ${\mathcal E} \cdot$ as indicated in Eq. \eqref{ChannelEntries}. We have considered a truncated Hilbert space with $N_F=\lfloor \bar n+ 4\sqrt{\bar n} \rfloor$ \cite{floor} Fock states and constructed a $4N_F^2\times 4N_F^2$ matrix describing the Liouvillians in \eqref{Liouvillian1}. We have chosen the value of the quadrature $p$ onto which the field state is projected to be $0.15$. Numerical investigations show that an increase in the absolute value of $p$ implies a slightly decreased performance of the purification protocols. This is due to the lossy dynamics, which brings the outer field contributions closer together and thus distorts the boundaries of the central peak (see Fig. \ref{fig:QA} for the ideal case). Therefore, for quadrature values farthest from the origin in the interval $[-2,2]$ we obtain noisier versions of the ideal two-qubit quantum operation $\hat{\rho} \rightarrow \hat M \hat{\rho} \hat M$. Provided that we use the parameters of the experimental setup in Ref. \cite{Haroche2}, the quadrature measurements around the central peak always generate a two-qubit quantum operation of high fidelity with respect to the ideal one. The time of free evolution is set to be larger than the interaction time in order to simulate almost the same conditions as are present in experimental scenarios. It can be noticed that protocol ({\bf aD}) is more robust against the effects of losses and, surprisingly, only $N=5$ iterations are required to achieve its maximum fidelity. In this case, the step ({\bf aD4}) plays a crucial role in the stabilization of the protocol.
\section{Conclusions} \label{conclusions}
We have discussed implementations of entanglement purification protocols in the context of a hybrid quantum repeater. Our scheme is based on the one-atom maser, thus making our proposal a good experimental candidate. It has been demonstrated that a probabilistic two-qubit quantum operation can be realized with the help of ancillary multiphoton states. The two qubits fly sequentially through a single-mode cavity, initially prepared in a coherent state, and interact with the radiation field. The emerging field state is measured by balanced homodyne photodetection. We have shown that for resonant matter-field interactions and large values of the mean photon number, the two-qubit quantum operation in Eq. \eqref{Qgate} can be implemented with high fidelity. This is based on the fact that for interaction times characterizing the collapse phenomena in the Jaynes-Cummings-Paul model, the field contribution correlated with this quantum operation can be perfectly distinguished from the other field contributions correlated with other components of the two-qubit state. We have shown that the obtained probabilistic two-qubit quantum operation can replace the controlled-NOT gate in standard purification protocols \cite{Bennett1, Deutsch}. This approach has resulted in two alternative purification protocols, referred to in the main text as ({\bf aB}) and ({\bf aD}), which are conceptually similar to their standard versions. These new protocols discard qubit pairs due to unsuccessful photonic postselection, but in the case of qubit measurements all the unmeasured qubit pairs are kept and only a measurement-dependent unitary rotation is performed on them. We have compared these protocols for initial states which are in a special Bell-diagonal form and are generated in proposals for hybrid quantum repeaters.
Finally, we have investigated the role of losses in our proposed scheme. We have taken into account the damping rate of the cavity and the spontaneous decay of the qubits. Our numerical investigations, based on parameters taken from real experimental setups, show that the scheme is sensitive to the cavity damping rate in the sense that high fidelities $F>0.95$ can be achieved, but never unit fidelity. There is also a trade-off between good and bad cavities: high-$Q$ cavities enhance the fidelity of the two-qubit quantum operation, but the leakage of the field which has to be measured takes a longer time, thus increasing the chance of a spontaneous decay of the qubits. In general, we have found that protocol ({\bf aD}), which does not employ the twirling operation, is more efficient than protocol ({\bf aB}) in terms of the average number of qubit pairs needed for obtaining high fidelity Bell states. Furthermore, protocol ({\bf aD}) can correct errors in the implementation of the two-qubit quantum operation.
In view of recent developments in quantum communication and quantum state engineering, this work may offer interesting perspectives. The results clearly show the limitations of a purification protocol in a hybrid quantum repeater based on multiphoton states; on the positive side, the proposed scheme has a high repetition rate and can already be implemented in a one-atom maser setup. Other implementations may include condensed-matter qubits coupled to single-mode radiation fields \cite{Sun}, trapped ions inside a cavity \cite{Casabone}, and neutral atoms coherently transported into an optical resonator \cite{Reimann}.
\begin{acknowledgments} This work is supported by the BMBF project Q.com. \end{acknowledgments}
\appendix \section{The states of the field} \label{App} In this appendix we present the unnormalized field states which appear in equation~\eqref{psi}. They are defined by \begin{align}
\ket{g_{00}(\tau)} &= \sum_{n=0}^{\infty} \expal \frac{\alpha^n}{\sqrt{n!}} \Big [ c_{00} \cos\left(\Omega_{n-1} \tau \right) \cos\left(\Omega_{n-1} \tau \right) \ket{n} \nonumber \\
&- i c_{10} \sin \left(\Omega_n \tau \right) \cos\left(\Omega_{n} \tau \right) \ket{n+1} \nonumber \\
&- i c_{01} \cos\left(\Omega_{n-1} \tau \right) \sin \left(\Omega_n \tau \right) \ket{n+1} \nonumber \\
&- c_{11} \sin \left(\Omega_n \tau \right)\sin \left(\Omega_{n+1} \tau \right) \ket{n+2} \Big] \; , \end{align} \begin{align}
\ket{g_{01}(\tau)} &= \sum_{n=0}^{\infty} \expal \frac{\alpha^n}{\sqrt{n!}} \Big[ c_{01} \cos\left(\Omega_{n-1} \tau \right) \cos\left(\Omega_{n} \tau \right) \ket{n} \nonumber \\
&- i c_{00} \frac{\alpha}{\sqrt{n+1}} \cos\left(\Omega_{n} \tau \right) \sin\left(\Omega_{n} \tau \right) \ket{n} \nonumber \\
&- i c_{11} \sin\left(\Omega_{n} \tau \right) \cos\left(\Omega_{n+1} \tau \right) \ket{n+1} \nonumber \\
&- c_{10} \sin\left(\Omega_{n} \tau \right) \sin\left(\Omega_{n} \tau \right) \ket{n} \Big] \; , \end{align} \begin{align}
\ket{g_{10}(\tau)} &= \sum_{n=0}^{\infty} \expal \frac{\alpha^n}{\sqrt{n!}} \Big[ c_{10} \cos\left(\Omega_{n} \tau \right) \cos\left(\Omega_{n-1} \tau \right) \ket{n} \nonumber \\
&- i c_{11} \cos\left(\Omega_{n} \tau \right) \sin\left(\Omega_{n} \tau \right) \ket{n+1} \nonumber \\
&- i c_{00} \frac{\alpha}{\sqrt{n+1}} \sin\left(\Omega_{n} \tau \right) \cos\left(\Omega_{n-1} \tau \right) \ket{n} \nonumber \\
&- c_{01} \sin\left(\Omega_{n-1} \tau \right) \sin\left(\Omega_{n-1} \tau \right) \ket{n} \Big] \; , \end{align} \begin{align}
\ket{g_{11}(\tau)} &= \sum_{n=0}^{\infty} \expal \frac{\alpha^n}{\sqrt{n!}} \Big[ c_{11} \cos\left(\Omega_{n} \tau \right) \cos\left(\Omega_{n} \tau \right) \ket{n} \nonumber \\
&- i \frac{\alpha}{\sqrt{n+1}} c_{10} \cos\left(\Omega_{n+1} \tau \right) \sin\left(\Omega_{n} \tau \right) \ket{n} \nonumber \\
&- i \frac{\alpha}{\sqrt{n+1}} c_{01} \sin\left(\Omega_{n} \tau \right) \cos\left(\Omega_{n} \tau \right) \ket{n} \nonumber \\
&- \frac{\alpha^2}{\sqrt{(n+1)(n+2)}} c_{00} \sin\left(\Omega_{n+1} \tau \right) \sin\left(\Omega_{n} \tau \right) \ket{n} \Big] \; , \end{align} where $\Omega_n = g \sqrt{n+1}$.
\end{document}
\begin{document}
\title[Secretary problem with non-uniform arrivals]
{The secretary problem with non-uniform arrivals via a left-to-right minimum exponentially tilted distribution}
\author{Ross G. Pinsky}
\address{Department of Mathematics\\ Technion---Israel Institute of Technology\\ Haifa, 32000\\ Israel} \email{ pinsky@math.technion.ac.il}
\urladdr{https://pinsky.net.technion.ac.il/}
\subjclass[2000]{60G40, 60C05} \keywords{secretary problem, optimal stopping, left-to-right minimum, random permutation}
\date{}
\begin{abstract}
We solve the secretary problem in the case that the ranked items arrive in a statistically biased order rather than in uniformly random order. The bias is given by the left-to-right minimum exponentially tilted distribution with parameter $q\in(0,\infty)$. That is, for $\sigma\in S_n$, $P_n(\sigma)$ is proportional to $q^{\text{LR}^{-}_n(\sigma)}$, where the left-to-right minimum statistic $\text{LR}^-_n$ is defined by $$
\text{LR}^{-}_n(\sigma)=|\{j\in[n]: \sigma_j=\min\{\sigma_i:1\le i\le j\}\}|,\ \sigma\in S_n. $$ For $q\in(0,1)$, higher ranked items tend to arrive earlier than in the case of the uniform distribution, and for $q\in(1,\infty)$,
they tend to arrive later, where the highest ranked item is denoted by 1 and the lowest ranked item is denoted by $n$. In the classical problem, the asymptotically optimal strategy is to reject the first $M_n^*$ items, where $M_n^*\sim\frac ne$, and then to select the first item ranked higher than any of the first $M_n^*$ items (if such an item exists). This yields $e^{-1}$ as the limiting probability of success. With the above bias on arrivals, we calculate the asymptotic behavior of the optimal strategy $M_n^*$ and the corresponding limiting probability of success, for all regimes of $\{q_n\}_{n=1}^\infty$. In particular, if $\{q_n\}_{n=1}^\infty$ is of order at least $\frac1{\log n}$ and satisfies $q_n=o(n)$, then the limiting probability of success when using an asymptotically optimal strategy is $e^{-1}$; otherwise,
this limiting probability of success is greater than $e^{-1}$. Also, the limiting fraction of numbers, $\lim_{n\to\infty}\frac{M^*_n}n$, that are summarily rejected by an asymptotically optimal strategy lies in $(0,1)$ if and only if $\lim_{n\to\infty}q_n\in(0,\infty)$. \end{abstract}
\maketitle
\section{Introduction and Statement of Results} In a recent paper \cite{P22} we analyzed the secretary problem in the case that the order of arrival is biased by a Mallows distribution. The family of Mallows distributions is obtained by exponential tilting via the inversion statistic, which introduces a bias whereby smaller numbers tend to appear earlier and larger numbers tend to appear later (if the parameter $q\in(0,1)$) or vice versa (if the parameter $q>1$) than in the uniform case. In this paper we study the secretary problem with a different bias, obtained by exponential tilting via the left-to-right minimum statistic. This latter tilting also creates a bias whereby smaller numbers tend to appear earlier and larger numbers tend to appear later (if the parameter $q\in(0,1)$) or vice versa (if the parameter $q>1$) than in the uniform case. It turns out that the secretary problem with bias via the left-to-right minimum statistic yields a richer array of behavior than in the case of the Mallows distribution, and the proofs of the results require a considerably more delicate analysis
than in the case of the Mallows distribution.
Recall the classical secretary problem: For $n\in\mathbb{N}$, a set of $n$ ranked items is revealed, one item at a time, to an observer whose objective is to select the item with the highest rank. The order of the items is completely random; that is, each of the $n!$ permutations of the ranks is equally likely. At each stage, the observer only knows the relative ranks of the items that have arrived thus far, and must either select the current item, in which case the process terminates, or reject it and continue to the next item. If the observer rejects the first $n-1$ items, then the $n$th and final item to arrive must be accepted. Denote by $\mathcal{S}(n,M_n)$, for $M_n\in\{0,1,\cdots, n-1\}$, the strategy whereby one rejects the first $M_n$ items and then selects the first later arriving item that is ranked higher than any of the first $M_n$ items (if such an item exists). As is very well known, asymptotically as $n\to\infty$, the optimal strategies $\mathcal{S}(n,M_n^*)$ are those for which $M^*_n\sim \frac ne$, and the corresponding limiting probability of successfully selecting the item of highest rank is $e^{-1}$.
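The classical threshold rule described above is easy to check empirically. The following Python sketch (an illustration added here, not part of the original text; the sample sizes are arbitrary choices) estimates the success probability of $\mathcal{S}(n,M_n)$ with $M_n$ close to $n/e$ under uniformly random arrivals:

```python
import math
import random

def secretary_success(n, M, rng):
    # one trial: ranks arrive in uniformly random order; rank 1 is the best item
    sigma = list(range(1, n + 1))
    rng.shuffle(sigma)
    if M == 0:
        return sigma[0] == 1
    threshold = min(sigma[:M])
    for j in range(M, n):
        if sigma[j] < threshold:      # first item beating the rejected prefix
            return sigma[j] == 1
    return sigma[-1] == 1             # no candidate appeared: forced to take the last

rng = random.Random(0)
n, trials = 500, 4000
M = round(n / math.e)
rate = sum(secretary_success(n, M, rng) for _ in range(trials)) / trials
print(rate)  # close to e^{-1} = 0.3678...
```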
Over the years, the secretary problem has been generalized in many directions.
For the secretary problem in its classical setup, but with items arriving in a non-uniform order, see \cite{GM, Pf, KKN} as well as \cite{P22}.
See \cite{GD} and \cite{GK} for some variations of the classical setup with items arriving in non-uniform order. See \cite{B00} for a different approach to the secretary problem.
See \cite{F83,F89} for a history of the problem and some natural variations and generalizations.
We now define the distribution obtained by exponential tilting via the left-to-right minimum statistic. For a permutation $\sigma\in S_n$, a number $j\in[n]$ satisfying $\sigma_j=\min\{\sigma_i:1\le i\le j\}$ is called a left-to-right minimum for $\sigma$; note that a left-to-right minimum denotes the location of a minimum and not the value of a minimum. The left-to-right minimum statistic $\text{LR}^-_n$ is defined by $$
\text{LR}^{-}_n(\sigma)=|\{j\in[n]: \sigma_j=\min\{\sigma_i:1\le i\le j\}\}|,\ \sigma\in S_n. $$ For each $q>0$, define the left-to-right minimum exponentially tilted distribution $P_n^{\text{LR}^-;q}$ on $S_n$ by $$ P_n^{\text{LR}^-;q}(\sigma)= \frac{q^{\text{LR}^{-}_n(\sigma)}}{q^{(n)}},\ \sigma\in S_n, $$ where \begin{equation}\label{raising} q^{(n)}:=q(q+1)\cdots(q+n-1) \end{equation} is the raising factorial. The fact that $q^{(n)}$ is the correct normalization constant follows from the constructions in section \ref{construction}.
Before presenting our results on the secretary problem, we present a simple result concerning the behavior of the expectation of the left-to-right minimum statistic under $P_n^{\text{LR}^-;q_n}$ for various regimes of $\{q_n\}_{n=1}^\infty$.
\begin{proposition}\label{lrminprop} \noindent i. Let $q_n=o(\frac1{\log n})$. Then $$ \lim_{n\to\infty}E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n=1. $$ \noindent ii. Let $\lim_{n\to\infty}q_n\log n=c\in(0,\infty)$. Then $$ \lim_{n\to\infty}E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n=1+c. $$ \noindent iii. Let $\lim_{n\to\infty}q_n\log n=\infty$ and $q_n=O(1)$. Then $$ E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n\sim q_n\log n. $$ \noindent iv. Let $q_n\to\infty$ and $q_n=o(n)$. Then $$ E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n\sim q_n\log\frac{n+q_n}{1+q_n}. $$ In particular, if $q_n\sim cn^\alpha$, with $c>0$ and $\alpha\in(0,1)$, then $$ E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n\sim c(1-\alpha)n^\alpha\log n. $$ \noindent v. Let $q_n\sim cn$, with $c>0$. Then $$ E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n\sim c(\log\frac{1+c}c)n. $$ In particular, $c(\log\frac{1+c}c)\to\begin{cases} 0,\ \text{if}\ c\to 0;\\ 1,\ \text{if}\ c\to\infty.\end{cases}$
\noindent vi. Let $\lim_{n\to\infty}\frac{q_n}n=\infty$. Then $$ E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n\sim n. $$ \end{proposition}
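For small $n$ the expectation above can be checked by direct enumeration. Below is a minimal Python sketch (illustrative; the exact finite-$n$ expression $\sum_{j=1}^n \frac{q}{j-1+q}$, which the proposition's asymptotics refine, is derived in section \ref{proofprop}):

```python
from itertools import permutations

def lr_minima(sigma):
    # LR^-_n(sigma): number of positions j with sigma_j = min(sigma_1, ..., sigma_j)
    return sum(sigma[j] == min(sigma[:j + 1]) for j in range(len(sigma)))

def expected_lr_brute(n, q):
    # expectation under P_n^{LR^-;q}, by weighted enumeration over S_n (small n only)
    num = den = 0.0
    for sigma in permutations(range(1, n + 1)):
        w = q ** lr_minima(sigma)
        num += w * lr_minima(sigma)
        den += w
    return num / den

def expected_lr_exact(n, q):
    # closed form sum_{j=1}^n q/(j-1+q), from the online construction
    return sum(q / (j - 1 + q) for j in range(1, n + 1))

print(expected_lr_brute(6, 2.0), expected_lr_exact(6, 2.0))
```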
For any permutation, the right-most location of a left-to-right minimum is the location at which the number 1 appears. In light of this, it is intuitive from the definition of the distribution and from Proposition \ref{lrminprop} that when $q\in(0,1)$ there is a tendency for the number 1 to appear early and when $q>1$ there is a tendency for the number 1 to appear late. In fact,
for $i<j$, an exponentially tilted distribution via the left-to-right minimum statistic has a greater effect on the placement of the number $i$ than on the placement of the number $j$, and in particular, it has the greatest effect on the placement of the number 1. This tendency can be understood much more explicitly from the first of two constructions of $P_n^{\text{LR}^-;q_n}$ given in section \ref{construction}. In that construction, a random permutation distributed as $P_n^{\text{LR}^-;q_n}$ is built location by location, starting with the $n$th and final location, and moving backward one location at a time. The probability that any number $j$ is placed in the final location is the same for all $j\in[n]-\{1\}$, but is $q$ times as much for $j=1$. Using induction, let $m\in\{1,\cdots, n-2\}$, and assume now that the locations $n,n-1,\cdots, n-m+1$ have already been filled, say by numbers $\{i_k\}_{k=n-m+1}^n$. Then every number in $[n]-\{i_k\}_{k=n-m+1}^n$, except for the smallest one of them, has the same probability of appearing in location $n-m$, while the smallest of them has $q$ times as much probability to appear there. In the final step, location 1 is filled by the one remaining number.
In light of the discussion in the above paragraph, as we turn now to the secretary problem, \it our convention will be that the number 1 represents the highest ranking.\rm\ Thus, for $q\in(0,1)$, there is a tendency for the highest ranked item to arrive earlier than in the case of the uniform distribution, while for $q>1$, there is a tendency for it to arrive later.
If the order of arrival of the items is biased via the left-to-right minimum exponentially tilted distribution $P_n^{\text{LR}^-;q}$ with parameter $q>0$, let $\mathcal{P}_n^q(\mathcal{S}(n,M_n))$ denote the probability of successfully selecting the item of highest rank when employing the strategy $\mathcal{S}(n,M_n)$, which was defined in the second paragraph of the paper. The following theorem determines the asymptotically optimal strategies $\mathcal{S}(n,M_n^*)$ and the corresponding limiting probability of success, for all regimes of $\{q_n\}_{n=1}^\infty$.
\begin{theorem}\label{seclrmin} \noindent i. Let $q_n=o(\frac1{\log n})$. Then the asymptotically optimal strategy is $\mathcal{S}(n,M_n^*)$, where $M_n^*=0$. (That is, the optimal strategy is to choose the first item.) The corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=1. $$ \noindent ii. Let $q_n\sim\frac c{\log n}$, with $c\in(0,1)$. Then the asymptotically optimal strategy is $\mathcal{S}(n,M_n^*)$, where $M_n^*=0$. (That is, the optimal strategy is to choose the first item.) The corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=e^{-c}. $$ \noindent iii. Let $q_n\sim\frac1{\log n}$. Then the asymptotically optimal strategies are $\mathcal{S}(n,M_n^*)$, where $M_n^*=k$, for all $n\in\mathbb{N}$, where $k\in\mathbb{Z}^+$ is arbitrary, or $\lim_{n\to\infty}M_n^*=\infty$ and $\lim_{n\to\infty}\frac{\log M_n^*}{\log n}=0$. The
corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=e^{-1}. $$ \noindent iv. Let $q_n$ satisfy $\lim_{n\to\infty}q_n=0$ and $\lim_{n\to\infty}q_n\log n>1$. Then the asymptotically optimal strategies are $\mathcal{S}(n,M_n^*)$, where $q_n\log \frac n{M^*_n}\sim1$. (If $q_n\sim\frac c{\log n}$ with $c>1$, then $\lim_{n\to\infty}\frac{\log M^*_n}{\log n}=\frac{c-1}c$, and in particular, one can choose $M_n^*\sim n^{1-\frac1c}$.) The corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=e^{-1}. $$ \noindent v. Let $\lim_{n\to\infty}q_n=q\in(0,\infty)$. Then the asymptotically optimal strategies are $\mathcal{S}(n,M_n^*)$, where \begin{equation*}\label{optimali} M_n^*\sim ne^{-\frac1q}. \end{equation*}
The corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=e^{-1}. $$ \noindent vi. Let $q_n\to\infty$ and $q_n=o(n)$. Then the asymptotically optimal strategies are $\mathcal{S}(n,M_n^*)$, where \begin{equation*} n-M_n^*\sim\frac n{q_n}. \end{equation*} (In particular, if $q_n\sim cn^\alpha$, with $\alpha\in(0,1)$, then $n-M_n^*\sim\frac{n^{1-\alpha}}c$.)
The corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=e^{-1}. $$ \noindent vii. Let $q_n\sim cn$, with $c\in(0,1)$. Then the asymptotically optimal strategy is $\mathcal{S}(n,M_n^*)$, where \begin{equation}\label{optimalvii} M_n^*=n-L,\ \text{if}\ \frac1L\le c<\frac1{L-1}, \ \text{where}\ 2\le L\in\mathbb{N}. \end{equation} The corresponding limiting probability of success is \begin{equation}\label{optimalviiprob} \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=\frac{cL}{(1+c)^L}, \ \text{if}\ \frac1L\le c<\frac1{L-1}, \ \text{where}\ 2\le L\in\mathbb{N}. \end{equation} In particular, $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))>e^{-1}. $$ \noindent viii. Let $q_n\sim cn$, with $c\ge1$. Then the asymptotically optimal strategy is $\mathcal{S}(n,M_n^*)$, where $M_n^*=n-1$. (That is, the optimal strategy is to choose the last item.) The corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=\frac c{1+c}. $$ \noindent ix. Let $\lim_{n\to\infty}\frac{q_n}n=\infty$. Then the asymptotically optimal strategy is $\mathcal{S}(n,M_n^*)$, where $M_n^*=n-1$. (That is, the optimal strategy is to choose the last item.) The corresponding limiting probability of success is $$ \lim_{n\to\infty}\mathcal{P}_n^q(\mathcal{S}(n,M_n^*))=1. $$ \end{theorem}
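Part (v) predicts $M_n^*\sim ne^{-1/q}$ for constant $q$. This can be illustrated numerically by maximizing, over $M$, the exact success probability \eqref{lrminsecexactform} of Theorem \ref{lrminsecexact} below. The following Python sketch (illustrative, not part of the original text) evaluates the formula in log space via \texttt{lgamma}, using $n!/M!=\Gamma(n+1)/\Gamma(M+1)$ and $\prod_{l=M}^{n-1}(l+q)=\Gamma(n+q)/\Gamma(M+q)$:

```python
import math

def log_success(n, q, M, H):
    # log of P_n^q(S(n, M)) for 1 <= M <= n-1:
    # P = q*(M/n) * (n!/M!) / prod_{l=M}^{n-1}(l+q) * sum_{j=M}^{n-1} 1/j
    return (math.log(q * M / n)
            + math.lgamma(n + 1) - math.lgamma(M + 1)
            - math.lgamma(n + q) + math.lgamma(M + q)
            + math.log(H[M]))

def best_threshold(n, q):
    # H[M] = sum_{j=M}^{n-1} 1/j, precomputed as suffix sums
    H = [0.0] * (n + 1)
    for j in range(n - 1, 0, -1):
        H[j] = H[j + 1] + 1.0 / j
    return max(range(1, n), key=lambda M: log_success(n, q, M, H))

n = 2000
for q in (1.0, 2.0):
    print(q, best_threshold(n, q) / n, math.exp(-1.0 / q))
```

For $q=1$ the tilted measure is uniform and the maximizer sits near $n/e$, recovering the classical rule; for $q=2$ it sits near $ne^{-1/2}$.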
\noindent \bf Remark 1.\rm\
The fact that the optimal asymptotic probability of success is always at least $\frac1e$ can be explained by a result of Bruss \cite{B00}. For $n\in\mathbb{N}$, let $\{I_j\}_{j=1}^n$ be a sequence of independent indicator functions, which are observed sequentially. The observer's objective is to stop at the last $k$ for which $I_k=1$. Let $p_j$ denote the probability that $I_j=1$. One of the results of that paper is that an optimal strategy as $n\to\infty$ yields an optimal limiting probability of at least $\frac1e$, for all choices of $\{p_j\}_{j=1}^\infty$. This result of Bruss can be applied to the classical secretary problem. Indeed, let $I_k$ be equal to 1 or 0 according to whether or not the $k$th item is the highest ranked item among the first $k$ items. It is easy to check that the $\{I_k\}_{k=1}^n$ are independent under the uniform distribution. It turns out that this independence also holds under the distributions $P_n^{\text{LR}^-;q}$ (as well as under the Mallows distributions mentioned above). The proof of this independence for $P_n^{\text{LR}^-;q}$ is given in section \ref{construction}.
\noindent \bf Remark 2.\rm\ Note that if $\{q_n\}_{n=1}^\infty$ is of order at least $\frac1{\log n}$ and satisfies $q_n=o(n)$, then the limiting probability of success when using an asymptotically optimal strategy is $e^{-1}$; otherwise, this limiting probability of success is greater than $e^{-1}$. Note also that the limiting fraction of numbers, $\lim_{n\to\infty}\frac{M^*_n}n$, that are summarily rejected by an asymptotically optimal strategy lies in $(0,1)$ if and only if $\lim_{n\to\infty}q_n\in(0,\infty)$.
\noindent\bf Remark 3.\rm\ Note the following asymmetry with respect to the cases where an optimal strategy is $M^*_n=k$, for fixed $k\in\mathbb{N}$, and the cases where the optimal strategy is $M^*_n=n-L$, for $2\le L\in\mathbb{N}$. For $k\in\mathbb{N}$, the strategy $M_n^*=k$ is optimal when $q_n\sim\frac1{\log n}$, in which case the limiting probability of success is $e^{-1}$. However, for such $q_n$, this strategy $M_n^*=k$ is not the unique optimal strategy. On the other hand, for $2\le L\in\mathbb{N}$, the strategy $M_n^*=n-L$ is optimal when $q_n\sim cn$, where $\frac1L\le c<\frac1{L-1}$. This strategy is the unique optimal strategy for such $q_n$, and the limiting probability of success is $\frac{cL}{(1+c)^L}>e^{-1}$.
\bf\noindent Remark 4.\rm\ As noted in the introduction, the secretary problem with bias via a Mallows distribution was analyzed in \cite{P22}. The Mallows distributions $P_n^{\text{Mall};q}$ are obtained by exponential tilting via the inversion statistic $I_n$, which is defined by $I_n(\sigma)=\sum_{1\le i<j\le n}1_{\sigma_j<\sigma_i}$, for $\sigma\in S_n$. Thus, $P_n^{\text{Mall};q}(\sigma)$ is proportional to $q^{I_n(\sigma)}$. There are a variety of ways to see that tilting via the inversion statistic has a stronger effect than tilting via the left-to-right minimum statistic. In terms of the secretary problem, this can be seen from the fact that the limiting probability of success with left-to-right minimum tilting is $e^{-1}$ as long as $\{q_n\}_{n=1}^\infty$ is $o(n)$ and of order at least $\frac1{\log n}$. However, as seen in \cite{P22}, for constant $q_n=q\neq1$, the limiting probability of success under Mallows tilting is larger than $e^{-1}$.
The following theorem gives the exact formula for $\mathcal{P}_n^q(\mathcal{S}(n,M_n))$, for any $n,q,M_n$. \begin{theorem}\label{lrminsecexact} For $n\in\mathbb{N}$ and $q>0$, \begin{equation}\label{lrminsecexactform} \mathcal{P}_n^q(\mathcal{S}(n,M_n))=\begin{cases}q\frac{M_n}n\big(\frac{n!}{M_n!}\frac1{\prod_{l=M_n}^{n-1}(l+q)}\big)\sum_{j=M_n}^{n-1}\frac1j, \ M_n\in\{1,\cdots, n-1\};\\ \frac{(n-1)!}{\prod_{l=1}^{n-1}(l+q)},\ M_n=0.\end{cases} \end{equation}
\end{theorem}
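The formula \eqref{lrminsecexactform} can be verified against direct enumeration for small $n$. Below is a minimal Python sketch (illustrative, not part of the original text) comparing the closed form with a brute-force computation of the weighted success probability over $S_n$:

```python
from itertools import permutations
from math import prod

def lr_minima(sigma):
    # LR^-_n(sigma): positions j with sigma_j = min(sigma_1, ..., sigma_j)
    return sum(sigma[j] == min(sigma[:j + 1]) for j in range(len(sigma)))

def strategy_succeeds(sigma, M):
    # S(n, M): reject the first M items, then take the first later item
    # beating all of them; rank 1 denotes the best item
    if M == 0:
        return sigma[0] == 1
    threshold = min(sigma[:M])
    for j in range(M, len(sigma)):
        if sigma[j] < threshold:
            return sigma[j] == 1
    return sigma[-1] == 1   # no candidate appeared: forced to accept the last item

def success_brute(n, q, M):
    # P_n^q(S(n, M)) by weighted enumeration over S_n (small n only)
    num = den = 0.0
    for sigma in permutations(range(1, n + 1)):
        w = q ** lr_minima(sigma)
        den += w
        num += w * strategy_succeeds(sigma, M)
    return num / den

def success_formula(n, q, M):
    # the exact formula of the theorem
    if M == 0:
        return prod(range(1, n)) / prod(l + q for l in range(1, n))
    c = prod(range(M + 1, n + 1)) / prod(l + q for l in range(M, n))
    return q * (M / n) * c * sum(1.0 / j for j in range(M, n))

for M in range(6):
    print(M, success_brute(6, 2.5, M), success_formula(6, 2.5, M))
```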
The number $s(n,j)$ of permutations of $S_n$ with exactly $j$ left-to-right minima coincides with the number of permutations of $S_n$ with exactly $j$ cycles. The numbers $\{s(n,j)\}$ are called the unsigned Stirling numbers of the first kind. A proof of this equivalence can be given by showing that the two quantities above satisfy the same difference equation and the same boundary conditions. An alternative proof is via the explicit bijection provided by Foata's Transition Lemma \cite{B}. This bijection maps permutations with $j$ cycles to permutations with $j$ left-to-right minima. (Actually, using the definition of canonical cycle notation as presented in \cite{B}, permutations with $j$ cycles are mapped to permutations with $j$ left-to-right-maxima, but one can easily adjust the definition of canonical cycle notation so that permutations with $j$ cycles are mapped to permutations with $j$ left-to-right minima.)
The well-known Ewens sampling distributions are the family of distributions on $S_n$ obtained by exponential tilting via the cycle statistic. That is, the probability of any $\sigma\in S_n$ is proportional to $q^{\text{cyc}_n(\sigma)}$, where $\text{cyc}_n(\sigma)$ denotes the number of cycles in $\sigma$. It then follows that the distribution $P_n^{\text{LR}^-;q}$ is the push-forward distribution obtained from the Ewens sampling distribution with parameter $q$ via the bijection from the Transition Lemma.
In order to prove
Proposition \ref{lrminprop} and Theorem \ref{seclrmin}, it will be essential to have a so-called online construction of a random permutation
distributed as $P_n^{\text{LR}^-;q}$. Such an online construction for the Ewens sampling distributions can be obtained by a minor tweaking of the classical Feller construction that builds a uniformly random permutation cycle by cycle \cite{ABT, P14}. However, combining this construction with the push forward defined above does not yield a useful tool for proving Proposition \ref{lrminprop} and Theorem \ref{lrminsecexact}.
In section \ref{construction} we give two useful online constructions
of a random permutation distributed according to a left-to-right minimum exponentially tilted distribution.
The first one will be used to prove Proposition \ref{lrminprop} and Theorem \ref{lrminsecexact}, and the second one will be used to establish the independence noted in Remark 1 after Theorem \ref{seclrmin}.
We prove Proposition \ref{lrminprop} in section \ref{proofprop}. We prove Theorem \ref{lrminsecexact} in section \ref{proofexact}, and then use it to prove
Theorem \ref{seclrmin} in section \ref{proofsec}.
\section{On-line constructions of left-to-right minimum exponentially tilted distributions}\label{construction}
We describe two online methods for constructing a random permutation $\Pi^{(n)}$ distributed as $P_n^{\text{LR}^-;q}$. Fix $q>0$.
The first construction builds the permutation location by location, starting with the right-most location.
For each $m\in\mathbb{N}$, define the distribution $p^{(m)}$ on $[m]$ by \begin{equation}\label{p's} p_i^{(m)}=\begin{cases} \frac q{q+m-1},\ i=1;\\ \frac1{q+m-1},\ i=2,\cdots, m.\end{cases} \end{equation} Fix $n\in\mathbb{N}$. To construct the random permutation $\Pi^{(n)}=\Pi^{(n)}_1\Pi^{(n)}_2\cdots \Pi_n^{(n)}$, distributed as $P_n^{\text{LR}^-;q}$, make $n$ independent samples, one from each of the distributions
$\{p^{(m)}\}_{m=1}^n$. For $m\in[n]$, denote by $\kappa_m$ the number obtained in sampling from $p^{(m)}$. Define $\Pi^{(n)}_n=\kappa_n$. Now inductively, if $\Pi^{(n)}_n,\Pi^{(n)}_{n-1},\cdots ,\Pi^{(n)}_{m+1}$ have already been defined, let $\Pi^{(n)}_m=\Psi_m(\kappa_m)$, where $\Psi_m$ is the increasing bijection from $[m]$ to $[n]-\{\Pi^{(n)}_k\}_{k=m+1}^n$. Thus, for example, if $n=8$ and we sample $\kappa_8=2, \kappa_7=6,\kappa_6=1, \kappa_5=4,\kappa_4=2,\kappa_3=2,\kappa_2=1, \kappa_1=1$, then $\Pi^{(8)}=83546172$.
By construction, the random permutation $\Pi^{(n)}$ has a left-to-right minimum at location $m$ if and only if $\kappa_m=1$. Thus, from \eqref{p's}, for any $\sigma\in S_n$, the probability that $\Pi^{(n)}=\sigma$ is equal to $\frac{q^{\text{LR}^-_n(\sigma)}}{q^{(n)}}$, where $q^{(n)}$ is as in \eqref{raising}.
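This first construction can be sketched in a few lines of Python (an illustration added here, not part of the original text); the deterministic input reproduces the worked example above:

```python
import random

def sample_kappa(m, q, rng):
    # one draw from p^{(m)}: value 1 w.p. q/(q+m-1), each of 2..m w.p. 1/(q+m-1)
    if m == 1 or rng.random() < q / (q + m - 1):
        return 1
    return rng.randint(2, m)

def permutation_from_kappas(kappas):
    # fill locations n, n-1, ..., 1; Psi_m is the increasing bijection
    # from [m] onto the numbers not yet used
    n = len(kappas)
    remaining = list(range(1, n + 1))   # kept in increasing order
    pi = [0] * n
    for m in range(n, 0, -1):
        pi[m - 1] = remaining.pop(kappas[m - 1] - 1)
    return pi

# worked example from the text (n = 8): kappa_1, ..., kappa_8
print(permutation_from_kappas([1, 1, 2, 2, 4, 1, 6, 2]))  # [8, 3, 5, 4, 6, 1, 7, 2]

# a random draw from P_n^{LR^-;q}
rng = random.Random(1)
n, q = 8, 0.5
print(permutation_from_kappas([sample_kappa(m, q, rng) for m in range(1, n + 1)]))
```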
The above construction of a random permutation is a minor adaptation of the so-called $p$-shifted construction of a random permutation. See, for example, \cite{PT} and \cite{P21}. From Proposition 1.7 and Remark 3 following it in \cite{P21}, it follows that a $p$-shifted random permutation can also be constructed in a useful alternative fashion. This leads to the second construction of a random permutation $\Pi^{(n)}$ with a left-to-right minimum exponentially tilted distribution.
Let $\{Y_m\}_{m=2}^\infty$ be a sequence of independent random variables with \begin{equation}\label{Y} P(Y_m=j)=\begin{cases}\frac q{q+m-1},\ j=0;\\ \frac1{q+m-1},\ j=1,\cdots, m-1.\end{cases} \end{equation} Consider now a horizontal line on which to place the numbers in $[n]$. We begin by placing down the number 1. Then inductively, if we have already placed down the numbers $1,2,\cdots, m-1$, the number $m$ gets placed down in the position for which there are $Y_m$ numbers to its left. For example, for $n=8$, if $Y_2=1$, $Y_3=0$, $Y_4=1$, $Y_5=1$, $Y_6=3$, $Y_7=5$, $Y_8=0$, then we obtain the permutation $\Pi^{(8)}=83546172$. By the construction, for $m\in [n]$, the location of $m$ in the random permutation $\Pi^{(n)}$ will
be a left-to-right minimum for the random permutation $\Pi^{(n)}$ if and only if $Y_m=0$. Thus, from \eqref{Y}, it follows that for any $\sigma\in S_n$, the probability that $\Pi^{(n)}=\sigma$ is equal to $\frac{q^{\text{LR}^-_n(\sigma)}}{q^{(n)}}$.
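The second construction is equally short to sketch in Python (an illustration added here, not part of the original text). Note that reproducing the permutation $83546172$ requires $Y_8=0$, since the number 8 ends up in the first position:

```python
def permutation_from_ys(ys):
    # ys[m-2] = Y_m for m = 2, ..., n; number m is inserted with
    # exactly Y_m already-placed numbers to its left
    perm = [1]
    for m, y in enumerate(ys, start=2):
        perm.insert(y, m)
    return perm

# Y_2, ..., Y_8 for the example in the text
print(permutation_from_ys([1, 0, 1, 1, 3, 5, 0]))  # [8, 3, 5, 4, 6, 1, 7, 2]
```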
We use this second construction now to prove the independence noted in Remark 1 after Theorem \ref{seclrmin}. We want to prove that for any $n\in\mathbb{N}$, the events $\{\sigma\in S_n: \sigma_m=\min(\sigma_1,\cdots, \sigma_m)\}, m=1,\cdots, n$, are independent under $P_n^{\text{LR}^-;q}$. (The event $\{\sigma\in S_n: \sigma_m=\min(\sigma_1,\cdots, \sigma_m)\}$ is the event that $m$ is a left-to-right minimum for $\sigma$.) It is easy to show that the number of left-to-right minima in a permutation coincides with that of its inverse; that is, $\text{LR}^{-}_n(\sigma)=\text{LR}^{-}_n(\sigma^{-1}),\ \sigma\in S_n$. From this fact along with the definition of the exponentially tilted measure, it follows that if $\sigma$ is distributed according to $P_n^{\text{LR}^-;q}$, then $\sigma^{-1}$ is also distributed according to $P_n^{\text{LR}^-;q}$. Consequently, to prove the independence of the above events under $P_n^{\text{LR}^-;q}$, it suffices to prove the independence of the events $\{\sigma\in S_n: \sigma^{-1}_m=\min(\sigma^{-1}_1,\cdots, \sigma^{-1}_m)\}, m=1,\cdots, n$, under $P_n^{\text{LR}^-;q}$. The event $\{\sigma\in S_n: \sigma^{-1}_m=\min(\sigma^{-1}_1,\cdots, \sigma^{-1}_m)\}$ is the event that in the permutation $\sigma$, the number $m$ appears to the left of the numbers $1,\cdots, m-1$. Thus, from the second construction, this event is the event $\{Y_m=0\}$. This completes the proof since the $\{Y_m\}_{m=1}^n$ are independent.
\section{Proof of Proposition \ref{lrminprop}}\label{proofprop} We use the first online construction in section \ref{construction} and employ the notation from there. Under the distribution $P_n^{\text{LR}^-;q}$, a left-to-right minimum occurs at position $j$ if and only if $\kappa_j=1$, which occurs with probability $\frac q{j-1+q}$. Therefore \begin{equation}\label{expectationform} E_n^{\text{LR}^-;q_n}\text{LR}^{-}_n=\sum_{j=1}^n\frac{q_n}{j-1+q_n}=1+\sum_{j=1}^{n-1}\frac{q_n}{j+q_n}. \end{equation} We have $$ \sum_{j=2}^{n-1}\frac1{j+q_n}\le\int_1^{n-1}\frac1{x+q_n}dx\le\sum_{j=1}^{n-2}\frac1{j+q_n}, $$ from which it follows that \begin{equation}\label{intlog} q_n\log\frac{n-1+q_n}{1+q_n}\le\sum_{j=1}^{n-1}\frac{q_n}{j+q_n}\le q_n\log\frac{n-1+q_n}{1+q_n}+\frac{q_n}{1+q_n}. \end{equation} Parts (i)-(v) follow almost immediately from \eqref{expectationform} and \eqref{intlog}. Part (vi) follows from \eqref{expectationform} and \eqref{intlog} and the fact that $\log\frac{n-1+q_n}{1+q_n}=\log(1+\frac{n-2}{1+q_n})\sim\frac n{q_n}$, for $q_n$ as in part (vi).
$\square$
\section{Proof of Theorem \ref{lrminsecexact}}\label{proofexact} Let $\sigma=\sigma_1\sigma_2\cdots\sigma_n\in S_n$ represent the rankings of the $n$ items that arrive one by one. That is, $\sigma_j$ is the ranking of the $j$th item to arrive. Recall that our convention is that the number 1 represents the highest ranking. First consider the case $M_n=0$. The strategy $\mathcal{S}(n,0)$ will select the highest ranked item if and only if $\sigma_1=1$.
We use the first online construction in section \ref{construction}, and employ the notation from there. The event $\{\sigma_1=1\}$ occurs if and only if $\kappa_l\neq1$, for $l=2,\cdots, n$. Thus $$ P_n^{\text{LR}^-;q}(\sigma_1=1)=\prod_{l=2}^n\frac{l-1}{l-1+q}. $$
This gives \eqref{lrminsecexactform} for the case $M_n=0$.
From now on, assume that $M_n\ge1$. Then the strategy $\mathcal{S}(n,M_n)$ will select the highest ranking item if and only if for some $j\in\{M_n+1,\cdots, n\}$, one has $\sigma_j=1$ and $\min(\sigma_1,\cdots,\sigma_{j-1})=\min(\sigma_1,\cdots, \sigma_{M_n})$. So \begin{equation}\label{basicformula} \mathcal{P}_n^q(\mathcal{S}(n,M_n))=\sum_{j=M_n+1}^nP_n^{\text{LR}^-;q}(\sigma_j=1,\min(\sigma_1,\cdots,\sigma_{j-1})=\min(\sigma_1,\cdots,\sigma_{M_n})). \end{equation} We continue to use the first online construction in section \ref{construction}, and to employ the notation from there.
The event $\{\sigma_j=1\}$ occurs if and only if $\kappa_l\neq1$, for $l=j+1,\cdots, n$ and $\kappa_j=1$, while the event
$\min(\sigma_1,\cdots,\sigma_{j-1})=\min(\sigma_1,\cdots,\sigma_{M_n})$ occurs if and only if
$\kappa_l\neq1$, for $l=M_n+1,\cdots, j-1$. Thus,
\begin{equation}\label{calcprob} \begin{aligned} & P_n^{\text{LR}^-;q}(\sigma_j=1,\min(\sigma_1,\cdots,\sigma_{j-1})=\min(\sigma_1,\cdots,\sigma_{M_n}))=\\ & \big(\prod_{l=j+1}^n\frac{l-1}{l-1+q}\big)\big(\frac q{j-1+q}\big)\big(\prod_{l=M_n+1}^{j-1}\frac{l-1}{l-1+q}\big)=\\ &\frac{q(n-1)!}{(j-1)(M_n-1)!}\frac1{\prod_{l=M_n+1}^n(l-1+q)}. \end{aligned}
\end{equation} Now \eqref{lrminsecexactform} follows from \eqref{basicformula} and \eqref{calcprob}.
$\square$
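As an illustration (not used in the sequel), the closed form obtained from \eqref{basicformula} and \eqref{calcprob} can be evaluated numerically; at $q=1$ the arriving permutation is uniform, and the formula should reduce to the classical secretary-problem probability $\frac Mn\sum_{j=M+1}^n\frac1{j-1}$. The snippet below, with function names of our choosing, checks this reduction.

```python
def success_probability(n, M, q):
    # Sum over j = M+1, ..., n of the summand computed in the proof (case M >= 1):
    # prod_{l=j+1}^n (l-1)/(l-1+q) * q/(j-1+q) * prod_{l=M+1}^{j-1} (l-1)/(l-1+q).
    total = 0.0
    for j in range(M + 1, n + 1):
        term = q / (j - 1 + q)
        for l in range(j + 1, n + 1):
            term *= (l - 1) / (l - 1 + q)
        for l in range(M + 1, j):
            term *= (l - 1) / (l - 1 + q)
        total += term
    return total

# At q = 1 the formula collapses to the classical secretary probability.
n, M = 20, 7
classical = (M / n) * sum(1 / (j - 1) for j in range(M + 1, n + 1))
assert abs(success_probability(n, M, 1.0) - classical) < 1e-12
print(success_probability(n, M, 1.0))
```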
\section{Proof of Theorem \ref{seclrmin}}\label{proofsec} To prove the theorem, we perform an asymptotic analysis on \eqref{lrminsecexactform} with $q=q_n$. We begin with the estimates that are needed to analyze the cases (v)-(ix), in which $\{q_n\}_{n=1}^\infty$ is bounded away from zero. Then we prove cases (v)-(ix) of the theorem. After that we prove some additional estimates that are needed for the cases (i)-(iv), in which $\lim_{n\to\infty}q_n=0$. And then we prove cases (i)-(iv) of the theorem.
Using the well-known fact that \begin{equation*}\label{eulergamma} \sum_{j=1}^n\frac1j=\log n+\gamma+O(\frac1n),\ \text{where}\ \gamma\ \text{is the Euler-Mascheroni constant}, \end{equation*} we have \begin{equation}\label{logterm} \sum_{j=M_n}^{n-1}\frac1j=\log \frac n{M_n}+O(\frac1{M_n}). \end{equation} We write \begin{equation}\label{factorialandproduct} \frac{n!}{M_n!}\frac1{\prod_{l=M_n}^{n-1}(l+q_n)}=\prod_{l=M_n+1}^n\frac l{l-1+q_n} \end{equation} Using the Taylor expansion \begin{equation}\label{Taylor} \log(1+x)=x-\frac12c_xx^2,\ \text{for}\ x>-1,\ \text{where}\ c_x\in(0,1), \end{equation} and using \eqref{logterm} for the final equality, we have \begin{equation}\label{logprod} \begin{aligned} &\log\prod_{l=M_n+1}^n\frac {l-1+q_n}l=\sum_{l=M_n+1}^n\log(1+\frac{q_n-1}l)=\\ &\sum_{l=M_n+1}^n\big(\frac{q_n-1}l-\frac12c_{q_n,l}\frac{(q_n-1)^2}{l^2}\big)=\\ &(q_n-1)\log \frac n{M_n}+O(\frac{q_n-1}{M_n})+O\big((q_n-1)^2(\frac1{M_n}-\frac1n)\big), \end{aligned} \end{equation} where $c_{q_n,l}\in(0,1)$. From \eqref{factorialandproduct} and \eqref{logprod}, we have \begin{equation}\label{productterm} \frac{n!}{M_n!}\frac1{\prod_{l=M_n}^{n-1}(l+q_n)}=(\frac{M_n}n)^{q_n-1}\exp\Big(O(\frac{q_n-1}{M_n})+O\big((q_n-1)^2(\frac1{M_n}-\frac1n)\big)\Big). \end{equation} From \eqref{lrminsecexactform}, \eqref{logterm} and \eqref{productterm}, we have \begin{equation}\label{probanalyzed} \begin{aligned} &\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))=\\ &q_n(\frac{M_n}n)^{q_n}\big(\log\frac n{M_n}+O(\frac1{M_n})\big)\exp\Big(O(\frac{q_n-1}{M_n})+O\big((q_n-1)^2(\frac1{M_n}-\frac1n)\big)\Big),\\ &\text{if}\ M_n\ge1. \end{aligned} \end{equation}
Using the inequality $1-x\le e^{-x}$, for $x\ge0$, we also have \begin{equation}\label{anotherest} \prod_{l=M_n+1}^n\frac l{l-1+q_n}=\prod_{l=M_n+1}^n(1-\frac{q_n-1}{l-1+q_n})\le \exp\big(-(q_n-1)\sum_{l=M_n+1}^n\frac1{l-1+q_n}\big). \end{equation}
From \eqref{logterm}, it follows that \begin{equation}\label{harmonicest} \sum_{l=M_n+1}^n\frac1{l-1+q_n}\ge C_{x,c}>0,\ \text{if}\ \lim_{n\to\infty}\frac{q_n}n\le c<\infty\ \text{and}\
\lim_{n\to\infty}\frac{M_n}n\le x,
\ \text{for}\ x\in(0,1). \end{equation} From \eqref{lrminsecexactform}, \eqref{logterm}, \eqref{anotherest} and \eqref{harmonicest}, we have \begin{equation}\label{probanalyzedagain} \begin{aligned} &\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))\le q_n\frac{M_n}n\big(\log\frac n{M_n}+O(\frac1{M_n})\big)\exp(-C_{x,c}(q_n-1)), \text{where}\ C_{x,c}>0,\\ &\text{if}\ \lim_{n\to\infty}\frac{q_n}n\le c<\infty \ \text{and}\ \lim_{n\to\infty}\frac{M_n}n\le x, \ \text{for}\ x\in(0,1). \end{aligned} \end{equation}
We now use the above results to prove parts (v)-(ix). We begin with part (v). It is easy to see that without loss of generality we can assume that $q_n=q$ is independent of $n$.
If $\lim_{n\to\infty}\frac{M_n}n=x\in[0,1]$, then from \eqref{probanalyzed}, $$ \lim_{n\to\infty}\mathcal{P}_n^{q}(\mathcal{S}(n,M_n))=\begin{cases}-qx^q\log x, \ \text{if}\ x\in(0,1];\\ 0, \text{if}\ x=0.\end{cases} $$ The function $-qx^q\log x$, for $x\in(0,1]$, attains its maximum value $e^{-1}$ at $x=e^{-\frac1q}$. This completes the proof of part (v).
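The optimisation in part (v) is elementary calculus; as a quick numerical confirmation (ours, not part of the proof), a grid search over $x\in(0,1)$ recovers the maximiser $e^{-1/q}$ and the maximum value $e^{-1}$.

```python
import math

def limit_prob(x, q):
    # Limiting success probability -q * x^q * log(x) when M_n ~ x*n, fixed q.
    return -q * x ** q * math.log(x)

q = 3.0
grid = [i / 100000 for i in range(1, 100000)]
best_x = max(grid, key=lambda x: limit_prob(x, q))
# The maximiser is exp(-1/q) and the maximum value is 1/e, for every q > 0.
assert abs(best_x - math.exp(-1 / q)) < 1e-3
assert abs(limit_prob(best_x, q) - math.exp(-1)) < 1e-6
print(best_x, limit_prob(best_x, q))
```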
We now prove part (vi), where we assume that $q_n\to\infty$ and $q_n=o(n)$. It follows from \eqref{probanalyzedagain} that if $\lim_{n\to\infty}\frac{M_n}n<1$, then $\lim_{n\to\infty}\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))=0$. Thus, we assume that $\lim_{n\to\infty}\frac{M_n}n=1$ and write \begin{equation}\label{Mn} M_n=n-y_n, \ \text{where}\ 1\le y_n=o(n). \end{equation} Then from \eqref{probanalyzed}, we have \begin{equation}\label{eston} \begin{aligned} &\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))=q_n(1-\frac{y_n}n)^{q_n}\big(\log(1+\frac{y_n}{n-y_n})+O(\frac1n)\big)e^{o(1)}=\\ &q_n(1-\frac{y_n}n)^{q_n}\big(\frac{y_n}n+o(1)\big)e^{o(1)}. \end{aligned} \end{equation} From \eqref{eston}, it follows that \begin{equation}\label{finaleston} \lim_{n\to\infty}\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))=ze^{-z},\ \text{if}\ \lim_{n\to\infty}\frac{q_ny_n}n=z\in[0,\infty). \end{equation} The function $ze^{-z}$ attains its maximum value of $e^{-1}$ at $z=1$. This completes the proof of part (vi).
We now turn to parts (vii) and (viii) together, where $q_n\sim cn$, for some $c>0$. In this case too it follows from \eqref{probanalyzedagain} that if $\lim_{n\to\infty}\frac{M_n}n<1$, then $\lim_{n\to\infty}\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))=0$. Thus, we may assume that $M_n$ satisfies \eqref{Mn}. Then from \eqref{anotherest}, we have \begin{equation}\label{anotherestagain} \prod_{l=M_n+1}^n\frac l{l-1+q_n}\le e^{-ay_n},\ \text{for some}\ a>0. \end{equation} And from \eqref{lrminsecexactform}, \eqref{logterm} and \eqref{anotherestagain}, we have \begin{equation}\label{qncnest} \mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))\le q_n\frac{M_n}n\big(\log \frac n{M_n}+O(\frac1{M_n})\big)e^{-ay_n}\sim cn\big(\frac {y_n}n+o(1)\big)e^{-ay_n}. \end{equation} From \eqref{qncnest}, it follows that $\lim_{n\to\infty}\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))=0$, if $\lim_{n\to\infty}y_n=\infty$. Thus, we may assume now that \begin{equation}\label{MnL} M_n=n-L,\ L\in\mathbb{N}. \end{equation} From \eqref{lrminsecexactform}, we then have \begin{equation} \mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))\sim cn(1+c)^{-L}(\frac Ln)=\frac{cL}{(1+c)^L}. \end{equation} One has $\frac{cL}{(1+c)^L}\ge\frac{c(L+1)}{(1+c)^{L+1}}$ if and only if $c\ge\frac1L$. This shows that if $c\in(0,1)$, then the optimal strategy is with $M_n^*$ as in \eqref{optimalvii}, and the limiting probability of success is as in \eqref{optimalviiprob}. It also shows that if $c\ge1$, then the optimal strategy is with $M^*_n=n-1$ and the limiting probability of success is $\frac c{1+c}$. This
completes the proof of parts (vii) and (viii), except for the claim in (vii) that
$\lim_{n\to\infty}\mathcal{P}_n^{q}(\mathcal{S}(n,M^*_n))>e^{-1}$.
We now prove this last claim. One can show that for fixed $2\le L\in\mathbb{N}$, the expression on the right hand side of \eqref{optimalviiprob}, considered as a function
of $c\in[\frac1L,\frac1{L-1}]$ attains its maximum value at the right hand endpoint, where it is equal to $(1+\frac1{L-1})^{-(L-1)}$.
The claim is proved by noting that $(1+\frac1n)^n$ increases to $e$ as $n\to\infty$.
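The two elementary facts used in parts (vii) and (viii) are easy to confirm numerically; the sketch below (ours, not part of the proof) brute-forces the optimal $L$ for several values of $c$ and checks the endpoint value against $e^{-1}$.

```python
import math

def limit_prob(L, c):
    # Limiting success probability c*L / (1+c)^L when M_n = n - L and q_n ~ c*n.
    return c * L / (1 + c) ** L

# limit_prob(L+1) >= limit_prob(L) iff L <= 1/c, so the best L is ceil(1/c)
# (for c not of the form 1/L, where two consecutive values tie).
for c in (0.12, 0.35, 0.7, 1.5):
    best_L = max(range(1, 200), key=lambda L: limit_prob(L, c))
    assert best_L == math.ceil(1 / c)

# At the endpoint c = 1/(L-1) the optimal probability is (1+1/(L-1))^{-(L-1)};
# since (1+1/n)^n increases to e, this value decreases to 1/e and so exceeds 1/e.
for L in range(2, 12):
    assert (1 + 1 / (L - 1)) ** (-(L - 1)) > math.exp(-1)
print("checks passed")
```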
Finally, we turn to part (ix). The proof of this part follows from part (vi) of Proposition \ref{lrminprop}.
We now turn to the additional estimates needed to treat the cases in which $\lim_{n\to\infty}q_n=0$. From \eqref{lrminsecexactform}, for $M^*_n=0$, \begin{equation}\label{M=0} \begin{aligned} &\log\mathcal{P}_n^{q_n}(\mathcal{S}(n,0))=\sum_{l=1}^{n-1}\log \frac l{l+q_n}=\sum_{l=1}^{n-1}\log(1-\frac{q_n}{l+q_n})=-q_n\sum_{l=1}^{n-1}\frac1{l+q_n}+O(q_n^2)=\\ &-q_n\log n+O(q_n),\ \text{if}\ \lim_{n\to\infty}q_n=0. \end{aligned} \end{equation}
For fixed $M\in\mathbb{N}$, we have \begin{equation}\label{fixedM} \prod_{l=M+1}^n\frac {l-1+q_n}l=\frac{M+q_n}n\prod_{l=M+1}^{n-1}(1+\frac{q_n}l). \end{equation} Also, using \eqref{Taylor} and \eqref{logterm}, we have \begin{equation}\label{fixedMagain} \begin{aligned} &\log\prod_{l=M+1}^{n-1}(1+\frac{q_n}l)=\sum_{l=M+1}^{n-1}\log(1+\frac{q_n}l)=q_n\sum_{l=M+1}^{n-1}\frac1l-\frac{q_n^2}2\sum_{l=M+1}^{n-1}\frac{c_{q_n,l}}{l^2}=\\ &q_n\big(\log n+O(1)\big)+O(q_n^2)=q_n\log n+O(q_n),\ \text{if}\ \lim_{n\to\infty}q_n=0, \end{aligned} \end{equation} where $c_{q_n,l}\in(0,1)$. From \eqref{factorialandproduct}, \eqref{fixedM} and \eqref{fixedMagain}, we have \begin{equation}\label{Mqnto0} \frac{n!}{M!}\frac1{\prod_{l=M}^{n-1}(l+q_n)}\sim\frac{n^{1-q_n}}M,\ \text{if}\ \lim_{n\to\infty}q_n=0,\ \text{for}\ M\in\mathbb{N}. \end{equation} From \eqref{lrminsecexactform}, \eqref{logterm} and \eqref{Mqnto0}, we have \begin{equation}\label{probMqnto0} \begin{aligned} &\mathcal{P}_n^{q_n}(\mathcal{S}(n,M))\sim q_n\frac Mn\frac{n^{1-q_n}}M(\log \frac nM)\sim q_n(\frac1n)^{q_n}\log n=(q_n\log n)e^{-q_n\log n},\\ & \text{if}\ \lim_{n\to\infty}q_n=0, \ \text{for}\ M\in\mathbb{N}. \end{aligned} \end{equation} From \eqref{probanalyzed}, we have \begin{equation}\label{probMnqnto0} \begin{aligned} &\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))\sim q_n(\frac{M_n}n)^{q_n}\log\frac n{M_n}=(q_n\log \frac n{M_n})e^{-q_n\log \frac n{M_n}},\\ &\text{if}\ q_n\ \text{is bounded and}\ \lim_{n\to\infty}M_n=\infty. \end{aligned} \end{equation}
We now prove
parts (i)-(iv). We begin with part (i), where $q_n=o(\frac1{\log n})$. From \eqref{M=0}, \eqref{probMqnto0} and \eqref{probMnqnto0}, and the fact that the function $xe^{-x}$ attains its maximum at $x=1$, it follows that the optimal strategy is $\mathcal{S}(n,M_n^*)$, with $M^*_n=0$, and the limiting probability of success is 1. (Alternatively, part (i) follows from part (i) of Proposition \ref{lrminprop}.)
We now turn to part (ii), where $q_n\sim\frac c{\log n}$, with $c\in(0,1)$. If we choose $M_n=M$ to be fixed, then by \eqref{probMqnto0}, \begin{equation}\label{Mfixedc<1} \mathcal{P}_n^{q_n}(\mathcal{S}(n,M))\sim ce^{-c}. \end{equation} If we choose $M_n$ such that $\lim_{n\to\infty}M_n=\infty$, then from \eqref{probMnqnto0}, \begin{equation}\label{Mnc<1} \mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))\sim c(1-\frac{\log M_n}{\log n})e^{-c(1-\frac{\log M_n}{\log n})}. \end{equation} If $c\in(0,1)$, the function $H_c(x)=c(1-x)e^{-c(1-x)}$ attains its maximum over $x\in[0,1]$ at $x=0$, where it is equal to $ce^{-c}$. Thus, from \eqref{Mnc<1}, \begin{equation}\label{Mnc<1again} \limsup_{n\to\infty}\mathcal{P}_n^{q_n}(\mathcal{S}(n,M_n))\le ce^{-c}. \end{equation} On the other hand, from \eqref{M=0}, \begin{equation}\label{M=0c<1} \lim_{n\to\infty}\mathcal{P}_n^{q_n}(\mathcal{S}(n,0))=e^{-c}. \end{equation} From \eqref{Mfixedc<1}, \eqref{Mnc<1again} and \eqref{M=0c<1}, it follows that the optimal strategy is $\mathcal{S}(n,M_n^*)$, with $M^*_n=0$, and the limiting probability of success is $e^{-c}$.
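For part (ii) the comparison comes down to the one-variable function $H_c$; the short check below (ours, not part of the proof) confirms that for $c\in(0,1)$ its maximum over $[0,1]$ is attained at $x=0$ and is still beaten by the $M_n=0$ probability $e^{-c}$.

```python
import math

def H(x, c):
    # H_c(x) = c(1-x)exp(-c(1-x)), the limit arising when lim M_n = infinity.
    return c * (1 - x) * math.exp(-c * (1 - x))

c = 0.6
grid = [i / 1000 for i in range(0, 1001)]
best = max(H(x, c) for x in grid)
assert abs(best - c * math.exp(-c)) < 1e-12   # maximum attained at x = 0
assert c * math.exp(-c) < math.exp(-c)        # strictly below the M_n = 0 value
print(best, math.exp(-c))
```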
We now turn to part (iii), where $q_n\sim\frac1{\log n}$. The analysis above for part (ii) goes through just as well when $c=1$. Thus, from the previous paragraph we conclude that the optimal strategies $\mathcal{S}(n,M_n^*)$ are those with $M_n^*=k\in \mathbb{Z}^+$ or $\lim_{n\to\infty}M_n^*=\infty$ with
$\lim_{n\to\infty}\frac{\log M_n^*}{\log n}=0$, and the limiting probability of success is $e^{-1}$.
We now turn to part (iv), where $\lim_{n\to\infty}q_n=0$ and $\lim_{n\to\infty}q_n\log n>1$. From \eqref{M=0}, \eqref{probMqnto0} and \eqref{probMnqnto0}, and the fact that the function $xe^{-x}$ attains its maximum at $x=1$, it follows that the optimal strategies are $\mathcal{S}(n,M_n^*)$, where $q_n\log \frac n{M_n^*}\sim1$, and the limiting probability of success is $e^{-1}$.
$\square$
\end{document}
\begin{document}
\maketitle
\begin{abstract} Let $\sfk$ be a $p$-adic field and let $\mathbf{G}(\sfk)$ be the $\sfk$-points of a connected reductive group, inner to split. The set of Aubert-Zelevinsky duals of the constituents of a tempered L-packet form an Arthur packet for $\mathbf{G}(\sfk)$. In this paper, we give an alternative characterization of such Arthur packets in terms of the wavefront set, proving in some instances a conjecture of Jiang-Liu and Shahidi. Pursuing an analogy with real and complex groups, we define some special unions of Arthur packets which we call \emph{weak} Arthur packets and describe their constituents in terms of their Langlands parameters. \end{abstract}
\tableofcontents
\section{Introduction}
Let $\mathsf k$ be a nonarchimedean local field of characteristic $0$ with ring of integers $\mathfrak o$, finite residue field $\mathbb F_q$ of cardinality $q$ and valuation $\mathsf{val}_{\mathsf k}$. Fix an algebraic closure $\bar{\mathsf k}$ of $\mathsf k$ and let $\kun\subset \bar{\mathsf k}$ be the maximal unramified extension of $\mathsf k$ in $\bar{\mathsf k}$. Let $\mathbf{G}$ be a connected reductive algebraic group defined over $\sfk$ and let $\mathbf{G}(\sfk)$ be the group of $\sfk$-rational points. We assume throughout that $\mathbf{G}(\sfk)$ is inner to split.
Let $W_{\sfk}$ be the Weil group associated to $\sfk$ \cite[(1.4)]{Tate1979}. Then the Weil-Deligne group is the semidirect product \cite[\S 8.3.6]{Deligne1972}
$$W_{\sfk}' = W_{\sfk} \ltimes \CC$$
where $W_{\sfk}$ acts on $\CC$ via
$$w x w^{-1}=\|w\| x,\qquad x\in \mathbb C,\ w\in W_{\sfk}.$$
Let $G^{\vee}$ denote the complex Langlands dual group associated to $\mathbf{G}$, see \cite[\S2.1]{Borel1979}.
\begin{definition}\label{def:Langlandsparams} A \emph{Langlands parameter} is a continuous homomorphism
\begin{equation}
\phi: W_{\sfk}'\rightarrow G^\vee \end{equation}
which respects the Jordan decompositions in $W_{\sfk}'$ and $G^\vee$ (\cite[\S8.1]{Borel1979}). \end{definition}
To each Langlands parameter $\phi$, one hopes to associate an $L$-packet $\Pi^{\mathsf{Lan}}_{\phi}(\mathbf{G}(\sfk))$ of irreducible admissible $\mathbf{G}(\sfk)$-representations \cite[\S10]{Borel1979} (note: these packets have not been defined in full generality, see \cite{Arthur2013} or \cite{Kaletha2022} for a discussion of the status of the local Langlands conjectures).
\begin{definition} An \emph{Arthur parameter} is a continuous homomorphism
$$\psi: W_{\sfk}' \times \mathrm{SL}(2,\CC) \to G^{\vee}$$
such that \begin{itemize}
\item[(i)] The restriction of $\psi$ to $W_{\sfk}'$ is a tempered Langlands parameter.
\item[(ii)] The restriction of $\psi$ to $\mathrm{SL}(2,\CC)$ is algebraic. \end{itemize} Arthur parameters are considered modulo the $G^{\vee}$-action on the target. \end{definition}
For each Arthur parameter $\psi$ there is an associated Langlands parameter $\phi_{\psi}: W_{\sfk}' \to G^{\vee}$ defined by the formula
\begin{equation}\label{eq:ArthurLanglands}\phi_{\psi}(w) = \psi(w,\begin{pmatrix}\|w\|^{1/2} & 0\\0 & \|w\|^{-1/2} \end{pmatrix}), \qquad w \in W_{\sfk}',\end{equation}
where $\|\bullet\|$ is the pull-back to $W_{\sfk}'$ of the norm map on $W_{\sfk}$. The local version of Arthur's conjectures can be stated succinctly as follows.
\begin{conj}[Section 6, \cite{Arthur1989}]\label{conj:Arthur} For each Arthur parameter $\psi$, there is an associated finite set $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\sfk))$ of irreducible admissible $\mathbf{G}(\sfk)$-representations called the \emph{Arthur packet} attached to $\psi$. The set $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\sfk))$ should satisfy various properties, including: \begin{itemize}
\item[(i)] $\Pi^{\mathsf{Lan}}_{\phi_{\psi}}(\mathbf{G}(\sfk)) \subseteq \Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\sfk))$,
\item[(ii)] $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\sfk))$ consists of unitary representations,
\end{itemize} along with several other properties (e.g. stability, endoscopy), see \cite[Section 4]{Arthur1989} or \cite[Chapter 1]{AdamsBarbaschVogan}. \end{conj}
Although a general definition is lacking, the packets $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\sfk))$ have been defined in various special cases. For quasisplit orthogonal and symplectic groups, Arthur has defined them using endoscopic transfer and has proved that they satisfy all of the conjectured properties \cite[Theorems 1.5.1 and 2.2.1]{Arthur2013}. His constructions were extended in \cite{Mok2015} to quasisplit unitary groups. Explicit definitions, compatible with Arthur's, were proposed by M\oe glin in \cite{Mo1,Mo2}, see also the exposition in \cite{Xu} and the references therein. For split $G_2$, some Arthur packets were constructed in \cite{GanGurevich}. Recently, Cunningham and his collaborators have proposed a `microlocal' approach to defining Arthur packets, see \cite{Cunningham}.
There is one simple class of Arthur parameters for which a general (i.e. case-free) definition of Arthur packets is available.
\begin{definition}\label{def:unipotentparam}
An Arthur parameter $\psi$ is \emph{basic} if the restriction $\psi|_{W_{\sfk}'}$ is trivial. \end{definition}
By the Jacobson-Morozov theorem, there is a natural bijection
\begin{align*} \{\text{nilpotent adjoint } G^{\vee}\text{-orbits}\} &\xrightarrow{\sim} \{\text{basic Arthur parameters}\}/G^{\vee}\\ \OO^{\vee} &\mapsto \psi_{\OO^{\vee}}. \end{align*}
For each nilpotent orbit $\OO^{\vee} \subset \fg^{\vee}$, we choose an $\mathfrak{sl}(2)$-triple $(e^{\vee},f^{\vee},h^{\vee})$ and consider the semisimple element $q^{\frac{1}{2}h^{\vee}} \in G^{\vee}$. This element is well-defined modulo conjugation by $G^{\vee}$, and therefore determines an `infinitesimal character', see Section \ref{subsec:LLC}. There is a well-known involution defined by Aubert and Zelevinsky on the set of irreducible admissible $\mathbf{G}(\sfk)$-representations, see Section \ref{subsec:AZ}. We denote this involution by $X \mapsto \mathrm{AZ}(X)$. The Arthur packets attached to basic Arthur parameters can be defined as follows, , see for example \cite[\S7.1]{Arthur2013}.
\begin{definition} Let $\OO^{\vee}$ be a nilpotent adjoint $G^{\vee}$-orbit and let $\psi_{\OO^{\vee}}$ be the associated basic Arthur parameter. Then $\Pi^{\mathsf{Art}}_{\psi_{\OO^{\vee}}}(\mathbf{G}(\sfk))$ is the set of irreducible $\mathbf{G}(\sfk)$-representations $X$ with unipotent cuspidal support such that \begin{itemize}
\item[(i)] The infinitesimal character of $X$ is $q^{\frac{1}{2}h^{\vee}}$.
\item[(ii)] $\AZ(X)$ is tempered. \end{itemize} \end{definition}
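For orientation, the data entering this definition is very concrete: for $G^\vee=\mathrm{SL}(2,\CC)$ and the principal nilpotent orbit, an $\mathfrak{sl}(2)$-triple and the element $q^{\frac12 h^{\vee}}$ can be written down explicitly. The toy sketch below (the matrices and helper names are ours, for illustration only) verifies the triple relations and that $q^{\frac12 h^{\vee}}$ lies in $\mathrm{SL}(2)$.

```python
def mat_mul(a, b):
    # 2x2 matrix multiplication.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(a, b):
    # Lie bracket [a, b] = ab - ba.
    ab, ba = mat_mul(a, b), mat_mul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

def scale(t, a):
    return [[t * a[i][j] for j in range(2)] for i in range(2)]

# Standard sl(2)-triple for the principal nilpotent orbit in sl(2, C).
e = [[0, 1], [0, 0]]
f = [[0, 0], [1, 0]]
h = [[1, 0], [0, -1]]

# Jacobson-Morozov relations: [h, e] = 2e, [h, f] = -2f, [e, f] = h.
assert bracket(h, e) == scale(2, e)
assert bracket(h, f) == scale(-2, f)
assert bracket(e, f) == h

# The semisimple element q^{h/2} attached to the orbit, for residue
# field cardinality q = 9: diag(q^{1/2}, q^{-1/2}).
q = 9
q_half_h = [[q ** 0.5, 0], [0, q ** (-0.5)]]
assert abs(q_half_h[0][0] * q_half_h[1][1] - 1) < 1e-12  # determinant 1
print("sl(2)-triple relations verified")
```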
The \emph{wavefront set} of an admissible $\mathbf{G}(\sfk)$-representation is a fundamental invariant coming from the Harish-Chandra-Howe local character expansion. In its classical form, the wavefront set $\WF(X)$ of $X$ is a collection of nilpotent $\bfG(\sfk)$-orbits in the Lie algebra $\mathfrak g(\sfk)$, namely the \emph{maximal} nilpotent orbits for which the Fourier transforms of the associated orbital integrals contribute to the local character expansion of the distribution character of $X$, \cite[Theorem 16.2]{HarishChandra1999}. In this paper, we consider two coarser invariants (see Section \ref{s:wave} for the precise definitions). The first of these invariants is the \emph{algebraic wavefront set}, denoted $\hphantom{ }^{\bar{\sfk}}\WF(X)$. This is a collection of nilpotent orbits in $\mathfrak{g}(\bark)$, see for example \cite[p. 1108]{Wald18} (where it is simply referred to as the `wavefront set' of $X$). The second invariant is $\CUWF(X)$, a natural refinement of $^{\bark}\WF(X)$ called the \emph{canonical unramified wavefront set}, defined recently in \cite{okada2021wavefront}. This is a collection of nilpotent orbits in $\mathfrak g(\kun)$ (modulo a certain equivalence relation $\sim_A$). The relationship between these three invariants is as follows: the algebraic wavefront set $\hphantom{ }^{\bar{\sfk}}\WF(X)$ is deducible both from the usual wavefront set $\WF(X)$ and from the canonical unramified wavefront set $\CUWF(X)$. It is not known whether the canonical unramified wavefront set is deducible from the usual wavefront set, but we expect this to be the case, see \cite[Section 5.1]{okada2021wavefront} for a careful discussion of this topic.
For real and complex groups, it is possible to define Arthur packets in terms of the wavefront set, see \cite{BarbaschVogan1985} and \cite[Section 27] {AdamsBarbaschVogan}. Thus, it is natural to hope for a similar definition in the $p$-adic setting. Our main result (Theorem \ref{thm:main}) gives an alternative characterization of the Arthur packets attached to basic Arthur parameters in terms of the canonical unramified wavefront set. The most precise version of our result requires two additional ingredients which will be introduced in Section \ref{sec:preliminaries} (the classification of nilpotent orbits in $\fg(\bar{\sfk})$ and Achar duality). Here is a simplified version.
\begin{theorem}\label{thm:mainintro} Let $\OO^{\vee}$ be a nilpotent adjoint $G^{\vee}$-orbit and let $\psi_{\OO^{\vee}}$ be the associated basic Arthur parameter. Then $\Pi^{\mathsf{Art}}_{\OO^{\vee}}(\mathbf{G}(\sfk))$ is the set of irreducible representations $X$ with unipotent cuspidal support such that \begin{itemize}
\item[(i)] The infinitesimal character of $X$ is $q^{\frac{1}{2}h^{\vee}}$.
\item[(ii)] The canonical unramified wavefront set $\CUWF(X)$ is minimal subject to (i). \end{itemize} \end{theorem}
As a consequence of Theorem \ref{thm:mainintro}, we prove a conjecture of Jiang-Liu and Shahidi in the case of basic Arthur packets (see Corollary \ref{cor:jiang} below).
If we replace $\CUWF(X)$ in condition (ii) with the coarser invariant $^{\bar{\sfk}}\WF(X)$, we get a larger set of representations, which we call a \emph{weak Arthur packet}. We conjecture that this set is a union of Arthur packets (and, in particular, that its constituents are unitary).
\subsection{Acknowledgments}
The authors would like to thank Kevin McGerty and David Vogan for many helpful conversations. The authors would also like to thank Anne-Marie Aubert, Colette M\oe glin, David Renard, and Maarten Solleveld for their helpful comments and corrections on an earlier draft of this paper. This research was partially supported by the Engineering and Physical Sciences Research Council under grant EP/V046713/1. The third author was supported by the Aker Scholarship.
\section{Preliminaries}\label{sec:preliminaries}
Let $\mathsf{k}$ be a nonarchimedean local field of characteristic $0$ with residue field $\mathbb{F}_q$ of sufficiently large characteristic, ring of integers $\mathfrak{o} \subset \mathsf{k}$, and valuation $\mathsf{val}_{\mathsf{k}}$. Fix an algebraic closure $\bar{\mathsf{k}}$ of $\mathsf{k}$ with Galois group $\Gamma$, and let $\kun \subset \bar{\mathsf{k}}$ be the maximal unramified extension of $\mathsf{k}$ in $\bar{\mathsf{k}}$. Let $\mf O$ be the ring of integers of $\kun$. Let $\mathrm{Frob}$ be the geometric Frobenius element of $\mathrm{Gal}(\kun/\mathsf{k})$, the topological generator which induces the inverse of the automorphism $x\to x^q$ of $\mathbb{F}_q$.
Let $\bfG$ be a connected reductive algebraic group defined over $\sfk$, inner to split, and let $\bfT \subset \mathbf{G}$ be a maximal torus. For any field $F$ containing $\sfk$, we write $\mathbf{G}(F)$, $\mathbf{T}(F)$, etc. for the groups of $F$-rational points. Let $\bfG_{\ad}=\bfG/Z(\bfG)$ denote the adjoint group of $\bfG$.
Write $X^*(\mathbf{T},\bark)$ (resp. $X_*(\mathbf{T},\bark)$) for the lattice of algebraic characters (resp. co-characters) of $\mathbf{T}(\bark)$, and write $\Phi(\mathbf{T},\bark)$ (resp. $\Phi^{\vee}(\mathbf{T},\bark)$) for the set of roots (resp. co-roots). Let
$$\mathcal R=(X^*(\mathbf{T},\bark), \ \Phi(\mathbf{T},\bark),X_*(\mathbf{T},\bark), \ \Phi^\vee(\mathbf{T},\bark), \ \langle \ , \ \rangle)$$
be the root datum corresponding to $\mathbf{G}$, and let $W$ be the associated (finite) Weyl group. Let $G$ be the complex reductive group with the same absolute root data as $\bfG$ and let $\mathbf{G}^\vee$ be the Langlands dual group of $\bfG$, i.e. the connected reductive algebraic group defined and split over $\ZZ$ corresponding to the dual root datum
$$\mathcal R^\vee=(X_*(\mathbf{T},\bark), \ \Phi^{\vee}(\mathbf{T},\bark), X^*(\mathbf{T},\bark), \ \Phi(\mathbf{T},\bark), \ \langle \ , \ \rangle).$$ Set $\Omega=X_*(\mathbf{T},\bark)/\ZZ \Phi^\vee(\mathbf{T},\bark)$. The center $Z(\bfG^\vee)$ can be naturally identified with the irreducible characters $\mathsf{Irr} \Omega$, and dually, $\Omega\cong X^*(Z(\bfG^\vee))$. For $\omega\in\Omega$, let $\zeta_\omega$ denote the corresponding irreducible character of $Z(\bfG^\vee)$.
For details regarding the parametrization of inner twists of a group $\bfG(\mathsf k)$, see \cite[\S2]{Vogan1993}, \cite{Kottwitz1984}, \cite[\S2]{Kaletha2016}, or \cite[\S1.3]{ABPS2017} and \cite[\S1]{FengOpdamSolleveld2021}. We only record here that the equivalence classes of inner twists of $\mathbf G$ are parameterized by the Galois cohomology group \[H^1(\Gamma, \mathbf G_{\ad})\cong H^1(F,\mathbf G_{\ad}(\kun))\cong\Omega_{\ad}\cong \Irr Z(\bfG^\vee_{\mathsf{sc}}), \] where $\bfG^\vee_{\mathsf{sc}}$ is the Langlands dual group of $\bfG_{\ad}$, i.e., the simply connected cover of $\bfG^\vee$, and $F$ denotes the action of $\mathrm{Frob}$ on $\bfG(\kun)$. We identify $\Omega_{\ad}$ with the fundamental group of $\bfG_{\ad}$. The isomorphism above is determined as follows: for a cohomology class $h$ in $H^1(F, \mathbf G_{\ad}(\kun))$, let $z$ be a representative cocycle. Let $u\in \bfG_{\ad}(\kun)$ be the image of $F$ under $z$, and let $\omega$ denote the image of $u$ in $\Omega_{\ad}$. Set $F_\omega=\Ad(u)\circ F$. The corresponding rational structure of $\bfG$ is given by $F_\omega$. Let $\bfG^\omega$ be the connected reductive group defined over $\sfk$ such that $\bfG(\kun)^{F_\omega}=\bfG^\omega(\mathsf k)$.
\
If $H$ is a complex reductive group and $x$ is an element of $H$ or $\fh$, we write $H(x)$ for the centralizer of $x$ in $H$, and $A_H(x)$ for the group of connected components of $H(x)$. If $S$ is a subset of $H$ or $\fh$ (or indeed, of $H \cup \fh$), we can similarly define $H(S)$ and $A_H(S)$. We will sometimes write $A(x)$, $A(S)$ when the group $H$ is implicit. The subgroups of $H$ of the form $H(x)$ where $x$ is a semisimple element of $H$ are called \emph{pseudo-Levi} subgroups of $H$.
Let $\mathcal C(\bfG(\mathsf k))$ be the category of smooth complex $\bfG(\mathsf k)$-representations and let $\Pi(\mathbf{G}(\mathsf k)) \subset \mathcal C(\bfG(\mathsf k))$ be the set of irreducible objects. Let $R(\bfG(\mathsf k))$ denote the Grothendieck group of $\mathcal C(\bfG(\mathsf k))$.
\subsection{Nilpotent orbits}\label{subsec:nilpotent}
Let $\mathcal N$ be the functor which takes a field $F$ containing $\sfk$ to the set of nilpotent elements of $\mf g(F)$. By `nilpotent' in this context we mean the unstable points (in the sense of GIT) with respect to the adjoint action of $\bfG(F)$, see \cite[Section 2]{debacker}. For $F$ algebraically closed this coincides with all the usual notions of nilpotence. Let $\mathcal N_o$ be the functor which takes $F$ to the set of orbits in $\mathcal N(F)$ under the adjoint action of $\bfG(F)$. When $F$ is $\sfk$ or $\kun$, we view $\mathcal N_o(F)$ as a partially ordered set with respect to the closure ordering in the topology induced by the topology on $F$. When $F$ is algebraically closed, we view $\mathcal N_o(F)$ as a partially ordered set with respect to the closure ordering in the Zariski topology. For brevity we will write $\mathcal N(F'/F)$ (resp. $\mathcal N_o(F'/F)$) for $\mathcal N(F\to F')$ (resp. $\mathcal N_o(F\to F')$) where $F\to F'$ is a morphism of fields. For $(F,F')=(\sfk,\kun)$ (resp. $(\sfk,\bark)$, $(\kun,\bark)$), the map $\mathcal N_o(F'/F)$ is strictly increasing. We will write $\mathcal N$ for the nilpotent cone of the Lie algebra of $G$ and $\mathcal N_o$ for its $\Ad(G)$ orbits. In this case we also define $\mathcal N_{o,c}$ (resp. $\mathcal N_{o,\bar c}$) to be the set of all pairs $(\OO,C)$ such that $\OO\in \mathcal N_o$ and $C$ is a conjugacy class in the fundamental group $A(\OO)$ of $\OO$ (resp. Lusztig's canonical quotient $\bar A(\OO)$ of $A(\OO)$, see \cite[Section 5]{Sommers2001}). There is a natural map \begin{equation}
\mf Q:\mathcal N_{o,c}\to\mathcal N_{o,\bar c}, \qquad (\OO,C)\mapsto (\OO,\bar C) \end{equation} where $\bar C$ is the image of $C$ in $\bar A(\OO)$ under the natural homomorphism $A(\OO)\twoheadrightarrow \bar A(\OO)$. There are also projection maps $\pr_1: \cN_{o,c} \to \cN_o$, $\pr_1: \cN_{o,\bar c} \to \cN_o$. We will typically write $\mathcal N^\vee$, $\mathcal N^\vee_o, \cN^{\vee}_{o,c}$, and $\cN^{\vee}_{o,\bar c}$ for the sets $\mathcal N$, $\mathcal N_o, \cN_{o,c}$, and $\cN_{o,\bar c}$ associated to the Langlands dual group $G^\vee$. When we wish to emphasise the group we are working with we include it as a superscript e.g. $\mathcal N_o^\bfG(k)$.
Recall the following classical result.
\begin{lemma}[Corollary 3.5, \cite{Pommerening} and Theorem 1.5, \cite{Pommerening2}]\label{lem:Noalgclosed}
Let $F$ be algebraically closed with good characteristic for $\bfG$.
Then there is a canonical isomorphism of partially ordered sets $\Theta_F:\mathcal N_o(F)\xrightarrow{\sim}\mathcal N_o$. \end{lemma}
\subsection{Duality for nilpotent orbits} \label{sec:duality} Write
\begin{equation}\label{eq:dBV} d: \cN_o \to \cN_o^{\vee}, \qquad d: \cN_o^{\vee} \to \cN_o. \end{equation}
for the \emph{Barbasch-Lusztig-Spaltenstein-Vogan duality maps} (see \cite[Chapter 3]{Spaltenstein} or \cite[Appendix A]{BarbaschVogan1985}). Write
\begin{equation}
d_S: \cN_{o,c} \twoheadrightarrow \cN^{\vee}_o, \qquad d_S: \cN^{\vee}_{o,c} \twoheadrightarrow \cN_o \end{equation}
for the duality maps defined by Sommers in \cite[Section 6]{Sommers2001} and
\begin{equation}
D: \cN_{o,\bar c} \to \cN^{\vee}_{o,\bar c}, \qquad D: \cN^{\vee}_{o,\bar c} \to \cN_{o,\bar c} \end{equation}
for the duality maps defined by Achar in \cite[Section 1]{Acharduality}. Since the latter two maps are less well known, we give a (slightly non-standard) account of their definitions and basic properties.
The original precursor to the duality map $d$ is the involution on the set of two sided cells of the finite Hecke algebra $\mathcal H$ attached to $G$ with equal parameters apparent in \cite{weylcells}. Upon composing this involution with the Springer correspondence (and identifying the two Hecke algebras obtained from $G$ and $G^\vee$) one obtains the duality map $d$ \cite[Chapter 3]{Spaltenstein}. The map $d$ is not an involution, but satisfies $d^3=d$. The reason is that the 2-sided cells of $\mathcal H$ only biject with $\im(d)$ (the so-called special nilpotent orbits) instead of the whole of $\mathcal N_o$, and so $d$ is only an involution when restricted to this set.
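In type $A$ the map $d$ is completely explicit: for $G=\mathrm{GL}_n(\CC)$, nilpotent orbits are labelled by partitions of $n$, the closure order is the dominance order, every orbit is special, and $d$ is transposition of Young diagrams (so $d^2=\mathrm{id}$, consistent with $d^3=d$). The sketch below, with helper names of our choosing, checks that transposition is an involution and reverses the dominance order for $n=6$.

```python
def partitions(n, max_part=None):
    # All partitions of n as weakly decreasing tuples.
    if n == 0:
        yield ()
        return
    if max_part is None:
        max_part = n
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def transpose(p):
    # Transposed Young diagram: its parts are the column lengths of p.
    return tuple(sum(1 for part in p if part > i) for i in range(p[0])) if p else ()

def dominates(p, q):
    # Dominance (= closure) order on partitions of the same integer.
    sp = sq = 0
    for a, b in zip(list(p) + [0] * len(q), list(q) + [0] * len(p)):
        sp, sq = sp + a, sq + b
        if sp < sq:
            return False
    return True

ps = list(partitions(6))
# In type A, d is an involution on the set of orbits ...
assert all(transpose(transpose(p)) == p for p in ps)
# ... and it reverses the closure order.
assert all(dominates(transpose(q), transpose(p))
           for p in ps for q in ps if dominates(p, q))
print(len(ps), "partitions checked")
```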
One drawback of this construction, however, is that the non-special nilpotent orbits only play a peripheral role. Here the affine Hecke algebra $\mathcal H_{aff}$ of $G^\vee$ with equal parameters hints at a remedy for the situation. In particular, the 2-sided cells of $\mathcal H_{aff}$ are in bijection with $\mathcal N_o^\vee$. Moreover, for any element $s\in T$, the finite Hecke algebra $\mathcal H_s$ associated to the connected centraliser $C_G^\circ(s)$ embeds into $\mathcal H_{aff}$. Along with an induction operation on 2-sided cells defined in \cite[Section 6.5]{cellsiv}, we are now in a position to define $d_S$. For a pair $(\OO,C)\in \mathcal N_{o,c}$ let $x\in \OO$ and $s$ be a semisimple element of $C_G(x)$ such that $sC_G^\circ(x)$ lies in $C$. Conjugating $s$ appropriately we may assume that it lies in $T$. Write $L_s = C_G^\circ(s)$ and $d^{L_s}$ for the duality map $d$ for the complex reductive group $L_s$. Then
$$d_S(\OO,C) = \Ind d^{L_s}(L_s.x).$$
Here we interpret $d^{L_s}(L_s.x)$ as a 2-sided cell of $\mathcal H_s$ and the resulting induced cell of $\mathcal H_{aff}$ as an element of $\mathcal N_o^\vee$ using the bijections outlined in this and the preceding paragraph. Unravelling these constructions recovers the clean definition given in \cite{Sommers2001} using Springer theory, and the resulting map $d_S:\mathcal N_{o,c}\to \mathcal N_o^\vee$ is well-defined and has image the whole of $\mathcal N_o^\vee$. Moreover, it is clear that $d_S(\OO,1) = d(\OO)$ and so it extends the usual map $d$.
This finally brings us to the definition of $D$. In defining $d_S$ we obtained a map which is no longer symmetric in target and source. Achar's duality map rectifies this. In \cite[Proposition 15]{Sommers2001}, Sommers shows that $d_S$ factors through $\mf Q:\mathcal N_{o,c}\to \mathcal N_{o,\bar c}$, and by \cite[Theorem 1]{Acharduality} we have an embedding
$$\mathcal N_{o,\bar c}\xhookrightarrow{i} \mathcal N_o\times \mathcal N_o^\vee, \quad (\OO,C)\mapsto (\OO,d_S(\OO,C)).$$
There is also of course the corresponding embedding for the dual object
$$\mathcal N^\vee_{o,\bar c}\xhookrightarrow{i^\vee} \mathcal N_o^\vee\times \mathcal N_o$$
and this suggests a very natural candidate for $D$, namely $D=(i^\vee)^{-1}\circ \tilde D \circ i$ where
$$\tilde D:\mathcal N_o\times \mathcal N_o^\vee \to \mathcal N_o^\vee\times \mathcal N_o, \quad (\OO,\OO^\vee) \mapsto (\OO^\vee,\OO).$$
Indeed, whenever $\tilde D\circ i(\xi) \in \im i^\vee$, this agrees with the definition of $D(\xi)$ given by Achar. However, in general not all $\xi \in \mathcal N_{o,\bar c}$ enjoy this property. Let us call $\xi \in \mathcal N_{o,\bar c}$ \emph{special} if $\tilde D\circ i(\xi) \in \im i^\vee$ and let us endow $\mathcal N_o\times \mathcal N_o^\vee$ with the partial order
$$(\OO_1,\OO_1^\vee)\le (\OO_2,\OO_2^\vee), \text{ if } \OO_1\le \OO_2, \OO_1^\vee\ge\OO_2^\vee$$
(which in turn endows $\mathcal N_{o,\bar c}$ with a partial order via the embedding $i$). By \cite[Theorem 2.4]{Acharduality}, every $\xi\in \mathcal N_{o,\bar c}$ is dominated by a unique smallest special $\xi'$. We can then define $D(\xi) = (i^\vee)^{-1}\circ \tilde D\circ i(\xi')$, and it is easily seen that this is equivalent to Achar's definition of $D$. The resulting map $D:\mathcal N_{o,\bar c}\to \mathcal N_{o,\bar c}^\vee$ is then a duality map in much the same way that $d$ is: it satisfies $D^3 = D$. It also extends $d_S$ in the sense that $\pr_1\circ D = d_S$. These results are enough for our purposes, but let us just note that, in obvious analogy to the preceding situations, one might wonder if $\im D$ is in bijection with the 2-sided cells of some Hecke algebra. This is, to the best of our knowledge, still unknown.
\subsection{Nilpotent orbits over $\kun$} Let $\bfT$ be a maximal $\sfk$-split torus of $\bfG$, $\bfT_1$ be a maximal $\kun$-split torus of $\bfG$ defined over $\sfk$ and containing $\bfT$, and $x_0$ be a special point in $\mathcal A(\bfT_1,\kun)$. In \cite[Section 2.1.5]{okada2021wavefront} the third-named author constructed a bijection $$\theta_{x_0,\bfT_1}:\mathcal N_o^{\bfG}(\kun)\xrightarrow{\sim}\mathcal N_{o,c}$$ which should be viewed as an analogue of the bijection in Lemma \ref{lem:Noalgclosed} for nilpotent orbits over $\kun$. This bijection enjoys a number of properties summarised in the following theorem. \begin{theorem}
\label{lem:paramNoK}
\cite[Theorem 2.20, Theorem 2.27, Proposition 2.29]{okada2021wavefront}
The bijection
$$\theta_{x_0,\bfT_1}:\mathcal N_o^{\bfG}(\kun)\xrightarrow{\sim}\mathcal N_{o,c}$$
is natural in $\bfT_1$, equivariant in $x_0$, and makes the following diagram commute:
\begin{equation}
\begin{tikzcd}[column sep = large]
\mathcal N_o^{\bfG}(\kun) \arrow[r,"\theta_{x_0,\bfT_1}"] \arrow[d,"\mathcal N_o(\bark/\kun)",swap] & \mathcal N_{o,c} \arrow[d,"\pr_1"] \\
\mathcal N_o(\bark) \arrow[r,"\Theta_{\bark}"] & \mathcal N_o.
\end{tikzcd}
\end{equation} \end{theorem} One important consequence of the equivariance in $x_0$ is that the composition
$$d^{un}:= d_S\circ \theta_{x_0,\bfT_1}:\cN_o(\kun)\to \cN_o^\vee$$
is independent of the choice of $x_0$ \cite[Proposition 2.32]{okada2021wavefront}. We suppress the $\bfT_1$ from the notation since this choice is implicit from fixing the root data and the dual group.
For $\OO_1,\OO_2\in \mathcal N_o(\kun)$ define $\OO_1\le_A\OO_2$ by
$$\OO_1\le_A \OO_2 \iff \mathcal N_o(\bark/\kun)(\OO_1) \le \mathcal N_o(\bark/\kun)(\OO_2),\text{ and } d^{un}(\OO_1)\ge d^{un}(\OO_2)$$
and let $\sim_A$ denote the equivalence relation induced by this pre-order. Recall from the previous section that $\cN_{o,\bar c}$ can be endowed with a partial ordering coming from the embedding $\cN_{o,\bar c}\hookrightarrow \cN_o\times \cN_o^\vee$. The following proposition parameterises $\cN_o(\kun)/\sim_A$ in terms of $\cN_{o,\bar c}$. \begin{prop}
\cite[Theorem 2.33]{okada2021wavefront}
\label{prop:unramclasses}
The composition $\mf Q\circ \theta_{x_0,\bfT_1}:\mathcal N_o(\kun)\to \mathcal N_{o,\bar c}$ descends to an isomorphism of partial orders
$$\bar\theta:\mathcal N_o(\kun)/\sim_A\to \mathcal N_{o,\bar c}$$
which does not depend on $x_0$. \end{prop}
\subsection{Representations with unipotent cuspidal support}\label{subsec:LLC} We briefly recall the classification of irreducible representations with unipotent cuspidal support defined in \cite[\S1.6,\S1.21]{Lu-unip1}. If $\mathcal S$ is a subset of $G^\vee$, let \[Z^1_{G^\vee_{\mathsf{sc}}}(\mathcal S)=\text{ preimage of }G^\vee(\mathcal S)/Z(G^\vee)\text{ under the projection } G^\vee_{\mathsf{sc}}\to G^\vee_{\ad}, \] and let $A^1(\mathcal S)$ denote the component group of $Z^1_{G^\vee_{\mathsf{sc}}}(\mathcal S)$.
Write $\Phi(G^\vee)$ for the set of $G^\vee$-orbits (under conjugation) of triples $(s,n,\rho)$ where \begin{itemize}
\item $s\in G^\vee$ is semisimple,
\item $n\in \mathfrak g^\vee$ such that $\operatorname{Ad}(s) n=q n$,
\item $\rho\in \mathrm{Irr}(A^1_{G^{\vee}}(s,n))$.
\end{itemize} Without loss of generality, we may assume that $s\in T^\vee$. Note that $n\in\mathfrak g^\vee$ is necessarily nilpotent. The group $G^\vee(s)$ acts with finitely many orbits on the $q$-eigenspace of $\Ad(s)$
$$\mathfrak g_q^\vee=\{x\in\mathfrak g^\vee\mid \operatorname{Ad}(s) x=qx\}.$$
In particular, there is a unique open $G^\vee(s)$-orbit in $\mathfrak g_q^\vee$.
Fix an $\mathfrak{sl}(2)$-triple $\{n^-,h^\vee,n\} \subset \fg^{\vee}$ with $h^\vee\in \mathfrak t^\vee_{\mathbb R}$ and set
$$s_0:=sq^{-\frac{h^\vee}{2}}.$$
Then $\operatorname{Ad}(s_0)n=n$: indeed, $[h^\vee,n]=2n$ gives $\operatorname{Ad}(q^{-\frac{h^\vee}{2}})n=q^{-1}n$, and so $\operatorname{Ad}(s_0)n=q^{-1}\operatorname{Ad}(s)n=n$.
A parameter $(s,n,\rho)\in \Phi(G^\vee)$ is called \emph{discrete} if $Z_{G^\vee}(s,n)$ does not contain a nontrivial torus. A discrete parameter is called \emph{cuspidal} if $(n,\rho)$ is a cuspidal pair in $Z_{G^\vee}(s)$ in the sense of Lusztig.
Let $\Pi^{\mathsf{Lus}}(\bfG^\omega(\mathsf k))$ denote the equivalence classes of irreducible $\bfG^\omega(\mathsf k)$-representations with unipotent cuspidal support and \[\Pi^{\mathsf{Lus}}(\bfG)=\bigsqcup_{\omega\in\Omega_{\mathsf{ad}}} \Pi^{\mathsf{Lus}}(\bfG^\omega(\mathsf k)). \] (In this subsection, $\bfG^1(\mathsf k)$ is the split form.) The following theorem is a combination of several results, namely \cite[Theorems 7.12, 8.2, 8.3]{KL} for $\bfG$ adjoint and Iwahori-spherical representations, \cite[Corollary 6.5]{Lu-unip1} and \cite[Theorem 10.5]{Lu-unip2} for $\bfG$ adjoint and representations with unipotent cuspidal support, \cite[Theorem 3.5.4]{Re-isogeny} for $\bfG$ arbitrary and Iwahori-spherical representations, and \cite{FengOpdamSolleveld2021,FengOpdam2020,Sol-LLC} for $\bfG$ arbitrary and representations with unipotent cuspidal support. See \cite[\S2.3]{AMSol} for a discussion of the compatibility between these classifications.
\begin{theorem}[{Deligne-Langlands-Lusztig correspondence}]\label{thm:Langlands} There is a bijection
$$\Phi(G^\vee)\xrightarrow{\sim} \Pi^{\mathsf{Lus}}(\bfG),
\qquad (s,n,\rho)\mapsto X(s,n,\rho),$$ such that \begin{enumerate}
\item $X(s,n,\rho)$ is tempered if and only if $s_0\in T_c^\vee$ (in particular, $\overline {G^\vee(s)n}=\mathfrak g_q^\vee$),
\item $X(s,n,\rho)$ is square integrable (modulo the center) if and only if it is tempered and $(s,n,\rho)$ is discrete.
\item $X(s,n,\rho)$ is supercuspidal if and only if $(s,n,\rho)$ is a cuspidal parameter.
\item $X(s,n,\rho)\in \Pi^{\mathsf{Lus}}(\bfG^\omega(\mathsf k))$ if and only if $\rho|_{Z(G^\vee)}$ is a multiple of $\zeta_\omega$. \end{enumerate} This bijection satisfies several natural desiderata (including formal degrees, equivariance with respect to tensoring by weakly unramified characters), see \cite[Theorem 1]{Sol-LLC} and \cite[Theorem 2]{FengOpdamSolleveld2021}. \end{theorem} We denote by $\Pi^{\mathsf{Lus}}_s(\bfG(\mathsf k))$ the set of irreducible $\bfG(\mathsf k)$-representations $X(s,n,\rho)$ for a fixed $s\in T^\vee$.
\subsection{Aubert-Zelevinsky duality}\label{subsec:AZ} There is an involution $\AZ$ on the Grothendieck group $R(\mathbf{G}(\sfk))$ of smooth $\mathbf{G}(\sfk)$-representations, called \emph{Aubert-Zelevinsky duality} \cite[\S1]{Au}. This involution can be defined in the following manner. Let $\mathcal Q$ denote the set of parabolic subgroups of $\bfG$ defined over $\mathsf k$ and containing $\mathbf B$. For every $\mathbf Q\in\mathcal Q$, let $i_{\mathbf Q(\mathsf k)}^{\mathbf{G}(\mathsf k)}$ and $\mathsf{r}_{\mathbf Q(\mathsf k)}^{\mathbf{G}(\mathsf k)}$ denote the normalized parabolic induction and normalized Jacquet functors, respectively. First define the operator $\widetilde{\mathsf{AZ}}$ by \begin{equation}
\widetilde{\mathsf{AZ}}: R(\mathbf{G}(\mathsf{k})) \to R(\mathbf{G}(\mathsf{k})), \qquad \widetilde{\mathsf{AZ}}(X)=\sum_{\mathbf Q\in \mathcal Q} (-1)^{r_{\mathbf Q}} ~i_{\mathbf Q(\mathsf k)}^{\bfG(\mathsf k)}(\mathsf{r}_{\mathbf Q(\mathsf k)}^{\bfG(\mathsf k)}(X)), \end{equation} where $r_{\mathbf Q}$ is the semisimple rank of the reductive quotient of $\mathbf Q$. If a class $X \in R(\mathbf{G}(\sfk))$ is irreducible and $X$ occurs as a composition factor of a parabolically induced module $i_{\mathbf Q(\mathsf k)}^{\bfG(\mathsf k)}(\sigma)$, where $\sigma$ is a supercuspidal representation, then, by \cite[Corollaire 3.9]{Au}, $(-1)^{r_{\mathbf Q}}\widetilde{\mathsf{AZ}}(X)$ is the class of an irreducible representation, which we denote by $\AZ(X)$.
Moreover, it is known that $\AZ$ is an involution on the set of irreducible representations in any Bernstein component, see \cite[\S3]{Au}, also \cite[\S3.2]{BBK}. In particular, it preserves the irreducible representations with unipotent cuspidal support, and in fact, it induces an involution on $\Pi^{\mathsf{Lus}}_s(\bfG(\mathsf k))$, for each $s\in W\backslash T^\vee$. In addition, by \cite[Theorem 1.1]{BM1} for Iwahori-spherical representations and \cite[Theorem 1.2.1]{BarCiu13} in general, $\AZ$ maps unitary representations to unitary representations in the category of representations with unipotent supercuspidal support.
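A basic and well-known example (included only for orientation): the Aubert-Zelevinsky dual of the trivial representation of $\bfG(\sfk)$ is the Steinberg representation,
$$\AZ(\mathrm{triv}) = \mathrm{St},$$
which already illustrates how $\AZ$ exchanges `small' representations with `large' ones while preserving supercuspidal support.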
\subsection{Wavefront sets}\label{s:wave} Let $X$ be an admissible smooth representation of $\bfG(\sfk)$ and let $\Theta_X$ be the character of $X$. Recall that for each nilpotent orbit $\OO\in \mathcal N_o^{\bfG}(\sfk)$ there is an associated distribution $\mu_\OO$ on $C_c^\infty(\mf g(\sfk))$ called the \emph{nilpotent orbital integral} of $\OO$ \cite{rangarao}. Write $\hat\mu_\OO$ for the Fourier transform of this distribution. By a result of Harish-Chandra (\cite{HarishChandra1999}), there are complex numbers $c_{\OO}(X) \in \CC$ such that
\begin{equation}\label{eq:localcharacter}\Theta_{X}(\mathrm{exp}(\xi)) = \sum_{\OO} c_{\OO}(X) \hat{\mu}_{\OO}(\xi)\end{equation}
for $\xi \in \fg(\sfk)$ a regular element in a small neighborhood of $0$. The formula (\ref{eq:localcharacter}) is called the \emph{local character expansion} of $X$. There are several important invariants which can be extracted from the local character expansion. The \textit{($p$-adic) wavefront set} of $X$ is defined to be
$$\WF(X) := \max\{\OO \mid c_{\OO}(X)\ne 0\} \subseteq \mathcal N_o(\sfk).$$
The \emph{geometric wavefront set} of $X$ is defined to be
$$^{\bark}\WF(X) := \max \{\mathcal N_o(\bark/\sfk)(\OO) \mid c_{\OO}(X)\ne 0\} \subseteq \mathcal N_o(\bark),$$
see \cite[p. 1108]{Wald18} (warning: in \cite{Wald18}, this invariant is called simply the `wavefront set' of $X$). In \cite[Section 2.2.3]{okada2021wavefront} the third author has introduced a third type of wavefront set called the \emph{canonical unramified wavefront set}. This is defined to be
$$\CUWF(X) := \max \{[\mathcal N_o(\kun/\sfk)(\OO)] \mid c_{\OO}(X)\ne 0\} \subseteq \mathcal N_o(\kun)/\sim_A.$$
Recall the parameterisations $\Theta_{\bark}:\cN_o(\bark)\to \cN_o$ and $\bar\theta:\cN_o(\kun)/\sim_A\to \cN_{o,\bar c}$. We define
$$\hphantom{ }^{\bar{\sfk}}\WF(X,\CC) := \Theta_{\bar \sfk}(\hphantom{ }^{\bar{\sfk}}\WF(X)), \quad \CUWF(X,\CC) := \bar \theta(\hphantom{ }\CUWF(X)).$$
We have the following basic relation between the geometric and canonical unramified wavefront sets.
\begin{prop}
\cite[Theorem 2.37]{okada2021wavefront}
\label{prop:cuwf}
If $\CUWF(X)$ is a singleton then
$$^{\bark}\WF(X,\CC) = \pr_1(\CUWF(X,\CC)).$$
\end{prop}
For our main result below, we will need a description of $\underline{\WF}(X,\CC)$ in the case when $X$ is an irreducible representation with unipotent cuspidal support and real infinitesimal character. Such a description is contained in the main result of \cite{CMBOunipotent} (see also \cite[Theorem 1.2.1]{CMBOIspherical} for the Iwahori-spherical case).
\begin{theorem}\label{thm:WFformula} Let $X=X(s,n,\rho)$ be an irreducible $\bfG(\mathsf k)$-representation with real infinitesimal character and let $\AZ(X) = X(s,n',\rho')$. Then \begin{enumerate}
\item $\underline{\WF}(X,\CC)$ is a singleton, and \[\underline{\WF}(X,\CC) = D(\OO^{\vee}_{\AZ(X)},1),\] where $\OO^{\vee}_{\AZ(X)}$ is the $G^\vee$-orbit of $n'$. \item Suppose $X=X(q^{\frac 12 h^\vee},n,\rho)$ where $h^\vee$ is the neutral element of a Lie triple attached to a nilpotent orbit $\OO^\vee\subset \mathfrak g^\vee$. Then \[D(\OO^{\vee}, 1) \leq \hphantom{ } \underline{\WF}(X,\CC). \] \end{enumerate} \end{theorem}
\subsection{Basic Arthur packets}\label{subsec:basic}
To formulate the relation between $\AZ$ and Arthur packets, it will be convenient to recall an alternative formulation of the Langlands classification. A \emph{simplified Langlands parameter} is a continuous homomorphism $\widetilde{\varphi}: W_{\sfk} \times \mathrm{SL}(2,\CC) \to G^{\vee}$ such that $\widetilde{\varphi}(W_k)$ consists of semisimple elements and $\widetilde{\varphi}|_{\mathrm{SL}(2,\CC)}$ is algebraic. A simplified Langlands parameter is tempered if the image of $W_k$ is compact (the idea of replacing the Weil-Deligne group $W_k'$ with $W_k \times \mathrm{SL}(2,\CC)$ in the non-Archimedean case was first proposed by Langlands in \cite[p. 209]{Langlands1979}). Although there are good reasons for preferring the Weil-Deligne group formulation, there is a bijection between simplified and honest Langlands parameters, see \cite[p. 278]{Knapp-Langlands}. Similarly, a \emph{simplified Arthur parameter} is a continuous homomorphism $\widetilde{\psi}: W_k \times \mathrm{SL}(2,\CC)_{\mathsf{Lan}} \times \mathrm{SL}(2,\CC)_{\mathsf{Art}} \to G^{\vee}$ such that the restriction of $\widetilde{\psi}$ to $W_k \times \mathrm{SL}(2,\CC)_{\mathsf{Lan}}$ is a tempered simplified Langlands parameter and the restriction of $\widetilde{\psi}$ to $\mathrm{SL}(2,\CC)_{\mathsf{Art}}$ is algebraic. The bijection between simplified and honest Langlands parameters induces a bijection between simplified and honest Arthur parameters.
If $\widetilde{\psi}$ is a simplified Arthur parameter, we get a second simplified Arthur parameter $\widetilde\psi^t$ by `flipping' the $\mathrm{SL}(2,\CC)$ factors, \cite[p. 390]{Arthur2013}, i.e.
$$\widetilde\psi^t(w,x,y) :=\widetilde \psi(w,y,x), \qquad w \in W_k, \ x,y \in \mathrm{SL}(2,\CC).$$
Via the bijection between simplified and honest Arthur parameters, the map $\widetilde{\psi} \mapsto \widetilde{\psi}^t$ induces an involution on honest Arthur parameters, which we also denote by $\psi \mapsto \psi^t$. It is expected that
\begin{equation}\label{eq:AZflip}\mathsf{AZ}(\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\mathsf k))) = \Pi_{\psi^t}^{\mathsf{Art}}(\mathbf{G}(\mathsf k)).\end{equation}
In the case when $\mathbf G$ is orthogonal or symplectic, this duality is discussed in \cite[\S7.1]{Arthur2013}. If the Arthur parameter $\psi$ is trivial on the inertia subgroup $I_k$, then the associated Langlands parameter $\phi_{\psi}$ is unramified. So the representations in the Langlands packet $\Pi_{\phi_{\psi}}^{\mathsf{Lan}}(\mathbf{G}(\mathsf k))$, and hence also in the Arthur packet $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\mathsf k))$, are of unipotent cuspidal support.
If $\widetilde\psi$ is trivial on $W_k \times \SL(2,\CC)_{\mathsf{Art}}$, then $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\mathsf k)) = \Pi_{\phi_{\psi}}^{\mathsf{Lan}}(\mathbf{G}(\mathsf k))$ consists of \emph{tempered} representations with real infinitesimal character. So if $\psi$ is basic (cf. Definition \ref{def:unipotentparam}), the desideratum (\ref{eq:AZflip}) suggests the following definition for the associated Arthur packet $\Pi_{\psi}^{\mathsf{Art}}(\mathbf{G}(\sfk))$, see \cite[(7.1.10)]{Arthur2013}.
\begin{definition} \label{def:basicpacket} Let $\OO^{\vee}$ be a nilpotent adjoint $G^{\vee}$-orbit and let $\psi_{\OO^{\vee}}$ be the associated basic Arthur parameter (cf. Definition \ref{def:unipotentparam}). Then
$$\Pi^{\mathsf{Art}}_{\psi_{\OO^{\vee}}}(\mathbf{G}(\mathsf k)) = \{X \in \Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\mathsf k)) \mid \AZ(X) \text{ is tempered}\}.$$ \end{definition}
\section{Main results}\label{sec:main-result}
Fix a nilpotent orbit $\OO^{\vee} \subset \cN^{\vee}$ and choose an $\mathfrak{sl}(2)$-triple $(e^{\vee},f^{\vee},h^{\vee})$ with $e^{\vee} \in \OO^{\vee}$. Recall that if $X = X(s,n,\rho)$, we write $\OO^{\vee}_X$ for the nilpotent adjoint $G^{\vee}$-orbit of $n$. The semisimple operator $\ad(h^{\vee})$ induces a Lie algebra grading
$$\fg^{\vee} = \bigoplus_{n \in \ZZ}\fg^{\vee}[n], \qquad \fg^{\vee}[n] := \{x \in \fg^{\vee} \mid [h^{\vee},x] = nx\}.$$
Write $L^{\vee}$ for the connected (Levi) subgroup corresponding to the centralizer $\fg^{\vee}_0$ of $h^{\vee}$. If we set $s := q^{\frac{1}{2}h^{\vee}}$, then
$$L^{\vee} = G^{\vee}(s), \qquad \fg^{\vee}[2] = \fg^{\vee}_q$$
where $G^{\vee}(s)$ and $\fg^{\vee}_q$ are as defined in Section \ref{subsec:LLC}. Note that $L^{\vee}$ acts by conjugation on each $\fg^{\vee}[n]$, and in particular on $\fg^{\vee}[2]$. We will need the following well-known facts, see \cite[Section 4]{Kostant1959} or \cite[Prop 4.2]{Lusztigperverse}.
\begin{lemma}\label{lem:orbitclosure} The following are true: \begin{itemize} \item[(i)] $L^{\vee}e^{\vee}$ is an open subset of $\fg^{\vee}[2]$ (and hence the unique open $L^{\vee}$-orbit therein). \item[(ii)] $L^{\vee}e^{\vee} = \OO^{\vee} \cap \fg^{\vee}[2]$. \item[(iii)] $G^{\vee}\fg^{\vee}[2] \subseteq \overline{\OO^{\vee}}$. \end{itemize} \end{lemma}
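To illustrate Lemma \ref{lem:orbitclosure} in the smallest case (this example is only for orientation), take $\fg^{\vee} = \mathfrak{sl}_2$ with $\OO^{\vee}$ the regular orbit. Then $h^{\vee} = \mathrm{diag}(1,-1)$, $L^{\vee}$ is the maximal torus, and $\fg^{\vee}[2] = \CC e^{\vee}$ is spanned by the positive root vector. The open orbit of (i) is
$$L^{\vee}e^{\vee} = \CC^{\times}e^{\vee} \subset \fg^{\vee}[2],$$
which is exactly $\OO^{\vee} \cap \fg^{\vee}[2]$ as in (ii), and $G^{\vee}\fg^{\vee}[2]$ is the full nilpotent cone $\overline{\OO^{\vee}}$, consistent with (iii).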
\begin{lemma}
\label{lem:tempclassification}
Let $X\in \Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\mathsf{k}))$.
Then $X$ is tempered if and only if $\OO^\vee_X = \OO^\vee$. \end{lemma} \begin{proof} Let $X = X(q^{\frac{1}{2}h^{\vee}},n,\rho)$ for $n \in \fg^{\vee}[2]$ and $\rho \in \mathrm{Irr}(A(q^{\frac{1}{2}h^{\vee}},n))$. Note that $s_0 = 1 \in T_c^{\vee}$. So
\begin{align*}
X \text{ is tempered} &\iff \overline{L^{\vee}n} = \fg^{\vee}[2] &&\text{(Theorem \ref{thm:Langlands}(i))}\\
&\iff n \in L^{\vee}e^{\vee} &&\text{(Lemma \ref{lem:orbitclosure}(i))}\\
&\iff n \in \OO^{\vee} &&\text{(Lemma \ref{lem:orbitclosure}(ii))}\\
&\iff \OO^{\vee}_X = \OO^{\vee}. \end{align*} \end{proof}
\begin{theorem}\label{thm:main} Let $\OO^{\vee}$ be a nilpotent adjoint $G^{\vee}$-orbit and let $\psi_{\OO^{\vee}}$ be the associated basic Arthur parameter (cf. Definition \ref{def:unipotentparam}). Then
\begin{equation}\label{eq:Arthurpacket}\Pi^{\mathsf{Art}}_{\psi_{\OO^{\vee}}}(\mathbf{G}(\sfk)) = \{X \in \Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\mathsf{k})) \mid \ \CUWF(X,\CC) \leq D(\OO^{\vee},1)\}.\end{equation} \end{theorem} \begin{proof} Let $X\in \Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\mathsf{k}))$. By Theorem \ref{thm:WFformula}, we have the bound
$$D(\OO^\vee,1)\le \CUWF(X,\CC).$$
Thus
\begin{equation}
\label{eq:bound}
\CUWF(X,\CC) \le D(\OO^\vee,1) \iff \CUWF(X,\CC) = D(\OO^\vee,1). \end{equation}
But by Theorem \ref{thm:WFformula} we also have
\begin{equation}
\label{eq:wf}
\CUWF(X,\CC) = D(\OO^\vee_{\AZ(X)},1). \end{equation}
Therefore
\begin{align*}
X\in \Pi^{\mathsf{Art}}_{\psi_{\OO^{\vee}}}(\mathbf{G}(\mathsf{k})) &\iff \AZ(X) \text{ is tempered} &&\text{(Definition \ref{def:basicpacket})} \\
&\iff \OO^\vee_{\AZ(X)} = \OO^\vee &&\text{(Lemma \ref{lem:tempclassification})}\\
&\iff D(\OO^\vee_{\AZ(X)},1)=D(\OO^\vee,1) &&\text{(\cite[Lemma 2.1.2]{CMBOIspherical})}\\
&\iff \CUWF(X,\CC)=D(\OO^\vee,1) &&\text{(Equation \ref{eq:wf})}\\
&\iff \CUWF(X,\CC)\le D(\OO^\vee,1) &&\text{(Equation \ref{eq:bound})}. \end{align*} \end{proof}
There is a well-known conjecture regarding the (geometric) wavefront sets of the constituents of an Arthur packet. The following formulation essentially appears in \cite[Conjecture 3.2]{jiangliu} (there it is stated only for classical groups; the version below is the natural generalization to arbitrary groups).
\begin{conj}[{cf. \cite[Conjecture 3.2]{jiangliu}}]\label{conj:jiangliu} Let $\psi$ be an Arthur parameter and let $\OO^{\vee}_{\psi}$ be the nilpotent $G^{\vee}$-orbit corresponding to the restriction of $\psi$ to $\mathrm{SL}(2,\CC)$. Then \begin{itemize}
\item[(i)] For every $X \in \Pi^{\mathsf{Art}}_{\psi}(\mathbf{G}(\sfk))$, there is a bound
$$\hphantom{ }^{\bark}\WF(X,\CC) \leq d(\OO^{\vee}_{\psi}).$$
\item[(ii)] The bound in (i) is achieved for some $X \in \Pi^{\mathsf{Art}}_{\psi}(\mathbf{G}(\sfk))$. \end{itemize} \end{conj}
We note that a global version of this conjecture appears in \cite{Shahidi90}. An immediate consequence of Theorem \ref{thm:main} is that Conjecture \ref{conj:jiangliu} holds for basic Arthur packets. In fact, something stronger is true for this class of packets.
\begin{cor}\label{cor:jiang} Let $\OO^{\vee}$ be a nilpotent adjoint $G^{\vee}$-orbit and let $\psi_{\OO^{\vee}}$ be the associated Arthur parameter. Then for every $X \in \Pi^{\mathsf{Art}}_{\psi_{\OO^{\vee}}}(\mathbf{G}(\sfk))$, there is an equality
$$\hphantom{ }^{\bark}\WF(X,\CC) = d(\OO^{\vee}).$$ \end{cor} \begin{proof}
Let $X \in \Pi^{\mathsf{Art}}_{\psi_{\OO^{\vee}}}(\mathbf{G}(\sfk))$.
By Theorem \ref{thm:main} we have that $\CUWF(X,\CC) = D(\OO^\vee,1)$.
Recall from the discussion in Section \ref{sec:duality} that $\pr_1\circ D = d_S$ and that $d_S(\OO^\vee,1) = d(\OO^\vee)$.
Moreover, by Proposition \ref{prop:cuwf}, we have that $^{\bark}\WF(X,\CC) = \pr_1(\CUWF(X,\CC))$.
Therefore
$$^{\bark}\WF(X,\CC) = \pr_1(\CUWF(X,\CC)) = \pr_1(D(\OO^\vee,1))= d(\OO^\vee).$$
\end{proof}
\subsection{Weak Arthur packets}
Let $\OO^{\vee}$ be a nilpotent adjoint $G^{\vee}$-orbit and let $\psi_{\OO^{\vee}}$ be the associated Arthur parameter. Theorem \ref{thm:main} gives a characterization of the associated Arthur packet $\Pi^{\mathsf{Art}}_{\psi_{\OO^\vee}}(\bfG(\mathsf k))$ in terms of the \emph{canonical unramified wavefront set} $\underline{\WF}(X)$. If we replace $\underline{\WF}(X)$ with the coarser invariant $\hphantom{ }^{\bar{\sfk}}\WF(X)$, we get a set which we will denote by $\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$: \begin{equation}\label{eq:weakpacket} \Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk)) := \{X = X(q^{\frac{1}{2}h^{\vee}},n,\rho) \mid \hphantom{ }^{\bar{\sfk}}\WF(X) \leq d(\OO^{\vee})\}. \end{equation}
Because of the compatibility between $\underline{\WF}(X)$ and $\hphantom{ }^{\bar{\sfk}}\WF(X)$, there is a containment
$$\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Art}}(\mathbf{G}(\sfk)) \subseteq \Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk)).$$
In contrast to the set $\Pi^{\mathsf{Art}}_{\psi_{\OO^\vee}}(\bfG(\mathsf k))$, the set $\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$ is \emph{not} the $\AZ$-dual of a tempered Arthur packet, nor is it parameterized (in any simple way) by representations of $A(\OO^{\vee})$. So it is not a reasonable candidate for an Arthur packet. By analogy with the case of real reductive groups (see \cite[Chapter 27]{AdamsBarbaschVogan}), we call $\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$ the \emph{weak Arthur packet} attached to $\OO^{\vee}$.
Using Theorem \ref{thm:WFformula} and Proposition \ref{prop:cuwf}, it is easy to describe the constituents of $\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$.
\begin{cor}\label{c:weak} The weak Arthur packet $\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$ is the set of irreducible representations $\mathsf{AZ}(X(q^{\frac{1}{2}h^{\vee}},n,\rho))$, where $n$ belongs to the special piece (in the sense of \cite{Spaltenstein}) of $\OO^\vee$. \end{cor}
Guided by the case of real reductive groups, we conjecture that $\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$ is a union of Arthur packets. Recall the notation of Section \ref{subsec:basic}.
\begin{conj}\label{conj:weakpacket} The weak Arthur packet $\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$ is a (possibly non-disjoint) union of several Arthur packets
\begin{equation}\label{eq:unionpackets}\Pi_{\psi_{\OO^{\vee}}}^{\mathsf{Weak}}(\mathbf{G}(\sfk)) = \bigcup_{i=1}^n \Pi_{\widetilde{\psi}_i}^{\mathsf{Art}}(\mathbf{G}(\sfk)),\end{equation}
corresponding to a collection of simplified Arthur parameters
\begin{equation}\{\widetilde{\psi}_i: W_{\mathsf k} \times \mathrm{SL}(2,\CC)_{\mathsf{Lan}} \times \mathrm{SL}(2,\CC)_{\mathsf{Art}} \to G^{\vee} \mid \ 1 \leq i \leq n\},\end{equation}
including the anti-tempered parameter $\widetilde{\psi}_{\OO^{\vee}}$, as well as several others. Each $\widetilde{\psi}_i$ is trivial on the Weil group $W_{\mathsf k}$ and thus corresponds to an algebraic homomorphism
$$\widetilde{\psi}_i: \mathrm{SL}(2,\CC)_{\mathsf{Lan}} \times \mathrm{SL}(2,\CC)_{\mathsf{Art}} \to G^{\vee}.$$
Via the Jacobson-Morozov theorem, each such $\widetilde{\psi}_i$ can be identified with a pair of nilpotent adjoint $G^{\vee}$-orbits
\begin{equation}\label{eq:commutingorbits}(\OO^{\vee}_{i,\mathsf{Art}},\OO^{\vee}_{i,\mathsf{Lan}}),\end{equation}
and a pair of semisimple elements in a single fixed Cartan subalgebra of $\fg^{\vee}$
\begin{equation}\label{eq:commuting semisimple}(h^{\vee}_{i,\mathsf{Art}}, h^{\vee}_{i,\mathsf{Lan}}).\end{equation}
These elements should satisfy
\begin{equation}\label{eq:inflcharsum}\frac{1}{2}h^{\vee} \in G^{\vee}(\frac{1}{2}h^{\vee}_{i,\mathsf{Art}} + \frac{1}{2}h^{\vee}_{i,\mathsf{Lan}}).\end{equation} \end{conj}
One might be tempted to guess that the set $\{\widetilde{\psi}_i\}$ of parameters appearing in Conjecture \ref{conj:weakpacket} consists of \emph{all} possible parameters satisfying the infinitesimal character condition (\ref{eq:inflcharsum}). But nothing quite so simple is true. The tempered Arthur packet corresponding to the pair $(\OO^{\vee}_{\mathsf{Art}},\OO^{\vee}_{\mathsf{Lan}}) = (0, \OO^{\vee})$ is almost never contained in $\Pi_{\OO^{\vee}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$ (see Section \ref{sec:example} for a detailed example). Thus, the problem of describing the Arthur packets in $\Pi_{\OO^{\vee}}^{\mathsf{Weak}}(\mathbf{G}(\sfk))$ seems interesting and nontrivial. We note that Conjecture \ref{conj:weakpacket} together with Corollary \ref{c:weak} implies the following conjecture (noting that $\mathsf{AZ}$ preserves unitarity):
\begin{conj} Let $(q^{\frac{1}{2}h^{\vee}},n,\rho)$ be a Deligne-Langlands-Lusztig parameter such that $n$ belongs to the special piece of $\OO^{\vee}$. Then the irreducible representation $X(q^{\frac 12 h^\vee},n,\rho)$ is unitary. \end{conj}
\section{Example (split $F_4$)}\label{sec:example}
Let $\mathbf{G}(\sfk)$ be the split form of the (unique) simple exceptional group of type $F_4$, and let $\OO^{\vee}=F_4(a_3)$, the minimal distinguished orbit. There are 20 representations $X_1,\dots,X_{20}$ in $\Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\sfk))$, enumerated in \cite[Section 5]{CiDirac}. All but one of these representations (namely, the representation labeled $X_5$) are Iwahori-spherical. For each $X_k = X(q^{\frac{1}{2}h^{\vee}},n,\rho)$ we record in Table \ref{table:f4a3} below the nilpotent orbit $G^{\vee}n \subset \cN^{\vee}$ and the representation $\rho$ of $A(s,n)$ (in all cases, $A(s,n)$ is a symmetric group, so $\rho$ corresponds to a partition). We also record $\AZ(X_k)$ and indicate whether $X_k$ is unitary. For these calculations, we refer the reader to \cite[Proposition 5.7]{CiDirac}. Using Theorem \ref{thm:Langlands}(i), it is easy to determine which representations are tempered (there are 5 such representations, labeled $X_1,\dots,X_5$). Finally, we compute the canonical unramified wavefront set $\CUWF(X_k)$. We have by Theorem \ref{thm:WFformula} that
\[\CUWF(\mathsf{AZ}(X_k)) = D(G^\vee n,1).\]
The right hand side can be computed using the tables in \cite[Section 6]{Acharduality}.
\begin{longtable}{|c|c|c|c|c|c|c|c|}
\hline
& $G^{\vee}n$ & $\rho$ & I-spherical? & Tempered? & Unitary? & $\mathsf{AZ}$ & $\CUWF$\\ \hline
$X_1$ & $F_4(a_3)$ & $(4)$ & yes & yes & yes & $X_{20}$ & $(F_4,1)$ \\ \hline
$X_2$ & $F_4(a_3)$ & $(31)$ & yes & yes & yes & $X_{19}$ & $(F_4(a_1),(12))$\\ \hline
$X_3$ & $F_4(a_3)$ & $(2^2)$ & yes & yes & yes & $X_{17}$ & $(F_4(a_1),1)$\\ \hline
$X_4$ & $F_4(a_3)$ & $(21^2)$ & yes & yes & yes & $X_{13}$ & $(C_3,1)$\\ \hline
$X_5$ & $F_4(a_3)$ & $(1^4)$ & no & yes & yes & $X_5$ & $(F_4(a_3), 1)$ \\ \hline
$X_6$ & $C_3(a_1)$ & $(2)$ & yes & no & yes & $X_{15}$ & $(F_4(a_2),1)$\\ \hline
$X_7$ & $C_3(a_1)$ & $(1^2)$ & yes & no & yes & $X_9$ & $(F_4(a_3),(1234))$\\ \hline
$X_8$ & $A_1+\widetilde{A}_2$ & $(1)$ & yes & no & yes & $X_8$ & $(F_4(a_3),(123))$\\ \hline
$X_9$ & $\widetilde{A}_1+A_2$ & $(1)$ & yes & no & yes & $X_7$ & $(F_4(a_3),(12))$\\ \hline
$X_{10}$ & $B_2$ & $(2)$ & yes & no & yes & $X_{18}$ & $(F_4(a_1),1)$\\ \hline
$X_{11}$ & $B_2$ & $(1^2)$ & yes & no & yes & $X_{11}$ & $(F_4(a_3),(12)(34))$\\ \hline
$X_{12}$ & $A_2$ & $(2)$ & yes & no & yes & $X_{14}$ & $(B_3,1)$\\ \hline
$X_{13}$ & $A_2$ & $(1^2)$ & yes & no & yes & $X_4$ & $(F_4(a_3),1)$\\ \hline
$X_{14}$ & $\widetilde{A}_2$ & $(1)$ & yes & no & yes & $X_{12}$ & $(C_3,1)$\\ \hline
$X_{15}$ & $A_1+\widetilde{A}_1$ & $(1)$ & yes & no & yes & $X_6$ & $(F_4(a_3),(12))$\\ \hline
$X_{16}$ & $A_1+\widetilde{A}_1$ & $(1)$ & yes & no & no & $X_{16}$ & $(F_4(a_2),1)$ \\ \hline
$X_{17}$ & $\widetilde{A}_1$ & $(2)$ & yes & no & yes & $X_3$ & $(F_4(a_3),1)$ \\ \hline
$X_{18}$ & $\widetilde{A}_1$ & $(1^2)$ & yes & no & yes & $X_{10}$ & $(F_4(a_3),(12)(34))$\\ \hline
$X_{19}$ & $A_1$ & $(1)$ & yes & no & yes & $X_2$ & $(F_4(a_3),1)$ \\ \hline
$X_{20}$ & $0$ & $(1)$ & yes & no & yes & $X_1$ & $(F_4(a_3),1)$\\ \hline
\caption{Irreducible representations of split $F_4$ with infinitesimal character $q^{h^{\vee}/2}$ for $\OO^{\vee} = F_4(a_3)$.}
\label{table:f4a3}
\end{longtable}
Note that $D(\OO^{\vee},1) = (F_4(a_3),1)$, see \cite[Section 6]{Acharduality}. So the Arthur packet attached to $\OO^{\vee}$ is the set
\begin{equation}\label{eq:smallset} \Pi_{\OO^{\vee}}^{\mathsf{Art}}(\mathbf{G}(\mathsf{k})) = \{X_5,X_{13},X_{17},X_{19},X_{20}\}.\end{equation}
Note that this is precisely the set of anti-tempered representations in $\Pi^{\mathsf{Lus}}_{q^{\frac{1}{2}h^{\vee}}}(\mathbf{G}(\sfk))$, as predicted by Theorem \ref{thm:main}.
On the other hand, the \emph{weak Arthur packet} attached to $\OO^{\vee}$, see (\ref{eq:weakpacket}), is the much larger set
\begin{equation}\label{eq:bigset} \Pi^{\mathsf{Weak}}_{\OO^{\vee}}(\mathbf{G}(\sfk)) = \{X_5,X_7,X_8,X_9,X_{11}, X_{13},X_{15},X_{17},X_{18},X_{19},X_{20}\}.\end{equation}
In this case, we can attempt to verify Conjecture \ref{conj:weakpacket} by hand.
There are 10 simplified Arthur parameters $\widetilde{\psi}: \mathrm{SL}(2,\CC)_{\mathsf{Lan}} \times \mathrm{SL}(2,\CC)_{\mathsf{Art}} \to G^{\vee}$ satisfying the infinitesimal character condition (\ref{eq:inflcharsum}). The corresponding pairs $(\OO_{\mathsf{Lan}}^\vee,\OO_{\mathsf{Art}}^\vee)$ of nilpotent orbits are \[(0,F_4(a_3)), \ (A_1,C_3(a_1)), \ (\widetilde A_1, B_2), \ (A_1+\widetilde A_1, A_1+\widetilde A_2), \ (\widetilde A_1+A_2,\widetilde A_1+A_2), \]
together with the `flips' of these pairs. We believe that the Arthur packets contained in the weak Arthur packet (\ref{eq:bigset}) are as in Table \ref{ta:table2}. This is our naive guess for the decomposition of the weak Arthur packet; it is compatible with (\ref{eq:AZflip}), Conjecture \ref{conj:Arthur}(i),(ii), and the expectation for the occurrence of the supercuspidal representation in the Arthur packets.
\begin{table}[H]
\begin{tabular}{|c|c|c|} \hline
$\OO_{\mathsf{Lan}}^\vee$ &$\OO_{\mathsf{Art}}^\vee$ &Arthur packet\\
\hline
$0$ &$F_4(a_3)$ &\{$X_5, X_{13}, X_{17}, X_{19}, X_{20}\}$\\
\hline
$\widetilde A_1$ &$B_2$ &$\{X_5,X_{17}, X_{18}, X_{11}\}$\\
\hline
$A_1+\widetilde A_1$ &$A_1+\widetilde A_2$ &$\{X_5,X_{15}, X_8\}$\\
\hline
$\widetilde A_1+A_2$ &$\widetilde A_1+A_2$ & $\{X_5,X_9, X_7\}$\\
\hline \end{tabular} \caption{Arthur packets in weak packet attached to $F_4(a_3)$.} \label{ta:table2} \end{table}
\begin{sloppypar} \printbibliography[title={References}] \end{sloppypar}
\end{document}
\begin{document}
\title{Discussion of: ``A Bayesian information criterion for singular models''} \author{N. Friel$^{1,4}$, J. P. McKeone$^2$, C. J. Oates$^{3,5}$, A. N. Pettitt$^{2,5}$\\ \small $^1$ School of Mathematics and Statistics, University College Dublin, Ireland \\ \small $^2$ School of Mathematical Sciences, Queensland University of Technology, Australia \\ \small $^3$ School of Mathematical and Physical Sciences, University of Technology Sydney, Australia \\ \small $^4$ Insight: Centre for Data Analytics, Ireland \\ \small $^5$ ARC Centre of Excellence for Mathematical and Statistical Frontiers, Australia } \date{} \maketitle
The authors should be congratulated on their thought-provoking contribution in \cite{Drton2017}.
Our discussion focuses on the widely applicable Bayesian information criterion \citep[WBIC;][]{Watanabe2013}, an approximation to the model evidence that is valid for (both non-singular and) singular models. The WBIC combines approximation and computation; its evaluation requires Monte Carlo (MC) but approximation is used to reduce this computational cost compared to ``exact'' MC methods \citep[e.g.][]{Friel2008}. In contrast to the singular Bayesian information criterion \citep[sBIC;][]{Drton2017}, analytic bounds on learning coefficients are not required for WBIC. One can therefore implement WBIC in more general settings than sBIC if one can afford the associated MC computational cost. A second important difference is that the prior is explicitly required for WBIC whereas it is only used implicitly in sBIC. The performance of WBIC can be sensitive to the prior, a basic characteristic of Bayesian model choice! For the galaxy data, in particular, \cite{Cameron2014} demonstrated prior sensitivity for the posterior model probabilities for the number of clusters.
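To make the computational trade-off concrete, here is a minimal sketch of the WBIC recipe of \cite{Watanabe2013} on a toy conjugate model of our own choosing (a Gaussian mean model; the model, sample size, and variable names are our illustration, not taken from either paper). WBIC is the posterior mean of the log-likelihood under the posterior tempered at inverse temperature $\beta = 1/\log n$; conjugacy lets us sample that tempered posterior exactly, and the exact log evidence is available in closed form for comparison.

```python
import numpy as np

# Toy model (our illustration): x_i ~ N(mu, 1), prior mu ~ N(0, 1).
rng = np.random.default_rng(0)
n = 100
x = rng.normal(0.5, 1.0, size=n)

beta = 1.0 / np.log(n)        # WBIC temperature (Watanabe 2013)
S, Q = x.sum(), (x ** 2).sum()

# Tempered posterior at inverse temperature beta is Gaussian by conjugacy:
# precision = beta*n + 1, mean = beta*S / precision.
prec = beta * n + 1.0
mus = rng.normal(beta * S / prec, prec ** -0.5, size=20000)

def loglik(mu):
    """Full-data log-likelihood at each value in the array mu."""
    return (-0.5 * n * np.log(2 * np.pi)
            - 0.5 * ((x[None, :] - mu[:, None]) ** 2).sum(axis=1))

# WBIC: Monte Carlo average of the log-likelihood over the tempered posterior.
wbic = loglik(mus).mean()

# Exact log evidence: marginally x ~ N(0, I + 11^T), det(I + 11^T) = 1 + n.
log_evidence = (-0.5 * n * np.log(2 * np.pi)
                - 0.5 * np.log(1.0 + n)
                - 0.5 * (Q - S ** 2 / (1.0 + n)))
max_loglik = loglik(np.array([S / n]))[0]   # log-likelihood at the MLE
print(wbic, log_evidence)
```

For this regular model the sketch recovers the evidence closely; in singular models the same recipe applies but the quality of the approximation is exactly what is at issue in the discussion above.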
It is demonstrated in Sec. 5.1 (rank selection example) that the performance of WBIC is inferior to that of sBIC. The reason for this difference is not discussed. Here we complement these results with our own, which show that WBIC tends to over-estimate model evidence for Gaussian mixture models (GMMs). This is, of course, particularly clear when the number $n$ of data is small, since WBIC is an asymptotic approximation; see Fig. \ref{fig:Friel2016}. Results in Sec. 6.2 do not indicate whether sBIC also over-estimates model evidence for GMMs, but Fig. 4 (factor analysis) suggests that sBIC is not over-confident in a different class of problem. It would be interesting to see whether sBIC has an advantage over WBIC in the GMM example in terms of avoiding over-confident approximation of model evidence terms.
The sophistication of modern statistical models demands intelligent approximation methods. Tractable approximations to model evidence are an important research goal, whether based on asymptotic results or on efficient numerical approximation methods. In particular, promising research directions include the use of approximate MC methods \citep{Alquier2016} and variance reduction techniques \citep{Oates2016}.
\begin{figure}
\caption{$n=50$}
\caption{$n=1000$}
\caption{Finite Gaussian mixture model: WBIC against the power posterior estimate of the exact model evidence. (a) Sample of size $n = 50$. (b) Sample of size $n = 1000$. \citep[Reproduced from][each point represents an independent dataset (100 in total)]{Friel2016}. }
\label{fig:Friel2016}
\end{figure}
\end{document}
\begin{document}
\title{Arithmetic graphs and the products of finite groups\footnote{This work is supported by BFFR $\Phi23\textrm{PH}\Phi\textrm{-}237$ }} \author{Viachaslau I. Murashka\footnote{e-mail: mvimath@yandex.by} \footnote{Faculty of Mathematics and Technologies of Programming, Francisk Skorina Gomel State University, Sovetskaya 104, Gomel, 246028, Belarus}} \date{}
\maketitle \begin{abstract} The Hawkes graph $\Gamma_H(G)$ of $G$ is the directed graph whose vertex set coincides with $\pi(G)$ and which has the edge $(p, q)$ whenever $q\in\pi(G/O_{p',p}(G))$. The Sylow graph $\Gamma_s(G)$ of $G$ is the directed graph with vertex set $\pi(G)$ in which $(p, q)$ is an edge of $\Gamma_s(G)$ whenever $q \in\pi(N_G(P)/PC_G(P))$ for some Sylow $p$-subgroup $P$ of $G$. The $N$-critical graph $\Gamma_{Nc}(G)$ of a group $G$ is the directed graph whose vertex set coincides with $\pi(G)$ such that $(p, q)$ is an edge of $\Gamma_{Nc}(G)$ whenever $G$ contains a Schmidt $(p, q)$-subgroup, i.e. a Schmidt $\{p, q\}$-subgroup with a normal Sylow $p$-subgroup. In this paper the Hawkes, the Sylow and the $N$-critical graphs of the products of totally permutable, mutually permutable and $\mathfrak{N}$-connected subgroups are studied.\\ \textbf{Keywords}: finite group; Hawkes graph; Sylow graph; $N$-critical graph; totally permutable product; mutually permutable product; $\mathfrak{N}$-connected subgroups.\\ \textbf{MSC2020}: 20D40. \end{abstract}
\section{Introduction}
All groups considered are \textbf{finite}, $G$ always denotes a group and $\pi(G)$ is the set of all prime divisors of $|G|$. If a graph has no isolated vertices, then we will define it just by its edges.
Since 1878, many papers have assigned a certain graph to every group and studied the connection between the geometry of the graph and the properties of the group (for example, see \cite{Abe2000, Cayley1878, Hawkes1968, Kazarin2011, Kondratev1990, Lucchini2009, VM, Williams1981}). Among such graphs there is an interesting family of arithmetic graphs, i.e. graphs whose vertices are the prime divisors of the group's order.
In 1968 Hawkes \cite{Hawkes1968} introduced the directed graph $\Gamma_H(G)$ of $G$ whose vertex set coincides with $\pi(G)$ and it has the edge $(p, q)$ whenever $q\in\pi(G/O_{p',p}(G))$. This graph has many interesting properties \cite{Hawkes1968, Vasilev2022, VM}. For example \cite{Hawkes1968}, if it does not have a loop $(p, p)$, then the $p$-length of $G$ is at most 1.
Recall \cite{DAniello2007, Kazarin2011} that the Sylow graph $\Gamma_s(G)$ of $G$ is the directed graph with vertex set $\pi(G)$ and $(p, q)$ is an edge of $\Gamma_s(G)$ whenever $q \in\pi(N_G(P)/PC_G(P))$ for some Sylow $p$-subgroup $P$ of $G$. For applications and properties of this graph see \cite{DAniello2007, Kazarin2011, Murashka2022, VM}. In particular \cite{Murashka2022}, every connected component of $\Gamma_s(G)$ corresponds to a normal Hall subgroup of $G$.
Recall that a Schmidt $(p, q)$-group is a Schmidt group (i.e. a non-nilpotent group all of whose proper subgroups are nilpotent) $G$ with $\pi(G) = \{p, q\}$ and a normal Sylow $p$-subgroup. The $N$-critical graph $\Gamma_{Nc}(G)$ \cite{VM} of a group $G$ is the directed graph whose vertex set coincides with $\pi(G)$ such that $(p, q)$ is an edge of $\Gamma_{Nc}(G)$ whenever $G$ contains a Schmidt $(p, q)$-subgroup. For its properties and applications see \cite{Murashka2021, IMurashka2021, VM}.
Recall that $\mathfrak{N}$ denotes the class of all nilpotent groups. Following Carocca \cite{Carocca1996}, subgroups $H$ and $K$ are called $\mathfrak{N}$-connected if $\langle x, y\rangle\in\mathfrak{N}$ for every $x\in H$ and $y\in K$. Products of $\mathfrak{N}$-connected subgroups were studied in \cite{Carocca1996, Francalanci2021, Hauck2003} and elsewhere. Here we prove
\begin{theorem}\label{thm1}
Let a group $G$ be the product of pairwise permutable and $\mathfrak{N}$-connected subgroups $G_1,\dots, G_n$. Then
$$ \Gamma_s(G)=\bigcup_{i=1}^n\Gamma_s(G_i), \Gamma_H(G)=\bigcup_{i=1}^n\Gamma_H(G_i), \Gamma_{Nc}(G)=\bigcup_{i=1}^n\Gamma_{Nc}(G_i).$$ \end{theorem}
Recall that $G=AB$ is called a totally permutable product of subgroups $A$ and $B$ if every subgroup of $A$ permutes with every subgroup of $B$. Asaad and Shaalan [4] proved that a totally permutable product of two supersoluble groups is also supersoluble. This result started the study of totally permutable products in connection with the theory of classes of groups (for example, see \cite[Chapter 4]{PFG}).
\begin{theorem}\label{thm2}
Let a group $G$ be the product of pairwise totally permutable subgroups $G_1,\dots, G_n$ and $\Gamma(G)=\{(p, q)\mid p, q\in\pi(G), q\in\pi(p-1)\}$. Then
$$ \Gamma_s(G)\subseteq\bigcup_{i=1}^n\Gamma_s(G_i)\cup \Gamma(G), \Gamma_H(G)\subseteq\bigcup_{i=1}^n\Gamma_H(G_i)\cup \Gamma(G), \Gamma_{Nc}(G)\subseteq\bigcup_{i=1}^n\Gamma_{Nc}(G_i)\cup \Gamma(G).$$ \end{theorem}
\begin{example}
The symmetric group $S_3$ of degree 3 is a totally permutable product of cyclic groups $Z_3$ and $Z_2$ of orders 3 and 2 respectively. Note that $(3, 2)$ is the unique edge of the Sylow graph, the Hawkes graph and the $N$-critical graph of $S_3$ and the Sylow graphs, the Hawkes graphs and the $N$-critical graphs of $Z_3$ and $Z_2$ have no edges. Thus $\Gamma(S_3)\not\subseteq\Gamma(Z_3)\cup\Gamma(Z_2)$ for $\Gamma\in\{\Gamma_s, \Gamma_H, \Gamma_{Nc}\}$. \end{example}
\begin{remark}
Theorems \ref{thm1} and \ref{thm2} follow from a more general result (see Theorem \ref{thm}). \end{remark}
Recall \cite[Definition 4.1.1]{PFG} that a group $G$ is called a mutually permutable product of its subgroups $A$ and $B$ if $G=AB$, $A$ permutes with every subgroup of $B$ and $B$ permutes with every subgroup of $A$. The products of mutually permutable subgroups are widely studied (see \cite[Chapter 4]{PFG}).
\begin{theorem}\label{mut} Let $G=AB$ be a mutually permutable product of its subgroups $A$ and $B$ and $\Gamma(A, B)=\{(p, q)\mid p\in\pi(A), q\in\pi(B)\cap\pi(p-1)\textrm{ or } p\in\pi(B), q\in\pi(A)\cap\pi(p-1)\}$. Then $$\Gamma_{Nc}(A)\cup\Gamma_{Nc}(B)\subseteq \Gamma_{Nc}(G)\subseteq \Gamma_{Nc}(A)\cup \Gamma_{Nc}(B)\cup\Gamma(A, B)\textrm{ and }$$ $$\Gamma_{H}(A)\cup\Gamma_{H}(B)\subseteq \Gamma_{H}(G)\subseteq \Gamma_{H}(A)\cup \Gamma_{H}(B)\cup\Gamma(A, B)\cup\{(p,p)\mid p\in\pi(G)\}.$$ \end{theorem}
\begin{example}
Note that the symmetric group $S_4$ of degree 4 is a mutually permutable product of its Sylow 2-subgroup $P$ and the alternating group $A_4$ of degree 4. Now $E(\Gamma_H(P))=\emptyset, E(\Gamma_H(A_4))=\{(2,3)\}, E(\Gamma(P, A_4))=\{(3,2)\}$ and $E(\Gamma_H(S_4))=\{(2,2), (2,3), (3,2)\}$. Hence $\Gamma_H(S_4)\not\subseteq\Gamma_H(P)\cup\Gamma_H(A_4)\cup\Gamma(P, A_4)$. \end{example}
Recall \cite{Vasilev} that a formation $\mathfrak{F}$ has the Shemetkov property if every $s$-critical for $\mathfrak{F}$ group is a Schmidt group or a group of prime order. For various properties and applications of such formations see \cite[Chapter 6.4]{BallesterBollinches2006}.
\begin{corollary}\label{Shemetkov} A hereditary formation $\mathfrak{F}$ with the Shemetkov property is closed under taking products of mutually permutable $\mathfrak{F}$-subgroups if and only if it contains all supersoluble Schmidt $\pi(\mathfrak{F})$-groups. \end{corollary}
\begin{corollary}[{\cite[Theorem 2]{Beidleman2005}}]\label{Beidleman}
Let $p$ be a prime and $\pi$ be a $p$-special set of primes $($i.e. $q\not\in\pi$ whenever $p$ divides $q(q-1))$. If $G$ is the mutually permutable product of two subgroups $A$ and $B$ which are normal extensions of $p$-groups by $\pi$-groups, then the same is true for $G$.
\begin{theorem}\label{thm4}
Let a group $G$ be a product of pairwise mutually permutable soluble subgroups $G_1,\dots, G_n$ and $\Gamma(G_i, G_j)$ be defined the same way as in Theorem \ref{mut}. Then
$$ \Gamma_H(G)\subseteq\bigcup_{1\leq i\leq n}\Gamma_H(G_i)\cup\bigcup_{1\leq i, j\leq n, i\neq j} \Gamma(G_i, G_j)\cup\{(p,p)\mid p\in\pi(G)\}\textrm{ and}$$ $$\Gamma_{Nc}(G)\subseteq\bigcup_{1\leq i\leq n}\Gamma_{Nc}(G_i)\cup\bigcup_{1\leq i, j\leq n, i\neq j} \Gamma(G_i, G_j).$$ \end{theorem}
The following result for $n=2$ was proved in \cite[Corollary 7]{Vasilev2017}.
\begin{corollary}\label{cor41} Let a group $G$ be a product of pairwise mutually permutable subgroups $G_1,\dots, G_n$. If every Schmidt subgroup of $G_1, \dots, G_n$ is supersoluble, then every Schmidt subgroup of $G$ is supersoluble. \end{corollary}
\section{Preliminaries}
Here $\pi(n)$ is the set of prime divisors of $n$; $\pi(\mathfrak{F})=\cup_{G\in\mathfrak{F}}\pi(G)$; $S_n$ and $A_n$ are the symmetric and the alternating group of degree $n$ respectively; $Z_n$ is the cyclic group of order $n$; $\Phi(G)$ is the Frattini subgroup of $G$;
$\mathrm{O}_\pi(G)$ is the greatest normal $\pi$-subgroup of a group $G$ for a set of primes $\pi$. If $\pi=\{p\}$, then $\mathrm{O}_\pi(G)$ is denoted by $\mathrm{O}_p(G)$. If $\pi=\mathbb{P}\setminus\{p\}$, then $\mathrm{O}_\pi(G)$ is denoted by $\mathrm{O}_{p'}(G)$; $\mathrm{O}_{p',p}(G)$ is the greatest normal $p$-nilpotent subgroup of a group $G$. It can be defined by $\mathrm{O}_{p',p}(G)/\mathrm{O}_{p'}(G)=\mathrm{O}_p(G/\mathrm{O}_{p'}(G))$.
Recall that here a (directed) graph $\Gamma$ is a pair of sets $V(\Gamma)$ and $E(\Gamma)$, where $V(\Gamma)$ is the set of vertices of $\Gamma$ and $E(\Gamma)$ is the set of edges of $\Gamma$, i.e. a set of ordered pairs of elements from $V(\Gamma)$. An edge $(v, v)$ is called a loop. Two graphs $\Gamma_1$ and $\Gamma_2$ are called equal (denoted by $\Gamma_1 = \Gamma_2$) if $V(\Gamma_1) = V(\Gamma_2)$ and $E(\Gamma_1) = E(\Gamma_2)$. A graph $\Gamma_1$ is called a subgraph of $\Gamma_2$ (denoted by $\Gamma_1\subseteq\Gamma_2$) if $V(\Gamma_1) \subseteq V(\Gamma_2)$ and $E(\Gamma_1) \subseteq E(\Gamma_2)$. A graph $\Gamma$ is called the union of graphs $\Gamma_1$ and $\Gamma_2$ (denoted by $\Gamma = \Gamma_1\cup\Gamma_2$) if $V(\Gamma) = V(\Gamma_1)\cup V(\Gamma_2)$ and $E(\Gamma) = E(\Gamma_1)\cup E(\Gamma_2)$.
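The set-theoretic conventions just recalled can be sketched directly (an illustration of ours; the function names are arbitrary). The sample graphs encode the example after Theorem 2: the three graphs of $S_3$ all have vertex set $\{2,3\}$ and the single edge $(3,2)$, while those of $Z_3$ and $Z_2$ have no edges.

```python
# Directed graphs as (vertices, edges) pairs of frozensets; an
# illustration of the equality/subgraph/union conventions above.
def graph(vertices, edges):
    return (frozenset(vertices), frozenset(edges))

def is_subgraph(g1, g2):
    # g1 is a subgraph of g2 iff both components are contained.
    return g1[0] <= g2[0] and g1[1] <= g2[1]

def union(g1, g2):
    # Componentwise union of vertex and edge sets.
    return (g1[0] | g2[0], g1[1] | g2[1])

# Gamma_s, Gamma_H and Gamma_Nc of S_3, Z_3 and Z_2 (edges from the paper):
gamma_s3 = graph({2, 3}, {(3, 2)})
gamma_z3 = graph({3}, set())
gamma_z2 = graph({2}, set())
```

With these definitions, `union(gamma_z3, gamma_z2)` has vertex set $\{2,3\}$ but no edges, so `gamma_s3` is not a subgraph of it, exactly as the example states.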
Let $\Gamma\in\{\Gamma_s, \Gamma_H, \Gamma_{Nc}\}$ and $\mathfrak{X}$ be a class of groups. Recall \cite[Definition 3.1]{VM} that $$\Gamma(\mathfrak{X})=\bigcup_{G\in\mathfrak{X}}\Gamma(G).$$
\begin{lemma}[{\cite[Theorem 2.7]{VM}}]\label{lem1} Let $G$ be a group. Then
\begin{enumerate}
\item If $\Gamma\in\{\Gamma_H, \Gamma_{Nc}\}$, then $\Gamma(H)\subseteq\Gamma(G)$ for any $H\leq G$.
\item If $\Gamma\in\{\Gamma_s, \Gamma_H, \Gamma_{Nc}\}$, then $\Gamma(G/N)\subseteq\Gamma(G)$ for any $N\trianglelefteq G$.
\item If $\Gamma\in\{\Gamma_s, \Gamma_H, \Gamma_{Nc}\}$, then $\Gamma(G/N_1)\cup\Gamma(G/N_2)=\Gamma(G)$ for any $N_1,N_2\trianglelefteq G$ with
\linebreak $N_1\cap N_2=1$.
\item If $\Gamma\in\{\Gamma_H, \Gamma_{Nc}\}$, then $\Gamma(N_1)\cup\Gamma(N_2)=\Gamma(G)$ for any $N_1,N_2\trianglelefteq G$.
\item If $\Gamma\in\{\Gamma_s, \Gamma_H, \Gamma_{Nc}\}$, then $\Gamma(G_1\times\dots\times G_n)=\Gamma(G_1)\cup\dots\cup\Gamma(G_n)$ for any groups $G_1, \dots,G_n$.
\end{enumerate} \end{lemma}
Let $\mathfrak{X}$ be a class of groups. Recall that a chief factor $H/K$ of $G$ is called $\mathfrak{X}$-\emph{central} in $G$ (see \cite[p. 127--128]{s6}) provided that the semidirect product $(H/K)\rtimes (G/C_G(H/K))$ of $H/K$ with $G/C_G(H/K)$, corresponding to the action by conjugation of $G$ on $H/K$, belongs to $\mathfrak{X}$. The $\mathfrak{X}$-\emph{hypercenter} $\mathrm{Z}_\mathfrak{X}(G)$ of $G$ is the greatest normal subgroup of $G$ such that every chief factor of $G$ below it is $\mathfrak{X}$-central (it exists according to \cite[Lemma 14.1]{s6}). If $\mathfrak{X}=\mathfrak{N}$ is the class of all nilpotent groups, then $\mathrm{Z}_\mathfrak{N}(G)$ is the hypercenter $\mathrm{Z}_\infty(G)$ of $G$.
\section{The proof of Theorems \ref{thm1} and \ref{thm2}}
Recall \cite[Proposition 1(8)]{Hauck2003} that if $G=G_1\dots G_n$ is the product of pairwise permutable and $\mathfrak{N}$-connected subgroups, then $[G_i, \prod_{j=1, j\neq i}^nG_j]\leq \mathrm{Z}_\infty(G)$ for any $i\in \{1,\dots, n\}$. According to \cite[Lemma 4.2.12]{PFG} if $G=G_1\dots G_n$ is the product of totally permutable subgroups, then $[G_i, \prod_{j=1, j\neq i}^nG_j]\leq \mathrm{Z}_\mathfrak{U}(G)$ for any $i\in \{1,\dots, n\}$ where $\mathfrak{U}$ stands for the class of all supersoluble groups. These observations lead us to the following definition.
\begin{definition} We say that $G$ is the product of subgroups $G_1, G_2\dots, G_n$ with $\mathfrak{F}$-hypercentral condition for commutators if $G=G_1\dots G_n$, $G_iG_j$ is a subgroup of $G$ for every $i, j\in\{1, \dots, n\}$ and $[G_i, \prod_{j=1, j\neq i}^nG_j]\leq \mathrm{Z}_\mathfrak{F}(G)$ for any $i\in \{1,\dots, n\}$. \end{definition}
The main property of products with $\mathfrak{F}$-hypercentral condition for commutators is
\begin{lemma}\label{lemma1}
Let $\mathfrak{F}$ be a hereditary formation with $\mathfrak{N}\subseteq \mathfrak{F}$. If a group $G$ is the product of subgroups $G_1,\dots, G_n$ with $\mathfrak{F}$-hypercentral condition for commutators, then
$$G/\mathrm{Z}_\mathfrak{F}(G)\simeq G_1/\mathrm{Z}_\mathfrak{F}(G_1)\times\dots\times G_n/\mathrm{Z}_\mathfrak{F}(G_n).$$ \end{lemma}
\begin{proof} $(a)$
$\overline{H}_i=G_i\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G) \cap (\prod_{j=1, j\neq i}^nG_j)\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)\simeq 1$ for any $i\in\{1,\dots,n\}$.
Since $G$ satisfies $\mathfrak{F}$-hypercentral condition for commutators, we see that every element of $\overline{H}_i$ commutes with every element of $G_i\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)$ and $(\prod_{j=1, j\neq i}^nG_j)\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)$. Hence it commutes with every element of $G/\mathrm{Z}_\mathfrak{F}(G)$. Therefore $\overline{H}_i\leq \mathrm{Z}(G/\mathrm{Z}_\mathfrak{F}(G))$. From $\mathfrak{N}\subseteq\mathfrak{F}$ it follows that $\mathrm{Z}(G/\mathrm{Z}_\mathfrak{F}(G))\leq \mathrm{Z}_\mathfrak{F}(G/\mathrm{Z}_\mathfrak{F}(G))\simeq 1$. Thus $\overline{H}_i\simeq 1$.
$(b)$ $G_i\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)\trianglelefteq G/\mathrm{Z}_\mathfrak{F}(G)$ for any $i\in\{1,\dots,n\}$.
Since $G$ satisfies $\mathfrak{F}$-hypercentral condition for commutators, we see that every element of $G_i\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)$ commutes with every element of $(\prod_{j=1, j\neq i}^nG_j)\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)$. Now from $G=G_1\dots G_n$ it follows that $G_i\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)\trianglelefteq G/\mathrm{Z}_\mathfrak{F}(G)$.
$(c)$ $G/\mathrm{Z}_\mathfrak{F}(G)\simeq G_1/\mathrm{Z}_\mathfrak{F}(G_1)\times\dots\times G_n/\mathrm{Z}_\mathfrak{F}(G_n)$.
Now from $(a)$ and $(b)$ it follows that $$G/\mathrm{Z}_\mathfrak{F}(G)= G_1\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)\times\dots\times G_n\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G).$$ Note that every $\mathfrak{F}$-central chief factor of $G_i\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)$ is an $\mathfrak{F}$-central chief factor of $G/\mathrm{Z}_\mathfrak{F}(G)$. From $\mathrm{Z}_\mathfrak{F}(G/\mathrm{Z}_\mathfrak{F}(G))\simeq 1$ it follows that $$\mathrm{Z}_\mathfrak{F}(G_i\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G))\simeq \mathrm{Z}_\mathfrak{F}(G_i/(G_i\cap \mathrm{Z}_\mathfrak{F}(G)))\simeq 1.$$ Since $\mathfrak{F}$ is hereditary, we see that $G_i\cap \mathrm{Z}_\mathfrak{F}(G)\leq \mathrm{Z}_\mathfrak{F}(G_i)$ by \cite[Lemma 2.4(iii)]{Aivazidis2021}. Now from $\mathrm{Z}_\mathfrak{F}(G_i/(G_i\cap \mathrm{Z}_\mathfrak{F}(G)))\simeq 1$ it follows that $G_i\cap \mathrm{Z}_\mathfrak{F}(G)= \mathrm{Z}_\mathfrak{F}(G_i)$. Thus $$G/\mathrm{Z}_\mathfrak{F}(G)\simeq G_1/(G_1\cap \mathrm{Z}_\mathfrak{F}(G))\times\dots\times G_n/(G_n\cap \mathrm{Z}_\mathfrak{F}(G))\simeq G_1/\mathrm{Z}_\mathfrak{F}(G_1)\times\dots\times G_n/\mathrm{Z}_\mathfrak{F}(G_n).$$ The lemma is proved.\end{proof}
Denote by $\Gamma(\mathfrak{F})_{|G}$ the induced subgraph of $\Gamma(\mathfrak{F})$ on $\pi(G)$.
\begin{lemma}\label{lemma2}
Let $\mathfrak{F}$ be a hereditary formation with $\mathfrak{N}\subseteq \mathfrak{F}$, $\Gamma\in\{\Gamma_s, \Gamma_{Nc}, \Gamma_H\}$ and $G$ be a group. Then
$$\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))\subseteq\Gamma(G)\subseteq\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))\cup\Gamma(\mathfrak{F})_{|G}. $$ \end{lemma}
\begin{proof}
From 2 of Lemma \ref{lem1} it follows that $\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))\subseteq\Gamma(G)$. Assume that there is a group $G$ with $\Gamma(G)\not\subseteq\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))\cup\Gamma(\mathfrak{F})_{|G}$. Note that $V(\Gamma(G))=V(\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))\cup\Gamma(\mathfrak{F})_{|G})$. Hence there is $(p, q)\in E(\Gamma(G))\setminus E(\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))\cup\Gamma(\mathfrak{F})_{|G})$.
Let $\Gamma=\Gamma_{Nc}$. It means there is a Schmidt $(p, q)$-subgroup $H$ of $G$ with $H\not\in \mathfrak{F}$. From $\Gamma_{Nc}(G/\mathrm{Z}_\mathfrak{F}(G))\subseteq\Gamma_{Nc}(G)$ it follows that $H\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)\simeq H/H\cap\mathrm{Z}_\mathfrak{F}(G) $ is nilpotent. Since $\mathfrak{F}$ is hereditary, $H\cap \mathrm{Z}_\mathfrak{F}(G)\leq \mathrm{Z}_\mathfrak{F}(H)$ by \cite[Lemma 2.4(iii)]{Aivazidis2021}. From $\mathfrak{N}\subseteq\mathfrak{F}$ it follows that $H\mathrm{Z}_\mathfrak{F}(G)\in\mathfrak{F}$. Therefore $H\in\mathfrak{F}$, a contradiction. Thus $\Gamma_{Nc}(G)\subseteq\Gamma_{Nc}(G/\mathrm{Z}_\mathfrak{F}(G))\cup\Gamma_{Nc}(\mathfrak{F})_{|G}$.
Let $\Gamma=\Gamma_{s}$. Then there is an element $x$ of $G$ which induces an automorphism of order $q^\alpha$ on a Sylow $p$-subgroup $P$ of $G$. WLOG we may assume that $x$ is a $q$-element of $G$. Note that $x\mathrm{Z}_\mathfrak{F}(G)$ acts trivially on the Sylow $p$-subgroup $P\mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)$ of $G/\mathrm{Z}_\mathfrak{F}(G)$. Hence $P\langle x\rangle \mathrm{Z}_\mathfrak{F}(G)/\mathrm{Z}_\mathfrak{F}(G)$ is a nilpotent group. By analogy with the previous paragraph, $P\langle x\rangle \mathrm{Z}_\mathfrak{F}(G)\in\mathfrak{F}$. Hence $(p, q)\in\Gamma_s(\mathfrak{F})$, a contradiction.
Let $\Gamma=\Gamma_{H}$. From \cite[Proposition 2.3(1)]{VM} it follows that there is a chief factor $H/K$ of $G$ below $\mathrm{Z}_\mathfrak{F}(G)$ with $p\in\pi(H/K)$ and $q\in \pi(G/C_G(H/K))$. From $(H/K)\rtimes G/C_G(H/K)\in\mathfrak{F}$ it follows that $(p, q)\in\Gamma_H(\mathfrak{F})$, a contradiction. Thus $\Gamma_{H}(G)\subseteq\Gamma_{H}(G/\mathrm{Z}_\mathfrak{F}(G))\cup\Gamma_{H}(\mathfrak{F})_{|G}$. \end{proof}
The main result of this section is
\begin{theorem}\label{thm}
Let $\mathfrak{F}$ be a hereditary formation with $\mathfrak{N}\subseteq \mathfrak{F}$ and $\Gamma\in\{\Gamma_s, \Gamma_{Nc}, \Gamma_H\}$. If a group $G$ is the product of subgroups $G_1,\dots, G_n$ with $\mathfrak{F}$-hypercentral condition for commutators, then
$$\Gamma(G)\subseteq \bigcup_{i=1}^n\Gamma(G_i)\cup\Gamma(\mathfrak{F})_{|G}. $$ \end{theorem}
\begin{proof}
From Lemma \ref{lemma1} it follows that
$$G/\mathrm{Z}_\mathfrak{F}(G)\simeq G_1/\mathrm{Z}_\mathfrak{F}(G_1)\times\dots\times G_n/\mathrm{Z}_\mathfrak{F}(G_n).$$
Now $\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))=\cup_{i=1}^n\Gamma(G_i/\mathrm{Z}_\mathfrak{F}(G_i))$ by 5 of Lemma \ref{lem1}. Note that $\Gamma(\mathfrak{F})_{|G_i}\subseteq\Gamma(\mathfrak{F})_{|G}$. Therefore by Lemma \ref{lemma2} \begin{multline*}
\Gamma(G)\cup\Gamma(\mathfrak{F})_{|G}=
\Gamma(G/\mathrm{Z}_\mathfrak{F}(G))\cup\Gamma(\mathfrak{F})_{|G}=\\
\bigcup_{i=1}^n\Gamma(G_i/\mathrm{Z}_\mathfrak{F}(G_i))\cup\Gamma(\mathfrak{F})_{|G}
=\bigcup_{i=1}^n(\Gamma(G_i/\mathrm{Z}_\mathfrak{F}(G_i))\cup\Gamma(\mathfrak{F})_{|G_i}) \cup\Gamma(\mathfrak{F})_{|G}\\
=\bigcup_{i=1}^n(\Gamma(G_i)\cup\Gamma(\mathfrak{F})_{|G_i}) \cup\Gamma(\mathfrak{F})_{|G}
=\bigcup_{i=1}^n\Gamma(G_i) \cup\Gamma(\mathfrak{F})_{|G}.\end{multline*}
Thus $\Gamma(G)\subseteq \cup_{i=1}^n\Gamma(G_i)\cup\Gamma(\mathfrak{F})_{|G} $.\end{proof}
\begin{lemma}\label{equal}
If a group $G$ has a Sylow tower, then $\Gamma_s(G)=\Gamma_{Nc}(G)=\Gamma_H(G)$. \end{lemma}
\begin{proof}
According to \cite[Proposition 2.4]{VM}, $\Gamma_s(G)\subseteq\Gamma_{Nc}(G)\subseteq\Gamma_H(G)$ for any group $G$. Hence we only need to prove that $\Gamma_s(G)=\Gamma_H(G)$ for a Sylow tower group $G$. Assume the contrary and let a Sylow tower group $G$ be a counterexample of minimal order. Since $V(\Gamma_H(G))=V(\Gamma_s(G))=\pi(G)$, we see that there is $(p, q)\in E(\Gamma_H(G))\setminus E(\Gamma_s(G))$. Since $G$ has a Sylow tower, we see that $\mathrm{O}_{p',p}(G)$ contains all Sylow $p$-subgroups of $G$. Therefore $p\neq q$.
Let $N$ be a minimal normal subgroup of $G$. Recall that the class of Sylow tower groups is closed under taking epimorphic images. Therefore $G/N$ is a Sylow tower group. From $|G/N|<|G|$ and our assumption it follows that $\Gamma_s(G/N)=\Gamma_H(G/N)$; in particular, $(p, q)\not\in E(\Gamma_H(G/N))$, since $E(\Gamma_s(G/N))\subseteq E(\Gamma_s(G))$ by Lemma \ref{lem1}. If $G$ has two distinct minimal normal subgroups $N_1$ and $N_2$, then $\Gamma_s(G)=\Gamma_s(G/N_1)\cup\Gamma_s(G/N_2)=\Gamma_H(G/N_1)\cup\Gamma_H(G/N_2)=\Gamma_H(G)$ by Lemma \ref{lem1}, a contradiction. Thus $G$ has a unique minimal normal subgroup $N$. Since $G$ is soluble, $N$ is an $r$-group for some prime $r$. If $r\neq p$, then $\mathrm{O}_{p', p}(G/N)=\mathrm{O}_{p',p}(G)/N$. Hence $(p, q)\in E(\Gamma_H(G/N))$, a contradiction. Now $r=p$. Since $G$ is a Sylow tower group with a unique minimal normal subgroup $N$, we see that a Sylow $p$-subgroup $P$ of $G$ is normal in $G$. Let $Q$ be a Sylow $q$-subgroup of $G$. Then $Q\leq N_G(P)$. From $(p, q)\not\in E(\Gamma_s(G))$ it follows that $Q\leq C_G(P)$. Now $Q\leq C_G(H/K)$ where $H/K$ is any chief $p$-factor of $G$. Recall that $\mathrm{O}_{p', p}(G)$ is the intersection of the centralizers of all chief $p$-factors of $G$. So $Q\leq \mathrm{O}_{p', p}(G)$. Thus $(p, q)\not\in E(\Gamma_H(G))$, the final contradiction.
\begin{proof}[Proof of Theorem \ref{thm1}]
From Lemma \ref{equal} it follows that $\Gamma_s(\mathfrak{N})=\Gamma_{Nc}(\mathfrak{N})=\Gamma_H(\mathfrak{N})$. Note that $V(\Gamma_H(\mathfrak{N}))=\mathbb{P}$ and $E(\Gamma_H(\mathfrak{N}))=\emptyset$.
Let $\Gamma\in\{\Gamma_s, \Gamma_{Nc}, \Gamma_H\}$. Now from the proof of Theorem \ref{thm} it follows that
$$ \Gamma(G)= \Gamma(G)\cup\Gamma(\mathfrak{N})_{|G}
=\bigcup_{i=1}^n\Gamma(G_i) \cup\Gamma(\mathfrak{N})_{|G}=\bigcup_{i=1}^n\Gamma(G_i).$$ Theorem \ref{thm1} is proved.\end{proof}
\begin{proof}[Proof of Theorem \ref{thm2}]
From Lemma \ref{equal} it follows that $\Gamma_s(\mathfrak{U})=\Gamma_{Nc}(\mathfrak{U})=\Gamma_H(\mathfrak{U})$. Note that $V(\Gamma_H(\mathfrak{U}))=\mathbb{P}$. It is well known that a group $G$ is supersoluble iff, for every prime $p$, $G/\mathrm{O}_{p',p}(G)$ is abelian of exponent dividing $p-1$. Hence $E(\Gamma_H(\mathfrak{U}))\subseteq\{(p, q)\mid q\in\pi(p-1)\}$. On the other hand, if $q\in\pi(p-1)$, then a cyclic group $Z_p$ of order $p$ has a power automorphism of order $q$. Now $H=Z_p\rtimes Z_q$ is supersoluble and $q\in\pi(H/\mathrm{O}_{p',p}(H))$. Thus $E(\Gamma_H(\mathfrak{U}))=\{(p, q)\mid q\in\pi(p-1)\}$. Now Theorem \ref{thm2} directly follows from Theorem \ref{thm}.\end{proof}
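The edge set $E(\Gamma_H(\mathfrak{U}))=\{(p, q)\mid q\in\pi(p-1)\}$ computed in this proof is easy to tabulate for small primes; the following is a sketch of ours, not part of the paper.

```python
def prime_divisors(n):
    """pi(n): the set of prime divisors of the positive integer n."""
    ps, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            ps.add(d)
            n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

# Edges (p, q) of Gamma_H(U) with p ranging over the primes below 12:
edges = {(p, q) for p in [2, 3, 5, 7, 11] for q in prime_divisors(p - 1)}
# e.g. p = 7 contributes (7, 2) and (7, 3) since pi(6) = {2, 3},
# while p = 2 contributes nothing since pi(1) is empty.
```
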
\section{The proof of Theorems \ref{mut} and \ref{thm4}}
We need the following lemma in the proof of Theorem \ref{mut}.
\begin{lemma}\label{Centralizer}
Let $P$ be a Sylow $p$-subgroup of a Schmidt $(p, q)$-subgroup $S$ of a group $G$. If $A$ is a subgroup of $G$ with $P\leq A$ and $G=AC_G(P)$, then $A$ contains a Schmidt $(p, q)$-subgroup. \end{lemma}
\begin{proof}
Let $Q$ be a Sylow $q$-subgroup of $S$; then $Q=\langle x\rangle$ is cyclic. Since $G=AC_G(P)=C_G(P)A$, there exist $y\in A$ and $z\in C_G(P)$ with $x=zy$. Now $P=P^x=P^y$. It means that $P\trianglelefteq P\langle y\rangle\leq A$. Assume that $A$ does not contain a Schmidt $(p, q)$-group. Now $P\mathrm{O}_q(\langle y\rangle)$ is a $p$-closed $\{p, q\}$-group without Schmidt $(p, q)$-subgroups. It means that $P\mathrm{O}_q(\langle y\rangle)$ is nilpotent. Hence $\mathrm{O}_q(\langle y\rangle)\leq C_G(P)$. Write $y=y_1y_2$, where $\langle y_1\rangle= \mathrm{O}_q(\langle y\rangle)$ and $\langle y_2\rangle=\mathrm{O}_{q'}(\langle y\rangle)$. It is well known that $C_G(P)\trianglelefteq N_G(P)$. Note that $x, y\in N_G(P)$. Now $\langle x\rangle C_G(P)/C_G(P)$ is a non-trivial $q$-group. On the other hand, $\langle x\rangle C_G(P)/C_G(P)=\langle zy_1y_2\rangle C_G(P)/C_G(P)=\langle y_2\rangle C_G(P)/C_G(P)$ is a $q'$-group, a contradiction.
\begin{proof}[Proof of Theorem \ref{mut}] \textbf{\emph{Let us prove that }}$\Gamma_{Nc}(G)\subseteq \Gamma_{Nc}(A)\cup \Gamma_{Nc}(B)\cup\Gamma(A, B)$. Note that $\Gamma_{Nc}(A)\cup\Gamma_{Nc}(B)\subseteq \Gamma_{Nc}(G)$ by Lemma \ref{lem1}.
Assume that $\Gamma_{Nc}(G)\subseteq \Gamma_{Nc}(A)\cup \Gamma_{Nc}(B)\cup\Gamma(A, B)$ is false. Let us choose a group $G$ of minimal order such that $G$ is a mutually permutable product of subgroups $A$ and $B$ and $\Gamma_{Nc}(G)\not\subseteq \Gamma_{Nc}(A)\cup \Gamma_{Nc}(B)\cup\Gamma(A, B)$. It means that there is $(p, q)\not\in E(\Gamma_{Nc}(A)\cup \Gamma_{Nc}(B)\cup\Gamma(A, B))$ such that $(p, q)\in E(\Gamma_{Nc}(G))$. Therefore $G$ has a Schmidt $(p, q)$-subgroup $S$.
Since $A_GB_G\neq 1$ by \cite[Theorem 4.3.11]{PFG}, WLOG we may assume that $A_G$ contains a minimal normal subgroup $N$ of $G$.
Now $G/N=(A/N)(BN/N)$ is a mutually permutable product of subgroups $A/N$ and $ BN/N$ by \cite[Lemma 4.1.10]{PFG}. Hence $\Gamma_{Nc}(G/N)\subseteq \Gamma_{Nc}(A/N)\cup \Gamma_{Nc}(BN/N)\cup\Gamma(A/N, BN/N)$. Note that $BN/N\simeq B/(B\cap N)$. It means that $\Gamma_{Nc}(A/N)\subseteq \Gamma_{Nc}(A)$ and $\Gamma_{Nc}(BN/N)=\Gamma_{Nc}(B/(B\cap N))\subseteq \Gamma_{Nc}(B)$ by Lemma \ref{lem1}. By the definition of $\Gamma(A,B)$ we see that $$\Gamma(A/N, BN/N)=\Gamma(A/N, B/(B\cap N))\subseteq\Gamma(A,B).$$ Now $\Gamma_{Nc}(G/N)\subseteq \Gamma_{Nc}(A)\cup \Gamma_{Nc}(B)\cup\Gamma(A, B)$. So $(p, q)\not\in E(\Gamma_{Nc}(G/N))$. Hence $SN/N$ is not a Schmidt group. Therefore $S\cap N$ contains a Sylow $p$-subgroup $P_0$ of $S$. Denote a Sylow $q$-subgroup of $S$ by $Q_0$.
Assume that $N\leq A\cap B$. There exist Sylow $q$-subgroups $Q, Q_1$ and $Q_2$ of $G, A$ and $B$ respectively such that $Q=Q_1Q_2$. Note that there is $x\in G$ with $Q_0\leq Q^x$. Now $S\leq NQ^x=(NQ_1)^x(NQ_2)^x$. Let $T=NQ^x$, $H=(NQ_1)^x$ and $K=(NQ_2)^x$. From $\Gamma_{Nc}(H)=\Gamma_{Nc}(NQ_1)\subseteq\Gamma_{Nc}(A)$ and $\Gamma_{Nc}(K)=\Gamma_{Nc}(NQ_2)\subseteq\Gamma_{Nc}(B)$ it follows that $(p, q)\not\in E(\Gamma_{Nc}(H)\cup \Gamma_{Nc}(K))$. Let $P$ be a Sylow $p$-subgroup of $N$ with $P_0\leq P$. By Frattini's Argument $H=NN_H(P)$. So there is a $q$-subgroup $Q_3$ of $ N_H(P)$ with $H=NQ_3$. Note that $PQ_3$ is a $p$-closed $\{p, q\}$-group without Schmidt $(p, q)$-subgroups. It means that $PQ_3$ is nilpotent. Hence $ Q_3\leq C_H(P)$. Therefore $H=NC_H(P)$. Similar arguments show that $K=NC_K(P)$. So $T=NC_T(P)$. Now $N$ contains a Schmidt $(p, q)$-group by Lemma \ref{Centralizer}, a contradiction.
Assume now that $N\not\leq A\cap B$. It means that $N\cap B=1$ by \cite[Lemma 4.3.3(4)]{PFG}. Suppose that $B\leq C_G(N)$. Now $A$ has a Schmidt $(p, q)$-group by Lemma \ref{Centralizer}, a contradiction. Thus $B\not\leq C_G(N)$. In this case $N$ is cyclic and $A\leq C_G(N)$ by \cite[Lemma 4.3.3(5)]{PFG}. Since $G=AB=C_G(N)(NB)$, we see that $NB$ contains a Schmidt $(p, q)$-group by Lemma \ref{Centralizer}. Hence $q\in\pi(NB/C_{NB}(N))\subseteq\pi(B)$. Since $N$ is a cyclic $p$-group, we see that $ NB/C_{NB}(N)\simeq G/C_G(N)$ is abelian of exponent dividing $p-1$. Therefore $(p, q)\in E(\Gamma(A, B))$, the final contradiction.
\textbf{\emph{Let us prove that}} $\Gamma_{H}(G)\subseteq \Gamma_{H}(A)\cup \Gamma_{H}(B)\cup\Gamma(A, B)\cup\{(p, p)\mid p\in\pi(G)\}$. Note that $\Gamma_{H}(A)\cup \Gamma_{H}(B)\subseteq \Gamma_{H}(G)$ by Lemma \ref{lem1}.
Assume that $\Gamma_{H}(G)\subseteq \Gamma_{H}(A)\cup \Gamma_{H}(B)\cup\Gamma(A, B)\cup\{(p, p)\mid p\in\pi(G)\}$ is false. Choose a group $G$ of minimal order such that $G$ is a mutually permutable product of subgroups $A$ and $B$ and $\Gamma_{H}(G)\not\subseteq \Gamma_{H}(A)\cup \Gamma_{H}(B)\cup\Gamma(A, B)\cup\{(p, p)\mid p\in\pi(G)\}$. It means that there is $(p, q)\not\in E(\Gamma_{H}(A)\cup \Gamma_{H}(B)\cup\Gamma(A, B)\cup\{(p, p)\mid p\in\pi(G)\})$ such that $(p, q)\in E(\Gamma_{H}(G))$. In particular, $p\neq q$.
WLOG we may assume that $A$ contains a minimal normal subgroup $N$ of $G$ by \cite[Theorem 4.3.11]{PFG}. Now $G/N=(A/N)(BN/N)$ is a mutually permutable product of subgroups $A/N$ and $ BN/N$ by \cite[Lemma 4.1.10]{PFG}. Hence $\Gamma_{H}(G/N)\subseteq \Gamma_{H}(A/N)\cup \Gamma_{H}(BN/N)\cup\Gamma(A/N, BN/N)\cup\{(p, p)\mid p\in\pi(G/N)\}$. Note that $\Gamma_{H}(A/N)\subseteq \Gamma_{H}(A)$, $\Gamma_{H}(BN/N)\subseteq \Gamma_{H}(B)$ by Lemma \ref{lem1}, and $\Gamma(A/N, BN/N)\subseteq\Gamma(A,B)$. Now $\Gamma_H(G/N)\subseteq \Gamma_{H}(A)\cup \Gamma_{H}(B)\cup\Gamma(A, B)\cup\{(p, p)\mid p\in\pi(G)\}$. So $(p, q)\not\in E(\Gamma_{H}(G/N))$. From 3 of Lemma \ref{lem1} and our assumption it follows that $N$ must be the unique minimal normal subgroup of $G$. If $\Phi(G)\neq 1$, then similar arguments show that $(p, q)\not\in E(\Gamma_H(G/\Phi(G)))$. Note that $\Gamma_H(G/\Phi(G))=\Gamma_H(G)$ by \cite[Theorem 2.7]{VM}, a contradiction. So $\Phi(G)=1$. Thus $G$ is a primitive group with $C_G(N)\leq N$.
If $N$ is a $p'$-group, then $\mathrm{O}_{p',p}(G/N)=\mathrm{O}_{p',p}(G)/N$ and $G/\mathrm{O}_{p',p}(G)\simeq (G/N)/\mathrm{O}_{p',p}(G/N)$. Hence $(p, q)\in E(\Gamma_{H}(G/N))$, a contradiction. Now $p\in\pi(N)$. Therefore $\mathrm{O}_{p'}(G)=1$.
Assume that $N\leq A\cap B$. Hence $\mathrm{O}_{p'}(A)=\mathrm{O}_{p'}(B)=1$. Now $(\pi(A)\setminus\{p\})\subseteq\pi(A/\mathrm{O}_{p',p}(A))$ and $(\pi(B)\setminus\{p\})\subseteq\pi(B/\mathrm{O}_{p',p}(B))$. It means that $(p, q)\in E(\Gamma_H(A)\cup\Gamma_H(B))$, a contradiction. Therefore $N\not\leq A\cap B$. It means that $N\cap B=1$ by \cite[Lemma 4.3.3(4)]{PFG}.
Now either $A\leq C_G(N)$ or $B\leq C_G(N)$ by \cite[Lemma 4.3.3(5)]{PFG}. If $B\leq C_G(N)$, then from $C_G(N)\leq N\leq A$ it follows that $A=G$, and $(p, q)\in E(\Gamma_H(A))$, a contradiction. Thus $B\not\leq C_G(N)$. In this case $N$ is cyclic and $A\leq C_G(N)$ by \cite[Lemma 4.3.3(5)]{PFG}. Hence $N\leq A\leq C_G(N)\leq N$. Thus $N=C_G(N)=A$ is a cyclic group of order $p$. In this case $G/N$ is an abelian group of exponent dividing $p-1$. From $\mathrm{O}_{p'}(G)=1$ it follows that $\mathrm{O}_{p', p}(G)=N$. Therefore $\pi(G/\mathrm{O}_{p', p}(G))\subseteq\pi(p-1)$. Hence $q\in\pi(p-1)$. Thus $(p, q)\in E(\Gamma(A, B))$, the final contradiction. \end{proof}
\begin{proof}[Proof of Corollary \ref{Shemetkov}]
Let $\mathfrak{F}$ be a hereditary formation with the Shemetkov property. Then $\mathfrak{F}=(G\mid\Gamma_{Nc}(G)\subseteq\Gamma_{Nc}(\mathfrak{F}))$ by \cite[Theorem 4.4]{VM}.
Assume that $\mathfrak{F}$ is closed under taking mutually permutable products. Let $G$ be a supersoluble Schmidt $\pi(\mathfrak{F})$-group. Then $G/\Phi(G)$ is a mutually permutable product of groups $Z_p$ and $Z_q$ of orders $p$ and $q$ for some $p, q\in\pi(\mathfrak{F})$ with $q\in\pi(p-1)$. Hence $G/\Phi(G)\in\mathfrak{F}$. Since the class of soluble $\mathfrak{F}$-groups is saturated \cite[Corollary 6.4.5]{BallesterBollinches2006}, $G\in\mathfrak{F}$. Thus $\mathfrak{F}$ contains every supersoluble Schmidt $\pi(\mathfrak{F})$-group.
Assume now that $\mathfrak{F}$ contains every supersoluble Schmidt $\pi(\mathfrak{F})$-group. Hence $(p, q)\in E(\Gamma_{Nc}(\mathfrak{F}))$ for every $p, q\in\pi(\mathfrak{F})$ with $q\in\pi(p-1)$. Now if $G=AB$ is a mutually permutable product of $\mathfrak{F}$-subgroups $A$ and $B$, then $\Gamma_{Nc}(G)\subseteq\Gamma_{Nc}(\mathfrak{F})$ by Theorem \ref{mut}. Hence $G\in\mathfrak{F}$. Thus $\mathfrak{F}$ is closed under taking mutually permutable products. \end{proof}
\begin{proof}[Proof of Corollary \ref{Beidleman}]
Let $\pi$ be a $p$-special set of primes and $\mathfrak{F}$ be the class of normal extensions of $p$-groups by $\pi$-groups. Then $p\not\in\pi$. Hence $\mathfrak{F}$ is the formation of all $p$-closed $\pi(\mathfrak{F})$-groups. Recall that an $s$-critical group for the class of all $p$-closed groups is a Schmidt $(q, p)$-group for some prime $q$. Hence $\mathfrak{F}$ is a formation with the Shemetkov property. Note that $\mathfrak{F}$ contains every supersoluble Schmidt $\pi(\mathfrak{F})$-group. Thus $\mathfrak{F}$ is closed under taking mutually permutable products by Corollary \ref{Shemetkov}. \end{proof}
We need the following lemma in the proof of Theorem \ref{thm4}.
\begin{lemma}\label{ro2}
Let $G$ be a soluble group and $n\geq 3$. If $G=G_1\dots G_n$ is a product of pairwise permutable subgroups $G_1,\dots, G_n$, then $$\Gamma_{Nc}(G)=\bigcup_{1\leq i<j\leq n} \Gamma_{Nc}(G_iG_j).$$ \end{lemma}
\begin{proof}
Assume that $n=3$; then $G=(G_1G_2)(G_1G_3)=(G_1G_2)(G_2G_3)=(G_1G_3)(G_2G_3)$. Now $\Gamma_{Nc}(G)=\Gamma_{Nc}(G_1G_2)\cup\Gamma_{Nc}(G_1G_3)\cup\Gamma_{Nc}(G_2G_3)$ by \cite[Theorem 7.1(1)]{VM}. Assume that Lemma \ref{ro2} holds for all $n$ with $3\leq n\leq k$; let us prove it for $n=k+1$. Let $H_l=\prod_{j=1,\, j\neq l}^{k+1} G_j$. Then by our assumption $\Gamma_{Nc}(H_l)=\bigcup_{1\leq i<j\leq k+1,\, i,j\neq l} \Gamma_{Nc}(G_iG_j)$. From $G=H_1H_2=H_1H_3=H_2H_3$ and \cite[Theorem 7.1(1)]{VM} it follows that $$\Gamma_{Nc}(G)=\Gamma_{Nc}(H_1)\cup\Gamma_{Nc}(H_2)\cup\Gamma_{Nc}(H_3)=\bigcup_{1\leq i<j\leq k+1} \Gamma_{Nc}(G_iG_j).$$
Now Lemma \ref{ro2} follows by induction. \end{proof}
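For concreteness, the first induction step ($n=4$) reads as follows. With $H_l=\prod_{j\neq l}G_j$ we have $H_1=G_2G_3G_4$, $H_2=G_1G_3G_4$ and $H_3=G_1G_2G_4$; each $H_l$ is a product of three pairwise permutable subgroups, so the case $n=3$ applies to it, and
\begin{align*}
\Gamma_{Nc}(G)=\Gamma_{Nc}(H_1)\cup\Gamma_{Nc}(H_2)\cup\Gamma_{Nc}(H_3)
=\bigcup_{l=1}^{3}\,\bigcup_{\substack{1\leq i<j\leq 4\\ i,j\neq l}}\Gamma_{Nc}(G_iG_j)
=\bigcup_{1\leq i<j\leq 4}\Gamma_{Nc}(G_iG_j),
\end{align*}
since every pair $\{i,j\}\subseteq\{1,2,3,4\}$ omits at least one index $l\in\{1,2,3\}$.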
\begin{proof}[Proof of Theorem \ref{thm4}] Let a group $G$ be a product of pairwise mutually permutable soluble subgroups $G_1,\dots, G_n$. From \cite[Theorem 4.1.14]{PFG} it follows that $G$ is soluble. Now by Lemma \ref{ro2} $$\Gamma_{Nc}(G)=\bigcup_{1\leq i<j\leq n} \Gamma_{Nc}(G_iG_j).$$ According to Theorem \ref{mut}, $\Gamma_{Nc}(G_iG_j)\subseteq \Gamma_{Nc}(G_i)\cup\Gamma_{Nc}(G_j)\cup\Gamma(G_i, G_j)$. Thus
$$\Gamma_{Nc}(G)\subseteq\bigcup_{1\leq i\leq n}\Gamma_{Nc}(G_i)\cup\bigcup_{1\leq i, j\leq n, i\neq j} \Gamma(G_i, G_j).$$
From $\Gamma_{Nc}(G)\subseteq \Gamma_H(G)$ for every group $G$ and \cite[Lemma 3]{Murashka2019} it follows that if $(p, q)\in E(\Gamma_H(G))\setminus E(\Gamma_{Nc}(G))$ for a soluble group $G$, then $p=q$. Thus
$$\Gamma_{H}(G)\subseteq\bigcup_{1\leq i\leq n}\Gamma_{H}(G_i)\cup\bigcup_{1\leq i, j\leq n, i\neq j} \Gamma(G_i, G_j)\cup\{(p, p)\mid p\in\pi(G)\}.$$
Theorem \ref{thm4} is proved. \end{proof}
\begin{proof}[Proof of Corollary \ref{cor41}] It is clear that the class $\mathfrak{F}$ of all groups whose Schmidt subgroups are supersoluble is a hereditary formation with the Shemetkov property and $\Gamma_{Nc}(\mathfrak{F})=\Gamma_{Nc}(\mathfrak{U})=\{(p, q)\mid q\in\pi(p-1)\}$. Note that $\Gamma_{Nc}(\mathfrak{F})$ does not have cycles. Hence every $\mathfrak{F}$-group has a Sylow tower by \cite[Theorem 6.2(b)]{VM}. In particular, $G_1,\dots, G_n$ are soluble. Now $G=G_1\dots G_n$ is a product of mutually permutable soluble $\mathfrak{F}$-subgroups $G_1,\dots, G_n$. Therefore $\Gamma_{Nc}(G)\subseteq\Gamma_{Nc}(\mathfrak{F})$ by Theorem \ref{thm4}. Thus $G\in\mathfrak{F}$ by \cite[Theorem 4.4]{VM}. \end{proof}
\section{Final remarks and open questions}
From 4 of Lemma \ref{lem1} it follows that if a group $G=AB$ is the product of its normal subgroups $A$ and $B$, then $\Gamma(G)=\Gamma(A)\cup\Gamma(B)$ where $\Gamma\in\{\Gamma_H, \Gamma_{Nc}\}$. Note that $S_4$ is the product of its normal subgroups $S_4$ and $A_4$ and $\Gamma_s(S_4)\cup\Gamma_s(A_4)\neq\Gamma_s(S_4)$. Nevertheless the following question seems interesting.
\begin{pr}
Let a group $G=AB$ be the product of its normal subgroups $A$ and $B$. Is $\Gamma_s(G)\subseteq \Gamma_s(A)\cup\Gamma_s(B)$? \end{pr}
By $\overline{\Gamma}$ we denote the undirected graph on the same vertex set as $\Gamma$ in which two vertices are joined by an edge if they are connected in $\Gamma$. In the proofs of \cite{DAniello2007, Kazarin2011} the Sylow graph was considered as a directed graph, but in \cite{Kazarin2011} it was defined as an undirected one. Therefore the graph $\overline{\Gamma}_s$ seems interesting. Moreover
\begin{proposition}
If a soluble group $G=AB$ is the product of its normal subgroups $A$ and $B$, then $\overline{\Gamma}_s(G)=\overline{\Gamma}_s(A)\cup\overline{\Gamma}_s(B)$. \end{proposition}
\begin{proof}
From \cite[Theorem 4.2(2)]{Murashka2021} it follows that $\overline{\Gamma}_s(H)=\overline{\Gamma}_{Nc}(H)$ for any soluble group $H$. Now $\overline{\Gamma}_s(G)=\overline{\Gamma}_s(A)\cup\overline{\Gamma}_s(B)$ follows from 4 of Lemma \ref{lem1}. \end{proof}
Note that there are groups $H$ with $\overline{\Gamma}_s(H)\neq\overline{\Gamma}_{Nc}(H)$ (see \cite[Proof of Theorem 4.2]{Murashka2021}). That is why we ask
\begin{pr}
Let a group $G=AB$ be the product of its normal subgroups $A$ and $B$. Is $\overline{\Gamma}_s(G)=\overline{\Gamma}_s(A)\cup\overline{\Gamma}_s(B)$? \end{pr}
In Theorem \ref{mut} only the $N$-critical and Hawkes graphs of a mutually permutable product were described. What can be said about the Sylow graph of a mutually permutable product? For example:
\begin{pr}
Let a group $G=AB$ be the product of mutually permutable subgroups $A$ and $B$. Is $\Gamma_s(G)\subseteq \Gamma_s(A)\cup\Gamma_s(B)\cup\Gamma(A, B)$? \end{pr}
The proof of Theorem \ref{mut} is based on the properties of mutually permutable products of two subgroups. Analogues of these properties for products of more than two subgroups are currently unknown. That is why we use some properties of the $N$-critical graph of a soluble group (see Lemma \ref{ro2}) to prove Theorem \ref{thm4}. Hence we have the following two questions.
\begin{pr}
Does the conclusion of Theorem \ref{thm4} hold for the product of pairwise mutually permutable subgroups $G_1,\dots, G_n$? \end{pr}
\begin{pr}
Let $G$ be a group and $n\geq 3$. If $G=G_1\dots G_n$ is the product of pairwise permutable subgroups $G_1,\dots, G_n$, then is $$\Gamma_{Nc}(G)=\bigcup_{1\leq i<j\leq n} \Gamma_{Nc}(G_iG_j)?$$ \end{pr}
\end{document}
\begin{document}
\title{Subconvexity bound for $GL(2)$ L-functions: \lowercase{t}-aspect} \author{Ratnadeep Acharya, Sumit Kumar, Gopal Maiti and Saurabh Kumar Singh}
\address{ Stat-Math Unit, Indian Statistical Institute, 203 BT Road, Kolkata-700108, INDIA.}
\email{ratnadeepacharya87@gmail.com} \email{sumitve95@gmail.com} \email{g.gopaltamluk@gmail.com}
\email{skumar.bhu12@gmail.com}
\subjclass[2010]{Primary 11F66, 11M41; Secondary 11F55} \date{\today}
\keywords{Maass forms, Hecke eigenforms, Voronoi summation formula, Poisson summation formula.}
\begin{abstract} Let $f$ be a holomorphic Hecke eigenform or a Hecke-Maass cusp form for the full modular group $SL(2, \mathbb{Z})$. In this paper we shall use the circle method to prove the Weyl exponent for $GL(2)$ $L$-functions. We shall prove that
\[
L \left( \frac{1}{2} + it, f \right) \ll_{f, \epsilon} \left( 2 + |t|\right)^{1/3 + \epsilon}, \] for any $\epsilon > 0.$ \end{abstract} \maketitle
\section{ Introduction }
Estimating the central values of $L$-functions is one of the most important problems in number theory. In this paper we shall deal with the $t$-aspect sub-convexity bound for $GL(2)$ $L$-functions. Let $f$ be a holomorphic Hecke eigenform, or a Maass cusp form for the full modular group $SL(2, \mathbb{Z})$, with normalised Fourier coefficients $\lambda_f(n)$. The $L$-series associated with $f$ is given by \[ L(s, f)= \sum_{n=1}^\infty \frac{\lambda_f(n)}{n^s} \ = \prod_p \left( 1 -\lambda_f(p) p^{-s} + p^{-2s} \right)^{-1} \ \ \ (\Re s>1). \] It has been proved that $L(s, f)$ extends to an entire function and satisfies a functional equation relating $s$ with $1-s$. The convexity problem in the $t$-aspect deals with the size of $L(s, f)$ on the central line $\Re s = 1/2$. The functional equation, together with the Phragm\'{e}n--Lindel\"{o}f principle and the asymptotics of the Gamma function, gives us the convexity bound, or the trivial bound, $L(1/2+ it, f)\ll t^{1/2+ \epsilon}$. The sub-convexity problem is to obtain a bound of the form $L(1/2+ it, f) \ll t^{1/2 -\delta}$ for some $\delta>0.$ In this paper we shall prove the following theorem:
\begin{theorem} \label{main thm} Let $f $ be either a holomorphic Hecke eigenform or a Maass cusp form for the full modular group $ SL(2, \mathbb{Z})$. On the central line $\sigma= 1/2$, we have the following Weyl bound \[
L\left( \frac{1}{2} + it, f \right) \ll (|t|+2)^{ 1/3 +\epsilon} , \] for any $\epsilon >0$. \end{theorem} \begin{remark}
The method of the proof also works for any congruence subgroup $\Gamma_0(N)$, where $N$ is any natural number (not necessarily square free). \end{remark}
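For orientation, let us sketch where the convexity exponent $1/2$ in the introduction comes from (a standard argument, with the dependence on $f$ suppressed). One has
\begin{align*}
L(1+\epsilon+it, f)\ll 1 \qquad \text{and} \qquad L(-\epsilon+it, f)\ll (2+|t|)^{1+2\epsilon},
\end{align*}
the second bound coming from the functional equation together with Stirling's formula, since the ratio of the Gamma factors has size $(2+|t|)^{2(1/2-\sigma)}$ at $\sigma=-\epsilon$. The Phragm\'{e}n--Lindel\"{o}f principle then interpolates between these two bounds and yields $L(1/2+it, f)\ll (2+|t|)^{1/2+\epsilon}$.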
Let us briefly recall the history of the $t$-aspect sub-convexity bound for $L$-functions. The convexity bound for the Riemann zeta function is given by \begin{equation} \label{conv for zeta} \zeta \left( \frac{1}{2} + it \right) \ll t^{1/4 + \epsilon},\ \ \ \ (\epsilon> 0). \end{equation}
Lindel{\" o}f hypothesis asserts that the exponent $1/4 + \epsilon$ can be replaced by $\epsilon$. Sub-convexity bound for $\zeta(s)$ was first proved by G. H. Hardy and J. E. Littlewood, and H. Weyl independently.
It was first written down by E. Landau in a slightly refined form, and has been generalised to all Dirichlet $L$-functions. Since then it has been a very active topic of research. Many eminent mathematicians have worked on it and improved the exponent in \eqref{conv for zeta}. The best bound to date is due to J. Bourgain, who proved the exponent $13/84$.
The $t$-aspect Weyl exponent for $GL(2)$ $L$-functions is expected to be $1/3.$ For holomorphic forms, this was first proved by A. Good \cite{GOOD} using the spectral theory of automorphic functions. M. Jutila \cite{MJ} has given an alternative proof based only on the functional properties of $L(s, f)$ and $L(s, f\otimes \chi)$, where $\chi$ is an additive character. The arguments used in his proof were flexible enough to be adapted to Maass cusp forms, as shown by Meurman \cite{MERU1}, who proved the result for Maass cusp forms. A. Good's mean value estimate itself was extended by M. Jutila \cite{MJ1} to prove the Weyl bound for Maass cusp forms in yet another way. Using Kloosterman's circle method and the conductor-lowering trick introduced by R. Munshi, Aggarwal and Singh \cite{AS} proved the Weyl bound for $GL(2)$ $L$-functions.
The aim of this paper is to use the $GL(2)$ circle method to prove the Weyl bound for $GL(2)$ $L$-functions. This is the first instance where the $GL(2)$ circle method is used to obtain the Weyl bound. We carry out the suggestions of R. Munshi in this paper. We introduce one more layer into this technique by summing over the weights. This paper serves as a precursor to an upcoming paper of R. Munshi.
\section{Sketch of the proof} To prove our theorem, we start with the following Fourier sum: \begin{align}\label{Fourier sum}
\mathcal{F} &=\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\sum_{f\in H_k(q,\Psi)}\omega_f^{-1}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_f(m)\lambda_F(m)\psi(\ell) U \left(\frac{m \ell^2}{N} \right) \nonumber \\
& \hspace{3cm}\times \sum_{n=1}^\infty \overline{\lambda_f(n)} n^{it} W \left(\frac{n}{N} \right),
\end{align} where $U$ is a smooth bump function supported on the interval $[0.5,3]$ such that $U(x)\equiv 1$ for $x\in [1,2]$ and $U^{(j)}(x)\ll_j 1,$ for all $j \geq 1$. Estimating $\mathcal{F}$ trivially at this stage, we get $\mathcal{F}\ll QKN^2$.
{\bf Step 1:} On applying the Petersson Trace formula, we obtain $\mathcal{F}=\Delta + \mathcal{O}$, where \begin{align*} \Delta =\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_F(m)\psi(\ell) U \left(\frac{m \ell^2}{N} \right) m^{it} W \left(\frac{m}{N} \right),
\end{align*} and \begin{align}\label{off diagonal}
\mathcal{O} &=\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\mathop{\sum \sum \sum}_{m, \ell ,n=1}^\infty \lambda_F(m) n^{it} \psi(\ell) W\left(\frac{n}{N}\right) U \left(\frac{m \ell^2}{N} \right)\notag \\
& \hspace{2cm} \times 2\pi i^{-k} \sum_{c=1}^\infty \frac{S_{\psi} (m,n,cq)}{cq} J_{k-1} \left(\frac{4\pi\sqrt{mn}}{cq}\right).
\end{align}
We observe that $|\Delta| \asymp KQ|S(N)|$.
{\bf Step 2:}
Next we evaluate the sum over $k$ in \eqref{off diagonal} and observe that $\mathcal{O}$ is negligibly small if $QK^{2}\gg Nt^{\epsilon}$. Hence we obtain $S(N) \ll \frac{\mathcal{F}}{QK}$. Now our goal is to prove that $\mathcal{F}\ll QKN^{1/2}t^{1/3}$.
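Let us record the bookkeeping behind this reduction (here $\mathcal{O}$ is negligibly small once $QK^{2}\gg Nt^{\epsilon}$, as noted above):
\begin{align*}
QK\,|S(N)| \asymp |\Delta| \leq |\mathcal{F}|+|\mathcal{O}|
\quad\Longrightarrow\quad
S(N)\ll \frac{\mathcal{F}}{QK}+O_A\left(t^{-A}\right),
\end{align*}
so the desired bound $\mathcal{F}\ll QKN^{1/2}t^{1/3}$ immediately gives $S(N)\ll \sqrt{N}\,t^{1/3}$.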
{\bf Step 3:} Now we apply functional equation for $L(s, F\otimes f)$ in \eqref{Fourier sum}. We observe that the sum over $m$ in \eqref{Fourier sum} is given by \begin{align*} \mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_f(m)\lambda_F(m)\psi(\ell) U\left(\frac{m \ell^{2}}{N}\right) &= \eta i^{-2k} \left( \frac{N}{\tilde{N}}\right)^{1/2} \epsilon_{\psi}^{2}\overline{\lambda_f(q^{2}) } \sum_{\mathcal{U}}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_f(m)\lambda_F(m) \\ & \hspace{1cm} \times W_1\left(\frac{m \ell^{2}}{\tilde{N}}\right)+ O(t^{-2018}), \end{align*}
where $W_{1}^{(j)}(x)\ll_{j} t^{\epsilon j}$ and $\tilde{N} \asymp Q^2 K^4/N$. This gives us the following expression for $\mathcal{F}$:
\begin{align} \label{newsum} \mathcal{F} &=\eta \left( \frac{N}{\tilde{N}}\right)^{1/2} \sum_{k\sim K} i^{-2k} W\left(\frac{k-1}{K}\right)
\sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\epsilon_{\psi}^{2}
\sum_{f\in H_k(q,\Psi)}\omega_f^{-1}\sum_{\nu=0}^{\infty}\mathop{\sum \sum}_{m^{\prime}, \ell=1}^\infty \lambda_f(m^{\prime})\overline{\lambda_F} (m) \overline{\psi}(\ell m^{\prime}) \notag\\
& \hspace{3cm}\times W_{1} \left(\frac{m^{\prime} q^{\nu} \ell^2}{\tilde{N}} \right) \sum_{n=1}^\infty \overline{\lambda_f(nq^{2+\nu})} n^{it} V \left(\frac{n}{N} \right). \end{align} This step gives us a saving of the size $(\frac{N}{\tilde{N}})^{1/2} = \frac{N}{QK^2}.$
{\bf Step 4:} We again apply the Petersson Trace formula in \eqref{newsum} and obtain $\mathcal{F}=\textrm{diagonal}(\Delta_{1})+ \textrm{off diagonal}(\mathcal{O^{*}})$. We observe that diagonal($\Delta_{1}$) term vanishes and the dual off diagonal term is given by \begin{align}\label{new off} \mathcal{O^{*}} &= \left( \frac{N}{\tilde{N}}\right)^{1/2} \sum_{k\sim K} i^{-2k} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\epsilon_{\psi}^{2}\sum_{\nu=0}^{\infty} \mathop{\sum \sum}_{m^{'}, \ell=1}^\infty \lambda_F(m)\overline{\psi}(\ell m^{'}) W_{1} \left(\frac{m^{'} q^{\nu} \ell^2}{\tilde{N}} \right) \notag \\
& \hspace{1cm}\times \sum_{n=1}^\infty n^{it} V \left(\frac{n}{N} \right)2\pi i^{-k} \sum_{c=1}^\infty \frac{S_{\psi} (nq^{2+\nu},m^{'},cq)}{cq} J_{k-1} \left(\frac{4\pi\sqrt{m^{'}q^{\nu}n}}{c}\right) . \end{align}
This step gives us a saving of size $\frac{\sqrt{QK}}{C}$, where $C\sim Q$. From now on, we shall estimate $\mathcal{O^{*}}$.
{\bf Step 5:}
We evaluate the sum over $k$ in \eqref{new off} using stationary phase integral and also evaluate the sum over $\psi$ in the resulting expression. This process gives us the following expression for $\mathcal{O^{*}}$: \begin{align*} \mathcal{O^{*}} &=\sqrt{\frac{N}{\tilde{N}}} \frac{\phi(q)}{q}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_F(m) U \left(\frac{m \ell^2}{\tilde{N}} \right) \sum_{n}n^{it} W \left(\frac{n}{N} \right) \sum_{c\ll \frac{Q}{\ell}} \frac{S(n,m;c)}{c} e\left(\pm \frac{\overline{cl}}{q}\pm\frac{\sqrt{nm}}{c} \right) . \end{align*} In this step, the sum over $k$ gives us a saving of size $\sqrt{K}$ and the sum over $\psi$ gives a saving of size $\sqrt{Q}$. Thus, saving in this step is \begin{align*} \frac{N}{QK^2}\frac{\sqrt{QK}}{\sqrt{C}}\sqrt{QK}=\frac{N}{K\sqrt{Q}}. \end{align*} Hence total saving at this stage is \begin{align*} \frac{N}{\sqrt{Qt}}\frac{N}{K\sqrt{Q}}=\frac{N^2}{QK\sqrt{t}}. \end{align*}
{\bf Step 6:} We now apply the Poisson summation formula to the sum over $n$. The initial length for $n$-sum is $N$. Here ``analytic conductor'' is of size $t$ and ``arithmetic conductor'' is of size $c$. Hence the dual length is supported on $\frac{ct}{N}$. Saving in this step is of size $\frac{N}{\sqrt{Qt}}$. We obtain the following bound for $\mathcal{O^{*}}$: \begin{align*}
\mathcal{O^{*}} &\ll \left( \frac{N}{\tilde{N}}\right)^{1/2} \mathop{\sum \sum}_{m, \ell =1}^\infty |\lambda_F(m)\overline{\psi}(l ) W\left(\frac{ml^{2}}{\Tilde{N}}\right)|\ \ |\sum_{c\sim C} \sum_{n\ll\frac{ct}{N}} \frac{1}{c} e\left(\frac{-m \overline{n}}{c} \right)I(m,n,c)|. \end{align*}
{\bf Step 7:} We apply the Cauchy--Schwarz inequality to get rid of the Fourier coefficients. Opening the absolute value square and interchanging the sum over $m$ gives us the following expression: \begin{align*}
\mathcal{O^{*}} & \ll \left( \frac{N}{\tilde{N}}\right)^{1/2} (\tilde{N})^{1/2}N \left(\sum_{m\sim \tilde{N}} W(\frac{m}{\tilde{N}})\ \ |\sum_{c\sim C} \sum_{n\ll\frac{c t}{N}} \frac{1}{c} e\left(\frac{-m \overline{n}}{c} \right)I(m,n,c)|^{2} \right)^{1/2} \notag\\ &:= N^{3/2} (\mathcal{O}_2^\star)^\frac{1}{2}, \end{align*} where \begin{align*} \mathcal{O}_2^\star =\mathop{\sum \sum}_{c_{1},c_{2} \sim Q} \frac{1}{c_1 c_2} \mathop{\sum \sum}_{n_i \sim \frac{c_i t}{N}} \sum_{m\sim \tilde{N}} e\left(\frac{-m \overline{n_{1}}}{c_{1}} +\frac{m \overline{n_{2}}}{c_{2}} \right)I(m,n_{1},c_{1})I(m,n_{2},c_{2}) U\left(\frac{m}{\tilde{N}}\right). \end{align*} We again apply the Poisson summation formula to sum over $m$. ``Analytic conductor'' is of size $K^2$ and ``arithmetic conductor'' is of size $c_{1}c_{2}$. From the diagonal terms we get a saving of size $\frac{Q^2 t}{N}$. From off diagonal terms we save $\frac{\tilde{N}}{K^2 \sqrt{c_{1}c_{2}}}$. Also we are able to save $\sqrt{c_{1}c_{2}}$ from the resulting congruence relation. Thus, total savings in the off diagonal terms is of size \begin{align*} \frac{\tilde{N}}{K^2 \sqrt{c_{1}c_{2}}}\sqrt{c_{1}c_{2}}=\frac{\tilde{N}}{K}=\frac{Q^2 K^3}{N}. \end{align*} Therefore, total savings in sum over $m$ is of size $\min\left\lbrace\frac{Q^2 t}{N},\frac{Q^2 K^3}{N}\right\rbrace$. Optimal choice of $K$ is given by $K=t^{1/3}$. Hence, total saving from all of the above steps is of size \begin{align*} \frac{N^2}{QK\sqrt{t}}\left(\frac{Q^2t}{N}\right)^{1/2}=\frac{N^{3/2}}{K}. \end{align*} Finally, we obtain \begin{align*}
|\mathcal{F}|\ll \frac{QKN^{2}}{N^{3/2}/K}=QK^{2}\sqrt{N}=KQ\sqrt{N}\,t^{1/3}, \end{align*} since $K=t^{1/3}$. This proves our claim.
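The choice $K=t^{1/3}$ can also be read off directly from the two branches of the savings above (we suppress $t^{\epsilon}$ factors): combining them, the argument gives
\begin{align*}
S(N)\ll \sqrt{N}\,\max\left\{K,\ \sqrt{t/K}\right\},
\end{align*}
and the right-hand side is minimised when $K=\sqrt{t/K}$, that is $K^{3}=t$; with $K=t^{1/3}$ both branches give $S(N)\ll \sqrt{N}\,t^{1/3}$.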
\section{Preliminaries} In this section, we shall recall some basic facts about $SL(2, \mathbb{Z})$ automorphic forms (for details see \cite{HI} and \cite{IK1}). \subsection{Holomorphic cusp forms}
Let $f $ be a holomorphic Hecke eigenform of weight $k$ for the full modular group $ SL(2, \mathbb{Z})$. The Fourier expansion of $f$ at $\infty$ is given by $$ f(z)= \sum_{n=1}^\infty \lambda_f(n) n^{(k-1)/2} e(nz),$$
where $ e(z) = e^{2\pi i z}$ and $\lambda_f(n), \ {n \in \mathbb{Z}}$ are the normalized Fourier coefficients. Deligne proved that $|\lambda_f(n)| \leq d(n)$, where $d(n)$ is the divisor function. $L$-function associated with the form $f$ is given by
\[ L( s, f )= \sum_{n=1}^\infty \frac{\lambda_f(n)}{n^s} \ = \prod_p \left( 1 -\lambda_f(p) p^{-s} + p^{-2s} \right)^{-1} \ \ \ (\Re s>1). \] The completed $L$-function is given by \[ \Lambda(s, f) : = ( 2 \pi)^{-s} \Gamma \left( s + \frac{k-1}{2}\right) L( s, f ) = \pi^{-s} \Gamma\left( \frac{s + (k+1)/2}{2}\right) \Gamma\left( \frac{ s + (k-1)/2}{2}\right)L( s, f ). \]
Hecke proved that $L(s, f)$ admits an analytic continuation to the whole complex plane and satisfies the functional equation \begin{align*}
\Lambda(s, f) = \epsilon(f) \ \Lambda(1-s,\overline{f}), \end{align*}
where $ \epsilon(f)$ is a root number and $\overline{f} $ is the dual form of $f$.
\subsection{Maass cusp forms} Let $f$ be a weight zero Hecke-Maass cusp form with Laplace eigenvalue $1/4 + \nu^2$. The Fourier series expansion of $f$ at $\infty$ is given by \[
f(z)= \sqrt{y} \sum_{n \neq 0} \lambda_f(n) K_{ i \nu} (2 \pi |n|y) e(nx), \]
where $ K_{ i \nu}(y)$ is the Bessel function of second kind. Ramanujan-Petersson conjecture predicts that $|\lambda_f(n)|\ll n^\epsilon$. The work of H. Kim and P. Sarnak \cite{KS} tells us that $|\lambda_f(n)|\ll n^{7/64+\epsilon}$. $L$-function associated with the form $f$ is defined by $ L(s, f) := \sum_{n=1}^\infty \lambda_f(n) n^{-s}$ ( $\Re \ s>1$). It extends to an entire function and satisfies the functional equation
$ \Lambda(s, f) = \epsilon(f ) \Lambda(1-s, \overline{f})$, where $ |\epsilon(f )| = 1$ and completed $L$-function $ \Lambda(s, f)$ is given by \[ \Lambda(s, f) = \pi^{-s} \Gamma \left( \frac{s + i \nu }{ 2} \right) \Gamma \left( \frac{s - i \nu }{ 2} \right) L(s, f) . \]
\section{Some Lemmas}
In this section we shall recall some results which we require in the sequel. We first recall the following version of Stirling's formula.
\begin{lemma} \label{stirling} Let $s = \sigma + it$ with $A_1 \leq \sigma \leq A_2$ and $t \geq 0$. We have \begin{equation}
\Gamma(s) = \sqrt{\frac{2\pi}{s}} \left(\frac{s}{e}\right)^{s} \left\lbrace 1+\sum_{n=1}^{N} \frac{a_{n}}{s^{n}}+O\left( |s|^{-N-1}\right)\right\rbrace, \end{equation} and
|\Gamma(s)| = \sqrt{2\pi} t^{\sigma-1/2} e^{-\frac{\pi}{2} |t|} \left( 1+ O\left( |t|^{-1}\right)\right). \end{equation*} \end{lemma}
\begin{lemma}\label{sum over k} Let $g(u)$ be a real valued smooth function on $\mathbb{R}$. Let $\hat{g}(v)$ be the Fourier transform of $g$ and let $J_{u}(x)$ be the Bessel function of order $u$. We have \begin{align*} 4 \sum_{u \equiv a (4)} g(u) J_u (2 \pi x) = \int_{\mathbb{R}} \hat{g}(v) C_a(v, x) dv, \end{align*} where \begin{align*}
C_a(v, x) = -2i \sin(x \sin 2 \pi v) + 2 i^{1-a} \sin(x \cos 2 \pi v). \end{align*} \end{lemma}
\begin{proof} See \cite[page 85-86]{HI}. \end{proof}
We now recall Rankin-Selberg bound for Fourier coefficients in the following lemma.
\begin{lemma} \label{rankin Selberg bound} Let $\lambda_f(n)$ be the Fourier coefficients of a holomorphic cusp form or a Maass form. For any real number $x\geq 1$, we have \begin{align*}
\sum_{1\leq n \leq x} \left| \lambda_f(n) \right|^2 \ll_{f, \epsilon} x^{1+\epsilon}. \end{align*}
\end{lemma}
We shall also need to estimate exponential integrals of the form: \begin{equation} \label{eintegral} \mathfrak{I}= \int_a^b g(x) e(f(x)) dx, \end{equation} where $f$ and $g$ are real valued smooth functions on the interval $[a, b]$. We recall the following lemma on exponential integrals.
\begin{lemma} \label{second deri bound}
Let $f$ and $g$ be real valued twice differentiable functions and let $f^{\prime \prime} \geq r>0$ or $f^{\prime \prime} \leq -r <0$ throughout the interval $[a, b]$. Suppose that $g(x)/f^\prime(x)$ is monotonic and $|g(x)| \leq M$. Then we have
\begin{align*} \mathfrak{I} \leq \frac{8M}{\sqrt{r}}. \end{align*} \end{lemma} \begin{proof} See \cite[Lemma 4.5, page 72]{ECT} \end{proof}
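As a quick illustration of Lemma \ref{second deri bound}, take $f(x)=tx^{2}$ and $g\equiv 1$ on the interval $[1,2]$, so that $f^{\prime\prime}=2t$, $g/f^{\prime}=1/(2tx)$ is monotonic and $M=1$. The lemma then gives the familiar square-root cancellation
\begin{align*}
\int_{1}^{2} e\left(tx^{2}\right) dx \leq \frac{8}{\sqrt{2t}} \ll t^{-1/2}.
\end{align*}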
\begin{lemma} \label{exponential inte} Let $0<\delta<1/10$, $X, Y, V, V_{1}, Q>0$, $Z:=Q+X+Y+V_{1}+1$, and assume that
\begin{align} Y\ge Z^{3\delta},\ \ V_{1} \ge V \ge \frac{QZ^{\frac{\delta}{2}}}{Y^{\frac{1}{2}}}. \end{align} Suppose that $w$ is a smooth function on $\mathbb{R}$ with support on an interval $J$ of length $V_{1}$ satisfying $w^{(j)}\ll_{j} XV^{-j}$ for all $j\in \mathbb{N}$. Suppose that $h$ is a smooth function on $J$ such that there exists a unique point $t_{0}\in J$ with $h^\prime(t_0)=0$, and furthermore that \begin{align} h^{(2)}(t) \gg YQ^{-2}, \ \ h^{(j)}(t) \ll_{j} YQ^{-j} \ \ \text{for } j=1, 2,\dots \ \text{and } t\in J. \end{align}
I=\int_{\mathbb{R}} w(t)e^{i h(t)} dt \end{align*} has an asymptotic expansion of the form
\begin{align} \label{huxely bound} I=\frac{e^{ih(t_{0})}}{\sqrt{h^{(2)}(t_{0})}}\sum_{n\le 3\delta^{-1}A} p_{n}(t_{0}) + O_{A,\delta}(Z^{-A}), \end{align} \begin{align*}
p_{n}(t_{0})=\frac{\sqrt{2\pi}e^{\pi i/4}}{n!}\left(\frac{i}{2h^{(2)}(t_{0})}\right)^{n} G^{(2n)}(t_{0}), \end{align*}
where $A>0$ is arbitrary,\ and \begin{align}
G(t)= w(t)e^{iH(t)},\qquad H(t)= h(t)- h(t_{0})- \frac{1}{2}h^{(2)}(t_{0})(t- t_{0})^{2}. \end{align} Furthermore, each $p_{n}$ is a rational function in $h^{\prime \prime}, h^{\prime \prime \prime},\dots,$ satisfying \begin{align}
\frac{d^{j}}{dt^{j}} p_{n}(t_{0})\ll_{j,n}X(V^{-j}+\ Q^{j})\left( (V^2 Y/Q)^{-n} +\ Y^{-n/3}\right). \end{align}
The leading term satisfies \begin{align*}
\sqrt{2\pi}e^{\frac{\pi i}{4}}\frac{e^{i h(t_{0})}}{\sqrt{h^{(2)}(t_{0})}}w(t_{0}) \ll \frac{Q X}{Y^{1/2}}. \end{align*}
Also, if $h^\prime(t)$ does not vanish on the interval $J$ and satisfies $|h^\prime(t)| \geq R$ for some $R>0$, then we have \begin{align} \label{without stationary}
I \ll_A V X \left[ (QR/ \sqrt{Y})^{-A} + (RV)^{-A} \right]. \end{align}
\end{lemma} \begin{proof} See Lemma $8.1$ and Proposition $8.2$ of \cite{BKY}. We use this result to show that, in the absence of a stationary point, the integral is negligibly small, i.e., $O_A(t^{-A})$ for any $A>0$, if $R\gg t^\epsilon \max \left\lbrace Y^{1/2}/ Q, V^{-1}\right\rbrace$. \end{proof} \section{First application of the Petersson trace formula} To prove our theorem, we shall prove the following proposition. \begin{proposition} We have \begin{align*} S(N) \ll \begin{cases} N & \textrm{if} \ \ 1 \ll N\ll t^{2/3 + \epsilon}, \\ \sqrt{N}\, t^{1/3 + \epsilon} & \textrm{if} \ \ t^{2/3 + \epsilon} \ll N \ll t^{1+\epsilon}, \end{cases} \end{align*} where \begin{align*} S(N)=\sum_{n=1}^{\infty} \lambda_F(n)\, n^{it}\, W(n/N). \end{align*} \end{proposition} We shall use the Petersson trace formula to separate the oscillations of $\lambda_F(n)$ and $n^{it}$, using harmonics from $H_k(q,\Psi)$, with $k\sim K$, $q\sim Q$, and $\Psi$ an odd character. The optimal sizes of $K$ and $Q$ will be chosen later. We now consider the following Fourier sum:
\begin{align}\label{sumit}
\mathcal{F} &=\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\sum_{f\in H_k(q,\Psi)}\omega_f^{-1}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_f(m)\lambda_F(m)\psi(\ell) U \left(\frac{m \ell^2}{N} \right) \nonumber \\
& \hspace{3cm}\times \sum_{n=1}^\infty \overline{\lambda_f(n)} n^{it} W \left(\frac{n}{N} \right),
\end{align} where $U$ is a smooth bump function supported on the interval $[0.5,3]$ such that $U(x)\equiv 1$ for $x\in [1,2]$ and $U^{(j)}(x)\ll_j 1$ for all $j \geq 1$. Applying the Petersson trace formula to the sum $\mathcal{F}$, we observe that the diagonal term is given by
\begin{align*} \Delta =\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_F(m)\psi(\ell) U \left(\frac{m \ell^2}{N} \right) m^{it} W \left(\frac{m}{N} \right).
\end{align*}
Since $W$ is supported on $[1,2]$, the above sum is non-zero only when $N\leq m\leq 2N$. If $\ell \geq 2$, then $m\ell^{2} \geq 4N$, so that $\frac{m\ell^{2}}{N} \geq 4$; since $U(x)$ vanishes for $x\geq 3$, this forces $\ell =1$. Finally, we obtain the following expression for $\Delta$:
\begin{align*} \Delta =\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\sum _{m=1}^\infty \lambda_F(m) m^{it} W \left(\frac{m}{N} \right)
\end{align*}
\begin{equation*}
\Rightarrow|\Delta| \asymp KQ|S(N)|.
\end{equation*}
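For completeness, here is a quick heuristic for the last step (assuming, as the normalization suggests, that the dagger restricts $\psi$ to a family of $\asymp q \asymp Q$ characters modulo $q$): the inner sum over $m$ is exactly $S(N)$ and is independent of $k$ and $\psi$, while $W\geq 0$ gives $\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \asymp K$. Hence \begin{align*} |\Delta| \asymp K \cdot Q \cdot |S(N)|. \end{align*}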
Next we consider the off diagonal term, which is given by
\begin{align}\label{ooff} \mathcal{O} &=\sum_{k\sim K} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\mathop{\sum \sum \sum}_{m, \ell ,n=1}^\infty \lambda_F(m) n^{it} \psi(\ell) W\left(\frac{n}{N}\right) U \left(\frac{m \ell^2}{N} \right) \notag\\ & \hspace{3cm} \times 2\pi i^{-k} \sum_{c=1}^\infty \frac{S_{\psi} (m,n,cq)}{cq} J_{k-1} \left(\frac{4\pi\sqrt{mn}}{cq}\right).
\end{align}
We will now consider the sum over $k$ in the above equation. Using Lemma \ref{sum over k} with $x=\frac{2\sqrt{mn}}{cq}$, we obtain
\begin{align*} S_{1} : = \sum_{k \sim K} i^{-k}W\left(\frac{k-1}{K}\right)J_{k-1}(2\pi x) = K \int_{\mathbb{R}} \hat{W}(Kv) \sin(x \cos(2\pi v))dv. \end{align*} By the change of variable $Kv\rightarrow v$ in the above equation, and writing the sine as a combination of $e(\pm x\cos(2\pi v/K))$ (it suffices to treat one sign), we obtain \begin{align*} S_{1}= \int_{\mathbb{R}} \hat{W}(v) e\left(x\cos\left(\frac{2\pi v}{K}\right)\right)dv . \end{align*}
We have
\begin{equation} \label{w hat}
\hat{W}(v)=\int_{\mathbb{R}}W(u)e(vu)du. \end{equation}
Integrating by parts $j$ times and using $W^{(j)}(u)\ll_{j}(t^{\epsilon})^{j}$, we obtain \begin{align*} \hat{W}(v)\ll_{j}\left(\frac{t^{\epsilon}}{2\pi |v|}\right)^{j}. \end{align*}
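The bound on $\hat{W}$ follows by a standard computation: one integration by parts in \eqref{w hat} gives, for $v\neq 0$, \begin{align*} \hat{W}(v)=\int_{\mathbb{R}}W(u)e(vu)du = -\frac{1}{2\pi i v}\int_{\mathbb{R}}W^{\prime}(u)e(vu)du, \end{align*} and iterating $j$ times yields \begin{align*} \hat{W}(v)=\left(\frac{-1}{2\pi i v}\right)^{j}\int_{\mathbb{R}}W^{(j)}(u)e(vu)du \ll_{j} \left(\frac{t^{\epsilon}}{2\pi |v|}\right)^{j}, \end{align*} since $W$ is supported on an interval of bounded length and $W^{(j)}\ll_{j} t^{j\epsilon}$.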
Hence, $\hat{W}(v)$ is negligibly small if $|v|\gg t^{\epsilon}$. Taking $V=At^{\epsilon}$ for some fixed constant $A$, we get \begin{align} \label{S_1} S_{1}= \int_{\mathbb{R}} \hat{W}(v) F(v) e(x \cos(\frac{2\pi v}{K}))dv, \end{align} where $F$ is a smooth bump function supported on the interval $[-2V,2V]$ such that $F(v)\equiv 1$ for $v\in [-V,V]$ and $F^{(j)}(v)\ll_j 1$ for all $j \geq 1$. Using equation \eqref{w hat} in equation \eqref{S_1}, we obtain \begin{align*} S_{1}= \mathop{\int\int}_{\mathbb{R}^{2}} W(u) F(v) e\left( uv \pm x \cos(\frac{2\pi v}{K}) \right)dudv. \end{align*}
Applying Lemma \ref{exponential inte} to the $v$-integral, we observe that the integral is negligibly small if $x\ll K^{2-\epsilon}$. This analysis holds even if the weight function has small oscillation, say $W^{(j)}\ll_{j} t^{j\epsilon}$. In the complementary range for $x$, we expand the cosine function into a Taylor series. Since $x\ll N/Q$, if we assume that $N\ll QK^{4}t^{-\epsilon}$, then we only need to retain the first two terms of the expansion, and the above integral essentially reduces to \begin{align*}
e(\pm x)\iint_{\mathbb{R}^{2}} W(u)F(v) e\left(uv\pm \frac{4\pi^{2}xv^{2}}{K^{2}}\right)dudv. \end{align*} For the integral over $v$, we apply stationary phase analysis. If we choose the $+$ sign in the above equation, then the $v$-integral is negligibly small due to the absence of a stationary point (by the second case of Lemma \ref{exponential inte}). Otherwise, the integral is given by \begin{align*}
e\left(x+\frac{u^{2}K^{2}}{16\pi^{2}x}\right)\frac{K}{\sqrt{x}}\rightsquigarrow e(x)\frac{K}{\sqrt{x}} \end{align*} with $x\gg K^{2-\epsilon}$ (up to an oscillatory factor which oscillates at most like $t^{\epsilon}$). In any case, it follows that we can truncate the sum over $c$ in \eqref{ooff} at $C\asymp Nt^{\epsilon}/QK^{2}$ at the cost of a negligible error term: the effective range of $x$ is $x\gg K^{2}t^{\epsilon}$, and since $x=\frac{2\sqrt{mn}}{cq}$, this gives $ \frac{N}{cQ}\gg K^{2} t^{\epsilon}$, i.e., $ c\ll\frac{N t^{\epsilon}}{QK^{2}}$. If we choose the parameters $K$ and $Q$ such that $QK^{2}\gg Nt^{\epsilon}$, then the off-diagonal term is negligibly small. Hence, we obtain \begin{align*}
|S(N)|\ll \frac{|\mathcal{F}|}{QK} +t^{-2018}. \end{align*} \section{Functional equation for $L(s,F\otimes f)$}
We now consider the sum \begin{align*} S_2 := \mathop{\sum \sum}_{m, \ell=1}^{\infty} \lambda_f(m) \lambda_F(m)\psi(\ell) U \left(\frac{m \ell^{2}}{N}\right) . \end{align*} Using the Mellin inversion formula, we obtain \begin{align*} S_2 =\int_{(\sigma)}\tilde{U}(s)N^{s}\mathop{\sum \sum}_{m, \ell=1}^\infty \frac{\lambda_f(m)\lambda_F(m)}{(m\ell ^{2})^{s}}\psi(\ell) ds =\int_{(\sigma)}\tilde{U}(s)N^{s} L(s,F\otimes f)ds. \end{align*} Applying the functional equation for $L(s,F\otimes f)$ (see \cite[page 135-136]{KMV}) in the above equation, we obtain
\begin{equation*}
S_2=\frac{q}{2\pi}\frac{g_{\psi}^{2}}{q\lambda_f(q^{2})}\int_{(\sigma)}\tilde{U}(s)\left(\frac{N}{(2\pi q)^{2}}\right)^{s}\frac{\gamma_k(1-s)}{\gamma_k(s)} L(1-s,\overline{F}\otimes \overline{f})ds ,
\end{equation*} where $\gamma_{k}(s)$ is a product of four gamma factors. Moving the line of integration to $\sigma=-\epsilon$ and expanding the resulting $L$-function into a Dirichlet series, we obtain \begin{align*} S_2= \frac{g_{\psi}^{2}}{2 \pi i} \overline{\lambda_f(q^{2}) } \mathop{\sum \sum}_{m, \ell=1}^\infty \frac{\lambda_f(m)\lambda_F(m)}{m\ell ^{2}}\psi(\ell) U\left(\frac{m \ell^{2}}{\tilde{N}}\right) \int_{(-\epsilon)}\tilde{U}(s)\left(\frac{Nm\ell^{2}}{q^{2}}\right)^{s}\frac{\gamma_k(1-s)}{\gamma_k(s)}ds. \end{align*} For $\tilde{N}\gg \frac{Q^{2}K^{4 }}{N}t^{\epsilon}$, we shift the contour to the left, and for $\tilde{N}\ll \frac{Q^{2}K^{4 }}{N}t^{-\epsilon}$, we shift the contour to $\frac{K-2}{2}$. Since $K$ is of size $\gg t^{1/3 - \epsilon}$, we observe that the contribution from the latter range is negligibly small. Let $\mathcal{U}=\{(U,\tilde{N})\}$ be a smooth dyadic partition of unity, which consists of pairs $(U,\tilde{N})$ with $U$ a non-negative smooth function supported on $[1,2]$ and $\sum_{(U,\tilde{N})} U\left(\frac{r}{\tilde{N}}\right)= 1$ for $r \in (0,\infty)$. The collection is locally finite in the sense that, for any given $\ell \in \mathbb{Z}$, there are only finitely many pairs with $\tilde{N} \in [2^{\ell},2^{\ell +1}]$. We record the above result in the following lemma. \begin{lemma} Let $\tilde{N}$ be as above. We have \begin{align*} &\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_f(m)\lambda_F(m)\psi(\ell) U\left(\frac{m \ell^{2}}{N}\right)= \eta i^{-2k} q \ \epsilon_{\psi}^{2}\overline{\lambda_f(q^{2}) } \sum_{\mathcal{U}}\mathop{\sum \sum}_{m, \ell=1}^\infty \frac{\lambda_f(m)\lambda_F(m)}{m\ell ^{2}}\psi(\ell) \\
& \hspace{3cm}\times U\left(\frac{m \ell^{2}}{\tilde{N}}\right)\frac{1}{2 \pi i} \int_{(0)}\tilde{U}(s)\left(\frac{Nm\ell^{2}}{q^{2}}\right)^{s}\frac{\gamma_k(1-s)}{\gamma_k(s)}ds +O(t^{-2018}), \end{align*}
where $\tilde{N}\asymp \frac{Q^{2}K^{4}}{N}$. \end{lemma} To cancel out the oscillations of the gamma factors, we shift the contour to $\Re s = 1/2$. For $s=\sigma +i\tau$, we have \begin{align*}
\tilde{U}(s)=\int_{0}^{\infty} U(x)x^{s-1}dx \ll_{j} \frac{t^{\epsilon}}{|s||s+1| \cdots |s+j-1|}. \end{align*}
We observe that $\tilde{U}(s)$ is negligibly small if $|\tau| \gg t^{\epsilon}$. Hence we shall focus on the range $|\tau|\ll t^{\epsilon}$. We note that $\frac{\gamma_k(1/2 +i\tau)}{\gamma_k(1/2-i\tau)}$ is a product of $4$ factors of the form $\frac{\Gamma(k_{j} +i\tau)}{\Gamma(k_{j}-i\tau)}$, where $k_{j}\sim K $. Let $K/2 +i\tau= re^{i\theta}$ with $r=\sqrt{\frac{K^{2}}{4}+\tau ^{2}}$ and $\theta=\tan^{-1}(2\tau /K)$. We obtain $\log r= \log K+O(\tau^{2}/K^2)$ and $\theta=\tau /K +O(\tau^3/K^3)$. Using Lemma \ref{stirling}, we obtain \begin{align*} \frac{\Gamma(K/2 +i\tau)}{\Gamma(K/2-i\tau)} &=\exp\left\lbrace (K/2+i\tau -1/2)\log(re^{i\theta}) -(K/2+i\tau)-(K/2-i\tau -1/2)\log(re^{-i\theta}) \right. \\
& \hspace{7cm} \left. +K/2-i\tau +O(\tau/K)\right\rbrace \\ &=\exp\left\lbrace (K/2+i\tau -1/2)(\log K+O(\tau^2 /K^2) +i(\tau/K +O(\tau^3/K^3))) \right. \\ & \left. \hspace{10pt} -(K/2+i\tau)-(K/2-i\tau -1/2)( \log K+O(\tau^2 /K^2) -i(\tau/K +O(\tau^3/K^3))) \right\rbrace \\ &=\exp\left(2i\tau \log K -i\tau /K -i \tau +O(\tau^2/K^2)\right). \end{align*}
We observe that oscillations with respect to $k$ are given by $(k/2)^{i\tau}$. Since $\tau \ll t^{\epsilon}$, we can ignore the oscillations with respect to $k$ and replace $W$ by $W_{1}$ such that $W_{1} ^{(j)}(x)\ll t^{j \epsilon}$. We record the above result in the following lemma. \begin{lemma} \label{lemma F times f} Let $F$ and $f$ be as above. We have \begin{align*} &S_3:= \mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_f(m)\lambda_F(m)\psi(\ell) U\left(\frac{m \ell^{2}}{N}\right) = \eta i^{-2k} N^{1/2} \epsilon_{\psi}^{2}\overline{\lambda_f(q^{2}) } \sum_{\mathcal{U}}\mathop{\sum \sum}_{m, \ell=1}^\infty \frac{\lambda_f(m)\lambda_F(m)}{\sqrt{m\ell ^{2}}}\psi(\ell) \\
& \hspace{3cm}\times U\left(\frac{m \ell^{2}}{\tilde{N}}\right)\frac{1}{2 \pi i} \int_{(1/2)}\tilde{U}(s)\left(\frac{Nm\ell^{2}}{q^{2}}\right)^{s}\frac{\gamma_k(1-s)}{\gamma_k(s)}ds +O(t^{-2018}) \\
&\hspace{2cm} = \eta i^{-2k} \left( \frac{N}{\tilde{N}}\right)^{1/2} \epsilon_{\psi}^{2}\overline{\lambda_f(q^{2}) } \sum_{\mathcal{U}}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_f(m)\lambda_F(m) W_1\left(\frac{m \ell^{2}}{\tilde{N}}\right) +O(t^{-2018}), \end{align*}
where $W_{1}^{(j)}(x)\ll_{j} t^{\epsilon j}$. \end{lemma}
Let $m=m^{\prime}q^{\nu}$, where $(m^{\prime},q)=1$.
Then \begin{equation*}
\overline{\lambda_{f}(mq^{2})}=\overline{\lambda_{f}(m^{\prime})} \ \overline{\lambda_{f}(q^{2+\nu})}=\overline{\psi(m^{\prime})}\lambda_{f}(m^{\prime}) \ \overline{\lambda_{f}(q^{2+\nu})}.
\end{equation*} Using the above expression and Lemma \ref{lemma F times f} in \eqref{sumit}, we obtain \begin{align*} \mathcal{F} &=\eta \left( \frac{N}{\tilde{N}}\right)^{1/2} \sum_{k\sim K} i^{-2k} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\epsilon_{\psi}^{2} \sum_{f\in H_k(q,\Psi)}\omega_f^{-1}\sum_{\nu=0}^{\infty}\mathop{\sum \sum}_{m^{\prime}, \ell=1}^\infty \lambda_f(m^{\prime})\overline{\lambda_F(m)}\overline{\psi}(\ell m^{\prime}) \\
& \hspace{3cm}\times W_{1} \left(\frac{m^{\prime} q^{\nu} \ell^2}{\tilde{N}} \right) \sum_{n=1}^\infty \overline{\lambda_f(nq^{2+\nu})} n^{it} V \left(\frac{n}{N} \right). \end{align*} Now applying the Petersson trace formula, we obtain
\begin{align*} \mathcal{F} &=\eta \left( \frac{N}{\tilde{N}}\right)^{1/2} \sum_{k\sim K} i^{-2k} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\epsilon_{\psi}^{2} \sum_{\nu=0}^{\infty} \mathop{\sum \sum}_{m^{\prime}, \ell=1}^\infty \lambda_F(m)\overline{\psi}(\ell m^{\prime}) W_{1} \left(\frac{m^{\prime} q^{\nu} \ell^2}{\tilde{N}} \right) \\
& \hspace{1cm} \times \sum_{n=1}^\infty n^{it} V \left(\frac{n}{N} \right) \left\lbrace \delta(m^{\prime},nq^{2+\nu})+2\pi i^{-k} \sum_{c=1}^\infty \frac{S_{\psi} (nq^{2+\nu},m^{\prime},cq)}{cq} J_{k-1} \left(\frac{4\pi\sqrt{m^{\prime}q^{\nu}n}}{c}\right) \right\rbrace . \end{align*} If $m^{\prime}=nq^{2+\nu}$, then $\psi(m^{\prime} \ell)= \psi(nq^{2+\nu}\ell)=0$, since $\psi$ is a character modulo $q$. Hence the diagonal term vanishes. From now on, we shall consider the dual off-diagonal term, which is given by (up to a constant multiple of $\eta$) \begin{align} \label{o star before proposition} \mathcal{O^{*}} &= \left( \frac{N}{\tilde{N}}\right)^{1/2} \sum_{k\sim K} i^{-2k} W\left(\frac{k-1}{K}\right) \sideset{}{^\dagger}\sum_{ \psi (\textrm{mod}\ q)}\epsilon_{\psi}^{2}\sum_{\nu=0}^{\infty} \mathop{\sum \sum}_{m^{'}, \ell=1}^\infty \lambda_F(m)\overline{\psi}(\ell m^{'}) W_{1} \left(\frac{m^{'} q^{\nu} \ell^2}{\tilde{N}} \right) \notag \\
& \hspace{3cm}\times \sum_{n=1}^\infty n^{it} V \left(\frac{n}{N} \right)2\pi i^{-k} \sum_{c=1}^\infty \frac{S_{\psi} (nq^{2+\nu},m^{'},cq)}{cq} J_{k-1} \left(\frac{4\pi\sqrt{m^{'}q^{\nu}n}}{c}\right) . \end{align} Next, we shall prove the following proposition.
\begin{proposition} \label{proposition o star} Let $ \mathcal{O^{*}} $ be as above. We have \begin{align*} \mathcal{O^{*}} \ll \sqrt{N} Q K^2 \left(1 + \frac{\sqrt{t}}{K^{3/2}}\right). \end{align*} \end{proposition}
To prove our theorem, it is enough to prove the above proposition: when $K= t^{1/3}$, we obtain $ \mathcal{O^{*}} \ll \sqrt{N} Q K t^{1/3}$, which implies that \begin{align*}
L\left(\frac{1}{2} + it, F \right) \ll \frac{|S(N)|}{\sqrt{N}} \ll \frac{\mathcal{O^\star}}{\sqrt{N} QK} \ll t^{\frac{1}{3} + \epsilon} . \end{align*}
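The choice $K=t^{1/3}$ balances the two terms in Proposition \ref{proposition o star}: dividing by $\sqrt{N}QK$ gives \begin{align*} \frac{\mathcal{O^{*}}}{\sqrt{N}QK} \ll K+\frac{\sqrt{t}}{\sqrt{K}}, \end{align*} and the two terms on the right are equal precisely when $K^{3/2}=\sqrt{t}$, i.e., $K=t^{1/3}$, each then being of size $t^{1/3}$.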
\section{Analysis of the dual off-diagonal term} We now consider the sum over $k$. Using Lemma \ref{sum over k} with $x= \frac{2\sqrt{m^\prime q^\nu n}}{c}$, we have \begin{align*}
S_{4} & = \sum_{k} i^{-k} W\left(\frac{k-1}{K}\right) J_{k-1}\left(\frac{4\pi \sqrt{m^{\prime} q^{\nu}n}}{c}\right) = \iint_{\mathbb{R}^{2}} W\left(u \right) F\left(v\right) e\left(uv\pm x \ \cos\frac{2\pi v}{K} \right)du dv. \end{align*} The above integral is negligibly small if $c\gg \frac{Q t^{\varepsilon}}{\ell}$, so the effective range of $c$ is $c\ll \frac{Q t^{\varepsilon}}{\ell}$. Since $\cos\left(y\right) =1-\frac{y^{2}}{2} + O\left(y^{4}\right)$ and $u\ll t^{\varepsilon}$, we obtain \begin{align*} S_{4} = e\left(x\right)\int_{\mathbb{R}} W\left(u \right)\int_{\mathbb{R}} F\left(v\right) e\left( uv\pm \frac{x \pi^2 v^{2}}{k^{2}}\right)\left(1+ O\left(\frac{1}{k^{4}}\right)\right) dv du. \end{align*} Let $G\left(v\right)=uv\pm \frac{x \pi^{2} v^{2}}{k^{2}}$. If we choose the positive sign in $G$, then there is no stationary point, so the above integral is negligibly small. From now on, we shall consider $G$ with the negative sign. If $G^{\prime}\left(v_{0}\right) =0$, then $v_{0}=\frac{uk^{2}}{4\pi x} \asymp \frac{K^{2}}{x}$; $G^{\prime\prime}\left(v\right)=\pm\frac{4x\pi}{k^{2}}$ and $G^{\left(j\right)}\left(v\right)=0$ for $ j\ge 3$. Applying Lemma \ref{exponential inte}, we obtain
\begin{align*} S_{4} & = e(x) \int_{\mathbb{R}} W\left(u \right)\frac{F\left(v_{0}\right)}{\sqrt{G^{\prime\prime}\left(v_{0}\right)}} e\left(G(v_{0})+\frac{1}{8} \right) du + \textrm{errors}\\ &= \int_{\mathbb{R}} W\left(u \right)e\left(x + \frac{u^2 K^2}{ 8 \pi^2 x} \right) \frac{K}{\sqrt{x}} (1+ o(1)) du. \end{align*} Since $x \gg K^2 t^\epsilon$, we note that the second term in the exponential is not oscillating with respect to $x$, and we absorb that term into the weight function. Next, we factor the Kloosterman sum:
\begin{align*}
S_{\psi}\left(n q^{2+\nu},m^{\prime};cq\right)=S_{\psi}\left(0,m^{\prime}\overline{c};q\right) S\left(nq^{1+\nu},m^{\prime}\overline{q};c\right) = \sqrt{q} \ \overline{\epsilon_{\psi}}\psi\left(m^{\prime}\overline{c}\right) S\left(n,m^{\prime}q^{\nu};c\right). \end{align*} We shall now execute the sum over $\psi$, which is given by
\begin{align*}
\frac{1}{2}\sum_{\psi(q)}\left(1-\psi(-1)\right)\epsilon_{\psi}^{2}\overline{\epsilon_{\psi}}\psi\left(m^{\prime}\overline{c}\right)\overline{\psi(m^{\prime}\ell)}
=\sideset{}{^\pm}\sum_{\psi(q)}\psi\left(\pm \overline{c\ell}\right)\frac{1}{2 \sqrt{q}}\sum_{\alpha(q)}\psi(\alpha)e(\frac{\alpha}{q}) = \frac{1}{2 \sqrt{q} }e\left(\pm\frac{\overline{c\ell}}{q}\right)\phi(q). \end{align*} Substituting the above evaluation in equation \eqref{o star before proposition}, we get \begin{align*} \mathcal{O^{*}} &=\sqrt{ \frac{N}{\tilde{N}}} \frac{\phi(q)}{q}\mathop{\sum \sum}_{m, \ell=1}^\infty \lambda_F(m) U \left(\frac{m \ell^2}{\tilde{N}} \right) \sum_{n}n^{it} W \left(\frac{n}{N} \right) \sum_{c\ll \frac{Q}{\ell}} \frac{S(n,m;c)}{c} e\left(\pm \frac{\overline{c\ell}}{q}\pm\frac{\sqrt{nm}}{c} \right). \end{align*}
Now we consider the sum over $n$. Let \begin{align*} S_{5}:= \sum_{n} n^{it} e\left(\frac{\sqrt{nm}}{c}\right)S(n,m;c) W \left(\frac{n}{N} \right). \end{align*} Substituting $n=\alpha+bc$, where $0\le b<c$, we obtain \begin{align*} S_{5}&= \sum_{\alpha(c)} S(\alpha,m;c) \sum_{b} (\alpha+bc)^{it} e\left(\frac{\sqrt{m(\alpha+bc)}}{c}\right) W \left(\frac{\alpha+bc}{N} \right). \end{align*}
Applying the Poisson summation formula to the sum over $b$, we obtain \begin{align*} S_{5}&= \sum_{\alpha(c)} S(\alpha,m;c) \sum_{n}\int_{\mathbb{R}} (\alpha+yc)^{it} e\left(\frac{\sqrt{m(\alpha+yc)}}{c}\right) W \left(\frac{\alpha+yc}{N} \right)e\left(-ny\right) dy. \end{align*} By the change of variable $v=\frac{\alpha+yc}{N}$, $dy=\frac{N}{c} dv$, we obtain \begin{align*} S_{5}&=\frac{N^{1+it}}{c}\sum_{n}\sum_{\alpha(c)}S(\alpha,m;c)e\left(\frac{n\alpha}{c}\right)\int_{\mathbb{R}} v^{it}W(v)e\left(\frac{\sqrt{mNv}-nNv}{c}\right) dv \\ &=\sum_{n} \mathcal{C}(m,c)I(m,n,c) , \end{align*} where $\mathcal{C}(m,c)$ is the character sum and $I(m,n,c)$ is the integral in the above equation. Integrating by parts, we observe that $I(m,n,c)\ll_{j}\left(t+\frac{\sqrt{mN}}{c}\right)^{j} (\frac{c}{nN})^{j}$. We choose the parameter $K$ such that $K\ll t^{1/2-\delta}$. By this choice of K, we obtain $I(m,n,c)\ll_{j}\left( \frac{ct}{nN}\right)^{j}$. We observe that the integral $I(m,n,c)$ is negligibly small if $n\gg \frac{ct^{1+\varepsilon}}{N}$. Now we consider the character sum $\mathcal{C}(m,c)$, which is given by \begin{align*} \mathcal{C}(m,c)=\sum_{\alpha(c)}\sum_{\beta(c)}e\left(\frac{\alpha\beta+m\overline{\beta} +n\alpha}{c} \right) =\sum_{\beta(c)}e\left(\frac{m\overline{\beta}}{c} \right) \sum_{\alpha(c)} e\left(\frac{\alpha(n+\beta)}{c} \right) =c\ e\left(\frac{-m \overline{n}}{c} \right). \end{align*} Substituting the above estimates for $\mathcal{C}(m,c) $ and $ I(m,n,c)$, we obtain \begin{align*} S_{5}=N^{1+it}\sum_{n\ll\frac{Qt}{N}}e\left(\frac{-m \overline{n}}{c} \right)I(m,n,c). \end{align*}
Now we analyse the integral $ I(m,n,c)$. We have \begin{align*} I(m,n,c)=\int_{\mathbb{R}}W(v)e\left( \frac{t \log v}{2\pi}+\frac{\sqrt{mNv}-nNv}{c}\right) dv :=\int_{\mathbb{R}}W(v)e(G_{1}(v))dv.
\end{align*}
We note that $G_{1}^{\prime\prime}(v)=-\frac{t}{2\pi v^{2}} \ -\frac{\sqrt{mN}}{4cv^{3/2}}$, so that $| G_{1}^{\prime\prime}(v)| \asymp t$. By Lemma \ref{second deri bound}, we obtain $I(m,n,c)\ll \frac{1}{\sqrt{t}}$. Substituting the estimate for $S_5$ and using $\phi(q)/q <1$, we obtain \begin{align} \label{o star after s5}
\mathcal{O^{*}} &\ll \left( \frac{N}{\tilde{N}}\right)^{1/2} \mathop{\sum \sum}_{m, \ell =1}^\infty |\lambda_F(m)\overline{\psi}(\ell) W\left(\frac{m\ell^{2}}{\tilde{N}}\right)|\ \ |\sum_{c\sim C} \sum_{n\ll\frac{ct}{N}} \frac{1}{c} e\left(\frac{-m \overline{n}}{c} \right)I(m,n,c)|. \end{align}
For simplicity, we shall consider the case $\ell=1$ (estimates for the other values of $\ell$ are similar). Applying the Cauchy--Schwarz inequality to equation \eqref{o star after s5}, we obtain \begin{align} \label{definition O 2 star}
\mathcal{O^{*}} & \ll \left( \frac{N}{\tilde{N}}\right)^{1/2} (\tilde{N})^{1/2}N \left(\sum_{m\sim \tilde{N}} W(\frac{m}{\tilde{N}})\ \ |\sum_{c\sim C} \sum_{n\ll\frac{c t}{N}} \frac{1}{c} e\left(\frac{-m \overline{n}}{c} \right)I(m,n,c)|^{2} \right)^{1/2} \notag\\ &:= N^{3/2} (\mathcal{O}_2^\star)^\frac{1}{2}. \end{align} Opening the absolute square and interchanging the order of summation, we obtain \begin{align}\label{O 2 star} \mathcal{O}_2^\star =\mathop{\sum \sum}_{c_{1},c_{2} \sim Q} \frac{1}{c_1 c_2} \mathop{\sum \sum}_{n_i \sim \frac{c_i t}{N}} \sum_{m\sim \tilde{N}} e\left(\frac{-m \overline{n_{1}}}{c_{1}} +\frac{m \overline{n_{2}}}{c_{2}} \right)I(m,n_{1},c_{1})I(m,n_{2},c_{2}) U\left(\frac{m}{\tilde{N}}\right). \end{align} We now apply the Poisson summation formula to the sum over $m$ with modulus $c_{1}c_{2}$. Writing $m=\beta +bc_{1}c_{2}$, we get \begin{align*} S_{6} &: =\sum_{\beta(c_{1}c_{2})}e\left(\frac{-\beta \overline{n_{1}}}{c_{1}} +\frac{\beta \overline{n_{2}}}{c_{2}} \right)\sum_{b}I(\beta +bc_{1}c_{2},n_{1},c_{1})I(\beta +bc_{1}c_{2},n_{2},c_{2}) U\left(\frac{\beta +bc_{1}c_{2}}{\tilde{N}}\right) \\ & =\sum_{\beta(c_{1}c_{2})}e\left(\frac{-\beta \overline{n_{1}}}{c_{1}} +\frac{\beta \overline{n_{2}}}{c_{2}} \right)\sum_{m}\int_\mathbb{R} I(\beta +uc_{1}c_{2},n_{1},c_{1})I(\beta +uc_{1}c_{2},n_{2},c_{2}) \\
& \hspace{3cm}\times U\left(\frac{\beta +uc_{1}c_{2}}{\tilde{N}}\right)e(-mu)du. \end{align*} Substituting $v=\frac{\beta +uc_{1}c_{2}}{\tilde{N}}$, we obtain \begin{align}\label{S_6}
S_{6}=\frac{\tilde{N}}{c_{1}c_{2}} \sum_{m}\mathcal{C}(m) \mathcal{J}(m), \end{align}
where the character sum $\mathcal{C}(m)=\sum_{\beta(c_{1}c_{2})}e\left(\frac{-\beta \overline{n_{1}}}{c_{1}} +\frac{\beta \overline{n_{2}}}{c_{2}}+\frac{m\beta}{c_{1}c_{2}} \right)$ and the integral \begin{align} \label{definition j m}
\mathcal{J}(m)&:= \mathcal{J}(m; n_1, n_2, c_1, c_2) :=\int_{\mathbb{R}}I(v\tilde{N},n_{1},c_{1})I(v\tilde{N},n_{2},c_{2}) U\left(v\right)e(-mv)dv \notag\\ &=\iint_{\mathbb{R}^2} W(y_{1})W(y_{2})\left(\frac{y_{1}}{y_{2}}\right)^{it} e\left(\frac{-Nn_{1}y_{1}}{c_{1}} +\frac{Nn_{2}y_{2}}{c_{2}} \right) \notag \\
& \hspace{2cm}\times \left\lbrace \int_\mathbb{R}U(u)e\left(-\frac{\sqrt{N\tilde{N}y_{1}v}}{c_{1}}+\frac{\sqrt{N\tilde{N}y_{2}v}}{c_{2}}-\frac{m\tilde{N}v}{c_{1}c_{2}}\right)dv\right\rbrace dy_{1}dy_{2}. \end{align} Integrating by parts $j$-times with respect to the variable $v$, we obtain
\begin{align*} \mathcal{J}(m) &\ll_{j} \left(1+\frac{\sqrt{N\tilde{N}}}{c_{1}}+\frac{\sqrt{N\tilde{N}}}{c_{2}} \right)^{j} \left(\frac{c_{1}c_{2}}{m\tilde{N}} \right)^{j} \ \ \ \ll_{j}\left(\frac{N}{mK^{2}}\right)^{j}. \end{align*} So the integral $\mathcal{J}(m)$ is negligibly small if $m\gg \frac{Nt^{\varepsilon}}{K^{2}}$. For $m=0$, using the bound $I(m, n; c) \ll t^{-1/2}$, we obtain \begin{equation} \label{bound j 0} \mathcal{J}(0) \ll t^{-1}. \end{equation} For $m \neq 0$, changing the variables $y_1 = x_1^2$, $y_2 = x_2^2$, and $v = x_3^2$ in \eqref{definition j m}, we obtain \begin{align*} \mathcal{J}(m)&=\iiint_{\mathbb{R}^3} x_1 W(x_1) x_2 W(x_2) x_3 W(x_3) \exp( i G(x_1, x_2, x_3))dx_1dx_2dx_3, \end{align*} where \begin{align*}
G:= 2 t \log x_1 - 2t \log x_2 - \frac{N n_1}{c_1} x_1^2 + \frac{N n_2}{c_2} x_2^2 - \frac{\sqrt{N \tilde{N}}}{c_1} x_1 x_3 + \frac{\sqrt{N \tilde{N}}}{c_2} x_2 x_3 - \frac{m \tilde{N}}{c_1 c_2} x_3^2. \end{align*} We apply Lemma \ref{exponential inte} to the $x_1$-integral first. We have \begin{align*} \mathcal{J}(m)&=\iint_{\mathbb{R}^2} W_2(x_2) W_3(x_3) \exp( i \tilde{G}( x_2, x_3))\int_{\mathbb{R}} W_1(x_1)\exp(iG_1(x_1))dx_1dx_2dx_3, \end{align*} where \begin{align*} \tilde{G}( x_2, x_3)= - 2t \log x_2 + \frac{N n_2}{c_2} x_2^2 + \frac{\sqrt{N \tilde{N}}}{c_2} x_2 x_3 - \frac{m \tilde{N}}{c_1 c_2} x_3^2, \end{align*} and \begin{align*}
G_1(x_1) = 2 t \log x_1 - \frac{N n_1}{c_1} x_1^2 - \frac{\sqrt{N \tilde{N}}}{c_1} x_1 x_3 . \end{align*} Let $x_1^0$ be the stationary point of $G_1(x_1)$. Applying Lemma \ref{exponential inte}, we obtain: \begin{align*}
\mathcal{J}(m) \rightsquigarrow \iint_{\mathbb{R}^2} W_2(x_2) W_3(x_3) \exp( i \tilde{G}( x_2, x_3))\sqrt{2\pi}\frac{\exp(i\frac{\pi}{4}+ iG_1(x_1^0))}{\sqrt{|G_1^{(2)}(x_1^0)|}}dx_2dx_3. \end{align*} We note that \begin{align*}
G_1^{(2)} (x_1)= -\frac{2 t}{x_1^2} \Rightarrow \left|G_1^{(2)}(x_1^0) \right| \asymp t. \end{align*} Similarly, applying the Lemma \ref{exponential inte} in $x_2$ variable, we obtain: \begin{align*}
\mathcal{J}(m)\rightsquigarrow\int_{\mathbb{R}} W_3(x_3) \exp( i G_{3}( x_3))\sqrt{2\pi}\frac{\exp(i\frac{\pi}{4}+ iG_2(x_2^{0}))}{\sqrt{|G_2^{(2)}(x_2^{0})|}} \sqrt{2\pi}\frac{\exp(i\frac{\pi}{4}+ iG_1(x_1^{0}))}{\sqrt{|G_1^{(2)}(x_1^{0})|}}dx_3, \end{align*} where \begin{align*} G_3(x_3)=- \frac{m \tilde{N}}{c_1 c_2} x_3^2, \ \ \ \ G_2(x_2) = -2 t \log x_2 + \frac{N n_2}{c_2} x_2^2 + \frac{\sqrt{N \tilde{N}}}{c_2} x_2 x_3 . \end{align*}
and $x_{2}^{0} $ is the stationary point of $G_2(x_2)$. As before, we note that $G_2^{(2)}(x_2^{0})$ is of size $t$. Thus, \begin{equation}\label{final j}
\mathcal{J}(m) \rightsquigarrow \int_{\mathbb{R}} W_3(x_3) \exp( i G_{4}( x_3))\sqrt{2\pi}\frac{\exp(i\frac{\pi}{4})}{\sqrt{|G_2^{(2)}(x_2^{0})|}} \sqrt{2\pi}\frac{\exp(i\frac{\pi}{4})}{\sqrt{|G_1^{(2)}(x_1^{0})|}}dx_3, \end{equation} where \begin{align*}
G_{4}( x_3)=- \frac{m \tilde{N}}{c_1 c_2} x_3^2+G_2(x_2^{0})+G_1(x_1^{0}). \end{align*} Applying the second derivative bound to \eqref{final j}, we obtain
\begin{equation} \label{bound j m } \mathcal{J}(m) \ll (tK)^{-1}. \end{equation}
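Heuristically, \eqref{bound j m } can be seen as follows (for $m$ at the top of its range, $m\asymp N/K^{2}$): the two stationary phase factors in \eqref{final j} contribute $|G_1^{(2)}(x_1^{0})|^{-1/2}|G_2^{(2)}(x_2^{0})|^{-1/2}\asymp t^{-1}$, while for the $x_3$-integral the quadratic term gives \begin{align*} |G_{4}^{(2)}(x_3)| \asymp \frac{m\tilde{N}}{c_1 c_2} \asymp \frac{N}{K^{2}}\cdot \frac{Q^{2}K^{4}}{N}\cdot\frac{1}{Q^{2}} = K^{2}, \end{align*} using $\tilde{N}\asymp Q^{2}K^{4}/N$ and $c_1,c_2\asymp Q$, so the second derivative bound yields an extra factor $K^{-1}$.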
We now consider the character sum \begin{align*} \mathcal{C}(m):=\sum_{\beta(c_{1}c_{2})}e\left(\frac{-\beta \overline{n_{1}}}{c_{1}} +\frac{\beta \overline{n_{2}}}{c_{2}}+\frac{m\beta}{c_{1}c_{2}} \right) =c_{1}c_{2} \mathbbm{1}(\overline{n}_1 c_2 - \overline{n}_2 c_1 \equiv m (c_1 c_2)). \end{align*}
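Indeed, writing the exponent over the common denominator $c_{1}c_{2}$, we get \begin{align*} \mathcal{C}(m)=\sum_{\beta(c_{1}c_{2})}e\left(\frac{\beta\left(m- \overline{n_{1}}c_{2}+ \overline{n_{2}}c_{1}\right)}{c_{1}c_{2}} \right), \end{align*} which, by the orthogonality of additive characters, vanishes unless $\overline{n}_1 c_2 - \overline{n}_2 c_1 \equiv m \ (c_1 c_2)$, in which case it equals $c_{1}c_{2}$.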
Substituting the evaluation of the character sum in \eqref{S_6}, we obtain \begin{align*} S_{6}=\tilde{N}\sum_{m\ll \frac{N}{K^{2}}} \mathbbm{1}(\overline{n}_1 c_2 - \overline{n}_2 c_1 \equiv m (c_1 c_2)) \mathcal{J}(m). \end{align*} Substituting the above estimate in \eqref{O 2 star}, we obtain \begin{align*} \mathcal{O}_2^\star =\tilde{N} \mathop{\sum \sum}_{c_{1},c_{2} \sim Q} \frac{1}{c_1 c_2} \sum_{n_{1} \sim \frac{c_1 t}{N}} \sum_{n_{2} \sim \frac{c_2t}{N}} \sum_{m\sim \frac{N}{K^{2}}} \mathbbm{1}(\overline{n}_1 c_2 - \overline{n}_2 c_1 \equiv m (c_1 c_2)) \mathcal{J}(m). \end{align*} Using equation \eqref{bound j 0}, we observe that the contribution of the diagonal term (when $c_1 = c_2$ and $n_1 = n_2$) is bounded from above by
\begin{align} \label{diagonal}
\mathcal{O}_2^\star (d) =\tilde{N} \sum_{c \sim Q} \frac{1}{c^2} \sum_{n \sim \frac{c t}{N}} |\mathcal{J}(0)| \ll \frac{ \tilde{N} }{N}. \end{align} Similarly, using equation \eqref{bound j m }, the contribution of the non-diagonal terms is bounded from above by \begin{align} \label{non diagonal}
\mathcal{O}_2^\star(nd) &=\tilde{N} \mathop{\sum \sum}_{c_{1},c_{2} \sim Q} \sum_{n_1 \sim \frac{c_1 t}{N}} \sum_{n_{2} \sim \frac{c_2 t}{N}} \frac{1}{c_1 c_2}\sum_{m\sim \frac{N}{K^2}} \frac{1}{c_1 c_2} |\mathcal{J}(m)| \notag \\ & \ll \tilde{N} \mathop{\sum \sum}_{c_{1},c_{2} \sim Q} \frac{c_1 t}{N} \frac{c_2 t}{N} \frac{1}{c_1 c_2} \frac{N}{K^2} \frac{1}{c_1 c_2} \frac{1}{t K} \ll \frac{\tilde{N} t }{NK^3}. \end{align} Using the bounds of equations \eqref{diagonal} and \eqref{non diagonal} in equation \eqref{definition O 2 star}, we obtain
\begin{align*} \mathcal{O^{*}} \ll N^{3/2} \left(\frac{\tilde{N} }{N} + \frac{\tilde{N} t }{NK^3}\right)^\frac{1}{2} \ll N \sqrt{\tilde{N}} \left(1 + \frac{\sqrt{t}}{K^{3/2}}\right) \ll \sqrt{N} Q K^2 \left(1 + \frac{\sqrt{t}}{K^{3/2}}\right). \end{align*} This proves Proposition \ref{proposition o star}.
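The last bound above uses $\tilde{N}\asymp \frac{Q^{2}K^{4}}{N}$, so that \begin{align*} N\sqrt{\tilde{N}} \asymp N\cdot\frac{QK^{2}}{\sqrt{N}}=\sqrt{N}\,QK^{2}. \end{align*}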
{\bf Acknowledgement:} The authors are grateful to Prof.\ Ritabrata Munshi, as most of this paper is based on ideas that he shared with us. The authors would also like to thank Prof.\ Satadal Ganguly for useful suggestions and comments, and the Stat-Math Unit, Indian Statistical Institute, Kolkata, for the wonderful academic atmosphere. During this work, S.~Singh was supported by the Department of Atomic Energy, Government of India, NBHM post-doctoral fellowship no.\ 2/40(15)/2016/R$\&$D-II/5765.
\end{document}
\begin{document}
\newtheorem{thm}{Theorem}[section] \newtheorem{cor}[thm]{Corollary} \newtheorem{lem}[thm]{Lemma} \newtheorem{prop}[thm]{Proposition} \newtheorem{defin}[thm]{Definition} \newtheorem{exam}[thm]{Example} \newtheorem*{examples}{Examples} \newtheorem{rem}[thm]{Remark} \newtheorem{case}{\sl Case} \newtheorem{claim}{Claim} \newtheorem{prt}{Part} \newtheorem*{mainthm}{Main Theorem} \newtheorem*{thmm}{Theorem} \newtheorem{question}[thm]{Question} \newtheorem*{notation}{Notation} \swapnumbers \newtheorem{rems}[thm]{Remarks} \newtheorem*{acknowledgment}{Acknowledgment} \newtheorem{questions}[thm]{Questions} \numberwithin{equation}{section}
\newcommand{\mathrm{ab}}{\mathrm{ab}} \newcommand{\mathrm{cont}}{\mathrm{cont}} \newcommand{\varinjlim}{\varinjlim} \newcommand{\ \ensuremath{\mathaccent\cdot\cup}}{\ \ensuremath{\mathaccent\cdot\cup}} \newcommand{\mathrm{div}}{\mathrm{div}} \newcommand{,\ldots,}{,\ldots,} \newcommand{^{-1}}{^{-1}} \newcommand{\cong}{\cong} \newcommand{\mathrm{pr}}{\mathrm{pr}} \newcommand{\mathrm{sep}}{\mathrm{sep}} \newcommand{\otimes}{\otimes} \newcommand{\alpha}{\alpha} \newcommand{\gamma}{\gamma} \newcommand{\Gamma}{\Gamma} \newcommand{\delta}{\delta} \newcommand{\Delta}{\Delta} \newcommand{\epsilon}{\epsilon} \newcommand{\lambda}{\lambda} \newcommand{\Lambda}{\Lambda} \newcommand{\sigma}{\sigma} \newcommand{\Sigma}{\Sigma} \newcommand{\mathbf{A}}{\mathbf{A}} \newcommand{\mathbf{B}}{\mathbf{B}} \newcommand{\mathbf{C}}{\mathbf{C}} \newcommand{\mathbf{F}}{\mathbf{F}} \newcommand{\mathbf{P}}{\mathbf{P}} \newcommand{\mathbf{Q}}{\mathbf{Q}} \newcommand{\mathbf{R}}{\mathbf{R}} \newcommand{\mathbf{S}}{\mathbf{S}} \newcommand{\mathbf{T}}{\mathbf{T}} \newcommand{\mathbf{Z}}{\mathbf{Z}} \newcommand{\mathbb{A}}{\mathbb{A}} \newcommand{\mathbb{C}}{\mathbb{C}} \newcommand{\mathbb{F}}{\mathbb{F}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{U}}{\mathbb{U}} \newcommand{\mathbb{Z}}{\mathbb{Z}} \newcommand{\mathfrak{f}}{\mathfrak{f}} \newcommand{\mathfrak{a}}{\mathfrak{a}} \newcommand{\mathfrak{m}}{\mathfrak{m}} \newcommand{\mathfrak{p}}{\mathfrak{p}} \newcommand{\mathfrak{q}}{\mathfrak{q}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\mathcal{J}}{\mathcal{J}} \newcommand{\mathcal{K}}{\mathcal{K}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{W}}{\mathcal{W}} \newcommand{\mathcal{V}}{\mathcal{V}}
\title{Filtrations of free groups as intersections}
\author{Ido Efrat} \address{Mathematics Department\\ Ben-Gurion University of the Negev\\ P.O.\ Box 653, Be'er-Sheva 84105\\ Israel} \email{efrat@math.bgu.ac.il}
\thanks{This work was supported by the Israel Science Foundation (grant No.\ 152/13).}
\keywords{lower central filtration, lower $p$-central filtration, Zassenhaus filtration, profinite groups, upper-triangular unipotent representations}
\subjclass[2010]{Primary 20E05 Secondary 20E18}
\maketitle
\begin{abstract} For several natural filtrations of a free group $S$ we express the $n$-th term of the filtration as the intersection of all kernels of homomorphisms from $S$ to certain groups of upper-triangular unipotent matrices. This generalizes a classical result of Gr\"un for the lower central filtration. In particular, we do this for the $n$-th term in the lower $p$-central filtration of $S$. \end{abstract}
\section{Introduction} We consider a group $G$ and decreasing filtrations $G_i$, $i=1,2,\ldots,$ of $G$ by normal subgroups such that $G_1=G$ and $[G_i,G_j]\leq G_{i+j}$ for every $i,j\geq1$. The \textbf{lower central filtration} $G^{(i)}$ is the fastest such filtration. Next to it, for a prime number $p$, we have the \textbf{$p$-Zassenhaus filtration} $G_{(i,p)}$ and the \textbf{lower $p$-central filtration} $G^{(i,p)}$, which are the fastest filtrations as above such that, in addition, $G_i^p\leq G_{ip}$, resp., $G_i^p\leq G_{i+1}$, for every $i\geq1$. Here, for subgroups $H,K$ of $G$ we write as usual $[H,K]$ (resp., $H^p$, $HK$) for the subgroup generated by all elements $[h,k]=h^{-1} k^{-1} hk$ (resp., $h^p$, $hk$) with $h\in H$ and $k\in K$. More concretely, we define inductively: \begin{enumerate} \item[(i)] $G^{(1)}=G$, \ $G^{(i)}=[G,G^{(i-1)}]$ for $i\geq2$; \item[(ii)] $G_{(1,p)}=G$, \ $G_{(i,p)}=(G_{(\lceil i/p\rceil,p)})^p\prod_{j+l=i}[G_{(j,p)},G_{(l,p)}]$ for $i\geq2$; \item[(iii)] $G^{(1,p)}=G$, \ $G^{(i,p)}=(G^{(i-1,p)})^p[G,G^{(i-1,p)}]$ for $i\geq2$. \end{enumerate} (See \cite{DixonDuSautoyMannSegal99}*{p.\ 5 and Prop.\ 1.16(2)} for the condition about the commutators in (i) and (iii)). The $p$-Zassenhaus filtration is also called the \textbf{$p$-modular dimension filtration} \cite{DixonDuSautoyMannSegal99}*{Ch.\ XI}.
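To get a concrete feel for definitions (i)--(iii), here is an illustrative computational aside (not part of the paper's development): in an abelian group all commutators vanish, so the recursions reduce to iterated $p$-th powers. The sketch below checks this for $G=\mathbb{Z}/8\mathbb{Z}$ and $p=2$, written additively (so $H^p$ becomes $pH$); the function names are ad-hoc choices.

```python
# Illustrative sanity check: the three filtrations for the abelian group
# G = Z/8Z with p = 2, written additively, so [H, K] = {0} and H^p = p*H.
from math import ceil

MOD = 8
G = frozenset(range(MOD))

def p_power(H, p=2):
    """H^p in additive notation: {p*h : h in H} taken mod 8."""
    return frozenset((p * h) % MOD for h in H)

def lower_central(i):
    # G^(1) = G, and G^(i) = [G, G^(i-1)] = {0} for i >= 2 since G is abelian.
    return G if i == 1 else frozenset({0})

def lower_p_central(i, p=2):
    # G^(1,p) = G, G^(i,p) = (G^(i-1,p))^p [G, G^(i-1,p)] = p * G^(i-1,p).
    H = G
    for _ in range(i - 1):
        H = p_power(H, p)
    return H

def zassenhaus(i, p=2):
    # G_(1,p) = G, G_(i,p) = (G_(ceil(i/p),p))^p for abelian G.
    if i == 1:
        return G
    return p_power(zassenhaus(ceil(i / p), p), p)

print([sorted(lower_p_central(i)) for i in range(1, 5)])
# lower 2-central: Z/8, 2Z/8, 4Z/8, {0}
print([sorted(zassenhaus(i)) for i in range(1, 6)])
# Zassenhaus:      Z/8, 2Z/8, 4Z/8, 4Z/8, {0}
```

Note that in this example the Zassenhaus filtration stalls at index $4$ (where $G_{(4,2)}=4\mathbb{Z}/8\mathbb{Z}$) while the lower $2$-central filtration already reaches $\{0\}$: the two filtrations are genuinely different.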
These filtrations also have their natural profinite analogs, where the subgroups are taken to be closed.
When $G=S$ is a free group, the subgroups $S^{(n)}$ (in the discrete case) and $S_{(n,p)}$ (in the profinite case) have known alternative descriptions in terms of linear representations. Namely for a unital commutative ring $R$ let $\mathbb{U}_n(R)$ be the group of all upper-triangular unipotent $n\times n$ matrices over $R$. Then:
(1) \quad When $S$ is a free discrete group on finitely many generators, $S^{(n)}=\bigcap\Ker(\varphi)$, where the intersection is over all group homomorphisms $\varphi\colon S\to \mathbb{U}_n(\mathbb{Z})$ (Gr\"un \cite{Grun36}).
(2) \quad When $S$ is a free profinite group, $S_{(n,p)}=\bigcap\Ker(\varphi)$, where the intersection is over all continuous group homomorphisms $\varphi\colon S\to \mathbb{U}_n(\mathbb{F}_p)$ (this is a special case of \cite{Efrat14}, which is proved under a more general cohomological assumption on $n$-fold Massey products).
In this note we prove a general intersection theorem for free groups (Theorem \ref{theorem qqq}) which gives similar results in a variety of situations, both in the discrete and the profinite case, including (1) and (2) above.
Moreover, it gives as a special case an analogous intersection theorem for the filtration $S^{(n,p)}$. Namely, let $G(n,p)$ be the group of all upper-triangular unipotent $n\times n$ matrices $(a_{ij})$ over $\mathbb{Z}/p^n\mathbb{Z}$ such that $a_{ij}\in p^{j-i}\mathbb{Z}/p^n\mathbb{Z}$ for every $i\leq j$.
\begin{thmm} One has $S^{(n,p)}=\bigcap\Ker(\varphi)$, where the intersection is over all group homomorphisms $\varphi\colon S\to G(n,p)$. \end{thmm} This result holds in both the discrete and profinite settings. The main tool we use is the Magnus representation of $S$ by formal power series, and a description of $S^{(n,p)}$ by means of this representation, due to Koch \cite{Koch60}.
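To make the target group $G(n,p)$ concrete, here is an illustrative check (not from the paper) for the smallest non-trivial case $n=3$, $p=2$: matrices over $\mathbb{Z}/8\mathbb{Z}$ with $a_{12},a_{23}\in 2\mathbb{Z}/8\mathbb{Z}$ and $a_{13}\in 4\mathbb{Z}/8\mathbb{Z}$. Enumerating all $4\cdot4\cdot2=32$ such matrices and checking closure under multiplication confirms that this set is indeed a group.

```python
# Illustration: G(3,2) = unipotent 3x3 matrices over Z/8Z with
# a12, a23 in 2Z/8Z and a13 in 4Z/8Z.  We enumerate it and verify it is
# closed under multiplication, hence a (finite) group of order 32.
from itertools import product

MOD = 8  # p^n = 2^3

def mat(a12, a13, a23):
    return ((1, a12, a13), (0, 1, a23), (0, 0, 1))

def mul(A, B):
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(3)) % MOD for j in range(3))
        for i in range(3)
    )

G32 = {mat(a, c, b)
       for a, b in product(range(0, MOD, 2), repeat=2)   # a12, a23 in 2Z/8Z
       for c in range(0, MOD, 4)}                        # a13 in 4Z/8Z

print("order:", len(G32))                                # 32
print("closed:", all(mul(A, B) in G32 for A in G32 for B in G32))
```

Closure holds because the only non-linear term in the product is $a_{12}b_{23}$, a product of two elements of $2\mathbb{Z}/8\mathbb{Z}$, which lands in $4\mathbb{Z}/8\mathbb{Z}$ as required for the $(1,3)$-entry.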
The motivation for this work comes from Galois theory, where results of this nature were studied for absolute Galois groups of fields. More specifically, let $F$ be a field containing a root of unity of order $p$ and let $G=G_F$ be its absolute Galois group with the (profinite) Krull topology. For a list $\mathcal{L}$ of finite groups let $G_\mathcal{L}=\bigcap\Ker(\varphi)$, where the intersection is over all continuous epimorphisms $\varphi\colon G\to\bar G$ where $\bar G\in\mathcal{L}$. Let $D_4$ be the dihedral group of order $8$, and for $p$ odd let $H_{p^3}$ (resp., $M_{p^3}$) be the unique non-abelian group of order $p^3$ and exponent $p$ (resp., $p^2$). The following facts for $n=3$ were proved by Min\'a\v c, Spira, and the author: \begin{enumerate} \item[(i)] $G^{(3,2)}=G_{(3,2)}=G_\mathcal{L}$ for $\mathcal{L}=\{1,\mathbb{Z}/2\mathbb{Z}, \mathbb{Z}/4\mathbb{Z},D_4\}$ \cite{MinacSpira96} (see also \cite{EfratMinac13}*{Remark 2.1(1)}); \item[(ii)] For $p>2$, $G_{(3,p)}=G_\mathcal{L}$, where $\mathcal{L}=\{1,\mathbb{Z}/p\mathbb{Z},H_{p^3}\}$ \cite{EfratMinac13}; \item[(iii)] For $p>2$, $G^{(3,p)}=G_\mathcal{L}$, where $\mathcal{L}=\{1,\mathbb{Z}/p^2\mathbb{Z},M_{p^3}\}$ \cite{EfratMinac11}. \end{enumerate} The result (2) above extends (i) and (ii) to higher subgroups $G_{(n,p)}$ in the $p$-Zassenhaus filtration, at least in the case of free pro-$p$ groups. Our Theorem solves the same problem for the lower $p$-central filtration $G^{(n,p)}$.
The paper is organized as follows: In \S2 we recall some basic facts on free pro-$\mathcal{C}$ groups on a (possibly infinite) basis $A$, where $\mathcal{C}$ is a full formation of finite groups. In \S3 and \S4 we extend the classical construction of the Magnus algebra $R_0\langle\langle X_A\rangle\rangle$ and Magnus homomorphism $\Lambda_{R_0,A}\colon S\to R_0\langle\langle X_A\rangle\rangle$ from the discrete setting to the setting of free pro-$\mathcal{C}$ groups, where $R_0$ is a unital ring. The heart of the proof is given in \S5. There we consider a ring homomorphism $\theta\colon R_0\to R$ and a system $\mathcal{J}=(J_k)_{k=0}^{n-1}$ of ideals in $R$ satisfying some natural conditions. This gives rise to the ring $T_{n,0}(\mathcal{J})$ of upper-triangular $n\times n$-matrices whose $(i,j)$-entries lie in $J_{j-i}$, and to the group $\mathbb{U}_n(\mathcal{J})$ of unipotent matrices in $T_{n,0}(\mathcal{J})$. Every representation $\varphi\colon S\to\mathbb{U}_n(\mathcal{J})$ lifts in a canonical way to a ring homomorphism $\hat\varphi\colon R_0\langle\langle X_A\rangle\rangle\to T_{n,0}(\mathcal{J})$ which is $\theta I_n$ on $R_0$ and such that the following square commutes (see Lemma \ref{sss}): \begin{equation} \label{com square} \xymatrix{ S\ar[r]^{\varphi}\ar[d]_{\Lambda_{R_0,A}} & \mathbb{U}_n(\mathcal{J})\ar@{_(->}[d] \\ R_0\langle\langle X_A\rangle\rangle \ar^{\hat\varphi}[r] & T_{n,0}(\mathcal{J}). } \end{equation} Under the additional assumption that $J_t=d^tR$ for some $d\in R$, $t=0,1,\ldots, n-1$, we use this lifting to identify the Magnus expansions of the elements of $\bigcap\Ker(\varphi)$, where $\varphi$ ranges over all such representations (Theorem \ref{theorem qqq}). Finally, in \S6 we apply this general result to obtain intersection theorems in all the above-mentioned special cases.
I warmly thank J\'an Min\'a\v c for many discussions that motivated this work. I also thank the referee for his helpful comments and suggestions.
After an earlier version of this paper was posted on the arXiv, J\'an Min\'a\v c and Nguyen Duy Tan sent me an advanced draft of their preprint \cite{MinacTan2}, where they prove a result in the spirit of the Theorem above, but with the target group $G(n,p)$ replaced by the groups $\mathbb{U}_{k+1}(\mathbb{Z}/p^{n-k}\mathbb{Z})$, $k=1,\ldots, n - 1$. They also recover Gr\"un's result (1). Their general approach was to use the Magnus theory in combination with \cite{Koch02}*{Th.\ 7.14}, which is the group ring analog of \cite{Koch60}.
\section{Free profinite groups} Let $\mathcal{C}$ be a \textbf{full formation} of finite groups, i.e., a non-empty family of finite groups closed under subgroups, epimorphic images and extensions (in the sense that if $N$ is a normal subgroup of a group $G$ and $N,G/N\in\mathcal{C}$, then $G\in\mathcal{C}$; see \cite{FriedJarden08}*{\S17.3}). We recall from \cite{FriedJarden08}*{\S17.4} the following terminology and facts about pro-$\mathcal{C}$ groups.
Let $G$ be a pro-$\mathcal{C}$ group and $A$ a set. A map $\varphi\colon A\to G$ \textbf{converges to $1$} if for every open normal subgroup $N$ of $G$, the set $A\setminus\varphi^{-1}(N)$ is finite.
We say that a pro-$\mathcal{C}$ group $S$ is a \textbf{free pro-$\mathcal{C}$ group on basis $A$} with respect to a map $\iota\colon A\to S$ if \begin{enumerate} \item[(i)] $\iota\colon A\to S$ converges to $1$; \item[(ii)] $\iota(A)$ generates $S$; \item[(iii)] For every pro-$\mathcal{C}$ group $G$ and every map $\varphi\colon A\to G$ converging to $1$, there is a unique continuous homomorphism $\hat\varphi\colon S\to G$ with $\varphi=\hat\varphi\circ\iota$ on $A$. \end{enumerate} A free pro-$\mathcal{C}$ group on $A$ exists, and is unique up to a unique continuous isomorphism. We denote it by $S_A(\mathcal{C})$. Necessarily, $\iota$ is injective, and we identify $A$ with its image in $S_A(\mathcal{C})$. In particular, we write $\hat\mathbb{Z}_\mathcal{C}$ for the free pro-$\mathcal{C}$ group on one generator.
\section{The Magnus Algebra} Let $A$ be a set and $A^*$ the set of all finite sequences of elements of $A$. We denote the empty sequence by $\emptyset$. For each $a\in A$ let $X_a$ be a variable. For $I=(a_1,\ldots, a_t)\in A^*$ let $X_I$ denote the formal product $X_{a_1}\cdots X_{a_t}$ (by convention $X_\emptyset=1$). Let $R$ be a unital commutative ring with additive group $R_+$. The \textbf{Magnus algebra} $R\langle\langle X_A\rangle\rangle$ over $R$ is the ring of all formal power series
$\sum_{I\in A^*}c_IX_I$, where $c_I\in R$, with the natural operations. We identify $R$ as the subring of $R\langle\langle X_A\rangle\rangle$ of all constant power series. Let $R\langle\langle X_A\rangle\rangle^\times$ be the multiplicative group of $R\langle\langle X_A\rangle\rangle$, and for a positive integer $n$ let $V_{R,A,n}$ be the set of all power series in $R\langle\langle X_A\rangle\rangle$ of the form $1+\sum_{|I|\geq n}c_IX_I$. For $1+\alpha\in V_{R,A,n}$ we have $(1+\alpha)^{-1}=\sum_{k=0}^\infty(-1)^k\alpha^k$, showing that $V_{R,A,n}$ is in fact a normal subgroup of $V_{R,A,1}$. We have \begin{equation} \label{invlim} V_{R,A,1}\cong\varprojlim_n V_{R,A,1}/V_{R,A,n}. \end{equation}
\begin{lem} \label{direct product} For every positive integer $n$ there is a group isomorphism \[ \Psi_n\colon
\prod_{|I|=n}R_+\to V_{R,A,n}/V_{R,A,n+1}, \quad
(c_I)_{|I|=n}\mapsto(1+\sum_{|I|=n}c_IX_I)V_{R,A,n+1}. \] \end{lem} \begin{proof}
It is straightforward to see that $\Psi_n$ is a group homomorphism and is injective. For the surjectivity, let $1+\sum_{|I|\geq n}c_IX_I\in V_{R,A,n}$. Then \[ \begin{split}
(1+\sum_{|I|\geq n}c_IX_I)(1+\sum_{|I|=n}c_IX_I)^{-1}
&=1+(\sum_{|I|\geq n+1}c_IX_I)(1+\sum_{|I|=n}c_IX_I)^{-1} \\
&\in 1+(\sum_{|I|\geq n+1}c_IX_I)V_{R,A,n}\subseteq V_{R,A,n+1}. \end{split} \]
Hence $\Psi_n((c_I)_{|I|=n})=(1+\sum_{|I|\geq n}c_IX_I)V_{R,A,n+1}$. \end{proof}
The map $\sum_Ic_IX_I\mapsto(c_I)_I$ identifies $R\langle\langle X_A\rangle\rangle$ with $\prod_{I\in A^*}R$. When $R$ is a profinite topological ring, this induces on the additive group of $R\langle\langle X_A\rangle\rangle$ a profinite (product) topology. Moreover, the multiplication map in $R\langle\langle X_A\rangle\rangle$ is continuous, making it a profinite topological ring. Then the isomorphisms (\ref{invlim}) and $\Psi_n$ are continuous.
\section{The Magnus homomorphism} Here we separate the discussion into the discrete case and the profinite case.
Suppose that $A$ is a finite set and $S$ is a free discrete group on the basis $A$. Let $R$ be a (discrete) unital commutative ring. We define the \textbf{Magnus homomorphism} $\Lambda=\Lambda_{R,A}\colon S\to R\langle\langle X_A\rangle\rangle^\times$
by $\Lambda(a)=1+X_a$ for $a\in A$. It is known to be injective (\cite{Magnus35}*{Satz I}, \cite{SerreCG}*{I-\S1.4}).
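The key feature of $\Lambda$ is that group commutators acquire high-degree expansions. As an illustrative check (not part of the paper), the sketch below models $\mathbb{Z}\langle\langle X,Y\rangle\rangle$ truncated in degree $\geq 3$ by dictionaries keyed by words, and verifies that $\Lambda([a,b])=(1+X)^{-1}(1+Y)^{-1}(1+X)(1+Y)$ equals $1+XY-YX$ modulo degree $\geq 3$, i.e., lies in $V_{\mathbb{Z},A,2}$.

```python
# Illustration: the Magnus expansion of a commutator, with the free
# associative power-series ring truncated in degree >= 3.  Series are
# dicts {word-tuple: integer coefficient}; zero coefficients are dropped.
MAXDEG = 2  # keep only words of length <= 2

def mul(f, g):
    h = {}
    for w1, c1 in f.items():
        for w2, c2 in g.items():
            w = w1 + w2
            if len(w) <= MAXDEG:
                h[w] = h.get(w, 0) + c1 * c2
    return {w: c for w, c in h.items() if c != 0}

def gen(x):
    """Lambda(a) = 1 + X_a for a generator a."""
    return {(): 1, (x,): 1}

def inv(f):
    """(1 + alpha)^{-1} = sum_k (-alpha)^k, truncated."""
    neg = {w: -c for w, c in f.items() if w != ()}
    out, term = {(): 1}, {(): 1}
    for _ in range(MAXDEG):
        term = mul(term, neg)
        for w, c in term.items():
            out[w] = out.get(w, 0) + c
    return {w: c for w, c in out.items() if c != 0}

X, Y = gen("x"), gen("y")
comm = mul(mul(inv(X), inv(Y)), mul(X, Y))   # Lambda(a^-1 b^-1 a b)
print(comm)  # {(): 1, ('x', 'y'): 1, ('y', 'x'): -1}
```

The degree-$1$ terms cancel and the leading term is $XY-YX$; this is the first step of the Magnus--Witt identification of the lower central filtration used in Example \ref{exam 2} below.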
Next we define the Magnus homomorphism in a profinite setting. Let $\Pi$ be a set of prime numbers and let $\mathcal{C}=\mathcal{C}(\Pi)$ be the family of all finite $\Pi$-groups, i.e., finite groups whose order is a product of primes in $\Pi$. It is a full formation. Let $R=\hat \mathbb{Z}_\mathcal{C}=\prod_{p\in \Pi}\mathbb{Z}_p$ and note that it is a profinite ring.
\begin{lem} \label{pro-C} In this setup, $V_{R,A,1}$ is a pro-$\mathcal{C}$ group. \end{lem} \begin{proof} By Lemma \ref{direct product}, $V_{R,A,n}/V_{R,A,n+1}$ is a pro-$\mathcal{C}$ group for every $n$. Using the extension \[ 1\to V_{R,A,n}/V_{R,A,n+1}\to V_{R,A,1}/V_{R,A,n+1}\to V_{R,A,1}/V_{R,A,n}\to 1 \] we conclude by induction that $V_{R,A,1}/V_{R,A,n}$ is a pro-$\mathcal{C}$ group for every $n$. Now use the isomorphism (\ref{invlim}). \end{proof}
For a finite subset $B$ of $A$, $V_{R,A\setminus B,1}$ is a closed subgroup of $V_{R,A,1}$. We have $\bigcap_BV_{R,A\setminus B,1}=\{1\}$. Hence every open normal subgroup of $V_{R,A,1}$ contains some $V_{R,A\setminus B,1}$ \cite{RibesZalesskii10}*{Prop.\ 2.1.5(a)}. If $a\in A$ and $1+X_a\not\in V_{R,A\setminus B,1}$, then $a\in B$. Therefore the map $A\to V_{R,A,1}$, $a\mapsto 1+X_a$, converges to $1$. In view of Lemma \ref{pro-C}, it extends to the continuous \textbf{Magnus homomorphism} $\Lambda_{\mathcal{C},A}\colon S_A(\mathcal{C})\to V_{R,A,1}\leq R\langle\langle X_A\rangle\rangle^\times$.
\section{Unipotent representations} We fix $n\geq1$ and a unital commutative ring $R$. We write $I_n$ for the $n\times n$ identity matrix.
Let $\mathcal{J}=(J_k)_{k=0}^{n-1}$ be a sequence of ideals in $R$ such that $R=J_0\supseteq J_1\supseteq J_2\supseteq\cdots\supseteq J_{n-1}$ and $J_kJ_l\subseteq J_{k+l}$ for every $k,l\geq0$ with $k+l\leq n-1$. Given a non-negative integer $t$, let $T_{n,t}(\mathcal{J})$ be the set of all $n\times n$ matrices $(a_{ij})$ over $R$ such that \begin{enumerate} \item[(i)] $a_{ij}=0$ for every $1\leq i,j\leq n$ such that $j-i\leq t-1$; \item[(ii)] $a_{ij}\in J_{j-i}$ for every $1\leq i\leq j\leq n$. \end{enumerate}
\begin{rems} \label{rems on T} \rm \begin{enumerate} \item[(1)] $T_{n,0}(\mathcal{J})$ is a unital $R$-algebra with respect to the usual operations. \item[(2)] $T_{n,t}(\mathcal{J})=\{0\}$ for $n\leq t$. \item[(3)] $\mathbb{U}_n(\mathcal{J}):=T_{n,0}(\mathcal{J})\cap\mathbb{U}_n(R)=I_n+T_{n,1}(\mathcal{J})$ is a multiplicative group. \item[(4)] The entries of matrices in $T_{n,t}(\mathcal{J})$ are in $J_t$ when $0\leq t<n$. \item[(5)] $T_{n,t}(\mathcal{J})T_{n,t'}(\mathcal{J})\subseteq T_{n,t+t'}(\mathcal{J})$ for $t,t'\geq 0$. \end{enumerate} \end{rems}
Now let $\theta\colon R_0\to R$ be a homomorphism of unital commutative rings. Let $A$ be a finite set, and let $S$ be the free discrete group on basis $A$.
\begin{lem} \label{sss} Let $\varphi\colon S\to \mathbb{U}_n(\mathcal{J})$ be a group homomorphism. For every sequence $I=(a_1,\ldots, a_t)\in A^*$ let $M_I=\prod_{k=1}^t(\varphi(a_k)-I_n)$
(by convention $M_\emptyset=I_n$). Then: \begin{enumerate} \item[(a)]
$M_I\in T_{n,t}(\mathcal{J})$. \item[(b)] There is a homomorphism of unital rings \[
\hat\varphi\colon R_0\langle\langle X_A\rangle\rangle\to T_{n,0}(\mathcal{J}), \quad
\sum_Ic_IX_I\mapsto\sum_{0\leq|I|<n}\theta(c_I)M_I. \] It satisfies $\hat\varphi(c)=\theta(c)I_n$ for $c\in R_0$, and $\varphi=\hat\varphi\circ\Lambda_{R_0,A}$ on $S$, and is the unique homomorphism with these properties (see diagram (\ref{com square})). \end{enumerate} \end{lem} \begin{proof} (a) \quad The matrices $\varphi(a_1)-I_n,\ldots,\varphi(a_t)-I_n$ are in $T_{n,1}(\mathcal{J})$. By Remark \ref{rems on T}(5), $M_I\in T_{n,1}(\mathcal{J})^t\subseteq T_{n,t}(\mathcal{J})$.
(b)\quad By (a) and Remark \ref{rems on T}(2), $M_I=0$ when $|I|\geq n$. It follows that $\hat\varphi$ is a homomorphism. By definition, $\hat\varphi(c)=\hat\varphi(cX_\emptyset)=\theta(c)M_\emptyset=\theta(c)I_n$, and for every generator $a\in A$ we have \[ \varphi(a)=I_n+M_{(a)}=\hat\varphi(1+X_a)=(\hat\varphi\circ\Lambda_{R_0,A})(a), \] so $\varphi=\hat\varphi\circ\Lambda_{R_0,A}$ on $S$.
Further, if $\hat\varphi'\colon R_0\langle\langle X_A\rangle\rangle\to T_{n,0}(\mathcal{J})$ is another unital ring homomorphism satisfying the above properties, then $\hat\varphi'(c)=\theta(c)I_n=\hat\varphi(c)$ for every $c\in R_0$, and $\hat\varphi'(X_a)=\hat\varphi'(\Lambda_{R_0,A}(a)-1)=\varphi(a)-I_n=\hat\varphi(X_a)$ for every $a\in A$. It follows that $\hat\varphi'=\hat\varphi$. \end{proof}
Next let $L_\mathcal{J}$ be the set of all power series $\sum_Ic_IX_I$ in $R_0\langle\langle X_A\rangle\rangle$ such that $c_\emptyset=1$ and $\theta(c_I)\in \Ann_R(J_t)$ for every sequence $I$ of length $1\leq t<n$. Here $\Ann_R(J_t)$ denotes the annihilator of $J_t$ in $R$.
\begin{lem} \label{lemma 2} $\varphi(\Lambda_{R_0,A}^{-1}(L_\mathcal{J}))=I_n$ for every group homomorphism $\varphi\colon S\to \mathbb{U}_n(\mathcal{J})$. \end{lem} \begin{proof} Let $\varphi$ be a homomorphism as above, and let $M_I\in T_{n,t}(\mathcal{J})$ and $\hat\varphi$ be as in Lemma \ref{sss}. Consider a power series $\sum_Ic_IX_I\in L_\mathcal{J}$. If $I\in A^*$ is a sequence of length $1\leq t<n$, then $\theta(c_I)\in \Ann_R(J_t)$, and the entries of $M_I$ are in $J_t$ (Remark \ref{rems on T}(4)). Hence $\hat\varphi(c_IX_I)=\theta(c_I)M_I=0$. Consequently, $\hat\varphi(\sum_Ic_IX_I)=c_\emptyset M_\emptyset=I_n$.
Thus if $\sigma\in S$ and $\Lambda_{R_0,A}(\sigma)\in L_\mathcal{J}$, then $\varphi(\sigma)=\hat\varphi(\Lambda_{R_0,A}(\sigma))=I_n$. \end{proof}
We define the \textsl{kernel intersection} of groups $H,G$ to be \[
\KerInt(H,G)=\bigcap\{\Ker(\varphi)\ |\ \varphi\colon H\to G \text{\rm \ homomorphism}\}. \]
\begin{thm} \label{theorem qqq} Suppose that $J_t=d^tR$, $t=0,1,\ldots, n-1$, for some $d\in R$. Then \[ \Lambda_{R_0,A}^{-1}(L_\mathcal{J})=\KerInt(S,\mathbb{U}_n(\mathcal{J})). \] \end{thm} \begin{proof} Lemma \ref{lemma 2} gives the inclusion $\subseteq$. For the opposite inclusion let $\sigma\in\KerInt(S,\mathbb{U}_n(\mathcal{J}))$ and write $\Lambda_{R_0,A}(\sigma)=\sum_Ic_IX_I$ (so $c_\emptyset=1$). We need to show that $d^t\theta(c_{I_0})=0$ for every sequence $I_0=(l_1,\ldots, l_t)\in A^*$ of length $1\leq t<n$. We may assume inductively that $d^s\theta(c_I)=0$ for every sequence $I\in A^*$ of length $1\leq s<t$.
Let $E_{ij}$ be the $n\times n$ matrix over $R$ which is $1$ at entry $(i,j)$ and $0$ elsewhere. We recall that $E_{ij}E_{i'j'}$ is $E_{ij'}$, if $i'=j$, and is $0$ otherwise. Hence a product $E_{j_1,j_1+1}\cdots E_{j_s,j_s+1}$, where $1\leq j_1,\ldots, j_s<n$, is non-zero if and only if $j_1,\ldots, j_s$ are consecutive numbers, and in this case it equals $E_{j_1,j_s+1}$.
For every $a\in A$ we define \[ M_a=d\sum_jE_{j,j+1}\in T_{n,1}(\mathcal{J}), \] where the sum is over all $1\leq j\leq t$ such that $l_j=a$. Since $S$ is free, there is a group homomorphism $\varphi\colon S\to\mathbb{U}_n(\mathcal{J})$ such that $\varphi(a)=I_n+M_a$ for every $a\in A$. Let $\hat\varphi\colon R_0\langle\langle X_A\rangle\rangle\to T_{n,0}(\mathcal{J})$ be the ring homomorphism as in Lemma \ref{sss}(b). By the assumption on $\sigma$, \[ 0=\varphi(\sigma)-I_n=\hat\varphi(\Lambda_{R_0,A}(\sigma)-1)=\sum \theta(c_I)M_I, \] where the sum is over all sequences $I=(a_1,\ldots, a_s)\in A^*$ with $1\leq s<n$.
Given such a sequence $I$, the matrix $M_I=M_{a_1}\cdots M_{a_s}$ is the sum of all products $d^sE_{j_1,j_1+1}\cdots E_{j_s,j_s+1}$ such that $1\leq j_1,\ldots, j_s\leq t$ and $l_{j_1}=a_1,\ldots, l_{j_s}=a_s$. We now break the computation into three cases:
\textsl{Case 1:}
$1\leq s<t$. \quad Then the induction hypothesis gives $d^s\theta(c_I)=0$, whence $d^s\theta(c_I)E_{j_1,j_1+1}\cdots E_{j_s,j_s+1}=0$.
\textsl{Case 2:}
$s\geq t$, $(j_1,\ldots, j_s)\neq(1,\ldots, t)$. \quad Since $1\leq j_1,\ldots, j_s\leq t$ this implies that $j_1,\ldots, j_s$ are not consecutive numbers, and therefore $E_{j_1,j_1+1}\cdots E_{j_s,j_s+1}=0$.
\textsl{Case 3:}
$s=t$, $(j_1,\ldots, j_t)=(1,\ldots, t)$. \quad Then $I=I_0$ and \[ d^s\theta(c_I)E_{j_1,j_1+1}\cdots E_{j_s,j_s+1}=d^t\theta(c_{I_0})E_{1,t+1}. \]
Altogether we obtain that \[
0=\sum_{1\leq |I|<n}\theta(c_I)M_I=d^t\theta(c_{I_0})E_{1,t+1}. \] It follows that $d^t\theta(c_{I_0})=0$, as required. \end{proof}
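The matrix-unit computation driving the proof above can be checked numerically; the following is an illustrative verification (not from the paper) of the rule that a product $E_{j_1,j_1+1}\cdots E_{j_s,j_s+1}$ survives exactly when $j_1,\ldots,j_s$ are consecutive.

```python
# Illustration: the product rule for matrix units E_{ij}, checked for n = 4.
n = 4

def E(i, j):
    """Matrix unit with a 1 at entry (i, j), 1-based indices."""
    return [[1 if (r, c) == (i - 1, j - 1) else 0 for c in range(n)]
            for r in range(n)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

ZERO = [[0] * n for _ in range(n)]

print(mul(E(1, 2), E(2, 3)) == E(1, 3))              # consecutive: collapses
print(mul(E(1, 2), E(3, 4)) == ZERO)                 # not consecutive: vanishes
print(mul(mul(E(1, 2), E(2, 3)), E(3, 4)) == E(1, 4))  # E_{1,2}E_{2,3}E_{3,4} = E_{1,4}
```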
\begin{rems} \label{remark on profinite analog} \rm (a) \quad In the previous proof, if $a$ does not appear in $I_0$, then $M_a=0$.
(b) \quad There is also a profinite analog of Theorem \ref{theorem qqq}: Let $\Pi$ be a set of prime numbers and let $\mathcal{C}$ be the formation of all finite $\Pi$-groups. Let $R_0=\hat \mathbb{Z}_\mathcal{C}$, let $R$ be a profinite ring whose additive group is pro-$\mathcal{C}$, and let $\theta\colon R_0\to R$ be a continuous ring homomorphism. We choose $d\in R$ and set $J_t=d^tR$ for $t=0,1,\ldots, n-1$. Then $\mathbb{U}_n(\mathcal{J})$ is a pro-$\mathcal{C}$ group. Also let $S=S_A(\mathcal{C})$ be the free pro-$\mathcal{C}$ group on basis $A$. We note that $\Ann_R(J_t)$ is closed in $R$, so $L_\mathcal{J}$ is closed in the profinite ring $\hat\mathbb{Z}_\mathcal{C}\langle\langle X_A\rangle\rangle$. Lemma \ref{sss}, Lemma \ref{lemma 2}, and Theorem \ref{theorem qqq} and their proofs hold almost without any changes, with homomorphisms understood to be continuous, with $\KerInt(S,\mathbb{U}_n(\mathcal{J}))$ replaced by \[ \begin{split} \KerInt_\mathrm{cont}&(S,\mathbb{U}_n(\mathcal{J})) \\
&=\bigcap\{\Ker(\varphi)\ |\ \varphi\colon S\to\mathbb{U}_n(\mathcal{J}) \hbox{ continuous homomorphism}\}, \end{split} \] and using Remark (a) to see that the map $A\to \mathbb{U}_n(\mathcal{J})$, $a\mapsto I_n+M_a$ converges to $1$. \end{rems}
\section{Examples}
\begin{exam} \label{exam 1}
\rm Suppose that $R=R_0$ and $\theta$ is the identity map. Take $d=1$ in Theorem \ref{theorem qqq}. Then $J_t=R$ and $\Ann_R(J_t)=\{0\}$ for $0\leq t\leq n-1$. Thus $L_\mathcal{J}$ consists of all power series $1+\sum_{|I|\geq n}c_IX_I$ with $c_I\in R$. Moreover, here $\mathbb{U}_n(\mathcal{J})=\mathbb{U}_n(R)$. For a free discrete group $S$ on a finite set $A$ of generators Theorem \ref{theorem qqq} gives \begin{equation} \label{ttt} \Lambda_{R,A}^{-1}(L_\mathcal{J})=\KerInt(S,\mathbb{U}_n(R)). \end{equation} \end{exam}
\begin{exam} \label{exam 2} \rm Take in Example \ref{exam 1} $R=R_0=\mathbb{Z}$. As proved by Magnus \cite{Magnus37}*{Satz III}, Witt \cite{Witt37}, and Gr\"un \cite{Grun36} (see \cite{SerreLie}*{Ch.\ IV, Th.\ 6.3} for a more modern approach),
$\Lambda_{\mathbb{Z},A}^{-1}(L_\mathcal{J})$ is the $n$-th term $S^{(n)}$ in the lower central filtration of $S$. We deduce from (\ref{ttt}) that \[ S^{(n)}=\KerInt(S,\mathbb{U}_n(\mathbb{Z})).
\] This was proved in \cite{Grun36}; see also the modern exposition of Gr\"un's work in \cite{Rohl85}, as well as the related work \cite{Magnus35}*{Satz VI}. We remark that Gr\"un actually works with {\sl lower}-triangular unipotent matrices. \end{exam}
\begin{exam} \label{exam 3} \rm Let $\Pi$ be a set of prime numbers and let $\mathcal{C}=\mathcal{C}(\Pi)$ be the formation of all finite $\Pi$-groups. Let $A$ be a set, and $S=S_A(\mathcal{C})$ the free pro-$\mathcal{C}$ group on basis $A$. We similarly obtain, in view of Remark \ref{remark on profinite analog}, that \[ S^{(n)}=\KerInt_\mathrm{cont}(S,\mathbb{U}_n(\hat\mathbb{Z}_\mathcal{C})).
\] In particular, for a free profinite group $S$ we have $S^{(n)}=\KerInt(S,\mathbb{U}_n(\hat\mathbb{Z}))$, and for a free pro-$p$ group $S$ we have $S^{(n)}=\KerInt(S,\mathbb{U}_n(\mathbb{Z}_p))$. \end{exam}
\begin{exam} \label{exam 4} \rm Let $R=R_0=\mathbb{F}_p$ and $S$ a free discrete group on a finite set $A$ of generators. Then $\Lambda_{\mathbb{F}_p,A}^{-1}(L_\mathcal{J})$ is the $n$-th term $S_{(n,p)}$ in the $p$-Zassenhaus filtration of $S$ (compare \cite{Vogel05}*{Lemma 2.19(ii)}, \cite{Morishita12}*{\S8.3}, \cite{Efrat14}*{Prop.\ 6.2}, and \cite{Jennings41}*{Th.\ 5.5}). We conclude from (\ref{ttt}) that \[ S_{(n,p)}=\KerInt(S,\mathbb{U}_n(\mathbb{F}_p)). \]
Again, the same result holds when $S=S_A(\mathcal{C})$ is a free pro-$\mathcal{C}$ group on a basis $A$, with $\mathcal{C}$ the formation of finite $\Pi$-groups for some set $\Pi$ of prime numbers. In fact, this was proved in \cite{Efrat14}*{Th.\ A'} for any profinite group $S$ with $p$-cohomological dimension $\leq1$, as a special case of a deeper cohomological result, related to Massey products.
In the special case where $n=3$, we note that $\mathbb{U}_3(\mathbb{F}_2)=D_4$, and $\mathbb{U}_3(\mathbb{F}_p)=H_{p^3}$ when $p>2$. Hence the list $\mathcal{L}$ of all subgroups of $\mathbb{U}_3(\mathbb{F}_p)$ consists of $\{1\},\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/4\mathbb{Z},D_4$, when $p=2$, and $\{1\},\mathbb{Z}/p\mathbb{Z},(\mathbb{Z}/p\mathbb{Z})^2,H_{p^3}$, when $p>2$. For a free profinite group $S$ we deduce that $S_{(3,p)}=\KerInt(S,\mathbb{U}_3(\mathbb{F}_p))=S_\mathcal{L}$ (with notation as in the Introduction). Furthermore, when $p>2$ we may replace here $\mathcal{L}$ by $\mathcal{L}'=\{\{1\},\mathbb{Z}/p\mathbb{Z},H_{p^3}\}$. This recovers the results of \cite{MinacSpira96} and \cite{EfratMinac13} mentioned in the Introduction (for $G=S$). \end{exam}
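The identification $\mathbb{U}_3(\mathbb{F}_2)=D_4$ can be verified by direct enumeration; the following illustrative check (not from the paper) confirms that the group has order $8$, is non-abelian, and contains exactly five involutions, which among the groups of order $8$ pins down $D_4$ (the quaternion group $Q_8$ has a unique involution, and abelian groups are excluded).

```python
# Illustration: U_3(F_2) is the dihedral group D_4 of order 8.
from itertools import product

def mul(A, B):
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(3)) % 2 for j in range(3))
        for i in range(3)
    )

I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
U3 = [((1, a, c), (0, 1, b), (0, 0, 1))
      for a, b, c in product((0, 1), repeat=3)]

def order(A):
    k, P = 1, A
    while P != I3:
        k, P = k + 1, mul(P, A)
    return k

print("order of U_3(F_2):", len(U3))                       # 8
print("non-abelian:", any(mul(A, B) != mul(B, A) for A in U3 for B in U3))
print("involutions:", sum(order(A) == 2 for A in U3))      # 5, so D_4 not Q_8
```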
\begin{exam} \label{exam 5} \rm Let $S$ be a \textsl{free pro-$p$ group}. We recall that $\mathbb{U}_n(\mathbb{F}_p)$ is a $p$-Sylow subgroup of $\GL_n(\mathbb{F}_p)$ (see e.g., \cite{RibesZalesskii10}*{Ex.\ 2.3.12}). It follows from Example \ref{exam 4} that \[ S_{(n,p)}=\KerInt(S,\GL_n(\mathbb{F}_p)). \] \end{exam}
\begin{exam} \label{exam 6} \rm Let $p$ be a prime number, $R_0=\mathbb{Z}$, and $R=\mathbb{Z}/p^n\mathbb{Z}$, and $\theta\colon\mathbb{Z}\to\mathbb{Z}/p^n\mathbb{Z}$ the natural epimorphism. We take in Theorem \ref{theorem qqq} $d=p+p^n\mathbb{Z}\in R$, so $J_t=p^t\mathbb{Z}/p^n\mathbb{Z}\subseteq R$ and $\Ann_R(J_t)=p^{n-t}\mathbb{Z}/p^n\mathbb{Z}$ for $0\leq t\leq n-1$. Thus, in the terminology of the Introduction, $\mathbb{U}_n(\mathcal{J})=G(n,p)$, and $L_\mathcal{J}$ consists of all power series $1+\sum_Ic_IX_I$
in $\mathbb{Z}\langle\langle X_A\rangle\rangle$ such that $c_I\in p^{n-|I|}\mathbb{Z}$ for $1\leq|I|<n$, and $c_I\in\mathbb{Z}$ for $|I|\geq n$.
Let $S$ be a free discrete group on a finite set $A$ of generators. Let $D$ be the two-sided ideal in $\mathbb{Z}\langle\langle X_A\rangle\rangle$ generated by $X_a$, $a\in A$. Then $L_\mathcal{J}=1+\sum_{1\leq t\leq n}p^{n-t}D^t$, where $D^t$ is the ideal of all sums of products of $t$ elements of $D$. We observe that $L_\mathcal{J}=1+(p\mathbb{Z}+D)^n$. By a theorem of Koch \cite{Koch60}, $\Lambda_{\mathbb{Z},A}^{-1}(1+(p\mathbb{Z}+D)^n)$ is the $n$-th term $S^{(n,p)}$ in the lower $p$-central series of $S$. Combining this with Theorem \ref{theorem qqq}, we obtain the Theorem from the Introduction: \[ S^{(n,p)}=\KerInt(S,G(n,p)). \] A similar result holds for a free pro-$\mathcal{C}$ group $S=S_A(\mathcal{C})$, with $\mathcal{C}$ the formation of all finite $\Pi$-groups for a set $\Pi$ of prime numbers (of course, here $S^{(n,p)}$ is in the profinite sense): \[ S^{(n,p)}=\KerInt_\mathrm{cont}(S,G(n,p)). \] \end{exam}
We list some consequences of these examples (the first three facts seem to be well-known):
\begin{cor} \label{triviality of subgroups} \begin{enumerate} \item[(a)] In the discrete setting, $\mathbb{U}_n(\mathbb{Z})^{(n)}=1$. \item[(b)] In the profinite setting, $\mathbb{U}_n(\hat\mathbb{Z}_\mathcal{C})^{(n)}=1$. \item[(c)] $\mathbb{U}_n(\mathbb{F}_p)_{(n,p)}=1$. \item[(d)] $G(n,p)^{(n,p)}=1$. \end{enumerate} \end{cor} \begin{proof} We prove (d). Take a free group $S$ on sufficiently many generators and an epimorphism $\varphi\colon S\to G(n,p)$. By the definition of the lower $p$-central filtration, it maps $S^{(n,p)}$ onto $G(n,p)^{(n,p)}$. But the Theorem implies that $\varphi$ is trivial on $S^{(n,p)}$.
(a)--(c) are proved similarly, taking $S$ to be a free group in the relevant context, and using Examples \ref{exam 2}, \ref{exam 3}, and \ref{exam 4}, respectively. \end{proof}
\begin{bibdiv} \begin{biblist}
\bib{DixonDuSautoyMannSegal99}{book}{ title={Analytic Pro-$p$ Groups}, author={Dixon, J.D.}, author={du Sautoy, Marcus}, author={Mann, Avinoam}, author={Segal, D.}, publisher={Cambridge University Press}, series={Cambridge Stud. Adv. Math.}, volume={61}, date={1999}, label={DDMS} }
\bib{Efrat14}{article}{ author={Efrat, Ido}, title={The Zassenhaus filtration, Massey products, and representations of profinite groups}, journal={Adv.\ Math.}, volume={263}, date={2014}, pages={389\ndash411}, }
\bib{EfratMinac11}{article}{ author={Efrat, Ido}, author={Min\' a\v c, J\'an}, title={On the descending central sequence of absolute Galois groups}, journal={Amer.\ J.\ Math.}, volume={133}, date={2011}, pages={1503\ndash1532}, }
\bib{EfratMinac13}{article}{ author={Efrat, Ido}, author={Min\' a\v c, J\'an}, title={Galois groups and cohomological functors}, date={2011}, status={to appear}, eprint={arXiv:1103.1508v2}, }
\bib{FriedJarden08}{book}{
author={Fried, Michael D.},
author={Jarden, Moshe},
title={Field arithmetic},
edition={3},
publisher={Springer},
place={Berlin},
date={2008},
pages={xxiv+792}, }
\bib{Grun36}{article}{ author={Gr\"un, Otto}, title={\"Uber eine Faktorgruppe freier Gruppen I}, journal={Deutsche Mathematik}, volume={1}, date={1936}, pages={772\ndash 782},}
\bib{Jennings41}{article}{
author={Jennings, S. A.},
title={The structure of the group ring of a $p$-group over a modular field},
journal={Trans. Amer. Math. Soc.},
volume={50},
date={1941},
pages={175--185}, }
\bib{Koch60}{article}{
author={Koch, H.},
title={\"Uber die Faktorgruppen einer absteigenden Zentralreihe},
journal={Math.\ Nachr.},
volume={22},
date={1960},
pages={159\ndash161}, }
\bib{Koch02}{book}{
author={Koch, Helmut},
title={Galois Theory of $p$-Extensions},
publisher={Springer-Verlag},
place={Berlin},
date={2002},
pages={xiv+190}, }
\bib{Magnus35}{article}{ author={Magnus, Wilhelm}, title={Beziehungen zwischen Gruppen und Idealen in einem speziellen Ring}, journal={Math.\ Ann.}, volume={111}, date={1935}, pages={259\ndash280}, }
\bib{Magnus37}{article}{ author={Magnus, Wilhelm}, title={\"Uber Beziehungen zwischen h\"oheren Kommutatoren}, journal={J.\ reine angew. Math.}, volume={177}, date={1937}, pages={105\ndash115}, }
\bib{MinacSpira96}{article}{ author={Min{\'a}{\v {c}}, J{\'a}n}, author={Spira, Michel}, title={Witt rings and Galois groups}, journal={Ann. Math.}, volume={144}, date={1996}, pages={35\ndash60}, }
\bib{MinacTan2}{article}{ author={Min{\'a}{\v {c}}, J{\'a}n}, author={Tan, Nguyen Duy}, title={The Kernel Unipotent Conjecture and the vanishing of Massey products for odd rigid fields {\rm (with an appendix by I.\ Efrat, J.\ Min\'a\v c and N.D.\ T\^an)}}, date={2013}, eprint={arXiv:1312.2655}, }
\bib{Morishita12}{book}{
author={Morishita, Masanori},
title={Knots and Primes},
series={Universitext},
publisher={Springer},
place={London},
date={2012},
pages={xii+191}, }
\bib{NeukirchSchmidtWingberg}{book}{
author={Neukirch, J{\"u}rgen},
author={Schmidt, Alexander},
author={Wingberg, Kay},
title={Cohomology of Number Fields, Second edition},
edition={2},
publisher={Springer},
place={Berlin},
date={2008}, }
\bib{RibesZalesskii10}{book}{ author={Ribes, Luis}, author={Zalesskii, Pavel}, title={Profinite Groups}, edition={2}, publisher={Springer}, date={2010}, }
\bib{Rohl85}{article}{
author={R\"ohl, Frank},
title={Review and some critical comments on a paper of Gr\"un concerning the dimension subgroup conjecture},
journal={Bol.\ Soc.\ Braz.\ Mat.},
volume={16},
date={1985},
pages={11\ndash27}, }
\bib{SerreLie}{book}{ author={Serre, Jean-Pierre}, title={Lie Algebras and Lie groups}, series={Lect.\ Notes Math.}, volume={1500}, publisher={Springer}, place={Berlin--Heidelberg}, date={1992}, }
\bib{SerreCG}{book}{
author={Serre, Jean-Pierre},
title={Galois Cohomology},
series={Springer Monographs in Mathematics},
publisher={Springer},
place={Berlin},
date={2002},
pages={x+210}, }
\bib{Vogel05}{article}{ author={Vogel, Denis}, title={On the Galois group of $2$-extensions with restricted ramification}, journal={J.\ reine angew.\ Math.}, volume={581}, date={2005}, pages={117\ndash150}, }
\bib{Witt37}{article}{
author={Witt, Ernst},
title={Treue Darstellungen beliebiger Liescher Ringe},
journal={J.\ reine angew.\ Math.},
volume={177},
date={1937},
pages={152--160}, }
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\title{On the Approximability and Hardness of the Minimum Connected Dominating Set with Routing Cost Constraint} \author{Tung-Wei Kuo
\thanks{E-mail: \texttt{twkuo@cs.nccu.edu.tw}}
} \affil{Department of Computer Science, National Chengchi University}
\date{Dated: \today} \maketitle
\begin{abstract} In the problem of minimum connected dominating set with routing cost constraint, we are given a graph $G=(V,E)$, and the goal is to find the smallest connected dominating set $D$ of $G$ such that, for any two non-adjacent vertices $u$ and $v$ in $G$, the number of internal nodes on the shortest path between $u$ and $v$ in the subgraph of $G$ induced by $D \cup \{u,v\}$ is at most $\alpha$ times that in $G$. For general graphs, the only known previous approximability result
is an $O(\log n)$-approximation algorithm ($n=|V|$) for $\alpha = 1$ by Ding \textit{et al.} For any constant $\alpha > 1$, we give an $O(n^{1-\frac{1}{\alpha}}(\log n)^{\frac{1}{\alpha}})$-approximation algorithm. When $\alpha \geq 5$, we give an $O(\sqrt{n}\log n)$-approximation algorithm. Finally, we prove that, when $\alpha =2$, unless $NP \subseteq DTIME(n^{poly\log n})$, for any constant $\epsilon > 0$, the problem admits no polynomial-time $2^{\log^{1-\epsilon}n}$-approximation algorithm, improving upon the $\Omega(\log n)$ bound by Du \textit{et al.} (albeit under a stronger hardness assumption). \end{abstract} \keywords{Connected dominating set, spanner, set cover with pairs, MIN-REP problem}
\section{Introduction} \subsection{Motivation} In wireless network routing, a common approach is to select a set of nodes as the \textit{virtual backbone}. The virtual backbone is responsible for relaying packets. Specifically, when a node $s$ generates a packet destined for $d$, the packet is routed along the path $(s, v_1, v_2, \cdots, v_k, d)$, where every internal node $v_i, 1 \leq i \leq k,$ belongs to the virtual backbone. To realize this idea, we can model the wireless network as a graph $G=(V,E)$, where $V$ is the set of nodes in the wireless network, and $(u,v) \in E$ if and only if $u$ and $v$ can communicate with each other directly. Thus, a connected dominating set of $G$ is a virtual backbone for the wireless network.\footnote{A set $D \subseteq V$ is a \textit{dominating set} of $G=(V,E)$ if every vertex in $V \setminus D$ is adjacent to $D$. Furthermore, if $D$ induces a connected subgraph of $G$, then $D$ is called a \textit{connected dominating set} of $G$.} One of the concerns in constructing the virtual backbone is the routing cost. Specifically, the routing cost of sending a packet from the source $s$ to the destination $d$ is the number of internal nodes (relays) in the routing path from $s$ to $d$. For example, the routing cost is $k$ if the routing path is $(s, v_1, v_2, \cdots, v_k, d)$. The routing cost should not be too high even if packets are only allowed to be routed through the virtual backbone.
\subsection{Problem Definition} Let $G[S]$ be the subgraph of $G=(V,E)$ induced by $S \subseteq V$. Let $m_G(u,v)$ be the number of internal vertices on the shortest path between $u$ and $v$ in $G$. For example, if $u$ and $v$ are adjacent, then $m_G(u,v) = 0$. If $u$ and $v$ are not adjacent and have a common neighbor, then $m_G(u,v) = 1$. Furthermore, given a vertex subset $D$ of $G$, $m^D_G(u,v)$ is defined as $m_{G[D \cup \{u,v\}]}(u,v)$, i.e., the number of internal vertices on the shortest path between $u$ and $v$ through $D$. We use $n(G)$ to denote the number of vertices in graph $G$. When the graph we are referring to is clear from the context, we simply write $n$, $m(u,v)$, and $m^D(u,v)$ instead of $n(G)$, $m_G(u,v)$, and $m^D_G(u,v)$, respectively.
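As a concrete illustration (not part of the original formulation), $m_G(u,v)$ and $m^D_G(u,v)$ can be computed by breadth-first search; the sketch below assumes graphs are given as adjacency dicts mapping each vertex to the set of its neighbors.

```python
from collections import deque

def m(adj, u, v):
    """Number of internal vertices on a shortest u-v path in the graph
    `adj` (vertex -> set of neighbors); None if v is unreachable from u."""
    if v in adj[u]:
        return 0
    dist = {u: 0}
    queue = deque([u])
    while queue:
        w = queue.popleft()
        for x in adj[w]:
            if x not in dist:
                dist[x] = dist[w] + 1
                if x == v:                 # internal vertices = #edges - 1
                    return dist[x] - 1
                queue.append(x)
    return None

def m_D(adj, D, u, v):
    """m^D(u, v): shortest u-v path whose internal vertices lie in D."""
    allowed = set(D) | {u, v}
    sub = {w: {x for x in adj[w] if x in allowed}
           for w in allowed if w in adj}
    return m(sub, u, v)
```

For adjacent $u$ and $v$ this returns $0$, and for non-adjacent vertices with a common neighbor it returns $1$, matching the examples above.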
\begin{definition} Given a connected graph $G$ and a positive integer $\alpha$, the \textbf{Connected Dominating set problem with Routing cost constraint} (CDR-$\alpha$) asks for the smallest connected dominating set $D$ of $G$, such that, for every two vertices $u$ and $v$, if $u$ and $v$ are not adjacent in $G$, then $m^D(u,v) \leq \alpha \cdot m(u,v)$. \end{definition}
\subsection{Preliminary} \subsubsection{An Equivalent Problem} In the CDR-$\alpha$ problem, we need to consider all the pairs of non-adjacent nodes. Ding \textit{et al.} discovered that to solve the CDR-$\alpha$ problem, it suffices to consider only vertex pairs $(u,v)$ such that $m(u,v)= 1$, i.e., $u$ and $v$ are not adjacent but have a common neighbor~\cite{5703070}. We call the corresponding problem the 1-DR-$\alpha$ problem.
\begin{definition} Given a connected graph $G = (V,E)$ and a positive integer $\alpha$, the 1-DR-$\alpha$ problem asks for the smallest dominating set $D$ of $G$, such that, for every two vertices $u$ and $v$, if $m(u,v) = 1$, then $m^D(u,v) \leq \alpha$. \end{definition} We say that $u$ and $v$ form a \textbf{target couple}, denoted by $[u,v]$, if $m(u,v) = 1$. We say that a set $S$ \textbf{covers} a target couple $[u,v]$ if $m^S(u,v) \leq \alpha$. Hence, the 1-DR-$\alpha$ problem asks for the smallest dominating set that covers all the target couples. Notice that any feasible solution of the 1-DR-$\alpha$ problem must induce a connected subgraph of $G$. The equivalence between the CDR-$\alpha$ problem and the 1-DR-$\alpha$ problem is stated in the following theorem.
\begin{theorem}[Ding \textit{et al.} \cite{5703070}] $D$ is a feasible solution of the CDR-$\alpha$ problem with input graph $G$ if and only if $D$ is a feasible solution of the 1-DR-$\alpha$ problem with input graph $G$. \end{theorem}
\begin{corollary} Any $r$-approximation algorithm of the 1-DR-$\alpha$ problem is an $r$-approximation algorithm of the CDR-$\alpha$ problem. \end{corollary} In this paper, we thus focus on the 1-DR-$\alpha$ problem.
\subsubsection{Feasibility of the 1-DR-$\alpha$ Problem for $\alpha \geq 5$} Next, we give the basic idea of finding a feasible solution of the 1-DR-$\alpha$ problem for $\alpha \geq 5$ used in previous research, e.g., in~\cite{Du2010}. One of our algorithms still uses this idea. First, find a dominating set $D$. Thus, for any target couple $[u,v]$, there exist $u^d$ and $v^d$ in $D$, such that $u^d$ and $v^d$ dominate $u$ and $v$, respectively.\footnote{$u^d$ dominates $u$ if $u^d = u$ or $u^d$ and $u$ are adjacent.} Let $D' = D$. For any two vertices $u'$ and $v'$ in $D$, if $m(u',v') \leq 3$, then we add the $m(u',v')$ internal vertices of the shortest path between $u'$ and $v'$ in $G$ to $D'$. Observe that $m(u^d,v^d) \leq 3$, since $u^d$, $u$, a common neighbor of $u$ and $v$, $v$, and $v^d$ form a walk from $u^d$ to $v^d$ with at most three internal vertices. Hence, $m^{D'}(u,v) \leq 5$ and $D'$ is a feasible solution of the 1-DR-$\alpha$ problem for $\alpha \geq 5$.
\begin{lemma} \label{idea} Let $D$ be a dominating set of $G$. Let $D' \supseteq D$ be a vertex subset of $G$ such that, for any two vertices $u'$ and $v'$ in $D$, if $m(u',v') \leq 3$, then $m^{D'}(u',v') \leq 3$. Then, $D'$ is a feasible solution of the 1-DR-$\alpha$ problem with input $G$ and $\alpha \geq 5$. \end{lemma}
\subsection{Previous Results} \paragraph*{Previous results on general graphs} When $\alpha = 1$, the 1-DR-$\alpha$ problem can be transformed into the set cover problem, i.e., cover all the vertices (to form a dominating set) and cover all the target couples. Observe that each target couple can be covered by a single vertex. The resulting approximation ratio is $O(\log n)$~\cite{5703070}. When $\alpha$ is sufficiently large, e.g., $\alpha \geq n$, any connected dominating set is feasible for the CDR-$\alpha$ problem. Note that, for any $\alpha$, the size of the minimum connected dominating set is a lower bound on the optimum of the CDR-$\alpha$ problem. Since the connected dominating set can be approximated within a factor of $O(\log n)$~\cite{Guha1998, RUAN2004325}, the CDR-$n$ problem can be approximated within a factor of $O(\log n)$. If $\alpha$ falls between these two extremes, e.g., $\alpha = 2$, the only known previous result is the trivial $O(n)$-approximation algorithm. On the hardness side, it has been proved that, unless $NP \subseteq DTIME(n^{\log \log n})$, there is no polynomial-time algorithm that can approximate the CDR-$\alpha$ problem within a factor of $\rho \ln \delta$ for any constant $\rho<1$, both for $\alpha = 1$~\cite{5703070} and for $\alpha \geq 2$~\cite{Du2013, 6216366}, where $\delta$ is the maximum degree of $G$.
\begin{open}[Du and Wan~\cite{Du2013}] Is there a polynomial-time $O(\log n)$-approximation algorithm for the CDR-$\alpha$ problem for $\alpha \geq 2$? \end{open}
\paragraph*{Previous results on Unit Disk Graph (UDG)} Most of the studies on the CDR-$\alpha$ problem focused on UDG~\cite{5703070, Du2013, 6216366, Du2010, 7524455}. UDG exhibits many nice properties that enable constant-factor approximation algorithms (or PTASs) for many problems where only $O(\log n)$-approximation algorithms (or worse) are known on general graphs, e.g., the minimum (connected) dominating set problem and the maximum independent set problem~\cite{NET:NET10097, DAS2015439, Nieberg2006}. All the previous research on the CDR-$\alpha$ problem on UDG leveraged constant bounds on the size of the maximum independent set or the minimum dominating set. However, these works only solved the case where $\alpha \geq 5$ (by Lemma~\ref{idea}), and the best result so far is a PTAS by Du~\textit{et al.}~\cite{Du2010}. When $1 < \alpha < 5$, the only known previous result is the trivial $O(n)$-approximation algorithm.
\subsection{Our Result and Basic Ideas} In this paper, we first give an approximation algorithm of the 1-DR-$\alpha$ problem on general graphs for any constant $\alpha > 1$. A critical observation is that the 1-DR-$2$ problem is a special case of the Set Cover with Pairs (SCP) problem~\cite{Hassin2005}. Hassin and Segev proposed an $O(\sqrt{t\log t})$-approximation algorithm for the SCP problem, where $t$ is the number of targets to be covered. However, since there are $O(n^2)$ target couples to be covered, directly applying the $O(\sqrt{t\log t})$-approximation bound yields a trivial upper bound for the 1-DR-2 problem. We re-examine the analysis in~\cite{Hassin2005} and find that, when applying the algorithm to the 1-DR-2 problem, the approximation ratio can also be expressed as $O(\sqrt{n\log n})$. Nevertheless, in this paper, we give a slightly simplified algorithm with an easier analysis for the SCP problem. The algorithm and analysis also make it easy to solve the generalized SCP problem. We obtain the following result, which is the first non-trivial result of the CDR-$\alpha$ problem for $\alpha > 1$ on general graphs and for $1 < \alpha < 5$ on UDG. \begin{theorem}\label{thrm: 1st} For any constant $\alpha > 1$, there is an $O(n^{1-\frac{1}{\alpha}}(\log n)^{\frac{1}{\alpha}})$-approximation algorithm for the 1-DR-$\alpha$ problem. \end{theorem}
The above performance guarantee, however, deteriorates quickly as $\alpha$ increases. In our second algorithm, we apply the aforementioned idea of finding a feasible solution when $\alpha \geq 5$, i.e., Lemma~\ref{idea}. We have the following result. \begin{theorem}\label{thrm: 2nd} When $\alpha \geq 5$, there is an $O(\sqrt{n}\log n)$-approximation algorithm for the 1-DR-$\alpha$ problem. \end{theorem}
Finally, we answer Open Question 1 negatively. We improve upon the $\Omega(\log n)$ hardness result for the 1-DR-2 problem (albeit under a stronger hardness assumption)~\cite{Du2013, 6216366}. In this paper, we give a reduction from the MIN-REP problem~\cite{Kortsarz2001}.
\begin{theorem} \label{thrm: inapprox} Unless $NP \subseteq DTIME(n^{poly\log n})$, for any constant $\epsilon > 0$, the 1-DR-2 problem admits no polynomial-time $2^{\log^{1-\epsilon}n}$-approximation algorithm, even if the graph is triangle-free\footnote{If the graph is triangle-free, then any two vertices with a common neighbor form a target couple.} or the constraint that the feasible solution must be a dominating set is ignored\footnote{One may drop the constraint that the solution must be a dominating set, and focuses on minimizing the number of vertices to cover all the target couples. This theorem also applies to such a problem.}. \end{theorem}
\subsection{Relation with the Basic $k$-Spanner Problem} When we ignore the constraint that any feasible solution must be a connected dominating set, the CDR-$\alpha$ problem is similar to the basic $k$-spanner problem. For completeness, we give the formal definition of the basic $k$-spanner problem. Given a graph $G = (V,E)$, a $k$-spanner of $G$ is a subgraph $H$ of $G$ such that $d_H(u,v) \leq kd_G(u,v)$ for all $u$ and $v$ in $V$, where $d_G(u,v)$ is the number of edges in the shortest path between $u$ and $v$ in $G$. The basic $k$-spanner problem asks for the $k$-spanner that has the fewest edges. The CDR-$\alpha$ problem differs from the basic $k$-spanner problem in the following three aspects. First, in the CDR-$\alpha$ problem, we find a set of vertices $D$, and all the edges in the subgraph induced by $D$ can be used for routing; while in the basic $k$-spanner problem, only edges in $H$ can be used. Second, in the CDR-$\alpha$ problem, the objective is to minimize the number of chosen vertices; while in the basic $k$-spanner problem, the objective is to minimize the number of chosen edges. Finally, in the basic $k$-spanner problem, the distance is measured by the number of edges; while in the CDR-$\alpha$ problem, the distance is measured by the number of internal nodes. Despite the above differences, these two problems share similar approximability and hardness results. Alth\"{o}fer \textit{et al.} proved that every graph has a $k$-spanner of at most $n^{1+\frac{1}{\lfloor (k+1)/2\rfloor}}$ edges, and such a $k$-spanner can be constructed in polynomial time~\cite{Althofer1993, Dinitz:2016:ALS:2884435.2884494}. Since the number of edges in any $k$-spanner is at least $n-1$, this yields an $O(n^{\frac{1}{\lfloor (k+1)/2\rfloor}})$-approximation algorithm for the basic $k$-spanner problem. For $k = 2$, there is an $O(\log n)$-approximation algorithm due to Kortsarz and Peleg~\cite{KORTSARZ1994222}, and this is the best possible~\cite{Kortsarz2001}.
For $k = 3$, Berman \textit{et al.} proposed an $\tilde{O}(n^{1/3})$-approximation algorithm~\cite{BERMAN201393}. For $k = 4$, Dinitz and Zhang proposed an $\tilde{O}(n^{1/3})$-approximation algorithm~\cite{Dinitz:2016:ALS:2884435.2884494}. On the hardness side, it has been proved that for any constant $\epsilon > 0$ and for $3 \leq k \leq \log^{1-2\epsilon}n$, unless $NP \subseteq BPTIME(2^{poly\log n})$, there is no polynomial-time algorithm that approximates the basic $k$-spanner problem to a factor better than $2^{(\log^{1-\epsilon}n)/k}$~\cite{Dinitz:2015:LCI:2846106.2818375}.
\section{Two Algorithms for the 1-DR-$\alpha$ Problem} \subsection{The First Algorithm} We first give the formal definition of the Set Cover with Pairs (SCP) problem.
\begin{definition} Let $T$ be a set of $t$ targets. Let $V$ be a set of $n$ elements. For every pair of elements $P=\{v_1, v_2\} \subseteq V$, $C(P)$ denotes the set of targets covered by $P$. The Set Cover with Pairs (SCP) problem asks for the smallest subset $S$ of $V$ such that $\bigcup\limits_{\{v_1, v_2\} \subseteq S}{C(\{v_1, v_2\})}=T$. \end{definition} Let $OPT$ be the number of elements in the optimal solution. We only need to consider the case where $t > 1$ and $OPT > 1$. \subsubsection{Approximating the SCP Problem} \label{alg: SCP} Our algorithm is a simple greedy algorithm: in each round, we choose at most two elements $u$ and $v$ that maximize the number of covered targets. Specifically, $S$ is an empty set initially. In each round, we select a set $P \subseteq V\setminus S$
such that $|P| \leq 2$ and $P$ increases the number of covered targets the most, i.e.,
$P=\argmax\limits_{P': |P'| \leq 2, P' \subseteq V \setminus S} {g(P')}$, where
$$g(P') = |\bigcup_{\{v_1,v_2\} \subseteq S \cup P'}{C(\{v_1,v_2\})}|
-|\bigcup_{\{v_1,v_2\} \subseteq S}{C(\{v_1,v_2\})}|.$$ We then add $P$ to $S$ and repeat the above process until all the targets are covered.\footnote{In~\cite{Hassin2005},
in each round, a set $P=\argmax\limits_{P': |P'| \leq 2, P' \subseteq V \setminus S}
{g'(P')}$ is added to $S$, where $g'(P') = \frac{g(P')}{|P'|}$.}
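A minimal sketch of this greedy rule follows. The `covers` callback returning $C(\{u,v\})$ is an assumed interface (any representation of the coverage relation works), and it is assumed symmetric in its arguments.

```python
from itertools import chain, combinations

def greedy_scp(V, targets, covers):
    """Greedy Set Cover with Pairs: each round adds the set P with
    |P| <= 2 that maximizes the number of newly covered targets.
    `covers(u, v)` returns the target set C({u, v})."""
    S, covered, targets = set(), set(), set(targets)

    def newly_covered(P):
        new = set()
        for u, v in combinations(S | set(P), 2):
            new |= covers(u, v)
        return len(new - covered)

    while covered < targets:
        rest = [v for v in V if v not in S]
        candidates = chain(((v,) for v in rest), combinations(rest, 2))
        best = max(candidates, key=newly_covered, default=None)
        if best is None or newly_covered(best) == 0:
            break                        # no further progress possible
        S |= set(best)
        covered = {t for u, v in combinations(S, 2) for t in covers(u, v)}
    return S
```

Note that a single new element can pay off only through pairs it forms with elements already in $S$, which is exactly the gain $g(P')$ defined above.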
\begin{theorem} \label{SCP} The above algorithm is an $O(\sqrt{n\log t})$-approximation algorithm for the SCP problem. \end{theorem}
\begin{proof} Let $R_i$ be the number of uncovered targets after round $i$. In the first round, some pair of elements in the optimal solution can cover at least $t/{{OPT}\choose{2}}$ targets. Since we choose a pair of elements greedily in each round, $R_1 \leq t(1-1/{{OPT}\choose{2}})$. In the second round, there exists a pair of elements in the optimal solution that can cover at least $R_1/{{OPT}\choose{2}}$ targets among the $R_1$ uncovered targets. Again, we choose the pair of elements that increases the number of covered targets the most. Hence, $R_2 \leq R_1 - R_1/{{OPT}\choose{2}}\leq t(1-1/{{OPT}\choose{2}})^2$. In general, $R_i \leq t(1-1/{{OPT}\choose{2}})^i$. After $r = {{OPT}\choose{2}}\ln t$ rounds, the number of uncovered targets is at most $t(1-1/{{OPT}\choose{2}})^r \leq t(e^{-1/{{OPT}\choose{2}}})^r \leq te^{-\ln t} = 1$. Hence, after $O(OPT^2\ln t)$ rounds, all targets are covered. Let $ALG$ be the number of elements chosen by the algorithm. Since we choose at most two elements in each round, $ALG = O(OPT^2\ln t)$. Finally, since $ALG \leq n$, $ALG = O(\sqrt{n \cdot OPT^2\ln t}) = O(\sqrt{n\ln t})OPT$. \end{proof}
Note that, in Theorem~\ref{SCP}, we can replace $n$ with any upper bound on the size of the solution produced by some polynomial-time algorithm $\mathcal{A}$ for the SCP problem: we run both $\mathcal{A}$ and our algorithm, and output the smaller of the two solutions. For example, $n$ can be replaced with $2t$, since picking one covering pair per target yields a feasible solution with at most $2t$ elements.
\subsubsection{Approximating the 1-DR-2 Problem} \label{alg: 1-DR-2} To transform the 1-DR-2 problem to the SCP problem, we treat each target couple as a target. Moreover, we treat each vertex as a target so that the output is a dominating set. The set of elements $V$ in the SCP problem is the vertex set of $G$. $C(P)$ consists of all the vertices that are dominated by $P$ in $G$ and all the target couples that are covered by $P$ in $G$. In this SCP instance, $n=n(G)$ and $t=O(n(G)^2)$. It is easy to verify the following result.
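One way to realize this transformation is sketched below, under the same adjacency-dict convention as before; the tagged targets `('dom', v)` and `('couple', (u, v))` are illustrative names, not from the paper. A pair $\{a,b\}$ covers a couple $[u,v]$ exactly when there is a path $u$-$a$-$v$, $u$-$b$-$v$, $u$-$a$-$b$-$v$, or $u$-$b$-$a$-$v$ in $G$.

```python
from itertools import combinations

def one_dr2_to_scp(adj):
    """Build an SCP instance from a 1-DR-2 instance `adj` (vertex ->
    set of neighbors).  Targets: every vertex (to force a dominating
    set) and every target couple [u, v] with m(u, v) = 1."""
    V = list(adj)
    couples = [(u, v) for u, v in combinations(V, 2)
               if v not in adj[u] and adj[u] & adj[v]]
    targets = [('dom', v) for v in V] + [('couple', c) for c in couples]

    def covers(a, b):
        """C({a, b}): vertices dominated by a or b, plus couples (u, v)
        connected through a, through b, or through the edge (a, b)."""
        out = {('dom', w) for w in adj[a] | adj[b] | {a, b}}
        for u, v in couples:
            one_hop = ((a in adj[u] and a in adj[v]) or
                       (b in adj[u] and b in adj[v]))
            two_hop = b in adj[a] and (
                (a in adj[u] and b in adj[v]) or
                (b in adj[u] and a in adj[v]))
            if one_hop or two_hop:
                out.add(('couple', (u, v)))
        return out

    return targets, covers
```

The resulting instance has $n = n(G)$ elements and $t = O(n(G)^2)$ targets, as noted above.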
\begin{theorem} There is an $O(\sqrt{n\log n})$-approximation algorithm for the 1-DR-2 problem. \end{theorem}
\subsubsection{The Set Cover with $\alpha$-Tuples (SCT-$\alpha$) Problem} \label{defi: SCT} In the 1-DR-2 problem, every target couple can be covered by no more than two vertices. In the 1-DR-$\alpha$ problem, every target couple can be covered by no more than $\alpha$ vertices. Hence, we consider the following generalization of the SCP problem. \begin{definition} Let $T$ be a set of $t$ targets. Let $V$ be a set of $n$ elements. Let $\alpha$ be a positive integer constant greater than one. For every $\alpha$-tuple $P=\{v_1, v_2, \cdots, v_{\alpha}\}\subseteq V$, $C(P)$ denotes the set of targets covered by $P$. The Set Cover with $\alpha$-Tuples (SCT-$\alpha$) problem asks for the smallest subset $S$ of $V$ such that $\bigcup\limits_{\{v_1, v_2, \cdots, v_{\alpha}\} \subseteq S} {C(\{v_1, v_2, \cdots, v_{\alpha}\})}=T$. \end{definition} We only need to consider the case where $t > 1$ and $OPT \geq \alpha$ ($\alpha$ is a constant). \subsubsection{Approximating the SCT-$\alpha$ Problem and the 1-DR-$\alpha$ Problem} \label{algo: SCT} The algorithm for the SCT-$\alpha$ problem is a straightforward generalization of the algorithm for the SCP problem. The difference is that, in each round, we choose a set $P$ of at most $\alpha$ elements that increases the number of covered targets the most. The transformation from the 1-DR-$\alpha$ problem to the SCT-$\alpha$ problem is also similar to the previous transformation. The value of $\alpha$ in the constructed SCT-$\alpha$ instance is equal to that in the 1-DR-$\alpha$ instance. Again, $n = n(G)$ and $t = O(n(G)^2)$ in the constructed SCT-$\alpha$ instance. Theorem~\ref{thrm: 1st} is a direct result of the following theorem. \begin{theorem} \label{thrm: SCT} There is an $O(n^{1-\frac{1}{\alpha}} \cdot (\ln t)^{\frac{1}{\alpha}})$-approximation algorithm for the SCT-$\alpha$ problem. \end{theorem}
We have the following claim, whose proof can be found in the appendix. \begin{myclaim} \label{c} When $c = \frac{1}{\alpha}-\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}$, $n^{1-c}=\sqrt{n \cdot \alpha(n^c)^{\alpha-2}\ln t} = n^{1-\frac{1}{\alpha}} \cdot (\alpha \ln t)^{\frac{1}{\alpha}}$. \end{myclaim}
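The identity can also be checked numerically; the script below is purely illustrative and uses $\ln \ln (t^{\alpha}) = \ln(\alpha \ln t)$.

```python
import math

def claim_holds(n, t, alpha, tol=1e-6):
    """Check n^(1-c) = sqrt(n * alpha * (n^c)^(alpha-2) * ln t)
                     = n^(1-1/alpha) * (alpha * ln t)^(1/alpha)
    for c = 1/alpha - ln(ln(t^alpha)) / (alpha * ln n)."""
    c = 1 / alpha - math.log(alpha * math.log(t)) / (alpha * math.log(n))
    lhs = n ** (1 - c)
    mid = math.sqrt(n * alpha * (n ** c) ** (alpha - 2) * math.log(t))
    rhs = n ** (1 - 1 / alpha) * (alpha * math.log(t)) ** (1 / alpha)
    return (math.isclose(lhs, mid, rel_tol=tol)
            and math.isclose(lhs, rhs, rel_tol=tol))
```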
\textbf{Proof of Theorem~\ref{thrm: SCT}:} Let $R_i$ be the number of uncovered targets after round $i$. By an argument similar to the one in the proof of Theorem~\ref{SCP}, we get that $R_i \leq t(1-1/{{OPT}\choose{\alpha}})^i$. After $r = {{OPT}\choose{\alpha}}\ln t$ rounds, the number of uncovered targets is at most one. Hence, after $O(OPT^{\alpha}\ln t)$ rounds, all targets are covered. Let $ALG$ be the number of elements chosen by the algorithm. Since we choose at most $\alpha$ elements in each round, $ALG = O(\alpha OPT^{\alpha}\ln t)$. Since $ALG \leq n$, $ALG = O(\sqrt{n \cdot \alpha OPT^{\alpha}\ln t})$.
Let $c = \frac{1}{\alpha}-\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}$. When $OPT \geq n^c$, the approximation ratio is $n^{1-c}$. When $OPT \leq n^c$, $ALG = O(\sqrt{n \cdot \alpha OPT^{\alpha-2}\ln t})OPT = O(\sqrt{n \cdot \alpha(n^c)^{\alpha-2}\ln t})OPT$. The proof then follows from Claim~\ref{c} and $\alpha^{\frac{1}{\alpha}}=O(1)$. \qed
\subsection{The Second Algorithm} The second algorithm is designed for the 1-DR-$\alpha$ problem when $\alpha \geq 5$. It has a better approximation ratio than that of the previous algorithm when $\alpha \geq 5$. The algorithm is suggested in Lemma~\ref{idea}: We first find a dominating set $D$ by any $O(\log n)$-approximation algorithm. Let $D' = D$. For any two vertices $u$ and $v$ in $D$, if $m(u,v) \leq 3$, we then add at most three vertices to $D'$ so that $m^{D'}(u,v) \leq 3$.
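A sketch of this augmentation step follows; the dominating set $D$ is assumed to be given, e.g. by a greedy $O(\log n)$-approximation, which is not shown.

```python
from collections import deque
from itertools import combinations

def bfs_path(adj, u, v):
    """Shortest u-v path as a vertex list (BFS), or None if unreachable."""
    parent, queue = {u: None}, deque([u])
    while queue:
        w = queue.popleft()
        if w == v:
            path = []
            while w is not None:
                path.append(w)
                w = parent[w]
            return path[::-1]
        for x in adj[w]:
            if x not in parent:
                parent[x] = w
                queue.append(x)
    return None

def augment(adj, D):
    """Return D' >= D: for u, v in D with m(u, v) <= 3, add the internal
    vertices of a shortest u-v path, so that m^{D'}(u, v) <= 3."""
    D_prime = set(D)
    for u, v in combinations(D, 2):
        path = bfs_path(adj, u, v)
        if path is not None and len(path) - 2 <= 3:   # m(u, v) <= 3
            D_prime |= set(path[1:-1])                # at most 3 vertices
    return D_prime
```

By Lemma~\ref{idea}, the returned set is feasible for the 1-DR-$\alpha$ problem when $\alpha \geq 5$.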
\textbf{Proof of Theorem~\ref{thrm: 2nd}:} Let $OPT_{DS}$ be the size of the minimum dominating set in $G$. Let $OPT$ be the size of the optimum of the 1-DR-$\alpha$ problem. Since any feasible solution of the 1-DR-$\alpha$ problem must be a dominating set, $OPT_{DS} \leq OPT$.
$|D'| \leq |D|+3{{|D|}\choose{2}} = O((\log n \cdot OPT_{DS})^2)
= O((\log n \cdot OPT)^2)$. Since $|D'| \leq n$, we have $|D'| = O(\sqrt{n \cdot (\log n \cdot OPT)^2}) = O(\sqrt{n}\log n)OPT$. \qed
\section{Inapproximability Result} \label{inapprox} \subsection{The MIN-REP Problem} We prove Theorem~\ref{thrm: inapprox} by a reduction from the MIN-REP problem~\cite{Kortsarz2001}. The input of the MIN-REP problem consists of a bipartite graph $G=(X, Y, E)$, a partition of $X$, $\mathcal{P}_X=\{X_1, X_2, \cdots, X_{k_X}\}$, and a partition of $Y$, $\mathcal{P}_Y=\{Y_1, Y_2, \cdots, Y_{k_Y}\}$, such that $\bigcup_{i=1}^{k_X}{X_i} = X$ and $\bigcup_{i=1}^{k_Y}{Y_i} = Y$. Every $X_i \in \mathcal{P}_X$ (respectively, $Y_i \in \mathcal{P}_Y$)
has size $|X|/k_X$ (respectively, $|Y|/k_Y$). $X_1, X_2, \cdots, X_{k_X}$ and $Y_1, Y_2, \cdots, Y_{k_Y}$ are called \textit{super nodes}, and two super nodes $X_i$ and $Y_j$ are \textit{adjacent} if some vertex in $X_i$ and some vertex in $Y_j$ are adjacent in $G$. If $X_i$ and $Y_j$ are adjacent, then $X_i$ and $Y_j$ form a \textit{super edge}. In the MIN-REP problem, our task is to choose representatives for super nodes so that if $X_i$ and $Y_j$ form a super edge, then some representative for $X_i$ and some representative for $Y_j$ are adjacent in $G$. Note that a super node may have multiple representatives. Specifically, the goal of the MIN-REP problem is to find the smallest subset $R \subseteq X \cup Y$ such that if $X_i$ and $Y_j$ form a super edge, then $R$ must contain two vertices $x$ and $y$ such that $x \in X_i$, $y \in Y_j$ and $(x, y) \in E$. In this case, we say that $\{x,y\}$ \textit{covers} the super edge $(X_i, Y_j)$. The inapproximability result of the MIN-REP problem is stated as the following theorem. \begin{theorem}[Kortsarz \textit{et al.}~\cite{doi:10.1137/S0097539702416736}] \label{MIN-REP}
For any constant $\epsilon > 0$, unless $NP \subseteq DTIME(n^{poly\log n})$, there is no polynomial-time algorithm that can distinguish between instances of the MIN-REP problem with a solution of size $k_X+k_Y$ and instances where every solution is of size at least $(k_X+k_Y) \cdot 2^{\log^{1-\epsilon}{n(G)}}$, where $n(G)$ is the number of vertices in the input graph of the MIN-REP problem. \end{theorem}
\subsection{The Reduction} Given inputs $G=(X,Y,E)$, $\mathcal{P}_X$, and $\mathcal{P}_Y$ of the MIN-REP problem, we construct a corresponding graph $G'(G,\mathcal{P}_X, \mathcal{P}_Y)$ of the 1-DR-2 problem. When $G$, $\mathcal{P}_X$, and $\mathcal{P}_Y$ are clear from the context, we simply write $G'$ instead of $G'(G,\mathcal{P}_X, \mathcal{P}_Y)$. Initially, $G'=G$. Hence, $G'$ contains $X$, $Y$, and $E$. For each super node $X_i$ (respectively, $Y_i$), we create two corresponding vertices $px^1_i$ and $px^2_i$ (respectively, $py^1_i$ and $py^2_i$) in $G'$. If $x$ is in super node $X_i$ (respectively, $y$ is in super node $Y_i$), then we add two edges $(x, px^1_i)$ and $(x, px^2_i)$ (respectively, $(y, py^1_i)$ and $(y, py^2_i)$) in $G'$. If $X_i$ and $Y_j$ form a super edge, then we add two vertices $r^1_{i,j}$ and $r^2_{i,j}$ to $G'$, and we add four edges $(px^1_i, r^1_{i,j})$, $(r^1_{i,j}, py^1_j)$, $(px^2_i, r^2_{i,j})$, $(r^2_{i,j}, py^2_j)$ to $G'$. $r^1_{i,j}$ (respectively, $r^2_{i,j}$) is called the \textit{relay} of $px^1_i$ and $py^1_j$ (respectively, $px^2_i$ and $py^2_j$).
Before we complete the construction of $G'$, we briefly explain the idea behind the construction so far. If two super nodes $X_i$ and $Y_j$ form a super edge, then $px^I_i$ and $py^I_j$ ($I \in \{1,2\}$) have a common neighbor in $G'$, i.e., the relay $r^I_{i,j}$. Because $px^I_i$ and $py^I_j$ are not adjacent, $px^I_i$ and $py^I_j$ form a target couple. To transform a solution $D$ of the 1-DR-2 problem to a solution of the MIN-REP problem, we need to transform $D$ to another feasible solution $D'$ for the 1-DR-2 problem so that none of the relays is chosen, and only vertices in $X \cup Y$ are used to connect $px^I_i$ and $py^I_j$. This is the reason that we have two corresponding vertices for each super node (and thus two relays for each super edge). Under this setting, to connect $px^1_i$ to $py^1_j$ and $px^2_i$ to $py^2_j$, choosing two vertices in $X \cup Y$ is no worse than choosing the relays.
\begin{figure}
\caption{An example of the reduction from the MIN-REP problem to the 1-DR-2 problem.}
\label{fig: reduction}
\end{figure}
Let $PX = \{px^1_1, px^1_2, \cdots, px^1_{k_X}\} \cup \{px^2_1, px^2_2, \cdots, px^2_{k_X}\}$ be the set of vertices in $G'$ corresponding to the super nodes in $\mathcal{P}_X$. Similarly, let $PY = \{py^1_1, py^1_2, \cdots, py^1_{k_Y}\} \cup \{py^2_1, py^2_2, \cdots, py^2_{k_Y}\}$. Let $R$ be the set of all relays. To complete the construction, we add four vertices (hubs) $h_{X,R}$, $h_{Y,R}$, $h_{PX}$, and $h_{PY}$ to $G'$. In $G'$, all the vertices in $X$, $Y$, $PX$, and $PY$ are adjacent to $h_{X,R}$, $h_{Y,R}$, $h_{PX}$, and $h_{PY}$, respectively. Moreover, every relay is adjacent to $h_{X,R}$ and $h_{Y,R}$. These four hubs induce a 4-cycle $(h_{PX}, h_{Y,R}, h_{PY}, h_{X,R}, h_{PX})$ in $G'$. Finally, for each hub $h$, we create two dummy nodes $d_1$ and $d_2$, and add two edges $(h, d_1)$ and $(h, d_2)$ to $G'$. This completes the construction of $G'$. Fig.~\ref{fig: reduction} shows an example of the reduction. Let $H$ and $M$ be the set of hubs and the set of dummy nodes, respectively. Hence, the vertex set of $G'$ is $X \cup Y \cup PX \cup PY \cup R \cup H \cup M$. Let $N(u)$ be the set of neighbors of $u$ in $G'$. We then have \begin{align*} &N(px) \subseteq X \cup R \cup \{h_{PX}\} \text{ if } px \in PX. &N(py) \subseteq Y \cup R \cup \{h_{PY}\} \text{ if } py \in PY.\\ &N(x) \subseteq PX \cup Y \cup \{h_{X,R}\} \text{ if } x \in X. &N(y) \subseteq PY \cup X \cup \{h_{Y,R}\} \text{ if } y \in Y.\\ &N(h_{X,R}) \setminus M = X \cup R \cup \{h_{PX}, h_{PY}\}. &N(h_{Y,R}) \setminus M = Y \cup R \cup \{h_{PX}, h_{PY}\}.\\ &N(h_{PX}) \setminus M = PX \cup \{h_{X,R}, h_{Y,R}\}. &N(h_{PY}) \setminus M = PY \cup \{h_{X,R}, h_{Y,R}\}.\\ &N(m) \subseteq H \text{ if } m \in M. &N(r) \subseteq PX \cup PY \cup \{h_{X,R}, h_{Y,R}\} \text{ if } r \in R. \end{align*}
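For concreteness, the whole construction of $G'$ can be scripted as below. This is a sketch: vertex names such as `('px', I, i)`, the hub labels, and the assumption that the vertices of $X$ and $Y$ are non-tuple hashables are all illustrative conventions, and edges in `E` are assumed oriented as $(x, y)$ with $x \in X$.

```python
from collections import defaultdict

def build_G_prime(X, Y, E, PX_parts, PY_parts):
    """Construct G' from a MIN-REP instance.  X, Y: vertex lists; E: set
    of pairs (x, y); PX_parts, PY_parts: lists of super nodes (sets).
    Returns an adjacency dict (vertex -> set of neighbors)."""
    adj = defaultdict(set)
    def link(a, b):
        adj[a].add(b)
        adj[b].add(a)
    for x, y in E:                                   # G' starts as G
        link(x, y)
    for I in (1, 2):                                 # two copies per super node
        for i, Xi in enumerate(PX_parts):
            for x in Xi:
                link(x, ('px', I, i))
        for j, Yj in enumerate(PY_parts):
            for y in Yj:
                link(y, ('py', I, j))
    for i, Xi in enumerate(PX_parts):                # relays for super edges
        for j, Yj in enumerate(PY_parts):
            if any((x, y) in E for x in Xi for y in Yj):
                for I in (1, 2):
                    link(('px', I, i), ('r', I, i, j))
                    link(('r', I, i, j), ('py', I, j))
    for v in X:                                      # four hubs
        link(v, 'h_XR')
    for v in Y:
        link(v, 'h_YR')
    for w in list(adj):
        if isinstance(w, tuple):
            if w[0] == 'px':
                link(w, 'h_PX')
            elif w[0] == 'py':
                link(w, 'h_PY')
            elif w[0] == 'r':
                link(w, 'h_XR')
                link(w, 'h_YR')
    link('h_PX', 'h_YR'); link('h_YR', 'h_PY')       # hub 4-cycle
    link('h_PY', 'h_XR'); link('h_XR', 'h_PX')
    for h in ('h_XR', 'h_YR', 'h_PX', 'h_PY'):       # two dummies per hub
        link(h, ('d', h, 1))
        link(h, ('d', h, 2))
    return adj
```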
Observe that $|R|=O(n(G)^2)$. We have the following lemma. \begin{lemma} \label{size} $n(G')=O(n(G)^2)$. \end{lemma}
It is easy to check that, for any two adjacent vertices $u$ and $v$ in $G'$, $u$ and $v$ have no common neighbor. Hence, we have the following lemma. \begin{lemma} \label{triangle-free} $G'$ is triangle-free. \end{lemma}
We say that a target couple $[a,b]$ is in $[A,B]$ if $a \in A$ and $b \in B$. It is easy to verify the following two lemmas. \begin{lemma} \label{HMust} Only $H$ can cover the target couples in $[M, M]$. \end{lemma}
\begin{lemma} \label{DS} $H$ is a dominating set of $G'$. \end{lemma}
The proof of the following lemma can be found in the appendix. \begin{lemma} \label{H} $H$ covers all the target couples except those in $[PX,PY]$. \end{lemma}
Let $px$ and $py$ be vertices in $PX$ and $PY$, respectively. Observe that, if $(px,x,y,py)$ is a path in $G'$, then $x \in X$ and $y \in Y$. We then have the following lemma. \begin{lemma} \label{CoverPXPY} $D$ covers target couples $[px^1_i,py^1_j]$ and $[px^2_i,py^2_j]$ if and only if at least one of the following conditions is satisfied. \begin{enumerate} \item There exist $x \in X$ and $y \in Y$ such that $(px^1_i,x,y,py^1_j)$ and $(px^2_i,x,y,py^2_j)$ are paths in $G'$ and $\{x,y\} \subseteq D$. \item $\{r^1_{i,j}, r^2_{i,j}\} \subseteq D$. \end{enumerate} \end{lemma}
\subsection{The Analysis} Let $I_{MR}$ be an instance of the MIN-REP problem with inputs $G$, $\mathcal{P}_X$, and $\mathcal{P}_Y$. Let $I_D$ be the instance of the 1-DR-2 problem with input $G'(G,\mathcal{P}_X,\mathcal{P}_Y)$. To prove the inapproximability result, we use the following two lemmas. \begin{lemma} \label{UB} If $I_{MR}$ has a solution of size $s$, then $I_D$ has a solution of size $s+4$. \end{lemma} \begin{lemma} \label{LB} If every solution of $I_{MR}$ has size at least $s \cdot 2^{\log^{1-\epsilon}{n(G)}}$, then every solution of $I_D$ has size at least $s \cdot 2^{\log^{1-\epsilon}{n(G)}}+4$. \end{lemma}
\textbf{Proof of Theorem~\ref{thrm: inapprox}:} By Theorem~\ref{MIN-REP}, for any constant $\epsilon > 0$, unless $NP \subseteq DTIME(n^{poly\log n})$, there is no polynomial-time algorithm that can distinguish between instances of the MIN-REP problem with a solution of size $k_X+k_Y$ and instances where every solution is of size at least $(k_X+k_Y) \cdot 2^{\log^{1-\epsilon}{n(G)}}$. By the above two lemmas, it is hard to distinguish between instances of the 1-DR-2 problem with a solution of size $k_X+k_Y+4$ and instances in which every solution is of size at least $(k_X+k_Y) \cdot 2^{\log^{1-\epsilon}{n(G)}}+4$. Therefore, for any constant $\epsilon > 0$, unless $NP \subseteq DTIME(n^{poly\log n})$, there is no polynomial-time algorithm that can approximate the 1-DR-2 problem by a factor better than $\frac{(k_X+k_Y)\cdot 2^{\log^{1-\epsilon}{n(G)}}+4}{k_X+k_Y+4}$. Lemma~\ref{size} implies that, for any constant $\epsilon' > 0$, unless $NP \subseteq DTIME(n^{poly\log n})$, there is no $O(2^{\log^{1-\epsilon'}{n(G')^{0.5}}})$-approximation algorithm for the 1-DR-2 problem. By considering sufficiently large instances and a small enough $\epsilon'$, we have the hardness result claimed in Theorem~\ref{thrm: inapprox}. On the other hand, let 1-DR-$2'$ be the problem obtained by removing the constraint that any feasible solution must be a dominating set from the 1-DR-2 problem. Thus, in the 1-DR-$2'$ problem, we only focus on covering target couples. By Lemmas~\ref{HMust} and~\ref{DS}, a solution $D$ is feasible for the 1-DR-$2'$ problem with input $G'$ if and only if $D$ is a feasible solution of $I_D$. Thus, the inapproximability result also applies to the 1-DR-$2'$ problem. Finally, the proof follows from Lemma~\ref{triangle-free}. \qed
Lemma~\ref{UB} is a direct result of the following claim. \begin{myclaim} \label{SHfeasible} If $S$ is a feasible solution of $I_{MR}$, then $S \cup H$ is a feasible solution of $I_D$. \end{myclaim} \begin{proof} Since $H$ is a dominating set, by Lemma~\ref{H}, it suffices to prove that every target couple $[u,v] = [px^{I_1}_i, py^{I_2}_j]$ in $[PX,PY]$ is covered by $S$. Note that $[px^{I_1}_i, py^{I_2}_j]$ cannot be a target couple if $I_1 \neq I_2$, because $px^{I_1}_i$ and $py^{I_2}_j$ then have no common neighbor. If $I_1 = I_2$, then the common neighbor must be $r^I_{i,j}$. By the construction of $G'$, this implies that $X_i$ and $Y_j$ form a super edge. Since $S$ is a feasible solution of $I_{MR}$, there exist $x \in X_i$ and $y \in Y_j$ such that $x$ and $y$ are adjacent in $G$ and $\{x,y\} \subseteq S$. Again, by the construction of $G'$, $(u, x, y, v)$ is a path in $G'$. Hence, $S \supseteq \{x,y\}$ covers $[u,v]$. \end{proof}
To prove Lemma~\ref{LB}, we use the following claim. \begin{myclaim} \label{XYonly} $I_D$ has an optimal solution $D^*$, such that $D^* \setminus H$ is a feasible solution of $I_{MR}$. \end{myclaim}
\textbf{Proof of Lemma~\ref{LB}:}
Let $S^*$ be an optimal solution of $I_{MR}$. By assumption, $|S^*| \geq s \cdot 2^{\log^{1-\epsilon}{n}}$. It suffices to prove that $S^* \cup H$ is an optimal solution for $I_D$, which implies that every feasible solution of $I_D$ has size at least
$|S^* \cup H| = |S^*| + 4 \geq s \cdot 2^{\log^{1-\epsilon}{n}} + 4$. The feasibility of $S^* \cup H$ follows from Claim~\ref{SHfeasible}. For the sake of contradiction, assume that the optimal solution of $I_D$ has
size smaller than $|S^* \cup H|=|S^*|+4$. Claim~\ref{XYonly} and Lemma~\ref{HMust} then imply that $S^*$ is not an optimal solution of $I_{MR}$, which is a contradiction. \qed
\textbf{Proof of Claim~\ref{XYonly}:} Let $D_{OPT}$ be any optimal solution of $I_D$. By Lemmas~\ref{HMust}, \ref{H}, and \ref{CoverPXPY}, $D_{OPT} \subseteq H \cup X \cup Y \cup R$. If $D_{OPT} \cap R = \emptyset$, by Lemma~\ref{CoverPXPY}, each target couple $[px^I_i,py^I_j]$ is covered by some $x \in X$ and some $y \in Y$. By the construction of $G'$, such $x$ and $y$ also cover the super edge $(X_i, Y_j)$ in $I_{MR}$. Because each super edge in $I_{MR}$ has a corresponding target couple in $I_D$, $D_{OPT} \setminus H$ is a feasible solution of $I_{MR}$.
If $D_{OPT} \cap R \neq \emptyset$, then some $r^I_{i,j} \in D_{OPT}$. We can further assume that both $r^1_{i,j}$ and $r^2_{i,j}$ are in $D_{OPT}$; otherwise, by Lemma~\ref{CoverPXPY}, we could remove $r^I_{i,j}$ from $D_{OPT}$ and obtain a smaller solution that is still feasible, contradicting the optimality of $D_{OPT}$. We then replace $r^1_{i,j}$ and $r^2_{i,j}$ with some $x \in X$ and some $y \in Y$ satisfying the first condition in Lemma~\ref{CoverPXPY}. By Lemma~\ref{CoverPXPY}, the resulting solution is still feasible, and its size remains the same. Repeating this replacement process until the resulting solution contains no relay, the proof follows from the argument of the case $D_{OPT} \cap R = \emptyset$. \qed
\section{Transforming the 1-DR-$\alpha$ Problem to Other Related Problems} \textbf{Submodular Cost Set Cover Problem:} The 1-DR-$\alpha$ problem can also be considered as a special case of the submodular cost set cover problem~\cite{Du2011_submodular, 5438589, Wan2010}. In the set cover problem, we are given a set of targets $\mathcal{T}$ and a set of objects $\mathcal{S}$. Each object in $\mathcal{S}$ can cover a subset of $\mathcal{T}$ (specified in the input). The goal is to choose the smallest subset of $\mathcal{S}$ that covers $\mathcal{T}$. In the submodular cost set cover problem, there is a non-negative submodular function $c$ that maps each subset of $\mathcal{S}$ to a cost, and the goal is to find the set cover with the minimum cost. To transform the 1-DR-$\alpha$ problem with input $G=(V,E)$ to the submodular cost set cover problem, let $\mathcal{T}$ be the union of $V$ and the set of all target couples, and let $\mathcal{S}$ be the set of all subsets of $V$ with size at most $\alpha$. Hence, each object in $\mathcal{S}$ is a subset of $V$. An object $S \in \mathcal{S}$ can cover a vertex $v$ if $v$ is adjacent to some vertex in $S$ or $v \in S$. An object $S \in \mathcal{S}$ can cover a target couple $[u,v]$ if $m^S(u,v) \leq \alpha$. The cost of a subset $\mathcal{S}'$ of $\mathcal{S}$ is simply the size of the union of objects in $\mathcal{S}'$, i.e., the number of distinct vertices specified in $\mathcal{S}'$.
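The cost function $c$ in this transformation, the number of distinct vertices appearing in a family of objects, is a coverage function and hence monotone and submodular. The following sketch (our own illustration with ad hoc names, not code from the paper) checks the defining submodularity inequality $c(A)+c(B) \geq c(A \cup B) + c(A \cap B)$ exhaustively on a toy instance with $\alpha = 2$.

```python
from itertools import combinations

def cost(family):
    """Cost of a family of objects: the number of distinct vertices
    mentioned by the objects (each object is a frozenset of vertices)."""
    return len(set().union(*family)) if family else 0

V = range(4)
# objects: all non-empty vertex subsets of size at most alpha = 2
objects = [frozenset(s) for r in (1, 2) for s in combinations(V, r)]

def is_submodular():
    """Check c(A) + c(B) >= c(A | B) + c(A & B) over all small families."""
    fams = [frozenset(f) for r in range(3) for f in combinations(objects, r)]
    return all(cost(A) + cost(B) >= cost(A | B) + cost(A & B)
               for A in fams for B in fams)
```

The check only enumerates families of at most two objects, which is enough to exercise every pairwise union and intersection on this small instance.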
Iwata and Nagano proposed a $|\mathcal{T}|$-approximation algorithm and an $f$-approximation algorithm, where $f$ is the maximum frequency, $\max_{T\in \mathcal{T}}{|\{S \in \mathcal{S} \mid S \text{ covers } T\}|}$~\cite{5438589}. Koufogiannakis and Young also proposed an $f$-approximation algorithm for the case where the cost function $c$ is non-decreasing~\cite{Koufogiannakis2013}. It is easy to see that these algorithms give trivial bounds for the 1-DR-$\alpha$ problem. When the cost function $c$ is integer-valued, non-decreasing, and satisfies $c(\emptyset)=0$, Wan \textit{et al.} proposed a $\rho H(\gamma)$-approximation algorithm, where $\rho = \min\limits_{\mathcal{S}^*:\, \mathcal{S}^* \text{ is an optimal solution}} {\frac{\sum_{S \in \mathcal{S}^*}c(\{S\})}{c(\mathcal{S}^*)}}$, $\gamma$ is the largest number of targets that can be covered by an object in $\mathcal{S}$, and $H(k)$ is the $k$-th harmonic number~\cite{Wan2010}. Du \textit{et al.} applied this algorithm to the 1-DR-$\alpha$ problem on UDG for $\alpha \geq 5$ and obtained a constant factor approximation algorithm~\cite{Du2011_submodular}. It is unclear whether or not $\rho$ can be upper bounded by $O(n^{1-\epsilon})$ for some $\epsilon > 0$ when applied to the 1-DR-$\alpha$ problem on general graphs.
\textbf{Minimum Rainbow Subgraph Problem on Multigraphs:} Given a set of $p$ colors and a multigraph $H$, where each edge is colored with one of the $p$ colors, the Minimum Rainbow Subgraph (MRS) problem asks for the smallest vertex subset $D$ of $H$,
such that each of the $p$ colors appears in some edge induced by $D$. The 1-DR-2 problem can be transformed to the MRS problem as follows. Let $G=(V,E)$ be the input graph of the 1-DR-2 problem. Let $T$ be the union of $V$ and the set of all target couples. The set of colors for the MRS problem is $\{c_i|i \in T\}$. The input multigraph $H$ of the MRS problem has the same vertex set as $G$. To form a dominating set, for each $v \in V$, $v$ is incident to $d(v)+1$ loops $(v,v)$ in $H$, where $d(v)$ is the degree of $v$ in $G$. Each of these loops receives a different color in
$\{c_v\} \cup \{c_u| (u,v) \in E\}$. For each target couple $[u,v]$ in $G$, if $w$ is a common neighbor of $u$ and $v$ in $G$, we add a loop $(w,w)$ with color $c_{[u,v]}$ to $H$. Finally, for each target couple $[u,v]$ in $G$, if $(u,w_1, w_2,v)$ is a path in $G$, we add an edge $(w_1, w_2)$ with color $c_{[u,v]}$ to $H$. The MRS problem can be transformed to the SCP problem. When the input graph is simple, Tirodkar and Vishwanathan proposed an $O(n^{1/3}\log n)$-approximation algorithm~\cite{Tirodkar2017}.
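A literal implementation of this reduction is straightforward. The sketch below is our own; the set of target couples is taken as input, since its definition appears earlier in the paper. It builds the colored multigraph $H$ from a 1-DR-2 instance, representing each edge of $H$ as a triple $(u,v,\text{color})$ with loops encoded as $u=v$.

```python
from itertools import product

def build_mrs_instance(V, E, couples):
    """Build the colored multigraph H of the 1-DR-2 -> MRS reduction.
    V: vertices, E: edges of the simple graph G, couples: target couples.
    Returns a list of colored (multi)edges (u, v, color); loops have u == v."""
    adj = {v: set() for v in V}
    for u, v in E:
        adj[u].add(v)
        adj[v].add(u)
    H = []
    # d(v)+1 loops at v, colored c_v and c_u for every neighbor u of v:
    # choosing v into the rainbow subgraph then "dominates" v and its neighbors.
    for v in V:
        H.append((v, v, ('dom', v)))
        for u in sorted(adj[v]):
            H.append((v, v, ('dom', u)))
    for (u, v) in couples:
        # a common neighbor w of u and v covers [u,v] alone: a loop at w
        for w in adj[u] & adj[v]:
            H.append((w, w, ('cpl', (u, v))))
        # a path (u, w1, w2, v) covers [u,v] via the pair {w1, w2}:
        # an edge (w1, w2) of color c_[u,v]
        for w1, w2 in product(adj[u], adj[v]):
            if w1 != w2 and w2 in adj[w1] and w1 != v and w2 != u:
                H.append((w1, w2, ('cpl', (u, v))))
    return H
```

On the path $1$--$2$--$3$--$4$ with the single couple $[1,4]$, the construction yields the $\sum_v (d(v)+1) = 10$ domination loops plus one couple-colored edge $(2,3)$.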
\appendix \section{Proof of Claim~\ref{c}} \begin{align*} n^{1-c} &= \sqrt{n \cdot \alpha(n^c)^{\alpha-2}\ln t} \\ \Leftrightarrow n^{2-2c} &= n \cdot \alpha(n^c)^{\alpha-2}\ln t \text{ (both sides are non-negative)} \\ \Leftrightarrow n^{2-2c-(1+c(\alpha-2))} &= \alpha \ln t \\ \Leftrightarrow n^{1-c\alpha} &= \alpha \ln t. \end{align*} When $c = \frac{1}{\alpha}-\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}$, \begin{align} n^{1-c\alpha} &= n^{1-(1-\frac{\ln \ln (t^{\alpha})}{\ln n})} \nonumber \\ &= n^{\frac{\ln \ln (t^{\alpha})}{\ln n}} \label{eq0} \\ &= (n^{\ln (\ln (t^{\alpha}))})^{\frac{1}{\ln n}} \label{eq1} \\ &= ((\ln (t^{\alpha}))^{\ln n})^{\frac{1}{\ln n}} \label{eq2} \\ &= (({\alpha}\ln t)^{\ln n})^{\frac{1}{\ln n}} \label{eq3} \\ &= {\alpha}\ln t \label{eq4}. \end{align} Hence, when $c = \frac{1}{\alpha}-\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}$, $n^{1-c}=\sqrt{n \cdot \alpha(n^c)^{\alpha-2}\ln t}$.
Finally, when $c = \frac{1}{\alpha}-\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}$, \begin{align*} n^{1-c} &= n^{1-\frac{1}{\alpha}+\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}}\\ &= n^{1-\frac{1}{\alpha}} \cdot n^{\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}}\\ &= n^{1-\frac{1}{\alpha}} \cdot (n^{\frac{\ln \ln (t^{\alpha})}{\ln n}})^{\frac{1}{\alpha}}\\ &= n^{1-\frac{1}{\alpha}} \cdot (\alpha \ln t)^{\frac{1}{\alpha}}. \end{align*} In the last equality, we reuse Eq.\eqref{eq0}-Eq.\eqref{eq4}. \qed
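The identities above are easy to confirm numerically; the following check (ours, with arbitrary sample values for $n$, $\alpha$, and $t$) evaluates both sides of the claimed equalities at $c = \frac{1}{\alpha}-\frac{\ln \ln (t^{\alpha})}{\alpha \ln n}$.

```python
import math

# arbitrary sample values (any n, t large enough that ln(ln(t^alpha)) > 0 works)
n, alpha, t = 10**6, 3, 50.0
c = 1 / alpha - math.log(math.log(t ** alpha)) / (alpha * math.log(n))

lhs = n ** (1 - c)                                            # n^(1-c)
rhs = math.sqrt(n * alpha * (n ** c) ** (alpha - 2) * math.log(t))
closed_form = n ** (1 - 1 / alpha) * (alpha * math.log(t)) ** (1 / alpha)
```

All three quantities agree up to floating-point rounding, as the derivation predicts.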
\section{Proof of Lemma~\ref{H}}
\begin{itemize}
\item If $[u,v]$ is in $[PX, PX \cup \{h_{X,R}, h_{Y,R}\}]$, $[PY, PY \cup \{h_{X,R}, h_{Y,R}\}]$, $[X, X \cup R \cup \{h_{PX}, h_{PY}\}]$, $[Y, Y \cup R \cup \{h_{PX}, h_{PY}\}]$, or $[R, R \cup \{h_{PX}, h_{PY}\}]$, then $[u,v]$ can be covered by one vertex in $H$.
\item If $[u,v]$ is in $[PX,Y]$, $[PY,X]$, $[X, \{h_{Y,R}\}]$, or $[Y,\{h_{X,R}\}]$, then $[u,v]$ can be covered by an edge in $H$.
\item If $[u,v]$ is in $[PX, X \cup R \cup \{h_{PX}, h_{PY}\}]$, $[PY, Y \cup R \cup \{h_{PX}, h_{PY}\}]$, or $[X,Y]$, then $[u,v]$ cannot be a target couple, since $u$ and $v$ do not have a common neighbor.
\item If $[u,v]$ is in $[X,\{h_{X,R}\}]$, $[Y,\{h_{Y,R}\}]$, or $[R, \{h_{X,R}, h_{Y,R}\}]$, then $[u,v]$ cannot be a target couple, since $u$ and $v$ are adjacent\footnote{In addition, by Lemma~\ref{triangle-free}, $u$ and $v$ do not have a common neighbor.}.
\end{itemize}
Moreover, it is easy to see that $H$ covers all the target couples in $[H,H]$ or $[V(G'), M]$, where $V(G')$ is the vertex set of $G'$. Finally, observe that if $[u,v]$ is in $[PX,PY]$, then $H$ cannot cover $[u,v]$. \qed
\end{document}
\begin{document}
\begin{abstract} Let $M$ be a compact two-dimensional manifold, $f \in C^{\infty}(M,\mathbb{R})$ be a Morse function, and $\Gamma_f$ be its Kronrod-Reeb graph. Denote by $\mathcal{O}(f)=\{f \circ h \mid h \in \mathcal{D}\}$ the orbit of $f$ with respect to the natural right action of the group of diffeomorphisms $\mathcal{D}$ on $C^{\infty}(M,\mathbb{R})$, and by $\mathcal{S}(f)=\{h\in\mathcal{D} \mid f \circ h = f\}$ the corresponding stabilizer of this function. It is easy to show that each $h\in\mathcal{S}(f)$ induces a homeomorphism of $\Gamma_f$. Let also $\mathcal{D}_{\mathrm{id}}(M)$ be the identity path component of $\mathcal{D}(M)$, $\mathcal{S}'(f)= \mathcal{S}(f) \cap \mathcal{D}_{\mathrm{id}}(M)$ be the group of diffeomorphisms of $M$ preserving $f$ and isotopic to the identity map, and $G_f$ be the group of homeomorphisms of the graph $\Gamma_f$ induced by diffeomorphisms belonging to $\mathcal{S}'(f)$. This group is one of the key ingredients for calculating the homotopy type of the orbit $\mathcal{O}(f)$.
Recently the authors described the structure of the groups $G_f$ for Morse functions on all orientable surfaces distinct from the $2$-torus $T^2$ and the $2$-sphere $S^2$. The present paper is devoted to the case $M=S^{2}$. In this situation $\Gamma_f$ is always a tree, and therefore all elements of the group $G_f$ have a common fixed subtree $\mathrm{Fix}(G_f)$, which may even consist of a single vertex. Our main result calculates the groups $G_f$ for all Morse functions $f\colon S^{2}\to\mathbb{R}$ whose fixed subtree $\mathrm{Fix}(G_f)$ consists of more than one point. \end{abstract}
\subjclass[2010]{ 37E30, 22F50 }
\maketitle
\section{Introduction} Let $\Mman$ be a compact two-dimensional manifold and $\DiffM$ the group of diffeomorphisms of $\Mman$. Then there exists a natural right action \[\phi\colon \Ci{\Mman}{\bR}\times\DiffM\to \Ci{\Mman}{\bR}\] of this group on the space of smooth functions on $\Mman$ defined by the formula $\phi(\func,\dif) = \func \circ \dif$. For $\func \in \Ci{\Mman}{\bR}$ denote by \begin{align*} \Stabilizer{\func} &=\{\dif\in\DiffM \mid \func \circ \dif = \func\} \end{align*} its \emph{stabilizer} with respect to the specified action.
\begin{definition}\label{def:MorseFunc} Let $\FSp{\Mman}$ be the subset of $\Ci{\Mman}{\bR}$ consisting of maps $\func\colon \Mman\to\bR$ such that \begin{enumerate}[leftmargin=*, label={\rm(\arabic*)}] \item $\func$ takes constant values on the connected components of the boundary $\partial\Mman$ and has no critical points on $\partial\Mman$; \item for each critical point $z$ of $\func$ there are local coordinates $(x,y)$ in which $z=(0,0)$ and $\func(x,y)=\func(z) + g_z(x,y)$, where $g_z\colon \bR^2\to\bR$ is a homogeneous polynomial without multiple factors. \end{enumerate} Notice that every critical point of $\func\in\FSp{\Mman}$ is isolated.
A function $\func\in\FSp{\Mman}$ is called \emph{Morse} if $\deg g_z =2$ for each critical point $z$ of $\func$. In that case, due to the Morse Lemma, one can assume that $g_z(x,y) = \pm x^2 \pm y^2$. \end{definition}
We will denote by $\Morse{\Mman}{\bR}$ the space of all Morse maps $\Mman\to\bR$.
Homotopy types of stabilizers and orbits of Morse functions and functions from $\mathcal{F}(\Mman,\bR)$ were studied in \cite{Maksymenko:AGAG:2006}, \cite{Maksymenko:ProcIM:ENG:2010}, \cite{Maksymenko:UMZ:ENG:2012}, \cite{Kudryavtseva:ConComp:VMU:2012}, \cite{Kudryavtseva:MathNotes:2012}, \cite{Kudryavtseva:MatSb:2013}, \cite{KudryavtsevaPermyakov:MatSb:2010}, \cite{MaksymenkoFeshchenko:MS:2015}, \cite{MaksymenkoFeshchenko:MFAT:2015}.
Let $\func\in\Ci{\Mman}{\bR}$, let $\KRGraphf$ be the partition of the surface $\Mman$ into the connected components of level sets of this function, and let $p\colon \Mman \to \KRGraphf$ be the canonical factor-mapping, associating to each $x \in \Mman$ the connected component of the level set $\func^{-1}(\func(x))$ containing that point.
Endow $\KRGraphf$ with the factor topology with respect to the mapping $p$: so a subset $A\subset \KRGraphf$ will be regarded as open if and only if its inverse image $p^{-1}(A)$ is open in $\Mman$. Then $\func$ induces the function $\hat{\func}\colon \KRGraphf \to \bR$, such that $\func=\hat{\func}\circ p$.
It is well known that if $\func\in\FSp{\Mman}$, then $\KRGraphf$ has a structure of a one-dimensional CW-complex called the \emph{Kronrod-Reeb graph}, or simply the \emph{graph} of $\func$. The vertices of this graph correspond to critical connected components of level sets of $\func$ and to connected components of the boundary of the surface. By an \emph{edge} of $\KRGraphf$ we will mean an \emph{open} edge, that is, a one-dimensional cell.
Denote by $\Homeo(\KRGraphf)$ the group of homeomorphisms of $\KRGraphf$. Notice that each element of the stabilizer $\dif\in\Stabilizer{\func}$ leaves invariant each level set of $\func$, and therefore induces a homeomorphism $\rho(\dif)$ of the graph of $\func$, so that the following diagram is commutative: \begin{equation}\label{equ:2x2_M_Graph} \xymatrix{
\Mman \ar[rr]^-{p} \ar[d]_-{\dif} &&
\KRGraphf \ar[rr]^-{\hat{\func}} \ar[d]^-{\rho(\dif)} &&
\bR \ar@{=}[d] \\
\Mman \ar[rr]^-{p} &&
\KRGraphf \ar[rr]^-{\hat{\func}} &&
\bR } \end{equation}
Moreover, the correspondence $\dif\mapsto \rho(\dif)$ is a homomorphism of groups \[\rho\colon \Stabilizer{\func} \to \Homeo(\KRGraphf).\]
Let also $\DiffIdM$ be the path component of the identity map $\id_{\Mman}$ in $\DiffM$. Put \begin{align*} \StabilizerIsotId{\func} &= \Stabilizer{\func} \cap \DiffIdM &\ \fG &=\rho(\StabilizerIsotId{\func}). \end{align*}
Thus, $\fG$ is the group of automorphisms of the Kronrod-Reeb graph of $\func$ induced by diffeomorphisms of the surface preserving the function and isotopic identity.
\begin{remark}\label{rm:f}\rm Since $\hat{\func}$ is monotone on the edges of $\KRGraphf$, it is easy to show that $\fG$ is a finite group. Moreover, if $g(E)=E$ for some $g\in \fG$ and an edge $E$ of the graph $\KRGraphf$, then $g(x)=x$ for all $x \in E$. \end{remark}
Since $\fG$ is finite and $\rho$ is continuous, it follows that $\rho$ reduces to an epimorphism \[
\rho_0\colon \pi_0 \StabilizerIsotId{\func} \to \fG, \] of the group $\pi_{0}\StabilizerIsotId{\func}$ of path components of $\StabilizerIsotId{\func}$, which is an analogue of the mapping class group for $\func$-preserving diffeomorphisms.
Algebraic structure of the group $\pi_{0}\StabilizerIsotId{\func}$ of connected components of $\StabilizerIsotId{\func}$ for all $\func\in\FSp{\Mman}$ on orientable surfaces $\Mman$ distinct from $2$-torus and $2$-sphere is described in~\cite{Maksymenko:KRGraphs:2013}, and the structure of its factor group $\fG$ is investigated in~\cite{MaksymenkoKravchenko:GMF:2018}. These groups play an important role in computing the homotopy type of the path component $\OrbitComp{\func}{\func}$ of the orbit of $\func$, see also~\cite{Maksymenko:AGAG:2006}, \cite{Maksymenko:ProcIM:ENG:2010}, \cite{Kudryavtseva:ConComp:VMU:2012}, \cite{Kudryavtseva:MathNotes:2012}, \cite{Kudryavtseva:MatSb:2013}.
The purpose of this note is to describe the groups $\fG$ for a certain class of smooth functions on $2$-sphere $S^2$.
The main result Theorem~\ref{th:iso} reduces computation of $\fG$ to computations of similar groups for restrictions of $\func$ to some disks in $S^2$. As noted above the latter calculations were described in \cite{MaksymenkoKravchenko:GMF:2018}.
First we recall a variant of a well-known fact about automorphisms of finite trees from graph theory.
\begin{lemma}\label{lm:cw} Let $\Gamma$ be a finite contractible one-dimensional CW-complex (a ``topological tree''), $G$ be a finite group of its cellular homeomorphisms, and $\mathrm{Fix}(G)$ be the set of common fixed points of all elements of the group $G$. Then $\mathrm{Fix}(G)$ is either a contractible subcomplex or consists of a single point belonging to some edge $E$ (an open 1-cell), and in the latter case there exists $g \in G$ such that $g(E)=E$ and $g$ reverses the orientation of $E$. \end{lemma}
Suppose $\func\colon S^{2}\to{\bR}$ belongs to $\FSp{S^2}$. Then it is easy to show that $\KRGraphf$ is a tree, \emph{i.e.}, a finite contractible one-dimensional CW-complex, and by Remark~\ref{rm:f} $\fG$ is a finite group of cellular homeomorphisms of $\KRGraphf$. Therefore, for $\fG$, the conditions of Lemma~\ref{lm:cw} are satisfied. Note that according to Remark~\ref{rm:f} the second case of Lemma~\ref{lm:cw} is impossible, and hence $\fG$ has a fixed subtree.
In this paper we consider the case when the fixed subtree of the group $\fG$ contains more than one vertex, \emph{i.e.} has at least one edge.
Let us also mention that $\DiffId(S^{2})$ coincides with the group $\Diff^{+}(S^{2})$ of diffeomorphisms of the sphere preserving orientation, \cite{Smale:ProcAMS:1959}. Therefore $\StabilizerIsotId{\func}$ consists of diffeomorphisms of the sphere preserving the function $\func$ and the orientation of $S^{2}$. \begin{theorem}\label{th:iso} Let $\func\in\FSp{S^2}$. Suppose that all elements of the group $\fG$ have a common fixed edge $E$. Let $x \in E$ be an arbitrary point and $A$ and $B$ be the closures of the connected components of $S^{2}\setminus p^{-1}(x)$. Then \begin{enumerate}[label={\rm{(\arabic*)}}] \item\label{en1} $A$ and $B$ are 2-disks invariant with respect to $\StabilizerIsotId{\func}$; \item\label{en2} the restrictions $\fa \in\FSp{A}$ and $\fb\in\FSp{B}$; \item\label{en3} the map $\phi\colon \fG \to \aG \times \bG$ defined by the formula \[\phi(\gamma)=(\ag, \bg)\] is an isomorphism of groups. \end{enumerate}
\begin{proof} \ref{en1} By assumption $x$ belongs to the open edge $E$. Therefore $p^{-1}(x)$ is a regular connected component of some level set of the function $\func$, that is, a simple closed curve. Then, by Jordan Theorem, $p^{-1}(x)$ divides the sphere into two connected components whose closures are homeomorphic to two-dimensional disks. Consequently, $A$ and $B$ are two-dimensional disks.
Let us show that $A$ and $B$ are invariant with respect to $\StabilizerIsotId{\func}$, \emph{i.e.}, $h(A)=A$ and $h(B)=B$ for each $h\in \StabilizerIsotId{\func}$. Denote \begin{align*} \gA&=p(A) &\ \gB&=p(B). \end{align*} Then \begin{align*} \gA\cup\gB&=\gG &\ \gA\cap\gB&=\{x\}. \end{align*} By definition, $\rh(x)=x$, whence $\rh$ either preserves both $\gA$ and $\gB$ or interchanges them. We claim that \begin{align*} \rh(\gA)&=\gA &\ \rh(\gB)&=\gB. \end{align*} Indeed, suppose $\rh(\gA)=\gB$. Since $\rh$ is fixed on $E$, it follows that \[ \rh(\gA\cap E)=\gA\cap E, \] whence \[ \rh(\gA\cap E)=\rh(\gA)\cap\rh(E)=\gB\cap E\neq\gA\cap E, \] which contradicts our assumption. Thus $\gA$ and $\gB$ are invariant with respect to the group $\fG$.
Now we can show that $A$ and $B$ are also invariant with respect to $h$. By the commutativity of the diagram~\eqref{equ:2x2_M_Graph}, $\rh(p(y))=p(h(y))$ for all $y\in \Mman$. In particular: \[ p(h(A))=\rh(p(A))=\rh(\gA)=\gA. \]
Therefore, $h(A)=p^{-1}(\gA)=A$. The proof for $B$ is similar. Thus, $A$ and $B$ are invariant with respect to $\StabilizerIsotId{\func}$.
\ref{en2} Notice that the function $\func$ takes a constant value on the simple closed curve $p^{-1}(x)$, which is the common boundary of the disks $A$ and $B$ and contains no critical points of $\func$. Therefore, the restrictions $\fa, \fb$ satisfy conditions (1) and (2) of Definition~\ref{def:MorseFunc}, and so they belong to $\FSp{A}$ and $\FSp{B}$ respectively.
\ref{en3} We should prove that the map $\phi\colon \fG \to \aG \times \bG$ defined by formula $\phi(\gamma)=(\ag,\bg)$ is an isomorphism.
First we will show that $\phi$ is correctly defined. Let $\gamma\in\fG=\Rf$, that is, $\gamma=\rh$, where $\dif$ is a diffeomorphism of the sphere preserving the function $f$ and isotopic to the identity.
We claim that $\ha\in\Sa=\Stabilizer{\fa}\cap \DiffId(A)$. Indeed, for each point $y\in A$ we have: \[ \fa(\ha(y))=\func(\dif(y))=\func(y)=\fa(y), \] which means that $\ha\in\Stabilizer{\fa}$.
Moreover, since $\dif$ preserves the orientation of the sphere, it follows that $\ha$ preserves the orientation of the disk $A$, and therefore by~\cite{Smale:ProcAMS:1959}, $\ha\in\DiffId(A)$. Thus $\ag\in\aG$. Similarly $\bg\in\bG$, and so $\phi$ is well defined.
Let us now verify that $\phi$ is an \emph{isomorphism of groups}, that is, a bijective homomorphism.
Let $\delta, \omega \in\fG$. Then \begin{align*} \phi(\delta\circ\omega)
&= \bigl( (\delta\circ\omega)|_{\gA}, \ (\delta\circ\omega)|_{\gB} \bigr) \\
&= \bigl(\delta|_{\gA}\circ\omega|_{\gA},\ \delta|_{\gB}\circ\omega|_{\gB}\bigr) \\
&= \bigl(\delta|_{\gA},\ \delta|_{\gB}\bigr)\circ\bigl(\omega|_{\gA}, \ \omega|_{\gB}\bigr)
= \phi(\delta)\circ\phi(\omega), \end{align*} so $\phi$ is a homomorphism.
Let us show that $\ker\phi=\{\id_{\gG}\}$. Indeed, suppose $\gamma\in \ker\phi$, that is $\ag=\id_{\gA}$ and $\bg=\id_{\gB}$. Then $\gamma$ is fixed on $\gA\cup\gB=\gG$, and hence it is the identity map.
Surjectivity of $\phi \colon \fG \to \aG \times \bG$ is implied by the following simple lemma whose proof we leave to the reader.
\begin{lemma}\label{lm:ab} Suppose $f\colon D^{2}\to \bR$ belongs to the space $\FSp{D^{2}}$. Then for arbitrary $\alpha\in\fG$, there exists $a\in \Sf$ fixed near the boundary $\partial D^{2}$ and such that $\alpha=\rho(a)$.\qed \end{lemma}
Let $(\alpha,\beta)\in\aG \times \bG$, then by Lemma~\ref{lm:ab} there exist $a\in \Sa$ and $b\in \Sb$ fixed near $\partial A=\partial B=p^{-1}(x)$ and such that $\alpha=\rho_{A}(a)$ and $\beta=\rho_{B}(b)$. Define $\dif$ by the following formula: \begin{equation*}
\dif(x)=
\begin{cases}
a(x),& x\in A, \\
b(x),& x\in B.
\end{cases} \end{equation*} Then, $\dif$ is a diffeomorphism of the sphere, preserving the function and orientation, whence $\dif\in\StabilizerIsotId{\func}$.
Moreover if we put $\gamma=\rh\in \fG$, then $\ag=\rha=\alpha$ and $\bg=\rhb=\beta$. In other words, $\phi(\gamma)=(\ag,\bg)=(\alpha,\beta)$, \emph{i.e.}, $\phi$ is surjective and therefore an isomorphism. \end{proof} \end{theorem}
\end{document}
\begin{document}
\title{Numerical Root Finding via Cox Rings} \author[1]{Simon Telen \thanks{\texttt{simon.telen@kuleuven.be}}} \affil[1]{Department of Computer Science, KU Leuven} \maketitle
\begin{abstract} We present a new eigenvalue method for solving a system of Laurent polynomial equations defining a zero-dimensional reduced subscheme of a toric compactification $X$ of $(\mathbb{C} \setminus \{0\})^n$. We homogenize the input equations to obtain a homogeneous ideal $I$ in the Cox ring of $X$ and generalize the eigenvalue, eigenvector theorem for root finding in affine space to compute homogeneous coordinates of the solutions. Several numerical experiments show the effectiveness of the resulting method. In particular, the method outperforms existing solvers in the case of (nearly) degenerate systems with solutions on or near the torus invariant prime divisors. \end{abstract}
\keywords{\small systems of polynomial equations, toric varieties, Cox rings, multiplication matrix}
\classification{\small 14M25, 65H04, 65H10, 65H17 }
\section{Introduction} Many problems in science and engineering can be solved by finding the solutions of a system of (Laurent) polynomial equations. Here, we consider the important case where the number of solutions to the system is finite. There exist many different approaches to tackle this problem \cite{sturmfels2,elkadi_introduction_2007,cattani2005solving}. Symbolic tools such as Groebner bases focus on systems with coefficients in $\mathbb{Q}$ or in finite fields \cite{cox1,sturmfels1996grobner}. For many applications, it is natural to work in finite precision, floating point arithmetic. This is the case, for instance, when the coefficients are known approximately (e.g.\ from measurements) or when it is sufficient to compute solutions accurately up to a certain number of significant decimal digits.
The most important classes of numerical solvers are homotopy algorithms \cite{bates2013numerically,verschelde1999algorithm,li1997numerical} and algebraic methods such as resultant based algorithms \cite{cox2,emir1,emiris_matrices_1999,noferini,mvb} and normal form algorithms \cite{mourrain1999new,mourrain_stable_2008,dreesen2012back,telen2017stabilized,telen2017solving} which rewrite the problem as an eigenvalue problem. Homotopy solvers are very successful for systems with many variables of low degree, whereas algebraic solvers can handle high degree systems in few variables. The algorithm presented in this paper is a new, numerical normal form algorithm for solving square systems of Laurent polynomial equations. The approach distinguishes itself from existing methods by the interpretation of `solving' the system: we compute the points defined by the input equations on a toric compactification $X$ of $(\mathbb{C} \setminus \{0\})^n \simeq T_X \subset X$ via an eigenvalue computation. More specifically, we work in the Cox ring of $X$ to find `homogeneous' coordinates of the solutions. The motivation is that, even though generically all solutions lie in $(\mathbb{C} \setminus \{0\})^n$, many problems encountered in applications are non-generic with respect to the Newton polytopes of the input equations. Solutions on or near $X \setminus T_X$ cause trouble for the stability of existing numerical algorithms, as we will show in our experiments, and the proposed algorithm is designed to handle such situations. The correctness of the algorithm depends on a conjecture regarding the regularity of a homogeneous ideal in the Cox ring of $X$. In the remainder of this section, we discuss some applications and give an overview of related work and of our main contributions. We conclude the section with an outline of this paper.
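Before turning to applications, it may help to recall the eigenvalue principle behind normal form methods in its most elementary form: for one polynomial in one variable, the matrix of multiplication by $x$ on $\mathbb{C}[x]/\langle f \rangle$ in the monomial basis is the companion matrix, and its eigenvalues are exactly the roots of $f$. The sketch below is ours and serves purely as intuition; the algorithm of this paper instead builds multiplication maps on graded pieces of the Cox ring.

```python
import numpy as np

def roots_via_multiplication_matrix(coeffs):
    """Roots of the monic polynomial f = x^d + c_{d-1} x^{d-1} + ... + c_0
    as eigenvalues of the multiplication-by-x map on C[x]/<f>
    in the basis 1, x, ..., x^{d-1}.  coeffs = [c_0, ..., c_{d-1}]."""
    d = len(coeffs)
    M = np.zeros((d, d), dtype=complex)
    M[1:, :-1] = np.eye(d - 1)       # x * x^k = x^{k+1} for k < d-1
    M[:, -1] = -np.asarray(coeffs)   # x * x^{d-1} = -c_0 - ... - c_{d-1} x^{d-1}
    return np.linalg.eigvals(M)
```

For $f = x^2 - 3x + 2$, i.e. `coeffs = [2, -3]`, the eigenvalues are the roots $1$ and $2$.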
\subsection*{Applications} The applications we have in mind are problems that can be formulated as polynomial systems in only a few variables.
Many problems in computer vision, such as relative pose problems, require the solution of a system of polynomial equations \cite{kukelova2013algebraic,kukelova2008automatic}. In this context, there are often several different polynomial formulations for the same problem, with a different number of variables and a different degree of the equations. See \cite[Sec.\ 7.1.3]{kukelova2013algebraic} for a description of a relative pose problem by a square 7-dimensional system (6 quadratics and a cubic in 7 unknowns) and by a square 3-dimensional system (two cubics and a quintic in 3 unknowns).
Another application comes from molecular biology. In \cite{emiris1998computer} the problem of computing all possible conformations of several molecules is written in the form of a polynomial system in only two or three variables.
A problem encountered in many fields of engineering is that of finding the critical points of a function $f$, not necessarily polynomial, in a bounded domain $\Omega \subset \mathbb{R}^n$. A possible approach is to replace $f$ by a polynomial $\tilde{f}$, computed from samples, which approximates $f$ on $\Omega$ and compute the critical points of $\tilde{f}$ instead. The problem is now reduced to a system of polynomial equations, and if $\tilde{f}$ is a good approximation of $f$ in $\Omega$, the solutions in $\Omega$ will be good approximations of the critical points of $f$. It is clear that high degrees lead to better approximations, but also to higher degree polynomial systems. See \cite{noferini} for an application of this technique to solve one of the SIAM 100-Digit Challenge problems \cite{trefethen2002100}.
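In one variable this strategy can be sketched in a few lines (our illustration; the choice of $f$, the interval, and the degree are arbitrary): sample $f$, fit a polynomial approximant $\tilde{f}$, and keep the real roots of $\tilde{f}'$ that lie in the domain.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# f is treated as a black box known only through samples.
f = np.cos
a, b = 1.0, 5.0

xs = np.linspace(a, b, 200)
p = Chebyshev.fit(xs, f(xs), 20)     # least-squares Chebyshev approximant
crit = sorted(r.real for r in p.deriv().roots()
              if abs(r.imag) < 1e-8 and a <= r.real <= b)
# the only critical point of cos on [1, 5] is pi
```

Since the approximant matches $\cos$ to near machine precision on $[1,5]$, the computed critical point agrees with $\pi$ to many digits.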
\subsection*{Related work}
As stated above, solutions on or near the torus invariant prime divisors (i.e.\ the irreducible components of $X \setminus T_X$) cause trouble for numerical root finding in non-compact solution spaces such as $\mathbb{C}^n$ or $(\mathbb{C} \setminus \{0\})^n$. In practice, for homotopy methods, such solutions are the reason for diverging paths, which often require a lot of unnecessary computational effort. Algebraic solvers such as the algorithms proposed in \cite{telen2017stabilized} and \cite[\S 3, \S 4]{telen2017solving}, as well as the classical resultant algorithms \cite[Chapters 3 and 7]{cox2} for computing multiplication matrices, require invertibility of a certain matrix: see for instance the matrix $M_{11}$ in \cite[Chapter 3, \S 6]{cox2} or the matrix $N_{|B}$ in \cite[Section 2]{telen2017solving}. In the presence of solutions on special divisors `at infinity', these matrices are singular. In a numerical context, if these solutions are not exactly \textit{on}, but \textit{near} $X \setminus T_X$, homotopy paths `diverge' to large solutions, causing scaling and condition problems, and the algebraic algorithms require the inversion of an ill-conditioned matrix, causing large rounding errors. A partial solution is to homogenize the equations and solve the problem in $X = \mathbb{P}^{n_1} \times \cdots \times \mathbb{P}^{n_k}, k \geq 1, n_1 + \ldots + n_k = n$, which should be thought of as a compactification of $\mathbb{C}^n$, such that a `solution' is defined by $n + k$ (multi-)homogeneous coordinates. This technique is used in total degree homotopies \cite{bates2013numerically,wampler2011numerical}, multihomogeneous homotopies \cite[Chapter 8]{sommese} and in normal form methods such as \cite[\S 5, \S 6]{telen2017solving} or \cite{bender2018towards}. However, depending on the support of the input equations, this standard way of homogenizing may introduce highly singular solutions on the torus invariant divisors, or even destroy 0-dimensionality. 
More general sparsity structures are taken into account by polyhedral homotopies \cite{li1999solving,hustu,verschelde1994homotopies}, toric or sparse resultants \cite{emir1,cox2,d2002macaulay,pedersen1996mixed,emiris1994monomial,massri2016solving} and truncated normal forms \cite[\S 4]{telen2017solving}. In \cite{huber1998polyhedral} a method for dealing with diverging paths in a polyhedral homotopy is proposed.
In symbolic computing, modified sparse resultant methods have been introduced for solving degenerate systems \cite{rojas1999solving,d2001computing}. Recently, specialized Gr\"obner basis methods over semigroup algebras have been developed to exploit sparsity structure \cite{bender2019gr}.
\subsection*{Contributions} To the best of the author's knowledge, Cox rings (other than the familiar ones corresponding to products of projective spaces) have not been applied to numerical root finding before. To do so may seem like a bad idea, because the dimension of the Cox ring is (possibly much) greater than that of $X$. However, because of its fine grading by the class group $\textup{Cl}(X) = \textup{Div}(X)/\sim$ of Weil divisors modulo linear equivalence, this does not affect the computational complexity that much (see Remark \ref{rem:complexity}). The input Laurent polynomial equations define a homogeneous ideal $I$ of the Cox ring $S = \bigoplus_{ \alpha \in \textup{Cl}(X) } S_\alpha$ with respect to this grading (this is detailed in Section \ref{sec:setup}). We will assume that $I$ defines a zero-dimensional reduced subscheme $V_X(I)$ of $X$ which is contained in its largest simplicial open subset $U$ (see Section \ref{sec:preliminaries}). The \textit{regularity} $\textup{Reg}(I) \subset \textup{Cl}(X)$ of this ideal is defined in Section \ref{sec:lagreg}. In the same section, we conjecture a degree $\alpha \in \textup{Cl}(X)$ that is in $\textup{Reg}(I)$ (Conjecture \ref{conj}). The correctness of the algorithm depends upon this conjecture, which is supported by some weaker results in Section \ref{sec:lagreg} and by experimental evidence in Section \ref{sec:examples}. For this degree $\alpha \in \textup{Reg}(I)$, let $(S/I)_\alpha$ be the degree $\alpha$ part of the graded $S$-module $S/I$. We will construct a linear \textit{multiplication map} ${M}_{f} : (S/I)_\alpha \rightarrow (S/I)_\alpha$ with respect to a rational function $f$ on $X$ which is regular at the roots of $I$. Here is a simplified version of Theorem \ref{thm:multiplication}. 
\begin{theorem} Let $V_X(I) = \{\zeta_1, \ldots, \zeta_\delta\} \subset U$ be reduced and let $\alpha, \alpha_0 \in \textup{Cl}(X)$ be such that $\alpha, \alpha+ \alpha_0 \in \textup{Reg}(I)$ and there exists $h_0 \in S_{\alpha_0}$ such that $\zeta_j \notin V_X(h_0), j = 1, \ldots, \delta$. Then for any $g \in S_{\alpha_0}$, the multiplication map ${M}_f : (S/I)_\alpha \rightarrow (S/I)_{\alpha}$ with $f = g/h_0$ has eigenvalues $f(\zeta_j)$. \end{theorem}
For every monomial $x^{b_i} \in S_{\alpha_0}$, we compute a multiplication matrix and denote its eigenvalues by $\lambda_{ij}, j = 1, \ldots, \delta$. This way, we reduce the problem of finding Cox coordinates of $\zeta_j$ to finding one point on the affine variety defined by the simple binomial system $\{x^{b_i} = \lambda_{ij} ~|~ x^{b_i} \in S_{\alpha_0} \}$ (Corollary \ref{cor:orbiteq}). This leads to a numerical linear algebra based algorithm for finding Cox coordinates (Algorithm \ref{alg:coxcoords}). Unlike other numerical methods, the algorithm is robust in the situation where some of the $\zeta_j$ are on or near torus invariant prime divisors. We illustrate this in Section \ref{sec:examples} with some examples.
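The approach outlined above reduces root finding to eigenvalue computations. As a toy illustration of this principle in its classical affine form (the system, basis and code below are an illustrative sketch, not the algorithm of this paper), consider $I = \ideal{x^2 - 1, \, y - x} \subset \mathbb{C}[x,y]$, which has the two roots $(1,1)$ and $(-1,-1)$:

```python
import numpy as np

# Toy affine version of the eigenvalue approach (illustrative sketch,
# not the algorithm of the paper): I = <x^2 - 1, y - x> in C[x, y]
# has the roots (1, 1) and (-1, -1), and {1, x} is a basis of R/I.
# Column j of M_g holds the coordinates of g * (j-th basis element) mod I.
Mx = np.array([[0.0, 1.0],    # x*1 = x,  x*x = x^2 = 1 mod I
               [1.0, 0.0]])
My = Mx.copy()                # y = x mod I, so M_y = M_x here

# Multiplication matrices commute, so the eigenvectors of a generic
# linear combination are common eigenvectors of all of them.
_, V = np.linalg.eig(0.7 * Mx + 0.3 * My)

# The coordinates of the roots are the eigenvalues of M_x and M_y,
# read off as the diagonal entries of V^{-1} M V.
xs = np.diag(np.linalg.solve(V, Mx @ V)).real
ys = np.diag(np.linalg.solve(V, My @ V)).real
roots = sorted((float(a), float(b)) for a, b in zip(np.round(xs), np.round(ys)))
print(roots)  # [(-1.0, -1.0), (1.0, 1.0)]
```

The algorithm of this paper follows the same eigenvalue paradigm, but replaces $R/I$ by a graded piece $(S/I)_\alpha$ of the Cox ring.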
\subsection*{Outline of the paper} The paper is organized as follows. In the next section we discuss some preliminaries on Cox rings and the classical eigenvalue, eigenvector theorem for polynomial root finding. Our problem setup is discussed in detail in Section \ref{sec:setup}. In Section \ref{sec:lagreg} we introduce homogeneous Lagrange polynomials and their relation to multigraded regularity. Our main result is discussed in detail in Section \ref{sec:toriceval}. The resulting algorithm is presented in Section \ref{sec:alg}. Finally, in Section \ref{sec:examples} we work out several numerical examples. Throughout the paper, we work with polynomials, varieties and vector spaces over $\mathbb{C}$.
\section{Preliminaries} \label{sec:preliminaries} In this section we give a brief introduction to the classical eigenvalue, eigenvector theorem and to complete toric varieties and their Cox rings. We denote by $V(I) \subset \mathbb{C}^n$ the affine variety of an ideal $I \subset \mathbb{C}[x_1, \ldots, x_n]$ and by $I(Y) \subset \mathbb{C}[x_1, \ldots, x_n]$ the vanishing ideal of a set $Y \subset \mathbb{C}^n$. If $I$ is generated by $f_1, \ldots, f_s \in \mathbb{C}[x_1, \ldots, x_n]$, we denote $I = \ideal{f_1, \ldots, f_s}$ and $V(I) = V(\ideal{f_1, \ldots, f_s}) = V(f_1, \ldots, f_s)$. For a finite dimensional vector space $W$, $W^\vee = \textup{Hom}_\mathbb{C}(W, \mathbb{C})$ denotes its dual. For a linear endomorphism $M: W \rightarrow W$ of a finite dimensional vector space $W$, a right eigenpair is $(\lambda, w) \in \mathbb{C} \times (W \setminus \{0\})$ satisfying $M(w) = \lambda w$. Analogously, a left eigenpair is given by $(v, \lambda) \in (W^\vee \setminus \{0\}) \times \mathbb{C}$ satisfying $v \circ M = \lambda v$.
\subsection{The classical eigenvalue, eigenvector theorem for polynomial root finding} \label{subsec:multclass} Let $R = \mathbb{C}[x_1, \ldots, x_n]$ be the ring of $n$-variate polynomials with coefficients in $\mathbb{C}$. Take $f_i \in R, i = 1, \ldots, s$ and let $I = \ideal{f_1, \ldots, f_s}$ be a zero-dimensional ideal in $R$. That is, $V(I) = \{z_1, \ldots, z_\delta\}$ consists of $\delta < \infty$ points in $\mathbb{C}^n$. We assume for simplicity that all of the $z_i$ have multiplicity one or, equivalently, that $I$ is radical. By \cite[Chapter 2, Lemma 2.9]{cox2} there exist polynomials $\ell_i \in R, i = 1, \ldots, \delta$ such that $$ \ell_i(z_j) = \begin{cases} 0 & i \neq j \\ 1 & i = j \end{cases}. $$ The $\ell_i$ are called \textit{Lagrange polynomials} with respect to the set $V(I)$. We define $v_j \in (R/I)^\vee$ by $v_j (f + I) = f(z_j)$. \begin{lemma} \label{lem:aff} The map $ \psi : R/I \rightarrow \mathbb{C}^\delta : f + I \mapsto (v_1(f+I), \ldots, v_\delta(f+I))$ is an isomorphism of vector spaces. \end{lemma} \begin{proof} The map $\psi$ is clearly linear and injective. Surjectivity follows from $\psi(\ell_j + I) = e_j$ with $e_j$ the $j$-th standard basis vector of $\mathbb{C}^\delta$. \end{proof} It follows from Lemma \ref{lem:aff} that, under our assumptions, $\dim_\mathbb{C}(R/I) = \delta$. This is well known, see for instance \cite[Chapter 5, \S3, Proposition 7]{cox1}. In particular, the map $\psi$ defines coordinates on $R/I$ and the residue classes of the Lagrange polynomials form a basis of $R/I$ with dual basis $v_j, j = 1, \ldots, \delta$. For $g \in R$, define the linear map ${M}_g: R/I \rightarrow R/I: f+I \mapsto fg +I$. \begin{theorem}[Eigenvalue, eigenvector theorem] \label{thm:EVaff} The left and right eigenpairs of ${M}_g$ are $$ (v_j, g(z_j)), \qquad (g(z_j), \ell_j + I), \qquad j= 1, \ldots, \delta.$$ \end{theorem} \begin{proof} See for instance \cite[Chapter 2, Proposition 4.7]{cox2}. 
\end{proof} Note that by definition, ${M}_{g_1} \circ {M}_{g_2} = {M}_{g_2} \circ {M}_{g_1}$ for any $g_1, g_2 \in R$. Therefore, after fixing a basis for $R/I$, the matrices corresponding to any two multiplication maps commute and have common eigenspaces. Theorem \ref{thm:EVaff} provides the following algorithm for finding the points in $V(I)$: \begin{enumerate} \item compute the matrices ${M}_{x_1}, \ldots, {M}_{x_n}$, \item find the coordinates of the $z_i$ from their simultaneous eigenvalue decomposition. \end{enumerate} For a more detailed exposition on multiplication matrices, we refer the reader to \cite[Chapter 2]{cox2}, \cite[Chapter 4]{elkadi_introduction_2007} and \cite[Chapter 2]{sturmfels2}. \subsection{Complete toric varieties and Cox rings} \label{subsec:torvar} We will restrict ourselves to the discussion of only those aspects of toric varieties that are directly related to this paper. The reader who is unfamiliar with unexplained basic concepts can find an excellent introduction in \cite{cox2011toric} or \cite{fulton1993introduction}. For more information on Cox rings we refer to \cite[Chapter 5]{cox2011toric} and the original paper by Cox \cite{cox1995homogeneous}. The $n$-dimensional algebraic torus $(\mathbb{C}^*)^n = (\mathbb{C} \setminus \{0\})^n$ has character lattice $M = \textup{Hom}_\mathbb{Z}((\mathbb{C}^*)^n, \mathbb{C}^*) \simeq \mathbb{Z}^n$ and cocharacter lattice $N = \textup{Hom}_\mathbb{Z}(M,\mathbb{Z}) \simeq \mathbb{Z}^n$. An element $m \in M$ gives $\chi^m : (\mathbb{C}^*)^n \rightarrow \mathbb{C}^* $ such that if $m$ corresponds to $(m_1, \ldots, m_n) \in \mathbb{Z}^n$, $\chi^m(t) = t^m = t_1^{m_1} \cdots t_n^{m_n}$. 
Hence characters can be thought of as Laurent monomials and $$\mathbb{C}[M] = \bigoplus_{m \in M} \mathbb{C} \cdot \chi^m \simeq \mathbb{C}[t_1^{\pm 1}, \ldots, t_n^{\pm 1}].$$ Following \cite{cox2011toric}, we denote $N_\mathbb{R} = N \otimes_\mathbb{Z} \mathbb{R} \simeq \mathbb{R}^n$ and $T_N= N \otimes_\mathbb{Z} \mathbb{C}^* = (\mathbb{C}^*)^n $. A complete, normal toric variety $X$ with torus $T_N$ is given by a complete fan $\Sigma$ in $N_\mathbb{R}$ and we will sometimes emphasize this correspondence by writing $X = X_\Sigma$. The set of $d$-dimensional cones of $\Sigma$ is denoted $\Sigma(d)$. In particular, we write $\Sigma(1) = \{\rho_1, \ldots, \rho_k \}$ for the rays of $\Sigma$ and $u_i \in N$ for the primitive generator of $\rho_i$. It is convenient to think of the $u_i$ as column vectors and to define the matrix $F = [u_1 ~ u_2 ~ \cdots ~ u_k ] \in \mathbb{Z}^{n \times k}$. We will use $F_{ij}$ for the entry in row $i$, column $j$ of $F$, $F_{i,:}$ for the $i$-th row of $F$, $F_{:,j} = u_j$ for the $j$-th column of $F$ and $F^\top$ for the transpose. Every ray $\rho_i$ corresponds to a torus invariant prime divisor $D_i$ on $X_\Sigma$ and we have $X_\Sigma \setminus (\bigcup_{i=1}^k D_i) = T_{X_\Sigma} \simeq T_N$. The class group $\textup{Cl}(X_\Sigma)$ of $X_\Sigma$, which is the group of Weil divisors modulo linear equivalence, is generated by the classes $[D_i]$ of the torus invariant prime divisors. The Picard group $\textup{Pic}(X_\Sigma) \subset \textup{Cl}(X_\Sigma)$ consists of the classes of Weil divisors that are locally principal. 
Identifying $\bigoplus_{i=1}^k \mathbb{Z} \cdot D_i \simeq \mathbb{Z}^k$ we have a short exact sequence $$ 0 \longrightarrow M \overset{F^\top}{\longrightarrow} \mathbb{Z}^k \longrightarrow \textup{Cl}(X_\Sigma) \longrightarrow 0$$ where $\mathbb{Z}^k \longrightarrow \textup{Cl}(X_\Sigma)$ sends a torus invariant Weil divisor $\sum_{i=1}^k a_i D_i$ to its class $ [ \sum_{i=1}^k a_i D_i] \in \textup{Cl}(X_\Sigma)$. Taking $\textup{Hom}_\mathbb{Z}(-, \mathbb{C}^*)$ and defining the \textit{reductive group} $G = \textup{Hom}_\mathbb{Z}(\textup{Cl}(X_\Sigma), \mathbb{C}^*)$ we find that $G$ is the kernel of the map \begin{equation} \label{eq:GCQ}
\pi: (\mathbb{C}^*)^k \rightarrow T_N : t \mapsto (t^{F_{1,:}}, \ldots, t^{F_{n,:}}).
\end{equation} That is, $G$ is the subgroup of $(\mathbb{C}^*)^k$ given by
$$ G = \{ g \in (\mathbb{C}^*)^k : g^{F_{i,:}} = 1, i = 1, \ldots, n \}$$ and $\pi$ is constant on $G$-orbits.
Let $S = \mathbb{C}[x_1, \ldots, x_k]$ be the polynomial ring in $k$ variables where each of the $x_i$ corresponds to a ray $\rho_i \in \Sigma(1)$. For every cone $\sigma \in \Sigma$, denote by $\sigma(1)$ the rays contained in $\sigma$. We are going to associate a monomial in $S$ to each cone in $\Sigma$: for $\sigma \in \Sigma$, define $ x^{\hat{\sigma}} = \prod_{\rho_i \notin \sigma(1)} x_i.$ The \textit{irrelevant ideal} $K$ of $\Sigma$ (or of $X_\Sigma$) is the monomial ideal defined as \begin{equation} \label{eq:irrelideal} K = \ideal{ x^{\hat{\sigma}} : \sigma \in \Sigma(n)} \subset S. \end{equation} The \textit{exceptional set} of $X_\Sigma$ is $Z = V(K) \subset \mathbb{C}^k$. The action of $G$ on $(\mathbb{C}^*)^k$ extends to an action on $\mathbb{C}^k \setminus Z$. In \cite{cox1995homogeneous}, Cox proves that there is a good categorical quotient $ \pi : \mathbb{C}^k \setminus Z \rightarrow X_\Sigma $, constant on $G$-orbits, such that \eqref{eq:GCQ} is its restriction to $(\mathbb{C}^*)^k$. By the properties of good categorical quotients we have a bijection \begin{equation*} \label{eq:bijection} \{ \textup{ closed $G$-orbits in $\mathbb{C}^k \backslash Z$ } \} \leftrightarrow \{ \textup{ points in $X_\Sigma$ } \}. \end{equation*}
Moreover, $\pi$ is an almost geometric quotient, meaning that there is a Zariski open subset $U \subset X_\Sigma$ such that $\pi_{|\pi^{-1}(U)}: \pi^{-1}(U) \rightarrow U$ is a geometric quotient: $$ \{ \textup{ $G$-orbits in $\pi^{-1}(U)$ } \} \leftrightarrow \{ \textup{ points in $U$ } \}.$$ The open set $U$ is the toric variety $X_{\Sigma'} \subset X_\Sigma$ corresponding to the subfan $\Sigma' \subset \Sigma$ of simplicial cones of $\Sigma$ (see \cite[proof of Theorem 5.1.11]{cox2011toric}). Therefore, by the orbit-cone correspondence, $X \setminus U$ is a union of $T_N$-orbits of codimension at least 3 (cones of dimension 0, 1 or 2 are simplicial). If $\Sigma$ is simplicial, the nicest possible bijection holds: $$ \{ \textup{ $G$-orbits in $\mathbb{C}^k \backslash Z$ } \} \leftrightarrow \{ \textup{ points in $X_\Sigma$ } \}.$$ In this case we write $X_\Sigma = (\mathbb{C}^k \setminus Z)/G$. \begin{example} The quotient construction of $X_\Sigma$ is a generalization of the familiar construction of $\mathbb{P}^n$ as the quotient $\mathbb{P}^n = (\mathbb{C}^{n+1} \setminus \{0\})/ \mathbb{C}^*$. In this case $S = \mathbb{C}[x_0, \ldots, x_n]$, $K = \ideal{x_0, \ldots, x_n}$, $Z = \{0\}$ and $G = \textup{Hom}_\mathbb{Z}(\textup{Cl}(\mathbb{P}^n), \mathbb{C}^*) = \textup{Hom}_\mathbb{Z}(\mathbb{Z}, \mathbb{C}^*) = \mathbb{C}^*$ acts by $g \cdot (x_0, \ldots, x_n) = (g x_0, \ldots, g x_n), g \in G$. \end{example} The ring $S$ has a natural grading by $\textup{Cl}(X_\Sigma)$: \begin{equation} \label{eq:grading} \deg(x^a) = \deg(x_1^{a_1} \cdots x_k^{a_k}) = [ \sum_{i=1}^k a_i D_i ] \in \textup{Cl}(X_\Sigma), \qquad S = \bigoplus_{\alpha \in \textup{Cl}(X_\Sigma)} S_\alpha, \end{equation} where $S_\alpha = \bigoplus_{\deg(x^a) = \alpha} \mathbb{C} \cdot x^a$. In fact, the only nonzero graded pieces correspond to `positive' degrees, and one can write
$$\textup{Cl}(X_\Sigma)_+ = \{ \alpha \in \textup{Cl}(X_\Sigma) ~|~ \alpha = n_1 \deg(x_1) + \cdots + n_k \deg(x_k), n_i \in \mathbb{N} \}, \quad S = \bigoplus_{\alpha \in \textup{Cl}(X_\Sigma)_+} S_\alpha.$$ Similarly, we denote $\textup{Pic}(X_\Sigma)_+ = \textup{Cl}(X_\Sigma)_+ \cap \textup{Pic}(X_\Sigma)$. The graded pieces correspond to vector spaces of global sections of divisorial sheaves, that is, for $\alpha \in \textup{Cl}(X_\Sigma)$ with $\alpha = [ D], D = \sum_{i=1}^k a_i D_i $, \begin{equation} \label{eq:gradedpiece} S_\alpha \simeq \Gamma( X_\Sigma, \mathscr{O}_{X_\Sigma}(D)) \simeq \bigoplus_{ F^\top m + a \geq 0} \mathbb{C} \cdot \chi^m. \end{equation} Here the direct sum ranges over all $m$ such that elementwise, $F^\top m + a \geq 0$, that is, $\pair{u_i,m} + a_i \geq 0, i = 1, \ldots, k$ where $\pair{\cdot, \cdot}$ is the natural pairing between $N$ and $M$. Denoting $x^{F^\top m + a} = x_1^{\pair{u_1,m} + a_1} \cdots x_k^{\pair{u_k,m} + a_k}$, the isomorphism \eqref{eq:gradedpiece} is given by \begin{equation} \label{eq:hom} \sum_{F^\top m + a \geq 0} c_m \chi^m \mapsto \sum_{F^\top m + a \geq 0} c_m x^{F^\top m + a} \in S_{\alpha}, \end{equation} which is \textit{homogenization} with respect to $\alpha$. To see the analogy with the classical notion of homogenization, note that the action of $G$ on $\mathbb{C}^k$ induces an action of $G$ on $S$ by $(g \cdot f )(x) = f(g^{-1} \cdot x)$ for $g \in G, f \in S$. If $f \in S_\alpha$, it is the image of some Laurent polynomial under \eqref{eq:hom} and we can write \begin{equation} \label{eq:eigenspaces} (g \cdot f)(x) = \sum_{F^\top m + a \geq 0} c_m (g^{-1} \cdot x)^{F^\top m + a} = g^{-a} f(x) \end{equation} since by the definition of the reductive group $g^{F^\top m}= 1$. This shows that the number $g^{-a}$ does not depend on the representative divisor $D$ we choose for $\alpha \in \textup{Cl}(X_\Sigma)$. It therefore makes sense to write $g^{-\alpha} = g^{-(F^\top m +a)}$. 
Equation \eqref{eq:eigenspaces} shows that the homogeneous components $S_\alpha \subset S$ with respect to the grading \eqref{eq:grading} are the eigenspaces of the action of $G$ on $S$ and that \begin{equation} \label{eq:zeroset} V_{X_\Sigma}(f) = \{ p \in X_\Sigma : f(x) = 0 \textup{ for some } x \in \pi^{-1}(p)\} \subset X_\Sigma \end{equation} is well defined if $f$ is homogeneous. An ideal $I \subset S$ is called homogeneous if it is generated by homogeneous polynomials, and it is straightforward to extend \eqref{eq:zeroset} to define $V_{X_\Sigma}(I)$. The ring $S$ equipped with the grading \eqref{eq:grading} and the irrelevant ideal \eqref{eq:irrelideal} is called the \textit{total coordinate ring}, \textit{homogeneous coordinate ring} or \textit{Cox ring} of $X_\Sigma$. \begin{example} The complete fans $\Sigma$ we will encounter in this paper are normal fans of full dimensional lattice polytopes \cite[\S 2.3]{cox2011toric}. If
$$ P = \{ m \in M_\mathbb{R} ~|~ \pair{u_i,m} \geq -a_i, i = 1, \ldots, k \}$$ is the minimal facet representation of a full dimensional lattice polytope $P \subset M_\mathbb{R}$, then its normal fan $\Sigma_P$ defines a toric variety $X_{\Sigma_P}$, which we will often denote by $X$ for simplicity of notation. There are bijective correspondences between rays in $\Sigma_P$, facets of $P$, torus invariant prime divisors in $X$ and indeterminates in the Cox ring. The matrix $F$ contains the primitive inward pointing facet normals of $P$. For example, the toric variety of the standard $n$-simplex is $\mathbb{P}^n$. \end{example}
\begin{example} \label{ex:hirz1} As a running example, we will consider the problem of finding the intersections of two curves on the Hirzebruch surface $\mathscr{H}_2$. The associated fan $\Sigma$ and the matrix $F$ of ray generators are shown in Figure \ref{fig:hirzebruch1}. The Cox ring $S = \mathbb{C}[x_1, x_2,x_3,x_4]$ is graded by $ \textup{Cl}(\mathscr{H}_2)\simeq \mathbb{Z}^4/\textup{im} F^\top \simeq \mathbb{Z}^2$, with $\deg(x^b) = \deg(x_1^{b_1}x_2^{b_2}x_3^{b_3}x_4^{b_4}) = (b_1 - 2b_2+b_3, b_2 + b_4)$. The reductive group and exceptional set are given by
$G = \{(\lambda, \mu, \lambda, \lambda^2 \mu) ~|~ (\lambda,\mu)\in (\mathbb{C}^*)^2 \} \subset (\mathbb{C}^*)^4$ and $Z = V(x_1,x_3) \cup V(x_2,x_4) \subset \mathbb{C}^4$ respectively. Since $\mathscr{H}_2$ is smooth, it is simplicial (in the notation from above $U = \mathscr{H}_2$) and $\textup{Pic}(\mathscr{H}_2) = \textup{Cl}(\mathscr{H}_2)$. \begin{figure}
\caption{Fan and matrix of primitive ray generators of the Hirzebruch surface $\mathscr{H}_2$.}
\label{fig:hirzebruch1}
\end{figure}
\end{example}
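The description of $G$ and of the grading in Example \ref{ex:hirz1} can be checked numerically. The following sketch (illustrative only, not part of the text) verifies the monomial relations defining $G$ and that the degree map vanishes on $\textup{im}\, F^\top$:

```python
import numpy as np

# Numerical sanity check of the Hirzebruch example (illustrative only).
# The columns of F are the primitive ray generators u1, ..., u4 of the
# fan of the Hirzebruch surface H_2.
F = np.array([[1, 0, -1,  0],
              [0, 1,  2, -1]])

# A claimed element of G, for arbitrary nonzero lambda, mu.
lam, mu = 1.3, 0.7
g = np.array([lam, mu, lam, lam**2 * mu])

# g lies in G iff g^{F_{i,:}} = 1 for every row i of F.
relations = [np.prod(g ** F[i, :]) for i in range(2)]
print(np.allclose(relations, 1.0))  # True

# The grading deg(x^b) = (b1 - 2 b2 + b3, b2 + b4), encoded as a matrix A,
# vanishes on im F^T, so it descends to Cl(H_2) = Z^4 / im F^T = Z^2.
A = np.array([[1, -2, 1, 0],
              [0,  1, 0, 1]])
print((A @ F.T == 0).all())  # True
```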
\section{Problem setup} \label{sec:setup} In this section, we give a detailed description of the problem considered in this paper and we discuss our assumptions. We start from $n$ given Laurent polynomials $\hat{f}_1, \ldots, \hat{f}_n \in \mathbb{C}[M]$ (that is, we consider \textit{square} systems). Denote $$ \hat{f}_j = \sum_{m \in M} c_{m,j} \chi^m,$$ where only finitely many coefficients $c_{m,j}$ are nonzero,
and let $P_j \subset M_\mathbb{R}$ be the Newton polytope of $\hat{f}_j$: $P_j = \textup{Conv}(m \in M ~|~ c_{m,j} \neq 0) \subset M_\mathbb{R}$. Let $P = P_1 + \ldots + P_n$ be the Minkowski sum of these polytopes. We assume that $P$ is full-dimensional and we let $X = X_{\Sigma_P}$ be the complete normal toric variety corresponding to its normal fan. To each $P_j$, we associate a basepoint free\footnote{For $\alpha = [D] \in \textup{Pic}(X)$, we say that $p \in X$ is a \textit{basepoint} of $S_\alpha \simeq \Gamma(X,\mathscr{O}_X(D))$ if every global section of the associated line bundle $\mathscr{O}_X(D)$ vanishes at $p$. The divisor $D$ and its associated degree $\alpha \in \textup{Pic}(X)$ are called \textit{basepoint free} if $S_\alpha$ has no basepoints.} Cartier divisor $D_{P_j}$ on $X$, given by $$ D_{P_j} = \sum_{i=1}^k a_{j,i} D_i, \qquad a_{j,i} = - \min_{m \in P_j} \pair{u_i,m}$$ and we denote $a_j = (a_{j,1}, \ldots, a_{j,k}) \in \mathbb{Z}^k, [D_{P_j}] = \alpha_j \in \textup{Pic}(X)$. For this construction, $D_{P_i+ P_j} = D_{P_i} + D_{P_j}$ and for $\mathcal{J} \subset \{1,\ldots, n\}$, $P_\mathcal{J} = \sum_{j \in \mathcal{J}} P_j$ we have \begin{equation} \label{eq:sumofdiv} \Gamma(X, \mathscr{O}_X( \sum_{j \in \mathcal{J}} D_{P_j})) =\Gamma(X, \mathscr{O}_X( D_{P_\mathcal{J}})) = \bigoplus_{m \in P_\mathcal{J} \cap M } \mathbb{C} \cdot \chi^m. \end{equation}
By definition, $m \in P_j \cap M$ if and only if $F^\top m + a_j \geq 0$, so we have \begin{equation} \label{eq:linebundle} \hat{f}_j = \sum_{m \in P_j \cap M} c_{m,j} \chi^m \in \Gamma(X, \mathscr{O}_X(D_{P_j})). \end{equation} Homogenizing with respect to $\alpha_j$ according to \eqref{eq:hom} gives (see \cite{cattani1997global}) $$\hat{f}_j \mapsto f_j = \sum_{m \in P_j \cap M} c_{m,j} x^{F^\top m + a_j} \in S_{\alpha_j}.$$ Equation \eqref{eq:linebundle} shows that $\hat{f}_j$ is a global section of the line bundle given by $\mathscr{O}_X(D_{P_j})$ \cite[Chapter 6]{cox2011toric}. Its divisor of zeroes is the effective divisor $\textup{div}(\hat{f}_j) + D_{P_j}$, whose support is exactly $V_X(f_j)$. This construction gives a homogeneous ideal $I = \ideal{f_1, \ldots, f_n} \subset S$. We will make the following assumptions on $I$. \begin{assumption} \label{ass:1} $V_X(I)$ is zero-dimensional. We denote $V_X(I) = \{\zeta_1, \ldots, \zeta_\delta \} \subset X$. \end{assumption} \begin{assumption} \label{ass:2} $V_X(I) \subset U \subset X$, where $U$ is the `simplicial part' of $X$ as in Subsection \ref{subsec:torvar}. \end{assumption} \begin{assumption} \label{ass:3} $I$ defines a reduced subscheme of $U \subset X$. That is, all points $\zeta_i$ are `simple roots' of $I$. \end{assumption}
It is clear that when $n = 2$, Assumption \ref{ass:2} can be dropped: every cone of a fan in $N_\mathbb{R} \simeq \mathbb{R}^2$ is simplicial, so that $U = X$. For $n = 3$, $U$ is the complement of finitely many points in $X$: one point for each vertex of $P$ corresponding to a non-simplicial, full dimensional cone of $\Sigma_P$. It follows that we can drop Assumption \ref{ass:2} also for $n = 3$, since `face systems' corresponding to vertices do not contribute any solutions (see for instance the appendix in \cite{hustu}).
For $n>3$, Assumption \ref{ass:2} can be dropped if $X$ is simplicial. We will comment on Assumption \ref{ass:3} in Section \ref{sec:lagreg} (Remark \ref{rem:multiplicities}).
In order to say something more about the number $\delta$ in Assumption \ref{ass:1}, we recall the definition of mixed volume. The $n$-dimensional \textup{mixed volume} of a collection of $n$ polytopes $P_1,\ldots,P_n$ in $M_\mathbb{R} \simeq \mathbb{R}^n$, denoted $\textup{MV}(P_1,\ldots,P_n)$, is the coefficient of the monomial $\lambda_1 \lambda_2 \cdots \lambda_n$ in $\textup{Vol}_n(\sum_{i = 1}^n \lambda_i P_i)$. A formula for the mixed volume that will be useful is (see \cite{bihan2016irrational,csahin2016multigraded}) \begin{equation} \label{eq:mvpoints}
\textup{MV}(P_1, \ldots,P_n) = \sum_{\ell = 0}^{n} (-1)^{n-\ell} \sum_{ \substack{\mathcal{J} \subset \{1,\ldots,n\} \\ |\mathcal{J}| = \ell}} \left|(P_0+P_\mathcal{J}) \cap M \right |, \end{equation} for any lattice polytope $P_0 \subset \mathbb{R}^n$ corresponding to a basepoint free divisor $D_{P_0}$. The following important theorem was named after Bernstein, Khovanskii and Kushnirenko and tells us what the number $\delta$ is. \begin{theorem}[BKK Theorem] \label{thm:bkk} Let $I = \ideal{ f_1,\ldots,f_n } \subset S$ be a homogeneous ideal constructed as above. If $I$ defines $\delta < \infty$ points on $X$, counting multiplicities, then $\delta$ is given by $\textup{MV}(P_1,\ldots,P_n)$. For generic choices of the coefficients of the $f_i$, the number of roots in $T_X \simeq T_N = (\mathbb{C}^*)^n$ is exactly equal to $\textup{MV}(P_1,\ldots,P_n)$ and they all have multiplicity one. \end{theorem} \begin{proof} See \cite[\S 5.5]{fulton1993introduction}. For sketches of the proof we refer to \cite{cox2,sturm}. Other proofs can be found in Bernstein's original paper \cite{bernstein} and in \cite{hustu}. \end{proof} Theorem \ref{thm:bkk} is a generalization of B\'ezout's theorem for projective space. Motivated by this result, we set $\delta = \textup{MV}(P_1, \ldots, P_n)$ for the rest of this article. We can represent each $\zeta_j \in V_X(I)$ by a set of homogeneous coordinates $z_j = (z_{j1}, \ldots, z_{jk}) \in \mathbb{C}^k \setminus Z$. Let $\pi^{-1}(\zeta_j) = G \cdot z_j \subset \mathbb{C}^k \setminus Z$ be the corresponding $(k - n)$-dimensional closed $G$-orbit and let $\overline{G \cdot z_j}$ be the closure in $\mathbb{C}^k$. It follows from our assumptions that $$ V(I) \setminus Z = G \cdot z_1 \cup \cdots \cup G \cdot z_\delta \qquad \textup{and} \qquad V(I) = \overline{G \cdot z_1} \cup \cdots \cup \overline{G \cdot z_\delta} \cup Z',$$ with $Z' \subset Z$ a closed subvariety. 
We define $J = I(\overline{G \cdot z_1} \cup \cdots \cup \overline{G \cdot z_\delta} )$ to be the ideal of the union of orbit closures, which is radical and saturated with respect to the irrelevant ideal $K$. The ideal $J$ is the one investigated in \cite{csahin2016multigraded} (in the simplicial case). It is clear that $I \subset J$. In some special cases where $Z$ is very small, the ideals $I$ and $J$ coincide. This happens for instance for $X = \mathbb{P}^n$ or for any weighted projective space $X = \mathbb{P}(w_0, \ldots, w_n)$.
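For $n = 2$, the mixed volume can also be computed as $\textup{MV}(P_1,P_2) = \textup{Vol}_2(P_1+P_2) - \textup{Vol}_2(P_1) - \textup{Vol}_2(P_2)$, since $\textup{Vol}_2(\lambda_1 P_1 + \lambda_2 P_2)$ is a quadratic polynomial in $\lambda_1, \lambda_2$. The following dependency-free sketch (illustrative only, not part of the algorithm) checks in this way that $\delta = 3$ for the supports of Example \ref{ex:hirz2} below:

```python
# Planar mixed volume via MV(P1, P2) = Area(P1 + P2) - Area(P1) - Area(P2),
# checked on the supports of Example ex:hirz2 (illustrative sketch only).

def hull(points):
    """Convex hull (Andrew's monotone chain), vertices in ccw order."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def chain(pts):
        h = []
        for p in pts:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def area(points):
    """Area of the convex hull of a point set (shoelace formula)."""
    v = hull(points)
    s = sum(v[i][0] * v[(i + 1) % len(v)][1] - v[(i + 1) % len(v)][0] * v[i][1]
            for i in range(len(v)))
    return abs(s) / 2

P1 = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1), (3, 1)]  # support of f1-hat
P2 = [(0, 0), (0, 1), (1, 1), (2, 1)]                  # support of f2-hat
minkowski = [(p[0] + q[0], p[1] + q[1]) for p in P1 for q in P2]
mv = area(minkowski) - area(P1) - area(P2)
print(mv)  # 3.0
```

Here the Minkowski sum of two lattice polygons is computed as the convex hull of all pairwise vertex sums.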
\begin{example} \label{ex:hirz2} Let us consider the polynomials \begin{eqnarray*} \hat{f}_1 &=& 1 + t_1 +t_2 + t_1t_2 + t_1^2t_2 + t_1^3t_2 , \\ \hat{f}_2 &=& 1 + t_2 + t_1t_2 + t_1^2t_2. \end{eqnarray*} We think of $\hat{f}_1,\hat{f}_2$ as elements of $\mathbb{C}[t_1^{\pm 1}, t_2^{\pm 1}] \simeq \mathbb{C}[M]$ with $M = \mathbb{Z}^2$ the character lattice of $T_N = (\mathbb{C}^*)^2$. The polytopes $P_1, P_2$ and $P$ are shown in Figure \ref{fig:hirzebruch2}. Note that the normal fan $\Sigma_P$ of $P$ is the fan of Figure \ref{fig:hirzebruch1}, so the toric variety associated to this system is $X = X_{\Sigma_P} = \mathscr{H}_2$. \begin{figure}
\caption{Newton polytopes involved in Example \ref{ex:hirz2}.}
\label{fig:hirzebruch2}
\end{figure} We identify $\textup{Cl}(X)$ with $\mathbb{Z}^2$ as in Example \ref{ex:hirz1}. It is easy to check that $\alpha_1 = [D_{P_1}] = [D_3+D_4] = (1,1) \in \textup{Cl}(X)$ and $\alpha_2 = [D_{P_2}] = [D_4] = (0,1) \in \textup{Cl}(X)$. This gives the following homogeneous polynomials in the Cox ring $S = \mathbb{C}[x_1, \ldots, x_4]$: \begin{eqnarray*} f_1 &=& x_3x_4 + x_1x_4 + x_2x_3^3 + x_1x_2x_3^2 + x_1^2x_2x_3 + x_1^3x_2, \\ f_2 &=& x_4 + x_2x_3^2 + x_1x_2x_3 + x_1^2x_2. \end{eqnarray*} The mixed volume is $\delta = \textup{MV}(P_1,P_2) = 3$. To see that the ideal $I = \ideal{f_1,f_2}$ satisfies our assumptions, we compute its primary decomposition\footnote{We used Macaulay2 to perform the symbolic computations in this example \cite{eisenbud2001computations}.}. \begin{eqnarray*} I &=& \ideal{x_1+x_3, x_2x_3^2+x_4}\cap \ideal{x_1,x_2x_3^2+x_4} \cap \ideal{x_3, x_1^2x_2+x_4} \cap \ideal{x_2,x_4} \end{eqnarray*} which gives the decomposition of the associated variety $V(I) = \overline{G \cdot z_1} \cup \overline{G \cdot z_2} \cup \overline{G \cdot z_3} \cup Z'$ with orbit representatives $ z_1 = (-1,-1,1,1), z_2 = (0,-1,1,1), z_3 = (1,-1,0,1)$ and $Z' = V(x_2,x_4) \subset Z$. This shows that $I$ defines the expected number of simple, isolated points on $X = \mathscr{H}_2$. The first solution $\zeta_1 = \pi(z_1) \in T_N$ lies in the torus, the others satisfy $\zeta_2 = \pi(z_2) \in D_1$, $\zeta_3 = \pi(z_3) \in D_3$. The ideal $J$ in this example is the intersection of the first three primary components of $I$. We find $J = \ideal{x_1^2x_3+x_1x_3^2, f_2}$. \end{example}
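The homogenization step of Example \ref{ex:hirz2} is mechanical: each exponent $m$ in the support of $\hat{f}_2$ is mapped to the exponent vector $F^\top m + a_2$ as in \eqref{eq:hom}. A short sketch (illustrative only; the variable names are ours) recovering the monomials of $f_2$:

```python
import numpy as np

# Homogenization chi^m -> x^(F^T m + a) as in eq. (hom), reproducing f_2
# of Example ex:hirz2 (illustrative sketch; variable names are ours).
F = np.array([[1, 0, -1,  0],   # columns: ray generators of the fan of H_2
              [0, 1,  2, -1]])
a2 = np.array([0, 0, 0, 1])     # D_{P_2} = D_4

support = [(0, 0), (0, 1), (1, 1), (2, 1)]   # exponents m of f2-hat

def monomial(e):
    """Pretty-print the monomial x^e."""
    return "".join(f"x{i+1}" + (f"^{k}" if k > 1 else "")
                   for i, k in enumerate(e) if k > 0)

terms = []
for m in support:
    e = F.T @ np.array(m) + a2   # exponent vector F^T m + a_2
    assert (e >= 0).all()        # m lies in P_2, so all entries are >= 0
    terms.append(monomial(e))
print(" + ".join(terms))  # x4 + x2x3^2 + x1x2x3 + x1^2x2
```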
\section{Multigraded regularity and homogeneous Lagrange polynomials} \label{sec:lagreg} The regularity of a graded module measures its complexity (for instance, in terms of the degrees of its minimal generators). The notion of regularity has been studied in a multigraded context. The general situation is treated in \cite{maclagan2003multigraded}. The zero-dimensional case is further investigated in \cite{csahin2016multigraded} and some more results in a multiprojective setting can be found in \cite{bender2018towards,sidman2006multigraded}. In our case, the regularity (as defined below) of the ideal $I$ in Section \ref{sec:setup} will determine in which graded piece $S_\alpha$ of the Cox ring $S$ we can work to define our multiplication maps in Section \ref{sec:toriceval}. The `larger' this graded piece (i.e.\ the larger the dimension of $S_\alpha$ as a $\mathbb{C}$-vector space), the larger the matrices involved in the algorithm presented in Section \ref{sec:alg}. We will define homogeneous Lagrange polynomials and show how they are related to multigraded regularity. As in Subsection \ref{subsec:multclass}, these Lagrange polynomials and their dual basis will have a nice interpretation as eigenvectors of multiplication maps. For $\alpha \in \textup{Cl}(X)$, we denote $n_\alpha = \dim_\mathbb{C}(S_\alpha)$. Since $X$ is complete, $n_{\alpha} < \infty, \forall \alpha \in \textup{Cl}(X)$ \cite[Proposition 4.3.8]{cox2011toric}. The ideals $I, J \subset S$ are as defined in Section \ref{sec:setup}. In particular, $I$ satisfies Assumptions \ref{ass:1}-\ref{ass:3}. 
For $\alpha \in \textup{Cl}(X)$, let $S_\alpha = \bigoplus_{i=1}^{n_\alpha} \mathbb{C} \cdot x^{b_i}, b_i \in \mathbb{N}^k$ and consider the map $$\Phi_\alpha : \mathbb{C}^k \setminus Z \dashrightarrow \mathbb{P}^{n_\alpha - 1} \simeq \mathbb{P}(S_\alpha^\vee) \simeq \mathbb{P}(\Gamma(X,\mathscr{O}_X(D))^\vee):(x_1, \ldots, x_k) \mapsto (x^{b_1}, \ldots, x^{b_{n_\alpha}} ).$$ Note that $\Phi_\alpha$ may have basepoints (hence the dashed arrow) and that it is constant on $G$-orbits. We will say that $\alpha \in \textup{Cl}(X)$ is basepoint free if $\Phi_\alpha$ has no basepoints (this extends the definition for basepoint free $\alpha \in \textup{Pic}(X)$ to the class group). We say that $\zeta \in U \subset X$ is a basepoint of $S_\alpha$ if the points of $\pi^{-1}(\zeta)$ are basepoints of $\Phi_\alpha$. The following lemma is straightforward and we omit the proof.
\begin{lemma} \label{lem:nonzero} Let $\alpha = [D] \in \textup{Cl}(X)$ be such that no $\zeta_j$ is a basepoint of $S_\alpha$. For generic $h \in S_\alpha$, we have $\zeta_j \notin V_X(h), j = 1, \ldots, \delta$. \end{lemma}
Note that in particular, the condition of Lemma \ref{lem:nonzero} is always satisfied for basepoint free $\alpha$. The grading on $S$ defines a grading on the quotient $S/I$: $(S/I)_\alpha = S_\alpha / I_ \alpha$. It follows from Lemma \ref{lem:nonzero} that for any $\alpha = [D] \in \textup{Cl}(X)$ such that no $\zeta_j$ is a basepoint of $S_\alpha$, the following $\mathbb{C}$-linear map is well defined for generic $h \in S_\alpha$: \begin{equation} \label{eq:evalmap} \psi_\alpha : (S/I)_\alpha \rightarrow \mathbb{C}^\delta : f + I_\alpha \mapsto \left ( \frac{f}{h}(z_1), \ldots, \frac{f}{h}(z_\delta) \right ). \end{equation}
We fix such a generic $h \in S_\alpha$. Note that the definition of $\psi_\alpha$ does not depend on the choice of representative $z_j$ of $G \cdot z_j$.
We will now investigate for which $\alpha \in \textup{Cl}(X)$ the map $\psi_\alpha$ defines coordinates on $(S/I)_\alpha$, that is, for which $\alpha$ it is an isomorphism (note that this is independent of the choice of $h$ satisfying $\zeta_j \notin V_X(h)$). It is clear that for this to happen, we need $\dim_\mathbb{C}((S/I)_\alpha) = \delta$. The dimension of the graded parts of $S/I$ is given by the multigraded analog of the Hilbert function \cite{csahin2016multigraded}. \begin{definition}[Hilbert function] \label{def:hilbfun} For a homogeneous ideal $I$ in the Cox ring $S$ of $X$, the \textup{Hilbert function} of $I$ is given by $ \textup{HF}_I : \textup{Cl}(X) \rightarrow \mathbb{N} : \alpha \mapsto \dim_\mathbb{C}((S/I)_\alpha).$ \end{definition} We note that in \cite{csahin2016multigraded}, the Hilbert function of the scheme $V_X(I)$ is equal to $\textup{HF}_J$ as defined above. In order to state a necessary and sufficient condition for surjectivity of $\psi_\alpha$, we will introduce a homogeneous analog of the Lagrange polynomials introduced in Subsection \ref{subsec:multclass}. \begin{definition}[homogeneous Lagrange polynomials] \label{def:lagpol} Let $\alpha \in \textup{Cl}(X)$ be such that no $\zeta_j$ is a basepoint of $S_\alpha$ and let $h \in S_\alpha$ be such that $\zeta_j \notin V_X(h), j = 1, \ldots, \delta$. A set of elements $\ell_1, \ldots, \ell_\delta \in S_\alpha$ is called a set of \textup{homogeneous Lagrange polynomials} of degree $\alpha$ with respect to $h$ if for $j = 1, \ldots, \delta$, \begin{enumerate} \item $\zeta_i \in V_X(\ell_j), i \neq j$, \item $\zeta_j \in V_X(h-\ell_j)$. \end{enumerate} \end{definition} In terms of the homogeneous coordinates $z_j$, a set of homogeneous Lagrange polynomials satisfies $\ell_j(z_i) = 0, i \neq j$ and $\ell_j(z_j) = h(z_j), j = 1, \ldots, \delta$. 
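As a toy illustration of Definition \ref{def:lagpol} (not taken from the running example), let $X = \mathbb{P}^1$ with Cox ring $S = \mathbb{C}[x_0,x_1]$, let $\zeta_1 = (1:0), \zeta_2 = (0:1), \zeta_3 = (1:1)$ and take $\alpha = 2$, so that $n_\alpha = 3 = \delta$. The element $h = x_0^2 + x_0x_1 + x_1^2 \in S_2$ vanishes at none of the $\zeta_j$, and
$$ \ell_1 = x_0^2 - x_0x_1, \qquad \ell_2 = x_1^2 - x_0x_1, \qquad \ell_3 = 3x_0x_1 $$
is a set of homogeneous Lagrange polynomials of degree $2$ with respect to $h$: each $\ell_j$ vanishes at the $\zeta_i$ with $i \neq j$, and $h - \ell_j$ vanishes at $\zeta_j$. Note that $\ell_1 + \ell_2 + \ell_3 = h$.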
\begin{remark} \label{rem:multiplicities} Let $\ell_j, j = 1, \ldots, \delta$ be a set of homogeneous Lagrange polynomials of degree $\alpha$ with respect to $h$. The cosets $\ell_j + I_\alpha \in (S/I)_\alpha$ are a dual basis for the evaluation functionals $v_j \in (S/I)_\alpha^\vee$ given by $v_j: (S/I)_\alpha \rightarrow \mathbb{C} : f + I_\alpha \mapsto (f/h)(z_j)$. If $I$ defines points with multiplicities (the case of `fat points', violating Assumption \ref{ass:3}), a starting point would be to extend this set of evaluation functionals to a basis of $(S/I)_\alpha^\vee$, using analogs of differentiation operators. It is known that the theory for the affine root finding problem (Subsection \ref{subsec:multclass}) extends nicely in this way; see for instance \cite[Chapter 4, Proposition 2.7]{cox2}, \cite[Section 4.3]{elkadi_introduction_2007} or \cite{moller1995multivariate}. We leave this for future research. \end{remark} In what follows, we use the same function $h$ to define $\psi_\alpha$ and a set of homogeneous Lagrange polynomials. \begin{proposition} \label{prop:inj} Let $\alpha \in \textup{Cl}(X)$ be such that no $\zeta_j$ is a basepoint of $S_\alpha$. Then \begin{enumerate} \item $\psi_\alpha$ is injective if and only if $I_\alpha = J_\alpha$. In this case $\textup{HF}_I(\alpha) \leq \delta$, \item $\psi_\alpha$ is surjective if and only if there exists a set of homogeneous Lagrange polynomials of degree $\alpha$. In this case $\textup{HF}_I(\alpha) \geq \delta$. \end{enumerate} \end{proposition} \begin{proof} Let $f, h \in S_\alpha$ such that $\zeta_j \notin V_X(h), j = 1, \ldots, \delta$. If $\psi_\alpha$ is injective, then $ f \in J_\alpha \Rightarrow \psi_\alpha(f + I_\alpha) = 0 \Rightarrow f \in I_\alpha$. So $J_\alpha \subset I_\alpha$ and the other inclusion is trivial. Conversely, if $I_\alpha = J_\alpha$, then $\psi_\alpha(f + I_\alpha) = 0 \Rightarrow f \in J_\alpha \Rightarrow f \in I_\alpha$, so $\psi_\alpha$ is injective. 
The corresponding statement about $\textup{HF}_I$ follows easily.\\ If $\psi_\alpha$ is surjective, take $\ell_j \in \psi_{\alpha}^{-1}(e_j)$. Conversely, if $\ell_j, j = 1, \ldots, \delta$ is a set of homogeneous Lagrange polynomials of degree $\alpha$, $\psi_\alpha(\ell_j + I_\alpha) = e_j$ and $\psi_\alpha$ is surjective. Again, the statement about $\textup{HF}_I$ follows easily. \end{proof}
\begin{corollary} \label{cor:radical} If $\alpha \in \textup{Pic}(X)$ is ample\footnote{A divisor $D$ and its degree $\alpha = [D]$ are called \textit{very ample} if $D$ is basepoint free and $X \rightarrow \mathbb{P}(\Gamma(X, \mathscr{O}_X(D))^\vee)$ is a closed embedding. If $kD$ (or $k\alpha$) is very ample for some $k \geq 1$, then $D$ (or $\alpha$) is called \textit{ample}. See \cite[Chapter 6]{cox2011toric} for definitions and properties.} and $I$ is radical, then $\psi_\alpha$ is injective. \end{corollary} \begin{proof} In this case $I = I(\overline{G\cdot z_1} \cup \cdots \cup \overline{G\cdot z_\delta} \cup Z')$ by the Nullstellensatz. Take $f \in J_\alpha$. Since any polynomial in $S_\alpha$ for $\alpha$ ample vanishes on $Z$ ($S_\alpha \subset K$, see e.g.\ \cite{soprounov2005toric}), $f$ vanishes on $Z' \subset Z$. Therefore $f \in I_\alpha$, so $J_\alpha \subset I_\alpha$; the reverse inclusion $I_\alpha \subset J_\alpha$ always holds, hence $I_\alpha = J_\alpha$. Now apply Proposition \ref{prop:inj}. \end{proof}
The following proposition shows that the existence of homogeneous Lagrange polynomials of degree $\alpha \in \textup{Cl}(X)$ is equivalent to the fact that the points $\Phi_\alpha(z_j)$ span a linear space of dimension $\delta-1$ in $\mathbb{P}^{n_\alpha -1}$. Let $p_j \in \mathbb{C}^{n_\alpha}$ be a set of homogeneous coordinates (in the standard sense) of $\Phi_\alpha(z_j) \in \mathbb{P}^{n_\alpha - 1}$ and define the matrix $L_\alpha = [ p_1 ~ \cdots ~ p_\delta ] \in \mathbb{C}^{n_{\alpha} \times \delta}$. \begin{proposition} \label{prop:Lalpha} Let $\alpha \in \textup{Cl}(X)$ be such that no $\zeta_j$ is a basepoint of $S_\alpha$. There exists a set of Lagrange polynomials of degree $\alpha$ if and only if $L_\alpha$ has rank $\delta$. \end{proposition} \begin{proof} The rank of $L_\alpha$ is $\delta$ if and only if there exists a left inverse matrix $L_{\alpha}^\dagger \in \mathbb{C}^{\delta \times n_\alpha}$ such that $L_\alpha^\dagger L_\alpha = \textup{id}_\delta$ is the $\delta \times \delta$ identity matrix. We will show that this is equivalent to the existence of a set of homogeneous Lagrange polynomials of degree $\alpha$. Suppose that $L_\alpha^\dagger$ exists. The rows of $L_\alpha^\dagger$ should be interpreted as elements of $S_\alpha$ represented in the basis $\{x^{b_1}, \ldots, x^{b_{n_\alpha}} \}$. The columns of $L_\alpha$ are elements of $S_\alpha^\vee$ represented in the dual basis. Let the $j$-th row of $L_{\alpha}^\dagger$ correspond to $\tilde{\ell}_j \in S_\alpha$. It is clear from $L_\alpha^\dagger L_\alpha = \textup{id}_\delta$ that $$\pair{\tilde{\ell}_j, p_i} = \tilde{\ell}_j(z_i) = \begin{cases} 1 & i = j, \\ 0 & \textup{otherwise.} \end{cases}$$ By Lemma \ref{lem:nonzero}, there is $h\in S_\alpha$ such that $h(z_j) \neq 0, j = 1, \ldots, \delta$. Then $\ell_j = h(z_j)\tilde{\ell}_j, j = 1, \ldots, \delta$ are a set of homogeneous Lagrange polynomials. 
Conversely, if a set of homogeneous Lagrange polynomials exists, construct a matrix $\tilde{L}_\alpha^\dagger$ by plugging the coefficients of $\ell_j$ into the $j$-th row. Then there is $h \in S_\alpha$ such that $\tilde{L}_\alpha^\dagger L_\alpha = \textup{diag}(h(z_1), \ldots, h(z_\delta))$ is an invertible diagonal matrix. The left inverse is $L_\alpha^\dagger = \textup{diag}(h(z_1), \ldots, h(z_\delta))^{-1} \tilde{L}_\alpha^\dagger$. \end{proof}
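Numerically, the construction in the proof of Proposition \ref{prop:Lalpha} amounts to computing a left inverse of $L_\alpha$. The following Python sketch illustrates this on synthetic data; the matrix entries and the values $h(z_j)$ are invented for illustration and are not taken from the running example.

```python
import numpy as np

# Synthetic stand-in for L_alpha: its columns are homogeneous coordinate
# vectors p_j of the images Phi_alpha(z_j).  The numbers below are invented
# for illustration; they are not taken from the paper's running example.
L = np.array([
    [ 1.0,  1.0, 0.0],
    [-1.0,  0.0, 1.0],
    [ 2.0,  1.0, 0.0],
    [ 0.0,  0.0, 1.0],
    [ 1.0, -1.0, 2.0],
])  # shape (n_alpha, delta) with n_alpha = 5, delta = 3

delta = L.shape[1]
assert np.linalg.matrix_rank(L) == delta   # hypothesis of the proposition

# Any left inverse works; the Moore-Penrose pseudoinverse is one choice.
L_dagger = np.linalg.pinv(L)               # shape (delta, n_alpha)
assert np.allclose(L_dagger @ L, np.eye(delta))

# Row j of L_dagger holds the coefficients of ell_tilde_j in the monomial
# basis of S_alpha, so ell_tilde_j(z_i) = 1 if i == j and 0 otherwise.
# Rescaling row j by h(z_j), for any h with h(z_j) != 0, gives a set of
# homogeneous Lagrange polynomials with respect to h.
h_at_z = np.array([2.0, -3.0, 5.0])        # invented nonzero values h(z_j)
lagrange_coeffs = np.diag(h_at_z) @ L_dagger
assert np.allclose(lagrange_coeffs @ L, np.diag(h_at_z))
```
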
Based on these results, we make the following definition. \begin{definition}[Regularity] The regularity $\textup{Reg}(I) \subset \textup{Cl}(X)$ of $I$ is the subset of degrees $\alpha \in \textup{Cl}(X)$ for which no $\zeta_j$ is a basepoint of $S_\alpha$ and the following equivalent conditions are satisfied: \begin{enumerate} \item $\psi_\alpha$ is an isomorphism, \item $\textup{HF}_I(\alpha) = \delta$ and $I_\alpha = J_\alpha$, \item $\textup{HF}_I(\alpha) = \delta$ and there exists a set of homogeneous Lagrange polynomials of degree $\alpha$, \item $I_\alpha = J_\alpha$ and there exists a set of homogeneous Lagrange polynomials of degree $\alpha$. \end{enumerate} \end{definition}
\begin{theorem} If $\alpha \in \textup{Reg}(I)$, $\alpha_0 \in \textup{Cl}(X)_+$ is such that no $\zeta_j$ is a basepoint of $S_{\alpha_0}$ and $\textup{HF}_I(\alpha + \alpha_0) = \delta$, then $\alpha + \alpha_0 \in \textup{Reg}(I)$. \end{theorem} \begin{proof} Let $\ell_j, j = 1, \ldots, \delta$ be a set of homogeneous Lagrange polynomials of degree $\alpha$ w.r.t.\ $h \in S_\alpha$. It is easy to verify that for generic $h_0 \in S_{\alpha_0}$, $h_0 \ell_j, j = 1, \ldots, \delta$ is a set of homogeneous Lagrange polynomials of degree $\alpha + \alpha_0$ w.r.t.\ $hh_0$. \end{proof}
If $\alpha \in \textup{Pic}(X)$ is basepoint free and $\textup{HF}_I(\alpha) = \delta$, then to show that $\alpha \in \textup{Reg}(I)$, by Proposition \ref{prop:Lalpha} it suffices to show that $L_\alpha$ is of rank $\delta$. If $\alpha$ is `large enough' (the associated polytope has enough lattice points), this seems reasonable to expect. Alternatively, by Proposition \ref{prop:inj} it suffices to show that $I_\alpha = J_\alpha$. Based on experimental evidence we propose the following conjecture. \begin{conjecture} \label{conj} Let $I = \ideal{f_1, \ldots, f_n} \subset S$ be a homogeneous ideal obtained as in Section \ref{sec:setup} such that $V_X(I)$ is a zero-dimensional subscheme of $U \subset X$. Let $\alpha_i = \deg(f_i) \in \textup{Pic}(X)$ be the basepoint free degrees of the generators. Then $ \alpha_0 + \alpha_1 + \ldots + \alpha_n \in \textup{Reg}(I)$ for all $\alpha_0 \in \textup{Cl}(X)_+$ such that no $\zeta_j$ is a basepoint of $S_{\alpha_0}$. \end{conjecture} In the rest of this section, we prove some weaker results to support Conjecture \ref{conj} and we continue our running example by investigating the regularity.
We consider the question for which $\alpha \in \textup{Cl}(X)$ we have $\textup{HF}_I(\alpha) = \delta$. The following theorem generalizes Theorem 3.16 in \cite{csahin2016multigraded} in the case where $Z$ is small enough. \begin{theorem} \label{thm:koszul1} Let $I = \ideal{f_1, \ldots, f_n} \subset S$ be a homogeneous ideal obtained as in Section \ref{sec:setup} such that $V_X(I)$ is a zero-dimensional subscheme of $U \subset X$. Let $\alpha_i = \deg(f_i) \in \textup{Pic}(X)$ be the basepoint free degrees of the generators. If $\textup{codim}(Z) \geq n$ then for all basepoint free $\alpha_0 \in \textup{Pic}(X)_+$, $\textup{HF}_I(\alpha_0 + \alpha_1 + \ldots + \alpha_n) = \delta$. \end{theorem} \begin{proof} Consider the Koszul complex
$$ 0 \rightarrow S(- \sum_{i=1}^n \alpha_i) \rightarrow \bigoplus_{\substack{\mathcal{J} \subset \{1,\ldots,n\}\\ |\mathcal{J}| = n - 1}} S(-\alpha_\mathcal{J}) \rightarrow \cdots \rightarrow \bigoplus_{\substack{\mathcal{J} \subset \{1,\ldots,n\}\\ |\mathcal{J}| = 2}} S(-\alpha_\mathcal{J}) \rightarrow \bigoplus_{i=1}^n S(-\alpha_i) \rightarrow S $$ where $\alpha_\mathcal{J} = \sum_{i \in \mathcal{J}} \alpha_i$ and $S(-\alpha)$ is the Cox ring with twisted grading: $S(-\alpha)_\beta = S_{\beta - \alpha}$. Since the orbit closures $\overline{G \cdot z_j}$ have dimension $k-n$ and by assumption $\dim(Z) \leq k-n$, the variety $V(f_1, \ldots, f_n) \subset \mathbb{C}^k$ has codimension $n$, so the $f_i$ form a regular sequence in the Cohen-Macaulay ring $S$. Hence the Koszul complex is exact. Restricting to the degree $\alpha = \alpha_0 + \alpha_1 + \ldots + \alpha_n$ part we get
$$ 0 \rightarrow S(\alpha_0) \rightarrow \bigoplus_{\substack{\mathcal{J} \subset \{1,\ldots,n\}\\ |\mathcal{J}| = n - 1}} S( \alpha -\alpha_\mathcal{J}) \rightarrow \cdots \rightarrow \bigoplus_{\substack{\mathcal{J} \subset \{1,\ldots,n\}\\ |\mathcal{J}| = 2}} S(\alpha -\alpha_\mathcal{J}) \rightarrow \bigoplus_{i=1}^n S(\alpha -\alpha_i) \rightarrow S_\alpha. $$ Since $\alpha_0$ is basepoint free, it corresponds to a polytope $P_0$ and we have by \eqref{eq:gradedpiece} and \eqref{eq:sumofdiv}
$$ \dim_\mathbb{C}(S_{\alpha_0 + \alpha_\mathcal{J}}) = |(P_0+P_{\mathcal{J}}) \cap M|$$ with $P_\mathcal{J} = \sum_{i \in \mathcal{J}} P_i$ for any subset $\mathcal{J} \subset \{0, \ldots, n\}$. Counting dimensions we get
$$ \dim_\mathbb{C}((S/I)_\alpha) = \sum_{\ell = 0}^n (-1)^{n-\ell} \sum_{\substack{\mathcal{J} \subset \{1,\ldots,n\}\\ |\mathcal{J}| = \ell}} |(P_0 + P_\mathcal{J}) \cap M|,$$ and the right hand side is the formula \eqref{eq:mvpoints} for the mixed volume $\delta = \textup{MV}(P_1, \ldots, P_n)$. \end{proof} Note that the conditions of Theorem \ref{thm:koszul1} are satisfied by all toric surfaces ($n = 2$). Here is an analogous result for the case where the system is `unmixed' (in some sense) and the corresponding polytope is normal. \begin{theorem} \label{thm:koszul2} Let $I = \ideal{f_1, \ldots, f_n} \subset S$ be a homogeneous ideal obtained as in Section \ref{sec:setup} such that $V_X(I)$ is a zero-dimensional subscheme of $X$. Let $\alpha_i = \deg(f_i) \in \textup{Pic}(X)$ be the basepoint free degrees of the generators. If there is a basepoint free degree $\alpha_\star \in \textup{Pic}(X)$ corresponding to a normal polytope, such that $\alpha_i = t_i \alpha_\star$ for positive integers $t_i$, then $\textup{HF}_I(t \alpha_\star) = \delta$ for $t \geq \sum_{i=1}^n t_i$. \end{theorem} \begin{proof}
The assumption on $\alpha_i$ implies that $P_i = t_i P_\star+ m_i$ for a normal polytope $P_\star$, lattice points $m_i$ and positive integers $t_i$. We can assume without loss of generality that $m_i = 0, i = 1, \ldots, n$. We consider the embedding $X_\mathscr{A} \subset \mathbb{P}^{|\mathscr{A}|-1}$ of $X$ where $\mathscr{A} = P_\star \cap M$. More precisely, $X_\mathscr{A}$ is the image of $\Phi_{\alpha_\star}$ \cite[Proposition 5.4.7]{cox2011toric}. Let $u_m, m \in \mathscr{A}$ be homogeneous coordinates on $\mathbb{P}^{n_{\alpha_\star}-1} = \mathbb{P}^{|\mathscr{A}| -1}$. The toric ideal of $X_\mathscr{A}$ is denoted $I_\mathscr{A} \subset \mathbb{C}[u_m, m \in \mathscr{A}]$ and the $\mathbb{Z}$-graded coordinate ring of $X_\mathscr{A}$ is $\mathbb{C}[X_\mathscr{A}] = \mathbb{C}[u_m, m \in \mathscr{A}]/I_\mathscr{A}$. By \cite[Theorem 5.4.8]{cox2011toric}, we have $S_{\alpha_i} \simeq \mathbb{C}[X_\mathscr{A}]_{t_i}$ and $f_i \in S_{\alpha_i}$ corresponds to an element $h_i + I_\mathscr{A} \in \mathbb{C}[X_\mathscr{A}]_{t_i}$. We define the homogeneous ideal $ I' = \ideal{h_1 + I_\mathscr{A},\ldots, h_n + I_\mathscr{A}} \subset \mathbb{C}[X_\mathscr{A}]$. By assumption, $I'$ defines a 0-dimensional subscheme of $X_\mathscr{A}$, so $h_1 + I_\mathscr{A}, \ldots, h_n + I_\mathscr{A}$ is a regular sequence in $\mathbb{C}[X_\mathscr{A}]$. The ring $\mathbb{C}[X_\mathscr{A}]$ is arithmetically Cohen-Macaulay \cite[Exercise 9.2.8]{cox2011toric}, so the corresponding Koszul complex
$$ 0 \rightarrow K_n \rightarrow K_{n-1} \rightarrow \cdots \rightarrow K_2 \rightarrow K_1 \rightarrow \mathbb{C}[X_\mathscr{A}] \quad \textup{ with } \quad K_t = \bigoplus_{\substack{\mathcal{J} \subset \{1,\ldots,n\}\\ |\mathcal{J}| = t}} \mathbb{C}[X_\mathscr{A}](-\sum_{i \in \mathcal{J}} t_i)$$
is exact. Since $P_\star$ is a normal polytope, we have $\dim_\mathbb{C}(\mathbb{C}[X_\mathscr{A}]_t) = |tP_\star \cap M|$. Counting dimensions and using the same formula as before for $\delta = \textup{MV}(P_1, \ldots, P_n) = t_1 \cdots t_n \, \textup{MV}(P_\star, \ldots, P_\star)$ (by multilinearity of the mixed volume) we find that $\dim_\mathbb{C}((\mathbb{C}[X_\mathscr{A}]/I')_t) = \delta$ for $t \geq \sum_{i=1}^n t_i$. Combining this with $(\mathbb{C}[X_\mathscr{A}]/I')_t \simeq (S/I)_{t \alpha_\star}$ (see \cite[Theorem 5.4.8]{cox2011toric}) we get the desired result. \end{proof} We note that in the case where $X$ is a product of projective spaces, stronger bounds than those of Theorem \ref{thm:koszul1} and Theorem \ref{thm:koszul2} are known \cite{bender2018towards}.
\begin{small} $$ L_\alpha^\top = ~ ~ \begin{blockarray}{ccccccccccccc} \rot{$x_3x_4^2$} & \rot{$x_1x_4^2$} & \rot{$x_2x_3^3x_4$} & \rot{$x_1x_2x_3^2x_4$} & \rot{$x_1^2x_2x_3x_4$} & \rot{$x_1^3x_2x_4$} & \rot{$x_2^2x_3^5$} & \rot{$x_1x_2^2x_3^4$} & \rot{$x_1^2x_2^2x_3^3$} & \rot{$x_1^3x_2^2x_3^2$} & \rot{$x_1^4x_2^2x_3$} & \rot{$x_1^5x_2^2$}\\ \begin{block}{[cccccccccccc]c}
~1~ & -1~ & -1~ & 1~ & -1~ & 1~ & 1~ & -1~ & 1~ & -1~ & 1~ & -1~ &z_1~\\
~1~ & 0~ & -1~ & 0~ & 0~ & 0~ & 1~ & 0~ & 0~ & 0~ & 0~ & 0~ &z_2~\\
~0~& 1~& 0~&0~&0~&-1~&0~&0~&0~&0~&0~&1~&z_3~\\ \end{block} \end{blockarray}. $$ \end{small} \noindent Consider $h = 39(x_3x_4^2 - x_1x_4^2) \in S_\alpha$ and note that $h(z_j) \neq 0$ for all $j$. A set of homogeneous Lagrange polynomials w.r.t.\ $h$ is given by
\begin{small} $$ \frac{2~\tilde{L}_\alpha^\dagger}{13} = ~ ~ \begin{blockarray}{ccccccccccccc} \rot{$x_3x_4^2$} & \rot{$x_1x_4^2$} & \rot{$x_2x_3^3x_4$} & \rot{$x_1x_2x_3^2x_4$} & \rot{$x_1^2x_2x_3x_4$} & \rot{$x_1^3x_2x_4$} & \rot{$x_2^2x_3^5$} & \rot{$x_1x_2^2x_3^4$} & \rot{$x_1^2x_2^2x_3^3$} & \rot{$x_1^3x_2^2x_3^2$} & \rot{$x_1^4x_2^2x_3$} & \rot{$x_1^5x_2^2$}\\ \begin{block}{[cccccccccccc]c}
~0~&0~&0~&2~&-2~&0~&0~&-2~&2~&-2~&2~&0 ~&\ell_1\\
~2~&0~&-2~&-1~&1~&0~&2~&1~&-1~&1~&-1~&0 ~&\ell_2\\
~0~&2~&0~&1~&-1~&-2~&0~&-1~&1~&-1~&1~&2 ~&\ell_3\\ \end{block} \end{blockarray}, $$ \end{small} \noindent which is related to a left inverse of $L_\alpha$ by $$L_\alpha^\dagger = \textup{diag}(h(z_1), h(z_2), h(z_3))^{-1} \tilde{L}^\dagger_\alpha = \textup{diag}(1/78,1/39,1/39) \tilde{L}^\dagger_\alpha.$$
To check that $I_\alpha = J_\alpha$ we compute $\textup{HF}_I(\alpha) = \textup{HF}_J(\alpha) = 3$. Hence we have $\alpha \in \textup{Reg}(I)$. In fact, in this example $I$ is radical and $\alpha$ is ample, so $I_\alpha = J_\alpha$ follows from Corollary \ref{cor:radical}. \end{example}
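The alternating lattice-point sum used in the proof of Theorem \ref{thm:koszul1} can be checked on a small example. The Python sketch below takes $P_0 = \{0\}$, $P_1$ the standard triangle and $P_2$ the unit square in the plane; these toy polytopes are hard-coded by their defining inequalities and are not those of the running example.

```python
from itertools import product

# Count |P ∩ Z^2| for a polygon P given by inequalities u·x + c >= 0.
def lattice_points(ineqs, box=6):
    return [p for p in product(range(-box, box + 1), repeat=2)
            if all(u[0] * p[0] + u[1] * p[1] + c >= 0 for u, c in ineqs)]

# Toy polytopes (not those of the running example): P1 is the standard
# triangle, P2 the unit square, and P12 their Minkowski sum.
P1  = [((1, 0), 0), ((0, 1), 0), ((-1, -1), 1)]
P2  = [((1, 0), 0), ((0, 1), 0), ((-1, 0), 1), ((0, -1), 1)]
P12 = [((1, 0), 0), ((0, 1), 0), ((-1, 0), 2), ((0, -1), 2), ((-1, -1), 3)]

# Inclusion-exclusion with P0 = {0} (so |P0 ∩ M| = 1), as in the proof:
delta = (len(lattice_points(P12)) - len(lattice_points(P1))
         - len(lattice_points(P2)) + 1)
assert delta == 2   # = MV(P1, P2), the expected number of solutions
```
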
\section{A toric eigenvalue, eigenvector theorem} \label{sec:toriceval} In this section, we will work with multiplication maps between graded pieces of $S/I$. Again, $I$ is a homogeneous ideal in $S$ obtained as in Section \ref{sec:setup} satisfying Assumptions \ref{ass:1}-\ref{ass:3}. For $\alpha, \alpha_0 \in \textup{Cl}(X)_+$, a homogeneous element $g \in S_{\alpha_0}$ defines a linear map $$ M_g : (S/I)_\alpha \rightarrow (S/I)_{\alpha + \alpha_0} : f + I_\alpha \mapsto gf + I_{\alpha + \alpha_0}$$ representing `multiplication with $g$'. Just as in the affine case, these multiplication maps will be the key ingredient to formulate our root finding problem as a linear algebra problem. We state a toric version of the eigenvalue, eigenvector theorem and show how the eigenvalues can be used to recover homogeneous coordinates of the solutions and equations for the corresponding $G$-orbits. Our main result uses the following Lemma. \begin{lemma} \label{lem:isom} Let $\alpha, \alpha_0 \in \textup{Cl}(X)_+$ be such that $\alpha, \alpha + \alpha_0 \in \textup{Reg}(I)$ and no $\zeta_j$ is a basepoint of $S_{\alpha_0}$. Then for generic $h_0 \in S_{\alpha_0}$, ${M}_{h_0}: (S/I)_\alpha \rightarrow (S/I)_{\alpha + \alpha_0} : f + I_\alpha \mapsto h_0f + I_{\alpha + \alpha_0}$ is an isomorphism of vector spaces. \end{lemma} \begin{proof} Let $\psi_\alpha$ be given as in \eqref{eq:evalmap} for some $h \in S_\alpha$. We can take $hh_0 \in S_{\alpha + \alpha_0}$ to define $\psi_{\alpha + \alpha_0}$. Then $\psi_{\alpha + \alpha_0} \circ {M}_{h_0} = \psi_{\alpha}$ shows that ${M}_{h_0}$ is invertible. \end{proof} \begin{theorem}[Toric eigenvalue, eigenvector theorem] \label{thm:multiplication} Let $\alpha, \alpha_0 \in \textup{Cl}(X)_+$ be such that $\alpha, \alpha+ \alpha_0 \in \textup{Reg}(I)$ and no $\zeta_j$ is a basepoint of $S_{\alpha_0}$. 
Then for any $g \in S_{\alpha_0}$, ${M}_{g/h_0} = {M}_{h_0}^{-1} \circ {M}_g: (S/I)_\alpha \rightarrow (S/I)_{\alpha}$ has eigenpairs $$ \left( \frac{g}{h_0}(z_j), \ell_j + I_\alpha \right), \qquad \left( v_j, \frac{g}{h_0}(z_j) \right), \quad j = 1, \ldots, \delta,$$ where the $\ell_j + I_\alpha$ are cosets of homogeneous Lagrange polynomials of degree $\alpha$ and the $v_j$ are the dual basis of $(S/I)_\alpha^\vee$. \end{theorem} \begin{proof} The map ${M}_{h_0}$ is an isomorphism by Lemma \ref{lem:isom}. We define $\psi_\alpha$, $\psi_{\alpha + \alpha_0}$ as in \eqref{eq:evalmap} with $h \in S_\alpha$, $h h_0 \in S_{\alpha + \alpha_0}$ respectively. A straightforward computation shows that $\psi_{\alpha+\alpha_0} \circ {M}_{h_0} (\ell_j + I_\alpha) = e_j$. Analogously, we have $ \psi_{\alpha+\alpha_0} \circ {M}_g (\ell_j + I_\alpha) = \frac{g}{h_0}(z_j) e_j$. It follows that $h_0(z_j) {M}_g(\ell_j + I_\alpha) = g(z_j) {M}_{h_0}(\ell_j + I_\alpha)$, and therefore $$ {M}_{g/h_0}(\ell_j + I_\alpha) = \frac{g}{h_0}(z_j) (\ell_j+ I_\alpha),$$ which proves the statement about the right eigenpairs, since the $\ell_j + I_\alpha$ are linearly independent. For the statement about the left eigenpairs, note that for any $f \in S_\alpha$ $$ v_j \circ {M}_{g/h_0} (f + I_\alpha) = v_j \circ {M}_{h_0}^{-1} (gf + I_{\alpha+\alpha_0})$$ and since ${M}_{h_0}$ is an isomorphism, there is $\tilde{f} \in S_\alpha$ such that $gf - h_0 \tilde{f} \in I_{\alpha+\alpha_0}$. Therefore, for each $z_j \in V(I)$ we have $$ \frac{gf - h_0 \tilde{f}}{h_0 h}(z_j) = 0 \Rightarrow \frac{\tilde{f}}{h}(z_j) = \frac{g}{h_0}(z_j) \frac{f}{h}(z_j)$$ and thus, since ${M}_{h_0}^{-1}(gf+I_{\alpha+\alpha_0}) = \tilde{f} + I_\alpha$, we have $$v_j \circ {M}_{g/h_0} (f + I_\alpha) = v_j (\tilde{f} + I_\alpha) = \frac{g}{h_0}(z_j) v_j(f + I_\alpha).$$ The $v_j$ are linearly independent, so this concludes the proof. \end{proof}
Let $S_{\alpha_0} = \bigoplus_{i=1}^{n_{\alpha_0}} \mathbb{C} \cdot x^{b_i}$ where $\alpha_0 \in \textup{Cl}(X)_+$ is such that no $\zeta_j$ is a basepoint of $S_{\alpha_0}$. We now show how the eigenvalues of the ${M}_{x^{b_i}/h_0}$ lead directly to a set of defining equations of $G \cdot z_j, j = 1, \ldots, \delta$ if $\alpha_0$ is `large enough'. For every cone $\sigma \in \Sigma_P$, we define $U_\sigma = \mathbb{C}^k \setminus V(x^{\hat{\sigma}}) = \textup{MaxSpec}(S_{x^{\hat{\sigma}}})$. Note that $ \mathbb{C}^k \setminus Z = \bigcup_{\sigma \in \Sigma_P} U_\sigma$. Let $D_{\alpha_0}$ be a representative divisor: $\alpha_0 = [D_{\alpha_0}] =[ \sum_{i=1}^k a_{0,i} D_i]$. Let $P_0 \subset M_\mathbb{R}$ be the polytope $\{m \in M_\mathbb{R} ~|~ F^\top m + a_0 \geq 0 \}$. If $\alpha_0 \in \textup{Pic}(X)$, then for every $\sigma \in \Sigma_P$ there is $m_\sigma \in P_0 \cap M$ such that \begin{equation} \label{eq:distpoint} \pair{u_i, m_\sigma} + a_{0,i} = 0, \quad \forall \rho_i \in \sigma(1), \end{equation} see for instance \cite[Lemma 3.4]{cox1995homogeneous} or \cite[Theorem 4.2.8]{cox2011toric}. If $D_{\alpha_0}$ is not Cartier, such an $m_\sigma$ does not exist for every cone $\sigma \in \Sigma_P$. We will denote the subset of cones for which $m_\sigma \in P_0$ satisfying \eqref{eq:distpoint} exists by $\widetilde{\Sigma}_P \subset \Sigma_P$. This set is nonempty since $\{0\} \in \widetilde{\Sigma}_P$. We write $P_0 \cap M = \{m_1, \ldots, m_{n_{\alpha_0}} \}$, $b_i = F^\top m_i + a_0$ and $b_\sigma = F^\top m_\sigma + a_0$. For all $\sigma \in \widetilde{\Sigma}_P$ we denote $P_0 \cap M - m_\sigma = \{m_1 - m_\sigma, \ldots, m_{n_{\alpha_0}} - m_\sigma \}$ (note that $0 \in P_0 \cap M - m_\sigma$) and
$ \sigma^\vee = \{ m \in M_\mathbb{R} ~|~ \pair{u,m} \geq 0, \forall u \in \sigma \}, \sigma^\perp = \{ m \in M_\mathbb{R} ~|~ \pair{u,m} = 0, \forall u \in \sigma \}.$ We partition $P_0 \cap M - m_\sigma$ into $\mathcal{M}_\sigma^\perp = (P_0 \cap M - m_\sigma) \cap \sigma^\perp$ and $\mathcal{M}_\sigma = (P_0 \cap M - m_\sigma) \setminus \mathcal{M}_\sigma^\perp$. The inclusion
$$ \mathbb{N} \mathcal{M}_\sigma + \mathbb{Z} \mathcal{M}_\sigma^\perp = \left \{ \sum_{m \in \mathcal{M}_\sigma} c_m m + \sum_{m \in \mathcal{M}_\sigma^\perp} d_m m ~|~ c_m \in \mathbb{N}, d_m \in \mathbb{Z} \right \} \subset \sigma^\vee \cap M$$ is clear. In what follows, we will show that if equality holds for some simplicial $\sigma \in \widetilde{\Sigma}_P$ with $z_j \in U_\sigma$, then $\alpha_0$ is `large enough' to recover equations for $G \cdot z_j$ from the eigenvalues of the $M_{x^{b_i}/h_0}$. \begin{theorem} \label{thm:orbiteq} Let $z \in U_\sigma$ for a simplicial cone $\sigma \in \widetilde{\Sigma}_P$ such that $\pi(z)$ is not a basepoint of $S_{\alpha_0}$. Take $h_0 \in S_{\alpha_0}$ such that $h_0(z) \neq 0$ and let $\lambda_{i} = z^{b_i}/h_0(z), i = 1, \ldots, n_{\alpha_0}$. If $ \sigma^\vee \cap M = \mathbb{N} \mathcal{M}_\sigma + \mathbb{Z} \mathcal{M}_\sigma^\perp$, then $G \cdot z \subset U_\sigma$ is the subvariety defined by the ideal
$$ \ideal{x^{b_i-b_\sigma} - \lambda_{i} \frac{h_0(x)}{x^{b_\sigma}}~|~ i = 1, \ldots, n_{\alpha_0}} \subset S_{x^{\hat{\sigma}}}.$$ To prove Theorem \ref{thm:orbiteq}, we need the following auxiliary lemma. \begin{lemma} \label{lem:orbiteq} Let $\sigma \in \widetilde{\Sigma}_P$ be a simplicial cone. For any point $z \in U_\sigma$, the orbit $G \cdot z$ is given by
$$ G \cdot z = \{ x \in U_\sigma ~|~ x^{F^\top m} = z^{F^\top m},\ m \in \sigma^\vee \cap M \} \subset U_\sigma.$$ If $\sigma^\vee \cap M = \mathbb{N} \{m_1, \ldots, m_\kappa \} + \mathbb{Z} \{m_{\kappa + 1}, \ldots, m_s \}$, then
$$ \{ x \in U_\sigma ~|~ x^{F^\top m} = z^{F^\top m},\ m \in \sigma^\vee \cap M \} = \{ x \in U_\sigma ~|~ x^{F^\top m_i} = z^{F^\top m_i},\ i = 1, \ldots, s \}. $$ \end{lemma} \begin{proof}
Note that $x^{F^\top m} - z^{F^\top m} \in S_{x^{\hat{\sigma}}}, \forall m \in \sigma^\vee \cap M$ and $m_{\kappa+1}, \ldots, m_s \in \sigma^\perp \cap M$. The first statement is shown in the proof of Theorem 2.1 in \cite{cox1995homogeneous}. For the second statement, the inclusion `$\subset$' is obvious. To show the opposite inclusion, take $m \in \sigma^\vee \cap M$ and write $m = c_1m_1 + \ldots + c_s m_s$ with $c_1, \ldots, c_\kappa \in \mathbb{N}$, $c_{\kappa+1}, \ldots, c_s \in \mathbb{Z}$. Then $$ x^{F^\top m} = \prod_{i=1}^{\kappa} (x^{F^\top m_i})^{c_i} \prod_{j=\kappa + 1}^{s} (x^{F^\top m_j})^{c_j}$$ and if $x^{F^\top m_i} = z^{F^\top m_i}, i = 1, \ldots, s$, it follows that $x^{F^\top m} = z^{F^\top m}$. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:orbiteq}] It follows from Lemma \ref{lem:orbiteq} that $G\cdot z$ is the variety of
$$ \ideal{ x^{F^\top (m_i - m_\sigma)} - z^{F^\top (m_i - m_\sigma)} ~|~ i = 1, \ldots, n_{\alpha_0}} = \ideal{ x^{b_i - b_\sigma} - z^{b_i - b_\sigma}~|~i = 1, \ldots, n_{\alpha_0}}.$$ Write $h_0(x) = \sum_{i=1}^{n_{\alpha_0}} c_i x^{b_i}, c_i \in \mathbb{C}$. It is easy to check that $$ \left ( \begin{bmatrix} 1 \\ & \ddots & \\ & & 1 \end{bmatrix} - \begin{bmatrix} \lambda_{1} \\ \vdots \\ \lambda_{n_{\alpha_0}} \end{bmatrix} \begin{bmatrix} c_1 & \hdots & c_{n_{\alpha_0}} \end{bmatrix} \right) \begin{bmatrix} x^{b_1 - b_\sigma} - z^{b_1 - b_\sigma} \\ \vdots \\ x^{b_{n_{\alpha_0}} - b_\sigma} - z^{b_{n_{\alpha_0}} - b_\sigma} \end{bmatrix} = \begin{bmatrix} x^{b_1 - b_\sigma} - \lambda_{1} \frac{h_0(x)}{x^{b_\sigma}} \\ \vdots \\ x^{b_{n_{\alpha_0}} -b_\sigma} - \lambda_{n_{\alpha_0}} \frac{h_0(x)}{x^{b_\sigma}} \end{bmatrix}$$ and for generic $c_i$, the matrix on the left is invertible (it's invertible for $c_i = 0$, so the determinant is a nonzero polynomial in the $c_i$). \end{proof}
\begin{theorem} \label{thm:usefuleq} Let $z \in U_\sigma$ with $\sigma \in \widetilde{\Sigma}_P$ simplicial be such that $\pi(z)$ is not a basepoint of $S_{\alpha_0}$ and $ \sigma^\vee \cap M = \mathbb{N} \mathcal{M}_\sigma + \mathbb{Z} \mathcal{M}_\sigma^\perp$. For generic $h_0 \in S_{\alpha_0}$ satisfying $h_0(z) \neq 0$, the variety $$Y_z = V \left (x^{b_i} - \frac{z^{b_i}}{h_0(z)}, i = 1, \ldots, n_{\alpha_0} \right ) \subset \mathbb{C}^k$$ is nonempty and $Y_z \subset G \cdot z$. \end{theorem} The proof of Theorem \ref{thm:usefuleq} uses the following lemma. \begin{lemma} \label{lem:nontorsion} If $\alpha_0 \in \textup{Cl}(X)_+$ is such that $ \sigma^\vee \cap M = \mathbb{N} \mathcal{M}_\sigma + \mathbb{Z} \mathcal{M}_\sigma^\perp$ for some $\sigma \in \widetilde{\Sigma}_P$, then $\alpha_0$ is not a torsion element of $\textup{Cl}(X)$. \end{lemma} \begin{proof} Suppose $u \alpha_0 = 0$ for some $u >0$. Then $F^\top m + u a_0 = 0$ for some $m \in M$, and therefore $F^\top (m/u) + a_0 = 0$. Since $\Sigma_P$ is complete, this means that $P_0 = \{m/u\}$ and $P_0$ either has 1 lattice point (if $m/u \in M$, in which case $\alpha_0 = 0$), or it has none. Since $\alpha_0 \in \textup{Cl}(X)_+$, we can assume $0 \in P_0$ and this must be the only lattice point in $P_0$. Then $\sigma^\vee \cap M = \mathbb{N} \mathcal{M}_\sigma + \mathbb{Z} \mathcal{M}_\sigma^\perp = \{0\}$. But $\sigma^\vee$ has dimension $n$ because $\sigma$ is strongly convex (\cite[Proposition 1.2.12]{cox2011toric}), so this is a contradiction. \end{proof}
\begin{proof}[Proof of Theorem \ref{thm:usefuleq}]
Since $\alpha_0$ is not a torsion element of $\textup{Cl}(X)$ (Lemma \ref{lem:nontorsion}), we have the exact sequence $$ 0 \longrightarrow \mathbb{Z} \longrightarrow \textup{Cl}(X) \longrightarrow \textup{Cl}(X)/(\mathbb{Z} \cdot \alpha_0) \longrightarrow 0$$ where $\mathbb{Z} \rightarrow \textup{Cl}(X)$ sends $u \mapsto u \alpha_0 \in \textup{Cl}(X)$. Taking $\textup{Hom}_\mathbb{Z}(-, \mathbb{C}^*)$ shows that $G \rightarrow \mathbb{C}^* : g \mapsto g^{\alpha_0}$ is surjective (because $\mathbb{C}^*$ is divisible). Therefore we can find $g \in G$ such that $g^{\alpha_0} = h_0(z)^{-1}$ and thus $h_0(g \cdot z) = 1$. Every $x \in Y_z$ satisfies $x^{b_i} - (g \cdot z)^{b_i} = 0, i = 1, \ldots, n_{\alpha_0}$: this follows from $(g \cdot z)^{b_i} = z^{b_i}/h_0(z)$. In particular, $x^{b_\sigma} = (g \cdot z)^{b_\sigma} \neq 0$ ($z \in U_\sigma$ and hence $g \cdot z \in U_\sigma$ since $U_\sigma$ is $G$-invariant) and therefore $x$ satisfies $x^{b_i - b_\sigma} = (g \cdot z)^{b_i - b_\sigma}, i = 1, \ldots n_{\alpha_0}$. By Lemma \ref{lem:orbiteq} it follows that $g \cdot z \in Y_z \subset G \cdot z$. \end{proof}
Recall that we took $\alpha_0$ such that no $\zeta_j$ is a basepoint of $S_{\alpha_0}$. We conclude this section with the following immediate corollary of Theorems \ref{thm:orbiteq} and \ref{thm:usefuleq}. \begin{corollary} \label{cor:orbiteq} Let $\lambda_{ij} = z_j^{b_i}/h_0(z_j)$ be the $j$-th eigenvalue of the $i$-th multiplication map ${M}_{x^{b_i}/h_0}$. For $j = 1, \ldots, \delta$, assume that $z_j \in U_{\sigma_j}$ for a simplicial cone $\sigma_j \in \widetilde{\Sigma}_P$ satisfying $ \sigma_j^\vee \cap M = \mathbb{N} \mathcal{M}_{\sigma_j} + \mathbb{Z} \mathcal{M}_{\sigma_j}^\perp$. The ideal
$$ \ideal{x^{b_i-b_{\sigma_j}} - \lambda_{ij} \frac{h_0(x)}{x^{b_{\sigma_j}}}~|~ i = 1, \ldots, n_{\alpha_0}} \subset S_{x^{\hat{\sigma}_j}}$$ defines the orbit $G \cdot z_j \subset U_{\sigma_j}$, and for any point $z_j' \in V(x^{b_i} - \lambda_{ij}, i = 1, \ldots, n_{\alpha_0}) \subset U_{\sigma_j}$, we have $\pi(z_j') = \zeta_j$. \end{corollary} Corollary \ref{cor:orbiteq} implies that we can find homogeneous coordinates of the solutions from the eigenvalues $\lambda_{ij}$ by solving a system of binomial equations if $P_0$ `has enough lattice points'. Concretely, for every point $z_j$ there has to be a cone $\sigma_j \in \widetilde{\Sigma}_P$ such that $z_j \in U_{\sigma_j}$ and $ \sigma_j^\vee \cap M = \mathbb{N} \mathcal{M}_{\sigma_j} + \mathbb{Z} \mathcal{M}_{\sigma_j}^\perp$. Note that if all solutions are in the torus, then $z_j \in U_\sigma$ for $\sigma = \{0\} \in \widetilde{\Sigma}_P$ and this condition translates to the fact that $\mathbb{Z}(P_0 \cap M - m) = M$ for some $m \in P_0 \cap M$. If $P_0$ is very ample, then $\alpha_0 \in \textup{Pic}(X)$, so $\widetilde{\Sigma}_P = \Sigma_P$ and $ \sigma^\vee \cap M = \mathbb{N} \mathcal{M}_\sigma + \mathbb{Z} \mathcal{M}_\sigma^\perp$ holds for all $\sigma \in \Sigma_P$ \cite[Proposition 1.3.16]{cox2011toric}. We will elaborate on how to solve this system of binomial equations in the next section.
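When all solutions lie in the torus, the binomial system $x^{b_i} = \lambda_{ij}$ of Corollary \ref{cor:orbiteq} becomes linear after taking logarithms. The following minimal Python sketch assumes positive real data, so that no branch of the logarithm has to be chosen; the exponent matrix and the point are invented for illustration.

```python
import numpy as np

# Exponents b_i as columns of an integer matrix B; rows correspond to the
# Cox variables.  These exponents and the point below are invented.
B = np.array([
    [1, 0, 2],
    [0, 1, 1],
])                                        # k = 2 variables, 3 monomials

z_true = np.array([0.5, 3.0])             # a point with positive coordinates
lam = np.prod(z_true[:, None] ** B, axis=0)   # lambda_i = z^{b_i}

# For positive real data the binomial system x^{b_i} = lambda_i linearizes:
# B^T log(x) = log(lambda).  For general complex eigenvalues one has to fix
# branches of the logarithm or solve the binomial system directly.
log_x, *_ = np.linalg.lstsq(B.T.astype(float), np.log(lam), rcond=None)
z_rec = np.exp(log_x)
assert np.allclose(z_rec, z_true)
```
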
\section{Algorithm} \label{sec:alg} In this section we present an eigenvalue algorithm for computing homogeneous coordinates of the points in $V_X(I)$, where $I$ is an ideal satisfying Assumptions \ref{ass:1}-\ref{ass:3}. As in Theorem \ref{thm:multiplication}, let $\alpha, \alpha_0 \in \textup{Cl}(X)_+$ be such that $\alpha, \alpha + \alpha_0 \in \textup{Reg}(I)$ and no $\zeta_j$ is a basepoint of $S_{\alpha_0}$. In practice, we will take $\alpha = \alpha_1 + \cdots + \alpha_n$ where $\alpha_i = \deg(f_i)$ (by Conjecture \ref{conj}) and $\alpha_0$ `large enough' to recover all solutions (Corollary \ref{cor:orbiteq}). We denote $$ S_{\alpha_0} = \bigoplus_{i=1}^{n_{\alpha_0}} \mathbb{C} \cdot x^{b_i}.$$ We have that $ \textup{HF}_I(\alpha) = \textup{HF}_I(\alpha + \alpha_0) = \delta.$ Given a generic $h_0 \in S_{\alpha_0}$ and a surjective linear map $N: S_{\alpha+\alpha_0} \rightarrow \mathbb{C}^{\delta}$ with $\ker N = I_{ \alpha+\alpha_0 }$, we define $$N_{h_0}: S_{\alpha} \rightarrow \mathbb{C}^{\delta}: f \mapsto N(h_0f)$$ and assume that $N_{h_0}$ is surjective as well. Such a map $N$ can be computed directly from the input equations. We will come back to this later. Let $N^* : B \rightarrow \mathbb{C}^\delta$ be the restriction of $N_{h_0}$ to a subspace $B \subset S_\alpha$ of dimension $\delta$ such that $N^*$ is invertible, and let $$N_i: B \rightarrow \mathbb{C}^\delta: f \mapsto N(x^{b_i} f), \qquad i = 1, \ldots, n_{\alpha_0}.$$ \begin{theorem} \label{thm:simtrans} The map $\nu: B \simeq (S/I)_\alpha: g \mapsto g + I_\alpha$ is an isomorphism of vector spaces and the linear maps $(N^*)^{-1} \circ N_i: B \rightarrow B$ satisfy $\nu \circ (N^*)^{-1} \circ N_i = {M}_{x^{b_i}/h_0} \circ \nu$ where ${M}_{x^{b_i}/h_0}$ are the maps from Theorem \ref{thm:multiplication}. \end{theorem} \begin{proof} By Lemma \ref{lem:isom}, $h_0 f \in I_{\alpha + \alpha_0}$ if and only if $f \in I_\alpha$. Therefore $\ker N_{h_0} = I_\alpha$. 
The first statement follows from $S_\alpha = B \oplus \ker N_{h_0}$. Since $\ker N = I_{\alpha+ \alpha_0}$, $N$ is well defined mod $I_{\alpha + \alpha_0}$. We define $$\tilde{N} : (S/I)_{\alpha + \alpha_0} \rightarrow \mathbb{C}^\delta : f + I_{\alpha + \alpha_0} \mapsto N(f).$$ Since $\tilde{N}$ is a surjective linear map between $\delta$-dimensional vector spaces, it is invertible. For $g \in B$, $N^*(g) = N(h_0 g) = (\tilde{N} \circ {M}_{h_0}) (g + I_\alpha)$ so $\nu \circ (N^*)^{-1} = (\tilde{N} \circ {M}_{h_0})^{-1}$. Analogously we find $N_i(g) = (\tilde{N} \circ {M}_{x^{b_i}}) (g + I_\alpha)$. The theorem follows from $ (\nu \circ (N^*)^{-1} \circ N_i )(g) = ((\tilde{N} \circ {M}_{h_0})^{-1} \circ (\tilde{N} \circ {M}_{x^{b_i}})) (g+ I_\alpha) = ({M}_{h_0}^{-1} \circ {M}_{x^{b_i}} \circ \nu)(g).$ \end{proof} Theorem \ref{thm:simtrans} tells us that, identifying $B$ with $(S/I)_\alpha$, the homogeneous multiplication operators are given by $(N^*)^{-1} \circ N_i$. After fixing a basis $\mathcal{B}$ for $B$ the multiplication operators are commuting $\delta \times \delta$ matrices and we can compute their simultaneous diagonalization to find the values $\lambda_{ij} = z_j^{b_i}/h_0(z_j)$.
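In floating point arithmetic, the simultaneous diagonalization is commonly performed by diagonalizing a generic random linear combination of the commuting matrices and reusing its eigenvectors for all of them. The following sketch illustrates this step (function and variable names are ours, not those of the paper's Matlab implementation; it assumes the eigenvalues of the random combination are simple):

```python
import numpy as np

def simultaneous_eigenvalues(Ms, seed=1):
    """Eigenvalues lambda_{ij} of a family of commuting delta x delta
    matrices Ms, grouped per common eigenvector: row j of the output
    collects the eigenvalues of all Ms[i] at the j-th solution.
    A generic random linear combination of the Ms is diagonalized
    first; its eigenvector matrix Q then (generically) diagonalizes
    every Ms[i] simultaneously."""
    rng = np.random.default_rng(seed)
    c = rng.standard_normal(len(Ms))
    _, Q = np.linalg.eig(sum(ci * Mi for ci, Mi in zip(c, Ms)))
    Qinv = np.linalg.inv(Q)
    # lambda_{ij} is the j-th diagonal entry of Q^{-1} Ms[i] Q
    return np.column_stack([np.diag(Qinv @ Mi @ Q) for Mi in Ms])
```

In our setting the input matrices are the $(N^*)^{-1} N_i$, and row $j$ of the output gives the values $\lambda_{ij} = z_j^{b_i}/h_0(z_j)$ for the solution $\zeta_j$.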
We now show how the map $N$ can be computed from the input equations. Our strategy is based on techniques for computing Truncated Normal Forms (TNFs), as introduced in \cite{telen2017solving}. We use the notation $V = S_{\alpha + \alpha_0}$, $V_i = S_{\alpha + \alpha_0 - \alpha_i}$ and by the \textit{Resultant map} $\textup{Res}: V_1 \times \cdots \times V_n \rightarrow V$ we mean the linear map $$ (q_1, \ldots, q_n) \mapsto q_1f_1 + \ldots + q_nf_n. $$ When represented in matrix form, using monomial bases for the vector spaces involved, this map closely resembles the resultant matrices coming from Macaulay and toric resultants \cite[Chapters 3 and 7]{cox2}. Since $\textup{im} \textup{Res} = I_{\alpha+\alpha_0}$, the cokernel map of $\textup{Res}$ is a map $N : V \rightarrow \mathbb{C}^\delta \simeq V/\textup{im} \textup{Res}$ with the properties we need.
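Numerically, such a cokernel map can be read off from a singular value decomposition of the resultant matrix: the left singular vectors belonging to the $\delta$ smallest singular values span the orthogonal complement of $\textup{im} \, \textup{Res}$. A minimal sketch (names ours; it assumes $\delta$ is known, e.g.\ as the mixed volume):

```python
import numpy as np

def cokernel_map(Res, delta):
    """Matrix of a cokernel map N : V -> C^delta of the resultant map.
    The rows are the left singular vectors for the delta smallest
    singular values, so that N @ Res ~ 0, N has full rank delta, and
    (numerically) ker N = im Res."""
    U, _, _ = np.linalg.svd(Res)   # full SVD: U is square
    return U[:, -delta:].conj().T
```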
The next step is to find the homogeneous coordinates of $V_X(I)$ from the $\lambda_{ij}$. Suppose that $z_j \in U_{\sigma_j}$ for $\sigma_j \in \widetilde{\Sigma}_P$ and that $\alpha_0$ is such that $\sigma_j^\vee \cap M = \mathbb{N} \mathcal{M}_{\sigma_j} + \mathbb{Z} \mathcal{M}_{\sigma_j}^\perp$. By Corollary \ref{cor:orbiteq}, it remains to compute one point on the variety $V(x^{b_i} - \lambda_{ij}, i = 1, \ldots, n_{\alpha_0} )$ for $j = 1, \ldots, \delta$. If $\zeta_j \in T_X$, we can do this efficiently using only linear algebra as follows. Let $A = [b_1 ~ \cdots ~ b_{n_{\alpha_0}} ] \in \mathbb{Z}^{k \times n_{\alpha_0}}$ be the matrix of exponents and compute its Smith normal form: $U A V = S$ with $U,V$ unimodular and $S = [\textup{diag}(m_1, \ldots, m_r, 0, \ldots, 0) ~ 0 ] \in \mathbb{Z}^{k \times n_{\alpha_0}} $, where $m_i | m_{i+1}$. We make the substitution of variables $x_\ell = y_1^{U_{1 \ell}} \cdots y_k^{U_{k \ell}}$ to obtain the equivalent system of equations given by $y^{Ub_i} = \lambda_{ij}$. Applying the invertible transformation given by the matrix $V$, this simplifies to $$ y_\ell^{m_\ell} = \prod_{i=1}^{n_{\alpha_0}} \lambda_{ij}^{V_{i \ell}}, ~ \ell = 1, \ldots, r \quad \textup{ and } \quad 1 = \prod_{i=1}^{n_{\alpha_0}} \lambda_{ij}^{V_{i\ell}}, ~ r < \ell \leq k.$$ This imposes no conditions on $y_\ell, \ell > r$, so we can put $y_\ell = 1, \ell > r$. Taking the logarithm then shows that $$ \log y = [ \log y_1 ~ \cdots ~ \log y_k ] = \left [ w ~ 0_{k-r} \right ]$$ where $w = [\log \lambda_{1j} ~ \cdots \log \lambda_{n_{\alpha_0}j}] [V_{:,1} ~ \cdots ~ V_{:,r} ] \textup{diag}(1/m_1, \ldots, 1/m_r)$ and $0_{k-r}$ is a row vector of length $k-r$ with zero entries. 
To find the homogeneous coordinates, we only need to invert our change of coordinates and the logarithm: $$\log x = [\log x_1 ~ \cdots ~ \log x_k] = \log y ~ U, \qquad x_\ell = e^{\log x_\ell}, \ell = 1, \ldots, k .$$ Taking the logarithm has some advantages for the implementation: it reduces all computations to some matrix multiplications and it may prevent overflow. When $\zeta_j$ is not in the torus, some of the $\lambda_{ij}$ may be zero. In this case, to compute a point on
$V(x^{b_i} - \lambda_{ij}, i = 1, \ldots, n_{\alpha_0} )$, we may use a simple Newton iteration, for instance. In the \textit{nearly} degenerate situation, where $\lambda_{ij}$ is close to zero for some $i$, the approach above suffers from rounding errors. We take this into account by using the Smith normal form technique when $(\min_i |\lambda_{ij}|)/(\sum_{i=1}^{n_{\alpha_0}} |\lambda_{ij}|^2)^{1/2} > \textup{tol}$, where $|\cdot|$ denotes the modulus and $\textup{tol}$ is a predefined tolerance. This leads to Algorithm \ref{alg:coxcoords}. \begin{algorithm} \footnotesize \caption{Computes the Cox coordinates of the points defined by $I = \ideal{f_1, \ldots, f_n}$}\label{alg:coxcoords} \begin{algorithmic}[1] \STATE{ $\textup{Res} \gets \textup{Matrix of the resultant map $V_1 \times \cdots \times V_n \rightarrow V$}$ }\label{constres} \STATE{$N \gets \textup{Matrix of the cokernel $V \rightarrow \mathbb{C}^\delta$ of $\textup{Res}$}$} \STATE{$h_0 \gets \textup{Generic element of $S_{\alpha_0}$}$} \STATE{Construct a matrix of $N_{h_0}$}
\STATE{Find $B \subset S_\alpha$ such that $(N_{h_0})_{|B}$ is invertible }\label{choiceofB}
\STATE{$N^* \gets (N_{h_0})_{|B}$} \STATE{Construct a matrix of $N_i$, $1\leq i \leq n_{\alpha_0}$} \FOR{$i=1,\ldots,n_{\alpha_0}$} \STATE{ ${M}_{x^{b_i}/h_0} \gets (N^*)^{-1} N_i$} \ENDFOR \STATE{ Compute $\lambda_{ij}, 1\leq i \leq n_{\alpha_0}, 1 \leq j \leq \delta$ by sim.~diag.~of the ${M}_{x^{b_i}/h_0}$} \FOR{$j=1,\ldots,\delta$} \STATE{$\tilde{J}_j \gets \ideal{x^{b_i} - \lambda_{ij}, 1\leq i \leq n_{\alpha_0}} \subset S$}
\IF{$(\min_i |\lambda_{ij}|)/(\sum_{i=1}^{n_{\alpha_0}} |\lambda_{ij}|^2)^{1/2} > \textup{tol}$} \STATE{Find one point $z_j \in \mathbb{C}^k$ on $V(\tilde{J}_j)$ using SNF} \ELSE{} \STATE{Find one point $z_j \in \mathbb{C}^k$ on $V(\tilde{J}_j)$ using Newton iteration} \ENDIF \ENDFOR \STATE{\textbf{return} $z_1, \ldots, z_\delta$} \end{algorithmic} \end{algorithm}
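The SNF branch of Algorithm \ref{alg:coxcoords} implements the logarithmic computation described above. A minimal numerical sketch (function and variable names are ours; the Smith normal form data $U$, $V$, $m_1, \ldots, m_r$ is assumed to be precomputed, e.g.\ by a computer algebra system):

```python
import numpy as np

def point_from_eigenvalues(U, V, m, lam):
    """One point x on V(x^{b_i} - lam_i) via the Smith normal form
    U A V = [diag(m_1,...,m_r, 0,...,0)  0] of the exponent matrix
    A = [b_1 ... b_{n0}] (k x n0).  Everything is computed on
    (complex) logarithms, as in the text, to avoid overflow."""
    k, r = U.shape[0], len(m)
    log_lam = np.log(np.asarray(lam, dtype=complex))
    # y_l^{m_l} = prod_i lam_i^{V_{il}} for l <= r; set y_l = 1 for l > r
    w = (log_lam @ V[:, :r]) / np.asarray(m)
    log_y = np.concatenate([w, np.zeros(k - r)])
    return np.exp(log_y @ U)   # invert the substitution: log x = log y * U
```

For $m_\ell > 1$ the complex logarithm picks out one $m_\ell$-th root, which suffices here since only one point per orbit is needed.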
In line \ref{choiceofB} of the algorithm, the choice of the subspace $B$ is important for the numerical stability. A good choice is using QR factorization with optimal column pivoting as in \cite{telen2017stabilized, telen2017solving} which results in a basis for $(S/I)_\alpha$ consisting of monomials in $S$. An alternative is using the singular value decomposition, in which case $B$ is the orthogonal complement of $I_\alpha$ in $S_\alpha$ \cite[Section 3]{mourrain2018truncated}. We use the SVD for the experiments in this article.
Algorithm \ref{alg:coxcoords} requires some computations involving polytopes. If one is interested in solving many systems with the same structure, it is advantageous to do these computations in an `offline' phase. The `online' algorithm then takes a basis of $S_{\alpha_0}$, $S_\alpha$ and $S_{\alpha + \alpha_0}$, a facet representation of $P$ and $P_0$ and the mixed volume $\delta = \textup{MV}(P_1, \ldots, P_n)$ as inputs. The `offline' version of the algorithm computes all this information from the input equations, and generates an $\alpha_0$ such that $\mathbb{Z}(P_0 \cap M - m) = M$. This is enough to find (at least) all solutions in the torus by Corollary \ref{cor:orbiteq}.
To retrieve the coordinates in $(\mathbb{C}^*)^n$ of toric solutions from their homogeneous coordinates, we use the map \eqref{eq:GCQ}.
\begin{remark} \label{rem:complexity} We conclude this section with a remark on the complexity of Algorithm \ref{alg:coxcoords} as compared to the TNF algorithm of \cite{telen2017solving}. The first step in both algorithms is to compute the cokernel of a resultant map $\textup{Res}$. Since for both algorithms the monomials indexing the vector space $V$ in the definition of $\textup{Res}$ are the lattice points contained in a slightly enlarged version of the polytope $P = P_1 + \ldots + P_n$, this step takes roughly the same computation time for both algorithms. Even though the Cox ring has dimension $k > n$, the dimensions of its graded pieces correspond to the lattice points contained in $n$-dimensional polytopes. This is an important observation, because for larger problems, the computation of the cokernel of $\textup{Res}$ is the most expensive step of the algorithm. Next, both algorithms compute the multiplication matrices from this cokernel. This is more expensive for the algorithm in this paper: there are more multiplication maps. Another important difference is that for the TNF algorithm, the eigenvalues of the multiplication maps immediately give the coordinates of the solutions, whereas Algorithm \ref{alg:coxcoords} processes these eigenvalues to find the homogeneous coordinates (lines 12--19). We conclude that Algorithm \ref{alg:coxcoords} is computationally more expensive overall. This should be considered the price that is paid for being more robust in nearly degenerate situations, which is the main objective in this paper. However, the increase in complexity is not dramatic: systems with thousands of solutions can be solved within reasonable time (see Subsection \ref{subsec:generic}), and there is certainly room for performance optimization in the current Matlab implementation, which is tested in the next section. \end{remark}
\section{Examples} \label{sec:examples} Algorithm \ref{alg:coxcoords} is implemented in Matlab. Polymake is used for computations involving polytopes \cite{polymake:FPSAC_2009}, except for the mixed volume, which is computed using PHCpack \cite{verschelde1999algorithm}. In this section, we test the implementation on several examples and compare the results with those of some other polynomial system solvers. All computations are done in double precision arithmetic on an 8 GB RAM machine with an Intel Core i7-6820HQ CPU working at 2.70 GHz. To measure the quality of an approximate solution, we compute the \textit{residual} as defined in \cite[Section 7]{telen2017stabilized} as a measure for the relative backward error. In double precision arithmetic, a residual of order $10^{-16}$ is the best one can hope for. The goal of the experiments is to show that Algorithm \ref{alg:coxcoords} meets our objectives: it finds \textit{all} solutions with \textit{good accuracy} within reasonable time. In particular, it does so for (nearly) degenerate systems with solutions on or near the exceptional divisors of $X$ that cannot be solved by other state of the art solvers. \subsection{Points on $\mathscr{H}_2$} We finish our running example by using Algorithm \ref{alg:coxcoords} to compute homogeneous coordinates of the solutions of the system defined in Example \ref{ex:hirz1}. We use $\textup{tol} = 10^{-12}$, $\alpha = \alpha_1 + \alpha_2$. For $\alpha_0 = \alpha_2$, Algorithm \ref{alg:coxcoords} finds three solutions. All three residuals are of order $10^{-16}$.
To illustrate the results, we use the \textit{moment map} $$\mu: \mathbb{C}^k \setminus Z \rightarrow P : x \mapsto \frac{1}{\sum_{m \in P\cap M} |x^{F^\top m + a}|} \sum_{m \in P \cap M} |x^{F^\top m + a}| m,$$
where $|\cdot|$ denotes the modulus. The map $\mu$ is constant on $G$-orbits and takes a point $x \in \mathbb{C}^k \setminus Z$ to a convex combination of the lattice points of $P$. It has the property that torus invariant prime divisors are sent to their corresponding facets and $(\mathbb{C}^*)^k$ is sent to the interior of $P$. More information can be found in \cite[Section 4.2]{fulton1993introduction} and \cite[Section 2]{sottile2017ibadan}. Figure \ref{fig:hirzmoment} shows that two of the computed solutions lie on divisors and one is in the torus. The image under $\mu$ of all of the solutions must lie on an intersection of the images of $V(f_1) \setminus Z, V(f_2) \setminus Z$ (but not all intersections correspond to solutions). As an illustration, we have included the same picture for a system with more solutions in the right part of the same figure. The polytopes for this system are $P_1 = [0,4] \times [0,4]$ and $P_2 = 5 \Delta_2$ where $\Delta_2$ is the standard simplex. There are $\delta = 40$ solutions, 12 of them are real.
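The moment map can be evaluated directly from its defining formula. The following sketch (names ours) stores the lattice points of $P$ as rows of an array and computes the exponent vectors $F^\top m + a$, where $a$ collects the constants of the facet inequalities $\pair{u_i, m} + a_i \geq 0$:

```python
import numpy as np

def moment_map(x, lattice_points, F, a):
    """Moment map mu: homogeneous coordinates x in C^k \ Z are sent to
    a convex combination of the lattice points of P, with weight
    |x^{F^T m + a}| at lattice point m.  F is n x k (columns indexed by
    the facets), a is the length-k vector of facet constants."""
    ax = np.abs(np.asarray(x))
    M = np.asarray(lattice_points, dtype=float)  # rows: points of P cap M
    exps = M @ F + a                             # row for m is (F^T m + a)^T
    w = np.prod(ax ** exps, axis=1)
    return (w / w.sum()) @ M
```

For example, for the segment $P = [0,1]$ (facets $m \geq 0$ and $1 - m \geq 0$), a point with $x_2 = 0$ lies on the divisor of the second facet and is mapped to the lattice point $m = 1$.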
\begin{figure}
\caption{Left: images in $P$ of the real part of $V(f_1)$ (\ref{bluecurve}) and $V(f_2)$ (\ref{orangecurve}) from Example \ref{ex:hirz1} under the moment map $\mu$. The images of the computed real solutions are shown as black dots (\ref{sols}). Right: same picture for a different system.}
\label{bluecurve}
\label{orangecurve}
\label{sols}
\label{fig:hirzmoment}
\end{figure}
\subsection{A problem from computer vision} \label{subsec:vision} One of the so-called `minimal problems' in computer vision is the problem of estimating radial distortion from eight point correspondences in two images. In \cite{kukelova2007minimal}, Kukelova and Pajdla propose a formulation of this problem as a system of 3 polynomial equations in 3 unknowns. The Newton polytopes are visualized in Figure \ref{fig:polytopes8ptrad}. The mixed volume is $\delta = \textup{MV}(P_1,P_2,P_3) = 17$ and the matrix of facet normals is $$ F = \begin{bmatrix} 0 &-1& -1 &0& 1 &0 \\ 1 &-1 &-1 &0 &0 &0 \\ 0&0 &-1 &1 &0 &-1 \end{bmatrix},$$ so the Cox ring $S$ has dimension 6. We assign random real coefficients drawn from a standard normal distribution to all lattice points in the polytopes and solve the system using Algorithm \ref{alg:coxcoords}. We first run the offline version, which generates the polytope $P_0$. In this case, $P_0$ is the standard simplex. All 17 solutions are found with a residual of order $10^{-16}$ in about $0.1$~s (using the online version of the algorithm). \begin{figure}
\caption{Newton polytopes of the equations of the eight point radial distortion problem.}
\label{fig:polytopes8ptrad}
\end{figure}
To show the robustness of Algorithm \ref{alg:coxcoords} in the nearly degenerate case, i.e.\ the case where there are solutions on or near the torus invariant prime divisors, we perform the following experiment. Consider the lattice points $$\mathscr{F}_3 = \{m \in P_1\cap M ~|~ \pair{u_3,m}+3 = 0 \}, \qquad \mathscr{G}_3 = (P_1 \cap M) \setminus \mathscr{F}_3.$$ The points in $\mathscr{F}_3$ are the lattice points on the facet of $P_1$ corresponding to $u_3 = (-1,-1,-1)$. Set $$ \hat{g}_i = \sum_{m \in \mathscr{F}_3} c_{m,i} \chi^m + \sum_{m \in \mathscr{G}_3} c_{m,i} \chi^m, \qquad i = 1,2$$ with $c_{m,i}$ real numbers drawn from a standard normal distribution. Now let $\hat{f}_1 = \hat{g}_1$ and $$\hat{f}_2(e) = \sum_{m \in \mathscr{F}_3} (10^{-e} c_{m,2} + (1-10^{-e})c_{m,1}) \chi^m + \sum_{m \in \mathscr{G}_3} c_{m,2} \chi^m, e \in [0, \infty).$$ The equation $\hat{f}_2 = 0$ is parametrized by the real parameter $e$. The third equation $\hat{f}_3 = 0$ is chosen randomly. When $e = 0$, $\hat{f}_2 = \hat{g}_2$ and the system is generic, as before. When $e \rightarrow \infty$, the part of $\hat{f}_2$ corresponding to $\mathscr{F}_3$ converges to the part of $\hat{f}_1$ corresponding to $\mathscr{F}_3$, meaning that there will be solutions `at infinity' on the divisor $D_3$. We solve the system for $e = 0, 1/2, 1, 3/2, \ldots, 16$ and compute both the maximal residual $r_{\max}$ and the minimal residual $r_{\min}$ for the 17 solutions found by Algorithm \ref{alg:coxcoords} with $\textup{tol} = 10^{-4}$ and the solutions found by the toric version of the Truncated Normal Form (TNF) algorithm \cite{telen2017solving}. The TNF solver computes the multiplication matrices for the input equations (in the classical sense) using heuristically `the best possible basis' from a numerical point of view. The numerical results in \cite{telen2017solving,mourrain2018truncated} motivate the choice of this method as a reference. 
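The coefficient interpolation defining $\hat{f}_2(e)$ can be sketched as follows (hypothetical array layout, names ours; \texttt{on\_facet} flags the lattice points in $\mathscr{F}_3$):

```python
import numpy as np

def f2_hat_coeffs(c1, c2, on_facet, e):
    """Coefficients of f2_hat(e): on the facet F_3 the coefficient at m
    is 10^{-e} c_{m,2} + (1 - 10^{-e}) c_{m,1}; elsewhere it stays
    c_{m,2}.  As e grows, f2_hat matches f1_hat on the facet and some
    solutions drift to the divisor D_3."""
    t = 10.0 ** (-e)
    return np.where(on_facet, t * c2 + (1.0 - t) * c1, c2)
```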
The result of the experiment is shown in Figure \ref{fig:residuals8pt}. Note that not only the residuals of the solutions approaching the divisor deteriorate for the TNF algorithm. Accuracy is lost on \textit{all} solutions. The reason is that even for the `best' basis selected by this algorithm, the computation of the classical multiplication matrices is ill-conditioned because the system is nearly degenerate. Looking at the computed Cox coordinates, we see that for three of the solutions, the coordinate $x_3$ goes to zero as $e$ increases, so 3 out of 17 solutions approach the divisor $D_3$. \begin{figure}
\caption{Minimal and maximal residual for different values of the parameter $e$ for the parametrized eight point radial distortion problem, for Algorithm \ref{alg:coxcoords} (\ref{blueres}) and the toric TNF algorithm (\ref{orangeres}).}
\label{blueres}
\label{orangeres}
\label{fig:residuals8pt}
\end{figure}
One can perform the same experiment for any other facet of $P_1$. However, in order to find the solutions on the divisors, the polytope $P_0$ must be large enough and it might not be sufficient that its lattice points generate the lattice (Corollary \ref{cor:orbiteq}). Repeating the same experiment, but this time using $\mathscr{F}_2$ instead of $\mathscr{F}_3$, the solutions in the torus are still found with good accuracy by Algorithm \ref{alg:coxcoords}. Accuracy is lost on the solutions approaching $D_2$. The reason is that the standard simplex does not `show' this facet. Using $P_0=\textup{Conv}((0,0,0),(1,0,0),(0,1,0),(0,0,1),(1,0,1),(0,1,1),(0,0,2))$ we find homogeneous coordinates of all solutions.
\subsection{Generic problems} \label{subsec:generic} To give an idea of the computation time and the type of systems Algorithm \ref{alg:coxcoords} can handle, we perform the following experiment. Consider the parameters $n$, NZ, $d_{\max} \in \mathbb{N} \setminus \{0\}$. For $j = 1,\ldots, n$ we generate a set $\mathscr{A}_j \subset \mathbb{Z}^n$ of NZ lattice points by selecting NZ points in $\mathbb{N}^n$ with coordinates drawn uniformly from $\{0,1,\ldots, d_{\max}\}$ and shifting these points by subtracting the first point from all other points. Then for each $m \in \mathscr{A}_j$ we generate a random real number $c_{m,j}$ drawn from a standard normal distribution and we set $$ \hat{f}_j = \sum_{m \in \mathscr{A}_j} c_{m,j} \chi^m.$$ If two or more points $m \in \mathscr{A}_j$ coincide, we add the $c_{m,j}$ together, so NZ is an upper bound for the number of terms in $\hat{f}_j$. We use Algorithm \ref{alg:coxcoords} to compute the Cox coordinates of the solutions of the resulting system and their image under \eqref{eq:GCQ}. In Table \ref{tab:genericmixed} we report the number of solutions $\delta$, the dimension $k$ of the Cox ring, the number $n_{\alpha_0}$ for the automatically generated $\alpha_0$, and, for both the offline and the online solver, the maximal residual $r_{\max}$, the geometric mean of the residuals of all solutions $r_{\textup{mean}}$ and the computation time $t$ (in seconds). The residuals are represented by $D_{\textup{mean}}= \ceil{-\log_{10} r_{\textup{mean}}}$ and $D_{\max} = \ceil{-\log_{10} r_{\max}}$. \begin{table}[] \footnotesize \centering
\begin{tabular}{ccc|ccc|ccc|ccc} \multirow{2}{*}{$n$} & \multirow{2}{*}{NZ} & \multirow{2}{*}{$d_{\max}$} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{$k$} & \multirow{2}{*}{$n_{\alpha_0}$} & \multicolumn{3}{c}{OFFLINE} & \multicolumn{3}{c}{ONLINE} \\
& & & & & & $t$ & $D_{\textup{mean}}$ & $D_{\max}$ & $t$ & $D_{\textup{mean}}$ & $D_{\max}$ \\ \hline 2 & 20 & 10 & 144 & 12 & 3 & 1.9\text{e+}1 & 15 & 14 & 2.0\text{e-}1 & 15 & 14 \\ 2 & 20 & 20 & 505 & 14 & 4 & 2.4\text{e+}1 & 14 & 12 & 1.9\text{e+}0 & 14 & 11 \\ 2 & 20 & 30 & 1268 & 15 & 3 & 5.8\text{e+}1 & 14 & 12 & 1.9\text{e+}1 & 14 & 12 \\ 2 & 20 & 40 & 2390 & 16 & 3 & 2.6\text{e+}2 & 14 & 11 & 1.4\text{e+}2 & 14 & 13 \\ 2 & 20 & 50 & 3275 & 16 & 3 & 3.7\text{e+}2 & 14 & 12 & 2.3\text{e+}2 & 14 & 11 \\ 2 & 20 & 60 & 4469 & 12 & 3 & 7.8\text{e+}2 & 11 & 7 & 5.2\text{e+}2 & 11 & 8 \\ 2 & 40 & 30 & 1522 & 15 & 3 & 9.5\text{e+}1 & 14 & 11 & 3.4\text{e+}1 & 14 & 10 \\ 2 & 60 & 30 & 1670 & 15 & 4 & 1.2\text{e+}2 & 14 & 12 & 5.3\text{e+}1 & 14 & 12 \\ 2 & 200 & 30 & 1672 & 10 & 3 & 1.1\text{e+}2 & 15 & 10 & 6.0\text{e+}1 & 15 & 9 \\ \hline 3 & 5 & 3 & 18 & 21 & 4 & 2.2\text{e+}1 & 14 & 12 & 1.1\text{e-}1 & 15 & 13 \\ 3 & 5 & 5 & 136 & 36 & 4 & 3.9\text{e+}1 & 14 & 9 & 6.3\text{e-}1 & 14 & 13 \\ 3 & 10 & 5 & 190 & 60 & 5 & 3.5\text{e+}1 & 15 & 7 & 2.1\text{e+}0 & 15 & 11 \\ 3 & 10 & 7 & 592 & 63 & 5 & 1.3\text{e+}2 & 14 & 10 & 3.2\text{e+}1 & 15 & 7 \\ \hline 4 & 5 & 3 & 81 & 106 & 6 & 6.9\text{e+}1 & 14 & 11 & 3.7\text{e+}1 & 14 & 11 \end{tabular} \caption{Results for generic systems with mixed supports.} \label{tab:genericmixed} \end{table} It follows from Bernstein's second theorem \cite{bernstein,hustu} that solutions on divisors can only occur if the involved polytopes have common tropisms corresponding to positive dimensional faces. An important case in which this may happen is the unmixed case in which all input polytopes are equal. We repeat the experiment, but this time we keep the supports $\mathscr{A} = \mathscr{A}_1 = \ldots = \mathscr{A}_n$ fixed. Table \ref{tab:genericunmixed} shows some results. 
Of course, for this type of systems, the dimension of the Cox ring (or, equivalently, the number of facets of the Minkowski sum of the input polytopes) is lower and the system of binomial equations from Corollary \ref{cor:orbiteq} is easier to solve. \begin{table}[h!] \footnotesize \centering
\begin{tabular}{ccc|ccc|ccc|ccc} \multirow{2}{*}{$n$} & \multirow{2}{*}{NZ} & \multirow{2}{*}{$d_{\max}$} & \multirow{2}{*}{$\delta$} & \multirow{2}{*}{$k$} & \multirow{2}{*}{$n_{\alpha_0}$} & \multicolumn{3}{c}{OFFLINE} & \multicolumn{3}{c}{ONLINE} \\
& & & & & & $t$ & $D_{\textup{mean}}$ & $D_{\max}$ & $t$ & $D_{\textup{mean}}$ & $D_{\max}$ \\ \hline 2 & 20 & 60 & 3638 & 7 & 3 & 5.8\text{e+}2 & 13 & 11 & 3.8\text{e+}2 & 13 & 10 \\ \hline 3 & 10 & 10 & 834 & 14 & 6 & 3.5\text{e+}2 & 13 & 12 & 1.9\text{e+}2 & 13 & 12 \\ \hline 4 & 6 & 3 & 15 & 7 & 8 & 3.3\text{e+}1 & 15 & 15 & 8.4\text{e-}1 & 15 & 14 \\ 4 & 6 & 4 & 28 & 6 & 11 & 4.3\text{e+}1 & 14 & 13 & 5.4\text{e+}0 & 15 & 14 \\ 4 & 6 & 5 & 216 & 9 & 7 & 5.7\text{e+}2 & 12 & 11 & 2.7\text{e+}2 & 12 & 11 \\ 4 & 6 & 6 & 339 & 8 & 6 & 1.5\text{e+}3 & 6 & 4 & 2.0\text{e+}3 & 6 & 5 \\ \hline 5 & 6 & 3 & 10 & 6 & 8 & 7.5\text{e+}1 & 15 & 14 & 1.0\text{e+}1 & 15 & 15 \end{tabular} \caption{Results for generic systems with unmixed supports.} \label{tab:genericunmixed} \end{table}
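The random test systems of this subsection can be generated as follows (a sketch, names ours; duplicate lattice points are left to be merged by summing coefficients, as described above):

```python
import numpy as np

def random_sparse_system(n, NZ, dmax, unmixed=False, seed=0):
    """Supports and coefficients of the test systems: NZ exponent
    vectors per equation with entries uniform in {0,...,dmax}, shifted
    so that the first point is the origin, with standard normal
    coefficients.  For the unmixed experiment all equations share one
    support."""
    rng = np.random.default_rng(seed)
    supports, coeffs = [], []
    for j in range(n):
        if j == 0 or not unmixed:
            A = rng.integers(0, dmax + 1, size=(NZ, n))
            A = A - A[0]                 # translate the first point to 0
        supports.append(A)
        coeffs.append(rng.standard_normal(NZ))
    return supports, coeffs
```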
\subsection{Comparison with homotopy methods} \label{subsec:exphomotopy} As discussed in the introduction, homotopy continuation methods provide very successful numerical solvers for systems of small degrees in large numbers of variables. Algebraic methods prove to be more robust in the case of high degrees and small dimensions, see for instance the numerical experiments in \cite{telen2017solving}. In this sense, these two important classes of numerical solvers are complementary to each other. As an illustration, we repeat the mixed experiment from Subsection \ref{subsec:generic} for three challenging 2-dimensional systems and compare the results with two homotopy implementations that are considered state of the art: Bertini (v1.6) \cite{bates2013numerically} and PHCpack (v2.4.64) \cite{verschelde1999algorithm}. For both these solvers, we use standard double precision settings and the backward errors of the computed solutions are of the order of the machine precision because these solvers intrinsically use Newton refinement. The results are reported in Table \ref{tab:homotopy}. For each solver, the number $\hat{\delta}$ is the number of correctly computed solutions (with residual $<10^{-9}$). \begin{table}[] \centering \footnotesize
\begin{tabular}{ccc|c|ccccc|cc|cc} $n$ & NZ & $d_{\max}$ & $\delta$ & \multicolumn{5}{c}{Algorithm \ref{alg:coxcoords}} & \multicolumn{2}{c}{PHCpack} & \multicolumn{2}{c}{Bertini} \\
& & & & $k$ & $n_{\alpha_0}$ & $t_{\textup{OFFLINE}}$ & $t_{\textup{ONLINE}}$ & $\hat{\delta}$ & $t$ & $\hat{\delta}$ & $t$ & $\hat{\delta}$ \\ \hline 2 & 20 & 20 & 622 & 14 & 3 & 4.4e+1 & 2.8e+0 & 622 & 1.7e+0 & 597 & 2.2e+1 & 605 \\ 2 & 200 & 30 & 1700 & 14 & 3 & 1.5e+2 & 7.1e+1 & 1700 & 1.3e+1 & 1671 & 4.9e+2 & 1119 \\ 2 & 800 & 40 & 3117 & 9 & 3 & 3.5e+2 & 2.3e+2 & 3117 & 7.7e+1 & 3055 & 7.6e+3 & 2832 \end{tabular} \caption{Results for generic systems using Algorithm \ref{alg:coxcoords} and the homotopy packages PHCpack and Bertini.} \label{tab:homotopy} \end{table} Note that both homotopy solvers miss some solutions for all these problems. PHCpack is very efficient for this type of generic problems because it implements polyhedral homotopies \cite{hustu,verschelde1994homotopies}. This means in practice that exactly $\delta$ paths are tracked. Bertini tracks 1258, 3135 and 6320 paths for the first, second and third problem respectively. This experiment shows that even for generic systems, for large $\delta$ and small $n$ the state of the art homotopy algorithms do not find all solutions. The method introduced in this paper aims at solving (nearly) degenerate, non-generic systems. In practice, this often means that there are `large solutions'. To show that such situations cause trouble for homotopy methods, even for small $\delta$, we consider the experiment of Subsection \ref{subsec:vision}. Solving the system for $e = 4.5$ using Algorithm \ref{alg:coxcoords} we find three solutions whose coordinates have a modulus of order $10^4$. PHCpack and Bertini both find only 14 solutions (the homotopy solvers give up on the paths converging to the `large solutions'). 
\section{Conclusion} \label{sec:conclusion} We have presented a toric eigenvalue-eigenvector theorem that allows us to compute homogeneous coordinates of solutions of systems of Laurent polynomial equations (satisfying the assumptions in Section \ref{sec:setup}) on a natural toric compactification $X$ of $(\mathbb{C}^*)^n$. This results in a numerical linear algebra based algorithm that proves to be robust in the case of (nearly) degenerate systems with solutions on the torus invariant prime divisors. The algorithm is particularly successful for small dimensions $n$ and large degrees. It relies on a conjecture related to the regularity of $I$ (Conjecture \ref{conj}), which is checked numerically to be true in all of the presented experiments and supported by some weaker results in Section \ref{sec:lagreg}.
\section*{Acknowledgments} I would like to thank David Cox for his many useful comments on an earlier version of this paper. I also want to thank Mateusz Michalek and Milena Wrobel for our discussions that led to Theorems \ref{thm:koszul1} and \ref{thm:koszul2} and their proofs, Tomas Pajdla and Zuzana Kukelova for suggesting the problem of Subsection \ref{subsec:vision}, Bernd Sturmfels for suggesting the title and Wouter Castryck, Alexander Lemmens, Bernard Mourrain, Marc Van Barel and Wim Veys for fruitful discussions. I am grateful to an anonymous referee for their detailed and useful suggestions.
\footnotesize
\end{document}
\begin{document}
\begin{frontmatter}
\title{Conditional propagation of chaos for mean field systems of interacting neurons}
\runtitle{Conditional propagation of chaos}
\begin{aug}
\author{\fnms{Xavier} \snm{Erny}\thanksref{m4}\ead[label=e4]{xavier.erny@univ-evry.fr}}, \author{\fnms{Eva} \snm{L\"ocherbach}\thanksref{m2}\ead[label=e5]{locherbach70@gmail.com}} \and \author{\fnms{Dasha} \snm{Loukianova}\thanksref{m4} \ead[label=e4]{dasha.loukianova@univ-evry.fr}}
\address{
\thanksmark{m4}Universit\'e Paris-Saclay, CNRS, Univ Evry, Laboratoire de Math\'ematiques et Mod\'elisation d'Evry, 91037, Evry, France\\
\thanksmark{m2}Statistique, Analyse et Mod\'elisation Multidisciplinaire, Universit\'e Paris 1 Panth\'eon-Sorbonne, EA 4543 et FR FP2M 2036 CNRS}
\runauthor{X. Erny et al.}
\end{aug}
\begin{abstract} We study the stochastic system of interacting neurons introduced in
\cite{de_masi_hydrodynamic_2015} and in \cite{fournier_toy_2016} in a diffusive scaling. The system consists of $N$ neurons, each spiking randomly with rate depending on its membrane potential. At its spiking time, the potential of the spiking neuron is reset to $0$ and all other neurons receive an additional amount of potential which is a centred random variable of order $ 1 / \sqrt{N}.$ In between successive spikes, each neuron's potential follows a deterministic flow. We prove the convergence of the system, as $N \to \infty$, to a limit nonlinear jumping stochastic differential equation driven by Poisson random measure and an additional Brownian motion $W$ which is created by the central limit theorem. This Brownian motion is underlying each particle's motion and induces a common noise factor for all neurons in the limit system. Conditionally on $W,$ the different neurons are independent in the limit system. This is the {\it conditional propagation of chaos} property. We prove the well-posedness of the limit equation by adapting the ideas of \cite{graham_mckean-vlasov_1992-1} to our frame. To prove the convergence in distribution of the finite system to the limit system, we introduce a new martingale problem that is well suited for our framework. The uniqueness of the limit is deduced from the exchangeability of the underlying system. \end{abstract}
\begin{keyword}[class=MSC]
\kwd{60J75}
\kwd{60K35}
\kwd{60G55}
\kwd{60G09}
\end{keyword}
\begin{keyword}
\kwd{Multivariate nonlinear Hawkes processes with variable length memory}
\kwd{Mean field interaction}
\kwd{Piecewise deterministic Markov processes}
\kwd{Interacting particle systems}
\kwd{Propagation of chaos}
\kwd{Exchangeability}
\kwd{Hewitt Savage theorem} \end{keyword}
\end{frontmatter}
\section*{Introduction}
This paper is devoted to the study of the Markov process $X^N_t = (X^{N, 1 }_t, \ldots , X^{N, N}_t )$ taking values in $\r^N$ and having generator $A^N$ which is defined for any smooth test function $ \varphi : \R^N \to \R $ by $$ A^N \varphi ( x) = - \alpha \sum_{i=1}^N \partial_{x^i} \varphi (x) x^i + \sum_{i=1}^N f (x^i) \int_\R \nu ( du ) \left( \varphi ( x - x^i e_i + \sum_{j\neq i } \frac{u}{\sqrt{N}} e_j ) - \varphi ( x) \right) , $$ where $ x= (x^1, \ldots, x^N) $ and where $ e_j $ denotes the $j-$th unit vector in $ \R^N.$ In the above formula, $ \alpha > 0 $ is a fixed parameter and $ \nu $ is a centred probability measure on $\R$ having a second moment.
Informally, the process $(X^{N,j})_{1\leq j\leq N}$ solves \begin{equation} \label{eq:dynintro} X^{N, i}_t = X^{N,i}_0 - \alpha \int_0^t X^{N, i}_s ds - \int_0^t X^{N, i}_{s-} dZ^{N,i}_s+\frac{1}{\sqrt{N}}\sum_{ j \neq i } \int_0^t U^j(s)dZ^{N,j}_s, \end{equation} where $U^j(s)$ are i.i.d. centred random variables distributed according to $ \nu , $ and where for each $1\leq j\leq N,$ $Z^{N,j}$ is a simple point process on $\r_+$ having stochastic intensity $s\mapsto f\ll(X^{N,j}_{s-}\rr).$
The particle system \eqref{eq:dynintro} is a version of the model of interacting neurons considered in \cite{de_masi_hydrodynamic_2015}, inspired by \cite{galves_infinite_2013}, and then further studied in \cite{fournier_toy_2016} and \cite{cormier_long_2019}. The system consists of $N$ interacting neurons. In \eqref{eq:dynintro}, $Z^{N,j}_t$ represents the number of spikes emitted by the neuron~$j$ in the interval~$[0,t]$ and $X^{N,j}_t$ the membrane potential of the neuron~$j$ at time~$t$. Spiking occurs randomly following a point process of rate $f (x) $ for any neuron whose membrane potential equals $x.$ Each time a neuron emits a spike, all other neurons receive an additional amount of potential. In \cite{de_masi_hydrodynamic_2015}, \cite{fournier_toy_2016} and \cite{cormier_long_2019} this amount is of order $N^{-1}, $ leading to classical mean field limits as $ N \to \infty .$ In contrast to this, in the present article we study a {\it diffusive scaling} where each neuron $j$ receives the amount $U/\sqrt{N}$ at spike times $t$ of neuron $i, i \neq j,$ where $U \sim \nu $ is a random variable. The variable $U$ is centred, modeling the fact that the synaptic weights are balanced. Moreover, right after its spike, the potential of the spiking neuron~$i$ is reset to~0, interpreted as resting potential. Finally, in between successive spikes, each neuron has a loss of potential of rate~$\alpha$.
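As an illustration of these dynamics, the following Euler discretization simulates the finite system \eqref{eq:dynintro}. This sketch is not part of the paper's analysis; $\nu$ is taken to be standard normal and the step $dt$ is assumed small enough that $f(x)\,dt \leq 1$:

```python
import numpy as np

def simulate_network(N, T, dt, f, alpha, x0, seed=0):
    """Euler scheme for the N-neuron system: each neuron spikes on
    [t, t+dt) with probability ~ f(X_i) dt; at a spike, X_i resets to 0
    and every other neuron receives U/sqrt(N), with U ~ nu (standard
    normal here, an illustrative centred choice)."""
    rng = np.random.default_rng(seed)
    X = np.full(N, float(x0))
    for _ in range(int(T / dt)):
        spikes = rng.random(N) < f(X) * dt          # assumes f(X)*dt <= 1
        kicks = rng.standard_normal(N) * spikes     # U^j at spike times
        X = X - alpha * X * dt + (kicks.sum() - kicks) / np.sqrt(N)
        X[spikes] = 0.0                             # reset spiking neurons
    return X
```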
Equations similar to \eqref{eq:dynintro} appear also in the frame of multivariate Hawkes processes with mean field interactions. Indeed, if $\ll(Z^{N,i}\rr)_{1\leq i\leq N}$ is a multivariate Hawkes process where the stochastic intensity of each $Z^{N,i}$ is given by $f\ll(X^N_{t-}\rr)_t$ with \begin{equation}\label{eq:Xold} X^{N}_t= e^{- \alpha t } X^N_0 + \frac{1}{\sqrt{N}}\sum_{j=1}^N\int_{0}^te^{-\alpha(t-s)}U^j(s)dZ^{N,j}_s, \end{equation} then $X^N$ satisfies $$X_t^N=X_0^N-\alpha \int_0^tX_s^Nds+\frac{1}{\sqrt{N}}\sum_{j=1}^N\int_0^tU^j(s)dZ^{N,j}_s,$$ which corresponds to equation \eqref{eq:dynintro} without the big jumps, i.e.\ without the reset to $0$ after each spike.
The above system of interacting Hawkes processes with intensity given by \eqref{eq:Xold} has been studied in our previous paper \cite{erny_mean_2019}. There we have shown firstly that $X^N$ converges in distribution in $D(\r_+,\r)$ to a limit process $\bar{X}$ solving \begin{equation} \label{barX} d\bar{X}_t=-\alpha\bar{X}_tdt+\sigma\sqrt{f\ll(\bar{X}_t\rr)}dW_t , \end{equation} and secondly that the sequence of multivariate counting processes $\ll(Z^{N,i}\rr)_i$ converges in distribution in $D(\r_+,\r)^{\n^*}$ to a limit sequence of counting processes $\ll(\bar{Z}^i\rr)_i .$ Every $\bar{Z}^{i} $ is driven by its own Poisson random measure and has the same intensity $\ll(f(\bar{X}_{t-}) \rr)_t ,$ where $\bar{X}$ is the strong solution of~\eqref{barX} with respect to some Brownian motion~$W$. Consequently, the processes $\bar{Z}^i$ $(i\geq 1)$ are conditionally independent given the Brownian motion~$W.$
In the present paper we add the reset term in~\eqref{eq:dynintro} that forces the potential $X^{N,i}$ of neuron $i$ to go back to~0 at each jump time of~$Z^{N,i}$. This models the well-known biological fact that right after its spike, the membrane potential of the spiking neuron is reset to a resting potential. From a mathematical point of view, this reset to $0$ induces a de-synchronization of the processes $X^{N,i}$ ($1\leq i\leq N$). In terms of Hawkes processes, it means that in \eqref{eq:Xold}, the process $ X^N_t$ has been replaced by $$X^{N, i }_t = \frac{1}{\sqrt{N}}\sum_{j=1}^N\int_{L^i_t }^te^{-\alpha(t-s)}U^j(s)dZ^{N,j}_s + e^{ - \alpha t } X_0^{N, i } \indiq_{ L_t^i = 0} , $$ where $ L^i_t = \sup \{ 0\leq s \le t : \Delta Z^{N, i } _s = 1 \} $ is the last spiking time of neuron $i$ before time $t,$ with the convention $ \sup \emptyset := 0.$ Thus the integral over the past, starting from $0 $ in \eqref{eq:Xold}, is replaced by an integral starting at the last jump time before the present time. Such processes are said to be of {\it variable length memory}, in the spirit of \cite{rissanen_universal_1983}. They are the continuous-time analogues of the model considered in \cite{galves_infinite_2013}, and we are thus considering {\it multivariate Hawkes processes with mean field interactions and variable length memory}. As a consequence, contrary to the situation in \cite{erny_mean_2019}, the point processes $Z^{N,i}$ ($1\leq i\leq N$) do not share the same stochastic intensity. Moreover, the reset term in \eqref{eq:dynintro} is a jump term that survives in the limit $ N \to \infty .$
Before introducing the exact limit equation for the system~\eqref{eq:dynintro}, let us explain informally how the limit particle system associated to $\ll(X^{N,i}\rr)_{1\leq i\leq N}$ should a priori look like. Suppose for the moment that we already know that there exists a process $ (\limY^1, \limY^2 , \limY^3, \ldots ) \in \D ( \R_+, \R)^{\n^*} $ such that for all $ K > 0, $ the weak convergence $ {\mathcal L }(X^{N, 1 }, \ldots , X^{N, K} ) \to {\mathcal L} ( \limY^1, \ldots, \limY^K) $ in $\D (\R_+, \R )^K ,$ as $ N \to \infty ,$ holds. In equation~\eqref{eq:dynintro} the only term that depends on~$N$ is the martingale term which is approximately given by $$M_t^N=\frac{1}{\sqrt{N}}\sum_{j=1}^N\int_0^tU^j(s)dZ^{N,j}_s.$$ Then in the infinite neuron model, each process $\bar{X}^i$ should solve the equation~\eqref{eq:dynintro}, where the term $M_t^N$ is replaced by $M_t:=\underset{N\to\infty}{\lim}M_t^N$. Because of the scaling in $N^{-1/2},$ the limit martingale $M_t$ will be a stochastic integral with respect to some Brownian motion, and its variance is given by the limit of $$\esp{ (M^N_t )^2} = \sigma^2 \int_0^t \esp{\frac1N \sum_{j=1}^N f ( X_s^{N, j } ) } ds ,$$ where $\sigma^2$ is the variance of $U^j(s).$ Therefore, the limit martingale (if it exists) must be of the form $$M_t=\sigma\int_0^t\sqrt{\underset{N\to\infty}{\lim}\frac{1}{N}\sum_{j=1}^Nf\ll(X_s^{N,j}\rr)} \, dW_s=\sigma\int_0^t\sqrt{\underset{N\to\infty}{\lim}\mu^N_s(f)}dW_s,$$ where $\mu_s^N$ is the empirical measure of the system $\ll(X^{N,j}_s\rr)_{1\leq j\leq N}$ and $W$ is a one-dimensional standard Brownian motion.
Since the law of the $N-$particle system $ (X^{N, 1}, \ldots, X^{N, N} ) $ is symmetric, the law of the limit system $ \limY = (\limY^1, \limY^2 , \limY^3, \ldots ) $ must be exchangeable, that is, for all finite permutations $\sigma, $ we have that ${\mathcal L} ( \limY^{\sigma ( 1) }, \limY^{\sigma ( 2) } , \ldots ) = {\mathcal L} (\limY).$ In particular, the theorem of Hewitt-Savage, see \cite{hewitt_symmetric_1955}, implies that the random limit \begin{equation}\label{eq:limitmu}
\mu_s := \lim_{N\to \infty}\frac1N \sum_{i=1}^N \delta_{\limY^i_s } \end{equation} exists. Supposing that $\mu^N_s$ converges, it necessarily converges towards $\mu_s.$ Therefore, $\limY $ should solve the limit system \begin{equation} \label{eq:dynlimintro} \limY^i_t = \limY^i_0 -\alpha \int_0^t \limY^i_s ds - \int_0^t \limY^i_{s- } d\bar{Z}^{i}_s + \sigma \int_0^t \sqrt{ \mu_s ( f) } d W_s , i \in \N, \end{equation} where each $ \bar{Z}^i $ has intensity $ t \mapsto f ( \limY^i_{t- }),$ and where $\mu_s$ is given by \eqref{eq:limitmu}.
The above arguments are made rigorous in Sections \ref{sec:21} and \ref{sec:22} below.
Let us briefly discuss the form of the limit equation \eqref{eq:dynlimintro}. Analogously to \cite{erny_mean_2019}, the scaling in $N^{-1/2}$ in~\eqref{eq:dynintro} creates a Brownian motion~$W$ in the limit system~\eqref{eq:dynlimintro}. We will show that the presence of this Brownian motion entails a {\it conditional propagation of chaos}, that is, the conditional independence of the particles given~$W$. In particular, the limit measure $\mu_s$ will be random. This differs from the classical framework, where the scaling is in $N^{-1}$ (see e.g. \cite{delattre_hawkes_2016}, \cite{ditlevsen_multi-class_2017} in the framework of Hawkes processes, and \cite{de_masi_hydrodynamic_2015}, \cite{fournier_toy_2016} and \cite{cormier_long_2019} in the framework of systems of interacting neurons), leading to a deterministic limit measure $ \mu_s $ and to the true propagation of chaos property implying that the particles of the limit system are independent.
This is not the first time that conditional propagation of chaos is studied in the literature; it has already been considered e.g. in \cite{carmona_mean_2016}, \cite{coghi_propagation_2016} and \cite{dermoune_propagation_2003}. But in these papers the common noise, represented by a common (possibly infinite dimensional) Brownian motion, is already present at the level of the finite particle system, the mean field interactions act on the drift of each particle, and the scaling is the classical one in~$N^{-1}.$ Contrary to this, in our model, this common Brownian motion, leading to conditional propagation of chaos, is only present in the limit, and it is created by the central limit theorem as a consequence of the joint action of the small jumps of the finite size particle system. Moreover, in our model, the interactions survive as a variance term in the limit system due to the diffusive scaling in $N^{-1/2}.$
Now let us discuss the form of $\mu_s$, which is the limit of the empirical measures of the limit system~$\ll(\limY^i_s\rr)_{i\geq 1}$. The theorem of Hewitt-Savage, \cite{hewitt_symmetric_1955}, implies that the law of $\ll(\limY_s^i\rr)_{i\geq 1}$ is a mixture directed by the law of $\mu_s$. As has been remarked in \cite{carmona_mean_2016} and \cite{coghi_propagation_2016}, this conditioning reflects the dependencies between the particles.
We will show that the variables $\limY^i$ are conditionally independent given the Brownian motion $W.$ As a consequence, $\mu_s $ will be shown to be the conditional law of the solution given the Brownian motion, that is, $P-$almost surely, \begin{equation}\label{eq:limitmudef}
\mu_s ( \cdot ) = P ( \limY^i_s \in \cdot | (W_t)_{ 0 \le t \le s } ) = P( \limY^i_s \in \cdot | W ) , \end{equation} for any $ i \in \N .$ Equation \eqref{eq:dynlimintro} together with \eqref{eq:limitmudef} gives a precise definition of the limit system.
The nonlinear SDE \eqref{eq:dynlimintro} is not clearly well-posed, and our first main result, Theorem \ref{prop:42}, gives appropriate conditions on the system that guarantee pathwise uniqueness and the existence of a strong solution to \eqref{eq:dynlimintro}.
We then prove, in Sections \ref{sec:21} and \ref{sec:22}, our main Theorem \ref{convergencemuN} stating the convergence in distribution of the sequence of empirical measures $ \mu^N=N^{-1}\sum_{i=1}^N \delta_{(X^{N,i}_t)_{t\geq 0}}, $ in ${\mathcal P} (D(\r_+,\r)),$ to the random limit $ \mu = P ( (\limY_t)_{t\geq 0} \in \cdot | W) .$ To do so, we first prove that under suitable conditions on the parameters of the system, the sequence $ \mu^N$ is tight (see Proposition \ref{mutight} below). We then follow a classical road and identify every possible limit as solution of a martingale problem. Since the random limit measure $ \mu $ will only be the directing measure of the limit system (that is, the conditional law of each coordinate, but not its law), this martingale problem is not a classical one. It is in particular designed to reflect the correlation between the particles and to describe all possible limits of couples of neurons.
Classical representation theorems imply that any coordinate of the limit process must satisfy an equation of the type \eqref{eq:dynlimintro}. The fact that our martingale problem describes correlations within couples of neurons allows us to show that each coordinate is driven by its own Poisson random measure and that all coordinates are driven by the same underlying Brownian motion $W.$ But it is not yet clear that $\mu_s $ is of the form \eqref{eq:limitmudef}. In other words, it has to be proven that the only common randomness is the one present in the driving Brownian motion $W.$ To prove this last point, we introduce an auxiliary particle system which is a mean field particle version of the limit system, constructed with the same underlying Brownian motion, and we provide an explicit control on the distance (with respect to a particular $L^1 -$norm) between the two systems.
Let us finally mention that the random limit measure $ \mu $ satisfies the following nonlinear stochastic PDE in weak form: for any test function $ \phi \in C^2_b (\R),$ the set of $C^2$-functions on $\R$ such that $\phi,$ $\phi'$ and $\phi''$ are bounded, for any $t\geq 0$, \begin{multline*} \int_\R \phi (x) \mu_t (dx) = \int_\R \phi (x) \mu_0 (dx) + \int_0^t \left( \int_\R \phi' (x) \mu_s (dx) \right) \, \sqrt{\mu_s (f) } \sigma d W_s \\ +\int_0^t \int_\R \Big([ \phi ( 0) - \phi ( x) ] f(x) - \alpha \phi'(x) x + \frac12 \sigma^2 \phi'' (x) \mu_s (f) \Big) \mu_s ( dx)ds . \end{multline*}
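For instance, applying this identity formally to $\phi(x) = x$ (which is unbounded, so that this is only a heuristic computation, to be justified by a truncation argument), and writing $m_t := \int_\R x \, \mu_t (dx)$ for the conditional mean potential, we obtain, since $\phi'' \equiv 0$ and $\phi(0) - \phi(x) = -x,$ $$ dm_t = - \Big( \alpha m_t + \int_\R x f(x) \, \mu_t (dx) \Big) dt + \sigma \sqrt{\mu_t (f)} \, dW_t ; $$ in particular, the conditional mean is itself an It\^o process driven by the common Brownian motion $W.$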
{\bf Organisation of the paper.} In Section~\ref{secnota}, we state the assumptions and formulate the main results. Section~\ref{secthm14} is devoted to the proof of the convergence of $\mu^N:=\frac1N\sum_{j=1}^N\delta_{X^{N,j}}$ (Theorem~\ref{convergencemuN}). In particular, we introduce our new martingale problem in Section \ref{sec:22} and prove the uniqueness of the limit law in Theorem \ref{uniquelimit}. Finally, in the Appendix, we prove some auxiliary results.
\section{Notation, Model and main results} \label{secnota}
\subsection{Notation} We use the following notation throughout the paper.
If $E$ is a metric space, we denote by: \begin{itemize} \item $\mathcal{P}(E)$ the space of probability measures on $E$ endowed with the topology of weak convergence, \item $C_b^n(E)$ the set of functions $g$ which are $n$ times continuously differentiable and such that $g^{(k)}$ is bounded for each $0\leq k\leq n,$ \item $C_c^n(E)$ the set of functions $g\in C_b^n(E)$ that have a compact support. \end{itemize}
In addition, in what follows $D(\r_+,\r)$ denotes the space of c\`adl\`ag functions from $\r_+$ to $\r$, endowed with Skorohod metric, and $C$ and $K$ denote arbitrary positive constants whose values can change from line to line in an equation. We write $C_\theta$ and $K_\theta$ if the constants depend on some parameter $\theta.$
\subsection{The finite system}
We consider, for each $N\geq 1$, a family of i.i.d. Poisson measures $(\pi^i(ds, dz, du))_{i=1,\dots,N}$ on $\r_+ \times \r_+ \times \r $ having intensity measure $ds dz \mesuremu (du)$ where $\nu$ is a probability measure on $\r$, as well as an i.i.d. family $(X^{N,i}_0)_{i=1,\dots,N}$ of $\r $-valued random variables independent of the Poisson measures. The object of this paper is to study the convergence of the Markov process $X^N_t = (X^{N, 1 }_t, \ldots , X^{N, N}_t )$ taking values in $\r^N$ and solving, for $i=1,\dots,N$, for $t\geq 0$, \begin{equation}\label{eq:dyn} \ll\{\begin{array}{rcl} X^{N, i}_t &= &\displaystyle X^{N,i}_0 - \alpha \int_0^t X^{N, i}_s ds - \int_{[0,t]\times\r_+\times\r} X^{N, i}_{s-} \indiq_{ \{ z \le f ( X^{N, i}_{s-}) \}} \pi^i (ds,dz,du) \\ &&+\displaystyle \frac{1}{\sqrt{N}}\sum_{ j \neq i } \int_{[0,t]\times\r_+\times\r}u \indiq_{ \{ z \le f ( X^{N, j}_{s-}) \}} \pi^j (ds,dz,du),\\ X_0^{N,i}&\sim& \nu_0. \end{array}\right. \end{equation} The coefficients of this system are the exponential loss factor $ \alpha > 0, $ the jump rate function $f:\r \mapsto \r_+$ and the probability measures $\mesuremu$ and $\nu_0$.
In order to guarantee existence and uniqueness of a strong solution of~\eqref{eq:dyn}, we introduce the following hypothesis.
\begin{assu} \label{ass:1} The function $f$ is Lipschitz continuous. \end{assu}
In addition, we also need the following condition to obtain a priori bounds on some moments of the process $\ll(X^{N,i}\rr)_{1\leq i\leq N}.$ \begin{assu} \label{control} We assume that
$\int_\r xd\nu(x)=0,$ $\int_\r x^2d\nu(x)<+\infty,$ and $\int_\r x^2d\nu_0(x)<+\infty.$ \end{assu}
Under Assumptions~\ref{ass:1} and~\ref{control}, existence and uniqueness of strong solutions of~\eqref{eq:dyn} follow from Theorem~IV.9.1 of \cite{ikeda_stochastic_1989}, exactly in the same way as in Proposition~6.6 of \cite{erny_mean_2019}.
We now define precisely the limit system and discuss its properties before proving the convergence of the finite to the limit system.
\subsection{The limit system} The limit system $\ll(\limY^i\rr)_{i\geq 1}$ is given by \begin{equation}\label{eq:dynlimit1} \ll\{\begin{array}{rcl} \limY^i_t &=& \displaystyle\limY^i_0 - \alpha \int_0^t \limY^i_s ds -\int_{[0,t]\times\r_+\times\r} \limY^i_{s- } \indiq_{ \{ z \le f ( \limY^i_{s-}) \}} \bN^i (ds,dz, du) \\ &&\displaystyle + \sigma \int_0^t \sqrt{\espc{f\ll(\limY^i_s\rr)}{\W }} d W_s,\\ \limY^i_0&\sim&\nu_0 . \end{array}\rr. \end{equation} In the above equation, $(W_t)_{t\geq 0}$ is a standard one-dimensional Brownian motion which is independent of the Poisson random measures, and $ \W = \sigma \{ W_t ,t \geq 0 \} .$ Moreover, the initial positions $ \limY^i_0 , i \geq 1 , $ are i.i.d., independent of $ W$ and of the Poisson random measures, distributed according to $\nu_0$ which is the same probability measure as in~\eqref{eq:dyn}. By the Central Limit Theorem, the common jumps of the particles in the finite system, due to their scaling in $ 1/\sqrt{N} $ and to the fact that they are centred, create this single Brownian motion $ W $ which underlies each particle's motion and which induces the common noise factor for all particles in the limit.
The limit equation \eqref{eq:dynlimit1} is not clearly well-posed and requires more conditions on the rate function~$f$. Let us briefly comment on the type of difficulties that one encounters when proving trajectorial uniqueness of \eqref{eq:dynlimit1}. Roughly speaking, the jump terms require working in an $L^1 - $framework, while the diffusive terms require an $L^2-$framework. \cite{graham_mckean-vlasov_1992-1} proposes a unified approach to deal both with jump and with diffusion terms in a non-linear framework, and we shall rely on his ideas in the sequel. The presence of the random volatility term involving a conditional expectation however causes additional technical difficulties. Finally, another difficulty comes from the fact that the jumps induce non-Lipschitz terms of the form $ \limY^i_s f ( \limY^i_{s}) .$ For this reason a classical Wasserstein-$1-$coupling is not appropriate for the jump terms. Therefore we propose a different distance which is inspired by the one already used in \cite{fournier_toy_2016}. To do so, we need to work under the following additional assumption.
\begin{assu}\label{ass:2} 1. We suppose that $ \inf f > 0 .$ \\ 2. There exists a function $a \in C^2(\R , \R_+ ), $ strictly increasing and bounded, such that, for a suitable constant $C,$ for all $ x, y \in \R,$
$$ |a'' ( x) - a'' (y) | + |a'(x) - a' (y) | + |x a'(x) - ya'(y) | + |f(x)-f(y)| \le C | a(x)-a(y) |.$$ \end{assu}
Note that Assumption~\ref{ass:2} implies Assumption~\ref{ass:1} as well as the boundedness of the rate function~$f.$
\begin{prop} Suppose that $ f(x) = c + d \arctan (x) ,$ where $c > d \frac{\pi}2 ,$ $ d > 0 .$ Then Assumption \ref{ass:2} holds with $ a = f.$ \end{prop}
\begin{proof}
We quickly check that $|x a'(x) - ya'(y) | \le C | a(x) - a(y) | .$ We have that $ a' (x) = \frac{d}{1+x^2 } , $ whence $ x a'(x) - ya'(y) = d( \frac{x}{1+ x^2 } - \frac{y}{1+ y^2 }) .$ We use that $ \left| \frac{d}{dx} \left( \frac{x}{1+ x^2 } \right)\right| =\left| \frac{1- x^2}{(1+x^2)^2} \right| \le \frac{1}{1+x^2 } .$ Suppose w.l.o.g. that $ x \le y .$ As a consequence,
$$ | x a'(x) - ya'(y)| = d \left| \int_x^y \frac{1- t^2}{(1+t^2)^2}dt \right| \le d \int_x^y \frac{1}{1 + t^2 } dt = d\, | \arctan (y) - \arctan (x)| = | a(x) - a(y)|. $$ The other points of Assumption \ref{ass:2} follow immediately. \end{proof}
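The key inequality above can also be checked numerically. The following small script is an illustration only (with the hypothetical choice $c=2,$ $d=1$); it verifies $|x a'(x) - y a'(y)| \le |a(x)-a(y)|$ on a grid of pairs $(x,y)$.

```python
import numpy as np

# Numerical sanity check (illustration only, not part of the proof) of
# |x a'(x) - y a'(y)| <= |a(x) - a(y)|  for  a(x) = c + d*arctan(x),
# with the hypothetical choice c = 2, d = 1.
c, d = 2.0, 1.0

def a(x):
    return c + d * np.arctan(x)

def a_prime(x):
    return d / (1.0 + x ** 2)

grid = np.linspace(-50.0, 50.0, 801)
X, Y = np.meshgrid(grid, grid)
lhs = np.abs(X * a_prime(X) - Y * a_prime(Y))
rhs = np.abs(a(X) - a(Y))
assert np.all(lhs <= rhs + 1e-12)  # tiny tolerance for floating point noise
```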
Under these additional assumptions we obtain the well-posedness of each coordinate of the limit system~\eqref{eq:dynlimit1}, that is, of the $({\mathcal F}_t)_t- $ adapted process $ (\limY_t)_t $ having c\`adl\`ag trajectories which is solution of the SDE \begin{equation} \ll\{\begin{array}{rcl} \label{eq:dynlimit} d\limY_t&=&- \alpha \displaystyle \limY_t dt- \limY_{t-}\int_{ \r_+\times\r} \indiq_{ \{ z \le f ( \limY_{t-}) \}} \bN (dt,dz, du) +\sigma\sqrt{\mu_t(f)}dW_t,\\ \limY_0&\sim&\nu_0 ,~~\mu_t(f)=\espc{f\ll(\limY_t\rr)}{\W }=\espc{f\ll(\limY_t\rr)}{\W_t } . \end{array}\rr. \end{equation} Here, $ {\mathcal F}_t = \sigma \{ \bN ( [0, s ] \times A ), s \le t , A \in {\mathcal B} ( \R_+ \times \R ) \} \vee \W_t, $ $\W_t = \sigma \{ W_s , s \le t \} $ and $ \W = \sigma \{ W_s , s \geq 0\}.$
\begin{thm}\label{prop:42} Grant Assumption~\ref{ass:2}. \\ 1. Pathwise uniqueness holds for the nonlinear SDE~\eqref{eq:dynlimit}. \\ 2. If additionally, $\int_\r x^2d\nu_0(x)<+\infty,$ then there exists a unique strong solution $(\limY_t)_{t\geq 0} $ of the nonlinear SDE~\eqref{eq:dynlimit}, which is $({\mathcal F}_t)_t- $ adapted with c\`adl\`ag trajectories, satisfying for every $t>0$, \begin{equation} \label{controllimite} \esp{\underset{0\leq s\leq t}{\sup}\limY_s^2}<+\infty. \end{equation} \end{thm}
\begin{rem} Notice that the stochastic integral $ \int_0^t \sqrt{\mu_s(f)}dW_s $ is well-defined since $ s \mapsto \sqrt{\mu_s(f)} $ is an $({\mathcal W}_t)_t- $progressively measurable process. \end{rem}
In what follows we just give the proof of Item 1. of the above theorem, since its arguments are important for the sequel. We postpone the rather classical proof of Item 2.\ to the Appendix.
\begin{proof}[Proof of Item 1. of Theorem \ref{prop:42}]
Consider two solutions $ (\limYu_t)_{t \geq 0}$ and $ (\limYd_t)_{t \geq 0 } , $ $({\mathcal F}_t)_t- $adapted, defined on the same probability space and driven by the same Poisson random measure $\bN $ and the same Brownian motion~$ W,$ and with $ \limYu_0 = \limYd_0.$ We consider $ Z_t := a(\limYu_t) - a( \limYd_t) .$ Denote $\limmuu_s(f)=\E[f( \limYu_s)|\W_s]$ and $\limmud_s(f)=\E[f( \limYd_s)|\W_s]. $ \\ Using Ito's formula, we can write \begin{multline*} Z_t = - \alpha \int_0^t \left( \limYu_s a' ( \limYu_s ) - \limYd_s a' ( \limYd_s) \right) ds +\frac12 \int_0^t ( a'' ( \limYu_s) \limmuu_s(f) - a'' ( \limYd_s ) \limmud_s(f) ) \sigma^2 ds \\ + \int_0^t ( a' ( \limYu_s) \sqrt{\limmuu_s(f)} -a' (\limYd_s ) \sqrt{ \limmud_s(f)} ) \sigma d W_s \\ - \int_{[0,t]\times\r_+\times\r} \, [ a (\limYu_{s- }) - a( \limYd_{s-}) ] \indiq_{ \{ z \le f ( \limYu_{s-}) \wedge f ( \limYd_{s-}) \}} \bN (ds, dz, du)\\ + \int_{[0,t]\times\r_+\times\r} \, [a(0)- a( \limYu_{s-} )] \indiq_{ \{ f ( \limYd_{s-} ) < z \le f ( \limYu_{s-} ) \}} \bN (ds, dz, du) \\ + \int_{[0,t]\times\r_+\times\r} \, [ a( \limYd_{s-} ) - a(0) ] \indiq_{ \{ f ( \limYu_{s-} ) < z \le f ( \limYd_{s-} ) \}} \bN (ds,dz,du) = : A_t + M_t +\Delta_t , \end{multline*} where $ A_t $ denotes the bounded variation part of the evolution, $M_t$ the martingale part and $ \Delta_t$ the sum of the three jump terms. Notice that $$M_t= \int_0^t ( a' ( \limYu_s) \sqrt{\limmuu_s(f)} -a'(\limYd_s ) \sqrt{\limmud_s(f)} ) \sigma d W_s$$ is a square integrable martingale since $ f$ and $a'$ are bounded.
We wish to obtain a control on $ |Z^* _t | := \sup_{ s\le t } |Z_s | .$ We first take care of the jumps of $ |Z_t|.$ Notice first that, since $f$ and $a$ are bounded, \begin{multline*}
\Delta (x,y):= (f(x) \wedge f(y)) | a (x) - a(y ) | + | f (x ) - f( y ) | \; \Big( |a (0) - a(y)| + |a(0) - a(x)| \Big) \\
\le C | a (x) - a( y ) | , \end{multline*} implying that
$$ \E \sup_{s \le t } | \Delta_s | \le C \E \int_0^t | a(\limYu_s) - a(\limYd_s ) | ds \le C t \, \E |Z_t^* | . $$
Moreover, for a constant $C$ depending on $\sigma^2 ,$ $ \| f \|_\infty , \|a\|_\infty, \| a'\|_\infty, \| a'' \|_\infty $ and $ \alpha , $ \begin{multline*}
\E \sup_{ s \le t } | A_s | \le C \int_0^t \E |a'( \limYu_s ) \limYu_s - a' ( \limYd_s )\limYd_s | ds \\
+ C \left[ \int_0^t | a'' ( \limYu_s ) -a '' ( \limYd_s ) | ds + \int_0^t | \limmuu_s ( f) - \limmud_s ( f) | ds \right] . \end{multline*}
We know that $ |a'( \limYu_s ) \limYu_s - a' ( \limYd_s )\limYd_s | + |a'' ( \limYu_s ) - a'' ( \limYd_s ) | \le C |a ( \limYu_s ) - a ( \limYd_s ) |= C | Z_s| .$ Therefore,
$$ \E \sup_{ s \le t } | A_s | \le C \E \left[ \int_0^t | Z_s | ds + \int_0^t | \limmuu_s ( f) - \limmud_s ( f) | ds \right].$$ Moreover,
$$ |\limmuu_s (f)- \limmud_s (f) | = \Big| \E \left( f ( \limYu_s ) - f ( \limYd_s ) | \W \right) \Big| \le \E \left( | f ( \limYu_s ) - f ( \limYd_s )| | \W \right) \leq C\, \E ( |Z_s| | \W) ,$$ and thus,
$$ \E \int_0^t | \limmuu_s ( f) - \limmud_s ( f) | ds \le C\, \E \int_0^t |Z_s| ds \le C t \E | Z^*_t| .$$ Putting all these upper bounds together we conclude that for a constant $C$ not depending on $t,$
$$ \E \sup_{s \le t} |A_s| \le C t \E |Z_t^*| .$$ Finally, we treat the martingale part using the Burkholder-Davis-Gundy inequality, and we obtain $$
\E \sup_{s \le t} |M_s| \le C \E \left[ \left( \int_0^t (a' (\limYu_s ) \sqrt{ \limmuu_s ( f) } - a' (\limYd_s ) \sqrt{ \limmud_s ( f) })^2 ds \right)^{1/2}\right].$$ But \begin{multline}\label{eq:varquadratique}
(a' (\limYu_s ) \sqrt{ \limmuu_s ( f) } - a' (\limYd_s ) \sqrt{ \limmud_s ( f) })^2 \le C \left[ ( (a' (\limYu_s ) - a' (\limYd_s ))^2 + (\sqrt{ \limmuu_s ( f) } - \sqrt{ \limmud_s ( f) })^2 \right] \\
\le C | Z_t^*|^2 + C (\sqrt{ \limmuu_s ( f) } - \sqrt{ \limmud_s ( f) })^2 , \end{multline}
where we have used that $ | a' (x) - a' (y) | \le C | a(x) - a(y) | $ and that $f$ and $a'$ are bounded.
Finally, since $\inf f> 0, $
$$| \sqrt{ \limmuu_s ( f) } - \sqrt{ \limmud_s ( f) }|^2 \le C | \limmuu_s ( f) - \limmud_s ( f) |^2 \le C \left( \E ( |Z_s^*| | \W_s ) \right)^2.$$ We use that
$ |Z_s^* | \le | Z_t^*| ,$ implying that $ \E ( |Z_s^*| | \W ) \le \E ( |Z_t^*| | \W ).$ Therefore we obtain the upper bound
$$ | \sqrt{ \limmuu_s ( f) } - \sqrt{ \limmud_s ( f) }|^2
\le C \left( \E ( |Z_t^*| | \W ) \right)^2 $$ for all $ s \le t ,$ which implies
$$ \E \sup_{s \le t} |M_s| \le C \sqrt{t} \E | Z_t^* | .$$ The above upper bounds imply that, for a constant $C$ not depending on $t $ nor on the initial condition,
$$ \E |Z_t^*| \le C (t + \sqrt{t} ) \E | Z_t^* | ,$$
and therefore, for $ t_1 $ sufficiently small, $ \E |Z_{t_1}^*| = 0 . $ We can repeat this argument on the interval $ [ t_1, 2 t_1 ], $ with initial condition $\hat X_{t_1 } ,$ and iterate it up to any finite $T,$ because $t_1 $ only depends on the coefficients of the system and not on the initial condition. This implies the assertion. \end{proof}
\begin{rem} Theorem~\ref{prop:42} states the well-posedness of the SDE~\eqref{eq:dynlimit}. Under the same hypotheses, with almost the same reasoning, one can prove the well-posedness of the system~\eqref{eq:dynlimit1}. \end{rem}
In the sequel, we shall also use an important property of the limit system~\eqref{eq:dynlimit1}, which is the conditional independence of the processes $\limY^i$ ($i\geq 1$) given the Brownian motion~$W$.
\begin{prop} \label{independence} If Assumption~\ref{ass:2} holds and $\int_\r x^2d\nu_0(x)<+\infty,$ then \begin{itemize} \item[(i)] for all $N\in\N^*$ there exists a strong solution $\ll(\limY^i\rr)_{1\leq i\leq N}$ of~\eqref{eq:dynlimit1}, and pathwise uniqueness holds, \item[(ii)] $\limY^1,\ldots, \limY^N$ are conditionally independent given $W,$
\item[(iii)] for all $t\geq 0$, almost surely, the weak limit of $\frac1N \sum_{i=1}^N \delta_{\limY^i_{|[0,t]} } $ is given by $ \lim_{N\to \infty}\frac1N \sum_{i=1}^N \delta_{\limY^i_{|[0,t]} }= P ( \limY^i_{|[0,t]} \in \cdot | \W_{t} ) = P( \limY^i_{|[0,t]} \in \cdot | \W ) .$ \end{itemize} \end{prop}
Let us finally mention that the random limit measure $ \mu $ satisfies a nonlinear stochastic PDE in weak form. More precisely, \begin{cor}\label{cor:PDE}
Grant Assumption~\ref{ass:2} and suppose that $\int_\r x^2d\nu_0(x)<+\infty . $ Then the measure $ \mu = P ( (\limY_t)_{t\geq 0} \in \cdot | W) $ satisfies the following nonlinear stochastic PDE in weak form: for any $ \phi \in C^2_b (\R),$ for any $t\geq 0$, \begin{multline*} \int_\R \phi (x) \mu_t (dx) = \int_\R \phi (x) \nu_0 (dx) + \int_0^t \left( \int_\R \phi' (x) \mu_s (dx) \right) \, \sqrt{\mu_s (f) } \sigma d W_s \\ +\int_0^t \int_\R \Big([ \phi ( 0) - \phi ( x) ] f(x) - \alpha \phi'(x) x + \frac12 \sigma^2 \phi'' (x) \mu_s (f) \Big) \mu_s ( dx)ds . \end{multline*} \end{cor} The proofs of Proposition~\ref{independence} and of Corollary \ref{cor:PDE} are postponed to Appendix.
\subsection{Convergence to the limit system}
We are now able to state our main result. \begin{thm} \label{convergencemuN}
Grant Assumptions~\ref{ass:1},~\ref{control} and~\ref{ass:2}. Then the empirical measure $\mu^N=\frac1N\sum_{i=1}^N\delta_{X^{N,i}}$ of the $N-$particle system $(X^{N,i})_{1\leq i\leq N}$ converges in distribution in $\mathcal{P}(D(\r_+,\r))$ to $\mu:=\mathcal{L}(\bar X^1|\W)$, where $(\bar X^i)_{i\geq 1}$ is solution of~\eqref{eq:dynlimit1}. \end{thm}
\begin{cor} Under the assumptions of Theorem~\ref{convergencemuN}, $(X^{N,j})_{1\leq j\leq N}$ converges in distribution to $(\bar X^j)_{j\geq 1}$ in $D(\r_+,\r)^{\n^*}.$ \end{cor}
\begin{proof} Given Theorem \ref{convergencemuN}, the statement is an immediate consequence of Proposition~7.20 of \cite{aldous_exchangeability_1983}. \end{proof}
We will prove Theorem~\ref{convergencemuN} in a two step procedure. We first prove the tightness of the sequence of empirical measures; in a second step we identify all possible limits as solutions of a martingale problem.
\section{Proof of Theorem~\ref{convergencemuN}} \label{secthm14}
This section is dedicated to proving that the sequence $(\mu^N)_N$ of the empirical measures $\mu^N:=\frac1N\sum_{j=1}^N\delta_{(X^{N,j}_t)_{t\geq 0}}$ converges in distribution to $\mu:=\mathcal{L}(\bar X^1|\W)$, where $(\bar X^j)_{j\geq 1}$ is solution of~\eqref{eq:dynlimit1}.
As a first step, we prove that the sequence $(\mu^N)_N$ is tight on $\mathcal{P}(D(\r_+,\r))$. The main step in proving the convergence of $(\mu^N)_N$ is then to show that each converging subsequence converges to the same limit in distribution. For this purpose, we introduce a new martingale problem, and we show that every possible limit of $\mu^N$ is a solution of this martingale problem. Finally, we will show how the uniqueness of the limit law follows from the exchangeability of the system.
\subsection{Tightness of $(\mu^N)_N$}\label{sec:21}
\begin{prop}\label{mutight} Grant Assumptions \ref{ass:1} and \ref{control}. For each $N\geq 1$, consider the unique solution $(X^N_t)_{t\geq 0}$ to \eqref{eq:dyn} starting from some i.i.d. $\nu_0$-distributed initial conditions $X^{N,i}_0$. \begin{itemize} \item[(i)] The sequence of processes $(X^{N,1}_t)_{t\geq 0}$ is tight in $\D(\R_+, \R)$. \item[(ii)] The sequence of empirical measures $ \mu^N=N^{-1}\sum_{i=1}^N \delta_{(X^{N,i}_t)_{t\geq 0}}$ is tight in ${\mathcal P}(\D(\R_+, \R))$. \end{itemize} \end{prop}
\begin{proof} First, it is well-known that point (ii) follows from point (i) and the exchangeability of the system, see \cite[Proposition 2.2-(ii)]{sznitman_topics_1989}. We thus only prove (i). To show that the family $((X^{N,1}_t)_{t\geq 0})_{N\geq 1}$ is tight in $\D(\R_+,\R)$, we use the criterion of Aldous, see Theorem~4.5 of \cite{jacod_limit_2003}. It is sufficient to prove that \begin{itemize} \item[(a)] for all $ T> 0$, all $\varepsilon >0$, $ \lim_{ \delta \downarrow 0} \limsup_{N \to \infty } \sup_{ (S,S') \in A_{\delta,T}}
P ( |X_{S'}^{N, 1 } - X_S^{N , 1 } | > \varepsilon ) = 0$, where $A_{\delta,T}$ is the set of all pairs of stopping times $(S,S')$ such that $0\leq S \leq S'\leq S+\delta\leq T$ a.s., \item[(b)] for all $ T> 0$, $\lim_{ K \uparrow \infty } \sup_N
P ( \sup_{ t \in [0, T ] } |X_t^{N, 1 }| \geq K ) = 0$. \end{itemize}
To check (a), consider $(S,S')\in A_{\delta,T}$ and write \begin{multline*} X_{S'}^{N, 1 } - X_S^{N , 1 } = - \int_S^{S'} \int_\R \int_0^\infty X^{ N, 1 }_{s- } \indiq_{\{ z \le f ( X_{s- }^{N, 1} ) \}} \bN^1 (ds, du, dz ) - \alpha \int_S^{S'} X^{N, 1 }_s ds \\ + \frac{1}{ \sqrt{N} } \sum_{j=2}^N \int_S^{S'} \int_\R \int_0^\infty u \indiq_{\{ z \le f ( X_{s- }^{N, j} ) \}} \bN^j (ds, du, dz ) , \end{multline*} implying that \begin{multline*}
|X_{S'}^{N, 1 } - X_S^{N , 1 }| \le | \int_S^{S'} \int_\R \int_0^\infty X^{ N, 1 }_{s- } \indiq_{\{ z \le f ( X_{s- }^{N, 1} ) \}} \bN^1 (ds, du, dz ) | \\
+ \delta\alpha\underset{0\leq s\leq T}{\sup}\ll|X_s^{N,1}\rr| + | \frac{1}{ \sqrt{N} } \sum_{j=2}^N \int_S^{S'} \int_\R \int_0^\infty u \indiq_{\{ z \le f ( X_{s- }^{N, j} ) \}}
\bN^j (ds, du, dz ) | \\
=: |I_{S, S'}|+ \delta\alpha\underset{0\leq s\leq T}{\sup}\ll|X_s^{N,1}\rr| + |J_{S, S'}|. \end{multline*}
We first note that $|I_{S,S'}|>0$ implies that $\tilde I_{S,S'}:= \int_S^{S'} \int_\R \int_0^\infty \indiq_{\{ z \le f ( X_{s- }^{N, 1} ) \}} \bN^1 (ds, du, dz)\geq 1$, whence $$
P ( |I_{S, S'}| > 0 )\leq P (\tilde I_{S,S'}\geq 1)\leq \E[\tilde I_{S,S'}]\le
\E\Big[ \int_S^{S+\delta} f( X_s^{N, 1 } ) ds \Big] \le ||f||_\infty\delta, $$ since $ f$ is bounded. We proceed similarly to check that $$
P ( |J_{S, S'}| \geq \varepsilon ) \le \frac{1}{\varepsilon^2} \E[(J_{S,S'})^2 ]\leq \frac{\sigma^2}{N\varepsilon^2 } \sum_{j=2}^N\E\Big[ \int_S^{S+\delta} f( X_s^{N, j} ) ds\Big]
\le \frac{\sigma^2}{\varepsilon^2} \| f \|_\infty \delta. $$
The term $\sup_{0\leq s\leq T}|X^{N,1}_s|$ can be handled using Lemma~\ref{estimate}.(ii).
Finally (b) is a straightforward consequence of Lemma~\ref{estimate}.(ii) and Markov's inequality.
\end{proof}
\subsection{Martingale problem}\label{sec:22}
We now introduce a new martingale problem, whose solutions are the limits of any converging subsequence of $ \mu^N =\frac1{N}\sum_{j=1}^{N}\delta_{X^{N,j}}$. In this martingale problem, we consider pairs of trajectories in order to capture the correlations between the particles. In particular, this will allow us to show that, in the limit system~\eqref{eq:dynlimit1}, the processes $\bar X^i$ ($i\geq 1$) share the same Brownian motion, but are driven by Poisson measures $\pi^i$ ($i\geq 1$) which are independent. The reason why it suffices to study the correlation between two particles is the exchangeability of the infinite system.
Let $Q$ be a distribution on $\mathcal{P}(D(\r_+,\r))$. Define a probability measure $P$ on $\mathcal{P}(D(\r_+,\r))\times D(\r_+,\r)^2$ by \begin{equation}\label{eq:P} P(A\times B):=\int_{\mathcal{P}(D(\r_+,\r))}\un_A( m) m \otimes m (B)Q(d m). \end{equation}
We write any atomic event $\omega\in\Omega:=\mathcal{P}(D(\r_+,\r))\times D(\r_+,\r)^2$ as $\omega=(\mu,Y),$ with $Y=(Y^1,Y^2).$ Thus, the law of the canonical variable $ \mu$ is $Q$, and that of $(Y_t)_{t\geq 0}$ is $$P_Y = \int_{\mathcal{P}(D(\r_+,\r))}Q(d m) m \otimes m (\cdot).$$ Moreover we have $P-$ almost surely
$$ \mu=\mathcal{L}(Y^1 | \mu) = \mathcal{L}(Y^2 | \mu) \mbox{ and } \mathcal{L} ( Y| \mu) = \mu \otimes \mu .$$ Writing $ \mu_t := \int_{ D(\r_+,\r) } \mu ( d \gamma ) \delta_{ \gamma_t } $ for the projection onto the $t-$th time coordinate, we introduce the filtration $$\mathcal{G}_t=\sigma(Y_s,s\leq t)\vee\sigma( \mu_s(f),s\leq t) .$$
\begin{defi} We say that $Q \in {\mathcal P} ( {\mathcal P} ( D( \R_+, \R ) ) ) $ is a solution to the martingale problem $({\mathcal M}) $ if the following holds. \begin{itemize} \item[(i)] $Q-$almost surely, $\mu_0 = \nu_0.$ \item[(ii)] For all $ g \in C^2_b ( \R^2), $ $M_t^g := g(Y_t) - g(Y_0) - \int_0^t L g( \mu_s, Y_s) ds $ is a $( P, ({\mathcal G}_t)_t)-$martingale, where \begin{align*} L g ( \mu, x) =& -\alpha x^1\partial_{x^1}g(x)-\alpha x^2\partial_{x^2} g(x)+\frac{\sigma^2}{2} \mu(f) \sum_{i,j=1}^2\partial^2_{x^i x^j}g(x)\\ &+f(x^1)(g(0,x^2)-g(x))+f(x^2)(g(x^1,0)-g(x)). \end{align*} \end{itemize} \end{defi}
\begin{rem}
It is not clear whether the martingale problem is well-posed, but we are not interested in proving uniqueness for it. However, we will have uniqueness within the class of all possible limits in distribution of $ \mu^N.$ More precisely, we shall prove that, if $\mu$ is a limit in distribution of $ \mu^N$ such that $\mathcal{L}(\mu)$ is a solution to~$(\mathcal{M})$, then $ \mu = {\mathcal L} ( \bar X | \W ) , $ with $\bar X $ the strong solution of \eqref{eq:dynlimit}. Equivalently, defining the problem~$(\mathcal{M})$ for all finite-dimensional distributions, and not only for two coordinates, where the $Y^i$ ($i\geq 1$) are defined as a mixture directed by $\mu$, would lead to uniqueness.
\end{rem}
Let $(\limY^i)_{i\geq 1}$ be the solution of the limit system~\eqref{eq:dynlimit1} and $ \mu = \mathcal{L}(\limY^1 |\W)$. Then we already know that $ \mathcal{L}( \mu) $ is a solution of $({\mathcal M}) .$ Let us now characterise any possible solution of $(\mathcal{M}).$
\begin{lem} \label{representation} Grant Assumption~\ref{ass:2}. Let $Q \in {\mathcal P} ( {\mathcal P} ( D( \R_+, \R ) ) )$ be a solution of $(\mathcal{M}).$ Let $(\mu , Y)$ be the canonical variable defined above, and write $Y=(Y^1,Y^2)$. Then there exists a standard $(\mathcal{G}_t)_t-$Brownian motion $W$ and on an extension $ (\tilde \Omega, (\mathcal{\tilde G}_t)_t, \tilde P) $ of $ ( \Omega, ({\mathcal G}_t)_t, P) $ there exist $(\mathcal{\tilde G}_t)_t-$ Poisson random measures $\pi^1,\pi^2$ on $\r_+\times\r_+$ having Lebesgue intensity such that $W,\pi^1$ and $\pi^2$ are independent and \begin{align*} dY^1_t=&-\alpha Y^1_tdt+\sigma\sqrt{\mu_t(f)}dW_t-Y^1_{t-}\int_{\r_+}\uno{z\leq f(Y^1_{t-})}\pi^1(dt,dz),\\ dY^2_t=&-\alpha Y^2_tdt+\sigma\sqrt{\mu_t(f)}dW_t-Y^2_{t-}\int_{\r_+}\uno{z\leq f(Y^2_{t-})}\pi^2(dt,dz). \end{align*} \end{lem}
\begin{proof} Item (ii) of $(\mathcal{M})$ together with Theorem~II.2.42 of \cite{jacod_limit_2003} imply that $Y$ is a semimartingale with characteristics $(B,C,\nu)$ given by \begin{align*} &B^i_t=-\alpha\int_0^tY^i_sds-\int_0^tY_s^if(Y^i_s)ds,~~1\leq i\leq 2,\\ &C_t^{i,j}=\sigma^2\int_0^t \mu_s(f) ds,~~1\leq i,j\leq 2,\\ &\nu(dt,dy)=dt(f(Y^1_{t-})\delta_{(-Y^1_{t-},0)}(dy)+f(Y^2_{t-})\delta_{(0,-Y^2_{t-})}(dy)). \end{align*}
Then we can use the canonical representation of $Y$ (see Theorem~II.2.34 of \cite{jacod_limit_2003}) with the truncation function $h(y)=y$ for every $y$: $Y_t-Y_0-B_t=M^c_t+M^d_t,$ where $M^c$ is a continuous local martingale and $M^d$ a purely discontinuous local martingale. By definition of the characteristics, $\langle M^{c,i},M^{c,j}\rangle_t=C^{i,j}_t.$ In particular, $\langle M^{c,i}\rangle_t=\sigma^2\int_0^t \mu_s(f)ds$ ($i=1,2$). Consequently, applying Theorem~II.7.1 of \cite{ikeda_stochastic_1989} to both coordinates, we know that there exist Brownian motions $W^1,W^2$ such that $$M^{c,i}_t=\sigma\int_0^t\sqrt{ \mu_s(f)}dW^i_s,~~i=1,2.$$ We now prove that $W^1=W^2.$ Let $\rho$ be the correlation between $W^1$ and $W^2$. Classical computations give $\langle W^1,W^2\rangle_t=\rho t,$ implying that $\langle M^{c,1},M^{c,2}\rangle_t=\rho\sigma^2\int_0^t \mu_s(f)ds$. In addition $\langle M^{c,1},M^{c,2}\rangle_t=C^{1,2}_t=\sigma^2\int_0^t \mu_s(f)ds$, and this implies that $\rho=1$ and $W^1=W^2,$ since $\int_0^t \mu_s(f) ds>0$ because $f$ is lower-bounded.
We now prove the existence of the independent Poisson measures $\pi^1,\pi^2.$ We know that $M^d=h*(\mu^Y-\nu),$ where $\mu^Y=\sum_s\uno{\Delta Y_s\neq 0}\delta_{(s,Y_s)}$ is the jump measure of $Y$ and $\nu$ is its compensator. We rely on Theorem~II.7.4 of \cite{ikeda_stochastic_1989}. Using the notation therein, we introduce $Z=\r_+,$ $m$ Lebesgue measure on $Z$ and
$$\theta(t,z):=(-Y^1_{t-},0)\uno{z\leq f(Y^1_{t-})}+(0,-Y^2_{t-})\uno{||f||_\infty<z\leq ||f||_\infty+f(Y^2_{t-})}.$$ According to Theorem~II.7.4 of \cite{ikeda_stochastic_1989}, there exists a Poisson measure $\pi$ on $\r_+\times\r_+$ having intensity $dt\cdot dz$ such that, for all $E\in\mathcal{B}(\r^2),$ \begin{equation}\label{eq:firstrepr} \mu^Y([0,t]\times E)=\int_0^t\int_0^\infty\uno{\theta(s,z)\in E}\pi(ds,dz). \end{equation}
Now let us consider two independent Poisson measures $\tilde\pi^1,\tilde\pi^2$ (independent of everything else) on $\r_+\times]||f||_\infty,\infty[$ having Lebesgue intensity. We then define $\pi^1$ in the following way: for any $A\in\mathcal{B}(\r_+\times[0,||f||_\infty]),$ $\pi^1(A)=\pi(A)$, and for $A\in\mathcal{B}(\r_+\times]||f||_\infty,\infty[),$ $\pi^1(A)=\tilde\pi^1(A).$ We define $\pi^2$ in a similar way: for $A\in\mathcal{B}(\r_+\times[0,||f||_\infty]),$ $\pi^2(A)=\pi(\{(t,||f||_\infty+z):(t,z)\in A\})$, and for $A\in\mathcal{B}(\r_+\times]||f||_\infty,\infty[),$ $\pi^2(A)=\tilde\pi^2(A).$ By construction, $\pi^1$ and $\pi^2$ are independent Poisson measures on $\r_+^2$ having Lebesgue intensity (they are built from restrictions of $\pi$ to disjoint strips), and together with \eqref{eq:firstrepr}, we have $$M^{d,i}_t=-\int_{[0,t]\times\r_+}Y^i_{s-}\uno{z\leq f(Y^i_{s-})}\pi^i(ds,dz)+\int_0^tY_s^if(Y^i_s)ds,~~1\leq i\leq 2.$$
\end{proof}
Moreover, we have the following result. \begin{thm} \label{convergencemartingale} Assume that Assumptions \ref{ass:1}, \ref{control} and \ref{ass:2} hold. Then the distribution of any limit $\mu$ of the sequence $ \mu^N :=\frac1{N}\sum_{j=1}^{N}\delta_{X^{N,j}} $ is a solution of item (ii) of $({\mathcal M}).$ \end{thm}
\begin{proof} {\it Step~1.} We first check that for any $t\geq 0$, a.s., $\mu(\{\gamma \, : \, \Delta\gamma(t)\neq 0\})=0$. We assume by contradiction that there exists $t > 0 $ such that $\mu ( \{ \gamma : \Delta \gamma (t) \neq 0 \} ) > 0 $ with positive probability. Hence there are $a,b>0$ such that the event
$E:=\{\mu ( \{ \gamma : |\Delta \gamma (t) | > a \} ) > b\}$ has a positive probability. For every $\varepsilon > 0$, we have $E\subset \{ \mu ( \cB^\varepsilon_a ) > b\}$, where
$\cB^\varepsilon_a := \{ \gamma : \sup_{ s \in (t- \varepsilon , t + \varepsilon)}| \Delta \gamma(s) | > a \}$, which is an open subset of $D(\R_+,\R)$. Thus $\cP^{\varepsilon}_{a,b} := \{ \mu \in {\cP} ( {D} ( \R_+, \R ) ) : \mu ( \cB^\varepsilon_a ) > b \}$ is an open subset of $ {\cP} ( {\D} ( \R_+,\r) )$. The Portmanteau theorem implies then that for any $\varepsilon>0$, \begin{equation} \label{controlPE} \liminf_{N \to \infty } P ( \mu^N \in \cP^{\varepsilon}_{a,b} ) \geq P ( \mu \in \cP^{\varepsilon}_{a,b} ) \geq P ( E) > 0. \end{equation} To obtain a contradiction, we now bound $P(\mu^N\in\cP^{\varepsilon}_{a,b})$. First, we can write
$$J^{N,\eps,i}:=\underset{t-\eps<s<t+\eps}{\sup}\ll|\Delta X^{N,i}_s\rr|=G^{\eps,i}_N\vee S^\eps_N,$$
where $G^{\eps,i}_N:=\max_{s\in D^{\eps,i}_N}|X^{N,i}_{s-}|$ is the maximal height of the big jumps of $X^{N,i},$ with $D^{\eps,i}_N:=\{t-\eps\leq s\leq t+\eps:\pi^i(\{s\}\times[0,f(X^{N,i}_{s-})]\times\r_+)\neq 0\}.$ Moreover, $S^{\eps}_N:=\max\{|U^j(s)|/\sqrt{N}:s\in\bigcup_{1\leq j\leq N} D^{\eps,j}_N\}$ is the maximal height of the small jumps of $X^{N,i}$, where $U^j(s)$ is defined for $s\in D^{\eps,j}_N$, almost surely, as the only real number that satisfies $\pi^j(\{s\}\times[0,f(X^{N,j}_{s-})]\times\{U^j(s)\})=1.$
We have that $$\ll\{\mu^N(\mathcal{B}^\eps_a)>b\rr\}= \ll\{\frac1N\sum_{j=1}^N\uno{J^{N,\eps,j}>a}> b \rr\}.$$ Consequently, by exchangeability and Markov's inequality, \begin{equation} \label{probamuN} \pro{\mu^N(\mathcal{B}^\eps_a)>b}\leq \frac1b\esp{\uno{J^{N,\eps,1}>a}}= \frac1b\pro{J^{N,\eps,1}>a}\leq\frac1b\ll(\pro{G_N^{\eps,1}>a}+\pro{S_N^\eps>a}\rr). \end{equation}
The number of big jumps of $X^{N,1}$ in $]t-\eps,t+\eps[$ is smaller than a random variable $\xi$ having Poisson distribution with parameter $2\eps||f||_\infty.$ Hence \begin{equation} \label{probaGN}
\pro{G_N^{\eps,1}>a}\leq \pro{\xi\geq 1}=1-e^{-2\eps||f||_\infty}\leq 2\eps||f||_\infty. \end{equation}
The small jumps that occur in $]t-\eps,t+\eps[$ are included in $\{U_1/\sqrt{N},...,U_K/\sqrt{N}\}$ where $K$ is a $\n-$valued random variable having Poisson distribution with parameter $2\eps N||f||_\infty$, which is independent of the variables $U_i$ ($i\geq 1$) that are i.i.d. with distribution~$\nu.$ Hence, \begin{equation*}
\pro{S_N^\eps>a}\leq \pro{\underset{1\leq i\leq K}{\max}\frac{|U_i|}{\sqrt{N}}>a}\leq \esp{\proc{\underset{1\leq i\leq K}{\max}\frac{|U_i|}{\sqrt{N}}>a}{K}}=\esp{\psi(K)}, \end{equation*}
where $\psi(k)=\pro{\max_{1\leq i\leq k}|U_i|>a\sqrt{N}}\leq k\pro{|U_1|>a\sqrt{N}}\leq ka^{-2}N^{-1}\esp{U_1^2}.$ Hence \begin{equation} \label{probaSN}
\pro{S_N^\eps>a}\leq \frac{\esp{U_1^2}}{Na^2}\esp{K}\leq 2||f||_\infty\esp{U_1^2}\frac{1}{a}\eps. \end{equation}
Inserting the bounds~\eqref{probaGN} and~\eqref{probaSN} in~\eqref{probamuN}, we obtain $$\pro{\mu^N(\mathcal{B}^\eps_a)>b}\leq C\eps,$$ where $C$ depends neither on $N$ nor on $\eps.$ This last inequality contradicts~\eqref{controlPE}, since $P(E)$ does not depend on~$\eps.$
\noindent{\it Step 2.} In the following, we note $\partial^2\phi:=\sum_{i,j=1}^2\partial^2_{x^i x^j}\phi.$ For any $ 0 \le s_1 < \ldots < s_k < s < t$, any $\phi_1,\dots,\phi_k\in C_b ( \R^2)$, $\psi_1,\hdots,\psi_k\in C_b ( \R)$ and any $\phi \in C^3_b (\R^2)$, we introduce \begin{multline*} F(\mu):=\psi_1(\mu_{s_1}(f)) \hdots\psi_k(\mu_{s_k}(f) )\int_{D(\r_+,\r)^2} \mu \otimes \mu (d\gamma)\phi_1(\gamma_{s_1})\hdots\phi_k(\gamma_{s_k})\\ \ll[\phi(\gamma_t)-\phi(\gamma_s)+\alpha\int_s^t\gamma^1_r\partial_{x^1} \phi (\gamma_r)dr+\alpha\int_s^t\gamma^2_r\partial_{x^2}\phi(\gamma_r)dr-\frac{\sigma^2}{2}\int_s^t \mu_r(f)\partial^2\phi(\gamma_r)dr\rr.\\ \ll.-\int_s^tf(\gamma^1_r)(\phi(0,\gamma^2_r)-\phi(\gamma_r))dr-\int_s^tf(\gamma^2_r)(\phi(\gamma^1_r,0)-\phi(\gamma_r))dr\rr]. \end{multline*}
To show that $\mathcal{L}(\mu)$ is solution of item (ii) of the martingale problem~$(\mathcal{M}),$ by a classical density argument, it is sufficient to prove that $\esp{F(\mu)}=0.$ We have \begin{multline*} F(\mu^N)=\psi_1(\mu^{N}_{s_1}(f))\hdots\psi_k(\mu^{N}_{s_k}(f)) \frac{1}{N^2}\sum_{i =1}^{N} \sum_{j=1}^N \phi_1(X^{N,i}_{s_1}, X^{N, j }_{s_1} )\hdots\phi_k(X^{N,i}_{s_k},X^{N,j}_{s_k}) \cdot\\ \ll[\phi(X^{N,i}_t,X^{N,j}_t)-\phi(X^{N,i}_s,X^{N,j}_s)+\alpha\int_s^tX^{N,i}_r\partial_{x^1}\phi(X^{N,i}_r,X^{N,j}_r)dr\rr.\\ +\alpha\int_s^tX^{N,j}_r\partial_{x^2}\phi(X^{N,i}_r,X^{N,j}_r)dr-\frac{\sigma^2}{2}\int_s^t \mu^N_r (f) \partial^2\phi(X^{N,i}_r,X^{N,j}_r)dr\\ \ll. -\int_s^tf(X^{N,i}_r)(\phi(0,X^{N,j}_r)-\phi(X^{N,i}_r,X^{N,j}_r))dr -\int_s^tf(X^{N,j}_r)(\phi(X^{N,i}_r,0)-\phi(X^{N,i}_r,X^{N,j}_r))dr\rr]. \end{multline*} But recalling~\eqref{eq:dyn} and using Ito's formula, for any $ i \neq j,$ we have \begin{multline*} \phi(X^{N,i}_t,X^{N,j}_t)\\ =\phi(X^{N,i}_s,X^{N,j}_s) -\alpha\int_s^tX^{N,i}_r\partial_{x^1}\phi(X^{N,i}_r,X^{N,j}_r)dr-\alpha\int_s^tX^{N,j}_r\partial_{x^2}\phi(X^{N,i}_r,X^{N,j}_r)dr\\ +\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,i}_{r-})}\ll[\phi\ll(0,X^{N,j}_{r-}+\frac{u}{\sqrt{N}}\rr)-\phi(X^{N,i}_{r-},X^{N,j}_{r-})\rr]\pi^{i}(dr,dz,du)\\ +\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,j}_{r-})}\ll[\phi\ll(X^{N,i}_{r-}+\frac{u}{\sqrt{N}},0\rr)-\phi(X^{N,i}_{r-},X^{N,j}_{r-})\rr]\pi^{j}(dr,dz,du)\\ +\sum_{\substack{k=1\\k\not\in\{i,j\}}}^{N}\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,k}_{r-})}\ll[\phi\ll(X^{N,i}_{r-}+\frac{u}{\sqrt{N}},X^{N,j}_{r-}+\frac{u}{\sqrt{N}}\rr)\rr.\\ \ll.-\phi(X^{N,i}_{r-},X^{N,j}_{r-})\rr]\pi^{k}(dr,dz,du). 
\end{multline*} We use the notation $\tilde\pi^j(dr,dz,du)=\pi^j(dr,dz,du)-drdz\nu(du)$ and set \begin{align*} M^{N,i,j,1}_{s,t}:=&\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,i}_{r-})}\ll[\phi\ll(0,X^{N,j}_{r-}+\frac{u}{\sqrt{N}}\rr)-\phi(X^{N,i}_{r-},X^{N,j}_{r-})\rr]\tilde\pi^{i}(dr,dz,du),\\ M^{N,i,j,2}_{s,t}:=&\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,j}_{r-})}\ll[\phi\ll(X^{N,i}_{r-}+\frac{u}{\sqrt{N}},0\rr)-\phi(X^{N,i}_{r-},X^{N,j}_{r-})\rr]\tilde\pi^{j}(dr,dz,du),\\ W^{N,i,j}_{s,t}:=&\sum_{\substack{k=1\\k\not\in\{i,j\}}}^{N}\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,k}_{r-})}\ll[\phi\ll(X^{N,i}_{r-}+\frac{u}{\sqrt{N}},X^{N,j}_{r-}+\frac{u}{\sqrt{N}}\rr)\rr.\\ &\hspace*{8cm}\ll.-\phi(X^{N,i}_{r-},X^{N,j}_{r-})\rr]\tilde\pi^{k}(dr,dz,du),\\ \Delta^{N,i,j,1}_{s,t}:=&\int_s^t\int_\r f(X^{N,i}_r)\ll[\phi\ll(0,X^{N,j}_r+\frac{u}{\sqrt{N}}\rr)-\phi(0,X^{N,j}_r)\rr]d\nu(u)dr,\\ \Delta^{N,i,j,2}_{s,t}:=&\int_s^t\int_\r f(X^{N,j}_r)\ll[\phi\ll(X^{N,i}_r+\frac{u}{\sqrt{N}},0\rr)-\phi(X^{N,i}_r,0)\rr]d\nu(u)dr,\\ \Gamma^{N,i,j}_{s,t}:=&\sum_{\substack{k=1\\k\not\in\{i,j\}}}^{N}\int_s^t\int_\r f(X^{N,k}_r)\ll[\phi\ll(X^{N,i}_r+\frac{u}{\sqrt{N}},X^{N,j}_r+\frac{u}{\sqrt{N}}\rr)-\phi(X^{N,i}_r,X^{N,j}_r)\rr.\\ &\ll.-\frac{u}{\sqrt{N}}\partial_{x^1}\phi(X^{N,i}_r,X^{N,j}_r)-\frac{u}{\sqrt{N}}\partial_{x^2}\phi(X^{N,i}_r,X^{N,j}_r)\rr]d\nu(u)dr \\ &-\int_s^t\int_\r\frac{u^2}{2}\partial^2\phi(X^{N,i}_r,X^{N,j}_r)\frac1N\sum_{\substack{k=1\\k\not\in\{i,j\}}}^{N}f(X^{N,k}_r)d\nu(u)dr,\\ R^{N,i,j}_{s,t}:=&\frac{\sigma^2}{2}\int_s^t\partial^2\phi(X^{N,i}_r,X^{N,j}_r)\ll(\frac1N\sum_{\substack{k=1\\k\not\in\{i,j\}}}^{N}f(X^{N,k}_r)-\frac{1}{N}\sum_{k=1}^{N}f(X^{N,k}_r)\rr)dr. 
\end{align*} Finally, for $ i = j , $ we have \begin{multline*} \phi(X^{N,i}_t,X^{N,i}_t)=\phi(X^{N,i}_s,X^{N,i}_s)\\ -\alpha\int_s^tX^{N,i}_r\partial_{x^1}\phi(X^{N,i}_r,X^{N,i}_r)dr-\alpha\int_s^tX^{N,i}_r\partial_{x^2}\phi(X^{N,i}_r,X^{N,i}_r)dr\\ +\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N, i}_{r-})}\ll[\phi\ll(0,0 \rr)-\phi(X^{N,i}_{r-},X^{N,i}_{r-})\rr]\pi^{i}(dr,dz,du)\\ +\sum_{\substack{k=1\\k\neq i}}^{N}\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,k}_{r-})}\ll[\phi\ll(X^{N,i}_{r-}+\frac{u}{\sqrt{N}},X^{N,i}_{r-}+\frac{u}{\sqrt{N}}\rr) -\phi(X^{N,i}_{r-},X^{N,i}_{r-})\rr]\pi^{k}(dr,dz,du). \end{multline*} The associated martingales and error terms are given by \begin{align*} M^{N, i }_{s,t}:= & \int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N, i}_{r-})}\ll[\phi\ll(0,0 \rr)-\phi(X^{N,i}_{r-},X^{N,i}_{r-})\rr]\tilde \pi^{i}(dr,dz,du),\\ W^{N, i }_{s,t} := & \sum_{\substack{k=1\\k\neq i}}^{N}\int_{]s,t]\times\r_+\times\r}\uno{z\leq f(X^{N,k}_{r-})}\ll[\phi\ll(X^{N,i}_{r-}+\frac{u}{\sqrt{N}},X^{N,i}_{r-}+\frac{u}{\sqrt{N}}\rr) \rr.\\ &\hspace*{8cm} \ll. -\phi(X^{N,i}_{r-},X^{N,i}_{r-})\rr]\tilde \pi^{k}(dr,dz,du),\\ \Delta^{N,i}_{s,t}:=&\int_s^t\int_\r f(X^{N,i}_r)\ll[\phi\ll(0,0 \rr)-\phi(0,X^{N,i}_r) - \phi ( X^{N, i }_r, 0) + \phi (X^{N, i }_r, X^{N, i }_r) \rr]d\nu(u)dr,\\ \Gamma^{N,i}_{s,t}:=&\sum_{\substack{k=1\\k\neq i }}^{N}\int_s^t\int_\r f(X^{N,k}_r)\ll[\phi\ll(X^{N,i}_r+\frac{u}{\sqrt{N}},X^{N,i}_r+\frac{u}{\sqrt{N}}\rr)-\phi(X^{N,i}_r,X^{N,i}_r)\rr.\\ &\ll.-\frac{u}{\sqrt{N}}\partial_{x^1}\phi(X^{N,i}_r,X^{N,i}_r)-\frac{u}{\sqrt{N}}\partial_{x^2}\phi(X^{N,i}_r,X^{N,i}_r)\rr]d\nu(u)dr \\ &-\int_s^t\int_\r\frac{u^2}{2}\partial^2\phi(X^{N,i}_r,X^{N,i}_r)\frac1N\sum_{\substack{k=1\\k\neq i }}^{N}f(X^{N,k}_r)d\nu(u)dr,\\ R^{N,i}_{s,t}:=&\frac{\sigma^2}{2}\int_s^t\partial^2\phi(X^{N,i}_r,X^{N,i}_r)\ll(\frac1N\sum_{\substack{k=1\\k\neq i }}^{N}f(X^{N,k}_r)-\frac{1}{N}\sum_{k=1}^{N}f(X^{N,k}_r)\rr)dr. 
\end{align*} Then we obtain, since $\int_\r ud\nu(u)=0$, that \begin{multline*} F(\mu^N)=\psi_1(\mu^{N}_{s_1}(f))\hdots\psi_k(\mu^{N}_{s_k}(f)) \frac{1}{N^2}\sum_{i, j =1, i \neq j }^{N}\phi_1(X^{N,i}_{s_1},X^{N,j}_{s_1})\hdots\phi_k(X^{N,i}_{s_k},X^{N,j}_{s_k})\\ \ll[M^{N,i,j,1}_{s,t}+M^{N,i,j,2}_{s,t} +W^{N,i,j}_{s,t} +\Delta^{N,i,j,1}_{s,t}+\Delta^{N,i,j,2}_{s,t}+\Gamma^{N,i,j}_{s,t}+R^{N,i,j}_{s,t}\rr]\\ +\psi_1(\mu^{N}_{s_1}(f))\hdots\psi_k(\mu^{N}_{s_k}(f)) \frac{1}{N^2}\sum_{i=1 }^{N}\phi_1(X^{N,i}_{s_1},X^{N,i}_{s_1})\hdots\phi_k(X^{N,i}_{s_k},X^{N,i}_{s_k})\\ \ll[M^{N,i}_{s,t}+W^{N,i}_{s,t} +\Delta^{N,i}_{s,t}+\Gamma^{N,i}_{s,t}+R^{N,i}_{s,t}\rr]. \end{multline*}
Using exchangeability, the boundedness of the $\phi_j,\psi_j$ ($1\leq j\leq k$) and the fact that $M^{N,i,j,1}, \ldots , $ $W^{N,i}$ are martingales, this implies \[
|\esp{F(\mu^N)}|\leq C\esp{|\Delta^{N,i,j,1}_{s,t}|+|\Delta^{N,i,j,2}_{s,t}|+|\Gamma^{N,i,j}_{s,t}|+|R^{N,i,j}_{s,t}| +\frac{ |\Delta^{N,i}_{s,t}|+|\Gamma^{N,i}_{s,t}|+|R^{N,i}_{s,t}|}{N}}. \] Since $f$ is bounded and $\phi\in C^3_b(\r^2),$ Taylor-Lagrange's inequality implies then that
$$|\esp{F(\mu^N)}|\leq \frac{C}{\sqrt{N}}.$$ Indeed, since $\int_\r ud\nu(u)=0$, the first-order terms of the Taylor expansions vanish, so that $\Delta^{N,i,j,1},\Delta^{N,i,j,2}$ and $R^{N,i,j}$ are of order $N^{-1}$, while $\Gamma^{N,i,j}$, a sum of $N$ third-order remainders each of order $N^{-3/2}$, is of order $N^{-1/2}$. Finally, using that $F$ is bounded and almost surely continuous at $\mu$ (see {\it Step~1}), we have $$\esp{F(\mu)}=\underset{N\rightarrow\infty}{\lim}\esp{F(\mu^N)}=0,$$ concluding our proof. \end{proof}
We now have all the elements to prove the following main result. \begin{thm}\label{uniquelimit}
Grant Assumptions \ref{ass:1}, \ref{control} and \ref{ass:2}. Each converging subsequence of $\mu^N:=\frac1N\sum_{j=1}^N\delta_{X^{N,j}}$ converges to the same limit $ \mu = {\mathcal L} ( \bar X | {\mathcal W}) , $ where $\bar X $ is the unique strong solution of \eqref{eq:dynlimit}. \end{thm}
\begin{proof} Let us consider the limit (in distribution) $\mu$ of a subsequence of $\mu^N$. By Proposition~(7.20) of \cite{aldous_exchangeability_1983}, $\mu$ is the directing measure of some exchangeable system $(\bar Y^i)_{i\geq 1},$ and it holds that, for the chosen subsequence, $(X^{N,i})_{1\leq i\leq N}$ converges in law to $(\bar Y^i)_{i\geq 1}.$ Moreover, we also know that
$$\mu=\mathcal{L}(\bar Y^i| \mu) \mbox{ and } \mu \otimes \mu = \mathcal{L}((\bar Y^i, \bar Y^j) | \mu) , $$ almost surely, for all $ i \neq j . $ In particular, for all $ i \neq j,$ $$ \mathcal {L} ( \mu, (\bar Y^i , \bar Y^j ) ) = P ,$$ where $ P$ is given by \eqref{eq:P}, with $ Q = \mathcal{L} ( \mu ) .$
Thanks to Lemma~\ref{representation}, together with Theorem~\ref{convergencemartingale}, we know that there exist Brownian motions $W^{(i,j)}$ ($i,j \geq 1$) and Poisson random measures $\pi^i$ ($i\geq 1$) such that for all pairs $(i,j) , i \neq j ,$ $\pi^{i}$ is independent of $\pi^{j}$ and such that \begin{align*} d\bar Y^{i}_t=&-\alpha\bar Y^{i}_tdt+\sigma\sqrt{\mu_t(f)}dW^{(i,j)}_t-\bar Y^{i}_{t-}\int_{\r_+}\uno{z\leq f(\bar Y^{i}_{t-})}\pi^{i}(dt,dz),\\ d\bar Y^{j}_t=&-\alpha\bar Y^{j}_tdt+\sigma\sqrt{\mu_t(f)}dW^{(i,j)}_t-\bar Y^{j}_{t-}\int_{\r_+}\uno{z\leq f(\bar Y^{j}_{t-})}\pi^{j}(dt,dz). \end{align*} The exchangeability of the system $(\bar Y^i)_{i\geq 1}$ implies the independence of the $(\pi^i)_{i\geq 1}$ and that for all $i,j,k \geq 1,$ $ i \neq j,$ $i \neq k, $ $W^{(i,j) }=W^{(i, k )}=W.$
The last point to prove is that $\mu_t(f):=\espc{f(\bar Y^1_t)}{\mu}=\espc{f(\bar Y^1_t)}{\W}$ almost surely. This would be a consequence of the fact that, conditionally on $W,$ the processes $(\bar Y^j)_{j\geq 1}$ are i.i.d.\ (see Lemma~(2.12).(a) of \cite{aldous_exchangeability_1983}). But this last assertion is not trivial, because we do not know yet that $W$ is the only noise term common to all the processes $\bar Y^j$ ($j\geq 1$). That is why we introduce an auxiliary particle system which is a mean field version of the limit system and which converges to $(\bar Y^j)_{j\geq 1}.$
To begin with, Lemma~(2.15) of \cite{aldous_exchangeability_1983} implies that $\mu_t(f)$ is the almost sure limit of $N^{-1}\sum_{j=1}^Nf(\bar Y^j_t).$ Let us now prove that this sequence converges to $\espc{f(\bar Y^1_t)}{\W}.$ For this purpose, we introduce the system $(\auxY^{N,i})_{1\leq i\leq N},$ driven by the same Brownian motion $ W$ and the same Poisson random measures $\pi^i,$ obtained by replacing the term $\mu_t(f)$ by the empirical mean: $$d\auxY_t^{N,i}=-\alpha\auxY^{N,i}_tdt+\sigma\sqrt{\frac1N\sum_{j=1}^Nf(\auxY^{N,j}_t)}dW_t-\auxY^{N,i}_{t-}\int_{\r_+}\uno{z\leq f(\auxY^{N,i}_{t-})}\pi^i(dt,dz), \; \auxY_0^{N,i} = \bar Y_0^i .$$ Consider finally the system $(\bar X^i)_{i\geq 1}$ defined in \eqref{eq:dynlimit1}, driven by the same Brownian motion $ W$ and the same Poisson random measures $\pi^i $ as $(\bar Y^i)_{i\geq 1},$ with $ \bar X^i_0 = \bar Y^i_0 $ for all $ i \geq 1.$ In this way, $ (\bar X^i)_{i \geq 1} , (\bar Y^i)_{i \geq 1} $ and $ (\tilde X^{N, i })_{1 \le i \le N} $ are all defined on the same probability space.
It is now sufficient to prove that both for $(\bar Y^i)_{i\geq 1}$ and for $(\bar X^i)_{i\geq 1},$ \begin{equation} \label{auxlim}
\esp{\ll|a(\bar Y^i_t)-a(\auxY^{N,i}_t)\rr|} + \esp{\ll|a(\bar X^i_t)-a(\auxY^{N,i}_t)\rr|}\leq C_t N^{-1/2}. \end{equation} Indeed, suppose we have already proven the above control \eqref{auxlim}. Then \begin{multline*}
\esp{\ll|\frac1N\sum_{j=1}^Nf(\bar Y^j_t)-\espc{f(\bar X^1_t)}{\W}\rr|}\leq \frac1N\sum_{j=1}^N\esp{|f(\bar Y^j_t)-f(\auxY^{N,j}_t)|}\\
+\frac1N\sum_{j=1}^N\esp{|f(\auxY^{N,j}_t) - f( \bar X_t^j ) |}+\esp{\ll|\frac1N\sum_{j=1}^Nf(\bar X^{j}_t)-\espc{f(\bar X^{1}_t)}{\W}\rr|} . \end{multline*} Then, \eqref{auxlim} and Assumption~\ref{ass:2} imply that the first two terms of the sum above are smaller than $C_tN^{-1/2}$ for some $C_t>0.$ In addition, by item (ii) of Proposition \ref{independence}, the variables $ (\bar X^j)_{ 1 \le j \le N } $ are i.i.d., conditionally on $W.$ Consequently, the third term is also smaller than $C_tN^{-1/2}.$
This implies that $$\mu_t ( f) = \espc{f(\bar Y^1_t)}{\mu}=\espc{f(\bar X^1_t)}{\W}= \espc{f(\bar X^i_t)}{\W} \mbox{ a.s..}$$ As a consequence, $(\bar Y^i)_{i\geq 1}$ is solution of the infinite system $$ d\bar Y^{i}_t=-\alpha\bar Y^{i}_tdt+\sigma\sqrt{\espc{f(\bar X^i_t)}{\W}}dW_t-\bar Y^{i}_{t-}\int_{\r_+}\uno{z\leq f(\bar Y^{i}_{t-})}\pi^{i}(dt,dz),$$ while $(\bar X^i)_{i\geq 1}$ in \eqref{eq:dynlimit1} is solution of $$ d\bar X^{i}_t=-\alpha\bar X^{i}_tdt+\sigma\sqrt{\espc{f(\bar X^i_t)}{\W}}dW_t-\bar X^{i}_{t-}\int_{\r_+}\uno{z\leq f(\bar X^{i}_{t-})}\pi^{i}(dt,dz),$$ with $ \bar X^i_0 = \bar Y^i _0,$ for all $ i \geq 1.$
Let us prove that $\bar X^i=\bar Y^i$ almost surely. For that sake, consider $\tau_M=\inf\{t>0:|\bar X^i_t|\vee|\bar Y^i_t|>M\}$ for $M>0.$ We prove that $\esp{|\bar X^i_{t\wedge\tau_M}-\bar Y^i_{t\wedge\tau_M}|}=0$ for all $M>0.$ Since $\tau_M\to\infty$ almost surely as $M\to\infty$, by~\eqref{controllimite} (and a similar control for $\bar Y^i$), Fatou's lemma then implies that $\esp{|\bar X^i_{t}-\bar Y^i_{t}|}=0$. Let $u_M(t):=\esp{|\bar X^i_{t\wedge\tau_M}-\bar Y^i_{t\wedge\tau_M}|}$. To see that $u_M(t)=0$, it is sufficient to apply Gr\"onwall's lemma to the following inequality $$
u_M(t)\leq \alpha\int_0^tu_M(s)ds+\esp{\int_{[0,t\wedge\tau_M]\times\r_+}\ll|\bar X^i_{s-}\uno{z\leq f(\bar X^i_{s-})}-\bar Y^i_{s-}\uno{z\leq f(\bar Y^i_{s-})}\rr|\pi^i(ds,dz)} $$ implying that \begin{multline*}
u_M(t) \leq \alpha\int_0^tu_M(s)ds+\esp{\int_{[0,t\wedge\tau_M]\times \R_+ } \indiq_{ z \in [0,f(\bar X^i_{s-})\wedge f(\bar Y^i_{s-})]} \ll|\bar X^i_{s-}-\bar Y^i_{s-}\rr|\pi^i(ds,dz)}\\
+\esp{\int_{[0,t\wedge\tau_M]\times \R_+} \indiq_{ z \in ]f(\bar X^i_{s-})\wedge f(\bar Y^i_{s-}), f(\bar X^i_{s-})\vee f(\bar Y^i_{s-})]} |\bar X^i_{s-}|\vee|\bar Y^i_{s-}|\pi^i(ds,dz)}, \end{multline*} whence, taking compensators and using that $f$ is bounded and Lipschitz continuous and that $|\bar X^i_{s-}|\vee|\bar Y^i_{s-}|\leq M$ for $s\leq\tau_M,$ $$ u_M(t)\leq C(1+M)\int_0^tu_M(s)ds. $$
Hence $(\bar Y^i)_{i\geq 1}$ is solution of the infinite system \eqref{eq:dynlimit1} and $ \mu =\mathcal{L}(\bar Y^1|\W),$ its directing measure, is uniquely determined. As a consequence, $\mu^N$ converges in distribution to $\mathcal{L}(\bar Y^1|\W)$ in $\mathcal{P}(D(\r_+,\r)).$
Let us now show \eqref{auxlim}. We only prove it for $ \bar Y^i, $ the proof for $ \bar X^i$ being similar. We decompose the evolution of $a(\bar Y^{1}_t)$ in the following way: \begin{align} a(\bar Y^{1}_t) &= a(\bar Y^{1}_0) -\alpha \int_0^t a'(\bar Y^{1}_s)\bar Y^{1}_s ds+ \int_{[0,t]\times\r_+} \ll(a(0)-a(\bar Y^{1}_{s-})\rr) \indiq_{ \{ z \le f ( \bar Y^{1}_{s-}) \}} \pi^1 (ds,dz ) \label{itobarY}\\ &+\frac{\sigma^2}{2}\int_0^ta''(\bar Y^{1}_s)\frac{1}{N}\sum_{j=1}^Nf(\bar Y^{j}_s)ds - B_t^N +\sigma \int_0^t a'(\bar Y^{1}_s)\sqrt{ \frac1N \sum_{j=1}^N f ( \bar Y_s^{j} ) } d W_{s} -M_t^N, \nonumber \end{align} where $$B_t^N=\frac{\sigma^2}{2}\int_0^ta''(\bar Y^{1}_s)\ll(\frac{1}{N}\sum_{j=1}^Nf(\bar Y^{j}_s)-\espc{f(\bar Y^{1}_s)}{\mu}\rr)ds$$ and
$$ M_t^N = \sigma \int_0^t a'(\bar Y^{1}_s)\left( \sqrt{ \frac1N \sum_{j=1}^N f ( \bar Y_s^{j} ) } - \sqrt{\esp{ f ( \bar Y_s^{1} ) | \mu} }\right) d W_s.$$ Since
$$ <M^N>_t \leq \sigma^2 \ll(\underset{x\in\r}{\sup}|a'(x)|^2\rr) \int_0^t \left( \sqrt{ \frac1N \sum_{j=1}^N f ( \bar Y_s^{j} ) } - \sqrt{\esp{ f ( \bar Y_s^{1} ) | \mu} }\right)^2 ds, $$
recalling that the variables $\bar Y_s^{j}$ ($1\leq j\leq N$) are i.i.d. conditionally on $\mu$, taking the conditional expectation $\E ( \cdot | \mu ) $ implies that $$ \esp{<M^N>_t} \le C_t N^{-1}\textrm{ and }\esp{B_t^N} \le C_t N^{-1}.$$
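Let us make the conditional computation behind the first of these bounds explicit (a sketch, using that $f$ is bounded and, as recalled in the proof of Lemma~\ref{representation}, bounded from below, say $f\geq \underline f>0$): since the $\bar Y^j_s$ ($1\leq j\leq N$) are i.i.d. conditionally on $\mu$,
$$
\espc{\ll(\frac1N \sum_{j=1}^N f ( \bar Y_s^{j} ) - \esp{ f ( \bar Y_s^{1} ) | \mu}\rr)^2}{\mu}
=\frac1N\,\mathrm{Var}\ll(f(\bar Y^1_s)\,\big|\,\mu\rr)\leq \frac{||f||_\infty^2}{N},
$$
and, since $(\sqrt{x}-\sqrt{y})^2\leq (x-y)^2/(4\underline f)$ for all $x,y\geq\underline f$, taking expectations in the bound on $<M^N>_t$ indeed gives $\esp{<M^N>_t}\leq C_tN^{-1}$. As for $B^N_t$, conditionally on $\mu$ only the $j=1$ term of the empirical mean is correlated with $a''(\bar Y^1_s)$, which yields the bound of order $N^{-1}$ as well.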
Then, applying Ito's formula to $\tilde X^{N,1}$, we obtain the same equation as~\eqref{itobarY}, but without the terms $B^N_t$ and $M^N_t.$ Introducing
$$u(t):=\underset{0\leq s\leq t}{\sup}\esp{\ll|a(\bar Y^1_s)-a(\tilde X^{N,1}_s)\rr|},$$ we can prove, with the same reasoning as in the proof of Theorem~\ref{prop:42}, that $$u(t)\leq C(t+\sqrt{t})u(t)+\frac{C_t}{\sqrt{N}},$$ where $C$ and $C_t$ do not depend on~$N$. Finally, using the arguments of the proof of Theorem~\ref{prop:42}, this implies~\eqref{auxlim}. \end{proof}
Let us end this section with the \begin{proof}[Proof of Theorem~\ref{convergencemuN}]
According to Proposition~\ref{mutight}, the sequence $(\mu^N)_N$ is tight. Besides, thanks to Theorem~\ref{uniquelimit}, every converging subsequence of $(\mu^N)_N$ converges to the same limit $\mu=\mathcal{L}(\bar X^1|\W)$, where $(\bar X^j)_{j\geq 1}$ is solution of \eqref{eq:dynlimit1}. This implies the result. \end{proof}
\section{Appendix}
\subsection{Well-posedness of the limit equation~\eqref{eq:dynlimit}}\label{sec:wellposed}
\begin{proof}[Proof of Item 2. of Theorem \ref{prop:42}] The proof is done using a classical Picard-iteration. For that sake we introduce the sequence of processes $ \limY_t^{[0] } \equiv \limY_0 , $ and $$ \limY^{[n+1]}_t := \limY_0 -\alpha \int_0^t \limY_s^{[n]} ds - \int_{[0,t]\times\r_+\times\r} \limY^{[n+1]}_{s- } \indiq_{ \{ z \le f ( \limY^{[n]}_{s-}) \}} \bN (ds,dz,du) + \sigma \int_0^t \sqrt{ \mu^{n}_s ( f) } d W_s ,$$ where
$$ \mu_s^n = P ( \limY_s^{[n]} \in \cdot | \W ) .$$ Let us first prove a control on the moments of $\limY^{[n]}$ uniformly in $n$. Applying Ito's formula we have \begin{multline*} \esp{\ll(\limY_{t}^{[n+1]}\rr)^2}\leq \esp{\limY_0^2}-2\alpha\int_0^t\esp{\ll(\limY_{s}^{[n+1]}\rr)^2}ds+\sigma^2\int_0^t\esp{\mu^n_{s}(f)}ds\\ \leq \esp{\limY_0^2}+\sigma^2\int_0^t\esp{\mu^n_{s}(f)}ds. \end{multline*} Using that $f$ is bounded,
$$\esp{\ll(\limY_{t}^{[n+1]}\rr)^2}\leq \esp{\limY_0^2}+\sigma^2||f||_\infty t,$$ implying \begin{equation} \label{controlY[n]} \underset{n\in\n}{\sup}~\underset{0\leq s\leq t}{\sup}\esp{\ll(\limY_{s}^{[n]}\rr)^2}<+\infty. \end{equation} Now, we prove the convergence of $\limY^{[n]}_t$. The same strategy as in the proof of Item 1. of Theorem~\ref{prop:42} allows us to show that
$$\delta_t^n := \E \sup_{s \le t } | a ( \limY_s^{[n]} ) - a( \limY_s^{[n-1]} ) | \; \mbox{ satisfies } \;
\delta_t^n \le C (t + \sqrt{t} ) \delta_t^{n-1} ,$$ for all $ n \geq 1 , $ for a constant $C$ only depending on the parameters of the model, but not on $ n, $ neither on $t. $ Choose $t_1 $ such that $$ C (t_1 + \sqrt{t_1} ) \le \frac12.$$
Since $ \sup_{s \le t_1 } | a ( \limY_s^{[0]} ) | = a ( \limY_0) \le \| a \|_\infty, $ we deduce from this that
$$ \delta_{t_1}^n \le 2^{- n } \| a \|_\infty .$$ This implies the almost sure convergence of $a\ll(\limY_t^{[n]}\rr)_n$ to some random variable $Z_t$ for all $t\in[0,t_1]$. As $a$ is an increasing function, the almost sure convergence of $\limY_t^{[n]}$ to some (possibly infinite) random variable $\limY_t$ follows from this. The almost sure finiteness of $\limY_t$ is then guaranteed by Fatou's lemma and~\eqref{controlY[n]}.
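The first of these convergences can be spelled out (a short justification): by monotone convergence,
$$
\esp{\sum_{n\geq 1}\underset{s\leq t_1}{\sup}\ll|a(\limY^{[n]}_s)-a(\limY^{[n-1]}_s)\rr|}=\sum_{n\geq 1}\delta^n_{t_1}\leq 2||a||_\infty<\infty,
$$
so that, almost surely, the series $\sum_{n\geq 1}\sup_{s\leq t_1}|a(\limY^{[n]}_s)-a(\limY^{[n-1]}_s)|$ converges, and the sequence $(a(\limY^{[n]}_s))_{n}$ is uniformly Cauchy on $[0,t_1]$.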
Let us now prove that $\limY$ is a solution of the limit equation~\eqref{eq:dynlimit}; this follows by standard arguments (note that the jump term does not cause trouble because it is of finite activity). The most important point is to notice that
$$ \mu_t^n ( f) = \E ( f ( \limY_t^{[n]}) | \W) \to \E ( f (\limY_t) | \W ) $$ almost surely, which follows from the almost sure convergence of $ f ( \limY_t^{[n]} ) \to f (\limY_t ) ,$ using dominated convergence.
Once the convergence is proven on the time interval $ [0, t_1 ], $ we can proceed iteratively over successive intervals $ [ k t_1, (k+1) t_1] $ to conclude that $\bar X$ is solution of~\eqref{eq:dynlimit} on~$\r_+.$
It remains to prove~\eqref{controllimite}. Firstly, by Fatou's lemma and~\eqref{controlY[n]}, we know that, for all $t>0,$ \begin{equation} \label{controllimite2} \underset{0\leq s\leq t}{\sup}\esp{\bar X_s^2}<\infty. \end{equation}
Besides, Ito's formula gives \begin{multline*} \bar X_t^2 = \bar X_0^2 -2\alpha\int_0^t\bar X_s^2ds-\int_{[0,t]\times\r_+\times\r}\bar X_{s-}^2\indiq_{ \{ z \le f ( \bar X_{s-}) \}} \pi(ds,dz,du)\\ +\sigma^2\int_0^t\mu_s(f)ds+2\sigma\int_0^t\sqrt{\mu_s(f)}\bar X_sdW_s\\
\leq \bar X_0^2+\sigma^2||f||_\infty t+2\sigma\int_0^t\sqrt{\mu_s(f)}\bar X_sdW_s. \end{multline*} Inequality~\eqref{controllimite} is then a straightforward consequence of Burkholder-Davis-Gundy inequality,~\eqref{controllimite2} and the above computation. \end{proof}
We now give the \begin{proof}[Proof of Proposition~\ref{independence}] $(i)$ Given a Brownian motion $W$ and i.i.d. Poisson measures $\pi^i ,$ the same proof as that of Theorem~\ref{prop:42} implies the existence and uniqueness of the system given in~\eqref{eq:dynlimit1} for $1\leq i\leq N.$
$(ii)$ The construction of the proof of Item 2. of Theorem~\ref{prop:42}, together with the proof of Theorem~1.1 of Chapter IV.1 and of Theorem 9.1 in Chapter IV.9 of \cite{ikeda_stochastic_1989}, imply the existence of a measurable function $\Phi$ that does not depend on $k=1,\ldots, N$, and that satisfies, for each $1\leq k\leq N,$ $$\limY^k=\Phi(\limY^k_0,\pi^k,W)$$ and for all $ t \geq 0, $ \begin{equation}\label{eq:nonanticipatif}
\limY^k_{|[0,t]} = \Phi_t (\limY^k_0, \pi^k_{| [ 0, t ] \times \R_+ \times \R } ,(W_s)_{ s \le t } ) ; \end{equation} in other words, our process is non-anticipative and depends only on the underlying noise up to time $t.$
Then we can write, for all continuous bounded functions $g,h$, $$\espc{g(\limY^i)h(\limY^j)}{\W}=\espc{g(\Phi(\limY^i_0,\pi^i,W))h(\Phi(\limY^j_0,\pi^j,W))}=\psi(W),$$ where $\psi(w):=\esp{g(\Phi(\limY^i_0,\pi^i,w))h(\Phi(\limY^j_0,\pi^j,w))}=\esp{g(\Phi(\limY^i_0,\pi^i,w))}\esp{h(\Phi(\limY^j_0,\pi^j,w))}=:\psi_i(w)\psi_j(w)$. With the same reasoning, we show that $\espc{g(\limY^i)}{\W}=\psi_i(W)$ and $\espc{h(\limY^j)}{\W}=\psi_j(W)$. The same arguments prove the mutual independence of $\limY^1,\ldots,\limY^N$ conditionally on $W.$
$(iii)$ Using the representation $\limY^k_{|[0,t]}=\Phi_{t}(\limY^k_0,\pi^k,W) ,$ we can write for any continuous and bounded function $g : D([0,t],\r) \to \R,$
$$\int_\r g d (N^{-1} \sum_{i=1}^N \delta_{\limY^i_{|[0,t]}}) = \frac 1N \sum_{i=1}^N g (\limY^i_{|[0,t]})=\frac 1N \sum_{i=1}^N g \circ\Phi_t(\limY^i_0,\pi^i,W).$$ Applying the law of large numbers to the sequence of i.i.d. PRM's and working conditionally on $ W, $ we obtain that
$$ \lim_{ N \to \infty} \int_\r g d (N^{-1} \sum_{i=1}^N \delta_{\limY^i_{|[0,t]}}) = \esp{ g \circ\Phi_t(\limY^1_0,\pi^1,W) | \W } =\esp{ g ( \limY^1_{|[0,t]}) | \W } = \esp{ g ( \limY^1_{|[0,t]}) | (W_s)_{s \le t } } ,$$ where we have used \eqref{eq:nonanticipatif}. \end{proof}
\subsection{Proof of Corollary \ref{cor:PDE}}
Applying Ito's formula, we have \begin{multline}\label{eq:pde}
\phi ( \limY_t ) = \phi ( \limY_0 ) + \int_0^t \left( -\alpha\phi' ( \limY_s)\limY_s+ \frac12 \phi'' ( \limY_s) \mu_s ( f) \right) ds + \int_0^t \phi' ( \limY_s) \sqrt{ \mu_s ( f) } d W_s \\ + \int_{ [0, t ] \times \R_+ \times \R } \indiq_{\{ z \le f( \limY_{s-}) \}} \left( \phi( 0 ) - \phi ( \limY_{s-}) \right) \pi ( ds, dz, du ) . \end{multline} Since $ \phi', \phi''$ and $ f$ are bounded, it follows from~\eqref{controllimite} and Fubini's theorem that \begin{multline*}
\E \left( \int_0^t \big( -\alpha\phi' ( \limY_s)\limY_s + \frac12 \phi'' ( \limY_s) \mu_s ( f) \big) ds | \W \right) = \int_0^t \E \left( -\alpha\phi' ( \limY_s)\limY_s + \frac12 \phi'' ( \limY_s) \mu_s ( f) | \W\right) ds \\
= \int_0^t \int_\R \left( -\alpha\phi' (x)x + \frac12 \phi'' (x) \mu_s ( f) \right) \mu_s (dx) ds .
\end{multline*}
Moreover, by independence of $ \limY_0 $ and $ W, $ $ \E ( \phi( \limY_0 ) | \W ) = \int_\R \phi ( x) \nu_0 (dx) .$
To deal with the martingale part in \eqref{eq:pde}, we use an Euler scheme to approximate the stochastic integral $I_t := \int_0^t \phi' ( \limY_s) \sqrt{ \mu_s ( f) } d W_s.$ For that sake, let $ t^n_k : = k 2^{- n } t , 0 \le k \le 2^n,$ $n \geq 1,$ and define $$ I_t^n := \sum_{k=0}^{2^n - 1 } \phi ' ( \limY_{t_k^n } ) \Delta_k^n , \; \Delta_k^n = \int_{t_k^n }^{t_{k+1}^n } \sqrt{ \mu_s ( f) } d W_s ,$$
then $ \E ( | I_t - I_t^n |^2 ) \to 0 $ as $ n \to \infty ,$ and therefore $ \E ( I_t^n | \W ) \to \E ( I_t | \W) $ in $L^2 ( P), $ as $ n \to \infty .$ But
$$ \E ( I_t^n |\W ) = \sum_{k=0}^{2^n - 1 } \E ( \phi ' ( \limY_{t_k^n } ) | \W) \Delta_k^n \to \int_0^t \E ( \phi' ( \limY_s) | \W) \sqrt{ \mu_s ( f) } d W_s $$
in $L^2 ( P), $ since the sequence of processes $ Y^n_s := \sum_{k=0}^{2^n - 1 } \indiq_{] t_k^n , t_{k+1}^n ] } (s) \E ( \phi ' ( \limY_{t_k^n } ) | \W ) , 0 \le s \le t, $ converges in $L^2 ( \Omega \times [0, t ]) $ to $ \E ( \phi ' ( \limY_s) |\W ) .$
We finally deal with the jump part in \eqref{eq:pde}. Since $f$ is bounded, and by independence of $W$ and $\pi, $ we can rewrite this part in terms of an underlying Poisson process $N_t , $ independent of $W $ and having rate $ \| f\|_\infty , $ and in terms of i.i.d. variables $(V_n)_{n\geq 1}$ uniformly distributed on $ [0, 1 ] ,$ independent of $W$ and of $ N$ as follows. $$
\int_{ [0, t ] \times \R_+ \times \R } \indiq_{\{ z \le f( \limY_{s-}) \}} \left( \phi( 0 ) - \phi ( \limY_{s-}) \right) \pi ( ds, dz, du ) =
\sum_{n=1}^{N_t} \indiq_{\{ \|f \|_\infty V_n \le f( \limY_{T_n- } ) \}} ( \phi(0) - \phi( \limY_{T_n- } )) . $$
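Note that $V_n$ is uniform on $[0,1]$ and independent of $N$, $W$ and $(V_k)_{k\neq n},$ while $\limY_{T_n-}$ does not depend on $V_n$; hence, conditioning first on everything except $V_n,$
$$\E\big[\indiq_{\{\|f\|_\infty V_n\le f(\limY_{T_n-})\}}\,\big|\,N,W,\limY_{T_n-}\big]=\P\big(V_n\le f(x)/\|f\|_\infty\big)\big|_{x=\limY_{T_n-}}=\frac{f(\limY_{T_n-})}{\|f\|_\infty}\,,$$
since $0\le f\le \|f\|_\infty.$ This justifies the first equality in the computation below.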
Taking the conditional expectation $ \E ( \cdot | \W), $ we obtain \begin{multline*}
\E \left( \sum_{n=1}^{N_t} \indiq_{\{ \|f \|_\infty V_n \le f( \limY_{T_n- } ) \}} ( \phi(0) - \phi( \limY_{T_n- } ))| \W \right) = \\
\E \left( \sum_{n=1}^{N_t} \frac{ f( \limY_{T_n- } ) }{\|f\|_\infty} ( \phi(0) - \phi( \limY_{T_n- } ))| \W \right)\\
= \int_0^t \E \left( f( \limY_s) ( \phi(0) - \phi( \limY_s))| \W \right) ds , \end{multline*} where we have used the independence properties of $ (V_n)_n, N_t $ and $ W $ and the fact that conditionally on $ \{ N_t = n \}, $ the jump times $ (T_1, \ldots , T_n ) $ are distributed as the order statistics of $n$ i.i.d. times which are uniformly distributed on $ [0, t ].$ This concludes our proof.
\subsection{A priori estimates}
In this subsection, we prove useful a priori upper bounds on some moments of the solutions of the SDEs~\eqref{eq:dyn}.
\begin{lem} \label{estimate} Assume that~\ref{control} holds and that $f$ is bounded. Then: \begin{itemize} \item[(i)] for all $t>0,$ $\underset{N\in\n^*}{\sup}\underset{0\leq s\leq t}{\sup}\esp{\ll(X_{s}^{N,1}\rr)^2}<+\infty$;
\item[(ii)] for all $t>0,$ $\underset{N\in\n^*}{\sup}\esp{\underset{0\leq s\leq t}{\sup}\ll|X^{N,1}_s\rr|}<+\infty$.
\end{itemize} \end{lem}
\begin{proof} {\it Step~1:} Let us prove $(i)$. \begin{multline*} \ll(X_{t}^{N,1}\rr)^2= \ll(X_0^{N,1}\rr)^2-2\alpha\int_0^{t} \ll(X_s^{N,1}\rr)^2ds-\int_{[0,t]\times\r_+\times\r}\ll(X_{s-}^{N,1}\rr)^2\uno{z\leq f\ll(X_{s-}^{N,1}\rr)}d\pi^1(s,z,u)\\ +\sum_{j=2}^N\int_{[0,t]\times\r_+\times\r}\ll[\ll(X_{s-}^{N,1}+\frac{u}{\sqrt{N}}\rr)^2-\ll(X_{s-}^{N,1}\rr)^2\rr]\uno{z\leq f\ll(X_{s-}^{N,j}\rr)}d\pi^j(s,z,u)\\ \leq \ll(X_0^{N,1}\rr)^2+\sum_{j=2}^N\int_{[0,t]\times\r_+\times\r}\ll[\ll(X_{s-}^{N,1}+\frac{u}{\sqrt{N}}\rr)^2-\ll(X_{s-}^{N,1}\rr)^2\rr]\uno{z\leq f\ll(X_{s-}^{N,j}\rr)}d\pi^j(s,z,u). \end{multline*} As $f$ is bounded,
$$\esp{\ll(X_{t}^{N,1}\rr)^2}\leq \esp{\ll(X_0^{N,1}\rr)^2}+\frac{\sigma^2}{N}\sum_{j=2}^N\int_0^t\esp{f\ll(X_{s}^{N,j}\rr)}ds\leq \esp{\ll(X_0^{N,1}\rr)^2}+\sigma^2||f||_\infty t.$$ {\it Step~2:} Now we prove $(ii)$. \begin{align*}
\ll|X_t^{N,1}\rr|\leq& \ll|X_0^{N,1}\rr|+\alpha\int_0^t\ll|X_s^{N,1}\rr|ds+\int_{[0,t]\times\r_+\times\r}\ll|X^{N,1}_{s-}\rr|\uno{z\leq f(X^{N,1}_{s-})}d\pi^1(s,z,u)+\frac{1}{\sqrt{N}}|M_t^N| , \end{align*} where $M_t^N$ is the martingale $M_t^N=\sum_{j=2}^N\int_{[0,t]\times\r_+\times\r}u\uno{z\leq f\ll(X^{N,j}_{s-}\rr)}d\pi^j(s,z,u).$ Then \begin{multline*}
\underset{0\leq s\leq t}{\sup}\ll|X_s^{N,1}\rr|\leq \ll|X_0^{N,1}\rr|+\alpha\int_0^t|X_s^{N,1}|ds+\int_{[0,t]\times\r_+\times\r}\ll|X^{N,1}_{s-}\rr|\uno{z\leq f(X^{N,1}_{s-})}d\pi^1(s,z,u)\\
+\frac{1}{\sqrt{N}}\underset{0\leq s\leq t}{\sup}|M_s^N|. \end{multline*} To conclude the proof, it is now sufficient to notice that
$$\frac{1}{\sqrt{N}}\esp{\underset{0\leq s\leq t}{\sup}|M_s^N|}\leq C\esp{\frac{1}{N}[M^N]_t}^{1/2}$$ (by the Burkholder-Davis-Gundy inequality, for a universal constant $C$) is uniformly bounded in $N$, since $f$ is bounded, and to use point $(i)$ of the lemma. \end{proof}
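Note for completeness that the uniform boundedness of the bracket term can be made explicit: assuming, as in Step~1, that each $\pi^j$ has intensity $ds\,dz\,\mu(du)$ with $\int_\r u^2\,\mu(du)=\sigma^2,$ we get
$$\frac{1}{N}\,\esp{[M^N]_t} = \frac1N \sum_{j=2}^N \esp{ \int_{[0,t]\times\r_+\times\r} u^2\, \uno{z\leq f\ll(X^{N,j}_{s-}\rr)}\, ds\, dz\, \mu(du) } \leq \frac{N-1}{N}\, \sigma^2\, \|f\|_\infty\, t \leq \sigma^2 \|f\|_\infty t .$$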
\end{document}
\begin{document}
\title{\textbf{KAM for autonomous quasi-linear \\ perturbations of KdV}}
\date{} \author{Pietro Baldi, Massimiliano Berti, Riccardo Montalto}
\maketitle
\noindent {\bf Abstract.} We prove the existence and stability of Cantor families of quasi-periodic, small amplitude solutions of {\it quasi-linear} (i.e. {\it strongly nonlinear}) autonomous Hamiltonian perturbations of KdV.
\noindent {\it Keywords:} KdV, KAM for PDEs, quasi-linear PDEs, Nash-Moser theory, quasi-periodic solutions.
\noindent {\it MSC 2010:} 37K55, 35Q53.
\tableofcontents
\section{Introduction and main results}
In this paper we prove the existence and stability of Cantor families of quasi-periodic solutions of Hamiltonian {\it quasi-linear} (also called ``{\it strongly nonlinear}'', e.g. in \cite{k1}) perturbations of the KdV equation \begin{equation}\label{kdv quadratica} u_t + u_{xxx} - 6 u u_x + {\mathcal N}_4 (x, u, u_x, u_{xx}, u_{xxx}) = 0\,, \end{equation} under periodic boundary conditions $ x \in \mathbb T := \mathbb R / 2 \pi \mathbb Z$, where \begin{equation}\label{qlpert} {\mathcal N}_4 (x, u, u_x, u_{xx}, u_{xxx}) := - \partial_x \big[ (\partial_u f)(x, u,u_x) - \partial_{x} ((\partial_{u_x} f)(x, u,u_x)) \big] \end{equation} is the most general quasi-linear Hamiltonian (local) nonlinearity. Note that $ {\cal N}_4 $ contains as many derivatives as the linear part $ \partial_{xxx} $. The equation \eqref{kdv quadratica} is the Hamiltonian PDE $ u_t = \partial_x \nabla H(u) $ where $ \nabla H $ denotes the $L^2(\mathbb T_x)$ gradient of the Hamiltonian \begin{equation} \label{Ham in intro} H (u) = \int_\mathbb T \frac{u_x^2}{2} + u^3 + f(x, u,u_x) \, dx \end{equation} on the real phase space \begin{equation}\label{def phase space} H^1_0 (\mathbb T_x) := \Big\{ u(x ) \in H^1 (\mathbb T, \mathbb R) \ : \ \int_{\mathbb T} u(x) dx = 0 \Big\} \, . \end{equation} We assume that the ``Hamiltonian density" $ f \in C^q (\mathbb T \times \mathbb R \times \mathbb R; \mathbb R ) $ for some $ q $ large enough, and that \begin{equation}\label{order5} f = f_5(u,u_x) + f_{\geq 6}(x,u,u_x) \,, \end{equation} where $ f_5 (u, u_x) $ denotes the homogeneous component of $ f $ of degree 5 and $ f_{\geq 6} $ collects all the higher order terms. By \eqref{order5} the nonlinearity $ {\mathcal N}_4 $ vanishes of order $ 4 $ at $ u = 0 $ and
\eqref{kdv quadratica} may be seen, close to the origin, as a ``small" perturbation of the KdV equation \begin{equation}\label{KdVmKdV} u_t + u_{xxx} - 6 u u_x = 0 \, , \end{equation} which is completely integrable. Actually, the KdV equation \eqref{KdVmKdV} may be described by global analytic action-angle variables, see \cite{KaP} and the references therein.
A natural question is to know whether the periodic, quasi-periodic or almost periodic solutions of \eqref{KdVmKdV} persist under small perturbations. This is the content of KAM theory.
The first KAM results for PDEs have been obtained for $1$-d semilinear Schr\"odinger and wave equations by Kuksin \cite{Ku}, Wayne \cite{W1}, Craig-Wayne \cite{CW}, P\"oschel \cite{Po2}, see \cite{C}, \cite{k1} and references therein. For PDEs in higher space dimension the theory has been more recently extended by Bourgain \cite{B5}, Eliasson-Kuksin \cite{EK}, and Berti-Bolle \cite{BB13JEMS}, Geng-Xu-You \cite{GXY}, Procesi-Procesi \cite{PP}-\cite{PP1}, Wang \cite{Wang}.
For {\it unbounded} perturbations the first KAM results have been proved by Kuksin \cite{K2} and Kappeler-P\"oschel \cite{KaP} for KdV (see also Bourgain \cite{B96}), and more recently by Liu-Yuan \cite{LY}, Zhang-Gao-Yuan \cite{ZGY} for derivative NLS, and by Berti-Biasco-Procesi \cite{BBiP1}-\cite{BBiP2} for derivative NLW. For a recent survey of known results for KdV, we refer to \cite{K13}.
The KAM theorems in \cite{K2}, \cite{KaP} prove the persistence of the finite-gap solutions of the integrable KdV \eqref{KdVmKdV} under semilinear Hamiltonian perturbations $ \varepsilon \partial_{x} (\partial_u f) (x, u) $, namely when the density $ f $ is independent of $ u_x $, so that \eqref{qlpert} is a differential operator of order $ 1 $ (note that in \cite{k1} such nonlinearities are called ``quasi-linear" and \eqref{qlpert} ``strongly nonlinear").
The key point is that the frequencies of KdV grow as $ \sim j^3 $ and the difference $ |j^3 - i^3| \geq (j^2 + i^2)/2 $, $i \neq j $, so that KdV gains (outside the diagonal) two derivatives. This approach also works for Hamiltonian pseudo-differential perturbations of order $ 2 $ (in space), using the improved Kuksin's lemma in \cite{LY}. However it does {\it not} work for a general quasi-linear
perturbation as in \eqref{qlpert}, which is a nonlinear differential operator of the {\it same} order (i.e.\ 3) as the constant coefficient linear operator $ \partial_{xxx}$. Such a strongly nonlinear perturbation term makes the KAM question quite delicate because of the possible phenomenon of formation of singularities in finite time, see Lax \cite{Lax}, Klainerman-Majda \cite{KM} for quasi-linear wave equations, see also section 1.4 of \cite{k1}. For example, Kappeler-P\"oschel \cite{KaP} (Remark 3, page 19) wrote: ``{\it It would be interesting to obtain perturbation results which also include terms of higher order, at least in the region where the KdV approximation is valid. However, results of this type are still out of reach, if true at all}''.
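For completeness, the elementary estimate behind this gain of two derivatives is, for distinct nonzero integers $ i \neq j ,$
$$ |j^3 - i^3| = |j-i|\,( j^2 + ij + i^2 ) \geq j^2 + ij + i^2 \geq \frac{j^2+i^2}{2}\,, $$
since $ |j-i| \geq 1 $ and $ ij \geq - \frac12 (i^2+j^2) .$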
In this paper we give the first positive answer to this problem, proving the existence of small amplitude, linearly stable, quasi-periodic solutions of \eqref{kdv quadratica}, see Theorem \ref{thm:KdV}. Note that \eqref{kdv quadratica} does not depend on external parameters. Moreover the KdV equation \eqref{kdv quadratica} is a {\it completely resonant} PDE, namely the linearized equation at the origin is the linear Airy equation $ u_t + u_{xxx} = 0 $, which possesses only the $ 2 \pi $-periodic in time solutions \begin{equation}\label{Airyper} u(t,x) = {\mathop \sum}_{j \in \mathbb Z \setminus \{0\} } u_j {\rm e}^{{\mathrm i} j^3 t} e^{{\mathrm i} jx } \, . \end{equation} Thus the existence of quasi-periodic solutions of \eqref{kdv quadratica} is a purely nonlinear phenomenon
(the diophantine frequencies in \eqref{solution u} are $ O(|\xi|) $-close to integers with $ \xi \to 0 $) and a perturbation theory is more difficult.
The solutions that we find are localized in Fourier space close to finitely many {``tangential sites''} \begin{equation} \label{tang sites} S^+ := \{ \bar\jmath_1, \ldots, \bar\jmath_\nu \}\,, \quad S := S^+ \cup (- S^+) = \{ \pm j : j \in S^+ \}\,, \quad {\bar \jmath}_i \in \mathbb N \setminus \{0\} \, , \quad \forall i =1, \ldots, \nu \,. \end{equation} The set $ S $ is required to be even because the solutions $ u $ of \eqref{kdv quadratica} have to be real valued. Moreover, we also assume the following explicit hypotheses on $ S $: \begin{itemize} \item {\sc $ ({\mathtt S}1) $} $j_1 + j_2 + j_3 \neq 0$ for all $j_1, j_2, j_3 \in S$. \item {\sc $ ({\mathtt S}2) $} $ \nexists j_1, \ldots, j_4 \in S $ such that $ j_1 + j_2 + j_3 + j_4 \neq 0 $, $ j_1^3 + j_2^3 + j_3^3 + j_4^3 - (j_1 + j_2 + j_3 + j_4)^3 = 0 $. \end{itemize} \begin{theorem} \label{thm:KdV} Given $ \nu \in \mathbb N $, let $ f \in C^q $ (with $ q := q(\nu) $ large enough) satisfy \eqref{order5}. Then, for all the tangential sites $ S $ as in \eqref{tang sites} satisfying $ ({\mathtt S}1) $-$ ({\mathtt S}2) $, the KdV equation \eqref{kdv quadratica} possesses small amplitude quasi-periodic solutions with diophantine frequency vector $\omega := \omega(\xi) = (\omega_j)_{j \in S^+} \in \mathbb R^\nu$, of the form \begin{equation}\label{solution u}
u(t,x) = {\mathop\sum}_{j \in S^+} 2 \sqrt{\xi_j} \, \cos( \omega_j t + j x) + o( \sqrt{|\xi|} ), \quad \omega_j := j^3 - 6 \xi_j j^{-1} \,, \end{equation}
for a ``Cantor-like" set of small amplitudes $ \xi \in \mathbb R^\nu_+ $ with density $ 1 $ at $ \xi = 0 $. The term $ o( \sqrt{|\xi|} ) $ is small in some $ H^s $-Sobolev norm, $ s < q $. These quasi-periodic solutions are {\sl linearly stable}. \end{theorem}
This result is deduced from Theorem \ref{main theorem}. Let us make some comments. \begin{enumerate} \item The set of tangential sites $ S $ satisfying $({\mathtt S}1)$-$({\mathtt S}2) $ can be iteratively constructed in an explicit way, see the end of section \ref{sec:NM}. After fixing $ \{\bar\jmath_1, \ldots, \bar\jmath_n \} $, in the choice of $\bar\jmath_{n+1}$ there are only finitely many forbidden values, while all the other infinitely many values are good choices for $\bar\jmath_{n+1}$. In this precise sense the set $ S $ is ``generic''. \item The linear stability of the quasi-periodic solutions is discussed after \eqref{sistema lineare dopo}. In a suitable set of symplectic coordinates $ (\psi, \eta, w) $, $ \psi \in \mathbb T^\nu $, near the invariant torus,
the linearized equations at the quasi-periodic solutions assume the form \eqref{sistema lineare dopo}, \eqref{vjmuj}. Actually there is a complete KAM normal form near the invariant torus (remark \ref{rem:KAM normal form}), see also \cite{BB13}. \item A similar result holds for perturbed (focusing/defocusing) mKdV equations \begin{equation}\label{mKdV} u_t + u_{xxx} \pm \partial_x u^3 + {\mathcal N}_4 (x, u, u_x, u_{xx}, u_{xxx}) = 0 \end{equation} for tangential sites $ S $ which satisfy $ \frac{2}{2\nu - 1} \sum_{i = 1}^\nu \bar \jmath_i^{\,2} \notin \mathbb Z $. If the density $ f(u, u_x) $ is independent of $ x $, the result holds for {\it all} the choices of the tangential sites. The KdV equation \eqref{kdv quadratica} is more difficult than
\eqref{mKdV} because the nonlinearity is quadratic and not cubic.
An important point is that the fourth order Birkhoff normal form of KdV and mKdV is completely integrable. The present strategy of proof --- that we describe in detail below --- is a rather general approach for constructing small amplitude quasi-periodic solutions of quasi-linear perturbed KdV equations. For example it could be applied to generalized KdV equations with leading nonlinearity $ u^p $, $ p \geq 4 $, by using the normal form techniques of Procesi-Procesi \cite{PP1}-\cite{PP}. A further interesting open question concerns perturbations of the finite gap solutions of KdV. \end{enumerate}
Let us describe the strategy of proof of Theorem \ref{thm:KdV}, which involves many different arguments. \\[2mm] {\it Weak Birkhoff normal form.} Once the finite set of tangential sites $ S $ has been fixed, the first step is to perform a ``weak" Birkhoff normal form (weak BNF), whose goal is to find an invariant manifold of solutions of the third order approximate KdV equation \eqref{kdv quadratica}, on which the dynamics is completely integrable, see section \ref{sec:WBNF}. Since the KdV nonlinearity is quadratic, two steps of weak BNF are required. The present Birkhoff map is close to the identity up to {\it finite dimensional} operators, see Proposition \ref{prop:weak BNF}. The key advantage is that it modifies $ {\mathcal N}_4 $ very mildly, only up to finite dimensional operators (see for example Lemma \ref{lemma astratto potente}), and thus the spectral analysis of the linearized equations (that we shall perform in section \ref{operatore linearizzato sui siti normali}) is essentially the same as if we were in the original coordinates.
The weak normal form \eqref{widetilde cal H} does not remove (or normalize) the monomials $ O(z^2) $. This could be done. However, we do not perform such stronger normal form (called ``partial BNF" in P\"oschel \cite{Po3}) because the corresponding Birkhoff map is close to the identity only up to an operator of order $ O(\partial_x^{-1}) $, and so it would produce, in the transformed vector field $ {\mathcal N}_4 $, terms of order $ \partial_{xx} $ and $ \partial_x $. A fortiori, we cannot either use the full Birkhoff normal form computed in \cite{KaP} for KdV, which completely diagonalizes the fourth order terms, because such Birkhoff map is only close to the identity up to a bounded operator. For the same reason, we do not use the global nonlinear Fourier transform in \cite{KaP} (Birkhoff coordinates), which is close to the Fourier transform up to smoothing operators of order $ O(\partial_x^{-1}) $.
The weak BNF procedure of section \ref{sec:WBNF} is sufficient to find the first nonlinear (integrable) approximation of the solutions and to extract the ``frequency-to-amplitude'' modulation \eqref{mappa freq amp}.
In Proposition \ref{prop:weak BNF} we also remove the terms $O(v^5)$, $O(v^4 z)$ in order to have sufficiently good approximate solutions so that the Nash-Moser iteration of section \ref{sec:NM} will converge. This is necessary for KdV, whose nonlinearity is quadratic at the origin. These further steps of Birkhoff normal form are not required if the nonlinearity is already cubic, as for mKdV, see Remark \ref{remark:cubic}. To this aim, we choose the tangential sites $S$ such that $({\mathtt S}2)$ holds. We also note that we assume \eqref{order5} because we use the conservation of momentum up to the homogeneity order 5, see \eqref{cons mom}.
\\[1mm]
{\it Action-angle and rescaling.} At this point we introduce action-angle variables on the tangential sites (section \ref{sec:4}) and, after the rescaling \eqref{rescaling kdv quadratica}, we look for quasi-periodic solutions of the Hamiltonian \eqref{formaHep}. Note that the coefficients of the normal form $ {\cal N } $ in \eqref{Hamiltoniana Heps KdV} depend on the angles $ \theta $, unlike the usual KAM theorems \cite{Po3}, \cite{Ku}, where the whole normal form is reduced to constant coefficients. This is because the weak BNF of section \ref{sec:WBNF} did {\it not} normalize the quadratic terms $ O(z^2) $. These terms are dealt with the ``linear Birkhoff normal form'' (linear BNF) in sections \ref{BNF:step1}, \ref{BNF:step2}. In some sense here the ``partial" Birkhoff normal form of \cite{Po3} is split into the weak BNF of section \ref{sec:WBNF} and the linear BNF of sections \ref{BNF:step1}, \ref{BNF:step2}.
The action-angle variables are convenient for proving the stability of the solutions. \\[1mm]
{\it The nonlinear functional setting.} We look for a zero of the nonlinear operator \eqref{operatorF}, whose unknown is the embedded torus and the frequency $ \omega $ is seen as an ``external" parameter. The solution is obtained by a Nash-Moser iterative scheme in Sobolev scales. The key step is to construct (for $ \omega $ restricted to a suitable Cantor-like set)
an approximate inverse ({\it \`a la} Zehnder \cite{Z1}) of the linearized operator at any approximate solution. Roughly, this means to find a linear operator which is an inverse at an exact solution. A major difficulty is that the tangential and the normal dynamics near an invariant torus are strongly coupled.
This difficulty is overcome by implementing the abstract procedure in Berti-Bolle \cite{BB13}-\cite{BB14}
developed in order to prove existence of quasi-periodic solutions for autonomous NLW (and NLS) with a multiplicative potential. This approach reduces the search of an approximate inverse for \eqref{operatorF} to the invertibility of a quasi-periodically forced PDE restricted on the normal directions. This method approximately decouples the ``tangential" and the ``normal" dynamics around an approximate invariant torus, introducing a suitable set of symplectic variables $ (\psi, \eta, w) $ near the torus, see \eqref{trasformazione modificata simplettica}. Note that, in the first line of \eqref{trasformazione modificata simplettica}, $ \psi $ is the ``natural" angle variable which coordinates the torus, and, in the third line, the normal variable $ z $ is only translated by the component $ z_0 (\psi )$ of the torus. The second line completes this transformation to a symplectic one. The canonicity of this map is proved in \cite{BB13} using the isotropy of the approximate invariant torus $ i_\delta $, see Lemma \ref{toro isotropico modificato}. The change of variable \eqref{trasformazione modificata simplettica} brings the torus $ i_\delta $ ``at the origin". The advantage is that
the second equation in \eqref{operatore inverso approssimato} (which corresponds to the action variables of the torus)
can be immediately solved, see \eqref{soleta}. Then it remains to solve the third equation \eqref{cal L omega}, i.e. to invert the linear operator $ {\cal L}_\omega $. This is, up to finite dimensional remainders, a quasi-periodic Hamiltonian linear Airy equation perturbed by a variable coefficients differential operator of order $ O(\partial_{xxx} ) $. The exact form of $ {\cal L}_\omega $ is obtained in Proposition \ref{prop:lin}. \\[1mm] {\it Reduction of the linearized operator in the normal directions.} In section \ref{operatore linearizzato sui siti normali} we conjugate the variable coefficients operator $ {\cal L}_\omega $ in \eqref{Lom KdVnew} to a diagonal operator with constant coefficients which describes infinitely many harmonic oscillators \begin{equation}\label{linearosc} {\dot v}_j + \mu_j^\infty v_j = 0 \, , \quad \mu_j^\infty := {\mathrm i} (-m_3 j^3 + m_1 j) + r_j^\infty \in {\mathrm i} \mathbb R \, , \quad j \notin S \, , \end{equation}
where the constants $ m_3 -1 $, $ m_1 \in \mathbb R $ and $ \sup_j |r_j^\infty | $ are small, see Theorem \ref{teoremadiriducibilita}. The main perturbative effect to the spectrum (and the eigenfunctions) of $ {\cal L}_\omega $ is clearly due to the term $ a_1 (\omega t, x ) \partial_{xxx} $ (see \eqref{Lom KdVnew}), and it is too strong for the usual
reducibility KAM techniques to work directly. The conjugacy of $ {\cal L}_\omega $ with \eqref{linearosc} is obtained in several steps. The first task (obtained in sections \ref{step1}-\ref{step5}) is to conjugate $ {\cal L}_\omega $ to another Hamiltonian operator of $ H_S^\bot $ with constant coefficients \begin{equation}\label{L6 qualitativo} {\cal L}_6 := \omega \cdot \partial_\varphi + m_3 \partial_{xxx} + m_1 \partial_x + R_6 \, , \quad m_1, m_3 \in \mathbb R \, , \end{equation} up to a small bounded remainder $ R_6 = O(\partial_x^0 ) $, see \eqref{def L6}. This expansion of $ {\cal L}_\omega $ in ``decreasing symbols" with constant coefficients is similar to \cite{BBM}, and it is somehow in the spirit of the works of Iooss, Plotnikov and Toland \cite{Ioo-Plo-Tol}-\cite{IP11} in water waves theory, and Baldi \cite{Baldi Benj-Ono} for Benjamin-Ono. It is obtained by transformations which are very different from the usual KAM changes of variables. There are several differences with respect to \cite{BBM}:
\begin {enumerate} \item The first step is to eliminate the $ x $-dependence from the coefficient $ a_1 (\omega t, x ) \partial_{xxx} $ of the Hamiltonian operator $ {\cal L}_\omega $. We cannot use the symplectic transformation $ {\cal A } $ defined in \eqref{primo cambio di variabile modi normali}, used in \cite{BBM}, because $ {\cal L}_\omega $ acts on the normal subspace $ H_S^\bot $ only,
and not on the whole Sobolev space as in \cite{BBM}. We cannot use the restricted map $ {\cal A}_\bot := \Pi_S^\bot {\cal A} \Pi_S^\bot $ which is {\it not} symplectic. In order to find a symplectic diffeomorphism of $ H_S^\bot $ near $ {\cal A}_\bot $, the first observation is to realize $ {\cal A } $ as the flow map of the time dependent Hamiltonian transport linear PDE \eqref{transport-free}. Thus we conjugate $ {\cal L}_\omega $ with the flow map of the projected Hamiltonian equation \eqref{problemi di cauchy}. In Lemma \ref{modifica simplettica cambio di variabile} we prove that it differs from $ {\cal A}_\bot $ up to finite dimensional operators. A technical, but important, fact is that the remainders produced after this conjugation of ${\cal L}_\omega $ remain of the finite dimensional form \eqref{forma buona con gli integrali}, see Lemma \ref{cal R3}.
This step may be seen as a quantitative application of the Egorov theorem, see \cite{Taylor}, which describes how the principal symbol of a pseudo-differential operator (here $ a_1 (\omega t, x) \partial_{xxx} $) transforms under the flow of a linear hyperbolic PDE (here \eqref{problemi di cauchy}).
\item Since the weak BNF procedure of section \ref{sec:WBNF} did not touch the quadratic terms $ O(z^2 ) $, the operator $ {\cal L}_\omega $ has variable coefficients also at the orders $ O(\varepsilon)$ and $ O(\varepsilon^2 )$, see \eqref{Lom KdVnew}-\eqref{a1p1p2}. These terms cannot be reduced to constants by the perturbative scheme in \cite{BBM}, which applies to terms $ R $ such that $ R \gamma^{ -1} \ll 1 $ where $ \gamma $ is the diophantine constant of the frequency vector $ \omega $. Here, since KdV is completely resonant, such $ \gamma = o(\varepsilon^2 ) $, see \eqref{omdio}. These terms are reduced to constant coefficients in sections \ref{BNF:step1}-\ref{BNF:step2} by means of purely algebraic arguments (linear BNF), which, ultimately, stem from the complete integrability of the fourth order BNF of the KdV equation \eqref{KdVmKdV}, see \cite{KaP}. \end{enumerate}
The order of the transformations of sections \ref{step1}-\ref{subsec:mL0 mL5} used to reduce $ {\cal L}_\omega $ is not accidental. The first two steps in sections \ref{step1}, \ref{step2} reduce to constant coefficients the quasi-linear term $ O(\partial_{xxx}) $ and eliminate the term $ O(\partial_{xx})$, see \eqref{L2 Kdv} (the second transformation is a time quasi-periodic reparametrization of time). Then,
in section \ref{step3}, we apply the transformation ${\cal T}$ \eqref{gran tau} in such a way that the space average of the coefficient $ d_1 (\varphi, \cdot ) $ in \eqref{L3 KdV} is constant. This is done in view of the applicability of the descent method in section \ref{step5}. All these transformations are composition operators induced by diffeomorphisms of the torus. Therefore they are well-defined operators of a Sobolev space into itself, but their decay norm is infinite! We perform the transformation $ {\cal T } $ \emph{before} the linear Birkhoff normal form steps of sections \ref{BNF:step1}-\ref{BNF:step2}, because $\mathcal{T}$ is a change of variable that preserves the form \eqref{forma buona con gli integrali} of the remainders (it is not evident after the Birkhoff normal form). The Birkhoff transformations are symplectic maps of the form $ I + \varepsilon O( \partial_x^{-1}) $. Thanks to this property the coefficient $ d_1 ( \varphi,x) $ obtained in step \ref{step3} is {\it not} changed by these Birkhoff maps. The transformation in section \ref{step5} is one step of ``descent method'' which transforms $ d_1 ( \varphi,x) \partial_x $ into a constant $ m_1 \partial_x $. It is at this point of the regularization procedure that the assumption $({\mathtt S}1)$ on the tangential sites is used, so that the space average of the function $ q_{>2} $ is zero, see Lemma \ref{p3 zero average}. Actually we only need that the average of the function in \eqref{unico pezzo} is zero. If
$ f_5 = 0 $
(see \eqref{order5}) then $({\mathtt S}1)$ is not required. This completes the task of conjugating ${\cal L}_\omega $ to $ {\cal L}_6 $ in \eqref{L6 qualitativo}.
Finally, in section \ref{subsec:mL0 mL5} we apply the abstract reducibility Theorem 4.2 in \cite{BBM}, based on a quadratic KAM scheme, which completely diagonalizes the linearized operator, obtaining \eqref{linearosc}. The required smallness condition \eqref{R6resto} for $ R_6 $ holds. Indeed
the biggest term in $ R_6 $ comes from the conjugation of $ \varepsilon \partial_x v_\varepsilon (\theta_0 (\varphi), y_\delta (\varphi)) $ in \eqref{a1p1p2}. The linear BNF procedure of section \ref{BNF:step1} had eliminated its main contribution $ \varepsilon \partial_x v_\varepsilon ( \varphi,0) $. It remains $ \varepsilon \partial_x \big( v_\varepsilon (\theta_0 (\varphi), y_\delta (\varphi) ) - v_\varepsilon ( \varphi,0) \big) $ which has size $ O( \varepsilon^{7-2b} \gamma^{-1} ) $ due to the estimate \eqref{ansatz 0} of the approximate solution.
This term enters in the variable coefficients of $ d_1 (\varphi,x) \partial_x $ and $ d_0 (\varphi, x) \partial_x^0 $. The first one had been reduced to the constant operator $ m_1 \partial_x $ by the descent method of section \ref{step5}. The latter term is an operator of order $O(\partial_x^0 )$ which satisfies \eqref{R6resto}. Thus $ {\cal L}_6 $ may be diagonalized
by the iterative scheme of Theorem 4.2 in \cite{BBM} which requires the smallness condition $ O( \varepsilon^{7-2b} \gamma^{-2}) \ll 1 $. This is the content of section \ref{subsec:mL0 mL5}. \\[1mm] {\it The Nash-Moser iteration.} In section \ref{sec:NM} we perform the nonlinear Nash-Moser iteration which finally proves Theorem \ref{main theorem} and, therefore, Theorem \ref{thm:KdV}. The optimal smallness condition required for the convergence of the scheme is
$ \varepsilon \| {\cal F}(\varphi, 0, 0 ) \|_{s_0+ \mu} \gamma^{-2} \ll 1 $, see \eqref{nash moser smallness condition}. It is verified because $ \| X_P(\varphi, 0 , 0 ) \|_s \leq_s \varepsilon^{6 - 2b} $ (see \eqref{stima XP}), which, in turn, is a consequence of having eliminated the terms $ O(v^5), O( v^4 z)$ from the original Hamiltonian \eqref{H iniziale KdV}, see \eqref{widetilde cal H}. This requires the condition ($ {\mathtt S}2$).
\noindent {\it Acknowledgements}. We thank M. Procesi, P. Bolle and T. Kappeler for many useful discussions. This research was supported by the European Research Council under FP7, and partially by the grants STAR 2013 and PRIN 2012 ``Variational and perturbative aspects of nonlinear differential problems".
\section{Preliminaries}
\subsection{Hamiltonian formalism of KdV}\label{sec:Ham For}
The Hamiltonian vector field $X_H$ generated by a Hamiltonian $ H : H^1_0(\mathbb T_x) \to \mathbb R$ is $ X_H (u) := \partial_x \nabla H (u) $, because $$ d H (u) [ h] = ( \nabla H(u), h )_{L^2(\mathbb T_x)} = \Omega ( X_H (u), h ) \, , \quad \forall u, h \in H^1_0(\mathbb T_x) \, , $$ where $\Omega$ is the non-degenerate symplectic form \begin{equation}\label{2form KdV} \Omega (u, v) := \int_{\mathbb T} (\partial_x^{-1 } u) \, v \, dx \, , \quad \forall u, v \in H^1_0 (\mathbb T_x) \, , \end{equation} and $ \partial_x^{-1} u $ is the periodic primitive of $ u $ with zero average. Note that \begin{equation} \label{def pi 0} \partial_x \partial_x^{-1} = \partial_x^{-1} \partial_x = \pi_0 \, , \quad \pi_0 (u) := u - \frac{1}{2\pi} \int_{\mathbb T} u(x) \, dx \,. \end{equation} A map is symplectic if it preserves the 2-form $ \Omega $.
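As a simple illustration of these definitions (a worked example added for the reader's convenience), consider the quadratic Hamiltonian $ H_2(u) := \frac12 \int_{\mathbb T} u_x^2 \, dx $ of \eqref{H iniziale KdV}. Integrating by parts,
$$ d H_2 (u)[h] = \int_{\mathbb T} u_x h_x \, dx = - \int_{\mathbb T} u_{xx} \, h \, dx = ( - u_{xx}, h )_{L^2(\mathbb T_x)} \, , $$
so that $ \nabla H_2 (u) = - u_{xx} $ and $ X_{H_2}(u) = \partial_x \nabla H_2 (u) = - u_{xxx} $. The corresponding flow $ u_t = - u_{xxx} $ is the Airy equation, namely the linear part of KdV; on each exponential $ e^{{\mathrm i} jx} $ it acts as multiplication by $ {\mathrm i} j^3 $, in agreement with the linear frequencies $ \bar\jmath_i^{\,3} $ in \eqref{bar omega}.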
We also recall that the Poisson bracket between two functions $ F $, $ G : H^1_0(\mathbb T_x) \to \mathbb R$ is \begin{equation}\label{Poisson bracket} \{ F (u), G(u) \} := \Omega (X_F, X_G ) = \int_{\mathbb T} \nabla F(u) \partial_x \nabla G (u) dx \, . \end{equation} The linearized KdV equation at $ u $ is $$ h_t = \partial_x \, (\partial_u \nabla H)(u) [h ] = X_{K}(h) \, , $$ where $ X_K $ is the Hamiltonian vector field with quadratic Hamiltonian $ K = \frac12 ((\partial_u \nabla H)(u)[h], h)_{L^2(\mathbb T_x)} $ $= \frac12 (\partial_{uu} H)(u)[h, h] $. By the Schwarz theorem, the Hessian operator $ A := (\partial_u \nabla H)(u) $ is symmetric, namely $ A^T = A $, with respect to the $ L^2$-scalar product.
\noindent {\bf Dynamical systems formulation.} It is convenient to regard the KdV equation also in the Fourier representation \begin{equation}\label{Fourier} u(x) = {\mathop \sum}_{j \in \mathbb Z \setminus \{0\} } u_j e^{{\mathrm i} j x} \, , \qquad u(x) \longleftrightarrow u := (u_j)_{j \in \mathbb Z \setminus \{0\} } \, , \quad u_{-j} = \overline{u}_j \, , \end{equation} where the Fourier indices range over $ j \in \mathbb Z \setminus \{ 0 \}$ by the definition \eqref{def phase space} of the phase space, and $u_{-j} = \overline{u}_j$ because $u(x)$ is real-valued. In these coordinates the symplectic structure reads \begin{equation}\label{2form0} \Omega = \frac12 \sum_{j \neq 0} \frac{1}{{\mathrm i} j} du_j \wedge d u_{-j} = \sum_{j \geq 1} \frac{1}{{\mathrm i} j} du_j \wedge d u_{-j} \, , \qquad \Omega ( u, v ) = \sum_{j \neq 0} \frac{1}{{\mathrm i} j} u_j v_{-j} = \sum_{j \neq 0} \frac{1}{{\mathrm i} j} u_j { \overline v}_{j} \, , \end{equation} and the Hamiltonian vector field $X_H$ and the Poisson bracket $\{ F, G \}$ are \begin{equation}\label{PoissonBr} [X_H (u)]_j = {\mathrm i} j (\partial_{u_{-j}} H) (u) \, , \ \forall j \neq 0 \, , \quad \{ F (u), G(u) \} = - {\mathop \sum}_{j \neq 0} {\mathrm i} j (\partial_{u_{-j}} F) (u) (\partial_{u_j} G) (u) \, . \end{equation} \noindent {\bf Conservation of momentum.} A Hamiltonian \begin{equation} \label{cons mom} H(u) = \sum_{j_1, \ldots, j_n \in \mathbb Z \setminus \{ 0 \} } H_{j_1, \ldots, j_n} u_{j_1} \ldots u_{j_n}, \quad u(x) = \sum_{j \in \mathbb Z \setminus \{ 0 \} } u_j e^{{\mathrm i} jx}, \end{equation} homogeneous of degree $n$, \emph{preserves the momentum} if the coefficients $H_{j_1, \ldots, j_n}$ are zero for $j_1 + \ldots + j_n \neq 0$, so that the sum in \eqref{cons mom} is restricted to integers
such that $j_1 + \ldots + j_n = 0$. Equivalently, $H$ preserves the momentum if $\{ H, M \} = 0$, where $M$ is the momentum $ M(u) := \int_{\mathbb T} u^2 dx = $ $ \sum_{j \in \mathbb Z \setminus \{0\}} u_j u_{-j} $. The homogeneous components of degree $ \leq 5 $ of the KdV Hamiltonian $ H $ in \eqref{Ham in intro} preserve the momentum because, by \eqref{order5}, the homogeneous component $f_5$ of degree 5 does not depend on the space variable $x$.
\noindent {\bf Tangential and normal variables.} Let $\bar\jmath_1, \ldots, \bar\jmath_\nu \geq 1$ be $\nu$ distinct integers, and $S^+ := \{ \bar\jmath_1, \ldots, \bar\jmath_\nu \}$. Let $S$ be the symmetric set in \eqref{tang sites}, and $S^c := \{ j \in \mathbb Z \setminus \{ 0 \} : j \notin S \}$ its complementary set in $\mathbb Z \setminus \{ 0 \}$. We decompose the phase space as \begin{equation}\label{splitting S-S-bot} H^1_0 (\mathbb T_x) := H_S \oplus H_S^\bot \,, \quad H_S := \mathrm{span}\{ e^{{\mathrm i} jx} : \, j \in S \} , \quad H_S^\bot := \big\{ u = \sum_{j \in S^c} u_j e^{{\mathrm i} jx} \in H^1_0(\mathbb T_x) \big\} , \end{equation} and we denote by $\Pi_S $, $\Pi_S^\bot $ the corresponding orthogonal projectors. Accordingly we decompose \begin{equation} \label{u = v + z} u = v + z, \qquad v = \Pi_S u := {\mathop \sum}_{j \in S} u_j \, e^{{\mathrm i} jx}, \quad z = \Pi_S^\bot u := {\mathop \sum}_{j \in S^c} u_j \, e^{{\mathrm i} jx} \, , \end{equation} where $ v $ is called the {\it tangential} variable and $ z $ the {\it normal} one. We shall sometimes identify $ v \equiv (v_j)_{j \in S } $ and $ z \equiv (z_j)_{j \in S^c } $. The subspaces $ H_S $ and $ H_S^\bot $ are {\it symplectic}. The dynamics of these two components is quite different. On $ H_S $ we shall introduce the action-angle variables, see \eqref{coordinate azione angolo}. The linear frequencies of oscillations on the tangential sites are \begin{equation}\label{bar omega} \bar\omega := (\bar\jmath_1^3, \ldots, \bar\jmath_\nu^3) \in \mathbb N^\nu. \end{equation}
\subsection{Functional setting}
{\bf Norms.} Along the paper we shall use the notation \begin{equation}\label{Sobolev coppia}
\| u \|_s := \| u \|_{H^s( \mathbb T^{\nu + 1})} := \| u \|_{H^s_{\varphi,x} } \end{equation} to denote the Sobolev norm of functions $ u = u(\varphi,x) $ in the Sobolev space $ H^{s} (\mathbb T^{\nu + 1} ) $.
We shall denote by $ \| \ \|_{H^s_x} $ the Sobolev norm in the phase space of functions $ u := u(x) \in H^{s} (\mathbb T ) $. Moreover $ \| \ \|_{H^s_\varphi} $ will denote the Sobolev norm of scalar functions, like the Fourier components $ u_j (\varphi) $.
We fix $ s_0 := (\nu+2) \slash 2 $ so that $ H^{s_0} (\mathbb T^{\nu + 1} ) \hookrightarrow L^{\infty} (\mathbb T^{\nu + 1} ) $ and each space $ H^s (\mathbb T^{\nu + 1} ) $, $ s > s_0 $, is an algebra. At the end of this section we collect interpolation properties of the Sobolev norm that will be used repeatedly throughout the paper. We shall also denote \begin{align} \label{HSbot nu} H^s_{S^\bot} (\mathbb T^{\nu+1}) & := \big\{ u \in H^s(\mathbb T^{\nu + 1} ) \, : \, u (\varphi, \cdot ) \in H_S^\bot \ \forall \varphi \in \mathbb T^\nu \big\} \,, \\ \label{HS nu} H^s_{S} (\mathbb T^{\nu+1}) & := \big\{ u \in H^s(\mathbb T^{\nu + 1} ) \, : \, u (\varphi, \cdot ) \in H_{S} \ \forall \varphi \in \mathbb T^\nu \big\} \,. \end{align}
For a function $u : \Omega_o \to E$, $\omega \mapsto u(\omega)$, where $(E, \| \ \|_E)$ is a Banach space and $ \Omega_o $ is a subset of $\mathbb R^\nu $, we define the sup-norm and the Lipschitz semi-norm \begin{equation} \label{def norma sup lip}
\| u \|^{\sup}_E
:= \| u \|^{\sup}_{E,\Omega_o}
:= \sup_{ \omega \in \Omega_o } \| u(\omega ) \|_E \, , \quad
\| u \|^{\mathrm{lip}}_E
:= \| u \|^{\mathrm{lip}}_{E,\Omega_o} := \sup_{\omega_1 \neq \omega_2 }
\frac{ \| u(\omega_1) - u(\omega_2) \|_E }{ | \omega_1 - \omega_2 | }\,, \end{equation} and, for $ \gamma > 0 $, the Lipschitz norm \begin{equation} \label{def norma Lipg}
\| u \|^{{\mathrm{Lip}(\g)}}_E
:= \| u \|^{{\mathrm{Lip}(\g)}}_{E,\Omega_o}
:= \| u \|^{\sup}_E + \gamma \| u \|^{\mathrm{lip}}_E \, . \end{equation}
If $ E = H^s $ we simply denote $ \| u \|^{{\mathrm{Lip}(\g)}}_{H^s} := \| u \|^{{\mathrm{Lip}(\g)}}_s $. We shall use the notation $$ a \leq_s b \quad \ \Longleftrightarrow \quad a \leq C(s) b \quad \text{for some constant } C(s) > 0 \, . $$ \noindent {\bf Matrices with off-diagonal decay.} A linear operator can be identified, as usual, with its matrix representation. We recall the definition of the $ s $-decay norm (introduced in \cite{BB13JEMS}) of an infinite dimensional matrix. This norm is used in \cite{BBM} for the KAM reducibility scheme of the linearized operators. \begin{definition}\label{def:norms} The $s$-decay norm of an infinite dimensional matrix $ A := (A_{i_1}^{i_2} )_{i_1, i_2 \in \mathbb Z^b } $, $b \geq 1$, is \begin{equation} \label{matrix decay norm}
\left| A \right|_{s}^2 := \sum_{i \in \mathbb Z^b} \left\langle i \right\rangle^{2s} \Big( \sup_{ \begin{subarray}{c} i_{1} - i_{2} = i \end{subarray}}
| A^{i_2}_{i_1}| \Big)^{2} \, . \end{equation} For parameter dependent matrices $ A := A(\omega) $, $\omega \in \Omega_o \subseteq \mathbb R^\nu $, the definitions \eqref{def norma sup lip} and \eqref{def norma Lipg} become \begin{equation} \label{matrix decay norm Lip}
| A |^{\sup}_s := \sup_{ \omega \in \Omega_o } | A(\omega ) |_s \, , \quad
| A |^{\mathrm{lip}}_s := \sup_{\omega_1 \neq \omega_2}
\frac{ | A(\omega_1) - A(\omega_2) |_s }{ | \omega_1 - \omega_2 | }\,, \quad
| A |^{{\mathrm{Lip}(\g)}}_s := | A |^{\sup}_s + \gamma | A |^{\mathrm{lip}}_s \,. \end{equation} \end{definition}
Such a norm is modeled on the behavior of matrices representing the multiplication operator by a function. Indeed, given a function $ p \in H^s(\mathbb T^b) $, the multiplication operator $ h \mapsto p h $ is represented by the T\"oplitz matrix
$ T_i^{i'} = p_{i - i'} $ and $ |T|_s = \| p \|_s $. If $p = p(\omega )$ is a Lipschitz family of functions, then \begin{equation}\label{multiplication Lip}
|T|_s^{\mathrm{Lip}(\g)} = \| p \|_s^{\mathrm{Lip}(\g)}\,. \end{equation} The $s$-norm satisfies classical algebra and interpolation inequalities, see \cite{BBM}.
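The identity $ |T|_s = \| p \|_s $ for the multiplication operator can be verified directly from \eqref{matrix decay norm}: since $ T_{i_1}^{i_2} = p_{i_1 - i_2} $ depends only on the difference of the indices,
$$ \sup_{i_1 - i_2 = i} | T_{i_1}^{i_2} | = | p_i | \, , \qquad |T|_s^2 = \sum_{i \in \mathbb Z^b} \langle i \rangle^{2s} | p_i |^2 = \| p \|_s^2 \, , $$
and the same computation applied to $ p(\omega_1) - p(\omega_2) $ gives \eqref{multiplication Lip}.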
\begin{lemma} \label{prodest} Let $A = A(\omega)$ and $B = B(\omega)$ be matrices depending in a Lipschitz way on the parameter $\omega \in \Omega_o \subset \mathbb R^\nu $. Then for all $s \geq s_0 > b/2 $ there are $ C(s) \geq C(s_0) \geq 1 $ such that \begin{align} \label{algebra Lip}
|A B |_s^{{\mathrm{Lip}(\g)}}
& \leq C(s) |A|_s^{{\mathrm{Lip}(\g)}} |B|_s^{{\mathrm{Lip}(\g)}} \, , \\ \label{interpm Lip}
|A B|_{s}^{{\mathrm{Lip}(\g)}}
& \leq C(s) |A|_{s}^{{\mathrm{Lip}(\g)}} |B|_{s_0}^{{\mathrm{Lip}(\g)}}
+ C(s_0) |A|_{s_0}^{{\mathrm{Lip}(\g)}} |B|_{s}^{{\mathrm{Lip}(\g)}} . \end{align} \end{lemma} The $ s $-decay norm controls the Sobolev norm, namely \begin{equation}\label{interpolazione norme miste}
\| A h \|_s^{\mathrm{Lip}(\g)}
\leq C(s) \big(|A|_{s_0}^{\mathrm{Lip}(\g)} \| h \|_s^{\mathrm{Lip}(\g)} + |A|_{s}^{\mathrm{Lip}(\g)} \| h \|_{s_0}^{\mathrm{Lip}(\g)} \big). \end{equation} Let now $ b := \nu + 1 $. An important sub-algebra is formed by the {\it T\"oplitz in time matrices} defined by \begin{equation}\label{Topliz matrix}
A^{(l_2, j_2)}_{(l_1, j_1)} := A^{j_2}_{j_1}(l_1 - l_2 )\, , \end{equation} whose decay norm \eqref{matrix decay norm} is \begin{equation}\label{decayTop}
|A|_s^2 = \sum_{j \in \mathbb Z, l \in \mathbb Z^\nu} \big( \sup_{j_1 - j_2 = j} |A_{j_1}^{j_2}(l)| \big)^2 \langle l,j \rangle^{2 s} \, . \end{equation} These matrices are identified with the $ \varphi $-dependent family of operators \begin{equation}\label{Aphi} A(\varphi) := \big( A_{j_1}^{j_2} (\varphi)\big)_{j_1, j_2 \in \mathbb Z} \, , \quad A_{j_1}^{j_2} (\varphi) := {\mathop\sum}_{l \in \mathbb Z^\nu} A_{j_1}^{j_2}(l) e^{{\mathrm i} l \cdot \varphi} \end{equation} which act on functions of the $x$-variable as \begin{equation}\label{notationA} A(\varphi) : h(x) = \sum_{j \in \mathbb Z} h_j e^{{\mathrm i} jx} \mapsto A(\varphi) h(x) = \sum_{j_1, j_2 \in \mathbb Z} A_{j_1}^{j_2} (\varphi) h_{j_2} e^{{\mathrm i} j_1 x} \, .
\end{equation} We still denote by $ | A(\varphi) |_s $ the $ s $-decay norm of the matrix in \eqref{Aphi}. As in \cite{BBM}, all the transformations that we shall construct in this paper are of this type (with $ j, j_1, j_2 \neq 0 $ because they act on the phase space $ H^1_0 (\mathbb T_x) $). This observation allows one to interpret the conjugacy procedure from a dynamical point of view, see \cite{BBM}-section 2.2. Let us fix some terminology.
\begin{definition}\label{operatore Hamiltoniano} We say that:
the operator $(A h)(\varphi, x) := A(\varphi) h(\varphi, x)$ is {\sc symplectic} if each $ A (\varphi ) $, $ \varphi \in \mathbb T^\nu $, is a symplectic map of the phase space (or of a symplectic subspace like $ H_S^\bot $);
the operator $ \omega \cdot \partial_{\varphi} - \partial_x G( \varphi ) $ is {\sc Hamiltonian} if each $ G (\varphi) $, $ \varphi \in \mathbb T^\nu $, is symmetric;
an operator is {\sc real} if it maps real-valued functions into real-valued functions. \end{definition}
As is well known, a Hamiltonian operator $ \omega \cdot \partial_{\varphi} - \partial_x G( \varphi ) $ is transformed, under a symplectic map $ {\cal A} $, into another Hamiltonian operator $ \omega \cdot \partial_{\varphi} - \partial_x E( \varphi ) $, see e.g. \cite{BBM}-section 2.3.
We conclude this preliminary section recalling the following well known lemmata, see Appendix of \cite{BBM}.
\begin{lemma} {\bf (Composition)} \label{lemma:composition of functions, Moser} Assume $ f \in C^s (\mathbb T^d \times B_1)$, $B_1 := \{ y \in \mathbb R^m :
|y| \leq 1 \}$. Then
$ \forall u \in H^{s}(\mathbb T^d, \mathbb R^m) $ such that $ \| u \|_{L^\infty} < 1 $, the composition operator $\tilde{f}(u)(x) := f(x, u(x))$ satisfies
$ \| \tilde f(u) \|_s \leq C \| f \|_{C^s} (\|u\|_{s} + 1) $ where the constant $C $ depends on $ s ,d $. If $ f \in C^{s+2} $
and $ \| u + h \|_{L^\infty} < 1$, then \begin{align*}
\big\| \tilde f(u+h) - {\mathop\sum}_{i = 0}^k \frac{\tilde{f}^{(i)}(u)}{i !} [h^i] \big\|_s
& \leq C \| f \|_{C^{s+ 2}} \, \| h \|_{L^\infty}^k ( \| h \|_{s} + \| h \|_{L^\infty} \| u \|_{s}) \, , \quad k = 0, 1 \, . \end{align*} The previous statement also holds replacing
$\| \ \|_s$
with the norms $| \ |_{s, \infty} $. \end{lemma}
\begin{lemma} \label{lemma:standard Sobolev norms properties} {\bf (Tame product).} For $s \geq s_0 > d/2 $, $$
\| uv \|_s \leq C(s_0) \|u\|_s \|v\|_{s_0} + C(s) \|u\|_{s_0} \| v \|_s \, , \quad \forall u,v \in H^s(\mathbb T^d) \, . $$ For $s \geq 0$, $s \in \mathbb N$, $$
\| uv \|_s \leq \tfrac32 \, \| u \|_ {L^\infty} \| v \|_s + C(s) \| u \|_ {W^{s, \infty}} \| v \|_0 \, , \quad \forall u \in W^{s,\infty}(\mathbb T^d) \, , \ v \in H^s(\mathbb T^d) \, . $$ The above inequalities also hold for the norms $\Vert \ \Vert_s^{{\rm Lip}(\gamma)}$. \end{lemma}
\begin{lemma} {\bf (Change of variable)} \label{lemma:utile} Let $p \in W^{s,\infty} (\mathbb T^d,\mathbb R^d) $, $ s \geq 1$, with
$ \| p \|_{W^{1, \infty}} \leq 1/2 $. Then the function $f(x) = x + p(x)$ is invertible, with inverse $ f^{-1}(y) = y + q(y)$ where
$q \in W^{s,\infty}(\mathbb T^d,\mathbb R^d)$, and
$ \| q \|_{W^{s, \infty}} \leq C \| p \|_{ W^{s, \infty}} $.
If, moreover, $p = p_\omega $ depends in a Lipschitz way on a parameter $\omega \in \Omega \subset \mathbb R^\nu $, and
$ \| D_x p_\omega \|_ {L^\infty} \leq 1/2 $, $ \forall \omega $, then
$ \| q \|_{W^{s, \infty}}^{{\rm Lip}(\gamma)} \leq C \| p \|_{W^{s+1, \infty}}^{{\rm Lip}(\gamma)} $. The constant $C := C (d, s) $ is independent of $\gamma$.
If $u \in H^s (\mathbb T^d,\mathbb C)$, then $ (u\circ f)(x) := u(x+p(x))$ satisfies \begin{align*}
\| u \circ f \|_s
& \leq C (\|u\|_s + \| p \|_{W^{s, \infty}} \|u\|_1), \quad
\| u \circ f - u \|_s
\leq C ( \| p \|_{L^\infty} \| u \|_{s + 1} + \| p \|_{W^{s, \infty}} \| u \|_{2} ) , \\
\| u \circ f \|_{s}^{{{\rm Lip}(\gamma)}} & \leq C \,
\big( \| u \|_{s+1}^{{{\rm Lip}(\gamma)}} + \| p \|_{W^{s, \infty}}^{{\rm Lip}(\gamma)}\| u \|_2^{{\rm Lip}(\gamma)} \big). \nonumber \end{align*} The function $u \circ f^{-1} $ satisfies the same bounds. \end{lemma}
\section{Weak Birkhoff normal form}\label{sec:WBNF}
The Hamiltonian of the perturbed KdV equation \eqref{kdv quadratica} is $ H = H_2 + H_3 + H_{\geq 5} $ (see \eqref{Ham in intro}) where \begin{equation} \label{H iniziale KdV} H_2 (u):=\frac{1}{2} \int_{\mathbb T} u_x^{2} \, dx \, , \quad H_3(u) := \int_\mathbb T u^3 dx \, , \quad H_{\geq 5}(u) := \int_\mathbb T f(x, u,u_x) dx \,, \end{equation} and $f$ satisfies \eqref{order5}. According to the splitting \eqref{u = v + z} $ u = v + z $, $ v \in H_S $, $ z \in H_S^\bot $, we have \begin{equation}\label{prima v z} H_2(u) = \int_\mathbb T \frac{v_x^2}{2}\, dx + \int_\mathbb T \frac{z_x^2}{2} \, dx, \quad H_3 (u) = \int_{\mathbb T} v^3 dx + 3 \int_{\mathbb T} v^2 z dx + 3\int_{\mathbb T} v z^2 dx + \int_{\mathbb T} z^3 dx \, . \end{equation} For a finite-dimensional space \begin{equation} \label{def E finito}
E := E_{C} := \mathrm{span} \{ e^{{\mathrm i} jx} : 0 < |j| \leq C \}, \quad C > 0, \end{equation} let $\Pi_E $ denote the corresponding $ L^2 $-projector on $E$.
The notation $R(v^{k-q} z^q)$ indicates a homogeneous polynomial of degree $k$ in $(v,z)$ of the form $$ R(v^{k-q} z^q) = M[\underbrace{v, \ldots, v}_{(k-q)\, \text{times}}, \underbrace{z, \ldots, z}_{q \, \text{times}} \,], \qquad M = k\text{-linear} \, . $$ \begin{proposition} \label{prop:weak BNF} {\bf (Weak Birkhoff normal form)} Assume Hypothesis $ ({\mathtt S}2) $. Then there exists an analytic invertible symplectic transformation of the phase space $ \Phi_B : H^1_0 (\mathbb T_x) \to H^1_0 (\mathbb T_x) $ of the form \begin{equation} \label{finito finito} \Phi_B(u) = u + \Psi(u), \quad \Psi(u) = \Pi_E \Psi(\Pi_E u), \end{equation} where $ E $ is a finite-dimensional space as in \eqref{def E finito}, such that the transformed Hamiltonian is \begin{equation} \label{widetilde cal H} {\cal H} := H \circ \Phi_B = H_2 + \mathcal{H}_3 + \mathcal{H}_4 + {\cal H}_5 + {\cal H}_{\geq 6} \,, \end{equation} where $H_2$ is defined in \eqref{H iniziale KdV}, \begin{equation} \label{H3tilde} \mathcal{H}_3 := \int_{\mathbb T} z^3\,dx + 3 \int_{\mathbb T} v z^2 \,dx \,, \quad
\mathcal{H}_4 := - \frac32 \sum_{j \in S} \frac{|u_j|^4}{j^2} + \mathcal{H}_{4,2} + \mathcal{H}_{4,3} \,, \quad {\cal H}_5 := \sum_{q=2}^5 R(v^{5-q} z^q)\,, \end{equation} \begin{equation} \label{mH3 mH4} \mathcal{H}_{4,2} := 6 \int_\mathbb T v z \Pi_S \big((\partial_x^{-1} v)(\partial_x^{-1} z) \big)\,dx + 3 \int_\mathbb T z^2 \pi_0 (\partial_x^{-1} v)^2 \,dx\,, \quad \mathcal{H}_{4, 3} := R(v z^3) \,, \end{equation} and ${\cal H}_{\geq 6}$ collects all the terms of order at least six in $(v,z)$. \end{proposition}
The rest of this section is devoted to the proof of Proposition \ref{prop:weak BNF}.
First, we remove the cubic terms $ \int_{\mathbb T} v^3 + 3 \int_{\mathbb T} v^2 z $ from the Hamiltonian $ H_3 $ defined in \eqref{prima v z}. In the Fourier coordinates \eqref{Fourier}, we have \begin{equation}\label{H3 Fourier}
H_2 = \frac12 \sum_{j \neq 0} j^2 |u_j|^2, \quad H_3 = \sum_{j_1 + j_2 + j_3 = 0} u_{j_1} u_{j_2} u_{j_3} \, . \end{equation} We look for a symplectic transformation $ \Phi^{(3)} $ of the phase space
which eliminates the monomials $ u_{j_1} u_{j_2} u_{j_3} $ of $ H_3 $ with at most {\it one} index outside $ S $. Note that, by the relation $ j_1 + j_2 + j_3 = 0 $, they are {\it finitely} many. We look for $\Phi^{(3)} := (\Phi^t_{F^{(3)}})_{|t=1}$ as the time-1 flow map generated by the Hamiltonian vector field $X_{F^{(3)}}$, with an auxiliary Hamiltonian of the form $$ F^{(3)}(u) := \sum_{j_1 + j_2 + j_3 = 0} F^{(3)}_{j_1 j_2 j_3} u_{j_1} u_{j_2} u_{j_3} \,. $$ The transformed Hamiltonian is \begin{align} H^{(3)} & := H \circ \Phi^{(3)} = H_2 + H_3^{(3)} + H_4^{(3)} + H_{\geq 5}^{(3)} \,, \notag \\ \label{H tilde 234} H_3^{(3)} & = H_3 + \{ H_2, F^{(3)} \}, \quad
H_4^{(3)} = \frac12 \{ \{ H_2, F^{(3)}\}, F^{(3)}\} + \{H_3, F^{(3)} \} , \end{align} where $ H_{\geq 5}^{(3)} $ collects all the terms of order at least five in $(u,u_x)$. By \eqref{H3 Fourier} and \eqref{PoissonBr} we calculate $$ H_3^{(3)} = \sum_{j_1 + j_2 + j_3 = 0} \big\{ 1 - {\mathrm i} (j_1^3 + j_2^3 + j_3^3) F^{(3)}_{j_1 j_2 j_3} \big\} \, u_{j_1} u_{j_2} u_{j_3} \,. $$ Hence, in order to eliminate the monomials with at most one index outside $ S $, we choose \begin{equation}\label{F3q} F^{(3)}_{j_1 j_2 j_3} := \begin{cases}
\dfrac{1}{{\mathrm i} (j_1^3 + j_2^3 + j_3^3)} & \text{if} \,\,(j_1,j_2,j_3) \in {\cal A}\,, \\ 0 & \text{otherwise}, \end{cases} \end{equation} where ${\cal A} := \big\{ (j_1 , j_2 , j_3) \in (\mathbb Z \setminus \{ 0 \})^3$ : $j_1 + j_2 + j_3 = 0$, $j_1^{3} + j_2^{3} + j_3^{3} \neq 0$, and at least 2 among $j_1 , j_2 , j_3$ belong to $S \big\}$. Note that \begin{equation} \label{def calA quadratica} {\cal A} = \big\{ (j_1 , j_2 , j_3) \in (\mathbb Z \setminus \{ 0 \})^3 : j_1 + j_2 + j_3 = 0, \, \text{and at least 2 among}\,\, j_1 , j_2 , j_3 \,\, \text{belong to} \, S \big\} \end{equation} because of the elementary relation \begin{equation}\label{prodottino} j_1 + j_2 + j_3 = 0 \quad \Rightarrow \quad j_1^3 + j_2^3 + j_3^3 = 3 j_1 j_2 j_3 \neq 0 \end{equation} being $ j_1, j_2, j_3 \in \mathbb Z \setminus \{ 0 \}$. Also note that $ \mathcal{A} $ is a finite set, actually $ \mathcal{A} \subseteq [- 2 C_S, 2 C_S]^{3} $ where the tangential sites $ S \subseteq [- C_S, C_S ]$. As a consequence, the Hamiltonian vector field $ X_{F^{(3)}} $ has finite rank and vanishes outside the finite dimensional subspace $ E := E_{2 C_S}$ (see \eqref{def E finito}), namely $$
X_{ F^{(3)}}(u) = \Pi_E X_{ F^{(3)}} ( \Pi_E u ) \, . $$ Hence its flow $ \Phi^{(3)} : H^1_0 (\mathbb T_x) \to H^1_0 (\mathbb T_x) $ has the form \eqref{finito finito} and it is analytic.
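For completeness, the elementary relation \eqref{prodottino} can be checked directly: if $ j_1 + j_2 + j_3 = 0 $ then $ j_3 = - (j_1 + j_2) $ and
$$ j_1^3 + j_2^3 + j_3^3 = j_1^3 + j_2^3 - (j_1 + j_2)^3 = - 3 j_1^2 j_2 - 3 j_1 j_2^2 = - 3 j_1 j_2 (j_1 + j_2) = 3 j_1 j_2 j_3 \, , $$
which is nonzero for $ j_1, j_2, j_3 \in \mathbb Z \setminus \{ 0 \} $.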
By construction, all the monomials of $ H_3 $ with at least two indices outside $ S $ are not modified by the transformation $ \Phi^{(3)} $. Hence (see \eqref{prima v z}) we have \begin{equation}\label{lem:H3tilde} H_3^{(3)} = \int_{\mathbb T} z^3\,dx + 3 \int_{\mathbb T} v z^2 \,dx \, . \end{equation} We now compute the fourth order term $ H_4^{(3)} = \sum_{i=0}^4 H_{4,i}^{(3)} $ in \eqref{H tilde 234}, where $ H_{4,i}^{(3)} $ is of type $ R( v^{4-i} z^i )$.
\begin{lemma} One has (recall the definition \eqref{def pi 0} of $ \pi_0 $) \begin{equation}\label{Htilde41} {H}_{4,0} ^{(3)} := \frac32 \int_\mathbb T v^2 \pi_0[(\partial_x^{-1} v)^2] dx \, , \quad H_{4,2}^{(3)} := 6 \int_\mathbb T v z \Pi_S \big((\partial_x^{-1} v)(\partial_x^{-1} z) \big)\,dx + 3 \int_\mathbb T z^2 \pi_0 [(\partial_x^{-1} v)^2] dx \, . \end{equation} \end{lemma}
\begin{proof} We write $ H_3 = H_{3, \leq 1} + H_3^{(3)} $ where $ H_{3, \leq 1}(u) := \int_\mathbb T v^3 dx + 3 \int_\mathbb T v^2 z \, dx $. Then, by \eqref{H tilde 234}, we get \begin{equation} \label{grado 4 *} H_4^{(3)} = \frac12 \big\{ H_{3, \leq 1} \, , F^{(3)} \big\} + \{ H_3^{(3)} , F^{(3)} \} \,. \end{equation} By \eqref{F3q}, \eqref{prodottino}, the auxiliary Hamiltonian may be written as $$ F^{(3)} (u) = - \frac{1}{3}\sum_{(j_1, j_2, j_3) \in {\cal A}} \frac{u_{j_1} u_{j_2} u_{j_3}}{ ({\mathrm i} j_1) ( {\mathrm i} j_2) ( {\mathrm i} j_3)} = - \frac{1}{3}\int_\mathbb T (\partial_x^{-1} v)^3 dx - \int_\mathbb T (\partial_x^{-1} v)^2 (\partial_x^{-1} z) dx \, . $$ Hence, using that the projectors $\Pi_S$, $\Pi_S^\bot $ are self-adjoint and $\partial_x^{-1}$ is skew-selfadjoint, \begin{equation} \label{grad F3 formula} \nabla F^{(3)}(u) = \partial_x^{-1}\big\{ (\partial_x^{-1} v)^2 + 2 \Pi_S \big[ (\partial_x^{-1} v)(\partial_x^{-1} z)\big] \big\} \end{equation} (we have used that $\partial_x^{-1} \pi_0 = \partial_x^{-1}$ by the definition of $\partial_x^{-1}$). Recalling the Poisson bracket definition \eqref{Poisson bracket}, using that $ \nabla H_{3, \leq 1}(u) = 3 v^2 + 6 \Pi_S(v z) $ and \eqref{grad F3 formula}, we get \begin{align} \{ H_{3, \leq1}, F^{(3)} \} & = \int_\mathbb T \big\{ 3 v^2 + 6 \Pi_S(v z) \big\} \pi_0 \big\{ (\partial_x^{-1} v)^2 + 2 \Pi_S \big[ (\partial_x^{-1} v)(\partial_x^{-1} z)\big] \big\}\,dx \nonumber \\ & = 3 \int_\mathbb T v^2 \pi_0 (\partial_x^{-1} v)^2\,dx +
12 \int_\mathbb T \Pi_S(v z) \Pi_S [ (\partial_x^{-1} v)(\partial_x^{-1} z) ]\,dx + R(v^3 z) \, . \label{H31F} \end{align} Similarly, since $ \nabla H_3^{(3)}(u) = 3 z^2 + 6 \Pi_S^\bot (v z) $, \begin{equation} \{ H_3^{(3)}, F^{(3)} \} = 3 \int_\mathbb T z^2 \pi_0 (\partial_x^{-1} v)^2\,dx + R(v^3 z) + R(v z^3) \,. \label{H3tildeF} \end{equation} The lemma follows by \eqref{grado 4 *}, \eqref{H31F}, \eqref{H3tildeF}. \end{proof}
We now construct a symplectic map $ \Phi^{(4)} $ such that the Hamiltonian system obtained transforming $ H_2 + H_3^{(3)} + H_4^{(3)} $ possesses the invariant subspace $ H_S $ (see \eqref{splitting S-S-bot}) and its dynamics on $ H_S $ is integrable and non-isochronous. Hence we have to eliminate the term $ H_{4,1}^{(3)} $ (which is linear in $ z $), and to normalize $ H_{4,0}^{(3)} $ (which is independent of $ z $). We need the following elementary lemma (Lemma 13.4 in \cite{KaP}).
\begin{lemma} \label{lemma:interi} Let $j_1, j_2, j_3, j_4 \in \mathbb Z $ such that $ j_1 + j_2 + j_3 + j_4 = 0 $. Then $$ j_1^3 + j_2^3 + j_3^3 + j_4^3 = -3 (j_1 + j_2) (j_1 + j_3) (j_2 + j_3). $$ \end{lemma}
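Lemma \ref{lemma:interi} follows, for the reader's convenience, from the algebraic identity $ (a+b+c)^3 = a^3 + b^3 + c^3 + 3 (a+b)(b+c)(c+a) $: substituting $ j_4 = - (j_1 + j_2 + j_3) $,
$$ j_1^3 + j_2^3 + j_3^3 + j_4^3 = j_1^3 + j_2^3 + j_3^3 - (j_1 + j_2 + j_3)^3 = - 3 (j_1 + j_2)(j_1 + j_3)(j_2 + j_3) \, . $$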
\begin{lemma} There exists a symplectic transformation $ \Phi^{(4)} $ of the form \eqref{finito finito} such that \begin{equation}\label{Ham4quadratica} H^{(4)} := H^{(3)} \circ \Phi^{(4)} = H_2 + H_3^{(3)} + H_4^{(4)} + H^{(4)}_{\geq 5} \,, \qquad
H^{(4)}_4 := - \frac32 \sum_{j \in S} \frac{|u_j|^4}{j^2} + H_{4,2}^{(3)} + H_{4,3}^{(3)} \,, \end{equation} where $ H_3^{(3)} $ is defined in \eqref{lem:H3tilde}, $ H_{4,2}^{(3)} $ in \eqref{Htilde41}, $ H_{4,3}^{(3)} = R( v z^3) $ and $ H_{\geq 5}^{(4)} $ collects all the terms of degree at least five in $(u,u_x)$. \end{lemma}
\begin{proof}
We look for a map $\Phi^{(4)} := (\Phi_{F^{(4)}}^t)_{|t=1}$ which is the time $ 1$-flow map of an auxiliary Hamiltonian $$ F^{(4)}(u) := \sum_{\begin{subarray}{c} j_1 + j_2 + j_3 + j_4 = 0 \\ \text{at least}\,\,3\,\,\text{indices are in}\,\,S \end{subarray}} F^{(4)}_{j_1 j_2 j_3 j_4} u_{j_1} u_{j_2} u_{j_3} u_{j_4} $$ with the same form as the Hamiltonian $ H_{4,0}^{(3)} + H_{4,1}^{(3)} $. The transformed Hamiltonian is \begin{equation}\label{callH} H^{(4)} := H^{(3)} \circ \Phi^{(4)} = H_2 + H_3^{(3)} + H_4^{(4)} + H_{\geq 5}^{(4)}, \quad H_4^{(4)} = \{ H_2, F^{(4)} \} + H_4^{(3)} , \end{equation} where $ H_{\geq 5}^{(4)} $ collects all the terms of order at least five. We write $ H_4^{(4)} = \sum_{i=0}^4 H_{4,i}^{(4)} $ where each $ H_{4,i}^{(4)} $ is of type $ R( v^{4-i} z^i ) $. We choose the coefficients \begin{equation}\label{def F4} F^{(4)}_{j_1 j_2 j_3 j_4} := \begin{cases}
\dfrac{ H^{(3)}_{j_1j_2j_3j_4} }{{\mathrm i} (j_1^3 + j_2^3 + j_3^3+ j_4^3)} & \text{if} \,\,(j_1,j_2,j_3, j_4) \in {\cal A}_4\,, \\ 0 & \text{otherwise}, \end{cases} \end{equation} where \begin{align*} {\cal A}_4 := \big\{ (j_1 , j_2 , j_3, j_4) \in (\mathbb Z \setminus \{ 0 \})^4 & : j_1 + j_2 + j_3 + j_4 = 0, \, j_1^{3} + j_2^{3} + j_3^{3} + j_4^{3} \neq 0, \\ & \quad \text{and at most one among}\,\, j_1 , j_2 , j_3, j_4 \,\, \text{outside} \, S \big\} \, . \end{align*} By this definition $ H_{4,1}^{(4)}= 0 $ because there exist no integers $ j_1, j_2 , j_3 \in S$, $ j_4 \in S^c $ satisfying $ j_1 + j_2 + j_3 + j_4 = 0 $, $ j_1^3 + j_2^3 + j_3^3 + j_4^3 = 0 $, by Lemma \ref{lemma:interi} and the fact that $ S $ is symmetric. By construction, the terms $ H_{4,i}^{(4)}= H_{4,i}^{(3)} $, $ i = 2, 3, 4$, are not changed by $ \Phi^{(4)} $. Finally, by \eqref{Htilde41} \begin{equation} \label{H2F1} H_{4,0}^{(4)}= \frac32 \sum_{\begin{subarray}{c} j_1, j_2, j_3, j_4 \in S \\ j_1 + j_2 + j_3 + j_4 = 0 \\ j_1^3 + j_2^3 + j_3^3 + j_4^3 = 0 \\ j_1 + j_2 \,,\,j_3 + j_4 \neq 0 \end{subarray}} \frac{1}{({\mathrm i} j_3) ({\mathrm i} j_4)}u_{j_1} u_{j_2} u_{j_3} u_{j_4} \, . \end{equation} If $ j_1 + j_2 + j_3 + j_4 = 0 $ and $ j_1^3 + j_2^3 + j_3^3 + j_4^3 = 0$, then $(j_1 + j_2)(j_1 + j_3)(j_2 + j_3) = 0$ by Lemma \ref{lemma:interi}. We develop the sum in \eqref{H2F1} with respect to the first index $j_1$. Since $ j_1 + j_2 \neq 0 $ the possible cases are: \[ (i) \ \big\{ j_2 \neq - j_1, \ j_3 = - j_1, \ j_4 = - j_2 \big\} \qquad \text{or} \qquad (ii) \ \big\{ j_2 \neq - j_1, \ j_3 \neq - j_1, \ j_3 = - j_2, \ j_4 = - j_1 \big\} . \] Hence, using $ u_{-j} = \bar{u}_j $ (recall \eqref{Fourier}), and since $S$ is symmetric, we have \begin{equation} \label{case ii} \sum_{(i)} \frac{1}{j_3 j_4} u_{j_1} u_{j_2} u_{j_3} u_{j_4}
= \sum_{j_1, j_2 \in S, j_2 \neq - j_1} \frac{|u_{j_1}|^2 |u_{j_2}|^2 }{j_1 j_2}
= \sum_{j,j' \in S} \frac{|u_j|^2 |u_{j'}|^2}{j j'} + \sum_{j \in S} \frac{|u_j|^4}{j^2} = \sum_{j \in S} \frac{|u_j|^4}{j^2}\,, \end{equation} and in the second case ($ii$)
\begin{equation} \label{case iii} \sum_{(ii)} \frac{1}{j_3 j_4} u_{j_1} u_{j_2} u_{j_3} u_{j_4}
= \sum_{j_1, j_2 \in S, \, j_2 \neq \pm j_1} \frac{1}{j_1 j_2} u_{j_1} u_{j_2} u_{-j_2} u_{-j_1} =
\sum_{j \in S} \frac{1}{j} |u_{j}|^2 \Big( \sum_{j_2 \neq \pm j} \frac{1}{j_2} |u_{j_2}|^2 \Big) = 0\,. \end{equation} Then \eqref{Ham4quadratica} follows by \eqref{H2F1}, \eqref{case ii}, \eqref{case iii}. \end{proof}
Note that the Hamiltonian $ H_2 + H_3^{(3)} + H_4^{(4)} $ (see \eqref{Ham4quadratica}) possesses the invariant subspace $ \{ z = 0 \} $ and the system restricted to $ \{ z = 0 \} $ is completely integrable and non-isochronous (actually it is formed by $ \nu $ decoupled rotators). We shall construct quasi-periodic solutions which bifurcate from this invariant manifold.
In order to enter in a perturbative regime, we have to eliminate further monomials of $ H^{(4)} $ in \eqref{Ham4quadratica}. The minimal requirement for the convergence of the nonlinear Nash-Moser iteration is to eliminate the monomials $ R(v^5) $ and $ R(v^4 z)$. Here we need the choice of the sites of Hypothesis $ ({\mathtt S}2) $.
\begin{remark}\label{remark:cubic} In the KAM theorems \cite{k1}, \cite{Po3} (and \cite{PP}, \cite{Wang}), as well as for the perturbed mKdV equations \eqref{mKdV}, these further steps of Birkhoff normal form are not required because the nonlinearity of the original PDE is already cubic. A difficulty of KdV is that the nonlinearity is quadratic. \end{remark}
We spell out Hypothesis $( {\mathtt S}2)$ as follows: \begin{itemize} \item {\sc $( {\mathtt S}2_0)$.} There is no choice of $ 5 $ integers $ j_1, \ldots, j_5 \in S $ such that \begin{equation}\label{scelta dei siti grado 7} j_1 + \ldots + j_5 = 0\,,\quad j_1^3 + \ldots + j_5^3 = 0 \,. \end{equation} \item {\sc $({\mathtt S}2_1)$.} There is no choice of $4 $ integers $j_1, \ldots, j_{4} $ in $ S $ and an integer in the complementary set $ j_5 \in S^c := (\mathbb Z \setminus \{0\}) \setminus S $ such that \eqref{scelta dei siti grado 7} holds. \end{itemize}
The homogeneous component of degree $ 5 $ of $H^{(4)}$ is $$ H^{(4)}_5 (u) = \sum_{j_1+ \ldots + j_5 = 0} H^{(4)}_{j_1, \ldots, j_5} u_{j_1} \ldots u_{j_5} \,. $$ We want to remove from $H^{(4)}_5$ the terms with at most one index among $ j_1, \ldots , j_5 $ outside $ S $. We consider the auxiliary Hamiltonian \begin{equation} \label{Fn odd} F^{(5)} = \sum_{\begin{subarray}{c} j_1 + \ldots + j_5 = 0 \\ \text{at most one index outside $ S $} \end{subarray} } F_{j_1 \ldots j_5}^{(5)} u_{j_1} \ldots u_{j_5} \, , \quad
F_{j_1 \ldots j_5}^{(5)} := \frac{H_{j_1 \ldots j_5}^{(4)}}{ {\mathrm i} (j_1^3 + \ldots + j_5^3)} \,. \end{equation} By Hypotheses $ ({\mathtt S}2_0), ({\mathtt S}2_1) $, if $ j_1 + \ldots + j_5 = 0 $ with at most one index outside $ S $ then $ j_1^3 + \ldots + j_5^3 \neq 0 $ and $ F^{(5)}$ is well defined. Let $ \Phi^{(5)} $ be the time $ 1 $-flow generated by $ X_{F^{(5)}} $. The new Hamiltonian is \begin{equation}\label{callHn} H^{(5)} := H^{(4)} \circ \Phi^{(5)} = H_2 + H_3^{(3)} + H_4^{(4)} + \{ H_2, F^{(5)} \} + H_5^{(4)} + H^{(5)}_{\geq 6} \end{equation} where, by \eqref{Fn odd}, $$ H_5^{(5)} := \{ H_2, F^{(5)} \} + H_5^{(4)} = {\mathop\sum}_{q = 2}^5 R(v^{5 - q} z^q) \, . $$
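The denominators in \eqref{Fn odd} solve a homological equation for $ H_2 $. A minimal sketch, with the bracket convention for which the flow of $ H_2 $ is the Airy flow $ u_j(t) = e^{{\mathrm i} j^3 t} u_j(0) $: each monomial is an eigenvector of $ \{ H_2, \cdot \, \} $,
$$ \{ H_2 , u_{j_1} \cdots u_{j_5} \} = - {\mathrm i} \, ( j_1^3 + \ldots + j_5^3 ) \, u_{j_1} \cdots u_{j_5} \, , $$
so that $ \{ H_2, F^{(5)} \} = - \sum H^{(4)}_{j_1 \ldots j_5} u_{j_1} \cdots u_{j_5} $, the sum running over the tuples in \eqref{Fn odd}. Hence $ \{ H_2, F^{(5)} \} + H_5^{(4)} $ retains only the monomials with at least two indices outside $ S $, which is the stated form of $ H_5^{(5)} $.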
Renaming $ {\cal H} := H^{(5)} $, namely $ {\cal H}_n := H^{(n)}_n $, $ n =3, 4, 5 $, and setting $ \Phi_B := \Phi^{(3)} \circ \Phi^{(4)} \circ \Phi^{(5)} $, formula \eqref{widetilde cal H} follows.
The homogeneous component $ H^{(4)}_{5} $ preserves the momentum, see section \ref{sec:Ham For}. Hence $F^{(5)} $ also preserves the momentum. As a consequence, the components $ H^{(5)}_k $, $ k \leq 5 $, also preserve the momentum.
Finally, since $F^{(5)} $ is Fourier-supported on a finite set, the transformation $\Phi^{(5)}$ is of type \eqref{finito finito} (and analytic), and therefore also the composition $ \Phi_B $ is of type \eqref{finito finito} (and analytic).
\section{Action-angle variables}\label{sec:4}
We now introduce action-angle variables on the tangential directions by the change of coordinates \begin{equation}\label{coordinate azione angolo} \begin{cases}
u_j := \sqrt{\xi_j + |j| y_j} \, e^{{\mathrm i} \theta_j}, \qquad & \text{if} \ j \in S\,,\\ u_j := z_j, \qquad & \text{if} \ j \in S^c \, , \end{cases} \end{equation} where (recall $ u_{-j} = {\overline u}_j $) \begin{equation}\label{simmeS} \xi_{-j} = \xi_j \, , \quad \xi_j > 0 \, , \quad y_{-j} = y_j \, , \quad \theta_{-j} = - \theta_j \, , \quad \theta_j, \, y_j \in \mathbb R \, , \quad \forall j \in S \,. \end{equation} For the tangential sites $ S^+ := \{ {\bar \jmath_1}, \ldots, {\bar \jmath_\nu} \} $ we shall also denote $ \theta_{\bar \jmath_i} := \theta_i $, $ y_{\bar \jmath_i} := y_i $, $ \xi_{\bar \jmath_i} := \xi_i $, $ i =1, \ldots \, \nu $.
The symplectic 2-form $ \Omega $ in \eqref{2form0} (i.e. \eqref{2form KdV}) becomes \begin{equation}\label{2form} {\cal W} := \sum_{i=1}^\nu d \theta_i \wedge d y_i + \frac12 \sum_{j \in S^c \setminus \{ 0 \} } \frac{1}{{\mathrm i} j} \, d z_j \wedge d z_{-j} = \big( \sum_{i=1}^\nu d \theta_i \wedge d y_i \big) \oplus \Omega_{S^\bot} = d \Lambda \end{equation} where $ \Omega_{S^\bot} $ denotes the restriction of $ \Omega $ to $ H_S^\bot $ (see \eqref{splitting S-S-bot}) and $ \Lambda $ is the contact $ 1 $-form on $ \mathbb T^\nu \times \mathbb R^\nu \times H_S^\bot $ defined by $ \Lambda_{(\theta, y, z)} : \mathbb R^\nu \times \mathbb R^\nu \times H_S^\bot \to \mathbb R $, \begin{equation}\label{Lambda 1 form} \Lambda_{(\theta, y, z)}[\widehat \theta, \widehat y, \widehat z] := - y \cdot \widehat \theta + \frac12 ( \partial_x^{-1} z, \widehat z )_{L^2 (\mathbb T)} \, . \end{equation} Instead of working in a shrinking neighborhood of the origin, it is a convenient device to rescale the ``unperturbed actions'' $ \xi $ and the action-angle variables as \begin{equation}\label{rescaling kdv quadratica}
\xi \mapsto \varepsilon^2 \xi \, , \quad y \mapsto \varepsilon^{2b} y \, , \quad z \mapsto \varepsilon^b z \, .
\end{equation} Then the symplectic $ 2 $-form in \eqref{2form} transforms into $ \varepsilon^{2b} {\cal W } $. Hence the Hamiltonian system generated by $ {\cal H} $ in \eqref{widetilde cal H} transforms into the new Hamiltonian system \begin{equation} \label{def H eps}
\dot \theta = \partial_y H_\varepsilon (\theta, y, z) \, , \
\dot y = - \partial_\theta H_\varepsilon (\theta, y, z) \, , \
z_t = \partial_x \nabla_z H_\varepsilon (\theta, y, z) \, , \quad H_\varepsilon := \varepsilon^{-2b} \mathcal{H} \circ A_\varepsilon \end{equation} where \begin{equation} \label{def A eps} A_\varepsilon (\theta, y, z) := \varepsilon v_\varepsilon(\theta, y) + \varepsilon^b z
:= \varepsilon {\mathop \sum}_{j \in S} \sqrt{\xi_j + \varepsilon^{2(b-1)} |j| y_j} \, e^{{\mathrm i} \theta_j} e^{{\mathrm i} j x} + \varepsilon^b z \, . \end{equation} We shall still denote by $ X_{H_\varepsilon} = (\partial_y H_\varepsilon, - \partial_\theta H_\varepsilon, \partial_x \nabla_z H_\varepsilon) $ the Hamiltonian vector field in the variables $ (\theta, y, z ) \in \mathbb T^\nu \times \mathbb R^\nu \times H_S^\bot $.
We now write explicitly the Hamiltonian $ H_\varepsilon (\theta, y, z) $ in \eqref{def H eps}. The quadratic Hamiltonian $ H_2 $ in \eqref{H iniziale KdV} transforms into \begin{equation}\label{shape H2} \varepsilon^{-2b}H_2 \circ A_\varepsilon = const + {\mathop \sum}_{j \in S^+} j^3 y_j + \frac{1}{2} \int_{\mathbb T} z_x^2 \, dx \, , \end{equation} and, recalling \eqref{H3tilde}, \eqref{mH3 mH4}, the Hamiltonian $ {\cal H }$ in \eqref{widetilde cal H} transforms into (shortly writing $ v_\varepsilon := v_\varepsilon (\theta, y) $) \begin{align} H_\varepsilon (\theta, y, z) & = e(\xi) + \alpha (\xi) \cdot y + \frac12 \int_\mathbb T z_x^2 dx + \varepsilon^b\int_\mathbb T z^3 dx + 3 \varepsilon \int_\mathbb T v_\varepsilon z^2 dx \label{formaHep} \\ & \quad + \varepsilon^2 \Big\{6 \int_\mathbb T v_\varepsilon z
\Pi_S \big((\partial_x^{-1} v_\varepsilon)(\partial_x^{-1} z) \big)\,dx + 3 \int_\mathbb T z^2 \pi_0 (\partial_x^{-1} v_\varepsilon)^2\,dx \Big\}
- \frac32 \varepsilon^{2b} {\mathop \sum}_{j \in S} y_j^2 \nonumber \\ & \quad + \varepsilon^{b + 1} R(v_\varepsilon z^3) + \varepsilon^3 R(v_\varepsilon^3 z^2) + \varepsilon^{2+b} \sum_{q=3}^5 \varepsilon^{(q-3)(b-1)} R(v_\varepsilon^{5-q} z^q) + \varepsilon^{-2b} {\cal H}_{\geq 6} (\varepsilon v_\varepsilon + \varepsilon^b z ) \nonumber \end{align} where $e(\xi)$ is a constant, and the frequency-amplitude map is \begin{equation} \label{mappa freq amp} \alpha(\xi) := \bar\omega + \varepsilon^2 {\mathbb A} \xi \, , \quad {\mathbb A} := - 6 \, \text{diag} \{ 1/j \}_{j \in S^+} \, . \end{equation} We write the Hamiltonian in \eqref{formaHep} as \begin{equation} \label{Hamiltoniana Heps KdV} H_\varepsilon = {\cal N} + P \,, \quad {\cal N}(\theta, y, z) = \alpha (\xi) \cdot y + \frac12 \big(N(\theta) z , z \big)_{L^2(\mathbb T)} \,, \end{equation} where \begin{align} \frac12 \big(N(\theta) z, z \big)_{L^2(\mathbb T)} & := \frac12 \big( (\partial_{z} \nabla H_\varepsilon)(\theta, 0, 0)[z], z \big)_{L^2(\mathbb T)} \label{Nshape} = \frac12 \int_\mathbb T z_x^2 dx + 3 \varepsilon \int_\mathbb T v_\varepsilon(\theta, 0) z^2 dx \\ & + \varepsilon^2 \Big\{6 \int_\mathbb T v_\varepsilon(\theta, 0) z \Pi_S \big((\partial_x^{-1} v_\varepsilon(\theta,0))(\partial_x^{-1} z) \big) dx + 3 \int_\mathbb T z^2 \pi_0 (\partial_x^{-1} v_\varepsilon(\theta, 0))^2 dx \Big\} + \ldots \nonumber \end{align} and $P := H_\varepsilon - {\cal N} $.
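The tangential contribution in \eqref{shape H2} can be checked directly. Up to the normalization of the Fourier basis, $ H_2 = \frac12 \sum_j j^2 u_j u_{-j} $, and there are no mixed terms between $ v_\varepsilon \in H_S $ and $ z \in H_S^\bot $ because their Fourier supports are disjoint. By \eqref{coordinate azione angolo} and the rescaling \eqref{rescaling kdv quadratica}, for $ j \in S $ one has $ u_j u_{-j} = |u_j|^2 = \varepsilon^2 \xi_j + \varepsilon^{2b} |j| y_j $, hence
$$ \varepsilon^{-2b} H_2 \circ A_\varepsilon = \varepsilon^{-2b} \Big( \frac12 \sum_{j \in S} j^2 \big( \varepsilon^2 \xi_j + \varepsilon^{2b} |j| y_j \big) + \frac{\varepsilon^{2b}}{2} \int_{\mathbb T} z_x^2 \, dx \Big) = const + \sum_{j \in S^+} j^3 y_j + \frac12 \int_{\mathbb T} z_x^2 \, dx \, , $$
where $ const $ collects the terms independent of $ (\theta, y, z) $ and each pair $ \pm j $, $ j \in S^+ $, contributes $ \frac12 \big( j^2 |j| y_j + j^2 |j| y_{-j} \big) = j^3 y_j $ by \eqref{simmeS}.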
\section{The nonlinear functional setting}\label{sec:functional}
We look for an embedded invariant torus \begin{equation} \label{embedded torus i} i : \mathbb T^\nu \to \mathbb T^\nu \times \mathbb R^\nu \times H_S^\bot, \quad \varphi \mapsto i (\varphi) := ( \theta (\varphi), y (\varphi), z (\varphi)) \end{equation} of the Hamiltonian vector field $ X_{H_\varepsilon} $ filled by quasi-periodic solutions with diophantine frequency $ \omega $. We require that $ \omega $ belongs to the set \begin{equation}\label{Omega epsilon} \Omega_\varepsilon := \alpha ( [1,2]^\nu ) = \{ \alpha(\xi) : \xi \in [1,2]^\nu \} \end{equation} where $ \alpha $ is the diffeomorphism \eqref{mappa freq amp}, and, in the Hamiltonian $ H_\varepsilon $ in \eqref{Hamiltoniana Heps KdV}, we choose \begin{equation}\label{linkxiomega} \xi = \alpha^{-1}(\omega) = \varepsilon^{-2} {\mathbb A}^{-1} (\omega - \bar\omega) \, . \end{equation} Since any $ \omega \in \Omega_\varepsilon$ is $ \varepsilon^2 $-close to the integer vector $ \bar \omega $ (see \eqref{bar omega}), we require that the constant $\gamma$ in the diophantine inequality \begin{equation}\label{omdio}
|\omega \cdot l | \geq \gamma \langle l \rangle^{-\tau} \, , \ \ \forall l \in \mathbb Z^\nu \setminus \{0\} \, , \quad \text{satisfies} \ \ \gamma = \varepsilon^{2+a} \quad \text{for some} \ a > 0 \,. \end{equation} We remark that the definition of $\gamma$ in \eqref{omdio} is slightly stronger than the minimal condition, which is $ \gamma \leq c \varepsilon^2 $ with $ c $ small enough. In addition to \eqref{omdio} we shall also require that $ \omega $ satisfies the first and second order Melnikov-non-resonance conditions \eqref{Omegainfty}.
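The role of $ a > 0 $ is transparent in the standard measure estimate, sketched here using only that $ \Omega_\varepsilon $ is contained in a cube of side $ C \varepsilon^2 $ and has measure of order $ \varepsilon^{2\nu} $ (see \eqref{Omega epsilon}, \eqref{mappa freq amp}): for each $ l \neq 0 $,
$$ \big| \big\{ \omega \in \Omega_\varepsilon \, : \, |\omega \cdot l| < \gamma \langle l \rangle^{-\tau} \big\} \big| \leq C \gamma \langle l \rangle^{-\tau - 1} \varepsilon^{2 (\nu - 1)} \, , $$
and, summing over $ l \in \mathbb Z^\nu \setminus \{0\} $ (the series converges for $ \tau > \nu - 1 $), the set of $ \omega \in \Omega_\varepsilon $ violating \eqref{omdio} has measure $ \leq C' \gamma \varepsilon^{2 (\nu - 1)} = C' \varepsilon^{a} \varepsilon^{2\nu} $, i.e. a fraction $ O(\varepsilon^a) $ of $ |\Omega_\varepsilon| $. This also shows why $ \gamma \leq c \varepsilon^2 $ with $ c $ small is the minimal requirement.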
We look for an embedded invariant torus of the modified Hamiltonian vector field $ X_{H_{\varepsilon, \zeta}} = X_{H_\varepsilon} + (0, \zeta, 0)$ which is generated by the Hamiltonian \begin{equation}\label{hamiltoniana modificata} H_{\varepsilon, \zeta} (\theta, y, z) := H_\varepsilon (\theta, y, z) + \zeta \cdot \theta\,,\quad \zeta \in \mathbb R^\nu\,. \end{equation} Note that $ X_{H_{\varepsilon, \zeta}} $ is periodic in $\theta $ (unlike $ H_{\varepsilon, \zeta} $). It turns out that an invariant torus for $ X_{H_{\varepsilon, \zeta}} $ is actually invariant for $ X_{H_\varepsilon} $, see Lemma \ref{zeta = 0}. We introduce the parameter $\zeta \in \mathbb R^\nu $ in order to control the average in the $ y$-component of the linearized equations. Thus we look for zeros of the nonlinear operator \begin{align}
{\cal F} (i, \zeta ) & := {\cal F} (i, \zeta, \omega, \varepsilon ) := {\cal D}_\omega i (\varphi) - X_{H_{\varepsilon, \zeta}} (i(\varphi)) = {\cal D}_\omega i (\varphi) - X_{\cal N} (i(\varphi)) - X_P (i(\varphi)) + (0, \zeta, 0 ) \label{operatorF} \\ & \nonumber := \left( \begin{array}{c} {\cal D}_\omega \theta (\varphi) - \partial_y H_\varepsilon ( i(\varphi) ) \\ {\cal D}_\omega y (\varphi) + \partial_\theta H_\varepsilon ( i(\varphi) ) + \zeta \\ {\cal D}_\omega z (\varphi) - \partial_x \nabla_z H_\varepsilon ( i(\varphi)) \end{array} \right) \!\! = \!\! \left( \begin{array}{c} {\cal D}_\omega \Theta (\varphi) - \partial_y P (i(\varphi) ) \\ \!\! \! {\cal D}_\omega y (\varphi) + \frac12 \partial_\theta ( N(\theta (\varphi)) z(\varphi), z(\varphi) )_{L^2(\mathbb T)}
+ \partial_\theta P ( i(\varphi) ) + \zeta \!\!\!\! \\ {\cal D}_\omega z (\varphi) - \partial_x N ( \theta (\varphi )) z (\varphi) - \partial_x \nabla_z P ( i(\varphi) ) \end{array} \right) \end{align} where $ \Theta(\varphi) := \theta (\varphi) - \varphi $ is $ (2 \pi)^\nu $-periodic and we use the short notation \begin{equation}\label{Domega} {\cal D}_\omega := \omega \cdot \partial_\varphi \, . \end{equation} The Sobolev norm of the periodic component of the embedded torus \begin{equation}\label{componente periodica} {\mathfrak I}(\varphi) := i (\varphi) - (\varphi,0,0) := ( {\Theta} (\varphi), y(\varphi), z(\varphi))\,, \quad \Theta(\varphi) := \theta (\varphi) - \varphi \, , \end{equation} is \begin{equation}\label{norma fracchia}
\| {\mathfrak I} \|_s := \| \Theta \|_{H^s_\varphi} + \| y \|_{H^s_\varphi} + \| z \|_s \end{equation}
where $ \| z \|_s := \| z \|_{H^s_{\varphi,x}} $ is defined in \eqref{Sobolev coppia}. We link the rescaling \eqref{rescaling kdv quadratica} with the diophantine constant $ \gamma = \varepsilon^{2+a} $ by choosing \begin{equation}\label{link gamma b} \gamma = \varepsilon^{2b}\,, \qquad b = 1 + ( a \slash 2 ) \, . \end{equation} Other choices are possible, see Remark \ref{comm3}.
\begin{theorem}\label{main theorem} Let the tangential sites $ S $ in \eqref{tang sites} satisfy $({\mathtt S}1), ({\mathtt S}2)$. Then, for all $ \varepsilon \in (0, \varepsilon_0 ) $, where $ \varepsilon_0 $ is small enough, there exists a Cantor-like set $ {\cal C}_\varepsilon \subset \Omega_\varepsilon $, with asymptotically full measure as $ \varepsilon \to 0 $, namely \begin{equation}\label{stima in misura main theorem}
\lim_{\varepsilon\to 0} \, \frac{|{\cal C}_\varepsilon|}{|\Omega_\varepsilon|} = 1 \, , \end{equation} such that, for all $ \omega \in {\cal C}_\varepsilon $, there exists a solution $ i_\infty (\varphi) := i_\infty (\omega, \varepsilon)(\varphi) $ of ${\cal D}_\omega i_\infty(\varphi) - X_{H_\varepsilon}(i_\infty(\varphi)) = 0 $. Hence the embedded torus $ \varphi \mapsto i_\infty (\varphi) $ is invariant for the Hamiltonian vector field $ X_{H_\varepsilon (\cdot, \xi)} $ with $ \xi $ as in \eqref{linkxiomega}, and it is filled by quasi-periodic solutions with frequency $ \omega $. The torus $i_\infty$ satisfies \begin{equation}\label{stima toro finale}
\| i_\infty (\varphi) - (\varphi,0,0) \|_{s_0 + \mu}^{\mathrm{Lip}(\g)} = O(\varepsilon^{6 - 2 b} \gamma^{-1} ) \end{equation} for some $ \mu := \mu (\nu) > 0 $. Moreover, the torus $ i_\infty $ is {\sc linearly stable}. \end{theorem}
Theorem \ref{main theorem} is proved in sections \ref{costruzione dell'inverso approssimato}-\ref{sec:NM}. It implies Theorem \ref{thm:KdV}, where the $ \xi_j $ in \eqref{solution u} are the rescaled $ \varepsilon^2 \xi_j $, $ \xi_j \in [1,2 ] $, of \eqref{linkxiomega}. By \eqref{stima toro finale}, going back to the variables before the rescaling \eqref{rescaling kdv quadratica}, we get $ \Theta_\infty = O( \varepsilon^{6-2b} \gamma^{-1}) $, $ y_\infty = O( \varepsilon^6 \gamma^{-1} ) $, $ z_\infty = O( \varepsilon^{6-b} \gamma^{-1} ) $, which, as $ b \to 1^+ $, tend to the expected optimal estimates.
\begin{remark} \label{comm3} There are other possible ways to link the rescaling \eqref{rescaling kdv quadratica} with the diophantine constant $ \gamma = \varepsilon^{2+a} $. The choice $ \gamma > \varepsilon^{2b} $ reduces to study perturbations of an isochronous system (as in \cite{Ku}, \cite{k1}, \cite{Po3}), and it is convenient to
introduce $ \xi (\omega) $ as a variable. The case $ \varepsilon^{2b} > \gamma $, in particular $ b = 1 $, has to be dealt with by a perturbative approach for a non-isochronous system {\`a la} Arnold-Kolmogorov. \end{remark}
We now give the tame estimates for the composition operator induced by the Hamiltonian vector fields $ X_{\cal N} $ and $ X_P $ in \eqref{operatorF}, that we shall use in the next sections.
We first estimate the composition operator induced by $ v_\varepsilon (\theta, y) $ defined in \eqref{def A eps}. Since the functions $ y \mapsto \sqrt{\xi + \varepsilon^{2(b - 1)}|j| y} $, $\theta \mapsto e^{{\mathrm i} \theta}$
are analytic for $\varepsilon$ small enough and $|y| \leq C$, the composition Lemma \ref{lemma:composition of functions, Moser} implies that, for all $ \Theta, y \in H^s(\mathbb T^\nu, \mathbb R^\nu )$,
$ \| \Theta \|_{s_0}, \| y \|_{s_0} \leq 1 $, setting $\theta(\varphi) := \varphi + \Theta (\varphi)$,
$ \| v_\varepsilon (\theta (\varphi) ,y(\varphi) ) \|_s
\leq_s 1 + \| \Theta \|_s + \| y \|_s $. Hence, using also \eqref{linkxiomega}, the map $ A_\varepsilon $ in \eqref{def A eps} satisfies, for all
$ \| {\mathfrak I} \|_{s_0}^{\mathrm{Lip}(\g)} \leq 1 $ (see \eqref{componente periodica}) \begin{equation} \label{stima Aep}
\| A_\varepsilon (\theta (\varphi),y(\varphi),z(\varphi)) \|_s^{{\mathrm{Lip}(\g)}} \leq_s \varepsilon (1 + \| {\mathfrak I} \|_s^{\mathrm{Lip}(\g)}) \, . \end{equation} We now give tame estimates for the Hamiltonian vector fields $ X_{\cal N} $, $ X_P $, $ X_{H_\varepsilon} $, see \eqref{Hamiltoniana Heps KdV}-\eqref{Nshape}.
\begin{lemma}\label{lemma quantitativo forma normale} Let $ {\mathfrak{I}}(\varphi) $ in \eqref{componente periodica} satisfy
$ \| {\mathfrak I} \|_{s_0 + 3}^{\mathrm{Lip}(\g)} \leq C\varepsilon^{6 - 2b} \gamma^{-1} $. Then
\begin{alignat}{2}
\| \partial_y P(i) \|_s^{\mathrm{Lip}(\g)} & \leq_s \varepsilon^4 + \varepsilon^{2b} \| {\mathfrak I}\|_{s+1}^{\mathrm{Lip}(\g)} \label{D y P} \, , &
\| \partial_\theta P(i) \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^{6 - 2b} (1 + \| {\mathfrak I} \|_{s + 1}^{\mathrm{Lip}(\g)}) \\
\| \nabla_z P(i) \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^{5 - b} + \varepsilon^{6- b} \gamma^{-1} \| {\mathfrak I} \|_{s + 1}^{\mathrm{Lip}(\g)} \, , &
\| X_P(i)\|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^{6 - 2b} + \varepsilon^{2b} \| {\mathfrak I}\|_{s + 3}^{\mathrm{Lip}(\g)} \label{stima XP} \\ \label{stime linearizzato campo hamiltoniano}
\| \partial_{\theta} \partial_y P(i)\|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^{4} + \varepsilon^{5} \gamma^{-1} \| {\mathfrak I}\|_{s + 2}^{\mathrm{Lip}(\g)} \, , &
\| \partial_y \nabla_z P(i)\|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^{b+3} + \varepsilon^{2b - 1} \| {\mathfrak I} \|_{s+2}^{\mathrm{Lip}(\g)} \end{alignat} \begin{equation} \label{D yy P}
\| \partial_{yy} P(i) + 3 \varepsilon^{2b} I_{\mathbb R^\nu} \|_s^{\mathrm{Lip}(\g)}
\leq_s \varepsilon^{2+ 2b} + \varepsilon^{2b+3} \gamma^{-1} \| {\mathfrak{I}} \|_{s+2}^{\mathrm{Lip}(\g)} \end{equation} and, for all $ \widehat \imath := (\widehat \Theta, \widehat y, \widehat z) $, \begin{align} \label{D yii}
\| \partial_y d_{i} X_P(i)[\widehat \imath \,]\|_s^{\mathrm{Lip}(\g)} &\leq_s \varepsilon^{2 b - 1} \big( \| \widehat \imath \|_{s + 3}^{\mathrm{Lip}(\g)} + \| {\mathfrak I}\|_{s + 3}^{\mathrm{Lip}(\g)} \| \widehat \imath \|_{s_0 + 3}^{\mathrm{Lip}(\g)}\big) \\
\label{tame commutatori} \| d_i X_{H_\varepsilon}(i) [\widehat \imath \, ] + (0,0, \partial_{xxx} \hat z)\|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon \big( \| \widehat \imath\|_{s + 3}^{\mathrm{Lip}(\g)}
+ \|{\mathfrak I} \|_{s + 3}^{\mathrm{Lip}(\g)} \| \widehat \imath\|_{s_0 + 3}^{\mathrm{Lip}(\g)} \big) \\ \label{parte quadratica da P}
\| d_i^2 X_{H_\varepsilon}(i) [\widehat \imath, \widehat \imath \,]\|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon\Big( \| \widehat \imath \|_{s + 3}^{\mathrm{Lip}(\g)} \|
\widehat \imath\|_{s_0 + 3}^{\mathrm{Lip}(\g)} + \| {\mathfrak I}\|_{s + 3}^{\mathrm{Lip}(\g)} (\| \widehat \imath\|_{s_0 + 3}^{\mathrm{Lip}(\g)})^2\Big). \end{align} \end{lemma}
In the sequel we will also use that, by the diophantine condition \eqref{omdio}, the operator $ {\cal D}_\omega^{-1} $ (see \eqref{Domega}) is defined for all functions $ u $ with zero $ \varphi $-average, and satisfies \begin{equation}\label{Dom inverso}
\| {\cal D}_\omega^{-1} u \|_s \leq C \gamma^{-1} \| u \|_{s+ \tau} \, , \quad \| {\cal D}_\omega^{-1} u \|_s^{{\mathrm{Lip}(\g)}} \leq C \gamma^{-1} \| u \|_{s+ 2 \tau+1}^{{\mathrm{Lip}(\g)}} \, . \end{equation}
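On the Fourier side both bounds in \eqref{Dom inverso} are explicit. For $ u(\varphi) = \sum_{l \neq 0} u_l \, e^{{\mathrm i} l \cdot \varphi} $ with zero $ \varphi $-average,
$$ {\cal D}_\omega^{-1} u = \sum_{l \neq 0} \frac{u_l}{{\mathrm i} \, \omega \cdot l} \, e^{{\mathrm i} l \cdot \varphi} \, , $$
and the first estimate follows from \eqref{omdio} since each Fourier coefficient is amplified by at most $ |\omega \cdot l|^{-1} \leq \gamma^{-1} \langle l \rangle^{\tau} $. The Lipschitz bound loses $ 2 \tau + 1 $ derivatives because, for $ \omega_1, \omega_2 $ satisfying \eqref{omdio},
$$ \Big| \frac{1}{\omega_1 \cdot l} - \frac{1}{\omega_2 \cdot l} \Big| = \frac{|(\omega_1 - \omega_2) \cdot l|}{|\omega_1 \cdot l| \, |\omega_2 \cdot l|} \leq \gamma^{-2} \langle l \rangle^{2 \tau + 1} \, |\omega_1 - \omega_2| \, , $$
and one factor $ \gamma^{-1} $ is absorbed by the weight in the $ {\mathrm{Lip}(\g)} $ norm.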
\section{Approximate inverse}\label{costruzione dell'inverso approssimato}
In order to implement a convergent Nash-Moser scheme that leads to a solution of $ \mathcal{F}(i, \zeta) = 0 $,
our aim is to construct an \emph{approximate right inverse} (which satisfies tame estimates) of the linearized operator \begin{equation}\label{operatore linearizzato} d_{i, \zeta} {\cal F}(i_0, \zeta_0 )[\widehat \imath \,, \widehat \zeta ] = d_{i, \zeta} {\cal F}(i_0 )[\widehat \imath \,, \widehat \zeta ] = {\cal D}_\omega \widehat \imath - d_i X_{H_\varepsilon} ( i_0 (\varphi) ) [\widehat \imath ] + (0, \widehat \zeta, 0 )
\,, \end{equation} see Theorem \ref{thm:stima inverso approssimato}. Note that $ d_{i, \zeta} {\cal F}(i_0, \zeta_0 ) = d_{i, \zeta} {\cal F}(i_0 ) $ is independent of $ \zeta_0 $ (see \eqref{operatorF}).
The notion of approximate right inverse is introduced in \cite{Z1}. It denotes a linear operator which is an \emph{exact} right inverse at a solution $ (i_0, \zeta_0) $ of $ {\cal F}(i_0, \zeta_0) = 0 $. We want to implement the general strategy in \cite{BB13}-\cite{BB14}, which reduces the search for an approximate right inverse of \eqref{operatore linearizzato} to the search for an approximate inverse on the normal directions only.
It is well known that an invariant torus $ i_0 $ with diophantine flow is isotropic (see e.g. \cite{BB13}), namely the pull-back $ 1$-form $ i_0^* \Lambda $ is closed, where $ \Lambda $ is the contact 1-form in \eqref{Lambda 1 form}. This is tantamount to saying that the 2-form $ \cal W $ (see \eqref{2form}) vanishes on the torus $ i_0 (\mathbb T^\nu )$ (i.e. $\cal W$ vanishes on the tangent space at each point $i_0(\varphi)$ of the manifold $i_0(\mathbb T^\nu)$), because $ i_0^* {\cal W} = i_0^* d \Lambda = d i_0^* \Lambda $. For an ``approximately invariant'' torus $ i_0 $ the 1-form $ i_0^* \Lambda$ is only ``approximately closed''. In order to make this statement quantitative we consider \begin{equation}\label{coefficienti pull back di Lambda} i_0^* \Lambda = {\mathop \sum}_{k = 1}^\nu a_k (\varphi) d \varphi_k \,,\quad a_k(\varphi) := - \big( [\partial_\varphi \theta_0 (\varphi)]^T y_0 (\varphi) \big)_k + \frac12 ( \partial_{\varphi_k} z_0(\varphi), \partial_{x}^{-1} z_0(\varphi) )_{L^2(\mathbb T)} \end{equation} and we quantify how small is \begin{equation} \label{def Akj} i_0^* {\cal W} = d \, i_0^* \Lambda = {\mathop\sum}_{1 \leq k < j \leq \nu} A_{k j}(\varphi) d \varphi_k \wedge d \varphi_j\,,\quad A_{k j} (\varphi) := \partial_{\varphi_k} a_j(\varphi) - \partial_{\varphi_j} a_k(\varphi) \, . \end{equation} Throughout this section we will always assume the following hypothesis (which will be verified at each step of the Nash-Moser iteration):
\begin{itemize} \item {\sc Assumption.} The map $\omega\mapsto i_0(\omega)$ is a Lipschitz function defined on some subset $\Omega_o \subset \Omega_\varepsilon$, where $\Omega_\varepsilon$ is defined in \eqref{Omega epsilon}, and, for some $ \mu := \mu (\tau , \nu) > 0 $, \begin{equation}\label{ansatz 0}
\| {\mathfrak I}_0 \|_{s_0+\mu}^{{\mathrm{Lip}(\g)}} \leq C\varepsilon^{6 - 2b} \gamma^{-1}, \quad
\| Z \|_{s_0 + \mu}^{{\mathrm{Lip}(\g)}} \leq C \varepsilon^{6 - 2b}, \quad \gamma = \varepsilon^{2 + a}, \quad
b := 1 + (a/2) \,,
\quad a \in (0, 1 / 6), \end{equation} where ${\mathfrak{I}}_0(\varphi) := i_0(\varphi) - (\varphi,0,0)$, and \begin{equation} \label{def Zetone} Z(\varphi) := (Z_1, Z_2, Z_3) (\varphi) := {\cal F}(i_0, \zeta_0) (\varphi) = \omega \cdot \partial_\varphi i_0(\varphi) - X_{H_{\varepsilon, \zeta_0}}(i_0(\varphi)) \, . \end{equation} \end{itemize} \begin{lemma} \label{zeta = 0}
$ |\zeta_0|^{{\mathrm{Lip}(\g)}} \leq C \| Z \|_{s_0}^{{\mathrm{Lip}(\g)}}$ \!\!\!. If $ {\cal F}(i_0, \zeta_0) = 0 $ then $ \zeta_0 = 0 $,
namely the torus $i_0 $ is invariant for $X_{H_\varepsilon}$.
\end{lemma}
\begin{proof} In \cite{BB13} it is proved that $$ \zeta_0 = \int_{\mathbb T^\nu} - [\partial_\varphi y_0 (\varphi)]^T Z_1 (\varphi) + [\partial_\varphi \theta_0 (\varphi)]^T Z_2 (\varphi) - [\partial_\varphi z_0 (\varphi) ]^T \partial_x^{-1} Z_3 (\varphi) \, d \varphi \, . $$ Hence the lemma follows by \eqref{ansatz 0} and the usual algebra estimate. \end{proof}
We now quantify the size of $ i_0^* {\cal W} $ in terms of $ Z $.
\begin{lemma} The coefficients $A_{kj} (\varphi) $ in \eqref{def Akj} satisfy \begin{equation}\label{stima A ij}
\| A_{k j} \|_s^{{\mathrm{Lip}(\g)}} \leq_s \gamma^{-1} \big(\| Z \|_{s+2\tau +2}^{{\mathrm{Lip}(\g)}} \| {\mathfrak I}_0 \|_{s_0+ 1}^{{\mathrm{Lip}(\g)}}
+ \| Z \|_{s_0+1}^{{\mathrm{Lip}(\g)}} \| {\mathfrak I}_0 \|_{s+ 2 \tau + 2}^{{\mathrm{Lip}(\g)}} \big)\,. \end{equation} \end{lemma}
\begin{proof} We estimate the coefficients of the Lie derivative $ L_\omega (i_0^* {\cal W}) := \sum_{k < j} {\cal D}_\omega A_{k j}(\varphi) d \varphi_k \wedge d \varphi_j $. Denoting by $ \underline{e}_k $ the $ k $-th versor of $ \mathbb R^\nu $ we have $$
{\cal D}_\omega A_{k j} = L_\omega( i_0^* {\cal W})(\varphi)[ \underline{e}_k , \underline{e}_j ] = {\cal W}\big( \partial_\varphi Z(\varphi) \underline{e}_k , \partial_\varphi i_0(\varphi) \underline{e}_j \big) + {\cal W} \big(\partial_\varphi i_0(\varphi) \underline{e}_k , \partial_\varphi Z(\varphi) \underline{e}_j \big) $$ (see \cite{BB13}). Hence \begin{equation} \label{bella trovata}
\| {\cal D}_\omega A_{k j} \|_s^{\mathrm{Lip}(\g)}
\leq_s \| Z \|_{s+1}^{\mathrm{Lip}(\g)} \| {\mathfrak I}_0 \|_{s_0 + 1}^{\mathrm{Lip}(\g)}
+ \| Z \|_{s_0 + 1}^{\mathrm{Lip}(\g)} \| {\mathfrak I}_0 \|_{s + 1}^{\mathrm{Lip}(\g)} \,. \end{equation} The bound \eqref{stima A ij} follows applying $ {\cal D}_\omega^{-1}$ and using \eqref{def Akj}, \eqref{Dom inverso}. \end{proof}
As in \cite{BB13} we first modify the approximate torus $ i_0 $ to obtain an isotropic torus $ i_\delta $ which is still approximately invariant. We denote by $ \Delta_\varphi := \sum_{k=1}^\nu \partial_{\varphi_k}^2 $ the Laplacian in the angles $ \varphi $.
\begin{lemma}\label{toro isotropico modificato} {\bf (Isotropic torus)} The torus $ i_\delta(\varphi) := (\theta_0(\varphi), y_\delta(\varphi), z_0(\varphi) ) $ defined by \begin{equation}\label{y 0 - y delta} y_\delta := y_0 + [\partial_\varphi \theta_0(\varphi)]^{- T} \rho(\varphi) \, , \qquad \rho_j(\varphi) := \Delta_\varphi^{-1} {\mathop\sum}_{ k = 1}^\nu \partial_{\varphi_j} A_{k j}(\varphi) \end{equation} is isotropic. If \eqref{ansatz 0} holds, then, for some $ \sigma := \sigma(\nu,\tau ) $, \begin{align} \label{stima y - y delta}
\| y_\delta - y_0 \|_s^{{\mathrm{Lip}(\g)}}
& \leq_s \gamma^{-1} \big(\| Z \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} \| {\mathfrak I}_0 \|_{s_0 + \sigma}^{{\mathrm{Lip}(\g)}} +
\| Z \|_{s_0 + \sigma}^{{\mathrm{Lip}(\g)}} \| {\mathfrak I}_0 \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} \big) \,, \\ \label{stima toro modificato}
\| {\cal F}(i_\delta, \zeta_0) \|_s^{{\mathrm{Lip}(\g)}}
& \leq_s \| Z \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} + \| Z \|_{s_0 + \sigma}^{{\mathrm{Lip}(\g)}} \| {\mathfrak I}_0 \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} \\ \label{derivata i delta}
\| \partial_i [ i_\delta][ \widehat \imath ] \|_s & \leq_s \| \widehat \imath \|_s + \| {\mathfrak I}_0\|_{s + \sigma} \| \widehat \imath \|_s \, . \end{align} \end{lemma} In this paper we denote the differential equivalently by $ \partial_i $ or $ d_i $. Moreover, we denote by $ \sigma := \sigma(\nu, \tau ) $ possibly different (larger) ``loss of derivatives'' constants.
\begin{proof}
In this proof we write $\| \ \|_s$ to denote $\| \ \|_s^{{\mathrm{Lip}(\g)}}$. The proof of the isotropy of $i_\delta$ is in \cite{BB13}. The estimate \eqref{stima y - y delta} follows by \eqref{y 0 - y delta}, \eqref{stima A ij}, \eqref{ansatz 0} and the tame bound for the inverse
$ \Vert [\partial_\varphi \theta_0 ]^{-T}\Vert_{s} \leq_s 1 + \| {\mathfrak I}_0 \|_{s + 1} $. It remains to estimate the difference (see \eqref{operatorF} and note that $ X_{\cal N} $ does not depend on $ y $) \begin{equation}\label{F diff} {\cal F}(i_\delta, \zeta_0) - {\cal F}(i_0, \zeta_0) = \begin{pmatrix} 0 \\ {\cal D}_\omega (y_\delta - y_0 ) \\ 0 \end{pmatrix}\, + X_P(i_\delta ) - X_P(i_0). \end{equation} Using \eqref{stime linearizzato campo hamiltoniano}, \eqref{D yy P}, we get
$ \| \partial_y X_P (i) \|_s \leq_s \varepsilon^{2b} + \varepsilon^{2b - 1} \| {\mathfrak I}\|_{s + 3} $. Hence \eqref{stima y - y delta}, \eqref{ansatz 0} imply \begin{equation}\label{XP delta - XP}
\|X_ P(i_\delta ) - X_P(i_0 )\|_s
\leq_s \| {\mathfrak I}_0 \|_{s_0 + \sigma} \|Z \|_{s + \sigma} + \| {\mathfrak I}_0 \|_{s + \sigma} \|Z \|_{s_0 + \sigma} \, . \end{equation} Differentiating \eqref{y 0 - y delta} we have \begin{equation}\label{D omega y 0 - y delta} {\cal D}_\omega (y_\delta - y_0 ) = [\partial_\varphi \theta_0(\varphi)]^{-T} {\cal D}_\omega \rho(\varphi) + ( {\cal D}_\omega [\partial_\varphi \theta_0(\varphi)]^{-T} ) \rho(\varphi) \end{equation} and $ {\cal D}_\omega \rho_j(\varphi) = \Delta^{-1}_\varphi \sum_{k = 1}^\nu \partial_{\varphi_j} {\cal D}_\omega A_{kj}(\varphi) $. Using \eqref{bella trovata}, we deduce that \begin{equation}\label{primo pezzo D omega y 0 - y delta}
\| [\partial_\varphi \theta_0 ]^{-T} {\cal D}_\omega \rho \|_s
\leq_s \| Z\|_{s + 1} \| {\mathfrak I}_0\|_{s_0 + 1} + \| Z \|_{s_0 + 1} \| {\mathfrak I}_0\|_{s + 1}\,. \end{equation} To estimate the second term in \eqref{D omega y 0 - y delta}, we differentiate $ Z_1(\varphi) = {\cal D}_\omega \theta_0(\varphi) - \omega - (\partial_y P)(i_0(\varphi)) $ (which is the first component in \eqref{operatorF}) with respect to $\varphi$. We get $ {\cal D}_\omega \partial_\varphi \theta_0(\varphi) = \partial_\varphi (\partial_y P)(i_0(\varphi)) + \partial_\varphi Z_1(\varphi) $. Then, by \eqref{D y P}, \begin{equation}\label{bella trovata 2}
\|{\cal D}_\omega [\partial_\varphi \theta_0]^T \|_s
\leq_s \varepsilon^4 + \varepsilon^{2b} \| {\mathfrak I}_0 \|_{s + 2} + \| Z \|_{s + 1}\,. \end{equation} Since $ {\cal D}_\omega [ \partial_\varphi \theta_0(\varphi)]^{-T} = - [ \partial_\varphi \theta_0(\varphi)]^{-T} \big( {\cal D}_\omega [\partial_\varphi \theta_0(\varphi)]^T \big) [\partial_\varphi \theta_0(\varphi)]^{-T} $, the bounds \eqref{bella trovata 2}, \eqref{stima A ij}, \eqref{ansatz 0} imply \begin{equation}\label{secondo pezzo D omega y 0 - y delta}
\| ({\cal D}_\omega [ \partial_\varphi \theta_0]^{-T} ) \rho \|_s \leq_s \varepsilon^{6-2b} \gamma^{-1}
\big( \| Z\|_{s + \sigma} \| {\mathfrak I}_0\|_{s_0 + \sigma} + \| Z \|_{s_0 + \sigma} \| {\mathfrak I}_0\|_{s + \sigma}\big) \, . \end{equation} In conclusion \eqref{F diff}, \eqref{XP delta - XP}, \eqref{D omega y 0 - y delta}, \eqref{primo pezzo D omega y 0 - y delta}, \eqref{secondo pezzo D omega y 0 - y delta} imply \eqref{stima toro modificato}. The bound \eqref{derivata i delta} follows by \eqref{y 0 - y delta}, \eqref{def Akj}, \eqref{coefficienti pull back di Lambda}, \eqref{ansatz 0}. \end{proof}
Note that there is no $ \gamma^{- 1} $ in the right hand side of \eqref{stima toro modificato}. It turns out that an approximate inverse of $d_{i, \zeta} {\cal F}(i_\delta )$
is an approximate inverse of $d_{i, \zeta} {\cal F}(i_0 )$ as well. In order to find an approximate inverse of the linearized operator $d_{i, \zeta} {\cal F}(i_\delta )$, we introduce a suitable set of symplectic coordinates near the isotropic torus $ i_\delta $. We consider the map $ G_\delta : (\psi, \eta, w) \to (\theta, y, z)$ of the phase space $\mathbb T^\nu \times \mathbb R^\nu \times H_S^\bot$ defined by \begin{equation}\label{trasformazione modificata simplettica} \begin{pmatrix} \theta \\ y \\ z \end{pmatrix} := G_\delta \begin{pmatrix} \psi \\ \eta \\ w \end{pmatrix} := \begin{pmatrix} \theta_0(\psi) \\ y_\delta (\psi) + [\partial_\psi \theta_0(\psi)]^{-T} \eta + \big[ (\partial_\theta \tilde{z}_0) (\theta_0(\psi)) \big]^T \partial_x^{-1} w \\ z_0(\psi) + w
\end{pmatrix} \end{equation} where $\tilde{z}_0 (\theta) := z_0 (\theta_0^{-1} (\theta))$. It is proved in \cite{BB13} that $ G_\delta $ is symplectic, using that the torus $ i_\delta $ is isotropic (Lemma \ref{toro isotropico modificato}). In the new coordinates, $ i_\delta $ is the trivial embedded torus $ (\psi , \eta , w ) = (\psi , 0, 0 ) $. The transformed Hamiltonian $ K := K(\psi, \eta, w, \zeta_0) $ is (recall \eqref{hamiltoniana modificata}) \begin{align} K := H_{\varepsilon, \zeta_0} \circ G_\delta & = \theta_0(\psi) \cdot \zeta_0 + K_{00}(\psi) + K_{10}(\psi) \cdot \eta + (K_{0 1}(\psi), w)_{L^2(\mathbb T)} + \frac12 K_{2 0}(\psi)\eta \cdot \eta \nonumber \\ & \quad + \big( K_{11}(\psi) \eta , w \big)_{L^2(\mathbb T)} + \frac12 \big(K_{02}(\psi) w , w \big)_{L^2(\mathbb T)} + K_{\geq 3}(\psi, \eta, w) \label{KHG} \end{align} where $ K_{\geq 3} $ collects the terms at least cubic in the variables $ (\eta, w )$. At any fixed $\psi $, the Taylor coefficient $K_{00}(\psi) \in \mathbb R $, $K_{10}(\psi) \in \mathbb R^\nu $, $K_{01}(\psi) \in H_S^\bot$ (it is a function of $ x \in \mathbb T $), $K_{20}(\psi) $ is a $\nu \times \nu$ real matrix, $K_{02}(\psi)$ is a linear self-adjoint operator of $ H_S^\bot $ and $K_{11}(\psi) : \mathbb R^\nu \to H_S^\bot$. Note that the above Taylor coefficients do not depend on the parameter $ \zeta_0 $.
The Hamilton equations associated to \eqref{KHG} are \begin{equation}\label{sistema dopo trasformazione inverso approssimato} \begin{cases} \dot \psi \hspace{-30pt} & = K_{10}(\psi) + K_{20}(\psi) \eta + K_{11}^T (\psi) w + \partial_{\eta} K_{\geq 3}(\psi, \eta, w) \\ \dot \eta \hspace{-30pt} & =- [\partial_\psi \theta_0(\psi)]^T \zeta_0 - \partial_\psi K_{00}(\psi) - [\partial_{\psi}K_{10}(\psi)]^T \eta - [\partial_{\psi} K_{01}(\psi)]^T w \\ & \quad - \partial_\psi \big( \frac12 K_{2 0}(\psi)\eta \cdot \eta + ( K_{11}(\psi) \eta , w )_{L^2(\mathbb T)} + \frac12 ( K_{02}(\psi) w , w )_{L^2(\mathbb T)} + K_{\geq 3}(\psi, \eta, w) \big) \\ \dot w \hspace{-30pt} & = \partial_x \big( K_{01}(\psi) + K_{11}(\psi) \eta + K_{0 2}(\psi) w + \nabla_w K_{\geq 3}(\psi, \eta, w) \big) \end{cases} \end{equation} where $ [\partial_{\psi}K_{10}(\psi)]^T $ is the $ \nu \times \nu $ transposed matrix and $ [\partial_{\psi}K_{01}(\psi)]^T $, $ K_{11}^T(\psi) : {H_S^\bot \to \mathbb R^\nu} $ are defined by the duality relation $ ( \partial_{\psi} K_{01}(\psi) [\hat \psi ], w)_{L^2} = \hat \psi \cdot [\partial_{\psi}K_{01}(\psi)]^T w $, $ \forall \hat \psi \in \mathbb R^\nu, w \in H_S^\bot $, and similarly for $ K_{11} $. Explicitly, for all $ w \in H_S^\bot $, and denoting $\underline{e}_k$ the $k$-th versor of $\mathbb R^\nu$, \begin{equation} \label{K11 tras} K_{11}^T(\psi) w = {\mathop \sum}_{k=1}^\nu \big(K_{11}^T(\psi) w \cdot \underline{e}_k\big) \underline{e}_k = {\mathop \sum}_{k=1}^\nu \big( w, K_{11}(\psi) \underline{e}_k \big)_{L^2(\mathbb T)} \underline{e}_k \, \in \mathbb R^\nu \, . \end{equation}
In the next lemma we estimate the coefficients $ K_{00} $, $ K_{10} $, $K_{01} $ in the Taylor expansion \eqref{KHG}. Note that on an exact solution we have $ Z = 0 $ and therefore $ K_{00} (\psi) = {\rm const} $, $ K_{10} = \omega $ and $ K_{01} = 0 $.
\begin{lemma} \label{coefficienti nuovi} Assume \eqref{ansatz 0}. Then there is $ \sigma := \sigma(\tau, \nu)$ such that \[
\| \partial_\psi K_{00} \|_s^{{\mathrm{Lip}(\g)}}
+ \| K_{10} - \omega \|_s^{{\mathrm{Lip}(\g)}} + \| K_{0 1} \|_s^{{\mathrm{Lip}(\g)}}
\leq_s \| Z \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} + \| Z \|_{s_0 + \sigma}^{{\mathrm{Lip}(\g)}} \| {\mathfrak I}_0 \|_{s + \sigma}^{{\mathrm{Lip}(\g)}}\,. \] \end{lemma}
\begin{proof} Let $ {\cal F}(i_\delta, \zeta_0) := Z_\delta := (Z_{1,\delta}, Z_{2,\delta}, Z_{3,\delta}) $. By a direct calculation as in \cite{BB13} (using \eqref{KHG}, \eqref{operatorF}) \begin{align*} \partial_\psi K_{00}(\psi) & = - [ \partial_\psi \theta_0 (\psi) ]^T \big( - Z_{2, \delta} - [ \partial_\psi y_\delta] [ \partial_\psi \theta_0]^{-1} Z_{1, \delta} + [ (\partial_\theta {\tilde z}_0)( \theta_0 (\psi)) ]^T \partial_x^{-1} Z_{3,\delta} \\ & \quad + [ (\partial_\theta {\tilde z}_0)(\theta_0 (\psi)) ]^T \partial_x^{-1} \partial_\psi z_0 (\psi) [ \partial_\psi \theta_0 (\psi)]^{-1} Z_{1,\delta} \big) \, , \\ K_{10}(\psi) & = \omega - [ \partial_\psi \theta_0(\psi)]^{-1} Z_{1,\delta}(\psi) \,, \\ K_{01}(\psi) & = - \partial_x^{-1} Z_{3,\delta} + \partial_x^{-1} \partial_\psi z_0(\psi) [\partial_\psi \theta_0(\psi)]^{-1} Z_{1,\delta}(\psi)\,. \end{align*} Then \eqref{ansatz 0}, \eqref{stima y - y delta}, \eqref{stima toro modificato} (using Lemma \ref{lemma:utile}) imply the lemma. \end{proof}
\begin{remark} \label{rem:KAM normal form} If $ {\cal F} (i_0, \zeta_0) = 0 $ then $\zeta_0 = 0$ by Lemma \ref{zeta = 0}, and Lemma \ref{coefficienti nuovi} implies that \eqref{KHG} simplifies to $ K = const + \omega \cdot \eta + \frac12 K_{2 0}(\psi)\eta \cdot \eta + \big( K_{11}(\psi) \eta , w \big)_{L^2(\mathbb T)} + \frac12 \big(K_{02}(\psi) w , w \big)_{L^2(\mathbb T)} + K_{\geq 3} $. \end{remark}
We now estimate $ K_{20}, K_{11}$ in \eqref{KHG}. The norm of $K_{20}$ is the sum of the norms of its matrix entries.
\begin{lemma} \label{lemma:Kapponi vari} Assume \eqref{ansatz 0}. Then \begin{align}\label{stime coefficienti K 20 11 bassa}
\|K_{20} + 3 \varepsilon^{2 b} I \|_s^{{\mathrm{Lip}(\g)}}
& \leq_s \varepsilon^{2b+2} + \varepsilon^{2b} \| {\mathfrak I}_0\|_{s + \sigma}^{{\mathrm{Lip}(\g)}} + \varepsilon^{3} \gamma^{-1} \| {\mathfrak I}_0\|_{s_0 + \sigma}^{{\mathrm{Lip}(\g)}} \| Z \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} \\ \label{stime coefficienti K 11 alta}
\| K_{11} \eta \|_s^{{\mathrm{Lip}(\g)}}
& \leq_s \varepsilon^{5} \gamma^{-1} \| \eta \|_s^{{\mathrm{Lip}(\g)}}
+ \varepsilon^{2 b - 1} ( \| {\mathfrak I}_0\|_{s + \sigma}^{{\mathrm{Lip}(\g)}} + \gamma^{-1} \| {\mathfrak I}_0\|_{s_0 + \sigma}^{{\mathrm{Lip}(\g)}} \| Z \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} )
\| \eta \|_{s_0}^{{\mathrm{Lip}(\g)}} \\ \label{stime coefficienti K 11 alta trasposto}
\| K_{11}^T w \|_s^{{\mathrm{Lip}(\g)}}
& \leq_s \varepsilon^{5} \gamma^{-1} \| w \|_{s + 2}^{{\mathrm{Lip}(\g)}}
+ \varepsilon^{2 b - 1} ( \| {\mathfrak I}_0\|_{s + \sigma}^{{\mathrm{Lip}(\g)}} + \gamma^{-1} \| {\mathfrak I}_0\|_{s_0 + \sigma}^{{\mathrm{Lip}(\g)}} \| Z \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} )
\| w \|_{s_0 + 2}^{{\mathrm{Lip}(\g)}} \, . \end{align} In particular
$ \| K_{20} + 3 \varepsilon^{2 b} I \|_{s_0 }^{{\mathrm{Lip}(\g)}} \leq C \varepsilon^{6} \gamma^{-1} $, and $$
\| K_{11} \eta \|_{s_0 }^{{\mathrm{Lip}(\g)}} \leq C \varepsilon^{5} \gamma^{-1}\| \eta \|_{s_0}^{{\mathrm{Lip}(\g)}} ,
\quad \| K_{11}^T w \|_{s_0 }^{{\mathrm{Lip}(\g)}} \leq C \varepsilon^{5} \gamma^{-1} \| w \|_{s_0}^{{\mathrm{Lip}(\g)}} \, . $$ \end{lemma}
\begin{proof}
To shorten the notation, in this proof we write $\| \ \|_s$ for $\| \ \|_s^{{\mathrm{Lip}(\g)}}$. We have $$ K_{2 0}(\varphi) = [\partial_\varphi \theta_0(\varphi)]^{-1} \partial_{yy} H_\varepsilon(i_\delta(\varphi)) [\partial_\varphi \theta_0(\varphi)]^{-T} = [\partial_\varphi \theta_0(\varphi)]^{-1} \partial_{yy} P(i_\delta(\varphi)) [\partial_\varphi \theta_0(\varphi)]^{-T}. $$ Then \eqref{D yy P}, \eqref{ansatz 0}, \eqref{stima y - y delta} imply \eqref{stime coefficienti K 20 11 bassa}. Now (see also \cite{BB13}) \begin{align*} K_{11}(\varphi) & = \partial_{y} \nabla_z H_\varepsilon (i_\delta(\varphi)) [\partial_\varphi \theta_0 (\varphi)]^{-T} - \partial_x^{-1} (\partial_\theta {\tilde z}_0) (\theta_0(\varphi)) (\partial_{yy} H_\varepsilon) (i_\delta(\varphi)) [\partial_\varphi \theta_0 (\varphi)]^{-T} \nonumber\\ & \stackrel{\eqref{Hamiltoniana Heps KdV}} =
\partial_{y} \nabla_z P(i_\delta(\varphi)) [\partial_\varphi \theta_0 (\varphi)]^{-T} - \partial_x^{-1} (\partial_\theta {\tilde z}_0) (\theta_0(\varphi)) (\partial_{yy} P) (i_\delta(\varphi)) [\partial_\varphi \theta_0 (\varphi)]^{-T}\,, \end{align*} therefore, using \eqref{stime linearizzato campo hamiltoniano}, \eqref{D yy P}, \eqref{ansatz 0}, we deduce \eqref{stime coefficienti K 11 alta}. The bound \eqref{stime coefficienti K 11 alta trasposto} for $K_{11}^T$ follows by \eqref{K11 tras}. \end{proof}
Under the linear change of variables \begin{equation}\label{DGdelta} D G_\delta(\varphi, 0, 0) \begin{pmatrix} \widehat \psi \, \\ \widehat \eta \\ \widehat w \end{pmatrix} := \begin{pmatrix} \partial_\psi \theta_0(\varphi) & 0 & 0 \\ \partial_\psi y_\delta(\varphi) & [\partial_\psi \theta_0(\varphi)]^{-T} & - [(\partial_\theta \tilde{z}_0)(\theta_0(\varphi))]^T \partial_x^{-1} \\ \partial_\psi z_0(\varphi) & 0 & I \end{pmatrix} \begin{pmatrix} \widehat \psi \, \\ \widehat \eta \\ \widehat w \end{pmatrix} \end{equation} the linearized operator $d_{i, \zeta}{\cal F}(i_\delta )$ transforms (approximately, see \eqref{verona 2}) into the operator obtained linearizing \eqref{sistema dopo trasformazione inverso approssimato} at $(\psi, \eta , w, \zeta ) = (\varphi, 0, 0, \zeta_0 )$ (with $ \partial_t \rightsquigarrow {\cal D}_\omega $), namely \begin{equation}\label{lin idelta} \hspace{-5pt} \begin{pmatrix} {\cal D}_\omega \widehat \psi - \partial_\psi K_{10}(\varphi)[\widehat \psi \, ] - K_{2 0}(\varphi)\widehat \eta - K_{11}^T (\varphi) \widehat w \\
{\cal D}_\omega \widehat \eta + [\partial_\psi \theta_0(\varphi)]^T \widehat \zeta + \partial_\psi [\partial_\psi \theta_0(\varphi)]^T [ \widehat \psi, \zeta_0] + \partial_{\psi\psi} K_{00}(\varphi)[\widehat \psi] + [\partial_\psi K_{10}(\varphi)]^T \widehat \eta + [\partial_\psi K_{01}(\varphi)]^T \widehat w \\ {\cal D}_\omega \widehat w - \partial_x \{ \partial_\psi K_{01}(\varphi)[\widehat \psi] + K_{11}(\varphi) \widehat \eta + K_{02}(\varphi) \widehat w \} \end{pmatrix} \! . \hspace{-5pt} \end{equation} We now estimate the induced composition operator. \begin{lemma} \label{lemma:DG} Assume \eqref{ansatz 0} and let $ \widehat \imath := (\widehat \psi, \widehat \eta, \widehat w)$. Then \begin{gather} \label{DG delta}
\|DG_\delta(\varphi,0,0) [\widehat \imath] \|_s + \|DG_\delta(\varphi,0,0)^{-1} [\widehat \imath] \|_s
\leq_s \| \widehat \imath \|_{s} + ( \| {\mathfrak I}_0 \|_{s + \sigma} +
\gamma^{-1} \| {\mathfrak I}_0 \|_{s_0 + \sigma} \|Z \|_{s + \sigma} ) \| \widehat \imath \|_{s_0}\,, \\
\| D^2 G_\delta(\varphi,0,0)[\widehat \imath_1, \widehat \imath_2] \|_s
\leq_s \| \widehat \imath_1\|_s \| \widehat \imath_2 \|_{s_0}
+ \| \widehat \imath_1\|_{s_0} \| \widehat \imath_2 \|_{s}
+ ( \| {\mathfrak I}_0 \|_{s + \sigma} + \gamma^{-1} \| {\mathfrak I}_0 \|_{s_0 + \sigma} \| Z\|_{s + \sigma} ) \|\widehat \imath_1 \|_{s_0} \| \widehat \imath_2\|_{s_0} \notag \end{gather} for some $\sigma := \sigma(\nu,\tau )$.
Moreover the same estimates hold if we replace the norm $\| \ \|_s$ with $\| \ \|_s^{{\mathrm{Lip}(\g)}}$. \end{lemma}
\begin{proof} The estimate \eqref{DG delta} for $D G_\delta(\varphi,0,0)$ follows by \eqref{DGdelta} and \eqref{stima y - y delta}. By \eqref{ansatz 0},
$ \| (DG_\delta(\varphi,0,0) - I) \widehat \imath \|_{s_0}
\leq $ $ C \varepsilon^{6 - 2b} \gamma^{-1} \| \widehat \imath \|_{s_0}
\leq \| \widehat \imath \|_{s_0} / 2 $. Therefore $DG_\delta(\varphi,0,0)$ is invertible and, by Neumann series, the inverse satisfies \eqref{DG delta}. The bound for $D^2 G_\delta$ follows by differentiating $DG_\delta$. \end{proof}
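For clarity, the Neumann series inversion used in the previous proof can be spelled out: since $ \| (DG_\delta(\varphi,0,0) - I) \widehat \imath \|_{s_0} \leq \| \widehat \imath \|_{s_0} / 2 $, the series
\[
DG_\delta(\varphi,0,0)^{-1} = \sum_{k \geq 0} \big( I - DG_\delta(\varphi,0,0) \big)^k
\]
converges at the level of the $ s_0 $-norm, and the tame estimate \eqref{DG delta} for the inverse follows from the tame bounds for each summand.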
In order to construct an approximate inverse of \eqref{lin idelta} it is sufficient to solve the equation \begin{equation}\label{operatore inverso approssimato} {\mathbb D} [\widehat \psi, \widehat \eta, \widehat w, \widehat \zeta ] :=
\begin{pmatrix} {\cal D}_\omega \widehat \psi - K_{20}(\varphi) \widehat \eta - K_{11}^T(\varphi) \widehat w\\ {\cal D}_\omega \widehat \eta + [\partial_\psi \theta_0(\varphi)]^T \widehat \zeta \\ {\cal D}_\omega \widehat w - \partial_x K_{11}(\varphi)\widehat \eta - \partial_x K_{0 2}(\varphi) \widehat w \end{pmatrix} = \begin{pmatrix} g_1 \\ g_2 \\ g_3 \end{pmatrix} \end{equation} which is obtained by neglecting in \eqref{lin idelta} the terms $ \partial_\psi K_{10} $, $ \partial_{\psi \psi} K_{00} $, $ \partial_\psi K_{00} $, $ \partial_\psi K_{01} $ and $ \partial_\psi [\partial_\psi \theta_0(\varphi)]^T [ \cdot , \zeta_0] $ (which vanish at an exact solution by Lemmata \ref{coefficienti nuovi} and \ref{zeta = 0}).
First we solve the second equation in \eqref{operatore inverso approssimato}, namely $ {\cal D}_\omega \widehat \eta = g_2 - [\partial_\psi \theta_0(\varphi)]^T \widehat \zeta $. We choose $ \widehat \zeta $ so that the $\varphi$-average of the right hand side is zero, namely \begin{equation}\label{fisso valore di widehat zeta} \widehat \zeta = \langle g_2 \rangle \end{equation} (we denote $ \langle g \rangle := (2 \pi)^{- \nu} \int_{\mathbb T^\nu} g (\varphi) d \varphi $). Note that the $\varphi$-averaged matrix $ \langle [\partial_\psi \theta_0 ]^T \rangle = \langle I + [\partial_\psi \Theta_0]^T \rangle = I $ because $\theta_0(\varphi) = \varphi + \Theta_0(\varphi)$ and $\Theta_0(\varphi)$ is a periodic function. Therefore \begin{equation}\label{soleta} \widehat \eta := {\cal D}_\omega^{-1} \big( g_2 - [\partial_\psi \theta_0(\varphi) ]^T \langle g_2 \rangle \big) + \langle \widehat \eta \rangle \, , \quad \langle \widehat \eta \rangle \in \mathbb R^\nu \, , \end{equation} where the average $\langle \widehat \eta \rangle$ will be fixed below. Then we consider the third equation \begin{equation}\label{cal L omega} {\cal L}_\omega \widehat w = g_3 + \partial_x K_{11}(\varphi) \widehat \eta\,, \ \quad {\cal L}_\omega := \omega \cdot \partial_\varphi - \partial_x K_{0 2}(\varphi) \, . \end{equation}
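With the choice \eqref{fisso valore di widehat zeta} the right hand side of the second equation indeed has zero $\varphi$-average: since $\widehat \zeta$ is constant,
\[
\big\langle g_2 - [\partial_\psi \theta_0]^T \widehat \zeta \big\rangle
= \langle g_2 \rangle - \big\langle [\partial_\psi \theta_0]^T \big\rangle \langle g_2 \rangle
= \langle g_2 \rangle - \langle g_2 \rangle = 0 \, ,
\]
so that $ {\cal D}_\omega^{-1} $ may be applied in \eqref{soleta}.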
\begin{itemize} \item {\sc Inversion assumption.} {\it There exists a set $ \Omega_\infty \subset \Omega_o$ such that for all $ \omega \in \Omega_\infty $, for every function $ g \in H^{s+\mu}_{S^\bot} (\mathbb T^{\nu+1}) $ there exists a solution $ h := {\cal L}_\omega^{- 1} g \in H^{s}_{S^\bot} (\mathbb T^{\nu+1}) $ of the linear equation $ {\cal L}_\omega h = g $ which satisfies} \begin{equation}\label{tame inverse}
\| {\cal L}_\omega^{- 1} g \|_s^{{\mathrm{Lip}(\g)}} \leq C(s) \gamma^{-1}
\big( \| g \|_{s + \mu}^{{\mathrm{Lip}(\g)}} + \varepsilon \gamma^{-1}
\big\{ \| {\mathfrak I}_0 \|_{s + \mu}^{{\mathrm{Lip}(\g)}} + \gamma^{-1} \| {\mathfrak I}_0 \|_{s_0 + \mu}^{{\mathrm{Lip}(\g)}} \| Z \|_{s + \mu}^{{\mathrm{Lip}(\g)}} \big\} \|g \|_{s_0}^{{\mathrm{Lip}(\g)}} \big) \end{equation} \emph{for some $ \mu := \mu (\tau, \nu) > 0 $}. \end{itemize}
\begin{remark} The term $ \varepsilon \gamma^{-1}
\{ \| {\mathfrak I}_0 \|_{s + \mu}^{{\mathrm{Lip}(\g)}} + \gamma^{-1} \| {\mathfrak I}_0 \|_{s_0 + \mu}^{{\mathrm{Lip}(\g)}} \| Z \|_{s + \mu}^{{\mathrm{Lip}(\g)}} \} $ arises because the remainder $ R_6 $ in section \ref{step5} contains the term
$ \varepsilon ( \| \Theta_0 \|_{s + \mu}^{{\mathrm{Lip}(\g)}} + \| y_\delta \|_{s + \mu}^{{\mathrm{Lip}(\g)}}) $
$\leq \varepsilon \| {\mathfrak I}_\delta \|_{s + \mu}^{{\mathrm{Lip}(\g)}} $, see Lemma \ref{lemma L6}. \end{remark}
By the above assumption there exists a solution \begin{equation}\label{normalw} \widehat w := {\cal L}_\omega^{-1} [ g_3 + \partial_x K_{11}(\varphi) \widehat \eta \, ] \end{equation} of \eqref{cal L omega}. Finally, we solve the first equation in \eqref{operatore inverso approssimato}, which, substituting \eqref{soleta}, \eqref{normalw}, becomes \begin{equation}\label{equazione psi hat} {\cal D}_\omega \widehat \psi = g_1 + M_1(\varphi) \langle \widehat \eta \rangle + M_2(\varphi) g_2 + M_3(\varphi) g_3 - M_2(\varphi)[\partial_\psi \theta_0]^T \langle g_2 \rangle \,, \end{equation} where \begin{equation} \label{cal M2} M_1(\varphi) := K_{2 0}(\varphi) + K_{11}^T(\varphi) {\cal L}_\omega^{-1} \partial_x K_{11}(\varphi)\,, \quad M_2(\varphi) := M_1 (\varphi) {\cal D}_\omega^{-1} \, , \quad M_3(\varphi) := K_{11}^T (\varphi) {\cal L}_\omega^{-1} \, . \end{equation} In order to solve the equation \eqref{equazione psi hat} we have to choose $\langle \widehat \eta \rangle$ such that the right hand side in \eqref{equazione psi hat} has zero average. By Lemma \ref{lemma:Kapponi vari} and \eqref{ansatz 0}, the $\varphi$-averaged matrix $ \langle M_1 \rangle =- 3 \varepsilon^{2 b} I + O( \varepsilon^{10} \gamma^{-3}) $. Therefore, for $ \varepsilon $ small, $\langle M_1 \rangle$ is invertible and $\langle M_1 \rangle^{-1} = O(\varepsilon^{-2 b}) = O(\gamma^{- 1})$ (recall \eqref{link gamma b}). Thus we define \begin{equation}\label{sol alpha} \langle \widehat \eta \rangle := - \langle M_1 \rangle^{-1} [ \langle g_1 \rangle + \langle M_2 g_2 \rangle + \langle M_3 g_3 \rangle - \langle M_2 [\partial_\psi \theta_0]^T \rangle \langle g_2 \rangle ]. 
\end{equation} With this choice of $\langle \widehat \eta \rangle$ the equation \eqref{equazione psi hat} has the solution \begin{equation}\label{sol psi} \widehat \psi := {\cal D}_\omega^{-1} [ g_1 + M_1(\varphi) \langle \widehat \eta \rangle + M_2(\varphi) g_2 + M_3(\varphi) g_3 - M_2(\varphi)[\partial_\psi \theta_0]^T \langle g_2 \rangle ]. \end{equation} In conclusion, we have constructed a solution $(\widehat \psi, \widehat \eta, \widehat w, \widehat \zeta)$ of the linear system \eqref{operatore inverso approssimato}.
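Explicitly, \eqref{sol alpha} is the unique solution of the zero-average condition for the right hand side of \eqref{equazione psi hat}, namely
\[
\langle g_1 \rangle + \langle M_1 \rangle \langle \widehat \eta \rangle + \langle M_2 g_2 \rangle + \langle M_3 g_3 \rangle - \langle M_2 [\partial_\psi \theta_0]^T \rangle \langle g_2 \rangle = 0 \, ,
\]
which is solvable because $ \langle M_1 \rangle $ is invertible for $ \varepsilon $ small.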
\begin{proposition}\label{prop: ai} Assume \eqref{ansatz 0} and \eqref{tame inverse}. Then, $\forall \omega \in \Omega_\infty $, $ \forall g := (g_1, g_2, g_3) $,
the system \eqref{operatore inverso approssimato} has a solution $ {\mathbb D}^{-1} g := (\widehat \psi, \widehat \eta, \widehat w, \widehat \zeta ) $ where $(\widehat \psi, \widehat \eta, \widehat w, \widehat \zeta)$ are defined in \eqref{sol psi}, \eqref{soleta}, \eqref{sol alpha}, \eqref{normalw}, \eqref{fisso valore di widehat zeta} satisfying \begin{equation} \label{stima T 0 b}
\| {\mathbb D}^{-1} g \|_s^{{\rm Lip}(\gamma)}
\leq_s \gamma^{-1} \big( \| g \|_{s + \mu}^{{\rm Lip}(\gamma)}
+ \varepsilon \gamma^{-1} \big\{ \| {\mathfrak I}_0 \|_{s + \mu}^{{\rm Lip}(\gamma)}
+ \gamma^{-1} \| {\mathfrak I}_0 \|_{s_0 + \mu}^{{\rm Lip}(\gamma)} \|{\cal F}(i_0, \zeta_0) \|^{{\rm Lip}(\gamma)}_{s + \mu} \big\} \| g \|_{s_0 + \mu}^{{\rm Lip}(\gamma)} \big). \end{equation}
\end{proposition}
\begin{proof}
Recalling \eqref{cal M2}, by Lemma \ref{lemma:Kapponi vari}, \eqref{tame inverse}, \eqref{ansatz 0} we get $ \| M_2 h \|_{s_0} + \| M_3 h \|_{s_0} \leq C \| h \|_{s_0 + \sigma} $. Then, by \eqref{sol alpha} and $\langle M_1 \rangle^{-1} = O(\varepsilon^{-2 b}) = O(\gamma^{-1}) $, we deduce
$ |\langle \widehat \eta\rangle|^{{\mathrm{Lip}(\g)}} \leq C\gamma^{-1} \| g \|_{s_0+ \sigma}^{{\mathrm{Lip}(\g)}} $ and \eqref{soleta}, \eqref{Dom inverso} imply
$ \| \widehat \eta \|_s^{{\mathrm{Lip}(\g)}} \leq_s \gamma^{-1} \big( \| g \|_{s + \sigma}^{\mathrm{Lip}(\g)} + \| {\mathfrak{I}}_0 \|_{s + \sigma } \| g \|_{s_0}^{\mathrm{Lip}(\g)} \big)$.
The bound \eqref{stima T 0 b} is sharp for $ \widehat w $ because $ {\cal L}_\omega^{-1} g_3 $ in \eqref{normalw} is estimated using \eqref{tame inverse}. Finally $ \widehat \psi $ satisfies \eqref{stima T 0 b} using \eqref{sol psi}, \eqref{cal M2}, \eqref{tame inverse}, \eqref{Dom inverso} and Lemma \ref{lemma:Kapponi vari}. \end{proof} Finally we prove that the operator \begin{equation}\label{definizione T} {\bf T}_0 := (D { \widetilde G}_\delta)(\varphi,0,0) \circ {\mathbb D}^{-1} \circ (D G_\delta) (\varphi,0,0)^{-1} \end{equation} is an approximate right inverse for $d_{i,\zeta} {\cal F}(i_0 )$ where
$ \widetilde{G}_\delta (\psi, \eta, w, \zeta) := $ $ \big( G_\delta (\psi, \eta, w), \zeta \big) $
is the identity on the $ \zeta $-component.
We denote the norm $ \| (\psi, \eta, w, \zeta) \|_s^{\mathrm{Lip}(\g)} := $ $ \max \{ \| (\psi, \eta, w) \|_s^{\mathrm{Lip}(\g)}, $ $ | \zeta |^{\mathrm{Lip}(\g)} \} $.
\begin{theorem} {\bf (Approximate inverse)} \label{thm:stima inverso approssimato} Assume \eqref{ansatz 0} and the inversion assumption \eqref{tame inverse}. Then there exists $ \mu := \mu (\tau, \nu) > 0 $ such that, for all $ \omega \in \Omega_\infty $, for all $ g := (g_1, g_2, g_3) $, the operator $ {\bf T}_0 $ defined in \eqref{definizione T} satisfies \begin{equation}\label{stima inverso approssimato 1}
\| {\bf T}_0 g \|_{s}^{{\rm Lip}(\gamma)}
\leq_s \gamma^{-1} \big(\| g \|_{s + \mu}^{{\rm Lip}(\gamma)}
+ \varepsilon \gamma^{-1} \big\{ \| {\mathfrak I}_0 \|_{s + \mu}^{{\rm Lip}(\gamma)}
+\gamma^{-1} \| {\mathfrak{I}}_0 \|_{s_0 + \mu}^{\mathrm{Lip}(\g)}
\|{\cal F}(i_0, \zeta_0) \|_{s + \mu}^{{\rm Lip}(\gamma)} \big\}
\| g \|_{s_0 + \mu}^{{\rm Lip}(\gamma)} \big). \end{equation}
It is an approximate inverse of $d_{i, \zeta} {\cal F}(i_0 )$, namely \begin{align}
& \| ( d_{i, \zeta} {\cal F}(i_0) \circ {\bf T}_0 - I ) g \|_s^{{\rm Lip}(\gamma)} \label{stima inverso approssimato 2} \\
& \leq_s \gamma^{-1} \Big( \| {\cal F}(i_0, \zeta_0) \|_{s_0 + \mu}^{\mathrm{Lip}(\g)} \| g \|_{s + \mu}^{\mathrm{Lip}(\g)}
+ \big\{ \| {\cal F}(i_0, \zeta_0) \|_{s + \mu}^{\mathrm{Lip}(\g)}
+ \varepsilon \gamma^{-1} \| {\cal F}(i_0, \zeta_0) \|_{s_0 + \mu}^{\mathrm{Lip}(\g)} \| {\mathfrak I}_0 \|_{s + \mu}^{\mathrm{Lip}(\g)} \big\} \| g \|_{s_0 + \mu}^{\mathrm{Lip}(\g)} \Big). \nonumber \end{align} \end{theorem}
\begin{proof}
In this proof we write $\| \ \|_s$ instead of $\| \ \|_s^{{\mathrm{Lip}(\g)}}$. The bound \eqref{stima inverso approssimato 1} follows from \eqref{definizione T}, \eqref{stima T 0 b}, \eqref{DG delta}. By \eqref{operatorF}, since $ X_\mathcal{N} $ does not depend on $ y $, and $ i_\delta $ differs from $ i_0 $ only in the $ y $ component, we have \begin{align} \label{verona 0} d_{i, \zeta} {\cal F}(i_0 )[\, \widehat \imath, \widehat \zeta \, ] - d_{i, \zeta} {\cal F}(i_\delta ) [\, \widehat \imath, \widehat \zeta \, ] & = d_i X_P (i_\delta) [\, \widehat \imath \, ] - d_i X_P (i_0) [\, \widehat \imath \, ] \\ & = \int_0^1 \partial_y d_i X_P (\theta_0, y_0 + s (y_\delta - y_0), z_0) [y_\delta - y_0, \widehat \imath \, ] ds =: {\cal E}_0 [\, \widehat \imath, \widehat \zeta \, ] \,. \nonumber \end{align} By \eqref{D yy P}, \eqref{stima y - y delta}, \eqref{ansatz 0}, we estimate \begin{equation}\label{stima parte trascurata 1}
\| {\cal E}_0 [\, \widehat \imath, \widehat \zeta \, ] \|_s \leq_s
\| Z \|_{s_0 + \sigma} \| \widehat \imath \|_{s + \sigma} +
\| Z \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma} + \varepsilon^{2b-1}\gamma^{-1}
\| Z \|_{s_0 + \sigma} \| \widehat \imath \|_{s_0 + \sigma} \| {\mathfrak{I}}_0 \|_{s+\sigma} \end{equation} where $Z := \mathcal{F}(i_0, \zeta_0)$ (recall \eqref{def Zetone}).
Note that $\mathcal{E}_0[\widehat \imath, \widehat \zeta]$ is, in fact, independent of $\widehat \zeta$. Denote the set of variables $ (\psi, \eta, w) =: {\mathtt u} $. Under the transformation $G_\delta $, the nonlinear operator ${\cal F}$ in \eqref{operatorF} transforms into \begin{equation} \label{trasfo imp} {\cal F}(G_\delta( {\mathtt u} (\varphi) ), \zeta ) = D G_\delta( {\mathtt u} (\varphi) ) \big( {\cal D}_\omega {\mathtt u} (\varphi) - X_K ( {\mathtt u} (\varphi), \zeta) \big) \, , \quad K = H_{\varepsilon, \zeta} \circ G_\delta \, , \end{equation} see \eqref{sistema dopo trasformazione inverso approssimato}. Differentiating \eqref{trasfo imp} at the trivial torus $ {\mathtt u}_\delta (\varphi) = G_\delta^{-1}(i_\delta) (\varphi) = (\varphi, 0 , 0 ) $, at $ \zeta = \zeta_0 $, in the directions $(\widehat {\mathtt u}, \widehat \zeta) = (D G_\delta ({\mathtt u}_\delta)^{-1} [\, \widehat \imath \, ], \widehat \zeta) = D {\widetilde G}_\delta ({\mathtt u}_\delta)^{-1} [\, \widehat \imath , \widehat \zeta \, ] $, we get \begin{align} \label{verona 2} d_{i , \zeta} {\cal F}(i_\delta ) [\, \widehat \imath, \widehat \zeta \, ] = & D G_\delta( {\mathtt u}_\delta) \big( {\cal D}_\omega \widehat {\mathtt u} - d_{\mathtt u, \zeta} X_K( {\mathtt u}_\delta, \zeta_0) [\widehat {\mathtt u}, \widehat \zeta \, ] \big)
+ {\cal E}_1 [ \, \widehat \imath , \widehat \zeta \, ]\,, \\ \label{E1} {\cal E}_1 [\, \widehat \imath , \widehat \zeta \, ] := & D^2 G_\delta( {\mathtt u}_\delta) \big[ D G_\delta( {\mathtt u}_\delta)^{-1} {\cal F}(i_\delta, \zeta_0), \, D G_\delta({\mathtt u}_\delta)^{-1} [ \, \widehat \imath \, ] \big] \,, \end{align} where $ d_{\mathtt u, \zeta} X_K( {\mathtt u}_\delta, \zeta_0) $ is expanded in \eqref{lin idelta}. In fact, ${\cal E}_1$ is independent of $\widehat \zeta$. We split \[ {\cal D}_\omega \widehat {\mathtt u} - d_{\mathtt u, \zeta} X_K( {\mathtt u}_\delta, \zeta_0) [\widehat {\mathtt u}, \widehat \zeta] = \mathbb{D} [\widehat {\mathtt u}, \widehat \zeta \, ] + R_Z [ \widehat {\mathtt u}, \widehat \zeta \, ], \]
where $ {\mathbb D} [\widehat {\mathtt u}, \widehat \zeta] $ is defined in \eqref{operatore inverso approssimato} and
\begin{equation}\label{R0}
R_Z [ \widehat \psi, \widehat \eta, \widehat w, \widehat \zeta]
:= \begin{pmatrix}
- \partial_\psi K_{10}(\varphi) [\widehat \psi ] \\ \partial_\psi [\partial_\psi \theta_0(\varphi)]^T [ \widehat \psi, \zeta_0] + \partial_{\psi \psi} K_{00} (\varphi) [ \widehat \psi ] +
[\partial_\psi K_{10}(\varphi)]^T \widehat \eta +
[\partial_\psi K_{01}(\varphi)]^T \widehat w \\
- \partial_x \{ \partial_{\psi} K_{01}(\varphi)[ \widehat \psi ] \}
\end{pmatrix} \end{equation} ($R_Z$ is independent of $\widehat \zeta$). By \eqref{verona 0} and \eqref{verona 2}, \begin{equation} \label{E2} d_{i, \zeta} {\cal F}(i_0 ) = D G_\delta({\mathtt u}_\delta) \circ {\mathbb D} \circ D {\widetilde G}_\delta ({\mathtt u}_\delta)^{-1} + {\cal E}_0 + {\cal E}_1 + \mathcal{E}_2 \,, \quad \mathcal{E}_2 := D G_\delta( {\mathtt u}_\delta) \circ R_Z \circ D {\widetilde G}_\delta ({\mathtt u}_\delta)^{-1} \, . \end{equation} By Lemmata \ref{coefficienti nuovi}, \ref{lemma:DG}, \ref{zeta = 0}, and \eqref{stima toro modificato}, \eqref{ansatz 0}, the terms $\mathcal{E}_1, \mathcal{E}_2 $ (see \eqref{E1}, \eqref{E2}, \eqref{R0}) satisfy the same bound \eqref{stima parte trascurata 1} as $\mathcal{E}_0$ (in fact even better).
Thus the sum $\mathcal{E} := \mathcal{E}_0 + \mathcal{E}_1 + \mathcal{E}_2$ satisfies \eqref{stima parte trascurata 1}. Applying $ {\bf T}_0 $ defined in \eqref{definizione T} to the right in \eqref{E2}, since $ {\mathbb D} \circ {\mathbb D}^{-1} = I $ (see Proposition \ref{prop: ai}), we get $d_{i, \zeta} {\cal F}(i_0 ) \circ {\bf T}_0 - I = \mathcal{E} \circ {\bf T}_0$. Then \eqref{stima inverso approssimato 2} follows from \eqref{stima inverso approssimato 1} and the bound \eqref{stima parte trascurata 1} for $\mathcal{E}$. \end{proof}
\section{The linearized operator in the normal directions}\label{linearizzato siti normali}
The goal of this section is to derive an explicit expression for the linearized operator $\mathcal{L}_\omega$ defined in \eqref{cal L omega}, see Proposition \ref{prop:lin}. To this aim, we compute $ \frac12 ( K_{02}(\psi) w, w )_{L^2(\mathbb T)} $, $ w \in H_S^\bot$, which collects all the components of $(H_\varepsilon \circ G_\delta)(\psi, 0, w)$ that are quadratic in $w$, see \eqref{KHG}.
We first prove some preliminary lemmata.
\begin{lemma}\label{lemma astratto potente} Let $ H $ be a Hamiltonian of class $C^2 ( H^1_0(\mathbb T_x), \mathbb R )$ and consider a map $ \Phi(u) := u + \Psi(u) $ satisfying $\Psi (u) = \Pi_E \Psi(\Pi_E u)$, for all $ u $, where $E$ is a finite dimensional subspace as in \eqref{def E finito}. Then \begin{equation}\label{lint2} \partial_u \big[\nabla ( H \circ \Phi)\big] (u) [h] = (\partial_u \nabla H )(\Phi(u)) [h] + {\cal R}(u)[h]\,, \end{equation} where $ {\cal R}(u) $ has the ``finite dimensional" form \begin{equation}\label{forma buona resto}
{\cal R}(u)[h] = {\mathop\sum}_{|j| \leq C} \big( h , g_j(u) \big)_{L^2(\mathbb T)} \chi_j(u) \end{equation} with $ \chi_j (u) = e^{{\mathrm i} j x} $ or $ g_j(u) = e^{{\mathrm i} j x} $. The remainder $ {\cal R} (u) = {\cal R}_0 (u) + {\cal R}_1 (u) + {\cal R}_2 (u) $ with \begin{align}\label{resti012} {\cal R}_0 (u) & := (\partial_u \nabla H)(\Phi(u)) \partial_u \Psi (u), \qquad {\cal R}_1 (u) := [\partial_{u }\{ \Psi'(u)^T\}] [ \cdot , \nabla H(\Phi(u)) ], \nonumber \\ \, {\cal R}_2 (u) & := [\partial_u \Psi (u)]^T (\partial_u \nabla H)(\Phi(u)) \partial_u \Phi(u). \end{align} \end{lemma}
\begin{proof} By a direct calculation, \begin{equation}\label{nabla composto} \nabla (H \circ \Phi)(u) = [\Phi'(u)]^T \nabla H(\Phi(u)) = \nabla H(\Phi(u)) + [\Psi'(u)]^T \nabla H(\Phi(u)) \end{equation} where $ \Phi' (u) := ( \partial_u \Phi) (u) $ and $ [ \ ]^T $ denotes the transpose with respect to the $ L^2 $ scalar product. Differentiating \eqref{nabla composto}, we get \eqref{lint2} and \eqref{resti012}.
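In detail, differentiating \eqref{nabla composto} in the direction $ h $ gives, by the chain and product rules,
\begin{align*}
\partial_u \big[ \nabla (H \circ \Phi) \big](u)[h]
& = (\partial_u \nabla H)(\Phi(u)) \big[ h + \partial_u \Psi(u)[h] \big] \\
& \quad + [\partial_u \{ \Psi'(u)^T \}][h, \nabla H(\Phi(u))]
+ [\Psi'(u)]^T (\partial_u \nabla H)(\Phi(u)) \partial_u \Phi(u)[h] \, ,
\end{align*}
whose first summand is the main term in \eqref{lint2} plus $ {\cal R}_0(u)[h] $, while the last two summands are $ {\cal R}_1(u)[h] $ and $ {\cal R}_2(u)[h] $ respectively.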
Let us show that each $ {\cal R}_m $ has the form \eqref{forma buona resto}. We have \begin{equation}\label{Psiuh} \Psi'(u) = \Pi_E \Psi'(\Pi_E u) \Pi_E \, \, , \quad [\Psi'(u)]^T = \Pi_E [\Psi'( \Pi_E u)]^T \Pi_E \, . \end{equation} Hence, setting $ A := (\partial_u \nabla H)(\Phi(u)) \Pi_E \Psi' ( \Pi_E u) $, we get \[ {\cal R}_0(u)[h] = A [ \Pi_E h ]
= {\mathop\sum}_{|j| \leq C } h_j A ( e^{{\mathrm i} j x} )
= {\mathop\sum}_{|j| \leq C} (h, g_j )_{L^2(\mathbb T)} \chi_j \] with $ g_j := e^{{\mathrm i} j x} $, $ \chi_j := A( e^{{\mathrm i} jx } )$. Similarly, using \eqref{Psiuh}, and setting $ A := [ \Psi' (\Pi_E u) ]^T \Pi_E (\partial_u \nabla H)(\Phi(u)) \Phi'(u) $, we get $$
{\cal R}_2 (u)[h] = \Pi_E [ A h ] = {\mathop\sum}_{|j | \leq C } (A h , e^{{\mathrm i} j x} )_{L^2(\mathbb T)} e^{{\mathrm i} jx}
= {\mathop\sum}_{|j| \leq C} (h, A^T e^{{\mathrm i} j x} )_{L^2(\mathbb T)} e^{{\mathrm i} j x} \,, $$ which has the form \eqref{forma buona resto} with $ g_j := A^T( e^{{\mathrm i} jx } )$ and $ \chi_j := e^{{\mathrm i} j x} $. Differentiating the second equality in \eqref{Psiuh}, we see that $$ {\cal R}_1 (u)[h] = \Pi_E [ A h ] \, , \quad A h := \partial_u \{ \Psi' ( \Pi_E u)^T \} [\Pi_E h, \Pi_E (\nabla H)(\Phi(u)) ] \,, $$ which has the same form of $ {\cal R}_2 $ and so \eqref{forma buona resto}.
\end{proof}
\begin{lemma} \label{sifulo} Let $ H(u ) := \int_\mathbb T f(u) X(u) d x $ where $ X(u) = \Pi_{E} X ( \Pi_E u) $ and $ f(u)(x) := f( u(x)) $ is the composition operator for a function of class $ C^2 $. Then \begin{equation}\label{lint1} (\partial_u \nabla H) (u) [h] = f''(u) X(u) \, h + {\cal R} (u) [h] \end{equation} where $ {\cal R} (u) $ has the form \eqref{forma buona resto} with $ \chi_j (u) = e^{{\mathrm i} j x} $ or $ g_j(u) = e^{{\mathrm i} j x} $. \end{lemma}
\begin{proof} A direct calculation proves that $\nabla H(u) = f'(u) X(u) + X'(u)^T [f(u)]$, and \eqref{lint1} follows with $ {\cal R} (u) [h] = $ $ f'(u) X'(u)[h] + $ $ \partial_u \{ X'(u)^T\} [h, f(u)] + $ $ X'(u)^T [ f'(u) h ]$, which has the form \eqref{forma buona resto}. \end{proof}
We conclude this section with a technical lemma, used from the end of section \ref{step3} onwards, about the decay norms of ``finite dimensional" operators. Note that operators of the form \eqref{forma buona con gli integrali} (which will appear in section \ref{step1}) reduce to those in \eqref{forma buona resto} when the functions $ g_j(\tau) $, $ \chi_j (\tau)$ are independent of $ \tau $.
\begin{lemma}\label{remark : decay forma buona resto} Let $ {\cal R} $ be an operator of the form \begin{equation}\label{forma buona con gli integrali}
{\cal R} h = \sum_{|j| \leq C } \int_0^1 \big(h\,,\,g_j(\tau) \big)_{L^2(\mathbb T)} \chi_j (\tau)\,d \tau\,, \end{equation} where the functions $g_j(\tau),\,\chi_j(\tau) \in H^s$, $\tau \in [0, 1]$ depend in a Lipschitz way on the parameter $\omega$. Then its matrix $ s$-decay norm (see \eqref{matrix decay norm}-\eqref{matrix decay norm Lip}) satisfies $$
| {\cal R} |_s^{\mathrm{Lip}(\g)} \leq_s {\mathop \sum}_{|j| \leq C} {\rm sup}_{\tau \in [0,1]} \big\{ \| \chi_j(\tau) \|_s^{\mathrm{Lip}(\g)} \| g_j(\tau) \|_{s_0}^{\mathrm{Lip}(\g)}
+ \| \chi_j(\tau) \|_{s_0}^{\mathrm{Lip}(\g)} \| g_j(\tau) \|_s^{\mathrm{Lip}(\g)} \big\} \, . $$ \end{lemma}
\begin{proof} For each $\tau \in [0, 1]$, the operator $ h \mapsto (h,g_j(\tau)) \chi_j(\tau) $ is the composition $ \chi_j(\tau) \circ \Pi_0 \circ g_j(\tau) $ of the multiplication operators by $ g_j(\tau)$, $\chi_j(\tau) $ and of the averaging operator $ h \mapsto \Pi_0 h := \int_{\mathbb T} h \, dx $. Hence the lemma follows by the interpolation estimate \eqref{interpm Lip} and \eqref{multiplication Lip}. \end{proof}
\subsection{Composition with the map $G_\delta$} \label{section:appr}
In the sequel we shall use that $ {\mathfrak{I}}_\delta := {\mathfrak{I}}_\delta (\varphi ; \omega) := i_\delta (\varphi; \omega ) - (\varphi,0,0) $ satisfies, by Lemma \ref{toro isotropico modificato} and \eqref{ansatz 0}, \begin{equation}\label{ansatz delta}
\| {\mathfrak I}_\delta \|_{s_0+\mu}^{{\mathrm{Lip}(\g)}} \leq C\varepsilon^{6 - 2b} \gamma^{-1}\, . \end{equation} We now study the Hamiltonian $ K := H_\varepsilon \circ G_\delta = \varepsilon^{-2b} \mathcal{H} \circ A_\varepsilon \circ G_\delta $ defined in \eqref{KHG}, \eqref{def H eps}.
Recalling \eqref{def A eps} and \eqref{trasformazione modificata simplettica} the map $A_\varepsilon \circ G_\delta$ has the form \begin{equation} \label{A eps G delta} A_\varepsilon \circ G_\delta(\psi, \eta, w)
= \varepsilon \sum_{j \in S} \sqrt{\xi_j + \varepsilon^{2(b-1)} |j| [ y_\delta(\psi) + L_1(\psi) \eta + L_2(\psi) w ]_j } \, e^{{\mathrm i} [\theta_0(\psi)]_j} e^{{\mathrm i} jx} + \varepsilon^b (z_0(\psi) + w) \end{equation} where \begin{equation}\label{L1 L2} L_1(\psi) := [\partial_\psi \theta_0(\psi)]^{-T} \, , \quad L_2(\psi) := \big[ (\partial_\theta \tilde{z}_0) (\theta_0(\psi)) \big]^T \partial_x^{-1} \, . \end{equation}
By Taylor's formula, we develop \eqref{A eps G delta} in $w$ at $\eta=0$, $w=0$, and we get $ A_\varepsilon \circ G_\delta(\psi, 0, w) = $ $ T_\delta(\psi) + T_1(\psi) w + T_2(\psi)[w,w] + $ $ T_{\geq 3}(\psi, w) $, where \begin{equation}\label{T0} T_\delta(\psi) := (A_\varepsilon \circ G_\delta)(\psi, 0, 0)
= \varepsilon v_\delta(\psi) + \varepsilon^b z_0(\psi) \, , \ \ v_\delta (\psi):= \sum_{j \in S} \sqrt{\xi_j + \varepsilon^{2(b-1)} |j| [ y_\delta(\psi) ]_j } \, e^{{\mathrm i} [\theta_0(\psi)]_j} e^{{\mathrm i} jx} \end{equation} is the approximate isotropic torus in phase space (it corresponds to $ i_\delta $ in Lemma \ref{toro isotropico modificato}), \begin{align} T_1(\psi) w & = \varepsilon \sum_{j \in S}
\frac{\varepsilon^{2(b-1)} |j| [ L_2(\psi) w ]_j \, e^{{\mathrm i} [\theta_0(\psi)]_j}}
{2 \sqrt{ \xi_j + \varepsilon^{2(b-1)} |j| [ y_\delta(\psi) ]_j }} \, e^{{\mathrm i} jx} + \varepsilon^b w =: \varepsilon^{2b-1} U_1 (\psi) w + \varepsilon^b w \, \label{T1} \\ T_2(\psi)[w,w] & = - \varepsilon \sum_{j \in S} \frac{\varepsilon^{4(b-1)} j^2 [ L_2(\psi) w ]_j^2 \, e^{{\mathrm i} [\theta_0(\psi)]_j}}
{8 \{ \xi_j + \varepsilon^{2(b-1)} |j| [ y_\delta(\psi) ]_j \}^{3/2} } \, e^{{\mathrm i} jx} =: \varepsilon^{4b - 3} U_2(\psi)[w,w] \label{T2} \end{align} and $T_{\geq 3}(\psi, w)$ collects all the terms of order at least cubic in $w$. In the notation of \eqref{def A eps}, the function $v_\delta(\psi) $ in \eqref{T0} is $v_\delta(\psi) = v_\varepsilon( \theta_0(\psi), y_\delta(\psi))$. The terms $U_1$, $U_2$ are $O(1)$ in $\varepsilon$. Moreover, using that $ L_2 (\psi) $ in \eqref{L1 L2} vanishes when $ z_0 = 0 $, they satisfy \begin{equation}\label{extra piccolezza}
\| U_1 w \|_s \leq \| {\mathfrak{I}}_\delta \|_s \| w \|_{s_0} + \| {\mathfrak{I}}_\delta \|_{s_0} \| w \|_s \, , \quad
\| U_2 [w,w] \|_s \leq \| {\mathfrak{I}}_\delta \|_s \| {\mathfrak{I}}_\delta \|_{s_0} \| w \|_{s_0}^2 +
\| {\mathfrak{I}}_\delta \|_{s_0}^2 \| w \|_{s_0} \| w \|_s \end{equation}
and the same estimates hold in the $ \| \ \|_s^{\mathrm{Lip}(\g)} $-norm.
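For the reader's convenience, we note that the terms $ U_1 $, $ U_2 $ in \eqref{T1}, \eqref{T2} come from the elementary Taylor expansion of the square root: setting $ a_j := \xi_j + \varepsilon^{2(b-1)} |j| [ y_\delta(\psi) ]_j $ and $ x_j := \varepsilon^{2(b-1)} |j| [ L_2(\psi) w ]_j $, one has
$$ \sqrt{a_j + x_j} = \sqrt{a_j} + \frac{x_j}{2 \sqrt{a_j}} - \frac{x_j^2}{8 \, a_j^{3/2}} + O(x_j^3) \, , $$
and the terms which are linear, respectively quadratic, in $ x_j $ produce \eqref{T1}, respectively \eqref{T2} (recall that $ |j|^2 = j^2 $).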
By Taylor's formula $ \mathcal{H}(u+h) = \mathcal{H}(u) + ( (\nabla \mathcal{H})(u), h )_{L^2(\mathbb T)} + \frac12 ( (\partial_u \nabla \mathcal{H})(u) [h], h )_{L^2(\mathbb T)} + O(h^3) $. Specializing at $u = T_\delta(\psi)$ and $ h = T_1(\psi) w + T_2(\psi)[w,w] + T_{\geq 3}(\psi,w)$, we obtain that the sum of all the components of $ K = \varepsilon^{-2b} (\mathcal{H} \circ A_\varepsilon \circ G_\delta)(\psi, 0, w) $ that are quadratic in $w$ is $$ \frac12 ( K_{02}w, w )_{L^2(\mathbb T)} = \varepsilon^{-2b} ( (\nabla \mathcal{H})(T_\delta ), T_2 [w,w] )_{L^2(\mathbb T)} + \varepsilon^{-2b} \frac12 ( (\partial_u \nabla \mathcal{H})(T_\delta ) [T_1 w], T_1 w )_{L^2(\mathbb T)} \, . $$ Inserting the expressions \eqref{T1}, \eqref{T2} we get \begin{align} K_{02}(\psi) w & = (\partial_u \nabla \mathcal{H})(T_\delta) [w] + 2 \varepsilon^{b-1} (\partial_u \nabla \mathcal{H})(T_\delta) [U_1 w] + \varepsilon^{2(b-1)} U_1^T (\partial_u \nabla \mathcal{H})(T_\delta) [U_1 w] \nonumber \\ & \quad + 2 \varepsilon^{2b- 3} U_2[w, \cdot]^T (\nabla \mathcal{H})(T_\delta). \label{K02} \end{align}
\begin{lemma}\label{dopo l'approximate inverse} The operator $ K_{02} $ in \eqref{K02} satisfies \begin{equation}\label{piccolezza resti}
( K_{02}(\psi) w, w )_{L^2(\mathbb T)} = ( (\partial_u \nabla \mathcal{H})(T_\delta) [w], w )_{L^2(\mathbb T)} + ( R(\psi) w, w )_{L^2(\mathbb T)} \end{equation} where $R(\psi)w $ has the ``finite dimensional" form \begin{equation}\label{forma buona resto con psi}
R(\psi) w = {\mathop\sum}_{|j| \leq C} \big( w , g_j(\psi) \big)_{L^2(\mathbb T)} \chi_j(\psi) \end{equation} where, for some $\sigma := \sigma (\nu, \tau) > 0$, \begin{align} \label{piccolo FBR}
\| g_j \|_s^{\mathrm{Lip}(\g)} \| \chi_j \|_{s_0}^{\mathrm{Lip}(\g)} + \| g_j \|_{s_0}^{\mathrm{Lip}(\g)} \| \chi_j \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^{b+1} \| {\mathfrak I}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} \\
\| \partial_i g_j [\widehat \imath ]\|_s \| \chi_j \|_{s_0}
+ \| \partial_i g_j [\widehat \imath ]\|_{s_0} \| \chi_j \|_{s} + \| g_j \|_{s_0} \| \partial_i \chi_j [\widehat \imath ] \|_s
+ \| g_j \|_{s} \| \partial_i \chi_j [\widehat \imath ]\|_{s_0}
& \leq_s \varepsilon^{b + 1} \| \widehat \imath \|_{s + \sigma}\label{derivata piccolo FBR} \\
& + \varepsilon^{2b-1} \| {\mathfrak I}_\delta\|_{s + \sigma} \|\widehat \imath \|_{s_0 + \sigma} \,, \nonumber \end{align} and, as usual, $i = (\theta, y, z)$ (see \eqref{embedded torus i}), $\widehat \imath = (\widehat \theta, \widehat y, \widehat z)$. \end{lemma}
\begin{proof} Since $ U_1 = \Pi_S U_1 $ and $ U_2 = \Pi_S U_2 $, the last three terms in \eqref{K02} have all the form \eqref{forma buona resto con psi} (argue as in Lemma \ref{lemma astratto potente}). We now prove that they are also small in size.
The contributions in \eqref{K02} from $ H_2 $ are more conveniently analyzed using the expression $$ \varepsilon^{-2b} H_2 \circ A_\varepsilon \circ G_\delta (\psi, \eta, w) = \mathrm{const} + \sum_{j \in S^+} j^3 \big[ y_\delta (\psi) + L_1(\psi)\eta + L_2 (\psi) w \big]_j + \frac{1}{2} \int_{\mathbb T} ( z_0 (\psi) + w)_x^2 \, dx $$ which follows by \eqref{shape H2}, \eqref{trasformazione modificata simplettica}, \eqref{L1 L2}.
Hence the only contribution of $ H_2 $ to $ (K_{02} w,w) $ is $ \int_{\mathbb T} w_x^2 \, dx $. Now we consider the cubic term $ \mathcal{H}_3 $ in \eqref{H3tilde}. A direct calculation shows that for $ u = v + z $, $ \nabla \mathcal{H}_3 ( u ) = 3 z^2 + 6 \Pi_S^\bot (v z) $, and $ \partial_u \nabla \mathcal{H}_3 ( u ) [U_1 w] = 6 \Pi_S^\bot ( z U_1 w) $ (since $ U_1 w \in H_S $). Therefore \begin{equation}\label{mH 3 T delta} \nabla \mathcal{H}_3(T_\delta) = 3 \varepsilon^{2 b} z_0^2 + 6 \varepsilon^{b + 1} \Pi_S^\bot(v_\delta z_0 ) \,,\quad \partial_u \nabla \mathcal{H}_3 ( T_\delta ) [U_1 w] = 6 \varepsilon^b \Pi_S^\bot ( z_0 \, U_1 w) \, . \end{equation} By \eqref{mH 3 T delta} one has $ ( (\partial_u \nabla \mathcal{H}_3)(T_\delta) [U_1 w ], U_1 w )_{L^2(\mathbb T)} = 0 $, and since also $U_2 = \Pi_S U_2$, \begin{equation}\label{contributi H3} \varepsilon^{b - 1}\partial_u \nabla {\cal H}_3(T_\delta)[U_1 w] + \varepsilon^{2 b - 3} U_2[w, \cdot]^T \nabla {\cal H}_3(T_\delta) = 6 \varepsilon^{2 b - 1} \Pi_S^\bot(z_0 U_1 w) + 3 \varepsilon^{4 b - 3} U_2[w, \cdot]^T z_0^2 \,. \end{equation} These terms have the form \eqref{forma buona resto con psi} and, using \eqref{extra piccolezza}, \eqref{ansatz 0}, they satisfy \eqref{piccolo FBR}.
Finally we consider all the terms which arise from ${\cal H}_{\geq 4} = O(u^4)$. The operators $ \varepsilon^{b - 1} \partial_u \nabla {\cal H}_{\geq 4}(T_\delta) U_1 $, $ \varepsilon^{2(b - 1)} U_1^T (\partial_u \nabla {\cal H}_{\geq 4})(T_\delta) U_1$, $ \varepsilon^{2 b - 3} U_2^T \nabla {\cal H}_{\geq 4}(T_\delta) $
have the form \eqref{forma buona resto con psi} and, using $ \| T_\delta \|_s^{\mathrm{Lip}(\g)} \leq \varepsilon (1 + \| {\mathfrak{I}}_\delta \|_s^{\mathrm{Lip}(\g)}) $, \eqref{extra piccolezza}, \eqref{ansatz 0}, the bound \eqref{piccolo FBR} holds. Notice that the biggest term is $ \varepsilon^{b - 1} \partial_u \nabla {\cal H}_{\geq 4}(T_\delta) U_1 $.
By \eqref{derivata i delta} and using explicit formulae \eqref{L1 L2}-\eqref{T2} we get estimate \eqref{derivata piccolo FBR}. \end{proof}
The conclusion of this section is that, after the composition with the action-angle variables, the rescaling \eqref{rescaling kdv quadratica}, and the transformation $ G_\delta $, the linearized operator to analyze is $ H_S^\bot \ni w \mapsto (\partial_u \nabla \mathcal{H})(T_\delta) [w] $, up to finite dimensional operators which have the form \eqref{forma buona resto con psi} and size \eqref{piccolo FBR}.
\subsection{The linearized operator in the normal directions}
In view of \eqref{piccolezza resti} we now compute $ ( (\partial_u \nabla \mathcal{H})(T_\delta) [w], w )_{L^2(\mathbb T)} $, $ w \in H_S^\bot $, where $ \mathcal{H} = H \circ \Phi_B $ and $\Phi_B $ is the Birkhoff map of Proposition \ref{prop:weak BNF}. It is convenient to estimate separately the terms in \begin{equation}\label{mH H2H3H5} \mathcal{H} = H \circ \Phi_B = (H_2 + H_3) \circ \Phi_B + H_{\geq 5} \circ \Phi_B \end{equation} where $ H_2, H_3, H_{\geq 5}$ are defined in \eqref{H iniziale KdV}.
We first consider $ H_{\geq 5} \circ \Phi_B $. By \eqref{H iniziale KdV} we get $ \nabla H_{\geq 5}(u) = \pi_0[ (\partial_u f)(x, u, u_x) ] - \partial_x \{ (\partial_{u_x} f)(x, u,u_x) \} $, see \eqref{def pi 0}. Since the Birkhoff transformation $ \Phi_B $ has the form \eqref{finito finito}, Lemma \ref{lemma astratto potente} (at $ u = T_\delta $, see \eqref{T0}) implies that \begin{align} \partial_u \nabla ( H_{\geq 5} \circ \Phi_B ) (T_\delta) [h] & = (\partial_u \nabla H_{\geq 5})(\Phi_B(T_\delta)) [h] + {\cal R}_{H_{\geq 5}}(T_\delta)[h] \notag \\ & = \partial_x (r_1(T_\delta) \partial_x h ) + r_0(T_\delta) h + {\cal R}_{H_{\geq 5}}(T_\delta)[h] \label{der grad struttura separata5} \end{align} where the multiplicative functions $r_0(T_\delta)$, $r_1(T_\delta)$ are \begin{alignat}{2} \label{r0r1 def} r_0 (T_\delta) & := \sigma_0(\Phi_B(T_\delta)), \qquad & \sigma_0(u) & := (\partial_{uu} f)(x, u, u_x) - \partial_x \{ (\partial_{u u_x} f)(x, u, u_x) \}, \\ \label{sigma0sigma1 def} r_1 (T_\delta) & := \sigma_1(\Phi_B(T_\delta)), \quad & \sigma_1(u) & := - (\partial_{u_x u_x} f)(x, u, u_x) , \end{alignat} the remainder $ {\cal R}_{H_{\geq 5}}(u) $ has the form \eqref{forma buona resto} with $\chi_j = e^{{\mathrm i} jx}$ or $g_j = e^{{\mathrm i} jx}$ and, using \eqref{resti012}, it satisfies,
for some $ \sigma := \sigma (\nu, \tau) > 0$, \begin{align*}
\| g_j \|_s^{\mathrm{Lip}(\g)} \| \chi_j \|_{s_0}^{\mathrm{Lip}(\g)} + \| g_j \|_{s_0}^{\mathrm{Lip}(\g)} \| \chi_j \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^4 (1 + \| {\mathfrak{I}}_\delta \|_{s+2}^{\mathrm{Lip}(\g)} ) \\
\| \partial_i g_j [\widehat \imath ]\|_s \| \chi_j \|_{s_0}
+ \| \partial_i g_j [\widehat \imath ]\|_{s_0} \| \chi_j \|_{s}
+ \| g_j \|_{s_0} \| \partial_i \chi_j [\widehat \imath ] \|_s
+ \| g_j \|_{s} \| \partial_i \chi_j [\widehat \imath ]\|_{s_0}
& \leq_s \varepsilon^4 ( \| \widehat \imath \|_{s+\sigma}
+ \| {\mathfrak{I}}_\delta \|_{s+2} \| \widehat \imath \|_{s_0 + 2} ). \end{align*} Now we consider the contributions from $ (H_2 + H_3) \circ \Phi_B $. By Lemma \ref{lemma astratto potente} and the expressions of $ H_2, H_3 $ in \eqref{H iniziale KdV} we deduce that $$ \partial_u \nabla ( H_2 \circ \Phi_B) (T_\delta) [h] = - \partial_{xx} h + {\cal R}_{H_2}(T_\delta)[h] \,, \quad \partial_u \nabla ( H_3 \circ \Phi_B) (T_\delta) [h] = 6 \Phi_B (T_\delta) h + {\cal R}_{H_3}(T_\delta)[h] \,, $$ where $ \Phi_B (T_\delta) $ is a function with zero space average, because $ \Phi_B: H^1_0 (\mathbb T_x) \to H^1_0 (\mathbb T_x)$ (Proposition \ref{prop:weak BNF}) and $ {\cal R}_{H_2}(u) $, $ {\cal R}_{H_3}(u) $ have the form \eqref{forma buona resto}. By \eqref{resti012}, the size of $ ( {\cal R}_{H_2} + {\cal R}_{H_3}) (T_\delta ) $ is $ O( \varepsilon ) $. We expand $$ ( {\cal R}_{H_2} + {\cal R}_{H_3}) (T_\delta ) = \varepsilon {\cal R}_1 + \varepsilon^2 {\cal R}_2 + {\tilde {\cal R}}_{> 2} \,, $$ where $ {\tilde {\cal R}}_{>2} $ has size $o(\varepsilon^2)$, and we get, $ \forall h \in H_S^\bot $, \begin{equation}\label{useful repr} \Pi_S^\bot \partial_u \nabla ((H_2 + H_3) \circ \Phi_B) (T_\delta) [h] = - \partial_{xx} h + \Pi_S^\bot (6 \Phi_B (T_\delta) h ) + \Pi_S^\bot ( \varepsilon {\cal R}_1 + \varepsilon^2 {\cal R}_2 + {\tilde {\cal R}}_{> 2} ) [h] \, . \end{equation} We also develop the function $ \Phi_B (T_\delta) $ in powers of $ \varepsilon $. Expand $\Phi_B (u) = u + \Psi_2 (u) + \Psi_{\geq 3}(u) $, where $ \Psi_2 (u) $ is quadratic, $ \Psi_{\geq 3} (u) = O(u^3)$, and both map $ H_0^1(\mathbb T_x) \to H_0^1(\mathbb T_x) $.
At $ u = T_\delta = \varepsilon v_\delta + \varepsilon^b z_0 $ we get \begin{align} \Phi_B ( T_\delta ) & = T_\delta + \Psi_2 (T_\delta) + \Psi_{\geq 3}(T_\delta) = \varepsilon v_\delta + \varepsilon^2 \Psi_2 ( v_\delta ) + \tilde q \label{funzione moltiplicativa} \end{align} where $ \tilde q := \varepsilon^b z_0 + \Psi_2 ( T_\delta ) - \varepsilon^2 \Psi_2 (v_\delta) + \Psi_{\geq 3} (T_\delta) $ has zero space average and it satisfies $$
\| \tilde q \|_s^{\mathrm{Lip}(\g)}
\leq_s \varepsilon^3 + \varepsilon^b \| {\mathfrak{I}}_\delta \|_s^{\mathrm{Lip}(\g)}\,,\quad
\| \partial_i \tilde q [\widehat \imath ] \|_s
\leq_s \varepsilon^b \big( \| \widehat \imath \|_s + \| {\mathfrak I}_\delta \|_s \| \widehat \imath \|_{s_0} \big)\,. $$
In particular, its low norm $\| \tilde q \|_{s_0}^{\mathrm{Lip}(\g)} \leq_{s_0} \varepsilon^{6-b} \gamma^{-1} = o(\varepsilon^2)$.
We need an exact expression of the terms of order $ \varepsilon $ and $ \varepsilon^2 $ in \eqref{useful repr}. We compare the Hamiltonian \eqref{widetilde cal H} with \eqref{mH H2H3H5}, noting that $ (H_{\geq 5} \circ \Phi_B)(u) = O(u^5) $ because $ f $ satisfies \eqref{order5} and $ \Phi_B (u) = O(u) $. Therefore $$ (H_2 + H_3) \circ \Phi_B = H_2 + \mathcal{H}_3 + \mathcal{H}_4 + O(u^5) \,, $$ and the homogeneous terms of $ (H_2 + H_3) \circ \Phi_B $ of degree $ 2, 3, 4 $ in $ u $ are $ H_2 $, $\mathcal{H}_3 $, $ \mathcal{H}_4 $ respectively. As a consequence, the terms of order $ \varepsilon $ and $ \varepsilon^2 $ in \eqref{useful repr} (both in the function $ \Phi_B (T_\delta ) $ and in the remainders $ \mathcal{R}_1, \mathcal{R}_2 $) come only from $ H_2 + \mathcal{H}_3 + \mathcal{H}_4 $. Actually they come from $ H_2 $, $ \mathcal{H}_3 $ and $ \mathcal{H}_{4,2} $ (see \eqref{H3tilde}, \eqref{mH3 mH4}) because, at $ u = T_\delta = \varepsilon v_\delta + \varepsilon^b z_0 $, for all $ h \in H_S^\bot $, $$ \Pi_S^\bot (\partial_u \nabla \mathcal{H}_4)(T_\delta) [h] = \Pi_S^\bot (\partial_u \nabla \mathcal{H}_{4,2})(T_\delta) [h] + o(\varepsilon^2) \,. $$ A direct calculation based on the expressions \eqref{H3tilde}, \eqref{mH3 mH4} shows that, for all $ h \in H_S^\bot $, \begin{align} \Pi_S^\bot (\partial_u \nabla ( H_2 + \mathcal{H}_3 + \mathcal{H}_4)) (T_\delta)[h] & = - \partial_{xx} h + 6 \varepsilon \Pi_S^\bot ( v_\delta h ) + 6 \varepsilon^b \Pi_S^\bot ( z_0 h ) + \varepsilon^2 \Pi_S^\bot \big\{ 6 \pi_0 [ (\partial_x^{-1} v_\delta)^2 ] h \nonumber \\ & + 6 v_\delta \Pi_S [ (\partial_x^{-1} v_\delta) (\partial_x^{-1} h) ] -
6 \partial_x^{-1} \{ (\partial_x^{-1} v_\delta) \Pi_S [v_\delta h] \} \big\} + o(\varepsilon^2). \label{lin basso} \end{align} Thus, comparing the terms of order $ \varepsilon, \varepsilon^2 $ in \eqref{useful repr} (using \eqref{funzione moltiplicativa}) with those in \eqref{lin basso} we deduce that the operators $\mathcal{R}_1, \mathcal{R}_2$ and the function $\Psi_2 (v_\delta)$ are \begin{equation}\label{R nullo1R2} {\cal R}_1 = 0 , \quad {\cal R}_2 [h ] = 6 v_\delta \Pi_S \big[ (\partial_x^{-1} v_\delta) (\partial_x^{-1} h) \big] - 6 \partial_x^{-1} \{ (\partial_x^{-1} v_\delta) \Pi_S [v_\delta h] \} \,, \quad \Psi_2 (v_\delta) = \pi_0 [ (\partial_x^{-1} v_\delta)^2 ] . \end{equation} In conclusion, by \eqref{mH H2H3H5}, \eqref{useful repr}, \eqref{der grad struttura separata5}, \eqref{funzione moltiplicativa}, \eqref{R nullo1R2}, we get, for all $ h \in H_{S^\bot } $, \begin{align} \Pi_S^\bot \partial_u \nabla \mathcal{H} (T_\delta)[h] & = - \partial_{xx} h + \Pi_S^\bot \big[ \big( \varepsilon 6 v_\delta + \varepsilon^2 6 \pi_0 [ (\partial_x^{-1} v_\delta)^2 ] + q_{>2} + p_{\geq 4 } \big) h \big] \nonumber \\ & \quad +
\Pi_S^\bot \partial_x (r_1(T_\delta) \partial_x h ) +
\varepsilon^2 \Pi_S^\bot {\cal R}_2[h] + \Pi_S^\bot {\cal R}_{> 2} [h] \label{lafinalina} \end{align} where $ r_1 $ is defined in \eqref{r0r1 def}, $ {\cal R}_2 $ in \eqref{R nullo1R2}, the remainder $ {\cal R}_{> 2} := {\tilde {\cal R}}_{> 2} + {\cal R}_{H_{\geq 5}} (T_\delta) $ and the functions (using also \eqref{r0r1 def}, \eqref{sigma0sigma1 def}, \eqref{order5}), \begin{align}\label{def p3} q_{>2} & := 6 \tilde q
+ \varepsilon^3 \big( (\partial_{uu} f_5)(v_\delta, (v_\delta)_x) - \partial_x \{ (\partial_{u u_x} f_5)(v_\delta, (v_\delta)_x) \} \big) \\ p_{\geq 4 } & := r_0 (T_\delta) - \varepsilon^3 \big[ (\partial_{uu} f_5)(v_\delta, (v_\delta)_x) - \partial_x \{ (\partial_{u u_x} f_5)(v_\delta, (v_\delta)_x) \} \big] \, .
\label{def p>4} \end{align}
\begin{lemma}\label{p3 zero average}
$ \int_{\mathbb T} q_{>2} dx = 0 $. \end{lemma}
\begin{proof} We already observed that $ \tilde q $ has zero $x$-average; the same holds for the derivative $ \partial_x \{ (\partial_{u u_x} f_5)(v, v_x) \} $. Finally \begin{equation}\label{unico pezzo} (\partial_{uu} f_5)(v, v_x) = \sum_{j_1, j_2, j_3 \in S } c_{j_1j_2j_3} v_{j_1} v_{j_2} v_{j_3} e^{{\mathrm i} (j_1+ j_2 + j_3) x } \, , \quad v := \sum_{j \in S} v_j e^{{\mathrm i} j x} \end{equation} for some coefficients $ c_{j_1j_2j_3} $, and therefore it has zero average by hypothesis (${\mathtt S}1$). \end{proof}
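The average computation used in the proof of Lemma \ref{p3 zero average} reduces to the elementary identity
$$ \frac{1}{2\pi} \int_{\mathbb T} e^{{\mathrm i} (j_1 + j_2 + j_3) x} \, dx = \begin{cases} 1 & \mbox{if } j_1 + j_2 + j_3 = 0 \, , \\ 0 & \mbox{otherwise} \, , \end{cases} $$
and the resonant case $ j_1 + j_2 + j_3 = 0 $ with $ j_1, j_2, j_3 \in S $ is excluded by hypothesis (${\mathtt S}1$).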
By Lemma \ref{dopo l'approximate inverse} and the results of this section (in particular \eqref{lafinalina}) we deduce:
\begin{proposition}\label{prop:lin} Assume \eqref{ansatz delta}. Then the Hamiltonian operator $ {\cal L}_\omega $ has the form, $ \forall h \in H_{S^\bot}^s ( \mathbb T^{\nu+1}) $, \begin{equation}\label{Lom KdVnew} {\cal L}_\omega h := \omega \!\cdot \!\partial_{\varphi} h - \partial_x K_{02} h = \Pi_S^\bot \big( \omega \! \cdot \! \partial_\varphi h + \partial_{xx} (a_1 \partial_x h) + \partial_x ( a_0 h ) - \varepsilon^2 \partial_x {\cal R}_2 h - \partial_x \mathcal{R}_* h \big) \end{equation} where $ {\cal R}_2 $ is defined in \eqref{R nullo1R2}, $ {\mathcal{R}}_* := {\cal R}_{> 2} + R(\psi) $ (with $R(\psi)$ defined in Lemma \ref{dopo l'approximate inverse}), the functions \begin{equation}\label{a1p1p2} a_1 := 1 - r_1 ( T_\delta ) \, , \quad a_0 := - ( \varepsilon p_1 + \varepsilon^2 p_2 + q_{>2} + p_{\geq 4} ) \, , \quad p_1 := 6 v_\delta \, , \quad p_2 := 6 \pi_0 [ (\partial_x^{-1} v_\delta)^2 ]\,, \end{equation} the function $ q_{>2} $ is defined in \eqref{def p3} and satisfies $ \int_{\mathbb T} q_{>2} dx = 0 $, the function $ p_{ \geq 4} $ is defined in \eqref{def p>4}, $ r_1 $ in \eqref{sigma0sigma1 def}, $ T_\delta $ and $ v_\delta $ in \eqref{T0}. For $ p_k = p_1, p_2 $, \begin{alignat}{2} \label{stime pk}
\| p_k \|_s^{\mathrm{Lip}(\g)}
& \leq_s 1 + \| {\mathfrak I}_\delta \|_s^{\mathrm{Lip}(\g)}, & \quad \qquad
\| \partial_i p_k [ \widehat \imath ] \|_s
& \leq_s \| \widehat \imath \|_{s+1} + \| {\mathfrak I}_\delta \|_{s+1} \| \widehat \imath \|_{s_0+1}, \\ \label{stima q>2}
\| q_{>2} \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^3 + \varepsilon^b \| {\mathfrak{I}}_\delta \|_{s}^{\mathrm{Lip}(\g)}\,, & \quad
\| \partial_i q_{>2} [\widehat \imath ] \|_s
& \leq_s \varepsilon^b \big( \| \widehat \imath \|_{s+1} + \| {\mathfrak I}_\delta \|_{s+1} \| \widehat \imath \|_{s_0+1} \big), \\
\| a_1 -1 \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^3 \big( 1 + \| {\mathfrak I}_\delta \|_{s+1}^{\mathrm{Lip}(\g)} \big) \,, & \quad
\| \partial_i a_1[\widehat \imath ] \|_s
& \leq_s \varepsilon^3 \big( \| \widehat \imath \|_{s + 1} + \| {\mathfrak I}_\delta \|_{s + 1} \| \widehat \imath \|_{s_0 + 1} \big) \label{stima a1} \\
\| p_{\geq 4} \|_s^{\mathrm{Lip}(\g)} & \leq_s \varepsilon^4 + \varepsilon^{b + 2} \| {\mathfrak{I}}_\delta \|_{s + 2}^{\mathrm{Lip}(\g)}\,,& \quad \|
\partial_i p_{\geq 4}[\widehat \imath ] \|_s & \leq_s \varepsilon^{b + 2} \big( \| \widehat \imath \|_{s + 2} + \| {\mathfrak{I}}_\delta\|_{s + 2} \| \widehat \imath \|_{s_0 + 2} \big) \label{p geq 4} \end{alignat} where $ {\mathfrak{I}}_\delta (\varphi) := (\theta_0(\varphi) - \varphi, y_\delta(\varphi), z_0(\varphi)) $ corresponds to $T_\delta$. The remainder $ {\cal R}_{2} $ has the form \eqref{forma buona resto} with \begin{align} \label{stime resto 1}
\| g_j \|_s^{\mathrm{Lip}(\g)} + \| \chi_j \|_s^{\mathrm{Lip}(\g)}
\leq_s 1+ \| {\mathfrak I}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} \, , \quad
\| \partial_i g_j [\widehat \imath ]\|_s + \| \partial_i \chi_j [\widehat \imath ]\|_s
\leq_s \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta\|_{s + \sigma} \|\widehat \imath \|_{s_0 + \sigma}
\end{align} and also $ {\cal R}_* $ has the form \eqref{forma buona resto} with \begin{align} \label{stima cal R*}
\| g_j^* \|_s^{\mathrm{Lip}(\g)} \| \chi_j^* \|_{s_0}^{\mathrm{Lip}(\g)} + \| g_j^* \|_{s_0}^{\mathrm{Lip}(\g)} \| \chi_j^* \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^3 + \varepsilon^{b+1} \| {\mathfrak I}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)}
\\
\| \partial_i g_j^* [\widehat \imath ]\|_s \| \chi_j^* \|_{s_0}
+ \| \partial_i g_j^* [\widehat \imath ]\|_{s_0} \| \chi_j^* \|_{s} + \| g_j^* \|_{s_0} \| \partial_i \chi_j^* [\widehat \imath ] \|_s
+ \| g_j^* \|_{s} \| \partial_i \chi_j^* [\widehat \imath ]\|_{s_0}
& \leq_s \varepsilon^{b + 1} \| \widehat \imath \|_{s + \sigma} \label{derivate stima cal R*} \\
& + \varepsilon^{2b-1} \| {\mathfrak I}_\delta\|_{s + \sigma} \|\widehat \imath \|_{s_0 + \sigma} \nonumber \, . \end{align} \end{proposition}
The bounds \eqref{stime resto 1}, \eqref{stima cal R*} imply, by Lemma \ref{remark : decay forma buona resto}, estimates for the $ s $-decay norms of $ {\cal R}_2 $ and $ {\cal R}_* $. The linearized operator $ {\cal L}_\omega := {\cal L}_\omega (\omega, i_\delta (\omega))$ depends on the parameter $ \omega $ both directly and also through the dependence on the torus $ i_\delta (\omega ) $. We have also estimated the partial derivative $ \partial_i $ with respect to the variables $ i $ (see \eqref{embedded torus i}) in order to control, along the nonlinear Nash-Moser iteration, the Lipschitz variation of the eigenvalues of $ {\cal L}_\omega $ with respect to $ \omega $ and the approximate solution $ i_\delta $.
\section{Reduction of the linearized operator in the normal directions}\label{operatore linearizzato sui siti normali}
The goal of this section is to conjugate the Hamiltonian operator $ {\cal L}_\omega $ in \eqref{Lom KdVnew} to the diagonal operator $ {\cal L}_\infty $ defined in \eqref{Lfinale}. The proof is obtained by applying different kinds of symplectic transformations. We shall always assume \eqref{ansatz delta}.
\subsection{Change of the space variable }\label{step1}
The first task is to conjugate $ {\cal L}_\omega $ in \eqref{Lom KdVnew} to $ {\cal L}_1 $ in
\eqref{cal L1 Kdv}, which has the coefficient of $\partial_{xxx}$ independent of the space variable. We look for a $ \varphi $-dependent family of {\it symplectic} diffeomorphisms $\Phi (\varphi) $ of $ H_S^\bot $ which differ from \begin{equation}\label{primo cambio di variabile modi normali} {\cal A}_{\bot} := \Pi_S^\bot {\cal A} \Pi_S^\bot \, , \quad ({\cal A} h)(\varphi,x) := (1 + \beta_x(\varphi,x)) h(\varphi,x + \beta(\varphi,x)) \, , \end{equation} up to a small ``finite dimensional" remainder, see \eqref{forma buona resto cambio di variabile hamiltoniano}. Each
$ {\cal A}(\varphi) $ is a symplectic map of the phase space, see Remark 3.3 in \cite{BBM}.
If $ \| \beta \|_{W^{1,\infty}} < 1 / 2 $ then
$ {\cal A} $ is invertible, see Lemma \ref{lemma:utile}, and its inverse and adjoint maps are \begin{equation}\label{cambio di variabile inverso} ({\cal A}^{-1} h)(\varphi,y) := (1 + \tilde{\beta}_y(\varphi,y)) h(\varphi, y + \tilde{\beta}(\varphi,y)) \,, \quad ({\cal A}^T h) (\varphi,y) = h(\varphi, y + \tilde{\beta}(\varphi,y)) \end{equation} where $ x = y + \tilde{\beta} (\varphi, y) $ is the inverse diffeomorphism (of $\mathbb T$) of $ y = x + \beta (\varphi, x) $.
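As a check, the first formula in \eqref{cambio di variabile inverso} may be verified directly: differentiating the identities $ x = y + \tilde{\beta}(\varphi, y) $, $ y = x + \beta(\varphi, x) $ gives $ (1 + \tilde{\beta}_y(\varphi, y))(1 + \beta_x(\varphi, x)) = 1 $, whence
$$ ({\cal A}^{-1} {\cal A} h)(\varphi, y) = (1 + \tilde{\beta}_y(\varphi,y)) \, (1 + \beta_x(\varphi, x)) \, h(\varphi, x + \beta(\varphi, x)) = h(\varphi, y) $$
because $ x + \beta(\varphi, x) = y $.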
The restricted maps $ {\cal A}_\bot (\varphi): H_S^\bot \to H_S^\bot $ are not symplectic. In order to find a symplectic diffeomorphism near $ {\cal A}_\bot (\varphi) $, the first observation is that each $ {\cal A }(\varphi ) $ can be seen as the time-$1$ flow of a time dependent Hamiltonian PDE. Indeed $ {\cal A }(\varphi ) $ (for simplicity we skip the dependence on $ \varphi $) is homotopic to the identity via the path of symplectic diffeomorphisms $$ u \mapsto (1+ \tau \beta_x ) u ( x+ \tau \beta(x) ), \quad \tau \in [0,1 ] \, , $$ which is the trajectory solution of the time dependent, linear Hamiltonian PDE \begin{equation} \label{transport-free} \partial_\tau u = \partial_x (b(\tau, x) u) \, , \quad b (\tau, x) := \frac{\beta(x)}{1 + \tau \beta_x(x)}\, , \end{equation} with value $ u (x) $ at $ \tau = 0 $ and $ {\cal A}u = (1+ \beta_x(x) ) u ( x+ \beta(x) ) $ at $ \tau = 1 $. The equation \eqref{transport-free} is a {\it transport} equation. Its associated characteristic ODE is
\begin{equation}\label{equazione delle caratteristiche}
\frac{d}{d\tau} x = - b(\tau, x ) \, .
\end{equation} We denote its flow by $ \gamma^{\tau_0, \tau } $, namely $ \gamma^{\tau_0, \tau } (y) $ is the
solution of \eqref{equazione delle caratteristiche} with $ \gamma^{\tau_0, \tau_0} (y) = y $. Each $ \gamma^{\tau_0, \tau } $ is a diffeomorphism of the torus $ \mathbb T_x $.
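One verifies directly that the homotopy path solves \eqref{transport-free}: writing $ u(\tau, x) := (1 + \tau \beta_x(x)) u_0(x + \tau \beta(x)) $ and using $ b(\tau, x) (1 + \tau \beta_x(x)) = \beta(x) $,
$$ \partial_x \big( b(\tau, x) u(\tau, x) \big) = \partial_x \big( \beta(x) \, u_0(x + \tau \beta(x)) \big) = \beta_x u_0(x + \tau \beta) + \beta \, (1 + \tau \beta_x) \, (\partial_x u_0)(x + \tau \beta) = \partial_\tau u(\tau, x) \, . $$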
\begin{remark}\label{rem:ca} Let $ y \mapsto y + \tilde \beta(\tau, y)$ be the inverse diffeomorphism of $x \mapsto x + \tau \beta(x)$. Differentiating the identity $\tilde \beta(\tau, y) + \tau \beta(y + \tilde \beta(\tau, y)) = 0$ with respect to $\tau$, it follows that
$ \gamma^\tau (y) := \gamma^{0,\tau} (y) = y + \tilde \beta(\tau, y) $. \end{remark}
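In detail, the differentiation gives $ \partial_\tau \tilde \beta(\tau, y) \big( 1 + \tau \beta_x( y + \tilde \beta(\tau, y)) \big) = - \beta( y + \tilde \beta(\tau, y) ) $, namely, recalling the definition of $ b $ in \eqref{transport-free},
$$ \partial_\tau \big( y + \tilde \beta(\tau, y) \big) = - b \big( \tau, y + \tilde \beta(\tau, y) \big) \, , $$
so that $ \tau \mapsto y + \tilde \beta(\tau, y) $ solves \eqref{equazione delle caratteristiche} with initial datum $ y $ at $ \tau = 0 $.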
Then we define a symplectic map $\Phi $ of $ H_S^\bot $ as the time-1 flow of the Hamiltonian PDE
\begin{equation}\label{problemi di cauchy}
\partial_\tau u = \Pi_S^\bot \partial_x (b(\tau, x) u) = \partial_x (b(\tau, x) u) - \Pi_S \partial_x (b(\tau, x) u) \, ,
\quad u \in H_S^\bot \, .
\end{equation} Note that $ \Pi_S^\bot \partial_x (b(\tau, x) u) $ is the Hamiltonian vector field generated by $ \frac12 \int_{\mathbb T} b(\tau, x) u^2 dx $ restricted to $ H_S^\bot $. We denote by $ \Phi^{\tau_0,\tau} $ the flow of
\eqref{problemi di cauchy}, namely $ \Phi^{\tau_0, \tau} (u_0 ) $ is the solution of \eqref{problemi di cauchy}
with initial condition $ \Phi^{\tau_0, \tau_0} (u_0 ) = u_0 $. The flow is well defined in the Sobolev spaces $ H^s_{S^\bot} (\mathbb T_x) $ because $ b(\tau, x) $ is smooth enough (standard theory of linear hyperbolic PDEs, see e.g. section 0.8 in \cite{Taylor}).
It is natural to expect that the difference between the flow map $ \Phi := \Phi^{0,1} $ and $ {\cal A}_\bot $ is a ``finite-dimensional" remainder of the size of $ \beta $.
\begin{lemma}\label{modifica simplettica cambio di variabile}
For $ \| \beta \|_{W^{s_0 + 1,\infty}} $ small, there exists an invertible symplectic transformation $ \Phi = {\cal A}_\bot + {\cal R}_\Phi $ of $ H_{S^\bot}^s $,
where $ {\cal A}_\bot $ is defined in \eqref{primo cambio di variabile modi normali} and $ {\cal R}_\Phi $ is a ``finite-dimensional" remainder \begin{equation}\label{forma buona resto cambio di variabile hamiltoniano} {\cal R}_\Phi h= \sum_{j \in S} \int_0^1 (h, g_j (\tau) )_{L^2(\mathbb T)} \chi_j (\tau) d \tau + \sum_{j \in S} \big(h, \psi_j \big)_{L^2(\mathbb T)} e^{{\mathrm i} j x} \end{equation} for some functions $ \chi_j (\tau), g_j (\tau) , \psi_j \in H^s $ satisfying \begin{equation}\label{stime forma buona resto cambio di variabile hamiltoniano}
\| \psi_j\|_s\,,\, \| g_j(\tau)\|_s \leq_s \| \beta\|_{W^{s + 2, \infty}}\,,
\quad \| \chi_j(\tau)\|_s \leq_s 1 + \| \beta \|_{W^{s + 1, \infty}} \,,\quad \forall \tau \in [0, 1]\, . \end{equation} Furthermore, the following tame estimate holds \begin{equation}\label{stime Phi Phi -1}
\| \Phi^{\pm 1}h\|_s \leq_s \| h \|_s + \| \beta \|_{W^{s + 2, \infty}} \| h \|_{s_0} \, , \quad \forall h \in H^s_{S^\bot} \, . \end{equation} \end{lemma}
\begin{proof} Let $ w (\tau, x ) := (\Phi^\tau u_0)(x) $ denote the solution of \eqref{problemi di cauchy} with initial condition $ w(0, \cdot ) = u_0 \in H_S^\bot $. The difference \begin{equation}\label{differenza voluta} ({\cal A}_\bot - \Phi) u_0 = \Pi_S^\bot {\cal A} u_0 - w(1, \cdot) = {\cal A}u_0 - w(1, \cdot ) - \Pi_S {\cal A} u_0 \, , \quad \forall u_0 \in H_S^\bot \, , \end{equation} and \begin{equation}\label{pezzo2} \Pi_S {\cal A} u_0 = \Pi_S ({\cal A} - I) \Pi_S^\bot u_0 = \sum_{j \in S} \big(u_0\,,\,\psi_j \big)_{L^2(\mathbb T)} e^{{\mathrm i} j x}\,, \quad \psi_j := ({\cal A}^T- I) e^{{\mathrm i} j x} \, . \end{equation} We claim that the difference \begin{equation}\label{differenza} {\cal A} u_0 - w(1, x) = (1 + \beta_x (x) )\int_0^1 (1 + \tau \beta_x (x) )^{-1}
\big[ \Pi_S \partial_x (b (\tau ) w(\tau) ) \big] ( \gamma^\tau ( x + \beta (x) )) \,d \tau \end{equation} where $ \gamma^\tau (y) := \gamma^{0,\tau } (y) $ is the flow of \eqref{equazione delle caratteristiche}. Indeed the solution $ w(\tau, x) $ of \eqref{problemi di cauchy} satisfies $$ \partial_\tau \{ w(\tau, \gamma^\tau (y) ) \} = b_{x}(\tau, \gamma^\tau (y)) w(\tau, \gamma^\tau (y)) - \big[ \Pi_S \partial_x (b (\tau ) w(\tau) ) \big] ( \gamma^\tau (y))\, . $$ Then, by the variation of constant formula, we find $$ w(\tau, \gamma^\tau (y)) = e^{\int_0^\tau b_x (s, \gamma^s (y))\,d s} \Big( u_0(y) - \int_0^\tau e^{- \int_0^s b_x(\zeta, \gamma^\zeta (y) )\,d\zeta} \big[ \Pi_S \partial_x (b ( s ) w(s) ) \big] ( \gamma^s (y)) \, d s \Big) \, . $$ Since $ \partial_y \gamma^\tau (y) $ solves the variational equation $ \partial_\tau (\partial_y \gamma^\tau (y)) = - b_x (\tau , \gamma^\tau (y) ) (\partial_y \gamma^\tau (y)) $ with $ \partial_y \gamma^0 (y) = 1 $ we have that \begin{equation}\label{identita equazione variazionale} e^{\int_0^\tau b_x(s , \gamma^s (y) )d s} = \big( {\partial_{y} \gamma^\tau (y) } \big)^{-1} = 1 + \tau \beta_x (x)
\end{equation}
by Remark \ref{rem:ca}, and so we derive the expression $$ w(\tau, x) = (1 + \tau \beta_x (x)) \Big\{ u_0(x + \tau \beta(x)) - \int_0^\tau ( 1+ s \beta_x (x) )^{-1}
\big[ \Pi_S \partial_x (b ( s ) w(s) ) \big] ( \gamma^s ( x + \tau \beta (x) )) \,d s \Big\} \, . $$ Evaluating at $ \tau = 1 $, formula \eqref{differenza} follows. Next, we develop (recall $ w(\tau ) = \Phi^\tau (u_0 ) $) \begin{equation}\label{espressione esplicita gj}
[\Pi_S \partial_x (b(\tau) w(\tau))] (x)
= \sum_{j \in S} \big(u_0, g_j(\tau) \big)_{L^2(\mathbb T)} e^{{\mathrm i} j x} \, , \quad
g_j(\tau) := - (\Phi^\tau)^T[b(\tau) \partial_x e^{{\mathrm i} j x}]\,, \end{equation} and \eqref{differenza} becomes \begin{equation}\label{formula utile 4} {\cal A} u_0 - w(1, \cdot) = - \int_0^1 \sum_{j \in S} \big(u_0\,,\, g_j(\tau) \big)_{L^2(\mathbb T)} \chi_j(\tau, \cdot )\,d\tau\,, \end{equation} where \begin{equation}\label{espressione chi j} \chi_j(\tau, x) := - ( 1 + \beta_x (x) ) (1 + \tau \beta_x (x))^{-1} e^{{\mathrm i} j \gamma^\tau (x + \beta(x) )} \, . \end{equation}
By \eqref{differenza voluta}, \eqref{pezzo2}, \eqref{differenza}, \eqref{formula utile 4} we deduce that $ \Phi = {\cal A}_\bot + {\cal R}_\Phi $ as in \eqref{forma buona resto cambio di variabile hamiltoniano}.
We now prove the estimates \eqref{stime forma buona resto cambio di variabile hamiltoniano}.
Each function $ \psi_j $ in \eqref{pezzo2} satisfies $ \|\psi_j \|_s \leq_s \| \beta \|_{W^{s, \infty}} $,
see \eqref{cambio di variabile inverso}. The bound $ \| \chi_j(\tau)\|_s \leq_s 1 + \| \beta \|_{W^{s + 1, \infty}} $ follows by \eqref{espressione chi j}. The tame estimates for $ g_j(\tau) $ defined in \eqref{espressione esplicita gj} are more difficult because they require tame estimates for the adjoint $(\Phi^\tau)^T$, $ \forall \tau \in [0, 1] $. The adjoint of the flow map can be represented as the flow map of the ``adjoint'' PDE \begin{equation}\label{equazione aggiunta} \partial_\tau z = \Pi_S^\bot \{ b(\tau, x) \partial_x \Pi_S^\bot z \} = b(\tau, x) \partial_x z - \Pi_S (b(\tau, x) \partial_x z ) \, , \quad z \in H_S^\bot \, , \end{equation} where $ - \Pi_S^\bot b(\tau,x) \partial_x $ is the $ L^2 $-adjoint of the Hamiltonian vector field in \eqref{problemi di cauchy}. We denote by $ \Psi^{\tau_0, \tau} $ the flow of \eqref{equazione aggiunta}, namely $ \Psi^{\tau_0, \tau} (v) $ is the solution of \eqref{equazione aggiunta} with $ \Psi^{\tau_0, \tau_0} (v) = v $. Since $ \partial_\tau (\Phi^\tau (u_0) , \Psi^{\tau_0,\tau } (v) )_{L^2(\mathbb T)} = 0 $, $ \forall \tau $ (the two contributions to the derivative cancel after integrating by parts, because $ ( \Pi_S^\bot \partial_x (b w), z )_{L^2(\mathbb T)} = - ( w, \Pi_S^\bot \{ b \partial_x \Pi_S^\bot z \} )_{L^2(\mathbb T)} $ for all $ w, z \in H_S^\bot $), we deduce that $ ( \Phi^{\tau_0} (u_0) , \Psi^{\tau_0,\tau_0} (v) )_{L^2(\mathbb T)} = ( \Phi^0 (u_0) , \Psi^{\tau_0,0} (v) )_{L^2(\mathbb T)} $, namely (recall that $ \Psi^{\tau_0,\tau_0} (v) = v $) the adjoint \begin{equation}\label{adjoint flow} (\Phi^{\tau_0})^T = \Psi^{\tau_0, 0} \,, \quad \forall \tau_0 \in [0,1] \, . \end{equation} Thus it is sufficient to prove tame estimates for the flow $ \Psi^{\tau _0, \tau} $. We first provide a useful expression for the solution $ z (\tau, x) := \Psi^{\tau_0, \tau} (v) $ of \eqref{equazione aggiunta}, obtained by the method of characteristics. Let $ \gamma^{\tau _0,\tau} (y) $ be the flow of \eqref{equazione delle caratteristiche}.
Since $ \partial_\tau z (\tau, \gamma^{\tau _0, \tau } (y)) = - [\Pi_S (b(\tau ) \partial_x z (\tau )) ] ( \gamma^{\tau _0,\tau } (y)) $ we get $$ z(\tau, \gamma^{\tau _0,\tau} (y)) = v(y) + \int_\tau^{\tau_0} [\Pi_S (b(s) \partial_x z (s)) ] ( \gamma^{\tau _0,s} (y)) \,d s\, , \quad \forall \tau \in [0,1] \, . $$ Denoting by $ y = x + \sigma(\tau, x)$ the inverse diffeomorphism of $ x = \gamma^{\tau_0, \tau} (y) = y + {\tilde \sigma}(\tau, y) $, we get \begin{align} \Psi^{\tau_0, \tau}(v) = z ( \tau, x) & = v(x + \sigma(\tau, x)) + \int_\tau^{\tau_0} [\Pi_S (b(s) \partial_x z (s)) ] ( \gamma^{\tau _0,s} (x + \sigma(\tau, x))) \, d s \nonumber \\ & = v(x + \sigma(\tau, x)) +\int_\tau^{\tau_0} \sum_{j \in S} (z(s), p_j(s))_{L^2(\mathbb T)} \kappa_j(s, x)\,d s = v( x + \sigma(\tau,x) ) + {\cal R}_\tau v\,, \label{Psi-expression} \end{align} where $ p_j (s) := - \partial_x (b(s) e^{{\mathrm i} j x}) $, $ \kappa_j(s, x) := e^{{\mathrm i} j \gamma^{\tau_0, s} (x + \sigma(\tau, x))} $ and $$ ({\cal R}_\tau v)(x) := \int_\tau^{\tau_0} \sum_{j \in S} (\Psi^{\tau_0,s}(v), p_j(s))_{L^2(\mathbb T)} \kappa_j(s, x)\,d s\,. $$ Since
$\| \sigma(\tau, \cdot) \|_{W^{s,\infty}} $, $ \| \tilde \sigma(\tau, \cdot)\|_{W^{s,\infty}}
\leq_s \| \beta \|_{W^{s + 1,\infty}}$ (recall also \eqref{transport-free}), we derive
$ \| p_j \|_s \leq_s \| \beta \|_{W^{s + 2,\infty}} $,
$ \| \kappa_j\|_s \leq_s 1 + \| \beta\|_{W^{s + 1,\infty}} $ and
$ \| v(x + \sigma(\tau, x)) \|_s \leq_s \| v\|_s + \| \beta\|_{W^{s + 1,\infty}} \| v\|_{s_0} $, $ \forall \tau \in [0,1] $. Moreover $$
\| {\cal R}_\tau v\|_s \leq_s {\rm sup}_{\tau \in [0, 1]} \| \Psi^{\tau_0,\tau}(v) \|_s
\| \beta\|_{W^{s_0 + 2,\infty}} +{\rm sup}_{\tau \in [0, 1]} \| \Psi^{\tau_0, \tau}(v) \|_{s_0} \| \beta\|_{W^{s + 2,\infty}}\, . $$ Therefore, for all $ \tau \in [0, 1] $, \begin{equation}\label{tame aggiunto flusso parziale}
\| \Psi^{\tau_0,\tau} v\|_s \leq_s \| v\|_s + \| \beta\|_{W^{s + 1,\infty}}
\| v\|_{s_0} + {\rm sup}_{\tau \in [0, 1]} \big\{ \| \Psi^{\tau_0, \tau} v\|_s \| \beta\|_{W^{s_0 + 2,\infty}} +
\| \Psi^{\tau_0, \tau} v\|_{s_0} \| \beta\|_{W^{s + 2,\infty}} \big\}\,. \end{equation} For $s = s_0$ it implies $$
{\rm sup}_{\tau \in [0, 1]} \| \Psi^{\tau_0, \tau}(v)\|_{s_0} \leq_{s_0} \| v \|_{s_0}(1 +
\| \beta \|_{W^{s_0 + 1,\infty}}) + {\rm sup}_{\tau \in [0, 1]} \| \Psi^{\tau_0, \tau}(v)\|_{s_0} \| \beta\|_{W^{s_0 + 2,\infty}} $$
and so, for $\| \beta\|_{W^{s_0 + 2,\infty} } \leq c(s_0) $ small enough, \begin{equation}\label{tame norma bassa aggiunto flusso}
{\rm sup}_{\tau \in [0, 1]} \| \Psi^{\tau_0, \tau}(v)\|_{s_0} \leq_{s_0} \| v \|_{s_0} \, . \end{equation} Finally \eqref{tame aggiunto flusso parziale}, \eqref{tame norma bassa aggiunto flusso} imply the tame estimate \begin{equation}\label{tame aggiunto flusso}
{\rm sup}_{\tau \in [0, 1]} \| \Psi^{\tau_0, \tau} (v)\|_s \leq_s \| v \|_s + \| \beta\|_{W^{s + 2,\infty}} \| v \|_{s_0}\,. \end{equation} By \eqref{adjoint flow} and \eqref{tame aggiunto flusso} we deduce the bound \eqref{stime forma buona resto cambio di variabile hamiltoniano} for $ g_j $ defined in \eqref{espressione esplicita gj}. The tame estimate \eqref{stime Phi Phi -1} for $\Phi $ follows by that of $ {\cal A} $ and \eqref{stime forma buona resto cambio di variabile hamiltoniano}
(use Lemma \ref{lemma:utile}). The estimate for $\Phi^{- 1}$ follows in the same way because $\Phi^{-1 } = \Phi^{1,0} $ is the backward flow. \end{proof}
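For the reader's convenience, the cancellation behind \eqref{adjoint flow} can be spelled out; here $X(\tau)$ is our shorthand (not used elsewhere) for the Hamiltonian vector field in \eqref{problemi di cauchy}, whose $L^2$-adjoint is $-\Pi_S^\bot b(\tau,x)\partial_x$ as noted after \eqref{equazione aggiunta}.

```latex
% Since \partial_\tau \Phi^\tau = X(\tau) \Phi^\tau and, by \eqref{equazione aggiunta},
% \partial_\tau \Psi^{\tau_0,\tau} = \Pi_S^\bot b(\tau,x) \partial_x \Psi^{\tau_0,\tau}
%                                  = - X(\tau)^T \Psi^{\tau_0,\tau} \, ,
\[
\partial_\tau \big( \Phi^\tau (u_0) , \Psi^{\tau_0,\tau} (v) \big)_{L^2(\mathbb T)}
= \big( X(\tau) \Phi^\tau (u_0) , \Psi^{\tau_0,\tau} (v) \big)_{L^2(\mathbb T)}
- \big( X(\tau) \Phi^\tau (u_0) , \Psi^{\tau_0,\tau} (v) \big)_{L^2(\mathbb T)} = 0 \, ,
\]
% and evaluating the conserved pairing at \tau = \tau_0 and \tau = 0,
% using \Psi^{\tau_0,\tau_0}(v) = v and \Phi^0(u_0) = u_0, gives
\[
\big( \Phi^{\tau_0} (u_0) , v \big)_{L^2(\mathbb T)}
= \big( u_0 , \Psi^{\tau_0, 0} (v) \big)_{L^2(\mathbb T)} \, ,
\]
% which is \eqref{adjoint flow}.
```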
We conjugate $ {\cal L}_\omega $ in \eqref{Lom KdVnew} via the symplectic map $ \Phi = {\cal A}_\bot + {\cal R}_\Phi $ of Lemma \ref{modifica simplettica cambio di variabile}. We compute
(split $ \Pi_S^{\bot} = I - \Pi_S $) \begin{equation}\label{LAbot} {\cal L}_\omega \Phi = \Phi {\cal D}_\omega + \Pi_S^\bot {\cal A} \big( b_3 \partial_{yyy} + b_2 \partial_{yy} + b_1 \partial_{y} + b_0 \big) \Pi_S^\bot + {\cal R}_I \,, \end{equation} where the coefficients are \begin{align} & b_3 (\varphi,y) := {\cal A}^T [ a_1 ( 1 + \beta_x)^3 ] \label{step1: b3KdV} \qquad \qquad b_2 (\varphi,y) := {\cal A}^T \big[ 2 (a_1)_x (1 + \beta_x )^2 + 6 a_1 \beta_{xx} (1 + \beta_x )\big] \\ & b_1 (\varphi,y) :=
{\cal A}^T \Big[ ({\cal D}_\omega \beta) + 3 a_1 \frac{\beta_{xx}^2 }{1 + \beta_x} +
4 a_1 \beta_{xxx} + 6 (a_1)_x \beta_{xx} +
(a_1)_{xx} (1 + \beta_x) + a_0 (1 + \beta_x) \Big] \label{tilde b1KdV} \\ & b_0 (\varphi,y) := {\cal A}^T \Big[ \frac{({\cal D}_\omega \beta_x)}{1 + \beta_x} +
a_1 \frac{\beta_{xxxx}}{1+ \beta_x} +
2 ( a_{1})_{x} \frac{\beta_{xxx}}{1+ \beta_x} + ( a_{1})_{xx} \frac{\beta_{xx}}{1+ \beta_x} + a_0 \frac{\beta_{xx}}{1+ \beta_x} + (a_0)_x \Big] \label{tilde b0KdV} \end{align}
and the remainder \begin{align}
{\cal R}_I &:= - \Pi_S^\bot \partial_x ( \varepsilon^2 {\cal R}_2 + \mathcal{R}_{*} ) {\cal A}_\bot - \Pi_S^\bot
\big( a_1 \partial_{xxx} + 2 (a_1)_x \partial_{xx} + ( (a_{1})_{xx} + a_0)\partial_x + (a_0)_x \big) \Pi_{S} {\cal A} \Pi_S^\bot \, \nonumber\\
& \quad +[{\cal D}_\omega, {\cal R}_\Phi] + ({\cal L}_\omega - {\cal D}_\omega) {\cal R}_\Phi\,. \label{calR1KdV} \end{align} The commutator $[{\cal D}_\omega, {\cal R}_\Phi] $ has the form \eqref{forma buona resto cambio di variabile hamiltoniano} with ${\cal D}_\omega \chi_j$, ${\cal D}_\omega g_j$, ${\cal D}_\omega \psi_j$ instead of $\chi_j$, $g_j$, $\psi_j$, respectively. Also the last term $({\cal L}_\omega - {\cal D}_\omega) {\cal R}_\Phi$ in \eqref{calR1KdV} has the form \eqref{forma buona resto cambio di variabile hamiltoniano} (note that ${\cal L}_\omega - {\cal D}_\omega$ does not contain derivatives with respect to $\varphi$). By \eqref{LAbot}, and decomposing $ I = \Pi_S + \Pi_S^\bot $, we get \begin{align} \label{L A bot finaleKdV} {\cal L}_\omega \Phi = {} & \Phi ( {\cal D}_\omega + b_3 \partial_{yyy} + b_2 \partial_{yy} + b_1 \partial_{y} + b_0 ) \Pi_S^\bot + {\cal R}_{II} \,, \\ \label{calR2KdV} {\cal R}_{II} := {} & \big\{\Pi_S^\bot ({\cal A} - I) \Pi_{S} - {\cal R}_\Phi \big\} ( b_3 \partial_{yyy} + b_2 \partial_{yy} + b_1 \partial_{y} + b_0 ) \Pi_S^\bot + {\cal R}_I \,. \end{align}
Now we choose the function $ \beta = \beta (\varphi, x) $ such that \begin{equation}\label{choice beta} a_1(\varphi, x) (1 + \beta_x (\varphi, x))^3 = b_3 (\varphi) \end{equation} so that the coefficient $ b_3 $ in \eqref{step1: b3KdV} depends only on $ \varphi $ (note that $ {\cal A}^T [b_3 (\varphi)] = b_3 (\varphi) $). The only solution of \eqref{choice beta} with zero space average is (see e.g. \cite{BBM}-section 3.1) \begin{equation}\label{sol beta} \beta := \partial_x^{-1} \rho_0 , \quad \rho_0 := b_3 (\varphi)^{1/3} (a_1 (\varphi, x))^{-1/3} - 1, \quad b_3 (\varphi) := \Big( \frac{1}{2 \pi} \int_{\mathbb T} (a_1 (\varphi, x))^{-1/3} dx \Big)^{-3}. \end{equation} Applying the symplectic map $ \Phi^{-1} $ in \eqref{L A bot finaleKdV} we obtain the Hamiltonian operator (see Definition \ref{operatore Hamiltoniano}) \begin{equation}\label{cal L1 Kdv} {\cal L}_1 := \Phi^{-1} {\cal L}_\omega \Phi = \Pi_S^\bot \big( \omega \cdot \partial_\varphi + b_3(\varphi) \partial_{yyy} + b_1 \partial_y + b_0 \big) \Pi_S^\bot + {\mathfrak R}_1 \end{equation} where $ {\mathfrak R}_1 := \Phi^{-1} {\cal R}_{II} $. We used that, by the Hamiltonian nature of $ {\cal L}_1 $, the coefficient $ b_2 = 2 (b_3)_y $ (see \cite{BBM}-Remark 3.5) and so, by the choice \eqref{sol beta}, we have $ b_2 = 2 (b_3)_y = 0 $. In the next Lemma we analyse the structure of the remainder ${\mathfrak R}_1$. \begin{lemma} \label{cal R3} The operator $ {\mathfrak R}_1 $ has the form \eqref{forma buona con gli integrali}.
\end{lemma}
\begin{proof} The remainders $ {\cal R}_I $ and $ {\cal R}_{II} $ have the form \eqref{forma buona con gli integrali}. Indeed
$ {\cal R}_2, {\cal R}_* $ in \eqref{calR1KdV} have the form \eqref{forma buona resto}
(see Proposition \ref{prop:lin}) and the term $ \Pi_S {\cal A} w = \sum_{j \in S } ( {\cal A}^T e^{{\mathrm i} j x}, w)_{L^2(\mathbb T) } e^{{\mathrm i} jx} $ has the same form. By \eqref{forma buona resto cambio di variabile hamiltoniano}, the terms of ${\cal R}_I$, ${\cal R}_{II}$ which involve the operator ${\cal R}_\Phi$ have the form \eqref{forma buona con gli integrali}.
All the operations involved preserve this structure: if $R_\tau w = \chi(\tau) (w, g(\tau) )_{L^2(\mathbb T)} $, $\tau \in [0, 1]$, then \begin{alignat*}{3} R_\tau \Pi_S^\bot w & = \chi(\tau) (\Pi_S^\bot g(\tau) , w)_{L^2(\mathbb T)}\,, \ & R_\tau {\cal A} w & = \chi(\tau) ({\cal A}^T g(\tau) , w)_{L^2(\mathbb T)} \,, \ & \partial_x R_\tau w & = \chi_x(\tau) (g(\tau) , w)_{L^2(\mathbb T)} \,, \\ \Pi_S^\bot R_\tau w & = (\Pi_S^\bot \chi(\tau)) (g(\tau) , w)_{L^2(\mathbb T)} \,, \ & {\cal A} R_\tau w & = ({\cal A} \chi(\tau)) (g(\tau) , w)_{L^2(\mathbb T)} \,, \ & \Phi^{-1} R_\tau w & = (\Phi^{-1} \chi(\tau)) (g(\tau), w)_{L^2(\mathbb T)} \end{alignat*}
(the last equality holds because $ \Phi^{-1} ( f (\varphi) w ) = f (\varphi) \Phi^{-1} ( w ) $ for every function $f(\varphi)$).
Hence $ {\mathfrak R}_1 $ has the form \eqref{forma buona con gli integrali}
where $ \chi_j(\tau) \in H_S^\bot $ for all $\tau \in [0, 1]$. \end{proof}
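Returning to the choice \eqref{sol beta}: as a direct check (a sketch, using only the definitions in \eqref{choice beta}-\eqref{sol beta}), one verifies that $ \beta = \partial_x^{-1} \rho_0 $ indeed solves \eqref{choice beta}, and that $ \rho_0 $ has zero space average, so that $ \partial_x^{-1} \rho_0 $ is well defined.

```latex
% From \eqref{sol beta}, 1 + \beta_x = 1 + \rho_0 = b_3(\varphi)^{1/3} (a_1(\varphi,x))^{-1/3}, hence
\[
a_1 (1 + \beta_x)^3 = a_1 \, b_3(\varphi) \, a_1^{-1} = b_3(\varphi) \, ,
\]
% which is \eqref{choice beta}. Moreover, by the definition of b_3(\varphi) in \eqref{sol beta},
\[
\int_{\mathbb T} \rho_0 \, dx
= b_3(\varphi)^{1/3} \int_{\mathbb T} (a_1(\varphi,x))^{-1/3} \, dx - 2\pi
= b_3(\varphi)^{1/3} \, 2\pi \, b_3(\varphi)^{-1/3} - 2\pi = 0 \, .
\]
```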
We now put in evidence the terms of order $ \varepsilon, \varepsilon^2, \ldots $, in $ b_1 $, $ b_0 $, $ \mathfrak R_1 $, recalling that $ a_1 -1 = O(\varepsilon^3 )$ (see \eqref{stima a1}), $ a_0 = O( \varepsilon) $ (see \eqref{a1p1p2}-\eqref{p geq 4}), and $ \beta = O( \varepsilon^3) $ (proved below in \eqref{stima beta}). We expand $ b_1$ in \eqref{tilde b1KdV} as \begin{equation}\label{b1 Kdv} b_1 = - \varepsilon p_1 -\varepsilon^2 p_2 - q_{>2} + {\cal D}_\omega \beta + 4 \beta_{xxx} + (a_1)_{xx}
+ b_{1, \geq 4} \end{equation} where $ b_{1, \geq 4} = O(\varepsilon^4) $ is defined by difference (the precise estimate is in Lemma \ref{lemma:stime coeff mL1}).
\begin{remark}\label{media beta} By \eqref{sol beta}, the function $ {\cal D}_\omega \beta $ has zero average in $ x $, and so do $ (a_1)_{xx} $ and $ \beta_{xxx} $. \end{remark}
Similarly, we expand $ b_0 $ in \eqref{tilde b0KdV} as \begin{equation} \label{b0 Kdv} b_0 = - \varepsilon (p_1)_x - \varepsilon^2 (p_2)_x - (q_{>2})_x + {\cal D}_\omega \beta_x + \beta_{xxxx} + b_{0, \geq 4} \end{equation} where $ b_{0, \geq 4} = O(\varepsilon^4) $ is defined by difference.
Using the equalities \eqref{calR2KdV}, \eqref{calR1KdV} and $ \Pi_S {\cal A} \Pi_S^\bot = \Pi_S ({\cal A} - I) \Pi_S^\bot $ we get \begin{equation}\label{resto1 Kdv} {\mathfrak R}_1 = \Phi^{-1} {\cal R}_{II} = - \varepsilon^2 \Pi_S^\bot \partial_x {\cal R}_2 + {\cal R}_{*} \end{equation} where $ {\cal R}_2 $ is defined in \eqref{R nullo1R2} and we have denoted by $\mathcal{R}_*$ the term of order $ o(\varepsilon^2) $ in $\mathfrak{R}_1$. The remainder $ {\cal R}_{*} $ in \eqref{resto1 Kdv} has the form \eqref{forma buona con gli integrali}.
\begin{lemma} \label{lemma:stime coeff mL1} There is $\sigma = \sigma(\tau ,\nu) > 0$ such that \begin{align} \label{stima beta}
\| \beta \|_s^{\mathrm{Lip}(\g)} \leq_s \varepsilon^3 (1 + \| {\mathfrak{I}}_\delta \|_{s + 1}^{\mathrm{Lip}(\g)} ), \qquad
\| \partial_i \beta [\widehat \imath ] \|_s
& \leq_s \varepsilon^3 \big( \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma}\big)\,, \\ \label{stima b1 b0}
\| b_3 - 1 \|_s^{\mathrm{Lip}(\g)}
\leq_s \varepsilon^4 + \varepsilon^{b + 2} \| {\mathfrak{I}}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)}, \qquad \| \partial_i b_3[\widehat \imath ]\|_s & \leq_s \varepsilon^{b + 2} \big( \| \widehat
\imath \|_{s + \sigma} + \| {\mathfrak{I}}_\delta\|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma}\big) \\ \label{stima Di b1 b0}
\| b_{1, \geq 4} \|_s^{\mathrm{Lip}(\g)} + \| b_{0, \geq 4} \|_s^{\mathrm{Lip}(\g)} & \leq_s \varepsilon^4 + \varepsilon^{b + 2} \| {\mathfrak{I}}_\delta\|_{s + \sigma}^{\mathrm{Lip}(\g)} \\ \label{stima Di b geq 4}
\| \partial_i b_{1, \geq 4}[\widehat \imath ] \|_s + \| \partial_i b_{0, \geq 4}[\widehat \imath ]\|_s & \leq_s \varepsilon^{b + 2}\big( \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak{I}}_\delta\|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma}\big) . \end{align} The transformations $\Phi$, $\Phi^{-1}$ satisfy \begin{align} \label{stima cal A bot}
\|\Phi^{\pm 1} h \|_s^{{\mathrm{Lip}(\g)}} & \leq_s \| h \|_{s + 1}^{{\mathrm{Lip}(\g)}} + \| {\mathfrak I}_\delta \|_{s + \sigma}^{{\mathrm{Lip}(\g)}} \| h \|_{s_0 + 1}^{{\mathrm{Lip}(\g)}} \\ \label{stima derivata cal A bot}
\| \partial_i (\Phi^{\pm 1}h) [\widehat \imath] \|_s & \leq_s
\| h \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma} +
\| h\|_{s_0 + \sigma} \| \widehat \imath \|_{s + \sigma} +
\| {\mathfrak I}_\delta\|_{s + \sigma} \| h\|_{s_0 + \sigma} \| \widehat \imath \|_{s_0 + \sigma}\,. \end{align} Moreover the remainder ${\cal R}_{*}$ has the form \eqref{forma buona con gli integrali}, where the functions $\chi_j(\tau)$, $g_j(\tau)$ satisfy the estimates \eqref{stima cal R*}-\eqref{derivate stima cal R*} uniformly in $\tau \in [0, 1]$. \end{lemma}
\begin{proof} The estimates \eqref{stima beta} follow by \eqref{sol beta}, \eqref{stima a1}, and the usual interpolation and tame estimates in Lemmata \ref{lemma:composition of functions, Moser}-\ref{lemma:utile} (and Lemma \ref{stima Aep}) and \eqref{ansatz delta}. For the estimates of $ b_3 $, by \eqref{sol beta} and \eqref{a1p1p2} we consider the function $ r_1 $ defined in \eqref{sigma0sigma1 def}. Recalling also \eqref{finito finito} and \eqref{T0}, we split $$ r_1 (T_\delta ) = \varepsilon^3 (\partial_{u_x u_x} f_5) (v_\delta, (v_\delta)_x ) + r_{1, \geq 4} \, , \quad r_{1, \geq 4} := r_1 (T_\delta ) - \varepsilon^3 (\partial_{u_x u_x} f_5) (v_\delta, (v_\delta)_x ) \, . $$ Hypothesis (${\mathtt S}1$) implies, as in the proof of Lemma \ref{p3 zero average}, that the space average $ \int_{\mathbb T} (\partial_{u_x u_x} f_5) (v_\delta, (v_\delta)_x ) dx = 0 $. Hence the bound \eqref{stima b1 b0} for $ b_3 - 1 $ follows.
For the estimates on $\Phi$, $\Phi^{-1} $ we apply Lemma \ref{modifica simplettica cambio di variabile} and the estimate \eqref{stima beta} for $\beta$. We estimate the remainder ${\cal R}_*$ in \eqref{resto1 Kdv}, using \eqref{calR1KdV}, \eqref{calR2KdV} and
\eqref{stima cal R*}-\eqref{derivate stima cal R*}. \end{proof}
\subsection{Reparametrization of time}\label{step2}
The goal of this section is to make the coefficient of the highest order spatial derivative operator $ \partial_{yyy} $ constant, by a quasi-periodic reparametrization of time. We consider the change of variable $$ (B w)(\varphi, y) := w(\varphi + \omega \alpha(\varphi), y), \qquad ( B^{-1} h)(\vartheta , y ) := h(\vartheta + \omega \tilde{\alpha}(\vartheta), y)\,, $$
where $ \varphi = \vartheta + \omega \tilde{\alpha}(\vartheta )$ is the inverse diffeomorphism of $ \vartheta = \varphi + \omega \alpha(\varphi) $ in $\mathbb T^\nu$. By conjugation, the differential operators become \begin{equation} \label{anche def rho} B^{-1} \omega \cdot \partial_\varphi B = \rho(\vartheta)\, \omega \cdot \partial_{\vartheta} , \quad B^{-1} \partial_y B = \partial_y, \quad \rho := B^{-1} (1 + \omega \cdot \partial_{\varphi} \alpha). \end{equation} By \eqref{cal L1 Kdv}, using also that $ B $ and $ B^{-1} $ commute with $ \Pi_S^\bot $, we get \begin{equation} \label{secondo coniugio siti normali Kdv}
B^{-1} {\cal L}_1 B = \Pi_S^\bot [ \rho \omega \cdot \partial_{\vartheta}
+ (B^{-1} b_3) \partial_{yyy}
+ ( B^{-1} b_1 ) \partial_y + ( B^{-1} b_0 ) ] \Pi_S^\bot
+ B^{-1} {\mathfrak R}_1 B. \end{equation} We choose $ \alpha $ such that \begin{equation}\label{B3solu} (B^{-1}b_3 )(\vartheta ) = m_3 \rho (\vartheta) \,, \quad m_3 \in \mathbb R \, , \quad \text{namely} \ \ \
b_3 (\varphi) = m_3 ( 1 + \omega \cdot \partial_\varphi \alpha (\varphi) ) \end{equation} (recall \eqref{anche def rho}). The unique solution with zero average of \eqref{B3solu} is \begin{equation} \label{def alpha m3} \alpha (\varphi) := \frac{1}{m_3} ( \omega \cdot \partial_\varphi )^{-1} ( b_3 - m_3 ) (\varphi) , \qquad m_3 := \frac{1}{(2 \pi)^\nu} \int_{\mathbb T^\nu} b_3 (\varphi) d \varphi \,. \end{equation} Hence, by \eqref{secondo coniugio siti normali Kdv}, \begin{alignat}{2} \label{L2 Kdv} & B^{-1} {\cal L}_1 B = \rho {\cal L}_2 \, , \qquad & & {\cal L}_2 := \Pi_S^\bot ( \omega \cdot \partial_{\vartheta} + m_3 \partial_{yyy} + c_1 \partial_y + c_0 ) \Pi_S^\bot + {\mathfrak R}_2 \\ \label{tilde c1 Kdv} & c_1 := \rho^{-1} (B^{-1} b_1 ) \,, \qquad & & c_0 := \rho^{-1} (B^{-1} b_0 ) \, , \qquad {\mathfrak R}_2 := \rho^{-1} B^{-1}{\mathfrak R}_1 B \, . \end{alignat} The transformed operator ${\cal L}_2$ in \eqref{L2 Kdv} is still Hamiltonian, since the reparametrization of time preserves the Hamiltonian structure (see Section 2.2 and Remark 3.7 in \cite{BBM}).
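The solvability of \eqref{B3solu}, equivalently $ b_3 = m_3 ( 1 + \omega \cdot \partial_\varphi \alpha ) $, can be made explicit; a sketch, using only \eqref{anche def rho} and \eqref{def alpha m3}:

```latex
% The equation \omega \cdot \partial_\varphi \alpha = (b_3 - m_3)/m_3 is solved
% monomial by monomial: for \ell \in \mathbb Z^\nu \setminus \{0\},
\[
(\omega \cdot \partial_\varphi)^{-1} e^{{\mathrm i} \ell \cdot \varphi}
= \frac{e^{{\mathrm i} \ell \cdot \varphi}}{{\mathrm i} \, \omega \cdot \ell} \, ,
\]
% which is well defined for Diophantine \omega; the \ell = 0 Fourier mode of
% (b_3 - m_3)/m_3 vanishes precisely because m_3 is chosen in \eqref{def alpha m3}
% as the \varphi-average of b_3.
```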
We now put in evidence the terms of order $ \varepsilon, \varepsilon^2, \ldots $ in $ c_1, c_0 $. To this aim, we anticipate the following estimates: $ \rho (\vartheta) = 1 + O(\varepsilon^4) $, $ \alpha = O( \varepsilon^4 \gamma^{-1}) $, $ m_3 = 1 + O(\varepsilon^4) $, $ B^{-1} - I = O( \alpha ) $ (in low norm), which are proved in Lemma \ref{lemma:stime coeff mL2} below. Then, by \eqref{b1 Kdv}-\eqref{b0 Kdv}, we expand the functions $ c_1, c_0 $ in \eqref{tilde c1 Kdv} as \begin{equation} \label{tilde c1 KdV} c_1 = - \varepsilon p_1 - \varepsilon^2 p_2 - B^{-1} q_{>2} + \varepsilon ( p_1 - B^{-1} p_1) + \varepsilon^2 ( p_2 - B^{-1} p_2 ) + {\cal D}_\omega \beta + 4 \beta_{xxx} + (a_1)_{xx} + c_{1, \geq 4} \, , \end{equation} \begin{equation} \label{tilde c0 KdV} c_0 = - \varepsilon (p_1)_x - \varepsilon^2 (p_2)_x - (B^{-1} q_{>2})_x + \varepsilon ( p_1 - B^{-1} p_1)_x + \varepsilon^2 ( p_2 - B^{-1} p_2 )_x + ({\cal D}_\omega \beta)_x + \beta_{xxxx} + c_{0, \geq 4}\,, \end{equation} where $ c_{1, \geq 4}, c_{0, \geq 4} = O( \varepsilon^4 ) $ are defined by difference.
\begin{remark} The functions $\varepsilon ( p_1 - B^{-1} p_1) = O( \varepsilon^5 \gamma^{-1} )$ and $\varepsilon^2 ( p_2 - B^{-1} p_2) = O( \varepsilon^6 \gamma^{-1} )$, see \eqref{commu B p1 p2}. For the reducibility scheme, the terms of order $ \partial_x^0 $ with size $ O(\varepsilon^5 \gamma^{-1}) $ are perturbative, since $ \varepsilon^5 \gamma^{-2} \ll 1 $. \end{remark}
The remainder $ {\mathfrak R}_2 $ in \eqref{tilde c1 Kdv} still has the form \eqref{forma buona con gli integrali} and, by \eqref{resto1 Kdv}, \begin{equation}\label{mathfrakR2} {\mathfrak R}_2 = \rho^{-1} B^{-1} {\mathfrak R}_1 B = - \varepsilon^2 \Pi_S^\bot \partial_x {\cal R}_2 + {\cal R}_{*} \end{equation} where $ {\cal R}_2 $ is defined in \eqref{R nullo1R2} and we have denoted by $ {\cal R}_{*} $ the term of order $ o( \varepsilon^2 ) $ in $\mathfrak{R}_2$.
\begin{lemma} \label{lemma:stime coeff mL2} There is $ \sigma = \sigma(\nu,\tau ) > 0 $ (possibly larger than $ \sigma $ in Lemma \ref{lemma:stime coeff mL1}) such that \begin{align} \label{stima m3}
| m_3 - 1 |^{\mathrm{Lip}(\g)} \leq C \varepsilon^4 , \qquad
| \partial_i m_3 [\widehat \imath ]| & \leq C
\varepsilon^{b+2} \| \widehat \imath \|_{s_0 + \sigma} \\ \label{stima Di b3}
\| \alpha \|_s^{\mathrm{Lip}(\g)} \leq_s \varepsilon^4 \gamma^{-1} + \varepsilon^{b + 2} \gamma^{- 1}\| {\mathfrak{I}}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)}, \qquad
\| \partial_i \alpha [\widehat \imath] \|_s
& \leq_s \varepsilon^{b + 2} \gamma^{-1} \big( \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma}\big)\,, \\ \label{stima c1 c0}
\| \rho -1 \|_s^{\mathrm{Lip}(\g)}
\leq_s \varepsilon^4 + \varepsilon^{b + 2} \| {\mathfrak{I}}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} ,\qquad \| \partial_i \rho[\widehat \imath ]\|_s & \leq_s \varepsilon^{b + 2} \big(\| \widehat
\imath \|_{s + \sigma} + \| {\mathfrak{I}}_\delta\|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma} \big) \\ \label{commu B p1 p2}
\| p_k - B^{-1} p_k \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^4 \gamma^{-1} + \varepsilon^{b + 2} \gamma^{- 1} \| {\mathfrak{I}}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} , \quad k = 1, 2 \\ \label{commu B p1 p2 der}
\| \partial_i (p_k - B^{-1} p_k) [\widehat \imath] \|_s
& \leq_s \varepsilon^{b + 2} \gamma^{-1} \big( \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma}\big)\, \\ \label{stima z0 - B-1 z0}
\| B^{-1} q_{>2} \|_s^{\mathrm{Lip}(\g)}
& \leq_s \varepsilon^3 + \varepsilon^b \| {\mathfrak I}_\delta\|_{s + \sigma}^{\mathrm{Lip}(\g)} , \\ \label{stima derivata z0 - B-1 z0}
\|\partial_i (B^{-1} q_{>2}) [\widehat \imath] \|_s
& \leq_s \varepsilon^b \big( \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma} \big) \,. \end{align} The terms $ c_{1, \geq 4}, c_{0, \geq 4} $ satisfy the bounds \eqref{stima Di b1 b0}-\eqref{stima Di b geq 4}. The transformations $B$, $B^{-1}$ satisfy the estimates \eqref{stima cal A bot}, \eqref{stima derivata cal A bot}. The remainder $ {\cal R}_{*} $ has the form \eqref{forma buona con gli integrali}, and the functions $g_j(\tau)$, $\chi_j(\tau)$ satisfy the estimates \eqref{stima cal R*}-\eqref{derivate stima cal R*} for all $\tau \in [0,1]$. \end{lemma}
\begin{proof} The estimate \eqref{stima m3} follows from \eqref{def alpha m3} and \eqref{stima b1 b0}.
The estimate $ \| \alpha \|_s \leq_s \varepsilon^4 \gamma^{-1} + \varepsilon^{b + 2} \gamma^{- 1} \| {\mathfrak{I}}_\delta \|_{s + \sigma} $ and the inequality for $ \partial_i \alpha $ in \eqref{stima Di b3} follow by \eqref{def alpha m3}, \eqref{stima b1 b0}, \eqref{stima m3}.
For the first bound in \eqref{stima Di b3} we also differentiate
\eqref{def alpha m3} with respect to the parameter $ \omega $. The estimates for $\rho$ follow from $\rho - 1 = B^{-1}(b_3 - m_3) / m_3$. \end{proof}
\subsection{Translation of the space variable}\label{step3}
In view of the next linear Birkhoff normal form steps (whose goal is to eliminate the terms of size $ \varepsilon $ and $ \varepsilon^2 $), in the expressions \eqref{tilde c1 KdV}, \eqref{tilde c0 KdV} we split $ p_1 = {\bar p}_1 + ( p_1 - {\bar p}_1)$, $p_2 = {\bar p}_2 + ( p_2 - {\bar p}_2)$ (see \eqref{a1p1p2}), where \begin{equation}\label{def bar pi} {\bar p}_1 := 6 {\bar v}, \qquad {\bar p}_2 := 6 \pi_0 [ (\partial_x^{-1} {\bar v})^2 ], \qquad {\bar v} (\varphi, x) := {\mathop \sum}_{j \in S} \sqrt{\xi_j} e^{{\mathrm i} \ell (j) \cdot \varphi } e^{{\mathrm i} j x}, \end{equation} and $\ell : S \to \mathbb Z^\nu$ is the odd injective map (see \eqref{tang sites}) \begin{equation}\label{del ell} \ell : S \to \mathbb Z^\nu, \quad \ell(\bar \jmath_i) := e_i, \quad \ell(-\bar \jmath_i) := - \ell(\bar \jmath_i) = - e_i, \quad i = 1,\ldots,\nu, \end{equation} denoting by $e_i = (0,\ldots,1, \ldots,0)$ the $i$-th vector of the canonical basis of $\mathbb R^\nu$.
\begin{remark}\label{remarkp1p2} All the functions $ {\bar p}_1 $, $ {\bar p}_2 $, $ p_1 - {\bar p}_1 $, $ p_2 - {\bar p}_2 $ have zero average in $ x $. \end{remark}
We write the variable coefficients $c_1, c_0$ of the operator $\mathcal{L}_2$ in \eqref{L2 Kdv} (see \eqref{tilde c1 KdV}, \eqref{tilde c0 KdV}) as \begin{equation}\label{tilde c1 KdVbis} c_1 = - \varepsilon {\bar p}_1 - \varepsilon^2 {\bar p}_2 + q_{c_1} + c_{1, \geq 4}\, , \qquad c_0 = - \varepsilon ({\bar p}_1)_x - \varepsilon^2 ({\bar p}_2)_x + q_{c_0} + c_{0, \geq 4}\, , \end{equation} where we define \begin{gather} \label{def qc1 qc0} q_{c_1} := q + 4 \beta_{xxx} + (a_1)_{xx} \,, \quad q_{c_0} := q_x + \beta_{xxxx}, \\ \label{def q} q := \varepsilon ( p_1 - B^{-1} p_1) + \varepsilon ( {\bar p}_1 - p_1) + \varepsilon^2 ( p_2 - B^{-1} p_2 ) + \varepsilon^2 ( {\bar p}_2 - p_2) - B^{-1} q_{>2} + {\cal D}_\omega \beta \,. \end{gather}
\begin{remark} \label{rem:qc1 qc0} The functions $ q_{c_1}, q_{c_0} $ have zero average in $ x $ (see Remarks \ref{remarkp1p2}, \ref{media beta} and Lemma \ref{p3 zero average}). \end{remark}
\begin{lemma} \label{lemma:stime scorporo}
The functions $\bar p_k - p_k$, $k = 1, 2$, and $q_{c_m}$, $m = 0, 1$, satisfy \begin{alignat}{2} \label{stima bar pk - pk}
& \| {\bar p}_k - p_k \|_s^{\mathrm{Lip}(\g)}
\leq_s \| {\mathfrak{I}}_\delta \|_{s}^{\mathrm{Lip}(\g)} , \quad &
& \| \partial_i ({\bar p}_k - p_k)[ \widehat \imath] \|_s
\leq_s \| \widehat \imath \|_{s} + \| {\mathfrak{I}}_\delta \|_{s} \| \widehat \imath \|_{s_0}\,, \\ \label{stima q}
& \| q_{c_m} \|_s^{\mathrm{Lip}(\g)}
\leq_s \varepsilon^{5}\gamma^{-1} + \varepsilon \| {\mathfrak I}_\delta\|_{s + \sigma}^{\mathrm{Lip}(\g)}\,, \quad &
& \| \partial_i q_{c_m}[\widehat \imath ] \|_s^{\mathrm{Lip}(\g)}
\leq_s \varepsilon \big( \| \widehat \imath \|_{s + \sigma} + \|{\mathfrak I}_\delta \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma} \big)\,. \end{alignat} \end{lemma}
\begin{proof} The bound \eqref{stima bar pk - pk} follows from \eqref{def bar pi}, \eqref{a1p1p2}, \eqref{T0}, \eqref{ansatz delta}. Then use \eqref{stima bar pk - pk}, \eqref{commu B p1 p2}-\eqref{stima derivata z0 - B-1 z0}, \eqref{stima beta}, \eqref{stima a1} to prove \eqref{stima q}. The largest contribution comes from $ \varepsilon ({\bar p}_1 - p_1 ) $. \end{proof}
We now apply the transformation $\mathcal{T}$ defined in \eqref{gran tau}, whose goal is to make the space average of the coefficient in front of $ \partial_y $ constant in $\vartheta$.
Consider the change of the space variable $ z = y + p(\vartheta ) $ which induces on $ H^s_{S^\bot} (\mathbb T^{\nu+1}) $ the operators \begin{equation}\label{gran tau}
({\cal T} w)(\vartheta, y ) := w(\vartheta , y + p(\vartheta)) \, , \quad
({\cal T}^{-1} h) (\vartheta ,z ) = h(\vartheta, z - p(\vartheta))
\end{equation} (which are a particular case of those used in section \ref{step1}). The differential operators become $ {\cal T}^{-1} \omega \cdot \partial_{\vartheta} {\cal T} = \omega \cdot \partial_{\vartheta} + \{ \omega \cdot \partial_{\vartheta} p (\vartheta) \} \partial_z $, $ {\cal T}^{-1} \partial_{y} {\cal T} = \partial_{z} $. Since $\mathcal{T}, \mathcal{T}^{-1}$ commute with $ \Pi_S^\bot $, we get \begin{align} \label{L3 KdV} \mathcal{L}_3 & := {\cal T}^{-1}{\cal L}_2 {\cal T} = \Pi_S^\bot \big(\omega \cdot \partial_{\vartheta}
+ m_3 \partial_{zzz} + d_1 \partial_z + d_0 \big) \Pi_S^\bot
+ {\mathfrak R}_3 \,, \\ \label{d1d0R3} d_1 & := ({\cal T}^{-1} c_1) + \omega \cdot \partial_{\vartheta} p \, , \qquad d_0 := {\cal T}^{-1} c_0 \, , \qquad {\mathfrak R}_3 := {\cal T}^{-1} {\mathfrak R}_2 {\cal T}. \end{align} We choose \begin{equation} \label{def m1 p Kdv} m_1 := \frac{1}{(2\pi)^{\nu + 1}} \int_{\mathbb T^{\nu + 1}} c_1 d\vartheta dy \, , \quad p := (\omega \cdot \partial_\vartheta)^{-1} \Big( m_1 - \frac{1}{2 \pi} \int_{\mathbb T} c_1 d y \Big) \, , \end{equation} so that $\frac{1}{2 \pi} \int_{\mathbb T} d_1 (\vartheta, z) \, dz = m_1$ for all $\vartheta \in \mathbb T^\nu$. Note that, by \eqref{tilde c1 KdVbis}, \begin{equation} \label{media migliore} \int_\mathbb T c_1(\vartheta,y) \, dy = \int_\mathbb T c_{1, \geq 4}(\vartheta,y) \, dy\,, \quad \omega \cdot \partial_{\vartheta} p(\vartheta) = m_1 - \frac{1}{2\pi} \int_\mathbb T c_{1, \geq 4} (\vartheta,y)\, dy \end{equation} because $\bar p_1, \bar p_2, q_{c_1}$ have all zero space-average. Also note that $\mathfrak R_3$ has the form \eqref{forma buona con gli integrali}. Since ${\cal T}$ is symplectic, the operator ${\cal L}_3$ in \eqref{L3 KdV} is Hamiltonian. \begin{remark} We require Hypothesis (${\mathtt S}1$) so that the function $ q_{>2} $ has zero space average (see Lemma \ref{p3 zero average}). If $ q_{>2} $ did not have zero average, then $ p $ in \eqref{def m1 p Kdv} would have size $O(\varepsilon^3 \gamma^{-1})$ (see \eqref{def p3}) and, since $\mathcal{T}^{-1} - I = O(\varepsilon^3 \gamma^{-1}) $, the function $ \tilde d_0 $ in \eqref{d0 tilde KdV} would satisfy $ \tilde d_0 = O(\varepsilon^4 \gamma^{-1}) $. Therefore it would remain a term of order $ \partial_x^0 $ which is not perturbative for the reducibility scheme of section \ref{subsec:mL0 mL5}. \end{remark}
We put in evidence the terms of size $ \varepsilon, \varepsilon^2 $ in $ d_0 $, $ d_1 $, $ {\mathfrak R}_3 $. Recalling \eqref{d1d0R3}, \eqref{tilde c1 KdVbis}, we split \begin{equation} \label{d1 d0 KdV} d_1 = - \varepsilon {\bar p}_1 - \varepsilon^2 {\bar p}_2 + \tilde d_{1} \, , \quad d_0 = - \varepsilon ({\bar p}_1)_x - \varepsilon^2 ({\bar p}_2)_x + \tilde d_0 \, , \quad {\mathfrak R}_3 = - \varepsilon^2 \Pi_S^\bot \partial_x \bar {\cal R}_2 + \widetilde \mathcal{R}_{*} \end{equation} where $ \bar {\cal R}_2 $ is obtained replacing $ v_\delta $ with ${\bar v}$ in $ {\cal R}_2 $ (see \eqref{R nullo1R2}), and \begin{align} \label{d1 tilde KdV} \tilde d_1 & := \varepsilon (\bar p_1 - {\cal T}^{-1}{\bar p}_1 ) + \varepsilon^2 (\bar p_2 - {\cal T}^{-1}{\bar p}_2 ) + {\cal T}^{-1} ( q_{c_1} + c_{1, \geq 4}) + \omega \cdot \partial_{\vartheta} p , \\ \label{d0 tilde KdV} \tilde d_0 & := \varepsilon ( \bar p_1 - {\cal T}^{-1}{\bar p}_1 )_x + \varepsilon^2 ( \bar p_2 - {\cal T}^{-1} {\bar p}_2 )_x + {\cal T}^{-1} ( q_{c_0} + c_{0, \geq 4} ), \\ \label{tilde mR 3} \widetilde \mathcal{R}_{*} & := \mathcal{T}^{-1} \mathcal{R}_{*} \mathcal{T} + \varepsilon^2 \Pi_S^\bot \partial_x (\mathcal{R}_2 - \mathcal{T}^{-1} \mathcal{R}_2 \mathcal{T}) + \varepsilon^2 \Pi_S^\bot \partial_x ( \bar {\cal R}_2 - {\cal R}_2 ), \end{align} and $ \mathcal{R}_{*} $ is defined in \eqref{mathfrakR2}. We have also used that ${\cal T}^{-1}$ commutes with $\partial_x$ and with $\Pi_S^\bot$.
\begin{remark}\label{d1 media} The space average $ \frac{1}{2 \pi} \int_{\mathbb T} \tilde d_1 (\vartheta, z) \, dz = \frac{1}{2 \pi} \int_{\mathbb T} d_1 (\vartheta, z) \, dz = m_1$ for all $\vartheta \in \mathbb T^\nu$. \end{remark}
\begin{lemma} \label{lemma:stime coeff mL3}
There is $ \sigma := \sigma(\nu,\tau ) > 0 $ (possibly larger than in Lemma \ref{lemma:stime coeff mL2}) such that \begin{alignat}{2} \label{stima m1}
| m_1 |^{\mathrm{Lip}(\g)} & \leq C \varepsilon^4 ,
\quad & | \partial_i m_1 [\widehat \imath ]| & \leq C \varepsilon^{b+2} \| \widehat{\imath} \|_{s_0 + \sigma} \\ \label{stima p}
\| p \|_s^{\mathrm{Lip}(\g)} & \leq_s \varepsilon^4 \gamma^{-1} + \varepsilon^{b + 2} \gamma^{- 1} \| {\mathfrak{I}}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)}\,,
\quad & \| \partial_i p [\widehat \imath] \|_s
& \leq_s \varepsilon^{b + 2} \gamma^{-1} \big( \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma}\big)\,, \\ \label{tilde d1 d0 KdV}
\| \tilde d_k \|_s^{\mathrm{Lip}(\g)} & \leq_s \varepsilon^{5} \gamma^{-1} + \varepsilon \| {\mathfrak{I}}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} \,,
\quad & \| \partial_i \tilde d_k [\widehat \imath] \|_s
& \leq_s \varepsilon \big(\| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta \|_{s + \sigma}
\| \widehat \imath \|_{s_0 + \sigma} \big) \end{alignat} for $k = 0,1$. Moreover, in the matrix $ s $-decay norm (see \eqref{matrix decay norm}), \begin{align}\label{nuove R3}
|\widetilde{\cal R}_{*}|_s^{{\mathrm{Lip}(\g)}} & \leq_s \varepsilon^{3} + \varepsilon^{2}\| {\mathfrak I}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} \, , \quad
|\partial_i \widetilde{\cal R}_{*} [\widehat \imath ]|_s \leq_s \varepsilon^{2} \| \widehat \imath \|_{s + \sigma} + \varepsilon^{2 b - 1} \| {\mathfrak I}_\delta\|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma} \, . \end{align} The transformations ${\cal T}$, ${\cal T}^{-1}$ satisfy \eqref{stima cal A bot}, \eqref{stima derivata cal A bot}. \end{lemma}
\begin{proof} The estimates \eqref{stima m1}, \eqref{stima p} follow by \eqref{def m1 p Kdv},\eqref{tilde c1 KdVbis},\eqref{media migliore}, and the bounds for $ c_{1, \geq 4}, c_{0, \geq 4} $ in Lemma \ref{lemma:stime coeff mL2}. The estimates \eqref{tilde d1 d0 KdV} follow similarly by \eqref{stima q}, \eqref{media migliore}, \eqref{stima p}. The estimates \eqref{nuove R3} follow because $ \mathcal{T}^{-1} \mathcal{R}_{*} \mathcal{T} $ satisfies the bounds \eqref{stima cal R*} like $ \mathcal{R}_{*} $ does (use Lemma \ref{remark : decay forma buona resto} and \eqref{stima p}) and
$ |\varepsilon^2 \Pi_S^\bot \partial_x ( \bar {\cal R}_2 - {\cal R}_2 )|_s^{\mathrm{Lip}(\g)} \leq_s \varepsilon^2 \| {\mathfrak I}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} $. \end{proof} It is sufficient to estimate $ \widetilde \mathcal{R}_{*} $ (which has the form \eqref{forma buona con gli integrali}) only in the $ s $-decay norm (see \eqref{nuove R3}) because the next transformations will preserve it. Such norms are used in the reducibility scheme of section \ref{subsec:mL0 mL5}.
\subsection{Linear Birkhoff normal form. Step 1} \label{BNF:step1}
Now we eliminate the terms of order $ \varepsilon$ and $ \varepsilon^2 $ of $ {\cal L}_3 $. This step is different from the reducibility steps that we shall perform in section \ref{subsec:mL0 mL5}, because the Diophantine constant $ \gamma = o(\varepsilon^2 ) $ (see \eqref{omdio}) and so terms of size $ O( \varepsilon), O(\varepsilon^2) $ are not perturbative. This reduction is possible thanks to the special form of the terms $ \varepsilon {\cal B}_1$, $ \varepsilon^2 {\cal B}_2 $ defined in \eqref{def cal B1 B2}: the harmonics of $ \varepsilon {\cal B}_1$ and of $ \varepsilon^2 T $ in \eqref{mL4 T R4} which could correspond to a small divisor vanish, see Corollary \ref{B1 zero KdV} and Lemma \ref{pezzo epsilon 2 A}. In this section we eliminate the term $ \varepsilon {\cal B}_1 $. In section \ref{BNF:step2} we eliminate the terms of order $\varepsilon^2$.
Note that, since the previous transformations $ \Phi $, $ B $, $ {\cal T} $ are $ O(\varepsilon^4 \gamma^{-1} ) $-close to the identity, the terms of order $\varepsilon $ and $ \varepsilon^2 $ in $ {\cal L}_3 $ are the same as in the original linearized operator.
We first collect all the terms of order $ \varepsilon $ and $ \varepsilon^2 $ in the operator $ {\cal L}_3 $ defined in \eqref{L3 KdV}. By \eqref{d1 d0 KdV}, \eqref{R nullo1R2}, \eqref{def bar pi} we have, renaming $ \vartheta = \varphi $, $ z = x $, $$
\mathcal{L}_3 = \Pi_S^\bot \big( \omega \cdot \partial_\varphi
+ m_3 \partial_{xxx}+ \varepsilon {\cal B}_1 + \varepsilon^2 {\cal B}_2 +
{\tilde d}_1 \partial_x + {\tilde d}_0 \big) \Pi_S^\bot
+ {\widetilde {\cal R}}_{*} $$ where $ {\tilde d}_1 $, $ {\tilde d}_0 $, $ {\widetilde {\cal R}}_{*} $ are defined in \eqref{d1 tilde KdV}-\eqref{tilde mR 3} and (recall also \eqref{def pi 0}) \begin{equation} \label{def cal B1 B2} {\cal B}_1 h := - 6 \partial_x ( {\bar v} h), \quad {\cal B}_2 h := - 6 \partial_x \{ {\bar v} \Pi_S [ (\partial_x^{-1} {\bar v} )\,\partial_x^{-1} h ] + h \pi_0 [ ( \partial_x^{-1} {\bar v} )^2 ] \} +
6 \pi_0 \{ (\partial_x^{-1} {\bar v}) \Pi_S [ {\bar v} h ] \}. \end{equation} Note that ${\cal B}_1$ and ${\cal B}_2$ are the linear Hamiltonian vector fields of $ H_{S}^\bot $
generated, respectively, by the Hamiltonian $ z \mapsto 3 \int_\mathbb T v z^2 $ in \eqref{H3tilde}, and the fourth order Birkhoff Hamiltonian $ {\cal H}_{4,2}$ in \eqref{mH3 mH4} at $ v = \bar v $.
We transform $ {\cal L}_3 $ by a symplectic operator $ \Phi_1 : H_{S^\bot}^s(\mathbb T^{\nu + 1}) \to H_{S^\bot}^s(\mathbb T^{\nu + 1}) $ of the form \begin{equation}\label{Phi_1} \Phi_1 := {\rm exp}(\varepsilon A_1) = I_{H_S^\bot} + \varepsilon A_1 + \varepsilon^2 \frac{A_1^2}{2} + \varepsilon^3 \widehat A_1, \quad \widehat A_1 := {\mathop \sum}_{k \geq 3} \frac{\varepsilon^{k - 3}}{k !} A_1^k \, , \end{equation} where $ A_1(\varphi) h = {\mathop \sum}_{j,j' \in S^c} ( A_1)_j^{j'}(\varphi) h_{j'} e^{{\mathrm i} j x} $ is a Hamiltonian vector field. The map $ \Phi_1 $ is symplectic, because it is the time-1 flow of a Hamiltonian vector field. Therefore \begin{align}\label{L3 new KdV} & {\cal L}_3 \Phi_1 - \Phi_1 \Pi_S^\bot ( {\cal D}_\omega + m_3 \partial_{xxx} ) \Pi_S^\bot \\ & = \Pi_S^\bot ( \varepsilon \{ {\cal D}_\omega A_1 + m_3 [\partial_{xxx}, A_1] + {\cal B}_1 \} + \varepsilon^2 \{ {\cal B}_1 A_1 + {\cal B}_2 + \frac12 m_3 [\partial_{xxx}, A_1^2 ] + \frac12 ({\cal D}_\omega A_1^2)\} + {\tilde d}_1 \partial_x + R_3 ) \Pi_S^\bot \nonumber \end{align} where \begin{align}\label{def R3} R_3 & := {\tilde d}_1 \partial_x (\Phi_1 - I) \! +\! {\tilde d}_0 \Phi_1\! +\! \widetilde {\cal R}_{*} \Phi_1\! + \! \varepsilon^2 {\cal B}_2 (\Phi_1 - I) \! + \! \varepsilon^3 \big\{ {\cal D}_\omega \widehat A_1 \! + \! m_3 [\partial_{xxx}, \widehat A_1] \! + \! \frac12 {\cal B}_1 A_1^2 \!+ \! \varepsilon {\cal B}_1 \widehat A_1 \big\}\, . \end{align}
\begin{remark} $ R_3 $ no longer has the form \eqref{forma buona con gli integrali}. However $ R_3 = O( \partial_x^0 ) $ because $ A_1 = O( \partial_x^{-1} ) $ (see Lemma \ref{lemma:Dx A bounded}), and therefore $\Phi_1 - I_{H_S^\bot} = O(\partial_x^{-1})$. Moreover the matrix decay norm of $ R_3 $ is $ o(\varepsilon^2) $. \end{remark}
In order to eliminate the order $\varepsilon$ from \eqref{L3 new KdV}, we choose \begin{equation}\label{cal A 1} ( A_1)_j^{j'}(l) := \begin{cases} - \dfrac{( {\cal B}_1)_{j}^{j'}(l)}{{\mathrm i} (\omega \cdot l + m_3( j'^3 - j^3))} & \text{if} \ \bar{\omega} \cdot l + j'^3 - j^3 \neq 0 \, , \\ 0 & \text{otherwise}, \end{cases} \qquad j, j' \in S^c, \ l \in \mathbb Z^\nu. \end{equation} This definition is well posed. Indeed, by \eqref{def cal B1 B2} and \eqref{def bar pi}, \begin{equation}\label{def cal B1 b} ( {\cal B}_1)_{j}^{j'}(l):= \begin{cases} -6 {\mathrm i} j \sqrt{\xi_{j - j'}} & \text{if} \ j - j' \in S\,, \ \ l = \ell(j - j') \\ 0 & \text{otherwise}. \end{cases} \end{equation}
In particular $ ( {\cal B}_1)_{j}^{j'}(l) = 0 $ unless $ | l | \leq 1 $. Thus, for $ \bar{\omega} \cdot l + j'^3 - j^3 \neq 0 $, the denominators in \eqref{cal A 1} satisfy \begin{align}
|\omega \cdot l +m_3( j'^3 - j^3)|
& = | m_3 (\bar \omega \cdot l + j'^3 - j^3) + ( \omega - m_3 \bar \omega ) \cdot l | \nonumber \\ &
\geq |m_3| | \bar \omega \cdot l + j'^3 - j^3 |
- | \omega - m_3 \bar \omega | |l| \geq 1/2 \, , \quad \forall |l| \leq 1 \, , \label{BNFdeno} \end{align} for $ \varepsilon $ small, because the nonzero integer satisfies $ | \bar{\omega} \cdot l + j'^3 - j^3| \geq 1 $, and by \eqref{stima m3} and $ \omega = \bar \omega + O(\varepsilon^2) $.
The operator $ A_1 $ defined in \eqref{cal A 1} is a Hamiltonian vector field, like $\mathcal{B}_1$.
\begin{remark} \label{rem:Ham solving homolog} This is a general fact: the denominators $ \delta_{l,j,k} := {\mathrm i} (\omega \cdot l + m_3( k^3 - j^3)) $ satisfy $ \overline{ \delta_{l,j,k} } = \delta_{-l,k,j} $ and an operator $G(\varphi)$ is self-adjoint if and only if its matrix elements satisfy $ \overline{ G_j^k(l) } = G_k^j(-l) $, see \cite{BBM}-Remark 4.5. In a more intrinsic way, we could solve the homological equation of this Birkhoff step directly for the Hamiltonian function whose flow generates $ \Phi_1 $. \end{remark}
\begin{lemma} \label{lem:cubetto} If $j,j' \in S^c$, $j - j' \in S$, $l = \ell(j - j')$, then $ \bar{\omega} \cdot l + j'^3 - j^3 = 3 j j' (j' - j) \neq 0 $. \end{lemma}
\begin{proof} We have $ \bar \omega \cdot l = \bar \omega \cdot \ell( j - j' ) = (j-j')^3$ because $ j - j' \in S$ (see \eqref{bar omega} and \eqref{del ell}). Hence $ \bar{\omega} \cdot l + j'^3 - j^3 = (j - j')^3 + j'^3 - j^3 = 3 j j' (j' - j) $. Note that $ j, j' \neq 0 $ because $ j, j' \in S^c $, and $ j - j' \neq 0 $ because $ j - j' \in S $, so that $ 3 j j' (j' - j) \neq 0 $. \end{proof}
\begin{corollary}\label{B1 zero KdV} Let $j, j' \in S^c$. If $\bar{\omega} \cdot l + j'^3 - j^3 = 0$ then $( {\cal B}_1)_{j}^{j'}(l) = 0$. \end{corollary}
\begin{proof} If $({\cal B}_1)_{j}^{j'}(l) \neq 0$ then $j - j' \! \in \! S, l = \ell(j - j')$ by \eqref{def cal B1 b}. Hence $\bar{\omega} \cdot l + j'^3 - j^3 \! \neq 0$ by Lemma \ref{lem:cubetto}. \end{proof}
By \eqref{cal A 1} and the previous corollary, the term of order $\varepsilon$ in \eqref{L3 new KdV} is \begin{equation}\label{primo termine BNF1} \Pi_S^\bot \big( {\cal D}_\omega A_1 + m_3 [\partial_{xxx}, A_1] + {\cal B}_1 \big) \Pi_S^\bot = 0 \, . \end{equation} We now estimate the transformation $ A_1 $.
\begin{lemma}\label{lem: A decay} $(i)$ For all $l \in \mathbb Z^\nu$, $j,j' \in S^c$, \begin{equation}\label{Acoef}
| (A_1)_{j}^{j'}(l)| \leq C (| j | + | j' |)^{-1}\, , \quad
| (A_1)_{j}^{j'}(l)|^{\rm lip} \leq \varepsilon^{-2} (|j| + |j'|)^{-1} \,. \end{equation}
$(ii)$ $ (A_1)_j^{j'}(l) = 0$ for all $l \in \mathbb Z^\nu$, $j,j' \in S^c$ such that $|j - j'| > C_S $, where $C_S := \max\{ |j| : j \in S \}$. \end{lemma}
\begin{proof}
$(i)$ By \eqref{cal A 1} and \eqref{def cal B1 b}, $ (A_1)_j^{j'}(l) = 0$, $ \forall |l| > 1$.
Since $ | \omega | \leq |\bar \omega | + 1 $, one has, for $ |l| \leq 1 $, $ j \neq j' $, \[
|\omega \cdot l + m_3 (j'^3 - j^3)|
\geq |m_3||j'^3 - j^3| - |\omega \cdot l|
\geq \frac14 (j'^2 + j^2) - |\omega| \geq \frac18 (j'^2 + j^2) \, , \quad \forall (j'^2 + j^2) \geq C, \] for some constant $C > 0$. Moreover, recalling that also \eqref{BNFdeno} holds, we deduce that for $ j \neq j' $, \begin{equation} \label{lower bound}
(A_1)_j^{j'}(l) \neq 0 \quad \Rightarrow \quad
|\omega \cdot l + m_3 (j'^3 - j^3)| \geq c ( | j | + | j' | )^2 \, . \end{equation} On the other hand, if $ j = j' $, $ j \in S^c$, the matrix entry $ (A_1)_j^j (l) = 0 $, $ \forall l \in \mathbb Z^\nu$, because $ ({\cal B}_1)_j^j (l) = 0 $ by \eqref{def cal B1 b} (recall that $ 0 \notin S $). Hence \eqref{lower bound} holds for all $ j, j' $. By \eqref{cal A 1}, \eqref{lower bound}, \eqref{def cal B1 b} we deduce the first bound in \eqref{Acoef}.
The Lipschitz bound follows similarly (use also $ |j - j'| \leq C_S $). $(ii)$ follows by \eqref{cal A 1}-\eqref{def cal B1 b}. \end{proof}
The previous lemma means that $ A_1 = O(| \partial_x|^{-1})$. More precisely we deduce that
\begin{lemma} \label{lemma:Dx A bounded}
$ | A_1 \partial_x |_s^{\mathrm{Lip}(\g)} + | \partial_x A_1 |_s^{\mathrm{Lip}(\g)} \leq C(s) $. \end{lemma}
\begin{proof} Recalling the definition of the (space-time) matrix norm in \eqref{decayTop},
since $(A_1)_{j_1}^{j_2}(l) = 0$ outside the set of indices $|l| \leq 1, |j_1 - j_2| \leq C_S$, we have \begin{align*}
| \partial_x A_1 |_s^2
& = \sum_{|l| \leq 1, \, |j| \leq C_S}
\Big( \sup_{j_1 - j_2 = j} |j_1| | (A_1)_{j_1}^{j_2}(l)| \Big)^2 \langle l,j \rangle^{2s} \leq C(s) \end{align*}
by Lemma \ref{lem: A decay}. The estimates for $ |A_1 \partial_x |_s $ and the Lipschitz bounds follow similarly. \end{proof}
It follows that the symplectic map $ \Phi_1 $ in \eqref{Phi_1} is invertible for $ \varepsilon $ small, with inverse \begin{equation}\label{A1 check}
\Phi_1^{-1} = {\rm exp}(-\varepsilon A_1) = I_{H_S^\bot} + \varepsilon {\check A}_1 \, , \
{\check A}_1 := {\mathop \sum}_{n \geq 1} \frac{\varepsilon^{n-1}}{n !} (-A_1)^n \, , \
| {\check A}_1 \partial_x |_s^{\mathrm{Lip}(\g)} + | \partial_x {\check A}_1 |_s^{\mathrm{Lip}(\g)} \leq C(s) \, . \end{equation} Since $ A_1 $ solves the homological equation \eqref{primo termine BNF1}, the $ \varepsilon $-term in \eqref{L3 new KdV} is zero, and, with a straightforward calculation, the $ \varepsilon^2 $-term simplifies to $ {\cal B}_2 + \frac12 [{\cal B}_1, A_1] $. We obtain the Hamiltonian operator \begin{align} \label{bernardino1} {\cal L}_4 & := \Phi_1^{-1} {\cal L}_3 \Phi_1 = \Pi_S^\bot ( {\cal D}_\omega + m_3 \partial_{xxx} + {\tilde d}_1 \partial_x + \varepsilon^2 \{ {\cal B}_2 + \tfrac12 [{\cal B}_1, A_1] \} + \tilde{R}_4 ) \Pi_S^\bot \\ \label{bernardino 2} {\tilde R}_4 & := (\Phi_1^{-1} - I) \Pi_S^\bot [ \varepsilon^2 ( {\cal B}_2 + \tfrac12 [{\cal B}_1, A_1] ) + {\tilde d}_1 \partial_x ] + \Phi_1^{-1} \Pi_S^\bot R_3 \, . \end{align}
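For the reader's convenience we write the straightforward calculation producing the $ \varepsilon^2 $-term in \eqref{bernardino1}. By the Leibniz rule, $ {\cal D}_\omega A_1^2 = ({\cal D}_\omega A_1) A_1 + A_1 ({\cal D}_\omega A_1) $ and $ [\partial_{xxx}, A_1^2] = [\partial_{xxx}, A_1] A_1 + A_1 [\partial_{xxx}, A_1] $. Since $ A_1 $ solves the homological equation \eqref{primo termine BNF1}, i.e. $ {\cal D}_\omega A_1 + m_3 [\partial_{xxx}, A_1] = - {\cal B}_1 $ acting on $ H_{S^\bot}^s $, we get $$ \frac12 ({\cal D}_\omega A_1^2) + \frac12 m_3 [\partial_{xxx}, A_1^2] = - \frac12 \big( {\cal B}_1 A_1 + A_1 {\cal B}_1 \big) \, , $$ so that the $ \varepsilon^2 $-term in \eqref{L3 new KdV} becomes $ {\cal B}_1 A_1 + {\cal B}_2 - \frac12 {\cal B}_1 A_1 - \frac12 A_1 {\cal B}_1 = {\cal B}_2 + \frac12 [{\cal B}_1, A_1] $.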
We split $ A_1 $ defined in \eqref{cal A 1}, \eqref{def cal B1 b} into $ A_1 = \bar{A}_1 + \widetilde{A}_1$ where, for all $ j, j' \in S^c $, $l \in \mathbb Z^\nu$, \begin{equation}\label{bar A1} ( {\bar A}_1)_j^{j'}(l) := \dfrac{6 j \sqrt{\xi_{j - j'}}}{ \bar\omega \cdot l + j'^3 - j^3} \qquad \text{if } \ \bar{\omega} \cdot l + j'^3 - j^3 \neq 0, \ \ j-j' \in S , \ \ l = \ell(j - j'), \end{equation} and $( {\bar A}_1)_j^{j'}(l) := 0$ otherwise. By Lemma \ref{lem:cubetto}, for all $ j , j' \in S^c $, $l \in \mathbb Z^\nu$, $ ( {\bar A}_1)_j^{j'}(l) = \frac{2 \sqrt{\xi_{j - j'}}}{ j' ( j' - j )} $ if $ j-j' \in S $, $ l = \ell(j - j') $, and $ ( {\bar A}_1)_j^{j'}(l) = 0 $ otherwise, namely (recall the definition of $ \bar v $ in \eqref{def bar pi}) \begin{equation}\label{bar A1 esplicita} {\bar A}_1h = 2 \Pi_S^\bot [( \partial_x^{-1} {\bar v} )(\partial_x^{-1} h)] \, , \quad \forall h \in H_{S^\bot}^s(\mathbb T^{\nu+1}) \,. \end{equation} The difference is \begin{equation}\label{widetilde A1} (\widetilde{A}_1)_j^{j'}(l) = (A_1 - \bar{A}_1)_j^{j'}(l) = - \frac{6 j \sqrt{\xi_{j - j'}}\big\{ ( \omega - \bar \omega) \cdot l + (m_3 - 1)(j'^3 - j^3) \big\}}{\big( \omega \cdot l + m_3(j'^3 - j^3) \big) \big(\bar\omega \cdot l + j'^3 - j^3 \big)} \end{equation} for $ j, j' \in S^c $, $ j - j' \in S $, $ l = \ell(j - j') $, and $ (\widetilde{A}_1)_j^{j'}(l) = 0 $ otherwise. Then, by \eqref{bernardino1}, \begin{equation}\label{mL4 T R4} {\cal L}_4 = \Pi_S^\bot \big( {\cal D}_\omega + m_3 \partial_{xxx} + {\tilde d}_1 \partial_x + \varepsilon^2 T + R_4 \big) \Pi_S^\bot \,, \end{equation} where \begin{equation}\label{T hamiltoniano} T := {\cal B}_2 + \frac12 [{\cal B}_1, {\bar A}_1] \,, \qquad R_4 := \frac{\varepsilon^2}{2} [{\cal B}_1, \widetilde A_1] + \tilde R_4\,. \end{equation} The operator $ T $ is Hamiltonian as $ {\cal B}_2 $, $ {\cal B}_1 $, $ {\bar A}_1 $ (the commutator of two Hamiltonian vector fields is Hamiltonian).
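We sketch the verification of the explicit expression \eqref{bar A1 esplicita}, recalling (see \eqref{def bar pi} and \eqref{def cal B1 b}) that $ \bar v $ has the Fourier expansion $ \bar v = \sum_{m \in S} \sqrt{\xi_m} \, e^{{\mathrm i} ( \ell(m) \cdot \varphi + m x )} $. For $ h = \sum_{j' \in S^c} h_{j'}(\varphi) e^{{\mathrm i} j' x} \in H_{S^\bot}^s(\mathbb T^{\nu+1}) $, $$ 2 ( \partial_x^{-1} {\bar v} )(\partial_x^{-1} h) = 2 \sum_{m \in S, \, j' \in S^c} \frac{\sqrt{\xi_m}}{{\mathrm i} m} \, \frac{h_{j'}}{{\mathrm i} j'} \, e^{{\mathrm i} ( \ell(m) \cdot \varphi + (m + j') x )} \, , $$ and the coefficient of $ h_{j'} e^{{\mathrm i} j x} $ with $ j = m + j' $, i.e. $ m = j - j' $, is $ - \frac{2 \sqrt{\xi_{j - j'}}}{(j - j') j'} = \frac{2 \sqrt{\xi_{j - j'}}}{j' (j' - j)} $, in agreement with the matrix representation of $ \bar A_1 $ above.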
\begin{lemma} \label{lemma:R4}
There is $ \sigma = \sigma(\nu,\tau ) > 0 $ (possibly larger than in Lemma \ref{lemma:stime coeff mL3}) such that \begin{align} \label{stima Lip R4}
| R_4 |_s^{{\rm Lip}(\gamma)}
& \leq_s \varepsilon^{5} \gamma^{-1} + \varepsilon \| {\mathfrak I}_\delta \|_{s + \sigma}^{{\rm Lip}(\gamma)} \,, \quad
| \partial_i R_4 [\widehat \imath] |_s
\leq_s \varepsilon \big( \| \widehat \imath \|_{s + \sigma} + \| {\mathfrak I}_\delta \|_{s + \sigma}
\| \widehat \imath \|_{ s_0 + \sigma} \big) \, .
\end{align} \end{lemma}
\begin{proof} We first estimate $ [{\cal B}_1, \widetilde A_1] =
({\cal B}_1 \partial_x^{-1}) (\partial_x {\widetilde A}_1)- ({\widetilde A}_1 \partial_x)( \partial_x^{-1} {\cal B}_1) $. By \eqref{widetilde A1}, $ | \omega - \bar \omega | \leq C \varepsilon^2 $ (as $ \omega \in \Omega_\varepsilon $ in \eqref{Omega epsilon}) and \eqref{stima m3}, arguing as in Lemmata \ref{lem: A decay}, \ref{lemma:Dx A bounded}, we deduce that
$ | {\widetilde A}_1 \partial_x |_s^{\mathrm{Lip}(\g)} + $ $ | \partial_x {\widetilde A}_1 |_s^{\mathrm{Lip}(\g)} \leq_s \varepsilon^2 $. By \eqref{def cal B1 B2}
the norms satisfy $ |{\cal B}_1 \partial_x^{-1}|_s^{\mathrm{Lip}(\g)} + |\partial_x^{-1} {\cal B}_1|_s^{\mathrm{Lip}(\g)} \leq C(s) $.
Hence $ \varepsilon^2 |[{\cal B}_1, {\widetilde A}_1]|_s^{\mathrm{Lip}(\g)} \leq_s \varepsilon^4 $. Finally \eqref{T hamiltoniano}, \eqref{bernardino 2}, \eqref{A1 check}, \eqref{def R3}, \eqref{tilde d1 d0 KdV}, \eqref{nuove R3},
and the interpolation estimate \eqref{interpm Lip} imply \eqref{stima Lip R4}. \end{proof}
\subsection{Linear Birkhoff normal form. Step 2}\label{BNF:step2}
The goal of this section is to remove the term $ \varepsilon^2 T$ from the operator $\mathcal{L}_4$ defined in \eqref{mL4 T R4}. We conjugate the Hamiltonian operator $\mathcal{L}_4$ via a symplectic map \begin{equation}\label{def Phi2} \Phi_2 := {\rm exp}(\varepsilon^2 A_2) = I_{H_S^\bot} + \varepsilon^2 A_2 + \varepsilon^4 \widehat A_2\,,\quad \widehat A_2 := {\mathop \sum}_{k \geq 2} \frac{\varepsilon^{2(k - 2)}}{k !} A_2^k \end{equation} where $A_2(\varphi) h = {\mathop \sum}_{j,j' \in S^c} ( A_2)_j^{j'}(\varphi) h_{j'} e^{{\mathrm i} j x} $
is a Hamiltonian vector field. We compute \begin{gather} \label{L4 diff} {\cal L}_4 \Phi_2 - \Phi_2 \Pi_S^\bot \big( {\cal D}_\omega + m_3 \partial_{xxx} \big) \Pi_S^\bot = \, \Pi_S^\bot ( \varepsilon^2 \{ {\cal D}_\omega A_2 + m_3 [\partial_{xxx}, A_2 ] + T \} + \tilde{d}_1 \partial_x + \tilde R_5 ) \Pi_S^\bot \,, \\ \label{def tilde R5} \tilde R_5 := \Pi_S^\bot \{ \varepsilon^4 ( ({\cal D}_\omega \widehat A_2) + m_3 [\partial_{xxx}, \widehat A_2] ) + (\tilde{d}_1 \partial_x + \varepsilon^2 T)(\Phi_2 - I) + R_4 \Phi_2 \} \Pi_S^\bot \, . \end{gather} We define \begin{equation}\label{A2} ( A_2)_j^{j'}(l) := - \dfrac{T_j^{j'}(l)}{ {\mathrm i} (\omega \cdot l + m_3 (j'^3 - j^3) )} \quad \text{if } \ \bar\omega \cdot l + j'^3 - j^3 \neq 0; \qquad ( A_2)_j^{j'}(l) := 0 \quad \text{otherwise.} \end{equation} This definition is well posed. Indeed, by \eqref{T hamiltoniano}, \eqref{def cal B1 b}, \eqref{bar A1}, \eqref{def cal B1 B2},
the matrix entries $ T_j^{j'} (l) = 0 $ for all $ | j - j' | > 2 C_S $, $l \in \mathbb Z^\nu $,
where $C_S := \max \{ |j| : j \in S \} $ as in Lemma \ref{lem: A decay}.
Also $ T_j^{j'} (l) = 0 $ for all $ j, j' \in S^c $, $ | l | > 2 $ (see also \eqref{forma funzionale B1 A1}, \eqref{B1B2 Fourier}, \eqref{B3 Fourier} below). Thus, arguing as in \eqref{BNFdeno}, if $ \bar\omega \cdot l + j'^3 - j^3 \neq 0 $, then
$ |\omega \cdot l + m_3 (j'^3 - j^3)| \geq 1 / 2 $. The operator $ A_2 $ is a Hamiltonian vector field because $ T $ is Hamiltonian and by Remark \ref{rem:Ham solving homolog}.
Now we prove that the Birkhoff map $\Phi_2$ completely removes the term $\varepsilon^2 T$.
\begin{lemma} \label{pezzo epsilon 2 A}
Let $j, j' \in S^c$. If $\bar \omega \cdot l + j'^3 - j^3 = 0$, then $T_j^{j'}(l) = 0$. \end{lemma}
\begin{proof} By \eqref{def cal B1 B2}, \eqref{bar A1 esplicita} we get $ {\cal B}_1 {\bar A}_1 h = - 12 \partial_x \{ \bar v \Pi_S^\bot[( \partial_x^{-1} \bar v )(\partial_x^{-1} h)] \} $, ${\bar A}_1{\cal B}_1 h = $ $ - 12 \Pi_S^\bot [ (\partial_x^{-1} \bar v) \Pi_S^\bot (\bar v h) ] $ for all $h \in H_{S^\bot}^s $, whence, recalling \eqref{def bar pi}, for all $ j, j' \in S^c $, $ l \in \mathbb Z^\nu $, \begin{equation}\label{forma funzionale B1 A1} ( [{\cal B}_1, {\bar A}_1])_{j}^{j'}(l) = 12 {\mathrm i} \!\!\! \sum_{\begin{subarray}{c} j_1, j_2 \in S, \, j_1 + j_2 = j - j' \\ j' + j_2 \in S^c, \, \ell(j_1) + \ell(j_2) = l \end{subarray}} \!\!\! \frac{ j j_1 -j' j_2 }{j' j_1 j_2} \sqrt{\xi_{j_1} \xi_{j_2}}\,,
\end{equation} If $([{\cal B}_1, {\bar A}_1])_j^{j'}(l) \neq 0 $ there are $ j_1, j_2 \in S $ such that $ j_1 + j_2 = j - j' $, $j' + j_2 \in S^c$, $ \ell(j_1) + \ell(j_2) = l $. Then \begin{equation} \label{con lo zero} \bar \omega \cdot l + j'^3 - j^3
= \bar \omega \cdot \ell(j_1 ) + \bar \omega \cdot \ell(j_2) + j'^3 - j^3
\stackrel{\eqref{del ell}}
= j_1^3 + j_2^3 + j'^3 - j^3\, . \end{equation} Thus, if $\bar \omega \cdot l + j'^3 - j^3 = 0$, Lemma \ref{lemma:interi} implies $ (j_1 + j_2 )( j_1 + j') ( j_2 + j' ) = 0 $. Now $ j_1 + j' $, $ j_2 + j' \neq 0 $ because $ j_1, j_2 \in S $, $ j' \in S^c $ and $ S $ is symmetric. Hence $ j_1 + j_2 = 0 $, which implies $ j = j' $ and $ l = 0 $ (the map $ \ell $ in \eqref{del ell} is odd). In conclusion, if $\bar \omega \cdot l + j'^3 - j^3 = 0$, the only nonzero matrix entry $ ([{\cal B}_1, {\bar A}_1])_j^{j'}(l) $ is \begin{equation}\label{parte diagonale zero} ([{\cal B}_1, \bar A_1])_j^j(0) \stackrel{\eqref{forma funzionale B1 A1}} = 24 {\mathrm i} \sum_{j_2 \in S, \, j_2 + j \in S^c } \xi_{j_2} {j_2^{-1}}. \end{equation} Now we consider $ {\cal B}_2 $ in \eqref{def cal B1 B2}. Split $ {\cal B}_2 = B_1 + B_2 + B_3 $, where $B_1 h := - 6 \partial_x \{ {\bar v} \Pi_S [ (\partial_x^{-1} {\bar v}) \partial_{x}^{-1} h ] \}$, $B_2 h := - 6 \partial_x \{ h \pi_0 [(\partial_x^{-1} {\bar v} )^2] \}$, $B_3 h := 6 \pi_0 \{ \Pi_S ({\bar v} h) \partial_x^{-1} {\bar v} \}$. Their Fourier matrix representation is \begin{gather}\label{B1B2 Fourier} (B_1)_j^{j'}(l) = 6 {\mathrm i} j \!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{\begin{subarray}{c} j_1, j_2 \in S, \, j_1 + j' \in S \\ j_1 + j_2 = j - j', \, \ell(j_1) + \ell(j_2) = l \end{subarray}} \!\!\!\! \!\!\!\!\!\!\! \frac{\sqrt{\xi_{j_1} \xi_{j_2}}}{ j_1 j'} \, , \qquad (B_2)_{j}^{j'}(l) = 6 {\mathrm i} j \!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{\begin{subarray}{c} j_1 , j_2 \in S, \, j_1 + j_2 \neq 0 \\ j_1 + j_2 = j - j', \, \ell(j_1) + \ell(j_2) = l \end{subarray}} \!\!\!\!\!\!\!\!\!\!\! \frac{\sqrt{\xi_{j_1} \xi_{j_2}}}{j_1 j_2} \,, \\ \label{B3 Fourier} (B_3)_j^{j'}(l) = 6 \!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{\begin{subarray}{c} j_1 , j_2 \in S, \, j_1 + j' \in S \\ j_1 + j_2 = j - j', \, \ell(j_1) + \ell(j_2) = l \end{subarray}} \!\!\!\!\!\!\!\!\!\!\!\!\! 
\frac{\sqrt{\xi_{j_1} \xi_{j_2}}}{{\mathrm i} j_2}\,, \qquad j,j' \in S^c, \ l \in \mathbb Z^\nu. \end{gather} We study the terms $ B_1 $, $ B_2 $, $ B_3$ separately. If $(B_1)_j^{j'}(l) \neq 0$, there are $j_1 , j_2 \in S$ such that $ j_1 + j_2 = j - j' $, $j_1 + j' \in S$, $ l = \ell(j_1) + \ell(j_2) $ and \eqref{con lo zero} holds. Thus, if $\bar \omega \cdot l + j'^3 - j^3 = 0$, Lemma \ref{lemma:interi} implies $ (j_1 + j_2) (j_1 + j')(j_2 + j') = 0 $, and, since $j' \in S^c $ and $ S $ is symmetric, the only possibility is $j_1 + j_2 = 0$. Hence $j = j'$, $l = 0$. In conclusion, if $\bar \omega \cdot l + j'^3 - j^3 = 0$, the only nonzero matrix element $ ( B_1 )_j^{j'}(l) $ is \begin{equation} \label{diag R0} (B_1)_j^j(0) = 6 {\mathrm i} \sum_{ j_1 \in S, \, j_1 + j \in S } \xi_{j_1} j_1^{-1} \,. \end{equation} By the same arguments, if $ (B_2)_j^{j'} (l) \neq 0 $ and $\bar \omega \cdot l + j'^3 - j^3 = 0$ we find $ (j_1 + j_2) (j_1 + j')(j_2 + j') = 0 $, which is impossible because also $ j_1 + j_2 \neq 0$. Finally, arguing as for $ B_1 $, if $\bar \omega \cdot l + j'^3 - j^3 = 0$, then the only nonzero matrix element $ ( B_3 )_j^{j'}(l) $ is \begin{equation} \label{diag R1 R2} (B_3)_j^j(0) = 6 {\mathrm i} \sum_{j_1 \in S, \, j_1 + j \in S } \xi_{j_1} j_1^{-1} \,. \end{equation} From \eqref{parte diagonale zero}, \eqref{diag R0}, \eqref{diag R1 R2} we deduce that, if $ \bar \omega \cdot l + j'^3 - j^3 = 0 $, then the only non zero elements $ (\frac12 [\mathcal{B}_1, \bar A_1] + B_1 + B_3)_j^{j'} (l) $ must be for $ (l, j, j') = (0, j, j) $. In this case, we get \begin{equation}\label{Tjj0} \frac12( [\mathcal{B}_1, \bar A_1])_j^j(0) + (B_1)_j^j(0) + (B_3)_j^j(0) = 12 {\mathrm i} \sum_{\begin{subarray}{c} j_1 \in S \\ j_1 + j \in S^c \end{subarray}} \frac{\xi_{j_1}}{ j_1} + 12 {\mathrm i} \sum_{\begin{subarray}{c} j_1 \in S \\ j_1 + j \in S \end{subarray}} \frac{\xi_{j_1}}{ j_1} = 12 {\mathrm i} \sum_{j_1 \in S} \frac{\xi_{j_1}}{ j_1} = 0 \end{equation} because
the case $j_1 + j = 0$ is impossible ($j_1 \in S$, $j \in S^c$ and $S$ is symmetric), and the function $S \ni j_1 \mapsto \xi_{j_1} / j_1 \in \mathbb R$ is odd.
The lemma follows by \eqref{T hamiltoniano}, \eqref{Tjj0}. \end{proof}
The choice of $ A_2 $ in \eqref{A2} and Lemma \ref{pezzo epsilon 2 A} imply that \begin{equation}\label{pezzo zero}
\Pi_S^\bot \big( {\cal D}_\omega A_2 + m_3 [\partial_{xxx}, A_2] + T \big) \Pi_S^\bot = 0 \, . \end{equation}
\begin{lemma}\label{A2 decay}
$ |\partial_x A_2|_s^{\mathrm{Lip}(\g)} + | A_2 \partial_x|_s^{\mathrm{Lip}(\g)} \leq C(s) $. \end{lemma}
\begin{proof} First we prove that the diagonal elements $ T_j^j (l) = 0 $ for all $ l \in \mathbb Z^\nu $. For $l = 0$, we have already proved that $T_j^j(0) = 0$ (apply Lemma \ref{pezzo epsilon 2 A} with $j = j'$, $l=0$). Moreover, in each term $[\mathcal{B}_1, \bar A_1]$, $B_1$, $B_2$, $B_3$ (see \eqref{forma funzionale B1 A1}, \eqref{B1B2 Fourier}, \eqref{B3 Fourier}) the sum is over $j_1 + j_2 = j - j'$, $l = \ell(j_1) + \ell(j_2)$. If $j = j'$, then $j_1 + j_2 = 0$, and $l = 0$. Thus $ T_j^j (l) = T_j^j (0) = 0 $.
For the off-diagonal terms $ j \neq j' $ we argue as in Lemmata \ref{lem: A decay}, \ref{lemma:Dx A bounded}, using that all the denominators $ |\omega \cdot l + m_3 ( j'^3 - j^3 ) | \geq c ( |j| + |j'|)^2 $. \end{proof}
For $ \varepsilon $ small, the map $ \Phi_2 $ in \eqref{def Phi2} is invertible and $ \Phi_2^{-1} = \exp(-\varepsilon^2 A_2 ) $. Therefore \eqref{L4 diff}, \eqref{pezzo zero} imply \begin{align} \label{L5 KdV} {\cal L}_5 & := \Phi_2^{-1} {\cal L}_4 \Phi_2 = \Pi_S^\bot ( {\cal D}_\omega + m_3 \partial_{xxx} + \tilde{d}_1 \partial_x + R_5 ) \Pi_S^\bot \,, \\ \label{R5} R_5 & := ( \Phi_2^{-1} - I) \Pi_S^\bot \tilde{d}_1 \partial_x + \Phi_2^{-1} \Pi_S^\bot {\tilde R}_5 \,. \end{align} Since $ A_2 $ is a Hamiltonian vector field, the map $\Phi_2$ is symplectic and so ${\cal L}_5$ is Hamiltonian. \begin{lemma} \label{lemma:R5} $R_5$ satisfies the same estimates \eqref{stima Lip R4} as $R_4$ (with a possibly larger $\sigma$). \end{lemma}
\begin{proof} Use \eqref{R5}, Lemma \ref{A2 decay}, \eqref{tilde d1 d0 KdV}, \eqref{def tilde R5}, \eqref{stima Lip R4} and the interpolation inequalities \eqref{multiplication Lip}, \eqref{interpm Lip}. \end{proof}
\subsection{Descent method}\label{step5}
The goal of this section is to transform $ {\cal L}_5 $ in \eqref{L5 KdV} so that the coefficient of $ \partial_x $ becomes constant. We conjugate $ {\cal L}_5 $ via a symplectic map of the form \begin{equation}\label{def descent} {\cal S} := {\rm exp}(\Pi_S^\bot (w \partial_x^{-1}))\Pi_S^\bot = \Pi_S^\bot \big( I + w \partial_x^{-1} \big) \Pi_S^\bot + \widehat {\cal S}\,,\quad \widehat {\cal S} := {\mathop \sum}_{k \geq 2} \frac{1}{k!} [\Pi_S^\bot (w \partial_x^{-1})]^k \Pi_S^\bot\,, \end{equation} where $ w : \mathbb T^{\nu+1} \to \mathbb R $ is a function.
Note that $\Pi_S^\bot (w \partial_x^{-1}) \Pi_S^\bot$ is the Hamiltonian vector field generated by
$ - \frac12 \int_\mathbb T w (\partial_x^{-1} h)^2\,dx$, $h \in H_S^\bot$. Recalling \eqref{def pi 0}, we calculate \begin{align} \label{L5 diff} & {\cal L}_5 {\cal S} - {\cal S} \Pi_S^\bot ( {\cal D}_\omega + m_3 \partial_{xxx} + m_1 \partial_x ) \Pi_S^\bot = \Pi_S^\bot ( 3 m_3 w_x + \tilde{d}_1 - m_1 ) \partial_x \Pi_S^\bot + \tilde R_6 \,, \\ & \tilde R_6 := \Pi_S^\bot \{ ( 3 m_3 w_{xx} + \tilde d_1 \Pi_S^\bot w - m_1 w ) \pi_0 + ( ({\cal D}_\omega w) + m_3 w_{xxx} + \tilde d_1 \Pi_S^\bot w_x ) \partial_x^{-1} + ({\cal D}_\omega \widehat{\cal S}) \notag \\ & \qquad \ \ + m_3 [\partial_{xxx}, \widehat{\cal S}] + \tilde{d}_1 \partial_x \widehat{\cal S} - m_1 \widehat{\cal S} \partial_x + R_5 {\cal S} \} \Pi_S^\bot \notag \end{align} where $\tilde R_6$ collects all the terms of order at most $\partial_x^0$. By Remark \ref{d1 media}, we solve $ 3 m_3 w_x + \tilde{d}_1 - m_1 = 0 $ by choosing $w := - (3 m_3)^{-1} \partial_x^{-1} ( \tilde{d}_1 - m_1 )$. For $ \varepsilon $ small, the operator $ {\cal S} $ is invertible and, by \eqref{L5 diff}, \begin{equation}\label{def L6} \mathcal{L}_6 := \mathcal{S}^{-1} \mathcal{L}_5 \mathcal{S} = \Pi_S^\bot ( {\cal D}_\omega + m_3 \partial_{xxx} + m_1 \partial_x ) \Pi_S^\bot + R_6 \,, \qquad R_6 := {\cal S}^{-1} \tilde R_6 \, . \end{equation} Since $ {\cal S} $ is symplectic, ${\cal L}_6$ is Hamiltonian (recall Definition \ref{operatore Hamiltoniano}).
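The choice of $ w $ is motivated by the following elementary computation: for $ h \in H_{S^\bot}^s $ (which has zero space average because $ 0 \notin S^c $), $$ m_3 [\partial_{xxx}, w \partial_x^{-1}] h = 3 m_3 w_x h_x + 3 m_3 w_{xx} h + m_3 w_{xxx} \partial_x^{-1} h \, , $$ so that the conjugation with $ {\cal S} $ produces exactly one new term of order $ \partial_x $, with coefficient $ 3 m_3 w_x $, while $ {\cal D}_\omega $ and the lower order terms contribute only operators of order at most $ \partial_x^0 $, collected in $ \tilde R_6 $. Imposing $ 3 m_3 w_x + \tilde d_1 - m_1 = 0 $ then leads to the choice of $ w $ above, which is well defined since, by Remark \ref{d1 media}, $ \tilde d_1 - m_1 $ has zero average.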
\begin{lemma}\label{lemma L6}
There is $ \sigma = \sigma(\nu,\tau ) > 0 $ (possibly larger than in Lemma \ref{lemma:R5}) such that $$
|{\cal S}^{\pm 1} - I|_s^{\mathrm{Lip}(\g)}
\leq_s \varepsilon^{5} \gamma^{-1} + \varepsilon \| {\mathfrak I}_\delta\|_{s + \sigma}^{\mathrm{Lip}(\g)}\,, \quad
|\partial_i {\cal S}^{\pm 1} [\widehat \imath ]|_s
\leq_s \varepsilon ( \| \widehat \imath\|_{s + \sigma} + \|{\mathfrak I}_\delta \|_{s + \sigma} \| \widehat \imath \|_{s_0 + \sigma} ). $$ The remainder $R_6$ satisfies the same estimates \eqref{stima Lip R4} as $R_4$. \end{lemma}
\begin{proof}
By \eqref{tilde d1 d0 KdV}, \eqref{stima m1}, \eqref{stima m3}, $\| w \|_s^{\mathrm{Lip}(\g)} \leq_s \varepsilon^{5} \gamma^{-1} + \varepsilon \| {\mathfrak I}_\delta\|_{s + \sigma}^{\mathrm{Lip}(\g)} $, and the lemma follows by \eqref{def descent}. Since $ \widehat { \cal S } = O(\partial_x^{-2})$, the commutator $ [ \partial_{xxx}, \widehat { \cal S }] = O(\partial_x^0 )$ and
$ | [ \partial_{xxx}, \widehat { \cal S }] |_s^{\mathrm{Lip}(\g)} \leq_s \| w \|_{s_0+3}^{\mathrm{Lip}(\g)} \| w \|_{s+3}^{\mathrm{Lip}(\g)} $. \end{proof}
\subsection{KAM reducibility and inversion of $ {\cal L}_{\omega} $} \label{subsec:mL0 mL5}
The coefficients $ m_3, m_1 $ of the operator $ {\cal L}_6 $ in \eqref{def L6} are constants, and the remainder $ R_6 $ is a bounded operator of order $ \partial_x^0 $ with small matrix decay norm, see \eqref{decay R6}. Then we can diagonalize $ {\cal L}_6 $ by applying the iterative KAM reducibility Theorem 4.2 in \cite{BBM} along the sequence of scales \begin{equation}\label{defN} N_n := N_{0}^{\chi^n}, \quad n = 0,1,2,\ldots, \quad \chi := 3/2, \quad N_0 > 0 \, . \end{equation} In section \ref{sec:NM}, the initial $ N_0 $ will (slightly) increase to infinity as $ \varepsilon \to 0 $, see \eqref{nash moser smallness condition}. The required smallness condition (see (4.14) in \cite{BBM}) is (written in the present notations) \begin{equation}\label{R6resto}
N_0^{C_0} | R_6 |_{s_0 + \beta}^{{\mathrm{Lip}(\g)}} \gamma^{-1} \leq 1 \end{equation} where $ \beta := 7 \tau + 6 $ (see (4.1) in \cite{BBM}), $ \tau $ is the diophantine exponent in \eqref{omdio} and \eqref{Omegainfty}, and the constant $ C_0 := C_0 (\tau , \nu ) > 0 $ is fixed in Theorem 4.2 in \cite{BBM}. By Lemma \ref{lemma L6}, the remainder $ R_6 $ satisfies the bound \eqref{stima Lip R4}, and using \eqref{ansatz delta} we get (recall \eqref{link gamma b}) \begin{equation} \label{decay R6}
| R_6|_{s_0 + \beta}^{{\mathrm{Lip}(\g)}} \leq C \varepsilon^{7 - 2 b} \gamma^{-1} = C \varepsilon^{3 - 2 a}, \qquad
| R_6 |_{s_0 + \beta}^{{\mathrm{Lip}(\g)}} \gamma^{-1} \leq C \varepsilon^{1 - 3 a} \, . \end{equation} We use that $ \mu $ in \eqref{ansatz delta} is assumed to satisfy $ \mu \geq \sigma + \beta $ where $ \sigma := \sigma (\tau, \nu ) $ is given in Lemma \ref{lemma L6}.
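For the reader's convenience we record the exponent arithmetic in \eqref{decay R6}: with $ \gamma = \varepsilon^{2 + a} $ (see \eqref{condizione-kam}), one has $ \varepsilon^{7 - 2b} \gamma^{-1} = \varepsilon^{5 - 2b - a} $, which equals $ \varepsilon^{3 - 2a} $ exactly when $ 2 b = 2 + a $, consistently with \eqref{link gamma b}; dividing once more by $ \gamma $ gives $ \varepsilon^{3 - 2a} \gamma^{-1} = \varepsilon^{1 - 3a} $. Hence the smallness condition \eqref{R6resto} reduces, for $ \delta_0 $ small, to $ N_0^{C_0} \varepsilon^{1 - 3a} \leq \delta_0 $ with $ a \in (0, 1/6) $, namely to \eqref{condizione-kam}.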
\begin{theorem} \label{teoremadiriducibilita} {\bf (Reducibility)} Assume that $\omega \mapsto i_\delta (\omega) $ is a Lipschitz function defined on some subset $\Omega_o \subset \Omega_\varepsilon $ (recall \eqref{Omega epsilon}), satisfying \eqref{ansatz delta} with $ \mu \geq \sigma + \beta $ where $ \sigma := \sigma (\tau, \nu) $ is given in Lemma \ref{lemma L6} and $ \beta := 7 \tau + 6 $. Then there exists $ \delta_{0} \in (0,1) $ such that, if \begin{equation}\label{condizione-kam} N_0^{C_0} \varepsilon^{7 - 2 b} \gamma^{-2} = N_0^{C_0} \varepsilon^{1 - 3 a} \leq \delta_{0} \, , \quad \gamma := \varepsilon^{2 + a} \, , \quad a \in (0,1/6) \, , \end{equation} then:
$(i)$ {\bf (Eigenvalues)}. For all $ \omega \in \Omega_\varepsilon $ there exists a sequence \begin{equation} \label{espressione autovalori} \mu_j^\infty(\omega) := \mu_j^\infty(\omega, i_\delta (\omega)) := {\mathrm i} \big( - {\tilde m}_3 (\omega) j^3 + {\tilde m}_1(\omega) j \big) + r_j^\infty(\omega), \quad j \in S^c \, , \end{equation} where $ {\tilde m}_3, {\tilde m}_1$ coincide with the coefficients $m_3, m_1$ of $ {\cal L}_6 $ in \eqref{def L6} for all $ \omega \in \Omega_o $, and \begin{align} \label{autofinali}
| {\tilde m}_3 - 1 |^{{\rm Lip}(\gamma)} + | {\tilde m}_1 |^{{\rm Lip}(\gamma)} \leq C \varepsilon^4 \, , \quad
| r^{\infty}_j |^{{\rm Lip}(\gamma)} & \leq C \varepsilon^{3 - 2 a} \, , \quad \ \forall j \in S^c \, , \end{align} for some $ C > 0 $. All the eigenvalues $\mu_j^{\infty}$ are purely imaginary. We define, for convenience, $ \mu_0^\infty (\omega) := 0 $.
$(ii)$ {\bf (Conjugacy)}. For all $\omega$ in the set \begin{equation} \label{Omegainfty} \Omega_\infty^{2\gamma} := \Omega_\infty^{2\gamma} (i_\delta) := \Big\{ \omega \in \Omega_o : \,
| {\mathrm i} \omega \cdot l + \mu^{\infty}_j (\omega) - \mu^{\infty}_{k} (\omega) |
\geq \frac{2 \gamma | j^{3} - k^{3} |}{ \langle l \rangle^{\tau}}, \, \forall l \in \mathbb Z^{\nu}, \, j ,k \in S^c \cup \{0\}\Big\} \end{equation} there is a real, bounded, invertible linear operator $\Phi_\infty(\omega) : H^s_{S^\bot} (\mathbb T^{\nu+1}) \to H^s_{S^\bot} (\mathbb T^{\nu+1}) $, with bounded inverse $\Phi_\infty^{-1}(\omega)$, that conjugates $\mathcal{L}_6$ in \eqref{def L6} to constant coefficients, namely \begin{equation}\label{Lfinale} {\cal L}_{\infty}(\omega) := \Phi_{\infty}^{-1}(\omega) \circ \mathcal{L}_6(\omega) \circ \Phi_{\infty}(\omega) = \omega \cdot \partial_{\varphi} + {\cal D}_{\infty}(\omega), \quad {\cal D}_{\infty}(\omega) := {\rm diag}_{j \in S^c} \{ \mu^{\infty}_{j}(\omega) \} \, . \end{equation} The transformations $\Phi_\infty, \Phi_\infty^{-1}$ are close to the identity in matrix decay norm, with \begin{equation} \label{stima Phi infty}
| \Phi_{\infty} - I |_{s,\Omega_\infty^{2\gamma}}^{{\rm Lip}(\gamma)}
+ | \Phi_{\infty}^{- 1} - I |_{s,\Omega_\infty^{2\gamma}}^{\mathrm{Lip}(\g)}
\leq_s \varepsilon^{5} \gamma^{-2} + \varepsilon \gamma^{-1} \| {\mathfrak I}_\delta \|_{s + \sigma}^{\mathrm{Lip}(\g)} . \end{equation} Moreover $\Phi_{\infty}, \Phi_{\infty}^{-1}$ are symplectic, and $\mathcal{L}_\infty $ is a Hamiltonian operator. \end{theorem}
\begin{proof} The proof is the same as the one of Theorem 4.1 in \cite{BBM},
which is based on Theorem 4.2, Corollaries 4.1, 4.2 and Lemmata 4.1, 4.2 of \cite{BBM}. A difference is that here $\omega \in \mathbb R^\nu$, while in \cite{BBM} the parameter $\lambda \in \mathbb R$ is one-dimensional. The proof is the same because Kirszbraun's Theorem on Lipschitz extension of functions also holds in $\mathbb R^\nu$ (see, e.g., Lemma A.2 in \cite{Po2}). The bound \eqref{stima Phi infty} follows by Corollary 4.1 of \cite{BBM} and the estimate of $ R_6 $ in Lemma \ref{lemma L6}. We also use the estimates \eqref{stima m3}, \eqref{stima m1} for $ \partial_i m_3 $, $ \partial_i m_1 $ which correspond to (3.64) in \cite{BBM}. Another difference is that here the sites satisfy $ j \in S^c \subset \mathbb Z \setminus \{0\} $, unlike in \cite{BBM} where $ j \in \mathbb Z $. We have defined $ \mu_0^\infty := 0 $ so that also the first Melnikov conditions \eqref{prime di melnikov} are included in the definition of $ \Omega^{2 \gamma}_{\infty} $. \end{proof}
\begin{remark} Theorem 4.2 in \cite{BBM} also provides the Lipschitz dependence of the (approximate) eigenvalues $ \mu_j^n $ with respect to the unknown $ i_0 (\varphi) $, which is used for the measure estimate Lemma \ref{matteo 10}.
\end{remark}
All the parameters $ \omega \in \Omega_\infty^{2 \gamma} $ satisfy (specialize \eqref{Omegainfty} for $ k = 0 $) \begin{equation}\label{prime di melnikov}
|{\mathrm i} \omega \cdot l + \mu_j^\infty(\omega)| \geq
2 \gamma | j |^3 \langle l \rangle^{-\tau} \, , \quad \forall l \in \mathbb Z^\nu , \ j \in S^c, \end{equation} and the diagonal operator $ {\cal L}_\infty $ is invertible.
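The inversion of the diagonal operator is mode by mode, $ ({\cal L}_\infty^{-1} g)_{lj} = g_{lj} / ({\mathrm i} \omega \cdot l + \mu_j^\infty(\omega)) $, and the lower bound \eqref{prime di melnikov} produces, for fixed $\omega$, a loss of $\tau$ derivatives in the $l$-variable (the Lipschitz norm in $\omega$ costs $2\tau + 1$, as used below). A small numerical sanity check, not part of the proof, with synthetic divisors saturating the Melnikov lower bound and toy values of $\tau$, $\gamma$ assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy values (assumed for illustration): nu = 1, tau = 3, gamma = 0.1.
tau, gamma = 3.0, 0.1
ls = np.arange(-50, 51)                              # time-frequency modes l
js = np.array([j for j in range(-20, 21) if j != 0]) # space modes j in S^c
L, J = np.meshgrid(ls, js, indexing="ij")
jap_l = np.maximum(1.0, np.abs(L))                   # <l>
# synthetic divisors saturating the first Melnikov lower bound (worst case):
#   |i omega . l + mu_j| >= 2 gamma |j|^3 <l>^{-tau}
div = 2 * gamma * np.abs(J)**3 * jap_l**(-tau)

g = rng.normal(size=div.shape)                       # Fourier coefficients of g
h = g / div                                          # h = L_infty^{-1} g

def norm(c, s):
    w = np.maximum(1.0, np.sqrt(L**2 + J**2))**s     # <(l,j)>^s weight
    return np.sqrt(np.sum((w * np.abs(c))**2))

s = 1.0
# since <l> <= <(l,j)> and |j|^3 >= 1:  ||h||_s <= (2 gamma)^{-1} ||g||_{s+tau}
assert norm(h, s) <= norm(g, s + tau) / (2 * gamma)
```

The sketch only illustrates the sup-norm mechanism; the Lipschitz estimate in $\omega$, which doubles the loss of derivatives, is the content of Lemma 4.2 of \cite{BBM}.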
In the following theorem we finally verify the inversion assumption \eqref{tame inverse} for ${\cal L}_\omega $.
\begin{theorem}\label{inversione linearized normale} {\bf (Inversion of $ {\cal L}_\omega $)} Assume the hypotheses of Theorem \ref{teoremadiriducibilita} and \eqref{condizione-kam}. Then there exists $ \sigma_1 := \sigma_1 ( \tau , \nu ) > 0 $ such that, $ \forall \omega \in \Omega^{2 \gamma}_\infty(i_\delta )$ (see \eqref{Omegainfty}), for any function $ g \in H^{s+\sigma_1}_{S^\bot} (\mathbb T^{\nu+1}) $ the equation ${\cal L}_\omega h = g$ has a solution $h = {\cal L}_\omega^{-1} g \in H^s_{S^\bot} (\mathbb T^{\nu+1})$, satisfying \begin{align}\label{stima inverso linearizzato normale}
\| {\cal L}_\omega^{-1} g \|_s^{{\rm Lip}(\gamma)}
& \leq_s \gamma^{-1} \big( \| g \|_{s +\sigma_1}^{{\rm Lip}(\gamma)}
+ \varepsilon \gamma^{-1} \| {\mathfrak I}_\delta\|_{s + \sigma_1}^{\mathrm{Lip}(\g)}
\| g \|_{s_0}^{{\rm Lip}(\gamma)} \big) \\
& \leq_s \gamma^{-1} \big( \| g \|_{s +\sigma_1}^{{\rm Lip}(\gamma)}
+ \varepsilon \gamma^{-1} \big\{ \| {\mathfrak{I}}_0 \|_{s + \sigma_1 + \sigma}^{\mathrm{Lip}(\g)} + \gamma^{-1}
\| {\mathfrak{I}}_0 \|_{s_0 + \sigma }^{\mathrm{Lip}(\g)}
\| Z \|_{s + \sigma_1 + \sigma}^{\mathrm{Lip}(\g)} \big\}
\| g \|_{s_0}^{{\rm Lip}(\gamma)} \big)\, . \nonumber \end{align} \end{theorem}
\begin{proof} Collecting Theorem \ref{teoremadiriducibilita} with the results of sections \ref{step1}-\ref{step5}, we have obtained the (semi)-conjugation of the operator $ \mathcal{L}_\omega $ (defined in \eqref{Lom KdVnew}) to $\mathcal{L}_\infty $ (defined in \eqref{Lfinale}), namely \begin{equation} \label{def cal M 12} \mathcal{L}_\omega = {\cal M}_1 \mathcal{L}_\infty {\cal M}_2^{-1}, \qquad {\cal M}_1 := \Phi B \rho {\cal T} \Phi_1 \Phi_2 {\cal S} \Phi_{\infty}, \quad {\cal M}_2 := \Phi B {\cal T} \Phi_1 \Phi_2 {\cal S} \Phi_{\infty} \,, \end{equation} where $ \rho $ denotes the operator of multiplication by the function $ \rho $ defined in \eqref{anche def rho}. By \eqref{prime di melnikov} and Lemma 4.2 of \cite{BBM} we deduce that
$ \| {\cal L}_\infty^{-1} g \|_s^{\mathrm{Lip}(\g)} \leq_s \gamma^{-1} \| g \|_{s+ 2 \tau + 1}^{\mathrm{Lip}(\g)} $. In order to estimate $\mathcal{M}_2, \mathcal{M}_1^{-1}$, we recall that the composition of tame maps is tame, see Lemma 6.5 in \cite{BBM}. Now, $ \Phi , \Phi^{-1}$ are estimated in Lemma \ref{lemma:stime coeff mL1}, $ B, B^{-1} $ and $ \rho $ in Lemma \ref{lemma:stime coeff mL2}, $ {\cal T}, {\cal T}^{-1} $ in Lemma \ref{lemma:stime coeff mL3}.
The decay norms $ |\Phi_1|_s^{\mathrm{Lip}(\g)}$, $ |\Phi_1^{-1}|_s^{\mathrm{Lip}(\g)} $, $ |\Phi_2|_s^{\mathrm{Lip}(\g)} $, $ |\Phi_2^{-1}|_s^{\mathrm{Lip}(\g)} $ are bounded by $ C(s) $ by Lemmata \ref{lemma:Dx A bounded}, \ref{A2 decay}. The decay norm of $ {\cal S}, {\cal S}^{-1} $ is estimated in Lemma \ref{lemma L6}, and $ \Phi_\infty, \Phi_\infty^{-1} $ in \eqref{stima Phi infty}. The decay norm controls the Sobolev norm by \eqref{interpolazione norme miste}. Thus, by \eqref{def cal M 12}, \[
\| \mathcal{M}_2 h \|_s^{\mathrm{Lip}(\g)} + \| \mathcal{M}_1^{-1} h \|_s^{\mathrm{Lip}(\g)}
\leq_s \| h \|_{s+3}^{\mathrm{Lip}(\g)} + \varepsilon \gamma^{-1} \| {\mathfrak{I}}_\delta \|_{s+\sigma+3}^{\mathrm{Lip}(\g)} \| h \|_{s_0}^{\mathrm{Lip}(\g)} \,, \] and \eqref{stima inverso linearizzato normale} follows. The last inequality in \eqref{stima inverso linearizzato normale} follows by \eqref{stima y - y delta} and \eqref{ansatz 0}. \end{proof}
\section{The Nash-Moser nonlinear iteration}\label{sec:NM}
In this section we prove Theorem \ref{main theorem}. It will be a consequence of the Nash-Moser Theorem \ref{iterazione-non-lineare} below.
Consider the finite-dimensional subspaces \[ E_n := \big\{ {\mathfrak{I}} (\varphi) = ( \Theta, y, z )(\varphi) : \, \Theta = \Pi_n \Theta, \ y = \Pi_n y, \ z = \Pi_n z \big\} \] where $ N_n := N_0^{\chi^n} $ are introduced in \eqref{defN}, and $ \Pi_n $ are the projectors (which, with a small abuse of notation, we denote with the same symbol) \begin{align}
\Pi_n \Theta (\varphi) := \sum_{|l| < N_n} \Theta_l e^{{\mathrm i} l \cdot \varphi}, \quad
\Pi_n y (\varphi) := \sum_{|l| < N_n} y_l e^{{\mathrm i} l \cdot \varphi}, \quad & \text{where} \ \Theta (\varphi) = \sum_{l \in \mathbb Z^\nu} \Theta_l e^{{\mathrm i} l \cdot \varphi}, \quad y(\varphi) = \sum_{l \in \mathbb Z^\nu} y_l e^{{\mathrm i} l \cdot \varphi}, \nonumber \\
\Pi_n z(\varphi,x) := \sum_{|(l,j)| < N_n} z_{lj} e^{{\mathrm i} (l \cdot \varphi + jx)}, \quad & \text{where} \ z(\varphi,x) = \sum_{l \in \mathbb Z^\nu, j \in S^c} z_{lj} e^{{\mathrm i} (l \cdot \varphi + jx)}. \label{Pin def} \end{align} We define $ \Pi_n^\bot := I - \Pi_n $. The classical smoothing properties hold: for all $\alpha , s \geq 0$, \begin{equation}\label{smoothing-u1}
\|\Pi_{n} {\mathfrak{I}} \|_{s + \alpha}^{\mathrm{Lip}(\g)}
\leq N_{n}^{\alpha} \| {\mathfrak{I}} \|_{s}^{\mathrm{Lip}(\g)} \, , \ \forall {\mathfrak{I}} (\omega) \in H^{s} \,, \quad
\|\Pi_{n}^\bot {\mathfrak{I}} \|_{s}^{\mathrm{Lip}(\g)}
\leq N_{n}^{-\alpha} \| {\mathfrak{I}} \|_{s + \alpha}^{\mathrm{Lip}(\g)} \, , \ \forall {\mathfrak{I}} (\omega) \in H^{s + \alpha} \, . \end{equation} We define the constants \begin{alignat}{3} \label{costanti nash moser} & \mu_1 := 3 \mu + 9\,,\quad & & \alpha := 3 \mu_1 + 1\,,\quad & & \alpha_1 := (\alpha - 3 \mu)/2 \,,
\\ & \kappa := 3 \big(\mu_1 + \rho^{-1} \big)+ 1\,,\qquad & & \beta_1 := 6 \mu_1+ 3 \rho^{-1} + 3 \, , \qquad & & 0 < \rho < \frac{1 - 3 a}{C_1(1 + a)}\,, \label{def rho} \end{alignat} where $ \mu := \mu (\tau, \nu) $ is the ``loss of regularity" defined in Theorem \ref{thm:stima inverso approssimato} (see \eqref{stima inverso approssimato 1}) and $ C_1 $ is fixed below.
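The smoothing estimates \eqref{smoothing-u1} are elementary consequences of the weighted $\ell^2$ definition of the Sobolev norms. A small numerical sanity check, not part of the proof, in one angle variable with the norm $ \| u \|_s^2 = \sum_l \langle l \rangle^{2s} |u_l|^2 $:

```python
import numpy as np

rng = np.random.default_rng(0)
ls = np.arange(-200, 201)                       # Fourier modes l
u = rng.normal(size=ls.size) + 1j * rng.normal(size=ls.size)

def norm(c, s):
    jap = np.maximum(1.0, np.abs(ls))           # <l> = max(1, |l|)
    return np.sqrt(np.sum(jap**(2 * s) * np.abs(c)**2))

s, alpha, N = 1.0, 2.0, 30
low = np.abs(ls) < N                            # modes kept by Pi_N
# ||Pi_N u||_{s+alpha} <= N^alpha ||u||_s   (since <l> <= N on the low modes)
lhs1, rhs1 = norm(u * low, s + alpha), N**alpha * norm(u, s)
# ||Pi_N^bot u||_s <= N^{-alpha} ||u||_{s+alpha}   (since <l> >= N on the rest)
lhs2, rhs2 = norm(u * ~low, s), N**(-alpha) * norm(u, s + alpha)
assert lhs1 <= rhs1 and lhs2 <= rhs2
```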
\begin{theorem}\label{iterazione-non-lineare} {\bf (Nash-Moser)} Assume that $ f \in C^q $ with $ q > S := s_0 + \beta_1 + \mu + 3 $. Let $ \tau \geq \nu + 2 $. Then there exist $ C_1 > \max \{ \mu_1 + \alpha, C_0 \} $ (where $ C_0 := C_0 (\tau, \nu) $ is the one in Theorem \ref{teoremadiriducibilita}), $ \delta_0 := \delta_0 (\tau, \nu) > 0 $ such that, if \begin{equation}\label{nash moser smallness condition} N_0^{C_1} \varepsilon^{b_* + 1} \gamma^{-2}< \delta_0\,,\quad \gamma:= \varepsilon^{2 + a} = \varepsilon^{2b} \,,\quad N_0 := (\varepsilon \gamma^{-1})^\rho\,,\quad b_* := 6 - 2 b \, , \end{equation} then, for all $ n \geq 0 $:
\begin{itemize} \item[$({\cal P}1)_{n}$] there exists a function $({\mathfrak{I}}_n, \zeta_n) : {\cal G}_n \subseteq \Omega_\varepsilon \to E_{n-1} \times \mathbb R^\nu$, $\omega \mapsto ({\mathfrak{I}}_n(\omega), \zeta_n(\omega))$, $ ({\mathfrak{I}}_0, \zeta_0) := 0 $, $ E_{-1} := \{ 0 \} $,
satisfying $ | \zeta_n |^{\mathrm{Lip}(\g)} \leq C \|{\cal F}(U_n) \|_{s_0}^{\mathrm{Lip}(\g)} $, \begin{equation}\label{ansatz induttivi nell'iterazione}
\| {\mathfrak{I}}_n \|_{s_0 + \mu }^{{\rm Lip}(\gamma)} \leq C_* \varepsilon^{b_*} \gamma^{-1}\,, \quad
\| {\cal F}(U_n)\|_{s_0 + \mu + 3}^{{\rm Lip}(\gamma)} \leq C_*\varepsilon^{b_*} \,, \end{equation} where $U_n := (i_n, \zeta_n)$ with $i_n(\varphi) = (\varphi,0,0) + {\mathfrak{I}}_n(\varphi)$. The sets ${\cal G}_{n} $ are defined inductively by: $$
{\cal G}_{0} := \big\{\omega \in \Omega_\varepsilon \, :\, |\omega \cdot l| \geq 2 \gamma \langle l \rangle^{-\tau}, \, \forall l \in \mathbb Z^\nu \setminus \{0\} \big\}\,, $$ \begin{equation}\label{def:Gn+1} {\cal G}_{n+1} :=
\Big\{ \omega \in {\cal G}_{n} \, : \, |{\mathrm i} \omega \cdot l + \mu_j^\infty ( i_n) -
\mu_k^\infty ( i_n )| \geq \frac{2\gamma_{n} |j^{3}-k^{3}|}{\left\langle l\right\rangle^{\tau}}, \, \forall j , k \in S^c \cup \{0\}, \, l \in \mathbb Z^{\nu} \Big\}\,, \end{equation} where $ \gamma_{n}:=\gamma (1 + 2^{-n}) $ and $\mu_j^\infty(\omega) := \mu_j^\infty(\omega, i_n(\omega)) $ are defined in \eqref{espressione autovalori} (and $ \mu_0^\infty(\omega) = 0 $).
The differences $ \widehat {\mathfrak I}_n := {\mathfrak I}_n - {\mathfrak I}_{n - 1} $ (where we set $ \widehat {\mathfrak{I}}_0 := 0 $) are defined on $\mathcal{G}_n$ and satisfy \begin{equation} \label{Hn}
\| \widehat {\mathfrak I}_1 \|_{ s_0 + \mu}^{{\mathrm{Lip}(\g)}} \leq C_* \varepsilon^{b_*} \gamma^{-1} \, , \quad
\| \widehat {\mathfrak I}_n \|_{ s_0 + \mu}^{{\mathrm{Lip}(\g)}} \leq C_* \varepsilon^{b_*} \gamma^{-1} N_{n - 1}^{-\alpha_1} \, , \quad \forall n > 1 \, . \end{equation}
\item[$({\cal P}2)_{n}$] $ \| {\cal F}(U_n) \|_{ s_{0}}^{{\rm Lip}(\gamma)} \leq C_* \varepsilon^{b_*} N_{n - 1}^{- \alpha}$ where we set $N_{-1} := 1$. \item[$({\cal P}3)_{n}$] \emph{(High norms).}
\ $ \| {\mathfrak{I}}_n \|_{ s_{0}+ \beta_1}^{{\rm Lip}(\gamma)} \leq C_* \varepsilon^{b_*} \gamma^{-1} N_{n - 1}^{\kappa} $ and
$ \|{\cal F}(U_n ) \|_{ s_{0}+\beta_1}^{{\rm Lip}(\gamma)} \leq C_* \varepsilon^{b_*} N_{n - 1}^{\kappa} $.
\item[$({\cal P}4)_{n}$] \emph{(Measure).} The measure of the ``Cantor-like" sets $ {\cal G}_n $ satisfies \begin{equation}\label{Gmeasure}
| \Omega_\varepsilon \setminus {\cal G}_0 | \leq C_* \varepsilon^{2(\nu - 1)} \gamma \, , \quad
\big| {\cal G}_n \setminus {\cal G}_{n+1} \big| \leq C_* \varepsilon^{2(\nu - 1)} \gamma N_{n - 1}^{-1} \, . \end{equation} \end{itemize}
All the Lip norms are defined on $ {\cal G}_{n} $, namely $\| \ \|_s^{{\rm Lip}(\gamma)} = \| \ \|_{s,\mathcal{G}_n}^{{\rm Lip}(\gamma)} $.\end{theorem}
\begin{proof}
To simplify notations, in this proof we denote $\| \, \|^{{\rm Lip}(\gamma)}$ by $\| \, \|$. We first prove $({\cal P}1, 2, 3)_n$.
{\sc Step 1:} \emph{Proof of} $({\cal P}1, 2, 3)_0$.
Recalling \eqref{operatorF} we have $ \| {\cal F}( U_0 ) \|_s = $
$ \| {\cal F}(\varphi, 0 , 0, 0 ) \|_s = \| X_P(\varphi, 0 , 0 ) \|_s \leq_s \varepsilon^{6 - 2b} $ by \eqref{stima XP}. Hence (recall that $ b_* = 6 - 2 b $) the smallness conditions in $({\cal P}1)_0$-$({\cal P}3)_0$ hold taking $ C_* := C_* (s_0 + \beta_1) $ large enough.
{\sc Step 2:} \emph{Assume that $({\cal P}1,2,3)_n$ hold for some $n \geq 0$, and prove $({\cal P}1,2,3)_{n+1}$.} By \eqref{nash moser smallness condition} and \eqref{def rho}, $$ N_0^{C_1} \varepsilon^{b_* + 1} \gamma^{-2} = N_0^{C_1} \varepsilon^{1-3a} = \varepsilon^{1 - 3 a - \rho C_1(1 + a)} < \delta_0 $$ for $\varepsilon$ small enough, and the smallness condition \eqref{condizione-kam} holds. Moreover \eqref{ansatz induttivi nell'iterazione} imply \eqref{ansatz 0} (and so \eqref{ansatz delta}) and Theorem \ref{inversione linearized normale} applies. Hence the operator $ {\cal L}_\omega := {\cal L}_\omega(\omega, i_n(\omega))$ defined in \eqref{cal L omega} is invertible for all $\omega \in {\cal G}_{n + 1}$ and the last estimate in \eqref{stima inverso linearizzato normale} holds. This means that the assumption \eqref{tame inverse} of Theorem \ref{thm:stima inverso approssimato} is verified with $\Omega_\infty = {\cal G}_{n + 1}$. By Theorem \ref{thm:stima inverso approssimato} there exists an approximate inverse ${\bf T}_n(\omega) := {\bf T}_0 (\omega, i_n(\omega))$ of the linearized operator $L_n(\omega) := d_{i, \zeta} {\cal F}(\omega, i_n(\omega)) $, satisfying \eqref{stima inverso approssimato 1}. Thus, using also \eqref{nash moser smallness condition}, \eqref{smoothing-u1}, \eqref{ansatz induttivi nell'iterazione}, \begin{align}
\| {\bf T}_n g \|_s
& \leq_s \gamma^{-1} \big( \| g \|_{s + \mu}
+ \varepsilon \gamma^{-1}\{\| {\mathfrak{I}}_n \|_{s + \mu} +
\gamma^{-1} \|{\mathfrak I}_n \|_{s_0 + \mu} \|{\cal F}(U_n) \|_{s + \mu} \} \| g \|_{s_0+ \mu}\big) \label{stima Tn} \\
\| {\bf T}_n g \|_{s_0} &
\leq_{s_0} \gamma^{-1} \| g \|_{s_0 + \mu} \label{stima Tn norma bassa} \end{align} and, by \eqref{stima inverso approssimato 2}, using also \eqref{ansatz induttivi nell'iterazione}, \eqref{nash moser smallness condition}, \eqref{smoothing-u1}, \begin{align}
\| \big(L_n \circ {\bf T}_n -I \big) g \|_s
& \leq_s \gamma^{-1} \big( \| {\cal F}(U_n) \|_{s_0 + \mu} \| g\|_{s + \mu} +
\| {\cal F}(U_n) \|_{s + \mu} \| g \|_{s_0 + \mu} \nonumber \\
& \qquad + \varepsilon \gamma^{-1} \| {\mathfrak{I}}_n \|_{s + \mu} \| {\cal F}(U_n) \|_{s_0 + \mu} \| g \|_{s_0 + \mu} \big)\, ,
\label{stima Tn inverso approssimato} \\ \label{stima Tn inverso approssimato norma bassa}
\| \big(L_n \circ {\bf T}_n -I \big) g \|_{s_0}
& \leq_{s_0} \gamma^{-1} \| {\cal F}(U_n)\|_{s_0 + \mu} \| g \|_{s_0 + \mu} \nonumber \\
& \leq_{s_0} \gamma^{-1} \big( \|\Pi_n {\cal F}(U_n)\|_{s_0 + \mu}+ \|\Pi_n^\bot {\cal F}(U_n)\|_{s_0 + \mu} \big) \| g \|_{s_0 + \mu} \nonumber \\
& \leq_{s_0} N_{n }^{\mu} \gamma^{-1} \big( \| {\cal F}(U_n)\|_{s_0} + N_{n}^{-\beta_1} \| {\cal F}(U_n)\|_{s_0 + \beta_1} \big) \| g \|_{s_0 + \mu}\, . \end{align} Then, for all $ \omega \in {\cal G}_{n+1} $, $ n \geq 0 $, we define \begin{equation}\label{soluzioni approssimate} U_{n + 1} := U_n + H_{n + 1}\,, \quad H_{n + 1} := ( \widehat {\mathfrak{I}}_{n+1}, \widehat \zeta_{n+1}) := - {\widetilde \Pi}_{n } {\bf T}_n \Pi_{n } {\cal F}(U_n) \in E_n \times \mathbb R^\nu\, , \end{equation} where $ {\widetilde \Pi}_n ( {\mathfrak{I}} , \zeta ) := ( \Pi_n {\mathfrak{I}} , \zeta ) $ with $ \Pi_n $ in \eqref{Pin def}. Since $ L_n := d_{i,\zeta} {\cal F}(i_n) $, we write $ {\cal F}(U_{n + 1}) = {\cal F}(U_n) + L_n H_{n + 1} + Q_n $, where \begin{equation}\label{def:Qn} Q_n := Q(U_n, H_{n + 1}) \, , \quad Q (U_n, H) := {\cal F}(U_n + H ) - {\cal F}(U_n) - L_n H \,, \quad H \in E_{n} \times \mathbb R^\nu . \end{equation} Then, by the definition of $ H_{n+1} $ in \eqref{soluzioni approssimate}, and writing $ {\widetilde \Pi}_n^\bot ({\mathfrak{I}}, \zeta ) := (\Pi_n^\bot {\mathfrak{I}}, 0) $, we have \begin{align} {\cal F}(U_{n + 1}) & =
{\cal F}(U_n) - L_n {\widetilde \Pi}_{n } {\bf T}_n \Pi_{n } {\cal F}(U_n) + Q_n =
{\cal F}(U_n) - L_n {\bf T}_n \Pi_{n } {\cal F}(U_n) + L_n {\widetilde \Pi}_n^\bot {\bf T}_n \Pi_{n } {\cal F}(U_n)
+ Q_n \nonumber\\ & = {\cal F}(U_n) - \Pi_{n } L_n {\bf T}_n \Pi_{n }{\cal F}(U_n) + ( L_n {\widetilde \Pi}_n^\bot - \Pi_n^\bot L_n ) {\bf T}_n \Pi_{n }{\cal F}(U_n) + Q_n \nonumber\\
& = \Pi_{n }^\bot {\cal F}(U_n) + R_n + Q_n + Q_n' \label{relazione algebrica induttiva} \end{align} where \begin{equation}\label{Rn Q tilde n} R_n := (L_n {\widetilde \Pi}_n^\bot - \Pi_n^\bot L_n) {\bf T}_n \Pi_{n }{\cal F}(U_n) \,, \qquad Q_n' := - \Pi_{n } ( L_n {\bf T}_n - I) \Pi_{n } {\cal F}(U_n)\,. \end{equation}
\begin{lemma}\label{lemma convergence} Define \begin{equation}\label{riscalamenti nash moser}
w_n := \varepsilon \gamma^{-2} \|{\cal F}(U_n) \|_{s_0}\,,\quad B_n := \varepsilon \gamma^{-1}\| {\mathfrak{I}}_n \|_{s_0 + \beta_1} + \varepsilon \gamma^{-2} \|{\cal F}(U_n) \|_{s_0 + \beta_1} \,. \end{equation} Then there exists $ K := K( s_0, \beta_1 ) > 0 $ such that, for all $n \geq 0$, setting $ \mu_1 := 3 \mu + 9$ (see \eqref{costanti nash moser}), \begin{equation}\label{relazioni induttive} w_{n + 1} \leq K N_{n }^{\mu_1 + \frac{1}{\rho} - \beta_1} B_n + K N_n^{\mu_1} w_n^2\,,\qquad B_{n + 1} \leq K N_{n }^{\mu_1 + \frac{1}{\rho}} B_n\, . \end{equation} \end{lemma} \begin{proof} We estimate separately the terms $ Q_n $ in \eqref{def:Qn} and $ Q_n' , R_n $ in \eqref{Rn Q tilde n}. \\[1mm] {\it Estimate of $ Q_n $.} By \eqref{def:Qn}, \eqref{operatorF}, \eqref{parte quadratica da P} and \eqref{ansatz induttivi nell'iterazione}, \eqref{smoothing-u1}, we have the quadratic estimates \begin{align} \label{stima parte quadratica norma alta}
\| Q(U_n, H)\|_s & \leq_s \varepsilon \big(\| \widehat {\mathfrak{I}} \|_{s + 3} \| \widehat {\mathfrak{I}} \|_{s_0 + 3} + \| {\mathfrak{I}}_n \|_{s + 3}
\| \widehat {\mathfrak{I}} \|_{s_0 + 3}^2 \big) \\ \label{stima parte quadratica norma bassa}
\| Q(U_n, H) \|_{s_0} & \leq_{s_0}\varepsilon N_n^6 \| \widehat {\mathfrak{I}} \|_{s_0}^2\,, \quad \forall \widehat {\mathfrak{I}} \in E_n \, . \end{align} Now by the definition of $H_{n + 1}$ in \eqref{soluzioni approssimate} and \eqref{smoothing-u1}, \eqref{stima Tn}, \eqref{stima Tn norma bassa}, \eqref{ansatz induttivi nell'iterazione}, we get \begin{align}
\| \widehat {\mathfrak{I}}_{n + 1} \|_{s_0 + \beta_1}
& \leq_{s_0 + \beta_1} N_n^{\mu} \big( \gamma^{-1} \|{\cal F}(U_n) \|_{s_0 + \beta_1} + \varepsilon \gamma^{-2} \| {\cal F}(U_n)\|_{s_0 + \mu} \{\| {\mathfrak{I}}_n\|_{s_0 + \beta_1} + \gamma^{-1} \| {\cal F}(U_n)\|_{s_0 + \beta_1} \} \big) \nonumber \\
& \leq_{s_0 + \beta} N_n^{\mu} \big( \gamma^{-1}\| {\cal F}(U_n) \|_{s_0 + \beta_1}
+ \| {\mathfrak{I}}_n \|_{s_0 + \beta_1}\big)\, , \label{H n+1 alta} \\ \label{H n+1 bassa}
\| \widehat {\mathfrak{I}}_{n + 1}\|_{s_0}
& \leq_{s_0} \gamma^{-1}N_{n}^\mu \| {\cal F}(U_n)\|_{s_0} \, . \end{align} Then the term $ Q_n $ in \eqref{def:Qn} satisfies, by \eqref{stima parte quadratica norma alta}, \eqref{stima parte quadratica norma bassa}, \eqref{H n+1 alta}, \eqref{H n+1 bassa}, \eqref{nash moser smallness condition}, \eqref{ansatz induttivi nell'iterazione}, $ ({\cal P}2)_n $, \eqref{costanti nash moser}, \begin{align}
\| Q_n \|_{s_0 + \beta_1} & \leq_{s_0 + \beta_1}
N_n^{2 \mu + 9} \gamma \big( \gamma^{-1} \| {\cal F}(U_n)\|_{s_0 + \beta_1} +
\| {\mathfrak{I}}_n \|_{s_0 + \beta_1} \big) \, , \label{Qn norma alta} \\
\| Q_n \|_{s_0}
& \leq_{s_0} N_n^{2 \mu + 6 } \varepsilon \gamma^{-2} \| {\cal F}(U_n) \|_{s_0}^2\, . \label{Qn norma bassa} \end{align} {\it Estimate of $ Q_n' $.} The bounds \eqref{stima Tn inverso approssimato}, \eqref{stima Tn inverso approssimato norma bassa}, \eqref{smoothing-u1}, \eqref{costanti nash moser}, \eqref{ansatz induttivi nell'iterazione} imply \begin{align} \label{Qn' norma alta}
\| Q_n' \|_{s_0 + \beta_1} & \leq_{s_0 + \beta_1} N_n^{2 \mu} \big(
\| {\cal F}(U_n) \|_{s_0 + \beta_1} + \| {\mathfrak{I}}_n \|_{s_0 + \beta_1} \| {\cal F}(U_n)\|_{s_0} \big) \, , \\ \label{Qn' norma bassa}
\| Q_n'\|_{s_0} & \leq_{s_0} \gamma^{-1} N_{n }^{2\mu }\big(\| {\cal F}(U_n) \|_{s_0} + N_{n}^{- \beta_1} \|{\cal F}(U_n)\|_{s_0 + \beta_1} \big) \| {\cal F}(U_n) \|_{s_0}\,. \end{align} {\it Estimate of $ R_n $.} For $ H := (\widehat {\mathfrak{I}}, \widehat \zeta ) $ we have $ (L_n {\widetilde \Pi}_n^\bot - \Pi_n^\bot L_n) H = $ $ [{\bar D}_n, \Pi_n^\bot ] \widehat {\mathfrak{I}} = $ $ [\Pi_n , {\bar D}_n] \widehat {\mathfrak{I}} $ where $ {\bar D}_n := d_i X_{H_\varepsilon}(i_n) + (0,0, \partial_{xxx} ) $. Thus Lemma \ref{lemma quantitativo forma normale}, \eqref{ansatz induttivi nell'iterazione}, \eqref{smoothing-u1} and \eqref{tame commutatori} imply \begin{align}\label{stima commutatore modi alti norma bassa}
\| (L_n {\widetilde \Pi}_n^\bot - \Pi_n^\bot L_n) H \|_{s_0} & \leq_{s_0+ \beta_1}
\varepsilon N_{n }^{- \beta_1 + \mu + 3} \big(\| \widehat {\mathfrak{I}} \|_{s_0 + \beta_1 - \mu} +
\| {\mathfrak{I}}_n \|_{s_0 + \beta_1 - \mu} \| \widehat {\mathfrak{I}} \|_{s_0 + 3}\big)\,, \\ \label{stima commutatore modi alti norma alta}
\| (L_n {\widetilde \Pi}_n^\bot - \Pi_n^\bot L_n) H \|_{s_0 + \beta_1} &
\leq_s
\varepsilon N_n^{\mu + 3} \big(\| \widehat {\mathfrak{I}} \|_{s_0 + \beta_1 - \mu } + \| {\mathfrak{I}}_n \|_{s_0 + \beta_1 - \mu }
\| \widehat {\mathfrak{I}} \|_{s_0 + 3} \big)\,. \end{align} Hence, applying \eqref{stima Tn}, \eqref{stima commutatore modi alti norma bassa}, \eqref{stima commutatore modi alti norma alta}, \eqref{nash moser smallness condition}, \eqref{ansatz induttivi nell'iterazione}, \eqref{smoothing-u1}, the term $R_n$ defined in \eqref{Rn Q tilde n} satisfies \begin{align} \label{stima Rn norma bassa}
\| R_n\|_{s_0}
& \leq_{s_0 + \beta_1} N_n^{ \mu + 6 - \beta_1} ( \varepsilon \gamma^{-1} \| {\cal F}(U_n)\|_{s_0 + \beta_1} + \varepsilon \| {\mathfrak{I}}_n \|_{s_0 + \beta_1} )\,, \\ \label{stima Rn norma alta}
\| R_n \|_{s_0 + \beta_1}
& \leq_{s_0 + \beta_1} N_n^{ \mu + 6} ( \varepsilon \gamma^{-1} \| {\cal F}(U_n)\|_{s_0 + \beta_1} + \varepsilon \| {\mathfrak{I}}_n \|_{s_0 + \beta_1} )\,. \end{align} {\it Estimate of $ {\cal F}(U_{n + 1}) $.} By \eqref{relazione algebrica induttiva} and \eqref{Qn norma alta},
\eqref{Qn norma bassa}, \eqref{Qn' norma alta}, \eqref{Qn' norma bassa}, \eqref{stima Rn norma bassa}, \eqref{stima Rn norma alta}, \eqref{nash moser smallness condition}, \eqref{ansatz induttivi nell'iterazione}, we get \begin{align}\label{F(U n+1) norma bassa}
& \| {\cal F}(U_{n + 1})\|_{s_0} \leq_{s_0 + \beta_1} N_{n }^{\mu_1 - \beta_1} ( \varepsilon \gamma^{-1}\| {\cal F}(U_n)\|_{s_0 + \beta_1} + \varepsilon \| {\mathfrak{I}}_n \|_{s_0 + \beta_1} )
+ N_n^{\mu_1} \varepsilon \gamma^{-2} \| {\cal F}(U_n)\|_{s_0}^2\,, \\ \label{F(U n+1) norma alta}
& \| {\cal F}(U_{n + 1}) \|_{s_0 + \beta_1}
\leq_{s_0 + \beta_1} N_n^{\mu_1} ( \varepsilon \gamma^{-1}\| {\cal F}(U_n) \|_{s_0 + \beta_1} + \varepsilon \| {\mathfrak{I}}_n \|_{s_0 + \beta_1} ) \,, \end{align} where $\mu_1 := 3 \mu + 9$.
\noindent {\it Estimate of $ {\mathfrak{I}}_{n+1} $.} Using \eqref{H n+1 alta} the term $ {\mathfrak{I}}_{n+1} = {\mathfrak{I}}_n + \widehat {\mathfrak I}_{n+1} $ is bounded by \begin{equation}\label{U n+1 alta}
\| {\mathfrak{I}}_{n + 1}\|_{s_0 + \beta_1} \leq_{s_0 + \beta_1}
N_n^\mu ( \| {\mathfrak{I}}_n\|_{s_0 + \beta_1} +\gamma^{-1} \| {\cal F}(U_n)\|_{s_0 + \beta_1} )\, . \end{equation} Finally, recalling \eqref{riscalamenti nash moser}, the inequalities \eqref{relazioni induttive} follow by \eqref{F(U n+1) norma bassa}-\eqref{U n+1 alta}, \eqref{ansatz induttivi nell'iterazione} and $ \varepsilon \gamma^{-1} = N_0^{1/\rho} \leq N_n^{1/\rho}$. \end{proof}
\emph{Proof of $({\cal P}3)_{n + 1}$}. By \eqref{relazioni induttive} and $ ({\cal P}3)_n $, \begin{equation} B_{n + 1} \leq K N_{n}^{\mu_1 + \frac{1}{\rho}} B_n \leq 2 C_* K \varepsilon^{b_* + 1} \gamma^{-2} N_{n}^{\mu_1+ \frac{1}{\rho}} N_{n-1}^\kappa \leq C_* \varepsilon^{b_* + 1} \gamma^{-2} N_n^{\kappa} \,, \label{stima B n+1} \end{equation} provided $ 2 K N_n^{\mu_1 + \frac{1}{\rho} - \kappa} N_{n-1}^\kappa \leq 1 $, $ \forall n \geq 0 $. This inequality holds by \eqref{def rho}, taking $N_0$ large enough (i.e.\ $\varepsilon$ small enough). By \eqref{riscalamenti nash moser}, the bound $B_{n + 1} \leq C_* \varepsilon^{b_* + 1} \gamma^{-2} N_n^{\kappa}$ implies $({\cal P}3)_{n + 1}$.
\emph{Proof of $({\cal P}2)_{n + 1}$}. Using \eqref{relazioni induttive}, \eqref{riscalamenti nash moser} and $ ({\cal P}2)_n, ({\cal P}3)_n $, we get \begin{align*} w_{n + 1} & \leq K N_n^{\mu_1+ \frac{1}{\rho} - \beta_1} B_n + K N_{n }^{\mu_1} w_n^2 \leq K N_n^{\mu_1 + \frac{1}{\rho} - \beta_1} 2 C_* \varepsilon^{b_* + 1} \gamma^{-2} N_{n - 1}^{\kappa} + K N_{n}^{\mu_1} ( C_* \varepsilon^{b_* + 1} \gamma^{-2} N_{n - 1}^{- \alpha} )^2 \end{align*} which is $ \leq C_* \varepsilon^{b_* + 1} \gamma^{-2} N_n^{- \alpha} $ provided that \begin{equation} \label{provided 2} 4 K N_n^{\mu_1 + \frac{1}{\rho} - \beta_1 + \alpha} N_{n-1}^\kappa \leq 1, \quad 2 K C_* \varepsilon^{b_*+1} \gamma^{-2} N_n^{\mu_1 + \alpha} N_{n-1}^{-2\alpha} \leq 1 \, , \quad \forall n \geq 0. \end{equation} The inequalities in \eqref{provided 2} hold by \eqref{costanti nash moser}-\eqref{def rho}, \eqref{nash moser smallness condition}, $C_1 > \mu_1 + \alpha$, taking $\delta_0$ in \eqref{nash moser smallness condition} small enough. By \eqref{riscalamenti nash moser}, the inequality $w_{n + 1} \leq C_* \varepsilon^{b_* + 1} \gamma^{-2} N_n^{- \alpha}$ implies $({\cal P}2)_{n + 1}$.
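The mechanism behind the proofs of $({\cal P}2)_{n+1}$, $({\cal P}3)_{n+1}$ can be checked numerically on the recursion \eqref{relazioni induttive}. The following sketch, illustrative only, takes the toy values $\mu = \rho = 1$ in \eqref{costanti nash moser} (so $\mu_1 = 12$, $\alpha = 37$, $\beta_1 = 78$, $\kappa = 40$), the scales $N_n = N_0^{\chi^n}$ with $\chi = 3/2$ and $N_0 = 10$ (the value of $\chi$ is assumed here), $K = 1$, and a starting size $10^{-50}$ playing the role of $C_* \varepsilon^{b_*+1} \gamma^{-2}$ in the smallness condition; it iterates the recursion in $\log_{10}$ to avoid underflow and verifies the inductive bounds $ w_{n+1} \leq D N_n^{-\alpha} $, $ B_{n+1} \leq D N_n^{\kappa} $:

```python
import numpy as np

mu1, alpha, beta1, kappa = 12.0, 37.0, 78.0, 40.0   # toy constants (mu = rho = 1)
chi, logN0, logK, logD = 1.5, 1.0, 0.0, -50.0       # N_0 = 10, K = 1, D = 1e-50

def log10add(a, b):
    """log10(10^a + 10^b), safe for very negative exponents."""
    m = max(a, b)
    return m + np.log10(1.0 + 10.0**(min(a, b) - m))

logw = logB = logD                                  # w_0 = B_0 = D
for n in range(12):
    logN = logN0 * chi**n                           # log10 N_n
    # w_{n+1} <= K N_n^{mu1 + 1/rho - beta1} B_n + K N_n^{mu1} w_n^2
    logw_new = log10add(logK + (mu1 + 1 - beta1) * logN + logB,
                        logK + mu1 * logN + 2 * logw)
    # B_{n+1} <= K N_n^{mu1 + 1/rho} B_n
    logB = logK + (mu1 + 1) * logN + logB
    logw = logw_new
    # inductive bounds of (P2), (P3): w_{n+1} <= D N_n^{-alpha}, B_{n+1} <= D N_n^kappa
    assert logw <= logD - alpha * logN
    assert logB <= logD + kappa * logN
```

The super-exponential decay of $w_n$ (here $\log_{10} w_{12} \approx -39 \cdot \chi^{11}$) is exactly the quadratic Newton convergence exploited in the scheme.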
\emph{Proof of $({\cal P}1)_{n + 1}$}. The bound \eqref{Hn} for $ \widehat {\mathfrak{I}}_1 $ follows by \eqref{soluzioni approssimate}, \eqref{stima Tn} (for $ s = s_0 + \mu $) and
$ \| {\cal F} ( U_0 ) \|_{s_0 + 2 \mu} = $ $ \| {\cal F}(\varphi, 0, 0, 0) \|_{s_0 + 2 \mu} \leq_{s_0+2\mu } \varepsilon^{b_*} $. The bound \eqref{Hn} for $ \widehat {\mathfrak{I}}_{n + 1} $ follows by \eqref{smoothing-u1}, \eqref{H n+1 bassa}, $({\cal P}2)_n $, \eqref{costanti nash moser}. It remains to prove that \eqref{ansatz induttivi nell'iterazione} holds at the step $n + 1$. We have \begin{equation}\label{W n+1 norma bassa}
\| {\mathfrak{I}}_{n + 1} \|_{s_0 + \mu}
\leq {\mathop \sum}_{k = 1}^{n + 1} \| \widehat {\mathfrak I}_k \|_{s_0 + \mu}
\leq C_* \varepsilon^{b_*}\gamma^{-1} {\mathop \sum}_{k \geq 1} N_{k - 1}^{- \alpha_1} \leq C_* \varepsilon^{b_*} \gamma^{-1} \end{equation} for $ N_0 $ large enough, i.e. $ \varepsilon $ small. Moreover, using \eqref{smoothing-u1}, (${\cal P}2)_{n +1}$, (${\cal P}3)_{n + 1}$, \eqref{costanti nash moser}, we get \begin{align*}
\| {\cal F}(U_{n + 1}) \|_{s_0 + \mu+ 3}
& \leq N_n^{\mu + 3} \|{\cal F}(U_{n + 1}) \|_{s_0} + N_{n}^{\mu + 3 - \beta_1} \| {\cal F}(U_{n + 1}) \|_{s_0 + \beta_1} \\ & \leq C_* \varepsilon^{b_*} N_n^{\mu + 3- \alpha} + C_* \varepsilon^{b_*} N_n^{\mu + 3 - \beta_1 + \kappa} \leq C_* \varepsilon^{b_*}\,, \end{align*}
which is the second inequality in \eqref{ansatz induttivi nell'iterazione} at the step $n + 1$. The bound $ | \zeta_{n+1} |^{\mathrm{Lip}(\g)} \leq C \|{\cal F}(U_{n+1}) \|_{s_0}^{\mathrm{Lip}(\g)} $ is a consequence of Lemma \ref{zeta = 0} (it is not inductive).
{\sc Step 3:} \emph{Prove $({\cal P}4)_n$ for all $n \geq 0$.} For all $n \geq 0$, \begin{equation}\label{Gn inclusioni} {\cal G}_n \setminus {\cal G}_{n + 1} = \!\! \!\!\! \bigcup_{\begin{subarray}{c} l \in \mathbb Z^\nu, \, j , k \in S^c \cup\{ 0 \} \end{subarray}} \!\! \! \!\! R_{ljk}(i_n) \end{equation} where \begin{equation}\label{resonant sets}
R_{ljk}(i_n) := \big\{ \omega \in {\cal G}_n \, : \, |{\mathrm i} \omega \cdot l + \mu_j^\infty (i_{n}) -
\mu_k^\infty (i_{n})| < 2\gamma_{n} |j^{3}-k^{3}|\left\langle l\right\rangle^{- \tau}\big\}\,. \end{equation} Notice that $R_{ljk}(i_n) = \emptyset$ if $j = k$, so that we suppose in the sequel that $j \neq k$.
\begin{lemma} \label{matteo 10}
For all $n \geq 1$ and $|l| \leq N_{n - 1}$, we have $R_{ljk}(i_n) \subseteq R_{ljk}(i_{n - 1}) $. \end{lemma}
\begin{proof} Like Lemma 5.2 in \cite{BBM} (with $\omega$ in the role of $\lambda \bar\omega$, and $N_{n-1}$ instead of $N_n$). \end{proof}
By definition, $ R_{ljk} (i_n) \subseteq {\cal G}_n $ (see \eqref{resonant sets})
and Lemma \ref{matteo 10} implies that, for all $ n \geq 1$ and $ |l| \leq N_{n-1} $, $R_{ljk}(i_n) \subseteq R_{ljk}(i_{n - 1}) $. On the other hand
$ R_{ljk}(i_{n - 1}) \cap {\cal G}_{n} = \emptyset $ (see \eqref{def:Gn+1}). As a consequence, for all $ |l| \leq N_{n-1} $, $ R_{ljk} (i_n) = \emptyset $ and, by \eqref{Gn inclusioni}, \begin{equation}\label{Gn Gn+1}
{\cal G}_n \setminus {\cal G}_{n+1} \subseteq \!\! \bigcup_{|l| > N_{n-1}, \, j, k \in S^c \cup \{0\}} \!\!\! \!\!\! R_{ljk} ( i_n) \qquad \forall n \geq 1. \end{equation}
\begin{lemma}\label{matteo 4} Let $n \geq 0$. If $R_{ljk}(i_n) \neq \emptyset$
then $|l| \geq C |j^3 - k^3| \geq \frac12 C (j^2 + k^2) $ for some $C > 0$. \end{lemma}
\begin{proof} Like Lemma 5.3 in \cite{BBM}. The only difference is that $\omega$ is not constrained to a fixed direction.
Note also that $ | j^3- k^3 | \geq (j^2 + k^2) / 2 $, $ \forall j \neq k $. \end{proof}
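The elementary inequality $ |j^3 - k^3| \geq (j^2 + k^2)/2 $ for distinct integers follows from the factorization $ j^3 - k^3 = (j - k)(j^2 + jk + k^2) $, since $ |j - k| \geq 1 $ and $ j^2 + jk + k^2 \geq (j^2 + k^2)/2 $. A finite-range sanity check, illustrative only:

```python
# Check  2 |j^3 - k^3| >= j^2 + k^2  for all distinct integers in a finite box.
for j in range(-30, 31):
    for k in range(-30, 31):
        if j != k:
            assert 2 * abs(j**3 - k**3) >= j**2 + k**2
```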
By usual arguments (e.g. see Lemma 5.4 in \cite{BBM}), using Lemma \ref{matteo 4} and \eqref{autofinali} we have:
\begin{lemma} \label{lemma:risonanti}
For all $n \geq 0$, the measure $ |R_{ljk}(i_n)| \leq C \varepsilon^{2(\nu - 1)} \gamma \langle l \rangle^{- \tau} $. \end{lemma}
By \eqref{Gn inclusioni} and Lemmata \ref{matteo 4}, \ref{lemma:risonanti} we get $$
|{\cal G}_0 \setminus {\cal G}_1 | \leq \sum_{l \in \mathbb Z^\nu, |j|, |k| \leq C |l|^{1/2}}
| R_{ljk} ( i_0)| \leq \sum_{l \in \mathbb Z^\nu} \frac{C \varepsilon^{2(\nu-1)} \gamma }{\langle l \rangle^{\tau-1}} \leq C' \varepsilon^{2(\nu-1)} \gamma \,. $$ For $ n \geq 1$, by \eqref{Gn Gn+1}, $$
|{\cal G}_n \setminus {\cal G}_{n+1} | \leq \sum_{|l| > N_{n-1}, |j|, |k| \leq C |l|^{1/2}}
| R_{ljk} ( i_n)| \leq \sum_{|l| > N_{n-1}} \frac{C \varepsilon^{2(\nu-1)} \gamma }{\langle l \rangle^{\tau-1}} \leq C' \varepsilon^{2(\nu-1)} \gamma N_{n-1}^{-1} $$
because $\tau \geq \nu + 2$. The estimate $ | \Omega_\varepsilon \setminus {\cal G}_0| \leq C \varepsilon^{2(\nu-1)} \gamma $ is elementary. Thus \eqref{Gmeasure} is proved. \end{proof}
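The tail estimate used above, $ \sum_{|l| > N} \langle l \rangle^{-(\tau - 1)} \leq C N^{-1} $ for $ \tau \geq \nu + 2 $, can also be checked numerically. The following sketch, with the toy values $ \nu = 2 $, $ \tau = \nu + 2 = 4 $ assumed for illustration, verifies the $ N^{-1} $ decay on a truncated lattice:

```python
import numpy as np

L = 400                                  # truncation of the lattice Z^2
x = np.arange(-L, L + 1)
lx, ly = np.meshgrid(x, x, indexing="ij")
r = np.sqrt(lx**2 + ly**2)               # |l|
jap = np.maximum(1.0, r)                 # <l>

def tail(N, tau=4):
    # sum over |l| > N of <l>^{-(tau - 1)}  (here tau - 1 = 3 > nu = 2)
    return np.sum(jap[r > N] ** (-(tau - 1.0)))

for N in (10, 20, 40):
    assert N * tail(N) < 8.0             # tail ~ C / N, with C close to 2*pi
assert 1.8 < tail(10) / tail(20) < 2.2   # doubling N roughly halves the tail
```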
\noindent {\bf Proof of Theorem \ref{main theorem} concluded.} Theorem \ref{iterazione-non-lineare} implies that the sequence $ ({\mathfrak{I}}_n, \zeta_n) $ is well defined for $ \omega \in {\cal G}_\infty := \cap_{n \geq 0} {\cal G}_n $, that $ {\mathfrak{I}}_n $
is a Cauchy sequence in $\| \ \|_{s_0 + \mu, {\cal G}_\infty}^{{\mathrm{Lip}(\g)}}$, see
\eqref{Hn}, and $ |\zeta_n |^{\mathrm{Lip}(\g)} \to 0 $. Therefore $ {\mathfrak{I}}_n $ converges to a limit $ {\mathfrak{I}}_\infty $
in norm $\| \ \|_{s_0 + \mu, {\cal G}_\infty}^{{\mathrm{Lip}(\g)}}$ and, by $ ({\cal P}2)_n $, for all $\omega \in {\cal G}_\infty$, $ i_\infty (\varphi) := (\varphi,0,0) + {\mathfrak{I}}_\infty(\varphi)$, is a solution of $$ {\cal F}(i_\infty, 0 )= 0\, \quad \text{with} \quad
\| {\mathfrak{I}}_\infty \|^{\mathrm{Lip}(\g)}_{s_0 + \mu, {\cal G}_\infty}
\leq C \varepsilon^{6 - 2b} \gamma^{-1} $$ by \eqref{ansatz induttivi nell'iterazione} (recall that $b_* := 6 - 2b$).
Therefore $\varphi \mapsto i_\infty(\varphi)$ is an invariant torus for the Hamiltonian vector field $ X_{H_\varepsilon} $ (see \eqref{hamiltoniana modificata}). By \eqref{Gmeasure}, $$
|\Omega_\varepsilon \setminus {\cal G}_\infty|
\leq |\Omega_\varepsilon \setminus {\cal G}_0| + \sum_{n \geq 0} |{\cal G}_n \setminus {\cal G}_{n + 1}| \leq 2 C_* \varepsilon^{2(\nu - 1)} \gamma + C_* \varepsilon^{2(\nu - 1)} \gamma \sum_{n \geq 1} N_{n - 1}^{-1} \leq C \varepsilon^{2(\nu - 1)} \gamma \,. $$
The set $\Omega_\varepsilon$ in \eqref{Omega epsilon} has measure $|\Omega_\varepsilon | = O( \varepsilon^{2 \nu} ) $. Hence $|\Omega_\varepsilon \setminus \mathcal{G}_\infty| / |\Omega_\varepsilon| \to 0$ as $\varepsilon \to 0$ because $ \gamma = o(\varepsilon^2)$, and therefore the measure of $\mathcal{C}_\varepsilon := \mathcal{G}_{\infty}$ satisfies \eqref{stima in misura main theorem}.
In order to complete the proof of Theorem \ref{main theorem} we show the linear stability of the solution $ i_\infty (\omega t ) $. By section \ref{costruzione dell'inverso approssimato} the system obtained linearizing the Hamiltonian vector field $ X_{H_\varepsilon } $ at a quasi-periodic solution $ i_\infty (\omega t ) $ is conjugated to the linear Hamiltonian system \begin{equation}\label{sistema lineare dopo} \begin{cases} \dot \psi & \hspace{-6pt} = K_{20}(\omega t) \eta + K_{11}^T (\omega t ) w \\ \dot \eta & \hspace{-6pt} = 0 \\ \dot w - \partial_x K_{0 2}(\omega t ) w & \hspace{-6pt} = \partial_x K_{11}(\omega t) \eta \end{cases} \end{equation} (recall that the torus $ i_\infty $ is isotropic and the transformed nonlinear Hamiltonian system is \eqref{sistema dopo trasformazione inverso approssimato} where $ K_{00}, K_{10}, K_{01} = 0 $, see Remark \ref{rem:KAM normal form}). In section \ref{operatore linearizzato sui siti normali} we have proved the reducibility of the linear system $ \dot w - \partial_x K_{0 2}(\omega t ) w $, conjugating the last equation in \eqref{sistema lineare dopo} to a diagonal system \begin{equation}\label{vjmuj} {\dot v}_j + \mu_j^\infty v_j = f_j (\omega t) \, , \quad j \in S^c \, , \quad \mu_j^\infty \in {\mathrm i} \mathbb R \, , \end{equation}
see \eqref{Lfinale}, and $ f (\varphi, x) = \sum_{j \in S^c} f_j (\varphi) e^{{\mathrm i} j x } \in H^s_{S^\bot} (\mathbb T^{\nu+1}) $. Thus \eqref{sistema lineare dopo} is stable. Indeed the actions $ \eta (t) = \eta_0 \in \mathbb R $, $ \forall t \in \mathbb R $. Moreover the solutions of the non-homogeneous equation \eqref{vjmuj} are $$ v_j (t) = c_j e^{ \mu_j^\infty t} + {\tilde v}_j (t) \,, \quad \text{where} \quad {\tilde v}_j (t) := \sum_{l \in \mathbb Z^\nu} \frac{f_{jl} \, e^{{\mathrm i} \omega \cdot l t } }{ {\mathrm i} \omega \cdot l + \mu_j^\infty } $$ is a quasi-periodic solution (recall that the first Melnikov conditions \eqref{prime di melnikov} hold at a solution). As a consequence (recall also $ \mu_j^\infty \in {\mathrm i} \mathbb R $)
the Sobolev norm of the solution of \eqref{vjmuj} with initial condition $ v(0) = \sum_{j \in S^c} v_j (0) e^{{\mathrm i} j x }
\in H^{s_0} (\mathbb T_x) $, $ s_0 < s $, does not increase in time. \qed \\[1mm] {\bf Construction of the set $S$ of tangential sites.} We finally prove that, for any $\nu \geq 1$, the set $S$ in \eqref{tang sites} satisfying $({\mathtt S}1)$-$({\mathtt S}2)$ can be constructed inductively with only a \emph{finite} number of restriction at any step of the induction.
First, fix any integer $\bar\jmath_1 \geq 1$. Then the set $J_1 := \{ \pm \bar\jmath_1 \}$ trivially satisfies $({\mathtt S}1) $-$({\mathtt S}2)$. Then, assume that we have fixed $n$ distinct positive integers $\bar\jmath_1, \ldots, \bar\jmath_n$, $n \geq 1$, such that the set $J_n := \{ \pm \bar\jmath_1, \ldots, \pm \bar\jmath_n \}$ satisfies $({\mathtt S}1)$-$({\mathtt S}2)$. We describe how to choose another positive integer $\bar\jmath_{n+1}$, which is different from all $j \in J_n$, such that $J_{n+1} := J_n \cup \{ \pm \bar\jmath_{n+1} \}$ also satisfies $({\mathtt S}1), ({\mathtt S}2)$.
Let us begin with analyzing $({\mathtt S}1)$. A set of 3 elements $j_1, j_2, j_3 \in J_{n+1}$ can be of these types: $(i)$ all ``old'' elements $j_1, j_2, j_3 \in J_n$; $(ii)$ two ``old'' elements $j_1, j_2 \in J_n$ and one ``new'' element $j_3 = \sigma_3 \bar\jmath_{n+1}$, $\sigma_3 = \pm 1$; $(iii)$ one ``old'' element $j_1 \in J_n$ and two ``new'' elements $j_2 = \sigma_2 \bar\jmath_{n+1}$, $j_3 = \sigma_3 \bar\jmath_{n+1}$, with $\sigma_2, \sigma_3 = \pm 1$; $(iv)$ all ``new'' elements $j_i = \sigma_i \bar\jmath_{n+1}$, $\sigma_i = \pm 1$, $i = 1,2,3$.
In case $(i)$, the sum $j_1 + j_2 + j_3$ is nonzero by inductive assumption. In case $(ii)$, $j_1 + j_2 + j_3$ is nonzero provided $\bar\jmath_{n+1} \notin \{ j_1 + j_2 : j_1, j_2 \in J_n \}$, which is a finite set. In case $(iii)$, for $\sigma_2 + \sigma_3 = 0$ the sum $j_1 + j_2 + j_3 = j_1$ is trivially nonzero because $0 \notin J_n$, while, for $\sigma_2 + \sigma_3 \neq 0$, the sum $j_1 + j_2 + j_3 = j_1 + (\sigma_2 + \sigma_3) \bar\jmath_{n+1} \neq 0$ if $\bar\jmath_{n+1} \notin \{ \tfrac12 j : j \in J_n \}$, which is a finite set. In case $(iv)$, the sum $j_1 + j_2 + j_3 = (\sigma_1 + \sigma_2 + \sigma_3) \bar\jmath_{n+1} \neq 0$ because $\bar\jmath_{n+1} \geq 1$ and $\sigma_1 + \sigma_2 + \sigma_3 \in \{ \pm 1, \pm 3\}$.
Now we study $({\mathtt S}2)$ for the set $J_{n+1}$. Denote, in short, $b := j_1^3 + j_2^3 + j_3^3 + j_4^3 - (j_1 + j_2 + j_3 + j_4)^3$.
A set of 4 elements $j_1, j_2, j_3, j_4 \in J_{n+1}$ can be of 5 types: $(i)$ all ``old'' elements $j_1, j_2, j_3, j_4 \in J_n$; $(ii)$ three ``old'' elements $j_1, j_2, j_3 \in J_n$ and one ``new'' element $j_4 = \sigma_4 \bar\jmath_{n+1}$, $\sigma_4 = \pm 1$; $(iii)$ two ``old'' element $j_1, j_2 \in J_n$ and two ``new'' elements $j_3 = \sigma_3 \bar\jmath_{n+1}$, $j_4 = \sigma_4 \bar\jmath_{n+1}$, with $\sigma_3, \sigma_4 = \pm 1$; $(iv)$ one ``old'' element $j_1 \in J_n$ and three ``new'' elements $j_i = \sigma_i \bar\jmath_{n+1}$, $\sigma_i = \pm 1$, $i=2,3,4$; $(v)$ all ``new'' elements $j_i = \sigma_i \bar\jmath_{n+1}$, $\sigma_i = \pm 1$, $i = 1,2,3,4$.
In case $(i)$, $b \neq 0$ by inductive assumption.
In case $(ii)$, assume that $j_1 + j_2 + j_3 + j_4 \neq 0$, and calculate \begin{align*}
b & = - 3 (j_1 + j_2 + j_3) \bar\jmath_{n+1}^2 - 3 (j_1 + j_2 + j_3)^2 \sigma_4 \bar\jmath_{n+1} + [j_1^3 + j_2^3 + j_3^3 - (j_1 + j_2 + j_3)^3] \ =: p_{j_1,j_2,j_3, \sigma_4}(\bar\jmath_{n+1}). \end{align*} This is nonzero provided $p_{j_1,j_2,j_3, \sigma_4}(\bar\jmath_{n+1}) \neq 0$ for all $j_1, j_2, j_3 \in J_n$, $\sigma_4 = \pm 1$. The polynomial $p_{j_1,j_2,j_3, \sigma_4}$ is never identically zero because either the leading coefficient $-3(j_1 + j_2 + j_3) \neq 0$ (and, if one uses $(\mathtt{S}_3)$, this is always the case), or, if $j_1 + j_2 + j_3 = 0$, then $j_1^3 + j_2^3 + j_3^3 \neq 0$ by \eqref{prodottino} (using also that $0 \notin J_n$).
In case $(iii)$, assume that $ j_1 + \ldots + j_4 = j_1 + j_2 + (\sigma_3 + \sigma_4) \bar\jmath_{n+1} \neq 0$, and calculate \begin{align*}
b & = - 3 \alpha \bar\jmath_{n+1}^3 - 3 \alpha^2 (j_1 + j_2) \bar\jmath_{n+1}^2 - 3 (j_1 + j_2)^2 \alpha \bar\jmath_{n+1} - j_1 j_2 (j_1 + j_2)
=: q_{j_1,j_2,\alpha}(\bar\jmath_{n+1}), \end{align*} where $\alpha := \sigma_3 + \sigma_4$. We impose that $q_{j_1,j_2,\alpha}(\bar\jmath_{n+1}) \neq 0$ for all $j_1, j_2 \in J_n$, $\alpha \in\{ \pm 2, 0 \}$. The polynomial $q_{j_1,j_2,\alpha}$ is never identically zero because either the leading coefficient $-3\alpha \neq 0$, or, for $\alpha = 0$, the constant term $- j_1 j_2 (j_1 + j_2) \neq 0$ (recall that $0 \notin J_n$ and $j_1 + j_2 + \alpha \bar\jmath_{n+1} \neq 0$).
In case $(iv)$, assume that $j_1 + \ldots + j_4 = j_1 + \alpha \bar\jmath_{n+1} \neq 0$, where $\alpha := \sigma_2 + \sigma_3 + \sigma_4 \in \{ \pm 1, \pm 3 \}$, and calculate \[ b = \alpha \bar\jmath_{n+1} r_{j_1,\alpha}(\bar\jmath_{n+1}), \quad r_{j_1, \alpha}(x) := (1-\alpha^2) x^2 - 3 \alpha j_1 x - 3 j_1^2. \] The polynomial $r_{j_1, \alpha}$ is never identically zero because $j_1 \neq 0$. We impose $r_{j_1, \alpha}(\bar\jmath_{n+1}) \neq 0$ for all $j_1 \in J_n$, $\alpha \in \{ \pm 1, \pm 3\}$.
In case $(v)$, assume that $j_1 + \ldots + j_4 = \alpha \bar\jmath_{n+1} \neq 0$, with $\alpha := \sigma_1 + \ldots + \sigma_4 \neq 0$, and calculate $b = \alpha (1 - \alpha^2) \bar\jmath_{n+1}^3$. This is nonzero because $\bar\jmath_{n+1} \geq 1$ and $\alpha \in \{\pm 2, \pm 4\}$.
We have proved that, in choosing $\bar\jmath_{n+1}$, there are only finitely many integers to avoid.
\noindent Pietro Baldi, Dipartimento di Matematica e Applicazioni ``R. Caccioppoli'', Universit\`a di Napoli Federico II, Via Cintia, Monte S. Angelo, 80126, Napoli, Italy, {\tt pietro.baldi@unina.it}.
\noindent Massimiliano Berti, Riccardo Montalto, SISSA, Via Bonomea 265, 34136, Trieste, Italy, {\tt berti@sissa.it}, {\tt riccardo.montalto@sissa.it}.
\end{document} | arXiv | {
"id": "1404.3125.tex",
"language_detection_score": 0.42739206552505493,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Scattering quantum random-walk search with errors}
\author{A.~G\'abris} \affiliation{ Research Institute for Solid State Physics and Optics, H-1525
Budapest, P. O. Box 49, Hungary } \author{T.~Kiss} \affiliation{ Research Institute for Solid State Physics and Optics, H-1525
Budapest, P. O. Box 49, Hungary } \author{I.~Jex} \affiliation{ Department of Physics, FJFI {\v C}VUT, {B\v r}ehov\'a 7, 115
19 Praha 1 - Star{\'e} M\v{e}sto, Czech Republic } \date{\today} \begin{abstract}
We analyze the realization of a quantum-walk search algorithm in a
passive, linear optical network. The specific model enables us to
consider the effect of realistic sources of noise and losses on the
search efficiency. Photon loss uniform in all directions is shown to
lead to the rescaling of search time. Deviation from directional
uniformity leads to the enhancement of the search efficiency
compared to uniform loss with the same average. In certain cases
even increasing loss in some of the directions can improve search
efficiency. We show that while we approach the classical limit of
the general search algorithm by introducing random phase
fluctuations, its utility for searching is lost. Using numerical
methods, we found that for static phase errors the averaged search
efficiency displays a damped oscillatory behaviour that
asymptotically tends to a non-zero value. \end{abstract}
\maketitle
\section{Introduction}
The generalization of random walks for quantum systems \cite{AHARONOV1993QRW} proved to be a fruitful concept \cite{kempe-2003-44} attracting much recent interest. Algorithmic application for quantum information processing is an especially promising area of utilization of quantum random walks (QRW) \cite{Ambainis2004Quantum-walks-a}.
In his pioneering paper \cite{Grover} Grover presented a quantum algorithm that can be used to search an unsorted database quadratically faster than the existing classical algorithms. Shenvi, Kempe and Whaley (SKW) \cite{shenvi:052307} proposed a search algorithm based on quantum random walk on a hypercube, which has similar scaling properties as the Grover search. In the SKW algorithm the oracle is used to modify the quantum coin at the marked vertex. In contrast to the Grover search, this algorithm generally has to be repeated several times to produce a result, but this merely adds a fixed overhead independent of the size of the search space.
There are various suggestions and some experiments how to realize quantum walks in a laboratory. The schemes proposed specifically for the implementation of QRWs include ion traps \cite{travaglione:032310}, nuclear magnetic resonance \cite{du-2003-67} (also experimentally verified \cite{Ryan2005Experimental-im}), cavity quantum electrodynamics \cite{di:032304,agarwal:033815}, optical lattices \cite{dur-2002-66}, optical traps \cite{eckert:012327}, optical cavity \cite{Roldan2005Optical-impleme}, and classical optics \cite{knight-2003-227}. Moreover, the application of standard general logic networks to the task is always at hand \cite{Hines-pra75, Fujiwara2005Scalable-networ}.
The idea of the scattering quantum random walk (SQRW) \cite{hillery:032314} was proposed as an answer to the question that can be posed as: how to realize a coined walk by a quantum optical network built from passive, linear optical elements such as beam splitters and phase shifters? It turned out that such a realization is possible and, in fact, it leads to a natural generalization of the coined walk, the scattering quantum random walk \cite{Kosik2005Scattering-mode}. The SQRW on the hypercube allows for a quantum optical implementation of the SKW search algorithm \cite{shenvi:052307}. Having a proposal for a physical realization at hand we are in the position to analyze in some detail the effects hindering its successful operation.
Noise and decoherence strongly influence quantum walks. For a recent review on this topic see \cite{Kendon2006Decoherence-in-}. The first investigations in this direction indicated that a small amount of decoherence can actually enhance the mixing property \cite{Kendon2003Decoherence-can}. For a continuous QRW on a hypercube there is a threshold for decoherence, beyond which the walk behaves classically \cite{Alagic2005Decoherence-in-}. Ko\v sik {\it et al} analyzed SQRW with randomized phase noise on a $d$ dimensional lattice \cite{Kosik2006Quantum-walks-w}. The quantum walk on the line has been studied by several authors in the linear optical context, with the emphasis on the effect of various initial states, as well as on the impact of decoherence \cite{jeong:pra68.012310, pathak:pra75.032351}. The quantum random walk search with imperfect gates was discussed in some detail by Li {\it et al} \cite{Li2006Gate-imperfecti}, who have considered the case when the Grover operator applied in the search is systematically modified. Such an imperfection decreases the search probability and also shifts its first maximum in time.
In this paper we analyze the impact of noise on the SKW algorithm typical for the experimental situations of the SQRW. In particular, first we focus on photon losses and show that, somewhat contradicting the na\"{\i}ve expectation, non-trivial effects such as the enhancement of the search efficiency can be observed. As a second type of errors we study randomly distributed phase errors in two complementary regimes. The first regime is characterized by rapid fluctuation of the optical path lengths, that leads to the randomization of phases for each run of the algorithm. We show that the classical limit of the SKW algorithm, reached by increasing the variance of the phase fluctuations, does not correspond to a search algorithm. In the other regime, the stability of the optical path lengths is maintained over the duration of one run, thus the errors are caused by static random phases. This latter case has not yet been considered in the context of QRWs. We found that static phase errors bring a significantly different behaviour compared to the case of phase fluctuations. Under static phase errors the algorithm retains its utility, with the average success probability displaying a damped oscillatory behaviour that asymptotically tends to a non-zero constant value.
The paper is organized as follows. In the next section we introduce the scattering quantum walk search algorithm. In section \ref{sec:uniform}.\ we derive analytic results for the success probability of search for the case when a single coefficient describes photon losses independent of the direction. In section \ref{sec:non-uniform}.\ we turn to direction dependent losses, and present estimations of the success probability based on analytical calculations and numerical evidence. In section \ref{sec:phase}.\ phase noise is considered and consequences for the success probability are worked out. Finally, we conclude in Sec.~\ref{sec:conclusions}.
\section{The scattering quantum walk search algorithm} \label{sec:sqrw}
The quantum walk search algorithm is based on the generalized notion of coined quantum random walk (CQRW), allowing the coin operator to be non-uniform accross the vertices. In the early literature the coin is considered as position (vertex) independent. The CQRW is defined on the product Hilbert space $ {\ensuremath{\mathcal{H}}} = \ensuremath{\cH^C} \otimes \ensuremath{\cH^G} $, where $ \ensuremath{\cH^C} $ refers to the quantum coin, and $ \ensuremath{\cH^G} $ represents the graph on which the walker moves. The discrete time-evolution of the system is governed by the unitary operator \begin{equation} U=SC\, , \end{equation} where $C$ is the coin operator which corresponds to flipping the quantum coin, and $S$ is the step or translation operator that moves the walker one step along some outgoing edge, depending on the coin state. Adopting a binary string representation of the vertices $V$ of the underlying graph $G=(V,E)$, the step operator $S$ (a permutation operator of the entire Hilbert space $ {\ensuremath{\mathcal{H}}} $) can be expressed as \begin{equation} S = \sum_{d=0}^{n-1} \sum_{x\in V} \ket{d, x\oplus e_{dx}} \bra{d,x}\, . \label{eq:S_gendef} \end{equation} In (\ref{eq:S_gendef}) $x$ denotes the vertex index. Here, and in the rest of this paper we identify the vertices with their indices and understand $V$ as the set of vertex indices. The most remarkable fact about $S$ is that it contains all information about the topology of the graph. In particular, the actual binary string values of $e_{dx}$ are determined by the set of edges $E$. This is accomplished by the introduction of direction indices $d$, which run from $0$ to $n-1$ in case of the $n$ regular graphs which are used in the search algorithm.
To implement the scattering quantum random walk on an $n$ regular graph of $N$ nodes, identical $n$-multiports \cite{Jex-optcomm117, Zukovski_pra55} are arranged in columns each containing $N$ multiports. The columns are enumerated from left to right, and each row is assigned a number sequentially. The initial state enters on the input ports of multiports in the leftmost column. The output and input ports of multiports of neighbouring columns $j$ and $j+$ are then indexed suitably and connected according to the graph $G$.
For the formal description of quantum walks on arrays of multiports, we propose to label every mode by the row index and input port index of its \textit{destination} multiport. We note that an equally good labelling can be defined using the row index and output port index of the \textit{source} multiport. To describe single excitation states, we use the notation $\ket{d,x}$ where the input port index of the destination multiport is $d=0,1,\ldots,n-1$, and the row index is $x=0,1,\ldots,N-1$. Thus the total Hilbert space can effectively be separated into some product space $ \ensuremath{\cH^C} \otimes \ensuremath{\cH^G} $. To be precise, the additional label $j$ would be necessary to identify in which column the multiport is, however, we think of the column index as a discrete time index, and drop it as an explicit label of modes. Thus a time-evolution $U=SC$ can be generated by the propagation through columns of multiports.
A quantum walk can be realized in terms of the basis defined using the destination indices, and we shall term it ``standard basis'' through this section. First, we recall that an $n$-multiport can be fully characterized by an SU($n$) transformation matrix $\mathbf{C}$. The effect of such multiport on single excitation states $\ket{\psi}\in \ensuremath{\cH^C} $ is given by the formula, \begin{equation}
\ket{\psi}=\sum_{d=0}^{n-1} a_d \ket{d} \to \sum_{d,k=0}^{n-1}
C_{dk} a_k \ket{d}, \label{eq:C} \end{equation} where $\ket{d}$ denotes the single photon state with the photon being in the $d$ mode, i.e.\ $\ket{d} = \ket0_0 \ldots \ket1_d \ldots \ket0_{n-1}$. We note, that a multiport with any particular transformation matrix $\mathbf{C}$ can be realized in a laboratory \cite{Reck_prl73}. To simplify calculations it may be beneficial to choose an indexing of input and output ports such that the connections required to realize the graph $G$ can be made in such way that each input port has the same index as the corresponding source output port. Therefore the label $d$ can stay unique during ``propagation.'' We emphasize that this is not a necessary assumption for a proper definition of SQRW, but an important property that makes also easier to see that SQRWs are a superset of generalized CQRWs. This indexing of input and output ports for walks on a hypercube is depicted on Fig.~\ref{fig:port-example}a, with some of the actual connections illustrated for a (three dimensional) cube on Fig.~\ref{fig:port-example}b.
Considering an array of identical multiports, an arbitrary input state undergoes the transformation by the same matrix $\mathbf{C}$ for every $x$. Let the output port $d$ of multiport $x$ be connected to multiport $x\oplus e_{dx}$ in the next row. Thus the mode labelled by the source indices $d$ and $x$, is labelled by $d$ and $x\oplus e_{dx}$ in terms of the destination indices. Therefore, effect of propagation in terms of our standard basis is written, \begin{equation} \sum_{d,x} a_{dx} \ket{d,x} \to \sum_{dkx} C_{dk} a_{kx} \ket{d,x
\oplus e_{dx}}. \label{eq:SC} \end{equation} Comparing this formula with Eqs.~(\ref{eq:S_gendef}) and (\ref{eq:C}) we see that this formula corresponds to a $U=SC=S(C_0\otimes\openone)$ transformation where $C_0$ is generated by the matrix $\mathbf{C}$. Due to the local nature of the realization of the coin operation, it is straight-forward to realize position dependent coin operations, such as the one required for the quantum walk search algorithm.
\begin{figure}\label{fig:port-example}
\end{figure}
In particular, the SKW algorithm \cite{shenvi:052307} is based on the application of two distinct coin operators, e.g. \begin{subequations} \label{eq:std-coin-pair} \begin{eqnarray} C_0 &=& G, \\ C_1 &=& -\openone, \end{eqnarray} \end{subequations} where $G$ is the Grover inversion or diffusion operator $G:= -\openone + 2
|s^C\rangle\langle s^C|$, with $|s^C\rangle=1/\sqrt{n} \sum_{d=1}^{n} \ket{d}$ \cite{moore02quantum}. In the algorithm, the application of the two coin operators is conditioned on the result of oracle operator $ {\ensuremath{\mathcal{O}}} $. The oracle marks one $\ensuremath{x_{\mathrm{t}}}$ as target, hence the coin operator becomes conditioned on the node: \begin{equation} C' = C_0 \otimes \openone + (C_1 - C_0)\otimes \ket{\ensuremath{x_{\mathrm{t}}}}\bra{\ensuremath{x_{\mathrm{t}}}}. \label{eq:pert_coin} \end{equation} When $n$ is large, the operator $U':=SC'$ can be regarded as a perturbed variation of $U=S(C_0\otimes\openone)$. The conditional transformation (\ref{eq:pert_coin}) is straight-forward to implement in the multiport network. For the two coins (\ref{eq:std-coin-pair}) one has to use a simple phase shifter at position $\ensuremath{x_{\mathrm{t}}}$ in every column of the array, and a multiport realizing the Grover matrix $G$ at every other position. The connection topology required to implement a walk on the hypercube is such that in the binary representation we have $e_d=0\ldots1\ldots0$ with 1 being at the $d$'th position, i.e.\ $e_d=2^d$. See Fig.~\ref{fig:port-example}b for a schematic example, when $\ensuremath{x_{\mathrm{t}}}=001$.
The above described scheme to realize quantum walks in an array of multiports using as many columns as the number of iterations of $U$ can be reduced to only a single column. To do this, one simply needs to connect the output ports back to the appropriate input ports of the destination multiport in the same column. This feed-back setup is similar to the one introduced in Ref.~\cite{Kosik2005Scattering-mode}.
\section{Uniform decay} \label{sec:uniform}
We begin our analysis of the effect of errors on the quantum walk search algorithm by concentrating on photon losses. In an optical network, photon losses are usually present due to imperfect optical elements. An efficient model for linear loss is to introduce fictitious beam-splitters with transmittances corresponding to the effective transmission rate (see Fig.~\ref{fig:loss-scheme}).
\begin{figure}
\caption{Schematic illustration of the photon loss model being
used. The losses suffered by each output mode are represented by
fictitious beam-splitters with transmittivities $\eta_d$. The
beam-splitters incorporate the combined effect of imperfections of
the multiport devices, and effects influencing the state during
propagation between the multiports (e.g.\ scattering and
absorption).}
\label{fig:loss-scheme}
\end{figure}
The simplest case is when all arms of the multiports are characterized by the same linear loss rate $\eta$. The operator describing the effect of decay on a single excitation density operator can then be expressed as \begin{equation} {\mathcal D} (\varrho) = \eta^2 \varrho + (1-\eta^2) \ket{0}\!\bra{0} . \label{eq:homloss_op} \end{equation} The total evolution of the system after one iteration may be written as $\varrho \to {\mathcal D} (U\varrho U^{\dag})$. It is important to note that with the introduction of this error, the original Hilbert space $ \ensuremath{\cH^G} $ of one-photon excitations must be extended by the addition of the vacuum state $\ket0$. The action of the SQRW evolution operator $U$ on the extended Hilbert space follows from the property $U\ket0=\ket0$. Due to the nature of Eq.~(\ref{eq:homloss_op}) and the extension of $U$, one can see that the order of applying the unitary time step and the error operator $ {\mathcal D} $ can be interchanged. Therefore, over $t$ steps the state of the system undergoes the transformation \begin{equation} \varrho \to \eta^{2t} U^t \varrho U^{\dag t}+ (1-\eta^{2t}) \ket{0}\!\bra{0} = {\mathcal D} ^t(U^t \varrho U^{\dag t}). \end{equation}
To simplify calculations, we introduce a linear (but non-unitary) operator to denote the effect of the noise operator $ {\mathcal D} $ on the search Hilbert space: \begin{equation} D \ket{\psi} = \eta \ket{\psi}. \end{equation} This operator is simply a multiplication with a number. It is obviously linear, however, for $\eta<1$ not unitary. The operator $D$ does not describe any coherence damping within the one-photon subspace, since it only uniformly decreases the amplitude of the computational states and introduces the vacuum. Since all final statistics are gathered from the search Hilbert space $ \ensuremath{\cH^G} $, it is possible to drop the vacuum from all calculations, and incorporate all information related to it into the norm of the remaining state. In other words, we can think of $DU$ as the time step operator, and relax the requirement of normalization. Using this notation, the effect of $t$ steps is very straight-forward to express: \begin{equation} \ket{\psi} \to \eta^t U^{t} \ket{\psi}. \label{eq:homo_nstep} \end{equation} This formula indicates that inclusion of the effect of uniform loss may be postponed until just before the final measurement. The losses, therefore, may simply be included in the detector efficiency (using an exponential function of the number of iterations).
Applying the above model of decay to the quantum walk search algorithm we define the new step operator $U''=DU'$, and write the final state of the system after $t$ steps as \begin{multline}
(U'')^t \ket{\psi_0} = \\ \eta^t \cos(\omega'_0t) \ket{\psi_0}- \eta^t
\sin(\omega'_0t)\ket{\psi_1}
+ \eta^t O\left(\frac{n^{3/4}}{\sqrt{2^n}}\right)
\ket{\tilde{r}}. \end{multline} Adopting the notation of Ref.~\cite{shenvi:052307}, the probability of measuring the target state $\ket{x=0}$ at the output after $t$ steps can be expressed as \begin{multline}
p_n(\eta,t) = \sum_{d=0}^{n-1} \left|\left< d,0 \left| (U'')^t \psi_0
\right.\right> \right|^2 \\ = \eta^{2t} \sin^2 (\omega'_0 t)
\left|\left<\left. R,0 \right| \psi_1\right> \right|^2 + 2^{-n}
\eta^{2t} \cos^2 (\omega'_0t) \\ + O(1/2^n). \label{eq:prob_uniform} \end{multline}
We know from Ref.~\cite{shenvi:052307} that $\left|\left<\left. R,0\right| \psi_1 \right>\right|^2 = 1/2 - O(1/n)$. Since an overall exponential drop of the success probability is expected due to the
$\eta^{2t}$ factor, we search for the maximum $t_f$ be before the ideal time-point $|\omega'_0|t=\pi/2$. This guarantees that $\sin^2 (\omega'_0 t_f)$ is finite, therefore due to the $2^{-n}$ factor for large $n$ the second term can be omitted, and it is sufficient to maximize the function \begin{equation} p_n(\eta,t) = \eta^{2t} \sin^2 (\omega'_0t) \left( 1/2 - O(1/n) \right), \label{eq:prob_uniform_approx} \end{equation}
with respect to $t$. After substituting the result $|\omega'_0| = 1/\sqrt{2^{n-1}}[ 1 - O(1/n) ] \pm O(n^{3/2}/2^n)$ from Ref.~\cite{shenvi:052307}, these considerations yield the global maximum at $t_f = \sqrt{2^{n-1}} \left[\mathop{\mathrm{acot}}\nolimits (-\ln\eta\sqrt{2^{n-1}}) + O(1/n) \right]$. During operation we set \begin{equation} t_m:= \sqrt{2^{n-1}} \mathop{\mathrm{acot}}\nolimits(-\ln\eta\sqrt{2^{n-1}}), \end{equation} or the closest integer, as the time yielding the maximum probability of success.
To simplify the upcoming formulae, we introduce the variables \begin{eqnarray} x &=& -\ln\eta\ \sqrt{2^{n-1}}, \\ \varepsilon &=& \log_2(1 - \eta). \end{eqnarray} The variable $\varepsilon$ can be regarded as a logarithmic transmission parameter (the ideal case corresponds to $\varepsilon=\infty$, and complete loss to $\varepsilon=0$). When $\varepsilon$ is sufficiently large, the expression $-\ln\eta$ can be approximated to first order in $2^{-\varepsilon}$ and we obtain \begin{equation} x \approx 2^{-\varepsilon + n/2 - 1/2}. \label{eq:x-approx} \end{equation} Upon substituting $t_m$ into (\ref{eq:prob_uniform_approx}) we can use the new variable $x$ to express the sine term as \begin{multline}
\sin^2(\omega'_0t_m) =\sin^2(|\omega'_0|t_m) =
\sin^2\left[\mathop{\mathrm{acot}}\nolimits x(1+O(1/n))\right] =\\
= \frac1{1+x^2} + \frac{2x\mathop{\mathrm{acot}}\nolimits
x}{1+x^2}O(1/n) + \frac{\mathop{\mathrm{acot}}\nolimits^2x}{1+x^2} O(1/n^2). \end{multline} Thus for the maximum success probability $p_n^{\mathrm{max}}(\eta) = p_n(\eta, t_m)$ we obtain \begin{equation} p_n^{\mathrm{max}}(\eta) = \frac{e^{-2x\mathop{\mathrm{acot}}\nolimits x}}{ 1 + x^2} \left[
\frac12 - O(1/n) + x\mathop{\mathrm{acot}}\nolimits x\, O(1/n) \right]. \label{eq:maxprob-homo} \end{equation} This formula is our main result for the case of uniform photon losses. In the large $n$ limit it gives the approximate performance of the SKW search algorithm as a function of the transmission rate and the size of the search space. Since $x\mathop{\mathrm{acot}}\nolimits x$ is bounded in $x$, the accuracy of the term in brackets is bounded by $O(1/n)$. The most notable consequence of the second $O(1/n)$ contribution is that while in the ideal case the probability $1/2$ is an upper bound, in the lossy case deviations from the leading term, \begin{equation} p^{\mathrm{max}}(x) = \frac12 \exp(-2x\mathop{\mathrm{acot}}\nolimits x) \frac1{1+x^2}, \label{eq:maxprob-approx} \end{equation} can be expected in either direction. The functional form of Eq.~(\ref{eq:maxprob-approx}), plotted on Fig.~\ref{fig:plot_prob_dimless}, allows for a universal interpretation of the dependence of success probability on the transmission rate and the size of the search space through the combined variable $x$. For small losses we can use the approximation (\ref{eq:x-approx}) and conclude that the search efficiency depends only on the difference $n/2-\varepsilon$. The approximation is compared with the results of numerical calculations on Fig.~\ref{fig:plot_max_prob_compare}. We can observe the $O(1/n)$ accuracy of the theoretical curves as expected, hence producing poorer fits at smaller ranks. The positive deviations from the theoretical curves observable at low transmission rates are due to the second $O(1/n)$ term of Eq.~(\ref{eq:maxprob-homo}).
\begin{figure}
\caption{Probability of measuring the target state after the optimal
number of iterations according to the approximation in
Eq.~(\ref{eq:maxprob-approx}). The probability is plotted against
the logarithm of $x$ which is a combination of the rank of the
hypercube $n$ and the logarithmic transmission parameter
$\varepsilon$.}
\label{fig:plot_prob_dimless}
\end{figure}
\begin{figure}
\caption{Maximum success probabilities for different ranks of
hypercube ($n$) calculated using the theoretical approximation, and
numerical simulations. The theoretical curves are drawn with
continuous lines of different patterns, and the numerical results
are represented by points interconnected with the same line pattern
and colour as the theoretical approximates corresponding to the same
logarithmic transmission parameter $\varepsilon$.}
\label{fig:plot_max_prob_compare}
\end{figure}
\section{Direction dependent loss} \label{sec:non-uniform}
In the present section we no longer assume equal loss rates, and consider the schematically depicted loss model on Fig.~\ref{fig:loss-scheme} with arbitrary $\eta_d$ parameters. Because of the high symmetry of the hypercube graph, and the use of mainly identical multiports, we can neglect the position dependence of the transmission coefficients. The operator $ {\mathcal D} $ describing the decoherence mechanism thus acts on a general term of the density operator as \begin{equation} {\mathcal D} (\ket{d,x}\bra{d',x'}) = \eta_d \eta_{d'} \ket{d,x}\bra{d',x'} + \delta_{xx'} \delta_{dd'} \eta_d^2 \ket{0}\!\bra{0} . \end{equation} To describe the overall effect of this operator on a pure state, we re-introduce the linear decoherence operator in a more general form, \begin{equation} D = \sum_{d} \eta_d \ket{d}\bra{d} \otimes \openone, \label{eq:D-inhomo} \end{equation} and use the notation $\{\eta\}$ to denote the set of coefficients $\eta_d$. Due to the symmetry of the system, the sequential order of coefficients is irrelevant. With the re-defined operator the effect of decoherence reads \begin{equation} {\mathcal D} (\varrho) = \varrho' + (1 - \mathop{\mathrm{Tr}}\nolimits \varrho') \ket{0}\!\bra{0} , \label{eq:inhomo-dop} \end{equation} where $\varrho=\ket{\psi}\bra{\psi}$ is the initial state, and the non-vacuum part of the output state is $\varrho'=\ket{\psi'}\bra{\psi'}$, with $\ket{\psi'}= D\ket{\psi}$. Therefore, we can again reduce our problem to calculating the evolution of unnormalized pure states, just as in the uniform case, and use the non-unitary step operator $U''=DU'$ with the more general noise operator.
Assessing how well the algorithm performs under these conditions is a complex task. First, we give a lower bound on the probability of measuring the target node, based on generic assumptions. To begin, we separate the noise operator into two parts \begin{equation} D = \eta + D', \label{eq:D-sep} \end{equation} where, for the moment, we leave $0\le\eta\le1$ undefined. As a consequence of Eq.~(\ref{eq:D-inhomo}) the diagonal elements of $D'$ are $[D']_{dd}=\delta_d=\eta_d-\eta$, and the off-diagonal elements are zero. From Eq.~(\ref{eq:inhomo-dop}) it follows that starting from a pure state $\ket{\psi_0}$, after $t$ non-ideal steps the state of the system can be characterized by the unnormalized vector $\ket{\psi'(t)}$, which is related to the state obtained from the same initial state by $t$ ideal steps as \begin{equation} \ket{\psi'(t)} = \eta^t\ket{\psi(t)} + \ket{r}. \end{equation} The expression of the residual vector $\ket{r}$ reads \begin{equation} \ket{r} = \sum_{k=1}^t (DU')^{t-k} D' \eta^{k-1} \ket{\psi(k)}. \end{equation} To obtain the probability of measuring the target state $\ket{x=0}$ we have to evaluate the formula \begin{equation}
p_n(\{\eta\},t) = \sum_{d=0}^{n-1} \left| \eta^t \braket{d,0}{\psi(t)} +
\braket{d,0}{r} \right|^2. \label{eq:inhomo-prob-def} \end{equation} Due to the symmetry of the graph and the coins, we use e.g.\ Eq.~(\ref{eq:prob_uniform_approx}) and obtain $\braket{d,0}{\psi(t)} \approx -\sin(\omega'_0t)/\sqrt{2n}$. To obtain a lower bound on $p_n(\{\eta\},t)$ we note that the sum is minimal if $\braket{d,0}{r}=\textrm{const}=K$ for every $d$ (we consider a worst case scenario when all $\braket{d,0}{r}$ are negative). Now we assume that the second term is a correction with an absolute value smaller than that of the first term. For the upper bound on $K$, we use the inequality \begin{equation}
\sum_{d=0}^{n-1} \left| \braket{d,0}{r} \right|^2 \le \braket{r}{r}. \end{equation}
The norm of $\ket{r}$ can be bounded using the eigenvalues of $U$, $D$, and $D'$. Let $\eta_{\mathrm{max}} = \max \Set{\eta_d | d=0,\ldots,n-1}$ and $\delta_{\mathrm{max}} = \max
\Set{\left|\delta_d\right| | d=0,\ldots,n-1}$. Then we have \begin{equation} \braket{r}{r} \le \sum_{k=1}^t \eta_{\mathrm{max}}^{t-k} \delta_{\mathrm{max}} \eta^{k-1} = \frac{\eta_{\mathrm{max}}}{\eta} \frac{\delta_{\mathrm{max}}}{\eta_{\mathrm{max}}-\eta} (\eta_{\mathrm{max}}^t - \eta^t). \end{equation}
Since $U$ is unitary, its contribution to the above formula is trivial. Our upper bound on $|K|$ hence becomes $|K| \le 1/{\sqrt n} (\eta_{\mathrm{max}}\delta_{\mathrm{max}}/\eta) (\eta_{\mathrm{max}}^t - \eta^t) / (\eta_{\mathrm{max}} - \eta)$. Combining the results, we obtain a lower bound on the probability for measuring the target node, \begin{multline} p_n(\{\eta\},t) \ge \eta^{2t} \left\{ \sqrt{p_n^{(i)}(t)} \right. \\
\left. - \frac{\eta_{\mathrm{max}}}{\eta} \frac{\delta_{\mathrm{max}}}{\eta_{\mathrm{max}}-\eta}
\left[ \left(\frac{\eta_{\mathrm{max}}}{\eta}\right)^t -1
\right] \right\}^2, \label{eq:p-lower-bound} \end{multline} where $p_n^{(i)}(t)$ stands for the corresponding probability of the ideal (lossless) case. We maximize the lower bound with respect to the arbitrary parameter $\eta$. The procedure can be carried out noting that $\delta_{\mathrm{max}} = \max\{ \eta_{\mathrm{max}}-\eta, \eta-\eta_{\mathrm{min}} \}$, thereby we find the maximum at $\eta=\bar{\eta} \equiv (\eta_{\mathrm{max}} + \eta_{\mathrm{min}})/2$, yielding the formula \begin{equation} p_n(\{\eta\},t) \ge \bar{\eta}^{2t}\left\{ \sqrt{p_n^{(i)}(t)} -
({\eta_{\mathrm{max}}}/{\bar{\eta}}) \left[
\left({\eta_{\mathrm{max}}}/{\bar{\eta}}\right)^t -1 \right] \right\}^2. \label{eq:opt-lower-bound} \end{equation} To interpret the formula (\ref{eq:opt-lower-bound}), we consider the two terms in the curly braces separately. The first term returns the success probability for uniform losses with transmission coefficient $\bar\eta$. The second term may be considered as a correction term that depends not only on some average value of the loss distribution, but also on its degree of non-uniformity in a way that is reminiscent of a mean square deviation. We observe that Eq.~(\ref{eq:p-lower-bound}) provides a useful lower bound only for $\{\eta\}$ distributions violating uniformity to only a small degree. When the expression inside the curly braces becomes negative, the assumption made on the magnitude of the second term of Eq.~(\ref{eq:inhomo-prob-def}) becomes invalid, and therefore the formula does not give a correct lower bound.
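The optimized bound of Eq.~(\ref{eq:opt-lower-bound}) is straightforward to evaluate numerically. The following sketch (the helper name is ours) returns \texttt{None} once the braced term turns negative, i.e.\ once the bound ceases to be valid:

```python
import math

def optimized_lower_bound(p_ideal, eta_max, eta_min, t):
    """Lower bound of Eq. (opt-lower-bound), evaluated at the optimal
    eta_bar = (eta_max + eta_min) / 2.  p_ideal is the lossless success
    probability p_n^{(i)}(t)."""
    eta_bar = (eta_max + eta_min) / 2.0
    correction = (eta_max / eta_bar) * ((eta_max / eta_bar) ** t - 1.0)
    bracket = math.sqrt(p_ideal) - correction
    if bracket < 0.0:
        return None  # assumption behind the bound is violated here
    return eta_bar ** (2 * t) * bracket ** 2

# For uniform losses (eta_max == eta_min) the correction vanishes and the
# bound reduces to eta^{2t} * p_ideal, as expected.
print(optimized_lower_bound(0.5, 0.99, 0.99, 10))
```

As the text notes, increasing the spread $\eta_{\mathrm{max}}-\eta_{\mathrm{min}}$ at fixed $\eta_{\mathrm{max}}$ lowers this estimate, until the bracket turns negative and the bound becomes vacuous.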
The estimated lower bound (\ref{eq:p-lower-bound}) decreases with increasing degree of non-uniformity, in accordance with a naive expectation. However, as we shall show later, numerical simulations taking into account the full complexity of the problem provide evidence to the contrary: departure from uniformity can result in improved efficiency.
\begin{figure}\label{fig:fit-taylor}
\end{figure}
Inspired by the appearance of the average loss rate in the lower bound (\ref{eq:opt-lower-bound}), we introduce the mean and the mean-square deviation of the direction dependent losses, \begin{equation} \left<\eta\right> = \frac1n \sum_{d=0}^{n-1} \eta_d, \quad \mbox{and} \quad Q^2 = \frac1n \sum_{d=0}^{n-1} \delta_d^2, \end{equation} where now $\delta_d = \eta_d - \left<\eta\right>$. By using the Taylor expansion of the success probability function $p_n^{\mathrm{max}}$ around the point $\eta_d=\left<\eta\right>$, the deviations from the uniform loss case can be well estimated at small degrees of non-uniformity. Using the permutation symmetry of $p_n^{\mathrm{max}}$ we can express the Taylor series as \begin{equation} p_n^{\mathrm{max}}(\{\eta\}) = p_n^{\mathrm{max}}(\left<\eta\right>) + B Q^2 + C W^3 + O(\delta_d^4), \label{eq:pmax-Taylor} \end{equation} where $W^3 = 1/n \sum_k \delta_k^3$. We notice that $Q$ may be regarded as the root-mean-square deviation of $\{\eta\}$ as a distribution, and hence it is a well-defined statistical property of the random noise. In other words, as long as a second order Taylor expansion gives an acceptable approximation, the probability of success depends only on the statistical mean and spread ($\left<\eta\right>$, $Q$) of the noise and not on the specific values of $\{\eta\}$. Using numerical simulations, we have determined the values of $B$ up to rank $n=10$, and studied the impact of higher order terms.
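These two statistics are cheap to compute from a set of transmission coefficients. In the sketch below (function name ours) we take $Q$ to be the root-mean-square deviation, so that $Q^2$ is the variance entering the expansion; this is our reading of the notation, chosen so that the expansion terms have consistent orders in $\delta_d$:

```python
import math

def loss_statistics(etas):
    """Mean <eta> and rms deviation Q of a direction-dependent loss
    distribution {eta_d}; Q**2 = (1/n) * sum((eta_d - <eta>)**2)."""
    n = len(etas)
    mean = sum(etas) / n
    q = math.sqrt(sum((e - mean) ** 2 for e in etas) / n)
    return mean, q

# Q is (numerically) zero for a perfectly uniform distribution.
print(loss_statistics([0.8, 0.8, 0.8, 0.8]))
print(loss_statistics([0.9, 0.7]))  # mean 0.8, Q close to 0.1
```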
The second order Taylor coefficients were determined by fitting to the numerically obtained success probabilities at data points where the higher order moments of the loss distributions were small. An example plot of $B$ is provided on Fig.~\ref{fig:fit-taylor}, for a system of rank $n=8$. The higher order effects were suppressed by selecting the lowest values of $W$ from several repeatedly generated random distributions $\{\eta\}$. A general feature exhibited by all studied cases is that the second order coefficients satisfy the inequality \begin{equation} B\ge 2^{-n}. \label{eq:lowerb-second} \end{equation} It is remarkable that this tight lower bound depends only on the size of the system. The dependence of $B$ on $\left<\eta\right>$ is monotonic with discontinuities. We found the number of discontinuities to be proportional to the rank $n$. Our numerical studies have shown that the value of $B$ before the first discontinuity is always constant and equal to the empirical lower bound (\ref{eq:lowerb-second}).
\begin{figure}\label{fig:max-prob-Q35}
\end{figure}
To plot the success probabilities corresponding to arbitrary random coefficients we used the pair of variables $\left<\eta\right>$ and $Q$. On these plots, the higher order terms cause a ``spread'' of the appearing curves. A sample plot is displayed on Fig.~\ref{fig:max-prob-Q35} where the relative improvement is compared to the uniform case, in percentages. We observe a general increase of efficiency as compared to the uniform case with the same average loss rate. A general tendency is that for smaller values of $\left<\eta\right>$ the improvement is larger, interrupted, however, by discontinuities. These discontinuities closely follow those of the second order coefficient $B$.
\begin{figure}\label{fig:max-prob-max}
\end{figure}
The numerical studies, involving 1000 sets of uniformly randomly generated transmission coefficients for each system of rank up to $n=10$, indicate that with the help of Eq.~(\ref{eq:lowerb-second}) the first two terms of the expansion Eq.~(\ref{eq:pmax-Taylor}) can be used to obtain a general lower bound: \begin{equation} p_n^{\textrm{max}}(\{\eta\}) \ge p_n^{\mathrm{max}} (\left<\eta\right>) + 2^{-n}Q^2. \end{equation} The inequality implies that the overall contribution from higher order terms is positive, or is always balanced by the increase of $B$. The appeal of this lower bound is that it depends only on the size of the system $N=2^n$ and the elementary statistical properties of the noise ($\left<\eta\right>$, $Q$). Therefore, together with the formula (\ref{eq:maxprob-approx}) for uniform loss, a straightforward estimation of the success probability is possible before carrying out an experiment.
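This empirical bound is a one-liner given the loss coefficients. In the sketch below (function name ours) we take $Q^2$ to be the variance of the coefficients, i.e.\ $Q$ is the root-mean-square deviation; the uniform-loss value $p_n^{\mathrm{max}}(\left<\eta\right>)$ is passed in, since it comes from the uniform-loss formula of the previous section:

```python
def empirical_lower_bound(p_at_mean, etas):
    """Empirical bound p_max({eta}) >= p_max(<eta>) + 2^{-n} Q^2, where
    n = len(etas) is the rank and Q^2 is the variance of the coefficients.
    p_at_mean is the uniform-loss success probability evaluated at <eta>."""
    n = len(etas)
    mean = sum(etas) / n
    q_sq = sum((e - mean) ** 2 for e in etas) / n
    return p_at_mean + 2.0 ** (-n) * q_sq

# Non-uniformity can only raise the estimate above the uniform-loss value.
print(empirical_lower_bound(0.3, [0.9, 0.7]))
```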
Up to now, we concentrated on comparing the performance of the search algorithm suffering non-uniform losses with those suffering uniform loss with coefficient equal to the average of the non-uniform distribution. Another physically interesting question is how attenuation alone affects search efficiency. We can formulate this question using the notations above as follows. Consider a randomly generated distribution $\{\eta\}$ and compare the corresponding success probability with the one generated by a uniform distribution with transmission coefficient $\eta_{\mathrm{max}} = \max \{\eta\}$. We chose $Q$ as a measure of how much an $\eta_{\mathrm{max}}$ uniform distribution needs to be altered to obtain $\{\eta\}$, and made the comparisons using the same set of samples. A typical plot is presented on Fig.~\ref{fig:max-prob-max}. It appears that as we start deviating from the original uniform distribution, an initial drop of efficiency is followed by a region where improvement shows some systematic increase. However, it is still an open question, whether it is really a general feature that for some values of $Q$ the efficiency is always increased. On the other hand these plots provide clear evidence that for a significant number of cases the difference $p_n^{\mathrm{max}}(\{\eta\}) - p_n^{\mathrm{max}}(\eta_{\mathrm{max}})$ is positive. In other words, rather counter-intuitively, we can observe examples where increased losses result in the improvement of search efficiency. Since the time evolution with losses is non-unitary, the improvement cannot be trivially attributed to the fact that the Grover operator is not the optimal choice for the marked coin.
\section{Phase errors} \label{sec:phase}
In the present section we discuss another type of errors typically arising in optical multiport networks. These errors are due to stochastic changes of the optical path lengths relative to what is designated, and manifest as undesired random phase shifts. Depending on how rapidly the phases change, we may work in two complementary regimes. In the ``phase fluctuation'' regime the phases at each iteration are different. These errors can typically be caused by thermal noise. In the ``static phase errors'' regime, the undesired phases have slow drift such that on the time scale of an entire run of the quantum algorithm their change is insignificant. The origin of such errors can be optical element imperfections, optical misalignments, or a slow stochastic drift in one of the experimental parameters. Phase errors in the fluctuation regime have been studied in Ref.~\cite{Kosik2006Quantum-walks-w} for walks on $N$ dimensional lattices employing the generalized Grover or Fourier coin. The impact of a different type of static error on the SKW algorithm has been analyzed in Ref.~\cite{Li2006Gate-imperfecti}.
To begin the formal treatment, let $F$ denote the operator introducing the phase shifts, and write it as \begin{equation} F(\{\varphi\}) = \sum_{d,x} e^{i\varphi_{dx}} \ket{d,x}\!\bra{d,x}. \end{equation} This operator is unitary, hence the step operator \begin{equation}
U(\{\varphi\}) = S F(\{\varphi\}) C', \label{eq:phase-noise-U} \end{equation} which depends on the phases $\{ \varphi_{dx} \mid d=0,\ldots,n-1;\ x=0,\ldots,2^n-1\}$, is unitary as well. In the case of phase fluctuations, at each iteration $t$ we have parameters $\varphi_{dx}^{(t)}$ such that all $\varphi_{dx}^{(t)}$ are independent random variables for every $d$, $x$ and $t$, distributed according to some probability distribution. In the case of static phase errors, $\varphi_{dx}^{(t)}$ and $\varphi_{dx}^{(t')}$ are the same random variable for every pair of $d$ and $x$.
The formalism of Ref.~\cite{Kosik2006Quantum-walks-w} can be applied to the walk on the hypercube, and extended to the case of non-uniform coins and position dependent phases. Namely, using the shorthand notations $D=\left\{0,1,2,\ldots,n-1\right\}$ and \begin{equation} E(k,l) = \bigoplus_{j=l}^k e_{a_j}, \end{equation} the state after $t$ iterations can be expressed as \begin{multline} \ket{\psi(\{\varphi\},t)} = \frac1{\sqrt{n2^n}} \sum_{x_0\in V} (-1)^{\delta_{\ensuremath{x_{\mathrm{t}}} x_0}} \sum_{\underline{a}\in D^t}
e^{i\varphi(\underline{a}, x_0)} \\
\times \tilde \Xi_{\ensuremath{x_{\mathrm{t}}}} (\underline{a}, x_0) \ket{a_1, x_0 \oplus
E(t,1)}, \end{multline} where \begin{multline} \tilde \Xi_{\ensuremath{x_{\mathrm{t}}}} (\underline{a}, x_0) = \\ \prod_{j=1}^{t-1} \left(
C^{(0)}_{a_ja_{j+1}} + [C^{(1)} - C^{(0)}]_{a_ja_{j+1}}
\delta_{\ensuremath{x_{\mathrm{t}}} \oplus x_0, E(t,j+1)} \right), \end{multline} and $\varphi(\underline{a},x_0)= \sum_{j=1}^t \varphi^{(t+1-j)}_{a_j,x_0 \oplus E(t,j-1)}$. For the standard SKW algorithm, the coin matrices are $C^{(0)}_{aa'} = 2/n-\delta_{aa'}$ and $C^{(1)}_{aa'}=-\delta_{aa'}$, however, the SKW algorithm is reported to work with more general choices of operators $C_{0/1}$ \cite{shenvi:052307}.
For the following study, we express the probability of finding the walker at position $x$ after $t$ iterations as the sum $p_n(x,\{\varphi\},t)=p_n^I(x,\{\varphi\},t) + p_n^C(x,\{\varphi\},t)$, such that the incoherent and coherent contributions are \begin{eqnarray}
p_n^I(x,\{\varphi\},t) &=& \frac1{n2^n} \sum_{\underline{a}\in D^t} \left|
\tilde\Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}) \right|^2, \label{eq:px-incoh} \\ p_n^C(x,\{\varphi\},t) &=& \frac1{n2^n} \sum_{\underline{a}\neq
\underline{a}' } \Phi_{\underline{a}'}^* \Phi_{\underline{a}}
\tilde\Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}')^*
\tilde\Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}) \delta_{a_1'a_1}, \label{eq:px-coh} \end{eqnarray} where $\ensuremath{{\tilde x}_{\mathrm{t}}}=\ensuremath{x_{\mathrm{t}}} \oplus x$. The appearing phase factors are \begin{equation} \Phi_{\underline{a}} = (-1)^{\delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, E(t,1)}} e^{i\varphi(\underline{a}, x \oplus E(t,1))}, \end{equation} and $\tilde \Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a})=\tilde \Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}}(\underline{a}, E(t,1))$, i.e. \begin{equation} \tilde \Xi_{\ensuremath{{\tilde x}_{\mathrm{t}}}} (\underline{a}) = \prod_{j=1}^{t-1} \left(
C^{(0)}_{a_ja_{j+1}} + [C^{(1)} - C^{(0)}]_{a_ja_{j+1}}
\delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, E(j,1)} \right). \label{eq:Xi_tg} \end{equation} Note that when the probability of finding the walker at the target node $\ensuremath{x_{\mathrm{t}}}$ is to be calculated, we must set $x=\ensuremath{x_{\mathrm{t}}}$; therefore, we have $\ensuremath{{\tilde x}_{\mathrm{t}}}=0$.
In the following we shall show that the incoherent contribution is constant, \begin{equation} p_n^I(x, \{\varphi\}, t) = \frac1{2^n}, \label{eq:px-coh-const} \end{equation} for any two unitary coins $C_{0/1}$. Consequently, $p_n^I$ is constant also for balanced coins such as those in Eq.~(\ref{eq:std-coin-pair}). The summations in Eq.~(\ref{eq:px-incoh}) can be rearranged in increasing order of indices of $a_j$, yielding \begin{multline} p_n^I(x, \{\varphi\}, t) = \\
\frac1{n2^n} \sum_{a_1,a_2=0}^{n-1} \left|
C^{(0)}_{a_1a_2} + [C^{(1)}- C^{(0)}]_{a_1a_2}
\delta_{\ensuremath{{\tilde x}_{\mathrm{t}}}, e_{a_1}} \right|^2 \times \cdots \\
\times \sum_{a_{t-1}=0}^{n-1} \left| C^{(0)}_{a_{t-2}a_{t-1}} +
[C^{(1)} - C^{(0)}]_{a_{t-2}a_{t-1}} \delta_{\ensuremath{{\tilde x}_{\mathrm{t}}},
E(t-2,1)} \right|^2 \\
\times \sum_{a_t=0}^{n-1} \left| C^{(0)}_{a_{t-1}a_t} +
[C^{(1)} - C^{(0)}]_{a_{t-1}a_t} \delta_{\ensuremath{{\tilde x}_{\mathrm{t}}},
E(t-1,1)} \right|^2. \end{multline}
Since $E(t-1,1)$ depends on $a_j$ only when $j\leq t-1$, and due to the unitarity of the coins $\langle a_{t-1}| C_0^{\dag} C_0 | a_{t-1}
\rangle = \langle a_{t-1}|C_1^{\dag} C_1 | a_{t-1} \rangle = 1$, the summation over $a_t$ can be evaluated and we obtain $1$. Hence, we see that $p_n^I(x, \{\varphi\}, t) = p_n^I(x, \{\varphi\}, t-1)$, and this implies Eq.~(\ref{eq:px-coh-const}) by induction.
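The telescoping argument relies only on the rows of each coin being unit vectors, which follows from unitarity. A minimal numerical check with the standard SKW coin pair (Grover coin $C_0$ with entries $2/n-\delta_{aa'}$ and $C_1=-\openone$):

```python
import numpy as np

n = 6
# Standard SKW coin pair: Grover coin C0 and C1 = -identity.
C0 = 2.0 / n * np.ones((n, n)) - np.eye(n)
C1 = -np.eye(n)

for C in (C0, C1):
    # unitarity of the coin ...
    assert np.allclose(C @ C.conj().T, np.eye(n))
    # ... gives unit-norm rows, which is what makes the innermost
    # sum over a_t collapse to 1 regardless of which coin acts
    assert np.allclose((np.abs(C) ** 2).sum(axis=1), 1.0)
print("innermost sums collapse to 1 for both coins")
```

The same check passes for any pair of unitary coins, consistent with the claim that $p_n^I$ is constant for arbitrary unitary $C_{0/1}$.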
\begin{figure}
\caption{The averaged (1000 samples) probability of measuring the
target node, when $n=6$ and $\Delta\varphi = 3^o, 6^o, 9^o,
12^o$. The tendency of the success probability to a constant,
non-zero value can be observed on this numerically obtained plot. It
is also observable that a larger variance results in a smaller
asymptotic value.}
\label{fig:phase-noise-evolution}
\end{figure}
The average probability $\bar{p}_n(x,t)$ of finding the walker at node $x$ is obtained by averaging the random phases according to their appropriate probability distribution. Using Eq.~(\ref{eq:px-coh-const}) this probability can be expressed as \begin{equation} \bar{p}_n(x,t) = \left<p_n(x, \{\varphi\}, t) \right> = \frac1{2^n} + \left< p_n^C(x, \{\varphi\}, t) \right>, \end{equation} where $\left<\ldots\right>$ denotes taking the average for each random variable $\varphi^{(t)}_{dx}$ in case of phase fluctuations, and for each $\varphi_{dx}$ in case of static phase errors. It is reasonable to assume that each random variable has the same probability distribution. To analyze the impact of phase errors on the search efficiency, we study the behaviour of the coherent term $\left< p_n^C(x, \{\varphi\}, t) \right>$ for different random distributions.
In case of phase fluctuations characterized by a uniform distribution, the coherent term immediately vanishes and we obtain $\bar{p}_n(x,t) = 1/2^n$. This case can be considered as the classical limit of the quantum walk. Therefore, we conclude that the classical limit of the SKW algorithm is not a search algorithm, independently of the two unitary coins used.
The assumption of a Gaussian distribution of random phases is motivated by the relation of each phase variable $\varphi$ to the optical path length. The changes in the optical path lengths which introduce phase shifts are not restricted to a $2\pi$ interval. In what follows, we assume that the random phases have a zero-centered Gaussian distribution with variance $\Delta\varphi$.
We arrive at the classical limit even when the phase fluctuations have a finite-width Gaussian distribution, simply by repeatedly applying the time evolution operator $U(\{\varphi\})$. For such a Gaussian distribution, the coherent term decreases exponentially with time, a behaviour also confirmed by our numerical calculations.
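The exponential decay can be traced to the Gaussian characteristic function: averaging a single random phase gives $\left<e^{i\varphi}\right>=e^{-\sigma^2/2}$ for $\varphi\sim N(0,\sigma^2)$, so each fluctuating step suppresses the coherences by roughly this factor (a simplification of the full dynamics, used here only for illustration). A quick Monte Carlo check:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.3  # phase-noise width in radians (illustrative value)

# Characteristic function of a zero-mean Gaussian: <e^{i phi}> = e^{-sigma^2/2}
phis = rng.normal(0.0, sigma, size=200_000)
monte_carlo = np.abs(np.exp(1j * phis).mean())
analytic = np.exp(-sigma ** 2 / 2.0)
print(monte_carlo, analytic)
```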
In the static phase error regime the mechanism of phase cancellation is different from that in the fluctuation regime, and more difficult to study analytically. For a uniform random distribution we expect a sub-exponential decay of the coherent term to zero. For a zero-centered Gaussian distribution with variance $\Delta \varphi$ we performed numerical simulations using the standard two coins of Eq.~(\ref{eq:std-coin-pair}).
\begin{figure}
\caption{Time dependence of success probabilities for two different
phase configurations, numerically calculated for a system of rank
$n=6$. The difference in frequencies of the major oscillations is
clearly observable for larger times.}
\label{fig:phase-noise-diff}
\end{figure}
The numerical results for the success probability $\bar{p}_n(\ensuremath{x_{\mathrm{t}}}, t)$ for several values of $\Delta\varphi$ are plotted on Fig.~\ref{fig:phase-noise-evolution}. The data points were obtained by calculating success probabilities for $1000$ randomly generated phase configurations and taking their averages at each time step $t$.
By studying repeated runs with random phase configurations we come to several remarkable conclusions. First, the time evolution of the success probability tends (on a long time scale, $t\gg t_{f}$) to a finite, non-zero constant value. Consequently, even when subject to static phase errors, the SKW algorithm retains its utility as a search algorithm. Second, the early steps of the time evolution are characterized by damped oscillations reminiscent of a collapse. Third, the smaller the phase noise, the larger the long-time stationary value to which the system evolves. We have plotted the stationary values obtained by numerical calculations against the rank of the hypercube on Fig.~\ref{fig:phase-noise}.
Better insight into the above features can be gained by examining the shape of the individual runs of the algorithm with the given random phase configurations. As can be seen on Fig.~\ref{fig:phase-noise-diff}, the success probabilities for different runs display the typical oscillations around a non-zero value. They differ slightly in their frequencies depending on the random phases chosen, hence when these oscillations are summed up we get the typical collapse behaviour. Also, since these frequencies continuously fill up a band specified by the width of the Gaussian, we expect no revivals to happen later. For higher-order hypercubes the success probability drops almost to zero already for very moderate phase errors, resembling the behaviour seen on Fig.~\ref{fig:plot_prob_dimless}.
\begin{figure}
\caption{The long time stationary values of the success probability
(obtained by averaging over 1000 samples) against the size of the
search space, for $\Delta\varphi=0^o,3^o,6^o,15^o$.}
\label{fig:phase-noise}
\end{figure}
\section{Conclusions} \label{sec:conclusions}
We studied the SQRW implementation of the SKW search algorithm and analyzed the influence of the two most common types of disturbance, namely photon losses and phase errors, on its performance. Our main result for the loss-affected SQRW search algorithm is that a non-uniform distribution of the loss can significantly improve the search efficiency compared to uniform loss with the same average. In many cases, even the mere increase of losses in certain directions may improve the search efficiency. Based mostly on numerical evidence, we have established a lower bound for the search probability as a function of the average and variance of the randomly distributed direction dependent loss.
We concentrated our analysis on two complementary regimes of phase errors. When the system is subject to rapid phase fluctuations, the classical limit of the quantum walk is approached. We have shown that in this limit the SKW algorithm loses its applicability to the search problem for any pair of unitary coins. On the other hand, we showed that when the phases are kept constant during each run of the search, the success rate does not drop to zero, but approaches a finite value. In its mechanism, the effect is reminiscent of the exponential localization found in optical networks \cite{TJS}. Therefore, in the long-time limit, static phase errors are less destructive than rapidly fluctuating phase errors.
\acknowledgments{ Support by the Czech and Hungarian Ministries of
Education (CZ-2/2005), by MSMT LC 06002 and MSM 6840770039 and by
the Hungarian Scientific Research Fund (T049234 and T068736) is
acknowledged. }
\end{document}
"id": "0701150.tex",
"language_detection_score": 0.7948015332221985,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Representations of stack triangulations in the plane} \author{Thomas Selig \\ LaBRI, CNRS \\ Université Bordeaux 1 } \maketitle
\begin{abstract} Stack triangulations appear as natural objects when defining an increasing family of triangulations by successive additions of vertices. We consider two different probability distributions for such objects. We represent, or ``draw" these random stack triangulations in the plane $\mathbb{R}^2$ and study the asymptotic properties of these drawings, viewed as random compact metric spaces. We also look at the occupation measure of the vertices, and show that for these two distributions it converges to some random limit measure. \end{abstract}
{\bf Keywords:} Stack triangulations; Occupation measure; Limit Theorem.
\section{Introduction}\label{Intro} Consider a rooted triangulation of the plane, and some finite face $\mathfrak{f}$, say $ABC$, of this triangulation. We insert a vertex $M$ in $\mathfrak{f}$, and add the three edges $AM$, $BM$, $CM$ to the original triangulation. We obtain a triangulation with two faces more than the original triangulation (the face $\mathfrak{f}$ has been replaced by three new faces). Thus, starting from a single rooted triangle, after $k$ such insertions, we get a triangulation with $k$ internal vertices, that is, vertices which are not vertices of the original rooted triangle, and $2k+1$ finite faces. The set of triangulations with $k$ internal vertices which can be reached through this growing procedure is denoted $\Delta_k$. We call such triangulations \emph{stack triangulations}. Note that through this construction we do not obtain the set of all rooted triangulations. This iterative process is demonstrated in Figure \ref{ST}.
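The face and vertex counts follow from the fact that each insertion replaces one face by three. A minimal sketch of the combinatorics (function name ours; the embedding is not tracked here):

```python
import random

def grow_stack_triangulation(k, seed=0):
    """Count internal vertices and finite faces along k uniform-random
    face insertions in a growing stack triangulation."""
    rng = random.Random(seed)
    faces = 1      # start from the single rooted triangle
    internal = 0   # internal vertices inserted so far
    for _ in range(k):
        _ = rng.randrange(faces)  # pick a finite face uniformly at random
        faces += 2                # the chosen face is replaced by three
        internal += 1
    return internal, faces

# After k insertions: k internal vertices and 2k + 1 finite faces.
print(grow_stack_triangulation(10))  # -> (10, 21)
```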
\begin{figure}
\caption{Iterative construction of a stack triangulation}
\label{ST}
\end{figure}
We endow the set $\Delta_k$ with two natural probability distributions: \begin{itemize} \item The first is the uniform distribution $\mathbb{U}_k^{\Delta}$. \item The second distribution $\mathbb{H}_k^{\Delta}$ is the distribution induced by the above construction where at each step the face in which we insert the vertex is chosen uniformly at random among all finite faces, independently from the past. \end{itemize}
\subsection{The object of our study} In this paper, rather than look at stack triangulations as maps, that is up to homeomorphism, we look at particular representations, or drawings, of such objects in the plane, and the geometrical properties of such representations. That is, at each insertion of a new vertex, we draw the line segments corresponding to the edges added. We call such representations \textit{compact triangulations}, and view them as compact subspaces of $\mathbb{R}^2$. The main difference is that maps, though drawn in the plane, are considered only up to homeomorphism, whereas we are interested in the actual representation. We are rather informal here, but will give formal definitions later in the paper, in Section \ref{formal_def}.
We take $A=(0,0)$, $B=(1,0)$, $C = e^{i\frac{\pi}{3}}$ (identifying $\mathbb{C}$ and $\mathbb{R}^2$) to be the three points representing the initial rooted triangle, with $(A,B)$ its root. We start with $T_0 = T = [AB] \cup [BC] \cup [CA]$, and set $\mathcal{D}_0 = \{ T_0 \}$. We denote by $\tilde{T_0}$ the \textit{filled} triangle $T_0$, that is the union of $T_0$ and of the finite connected component of $\mathbb{R}^2 \setminus T_0$. At time $1$, we insert a point $M$ somewhere in $\tilde{T_0} \setminus T_0$. We then define $T_1(M) = T_0 \cup [AM] \cup [BM] \cup [CM] $, and set \[ \mathcal{D}_1 = \{ T_1(M); \ M \in \tilde{T_0} \setminus T_0\}.\]
Now let $T_1(M) \in \mathcal{D}_1$ for some $M \in \tilde{T_0} \setminus T_0$. Write $T_1^{(1)}(M) = [AB] \cup [BM] \cup [MA]$, $T_1^{(2)}(M) = [BC] \cup [CM] \cup [MB]$, $T_1^{(3)}(M) = [CA] \cup [AM] \cup [MC]$, and similarly, $\tilde{T_1}^{(i)}(M)$ for the corresponding filled triangles. At time 2, we insert a point $N$ somewhere in one of the $\tilde{T_1}^{(i)}(M) \setminus T_1^{(i)}(M)$'s. We then define \[ T_2(M,N) := T_1(M) \cup [XN] \cup [YN] \cup [MN],\] where $(X,Y) = \left\lbrace \begin{array}{cc} (A,B) & \mbox{if } N \in \tilde{T_1}^{(1)}(M) \setminus T_1^{(1)}(M)\\ (B,C) & \mbox{if } N \in \tilde{T_1}^{(2)}(M) \setminus T_1^{(2)}(M) \\ (C,A) & \mbox{if } N \in \tilde{T_1}^{(3)}(M) \setminus T_1^{(3)}(M) \end{array} \right. $, and finally set \[ \mathcal{D}_2 = \left\lbrace T_2(M,N); \ M \in \tilde{T_0} \setminus T_0 \mbox{ and } N \in \bigcup_{i=1}^3 \left( \tilde{T_1}^{(i)}(M) \setminus T_1^{(i)}(M) \right) \right\rbrace .\] Figure \ref{iter} illustrates this initial construction.
\begin{figure}
\caption{ Construction of $\mathcal{D}_0,\mathcal{D}_1$ and $\mathcal{D}_2$}
\label{iter}
\end{figure}
Iterating this construction by choosing at each step some triangular face of our drawing and splitting it, we obtain representations of stack triangulations in the plane, and call these compact triangulations. Denote $\mathcal{D}_k$ the set of such objects with $k$ internal vertices (that is, after $k$ successive insertions of vertices), and for $m \in \mathcal{D}_k$ write $\mathcal{V}(m)$ for its set of internal vertices (viewed as a set of points in the plane). Finally, for $m \in \mathcal{D}_k$ we define the occupation measure of $m$ by \beq\label{def_OM1} \mu(m) := \frac1k \sum_{x \in \mathcal{V}(m)} \delta_x, \eq where $\delta_x$ stands for the Dirac mass at $x$.
We are interested here in the case where the successive insertions of vertices are done at random. We suppose that all random variables in this paper are defined on some probability space $( \Omega, \mathcal{F}, \mathbb{P})$. We denote $\mathbb{E}$ the expected value, and ${\rm Var}\,$ the variance. We consider a probability distribution $\nu$ on $\mathbb{R}_+^3$ such that if $P=(P_1,P_2,P_3)$ has law $\nu$ then a.s. $P_i > 0$ for all $i \in \{1,2,3 \}$ and $P_1+P_2+P_3=1$ \footnote{This is the notion of splitting law, defined formally in Section \ref{formal_def}}. We suppose that each insertion of a vertex $M$ in a face $QRS$ is done according to $\nu$, that is we take $M$ to have barycentric coordinates $(Q,P_1),(R,P_2),(S,P_3)$ where $P=(P_1,P_2,P_3)$ has law $\nu$, independently from all previous insertions. Now the two distributions $\mathbb{U}_k^{\Delta}$ and $\mathbb{H}_k^{\Delta}$ on $\Delta_k$ introduced at the start of the section induce probability distributions $\mathbb{U}^{\nu}_k$ and $\mathbb{H}^{\nu}_k$ on $\mathcal{D}_k$. In words, they are the distributions of the drawings of stack triangulations with distribution $\mathbb{U}_k^{\Delta}$ and $\mathbb{H}_k^{\Delta}$, where the insertions of vertices are made according to $\nu$, independently from each other, and independently from the choice of the underlying stack triangulation. The object of the paper is to study the asymptotic behaviour of these two distributions.
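The $\mathbb{H}_k^{\nu}$ dynamics is straightforward to simulate. The sketch below (names ours) uses a symmetric Dirichlet splitting law as one admissible choice of $\nu$: its components are a.s.\ positive and sum to one. The occupation measure of \eref{def_OM1} is then the empirical measure of the returned internal vertices.

```python
import numpy as np

def sample_Hk(k, alpha=(1.0, 1.0, 1.0), seed=0):
    """Draw a compact triangulation under the H_k^nu dynamics, using a
    Dirichlet(alpha) splitting law as an illustrative choice of nu.
    Returns the internal vertices and the list of triangular faces."""
    rng = np.random.default_rng(seed)
    A = np.array([0.0, 0.0])
    B = np.array([1.0, 0.0])
    C = np.array([0.5, np.sqrt(3.0) / 2.0])  # e^{i pi / 3}
    faces = [(A, B, C)]
    internal = []
    for _ in range(k):
        i = rng.integers(len(faces))         # uniform choice of a finite face
        Q, R, S = faces.pop(i)
        p1, p2, p3 = rng.dirichlet(alpha)    # splitting law nu
        M = p1 * Q + p2 * R + p3 * S         # barycentric insertion
        internal.append(M)
        faces += [(Q, R, M), (R, S, M), (S, Q, M)]
    return internal, faces

verts, faces = sample_Hk(50)
print(len(verts), len(faces))  # -> 50 101
```

Replacing the uniform face choice by a uniform draw from $\Delta_k$ would instead give a sample from $\mathbb{U}_k^{\nu}$; that case requires the tree encodings developed below.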
\subsection{Outline of the paper} In Section \ref{sec_def}, we formally define compact triangulations. We then enrich the classical bijection between stack triangulations and ternary trees (see for instance \cite{AM}, Proposition 1) to encode compact triangulations. For this, in Section \ref{ad_cod}, we introduce the notion of coordinate-labelled ternary trees. These are ternary trees with labels at each vertex, which code compact triangulations via a bijection we establish in Theorem \ref{bij_drawing_markedtree}. We end the section by formally defining the distributions $\mathbb{U}^{\nu}_k$ and $\mathbb{H}^{\nu}_k$ on $\mathcal{D}_k$.
In Section \ref{unif}, we study the asymptotic behaviour of the uniform distribution $\mathbb{U}^{\nu}_n$ as $n \rightarrow \infty$. The main results are: \begin{itemize} \item The weak convergence of the occupation measure as defined in \eref{def_OM1} towards a Dirac mass at a random position (Theorem \ref{UOM}). \item The weak convergence of the distribution $\mathbb{U}^{\nu}_n$ towards a distribution on compact subspaces of $\mathbb{R}^2$ (Theorem \ref{conv_draw}). \end{itemize} Section \ref{sectionUOM} is thus dedicated to the statement and proof of Theorem \ref{UOM}. The statement is split into two parts. The first part states the convergence in distribution of an internal vertex of $m_n$ chosen uniformly at random, where $m_n$ has distribution $\mathbb{U}^{\nu}_n$, to some limit vertex. Though this is weaker than the second part, it is a key ingredient in its proof, and thus we choose to state it separately. In Section \ref{UD} we introduce the notion of local convergence for trees (Definition \ref{loc_conv_def}). In Theorem \ref{cont_thm}, we show that the bijection established in Theorem \ref{bij_drawing_markedtree} which maps a coordinate-labelled tree to a compact triangulation has a property of continuity with respect to this topology of local convergence, from which we infer Theorem \ref{conv_draw}.
Finally, in Section \ref{incr}, we study the asymptotic behaviour of the occupation measure under $\mathbb{H}^{\nu}_n$. The key ingredient is Poisson-Dirichlet fragmentation, which allows us to view the trees corresponding to the compact triangulations via Theorem \ref{bij_drawing_markedtree} as the underlying tree of a certain fragmentation tree (Theorem \ref{frag_tree}). We then show the convergence of the occupation measure as defined in \eref{def_OM1} to some (random) limit measure $\mu$ (Theorem \ref{IOM}). In Section \ref{IMP} we study the properties of $\mu$. We show (Proposition \ref{at_part}) that a.s. $\mu$ has no atom, and that it is supported on a set whose Hausdorff dimension is at most $\dfrac{2}{3 \log (3)}$ (Theorem \ref{HDim}).
\subsection{Literature and motivation}
Motivation for this work stems from the paper by Bonichon et al. \cite{Bon}, in which the authors look at convex straight-line drawings of triangulations, and establish bounds on the minimal grid size necessary for these drawings, under the constraint that all vertices are located at integer grid points. More precisely, they show that a grid of size $(n-C) \times (n-C)$ (for some constant $C$) suffices to draw any triangulation with $n$ faces, giving a constructive proof of this result by establishing an algorithm for drawing any triangulation. The aim of this paper is to provide an answer to the question: what do these drawings look like?
More specifically, we aim to explore an approach to the convergence of maps which differs from the traditional combinatorial one. Indeed, maps are embeddings of graphs, but in the combinatorial approach these are viewed up to homeomorphism, and equipped with the graph distance, that is, every edge is given the same length. Concerning this approach, we cite the groundbreaking work by Schaeffer in his thesis \cite{Sch}, where he establishes a crucial bijection between maps and a class of labelled trees, as well as the more recent work by Le Gall \cite{LeGall} and Miermont \cite{Mier}, who showed (separately, using different techniques) that uniform quadrangulations with $n$ faces, renormalised so that every edge has length $C n^{-\frac14}$ (for some constant $C$), converge in distribution to a continuous limit object called the Brownian map. In this paper however, we look at the convergence of the embeddings themselves, viewed as (random) compact spaces. This approach is analogous to the work of Curien and Kortchemski \cite{CK}, who showed the universality of the Brownian triangulation introduced by Aldous \cite{Ald2}, in that it is the limit of a number of discrete families of non-crossing plane configurations, such as dissections, triangulations, and non-crossing trees of the regular $n$-gon. Curien and Kortchemski view non-crossing plane configurations as random compact subspaces of the unit disk, and it is these compact spaces which converge to the limit object.
In this paper, we also study the asymptotic behaviour of the occupation measure, as defined in \eref{def_OM1}. Similar work includes the paper by Fekete \cite{Fek} on branching random walks where the underlying tree is a binary search tree (this is related to our distribution $\mathbb{H}^{\nu}_n$); he shows that the occupation measure converges weakly to a deterministic limit measure. Further work concerning random measures similar to ours can be found in \cite{Ald1}, where Aldous proposes a natural model for random continuous ``distributions of mass", called the \textit{Integrated super-Brownian Excursion} (ISE), which is the (random) occupation measure of the Brownian snake with lifetime process the normalised Brownian excursion. ISE is defined using random branching structures, and appears as the continuous limit of occupation measures of several discrete structures.
Finally, let us mention that the combinatorial aspect of stack triangulations has been extensively studied, notably by Albenque and Marckert \cite{AM}, and their paper will therefore be of great use to us. The authors studied both the uniform distribution $\mathbb{U}^{\Delta}_k$ and the distribution $\mathbb{H}^{\Delta}_k$. More precisely, they showed that: \begin{itemize} \item For the topology of local convergence, $\mathbb{U}^{\Delta}_n$ converges weakly to a distribution on the set of infinite maps. \item For the Gromov-Hausdorff distance, with the normalising factor $n^{\frac12}$, a map with the uniform distribution $\mathbb{U}^{\Delta}_n$ converges weakly to the continuum random tree introduced by Aldous \cite{CRT}. \item Under the distribution $\mathbb{H}^{\Delta}_n$, the distance between random points rescaled by $\frac{6}{11} \log n$ converges to $1$ in probability. \end{itemize}
\section{Compact triangulations and encoding with trees}\label{sec_def}
In this section we code compact triangulations, that is, the representations of triangulations in the plane, by certain labelled trees. There are two main ideas in this coding. First, there is the combinatorial bijection between the discrete objects: stack triangulations (viewed up to homeomorphism) and ternary trees. This well-known bijection maps internal vertices of the triangulation to internal nodes of the tree and faces of the triangulation to leaves of the tree (see for instance \cite{AM}, Proposition 1, and references therein). We then enrich this bijection to include the drawing of the triangulation by adding labels to the tree: these labels correspond to the barycentric coordinates of the vertices of the triangulation.
\subsection{Compact triangulations}\label{formal_def}
Here we build formally the set $\mathcal{D}_k$ of compact triangulations with $k$ internal vertices. The construction is done by induction, and is similar to the construction of stack triangulations. This allows us to observe the tree-like structure of these objects. During the construction, we will define the various notions necessary for the encoding discussed above. Set, as in the introduction, $A=(0,0)$, $B=(1,0)$, $C = e^{i\frac{\pi}{3}}$ to be the three points of the original triangle, and define $T = [AB] \cup [BC] \cup [CA]$. Now define $\mathcal{D}_0 = \{ T \}$, and set $\mathcal{V}(T) = \emptyset$. The set $\mathcal{V}(T)$ will be the set of internal vertices of $T$. Now assume we have constructed $\mathcal{D}_k$ for some $k \geq 0$, such that $\mathcal{D}_k$ is a set of compact subspaces of $\mathbb{R}^2$ and any $m \in \mathcal{D}_k$ satisfies the following properties: \begin{enumerate} \item The compact space $m$ is the union of finitely many line segments in the plane. \item There are exactly $2k+1$ finite connected components of $\mathbb{R}^2 \setminus m$, and these are all interiors of triangles. Let $\mathcal{F}^0(m)$ be the set of these connected components, and call the elements of $\mathcal{F}^0(m)$ \textit{faces} of $m$. For $f \in \mathcal{F}^0(m)$ we define $(A_{f},B_{f},C_{f})$ as the three points of the triangle $f$. We can in fact define these points unambiguously as follows. \begin{itemize} \item $X_f = X$ for $X \in \{A,B,C \}$, if $f$ is the interior of the original triangle $T$. \item If a triangle $f$ is split into three triangles $f_1,f_2,f_3$ by adding a point $M$ in its interior, and $f$ is defined by the three points $A_{f},B_{f},C_{f}$, then $M=A_{f_1} = B_{f_2} = C_{f_3}$ with the other two vertices of each triangle $f_i$ unchanged (that is, $B_{f_1} = B_{f}$, $C_{f_1}=C_{f}$ and so forth). This is illustrated in Figure \ref{trg_order} below. 
\end{itemize} \item Finally assume that for any $m \in \mathcal{D}_k$ we have defined a set $\mathcal{V}(m)$ of $k$ points of $\tilde{T}$, which are the $k$ points inserted at each step of the construction of $m$. \end{enumerate}
Note that these properties are all satisfied for $k=0$. \begin{figure}
\caption{Ordering the vertices of a triangle}
\label{trg_order}
\end{figure}
We now construct the set $\mathcal{D}_{k+1}$. First, let \[ \dot{\mathcal{D}_k} = \lbrace (m,f) ; \ m \in \mathcal{D}_k, f \in \mathcal{F}^0(m) \rbrace \] be the set of compact triangulations with a marked face. Define a family of maps $\mathcal{I}_M$ from $\dot{\mathcal{D}_k}$ to the set of compact subspaces of $\mathbb{R}^2$ as follows. Let $(m,f) \in \dot{\mathcal{D}_k}$, and let $(A_{f},B_{f},C_{f})$ be the three (ordered) points of $f$. For any point $M$ in the face $f$ we define \[m' = \mathcal{I}_M(m,f) := m \cup [A_{f}M] \cup [B_{f}M] \cup [C_{f}M], \] that is, the space $m$ with three new line segments added, connecting the points of the face $f$ with the inserted vertex $M$. The map $\mathcal{I}_M$ is illustrated in Figure \ref{insertion}. We see that there are exactly $2k+3$ finite connected components of $\mathbb{R}^2 \setminus m'$, and these are all interiors of triangles (we have replaced one of them, $f$, by three new ones). We also set \beq\label{def_V} \mathcal{V}(m') = \mathcal{V}(m) \cup \{M \}, \eq and thus the set $\mathcal{V}(m')$ is a set of $k+1$ points of $\tilde{T}$: it is the set of the internal vertices which define the faces of $m'$. Finally, we can define \[ \mathcal{D}_{k+1} :=
\left\lbrace \mathcal{I}_M(m,f); \, (m,f) \in \dot{\mathcal{D}_k}, M \in f\right\rbrace \] to be the image of this map. In words, it is the set of ``drawings'' of stack triangulations with $k+1$ internal vertices, with edges included. \begin{figure}
\caption{The insertion map $\mathcal{I}$}
\label{insertion}
\end{figure}
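For concreteness, the ordering convention $M=A_{f_1}=B_{f_2}=C_{f_3}$ used in the insertion step can be sketched in a few lines of Python (a minimal illustration, not part of the formal construction; the function name \texttt{split\_face} is ours, and faces are stored as ordered triples of points):

```python
# A minimal sketch (names are ours) of the insertion step: a face
# f = (A_f, B_f, C_f) is split at an interior point M into three faces,
# ordered so that M = A_{f1} = B_{f2} = C_{f3}, the other two vertices
# of each sub-face being unchanged.
def split_face(A_f, B_f, C_f, M):
    f1 = (M, B_f, C_f)    # A_{f1} = M
    f2 = (A_f, M, C_f)    # B_{f2} = M
    f3 = (A_f, B_f, M)    # C_{f3} = M
    return f1, f2, f3

# Example: split the original triangle T at its centroid.
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)
M = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
f1, f2, f3 = split_face(A, B, C, M)
```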
\begin{de} Let $k\geq 0$. For $m \in \mathcal{D}_k$, we call the elements of $\mathcal{V}(m)$ (where $\mathcal{V}(m)$ is defined step by step by \eref{def_V}) the \emph{internal vertices} of $m$. The set $\mathcal{D}_k$ is called the set of compact triangulations with $k$ internal vertices. Finally, we denote \[ \mathcal{D} = \bigcup_{k \geq 0} \mathcal{D}_k \] the set of compact triangulations. \end{de}
\begin{de}\label{defOM} Let $m \in \mathcal{D}$ with $\mathcal{V}(m) \neq \emptyset$. We define the \emph{occupation measure} of $m$ by \beq\label{def_OM} \mu(m) = \frac{1}{\vert \mathcal{V}(m) \vert} \sum_{x \in \mathcal{V}(m)} \delta_x. \eq This is a probability measure on $\mathbb{R}^2$. \end{de}
Note that $\mathcal{D}_k$ is a set of compact subspaces of $\mathbb{R}^2$. We aim to introduce some probability laws on these sets (as explained in the introduction), and for this we need to equip them with a $\sigma$-field. We first recall the definition of the Hausdorff distance for compact spaces.
\begin{de} Let $(E,d)$ be a compact metric space. For $A \subseteq E$ and $\varepsilon>0$, define the \emph{$\varepsilon$-neighbourhood of $A$} as the set of points of $E$ whose distance to $A$ is less than $\varepsilon$, that is \[ V^{ \varepsilon }(A) = \lbrace x \in E, \, d(x,A) < \varepsilon \rbrace. \] Then for two compact sets $A,B \subseteq E$, the Hausdorff distance between $A$ and $B$ is defined by \[ d_H(A,B) = \inf \lbrace \varepsilon > 0, \, A \subseteq V^{ \varepsilon }(B) \mbox{ and } B \subseteq V^{ \varepsilon }(A) \rbrace.\] This defines a distance on the set of compact subspaces of $E$. \end{de}
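As an illustration of this definition, the following Python sketch computes the Hausdorff distance between two finite point sets of the plane, the simplest compact sets (the function name is ours):

```python
import math

def hausdorff(A, B):
    # d_H(A, B) for finite sets A, B of planar points: the larger of the
    # two one-sided distances sup_{a in A} d(a, B) and sup_{b in B} d(b, A).
    d_AB = max(min(math.dist(a, b) for b in B) for a in A)
    d_BA = max(min(math.dist(a, b) for a in A) for b in B)
    return max(d_AB, d_BA)
```

For instance, \texttt{hausdorff([(0, 0)], [(3, 4)])} returns $5.0$, and the distance between a set and itself is $0$.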
We equip the space of compact subspaces of $\tilde{T}$ with the Hausdorff distance. It is a well-known topological fact that this makes it a complete metric space (see for instance \cite{BBI} Section 7.3.1 p. 252). In fact, $(\mathcal{D}_k,d_H)$ is a compact metric space. We equip the sets $\mathcal{D}_k$ with the corresponding Borel $\sigma$-algebra.
\subsection{Encoding with properly marked trees}\label{ad_cod}
We now encode compact triangulations by certain labelled trees. We begin with the purely combinatorial aspect. Let \[ \mathcal{W} := \bigcup_{n \geq 0} \mathbb{N}^n \] be the set of all words on $\mathbb{N} = \lbrace 1,2,... \rbrace $, and by convention set $ \mathbb{N}^0 = \lbrace \emptyset \rbrace $.\\ If $u = (u_1,...,u_n) \in \mathcal{W}$ we write $ \vert u \vert = n $ and call this the \textit{height} of $u$. Also, if we take two words $u = (u_1,...,u_n) \, ,v = (v_1,...,v_m) \in \mathcal{W}$, we write $ uv = (u_1,...,u_n,v_1,...,v_m) $ for the concatenation of $u$ and $v$. By convention $u \emptyset = \emptyset u = u$. A \emph{planar tree} is a subset $t \subseteq \mathcal{W}$ such that \begin{enumerate} \item $ \emptyset \in t $. \item If $u(j) \in t$ for some $u \in \mathcal{W}$ and $j \in \mathbb{N}$, then $u \in t$. \\The notation $u(j)$ is used here to mark the fact that we are concatenating the \emph{words} $u$ and $(j)$, the latter being written so as to differentiate it from the letter $j$. \item For every $u \in t$ there exists $k_u(t) \in \mathbb{N} \cup \lbrace 0 \rbrace$ such that $u(j) \in t$ if and only if $1 \leq j \leq k_u(t)$. \end{enumerate}
The integer $k_u(t)$ corresponds to the number of children (or descendants) of $u$ in $t$.
We will denote by $\mathcal{U}$ the set of planar trees. If $t \in \mathcal{U}$ is a planar tree, its height $h(t)$ is defined by $h(t) := \sup \lbrace \vert u \vert; \ u \in t \rbrace \in \llbracket 0, \, \infty \rrbracket $. If $u \in t$ has no child (i.e. $k_u(t) = 0$) we say that $u$ is a \emph{leaf} of $t$. Any vertex which is not a leaf is called an \emph{internal node} of $t$. We denote $t^0$ the set of internal nodes of a tree $t$. If $t$ is a tree, and $u,v$ are in $t$, we write $u \wedge v$ for the \emph{highest common ancestor} of $u$ and $v$, i.e. the element of maximal height of the set $ \lbrace w \in t ; \ \exists (u',v'), \, u=wu' \mbox{ and } v=wv' \rbrace $. If $u \in t$, we let $\theta_u(t) = \lbrace v \in \mathcal{W} ; uv \in t \rbrace $. This is the subtree of $t$ rooted at $u$. A ternary tree $t$ is a planar tree such that $ \forall u \in t, \, k_u(t) \in \lbrace 0,3 \rbrace $, i.e. every internal node has exactly three children.
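The word encoding of trees lends itself to a direct implementation; the following Python sketch (function names ours) represents a tree as a set of tuples of letters, the root being the empty tuple:

```python
# Trees as sets of words; a word u is a tuple, and the root is ().
def children_count(t, u):
    # k_u(t): the number of children of u in t.
    k = 0
    while u + (k + 1,) in t:
        k += 1
    return k

def leaves(t):
    # The nodes of t with no child.
    return {u for u in t if children_count(t, u) == 0}

def subtree(t, u):
    # theta_u(t): the words v such that uv is in t.
    return {v[len(u):] for v in t if v[:len(u)] == u}

# The ternary tree with one internal node (the root) and three leaves:
t = {(), (1,), (2,), (3,)}
```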
We denote $\mathcal{T}$ the set of ternary trees, and henceforth we will simply call them trees. We denote $\mathcal{T}^{\infty}$ the infinite complete ternary tree, that is \[ \mathcal{T}^{\infty} = \bigcup_{n \geq 0} \{1,2,3 \}^n.\]
It is a well-known fact that $\mathcal{T}$ is mapped bijectively to the set of stack triangulations. However, compact triangulations contain more information than stack triangulations, since compact triangulations record where each internal vertex is placed. This additional information will be stored at each internal node $u$ of the associated ternary tree, in the form of the barycentric coordinates of the point associated with $u$, at the time it was inserted. The idea is thus to associate with a point $M$ its triplet $\mathcal{C}(M)$ of barycentric coordinates with respect to $(A,B,C)$, normalised so that their sum equals $1$. As such $ \mathcal{C}(A)=[1,0,0], \ \mathcal{C}(B)=[0,1,0], \ \mathcal{C}(C)=[0,0,1]$. Equivalently, if the splitting of $T$ is given as in Figure \ref{split}, then $\mathcal{C}(M) = (P_1,P_2,P_3)$, where $(P_1,P_2,P_3)$ are the respective ratios of the areas of the triangles $MBC,AMC,AMB$ over the area of $ABC$.
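The area-ratio description of barycentric coordinates can be sketched as follows (a minimal Python illustration; function names are ours):

```python
def area(P, Q, R):
    # Signed area of the triangle PQR (shoelace formula).
    return 0.5 * ((Q[0] - P[0]) * (R[1] - P[1]) - (R[0] - P[0]) * (Q[1] - P[1]))

def barycentric(M, A, B, C):
    # C(M): ratios of the areas of MBC, AMC, ABM over the area of ABC
    # (signed areas, so the ratios are positive for M inside the triangle).
    s = area(A, B, C)
    return (area(M, B, C) / s, area(A, M, C) / s, area(A, B, M) / s)

# Example: the centroid of the original triangle has coordinates (1/3, 1/3, 1/3),
# and M is recovered as the convex combination P1*A + P2*B + P3*C.
A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
coords = barycentric(G, A, B, C)
```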
\begin{figure}
\caption{The splitting of a triangle via $P$}
\label{split}
\end{figure} Write \[ V_{2} := \lbrace (x_1, x_2 ,x_3) \in \mathbb{R}^3; \ x_1, x_2 ,x_3 \geq 0 \mbox{ and } x_1+ x_2 +x_3=1 \rbrace\] for the standard $2$-dimensional simplex in $\mathbb{R}^3$, so that any point $M \in \tilde{T}$ corresponds bijectively to its (normalised) barycentric coordinates in $V_2$. Also, let
\[ V_{2}^* := \lbrace (x_1, x_2 ,x_3) \in \mathbb{R}^3; \ x_1, x_2 ,x_3 > 0 \mbox{ and } x_1+ x_2 +x_3=1 \rbrace \] be the simplex with its boundary removed, that is, its relative interior.
\begin{de}\label{def_pltrees} \begin{enumerate} \item A \emph{fragmentation-labelled} tree is a pair $(t,(P(u))_{u \in t^0})$, $t \in \mathcal{T}$, such that for any $u \in t^0$, $P(u) \in V_2^*$; that is, a tree $t$ together with a set of triplets $P(u)$ indexed by the internal nodes of $t$. $P(u)$ is called the \emph{splitting triplet at $u$}. We denote $\mathcal{FT}_n$ the set of fragmentation-labelled trees with $n$ internal nodes, and $\mathcal{FT} = \bigcup \mathcal{FT}_n$. \item A \emph{coordinate-labelled} tree is a pair $(t,(\lambda(u))_{u \in t})$, $t \in \mathcal{T}$, with labels $\lambda(u)$ at each node $u$ such that: \begin{enumerate}[(a)] \item For each leaf $l$ of the tree $t$, we have $\lambda(l) \in V_2^3$, and write $\mathcal{C}(l) = \lambda(l)$. The elements of $\mathcal{C}(l)$ are called the \emph{coordinates} of $l$. \item For each internal node $u$ we have $\lambda(u) = (\mathcal{C}(u),P(u))$, with $\mathcal{C}(u) \in V_2^3$, and $P(u) \in V_2^*$. The elements of $\mathcal{C}(u)$ are called the \emph{coordinates} of $u$, and $P(u)$ is called the \emph{splitting triplet} at $u$. \item The coordinates of the root are $\mathcal{C}(\emptyset)=([1,0,0],[0,1,0],[0,0,1])$. \item If $\mathcal{C}(u)=(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$ and $P(u)=(P_1,P_2,P_3)$ then for $i\in \{1,2,3\}$ we have $\mathcal{C}(u(i))=(\tilde{\mathcal{C}}_j)_{j \in \{1,2,3\} }$ where $\tilde{\mathcal{C}}_j$ is equal to $ \mathcal{C}(u).P(u) := P_1\mathcal{C}_1+P_2\mathcal{C}_2+P_3\mathcal{C}_3$ if $j=i$ and $\mathcal{C}_j$ otherwise. This property is illustrated in Figure \ref{loc_label}. \end{enumerate} We denote $\mathcal{CT}_n$ the set of coordinate-labelled trees with $n$ internal nodes and likewise $\mathcal{CT} = \bigcup \mathcal{CT}_n$. For $ t \bullet \in \mathcal{CT}$ we will denote $p(t \bullet) \in \mathcal{T}$ the underlying tree. \end{enumerate} \end{de}
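Rules (c) and (d) amount to a simple recursive update of the triplets of coordinates; a minimal Python sketch (function names ours, coordinates stored as tuples) reads:

```python
def combine(C_u, P_u):
    # C(u).P(u) = P1*C1 + P2*C2 + P3*C3, a convex combination of the
    # three coordinate triplets in C(u).
    C1, C2, C3 = C_u
    P1, P2, P3 = P_u
    return tuple(P1 * C1[j] + P2 * C2[j] + P3 * C3[j] for j in range(3))

def child_coordinates(C_u, P_u, i):
    # Rule (d): the i-th child (i in {1,2,3}) keeps C(u), except that its
    # i-th triplet is replaced by C(u).P(u).
    M = combine(C_u, P_u)
    return tuple(M if j == i - 1 else C_u[j] for j in range(3))

# Rule (c): the coordinates of the root.
C_root = ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
```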
\begin{figure}
\caption{The local labelling rule (d) for coordinate-labelled trees}
\label{loc_label}
\end{figure}
\begin{rem}\label{proper_insertion} The condition $P(u) \in V_2^*$ (as opposed to $V_2$) means that the insertions of new vertices at each step are \emph{proper} insertions, that is the new point is added in the interior of a face and not on its border. This is crucial for Theorem \ref{bij_drawing_markedtree}. \end{rem}
\begin{rem}\label{def_Phi} We can map a fragmentation-labelled tree $(t,(P(u))_{u \in t^0}) \in \mathcal{FT}_k$ to a coordinate-labelled tree $t \bullet \in \mathcal{CT}_k$ by setting $p(t \bullet) = t$, keeping for every internal node $u$ of $t \bullet$ the same splitting triplet $P(u)$ as in $t$, and filling in the remaining triplets of coordinates using rules (c) and (d). This gives us a bijective mapping which we denote $\Phi_k: \mathcal{FT}_k \rightarrow \mathcal{CT}_k$. \end{rem}
Once more, we aim to define probability distributions on the sets $\mathcal{FT}$ and $\mathcal{CT}$. For this, we introduce a $\sigma$-field on the sets $\mathcal{FT}_k$ and $\mathcal{CT}_k$, with the help of a distance.
\begin{de} Let $k \geq 0$. The map $d_C : \mathcal{CT}_k \times \mathcal{CT}_k \rightarrow \mathbb{R}_+ $ defined by \[ d_C(t_1 \bullet,t_2 \bullet) = \mathds{1}_{p(t_1 \bullet) \neq p(t_2 \bullet)} + \mathds{1}_{p(t_1 \bullet) = p(t_2 \bullet)} \left( \left( \max_{u \in p(t_1 \bullet)} d(\lambda_{t_1}(u), \lambda_{t_2}(u)) \right) \wedge 1 \right), \] where $\lambda_{t_i}(u)$ is the label of the node $u$ in $t_i \bullet$ (note that when the underlying trees coincide, the labels at a given node live in the same space) and $d$ represents any distance on the set of labels (seen as a subspace of $\mathbb{R}^i$ for some $i$), is a distance on $\mathcal{CT}_k$ (for usual reasons). We call it the \emph{coordinate-label distance}. \end{de}
We define in an analogous manner a distance $d_F$ on $\mathcal{FT}_k$. The spaces $\mathcal{CT}_k$ and $\mathcal{FT}_k$ for $k \geq 0$ are equipped with the corresponding Borel $\sigma$-algebras. We can now state the main theorem of this section.
\begin{thm}\label{bij_drawing_markedtree} Let $n \geq 0$. Equip the set $\mathcal{CT}_n$ with the coordinate-label distance $d_C$ and $\mathcal{D}_n$ with the Hausdorff distance $d_H$. Then there exists a homeomorphism \[ \Psi_n : \mathcal{CT}_n \rightarrow \mathcal{D}_n \] \[ \hspace{1cm} t \bullet \mapsto m ,\] such that: \begin{enumerate} \item Each internal node $u$ of $t \bullet$ corresponds bijectively to an internal vertex $M$ of $m$. Moreover, if $ \lambda(u) = (\mathcal{C}(u),P(u))$ then the barycentric coordinates of the vertex $M$ with respect to $(A,B,C)$ are given by $\mathcal{C}(u).P(u)$. \item Each leaf $l$ of $t \bullet$ corresponds bijectively to a face $f$ of $m$. Moreover, if $\mathcal{C}(l) = (\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$ is the label at $l$ and the face $f$ is defined by the three vertices $(A_f,B_f,C_f)$ then $\mathcal{C}(A_f) = \mathcal{C}_1$, $\mathcal{C}(B_f) = \mathcal{C}_2$, $\mathcal{C}(C_f) = \mathcal{C}_3$. \end{enumerate} \end{thm}
\begin{rem} Note that the spaces which are in one-to-one correspondence are both infinite, so that it is not the existence of the bijection as such which is of interest, but the fact that via this bijection all relevant information on a compact triangulation can be read in a coordinate-labelled tree. The measurability of the bijection will allow us to transport distributions. \end{rem}
\begin{de}\label{def_triangles} For a node $u$ in a coordinate-labelled tree $t \bullet \in \mathcal{CT}$, we define $T(u)$ to be the triangle whose three points are given by the triplet of coordinates $\mathcal{C}(u)$, and $\tilde{T}(u)$ for the filled triangle. This is illustrated in Figure \ref{index_trg}. \end{de}
\begin{figure}
\caption{The indexation of triangles}
\label{index_trg}
\end{figure}
\subsection{Proof of Theorem \ref{bij_drawing_markedtree}}
We proceed by induction on $n$. We follow a similar path to the proof of Proposition 1 in \cite{AM}, by constructing the bijection iteratively. For $n=0$ there is no work to do. We have $\mathcal{D}_0= \{ T \}$ and $ \mathcal{CT}_0 = \{ (\{ \emptyset \}, \mathcal{C}(\emptyset)) \}$. By property (c) of Definition \ref{def_pltrees} the coordinates $\mathcal{C}(\emptyset)$ satisfy part 2 of the theorem, as desired.
Now assume we have constructed $\Psi_n$ as in the statement of Theorem \ref{bij_drawing_markedtree}, for some $n \geq 0$. Let $ t \bullet \in \mathcal{CT}_{n+1} $. Denote $t = p( t \bullet)$ and choose a node $u \in t$ such that $u(1),u(2),u(3)$ are leaves of $t$. Now define $t' := t \setminus \{u(1),u(2),u(3)\}$, and $t' \bullet$ to be the coordinate-labelled tree whose labels coincide with those of $t \bullet$ except at $u$, where we remove the splitting triplet $P(u) = (P_1,P_2,P_3)$, as $u$ is now a leaf of $t'$. Thus $t' \bullet \in \mathcal{CT}_n$ and by induction we can define $m' := \Psi_n(t' \bullet) \in \mathcal{D}_n$. Let $f$ be the face of $m'$ corresponding to the leaf $u$ via $\Psi_n$. Write as in the statement of the theorem $(A_f,B_f,C_f)$ for the three vertices defining $f$.
Now let $M$ be the point in $f$ whose barycentric coordinates with respect to $(A_f,B_f,C_f)$ are $(P_1,P_2,P_3)$, and define $m = \Psi_{n+1}(t \bullet)= m' \cup [A_f,M] \cup [B_f,M] \cup [C_f,M]$. It follows that the barycentric coordinates of $M$ with respect to $(A,B,C)$ are $P_1 \mathcal{C}(A_f) + P_2 \mathcal{C}(B_f) + P_3 \mathcal{C}(C_f) = \mathcal{C}(u).P(u)$ by property 2 of the induction hypothesis applied to $u$ in $t'$. Thus, by mapping $u$ to $M$ and all other internal nodes of $ t \bullet $ to their corresponding internal vertex via $\Psi_n$, we see that $\Psi_{n+1}$ satisfies condition 1 of Theorem \ref{bij_drawing_markedtree}. To satisfy condition 2, we map all the leaves of $t'$ except $u$ to their corresponding faces via $\Psi_n$, noting that these faces are untouched by the insertion, so that the condition remains satisfied. Finally, we map the leaves $u(1),u(2),u(3)$ respectively to the faces $MB_fC_f,A_fMC_f,A_fB_fM$ of $m$. Because of the local growth property (d) of Definition \ref{def_pltrees}(2), we see that condition 2 is satisfied for these leaves. This iterative construction is illustrated in Figure \ref{loc_bij}.
\begin{figure}
\caption{Illustrating the growth property of the bijection $\Psi_n$}
\label{loc_bij}
\end{figure}
Two points remain. Firstly, that $\Psi_{n+1}$ is a bijection. But this follows from our construction and the definition of $\mathcal{D}_n$. Indeed $\mathcal{D}_{n+1}$ is obtained from $\mathcal{D}_n$ through the insertion of a vertex anywhere in a given face of an element of $\mathcal{D}_n$, while we have a similar iterative structure for coordinate-labelled trees. It is important here that each vertex is inserted in the interior of some face, and not on its boundary (since the splitting triplets are in $V_2^*$ and not just in $V_2$), so that the face it is inserted in is defined unambiguously.
The final point is to prove that $\Psi_{n}$ is bicontinuous with respect to the given distances. For this, we fix some $m \in \mathcal{D}_n$ and $\varepsilon > 0$. Write $t\bullet = \Psi_n^{-1}(m)$. Now there exists $\eta > 0$ such that for any $(\mathcal{C}=(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3),P=(P_1,P_2,P_3)),(\mathcal{C}'=(\mathcal{C}_1',\mathcal{C}_2',\mathcal{C}_3'),P'=(P_1',P_2',P_3')) \in V_2^3 \times V_2^*$, if $\|(\mathcal{C},P) - (\mathcal{C}',P')\| < \eta$ then $\| \mathcal{C}.P - \mathcal{C}'.P' \| < \varepsilon$. We may suppose that $\eta < 1$, so that if $d_C(t \bullet ', t \bullet) < \eta$ we have $p(t \bullet ') = p( t \bullet)$. This implies that if $d_C(t \bullet ', t \bullet) < \eta$ then for any vertex $u \in p( t \bullet)$ the corresponding vertex in $m$ is at distance less than $\varepsilon$ from the corresponding vertex in $m' := \Psi_n(t \bullet ')$. As a consequence, we get that $d_H(m',m) < \varepsilon$ and the continuity of $\Psi_n$ is proved. The bicontinuity then follows, since a continuous bijection between compact metric spaces is a homeomorphism.\qed
\subsection{Introducing randomness}
So far, we have worked in a purely deterministic setting. In this subsection, we formally introduce the two probability distributions on $\mathcal{D}_k$ which will be of interest to us.
\begin{de} A \emph{splitting law} $\nu$ is a distribution on $\mathbb{R}_+^3$ such that if $P=(P_1,P_2,P_3)$ is distributed according to $\nu$, then: \begin{enumerate} \item For any permutation $\sigma$ on $\lbrace 1,2,3 \rbrace$, $(P_{\sigma(1)},P_{\sigma(2)},P_{\sigma(3)})$ has the same distribution as $(P_1,P_2,P_3)$, that is, $\nu$ is symmetric. \item For any $i \in \lbrace 1,2,3 \rbrace$, $P_i >0$ almost surely. \item $P_1 + P_2 + P_3 = 1$ almost surely. \end{enumerate} We denote $\mathcal{M}_S(V_2^*)$ the set of splitting laws, and say that a random variable $P=(P_1,P_2,P_3)$ is a \emph{splitting ratio} if its distribution is a splitting law. \end{de}
Fix some $n \geq 0$. We define two probability distributions on the set $\mathcal{T}_n$ of ternary trees with $n$ internal nodes. \begin{itemize} \item The first, which we denote $\mathbb{U}_n^{\mathcal{T}}$, is the uniform distribution on $\mathcal{T}_n$. \item The second, which we denote $\mathbb{H}_n^{\mathcal{T}}$, is defined by induction. For $n = 0$, the distribution $\mathbb{H}_0^{\mathcal{T}}$ is the Dirac mass at the unique tree reduced to its root $\{ \emptyset \}$. Now suppose we have defined a distribution $\mathbb{H}_{n-1}^{\mathcal{T}}$ on $\mathcal{T}_{n-1}$. Choose $t \in \mathcal{T}_{n-1}$ according to $\mathbb{H}_{n-1}^{\mathcal{T}}$. Conditionally on $t$, choose one of its $2n -1$ leaves uniformly at random, and replace that leaf by an internal node with three children. This gives us a probability distribution $\mathbb{H}_n^{\mathcal{T}}$ on $\mathcal{T}_n$. Note that the weight of a tree is proportional to the number of histories leading to its construction (starting from a single root node). \end{itemize}
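The inductive definition of $\mathbb{H}_n^{\mathcal{T}}$ translates directly into a sampling procedure. The following Python sketch (the function name is ours, and trees are represented as sets of words on $\{1,2,3\}$, the root being the empty word) grows an increasing tree by $n$ uniform leaf replacements:

```python
import random

def increasing_ternary_tree(n, rng=random):
    # Sample a tree with law H_n^T: start from the root and, n times,
    # replace a uniformly chosen leaf by an internal node with 3 children.
    t = {()}        # the tree, as a set of words on {1, 2, 3}
    leaf_list = [()]  # the current leaves
    for _ in range(n):
        i = rng.randrange(len(leaf_list))
        u = leaf_list[i]
        children = [u + (j,) for j in (1, 2, 3)]
        t.update(children)
        leaf_list[i] = children[0]       # u is no longer a leaf;
        leaf_list.extend(children[1:])   # its 3 children now are
    return t
```

After $n$ steps the tree has $n$ internal nodes, hence $3n+1$ nodes and $2n+1$ leaves in total.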
We say that a random variable $t \in \mathcal{T}$ is an \textit{increasing tree} if it has distribution $\mathbb{H}_n^{\mathcal{T}}$ for some $n \geq 0$.
\begin{de} Let $\nu \in \mathcal{M}_S(V_2^*)$ be a splitting law, and $(P(u))_{u \in \mathcal{T}^{\infty}}$ be an i.i.d. sequence of random variables with law $\nu$. Let $n \geq 0$. \begin{enumerate} \item We denote $\mathbb{U}_n^{\mathcal{T},\nu}$ (resp. $\mathbb{H}_n^{\mathcal{T},\nu}$) the distribution of $t^P_n \bullet := \Phi_n (t_n,(P(u))_{u \in t_n^0}) $ where $t_n \in \mathcal{T}_n$ is independent of $(P(u))_{u \in \mathcal{T}^{\infty}}$ and has distribution $\mathbb{U}_n^{\mathcal{T}}$ (resp. $\mathbb{H}_n^{\mathcal{T}}$), and $\Phi_n$ is as in Remark \ref{def_Phi}. \item We define the distributions $\mathbb{U}^{\nu}_n$ and $\mathbb{H}^{\nu}_n$ to be the respective images of the distributions $\mathbb{U}_n^{\mathcal{T},\nu}$ and $\mathbb{H}_n^{\mathcal{T},\nu}$ via the bijection $\Psi_n$ of Theorem \ref{bij_drawing_markedtree}. These are therefore two probability distributions on $\mathcal{D}_n$. \end{enumerate} \end{de}
\section{The uniform model}\label{unif}
In this section, we study the asymptotic behaviour of the distribution $\mathbb{U}^{\nu}_n$. That is, we look at random compact triangulations where the underlying stack triangulation is chosen uniformly, and the insertion of vertices is done according to some splitting law $\nu \in \mathcal{M}_S(V_2^*)$, independently of the choice of the underlying triangulation. We study both the occupation measure and the asymptotic behaviour of the distribution itself.
\subsection{The occupation measure}\label{sectionUOM}
\begin{thm}\label{UOM} Let $(m_n)_n$ be a sequence of random compact triangulations, where $m_n$ has distribution $\mathbb{U}^{\nu}_n$. Recall (see \eref{def_V}) that $\mathcal{V}(m_n)$ denotes the set of internal vertices of $m_n$. For every $n$, conditionally on $ m_n $, let $U_n$ be a vertex of $\mathcal{V}(m_n)$, chosen uniformly at random. Finally, let $\mu_n$ be the occupation measure of $m_n$, as in \eref{def_OM}. Then \begin{enumerate} \item The random point $U_n$ converges in distribution to some random limit point $U_{\infty}$ as $n$ tends to infinity. \item We have \[ \mu_n \,{\buildrel (d) \over \longrightarrow}\, \delta_{U_{\infty}} \quad \mbox{as } n \rightarrow \infty,\] where the convergence is in distribution on the space of probability measures on the filled triangle $\tilde{T}$. \end{enumerate} \end{thm}
Theorem \ref{UOM} says that in the uniform model, all the vertices of the compact triangulation are at the same place, except for a proportion of them which tends to $0$. Although point 2 is stronger than point 1, we state both here since point 1 will be heavily used in the proof of point 2. Figure \ref{simu1} shows a simulation of the vertices of the map $m_n$, where we take for $\nu$ the special case $\nu = \delta_{\left(\frac13,\frac13,\frac13\right)}$, and $n \approx 10\,000$. We can see that the vertices are indeed concentrated at one place.
\begin{figure}
\caption{A simulation of the set $\mathcal{V}(m_n)$ where $m_n$ has distribution $\mathbb{U}^{\nu}_n$ and $n \approx 10\,000$}
\label{simu1}
\end{figure}
\subsubsection{Proof of Theorem \ref{UOM}.(1)}
We begin by recalling an elementary fact about uniform ternary trees.
\begin{fact}\label{unif_point} Take $U_n$ as in the statement of Theorem \ref{UOM}, and write $U'_n$ for the corresponding node in the coordinate-labelled tree $t\bullet_n := \Psi_n^{-1}(m_n)$, as in Theorem \ref{bij_drawing_markedtree}. Write $U'_n=(u_1,u_2,\cdots,u_h)$ where $h$ is the height of $U'_n$. Then conditionally on $h$, the random variables $u_1,u_2,...,u_h$ are i.i.d. and uniformly distributed on $\lbrace 1,2,3 \rbrace$. \end{fact}
\begin{proof} By construction of the law $\mathbb{U}^{\nu}_n$, the tree $t_n := p(t\bullet_n)$ follows the uniform distribution on $\mathcal{T}_n$. We now use the following argument. If $t$ is a random ternary tree, chosen uniformly among trees of a given size, then conditionally on their sizes the subtrees at the root $\theta_{(1)}(t),\theta_{(2)}(t),\theta_{(3)}(t)$ are independent, and also follow the uniform distribution. It immediately follows that the $(u_i)$ are i.i.d., and the fact that the law of $u_1$ is uniform on $\lbrace 1,2,3 \rbrace$ stems from the symmetric nature of the uniform distribution on $\mathcal{T}_n$. \end{proof}
By definition of a coordinate-labelled tree we have the following: let $u$ be an internal node in a coordinate-labelled tree $t$, with coordinates $\mathcal{C}(u)=(\mathcal{C}_1,\mathcal{C}_2,\mathcal{C}_3)$ and splitting triplet $P(u)=(P_1,P_2,P_3)$, then: \[ \forall i \in \lbrace 1,2,3 \rbrace, \quad \mathcal{C}(u(i))^T=M_{\nu}^{(i)}.\mathcal{C}(u)^T,\] where $M_{\nu}^{(i)}$ is the three-by-three identity matrix in which the $i$-th row is replaced by $P(u)$, i.e. \beq\label{defM} M_{\nu}^{(1)} = \left( \begin{array}{ccc} P_1 & P_2 & P_3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), \quad M_{\nu}^{(2)} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ P_1 & P_2 & P_3 \\ 0 & 0 & 1 \end{array} \right), \quad M_{\nu}^{(3)} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ P_1 & P_2 & P_3 \end{array} \right). \eq Henceforth, we will leave out the subscript $\nu$ wherever there is no risk of confusion. Combining this and Fact \ref{unif_point} gives us the following result.
\begin{pro} Let $m_n,U_n$ be as in the statement of Theorem \ref{UOM}. Write $U'_n$ for the corresponding node in the coordinate-labelled tree $t\bullet_n := \Psi_n^{-1}(m_n)$, and $\mathcal{C}(U'_n)=(\mathcal{C}_1(U'_n),\mathcal{C}_2(U'_n),\mathcal{C}_3(U'_n))$ for the coordinates of $U'_n$. Then for $i \in \{ 1,2,3 \}$, conditionally on the height $h$ of $U'_n$, the law of $\mathcal{C}_i(U'_n)$ is given by the $i$-th row of the product $M_h \cdots M_1$, where the $M_j$ are i.i.d. random variables with law $\frac{1}{3} \delta_{M^{(1)}} + \frac{1}{3} \delta_{M^{(2)}} + \frac{1}{3} \delta_{M^{(3)}}$ (the $M^{(k)}$ being defined as in \eref{defM}). \end{pro}
Now to get the desired convergence of $U_n$, it is of course sufficient to show the convergence of the sequence $(\mathcal{C}_n)_{n\geq0}$, where $\mathcal{C}_n$ denotes the barycentric coordinates of the point $U_n$. By Theorem \ref{bij_drawing_markedtree}(1) the law of $\mathcal{C}_n$ is that of $P_1\mathcal{C}_1(U'_n)+P_2\mathcal{C}_2(U'_n)+P_3\mathcal{C}_3(U'_n)$, where $P=(P_1,P_2,P_3)$ is a splitting ratio with distribution $\nu$, independent from $\mathcal{C}(U'_n)$. The previous proposition gives us the law of $\mathcal{C}(U'_n)$. Moreover, for any $A>0$, $\mathbb{P}( \vert U'_n \vert \leq A )$ tends to zero as $n$ goes to infinity, i.e. the height of $U'_n$ tends to infinity in probability. Thus, to prove point 1 of Theorem \ref{UOM}, it is sufficient to show the following.
\begin{pro}\label{mat_prod} Let $(M_i)_{i\geq 1}$ be i.i.d. random variables with law $\frac{1}{3} \delta_{M^{(1)}} + \frac{1}{3} \delta_{M^{(2)}} + \frac{1}{3} \delta_{M^{(3)}}$. Then the product $S_n:=M_n \cdots M_1$ converges a.s. as $n \rightarrow \infty$ to some random matrix $S$ whose three rows are identical. \end{pro}
\begin{proof} We write $S_n= \left( \begin{array}{c} L^{(1)}_n \\ L^{(2)}_n \\ L^{(3)}_n \end{array} \right) $. We wish to show that there exists $L_{\infty}$ such that a.s. for all $i \in \{1,2,3 \}$, $L^{(i)}_n \rightarrow L_{\infty}$.
\begin{lem}\label{subseq} Let $(n_k)$ be some sub-sequence of integers such that $L^{(i)}_{n_k} \rightarrow L^{(i)}$ a.s. for all $i \in \{1,2,3 \}$. Then $L^{(1)}=L^{(2)}=L^{(3)}$ a.s.. \end{lem}
\begin{proof} We proceed by contradiction. To simplify notation we assume that $L^{(i)}_n \rightarrow L^{(i)}$ a.s. for $i \in \{1,2,3 \}$. Write $A,B,C$ for the three points whose respective coordinates are $L^{(1)},L^{(2)},L^{(3)}$. Similarly write $A_n,B_n,C_n$ for the three points with respective coordinates $L^{(1)}_n,L^{(2)}_n,L^{(3)}_n$. We may assume that $\mathbb{P}(A \neq C) > 0$, and from now on work conditionally to this event.
Fix some $\varepsilon > 0$ such that $6 \varepsilon < d(A,C) $. There exists $N$ such that for any $n \geq N$, we have $d(X_n,X) < \varepsilon$ for $X \in \lbrace A,C \rbrace$. Thus by construction the balls $B(A_n, 2 \varepsilon)$ and $B(C_n, 2 \varepsilon)$ do not intersect, and $B(X, \varepsilon) \subseteq B(X_n, 2 \varepsilon)$ for $X \in \lbrace A,C \rbrace$. Define $Y_n := P_1 A_n + P_2 B_n + P_3 C_n$, where $P=(P_1,P_2,P_3)$ is a splitting ratio, independent from $(A_n, B_n, C_n)$. Then $d(Y_n,A_n) \geq 2\varepsilon$ or $d(Y_n,C_n) \geq 2\varepsilon$, so that $d(Y_n,A) \geq \varepsilon$ or $d(Y_n,C) \geq \varepsilon$. See Figure \ref{smallD} below for an illustration of this situation. Using the definition of the matrices $(M_i)$, together with the fact that each of the values $M^{(1)},M^{(2)},M^{(3)}$ is taken infinitely often a.s., we get that with probability equal to $1$ (still conditionally on the event $A \neq C$) there exists $n_0 \geq N$ such that one of the following occurs: \begin{enumerate} \item We have $d(Y_{n_0},A) \geq \varepsilon$ and $M_{n_0 + 1} = M^{(1)}$, so that $A_{n_0 +1} = Y_{n_0}$, which contradicts $d(A_{n_0 + 1},A) < \varepsilon$. \item We have $d(Y_{n_0},C) \geq \varepsilon$ and $M_{n_0 + 1} = M^{(3)}$, so that $C_{n_0 +1} = Y_{n_0}$, which contradicts $d(C_{n_0 + 1},C) < \varepsilon$. \end{enumerate} This completes the proof of the lemma.
\begin{figure}
\caption{An illustration of the proof of Lemma \ref{subseq}}
\label{smallD}
\end{figure}
\end{proof}
Let us now prove Proposition \ref{mat_prod}. Since the entries of $S_n$ all lie in $[0,1]$, a.s. there exists a subsequence along which $\big((L^{(1)}_n,L^{(2)}_n,L^{(3)}_n)\big)$ converges, and by Lemma \ref{subseq} its limit is of the form $(L,L,L)$. Write $M$ for the point in $\tilde{T}$ with (barycentric) coordinates $L$. Similarly, write $(M^{(1)}_n,M^{(2)}_n,M^{(3)}_n)$ for the points with respective coordinates $(L^{(1)}_n,L^{(2)}_n,L^{(3)}_n)$. Now a.s. for any $\varepsilon > 0$ there exists $N$ such that $d(M^{(i)}_N,M) < \varepsilon$ for all $i \in \{ 1,2,3 \}$. But for $n \geq N$ the points $M^{(i)}_n$ all lie in the filled triangle defined by $(M^{(1)}_N,M^{(2)}_N,M^{(3)}_N)$ by construction. It follows that for any $n \geq N$, we have $d(M^{(i)}_n,M) < \varepsilon$ for all $i \in \{ 1,2,3 \}$. This proves that a.s. \[ \left(M^{(1)}_n,M^{(2)}_n,M^{(3)}_n\right) \longrightarrow (M,M,M) \quad \mbox{as } n \rightarrow \infty, \] which is the desired result. \end{proof}
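Proposition \ref{mat_prod} is easy to probe numerically. The following sketch (our code, not from the paper; the uniform law on the simplex stands in for a symmetric splitting law $\nu$) multiplies random factors $M^{(i)}$ on the left and measures how far the rows of $S_n$ are from coinciding.

```python
import random

def split_matrix(i, P):
    """M^(i): the 3x3 identity matrix with row i replaced by the
    splitting triplet P = (P_1, P_2, P_3)."""
    M = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
    M[i] = list(P)
    return M

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def sample_P():
    # spacings of two uniform points: the uniform law on the simplex,
    # a convenient symmetric choice of nu (our assumption, not the paper's)
    x, y = sorted((random.random(), random.random()))
    return (x, y - x, 1.0 - y)

random.seed(0)
S = [[1.0 if r == c else 0.0 for c in range(3)] for r in range(3)]
for _ in range(200):   # S_n = M_n ... M_1: new factors multiply on the left
    S = matmul(split_matrix(random.randrange(3), sample_P()), S)

row_spread = max(abs(S[r][c] - S[0][c]) for r in range(3) for c in range(3))
```

After 200 factors the three rows agree to within numerical precision (`row_spread` is negligible), and each row remains a probability vector, as each $M^{(i)}$ is row-stochastic.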
\subsubsection{Proof of Theorem \ref{UOM}.(2)}
The idea of the proof is as follows. Consider a uniform ternary tree $t_n \in \mathcal{T}_n$ and choose two independent nodes $u^{(1)}_n,u^{(2)}_n$ uniformly at random in $t_n$. Then their greatest common ancestor $v_n:=u^{(1)}_n \wedge u^{(2)}_n$ is at a height of order $n^{\frac{1}{2}}$. This means that the corresponding two vertices $U^{(1)}_n,U^{(2)}_n$ lie in a common small triangle, which suggests that they are asymptotically close to each other. This is made precise in the following lemma.
\begin{lem}\label{2points} Keeping the same notation as in the statement of Theorem \ref{UOM}, conditionally on $m_n$, choose two vertices $U^{(1)}_n,U^{(2)}_n \in \mathcal{V}(m_n)$ independently, uniformly at random. Then the following convergence holds in probability:
\[ \| U^{(2)}_n - U^{(1)}_n \| \,{\buildrel \p \over \longrightarrow}\, 0, \quad \mbox{as } n \rightarrow \infty.\] \end{lem}
\begin{proof}
As before, let $U'^{(1)}_n$ (resp. $U'^{(2)}_n$) be the node corresponding to $U^{(1)}_n$ (resp. $U^{(2)}_n$) in the tree $t\bullet_n$, via the bijection $\Psi_n$ established in Theorem \ref{bij_drawing_markedtree}. Write $V_n := U'^{(1)}_n \wedge U'^{(2)}_n$ for the greatest common ancestor of these two nodes. It is clear that $ \| U^{(2)}_n - U^{(1)}_n \| \leq \mathrm{diam}(\tilde{T}(V_n))$, where $\mathrm{diam}(S)$ is the diameter of a set $S$. Moreover, we know that for any $A>0$, $\mathbb{P}( \vert V_n \vert \leq A )$ tends to zero as $n$ goes to infinity, i.e. the height of $V_n$ tends to infinity in probability. Using this, and Fact \ref{unif_point}, it is sufficient to show the following.
\begin{lem}\label{small_diam} Let $(u_k)_{k \geq 1}$ be a sequence of i.i.d. random variables, uniform on $\lbrace 1,2,3 \rbrace$. Write $W_n := u_1 \cdots u_n \in \mathcal{T}^{\infty}$. Then the following convergence holds in probability: \[ \mathrm{diam}(\tilde{T}(W_n)) \,{\buildrel \p \over \longrightarrow}\, 0 \quad \mbox{as } n \rightarrow \infty.\] \end{lem}
In fact, the convergence holds a.s.. Note that the sequence of triangles $ \big( \tilde{T}(W_n) \big)_n $ is nonincreasing for inclusion, therefore $ \big( \mbox{diam}(\tilde{T}(W_n)) \big)_n$ is nonincreasing, and so converges a.s. to some limit $l \geq 0$. Now take some subsequence $(n_k)$ such that the triangle $ \tilde{T}(W_{n_k}) $ converges to some limit triangle $\tilde{T_0} = (A_0,B_0,C_0)$\footnote{When we say ``the triangle converges'' here, we mean that the triplet of points of the triangle converges.}. We can then use the same proof as for Lemma \ref{subseq} to show that $A_0=B_0=C_0$ a.s., and hence $l=0$ a.s., as desired. \end{proof}
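Lemma \ref{small_diam} can be watched at work directly in the plane. The sketch below (our code) follows the triangle of $W_n$ when every splitting point is placed at the centre of gravity, one admissible choice of splitting ratio, and records the diameters, which are nonincreasing and tend to $0$.

```python
import math
import random

def diameter(tri):
    return max(math.dist(p, q) for p in tri for q in tri)

def child_triangle(tri, P, i):
    """Triangle of the i-th child: corner i is replaced by the point
    with barycentric coordinates P in the current triangle."""
    new_pt = tuple(sum(P[k] * tri[k][c] for k in range(3)) for c in range(2))
    child = list(tri)
    child[i] = new_pt
    return child

random.seed(1)
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3.0) / 2.0)]  # equilateral
diams = [diameter(tri)]
for _ in range(100):                 # uniform turns u_k in {1, 2, 3}
    tri = child_triangle(tri, (1 / 3, 1 / 3, 1 / 3), random.randrange(3))
    diams.append(diameter(tri))
```

Since each child triangle is contained in its parent, the sequence of diameters is monotone, and it decays quickly once all three turn directions have occurred.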
We now use Lemma \ref{2points} to prove Theorem \ref{UOM}.(2). We denote, for any measure $\mu$ on the triangle $\tilde{T}$ and any measurable function $f$ on $\tilde{T}$, \[ \left\langle f, \mu \right\rangle := \int_{\tilde{T}} f \, d\mu .\]
We show that for any real-valued function $f$ continuous on $\tilde{T}$, $ \left\langle f,\mu_n \right\rangle \, \,{\buildrel (d) \over \longrightarrow}\, \, \left\langle f,\delta_{U_{\infty}}\right\rangle = f(U_{\infty})$. It suffices to show that \[ \Big( \left\langle f,\mu_n\right\rangle \, , f(U_n) \Big) \,{\buildrel (d) \over \longrightarrow}\, \Big(f(U_{\infty}),f(U_{\infty})\Big), \] where $U_n$ is as in the statement of Theorem \ref{UOM}. Since point $(1)$ of Theorem \ref{UOM} implies that $f(U_n) \,{\buildrel (d) \over \longrightarrow}\, f(U_{\infty})$, it suffices to show that \[ \left\langle f,\mu_n\right\rangle \, - \, f(U_n) \,{\buildrel (d) \over \longrightarrow}\, 0.\] Now $\mathbb{E} ( \left\langle f,\mu_n\right\rangle ) = \mathbb{E} \big( \frac{1}{n} \sum_{x \in \mathcal{V}(m_n)} f(x) \big) = \mathbb{E} \big(f(U_n)\big)$, thus it is sufficient to show that \[{\rm Var}\, \big( \left\langle f,\mu_n\right\rangle \, - \, f(U_n) \big) \longrightarrow 0. \] Let $U^{(1)}_n,U^{(2)}_n$ be as in the statement of Lemma \ref{2points}. Conditionally on $m_n$, the vertex $U_n$ is uniform on $\mathcal{V}(m_n)$ while $U^{(1)}_n,U^{(2)}_n$ are independent and uniform on $\mathcal{V}(m_n)$, so that $\mathbb{E}\big( \left\langle f,\mu_n\right\rangle f(U_n) \big) = \mathbb{E}\big( \left\langle f,\mu_n\right\rangle^2 \big) = \mathbb{E}\big(f(U^{(1)}_n)f(U^{(2)}_n)\big)$. We therefore have \[ {\rm Var}\, \big( \left\langle f,\mu_n\right\rangle \, - \, f(U_n) \big) = \mathbb{E}\big( ( \left\langle f,\mu_n\right\rangle \, - f(U_n) )^2 \big) = \mathbb{E}\big(f\left(U^{(1)}_n\right)^2 - f\left(U^{(1)}_n\right)f\left(U^{(2)}_n\right)\big), \] so that
\[ {\rm Var}\, \big( \left\langle f,\mu_n\right\rangle \, - \, f(U_n) \big) \leq \| f \| _{\infty} \mathbb{E}\big( \vert f\left(U^{(1)}_n\right) - f\left(U^{(2)}_n \right) \vert \big) . \]
Since $\tilde{T}$ is compact and $f$ continuous, $f$ is uniformly continuous. Fix some $\varepsilon > 0$. There exists $\eta > 0$ such that for any $x,y \in \tilde{T}$ with $\| x-y \| \leq \eta$, we have $\vert f(x) - f(y) \vert \leq \varepsilon$. Then
\[ {\rm Var}\, \big( \left\langle f,\mu_n\right\rangle \, - \, f(U_n) \big) \leq \| f \| _{\infty}\Big(\varepsilon + 2 \| f \| _{\infty} \mathbb{P}\big(\| U^{(2)}_n - U^{(1)}_n \| > \eta \big) \Big). \] Using Lemma \ref{2points} we get that
\[ \limsup_{n \rightarrow \infty} {\rm Var}\, \big( \left\langle f,\mu_n\right\rangle \, - \, f(U_n) \big) \leq \varepsilon \| f \| _{\infty}, \] and since this holds for any $\varepsilon > 0$, the desired result follows. This completes the proof of Theorem \ref{UOM}. \qed
It may also be interesting to obtain information on the law of the limit point $U_{\infty}$, since this point is where the occupation measure concentrates asymptotically. Proposition \ref{mat_prod} tells us that the coordinates $C_{\infty}$ of the limit point $U_{\infty}$ follow the law of one row of the matrix $S$, and satisfy the following equation in distribution: \beq\label{dist_eq} C_{\infty} \,{\buildrel (d) \over =}\, C_{\infty}.M_{\nu} \mbox{, where $M_{\nu}$ is independent of $C_{\infty}$ and has distribution }\frac{1}{3} \delta_{M_{\nu}^{(1)}} + \frac{1}{3} \delta_{M_{\nu}^{(2)}} + \frac{1}{3} \delta_{M_{\nu}^{(3)}}, \eq and the $M_{\nu}^{(i)}$ are defined as in \eref{defM}. This can be interpreted as follows. Split the original triangle $T$ in three using the splitting law $\nu$, and pick one of the three resulting triangles uniformly at random. Choosing a point according to $C_{\infty}$ in that triangle then has the same law as choosing a point according to $C_{\infty}$ in $T$. The distribution of $C_{\infty}$ is thus the invariant distribution of a (very) simple Markov chain.
\begin{pro} Let $\mathcal{M}_2(C)$ be the set of symmetric (probability) laws on $V_2^*$. For any splitting law $\nu \in \mathcal{M}_S(V_2^*)$, the distributional equation $C_{\infty} \,{\buildrel (d) \over =}\, C_{\infty}.M_{\nu}$, with $M_{\nu}$ independent of $C_{\infty}$, has a unique solution $C_{\infty} \in \mathcal{M}_2(C)$. \end{pro}
This tells us that Equation \eref{dist_eq} characterises the distribution of the limit point.
\begin{proof}
We endow $\mathcal{M}_2(C)$ with the Wasserstein distance of order $2$, \[ W_2(\mu,\mu') := \inf \Big\{ \mathbb{E}\big( \vert X - X' \vert^2 \big)^{1/2};\ \mathbf{L}(X)=\mu, \ \mathbf{L}(X')=\mu' \Big\}, \] where the infimum runs over all couplings $(X,X')$ of $\mu$ and $\mu'$, $\vert \cdot \vert$ denotes the Euclidean norm, and $\mathbf{L}(Y)$ denotes the law of a random variable $Y$. Since the laws in question are supported by a compact set, $(\mathcal{M}_2(C),W_2)$ is a complete metric space, so by Banach's fixed point theorem it is sufficient to show that the map $\mathbf{L}(X) \mapsto \mathbf{L}(X.M_{\nu})$, where $M_{\nu}$ is taken independent of $X$, is a contraction. (One checks, using the symmetry of $\nu$ and the uniform choice of the index $i$, that this map indeed sends $\mathcal{M}_2(C)$ to itself.)

Take $\mu, \mu' \in \mathcal{M}_2(C)$ and let $(X,X')$ be a coupling of $\mu$ and $\mu'$, independent of $M_{\nu}$. Write $D := X - X'$; since the coordinates of both $X$ and $X'$ sum to $1$, we have $D_1 + D_2 + D_3 = 0$. As $(X M_{\nu}, X' M_{\nu})$ is a coupling of the two image laws, it suffices to bound \[ \mathbb{E}\big( \vert X M_{\nu} - X' M_{\nu} \vert^2 \big) = \frac{1}{3} \mathbb{E} \Big( \sum_{i=1}^3 D M^{(i)}(M^{(i)}) ^T D^T \Big). \] Now a computation gives us that \[ M^{(1)}(M^{(1)})^T = \left( \begin{array}{ccc} \vert P \vert ^2 & P_2 & P_3 \\ P_2 & 1 & 0 \\ P_3 & 0 & 1 \end{array} \right), \] where $\vert P \vert ^2 := P_1^2 + P_2 ^2 + P_3^2 $. Thus \[ DM^{(1)}(M^{(1)})^TD^T = D_1^2 \vert P \vert ^2 + D_2^2 + D_3^2 + 2(D_1D_2P_2 + D_1D_3P_3).\] Since $P$ is independent of $D$, with $\mathbb{E}(P_2) = \mathbb{E}(P_3) = \frac{1}{3}$ by the symmetry of $\nu$, and since $D_1(D_2 + D_3) = -D_1^2$, taking expectations yields \[\mathbb{E}\Big(DM^{(1)}(M^{(1)})^TD^T\Big) = \mathbb{E}\big( \vert D \vert^2 \big) + \Big( \mathbb{E}\big( \vert P \vert ^2 \big) - \frac{5}{3} \Big)\, \mathbb{E}\big(D_1^2\big). \] The analogous identities hold for $i=2,3$, and summing the three of them gives \beq\label{norm}
\mathbb{E}\big( \vert X M_{\nu} - X' M_{\nu} \vert^2 \big) = \frac{1}{3}\left( \mathbb{E} \big(\vert P \vert ^2 \big) + \frac{4}{3} \right) \mathbb{E}\big( \vert D \vert^2 \big). \eq Now $\mathbb{E}(P_1^2) \leq \mathbb{E}(P_1) = \frac{1}{3} $ since $P_1 \leq 1$ a.s., and moreover this inequality is strict since $P_1 \in \{0,1\}$ a.s. is not allowed. Write $a=3\mathbb{E}(P_1^2)=\mathbb{E} \big(\vert P \vert ^2 \big)<1$. Taking the coupling $(X,X')$ optimal in \eref{norm}, we obtain
\[ W_2\big( \mathbf{L}(X M_{\nu}), \mathbf{L}(X' M_{\nu}) \big)^2 \leq \frac{1}{3}\Big( a + \frac{4}{3} \Big) W_2(\mu,\mu')^2 \leq \frac{7}{9}\, W_2(\mu,\mu')^2, \] and since $\frac{7}{9} < 1$ this shows that the map $\mathbf{L}(X) \mapsto \mathbf{L}(X.M_{\nu})$ is indeed a contraction. \end{proof}
\paragraph{Special case:} when $P = (\frac{1}{3},\frac{1}{3},\frac{1}{3})$ a.s., the law of $U_{\infty}$ is the uniform distribution on $\tilde{T}$.
\noindent Indeed, putting a point at the centre of gravity of a triangle, choosing one of the three resulting triangles uniformly at random and placing a point uniformly in that triangle, is the same as placing a point uniformly in the original triangle.
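This special case can also be checked by simulation. Iterating the chain of \eref{dist_eq} with $\nu = \delta_{\left(\frac13,\frac13,\frac13\right)}$, the empirical second moment of a coordinate should approach $\frac{1}{6}$, the value of $\mathbb{E}(X_1^2)$ for the barycentric coordinates of a uniform point in a triangle (a Dirichlet$(1,1,1)$ law). A Monte Carlo sketch (our code; the `step` function implements $X \mapsto X.M^{(i)}$ directly):

```python
import random

def step(X, P, i):
    """One step X <- X . M^(i): coordinate j gains X_i P_j for j != i,
    while coordinate i becomes X_i P_i."""
    Y = [X[j] + X[i] * P[j] for j in range(3)]
    Y[i] = X[i] * P[i]
    return Y

random.seed(2)
P0 = (1 / 3, 1 / 3, 1 / 3)          # the splitting law nu^0
N, STEPS = 4000, 60
m2 = 0.0
for _ in range(N):
    X = [1.0, 0.0, 0.0]
    for _ in range(STEPS):
        X = step(X, P0, random.randrange(3))
    m2 += X[0] ** 2
m2 /= N
# compare m2 with 1/6, the uniform (Dirichlet(1,1,1)) value of E(X_1^2)
```

The starting point is irrelevant: multiplying any probability vector by the limit matrix $S$ of Proposition \ref{mat_prod} returns its common row.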
\subsection{The drawing of the triangulation}\label{UD}
The previous results give us information on the asymptotic behaviour of the occupation measure, and thus tell us where the vertices are located asymptotically. In this section we obtain information on the behaviour of the drawings themselves, that is, on the behaviour of compact triangulations under $\mathbb{U}^{\nu}_n$. We immediately state the main result.
\begin{thm}\label{conv_draw} Let $\nu \in \mathcal{M}_S(V_2^*)$ be a splitting law and let $(m_n)_{n \geq 0}$ be a sequence of compact triangulations under the distribution $\mathbb{U}^{\nu}_n$. There exists a random compact space $m_{\infty}$ such that \[ m_n \longrightarrow m_{\infty}, \quad \mbox{as } n \rightarrow \infty \] where the convergence holds in distribution in the set of compact subspaces of the filled triangle $\tilde{T}$ equipped with the Hausdorff distance. \end{thm}
The limit space $m_{\infty}$ is characterised as follows. Start with the initial triangle $T$, split in three by adding a point according to $\nu$. Pick one of these three triangles uniformly at random and call it $T'$. For each of the other two triangles, consider independently a random critical Galton-Watson ternary tree (this object is defined later in the paper) and draw the corresponding compact triangulation (each vertex insertion according to $\nu$, independently from all previous insertions and from the trees). Iterate this construction ad infinitum, replacing $T$ with $T'$, and take the closure of the space obtained (so as to have a compact space). Figure \ref{simu2} illustrates this convergence, showing a simulation of the map $m_n$ with $n \sim 10\,000$. The fact that only a handful of ``macroscopic'' triangles are visible suggests the convergence of the drawings.
To show Theorem \ref{conv_draw}, we restrict ourselves to the special case where the splitting law \beq\label{nu0} \nu = \nu^0 := \delta_{\left(\frac13,\frac13,\frac13\right)}, \eq that is, a splitting ratio with law $\nu^0$ takes value $\left(\frac13,\frac13,\frac13\right)$ a.s.. There is no additional difficulty in the general case, but this special case simplifies certain statements such as Theorem \ref{cont_thm}, as well as certain formulae such as \eref{dist_bar}. We will be careful to always specify how we would proceed in the general case. Recall the definitions of the bijections $\Phi,\Psi$ in Remark \ref{def_Phi} and Theorem \ref{bij_drawing_markedtree}. We define a map \beq\label{def_psi0} \begin{array}{c} \Psi^0 : \mathcal{T} \longrightarrow E \\ \hspace{5.8cm} t \longmapsto \overbar{ \Psi \circ \Phi \left(t,\left(\left(\frac13,\frac13,\frac13\right),u \in t^0 \right)\right)}, \end{array} \eq where $E$ is the set of compact subspaces of $\tilde{T}$, and $\bar{S}$ denotes the closure of a subspace $S \subseteq \tilde{T}$. In words, we take a tree $t$, make it a fragmentation labelled tree by adding the labels $\left(\frac13,\frac13,\frac13\right)$ at each internal vertex, and map it to its corresponding compact triangulation via the bijections established in Section \ref{sec_def} (taking the closure if the tree is infinite, so as to always work with compact spaces). Our main tool is the local convergence of Galton-Watson trees, and our main reference is \cite{Gil}.
\begin{figure}
\caption{A simulation of a map $m_n$ under the distribution $\mathbb{U}_n^{\nu^0}$, with $n \sim 10\,000$.}
\label{simu2}
\end{figure}
\subsubsection*{Galton-Watson trees and local convergence}
\begin{de} A $\zeta$-Galton-Watson (or GW($\zeta$)-) tree is a random variable $ \tau \in \mathcal{U}$ such that \begin{enumerate} \item $ k_{\emptyset}(\tau) $ has law $ \zeta $, i.e. $ \mathbb{P} ( k_{\emptyset}(\tau) = k ) = \zeta (k) $ for any $k \in \mathbb{N}$. \item For any $k$ such that $ \zeta(k) > 0 $, under the conditional probability $ \mathbb{P}(. \, \big\vert k_{\emptyset}(\tau) = k ) $, the trees $ \theta_{(1)}( \tau ),\theta_{(2)}( \tau ),\cdots,\theta_{(k)}( \tau ) $ are i.i.d. and have the same law as $\tau$ under $\mathbb{P}$. \end{enumerate} \end{de}
\begin{pro}\label{unif-GW} Let $\xi$ have law $\frac{2}{3} \delta_0 + \frac{1}{3} \delta_3$, and $\tau$ be a GW($\xi$)-tree. Then a.s. $\tau \in \mathcal{T}$, and for any $n \geq 0$, conditionally on the event $\tau \in \mathcal{T}_n$, $\tau$ is uniform in $\mathcal{T}_n$. \end{pro}
\begin{proof} First, $\tau$ is a.s. a ternary tree since by definition of $\xi$ every node has three children or none. Now for any $n\geq0$ and any $t \in \mathcal{T}_n$, \[ \mathbb{P}(\tau = t) = \left( \frac{1}{3} \right)^n \left( \frac{2}{3} \right)^{2n+1} ,\] since any $t \in \mathcal{T}_n$ has $n$ internal nodes, which each have three children, and $2n+1$ leaves. Thus all trees of the same size have the same probability, which proves the claim. \end{proof}
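The weight computation can be cross-checked in exact arithmetic (our code). The number $c_n$ of ternary trees with $n$ internal nodes satisfies $c_0 = 1$ and $c_n = \sum_{i+j+k=n-1} c_i c_j c_k$, with closed form $c_n = \frac{1}{2n+1}\binom{3n}{n}$, and since the offspring mean $3 \cdot \frac13 = 1$ makes $\tau$ critical, the masses $c_n (\frac13)^n (\frac23)^{2n+1}$ should sum to $1$ over $n$ (the tail of the partial sums is of order $N^{-1/2}$).

```python
from fractions import Fraction
from math import comb

N = 80
c = [1] + [0] * N
for n in range(1, N + 1):
    # c_n = sum over subtree sizes (i, j, k) with i + j + k = n - 1
    c[n] = sum(c[i] * c[j] * c[n - 1 - i - j]
               for i in range(n) for j in range(n - i))

# closed form: c_n = binom(3n, n) / (2n + 1)
ok_closed_form = all(c[n] * (2 * n + 1) == comb(3 * n, n)
                     for n in range(N + 1))

# P(|tau| = n) = c_n (1/3)^n (2/3)^(2n+1); partial sums increase to 1
p = [Fraction(c[n]) * Fraction(1, 3) ** n * Fraction(2, 3) ** (2 * n + 1)
     for n in range(N + 1)]
partial_sum = sum(p)
```

For instance $p_0 = \frac{2}{3}$ (the single-leaf tree) and $p_1 = \frac{8}{81}$.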
We can therefore view a uniform ternary tree in $\mathcal{T}_n$ as a GW($\xi$)-tree conditioned to have size $n$. We now define the topology of local convergence on trees.
\begin{de} Let $t \in \mathcal{U}$ be a planar tree, and $r > 0$ some real number. Then $B_r(t)$ is the subtree of $t$ whose vertices all have height at most $r$, that is \[ B_r(t) = \lbrace u \in t; \ \vert u \vert \leq r \rbrace.\] \end{de}
\begin{de}\label{loc_conv_def} Let $t,t' \in \mathcal{U}$ be two planar trees. Define the distance $\tilde{d}$ between $t$ and $t'$ by \[ \tilde{d}(t,t') = \inf \left\lbrace \frac{1}{r+1}; \ B_r(t) = B_r(t'), \, r \in \mathbb{R}_+ \right\rbrace.\] \end{de}
One checks that $\tilde{d}$ is indeed a distance. The following proposition is a consequence of Proposition \ref{unif-GW} and Theorem III.3.1 in \cite{Gil}.
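For finite trees the distance $\tilde{d}$ is simple to compute, since $B_r(t)$ only changes at integer values of $r$. A sketch (our code, with trees encoded as sets of tuples over $\lbrace 1,2,3 \rbrace$, the root being the empty tuple):

```python
def ball(t, r):
    """B_r(t): the nodes of t at height at most r."""
    return {u for u in t if len(u) <= r}

def tree_dist(t, s):
    """d~(t, s) = inf{ 1/(r+1) : B_r(t) = B_r(s), r >= 0 }."""
    if t == s:
        return 0.0
    k = 0
    while ball(t, k) == ball(s, k):
        k += 1
    # the balls agree exactly for real r < k, so the infimum is 1/(k+1)
    return 1.0 / (k + 1)

root = {()}                                  # the one-node tree
t1 = {(), (1,), (2,), (3,)}                  # root with three leaves
t2 = {(), (1,), (2,), (3,), (2, 1), (2, 2), (2, 3)}
```

Here `tree_dist(root, t1)` is $\frac12$ (the balls of radius $0$ agree, those of radius $1$ do not), and `tree_dist(t1, t2)` is $\frac13$.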
\begin{pro}\label{cv_loc} Let $t_n$ be a uniform tree in $\mathcal{T}_n$. Then there exists a random variable $t_{\infty} \in \mathcal{T}$ such that \[ t_n \longrightarrow t_{\infty}, \quad \mbox{as } n \rightarrow \infty, \] where the convergence holds in distribution in $\mathcal{T}$ equipped with the distance $\tilde{d}$. Moreover, the following properties hold a.s.: \begin{enumerate} \item $t_{\infty}$ has a unique infinite branch, written $t_{\infty}^0 = \emptyset,t_{\infty}^1,t_{\infty}^2,\cdots$. \item For any $k$, conditionally to $t_{\infty}^k$, the law of $t_{\infty}^{k+1}$ is $\frac{1}{3} (\delta_{t_{\infty}^k(1)} + \delta_{t_{\infty}^k(2)} + \delta_{t_{\infty}^k(3)} )$. That is, the infinite branch is an infinite sequence of i.i.d. uniform left, middle, and right turns. \item For any $u$ on the infinite branch, the two finite subtrees among $\theta_{u(1)}(t_{\infty}), \theta_{u(2)}(t_{\infty}), \theta_{u(3)}(t_{\infty})$ are independent GW($\xi$) trees, where $\xi$ has law $\frac{2}{3} \delta_0 + \frac{1}{3} \delta_3$. \end{enumerate} \end{pro}
Now to prove Theorem \ref{conv_draw} in the special case \eref{nu0}, it is sufficient to have some continuity of the function $\Psi^0$ defined by \eref{def_psi0}. In fact, this function is not continuous on $\mathcal{T}$. However, Theorem 25.7 in \cite{Bil} says that to transport convergence in distribution via a function $f$, it is sufficient that $f$ be continuous on the support of the limit in distribution. Therefore, given the properties of $t_{\infty}$ listed in Proposition \ref{cv_loc}, the following suffices.
\begin{thm}\label{cont_thm} Let $\mathcal{T}$ be equipped with the distance of local convergence $\tilde{d}$. Let $t^0 \in \mathcal{T}$ be a tree with exactly one infinite branch, and assume that along the infinite branch there are an infinity of left, middle, and right turns, i.e. if $(u_0,u_1,\cdots)$ is the infinite branch, then \beq\label{cond_tree} \vert \lbrace i; \ u_i = j \rbrace \vert = \infty \mbox{ for any }j=1,2,3. \eq Then the map $\Psi^0 : \mathcal{T} \longrightarrow E$ defined by \eref{def_psi0} is continuous at $t^0$, where $E$ is equipped with the Hausdorff distance. \end{thm}
\begin{rem} In the general case where $\nu \neq \nu^0$ this theorem should be re-stated as a continuity theorem of a function which maps the set of distributions on $\mathcal{FT}$ to the set of distributions on $E$, both sets equipped with the topology of weak convergence. The path of the proof remains unchanged, though formula \eref{dist_bar} is more complicated as is therefore the proof of Lemma \ref{small_infinite_branch}. \end{rem}
\begin{proof} Let $t^0$ be as in the statement of Theorem \ref{cont_thm} and write $(u_0,u_1, \cdots) $ for its infinite branch. Let $(t_n)$ be a sequence of trees in $\mathcal{T}$ such that $t_n \rightarrow t^0$ as $n$ tends to infinity for the distance $\tilde{d}$. Define $m_n = \Psi^0(t_n)$ and $m^0 = \Psi^0(t^0)$. We wish to show that $m_n \rightarrow m^0$ for the Hausdorff distance. Recall from Definition \ref{def_triangles} that for a node $u \in t$, we write $\tilde{T}(u)$ for the corresponding (filled) triangle in the compact triangulation.
\begin{lem}\label{small_infinite_branch} Let $(u_0,u_1,\cdots)$ be the infinite branch of $t^0$ satisfying condition \eref{cond_tree}. Then \[ \mathrm{diam}\left(\tilde{T}((u_0,u_1, \cdots ,u_k))\right) \longrightarrow 0, \quad \mbox{as } k \rightarrow \infty. \] That is, the diameter of the triangle corresponding to the $k$-th node of the infinite branch of $t^0$ tends to zero as $k$ tends to infinity. \end{lem}
\begin{proof} Consider Figure \ref{dist_bar1}, where $M$ is the centre of gravity of a triangle with side lengths $a$, $b$ and $c$, and $d$ denotes the distance from $M$ to the vertex opposite the side of length $b$. Then a computation (based on the median length formula) gives us \beq\label{dist_bar} d = \frac{1}{3} \sqrt{ 2a^2 + 2c^2 - b^2 } \leq \frac{2}{3} \max\{a,b,c\}. \eq
\begin{figure}\caption{A triangle with side lengths $a$, $b$, $c$, its centre of gravity $M$, and the distance $d$ from $M$ to the vertex opposite the side of length $b$}\label{dist_bar1}
\end{figure}
It follows that if $k_1 := \min \{ k \geq 0; \ \forall j \in \{1,2,3\}, \, \vert \lbrace i \leq k; \ u_i=j \rbrace \vert \geq 1 \}$ (that is, there is at least one left, one middle and one right turn in the $k_1$ first steps), then $ \mbox{diam}\left(\tilde{T}((u_0, \cdots, u_{k_1}))\right) \leq \frac{2}{3} \mbox{diam}(\tilde{T}) $. Define inductively, for $l \geq 1$, \[ k_{l+1} := \min \{ k \geq k_l; \, \forall j \in \{1,2,3\}, \, \vert \lbrace k_l < i \leq k; \ u_i=j \rbrace \vert \geq 1 \}.\] Condition \eref{cond_tree} implies that for any $l$, $k_l$ is finite. Moreover, we have: \[ \mbox{diam}\left(\tilde{T}((u_0, \cdots, u_{k_l}))\right) \leq \left( \frac{2}{3} \right)^l \mbox{diam}(\tilde{T}), \] and taking $l \rightarrow \infty$ completes the proof of Lemma \ref{small_infinite_branch}. \end{proof}
Now to prove Theorem \ref{cont_thm}, fix some $\varepsilon > 0$, and choose $k$ such that $\mbox{diam}\left(\tilde{T}((u_0, \cdots, u_k))\right) \leq \varepsilon$. Write $u^k=(u_0, \cdots, u_k)$. By the assumptions made on $t^0$ and by definition of the distance $\tilde{d}$, for sufficiently large $n$ the trees $t_n$ and $t^0$ coincide except perhaps on the subtrees $\theta_{u^k}(t_n)$ and $\theta_{u^k}(t^0)$. But this immediately implies that $d_H(m_n,m^0) \leq \mbox{diam}\big(\tilde{T}(u^k)\big) \leq \varepsilon$, and the theorem is proved. \end{proof}
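Formula \eref{dist_bar}, used in the proof of Lemma \ref{small_infinite_branch} above, can be checked numerically; in the sketch below (our code) $b$ is taken as the side opposite the vertex whose distance to the centre of gravity is computed.

```python
import math
import random

def centroid_dist(A, B, C):
    """Distance from vertex B to the centre of gravity M, together with
    the value (1/3) sqrt(2a^2 + 2c^2 - b^2), where a = |BC|, b = |CA|,
    c = |AB|, and the longest side length."""
    a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)
    M = tuple((A[k] + B[k] + C[k]) / 3.0 for k in range(2))
    lhs = math.dist(B, M)
    rhs = math.sqrt(2 * a * a + 2 * c * c - b * b) / 3.0
    return lhs, rhs, max(a, b, c)

random.seed(4)
A, B, C = [(random.random(), random.random()) for _ in range(3)]
lhs, rhs, longest = centroid_dist(A, B, C)
```

The two values agree, and the bound by $\frac{2}{3}$ of the longest side holds for any triangle, since $2a^2 + 2c^2 - b^2 \leq 4 \max\{a,b,c\}^2$.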
\section{The increasing case}\label{incr} In this section we study the asymptotic behaviour of $\mathbb{H}^{\nu}_n$. That is, at every step, we choose one of the faces of our triangulation uniformly at random and split it into three. We call this the \textit{increasing case}.
We will see that the asymptotic behaviour of the occupation measure is different from the uniform case. Intuitively this is because, in the uniform case, the distance between two vertices chosen at random tends to zero, since the height of their greatest common ancestor tends to infinity, whereas in the increasing case the height of the greatest common ancestor converges in law to a geometric distribution.
\subsection{The key ingredient: Poisson-Dirichlet fragmentation}\label{frag}
Here we give a construction of the increasing ternary tree as the underlying tree of a fragmentation tree. First, let us describe the deterministic fragmentation tree associated to a sequence of choices $\textbf{u}=(u_i)_{i \geq 1}$ with $u_i \in [0,1)$ for any $i$, and a sequence $\textbf{y}=(y^u)_{u \in \mathcal{T}^{\infty}}$ where for all $u \in \mathcal{T}^{\infty}$, $y^u=(y_1^u,y_2^u,y_3^u)\in V_2^*$.
With these sequences, we associate a sequence $\textbf{F}_n=F(n,\textbf{u},\textbf{y})$ of fragmentation trees with $2n+1$ leaves, each node being marked with a sub-interval of $[0,1)$, as follows.
\begin{itemize} \item At time 0, $F_0$ is the root tree $\lbrace \emptyset \rbrace$ marked by $I_{\emptyset} = [0,1)$. \item Assume that at time $k$ the tree $F_k$ is built, and that it is a ternary tree with $2k+1$ leaves, each node $u$ being marked by a semi-open interval $I_u = [a_u,b_u) \subseteq [0,1)$. Moreover, assume that the leaf intervals $(I_l \mbox{, $l$ a leaf of } F_k)$ form a partition of $[0,1)$. The tree $F_{k+1}$ is then built as follows. Denote by $\tilde{l}$ the (unique) leaf of $F_k$ such that $u_{k+1} \in I_{\tilde{l}}$. We give $\tilde{l}$ three children $\tilde{l}(1),\tilde{l}(2),\tilde{l}(3)$ and mark each of these with a sub-interval of $I_{\tilde{l}}$ whose lengths are prescribed by $y^{\tilde{l}}$. More precisely, if $I_{\tilde{l}} = [a,b)$ then we take $I_{\tilde{l}(1)} = [a,\ a + (b-a)y^{\tilde{l}}_1)$, $I_{\tilde{l}(2)} = [a + (b-a)y^{\tilde{l}}_1,\ a + (b-a)(y^{\tilde{l}}_1 + y^{\tilde{l}}_2))$ and $I_{\tilde{l}(3)} = [a + (b-a)(y^{\tilde{l}}_1 + y^{\tilde{l}}_2),\ b)$. \end{itemize}
Given a fragmentation tree $F$ we will write $\pi(F)$ for the underlying tree (that is, the fragmentation tree with marks removed).
\begin{de} The $2$-dimensional Dirichlet distribution with parameter $\alpha \in (0, +\infty)$, denoted $\mathrm{Dir}_{2}(\alpha)$, is the probability measure on $V_{2}$ with density \[ f_{\alpha,2}(x_1, x_2, x_3) := \frac{\Gamma(3 \alpha)}{\Gamma(\alpha)^3} \, x_1^{\alpha-1} x_2^{\alpha-1} x_3^{\alpha-1} \] with respect to the uniform measure on $V_{2}$. \end{de}
The following fundamental result is due to Albenque and Marckert \cite{AM}.
\begin{thm}\label{frag_tree} Let $\textbf{U}=(U_i)_{i \geq 1}$ be a sequence of i.i.d. random variables, uniform on $[0,1)$, and $\textbf{Y}=(Y^u)_{u \in \mathcal{T}^{\infty}}$ be a sequence of i.i.d. random variables with $\mathrm{Dir}_2(\frac{1}{2})$ distribution. Now let $\textbf{F}_n = F(n,\textbf{U},\textbf{Y})$ be the sequence of corresponding random fragmentation trees as described above. Then for any $n \geq 0$ the underlying ternary tree $\pi(\textbf{F}_n)$ follows the distribution of an increasing ternary tree on $\mathcal{T}_n$. \end{thm}
One particular consequence of this result is the following. Let $(t_n)$, with $t_n \in \mathcal{T}_n$, be a family of increasing ternary trees, and for $i \in \{1,2,3\}$ let $\mathcal{P}_i := \frac{1}{n} \sharp \lbrace u \in t_n^0; \ u=(i,u_2, \cdots, u_h) \rbrace$ be the proportion of internal nodes in the $i$-th subtree at the root of $t_n$. Then $(\mathcal{P}_1,\mathcal{P}_2,\mathcal{P}_3)$ converges in distribution to the $\mathrm{Dir}_2(\frac{1}{2})$ distribution.
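The fragmentation construction of this section is straightforward to implement. In the sketch below (our code), the $\mathrm{Dir}_2(\frac12)$ splitting triplets are sampled as normalised squared standard Gaussians, which gives the Dirichlet$(\frac12,\frac12,\frac12)$ law.

```python
import random

def dir_half():
    """A Dir_2(1/2) sample: normalised squared standard Gaussians."""
    g = [random.gauss(0.0, 1.0) ** 2 for _ in range(3)]
    s = sum(g)
    return (g[0] / s, g[1] / s, g[2] / s)

def fragmentation_tree(n, choices, sample_y):
    """Build F_n: a dict mapping each node (a tuple over {1,2,3}) to
    its interval [a, b), plus the current set of leaves."""
    F = {(): (0.0, 1.0)}
    leaves = {()}
    for k in range(n):
        u = choices[k]
        # the unique leaf whose interval contains u_{k+1}
        leaf = next(l for l in leaves if F[l][0] <= u < F[l][1])
        a, b = F[leaf]
        y1, y2, y3 = sample_y()
        cuts = (a, a + (b - a) * y1, a + (b - a) * (y1 + y2), b)
        for i in (1, 2, 3):
            F[leaf + (i,)] = (cuts[i - 1], cuts[i])
        leaves.remove(leaf)
        leaves.update(leaf + (i,) for i in (1, 2, 3))
    return F, leaves

random.seed(5)
n = 50
F, leaves = fragmentation_tree(n, [random.random() for _ in range(n)],
                               dir_half)
```

By construction $F_n$ has $2n+1$ leaves whose intervals partition $[0,1)$; the underlying tree $\pi(\textbf{F}_n)$ is read off by forgetting the marks.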
\subsection{Convergence of the occupation measure}
In this section we show that the occupation measure $\mu_n$ defined by \eref{def_OM} converges to a random measure $\mu$. Let $\nu$ be a splitting law and $(P(u), u \in \mathcal{T}^{\infty})$ a sequence of i.i.d. splitting ratios with distribution $\nu$. Recall the previous construction. Let $\textbf{U}=(U_i)_{i \geq 1}$ be a sequence of i.i.d. random variables, uniform on $[0,1)$, and $\textbf{Y}=(Y^u)_{u \in \mathcal{T}^{\infty}}$ be a sequence of i.i.d. random variables with $\mathrm{Dir}_2(\frac{1}{2})$ distribution. Let $\textbf{F}_n = F(n,\textbf{U},\textbf{Y})$ be the sequence of corresponding random fragmentation trees, and let $I^u$ be the interval marking the node $u$. If we write $t_n = \pi(\textbf{F}_n)$ for the underlying ternary tree, then $t_n$ is an increasing tree of size $n$ according to the previous theorem. Let $m_n := \Psi_n \circ \Phi_n \big((t_n,(P(u),u \in t_n^0))\big)$ be the corresponding compact triangulation. By definition, $m_n$ has distribution $\mathbb{H}^{\nu}_n$. Write $\mu_n$ for its occupation measure as defined by \eref{def_OM}. The remark at the end of Section \ref{frag} says that \beq\label{OM_trg_conv} \forall u \in \mathcal{T}^{\infty}, \ \mu_n\left(\tilde{T}(u)\right) \rightarrow \vert I^u \vert. \eq This is in fact just the law of large numbers: the quantity $\mu_n(\tilde{T}(u))$ is the proportion of uniform random variables on $[0,1)$ which fall in the sub-interval $I^u$. As such, with this construction the convergence in \eref{OM_trg_conv} is a.s..
\begin{de} Let $u_1,u_2, \cdots ,u_k$ be $k$ nodes of $\mathcal{T}^{\infty}$. We say that $u_1, \cdots ,u_k$ are \textbf{covering} if the following two conditions hold: \begin{enumerate} \item We have $ \bigcup_i \tilde{T}(u_i) = \tilde{T}$. \item For any $i \neq j$, $\mathrm{Int}\left(\tilde{T}(u_i)\right) \cap \mathrm{Int}\left(\tilde{T}(u_j)\right) = \emptyset$, where $\mathrm{Int}(S)$ denotes the interior of a set $S$. \end{enumerate} \end{de}
Another important property of the occupation measure $\mu_n$ is that, asymptotically, it has weight zero along the edges of the triangles $\tilde{T}(u)$. Indeed, the vertices are always added to the interior of the triangles, so that \beq\label{border_cond} \forall u \in \mathcal{T}^{\infty}, \ \mu_n\left( \partial\left( \tilde{T}(u)\right)\right) \rightarrow 0, \eq where $\partial S$ represents the boundary of a set $S$. Once again, in our construction this convergence holds a.s.
We now state the main result of this section.
\begin{thm}\label{well_def} Let $(l^u)_{u \in \mathcal{T}^{\infty}}$ be a sequence of positive random variables such that: \beq\label{covering_prop} \mbox{for any nodes } u_1,\cdots ,u_k \in \mathcal{T}^{\infty}, \mbox{ if } u_1\cdots ,u_k \mbox{ are covering, then a.s. } l^{u_1} + \cdots + l^{u_k} = 1. \eq Then there exists an a.s. unique random measure $\mu$ on the triangle $\tilde{T}$ such that the following hold a.s.: \begin{enumerate} \item For any $u \in \mathcal{T}^{\infty}$, $\mu\left(\tilde{T}(u)\right) = l^u$. \item For any $u \in \mathcal{T}^{\infty}$, $\mu\left(\partial \left( \tilde{T}(u)\right)\right) = 0$. \end{enumerate} \end{thm}
Since the random variables $\vert I^u \vert$ satisfy condition \eref{covering_prop}, using the convergences of \eref{OM_trg_conv} and \eref{border_cond} we obtain the following consequence.
\begin{thm}\label{IOM} Let $l^u := \vert I^u \vert$ for all nodes $u \in \mathcal{T}^{\infty}$, and let $\mu$ be the unique measure of Theorem \ref{well_def} for this choice of $(l^u)$. Then the following convergence \[ \mu_n \,{\buildrel (d) \over \longrightarrow}\, \mu, \quad \mbox{as } n \rightarrow \infty \] holds in distribution in the set of probability measures on $\tilde{T}$ equipped with the topology of weak convergence. \end{thm}
\begin{rem} Theorem \ref{well_def} tells us that the information on the triangles $\tilde{T}(u)$ is sufficient to characterise the measure $\mu$. It is crucial that there is no mass on the edges of the triangles here. Indeed, if there were some mass on the edge $[AB]$ of the original triangle, the knowledge of just the values of $\left(\mu(\tilde{T}(u)), u \in \mathcal{T}^{\infty} \right)$ would not be sufficient to obtain information on how that mass is distributed. \end{rem}
\begin{proof} The existence of $\mu$ is a consequence of the property \eref{covering_prop} and Kolmogorov's extension theorem. Let us prove uniqueness. Let $\mu,\mu'$ be two measures on $\tilde{T}$ satisfying the conditions of Theorem \ref{well_def}. For the remainder of the proof, we work with a fixed $\omega$ in our probability space $\Omega$, where the properties (1) and (2) of Theorem \ref{well_def} hold.
Define the set $\hat{T}$ as the triangle $\tilde{T}$ with the boundaries of all the triangles $\tilde{T}(u)$ removed, that is \[\hat{T} := \tilde{T} \setminus \bigcup_{u \in \mathcal{T}^{\infty}} \partial \left( \tilde{T}(u) \right).\] Because of property (2) of Theorem \ref{well_def}, we may view $\mu$ and $\mu'$ as measures on $\hat{T}$. Now the sets $\left(\tilde{T}(u) \cap \hat{T}, u \in \mathcal{T}^{\infty} \right)$ form a basis of open sets for a certain topology on $\hat{T}$, say $\mathcal{O}'$. We first show the following.
\begin{lem}\label{topo} Let $\mathcal{O}$ denote the topology induced by the usual metric topology on $\hat{T}$. Then $\mathcal{O}' = \mathcal{O}$. \end{lem}
\begin{proof} First, note that $\mathcal{O}' \subseteq \mathcal{O}$, since for any $u \in \mathcal{T}^{\infty}$, the set $\tilde{T}(u) \cap \hat{T}$ is an open set for the metric topology on $\hat{T}$. To show the converse, we show that \beq\label{topo_eq} \forall O \in \mathcal{O},\ \forall x \in O,\, \exists u \in \mathcal{T}^{\infty},\ x \in \tilde{T}(u) \cap \hat{T} \subseteq O.
\eq Fix $O \in \mathcal{O}$ and $x \in O$. Define $u^n(x)$ to be the unique vertex $u \in \mathcal{T}^{\infty}$ s.t. $|u| = n$ and $x \in \tilde{T}(u)$. The uniqueness of $u^n(x)$ stems from the fact that $x \notin \bigcup_{u \in \mathcal{T}^{\infty}} \partial \left( \tilde{T}(u) \right)$. For simplicity we write $T^n(x) := \tilde{T}(u^n(x))$. Now to show \eref{topo_eq}, it is sufficient to show that \[ \mathrm{diam} (T^n(x)) \rightarrow 0, \mbox{ as } n \rightarrow \infty.\]
We write, for any $n$, $u^n(x)=(u_1(x),\cdots,u_n(x))$. Notice that by construction, the $u_i(x)$ are well defined (i.e. they do not depend on $n$). Now if we show that the sequence $(u_i(x),i\geq1)$ satisfies condition \eref{cond_tree}, then we can follow the proof of Lemma \ref{small_infinite_branch} to get the desired result. Let us therefore show that \beq\label{end_pf} \forall j \in \{1,2,3\}, \ \vert \lbrace i; \ u_i(x) = j \rbrace \vert = \infty. \eq
We proceed by contradiction. If \eref{end_pf} does not hold, then there are two possibilities: \begin{enumerate}[(1)] \item There exist $N \in \mathbb{N}$ and $j \in \{1,2,3\}$ such that for all $i \geq N$, $u_i(x)=j$; that is, there is exactly one value of $j$ such that $\vert \lbrace i; \ u_i(x) = j \rbrace \vert$ is infinite. \item There exist $N \in \mathbb{N}$ and $j \in \{1,2,3\}$ such that for all $i \geq N$, $u_i(x) \neq j$ and for $k \in \{1,2,3\} \setminus \{j\}$, $\vert \lbrace i; \ u_i(x) = k \rbrace \vert = \infty$; that is, there are exactly two values of $j$ such that $\vert \lbrace i; \ u_i(x) = j \rbrace \vert$ is infinite. \end{enumerate}
Consider case (1). Let $N \in \mathbb{N}$ be such that, say, $u_i(x)=1$ for all $i \geq N$. Write $T_i(x) = (A_i(x),B_i(x),C_i(x))$ for any $i$. Now for any $i \geq N$, we have $B_i(x) = B_N(x) :=B(x) $ and $C_i(x) = C_N(x) := C(x)$ (recalling the ordering of triangles after splitting as shown in Figure \ref{trg_order}). Now if $A(x)$ is a limit point of some subsequence of $(A_n(x))_{n \geq N}$ we can show, using similar arguments as in the proof of Lemma \ref{subseq}, that $A(x) \in [B(x) \, C(x)]$. This implies that as $n$ tends to infinity, the distance between $A_n(x)$ and the line segment $[B(x) \, C(x)]$ tends to zero (see Figure \ref{zero_dist} below). But this would imply that $x \in [B(x) \, C(x)]$, which is impossible since $x \notin \bigcup_{u \in \mathcal{T}^{\infty}} \partial \left( \tilde{T}(u) \right)$. Figure \ref{zero_dist} provides an illustration of this case.
\begin{figure}
\caption{Case (1)}
\label{zero_dist}
\end{figure}
Now consider case (2). We suppose that for $i \geq N$, $u_i(x) \neq 1$ and that $\vert \lbrace i; \ u_i(x) = j \rbrace \vert = \infty$ for $j=2,3$. We still write $T_i(x) = (A_i(x),B_i(x),C_i(x))$ for any $i$. Now for any $i \geq N$ we have $A_i(x) = A_N(x) := A(x)$. As above, one shows that the sequences $d(A(x),B_n(x))$, $d(A(x),C_n(x))$ both tend to zero as $n$ tends to infinity, so that we should have $x = A(x)$. But this contradicts once more the fact that $x \notin \bigcup_{u \in \mathcal{T}^{\infty}} \partial \left( \tilde{T}(u) \right)$. Thus, we have proved \eref{end_pf}, which concludes the proof of Lemma \ref{topo}. \end{proof}
To complete the proof of Theorem \ref{well_def}, we use Dynkin's $\pi - \lambda$ theorem (Theorem 3.2 in \cite{Bil}). For any collection $S$ of subsets of $\tilde{T}$, denote by $\sigma(S)$ the $\sigma$-algebra generated by $S$, so that $\sigma(\mathcal{O})$ is the usual Borel $\sigma$-algebra on $\tilde{T}$. To prove Theorem \ref{well_def}, it is sufficient to show that \beq\label{s-algebra} \sigma \left( \{ \tilde{T}(u); \, u \in \mathcal{T}^{\infty} \} \right) = \sigma(\mathcal{O}). \eq But $\sigma \left( \{ \tilde{T}(u); \, u \in \mathcal{T}^{\infty} \} \right)$ is a Dynkin system (since it is a $\sigma$-algebra). Moreover, since $\mathcal{T}^{\infty}$ is countable, Lemma \ref{topo} implies that \[ \mathcal{O} \subset \sigma \left( \{ \tilde{T}(u); \, u \in \mathcal{T}^{\infty} \} \right), \] and Dynkin's theorem immediately implies \eref{s-algebra}. \end{proof}
\subsection{Properties of the limit measure}\label{IMP}
We have seen that the occupation measure $\mu_n$ converges in distribution to a limit measure $\mu$, which satisfies $\mu(\tilde{T}(u))=\vert I^u \vert$ where $I^u$ is the interval marking the node $u$ in the fragmentation construction introduced in Section \ref{frag}. Moreover, for any node $u$, $\mu(\partial \tilde{T}(u)) = 0$. In this section, we determine additional properties of the measure $\mu$.
\begin{pro}\label{at_part} The atomic part of $\mu$ is a.s. zero. That is, a.s. there is no point $x \in T$ s.t. $\mu( \lbrace x \rbrace) >0$. \end{pro}
\begin{proof}
Define $T^{(n)} := \lbrace \tilde{T}(u); \, u \in \mathcal{T}^{\infty}, |u| = n \rbrace $, that is the set of triangles ``at height $n$''. It suffices to show that a.s. \beq\label{proof_at} \sup_{\tau \in T^{(n)}} \mu(\tau) \rightarrow 0, \quad \mbox{as } n \rightarrow \infty. \eq Indeed, if there exists with positive probability some $x \in T$ such that $\mu(\{x \}) >0$, then, using the notation $T^n(x)$ introduced in the proof of Theorem \ref{well_def}, with positive probability \[ \limsup_n \sup_{\tau \in T^{(n)}} \mu(\tau) \geq \limsup_n \mu (T^n(x)) \] \[ \geq \limsup_n \mu( \{x \}) = \mu( \{x \}) > 0, \] and therefore proving \eref{proof_at} is sufficient.
For this, we will use a branching process result. Notice that if $|u| = n$, then the law of $\mu(\tilde{T}(u))$ is that of the product $\mathcal{P}_1 \cdots \mathcal{P}_n$, where the $\mathcal{P}_i$ are i.i.d. random variables distributed as the first (or equivalently, any) marginal of a $\mathrm{Dir}_2(\frac{1}{2})$ distribution. We shall show that \beq\label{proof_at'} \inf_{\tau \in T^{(n)}} -\log(\mu(\tau)) \rightarrow +\infty, \quad \mbox{as } n \rightarrow \infty, \eq which is equivalent to \eref{proof_at}.
Now the law of $\inf_{\tau \in T^{(n)}} -\log(\mu(\tau))$ is the law of the time of first birth at generation $n$ for a branching process with birth times $-\log(P_1)$, $-\log(P_2)$, $-\log(P_3)$ where $(P_1,P_2,P_3)$ has law $\mathrm{Dir}_2(\frac{1}{2})$ (and every vertex has exactly three children).
We define $\Phi$ to be the Laplace transform of the reproduction law: \[\Phi(\theta):= \mathbb{E}\left[\sum_{i=1}^3 \exp\big(- \theta \cdot (-\log(\mathcal{P}_i))\big)\right].\] Thus $\Phi(\theta) = 3\mathbb{E}\left((\mathcal{P}_1)^{\theta}\right)$. Kingman proved in \cite{King} that if $a,\theta>0$ satisfy $\Phi(\theta)e^{\theta a} < 1$, then the first birth process $(B_n)$ satisfies $\liminf_n \frac{B_n}{n} \geq a$.
Now since $\Phi(\theta)$ tends to zero as $\theta$ tends to infinity, we can choose $\theta_0$ such that $\Phi(\theta_0) \leq (2e)^{-1}$. Taking $a=\theta_0^{-1}$ then yields $\Phi(\theta_0)e^{\theta_0 a} = \Phi(\theta_0)\, e \leq 1/2 < 1$, so that $\liminf_n \frac{B_n}{n} \geq a > 0$. This is clearly enough to show \eref{proof_at'}, and thus Proposition \ref{at_part} is proved. \end{proof}
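As a sanity check on these quantities, note that if $\mathrm{Dir}_2(\frac12)$ is read as the symmetric Dirichlet distribution with parameter $\frac12$ on the $2$-simplex (an assumption about the notation, spelled out here), then $\mathcal{P}_1 \sim \mathrm{Beta}(\frac12,1)$, $\mathbb{E}\big(\mathcal{P}_1^\theta\big) = \frac{1/2}{1/2+\theta}$, and $\Phi(\theta) = \frac{3}{2\theta+1}$, so that $\theta_0 = (6e-1)/2$ gives $\Phi(\theta_0) = (2e)^{-1}$ exactly. A short Monte Carlo sketch of this closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample splitting weights from the symmetric Dirichlet(1/2,1/2,1/2)
# (our assumed reading of Dir_2(1/2)); the first marginal is then Beta(1/2,1).
P = rng.dirichlet([0.5, 0.5, 0.5], size=200_000)

def phi_mc(theta):
    # Laplace transform of the reproduction law: 3 * E[P_1^theta]
    return 3.0 * np.mean(P[:, 0] ** theta)

def phi_exact(theta):
    # Closed form under the Beta(1/2,1) marginal: 3/(2*theta + 1)
    return 3.0 / (2.0 * theta + 1.0)

for theta in (1.0, 2.0, 5.0):
    assert abs(phi_mc(theta) - phi_exact(theta)) < 0.01

# Phi decreases to 0, so a theta_0 with Phi(theta_0) <= 1/(2e) exists;
# theta_0 = (6e - 1)/2 attains it exactly under this assumption.
theta0 = (6 * np.e - 1) / 2
assert phi_exact(theta0) <= 1 / (2 * np.e) + 1e-12
```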
It is a well-known fact that any Borel measure $\nu$ on $\mathbb{R}^d$ can be decomposed as $\nu = \nu_{Leb} + \nu_{atom} + \nu_{sing} $, where $\nu_{Leb}$ is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, $\nu_{atom}$ is a countable (weighted) sum of Dirac atoms, and $\nu_{sing}$ has no atoms and is singular with respect to the Lebesgue measure on $\mathbb{R}^d$. The previous proposition means that a.s. $\mu_{atom} = 0$. We seek additional information on $\mu$.
\begin{de}\label{HDimdef} Let $M$ be a metric space, and $X \subseteq M$ a subspace of $M$. For any $d \geq 0$, we define the \emph{$d$-dimensional Hausdorff measure} $\mu_d$ of $X$ by \[ \mu_d(X) = \lim_{ \varepsilon \rightarrow 0} \inf \sum_{i \in I} (\mathrm{diam}(U_i))^d,\] where the infimum is taken over all countable coverings $(U_i)_{i \in I}$ of $X$ such that for any $i \in I$, $\mathrm{diam} (U_i) < \varepsilon$. This infimum is non-decreasing as $\varepsilon$ decreases, thus the limit exists. The \emph{Hausdorff dimension} $\dim_H$ of $X$ is then defined by \[ \mathrm{dim}_H(X) = \inf \{ d \geq 0;\ \mu_d(X) = 0 \}. \] \end{de}
\begin{thm}\label{HDim} The limit measure $\mu$ is supported by a subset $S_{\nu}(\mu)$ of $\tilde{T}$ which satisfies \[ \mathrm{dim}_H(S_{\nu}(\mu)) = \frac{2}{3 \mathbb{E}(-\log(P_1))} \mbox{ a.s.} ,\] where $P=(P_1,P_2,P_3)$ is a splitting ratio with distribution $\nu$. \end{thm}
Using Jensen's inequality we get that \[ \frac{2}{3 \mathbb{E}(-\log(P_1))} \leq \frac{2}{-3 \log(\mathbb{E}(P_1))} = \frac{2}{3 \log(3)} ,\] and since the latter quantity is strictly less than $2$, the measure $\mu$ is a.s. singular with respect to the Lebesgue measure, i.e. a.s. $\mu_{Leb} = 0$. Notice also that we have equality in the above when we take the special case $P_1 = \frac13$ a.s., that is $\nu = \nu^0$ as defined in \eref{nu0}.
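For illustration, consider the particular splitting law $\nu = \mathrm{Dir}_2(\frac12)$ itself, again read (as an assumption on the notation) as the symmetric Dirichlet$(\frac12,\frac12,\frac12)$ law: then $P_1 \sim \mathrm{Beta}(\frac12,1)$, so $-\log P_1$ is exponential with rate $\frac12$ and $\mathbb{E}(-\log P_1)=2$, giving a dimension $\frac{2}{3\cdot 2}=\frac13$, strictly below the bound $\frac{2}{3\log 3}$. A quick numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)

# Splitting ratios sampled from the symmetric Dirichlet(1/2,1/2,1/2)
# (our assumed reading of Dir_2(1/2)); P_1 is then Beta(1/2,1).
P1 = rng.dirichlet([0.5, 0.5, 0.5], size=200_000)[:, 0]

mean_neg_log = np.mean(-np.log(P1))
# -log P_1 ~ Exp(1/2), so E[-log P_1] = 2 exactly.
assert abs(mean_neg_log - 2.0) < 0.05

dim_dirichlet = 2.0 / (3.0 * mean_neg_log)   # ~ 1/3 for this splitting law
dim_bound = 2.0 / (3.0 * np.log(3.0))        # Jensen upper bound ~ 0.607
assert dim_dirichlet < dim_bound < 2.0       # mu is singular w.r.t. Lebesgue
```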
\begin{proof} We apply the second point of Corollary IV.b in \cite{Bar}. In our case, $c=3$, $(W_0,W_1,W_2)$ follows the $\mathrm{Dir}_2(\frac12)$ distribution, and $(L_0,L_1,L_2) = P = (P_1,P_2,P_3)$, where $P$ is a splitting ratio with distribution $\nu$. Barral's result states that the measure $\mu$ is supported by a set with Hausdorff dimension \[ \mathrm{dim}_H(S(\mu)) = \frac{\mathbb{E} \left( \sum_{j=0}^{c-1} W_j \log W_j \right)}{\mathbb{E} \left( \sum_{j=0}^{c-1} W_j \log L_j \right)}\] \[ = \frac{\mathbb{E}(W_0 \log(W_0))}{ \mathbb{E}(W_0) \mathbb{E}(\log(P_1))} ,\] and the desired result follows from a computation. \end{proof}
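To make the final computation explicit, assume (this is our reading, not spelled out in the text) that the first marginal $W_0$ of $\mathrm{Dir}_2(\frac12)$ is $\mathrm{Beta}(\frac12,1)$-distributed, so that $\mathbb{E}(W_0^s) = \frac{1/2}{1/2+s}$, and that the weights $(W_j)$ are independent of the lengths $(L_j)$. Then, by exchangeability,
\[
\mathbb{E}\Big(\sum_{j=0}^{2} W_j \log W_j\Big) = 3\,\mathbb{E}(W_0 \log W_0)
= 3\,\frac{{\rm d}}{{\rm d}s}\,\mathbb{E}(W_0^s)\Big\vert_{s=1}
= 3\,\frac{{\rm d}}{{\rm d}s}\,\frac{1/2}{1/2+s}\Big\vert_{s=1} = -\frac{2}{3},
\]
while
\[
\mathbb{E}\Big(\sum_{j=0}^{2} W_j \log L_j\Big) = 3\,\mathbb{E}(W_0)\,\mathbb{E}(\log P_1) = \mathbb{E}(\log P_1),
\]
so that
\[
\mathrm{dim}_H(S_{\nu}(\mu)) = \frac{-2/3}{\mathbb{E}(\log P_1)} = \frac{2}{3\,\mathbb{E}(-\log P_1)}.
\]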
\end{document}
\begin{document}
\title[Hypoelliptic SDE with singular drift]{Weak well posedness for hypoelliptic stochastic differential equation with singular drift: a sharp result}
\author{P.E. Chaudru de Raynal} \address{UNIVERSITE SAVOIE MONT BLANC, LAMA. } \email[]{pe.deraynal@univ-savoie.fr}
\begin{abstract} In this paper, we prove weak uniqueness for a hypoelliptic stochastic differential equation with Hölder drift, when the Hölder exponent is strictly greater than 1/3. We thus extend to the weak framework the previous work \cite{chaudru_strong_2012}, where strong uniqueness was proved when the Hölder exponent is strictly greater than 2/3. We also show that this result is sharp, by giving a counter-example to weak uniqueness when the Hölder exponent is just below 1/3.
Our approach is based on the martingale problem formulation of Stroock and Varadhan and on smoothing properties of the associated PDE. \end{abstract} \maketitle \section{Introduction} Let $d$ be a positive integer and $\mathcal{M}_{d}(\mathbb{R})$ be the set of $d\times d$ matrices with real coefficients. For a given positive $T$, given measurable functions $F_1,F_2,\sigma: [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d \times \mathbb{R}^d \times \mathcal{M}_{d}(\mathbb{R})$ and $(B_{t}, t\geq 0)$ a standard $d$-dimensional Brownian motion defined on some filtered probability space $(\Omega, \mathcal{F}, \mathbb{P}, (\mathcal{F}_t)_{ t \geq 0})$, we consider the following $\mathbb{R}^d \times \mathbb{R}^d$ system for any $t$ in $[0,T]$:
\begin{equation}\label{systemEDO} \left\lbrace \begin{array}{llll} {\rm d} X^1_t = F_1(t,X^1_t,X^2_t){\rm d} t + \sigma(t,X^1_t,X_t^2) {\rm d} B_t,\qquad & X^1_0=x_1,\\ {\rm d} X^2_t = F_2(t,X^1_t,X^2_t){\rm d} t,\qquad & X^2_0=x_2,\\ \end{array} \right. \end{equation} where $x_1$ and $x_2$ belong to $\mathbb{R}^d$ and where the diffusion matrix $a:=\sigma \sigma^*$ is\footnote{The notation ``$*$'' stands for the transpose.} supposed to be uniformly elliptic.
In this work, we aim at proving that this system is well-posed (\emph{i.e.} there exists a unique solution), in the weak sense, when the drift is singular. Indeed, in that case, uniqueness of the associated martingale problem from Stroock and Varadhan's theory \cite{stroock_multidimensional_1979} fails since the noise of the system degenerates. We nevertheless show that under a suitable H\"{o}lder assumption on the drift, a Lipschitz condition on the diffusion matrix, and a hypoellipticity condition on the system, weak well-posedness holds for \eqref{systemEDO}. By suitable, we mean that there exists a threshold for the Hölder-continuity of the drift with respect to (w.r.t.) the degenerate argument: this Hölder exponent is supposed to be strictly greater than $1/3$. We also show that this threshold is sharp thanks to a counter-example when the Hölder exponent is strictly less than $1/3$.\\
\textbf{Mathematical background.} It may be a real challenge to show well-posedness of a differential system with a drift less than Lipschitz (see \cite{diperna_ordinary_1989} for a work in that direction). The Peano example is a very good illustration of this phenomenon: for any $\alpha$ in $(0,1)$ the equation \begin{equation}
{\rm d} Y_t = \sign(Y_t)|Y_t|^\alpha {\rm d} t,\ Y_0=0, \quad t\in [0,T], \end{equation} has an infinite number of solutions, of the form $\pm c_\alpha (t-t^\star)^{1/(1-\alpha)} \mathbf{1}_{[t^\star;+\infty)},\ t^\star \in [0,T]$. Nevertheless, it has been shown that this equation is well-posed (in a strong and weak sense) as soon as it is infinitesimally perturbed by a Brownian motion. More precisely, the equation \begin{equation}\label{eq:peano1} {\rm d} Y_t = b(Y_t) {\rm d} t + {\rm d} B_t,\ Y_0=0, \quad t\in [0,T], \end{equation} admits a unique strong solution (\emph{i.e.} there exists an almost surely unique solution adapted to the filtration generated by the Brownian motion) as soon as the function $b: \mathbb{R}^d \ni x \mapsto b(x)\in \mathbb{R}^d$ is measurable and bounded. This phenomenon is known as \emph{regularization by noise.}
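The normalising constant of these extreme solutions can be checked by hand: for $y(t) = c\,(t-t^\star)_+^{1/(1-\alpha)}$, matching $y' = |y|^\alpha$ requires $c^{1-\alpha} = 1-\alpha$, i.e. $c_\alpha = (1-\alpha)^{1/(1-\alpha)}$. A minimal numerical sketch of this verification (the explicit value of $c_\alpha$ is our computation, not taken from the text):

```python
import numpy as np

# Extreme solution of dY = sign(Y)|Y|^alpha dt started from 0, leaving the
# singularity at time t_star; c_alpha = (1-alpha)^(1/(1-alpha)) is the
# normalisation making y' = |y|^alpha (an assumption spelled out above).
alpha, t_star = 0.5, 0.25
c_alpha = (1 - alpha) ** (1 / (1 - alpha))

def y(t):
    return c_alpha * np.maximum(t - t_star, 0.0) ** (1 / (1 - alpha))

# Check y'(t) = |y(t)|^alpha on a grid past t_star, via centred differences.
t = np.linspace(t_star + 0.05, 1.0, 200)
h = 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)
assert np.max(np.abs(deriv - np.abs(y(t)) ** alpha)) < 1e-5
```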
Regularization by noise of systems with singular drift has been widely studied in the past few years. Since the pioneering one dimensional work of Zvonkin \cite{zvonkin_transformation_1974} and its generalization to the multidimensional setting by Veretenikov \cite{veretennikov_strong_1980} (where stochastic systems with bounded drift and additive noise are handled), several authors extended the result. Krylov and Röckner \cite{krylov_strong_2005} showed that SDEs with additive noise and $\mathbb{L}_p$ drift (where $p$ depends on the dimension of the system) are also well-posed, and Zhang \cite{zhang_strong_2005} treated the case of multiplicative noise with uniformly elliptic and Sobolev diffusion matrix. More recently, Flandoli, Issoglio and Russo \cite{flandoli_multidimensional_2014} and Delarue and Diel \cite{delarue_rough_2015} studied the weak well-posedness of \eqref{eq:peano1} for a distributional drift (\emph{i.e.} in the Hölder space $C^\alpha$ where $\alpha$ is respectively greater than $-1/3$ and $-2/3$, see \cite{hairer_introduction_2015} for a definition of such a space), and Catellier and Gubinelli \cite{catellier_averaging_2012} considered systems perturbed by fractional Brownian motion. We refer to the notes of Flandoli \cite{flandoli_random_2011} for a general account on this topic.\\
In our case the setting is a bit different, since the noise added in the system acts only by means of a random drift (\emph{i.e.} the system degenerates). Indeed, the archetypal example of system \eqref{systemEDO} writes \begin{equation}\label{systemEDOex} {\rm d} X^2_t = \big(B_t + F_2(X^2_t)\big){\rm d} t,\qquad X^2_0=x_2, \end{equation} where the function $F_2$ is supposed to be only Hölder-continuous. Thus, the system can be seen as a classical ODE whose drift is perturbed by a Brownian motion: the perturbation is then of macroscopic type. We hence consider a \emph{regularization by stochastic drift.}
The first work in that direction is due to Chaudru de Raynal \cite{chaudru_strong_2012}, where strong well-posedness of \eqref{systemEDO} is proved when the drift is Hölder continuous with Hölder exponent w.r.t. the degenerate argument strictly greater than $2/3$ and where the system is also supposed to be hypoelliptic. Since then, several authors have studied the strong well-posedness of \eqref{systemEDO} with different approaches and have obtained, under weaker conditions, the same kind of threshold: in \cite{wang_degenerate_2015}, the authors used an approach based on gradient estimates on the associated semi-group to show that the system is strongly well-posed when the drift satisfies a Hölder-Dini condition with Hölder exponent $2/3$ w.r.t. the degenerate component; in \cite{fedrizzi_regularity_2016}, the authors used a PDE approach and obtained strong well-posedness as soon as the drift is weakly differentiable in the degenerate direction, with order of derivation $2/3$. This work then ``extends'' these results to the case of a threshold of $1/3$ for weak well-posedness and shows that the result obtained is sharp thanks to a counter-example.\\
\textbf{Strategy of proof.} Our strategy relies on the martingale problem approach of Stroock and Varadhan \cite{stroock_multidimensional_1979}. We indeed know that under our assumptions the system \eqref{systemEDO} admits at least a weak solution. We then show that this solution is unique. To do so, we investigate the regularity of the (mild) solution of the associated PDE. Namely, denoting by ${\rm Tr}(a)$ the trace of the matrix $a$, ``$\cdot$'' the standard Euclidean inner product on $\mathbb{R}^d$ and $\mathcal{L}$ the generator of \eqref{systemEDO}: \begin{eqnarray} \mathcal{L} &:=& \frac{1}{2}{\rm Tr}(a(t,x_1,x_2)D^2_{x_1}) + \left[F_1(t,x_1,x_2)\right] \cdot \left[D_{x_1}\right] + \left[F_2(t,x_1,x_2)\right] \cdot \left[D_{x_2}\right],\label{gengen} \end{eqnarray} we exhibit a ``good'' theory for the PDE \begin{equation}\label{eq:thepde} (\partial_t+\mathcal{L})u = f \end{equation} set on the cylinder $[0,T)\times \mathbb{R}^{2d}$ with terminal condition $0$ at time $T$ and where the function $f$ belongs to a certain class of functions $\mathcal{F}$.
By ``good'', we mean that we can consider a sequence of classical solutions $(u^n)_{n\geq 0}$, together with the associated derivatives in the non-degenerate direction $(D_{x_1} u^n)_{n\geq 0}$, along a sequence of mollified coefficients $(F_1^n,F_2^n,a^n)_{n\geq 0}$, that satisfy \emph{a priori} estimates depending only on the regularity of $(F_1,F_2,a)$. By using the Arzel\`a-Ascoli Theorem, this allows us to extract a subsequence converging to the mild solution of \eqref{eq:thepde} on every compact subset of $[0,T]\times \mathbb{R}^{2d}$.
Hence, thanks to Itô's Formula, one can show that the quantity $$\Big(u(t,X_t^1,X_t^2) - \int_0^t f(s,X_s^1,X_s^2)\,{\rm d} s\Big)_{0\leq t \leq T},$$ is a martingale. By letting the class of functions $\mathcal{F}$ be sufficiently rich, this allows us to prove uniqueness of the marginals of the weak solution of \eqref{systemEDO} and then of the law itself.\\
Here, the crucial point is that the operator is not uniformly parabolic: the second order differentiation operator in $\mathcal{L}$ only acts in the first (and non-degenerate) direction ``$x_1$''. Therefore, we expect a loss of the regularization effect w.r.t. the degenerate component of \eqref{systemEDO}. Nevertheless, we show that the noise still regularizes, even in the degenerate direction, by means of the random drift: we can benefit from the hypoellipticity of the system.
The system \eqref{systemEDOex} indeed goes back to the so-called Kolmogorov example \cite{kolmogorov_zufallige_1934}, which is also the archetypal example of a hypoelliptic system without elliptic diffusion matrix. In our setting, the hypoellipticity assumption translates into a non-degeneracy assumption on the derivative of the drift function $F_2$ w.r.t. the first component. Together with the Hölder assumption, this can be seen as a weak Hörmander condition, in reference to the work of Hörmander \cite{hormander_hypoelliptic_1967} on degenerate second order operators.
Thus, our system appears as a non-linear generalization of Kolmogorov's example. Degenerate operators of this form have been studied by many authors see \emph{e.g.} the works of Di Francesco and Polidoro \cite{di_francesco_schauder_2006}, and Delarue and Menozzi \cite{delarue_density_2010}. We also emphasize that, in \cite{menozzi_parametrix_2011}, Menozzi proved the weak well-posedness of a generalization of \eqref{systemEDO} with Lipschitz drift and Hölder diffusion matrix.
Nevertheless, to the best of our knowledge, a ``good'' theory, in the sense mentioned above, for the PDE \eqref{eq:thepde} has not been exhibited yet. We here prove the aforementioned estimates by using a first order parametrix expansion (see \cite{friedman_partial_1964}) of the operator $\mathcal{L}$ defined by \eqref{gengen}. This parametrix expansion is based on the knowledge of the related linearized and frozen version of \eqref{systemEDO}, coming essentially from the previous work of Delarue and Menozzi \cite{delarue_density_2010}.\\
\textbf{Minimal setting to restore uniqueness.} Obviously, all the aforementioned works, as well as this one, lead to the question of the minimal assumption that could be made on the drift in order to restore well-posedness. Having in mind that most of these works use a PDE approach, it seems clear that the assumption on the drift relies on the regularization properties of the semi-group generated by the solution. In comparison with the previous works, the threshold of $1/3$ can be seen as the price to pay to balance the degeneracy of the system: the smoothing effect of the semi-group associated to a degenerate Gaussian process is less efficient than that of a non-degenerate Gaussian process. We prove that our assumptions are (almost) minimal by giving a counter-example in the case where the drift $F_2$ is Hölder continuous with Hölder exponent just below $1/3$.\\
Although this example concerns our degenerate case, we feel that the method could be adapted in order to obtain the optimal threshold (for the weak well-posedness) in other settings. This is the reason why we wrote it in a general form. Let us briefly explain why, and expose the heuristic rule behind our counter-example.
It relies on the work of Delarue and Flandoli \cite{delarue_transition_2014}. In that paper, the Peano example is investigated: namely, the system of interest is \begin{equation}\label{eq:peano}
{\rm d} Y_t = \sign(Y_t)|Y_t|^\alpha {\rm d} t + \epsilon {\rm d} B_t,\ Y_0=0, \quad \epsilon >0, \ 0<\alpha<1. \end{equation} The authors studied the zero-noise limit of the system ($\epsilon \to 0$) pathwise. When doing so, they put in evidence the following crucial phenomenon: in small time, there is a competition between the irregularity of the drift and the fluctuations of the noise. The fluctuations of the noise allow the solution to leave the singularity, while the irregularity of the drift (possibly) captures the solution in the singularity. Thus, the more singular the drift is, the more irregular the noise has to be.
This competition can be made explicit. In order to regularize the equation, the noise has to dominate the system in small time. This means that there must exist a time $0<t_{\epsilon}<1$ such that, below this instant, the noise dominates the system and pushes the solution far enough from the singularity, while above it, the drift dominates the system and constrains the solution to fluctuate around one of the extreme solutions of the deterministic Peano equation. A good way to see what the instant $t_{\epsilon}$ looks like is to compare the fluctuations of the extreme solutions ($\pm t^{1/(1-\alpha)}$) with the fluctuations of the noise. Denoting by $\gamma$ the order of the fluctuations of the noise, this leads to the equation \begin{equation*} \epsilon t_\epsilon^{\gamma} = t_\epsilon^{1/(1-\alpha)}, \end{equation*} which gives $t_\epsilon = \epsilon^{(1-\alpha)/(1-\gamma(1-\alpha))}$ and leads to the condition: \begin{equation}\label{eq:threshold} \alpha > 1-1/\gamma. \end{equation} The counter-example, which likewise compares the fluctuations of the noise with the extreme solutions, leads to the same threshold and says that weak uniqueness fails below this ceiling.
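The algebra behind this balance is elementary and can be double-checked mechanically: solving $\epsilon t^{\gamma} = t^{1/(1-\alpha)}$ for $t$ yields $t_\epsilon = \epsilon^{(1-\alpha)/(1-\gamma(1-\alpha))}$, and $t_\epsilon$ tends to $0$ with $\epsilon$ precisely when $1-\gamma(1-\alpha)>0$, that is $\alpha > 1-1/\gamma$. A short sketch (all numeric values are illustrative choices):

```python
# Balance between the noise fluctuations (epsilon * t^gamma) and the extreme
# solution (t^(1/(1-alpha))): solving epsilon * t^gamma = t^(1/(1-alpha))
# gives t_eps = epsilon^((1-alpha)/(1-gamma*(1-alpha))).
def t_eps(eps, gamma, alpha):
    return eps ** ((1 - alpha) / (1 - gamma * (1 - alpha)))

# The exponent solves the balance equation exactly:
for eps, gamma, alpha in [(1e-3, 0.5, 0.6), (1e-2, 1.5, 0.5)]:
    t = t_eps(eps, gamma, alpha)
    rhs = t ** (1 / (1 - alpha))
    assert abs(eps * t**gamma - rhs) < 1e-12 * rhs

# t_eps -> 0 with eps exactly when alpha > 1 - 1/gamma:
gamma = 1.5                       # integrated-Brownian-type fluctuations
assert t_eps(1e-4, gamma, 0.4) < t_eps(1e-2, gamma, 0.4) < 1  # alpha > 1/3
assert t_eps(1e-4, gamma, 0.2) > 1                            # alpha < 1/3
```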
Obviously, cases where $\alpha<0$ have to be considered carefully. But if we formally consider the case of a Brownian perturbation, we get $\gamma =1/2$ and so $\alpha>-1$, which is the sharp threshold exhibited in the recent work of Beck, Flandoli, Gubinelli and Maurelli \cite{beck_stochastic_2014}.
In our setting, as suggested by the example \eqref{systemEDOex}, the noise added in \eqref{eq:peano} can be seen as the integral of a Brownian path, which gives $\gamma=3/2$. We deduce from equation \eqref{eq:threshold} that the threshold for the Hölder-regularity of the drift is $1/3$. We finally emphasize that this heuristic rule gives another (pathwise) interpretation of our threshold in comparison with the one obtained in the non-degenerate cases. Since the noise added in our system degenerates, its fluctuations (which are typically of order $3/2$) are not strong enough to push the solution far enough from the singularity when the drift is too singular (say, less than $C^{1/3}$). \\
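The value $\gamma=3/2$ can be read off the variance of an integrated Brownian path: $\mathrm{Var}\big(\int_0^t B_s\,{\rm d}s\big) = t^3/3$, so the typical fluctuation at time $t$ is of order $t^{3/2}$. A small Monte Carlo sketch (grid size and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate n_paths Brownian paths on [0,1] and their time integrals
# int_0^1 B_s ds via a Riemann sum.
n_paths, n_steps = 4000, 1000
dt = 1.0 / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(increments, axis=1)
integrals = B.sum(axis=1) * dt

# Var(int_0^t B_s ds) = t^3/3; here t = 1, so the variance should be ~ 1/3,
# i.e. fluctuations of order t^(3/2), giving gamma = 3/2.
assert abs(np.var(integrals) - 1.0 / 3.0) < 0.05
```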
\textbf{Organization of this paper.} This paper is organized as follows. In Section \ref{sec:MR}, we give our main result: weak existence and uniqueness hold for \eqref{systemEDO}. Smoothing properties of the PDE \eqref{eq:thepde} are given in Section \ref{sec:pdeandproof}, as well as the proof of our main result. Then, we show in Section \ref{sec:counterexample} that our result is sharp thanks to a counter-example. Finally, the regularization properties of the PDE \eqref{eq:thepde} are proved in Section \ref{sec:pde}.
\section{Notations, assumptions and main results}\label{sec:MR} \textbf{Notations.} In order to simplify the notations, we adopt the following convention: $x, y, z,\xi,$ \emph{etc.} denote the $2d-$dimensional real variables $(x_1,x_2), (y_1,y_2), (z_1,z_2), (\xi_1,\xi_2),$ \emph{etc.} We denote by $g(t,x) = g(t,x_1,x_2)$ any function $g$ from $[0,T] \times\mathbb{R}^{d} \times \mathbb{R}^d$ to $\mathbb{R}^{N},\ N\in \mathbb{N}$, evaluated at the point $(t,x_1,x_2)$. Below we sometimes write $X_t = (X_t^1,X_t^2)$ and, when necessary, we write $(X^{t,x}_s)_{t\leq s \leq T}$ for the process defined by \eqref{systemEDO} which starts from $x$ at time $t$, \emph{i.e.} such that $X_t^{t,x}=x$.
We denote by $\mathcal{M}_d(\mathbb{R})$ the set of real $d \times d$ matrices, by ``${\rm Id}$'' the identity matrix of $\mathcal{M}_d(\mathbb{R})$, and by $\mathcal{B}$ the $2d \times d$ matrix $\mathcal{B}=({\rm Id}, 0_{\mathbb{R}^{d}\times \mathbb{R}^d})^*$. We write ${\rm GL}_d(\mathbb{R})$ for the set of $d\times d$ invertible matrices with real coefficients. We recall that $a$ denotes the square of the diffusion matrix $\sigma$, $a :=\sigma \sigma^*$. The canonical Euclidean inner product on $\mathbb{R}^d$ is denoted by ``$\cdot$''.
Subsequently, we denote by $c$, $C$, $c'$, $C'$, $c''$, \emph{etc.} a positive constant, depending only on the known parameters in \textbf{(H)} given just below, that may change from line to line and from one equation to another.
For any function from $[0,T]\times \mathbb{R}^d \times \mathbb{R}^d$, we use the notation $D$ to denote the total space derivative, and we denote by $D_1$ (resp. $D_2$) the derivative with respect to the first (resp. second) $d$-dimensional space component. In the same spirit, the notation $D_{z}$ means the derivative w.r.t the variable $z$. Hence, for every integer $n$, $D^n_{z}$ is the $n^{\rm{th}}$ derivative w.r.t $z$, and for every integer $m$, the $n\times m$ cross differentiations w.r.t $z$, $y$ are denoted by $D^n_{z}D^m_{y}$. Furthermore, the partial derivative $\partial/\partial_t$ is denoted by $\partial_t$.\\
\textbf{Assumptions} \textbf{(H).} We say that \textbf{(H)} holds if the following assumptions are satisfied. \begin{description} \item[(H1) Regularity of the coefficients] There exist $0<\beta_i^j<1$, $1\leq i,j\leq2$, and three positive constants $C_1, C_2, C_\sigma$ such that for all $t$ in $[0,T]$ and all $(x_1,x_2)$ and $(y_1,y_2)$ in $\mathbb{R}^d \times \mathbb{R}^d$ \begin{eqnarray*}
&&|F_1(t,x_1,x_2) - F_1(t,y_1,y_2)| \leq C_{1} (|x_1-y_1|^{\beta^1_1} + |x_2-y_2|^{\beta^2_1})\\
&&|F_2(t,x_1,x_2) - F_2(t,y_1,y_2)| \leq C_2(|x_1-y_1|+|x_2-y_2|^{\beta^2_2})\\
&&|\sigma(t,x_1,x_2) - \sigma(t,y_1,y_2)| \leq C_\sigma (|x_1-y_1| + |x_2-y_2|). \end{eqnarray*} Moreover, the coefficients are supposed to be continuous in time and the exponents $\beta_i^2,\ i=1,2$, are supposed to be strictly greater than $1/3$. Thereafter, we set $\beta_2^1=1$ for notational convenience. \item[(H2) Uniform ellipticity of $\sigma\sigma^*$] The function $\sigma\sigma^*$ satisfies the uniform ellipticity hypothesis: \begin{equation*}
\exists \Lambda >1,\ \forall \zeta \in \mathbb{R}^{2d},\quad \Lambda^{-1}|\zeta|^2\leq \left[\sigma \sigma^*(t,x_1,x_2)\zeta\right] \cdot \zeta \leq \Lambda |\zeta|^2, \end{equation*} for all $(t,x_1,x_2) \in [0,T] \times \mathbb{R}^d \times \mathbb{R}^d$. \item[(H3-a) Differentiability and regularity of $\ x_1 \mapsto F_2(.,x_1,.)$] For all $(t,x_2) \in [0,T]\times \mathbb{R}^d$, the function $F_2(t,.,x_2):\ x_1 \mapsto F_2(t,x_1,x_2)$ is continuously differentiable and there exist $0<\eta<1$ and a positive constant $ \bar{C}_2$ such that, for all $(t,x_2)$ in $[0,T] \times \mathbb{R}^d$ and $x_1,y_1$ in $\mathbb{R}^d$ \begin{eqnarray*}
&&|D_{1}F_2(t,x_1,x_2) - D_{1}F_2(t,y_1,x_2)| \leq \bar{C}_2 |x_1-y_1|^{\eta}. \end{eqnarray*} \item[(H3-b) Non-degeneracy of $(D_{1}F_2)(D_{1}F_2)^*$] There exists a closed convex subset $\mathcal{E} \subset {\rm GL}_d(\mathbb{R})$ (the set of $d\times d$ invertible matrices with real coefficients) such that for all $t$ in $[0,T]$ and $(x_1,x_2)$ in $\mathbb{R}^{2d}$ the matrix $D_{1}F_2(t,x_1,x_2)$ belongs to $\mathcal{E}$. We emphasize that this implies that \begin{equation*}
\exists \bar{\Lambda} >1,\ \forall \zeta \in \mathbb{R}^{2d},\quad \bar{\Lambda}^{-1}|\zeta|^2\leq \left[(D_{1}F_2)(D_{1}F_2)^*(t,x_1,x_2)\zeta\right] \cdot \zeta \leq \bar{\Lambda} |\zeta|^2, \end{equation*} for all $(t,x_1,x_2) \in [0,T] \times \mathbb{R}^d \times \mathbb{R}^d$. \end{description}
\begin{remarque} The convexity assumption in \textbf{(H3-b)} could seem, at first sight, \end{remarque}
Here is the main result of this paper. \begin{theoreme}\label{TH:theTH} Under \textbf{(H)}, there exists a unique weak solution to \eqref{systemEDO}. \end{theoreme}
\section{PDE result and proof of Theorem \ref{TH:theTH}}\label{sec:pdeandproof} Let us begin by stating the smoothing properties of the PDE \eqref{eq:thepde}. Let $(F_1^n, F_2^n, a^n)_{n\geq 0}$ be a sequence of mollified coefficients (say, infinitely differentiable with bounded derivatives of all orders greater than 1) satisfying \textbf{(H)} uniformly in $n$ and converging to $(F_1, F_2, a)$ uniformly on $[0,T]\times \mathbb{R}^d\times \mathbb{R}^d$ (an example of such coefficients can be found in \cite{chaudru_strong_2012}). Let us denote by $(\mathcal{L}^n)_{n \geq 0}$ the associated sequence of regularized versions of the operator $\mathcal{L}$ defined by \eqref{gengen}. We have the following result.
\begin{theoreme}\label{TH:PDEres} Let $\mathcal{F}$ be the set of functions $f:[0,T]\times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ that are 1-Lipschitz in space. For each $n$, the PDE \eqref{eq:thepde} with $\mathcal{L}^n$ instead of $\mathcal{L}$ admits a unique classical solution $u^n$.
Moreover, there exist a positive $\mathcal{T}_{\ref{TH:PDEres}}$, a positive $\delta_{\ref{TH:PDEres}}$ and a positive $\nu$, depending on known parameters in \textbf{(H)} only, such that for all $T$ less than $\mathcal{T}_{\ref{TH:PDEres}}$ the solution of the regularized PDE \eqref{eq:thepde} with source term $f$ in $\mathcal{F}$ satisfies: \begin{equation}
||D_{2}u^n||_{\infty} + ||D_{1}u^n||_{\infty} + ||D^2_{1}u^n||_{\infty} + ||D_{1}u^n||_{\nu} \leq CT^{\delta_{\ref{TH:PDEres}}}, \end{equation} where \begin{equation}\label{eq:defdenormenu}
||D_{1}u^n||_{\nu} = \sup_{t\in [0,T], x_1\in \mathbb{R}^d} \sup_{x_2\neq z_2} \frac{|D_{1}u^n(t,x_1,x_2)-D_{1}u^n(t,x_1,z_2)|}{|x_2 - z_2|+|x_2 - z_2|^{\beta_1^2} + |x_2 - z_2|^{\beta_2^2} + |x_2 - z_2|^\nu}. \end{equation} Moreover, each classical solution $u^n$ is uniformly bounded on every compact subset of $[0,T]\times \mathbb{R}^d \times \mathbb{R}^d$. \end{theoreme} \begin{proof} The proof of this result is postponed to Section \ref{sec:pde}. \end{proof}
We are now in position to prove uniqueness for the martingale problem associated to \eqref{systemEDO}. Under our assumptions, it is clear from Theorem 6.1.7 of \cite{stroock_multidimensional_1979} that the system \eqref{systemEDO} has at least one weak solution (the linear growth condition required there is satisfied under our assumptions).
Let $f:[0,T]\times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$ be some function which is 1-Lipschitz in space, let $u^n$ be the classical solution of the regularized version of the PDE \eqref{eq:thepde} with source term $f$, and let $(X^1,X^2)$ be a weak solution of \eqref{systemEDO} starting from $x$ at time 0. Let us now suppose that $T$ is less than $\mathcal{T}_{\ref{TH:PDEres}}$ given in Theorem \ref{TH:PDEres}. Applying Itô's formula to $u^n(t,X_t^1,X_t^2)$ we obtain that \begin{eqnarray*} u^n(t,X_t^1,X_t^2) &=& u^n(0,x_1,x_2) + \int_0^t (\partial_t + \mathcal{L}) u^n(s,X_s^1,X_s^2) {\rm d} s + \int_0^t D_x u^n(s,X_s^1,X_s^2)\mathcal{B}\sigma(s,X_s^1,X_s^2) {\rm d} B_s\\ &=& u^n(0,x_1,x_2) + \int_0^t (\partial_t + \mathcal{L}^n)u^n(s,X_s^1,X_s^2) {\rm d} s + \int_0^t (\mathcal{L}-\mathcal{L}^n) u^n(s,X_s^1,X_s^2) {\rm d} s\\ && \quad +\int_0^t D_x u^n(s,X_s^1,X_s^2)\mathcal{B}\sigma(s,X_s^1,X_s^2) {\rm d} B_s\\ &=& u^n(0,x_1,x_2) + \int_0^t f(s,X_s^1,X_s^2) {\rm d} s + \int_0^t (\mathcal{L}-\mathcal{L}^n) u^n(s,X_s^1,X_s^2) {\rm d} s\\ && \quad +\int_0^t D_x u^n(s,X_s^1,X_s^2)\mathcal{B}\sigma(s,X_s^1,X_s^2) {\rm d} B_s, \end{eqnarray*} since $u^n$ is the solution of the regularized version of \eqref{eq:thepde}, and where we recall that $\mathcal{B}$ is the $2d \times d$ matrix $\mathcal{B}=({\rm Id}, 0_{\mathbb{R}^{d}\times \mathbb{R}^d})^*$.
Thanks to Theorem \ref{TH:PDEres} and the Arzel\`a--Ascoli Theorem, we can extract a subsequence of $(u^n)_{n\geq 0}$ and $(D_{x_1}u^n)_{n\geq 0}$ converging respectively to a function $u$ and to $D_{x_1}u$, uniformly on compact subsets of $[0,T] \times \mathbb{R}^d \times \mathbb{R}^d$. Together with the uniform convergence of the regularized coefficients, we deduce, by letting $n$ tend to infinity, that \begin{equation}\label{eq:mgprop} \left(u(t,X_t^1,X_t^2) - \int_0^t f(s,X_s^1,X_s^2) {\rm d} s - u(0,x_1,x_2) \right)_{0\leq t \leq T}, \end{equation} is a $\mathbb{P}$-martingale.\\
Let us now come back to the canonical space, and let $\mathbb{P}$ and $\tilde{\mathbb{P}}$ be two solutions of the martingale problem associated to \eqref{systemEDO} with initial condition $(x_1,x_2)$ in $\mathbb{R}^d\times \mathbb{R}^d$. Thus, for all continuous in time and Lipschitz in space functions $f :[0,T]\times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$ we have from \eqref{eq:mgprop} (recall that $u(T,\cdot,\cdot) = 0$), \begin{equation*} u(0,x_1,x_2) = \mathbb{E}_{\mathbb{P}}\left[ \int_0^T f(s,X_s^1,X_s^2) {\rm d} s\right] = \mathbb{E}_{\tilde{\mathbb{P}}}\left[ \int_0^T f(s,X_s^1,X_s^2) {\rm d} s\right], \end{equation*} so that the marginal laws of the canonical process are the same under $\mathbb{P}$ and $\tilde{\mathbb{P}}$. We extend the result to $\mathbb{R}^+$ thanks to regular conditional probabilities, see \cite{stroock_multidimensional_1979}, Chapter 6.2. Uniqueness then follows from Corollary 6.2.4 of \cite{stroock_multidimensional_1979}.
\section{Counter example} \label{sec:counterexample}
As we said, we feel that this counter-example is not restricted to our current setting. Hence, we write it in a general form so that it can be adapted to different cases. Let $\mathcal{W}$ be a random process with continuous paths satisfying $(\mathcal{W}_t,\ t \geq 0) = (-\mathcal{W}_t,\ t \geq 0)$ in law, $t^{\gamma}\mathcal{W}_1 = \mathcal{W}_t$ for all $t\geq 0$, and $\mathbb{E}|\mathcal{W}_1| < +\infty$. Let $\alpha <1$ and $c_\alpha := (1-\alpha)^{1/(1-\alpha)}$. We suppose that $\mathcal{W}$ and $\alpha$ are such that there exists a weak solution of \begin{equation}\label{eq:peano}
X_t = x + \int_0^t {\rm sign}(X_s) |X_s|^{\alpha} {\rm d} s + \mathcal{W}_t, \end{equation} for any $x \geq 0$, that satisfies Kolmogorov's criterion. Given $0<\beta<1$ we define for any continuous path $Y$ from $\mathbb{R}^+$ to $\mathbb{R}$ the variable $\tau(Y)$ as $$\tau(Y) = \inf\{ t \geq 0\ : \ Y_t \leq (1-\beta)c_\alpha t^{1/(1-\alpha)}\}.$$ We now have the following Lemma:
\begin{lemme}\label{lemme:ce}
Let $X$ be a weak solution of \eqref{eq:peano} starting from some $x>0$ and suppose that $\alpha < 1-1/\gamma$. Then, there exists a positive $\rho$, depending on $\alpha$, $\beta$, $\gamma$ and $\mathbb{E} |\mathcal{W}_1|$ only, such that \begin{eqnarray}\label{eq:ce} \mathbb{P}_x(\tau(X) \geq \rho) \geq 3/4. \end{eqnarray} \end{lemme}
We are now in position to give our counter-example. Note that if $X$ is a weak solution of \eqref{eq:peano} with the initial condition $x=0$, then $-X$ is also a weak solution of \eqref{eq:peano} (with $\mathcal{W}$ replaced by $-\mathcal{W}$, which has the same law). Hence, if uniqueness in law holds, $X$ and $-X$ have the same law.
Let us consider a weak solution $X^n$ of \eqref{eq:peano} starting from $1/n$, $n$ being a positive integer. Since each $X^n$ satisfies Kolmogorov's criterion, the sequence of laws $(\mathbb{P}_{1/n})_{n \geq 0}$ of the $X^n$ is tight, so that we can extract a subsequence $(\mathbb{P}_{1/n_k})_{k\geq 0}$ converging to $\mathbb{P}_0$, the law of a weak solution $X$ of \eqref{eq:peano} starting from 0. Since the bound in \eqref{eq:ce} does not depend on the initial condition we get that $$\mathbb{P}_0(\tau(X) \geq \rho) \geq 3/4,$$ and, thanks to uniqueness in law, $$\mathbb{P}_0(\tau(-X) \geq \rho) \geq 3/4,$$ which is a contradiction: the two events are disjoint (on the first, $X_t>0$ for $t$ in $(0,\rho)$, while on the second, $X_t<0$), so their probabilities cannot both exceed $3/4$. Choosing $\mathcal{W} = \int_0^\cdot W_s {\rm d} s$, where $W$ is a standard Brownian motion, so that $\gamma = 3/2$, we get that weak uniqueness fails as soon as $$\alpha < 1-1/\gamma = 1/3.$$ We now prove Lemma \ref{lemme:ce}, which allows one to understand how the threshold above, exhibited in the introduction, also appears in our counter-example.
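For the reader's convenience (this elementary computation is not needed for the proof), let us verify the value $\gamma=3/2$ for $\mathcal{W}=\int_0^\cdot W_s\,{\rm d} s$: for every $c>0$, the change of variable $s=cu$ and the Brownian scaling $(W_{cu},\ u\geq0)=(c^{1/2}W_u,\ u\geq0)$ in law yield \begin{equation*} \mathcal{W}_{ct} = \int_0^{ct} W_s \,{\rm d} s = c\int_0^{t} W_{cu}\,{\rm d} u \ \overset{{\rm (law)}}{=}\ c^{3/2}\int_0^{t} W_{u}\,{\rm d} u = c^{3/2}\,\mathcal{W}_t, \end{equation*} so that $\mathcal{W}$ is indeed self-similar of index $3/2$; moreover $\mathbb{E}|\mathcal{W}_1| \leq \int_0^1\mathbb{E}|W_s|\,{\rm d} s<+\infty$.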
\begin{proof}[Proof of Lemma \ref{lemme:ce}] Let $X$ be a weak solution of \eqref{eq:peano} starting from $x>0$. Since it has continuous paths, we have almost surely that $\tau(X)>0$. Then, note that on $[0,\tau(X)]$ we have: \begin{eqnarray*}
X_t &=& x + \int_0^t {\rm sign}(X_s) |X_s|^{\alpha} {\rm d} s + \mathcal{W}_t\\ &\geq & (1-\beta)^{\alpha} c_{\alpha} t^{1/(1-\alpha)} + \mathcal{W}_t. \end{eqnarray*} Hence, choosing $\eta$ such that $(1-\eta) = [(1-\beta)^{\alpha}+(1-\beta)]/2$ we get that:
\begin{eqnarray*} X_t &\geq & (1-\eta)c_\alpha t^{1/(1-\alpha)}+ (\beta-\eta) c_\alpha t^{1/(1-\alpha)} + \mathcal{W}_t, \end{eqnarray*} for all $t$ in $[0,\tau(X)]$.
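Let us spell out the computation behind the inequality $X_t \geq (1-\beta)^{\alpha} c_{\alpha} t^{1/(1-\alpha)} + \mathcal{W}_t$ above: on $[0,\tau(X)]$ one has $X_s \geq (1-\beta)c_\alpha s^{1/(1-\alpha)} \geq 0$, so that ${\rm sign}(X_s)|X_s|^\alpha \geq (1-\beta)^\alpha c_\alpha^\alpha s^{\alpha/(1-\alpha)}$, and since $c_\alpha^{1-\alpha}=1-\alpha$, \begin{equation*} \int_0^t c_\alpha^{\alpha} s^{\alpha/(1-\alpha)}\,{\rm d} s = c_\alpha^{\alpha}(1-\alpha)\, t^{1/(1-\alpha)} = c_\alpha\, t^{1/(1-\alpha)}, \end{equation*} which, together with $x> 0$, yields the claimed lower bound.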
Now let $\rho$ be a positive number, set $\tilde{c}_\alpha = (\beta-\eta) c_\alpha$ and $$A = \left\{ \tilde{c}_\alpha t^{1/(1-\alpha)} + \mathcal{W}_t > 0 \text{ for all } t \text{ in } (0,\rho]\right\}.$$ Note that on $A$ we have \begin{eqnarray*} X_t \geq (1-\eta)c_\alpha t^{1/(1-\alpha)} > (1-\beta)c_\alpha t^{1/(1-\alpha)} \end{eqnarray*} for all $t$ in $(0,\tau(X)]$. If $\tau(X)$ were smaller than $\rho$, this strict inequality at $t=\tau(X)$ would contradict the definition of $\tau(X)$; hence $A \subset \{\tau(X) \geq \rho\}$ and \begin{equation} \mathbb{P}(\tau(X) \geq \rho) \geq \mathbb{P}(A). \end{equation}
We are now going to bound from below the probability of the event $A$. We have \begin{eqnarray*} \mathbb{P}(A^c) &=& \mathbb{P} \left(\exists t \in (0,\rho]\ :\ \tilde{c}_\alpha t^{1/(1-\alpha)} + \mathcal{W}_t \leq 0 \right)\\
&\leq & \mathbb{P} \left(\exists t \in (0,\rho]\ :\ |\mathcal{W}_t | \geq \tilde{c}_\alpha t^{1/(1-\alpha)} \right)\\
&=& \mathbb{P} \left(\exists t \in (0,1]\ :\ (\rho t)^{\gamma} |\mathcal{W}_1| \geq \tilde{c}_\alpha (\rho t)^{1/(1-\alpha)} \right)\\
&=& \mathbb{P} \left(\exists t \in (0,1]\ :\ |\mathcal{W}_1| \geq \tilde{c}_\alpha (\rho t)^{-\delta} \right), \end{eqnarray*} where $\delta = \gamma - 1/(1-\alpha)$. Since $\alpha < 1-1/\gamma$, we get that $\delta>0$ and we obtain from the previous computations that
\begin{eqnarray*}
\mathbb{P}(A^c) \leq \mathbb{P} \left(|\mathcal{W}_1| \geq \tilde{c}_\alpha \rho^{-\delta} \right) \leq \mathbb{E}|\mathcal{W}_1|\, \tilde{c}_\alpha^{-1} \rho^{\delta}, \end{eqnarray*} by Markov's inequality. Thus \begin{eqnarray*}
\mathbb{P}(\tau(X) \geq \rho) \geq \mathbb{P}(A) \geq 1 - \mathbb{E}|\mathcal{W}_1| \tilde{c}_\alpha^{-1} \rho^{\delta}, \end{eqnarray*} so that there exists a positive $\rho$ such that \begin{eqnarray*} \mathbb{P}(\tau(X) \geq \rho) \geq 3/4. \end{eqnarray*} \end{proof}
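\begin{remarque} An admissible choice of $\rho$ in the proof above can be made explicit: the last bound is at least $3/4$ as soon as $\mathbb{E}|\mathcal{W}_1|\,\tilde{c}_\alpha^{-1}\rho^{\delta} \leq 1/4$, that is for any $$\rho \leq \big(\tilde{c}_\alpha/(4\,\mathbb{E}|\mathcal{W}_1|)\big)^{1/\delta}$$ (when $\mathbb{E}|\mathcal{W}_1|>0$; otherwise any $\rho>0$ works). \end{remarque}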
\section{Smoothing properties of the PDE}\label{sec:pde} This section is dedicated to the proof of Theorem \ref{TH:PDEres}. The proof is in the same spirit, and uses the same tools, as the one given in \cite{chaudru_strong_2012}. In Subsection \ref{Subsec:froesys} we recall some of the results of that work that are useful for our proof, and we refer to it, especially to its Sections 3 and 4, for more details. Then, we prove Theorem \ref{TH:PDEres} in Subsection \ref{Subsec:estisol}.
Theorem \ref{TH:PDEres} concerns the solution of the regularized version of \eqref{eq:thepde}. Thus, for the sake of clarity, we drop the superscript $n$ coming from the regularization procedure and we suppose throughout this section that the coefficients $F_1,F_2$ and $a:=\sigma\sigma^*$ are smooth (say, infinitely differentiable with bounded derivatives of all orders greater than one). We then specify the dependence of the constants when necessary.\\
As said in the introduction, the proof relies on a first order parametrix expansion of the solution of \eqref{eq:thepde}. This parametrix expansion (see \cite{mckean_jr._curvature_1967},\cite{friedman_partial_1964}) allows one to represent the solution as a perturbation of the solution of the PDE driven by the linearized and frozen version of the operator $\mathcal{L}$ defined by \eqref{gengen}. The crucial point is that we have a good knowledge of the smoothing properties of this linearized and frozen version of $\mathcal{L}$. Thanks to the Feynman-Kac formula, this allows us to obtain a representation of the solution in terms of the semi-group associated to the linearized and frozen operator, which can be estimated.
We first present the frozen and linearized system and then give the smoothing properties of the associated semi-group. Then, we give the representation of the solution in terms of its first order parametrix expansion and we estimate it.
\subsection{The frozen system}\label{Subsec:froesys} Given any frozen point $(\tau,\xi)$ in $[0,T]\times \mathbb{R}^{2d}$, we consider the following system on $[\tau,T]$ \begin{equation}\label{thetadef} \left\lbrace\begin{array}{ll} \displaystyle \frac{{\rm d}}{{\rm d} s}\theta_{\tau,s}^1(\xi) = F_1(s,\theta_{\tau,s}(\xi)),\quad \theta_{\tau,\tau}^1(\xi)=\xi_1,\\ \displaystyle \frac{{\rm d}}{{\rm d} s}\theta_{\tau,s}^2(\xi) = F_2(s,\theta_{\tau,s}(\xi)),\quad \theta_{\tau,\tau}^2(\xi)=\xi_2, \end{array}\right. \end{equation} which is well posed in our regularized framework, and we extend the definition of its solution to $[0,\tau)$ by setting, for all $v>r$ in $[0,T]^2$ and all $\xi$ in $\mathbb{R}^{2d}$, $\theta_{v,r}(\xi)= 0$. Given the solution $(\theta_{\tau,s}(\xi))_{s \leq T}$ of this system, we define the linearized and frozen version of \eqref{systemEDO}: \begin{equation}\label{LS} \left\lbrace\begin{array}{ll}
{\rm d}\tilde{X}^{1,t,x}_s = F_1(s,\theta_{\tau,s}(\xi)) {\rm d} s + \sigma(s,\theta_{\tau,s}(\xi)) {\rm d} W_s\\
{\rm d}\tilde{X}^{2,t,x}_s = \left[F_2(s,\theta_{\tau,s}(\xi)) + D_{1}F_2(s,\theta_{\tau,s}(\xi))(\tilde{X}^{1,t,x}_s-\theta^1_{\tau,s}(\xi)) \right]{\rm d} s \end{array}\right. \end{equation} for all $s$ in $(t,T]$, any $t$ in $[0,T]$, and for any initial condition $x$ in $\mathbb{R}^{2d}$ at time $t$. We then have the following Proposition.
\begin{proposition}[Chaudru de Raynal, \cite{chaudru_strong_2012}]\label{estfundsol} Under our assumptions:
(i) There exists a unique (strong) solution of \eqref{LS} with mean $$(m^{\tau,\xi}_{t,s})_{t \leq s \leq T} =(m^{1,\tau,\xi}_{t,s},m^{2,\tau,\xi}_{t,s})_{t \leq s \leq T}, $$ where \begin{eqnarray}\label{meanGauss} &&m^{1,\tau,\xi}_{t,s}(x) = x_1 + \int_t^s F_1(r,\theta_{\tau,r}(\xi)) {\rm d} r,\\ &&m^{2,\tau,\xi}_{t,s}(x) = x_2+\int_t^s \bigg[F_2(r,\theta_{\tau,r}(\xi)) + D_{1}F_2(r,\theta_{\tau,r}(\xi))(x_1-\theta_{\tau,r}^1(\xi)) \nonumber\\ && \hphantom{m^{2,\xi}_{\tau,s}(x)} \quad + D_{1}F_2(r,\theta_{\tau,r}(\xi))\int_t^r F_1(v,\theta_{\tau,v}(\xi)){\rm d} v \bigg] {\rm d} r,\nonumber \end{eqnarray} and uniformly non-degenerate covariance matrix $(\tilde{\Sigma}_{t,s})_{t\leq s \leq T}$:
\begin{equation}\label{covmatrice} \tilde{\Sigma}_{t,s}=\begin{pmatrix} \int_t^s \sigma \sigma^*(r,\theta_{\tau,r}(\xi)){\rm d} r & \int_t^s R_{r,s}(\tau,\xi) \sigma \sigma^*(r,\theta_{\tau,r}(\xi)){\rm d} r\\
\int_t^s \sigma \sigma^*(r,\theta_{\tau,r}(\xi))R^*_{r,s}(\tau,\xi) {\rm d} r & \int_t^s R_{r,s}(\tau,\xi)\sigma \sigma^*(r,\theta_{\tau,r}(\xi)) R_{r,s}^*(\tau,\xi) {\rm d} r\\ \end{pmatrix}, \end{equation} where: \begin{equation*} R_{r,s}(\tau,\xi)=\int_r^s D_{1}F_2(v,\theta_{\tau,v}(\xi)){\rm d} v,\quad t\leq r\leq s\leq T. \end{equation*}
(ii) This solution is a Gaussian process with transition density: \begin{eqnarray}\label{gtd}
\tilde{q}(t,x_1,x_2;s,y_1,y_2) = \frac{1}{(2\pi)^{d}} (\det[\tilde{\Sigma}_{t,s}])^{-1/2} \exp \left( -\frac{1}{2}|\tilde{\Sigma}_{t,s}^{-1/2} (y_1-m^{1,\tau,\xi}_{t,s}(x), y_2-m^{2,\tau,\xi}_{t,s}(x))^* |^2\right), \end{eqnarray} for all $s$ in $(t,T]$.
(iii) This transition density $\tilde{q}$ is the fundamental solution of the PDE driven by $\tilde{\mathcal{L}}^{\tau,\xi}$ and given by: \begin{eqnarray} \tilde{\mathcal{L}}^{\tau,\xi} &:=& \frac{1}{2} {\rm Tr}\left[a(t,\theta_{\tau,t}(\xi))D^2_{x_1}\right] + \left[ F_1(t,\theta_{\tau,t}(\xi))\right] \cdot D_{x_1} \nonumber\\ && \quad + \left[F_2(t,\theta_{\tau,t}(\xi))+ D_{1} F_2(t,\theta_{\tau,t}(\xi))\left(x_1 - \theta^1_{\tau,t}(\xi)\right)\right] \cdot D_{x_2}. \label{frozgen} \end{eqnarray}
(iv) There exist two positive constants $c$ and $C$, depending only on known parameters in \textbf{(H)}, such that \begin{equation}\label{c1defqc} \tilde{q}(t,x_1,x_2;s,y_1,y_2) \leq C\hat{q}_{c}(t,x_1,x_2;s,y_1,y_2), \end{equation} where \begin{eqnarray*}
\hat{q}_{c}(t,x_1,x_2;s,y_1,y_2)= \frac{c}{(s-t)^{2d}}\exp \left( -c \left(\frac{\big|y_1-m^{1,\tau,\xi}_{t,s}(x)\big|^2}{s-t}+ \frac{\big|y_2-m^{2,\tau,\xi}_{t,s}(x)\big|^2}{(s-t)^{3}} \right)\right), \end{eqnarray*} and \begin{eqnarray}\label{estidertranker}
\left| D_{x_1}^{N^{x_1}} D_{x_2}^{N^{x_2}} D_{y_1}^{N^{y_1}} \tilde{q}(t,x_1,x_2;s,y_1,y_2)\right| \leq C (s-t)^{-[3N^{x_2} + N^{x_1} + N^{y_1}]/2} \hat{q}_c(t,x_1,x_2;s,y_1,y_2), \end{eqnarray} for all $s$ in $(t,T]$ and all integers $N^{x_1},N^{x_2},N^{y_1}$ not greater than 2.\\ \end{proposition}
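To fix ideas, consider the classical Kolmogorov example (a standard illustration, not part of our general setting): $\sigma\equiv{\rm Id}$, $F_1\equiv0$ and $F_2(t,x_1,x_2)=x_1$, so that $\theta_{\tau,s}(\xi)$ is explicit, $D_1F_2\equiv{\rm Id}$ and $R_{r,s}=(s-r)\,{\rm Id}$. The covariance \eqref{covmatrice} then reads \begin{equation*} \tilde{\Sigma}_{t,s}=\begin{pmatrix} (s-t)\,{\rm Id} & \dfrac{(s-t)^2}{2}\,{\rm Id}\\[4pt] \dfrac{(s-t)^2}{2}\,{\rm Id} & \dfrac{(s-t)^3}{3}\,{\rm Id} \end{pmatrix}, \end{equation*} which exhibits the two characteristic time scales $(s-t)^{1/2}$ for the non-degenerate component and $(s-t)^{3/2}$ for the degenerate one, in accordance with the Gaussian bound $\hat{q}_c$ and the exponents in \eqref{estidertranker}.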
We now define some notations that appear when writing and estimating the first order parametrix expansion of the regularized solution of the PDE \eqref{eq:thepde}. \begin{definition}\label{def:defforparam} For all $\zeta=(\zeta_1,\zeta_2)$ in $\mathbb{R}^{d} \times \mathbb{R}^d$ we introduce the perturbation operator $\Delta(\zeta)$ as \begin{equation} \Delta(\zeta) : \mathbb{R}^{d} \times \mathbb{R}^d \ni (x_1,x_2) \mapsto (x_1-\zeta_1) + (x_2-\zeta_2) \in \mathbb{R}^d, \end{equation} and for $i=1,2$ \begin{equation} \Delta^i(\zeta): \mathbb{R}^{d} \times \mathbb{R}^d \ni (x_1,x_2) \mapsto (x_i-\zeta_i) \in \mathbb{R}^d. \end{equation} Next we set, for all measurable functions $\varphi : [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$, all $t<s$ in $[0,T]^2$ and all $\xi$ and $x$ in $\mathbb{R}^{2d}$: \begin{equation} \left[\tilde{P}_{t,s}^\xi \varphi\right](s,x) = \int_{\mathbb{R}^{2d}} \varphi(s,y) \tilde{q}(t,x;s,y) {\rm d} y, \end{equation} and \begin{equation} \left[\hat{P}_{t,s}^\xi \varphi\right](s,x) = \int_{\mathbb{R}^{2d}} \varphi(s,y) \hat{q}_c(t,x;s,y) {\rm d} y. \end{equation} \end{definition}
Finally, we have the following Proposition from \cite{chaudru_strong_2012} regarding the smoothing properties of $\tilde{P}$ defined above: \begin{proposition}\label{prop:smootheffect} Suppose that assumptions \textbf{(H)} hold. Then, there exist three positive constants $C'$, $C''$ and $C'''$, depending on known parameters in \textbf{(H)} only, such that for all $t<s$ in $[0,T]^2$, all $\xi$ and $x$ in $\mathbb{R}^{2d}$ and all measurable functions $\varphi : [0,T] \times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$: \begin{enumerate}[(i)]
\item $\displaystyle \Big| D_{x_i} \left[\tilde{P}_{t,s}^\xi \varphi\right](s,x) \Big| \leq C' (s-t)^{-i+1/2} \left[\hat{P}_{t,s}^\xi \big|\varphi\big|\right](s,x),$
\item $\displaystyle \Big| D_{x_i} \left[\tilde{P}_{t,s}^\xi \varphi\right](s,x) \Big| \leq C'' (s-t)^{-i+1/2} \left[\hat{P}_{t,s}^\xi \big|\varphi-\varphi(\cdot,\zeta)\big|\right](s,x),$ \end{enumerate} for $i=1,2$ and any $\zeta$ in $\mathbb{R}^{2d}$; moreover, for all $\gamma$ in $(0,1]$: \begin{equation}\label{eq:smootheffect}
\left[\hat{P}_{t,s}^\xi |\Delta^i(\theta_{t,s}(\xi))|^\gamma\right](s,x)\bigg|_{(\tau,\xi)=(t,x)} \leq C''' (s-t)^{(i-1/2)\gamma}. \end{equation} \end{proposition} \begin{proof} Let us recall the basic arguments of the proof, since they will be used below. Assertion (i) follows from Proposition \ref{estfundsol} and Definition \ref{def:defforparam}. Since, by definition of $\tilde{P}$, the quantity $\left[\tilde{P}_{t,s}^\xi \varphi(\cdot,\zeta)\right](s,x)$ does not depend on $x$, we get that $D_{x_i}\left[\tilde{P}_{t,s}^\xi \varphi(\cdot,\zeta)\right](s,x)=0$ for $i=1,2$. Then, assertion (ii) follows from the following splitting: $$\forall \zeta \in \mathbb{R}^{2d},\ \varphi = \varphi -\varphi(\cdot,\zeta) + \varphi(\cdot,\zeta).$$ The last assertion of the Proposition follows from the Gaussian decay of $\tilde{q}$. Indeed, by definition we have \begin{eqnarray*}
\left[\hat{P}_{t,s}^\xi |\Delta^i(\theta_{t,s}(\xi))|^\gamma\right](s,x) &=& \int_{\mathbb{R}^{2d}} |y_i-\theta^i_{t,s}(\xi)|^\gamma \hat{q}_c(t,x;s,y) {\rm d} y\\
&=& \int_{\mathbb{R}^{2d}} \Bigg\{ (s-t)^{(i-1/2)\gamma}\left|\frac{y_i-\theta^i_{t,s}(\xi)}{(s-t)^{i-1/2}}\right|^\gamma \frac{c}{(s-t)^{2d}}\\
&& \quad \times \exp \left( -c \left(\frac{\big|y_1-m^{1,\tau,\xi}_{t,s}(x)\big|^2}{s-t}+ \frac{\big|y_2-m^{2,\tau,\xi}_{t,s}(x)\big|^2}{(s-t)^{3}} \right)\right) \Bigg\}{\rm d} y. \end{eqnarray*} Note that, for $(\tau,\xi)=(t,x)$, the forward transport $(\theta_{t,s}(x))_{t\leq s\leq T}$ satisfies \eqref{meanGauss} (the term $x_1 - \theta^1_{t,r}(x) + \int_t^r F_1(v,\theta_{t,v}(x)){\rm d} v$ vanishes), so that it is equal to the mean: $\theta_{t,s}(x)=m^{t,x}_{t,s}(x)$. We deduce the result by letting $(\tau,\xi)=(t,x)$ and by using the following inequality: $$ \forall \lambda>0,\ \forall q>0,\ \exists \bar{C}>0 \text{ s.t. } \forall r >0,\ r^{q} e^{-\lambda r}\leq \bar{C}.$$ \end{proof}
\subsection{Estimation of the solution}\label{Subsec:estisol} Let us now expand the regularized solution of \eqref{eq:thepde} to first order in the parametrix: we rewrite this PDE as $$(\partial_t + \tilde{\mathcal{L}}^{\tau,\xi}) u(t,x) = - \left(\mathcal{L}-\tilde{\mathcal{L}}^{\tau,\xi}\right)u(t,x) + f(t,x) ,$$ on $[0,T)\times \mathbb{R}^{2d}$ with terminal condition $0$ at time $T$. Thus, using the definitions given in the previous subsection, we obtain that for every $(t,x)$ in $[0,T]\times \mathbb{R}^{2d}$, the solution $u$ may be written as \begin{eqnarray}\label{eq:expressionu} u(t,x) &=& -\int_t^T\Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi)))\cdot D_{1}u\right](s,x) \notag\\ && + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))\Delta^1(\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x)\notag\\ && + \left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x) \Bigg\} {\rm d} s, \end{eqnarray} by choosing $\tau =t$. We keep this choice of the freezing time $\tau$ in the following.\\
We next assume without loss of generality that $T<1$. We are now in position to prove the main estimates of Theorem \ref{TH:PDEres}. This is done by proving the following results and then using a circular argument (see Section 4 of \cite{chaudru_strong_2012}). \begin{proposition} There exist four positive constants $C_1$, $C_2$, $C'$ and $C''$, and three positive numbers $\delta$, $\delta'$ and $\delta''$, depending on known parameters in \textbf{(H)} only, such that: \begin{eqnarray}
||D^n_{1}u||_{\infty} &\leq& C_n T^{\delta} \left(||f||_{{\rm Lip}} + ||D_1 u ||_{\infty} + ||D_2 u ||_{\infty} \right),\quad n=1,2,\\
||D_{2}u||_{\infty} &\leq & C' T^{\delta'} \left(||f||_{{\rm Lip}} + ||D_1 u ||_{\nu} + ||D_2 u ||_{\infty} \right),\\
||D_1 u ||_{\nu} &\leq & C'' T^{\delta''}\left(||f||_{{\rm Lip}} + ||D_1 u ||_{\nu} + ||D_2 u ||_{\infty} \right), \end{eqnarray}
where $||\cdot||_\nu$ is defined by \eqref{eq:defdenormenu} and for all $\nu$ such that: \begin{equation}\label{eq:conditionnu} 1/3 < \nu <\inf_{i=1,2}\beta_i^2 \end{equation} (this interval is not empty since the exponents $\beta_i^2$ are strictly greater than $1/3$ by \textbf{(H1)}). \end{proposition} \begin{proof} The main strategy consists in estimating the time integrands of the representation \eqref{eq:expressionu} and then inverting the differentiation and integration operators. Let $n\in \{1,2\}$ and $s$ in $(t,T]$. We have from Proposition \ref{prop:smootheffect}: \begin{eqnarray*}
&&\Bigg| D_{x_1}^n \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x) \\ && \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))\Delta^1(\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x) \\
&& \quad + \left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x)\Bigg\}\Bigg|\\
&& \leq C(s-t)^{-n/2}\Bigg\{ \left[\hat{P}_{t,s}^\xi \big|f-f(s,\theta_{t,s}(\xi))\big|\right](s,x) + \left[\hat{P}_{t,s}^\xi \big|(F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\big|\right](s,x)\\
&&+ \left[\hat{P}_{t,s}^\xi \big|\frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\big|\right](s,x)\\
&& \quad + \left[\hat{P}_{t,s}^\xi \big|(F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))\Delta^1(\theta_{t,s}(\xi))) \cdot D_{2}u\big|\right](s,x)\Bigg\}. \end{eqnarray*} By using the regularity of the coefficients assumed in \textbf{(H)} (and expanding $F_2$ around the forward transport $\theta$) we get that the right hand side above is bounded by
\begin{eqnarray*}
&& C(s-t)^{-n/2}\Bigg\{ ||f||_{{\rm Lip}}\left[\hat{P}_{t,s}^\xi \big|\Delta(\theta_{t,s}(\xi))(\cdot)\big|\right](s,x) \\
&& \quad + ||D_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^1}+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^2}\Big)\right](s,x)\\
&&\quad + ||D^2_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|\Big)\right](s,x)\\
&&\quad + ||D_2u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big( \big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{1+\eta}+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_2^2}\Big)\right](s,x)\Bigg\}. \end{eqnarray*} By letting $\xi=x$ we obtain from estimate \eqref{eq:smootheffect} in Proposition \ref{prop:smootheffect} that
\begin{eqnarray*}
&&\Bigg| D_{x_1}^n \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x) \\ && \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))\Delta^1(\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x)\\
&&\quad +\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x) \Bigg\}\Bigg|\\
&& \leq C(s-t)^{-n/2} \bigg( ||f||_{{\rm Lip}}(s-t) + ||D_1u||_{\infty} \Big( (s-t)^{\beta_1^1/2} + (s-t)^{3\beta_1^2/2} \Big)\\
&&\quad + ||D^2_1u||_{\infty} \Big( (s-t)^{1/2} + (s-t)^{3/2} \Big) + ||D_2u||_{\infty} \Big( (s-t)^{(1+\eta)/2} + (s-t)^{3\beta_2^2/2} \Big)\bigg), \end{eqnarray*} where all the time-singularities in the right hand side are integrable. Therefore
\begin{eqnarray*}
|D^n_{x_1}u(t,x)| &\leq & C T^{(n-1)/2} \bigg(T||f||_{{\rm Lip}}
+ ||D_1 u ||_{\infty} \Big(T^{\beta_1^1/2}+T^{3\beta_1^2/2}\Big)\\
&& + ||D^2_1 u ||_{\infty} \Big(T^{1/2}+T^{3/2}\Big) + ||D_2 u ||_{\infty} \Big(T^{(1+\eta)/2}+T^{3\beta_2^2/2}\Big)\bigg). \end{eqnarray*}
We now estimate the derivative of the solution in the degenerate direction. By using the integration by parts argument given in Lemma 3.5 of \cite{chaudru_strong_2012}, \begin{eqnarray}
&&\Bigg|D_{x_2}\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x) \Bigg|\label{eq:ibp}\\
&& \leq (s-t)^{-3/2}\Bigg\{\left[\hat{P}_{t,s}^\xi \Big|\frac{1}{2}{\rm Tr}\left[(a-a(s,\cdot,\theta^2_{t,s}(\xi))) D^2_{1}u\right]\Big|\right](s,x)\notag\\
&&\quad + \left[\hat{P}_{t,s}^\xi \Big| \frac{1}{2}{\rm Tr}\left[D_1a(s,\cdot,\theta_{t,s}^2(\xi)) (D_{1}u-D_1u(s,\cdot,\theta^2_{t,s}(\xi)))\right] \Big|\right](s,x)\Bigg\}\notag\\
&& \quad + (s-t)^{-2} \left[\hat{P}_{t,s}^\xi \Big|\frac{1}{2}{\rm Tr}\left[(a(s,\cdot,\theta_{t,s}^2(\xi))-a(s,\theta_{t,s}(\xi))) (D_{1}u-D_1u(s,\cdot,\theta^2_{t,s}(\xi)))\right] \Big|\right](s,x).\notag \end{eqnarray}
Thus
\begin{eqnarray*}
&&\Bigg| D_{x_2} \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x) \\ && \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))\Delta^1(\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x) \\
&&\quad +\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x) \Bigg\}\Bigg|\\
&& \leq C(s-t)^{-3/2}\Bigg\{ \left[\hat{P}_{t,s}^\xi \big|f-f(s,\cdot,\theta^2_{t,s}(\xi))\big|\right](s,x) + \left[\hat{P}_{t,s}^\xi \big|(F_1-F_1(s,\cdot,\theta^2_{t,s}(\xi))) D_{1}u\big|\right](s,x)\\
&&\quad + \left[\hat{P}_{t,s}^\xi \big|(F_1(s,\cdot,\theta^2_{t,s}(\xi))-F_1(s,\theta_{t,s}(\xi))) (D_{1}u-D_{1}u(s,\cdot,\theta^2_{t,s}(\xi)))\big|\right](s,x)\\
&& \quad+ \left[\hat{P}_{t,s}^\xi \big|(F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))\Delta^1(\theta_{t,s}(\xi))) D_{2}u\big|\right](s,x) \\
&&\quad +\left[\hat{P}_{t,s}^\xi \left|\frac{1}{2}{\rm Tr}\left[(a-a(s,\cdot,\theta^2_{t,s}(\xi))) D_{1}^2u\right]\right|\right](s,x) \\
&&\quad + \left[\hat{P}_{t,s}^\xi \Big| \frac{1}{2}{\rm Tr}\left[D_1a(s,\cdot,\theta_{t,s}^2(\xi)) (D_{1}u-D_1u(s,\cdot,\theta^2_{t,s}(\xi)))\right] \Big|\right](s,x)\Bigg\}\\
&&\quad + (s-t)^{-2} \left[\hat{P}_{t,s}^\xi \Big|\frac{1}{2}{\rm Tr}\left[(a(s,\cdot,\theta_{t,s}^2(\xi))-a(s,\theta_{t,s}(\xi))) (D_{1}u-D_1u(s,\cdot,\theta^2_{t,s}(\xi)))\right] \Big|\right](s,x). \end{eqnarray*}
By using the regularity of the coefficients assumed in \textbf{(H)} we get that the right hand side above is bounded by
\begin{eqnarray*}
&& C(s-t)^{-3/2}\Bigg\{ ||f||_{{\rm Lip}}\left[\hat{P}_{t,s}^\xi \big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|\right](s,x) + ||D_1u||_{\nu} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^1}\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\nu}\Big)\right](s,x)\\
&& + ||D_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^2}\Big)\right](s,x)\\
&& + ||D_2u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big( \big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{1+\eta}+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_2^2}\Big)\right](s,x)\\
&& +||D^2_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|\Big)\right](s,x) + ||D_1u||_{\nu}\bigg( \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\nu}\Big)\right](s,x)\\
&&\quad + (s-t)^{-1/2}\left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^\nu\Big)\right](s,x) \bigg)\Bigg\}. \end{eqnarray*} By letting $\xi=x$ we obtain from estimate \eqref{eq:smootheffect} in Proposition \ref{prop:smootheffect} that
\begin{eqnarray*}
&&\Bigg| D_{x_2} \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x) \\ && \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x) \\
&&\quad +\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x) \Bigg\}\Bigg|\\
&& \leq C(s-t)^{-3/2} \bigg( ||f||_{{\rm Lip}}(s-t)^{3/2} + ||D_1u||_{\nu} (s-t)^{\beta_1^1/2+3\nu/2} +||D_1u||_{\infty} (s-t)^{3\beta_1^2/2} \\
&&\quad + ||D_2u||_{\infty} \Big( (s-t)^{(1+\eta)/2} + (s-t)^{3\beta_2^2/2} \Big) + ||D_1u||_{\nu} (s-t)^{1/2+3\nu/2} +||D_1^2u||_{\infty} (s-t)^{3/2} \\
&& \qquad + ||D_1u||_{\nu} (s-t)^{3\nu/2} \bigg). \end{eqnarray*} Since $\beta_j^2>1/3$ and $\nu$ is constrained by \eqref{eq:conditionnu}, all the time-singularities of the right-hand side above are integrable on $(t,T]$. Hence, we deduce from \eqref{eq:expressionu} and the estimate above that there exists a positive $\delta'$, depending only on known parameters in \textbf{(H)}, such that:
\begin{eqnarray*}
|D_{x_2}u(t,x)| &\leq & C T^{\delta'} \bigg(||f||_{{\rm Lip}} + ||D_1 u ||_{\nu} + ||D_1u||_{\infty} + ||D_2 u ||_{\infty} + ||D^2_1 u ||_{\infty}\bigg). \end{eqnarray*}
Finally, we compute the Hölder semi-norm of $D_{x_1}u$. Let $x_2\neq z_2$ belong to $\mathbb{R}^{d}$. We have from \eqref{eq:expressionu}: \begin{eqnarray} &&D_{x_1}u(t,x_1,x_2)-D_{x_1}u(t,x_1,z_2) \label{eq:holdedu}\\ &&= - D_{x_1}\int_t^T\Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x_1,x_2)-\left[\tilde{P}_{t,s}^\xi f\right](s,x_1,z_2) \notag\\ &&+ \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x_1,x_2)-\left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x_1,z_2) \notag\\ && + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x_1,x_2)\notag\\ &&\quad -\left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x_1,z_2) \notag\\ && + \left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x_1,x_2)- \left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x_1,z_2)\Bigg\}{\rm d} s.\notag \end{eqnarray}
We first estimate for any $s$ in $(t,T]$ the quantity: \begin{eqnarray}\label{eq:targetholder}
&&\Bigg| D_{x_1}\left[\tilde{P}_{t,s}^\xi f\right](s,x_1,x_2)-D_{x_1}\left[\tilde{P}_{t,s}^\xi f\right](s,x_1,z_2) \\ &&+ D_{x_1}\left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi)))\cdot D_{1}u\right](s,x_1,x_2)-D_{x_1}\left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x_1,z_2)\notag \\ && + D_{x_1}\left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x_1,x_2)\notag\\ && -D_{x_1}\left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x_1,z_2) \notag \\ && + D_{x_1}\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x_1,x_2)\notag\\
&& - D_{x_1}\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x_1,z_2)\Bigg|.\notag \end{eqnarray}
To do this, we split the time interval w.r.t. the characteristic time-scale of the second space variable: let $\mathcal{S}:=\left\{s\in (t,T]:\ |x_2-z_2|\leq (s-t)^{3/2} \right\}$. Note that on $\mathcal{S}$ we have for any measurable function $\varphi:[0,T]\times \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$: \begin{eqnarray*}
&&\left|D_{x_1}\left[\tilde{P}_{t,s}^\xi \varphi \right](s,x_1,x_2)-D_{x_1}\left[\tilde{P}_{t,s}^\xi \varphi \right](s,x_1,z_2)\right|\\
&& \quad \leq \sup_{\lambda \in (0,1)} \left| D_{x_2}D_{x_1} \left[\tilde{P}_{t,s}^\xi \varphi \right](s,x_1,\lambda x_2+ (1-\lambda)z_2)\right||x_2-z_2|\\
&& \quad \leq C (s-t)^{-2} \left[\hat{P}_{t,s}^\xi |\varphi|\right](s,x_1,x_2)|x_2-z_2|\\
&& \quad \leq C (s-t)^{-1/2-3\nu/2} \left[\hat{P}_{t,s}^\xi |\varphi|\right](s,x_1,x_2)|x_2-z_2|^{\nu}, \end{eqnarray*} for every $0<\nu<1$. Hence, by using this estimate together with Proposition \ref{prop:smootheffect} and repeating the computations done when estimating $D_{x_1}u$, we obtain that the quantity \eqref{eq:targetholder} is bounded on $\mathcal{S}$ by
\begin{eqnarray}\label{eq:estigolder1}
&& C(s-t)^{-1/2}\Bigg\{ \left[\hat{P}_{t,s}^\xi \big|\Delta(\theta_{t,s}(\xi))(\cdot)\big|\right](s,x) + ||D_1u||_{\infty} \bigg[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^1}\\
&&+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^2}\Big)\bigg](s,x_1,x_2) + ||D_2u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big( \big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{1+\eta}+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_2^2}\Big)\right](s,x_1,x_2)\notag\\
&& + ||D^2_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|\Big)\right](s,x)\Bigg\}.\notag \end{eqnarray} Thus, by choosing $\xi=x$ we obtain that \eqref{eq:targetholder} is bounded on $\mathcal{S}$ by
\begin{eqnarray}\label{eq:holdedu1}
&&C'(s-t)^{-1/2-3\nu/2} \bigg( ||f||_{{\rm Lip}}(s-t)^{1/2} + ||D_1u||_{\infty} \Big( (s-t)^{\beta_1^1/2} + (s-t)^{3\beta_1^2/2} \Big)\\
&&\quad + ||D^2_1u||_{\infty} \big( (s-t)^{1/2} + (s-t)^{3/2} \big) + ||D_2u||_{\infty} \big( (s-t)^{(1+\eta)/2} + (s-t)^{3\beta_2^2/2} \big)\bigg),\notag \end{eqnarray} for all $\nu$ satisfying \eqref{eq:conditionnu}.
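For completeness, we record the elementary bound that will be used repeatedly on the complement $\mathcal{S}^c$: by definition of $\mathcal{S}$, on $\mathcal{S}^c$ we have $|x_2-z_2|>(s-t)^{3/2}$, so that for any $0<\nu<1$,
$$1=(s-t)^{-3\nu/2}\big((s-t)^{3/2}\big)^{\nu}\leq (s-t)^{-3\nu/2}|x_2-z_2|^{\nu}.$$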
We now estimate \eqref{eq:targetholder} on $\mathcal{S}^c$. On the one hand, we have from the computations done when estimating $D_{x_1}u$ that:
\begin{eqnarray*}
&&\Bigg| D_{x_1} \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x) \\ && \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x) \\
&&\quad + \left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x)\Bigg\}\Bigg|\\
&& \leq C(s-t)^{-1/2}\Bigg\{ ||f||_{{\rm Lip}}\left[\hat{P}_{t,s}^\xi \big|\Delta(\theta_{t,s}(\xi))(\cdot)\big|\right](s,x) \\
&&+ ||D_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^1}+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^2}\Big)\right](s,x)\\
&&+ ||D_2u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big( \big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{1+\eta}+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_2^2}\Big)\right](s,x)\\
&&+ ||D^2_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|\Big)\right](s,x)\Bigg\}. \end{eqnarray*}
Since on $\mathcal{S}^c$ we have $1\leq (s-t)^{-3\nu/2} |x_2-z_2|^{\nu}$, by choosing $\xi=x$ and then using Proposition \ref{prop:smootheffect} it follows that
\begin{eqnarray}\label{eq:estiholder2}
&&\Bigg| D_{x_1} \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x) \\
&& \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x)\notag\\
&&\quad + \left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x)\Bigg\}\Bigg|\notag\\
&& \leq C(s-t)^{-1/2-3\nu/2}\Bigg\{ ||f||_{{\rm Lip}}\left[\hat{P}_{t,s}^\xi \big|\Delta(\theta_{t,s}(\xi))(\cdot)\big|\right](s,x) + ||D_1u||_{\infty} \bigg[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^1}\notag\\
&&+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_1^2}\Big)\bigg](s,x_1,x_2) + ||D_2u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big( \big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|^{1+\eta}+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|^{\beta_2^2}\Big)\right](s,x_1,x_2)\notag\\
&&+||D^2_1u||_{\infty} \left[\hat{P}_{t,s}^\xi \Big(\big|\Delta^1(\theta_{t,s}(\xi))(\cdot)\big|+\big|\Delta^2(\theta_{t,s}(\xi))(\cdot)\big|\Big)\right](s,x) \Bigg\}|x_2-z_2|^{\nu}.\notag \end{eqnarray}
We emphasize that all the time singularities above are again integrable provided $\nu$ satisfies \eqref{eq:conditionnu}. It thus only remains to estimate the last part of \eqref{eq:targetholder} on $\mathcal{S}^c$, namely
\begin{eqnarray*}
&&\Bigg| D_{x_1} \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x_1,z_2) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x_1,z_2) \\ && \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x_1,z_2)\\
&&\qquad +\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x_1,z_2) \Bigg\}\Bigg|. \end{eqnarray*}
The main issue here is that the estimate of Proposition \ref{prop:smootheffect} cannot be applied immediately, since the semigroup is evaluated at the point $(s,x_1,z_2)$ while the freezing point was previously chosen as $\xi=(x_1,x_2)$. The main idea consists in re-centering all the terms above and taking advantage of the fact that $|\theta_{t,s}^2(x) - m^{2,x}_{t,s}(x_1,z_2)| \leq |x_2-z_2|$.\\
Let us first begin with the term $D_{x_1}\left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x_1,z_2)$. Splitting first $(F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u$ as $$\Big(F_1-F_1(s,\theta^1_{t,s}(\xi),m^{2,x}_{t,s}(x_1,z_2))\Big) \cdot D_{1}u + \Big(F_1(s,\theta^1_{t,s}(\xi),m^{2,x}_{t,s}(x_1,z_2))-F_1(s,\theta_{t,s}^1(\xi),\theta_{t,s}^2(\xi))\Big) \cdot D_{1}u,$$ we get \begin{eqnarray*}
&&\left|D_{x_1}\left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x_1,z_2)\right| \\
&&\leq C(s-t)^{-1/2}||D_{1}u||_\infty \bigg[\hat{P}_{t,s}^\xi \Big(|\Delta^1(\theta_{t,s}(\xi))|^{\beta_1^1} + |\Delta^2(m^{2,x}_{t,s}(x_1,z_2))|^{\beta_1^2}\\
&&\qquad + |\theta_{t,s}^2(\xi) -m^{2,x}_{t,s}(x_1,z_2) |^{\beta_1^2}\Big) \bigg](s,x_1,z_2)\\
&& \leq C'(s-t)^{-1/2} ||D_1u||_{\infty} \Big( (s-t)^{\beta_1^1/2} + (s-t)^{3\beta_1^2/2} + |x_2-z_2|^{\beta_1^2} \Big). \end{eqnarray*}
Next we split $(F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u$ as
$$\Big(F_2-F_2(s,\cdot,m^{2,x}_{t,s}(x_1,z_2))\Big) \cdot D_{2}u + \Big(F_2(s,\cdot,m^{2,x}_{t,s}(x_1,z_2)) - F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))\Big) \cdot D_{2}u,$$ and we obtain
\begin{eqnarray*}
&&\left|D_{x_1}\left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x_1,z_2)\right| \\
&&\leq C(s-t)^{-1/2}||D_{2}u||_\infty \bigg[\hat{P}_{t,s}^\xi \Big( |\Delta^2(m^{2,x}_{t,s}(x_1,z_2))|^{\beta_2^2} \\
&&\qquad + |\Delta^1(\theta_{t,s}(\xi))|^{1+\eta} + |\theta_{t,s}^2(\xi) -m^{2,x}_{t,s}(x_1,z_2) |^{\beta_2^2}\Big) \bigg](s,x_1,z_2)\\
&& \leq C'(s-t)^{-1/2} ||D_2u||_{\infty} \Big( (s-t)^{3\beta_2^2/2} + (s-t)^{(1+\eta)/2} + |x_2-z_2|^{\beta_2^2} \Big). \end{eqnarray*}
Finally, we write $\frac{1}{2}{\rm Tr}\left[(a - a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]$ as $$\frac{1}{2}{\rm Tr}\left[\Big(a-a(s,\theta^1_{t,s}(\xi),m^{2,x}_{t,s}(x_1,z_2))\Big) D_{1}^2u\right] + \frac{1}{2}{\rm Tr}\left[\Big(a(s,\theta_{t,s}^1(\xi),\theta_{t,s}^2(\xi))-a(s,\theta^1_{t,s}(\xi),m^{2,x}_{t,s}(x_1,z_2))\Big) D_{1}^2u\right],$$ and we obtain
\begin{eqnarray*}
&&\left|D_{x_1}\left[\tilde{P}_{t,s}^\xi\frac{1}{2}{\rm Tr}\left[(a - a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x_1,z_2)\right| \\
&&\leq C(s-t)^{-1/2}||D^2_{1}u||_\infty \bigg[\hat{P}_{t,s}^\xi \Big(|\Delta^1(\theta_{t,s}(\xi))|\\
&&\qquad + |\Delta^2(m^{2,x}_{t,s}(x_1,z_2))|+ |\theta_{t,s}^2(\xi) -m^{2,x}_{t,s}(x_1,z_2) |\Big) \bigg](s,x_1,z_2)\\
&& \leq C'(s-t)^{-1/2} ||D^2_1u||_{\infty} \Big( (s-t)^{1/2} + (s-t)^{3/2} + |x_2-z_2| \Big). \end{eqnarray*}
Hence, putting the previous estimates together, letting $\xi=x$ we get that on $\mathcal{S}^c$ \begin{eqnarray}\label{eq:estiholder3}
&&\Bigg| D_{x_1} \Bigg\{\left[\tilde{P}_{t,s}^\xi f\right](s,x_1,z_2) + \left[\tilde{P}_{t,s}^\xi (F_1-F_1(s,\theta_{t,s}(\xi))) \cdot D_{1}u\right](s,x_1,z_2) \\ && \quad + \left[\tilde{P}_{t,s}^\xi (F_2-F_2(s,\theta_{t,s}(\xi))-D_1F_2(s,\theta_{t,s}(\xi))) \cdot D_{2}u\right](s,x_1,z_2)\notag\\
&& \quad +\left[\tilde{P}_{t,s}^\xi \frac{1}{2}{\rm Tr}\left[(a-a(s,\theta_{t,s}(\xi))) D^2_{1}u\right]\right](s,x_1,z_2) \Bigg\}\Bigg|\notag\\
&&\leq C(s-t)^{-1/2} \Bigg\{ ||f||_{{\rm Lip}} \big((s-t)^{(1-3\nu)/2} + (s-t)^{3(1-\nu)/2}\big) \notag\\
&&\quad + ||D_1u||_{\infty} \bigg( (s-t)^{(\beta_1^1-3\nu)/2} + (s-t)^{3(\beta_1^2-\nu)/2} + |x_2-z_2|^{\beta_1^2-\nu}\bigg)\notag\\
&&\quad + ||D_2u||_{\infty}\bigg( (s-t)^{3(\beta_2^2-\nu)/2} + (s-t)^{(1+\eta-\nu)/2} + |x_2-z_2|^{\beta_2^2-\nu}\bigg)\notag\\
&&\quad + ||D^2_1u||_{\infty} \bigg( (s-t)^{(1-3\nu)/2} + (s-t)^{3(1-\nu)/2} + |x_2-z_2|^{1-\nu}\bigg)\Bigg\} |x_2-z_2|^\nu,\notag \end{eqnarray}
since $1\leq (s-t)^{-3\nu/2} |x_2-z_2|^\nu$ and this holds for all $\nu$ satisfying \eqref{eq:conditionnu}. Putting together estimates \eqref{eq:holdedu1}, \eqref{eq:estiholder2} and \eqref{eq:estiholder3}, we can invert the differentiation and integration operators in \eqref{eq:holdedu} and we deduce that there exists a positive $\delta''$ depending on known parameters in \textbf{(H)} only such that
$$||D_1u||_{\nu} \leq CT^{\delta''}(||D_1u||_{\infty} + ||D_2u||_{\infty} + ||D^2_1u||_{\infty} ),$$
where $||\cdot||_{\nu}$ is defined in Theorem \ref{TH:PDEres}. \end{proof}
\textbf{Acknowledgment } I would like to thank François Delarue and Mario Maurelli for their valuable contributions to this work.
\end{document}
\begin{document}
\title{Subjective and Objective Probabilities in Quantum Mechanics} \author{Mark Srednicki} \email{mark@physics.ucsb.edu} \affiliation{ Department of Physics, University of California, Santa Barbara, CA 93106 USA }
\begin{abstract} We discuss how the apparently objective probabilities predicted by quantum mechanics can be treated in the framework of Bayesian probability theory, in which all probabilities are subjective. Our results are in accord with earlier work by Caves, Fuchs, and Schack, but our approach and emphasis are different. We also discuss the problem of choosing a noninformative prior for a density matrix. \end{abstract}
\maketitle
\section{Introduction} \label{intro}
Probability plays a central role throughout human affairs, and so everyone has an intuitive idea of what it is. Moreover, because of the extreme generality and widespread use of the concept of probability, it cannot be easily defined in terms of anything more basic. For example, the dictionary that I have in my office, {\it Webster's Ninth New Collegiate}, says that {\it probability\/} is ``the state or quality of being probable''; that to be {\it probable\/} is to be ``supported by evidence strong enough to establish presumption but not proof''; and that {\it presumption\/} is ``the ground, reason, or evidence lending probability to a belief''. This is clearly unhelpful to anyone who does not already know what probability is.
In mathematics and physics, we are often faced with a concept that is both simple enough to be clearly understood, and fundamental enough to resist definition; for example, a {\it straight line\/} in euclidean geometry. To make progress, we do not attempt to devise ever clearer definitions, but instead formulate axioms that our understood but undefined objects are postulated to obey. Then, using codified rules of logical inference, we prove theorems that follow from the axioms.
It is instructive to treat probability as one of these primitive concepts.
Dispensing, then, with any attempt at definition, we say that the {\it probability\/} that a {\it statement\/} is {\it true\/} is a real number between zero and one. A statement may be true or false; if we know it to be true, we assign it a probability of one, and if we know it to be false, we assign it a probability of zero. If we do not know whether it is true or false, we assign it a probability between zero and one.
There is typically no definitive way to make this assignment. Different people could (and often do) assign different numerical values to the probability that some particular statement (``the stock price of Microsoft will be higher one year from now'') is true. In this sense, probability is {\it subjective}. This point of view is {\it Bayesian}.
Probability also enters quantum mechanics, in a seemingly more fundamental way. For example, given a wave function $\psi(x,t)$ for a particle in one dimension, the rules of quantum mechanics (which are apparently laws of nature)
tell us that we must assign a probability $|\psi(x,t)|^2 \,dx$ to the statement ``at time $t$, the particle is between $x$ and $x+dx$''. Different people do not appear to have a choice about this assignment. In this sense, quantum probability appears to be {\it objective}.
The goal of this paper is to understand how the apparently objective probabilities of quantum mechanics can be fit into the Bayesian framework, which allows different people to make different probability assignments. This issue has been addressed before by Caves, Fuchs, and Schack \cite{cfs}, and our results are in broad agreement with theirs. However, we emphasize a somewhat different approach to certain issues that we will explain as we go along.
In section \ref{axioms}, in order to fix the notation and key concepts, we briefly review the axioms and basic theorems of probability theory. In section \ref{probprob}, we introduce the notion of a probability of a probability, and explain how it can be applied to experimental data to turn an originally subjective probability into an increasingly objective one, in the sense that all but strongly biased observers agree with the final probability assignment. In section \ref{qm}, we apply this formalism to the probabilities of quantum mechanics. In section \ref{pdmdm}, we discuss when and why it is preferable to assign probabilities to possible density matrices for a quantum system, rather than assigning a particular density matrix. In section \ref{nonin}, we discuss the construction of noninformative prior distributions for density matrices. We summarize and conclude in section \ref{con}.
\section{The Axioms of Probability} \label{axioms}
The statements to which we may assign probabilities must obey a logical calculus. Some key definitions (in which ``iff'' is short for ``if and only if''):
$S =$ a statement.
$\Omega =$ a statement known to be true.
$\emptyset= $ a statement known to be false.
$S\llap{$\overline{\phantom{I}}$} =$ a statement that is true iff $S$ is false.
$S_1 \vee S_2 =$ a statement that is true iff either $S_1$ or $S_2$ is true.
$S_1 \wedge S_2 =$ a statement that is true iff both $S_1$ and $S_2$ are true.
$S_1$ and $S_2$ are {\it mutually exclusive\/} iff
$S_1 \wedge S_2 = \emptyset.$
$S_1, \ldots, S_n$ are a {\it complete set\/} iff
$S_1 \vee \ldots \vee S_n = \Omega$ and
$S_i \wedge S_j = \emptyset$ for $i\ne j$.
Elementary logical relationships among statements include $S \vee S\llap{$\overline{\phantom{I}}$} = \Omega$, $S \wedge S\llap{$\overline{\phantom{I}}$} = \emptyset$, $S \wedge \Omega = S$,
$S_1\wedge(S_2\vee S_3) = (S_1\wedge S_2)\vee(S_1\wedge S_3)$, etc. Denoting the probability assigned to a statement $S$ as $P(S)$, we can state the first three axioms of probability.
Axiom 1. $P(S)$ is a nonnegative real number.
Axiom 2. $P(S) = 1$ iff $S$ is known to be true.

Axiom 3. If $S_1$ and $S_2$ are mutually exclusive, then $P(S_1 \vee S_2) = P(S_1) + P(S_2)$.
From these axioms, and the logical calculus of statements, we can derive some simple lemmas:
Lemma 1. $P(S\llap{$\overline{\phantom{I}}$}\,) = 1 - P(S).$
Lemma 2. $P(S) \le 1.$
Lemma 3. $P(S)=0$ iff $S$ is known to be false.
Lemma 4. $P(S_1 \wedge S_2) = P(S_1) + P(S_2) - P(S_1 \vee S_2).$
\noindent We omit the proofs, which are straightforward.
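Although the proofs are omitted, the lemmas are easy to verify in a toy model (our illustration, not part of the axioms themselves) in which statements are subsets of a finite sample space, $\vee$ is union, $\wedge$ is intersection, negation is the complement, and $P(S)=|S|/|\Omega|$; the names below are ours.

```python
# Toy finite model of the logical calculus: statements are subsets of a
# sample space Omega, and P(S) = |S| / |Omega|.  Lemmas 1-4 then reduce
# to elementary counting identities, checked here with exact arithmetic.
from fractions import Fraction

Omega = frozenset(range(6))       # six equally likely outcomes (a die roll)

def P(S):
    return Fraction(len(S), len(Omega))

S1 = frozenset({0, 1, 2})         # "the roll is at most 3"
S2 = frozenset({2, 3, 4})         # "the roll is 3, 4, or 5"

assert P(Omega - S1) == 1 - P(S1)                  # Lemma 1
assert P(S1) <= 1                                  # Lemma 2
assert P(S1 & S2) == P(S1) + P(S2) - P(S1 | S2)    # Lemma 4
assert P(S1 & S2) / P(S1) == Fraction(1, 3)        # Axiom 4: P(S2|S1)
```

Note that in this assignment $S_1$ and $S_2$ are not independent, since $P(S_1\wedge S_2)=1/6$ while $P(S_1)P(S_2)=1/4$.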
We will also need the notion of a {\it conditional\/} statement
$S_2|S_1$. $S_2|S_1$ is a statement
if and only if $S_1$ is true; otherwise, $S_2|S_1$ is {\it not\/} a statement, and cannot be assigned a probability.
Given that $S_1$ is true, the statement $S_2|S_1$ is true if and only if $S_2$ is true. The probability that $S_2|S_1$ is true is then specified by
Axiom 4. $P(S_2|S_1) = P(S_1 \wedge S_2)/P(S_1).$
\noindent Note that, if $P(S_1)=0$, then $S_1=\emptyset$ by Lemma~3, and so both sides of Axiom~4 are undefined: the right side because we have divided by zero, and
the left side because $S_2|\emptyset$ is not a statement.
Another concept we will need is that of {\it independence\/} between statements. Two statements are said to be {\it independent\/} if the knowledge that one of them if true tells us nothing about whether or not the other one is true. Thus, if $S_1$ and $S_2$ are independent, we should have
$P(S_1|S_2)=P(S_1)$ and $P(S_2|S_1)=P(S_2)$. Using these relations and Axiom~4, we get a result that can be used as the definition of independence:
$S_1$ and $S_2$ are {\it independent\/} if and only if $P(S_1 \wedge S_2) = P(S_1) P(S_2)$.
Note that independence is a property of probability assignments, rather than the statements themselves. Thus, people can disagree on whether or not two statements are independent.
\section{Probabilities of probabilities} \label{probprob}
What limitations, if any, should be placed on the nature of statements to which we are allowed to assign probabilities?
There are various schools of thought. {\it Frequentists\/} assign probabilities only to {\it random variables}, a highly restricted class of statements that we shall not attempt to elucidate. {\it Bayesians} allow a wide range of statements, including statements about the future such as ``when this coin is flipped it will come up heads,'' statements about the past such as ``it rained here yesterday,'' and timeless statements such as ``the value of Newton's constant is between $6.6$ and $6.7 \times 10^{-11}\,$m$^3$/kg$\,$s$^2$.'' Some level of precision is typically insisted on, so that, for example, ``red is good'' might be rejected as too vague.
A major thesis of this paper is that the class of allowed statements should include statements about the probabilities of other statements. Some Bayesians (for example, de Finetti \cite{definpp}) reject this concept as meaningless. However, it has found some acceptance and utility in decision theory, where it is sometimes called a {\it second order probability\/}; see, e.g., \cite{dec}. In particular, it is an experimental fact that people's decisions depend not only on the probabilities they assign to various alternatives, but also on the degree of confidence that they have in their own probability assignments \cite{dec}. This degree of confidence can be quantified and treated as a probability of a probability.
To illustrate how we will use the concept, consider the following problem. Suppose that we have a situation with exactly two possible outcomes (for example, a coin flip). Call the two outcomes $A$ and $B$. In the terminology of the logical calculus, $A\vee B=\Omega$ and $A\wedge B=\emptyset$, so that $A$ and $B$ are a complete set. The probability axioms then require $P(A)+P(B)=1$, but do not tell us anything about either $P(A)$ or $P(B)$ alone.
In the absence of any other information, we invoke Laplace's {\it principle of insufficient reason} (also called the {\it principle of indifference}): when we have no cause to prefer one statement over another, we assign them equal probabilities. Thus we are instructed to choose $P(A)=P(B)={\textstyle{1\over2}}$. While this assignment is logically sound, we clearly cannot have a great deal of confidence in it; typically, we are prepared to abandon it as soon as we get some more information.
Another (and, we argue, better) strategy is to retreat from the responsibility of assigning a particular value to $P(A)$, and instead assign a probability $P(H)$ to the statement $H = {}$``the value of $P(A)$ is between $h$ and $h+dh$.'' Here $dh$ is infinitesimal, and $0\le h\le 1$. Then $P(H)$ takes the form $p(h)dh$, where $p(h)$ is a nonnegative function that we must choose, normalized by $\int_0^1 p(h)dh = 1$. We might choose $p(h)=1$, for example.
Now suppose we get some more information about $A$ and $B$. Suppose that the situation that produces either $A$ or $B$ as an outcome can be recreated repeatedly (each repetition will be called a {\it trial\/}), and that the outcomes of the different trials are (we believe) independent. Suppose that the result of the first $N$ trials is $N_{\!A}$ $A$'s and $N_{\!B}$ $B$'s, in a particular order. What can we say now?
The formula we need is
Bayes' Theorem. $P(H|D) =P(D|H)P(H)/P(D).$
\noindent Bayes' theorem follows immediately from Axiom~4; since $H\wedge D$ is the same as $D\wedge H$, we have
$P(H|D)P(D) = P(H\wedge D) = P(D|H)P(H)$. While $H$ and $D$ can be any allowed statements, the letters are intended to denote ``hypothesis'' and ``data''. Bayes' theorem tells us that, given a hypothesis $H$ to which we have somehow assigned a {\it prior probability\/} $P(H)$ (whether by the principle of indifference, or by any other means), and we know (or can compute)
the {\it likelihood\/} $P(D|H)$ of getting a particular set of data $D$ given that the hypothesis $H$ is true, then we can compute the {\it posterior probability\/} $P(H|D)$ that the hypothesis $H$ is true, given the data $D$ that we have obtained. Furthermore, if we have a complete set of hypotheses $H_i$, then we can express $P(D)$ in terms of the associated likelihoods and prior probabilities: starting with $D=D\wedge\Omega=D\wedge(H_1\vee H_2\vee\ldots) =(D\wedge H_1)\vee(D\wedge H_2)\vee\ldots,$ and noting that $D\wedge H_i$ and $D\wedge H_j$ are mutually exclusive when $i\ne j$, we have \begin{equation}
P(D)=\sum_i P(D\wedge H_i) = \sum_i P(D|H_i)P(H_i) , \label{pofd} \end{equation} where the first equality follows from Axiom~3, and the second from Axiom~4.
To apply these results to the case at hand, recall that our hypothesis is $H={}$``$P(A)$ is between $h$ and $h+dh$''. We have assigned this hypothesis a prior probability $P(H)=p(h)dh$. The data $D$ is a string of $N_{\!A}$ $A$'s and $N_{\!B}$ $B$'s, in a particular order; each of the $N=N_{\!A}+N_{\!B}$ outcomes is assumed to be independent of all the others. Using the definition of independence, we see that the likelihood is \begin{eqnarray}
P(D|H) &=& P(A)^{N_{\!A}} P(B)^{N_{\!B}} \nonumber \\
&=& h^{N_{\!A}}(1-h)^{N_{\!B}}. \label{pdh} \end{eqnarray} Applying Bayes' Theorem, we get the posterior probability \begin{equation}
P(H|D)= P(D)^{-1} h^{N_{\!A}}(1-h)^{N_{\!B}}p(h)dh, \label{bayesab} \end{equation} where \begin{equation} P(D)=\int_0^1 h^{N_{\!A}}(1-h)^{N_{\!B}}p(h)dh. \label{bayespd} \end{equation}
If the number of trials $N$ is large, and if the prior probability $p(h)$ has been chosen to be a slowly varying function, then the posterior probability $P(H|D)$ has a sharp peak at $h=h_{\rm exp} \equiv N_{\!A}/N$, the fraction of trials that resulted in outcome $A$. The width of this peak is proportional to $N^{-1/2}$ if both $N_{\!A}$ and $N_{\!B}$ are large, and to $N^{-1}$ if either $N_{\!A}$ or $N_{\!B}$ is small (or zero). Thus, after a large number of trials, we can be confident that the probability $P(A)$ that the next outcome will be $A$ is close to the fraction of trials that have already resulted in $A$. The only people who will not be convinced of this are those whose choice of prior probability $p(h)$ is strongly biased against the value $h=h_{\rm exp}$. Thus, the value $h_{\rm exp}$ for the probability $h$ is becoming {\it objective}, in the sense that almost all observers agree on it. Furthermore, those who do not agree can be identified {\it a priori\/} by noting that their prior probabilities are strong functions of $h$.
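This concentration can be checked numerically. With the flat prior $p(h)=1$, the posterior \eqref{bayesab} is a Beta distribution with known mode, mean, and standard deviation; the helper name below is ours.

```python
# Numerical sketch of Laplace's analysis (helper name is ours): with a
# flat prior p(h) = 1, the posterior proportional to h^{N_A} (1-h)^{N_B}
# is a Beta(N_A+1, N_B+1) density, peaked at h_exp = N_A/N with a width
# of order N^{-1/2} when both counts are large.
import math

def posterior_stats(NA, NB):
    a, b = NA + 1, NB + 1
    mode = NA / (NA + NB)                       # the peak sits at h_exp
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mode, mean, sd

mode, mean, sd = posterior_stats(700, 300)      # 1000 trials, 700 A's
# sd is of order sqrt(0.7 * 0.3 / 1000); quadrupling the number of trials
# at the same ratio roughly halves the width, as expected for N^{-1/2}:
_, _, sd4 = posterior_stats(2800, 1200)
assert abs(sd / sd4 - 2.0) < 0.01
```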
Those who reject the notion of a probability of a probability, but who accept the practical utility of this analysis (which was originally carried out by Laplace), have two options. Option one is to declare that $h$ is not actually a probability; it is rather a {\it limiting frequency\/} or a {\it propensity\/} or a {\it chance}. Option two is to declare that $p(h)dh$ is not actually a probability; it is a {\it measure\/} or a {\it generating function}.
Let us explore option two in more detail. Rather than assigning a second-order probability to $H={}$``$P(A)$ is between $h$ and $h+dh$'', we assign a probability to {\it every finite sequence\/} of outcomes; that is, we choose values for $P(A)$, $P(B)$, $P(AA)$, $P(AB)$, $P(BA)$, $P(BB)$, $P(AAA)$, $P(AAB)$,
and so on, for strings of arbitrarily many outcomes. We assume that all possible strings of $N$ outcomes form a complete set. Our probability assignments must of course satisfy the probability axioms, so that, for example, $P(A)+P(B)=1$. We also insist that the assignments be {\it symmetric}; that is, independent of the ordering of the outcomes, so that, for example, \begin{equation} P(AAB)=P(ABA)=P(BAA). \label{paab} \end{equation} Furthermore, the assignments for strings of $N$ outcomes must be consistent with those for $N+1$ outcomes; this means that, for any particular string of $N$ outcomes $S$, \begin{equation} P(S)=P(SA)+P(SB). \label{psab} \end{equation} A set of probability assignments that satisfies these requirements is said to be {\it exchangeable}. Then, the {\it de Finetti representation theorem\/} \cite{defin} states that, given an exchangeable set of probability assignments for all possible strings of outcomes, the probability of getting a specific string $D$ of $N$ outcomes that includes exactly $N_{\!A}$ $A$'s and $N_{\!B}$ $B$'s can always be written in the form \begin{equation} P(D)=\int_0^1 h^{N_{\!A}}(1-h)^{N_{\!B}}p(h)dh, \label{defin} \end{equation} where $p(h)$ is a unique nonnegative function that obeys the normalization condition $\int_0^1 dh\,p(h)=1$, and is the same for every string $D$. Note that \eq{defin} is exactly the same as \eq{bayespd}. Thus an exchangeable probability assignment to sequences of outcomes can be characterized by a function $p(h)$ that can be (as we have seen) consistently treated as a probability of a probability. But those who find this notion unpalatable are free to think of $p(h)$ as specifying a measure, or a generating function, or a similar euphemism.
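The representation \eqref{defin} can be checked directly for the flat choice $p(h)=1$, where the Euler Beta integral gives $P(D)=N_{\!A}!\,N_{\!B}!/(N+1)!$; the symmetry condition \eqref{paab} holds by construction, and the consistency condition \eqref{psab} becomes an exact identity among factorials. The function name below is ours.

```python
# Check of the exchangeability conditions for the flat measure p(h) = 1
# (function name is ours): the Euler integral gives
#     P(D) = N_A! N_B! / (N_A + N_B + 1)!,
# which depends only on the counts, and satisfies P(S) = P(SA) + P(SB).
from fractions import Fraction
from math import factorial

def P_string(NA, NB):
    # closed form of  int_0^1 h^NA (1-h)^NB dh
    return Fraction(factorial(NA) * factorial(NB), factorial(NA + NB + 1))

assert P_string(2, 1) == Fraction(1, 12)      # P(AAB) = P(ABA) = P(BAA)
for NA in range(6):                           # consistency for N and N+1
    for NB in range(6):
        assert P_string(NA, NB) == P_string(NA + 1, NB) + P_string(NA, NB + 1)
```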
To summarize, if we need to assign a prior probability but have little information, it can be more constructive to abjure, and instead assign a probability to a range of possible values of the needed prior probability. This probability of a probability can then be updated with Bayes' theorem as more information comes in.
\section{Probability in Quantum Mechanics} \label{qm}
Suppose we are given a qubit: a quantum system with a two-dimensional Hilbert space. (We will use the language appropriate to a spin-one-half particle to describe it.) We are asked to make a guess for its quantum state.
Without further information, the best we can do is invoke the principle of indifference. In the case of a finite set of possible outcomes, this principle is based on the permutation symmetry of the outcomes; we choose the unique probability assignment that is invariant under this symmetry. The quantum analog of the permutation of outcomes is the unitary symmetry of rotations in Hilbert space. The only quantum state that is invariant under this symmetry is the fully mixed density matrix \begin{equation} \rho = {\textstyle{1\over2}} I. \label{rhomix} \end{equation} Thus we are instructed to choose \eq{rhomix} as the quantum state of the system. While this assignment is logically sound, we clearly cannot have a great deal of confidence in it; typically, we are prepared to abandon it as soon as we get some more information.
Another (and, we argue, better) strategy is to retreat from the responsibility of assigning a particular state (pure or mixed) to the system, and instead assign a probability $P(H)$ to the statement $H = {}$``the quantum state of the system is a density matrix within a volume $d\rho$ centered on $\rho$'', where $\rho$ is a particular $2\times 2$ hermitian matrix with nonnegative eigenvalues that sum to one, and $d\rho$ is a suitable differential volume element in the space of such matrices. We can parameterize $\rho$ with three real numbers $x$, $y$, and $z$ via \begin{equation} \rho ={1\over2} \pmatrix{ 1+z & x- iy \cr \noalign{
}
x+iy & 1-z \cr}, \label{rhoxyz} \end{equation} where $x^2 + y^2 + z^2 \equiv r^2 \le 1$. We then take $d\rho =dV$, where $dV=(3/4\pi)dx\,dy\,dz$ is the normalized volume element: $\int dV = 1$. $P(H)$ takes the form $p(\rho)dV$, where $p(\rho)$ is a nonnegative function that we must choose, normalized by $\int p(\rho)dV = 1$. We might choose $p(\rho)=1$, for example.
Now suppose we get some more information about the quantum state of the system. Suppose that the procedure that prepares the quantum state of the particle can be recreated repeatedly (each repetition of this will be called a {\it trial\/}), and that the outcomes of measurements performed on each prepared system are (we believe) independent. Suppose further that we have access to a Stern--Gerlach apparatus that allows us to measure whether the spin is $+$ or $-$ along an axis of our choice. We choose the $z$ axis. Suppose that the result of the first $N$ trials is $N_{+}$ $+$'s and $N_{-}$ $-$'s. What can we say now?
Given a density matrix $\rho$, parameterized by \eq{rhoxyz}, the rules of quantum mechanics tell us that the probability that a measurement of the spin along the $z$ axis will yield $+1$ is \begin{equation}
P(\sigma_z\,{=}\,{+}1|\rho) = {\rm Tr}\,{\textstyle{1\over2}}(1+\sigma_z)\rho = {\textstyle{1\over2}}(1+z) , \label{PSzp} \end{equation} where $\sigma_z$ is a Pauli matrix, and the probability that this measurement will yield $-1$ is \begin{equation}
P(\sigma_z\,{=}\,{-}1|\rho) = {\rm Tr}\,{\textstyle{1\over2}}(1-\sigma_z)\rho = {\textstyle{1\over2}}(1-z) . \label{PSzm} \end{equation}
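These two trace formulas are easy to check numerically for any valid $\rho$. The sketch below (added here as an illustration; the point $(x,y,z)$ inside the Bloch ball is an arbitrary choice, not from the text) uses the parameterization of \eq{rhoxyz}:

```python
import numpy as np

# Pauli matrices and the identity.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# A valid density matrix rho = (1/2)(I + x sx + y sy + z sz) with r <= 1;
# the values (x, y, z) below are just a sample point inside the Bloch ball.
x, y, z = 0.3, -0.4, 0.5
rho = 0.5 * (I2 + x * sx + y * sy + z * sz)

# Born rule: P(sigma_z = +1 | rho) = Tr[(1/2)(1 + sigma_z) rho] = (1 + z)/2,
# and similarly with the minus sign.
p_plus = np.trace(0.5 * (I2 + sz) @ rho).real   # = 0.75 for z = 0.5
p_minus = np.trace(0.5 * (I2 - sz) @ rho).real  # = 0.25 for z = 0.5
```

The two probabilities sum to one for any $\rho$, since the two projectors sum to the identity.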
Now we use Bayes' theorem. Our hypothesis is $H={}$``the quantum state is within a volume $d\rho$ centered on $\rho$''. We have assigned this hypothesis a prior probability $P(H)=p(\rho)d\rho$. The data $D$ is a string of $N_+$ $+$'s and $N_-$ $-$'s, in a particular order; each of the $N=N_++N_-$ outcomes is assumed to be independent of all the others. Using the definition of independence, we see that the likelihood is \begin{eqnarray}
P(D|H) &=& [P(\sigma_z\,{=}\,{+}1|\rho)]^{N_+} [P(\sigma_z\,{=}\,{-}1|\rho)]^{N_-} \nonumber \\ \noalign{
} &=& [{\textstyle{1\over2}}(1+z)]^{N_+} [{\textstyle{1\over2}}(1-z)]^{N_-} . \label{pdhqm} \end{eqnarray} Applying Bayes' Theorem, we get the posterior probability \begin{equation}
P(H|D)= P(D)^{-1} [{\textstyle{1\over2}}(1+z)]^{N_+} [{\textstyle{1\over2}}(1-z)]^{N_-}p(\rho)d\rho, \label{bayesabqm} \end{equation} where \begin{equation} P(D)=\int [{\textstyle{1\over2}}(1+z)]^{N_+} [{\textstyle{1\over2}}(1-z)]^{N_-}p(\rho)d\rho. \label{bayespdqm} \end{equation} When the number of trials $N$ is large, and the prior probability $p(\rho)$ is a
slowly varying function, the posterior probability $P(H|D)$ has a sharp peak at $z=z_{\rm exp}\equiv(N_+ - N_-)/N$. Thus, after a large number of trials in which we measure $\sigma_z$, we can be confident of the value of the parameter $z$ in the density matrix of the system. The only people who will not be convinced of this are those whose choice of prior probability $p(\rho)$ is strongly biased against the value $z=z_{\rm exp}$. Furthermore, those who do not agree can be identified {\it a priori\/} by noting that their prior probabilities are strong functions of $\rho$.
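The sharpness of this peak is easy to see numerically. The sketch below (added here; the counts $N_+=700$, $N_-=300$ are hypothetical, and the prior is taken flat) evaluates the logarithm of the likelihood factor in \eq{bayesabqm} on a grid in $z$ and locates its maximum at $z_{\rm exp}$:

```python
import numpy as np

# Hypothetical counts from N = 1000 trials measuring sigma_z.
n_plus, n_minus = 700, 300
n = n_plus + n_minus
z_exp = (n_plus - n_minus) / n  # = 0.4

# Unnormalized posterior in z for a flat prior: the likelihood factor
# [(1+z)/2]^{N_+} [(1-z)/2]^{N_-}.  Work with logs to avoid underflow.
z = np.linspace(-0.999, 0.999, 20001)
log_post = n_plus * np.log((1 + z) / 2) + n_minus * np.log((1 - z) / 2)

z_peak = z[np.argmax(log_post)]  # close to z_exp
```

The width of the peak shrinks like $1/\sqrt{N}$, which is why a slowly varying prior becomes irrelevant for large $N$.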
We can of course orient our Stern--Gerlach apparatus along different axes. If we choose the $x$ axis or the $y$ axis, the relevant predictions of quantum mechanics are \begin{eqnarray}
P(\sigma_x\,{=}\,{+}1|\rho) &=& {\rm Tr}\,{\textstyle{1\over2}}(1+\sigma_x)\rho = {\textstyle{1\over2}}(1+x) , \label{PSxp} \\ \noalign{
}
P(\sigma_x\,{=}\,{-}1|\rho) &=& {\rm Tr}\,{\textstyle{1\over2}}(1-\sigma_x)\rho = {\textstyle{1\over2}}(1-x) , \label{PSxm} \\ \noalign{
}
P(\sigma_y\,{=}\,{+}1|\rho) &=& {\rm Tr}\,{\textstyle{1\over2}}(1+\sigma_y)\rho = {\textstyle{1\over2}}(1+y) , \label{PSyp} \\ \noalign{
}
P(\sigma_y\,{=}\,{-}1|\rho) &=& {\rm Tr}\,{\textstyle{1\over2}}(1-\sigma_y)\rho = {\textstyle{1\over2}}(1-y) . \label{PSym} \end{eqnarray} For each trial, we can choose whether to measure $\sigma_x$, $\sigma_y$, or $\sigma_z$. (We could also choose to measure along any other axis.) Then, if the outcomes include $N_{+z}$ measurements of $\sigma_z$ with the result $\sigma_z=+1$, and so on, the posterior probability becomes \begin{eqnarray}
P(H|D)&=& P(D)^{-1} [{\textstyle{1\over2}}(1+x)]^{N_{+x}} [{\textstyle{1\over2}}(1-x)]^{N_{-x}} \nonumber \\ && {} \times [{\textstyle{1\over2}}(1+y)]^{N_{+y}} [{\textstyle{1\over2}}(1-y)]^{N_{-y}} \nonumber \\ && {} \times [{\textstyle{1\over2}}(1+z)]^{N_{+z}} [{\textstyle{1\over2}}(1-z)]^{N_{-z}} p(\rho)d\rho, \qquad \label{bayesabcqm} \end{eqnarray} where $P(D)$ is given by the obvious integral. Clearly the discussion in the preceding paragraph is simply triplicated, and, when the number of trials is large, we have determined the entire density matrix to the satisfaction of all but strongly biased observers. Our subjective probabilities of probabilities have led us to an objective conclusion about quantum probabilities.
In \cite{cfs}, Caves et al.\ arrived at an essentially identical result. The main difference in their analysis is that they regarded $p(\rho)d\rho$ as a measure rather than a probability. This approach required them to prove, first, a quantum version of the de Finetti theorem \cite{qdef}, and, second, that Bayes' theorem can be applied to $p(\rho)d\rho$ \cite{qbayes}. Both steps become unnecessary if we treat $p(\rho)d\rho$ as, fundamentally, a probability.
\section{Probabilities for Density Matrices vs.~Density Matrices} \label{pdmdm}
If we assign an impure density matrix $\rho$ to a quantum system, does this not already take into account our ignorance about it? Why is it preferable to assign, instead, a probability $p(\rho)d\rho$ to the set of possible density matrices?
It depends on the nature of our ignorance. Suppose, for example, the system is the spin of an electron plucked from the air. Then we expect that \eq{rhomix} will describe it, in the sense that if we do repeated trials (plucking a new electron each time, and measuring its spin along an axis of our choice), we will find that $x_{\rm exp}\equiv (N_{+x}-N_{-x})/(N_{+x}+N_{-x})$, $y_{\rm exp}\equiv (N_{+y}-N_{-y})/(N_{+y}+N_{-y})$, and $z_{\rm exp}\equiv (N_{+z}-N_{-z})/(N_{+z}+N_{-z})$ all tend to zero.
Suppose instead that the spin is prepared by a technician who (with the aid of a Stern--Gerlach device) puts it in either a pure state with $\sigma_z=+1$, or a pure state with $\sigma_x=+1$, and each time decides which choice to make by flipping a coin that we believe is fair. In this case the appropriate density matrix is \begin{eqnarray} \rho &=& {\textstyle{1\over2}}[{\textstyle{1\over2}}(1+\sigma_z)] + {\textstyle{1\over2}}[{\textstyle{1\over2}}(1+\sigma_x)] \nonumber \\ \noalign{
} &=& {1\over 4}\pmatrix{ 3 & 1 \cr 1 & 1 \cr}. \label{rhoodd} \end{eqnarray} Comparing with \eq{rhoxyz}, we see that we now expect $x_{\rm exp}$, $y_{\rm exp}$, and $z_{\rm exp}$ to approach $+{\textstyle{1\over2}}$, $0$, and $+{\textstyle{1\over2}}$, respectively.
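This mixture and the resulting expectation values can be verified directly (a small numerical check added here, not part of the original text):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Equal mixture of the sigma_z = +1 and sigma_x = +1 pure states, eq. (rhoodd).
rho = 0.5 * (0.5 * (I2 + sz)) + 0.5 * (0.5 * (I2 + sx))
# rho equals the matrix (1/4) [[3, 1], [1, 1]].

# Expected long-run frequencies: x_exp = Tr(sigma_x rho), z_exp = Tr(sigma_z rho).
x_exp = np.trace(sx @ rho).real  # = 1/2
z_exp = np.trace(sz @ rho).real  # = 1/2
```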
Now suppose that the spin is prepared by a technician who puts it in either a pure state with $\sigma_z=+1$, or a pure state with $\sigma_x=+1$, and {\it makes the same choice every time}. We, however, are not aware of what her choice is.
If forced to assign a particular density matrix, we would have to choose \eq{rhoodd}. However, our situation is clearly different from what it was in the previous example. In the present case, repeated experiments would {\it not\/} verify \eq{rhoodd}, but would instead converge on either $x_{\rm exp}$ = 0 and $z_{\rm exp}=+1$, or $x_{\rm exp}$ = +1 and $z_{\rm exp}=0$. Therefore, in this case, it is more appropriate to assign a prior probability of one-half to $\rho={\textstyle{1\over2}}(1+\sigma_z)$ and a prior probability one-half to $\rho={\textstyle{1\over2}}(1+\sigma_x)$. Then, as data comes in, we can update these probability assignments with Bayes' theorem, as described in section \ref{qm}.
Thus, it is better to choose $p(\rho)d\rho$ when it is possible that there is something about the preparation procedure that consistently prefers a particular direction in Hilbert space, but we do not know what that direction is. Since this possibility can rarely be ruled out {\it a priori}, we are typically better served by choosing a prior probability $p(\rho)d\rho$, rather than a particular value of $\rho$ itself.
\section{Noninformative Priors for Density Matrices} \label{nonin}
Suppose we have decided to choose a prior probability $p(\rho)d\rho$ for the density matrix $\rho$ of some quantum system. How should we choose this probability?
In the case where we have little or no information about the quantum system, we would like to formulate the appropriate analog of the principle of indifference. Consider a qu$n$it, a quantum system whose Hilbert space has dimension $n$ that is known to us. (We will not consider the even more general problem where $n$ is unknown.) We can always write the density matrix (whatever it is) in the form \begin{equation} \rho = U^{-1}\tilde\rho \,U, \label{UrhoU} \end{equation} where $U$ is unitary with determinant one, and $\tilde\rho$ is diagonal with nonnegative entries $p_1,\ldots, p_n$ that sum to one. There is a natural measure for special unitary matrices, the Haar measure; it is invariant under $U\to CU$, where $C$ is a constant special unitary matrix. In the simplest case of $n=2$, we can parameterize $U$ as $U=e^{i\alpha_1\sigma_3} e^{i\alpha_2\sigma_2} e^{i\alpha_3\sigma_3}$, with $0\le\alpha_1\le\pi$, $0\le\alpha_2\le\pi/2$, $0\le\alpha_3\le \pi$; then the normalized Haar measure is $dU=\pi^{-2}\sin(2\alpha_2)d\alpha_1 d\alpha_2 d\alpha_3$. This construction is extended to all $n$ in \cite{sudar}.
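As a quick sanity check of the stated measure (an added sketch): the $\alpha_1$ and $\alpha_3$ integrals each contribute a factor of $\pi$, and $\int_0^{\pi/2}\sin(2\alpha_2)\,d\alpha_2=1$, so $dU$ integrates to one. Numerically:

```python
import numpy as np

# Check that the stated Haar measure for SU(2) is normalized:
# dU = pi^{-2} sin(2*alpha2) d(alpha1) d(alpha2) d(alpha3),
# with 0 <= alpha1 <= pi, 0 <= alpha2 <= pi/2, 0 <= alpha3 <= pi.
a2 = np.linspace(0.0, np.pi / 2, 200001)
f = np.sin(2 * a2)

# Trapezoidal rule by hand for the alpha2 integral (its exact value is 1).
inner = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(a2)))

# The alpha1 and alpha3 integrals each give pi; divide by the pi^2 in dU.
total = np.pi * inner * np.pi / np.pi**2  # should equal 1
```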
Suppose we know that the state of the quantum system is pure. Then we can set $\tilde\rho_{ij}=\delta_{i1}\delta_{j1}$, and parameterize $\rho$ via $U$. Then it is natural to choose $d\rho=dU$ and $p(\rho)=1$, because this is the only choice that is invariant under unitary rotations in Hilbert space.
Now consider the more general case where we do not have information about the purity of the system's quantum state. Following \cite{meas}, we define the volume element via \begin{equation} d\rho \equiv dU\kern0.5pt dF, \label{drho} \end{equation} where $dU$ is the normalized Haar measure for $U$, and \begin{equation} dF = (n{-}1)! \,\delta(p_1+\ldots+p_n-1)dp_1\ldots dp_n \label{dF} \end{equation} is a normalized measure for the $p_i$'s that we will call the {\it Feynman measure\/} (because it appears in the evaluation of one-loop Feynman diagrams). \Eq{dF} assumes that each $p_i$ runs from zero to one; then \eq{UrhoU} is an overcomplete construction, because $U$ can rearrange the $p_i$'s. This is easily fixed by imposing $p_1\ge\ldots\ge p_n$, and multiplying $dF$ by $n!$. However, \eq{dF} as it stands is easier to write and think about; the overcompleteness of this construction of $\rho$ causes no harm.
In the case $n=2$, we previously chose $d\rho=dV=(3/4\pi)dx\,dy\,dz$ for the parameterization of \eq{rhoxyz}. In this case, the eigenvalues of $\rho$ are ${\textstyle{1\over2}}(1+r)$ and ${\textstyle{1\over2}}(1-r)$, with $0\le r\le 1$. After integrating over $U$, $dV\to 3 r^2\,dr$; in comparison, $dF=dr$ for this case.
The purity of a density matrix $\rho$ can be parameterized by ${\rm Tr}\,\rho^2$, which for $n=2$ is ${\textstyle{1\over2}}(1+r^2)$. Thus the volume measure $dV$ is more biased towards pure states than is the Feynman measure $dF$; we have $dV=3(2\,{\rm Tr}\,\rho^2-1)dF$.
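These statements are easy to verify numerically (an added sketch, not part of the original text): both radial densities integrate to one on $[0,1]$, and $3(2\,{\rm Tr}\,\rho^2-1)=3r^2$ identically once ${\rm Tr}\,\rho^2={\textstyle{1\over2}}(1+r^2)$ is substituted.

```python
import numpy as np

# Radial parts of the two measures for n = 2, after integrating over U:
# dV -> 3 r^2 dr (volume measure), dF -> dr (Feynman measure), 0 <= r <= 1.
r = np.linspace(0.0, 1.0, 100001)
w = np.diff(r)
mid = 0.5 * (r[1:] + r[:-1])

# Both radial densities are normalized on [0, 1] (midpoint rule).
norm_dV = float(np.sum(3 * mid**2 * w))  # ~ 1
norm_dF = float(np.sum(w))               # = 1

# Relation dV = 3 (2 Tr rho^2 - 1) dF, with Tr rho^2 = (1 + r^2)/2.
purity = 0.5 * (1 + r**2)
max_dev = float(np.max(np.abs(3 * (2 * purity - 1) - 3 * r**2)))
```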
In general, we can accommodate any such bias by taking $p(\rho)d\rho$ to be of the form \begin{equation} p(\rho)d\rho = p({\rm Tr}\,\rho^2)dU\kern0.5pt dF , \label{prho} \end{equation} where $p(x)$ is an increasing function if we are biased towards having a pure state. For $n>2$, we can take $p$ to be a function of ${\rm Tr}\,\rho^k$ for $2\le k\le n$. Arguments in favor of various choices of $p$ have been put forth \cite{rhop}, but no single choice seems particularly compelling. Of course, once we have done enough experiments, our original biases become largely irrelevant, as we saw in section \ref{qm}.
\section{Conclusions} \label{con}
We have argued that, in a Bayesian framework, the nature of our ignorance about a quantum system can often be more faithfully represented by a prior probability $p(\rho)d\rho$ over the range of allowed density matrices, rather than by a specific choice of density matrix. This method is particularly appropriate when (1) the preparation procedure may favor a direction in Hilbert space, but we do not know what that direction is, and (2) we can recreate the preparation procedure repeatedly, and perform measurements of our choice on each prepared system. In this case, as data comes in, we use Bayes' theorem to update $p(\rho)d\rho$. Eventually, all but strongly biased observers (who can be identified {\it a priori\/} by an examination of their choice of prior probability) will be convinced of the values of the quantum probabilities. In this way, initially subjective probability assignments become more and more objective.
In choosing $p(\rho)d\rho$, we can use the principle of indifference, applied to the unitary symmetry of Hilbert space, to reduce the problem to one of choosing a probability distribution for the eigenvalues of $\rho$. There is, however, no compelling rationale for any particular choice; in particular, we must decide how biased we are towards pure states.
\begin{acknowledgments}
I am grateful to Jim Hartle for illuminating discussions, and prescient comments on earlier drafts of this paper. This work was supported in part by NSF Grant No.~PHY00-98395.
\end{acknowledgments}
\end{document}
\begin{document}
\title{Rigorous numerics for critical orbits \\ in the quadratic family}
\author{A. Golmakani\footnote{Ali Golmakani, Universidade Federal de Alagoas, Av.\ Lourival Melo Mota, s/n, Macei\'{o}, Alagoas 57072-900, Brazil; \emph{aligolmakani@gmail.com}}, \ C. E. Koudjinan\footnote{Comlan Edmond Koudjinan, Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, 34151 Trieste, Italy; \emph{koudjinanedmond@gmail.com} }, \ S. Luzzatto \footnote{Stefano Luzzatto, Abdus Salam International Centre for Theoretical Physics (ICTP), Strada Costiera 11, 34151 Trieste, Italy; \emph{luzzatto@ictp.it} }, \ P. Pilarczyk\footnote{Pawe\l{} Pilarczyk, Gda\'{n}sk University of Technology, Faculty of Applied Physics and Mathematics, ul.\ Gabriela Narutowicza 11/12, 80-233 Gda\'{n}sk, Poland; \emph{pawel.pilarczyk@pg.edu.pl}} } \date{} \maketitle
\begin{abstract} We develop algorithms and techniques to compute \emph{rigorous} bounds for finite pieces of orbits of the critical points, for \emph{intervals of parameter values}, in the quadratic family of one-dimensional maps \(f_a (x) = a - x^2\). We illustrate the effectiveness of our approach by constructing a dynamically defined partition \( \mathcal P \) of the parameter interval \( \Omega=[1.4, 2] \) into almost 4 million subintervals, for each of which we compute to high precision the orbits of the critical points up to some time~\( N \) and other dynamically relevant quantities, several of which can vary greatly, possibly spanning several orders of magnitude. We also subdivide \( \mathcal P \) into a family \( \mathcal P^{+} \) of intervals which we call \emph{stochastic intervals} and a family \( \mathcal P^{-} \) of intervals which we call \emph{regular intervals}. We numerically prove that each interval \( \omega \in \mathcal P^{+} \) has an \emph{escape time}, which roughly means that some iterate of the critical point taken over all the parameters in \( \omega \) has considerable width in the phase space. This suggests, in turn, that most parameters belonging to the intervals in \( \mathcal P^{+} \) are stochastic and most parameters belonging to the intervals in \( \mathcal P^{-} \) are regular, thus the names. We prove that the intervals in \( \mathcal P^{+} \) occupy almost 90\% of the total measure of \( \Omega \). The software and the data are freely available at \href{http://www.pawelpilarczyk.com/quadr/}{http://www.pawelpilarczyk.com/quadr/}, and a web page is provided for carrying out the calculations. The ideas and procedures can be easily generalized to apply to other parametrized families of dynamical systems. \end{abstract}
\vskip 24pt
\noindent In the 1970s Robert May introduced the logistic family of one-dimensional maps as an example of a simple mathematical model which nevertheless exhibits extremely complex behaviour. Since then, the logistic family and the very closely related quadratic family have become an icon of Chaos Theory. Notwithstanding some very deep analytic and abstract results obtained over the last several decades by top mathematicians, and extensive numerical studies by physicists and nonlinear dynamicists, starting from Feigenbaum, there are literally only a handful of rigorous concrete numerical results. This is not too surprising, because it is precisely the chaotic nature of the dynamics of these families which makes them so challenging numerically.
In this paper we develop some rigorous numerical techniques for studying the quadratic family and obtain several interesting ``statistical'' results about how often certain dynamical situations occur in parameter space. In particular, we conclude that stochastic-like dynamics is likely to occur for almost 90\% of parameters. Our research is motivated by a specific ambitious project to identify true chaotic dynamics in the family. However, our techniques can certainly be easily adapted to a large variety of situations.
\section{Introduction} \label{sec:intro}
The rigorous computation of orbits of dynamical systems is well known to be very delicate due to inevitable approximation errors caused by the fact that computers work with only a finite set of ``representable'' numbers, such as the 64-bit floating point numbers following the IEEE 754 standard \cite{ieee754}, implemented in most modern processors. A standard and effective way to deal with this problem is to use \emph{interval arithmetic}~\cite{Moore1966,WT2011} to obtain rigorous bounds for the iterates of a single point, which can be made arbitrarily sharp by paying the price in computing time. The situation can, however, get significantly more complicated if we need to bound the images of an ``ensemble'' of points or the images of a single point for different parameter values. The purpose of this paper is to illustrate some of the problems and provide computational techniques to address them. We focus on a particular case which is motivated by a bigger and more ambitious project, as explained below. However, similar problems appear in more general situations, and our approach should be relatively straightforward to apply in other settings.
\subsection{The quadratic family}
We consider the classical quadratic family of one-dimensional maps given by \begin{equation}\label{eq:quadratic} f_{a}(x) = a-x^{2} \end{equation} and restrict ourselves to parameters \( a\in \Omega:=[1.4, 2] \), since the dynamics of \( f_{a} \) is essentially trivial and well understood for \( a\notin\Omega \), and initial conditions \( x\in I_{a} \), where the interval~\( I_{a} \) depends continuously on the parameter \( a \) and has the properties that \( f_{a}(I_{a})\subseteq I_{a} \) and that the iterates of all the points \( x\notin I_{a} \) converge to \( -\infty \). The existence of \( I_{a} \) follows by elementary observations and its properties imply that any non-trivial dynamics is contained in \( I_{a} \).
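Though the text leaves \( I_{a} \) implicit, one standard choice (an assumption of this added sketch, not taken from the paper) is \( I_{a}=[-p_{a},p_{a}] \), where \( p_{a}=(1+\sqrt{1+4a})/2 \) is the modulus of the negative fixed point of \( f_{a} \); then \( f_{a}(\pm p_{a})=-p_{a} \) and \( \max f_{a} = a \le p_{a} \) for \( a\le 2 \), so invariance holds and can be spot-checked numerically:

```python
import math

def invariant_endpoint(a):
    # Modulus of the negative fixed point of f_a(x) = a - x^2
    # (our hypothetical choice of I_a = [-p, p]; the paper leaves I_a implicit).
    return (1.0 + math.sqrt(1.0 + 4.0 * a)) / 2.0

def f(a, x):
    return a - x * x

# Spot-check the invariance f_a(I_a) ⊆ I_a on a grid of parameters and points.
ok = True
for i in range(61):
    a = 1.4 + 0.01 * i  # a in [1.4, 2.0]
    p = invariant_endpoint(a)
    for j in range(101):
        x = -p + 2 * p * j / 100
        y = f(a, x)
        ok = ok and (-p - 1e-12 <= y <= p + 1e-12)
```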
Let \( \omega\subseteq \Omega \) be an arbitrary parameter interval. Formally, we could even take \( \omega=\Omega \), but in general our calculations are most effective for quite small intervals. In the computations to be given below as an illustration of our methods, we will construct dynamically a partition of \( \Omega \) into subintervals \( \omega \) whose length varies from an order of \( 10^{-3} \) to as small as \( 10^{-10} \). Let \( c \) denote the critical point \( 0 \) of \( f_a \). For each \( n\geq 0 \), we let \begin{equation}\label{eq:omegan} c_{n}(a):= f_{a}^{n}(f_{a}(c)) \qquad \text{ and } \qquad \omega_{n}:=\{c_{n}(a): a\in \omega\}. \end{equation} Notice that the critical value \( c_{0}(a) \) equals \( a \); therefore, \( \omega_{0} \) coincides with \( \omega \). For \( n\geq 1 \), \( c_{n}(a) \) is simply the \( n \)'th image of the critical value and \( \omega_{n} \) is the interval given by the \( n \)'th images of the critical values for all the parameters \( a\in \omega \).
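A naive way to enclose \( \omega_{n} \) rigorously is straightforward interval arithmetic with exact rational endpoints. The sketch below (our illustration, not the paper's algorithm; the sample interval \( \omega=[1.99,2] \) is an arbitrary choice) iterates the interval extension of \( f_{a} \) starting from \( \omega_{0}=\omega \), and already hints at the overestimation that motivates the sharper techniques developed in the paper:

```python
from fractions import Fraction as F

def sq_interval(lo, hi):
    # Exact interval extension of x -> x^2 on [lo, hi].
    cands = [lo * lo, hi * hi]
    return (F(0) if lo <= 0 <= hi else min(cands), max(cands))

def f_interval(a_lo, a_hi, x_lo, x_hi):
    # Interval extension of f_a(x) = a - x^2 over parameters [a_lo, a_hi].
    s_lo, s_hi = sq_interval(x_lo, x_hi)
    return (a_lo - s_hi, a_hi - s_lo)

# Enclose omega_n for the sample interval omega = [1.99, 2] and small n,
# starting from the critical values c_0(a) = a.
a_lo, a_hi = F(199, 100), F(2)
x_lo, x_hi = a_lo, a_hi  # omega_0 = omega
for _ in range(3):
    x_lo, x_hi = f_interval(a_lo, a_hi, x_lo, x_hi)
# [x_lo, x_hi] now contains c_3(a) for every a in omega, but is wider
# than the true omega_3: the dependence on a is counted twice per step.
```

By inclusion isotonicity of interval arithmetic, the enclosure is rigorous for every parameter in \( \omega \), but its width grows much faster than that of \( \omega_{n} \) itself.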
The first and main objective of this paper is to describe and implement effective computational techniques to obtain \emph{arbitrarily sharp} and \emph{rigorous} approximations for \( \omega_{n} \) under a verifiable technical assumption \eqref{eq:mon} to be given below. We will also describe arguments to obtain rigorous bounds on a few other relevant dynamical quantities. These objectives are motivated by a bigger project that we discuss in the following subsections. In Section \ref{sec:results} we present and discuss the results arising from our computations. In Section \ref{sec:strategies} we give a relatively detailed overview of the computational strategies used to achieve our goals, and in Section \ref{sec:P} we explain how these are used to construct the dynamically defined partition \( \mathcal P \). In Section~\ref{sec:procedure} we give all the details of the computational procedures and explain how we are able to ensure rigorous bounds, and in Section~\ref{sec:algorithms} we give details of the algorithms. The source code of the software, programmed in C++, is freely available at the website \cite{software}, which also features a user-friendly interface to run the software directly from the web browser. The data resulting from our computations is published in~\cite{data20}.
\subsection{Regular and stochastic dynamics} \label{sec:motivation} The specific approach developed in this paper concerns calculations of quantities of very general interest, in a variety of settings relevant to anyone studying dynamical systems from a numerical point of view. In our case, they are directly motivated by a more ambitious long-term research programme whose main interest lies precisely in the subtle and non-trivial synergy between rigorous computational methods and more standard analytic, geometric and probabilistic mathematical arguments. In this section we outline the main features and goals of this programme and emphasize the crucial role of the computational methods introduced in this paper.
The quadratic family \eqref{eq:quadratic} of one-dimensional maps is possibly one of the most studied families of dynamical systems. It contains a mind-boggling richness of dynamical phenomena, which has still not been completely classified or understood, and the dependence of the dynamics on the parameter is extremely complicated. It is known, however, that only two types of dynamical phenomena occur with positive probability in the parameter interval \( \Omega \): \emph{regular} dynamics, where \( f_{a} \) admits a unique attracting periodic orbit to which Lebesgue almost every \( x\in I_{a} \) converges, or \emph{stochastic} dynamics, where \( f_{a} \) admits a unique invariant probability measure \( \mu_{a} \) to which the ergodic averages of Lebesgue almost every point \( x\in I_{a} \) converge (in a very ``chaotic'' way, thus the term ``stochastic-like''). In other words, the union of the two sets
\begin{equation}\label{eq:regstoc} \Omega^{-}:=\{a\in \Omega: a \text{ regular}\} \quad \text{ and } \quad \Omega^{+}:=\{a\in \Omega: a \text{ stochastic}\}, \end{equation} has \emph{full measure} in \( \Omega \) \cite{Lyu02, AviLyudMel03}. It is also known that \( \Omega^{-} \) is \emph{open and dense} in \( \Omega \) \cite{GraSwi97, Lyu97, Lyu97a} and therefore \( \Omega^{+} \) is \emph{nowhere dense}, but has \emph{positive Lebesgue measure} \cite{Jak81, BenCar85}. A natural question is: \begin{quote} Given an explicit parameter \( a\in \Omega^{-}\cup\Omega^{+} \), can we decide if \( a\in \Omega^{-} \) or \( a\in \Omega^{+} \)? \end{quote} It turns out that for most parameters in \( \Omega^{-}\cup\Omega^{+} \), this is an extremely difficult question, and the set \( \Omega^{+} \) is in fact formally \emph{undecidable} \cite{ArbMat04}.
Nevertheless, some results do exist. Rigorous computer-assisted arguments have been developed in \cite{TucWil09} to explicitly compute intervals of parameters belonging to \( \Omega^{-} \). These arguments have been applied, at the cost of an equivalent of a whole year of CPU time, to the logistic family \( g_{\lambda}(x)=\lambda x (1-x) \) to show that at least \( 10.2\% \) of parameters in a parameter interval roughly corresponding to our interval \( \Omega \) belong to \( \Omega^{-} \); these parameters apparently consist of almost 5~million subintervals corresponding to regions with associated attracting periodic orbits of period up to about \( 30{,}000 \). An improved method was later applied in \cite{Gal17} to obtain a slightly better estimate with considerably lower computation time. The logistic family is in fact smoothly conjugate by an explicit formula to the quadratic family \eqref{eq:quadratic} and so in principle the periodic windows for the quadratic family can be known explicitly by taking images of those computed for the logistic family. Since the conjugacy is nonlinear, an estimate of the corresponding measure is non-trivial, and will be computed in a future paper, though it turns out to yield very similar estimates, thus leaving almost \( 90\% \) of parameters unaccounted for; indeed, the results we present below are very much aligned with this figure.
Approaching the problem from the other side, notwithstanding the impossibility in general to establish that a given parameter \( a\) belongs to \(\Omega^{+} \), it may be possible to assign a well-defined lower bound to the \emph{probability} that \( a\in \Omega^+ \). Suppose, for example, that \( \omega \) is a small neighbourhood of the parameter \( a \) in \( \Omega \) and that, letting \( \omega^{+}:=\omega\cap\Omega^{+} \), we could show that \( |\omega^{+}|\geq \eta |\omega| \) for some \( \eta\in (0,1) \). Then we could say that the probability that \( a\in \omega^{+} \) is at least \( \eta \).
The very first proof that \( |\Omega^{+}|>0 \) goes back to Jakobson \cite{Jak81}, after which there have been many generalizations \cite{BenCar85, BenCar91, LuzTuc99, LuzVia00, PacRovVia98}, all based on a combination of analytic, combinatorial and probabilistic arguments which imply that for some \emph{sufficiently small} neighbourhood \( \omega \) of some ``good'' parameter value \( a^{*} \) we have \( |\omega^{+}|>0 \). However, none of the papers cited provides any explicit lower bound for the measure of \( \omega^{+} \).
In \cite{Jak01}, Jakobson extended the arguments developed in \cite{Jak81} towards a more explicit and quantitative formulation, and designed an algorithm to estimate rigorously from below the measure of stochastic parameter values in quadratic and similar smooth families of unimodal maps. It is worth noting that this is not simply a matter of ``keeping track of the constants'' but requires a reformulation of some of the starting conditions of the results in order to make them computationally verifiable, and a corresponding modification of the arguments. The actual implementation of such an algorithm was however first carried out in \cite{LuzTak06}, using arguments more closely related to \cite{BenCar85, BenCar91}, where it was shown that \( 97\% \) of parameters in the interval \( \omega:= [2-10^{-4990}, 2] \) are stochastic, thus implying that \( |\omega^{+}|\geq 0.97 \cdot 10^{-4990} \geq 10^{-5000} \). This is of course an extremely small lower bound and undoubtedly very far from optimal in terms of the overall measure of stochastic parameters in \( \Omega^{+} \) , but notwithstanding several preliminary announcements, it still remains to this day the only explicit and rigorous bound available. In Section \ref{sec:compstart} we briefly outline a possible strategy for extending the arguments of \cite{LuzTak06} to other parameter intervals in \( \Omega \) and explain how the results and calculations presented in this paper form part of this strategy.
\subsection{Computable starting conditions} \label{sec:compstart} Extending the methods introduced in \cite{LuzTak06} to other parameter intervals in \( \Omega \) requires non-trivial computer-assisted calculations in order to verify some explicit \emph{starting conditions}, which were verified analytically in \cite{LuzTak06} by choosing a very small neighbourhood of the special parameter value \( a^{*}=2 \). It is beyond the scope of this paper to give a complete and precise list of the quantities which need to be calculated, so we refer the reader to \cite{LuzTak06} for the full technical details. Here we limit ourselves to a heuristic (and incomplete) overview which we hope nevertheless helps to get a preliminary idea and to motivate the results presented in this paper.
We suppose first of all that we have fixed a parameter interval \( \omega\subseteq \Omega \). Some conditions, labelled as (A1)-(A4) and involving a number of constants, are formulated in \cite{LuzTak06} where it is proved that if these conditions are satisfied for a set of constants which satisfy certain inequalities, then an explicit formula gives a rigorous lower bound for the proportion of stochastic parameters in \( \omega \). A crucial and non-trivial aspect of the result is that the required conditions (A1)-(A4) are all \emph{verifiable} and the corresponding constants are \emph{computable}, albeit by highly non-trivial computations, in finite time and with finite precision (unlike the starting conditions of the generalizations of Jakobson's Theorem mentioned above, apart from some very exceptional cases).
The first two conditions, (A1) and (A2), are by far the most important, while (A3) and (A4) can be considered ``technical'' and may possibly even be relaxed to some extent. We therefore focus on the first two. Without going into the precise formulation of condition (A1) we mention that it involves the choice of a constant \( \delta>0\) which defines the critical neighbourhood \begin{equation} \label{eq:Delta} \Delta:=(-\delta, \delta). \end{equation} Notice that \( c=0 \) is the critical point of \( f_{a} \) for every parameter value \( a \), and thus \( \Delta \) can be chosen independently of the parameter.
Condition (A1) then essentially says that there exists a constant \( \lambda>0 \) such that the derivative \( |(f^{k})'(x)| \) of any initial condition \( x \) is \emph{growing exponentially} with exponential rate \( \lambda \), i.e. \( |(f^{k})'(x)| \geq Ce^{\lambda k} \) for some constant \( C>0 \) independent of \( x \), as long as the images of \( x \) stay outside the critical neighbourhood, i.e. as long as \( x, f(x),...,f^{k-1}(x)\notin\Delta \). This is a highly non-trivial condition if \( \delta \) is small (which it needs to be in order for the overall argument to work) since the orbit of \( x \) can still pick up some very small derivatives even outside \( \Delta \). It can be verified analytically in ``sufficiently small'' parameter neighbourhoods \( \omega \) (whose size is however not explicitly known) of ``good'' parameters \( a^{*} \) defined by conditions which are in general also not explicitly verifiable. The only option to verify this condition in general parameter intervals \( \omega \) is therefore by direct and explicit computation. Rigorous algorithms and computational techniques for this purpose were developed in \cite{DKLMOP08,GolLuzPil16} based on the construction of some relevant weighted directed graphs.
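As a concrete (and entirely non-rigorous) illustration of what condition (A1) asks for, the following Python sketch accumulates the derivative \( |(f^{k})'(x)| \) for \( f_{a}(x)=a-x^{2} \) along an orbit for as long as it stays outside \( \Delta \). Plain floating point is used and the function name is ours, so this is purely illustrative and plays no role in the rigorous computations.

```python
# Illustration only: accumulate |(f^k)'(x)| for f_a(x) = a - x^2 along
# the orbit of x, stopping as soon as the orbit enters the critical
# neighbourhood Delta = (-delta, delta).
def derivative_growth_outside_Delta(a, x, delta, max_iter=50):
    """Return the list [|f'(x)|, |(f^2)'(x)|, ...] computed while the
    orbit x, f(x), f^2(x), ... stays outside Delta."""
    growth, deriv = [], 1.0
    for _ in range(max_iter):
        if abs(x) < delta:          # orbit entered the critical neighbourhood
            break
        deriv *= abs(-2.0 * x)      # |f_a'(x)| = |2x|, by the chain rule
        growth.append(deriv)
        x = a - x * x               # next point of the orbit
    return growth

# At the fixed point x = 1 of f_2 (since f_2(1) = 2 - 1 = 1) the
# accumulated derivative is exactly 2^k, i.e. growth at rate log 2:
g = derivative_growth_outside_Delta(2.0, 1.0, 1e-3, max_iter=10)
```

For a typical orbit the factors \( |2x| \) fluctuate, which is exactly why a uniform exponential lower bound as in (A1) is non-trivial when \( \delta \) is small.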
The exponential growth of the derivative outside the critical neighbourhood \( \Delta \) is an open condition in parameter space and is in itself compatible with pretty much any kind of overall asymptotic dynamical behaviour. Indeed, as mentioned above, the set \( \Omega^{-} \) is open and dense in \( \Omega \) and therefore any interval \( \omega \) will contain a non-empty (in fact open and dense) subset of regular parameters which admit an attracting periodic orbit. Our objective however is to show that \( \omega \) also contains stochastic parameters and indeed to obtain a lower bound for the proportion of stochastic parameters in \( \omega \). By standard results, a sufficient condition for a parameter \( a \) to be stochastic is the \emph{Collet-Eckmann} condition that the derivative along the orbit of the critical value \( c_{0}:=f_{a}(c) \) is growing exponentially fast, i.e. that there exist constants \( C, \lambda >0 \) such that \( |(f^{n})'(c_{0})| \geq Ce^{\lambda n} \) for \emph{every} \( n\geq 1 \). If condition (A1) discussed above holds, then this is satisfied as long as the orbit of the critical value stays outside \( \Delta \) for all iterates, which can and does indeed happen but only for an exceptional set of parameters of zero Lebesgue measure. To obtain meaningful results we cannot therefore avoid having to deal with returns of the critical value to the critical neighbourhood, and in fact to returns which may come arbitrarily close to the critical point. In these cases it is impossible to verify the Collet-Eckmann condition computationally because it is not implied by any finite time condition and therefore we would need to check directly the derivative for an infinite number of iterates. 
We remark that there also exist weaker sufficient conditions for the parameter \( a \) to be stochastic; in some cases it is, for example, sufficient to show that \( |(f^{n})'(c_{0})| \to \infty\). None of these, however, is computationally verifiable either, since they are all asymptotic conditions that cannot be checked in any finite number of iterations. This is essentially the reason why stochastic parameters are undecidable, as mentioned above, and why they occur as Cantor sets rather than open sets of parameters.
The strategy, first developed by Jakobson, and refined in subsequent papers to deal with the situation described above, is to set up a \emph{probabilistic} argument based on two fundamental facts. The first, which is relatively elementary, is that the exponential growth of the derivative for the critical orbit is implied by a \emph{bounded recurrence} condition on the critical orbit, essentially something of the form \( |c_{n}|\geq e^{-\alpha n} \) for all \( n\geq 1 \) and for some sufficiently small \( \alpha>0 \) (in fact a little bit more is needed but this gives the main idea). This condition allows the critical point to be recurrent, i.e. to have arbitrarily close returns, but in a sufficiently controlled way, and also suggests that one way to establish abundance of stochastic parameters is to show that many of them have bounded recurrence. Based on this observation, the second, and much more sophisticated, key part of the strategy is to show that the intervals \( \omega_{n} \), which are precisely the union of images \( c_{n}(a) \) of the critical points for the parameters in \( \omega \), tend to \emph{grow} (exponentially fast), implying that the points \( c_{n}(a) \) are sufficiently ``spread out'' in the phase space and thus only a very small proportion can actually come close to the critical point and fail the bounded recurrence condition.
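Purely to illustrate what the bounded recurrence condition says (and emphatically not to verify it, since no finite computation can), one may check it up to a finite time in a few lines of Python; the function name is ours and plain floating point is used.

```python
import math

# Illustration only: check the bounded recurrence condition
# |c_n| >= e^{-alpha n} for f_a(x) = a - x^2 up to a finite time N.
# A finite-time check proves nothing about the condition itself, which
# concerns ALL n >= 1; this is precisely why the probabilistic
# parameter-exclusion argument is needed.
def first_recurrence_failure(a, alpha, N):
    """Return the first n <= N at which |c_n| < e^{-alpha n},
    or None if the condition holds up to time N."""
    x = a                              # c_0(a) = f_a(0) = a
    for n in range(1, N + 1):
        x = a - x * x                  # c_n(a)
        if abs(x) < math.exp(-alpha * n):
            return n
    return None

# For a = 2 the critical orbit is 2, -2, -2, ..., so the condition
# holds trivially up to any finite time:
ok = first_recurrence_failure(2.0, 0.1, 100)
```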
The growth in size of the intervals \( \omega_{n} \) is thus an essential ingredient in all the proofs of all variations of Jakobson's Theorem. The proof of this fact is very involved and requires a combination of several techniques, including some combinatorial, analytic and probabilistic arguments, which themselves however rely on features of the dynamics corresponding to the parameters in \( \omega \). It turns out that the uniform expansivity outside the critical neighbourhood \( \Delta \), as formulated in condition (A1) and as mentioned above, is one of the two most crucial features required. The second is formulated in condition (A2) which uses the definition of \emph{escape time} which we formulate here in a slightly simplified form as follows. \begin{definition} \label{def:escape} \( N \) is called an \emph{escape time} for \( \omega \) if the following holds: \begin{equation}\label{eq:escape}
\omega_{i}\cap \Delta=\emptyset \quad \text{for all \( 0\leq i < N \)}, \quad \text{and} \quad |\omega_{N}|\geq \sqrt\delta. \end{equation} \end{definition} This says that all intervals \( \omega_{i} \) remain outside the critical neighbourhood (and thus in particular ``benefit'' from the expansivity provided by (A1)) up to time \( N \) and that they grow to ``large scale'' (in this case defined as \( \sqrt\delta \) but this can be flexible) at time~\( N \). The wording ``escape time'' is purposefully borrowed from \cite{BenCar91} and later generalizations such as \cite{LuzTuc99, LuzVia00, LuzTak06}, and attempts to capture the idea, mentioned above, that the large size of the interval \( \omega_{N} \) implies that most images do not fall close to the critical point and therefore ``escape'' the constraints of the bounded recurrence condition.
We remark that the foreseen future applications of our estimates to the general problem of the measure of stochastic parameters, and indeed the actual formulation of condition (A2), require \( N \) to be ``sufficiently large'' depending on the other constants involved, such as the size of the critical neighbourhood \( \Delta \) and the expansivity exponent \( \lambda \). The main goal of this paper is for the moment more limited: to develop the computational techniques to construct a large number of (small) parameter intervals which have an escape time at some (possibly large) value of \( N \). The verification of an escape time requires the computation of rigorous \emph{enclosures} (we give the precise definitions below) of the sequence of intervals \( \omega_{i} \) for \( 0\leq i \leq N \). For these reasons, the main technical part of this paper consists of the development of some very efficient and effective procedures for estimating the precise location and size of intervals \( \omega_{i} \).
In view of future applications to parameter exclusion arguments, but also out of independent interest, we compute some additional quantities related to an interval \( \omega \) and an escape time~\( N \). A first obvious quantity of interest is the accumulated derivative along the critical orbit. We will compute bounds for this and thus introduce the following notation: \begin{equation} \label{eq:fnprime} (f^{n})'(\omega) := \left[ \inf_{a\in \omega}(f^{n}_{a})'(c_{0}(a)), \ \sup_{a\in \omega}(f^{n}_{a})'(c_{0}(a)) \right]. \end{equation} Also of great interest is the way in which the iterate \( c_{n}(a) \) of the critical point depends on the parameter. To study this dependence, by some slight abuse of notation, let \( c_{n}\colon\omega \to \omega_{n} \) denote the map \( a \mapsto c_{n}(a). \) The map~\( c_{n} \) is smooth with respect to \( a \) because the family \( f_{a} \) depends smoothly on the parameter, and so we let \( c_{n}'(a) \) denote the derivative of \( c_{n} \) \emph{with respect to the parameter} (which is crucial in the parameter exclusion argument). Then we let \begin{equation}\label{eq:cnprime} c'_{n}(\omega):=\left[\inf_{a\in \omega}c_{n}'(a), \sup_{a\in\omega}c_{n}'(a)\right]. \end{equation} Also of interest in the parameter exclusion arguments, for less obvious and more technical reasons, is the ratio between the derivatives with respect to the parameter and with respect to the phase space variable. We will therefore also compute the following quantities: \begin{equation} \label{eq:cfnprime} \frac{c_{n}'}{(f^{n})'}(\omega) := \left[ \inf_{a\in \omega}\frac{c_{n}'(a)}{(f^{n}_{a})'(c_{0}(a))},\ \sup_{a\in \omega}\frac{c_{n}'(a)}{(f^{n}_{a})'(c_{0}(a))} \right]. 
\end{equation} Notice that bounds for \eqref{eq:cfnprime} can be easily derived from bounds for \eqref{eq:fnprime} and \eqref{eq:cnprime} but these may be quite far from optimal as there is no reason a priori for the lower and upper bounds in \eqref{eq:fnprime} and \eqref{eq:cnprime} to be attained for the same parameters. We will therefore compute bounds for \eqref{eq:cfnprime} directly.
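In the quadratic family \( f_{a}(x)=a-x^{2} \), all of the quantities above can be computed along the critical orbit by elementary recursions, which we record for the reader's convenience; they follow directly from the chain rule, using \( c_{0}(a)=a \) and \( c_{n+1}(a)=a-c_{n}(a)^{2} \):
\[
(f^{n}_{a})'(c_{0}(a)) \;=\; \prod_{i=0}^{n-1} f_{a}'(c_{i}(a)) \;=\; \prod_{i=0}^{n-1}\bigl(-2\,c_{i}(a)\bigr),
\qquad
c'_{n+1}(a) \;=\; 1 - 2\,c_{n}(a)\,c'_{n}(a),
\quad c'_{0}(a)=1.
\]
In particular, bounds for \eqref{eq:fnprime} and \eqref{eq:cnprime} can be accumulated iterate by iterate alongside the enclosures of the intervals \( \omega_{i} \).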
\section{The Results} \label{sec:results}
We now present and discuss the data obtained by our computations. In subsections \ref{sec:stocreg} and~\ref{sec:basic} we give a short overview of the procedure for subdividing the parameter space \( \Omega \) into a potentially large number of smaller subintervals. Then in the remaining subsections we give the statistics of several measurements which we carry out for these intervals. The raw data generated by our software is available in~\cite{data20}.
\subsection{Stochastic and regular intervals} \label{sec:stocreg} One of the results of our computations consists of a finite partition \( \mathcal P \) of \( \Omega \) made up of almost \emph{4~million} explicit subintervals of \( \Omega \). We will write \( \mathcal P \) as the union of two disjoint subsets \[
\mathcal P^{+}=\{\text{``stochastic'' intervals}\}
\quad \text{ and } \quad
\mathcal P^{-} =\{\text{``regular'' intervals}\}
\] according to some rigorous and computationally verifiable properties of each interval. The construction of the partition \( \mathcal P \) depends on certain parameters of which the most important is the constant \( \delta>0 \) which defines the critical neighbourhood \( \Delta \), see \eqref{eq:Delta}. Recalling the notion of escape time in \eqref{eq:escape}, given any \( N_{0}\geq 1 \), we will construct the collection of intervals \( \mathcal P^{+} \) so that
\[
\emph{for each interval \( \omega\in\mathcal P^{+} \) there exists an escape time \( N\geq N_{0} \) for \( \omega \)}.
\] The collection \( \mathcal P^{-} \) then consists simply of intervals for which the existence of such an escape time cannot be verified or, for whatever reason, is not verified in our computations.
The terminology ``regular interval'' and ``stochastic interval'' is only heuristic but suggestive of the fact that, while it is beyond the scope of this paper to prove this, it is reasonable to expect that most parameters in regular intervals are regular and most parameters in stochastic intervals are stochastic, as defined in \eqref{eq:regstoc}. For regular intervals this expectation is based on the data we compute; see the discussion at the end of Section \ref{sec:exclusion}. For stochastic intervals this expectation is based on the arguments of \cite{LuzTak06}, see the discussion in Section \ref{sec:compstart}, and its verification is work in progress.
The purpose of this section is to describe the structure and properties of \( \mathcal P^{+}\) and \( \mathcal P^{-} \) for a particular choice of \( \delta \) and \( N_{0} \), namely \[ \delta=10^{-3} \quad \text{ and } \quad N_{0}=25. \] This particular choice of values is made just for definiteness and has no special significance. Our main goal is to show the kind of information that can be obtained by our computations. In particular, we will give rigorous estimates for the total measure of intervals in \( \mathcal P^{+}\) and \( \mathcal P^{-} \), as well as information about the distribution of the sizes of intervals, the computed values of \( N \), the sizes of \( \omega_{N} \), and other interesting information. The computations could just as well be carried out for any other values of \( \delta \) and \( N_{0} \), though they are clearly more intensive and ``expensive'' for smaller values of \( \delta \) and larger values of \( N_{0} \).
\subsection{Basic strategy} \label{sec:basic}
An important part of our approach is that \( \mathcal P^{+}\) and \( \mathcal P^{-} \) are \emph{dynamically defined}. We do not just try to verify the escape time condition in some a priori given subdivision of \( \Omega \), but rather use dynamical information to subdivide the parameter space \( \Omega \) in an efficient way. This makes a significant difference in terms of maximising the measure of intervals in \( \mathcal P^{+} \) and obtaining much more meaningful results. We describe this construction in detail in Section \ref{sec:P}. There are several non-trivial technical aspects to be addressed, especially in order to guarantee that all our estimates are rigorous, but the general strategy is actually very simple and we sketch it here.
We start with the entire parameter space \( \omega=\Omega \) and consider the iterates \( \omega_{i} \) until they hit~\( \Delta \) at some time \( n\geq 1 \) (that is, \( \omega_n \cap \Delta \neq \emptyset \)). Then we chop \( \omega \) into (at most 3) closed subintervals \( \omega=\omega^{\ell}\cup\omega^{\Delta}\cup\omega^{r} \) (the \emph{left}, \emph{middle}, and \emph{right} parts) with disjoint interiors in such a way that \( \omega^{\ell}_{n}\cap \Delta=\emptyset \) and \( \omega^{r}_{n}\cap \Delta=\emptyset \), and thus \( \omega_n \cap \Delta \subset \omega^\Delta_n \). We let \( \omega^{\Delta}\in \mathcal P^{-} \) and no longer consider any of its further iterations. If \( \omega^{\ell} \) is too small (according to some criteria specified precisely in Section~\ref{sec:assigning}), we also let it belong to \( \mathcal P^{-} \) and stop iterating; the same with \( \omega^{r} \). Otherwise, we continue iterating \( \omega^{\ell} \) and \( \omega^{r} \) until they hit \( \Delta \), and then we repeat the procedure. Every time an interval hits \( \Delta \), we verify whether \( n \geq N_{0} \) and the escape time condition holds. If this happens then we let the interval belong to \( \mathcal P^{+} \) and stop iterating this interval. Moreover, if it is detected at any time during the computation of the iterates \( \omega_i \) that certain other conditions are met which suggest that no further iterate of \( \omega \) is likely to lead to an escape time, we let the interval belong to~\( \mathcal P^{-} \) and stop iterating.
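The control flow just described can be caricatured in a few lines of Python. The sketch below uses plain floating point, tracks only the two endpoint orbits of each parameter interval, and simply bisects instead of performing the left/middle/right chop, so nothing in it is rigorous and all names are our own; it is only a toy model of the strategy.

```python
# Toy, NON-rigorous caricature of the subdivision strategy for
# f_a(x) = a - x^2: iterate the endpoint orbits of each parameter
# interval until their span meets Delta, then either declare an
# "escape" (n >= N0 and large span) or bisect and recurse.
def classify(omega, delta=1e-3, N0=25, min_width=1e-4, max_iter=500):
    """Return lists of 'stochastic' and 'regular' parameter intervals."""
    stochastic, regular, stack = [], [], [omega]
    while stack:
        a, b = stack.pop()
        if b - a < min_width:              # too small to keep iterating
            regular.append((a, b))
            continue
        u, v = a, b                        # c_0(a) = a at each endpoint
        for n in range(1, max_iter + 1):
            u, v = a - u * u, b - v * v    # endpoint orbits c_n(a), c_n(b)
            lo, hi = min(u, v), max(u, v)
            if lo < delta and hi > -delta:           # span meets Delta
                if n >= N0 and hi - lo >= delta ** 0.5:
                    stochastic.append((a, b))        # "escape time" n
                else:
                    m = 0.5 * (a + b)                # crude chop: bisect
                    stack += [(a, m), (m, b)]
                break
        else:                              # never met Delta: give up
            regular.append((a, b))
    return stochastic, regular

s, r = classify((1.9, 2.0))
```

By construction every parameter interval ends up in exactly one of the two lists, so the toy partition covers the original interval.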
It is clear from the description of the construction that the collections \( \mathcal P^{+} \) and \( \mathcal P^{-} \) do not depend canonically on the choices of \( \delta \) and \( N_{0} \). Moreover, they also depend on some other choices; for example, on the level of binary precision \( p \) chosen for the computations, which we set as \( p := 250 \), on the minimum size \( w \) (relative to \( \Omega \)) of an interval to consider it worth iterating, which we set as \( w := 10^{-10} \), and on some other values relevant to the construction, as explained in detail in Section \ref{sec:P}. For the escape condition, we use the bound \( |\omega_N| \geq 0.0317 > \sqrt{\delta} \). In a future paper we plan to analyse systematically the effect of changing these variables of the construction, but preliminary experiments indicate that while different choices may of course lead to quite different intervals being constructed, the overall statistics are remarkably stable and do not depend in a sensitive way on these choices, provided that we do not impose too severe restrictions on the computations, such as taking the precision \( p \) too low or the relative size \( w \) too large.
The computations were completed using the software described in Section~\ref{sec:software} and available at~\cite{software}. They were completed within 35 minutes on a personal laptop computer with the Intel\textsuperscript{\textregistered} Core{\texttrademark} i5-8265U processor. The results of the computations are available in~\cite{data20}.
\subsection{Measure of regular and stochastic intervals} \label{sec:exclusion}
The partition \( \mathcal P \) obtained by our computations is the disjoint union of the families \( \mathcal P^{+} \) and \( \mathcal P^{-} \), consisting respectively of stochastic intervals, which satisfy the escape time condition~\eqref{eq:escape}, and regular intervals, for which this condition was not verified. The fundamental quantities of interest are therefore the number and total measure of the intervals in each family. The first and most striking observation is that \begin{quote} \emph{almost 90\% of parameters belong to stochastic intervals.} \end{quote}
This means that 90\% of parameters belong to intervals which have an escape time at some relatively large value \( N\geq 25 \) (and are therefore good candidates for the parameter exclusion arguments). More precisely, letting \( \#\mathcal P \) denote the cardinality of the partition \( \mathcal P \) and, by some slight abuse of notation, letting \( |\mathcal P| \) denote the total measure of intervals in \( \mathcal P \), we have the following results. The partition \( \mathcal P \) is formed by almost 4 million intervals or, more precisely, \[
\#\mathcal P = 3,\!969,\!763 \quad \text{ and } \quad |\mathcal P|=0.6 = |\Omega|. \] Of these, about 36\% in number and 90\% in measure are stochastic, more precisely \[ \#\mathcal P^{+} = 1,\!436,\!063 \geq 0.36 \#\mathcal P
\quad \text{ and } \quad
|\mathcal P^{+}|\geq 0.539934844013 \geq 0.89989 |\Omega|, \] and therefore \[ \#\mathcal P^{-} = 2,\!533,\!700\ \leq 0.64 \#\mathcal P
\quad \text{ and } \quad
|\mathcal P^{-}| \leq 0.060065155986 \leq 0.10011 |\Omega|. \]
In the following subsections we analyse in detail several properties of the family \( \mathcal P^{+} \) of stochastic intervals, which are our main objects of interest. It is worth dwelling a little here on the collection \( \mathcal P^{-} \) of regular intervals, which also exhibits some very interesting features. First of all, as many intervals in \( \mathcal P^{-} \) are adjacent to each other (the same is also true in \( \mathcal P^{+} \)), it can be useful to merge adjacent intervals and consider ``connected components'' of \( \mathcal P^{-} \) which are a bit less dependent on the specifics of the construction. In terms of these connected components, it is interesting to observe that the total measure of \( \mathcal P^{-} \) is disproportionately concentrated on larger intervals. The 100 largest components (actually made up of \( 1{,}124{,}307 \) intervals of \( \mathcal P^{-} \)) take up a total measure of about \( 0.05726 \), which is 95\% of the total measure of \( \mathcal P^{-} \), and the 3 largest components (made up of \( 11{,}830 \), \( 7{,}955 \) and \( 8{,}313 \) intervals respectively) alone take up more than 30\% of the total measure of \( \mathcal P^{-} \). These 3 largest components are contained in the following intervals: \begin{eqnarray*}
&&I_{1} = [1.75208241722, 1.77992046728], \quad |I_{1}|= 0.0278381, \\
&&I_{2} = [1.47590994781, 1.48293277717], \quad |I_{2}|= 0.00702283, \\
&&I_{3} = [1.62533272418, 1.63110961362], \quad |I_{3}| = 0.00577689. \end{eqnarray*}
\begin{figure}
\caption{Distribution of the measure of stochastic parameters in \( \Omega \). Blue bars show the percentage of stochastic parameters in each of the 100 subintervals of \( \Omega = [1.4,2] \). Horizontal red lines show the location of the 10 largest connected components of \( \mathcal{P}^- \). The bifurcation diagram for the quadratic map is shown along the horizontal axis. A more detailed discussion of this picture is given at the end of Section \ref{sec:exclusion}.}
\label{fig:stochasticintervals}
\end{figure}
In Figure \ref{fig:stochasticintervals}, we have represented the ten largest connected components of \( \mathcal P^{-} \) by red horizontal bars to highlight how they match up remarkably well, albeit unsurprisingly, with the well known \emph{periodic windows} which appear in the standard \emph{bifurcation diagram}. It would clearly be interesting to prove that most parameters in regular intervals are indeed regular, perhaps adapting the techniques of \cite{TucWil09}. For clarity and completeness, we remark that Figure \ref{fig:stochasticintervals} was created by dividing \( \Omega = [1.4,2] \) into \( 100 \) intervals \( \omega_1, \ldots, \omega_{100} \) of the same width \( 0.006 \). For each of these intervals \( \omega_i \), the corresponding blue bar shows the percentage of stochastic parameters in \( \omega_i \), that is, the measure of \( \omega_i \cap P^+ \), where \( P^+ \subset \Omega \) is the union of all the intervals in \( \mathcal{P}^+ \). The height of a blue bar below 100\% indicates that \( \omega_i \) intersects some intervals in \( \mathcal{P}^- \). Note that the height of the bars above a large periodic window in the bifurcation diagram (shown along the horizontal axis) is zero if the corresponding \( \omega_i \) is entirely covered by intervals in \( \mathcal{P}^- \). The alignment of bars considerably lower than 100\% with the periodic windows clearly shows how the periodic windows contribute to \( \mathcal{P}^-\). For example, from the graph (or actually from raw data that was used to plot the graph) one can read that about 99.34\% of the interval \( [1.988,1.994] \) is covered by \( \mathcal{P}^+ \), while only some 73.6\% of the interval \( [1.94,1.946] \) is covered by \( \mathcal{P}^+ \).
\subsection{Distribution of sizes of stochastic intervals}
\begin{figure}
\caption{Distribution and number of stochastic intervals of different sizes}
\label{fig:stochdist}
\end{figure}
We now focus on the family \( \mathcal P^{+} \) of stochastic intervals, which occupy almost 90\% of the parameter space \( \Omega \) and are our main objects of interest.
Figure \ref{fig:stochdist} shows the distribution of sizes of stochastic intervals, which turns out to span several orders of magnitude. Most of the measure is taken up by ``medium'' to ``small'' intervals, whereas ``large'' intervals (\( |\omega|\geq 10^{-3} \)) and ``very small'' intervals (\( |\omega|\leq 10^{-7} \)) each take up about 5\% of the total measure. Notice that, perhaps also unsurprisingly, the number of very small intervals exceeds the number of intervals of all other sizes combined. This suggests that, as with the regular intervals, while the number of small intervals grows quite fast, it does not grow fast enough to have a significant effect on the measure.
\subsection{Distribution of escape times \( N \)}
\begin{figure}
\caption{Distribution of escape times of stochastic intervals}
\label{fig:distescape}
\end{figure}
Figure \ref{fig:distescape} shows the distribution of escape times of stochastic intervals. Remarkably, more than 90\% of the intervals, occupying more than 50\% of the measure, escape at the very first opportunity, with escape time \( N=N_{0}=25 \). Most other intervals have escape times just slightly larger than 25, with more than 99.7\% of intervals, occupying 94\% of the measure, having escape times \( 25\leq N \leq 32 \). We emphasize, however, that there is a long ``tail,'' and intervals exist with much higher escape times, up to a maximum escape time of \( N=199 \), attained by 73 distinct intervals in \( \mathcal P^{+} \) taking up 0.00173\% of the total measure of stochastic intervals.
\subsection{Distribution of sizes of intervals \( \omega_{N} \) at escape times}
\begin{figure}
\caption{Distribution of sizes of intervals at escape times}
\label{fig:excludedmeasure}
\end{figure}
Figure \ref{fig:excludedmeasure} shows another distribution, namely the sizes of the images \( \omega_{N} \) of stochastic intervals at their escape times. The results are, in our opinion, quite \emph{unexpected and interesting}, although, given the unpredictable way intervals are repeatedly chopped as part of the construction of the partition \( \mathcal P \), there seems to be no elementary heuristic argument for predicting the size of \( \omega_{N} \).
Recall that by definition of escape time we always have the lower bound \( |\omega_{N}|\geq 0.0317 > \sqrt \delta\), which is relatively small in relation to the length of the interval \( I_{a} \) of definition of the map (which is 4 for the ``top'' parameter \( a=2 \) and slightly less for other parameters). It seems therefore quite remarkable that more than 99.7\% of intervals, occupying almost 90\% of the measure, have relatively ``macroscopic'' size, with \( |\omega_{N}| \geq 0.5 \). Even more, it turns out that intervals occupying some 20\% of the measure have ``very large'' images, i.e. \( |\omega_{N}|\geq 3 \). In Section \ref{sec:study} we analyse in detail the ``personal history'' of one, more or less randomly chosen, interval \( \omega\in\mathcal P^{+} \) with escape time \( N=26 \) and such that \( |\omega_{N}|\geq 3.5 \), in order to help understand the mechanism by which this situation can occur.
Figure~\ref{fig:excludedmeasure} reveals one more interesting piece of information. The first few pieces in the pie chart show the measure of intervals \(\omega\) that yield smaller \( \omega_{N} \). Although their measure is considerable, the actual number of intervals that yield this measure is not very large. For example, the first 4 pieces that yield over 50\% of the measure consist of only 44,256 individual intervals. On the other hand, the last two pieces of the pie chart, corresponding to the largest \( |\omega_{N}| \), comprise as little as 12.4\% of the measure, yet they consist of almost 1.1 million intervals. This shows that there are many large intervals that yield small \( \omega_N \) and many tiny intervals that yield huge \(\omega_N\); one could call it \emph{negative correlation} between \( |\omega| \) and \( |\omega_N| \).
\subsection{Accumulation of derivatives} \label{sec:accum}
\begin{figure}
\caption{Distribution of the lower bounds on \( \tilde f_{N}(\omega) \) computed for the stochastic intervals. The highest encountered value was \( \approx 0.732 \).}
\label{fig:rateF}
\end{figure}
\begin{figure}
\caption{Distribution of the lower bounds on \( \tilde c_{N}(\omega) \) computed for the stochastic intervals. The highest encountered value was \( \approx 0.716 \).}
\label{fig:rateC}
\end{figure}
Figures \ref{fig:rateF} and \ref{fig:rateC} show some results related to the computations of the space and parameter derivatives, as in \eqref{eq:fnprime} and \eqref{eq:cnprime}. These can be of significant interest in a variety of contexts, especially when they exhibit \emph{exponential growth}, which is a non-trivial feature, given that some iterates can be very close to the critical point where the derivative vanishes. In view of this fact, and of the large variation in the escape times for stochastic intervals, it seems best to present the data in the form of average \emph{exponential rate} of growth along the orbits. Thus, for a stochastic interval \( \omega\in \mathcal P^{+} \) with escape time \( N \), and a parameter \( a\in \omega \), we define \[
\tilde f_{N}(a):= \frac 1N \log |(f^{N}_{a})'(c_{0}(a))| \quad\text{ and } \quad
\tilde c_{N}(a):= \frac 1N \log |c'_{N}(a)| \] and then, analogously to \eqref{eq:fnprime} and \eqref{eq:cnprime}, we define \[ \tilde f_{N}(\omega) := \left[ \inf_{a\in \omega} \tilde f_{N}(a), \ \sup_{a\in \omega} \tilde f_{N}(a) \right] \quad\text{ and } \quad \tilde c_{N}(\omega) := \left[ \inf_{a\in \omega} \tilde c_{N}(a), \ \sup_{a\in \omega} \tilde c_{N}(a) \right]. \]
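These rates are straightforward to approximate (non-rigorously, in plain floating point) for a single parameter by carrying the chain-rule recursions along the critical orbit; the following Python sketch, with names of our own choosing, merely illustrates the definitions and is unrelated to the rigorous computations.

```python
import math

def growth_rates(a, N):
    """Return (tilde_f, tilde_c): the average exponential growth rates
    (1/N) log |(f^N_a)'(c_0(a))| and (1/N) log |c'_N(a)| for
    f_a(x) = a - x^2, computed via the chain-rule recursions
    along the critical orbit."""
    x, df, dc = a, 1.0, 1.0            # c_0(a) = a, (f^0)' = 1, c_0'(a) = 1
    for _ in range(N):
        df *= -2.0 * x                 # (f^{n+1})'(c_0) = f_a'(c_n) (f^n)'(c_0)
        dc = 1.0 - 2.0 * x * dc        # c'_{n+1}(a) = 1 - 2 c_n(a) c'_n(a)
        x = a - x * x                  # c_{n+1}(a)
    return math.log(abs(df)) / N, math.log(abs(dc)) / N

# At a = 2 the critical orbit is 2, -2, -2, ..., and both rates are
# close to log 4, the Lyapunov exponent of the top parameter:
tf, tc = growth_rates(2.0, 20)
```

The similarity of the two rates for typical parameters mirrors the statistical similarity of Figures \ref{fig:rateF} and \ref{fig:rateC}.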
Figure \ref{fig:rateF} shows the distribution of lower bounds computed for \( \tilde f_{N}(\omega) \). We note that for 12.5\% of intervals in measure we do not have a positive lower bound, but this does not necessarily mean that there is no exponential growth. Indeed, all these intervals have a positive upper bound (not represented here) and it seems most likely that the lack of a positive lower bound is due to overestimates caused by using interval arithmetic in evaluating these quantities. We also note that there is a remarkably even distribution of lower bounds, with about 10-20\% in measure of parameter intervals in each band, except for the highest rates of growth above \( 0.3 \) which is exhibited only by 2.9\% of parameters. We mention, however, that higher rates are exhibited by smaller fractions of parameters, all the way up to 0.732.
\begin{figure}
\caption{Distribution of the upper bounds on the quotient of derivatives \( c_{N}'(a) / (f^{N}_{a})'(c_{0}(a)) \) computed for the stochastic intervals. The lowest encountered value was almost \( 1.2 \), the highest was close to \( 10^{13} \).}
\label{fig:D}
\end{figure}
\begin{figure}
\caption{Distribution of the upper bounds on the quotient \( \mathcal{D} \) computed for the stochastic intervals. The lowest encountered value was slightly above \( 1 \), the highest was almost \( 400 \).}
\label{fig:quotD}
\end{figure}
Figure \ref{fig:rateC} shows the corresponding statistics for \( \tilde c_{N}(\omega) \) which turn out to be remarkably similar to those for \( \tilde f_{N}(\omega) \). We note, however, that the close relationship between these values is ``real'', not just statistical, as demonstrated in Figures \ref{fig:D} and \ref{fig:quotD} which refer to the measurements of the ratio \eqref{eq:cfnprime} between these two quantities.
Figure \ref{fig:D} shows the statistics of the upper bounds for this ratio, and should be interpreted in conjunction with Figure \ref{fig:quotD} which gives upper bounds for the \emph{distortion}
\[
\mathcal D:= \frac{\sup_{a\in\omega}\{|c'_{N}(a)/(f^{N}_{a})'(c_{0}(a))|\} }
{\inf_{a\in\omega}\{|c'_{N}(a)/(f^{N}_{a})'(c_{0}(a))|\}}. \]
It seems highly remarkable that this distortion is very close to 1 in most of the intervals, both in cardinality and in measure, and \( <1.5 \) for more than 75\% of intervals, both in cardinality and in measure. This means that for most parameters the upper and lower bounds for \( |c'_{N}(a)/(f^{N}_{a})'(c_{0}(a))| \) are comparable and thus Figure \ref{fig:D} gives a good representation of its actual values. It is therefore also quite remarkable that this ratio is \( < 2 \) for over 75\% of parameter intervals in measure.
\section{The Computations} \label{sec:strategies}
In order to cater for readers with different levels of familiarity with computational methods, in Sections \ref{sec:strategies}-\ref{sec:algorithms} we give increasingly detailed and technical descriptions of the computational procedures and algorithms used to obtain the results given in Section \ref{sec:results}. We begin, in this section, by explaining our general strategy for the computations, in a way that is easily accessible to anyone with some familiarity with one-dimensional dynamics, emphasising nevertheless some crucial but subtle aspects related to the need to obtain rigorous explicit bounds. In Section \ref{sec:P} we give a detailed but non-technical explanation of the procedure for constructing the families of intervals \( \mathcal P^{-} \) and \( \mathcal P^{+} \) using the results of the calculations described in this section. In Section \ref{sec:procedure} we explain how the calculations can be formalised in order to work with computer representable numbers and to yield rigorous bounds for all the quantities we compute. Finally, in Section \ref{sec:algorithms} we describe precisely the algorithms used to implement each step of the procedure.
The computations can be divided roughly into three categories, which we describe in the following three subsections.
\subsection{Iterating} \label{sec:iterate} The core challenge we address in this paper is the development of effective techniques for the computation of intervals \( \omega_{n}:=\{c_{n}(a): a\in \omega\} \), for some given parameter interval \( \omega\subseteq \Omega \).
The first step in this direction is clearly the development of effective techniques for the computation of the point \( c_{n}(a):=f^{n}_{a}(c_{0}(a))=f^{n}_{a}(a) \) for a fixed parameter \( a\in \omega \) (recall that \( c=0 \) and so \( c_{0}(a)=f_{a}(c)=a \)). This is already non-trivial since the value of \( a \) may not be computer representable and therefore require an approximation strategy before we even begin iterating. Even if \( a \) is representable, its first image \( c_{1}(a):=f_{a}(a)=a-a^{2} \) is very possibly not representable, and similarly for higher iterates. Fortunately, tried and tested methods, known as \emph{interval arithmetic} \cite{Moore1966, WT2011}, exist and can be very effective for these kinds of computations. They consist essentially of \emph{enclosing} the point to be iterated in a small interval whose endpoints are representable numbers, and then applying the map to this interval to obtain a rigorous \emph{enclosure}, and therefore an approximation, of the image of the given point. The method of course gives increasingly large enclosures, and therefore increasingly poor approximations, for higher iterates \( c_{n}(a) \) but these can still be obtained to any desired precision for a fixed \( n \) by increasing the computer precision and therefore the cardinality, and ``density'', of the set of representable numbers. For example, in the calculations in Section \ref{sec:results} we work with about 80 decimal places.
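As a concrete illustration of the enclosure idea, the following Python sketch (ours, not the actual implementation, which works with about 80 decimal digits of precision) iterates an enclosure of the critical orbit of \( f_{a}(x)=a-x^{2} \), widening endpoints outward by one ulp after each floating point operation as a crude stand-in for proper directed rounding:

```python
import math

def widen(lo, hi):
    """Push endpoints outward by one ulp to absorb rounding errors
    (a crude stand-in for directed rounding)."""
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def square(lo, hi):
    """Rigorous enclosure of {x^2 : x in [lo, hi]}."""
    cands = (lo * lo, lo * hi, hi * hi)
    s_lo, s_hi = min(cands), max(cands)
    if lo <= 0.0 <= hi:
        s_lo = 0.0          # x^2 >= 0 even when the interval straddles 0
    return widen(s_lo, s_hi)

def orbit_enclosures(a, n):
    """Enclosures of c_1(a), ..., c_n(a) for f_a(x) = a - x^2, c_0(a) = a."""
    lo = hi = a
    out = []
    for _ in range(n):
        s_lo, s_hi = square(lo, hi)
        lo, hi = widen(a - s_hi, a - s_lo)
        out.append((lo, hi))
    return out
```

A simple exact-arithmetic check (iterating the same orbit with rational arithmetic) confirms that the true orbit \( c_{n}(a) \) stays inside each enclosure, while the widths of the enclosures grow with \( n \), as discussed above.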
In principle we could blindly apply the interval arithmetic techniques also to the computation of the intervals \( \omega_{n} \). Indeed, supposing for example that the parameter interval \( \omega=[\mathbf{a},\mathbf{b}] \) was given by endpoints which are representable numbers (here and below we will conventionally use bold type to denote representable numbers) and that the same was true of the interval \( \omega_{i} := [\mathbf{a}_{i},\mathbf{b}_{i}] \) for some \( i\geq 0 \) (or that we had a representable enclosure of the interval \( \omega_{i} \), this does not make much of a difference for the discussion here). Then we could use interval arithmetic to compute a rigorous enclosure for all possible values of \( a-x^{2} \) for all possible \( a\in \omega \) and \( x\in \omega_{i} \), thus yielding a rigorous enclosure for \( \omega_{i+1} \). It is easy to see, however, that this will very likely produce \emph{huge} overestimates of \( \omega_{i+1} \), which would moreover compound at each iteration, and is therefore not at all a very \emph{effective} way to proceed. The reason for the overestimation is due to the fact that this approach consists of iterating \emph{every} point in \( \omega_{i} \) by \( f_{a} \) for \emph{every} parameter \( a\in \omega \), rather than iterating each point in \( \omega_{i} \) just by the corresponding parameter. The enclosure for \( \omega_{i+1} \) will therefore contain the points \( f_{\mathbf{a}}(\mathbf{b}_{i}) \) and \( f_{\mathbf{b}}(\mathbf{a}_{i}) \) which may be much further apart than necessary if, for example, \( \omega_{i} \) lies on the right of the critical point.
At first sight, there is an obvious solution to this problem, which is to simply iterate the points corresponding to the endpoints \( \mathbf{a} \) and \( \mathbf{b} \) of the parameter interval \( \omega \), i.e., to compute the points \( c_{n}(\mathbf{a}) \) and \( c_{n}(\mathbf{b}) \). As mentioned above, the computation of these points can be easily achieved to arbitrary precision. The problem, however, is that it is not necessarily the case that \( c_{n}(\mathbf{a}) \) and \( c_{n}(\mathbf{b}) \) are the endpoints of \( \omega_{n} \) even though \( \mathbf{a} \) and \( \mathbf{b} \) are the endpoints of \( \omega \), since the map \( c_{n}\colon\omega\to\omega_{n} \) may fail to be injective, and if it is not injective then it may ``fold'' and one of \( c_{n}(\mathbf{a}) \) or \( c_{n}(\mathbf{b}) \) may lie in the interior of \( \omega_{n} \). We can resolve this issue if, recalling \eqref{eq:cnprime}, we have \begin{equation}\label{eq:mon} 0\notin c_{n}'(\omega) \end{equation} which implies that \( c_{n}'(a) \neq 0 \) for every \( a\in \omega \) and therefore that the map \( c_{n} \) is \emph{monotone} on \( \omega \). This implies that \( c_{n}(\mathbf{a}) \) and \( c_{n}(\mathbf{b}) \) are indeed the endpoints of \( \omega_{n} \) and therefore provides both \emph{inner} and \emph{outer} enclosures of \( \omega_{n} \) to arbitrary precision.
We emphasize that \eqref{eq:mon} cannot always be verified and that its verification is implicitly one of the conditions required for a parameter interval \( \omega \) to belong to \( \mathcal P^{+} \). As mentioned above, the collection \( \mathcal P^{-} \) is formed by those intervals for which the escape condition cannot be verified, for a variety of possible reasons, and failure to satisfy \eqref{eq:mon} is one of these reasons. We will describe below the precise way in which we check \eqref{eq:mon}, we just mention here that it will be done by a simple inductive procedure. For \( n=0 \), we have \( c_{0}(a)=a \), and therefore \( c_{0}'(a) = 1 \) for all \(a \in \omega\). For \( n\geq 1 \), we use the formula \begin{equation}\label{eq:derivC} c_n'(a)=-2c_{n-1}(a) \cdot c_{n-1}'(a)+ 1. \end{equation}
If we have rigorous enclosures for both \( \omega_{n-1} \) and \( c'_{n-1}(\omega) \) then we can use \eqref{eq:derivC} and standard interval arithmetic computations to obtain a rigorous enclosure for \( c'_{n}(\omega) \) and check \eqref{eq:mon}.
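The joint inductive computation of \( \omega_{n} \) and \( c'_{n}(\omega) \), together with the check of \eqref{eq:mon}, can be sketched along the same lines; here is a hedged Python illustration (the interval operations use one-ulp outward widening in place of directed rounding, and all names are ours):

```python
import math

def widen(lo, hi):
    """One-ulp outward widening, standing in for directed rounding."""
    return math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf)

def imul(x, y):
    """Enclosure of the product of two intervals."""
    cands = (x[0] * y[0], x[0] * y[1], x[1] * y[0], x[1] * y[1])
    return widen(min(cands), max(cands))

def isub(x, y):
    """Enclosure of the difference of two intervals."""
    return widen(x[0] - y[1], x[1] - y[0])

def derivative_enclosures(w_lo, w_hi, n):
    """Joint enclosures of c_n(omega) and c_n'(omega) for omega = [w_lo, w_hi],
    via c_0(a) = a, c_0' = 1 and the recursion c_k' = -2 c_{k-1} c_{k-1}' + 1.
    Returns (c_enclosure, d_enclosure, monotone), where monotone records
    whether 0 was excluded from every c_k'(omega), k <= n."""
    omega = (w_lo, w_hi)
    c, d = omega, (1.0, 1.0)
    for _ in range(n):
        d = isub((1.0, 1.0), imul((2.0, 2.0), imul(c, d)))
        c = isub(omega, imul(c, c))   # imul(c, c) is a valid (loose) superset
        if d[0] <= 0.0 <= d[1]:
            return c, d, False        # monotonicity cannot be verified
    return c, d, True
```

For a small parameter interval the returned derivative enclosure stays bounded away from zero for several iterates, so monotonicity can be verified; for larger intervals or higher iterates the check may fail, which is exactly the situation handled by the chopping procedure below.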
\subsection{Differentiating} \label{sec:estimate}
As mentioned in the introduction, we are also interested in computing rigorous enclosures for the intervals \( (f^{n})'(\omega) \) and \( c_{n}'/(f^{n})'(\omega) \) defined in \eqref{eq:fnprime} and \eqref{eq:cfnprime}. For both intervals we use an inductive procedure similar to that used for the calculation of \( c'_{n}(\omega) \) above. Specifically, by the chain rule we have \begin{equation}\label{eq:chain} \begin{aligned} (f^{n}_{a})'(c_0(a)) &= f'_{a}(f^{n-1}_{a}(c_0(a))) (f^{n-1}_{a})'(c_0(a)) \\ &= -2 f^{n-1}_{a}(c_0(a)) (f^{n-1}_{a})'(c_0(a)) \\ &= -2 c_{n-1}(a) (f^{n-1}_{a})'(c_0(a)) \end{aligned} \end{equation} and therefore, using interval arithmetic, rigorous enclosures for \( \omega_{n-1} \) and for \( (f^{n-1})'(\omega) \) immediately yield rigorous enclosures for \( (f^{n})'(\omega) \). Similarly, dividing \eqref{eq:derivC} by \eqref{eq:chain} gives \begin{equation}\label{eq:quot} \frac{c'_{n}(a)}{(f^{n}_a)'(c_0(a))}= \frac{c'_{n-1}(a)}{(f^{n-1}_a)'(c_0(a))}+\frac{1}{(f^{n}_a)'(c_0(a))} \end{equation} and therefore rigorous enclosures for \( c'_{n-1}/(f^{n-1})'(\omega) \) and for \( (f^{n})'(\omega) \) yield a rigorous enclosure for \( c'_{n}/(f^{n})'(\omega) \). Notice that an enclosure for \( c'_{n}/(f^{n})'(\omega) \) could also be computed directly from the enclosures of \( c'_{n} \) and~\( (f^{n})' \) by taking the worst case bounds, but the bounds we compute here, using \eqref{eq:quot} inductively, are clearly much sharper.
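As a quick (non-rigorous, plain floating point) sanity check of this inductive scheme, the following Python snippet computes \( c'_{n} \) and \( (f^{n})' \) for a single parameter and updates the quotient inductively, adding \( 1/(f^{n}_{a})'(c_{0}(a)) \) at step \( n \), as obtained by dividing \eqref{eq:derivC} by \eqref{eq:chain}; the result agrees with the directly computed quotient \( c'_{n}/(f^{n})' \):

```python
def quotients(a, n):
    """Non-rigorous float check: inductive quotient update vs direct division,
    for the single parameter a and the map f_a(x) = a - x^2."""
    c = a        # c_0(a) = a
    cp = 1.0     # c_0'(a) = 1
    fp = 1.0     # (f_a^0)'(c_0(a)) = 1
    q = 1.0      # c_0'/(f^0)' = 1
    for _ in range(n):
        fp = -2.0 * c * fp       # chain rule, using the old c = c_{k-1}
        cp = 1.0 - 2.0 * c * cp  # derivative recursion for c_k'
        q = q + 1.0 / fp         # inductive quotient update
        c = a - c * c
    return q, cp / fp
```

In the rigorous version each of these scalar operations is replaced by an interval operation with directed rounding, exactly as for the computation of \( c'_{n}(\omega) \) above.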
\subsection{Chopping} \label{sec:avoid} Finally, we discuss in a bit more detail the chopping procedure described briefly at the end of Section \ref{sec:stocreg} leading to the construction of the partition \( \mathcal P \). As described there, the basic strategy is very simple and intuitive, chopping intervals which hit the critical neighbourhood \( \Delta \), say at some time \( n\geq 1 \), into subintervals which either land outside \( \Delta \) and can be iterated further, or continue to intersect \( \Delta \) and therefore belong to \( \mathcal P^{-} \). The computational problem is simply stated and consists of finding the boundary between the parameters which fall into \( \Delta \) and those which do not. While we can have a very good approximation of the entire interval \( \omega_{n} \) following the procedure described in Section \ref{sec:iterate}, this is based on computation of the endpoints and does not help in identifying the parameters in the interior of \( \omega \) whose images fall into a particular position, such as close to the boundary of \( \Delta \). The map \( c_{n} \colon \omega\to \omega_{n}\) is not affine and therefore we cannot directly recover the parameters in \( \omega \) which map to the boundary points of \( \Delta \) under \( c_{n} \), even knowing with a good degree of accuracy the position of the boundary points of \( \omega_{n} \).
Our approach is to use a relatively straightforward variant of the numerical algorithm known as the \emph{bisection method} (see e.g. \cite[\S 3.1]{KinChe1991}). In order to explain this approach, let us fix \( \omega = [u,v] \) and \( n > 0 \). Assume condition \eqref{eq:mon} holds true, and \( \omega_n \cap \Delta \neq \emptyset \). Assume \( c_n (u) \notin \Delta \). For simplicity of notation, assume \( c_n \) is increasing. To make the idea clear, let us ignore rounding errors for the moment and assume the computations are exact. We are going to construct inductively two sequences of numbers \( \{x_i\} \) and \( \{y_i\} \), with \( x_i < y_i \)
and \( |y_i - x_i| = 2^{-i} |v - u| \), with the following property: \( c_n ([u,x_i]) \cap \Delta = \emptyset \) and \( c_n ([u,y_i]) \cap \Delta \neq \emptyset \). In this way, by computing consecutive elements of the two sequences, we are going to get a gradually better approximation of \( c_n^{-1} (-\delta) \). Set \( x_0 := u \) and \( y_0 := v \), which satisfy the required properties. Now assume \( x_{i} \) and \( y_{i} \) have been constructed. Take \( t_{i+1} := (x_i + y_i) / 2 \), and compute \( c_n (t_{i+1}) \). If \( [c_n (u), c_n (t_{i+1})] \cap \Delta = \emptyset \) then set \( x_{i+1} := t_{i+1} \) and \( y_{i+1} := y_{i} \). Otherwise, set \( x_{i+1} := x_{i} \) and \( y_{i+1} := t_{i+1} \). It is straightforward to see that the new elements \( x_{i+1} \) and \( y_{i+1} \) also satisfy the properties. Take the interval \( [u,x_{k}] \) for some relatively large \( k > 0 \), e.g., \( k = 30 \), for one of the subintervals, say \( \omega^{\ell} \). Repeat the same for the other endpoint of \( \omega \) to obtain the other subinterval \( \omega^{r} \), provided that \( c_n (v) \notin \Delta \).
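Ignoring rounding, the construction of the sequences \( \{x_i\} \) and \( \{y_i\} \) can be sketched in a few lines of Python (the function \texttt{cn} stands for the evaluation of \( c_{n} \) at a single parameter and is assumed increasing, as in the discussion above):

```python
def chop_left(cn, u, v, delta, steps=30):
    """Bisection sketch: cn is assumed increasing on [u, v], with
    cn(u) < -delta and cn([u, v]) meeting the critical nbhd (-delta, delta).
    Maintains the invariant: cn([u, x]) misses the nbhd, cn([u, y]) meets it."""
    x, y = u, v
    for _ in range(steps):
        t = 0.5 * (x + y)
        if cn(t) < -delta:   # [cn(u), cn(t)] still misses the nbhd
            x = t
        else:
            y = t
    return x, y              # |y - x| = 2**(-steps) * (v - u)
```

In the actual procedure \texttt{cn} is evaluated by the rigorous iteration of Section \ref{sec:iterate}, and the analogous search is repeated from the other endpoint \( v \) to produce \( \omega^{r} \).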
We remark that the convergence of the bisection method is exponential; for example, after \( 30 \) steps, the size of the new interval is computed with the precision of \( 2^{-30} \approx 10^{-9} \) relative to the size of the original interval, which may be satisfactory in most cases. The computation of each step is fast, because it consists of computing \( c_n \) for a single point. The quantities discussed in the previous two subsections, computed along with the iterations of \( \omega \), can be used further with the smaller intervals, or can be re-computed from scratch; in this paper we chose the second option, because the computation is not very costly, and we can expect to get better estimates for those quantities, due to the smaller interval \( \omega \). Finally, note that due to approximations and rounding, or different monotonicity of \( c_n \) than assumed above, the actual procedure is technically more sophisticated; we discuss the details in Section~\ref{sec:procedure}.
\section{The Partition} \label{sec:P} The construction of the partition \( \mathcal P \), as mentioned above, is based on the computations described in Section \ref{sec:strategies}. However, the way these computations are combined to explicitly construct \( \mathcal P \) is non-trivial, and requires the introduction of some auxiliary constants, and various criteria on when to stop the computations and on how to decide if an interval belongs to \( \mathcal P^{-} \) or \( \mathcal P^{+} \). We describe heuristically, but in some detail, the overall scheme, and postpone to Section \ref{sec:algorithms} the precise formulation of the formal structure of the algorithms.
\subsection{Defining a queue}
The general principle underlying the construction of the partition \( \mathcal P \) is quite simple and is outlined in Section \ref{sec:stocreg}. The construction relies in a fundamental way on the computations discussed in Section \ref{sec:strategies} and essentially boils down to a combination of iterating and chopping parameter intervals. We note, however, that this produces a large number (\emph{possibly millions!}) of small parameter intervals and some criteria need to be put in place regarding the order with which we handle these intervals, at which point we stop iterating, and how we decide to assign such intervals to either one of the families \( \mathcal P^{+} \) or \( \mathcal P^{-} \). For that purpose, we use the notion of a \emph{queue}. In our setting this can be formulated in the following way. At any given moment we have a partition \( \mathcal P \) of \( \Omega \) given by the union of three families of closed intervals with disjoint interiors: \[ \mathcal P = \mathcal P^{+}\cup \mathcal P^{-}\cup\mathcal P^{q} \] where \( \mathcal P^{+} \) consists of stochastic intervals, \( \mathcal P^{-} \)
consists of regular intervals, and \( \mathcal P^{q} \) consists of intervals in the queue.
Initially, the entire parameter space \( \Omega \) is placed in the queue as a single interval, and therefore we have \begin{equation}\label{eq:Pstart} \mathcal P = \mathcal P^{q}=\{\Omega\} \quad \text {and } \quad
\mathcal P^{+} = \mathcal P^{-} = \emptyset.
\end{equation}
As the process runs, intervals in \( \mathcal P^{q} \) get iterated and possibly chopped and, according to a set of criteria which we are about to describe, the resulting subintervals are either assigned to \( \mathcal P^{+} \) or \( \mathcal P^{-} \), after which they are no longer iterated, or to \( \mathcal P^{q} \) for possible further iteration. Eventually we end up with a situation where
\begin{equation}\label{eq:Pend}
\mathcal P^{q}=\emptyset
\quad \text {and } \quad
\mathcal P = \mathcal P^{+}\cup \mathcal P^{-}
\end{equation} at which point we consider to have concluded our construction. In the following subsections we explain the precise mechanism and criteria for moving intervals from the queue into \( \mathcal P^{+} \) or~\( \mathcal P^{-} \) and adding intervals to the queue. We say that an interval is \emph{enqueued} if it is added to the queue, and it is \emph{dequeued} if it is taken back from the queue.
\subsection{Processing the queue} \label{sec:assigning} The process of moving from the initial partition \eqref{eq:Pstart} to the final partition \eqref{eq:Pend} requires several actions and decisions based on the outcome of the computations described in Section \ref{sec:strategies}. For clarity, we subdivide our explanation of these actions and decisions in a few steps. We suppose that \( \omega\in \mathcal P^{q} \) is an interval in the queue and explain what we do with it and how we decide at some point whether it belongs to \( \mathcal P^{-} \) or \( \mathcal P^{+} \) or whether it gets chopped, at which point we need to decide what to do with the remaining subintervals. We consider various cases.
The first and, in some sense, most important case, is when we are successfully able to compute (approximate) iterates of \( \omega \) up to some time \( n\geq 1 \)
for which \( \omega_{n}\cap \Delta\neq\emptyset \) (or, more precisely, where the outer enclosure of \( \omega_{n} \) intersects \( \Delta \), thus indicating that \( \omega_{n} \) \emph{may} intersect~\( \Delta \)). We will consider two subcases.
\noindent
\textbf{(P1a)} If \( \omega_{n}\cap \Delta\neq\emptyset \), \( n\geq N_{0} \) and the escape time conditions \eqref{eq:escape} hold, we let \( \omega\in \mathcal P^{+} \).
\noindent
This is the one and only situation where we are ``successful'' and place intervals in \( \mathcal P^{+} \). In all other cases below, possibly after subdividing the original interval, we will either ``give up'' on one or more of the resulting subintervals and place them in \( \mathcal P^{-} \), or save them for further iteration by placing them back in the queue \( \mathcal P^{q} \).
\noindent
\textbf{(P1b)} If \( \omega_{n}\cap \Delta\neq\emptyset \) but \( n< N_{0} \) or the escape time conditions \eqref{eq:escape} do not hold, then we chop the interval \( \omega \) according to the procedure described in Sections \ref{sec:stocreg} and \ref{sec:avoid} (and in detail in Algorithm~\ref{alg:hitDelta}). This chopping procedure subdivides \( \omega \) into at most 3 disjoint subintervals \( \omega=\omega^{\ell}\cup\omega^{\Delta}\cup\omega^{r} \) such that \( \omega^{\ell}_{n}\cap \Delta=\emptyset \) and \( \omega^{r}_{n}\cap \Delta=\emptyset \). We let \( \omega^{\Delta}\in\mathcal P^{-} \) since it intersects~\( \Delta \) and therefore cannot ever satisfy the escape time conditions at any time in the future. The decision about what to do with \( \omega^{\ell}, \omega^{r} \) depends on their size. If they are too small they may contribute little to the final result, and thus one might consider processing them a waste of the computational resources that could otherwise be assigned to investigating larger intervals. We therefore introduce the variable \( w \geq 0 \) to indicate the minimum width of an interval, relative to the width of \( \Omega \), that we are willing to continue iterating. If the size of the subintervals is \( \geq w \) we place them back in the queue \( \mathcal P^{q} \), whereas, if their sizes are \( <w \) we ``abandon'' them by placing them in \( \mathcal P^{-} \). The results described in Section \ref{sec:results} are based on a choice of \( w=10^{-10} \).
\noindent There are only two reasons for which we may fail to arrive at a situation where \( \omega_{n}\cap \Delta\neq\emptyset \): there may be some technical/computational issue which does not allow us to properly compute the iterates of \( \omega \); or it may happen simply that we keep iterating \( \omega \) and it just never hits \( \Delta \). In the first case we distinguish again two subcases.
\noindent
\textbf{(P2a)} It may happen, possibly due to overestimations caused by the rounding procedures, that for some iterate \( n \) we may have \( 0\in c_{n}'(\omega) \) and/or \( 0\in (f^{n})'(\omega) \) (recall \eqref{eq:cnprime} and \eqref{eq:fnprime}). The first case indicates a failure of the technical condition \eqref{eq:mon} which is required to continue iterating the interval, and the second a failure of another technical condition which is required to verify some properties of our calculations, see \eqref{eq:ind} in Theorem \ref{thm:intervals} below. In both these cases, rather than giving up on \( \omega \) straight away, by placing it in \( \mathcal P^{-} \), we bisect \( \omega \) and consider the two resulting halves of the interval. As in (P1b) we then consider the size of these subintervals. If they are larger than \( w=10^{-10} \) of the size of \( \Omega \) we place them back in the queue \( \mathcal P^{q} \), while if they are smaller than \( w \) we place them in \( \mathcal P^{-} \).
\noindent \textbf{(P2b)} A second technical issue which can arise is the situation where the lower and upper bounds for the endpoints of \( \omega_n \) are further apart than the distance between the endpoints themselves. This situation is an effect of rounding errors introduced while evaluating the function~\( f_a \) and suggests that the precision of representable numbers used for the computation is too low. If this happens then we do not get any reasonable lower bound on the width of \( \omega_n \), and therefore iterating \( \omega \) further is pointless; moreover, this situation is explicitly excluded at various steps of our arguments, see \eqref{eq:innerouter} and \eqref{eq:indn}. There is no way to improve the result, apart from choosing a different precision of numerical computation (choosing a different set \( \mathbf{R} \) of representable numbers). We therefore ``abandon'' such an interval by assigning it to \( \mathcal P^{-} \). We note, however, that due to the very high precision with which we work, the situation can only occur with extremely small intervals, and therefore this does not seem to provide a significant loss in terms of measure of intervals which eventually make up \( \mathcal P^{+} \).
\noindent Finally, we consider the case where we can continue iterating \( \omega \) but it never intersects \( \Delta \).
\noindent \textbf{(P3)} If \( \omega \) is iterated a huge number of times without hitting \( \Delta \) then this most likely means that the sequence \( \{\omega_n\} \) got trapped inside the attracting neighbourhood of some stable periodic orbit and we are very unlikely to see any escape time in the future. We therefore define the maximum number \( N_{\max} > 0 \) of iterations that we allow without ever hitting \( \Delta \) and assign an interval to \( \mathcal P^{-} \) if this number is exceeded (see Algorithm~\ref{alg:interval} for the implementation). The results described in Section \ref{sec:results} are based on a choice of \( N_{\max} = 200 \).
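The decision rules (P1a)-(P3) can be summarised schematically as follows (a Python sketch; the boolean inputs stand for the outcomes of the corresponding checks, and the default value of \( N_{0} \) is a placeholder, not the one used in the actual computations):

```python
def classify(hit_delta, n, escape_ok, derivative_ok, precision_ok,
             N0=25, N_max=200):
    """Outcome for an interval dequeued and iterated up to time n.
    Returns one of: 'P+', 'P-', 'chop', 'halve', 'requeue'.
    After 'chop' or 'halve', subintervals wider than the threshold w are
    re-enqueued; narrower ones go to P- (not modelled here)."""
    if not precision_ok:
        return "P-"       # (P2b): rounding error swamps the interval width
    if not derivative_ok:
        return "halve"    # (P2a): 0 in c_n'(omega) or (f^n)'(omega); bisect
    if hit_delta:
        if n >= N0 and escape_ok:
            return "P+"   # (P1a): escape time conditions verified
        return "chop"     # (P1b): split off the part that stays in Delta
    if n >= N_max:
        return "P-"       # (P3): likely trapped near a stable periodic orbit
    return "requeue"      # keep iterating
```

This dispatch is only a schematic view; the precise formulation, including the handling of the subinterval widths, is given in the algorithms of Section \ref{sec:algorithms}.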
\noindent We remark that, while it is not our goal in this paper to prove that any particular parameters belong to~\( \Omega^{-} \), our rules for placing parameter intervals in \( \mathcal P^{-} \) suggest a strong probability that such intervals belong to, or substantially intersect, open sets in \( \Omega^{-} \).
Since we know these intervals explicitly, our calculations may provide the foundations for further work, possibly applying techniques similar to those of~\cite{TucWil09} or~\cite{Gal17}, to actually prove that certain parameter intervals are indeed regular.
\subsection{Emptying the queue} By setting up the numbers \( w \) and \( N_{\max} \), we ensure that all the parameters in \( \Omega \) are eventually moved into either \( \mathcal P^{+} \) or \( \mathcal P^{-} \) and that therefore the process eventually terminates. Our choice of constants used to obtain the results presented in Section \ref{sec:results} leads to a complete construction of the partition \( \mathcal P \), made up of more than 3.9 million intervals, in only 35 minutes of computation time on a laptop computer.
It is not completely clear, however, how quickly the computation time may increase if we choose smaller values of the radius \( \delta \) of the critical neighbourhood, larger values of \( N_{0} \), smaller values of the minimum size \( w \) of intervals we consider, or larger values for the number \( N_{\max} \) of iterates before giving up on an interval. It is therefore worth putting in place some ``safeguards'' against the possibility of an essentially never ending computation. We can easily do this by specifying some criteria which limit the amount of computations which we carry out. If any of these criteria are met, we simply stop the computations and transfer all intervals still in the queue to \( \mathcal P^{-} \). This is still completely consistent with the spirit of the construction since the family \( \mathcal P^{-} \) is just the collection of intervals for which we could not verify the escape time condition. The three constraints we can impose are fairly obvious.
\noindent 1) We can fix the maximal number \( i_{\max} > 0 \) of intervals to be processed: we keep track of each time an interval is picked from the queue for iteration, until one of the situations described above occurs. After processing this number of intervals, we interrupt the computations.
\noindent
2) We can fix the maximal allowed queue size \( q > 0 \); for every interval \( \omega \) that is processed, up to two new intervals are added to the queue when \( \omega_n \) hits \( \Delta \) for some \( n \) or when a problem occurs and the interval \( \omega \) is halved; thus the size of the queue grows linearly during the progress of the computation. If the number of intervals stored in the queue reaches or exceeds \( q \) then we interrupt the computations. Setting the limit on the queue size protects against memory overflow that might be caused by storing too many intervals in the queue, especially if high precision numbers are used that might occupy a considerable amount of memory.
\noindent 3) We can fix the maximal time \( t > 0 \) (in seconds) that can be used by the program. This constraint is especially useful in order to bound the amount of time that one is willing to wait for the final result, and also to protect the web server's resources when providing access to the program through the web interface.
We conclude this section with a discussion of the non-trivial problem of deciding how to prioritize the intervals in the queue, i.e., how to decide which interval to iterate at any given moment. This is especially important if the computation is stopped before the queue is empty, for example, if the program is allowed to run for a limited amount of time only.
From a computational point of view, a queue is a data structure that is capable of storing objects of certain type, and provides means for extracting them. There are different types of queues in terms of the order in which the objects are extracted. For example, the \emph{fifo queue} (``first in -- first out'') provides the objects in the order in which they were put in the queue (like a typical queue in a supermarket), and the \emph{lifo queue} (``last in -- first out'') provides the most recently stored object first (like a stack of plates).
It seems that a good approach is to use a \emph{priority queue}, in which objects are sorted based on some priority, and the ones with the highest priority are extracted first. More specifically, the queue stores parameter intervals together with the number of times they were successfully iterated, and this number serves as the priority in our queue; \emph{we first extract intervals that were iterated the least number of times}. In this way, we prioritise those intervals that are lagging behind the others in iterations, so that we can achieve a state in which all the intervals that are left in the queue have been iterated at least a certain number of times. If several intervals in the queue have been iterated the same number of times (as is of course often the case) we take the biggest first. In the quite unlikely event that two such intervals are exactly the same size, we introduce other criteria such as priorities depending on the reasons an interval was added to the queue.
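The priority just described (fewest successful iterations first, ties broken by taking the widest interval) maps directly onto a binary min-heap, as in this Python sketch (the names are ours, not those of the actual implementation):

```python
import heapq

class IntervalQueue:
    """Priority queue of parameter intervals: least-iterated first,
    ties broken by width, widest first (width is negated so that the
    min-heap pops the biggest interval)."""
    def __init__(self):
        self._heap = []

    def enqueue(self, n_iter, lo, hi):
        heapq.heappush(self._heap, (n_iter, -(hi - lo), lo, hi))

    def dequeue(self):
        n_iter, _neg_w, lo, hi = heapq.heappop(self._heap)
        return n_iter, (lo, hi)

    def __len__(self):
        return len(self._heap)
```

With this structure, both enqueueing and dequeueing take logarithmic time in the queue size, which matters when the queue holds millions of intervals.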
\subsection{Case study} \label{sec:study}
We conclude this section with a case study of the ``history'' of a specific stochastic interval \( \omega \in \mathcal{P}^+ \) that actually appeared in the computations described in Section~\ref{sec:results}, in order to illustrate some of the processes described above in a concrete case. We consider the interval \( \omega^{(1)} \) for which we have the following outer enclosure when rounding the endpoints to 12 significant digits: \[ \omega^{(1)} \subset [1.96076793815,1.96077475689]. \]
Its iterates are shown in Table~\ref{tab:bigN}. This is the 1953rd interval taken from the queue for iteration (recall Section~\ref{sec:assigning} for details on how the ``queue'' works). The numbers in Table~\ref{tab:bigN} show that the interval got close to \( \Delta \) at the 7th and 15th iterates. Its width was steadily growing with sudden drops after those two events; eventually the interval ``exploded'' to take up almost the entire phase space at the 26th iterate, thus satisfying the escape time condition with \( |\omega_{N}|\geq 3.5 \).
\begin{table}[htbp] \centerline{ \begin{tabular}{rcl} \hline
\( n \) & \( \omega^{(1)}_n \) & \( |\omega^{(1)}_n| \) \\ \hline 0 & \( [1.9607,1.9608] \) & \( 0.0000068187 \) \\ 1 & \( [-1.8839,-1.8838] \) & \( 0.000019921 \) \\ 2 & \( [-1.5882,-1.5880] \) & \( 0.000068238 \) \\ 3 & \( [-0.56150,-0.56128] \) & \( 0.00020992 \) \\ 4 & \( [1.6455,1.6458] \) & \( 0.00022887 \) \\ 5 & \( [-0.74766,-0.74689] \) & \( 0.00076011 \) \\ 6 & \( [1.4017,1.4030] \) & \( 0.0011428 \) \\ 7 & \textcolor{red}{\( [-0.0073968,-0.0041981] \)} & \( 0.0031985 \) \\ 8 & \( [1.9607,1.9608] \) & \( 0.000030267 \) \\ 9 & \( [-1.8838,-1.8836] \) & \( 0.00012551 \) \\ 10 & \( [-1.5879,-1.5873] \) & \( 0.00047967 \) \\ 11 & \( [-0.56046,-0.55892] \) & \( 0.0015298 \) \\ 12 & \( [1.6466,1.6484] \) & \( 0.0017193 \) \\ 13 & \( [-0.75638,-0.75071] \) & \( 0.0056585 \) \\ 14 & \( [1.3886,1.3972] \) & \( 0.0085210 \) \\ 15 & \textcolor{red}{\( [0.0086113,0.032357] \)} & \( 0.023745 \) \\ 16 & \( [1.9597,1.9607] \) & \( 0.00096598 \) \\ 17 & \( [-1.8836,-1.8797] \) & \( 0.0037938 \) \\ 18 & \( [-1.5871,-1.5727] \) & \( 0.014284 \) \\ 19 & \( [-0.55781,-0.51266] \) & \( 0.045141 \) \\ 20 & \( [1.6496,1.6980] \) & \( 0.048329 \) \\ 21 & \( [-0.92227,-0.76048] \) & \( 0.16177 \) \\ 22 & \( [1.1101,1.3825] \) & \( 0.27222 \) \\ 23 & \( [0.049666,0.72824] \) & \( 0.67856 \) \\ 24 & \( [1.4304,1.9584] \) & \( 0.52784 \) \\ 25 & \( [-1.8742,-0.085416] \) & \( 1.7887 \) \\ 26 & \( [-1.5518,1.9535] \) & \( 3.5052 \) \\ \hline \end{tabular} }
\caption{Iterates of one specific interval \( \omega \in \mathcal{P}^+ \) with large \( |\omega_n| \) at escape time. All the numbers rounded to 5 significant digits. An outer bound on each \( \omega_n \) is shown, as well as a lower bound on its width, as calculated in the high-precision arithmetic. Close encounters with \( \Delta \) are shown in red. Full discussion in Section \ref{sec:study}.} \label{tab:bigN} \end{table}
Let us check the circumstances under which this interval entered the queue. Each interval that is put in the queue is assigned a consecutive number, starting from 1 that was assigned to the original interval \( \Omega = [1.4,2] \). The computation log shows that \( \omega^{(1)} \) was assigned the number 2565, and it was put in the queue as a result of halving another interval; let us call it its parent and denote it by \( \omega^{(0)} \). This was the 1914th processed interval, and it was halved due to a problem with determining the sign of \( c'_n(\omega) \), discussed in Section~\ref{sec:assigning} as subcase (P2a), after having computed its 7th iterate. Recall that \( \omega^{(1)}_7 \) was indeed close to \( \Delta \), which means that halving the interval \( \omega^{(0)} \) instead of throwing it into \( \mathcal{P}^- \) was a good decision, because it allowed saving at least a half of it for \( \mathcal{P}^+ \).
Let us have a look at the ``twin brother'' \( \omega^{(2)} \) of the successful interval \( \omega^{(1)} \). Its 12-digit outer enclosure is \( [1.96076111942,1.96076793816] \); it was assigned the number 2564 when it was put in the queue. It was iterated just before \( \omega^{(1)} \), that is, it was the 1952nd iterated interval. It was subject to the same problem as \( \omega^{(0)} \) and was halved after 7 iterates. Its two halves \( \omega^{(3)} \) and \( \omega^{(4)} \) were put back in the queue, got consecutive numbers 2619 and 2620, and were iterated as the 1992nd and 1993rd intervals, respectively. The interval \( \omega^{(4)} \) hit \( \Delta \) after a remarkable number of 31 iterates, and the width of its 31st iterate slightly exceeded \( 1.5 \), so it was added to \( \mathcal{P}^+ \). This means that we already qualified \( 3/4 \) of \( \omega^{(0)} \) as stochastic! However, the problem persisted for \( \omega^{(3)} \), which was then halved again. Its children \( \omega^{(5)} \) and \( \omega^{(6)} \) got consecutive numbers 2678 and 2679 in the queue. They were pulled out from the queue and processed as the 2032nd and 2033rd intervals, respectively. The problem persisted for \( \omega^{(5)} \), while \( \omega^{(6)} \) was iterated 15 times until it hit \( \Delta \); its 15th iterate was contained in \( [0.000554,0.002311] \) and was not large enough to put the interval in \( \mathcal{P}^+ \), so it was chopped; note that the left endpoint of \( \omega^{(6)}_{15} \) was actually in \( \Delta \), so the chopping resulted in only one part put back in the queue. The loss was considerable: only \( 70\% \) of \( \omega^{(6)} \) survived the chopping. We stop our investigation here. Although the fraction of \( \omega^{(0)} \) qualified as stochastic did not increase to \( 7/8 \), there is hope that some of the descendants of \( \omega^{(6)} \) eventually contributed to \( \mathcal{P}^+ \) in further iterations.
This short excerpt of the family saga of the interval \( \omega^{(1)} \) illustrates the main ideas of our approach in constructing the sets \( \mathcal{P}^+ \) and \( \mathcal{P}^- \), and shows a variety of dynamical situations encountered.
\section{The Numerics} \label{sec:procedure}
The strategies introduced in Section~\ref{sec:strategies} are intertwined into a single computational procedure for computing inner and outer bounds for \( \omega_n \) together with rigorous estimates for the derivatives \eqref{eq:chain} and \eqref{eq:quot}, and splitting the interval \( \omega \) into smaller parts whenever condition \eqref{eq:mon} is not satisfied or \( \omega_n \) hits \( \Delta \). In this section, we explain the issues involved in making sure we obtain \emph{rigorous} bounds for all these calculations. In Section \ref{sec:algorithms} we then describe the structure of the algorithms used to implement the calculations.
\subsection{Precision}
The very first step in the construction is the choice of the set of representable real numbers \( \mathbf{R} \subset \mathbb{R} \). In practice, this set depends on the representation of numbers used in the software; it is different for double-precision floating-point numbers following the IEEE~754 standard \cite{ieee754} and for floating-point numbers of fixed size implemented by the GNU MPFR software library \cite{mpfr}. For clarity of presentation, within Section~\ref{sec:algorithms} we use bold typeface to denote elements of \( \mathbf{R} \); for example, \( \mathbf{a} \in \mathbf{R} \), as opposed to a general \( a \in \mathbb{R} \).
It is important to be aware that the actual result of an arithmetic operation, or of the computation of the value of a function, on representable numbers need not be a representable number in general. However, in a proper setting, it is possible to request that such results be rounded downwards or upwards to representable numbers in the actual machine computations. Therefore, even if it is not possible in general to compute the exact value of many expressions, it is always possible to compute a lower and an upper bound for each of them. For elementary operations, such as addition or multiplication, standards such as IEEE~754 typically require that the result be rounded to the closest representable number in the requested direction; thanks to this feature, the inaccuracy caused by rounding is minimised.
The most important quantity of interest regarding the precision of our calculations is the binary precision \( p \) used for the representable numbers implemented by the MPFR library. For example, if \( p = 250 \) then the relative accuracy of the numbers used in the computations is roughly \( 2^{-250} \approx 10^{-75} \); that is, all the numbers are rounded at about the \( 75 \)th significant decimal digit, which we consider quite high precision for the results, yet computationally feasible in terms of speed and memory usage. The choice of the binary precision is closely related to the number of iterations we want to compute, and \( p=250 \) is quite sufficient for the number of iterations considered for the results described in Section \ref{sec:results}. Higher precision would be desirable, and could easily be used, if we carried out the calculations for higher values of \( N_{0} \).
A second quantity which affects the precision of our calculations to some extent is the number \( s \) of bisection steps used in the chopping procedure described in Section \ref{sec:avoid} (and more formally in Algorithm~\ref{alg:bisection} below): the higher the value, the more expensive the computations; the lower the value, the less accurate the chopping procedure. In order to determine a reasonable balance between the two, we ran some experiments to check how the results improve, and the computation time grows, as \( s \) increases, and we decided to use \( s = 40 \).
\subsection{Rounding} \label{sec:repr} Instead of introducing separate notation for representable versions of all the operations and functions with rounding downwards or upwards, we use the two special assignment symbols ``\( :\leq \)'' and ``\( :\geq \)'' instead of ``\( := \)'' in order to indicate the rounding direction. Specifically, if \( \varphi \colon \mathbb{R}^k \to \mathbb{R} \) for some \( k \in \mathbb{N} \) then \[ \mathbf{u} :\leq \varphi (\mathbf{x}_1, \ldots, \mathbf{x}_k) \] means that \( \mathbf{u} \in \mathbf{R} \) is the result of machine computation of a representable number that is a lower bound for the actual value of \( \varphi (\mathbf{x}_1, \ldots, \mathbf{x}_k) \). We define the upwards-rounded counterpart \[ \mathbf{v} :\geq \varphi (\mathbf{x}_1, \ldots, \mathbf{x}_k) \] in an analogous way. If the direction of rounding is not important, we use the ``rounding to the nearest'' mode, and we use the symbol \( :\approx \) to explicitly indicate the fact that rounding to a representable number takes place when computing the expression on the right hand side of \( :\approx \). This happens, for example, when we compute an approximation of the middle of an interval: ``\( \mathbf{c} :\approx (\mathbf{a} + \mathbf{b}) / 2 \).'' We remark that even if one computes a lower bound, the rounding direction does not have to be ``downwards'' in all the operations. Consider, for example, the computation of \( 1 / (x + y) \) for \( x, y > 0 \). One would first compute \( \mathbf{z} :\geq x + y \) and then \( \mathbf{s} :\leq 1 / \mathbf{z} \) so that the number \( \mathbf{s} \) is a lower bound on \( 1 / (x + y) \).
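The two-step rounding pattern at the end of the example above can be illustrated with a minimal sketch using Python's \texttt{decimal} module, whose contexts support directed rounding. The function name and the working precision are our own choices for illustration; the actual computations use MPFR, not \texttt{decimal}.

```python
from decimal import Decimal, localcontext, ROUND_FLOOR, ROUND_CEILING

def lower_bound_inv_sum(x: Decimal, y: Decimal) -> Decimal:
    """Certified lower bound for 1/(x+y) with x, y > 0:
    round the sum upwards, then round the division downwards."""
    with localcontext() as ctx:
        ctx.prec = 30
        ctx.rounding = ROUND_CEILING
        z = x + y                  # z >= x + y  (upper bound)
        ctx.rounding = ROUND_FLOOR
        return Decimal(1) / z      # result <= 1/z <= 1/(x+y)
```

Note that the rounding direction flips between the two operations, exactly as in the discussion above: an upper bound for the denominator yields a lower bound for the quotient.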
All the numbers that appear in the algorithms and in the computations must be representable. If any number appearing in the description is not exactly representable in the binary floating-point arithmetic that we use, such as \( 1.4 \) for example, it is implicitly rounded to the nearest representable number when passed to the algorithm.
Before moving on to describe our main procedure, we introduce some notation. \begin{definition}\label{def:defsign} An interval \( [x^-,x^+] \) is said to be \emph{of definite sign} if \( 0 \notin [x^-,x^+] \). \end{definition} Note that an interval is of definite sign if and only if both its endpoints are nonzero and have the same sign. Given two intervals \( x = [x^-,x^+] \) and \( y = [y^-,y^+] \), let \[ g^- (x,y):=\min\{-2uv : u \in x, v \in y \} \quad\text{ and } \quad g^+ (x,y):=\max\{-2uv : u \in x, v \in y \}. \] These functions provide the tightest outer enclosure for the arithmetic operation \( -2xy \) on intervals. Notice that the intervals \( x\) and \( y \) are not necessarily representable, and even if they were, the outputs of the functions \( g^{-}, g^{+} \) would not necessarily be representable. However, if both intervals are of definite sign, then these functions are given by the simple formulas \begin{equation*} \label{eq:g} g^-(x,y) = \begin{cases} -2 x^+ y^+ & \text{if } x > 0, y > 0, \\ -2 x^- y^- & \text{if } x < 0, y < 0, \\ -2 x^- y^+ & \text{if } x > 0, y < 0, \\ -2 x^+ y^- & \text{if } x < 0, y > 0; \end{cases} \qquad g^+(x,y) = \begin{cases} -2 x^- y^- & \text{if } x > 0, y > 0, \\ -2 x^+ y^+ & \text{if } x < 0, y < 0, \\ -2 x^+ y^- & \text{if } x > 0, y < 0, \\ -2 x^- y^+ & \text{if } x < 0, y > 0, \end{cases} \end{equation*} and their values can be rounded downwards or upwards to get inner and outer enclosures of the set \( \{-2uv : u \in x, v \in y \} \) by representable numbers.
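The case analysis above can be cross-checked by brute force: since \( -2uv \) is monotone in each variable separately, its extrema over a product of intervals are always attained at the four corners, whether or not the intervals have definite sign. A small sketch (plain floats, no directed rounding; the function name is ours):

```python
def g_bounds(x, y):
    """Tightest enclosure of {-2*u*v : u in x, v in y} for intervals
    given as pairs (lo, hi).  Because -2uv is monotone in u and in v,
    the min and max over x times y are attained at the four corners."""
    corners = [-2.0 * u * v for u in x for v in y]
    return min(corners), max(corners)
```

For \( x = [1,2] \), \( y = [3,4] \) (both positive) this returns \( (-16, -6) \), matching the cases \( g^- = -2x^+y^+ \) and \( g^+ = -2x^-y^- \).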
\subsection{Inductive assumptions} \label{sec:indStart}
In this subsection, we introduce an inductive procedure for iterating \( \omega \) and computing a sequence of numbers that, as we shall prove, provide lower and upper bounds for the quantities discussed in Section~\ref{sec:strategies}. Let \( \mathbf{a} < \mathbf{b} \) be representable numbers, and consider the parameter interval \begin{equation*} \omega := [\mathbf{a},\mathbf{b}]. \end{equation*}
We are going to construct bounds on the quantities discussed in Sections \ref{sec:iterate} and~\ref{sec:estimate}. To formulate these bounds, we will inductively define representable intervals \begin{equation}\label{eq:int} \mathbf{a}_n := [\mathbf{a}_n^-,\mathbf{a}_n^+], \ \mathbf{b}_n := [\mathbf{b}_n^-,\mathbf{b}_n^+], \ \mathbf{c}_n := [\mathbf{c}_n^-,\mathbf{c}_n^+], \ \mathbf{d}_n := [\mathbf{d}_n^-,\mathbf{d}_n^+], \ \mathbf{f}_n := [\mathbf{f}_n^-,\mathbf{f}_n^+] \end{equation} and (when possible) two additional intervals \begin{equation}\label{eq:omegadef} \underline{\omega_{n}}\subseteq \overline{\omega_{n}}, \end{equation} defined in terms of those above, where \( \overline{\omega_{n}} \) is the \emph{convex hull} of \( \mathbf{a}_n \) and \( \mathbf{b}_n \), i.e., the smallest closed interval containing both \( \mathbf{a}_n \) and \( \mathbf{b}_n \), and \( \underline{\omega_{n}} \) is the closure of the \emph{unique bounded component} of \( \mathbb R\setminus (\mathbf{a}_n\cup \mathbf{b}_n) \) \emph{if} \( \mathbf{a}_n\cap \mathbf{b}_n=\emptyset \) (and \emph{undefined} otherwise). Clearly \( \underline{\omega_{n}} \) and \( \overline{\omega_{n}} \), when defined, are also representable intervals, and if \( \mathbf{a}_n\cap \mathbf{b}_n=\emptyset \), they are given by the following simple formulas: \begin{equation} \label{eq:innerouter} \underline{\omega_n} := \begin{cases} [\mathbf{a}_n^+, \mathbf{b}_n^-] & \text{if } \mathbf{a}_n^+ < \mathbf{b}_n^-, \\ [\mathbf{b}_n^+, \mathbf{a}_n^-] & \text{otherwise}, \end{cases} \qquad \text{ and } \qquad \overline{\omega_n} := \begin{cases} [\mathbf{a}_n^-, \mathbf{b}_n^+] & \text{if } \mathbf{a}_n^- < \mathbf{b}_n^+, \\ [\mathbf{b}_n^-, \mathbf{a}_n^+] & \text{otherwise}.
\end{cases} \end{equation} The definition of the intervals \eqref{eq:int} is inductive. For the initialisation of the induction, \( n=0 \), we define the following representable numbers: \begin{equation} \label{eq:ind0} \mathbf{a}_0^- \coloneqq \mathbf{a}_0^+ \coloneqq \mathbf{a},\quad \mathbf{b}_0^- \coloneqq \mathbf{b}_0^+ \coloneqq \mathbf{b},\quad \mathbf{c}_{0}^- \coloneqq \mathbf{c}_{0}^+ \coloneqq 1, \quad \mathbf{f}_{0}^- \coloneqq \mathbf{f}_{0}^+ \coloneqq 1,\quad \mathbf{d}_{0}^- \coloneqq \mathbf{d}_{0}^+ \coloneqq 1, \end{equation} and define the corresponding intervals \( \mathbf{a}_0, \mathbf{b}_0, \mathbf{c}_0,\mathbf{d}_0,\mathbf{f}_0, \overline{\omega_{0}}, \underline{\omega_{0}} \) as in \eqref{eq:int}--\eqref{eq:omegadef}. Note that we admit degenerate intervals (singletons) when both endpoints are equal, and we distinguish such intervals (sets) from individual numbers such as \( \mathbf{a} \) and \( \mathbf{b} \). We also consider our intervals ``ordered'' in the sense that the left endpoint as written is always assumed to be \( \leq \) the right endpoint. Let us now assume inductively that the intervals \( \mathbf{a}_k, \mathbf{b}_k, \mathbf{c}_k,\mathbf{d}_k,\mathbf{f}_k\), and therefore also \( \overline{\omega_{k}} \) (but not necessarily \( \underline{\omega_{k}} \)), have been defined for some \( k\geq 0 \), \emph{and that \( \overline{\omega_{k}} \) has definite sign}: \begin{equation}\label{eq:defsign}\tag*{\ensuremath{(\star)_{k}}} 0\notin\overline{\omega_{k}}. \end{equation} With these assumptions, we define the intervals \( \mathbf{a}_{k+1}, \mathbf{b}_{k+1}, \mathbf{c}_{k+1},\mathbf{d}_{k+1},\mathbf{f}_{k+1} \) in the next section.
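For a quick illustration of \eqref{eq:innerouter}, the following sketch computes \( \overline{\omega_n} \) and, when \( \mathbf{a}_n \) and \( \mathbf{b}_n \) are disjoint, \( \underline{\omega_n} \), with intervals represented as pairs of plain floats (the function name is ours; rounding issues are ignored):

```python
def omega_hulls(an, bn):
    """Given intervals an = (lo, hi) and bn = (lo, hi), return
    (inner, outer): outer is the convex hull of the two intervals;
    inner is the closed gap between them, or None if they intersect."""
    outer = (min(an[0], bn[0]), max(an[1], bn[1]))
    if an[1] < bn[0]:
        inner = (an[1], bn[0])    # an lies to the left of bn
    elif bn[1] < an[0]:
        inner = (bn[1], an[0])    # bn lies to the left of an
    else:
        inner = None              # intervals intersect: inner undefined
    return inner, outer
```

Returning \texttt{None} for the inner interval mirrors the convention above that \( \underline{\omega_n} \) is undefined when \( \mathbf{a}_n\cap \mathbf{b}_n\neq\emptyset \).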
\subsection{Inductive step} \label{sec:indStep}
Recall that \( \mathbf{a} \) and \( \mathbf{b} \) (without subscripts) are the endpoints of the interval \( \omega \) and for every \( a\in \omega \), \( f_a \) is the quadratic map defined in \eqref{eq:quadratic}. Recall from Section~\ref{sec:repr} that we use the notation ``\( \mathbf{u} :\leq \varphi(\mathbf{y}) \)'' to indicate that \( \mathbf{u} \) is computed to be a lower bound on the expression \( \varphi(\mathbf{y}) \); similarly with ``\( :\geq \)''. If \( \mathbf{a}_k^+ < 0 \), we set \begin{align} \label{eq:indIncreasing} &\mathbf{a}_{k+1}^- :\leq f_{\mathbf{a}}(\mathbf{a}_{k}^-),\quad \mathbf{a}_{k+1}^+ :\geq f_{\mathbf{a}}(\mathbf{a}_{k}^+),\quad \mathbf{b}_{k+1}^- :\leq f_{\mathbf{b}}(\mathbf{b}_{k}^-),\quad \mathbf{b}_{k+1}^+ :\geq f_{\mathbf{b}}(\mathbf{b}_{k}^+); \end{align} otherwise, we set \begin{align} \label{eq:indDecreasing} &\mathbf{a}_{k+1}^- :\leq f_{\mathbf{a}}(\mathbf{a}_{k}^+),\quad \mathbf{a}_{k+1}^+ :\geq f_{\mathbf{a}}(\mathbf{a}_{k}^-),\quad \mathbf{b}_{k+1}^- :\leq f_{\mathbf{b}}(\mathbf{b}_{k}^+),\quad \mathbf{b}_{k+1}^+ :\geq f_{\mathbf{b}}(\mathbf{b}_{k}^-). \end{align} Then we let \begin{align}
\label{eq:indC} \mathbf{c}_{k+1}^- &:\leq 1 + g^- (\mathbf{c}_k,\ovl{\omega_k}), \qquad \mathbf{c}_{k+1}^+ :\geq 1+ g^+(\mathbf{c}_k,\ovl{\omega_k}), \\ \label{eq:indF}
\mathbf{f}_{k+1}^- &:\leq g^-(\mathbf{f}_{k}, \ovl{\omega_k}),
\qquad \qquad \mathbf{f}_{k+1}^+ :\geq g^+(\mathbf{f}_{k}, \ovl{\omega_k} ),\\ \label{eq:indD} \mathbf{d}_{k+1}^- &:\leq \mathbf{d}_{k}^- + 1/{\mathbf{f}_{k+1}^+}, \qquad\quad \mathbf{d}_{k+1}^+ :\geq \mathbf{d}_{k}^+ + 1/{\mathbf{f}_{k+1}^-}. \end{align} It is easy to see that this gives well-defined intervals \( \mathbf{a}_{k+1}, \mathbf{b}_{k+1}, \mathbf{c}_{k+1},\mathbf{d}_{k+1},\mathbf{f}_{k+1} \) and that these are explicitly and rigorously computable given the representable intervals \( \mathbf{a}_k, \mathbf{b}_k, \mathbf{c}_k,\mathbf{d}_k,\mathbf{f}_k\) and under assumption \( (\star)_{k} \). In fact, \( (\star)_{k} \) is only required to ensure that the intervals \( \mathbf{a}_{k+1}, \mathbf{b}_{k+1} \) defined in \eqref{eq:indIncreasing}--\eqref{eq:indDecreasing} are well-defined; the other intervals are well-defined with no assumptions. However, it is not immediate, nor in fact is it always the case, that these intervals give us any dynamical information. In the next section we prove a non-trivial result which gives conditions for these intervals to provide the required bounds.
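To make the data flow of \eqref{eq:indC}--\eqref{eq:indD} concrete, here is a non-rigorous sketch of one inductive step with plain floats; the real implementation rounds every bound outwards in MPFR. The enclosure of \( g^\pm \) is computed here by brute force over corner products, and we assume \( \mathbf{f}_{k+1} \) has definite sign so that the divisions in \eqref{eq:indD} make sense.

```python
def derivative_step(c_k, f_k, d_k, w_k):
    """One inductive step for the enclosures c, f, d, each a pair (lo, hi);
    w_k is the outer hull of the k-th iterate.  No directed rounding."""
    def g(x, y):  # enclosure of {-2*u*v : u in x, v in y} via corners
        corners = [-2.0 * u * v for u in x for v in y]
        return min(corners), max(corners)

    gc_lo, gc_hi = g(c_k, w_k)
    c_next = (1.0 + gc_lo, 1.0 + gc_hi)           # cf. (indC)
    f_next = g(f_k, w_k)                          # cf. (indF)
    assert f_next[0] > 0 or f_next[1] < 0, "f_{k+1} must have definite sign"
    d_next = (d_k[0] + 1.0 / f_next[1],           # cf. (indD)
              d_k[1] + 1.0 / f_next[0])
    return c_next, f_next, d_next
```

Starting from the initial singletons \( \mathbf{c}_0 = \mathbf{f}_0 = \mathbf{d}_0 = [1,1] \), repeated calls to this step reproduce the shape of the recursion, with all overestimates coming from the interval arithmetic itself.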
\subsection{Rigorous bounds} The main result of this section gives conditions which ensure that the intervals defined above provide the bounds for the required quantities.
\begin{theorem} \label{thm:intervals} Let \( \omega := [\mathbf{a},\mathbf{b}]\) be a parameter interval and let \( n \geq 0 \). Suppose
that for every \( 0\leq k \leq n \) the intervals \( \mathbf{a}_k, \mathbf{b}_k, \mathbf{c}_k,\mathbf{d}_k,\mathbf{f}_k \) have been defined as above and satisfy condition \( (\star)_{k} \). Then \( \mathbf{a}_{n+1}, \mathbf{b}_{n+1}, \mathbf{c}_{n+1},\mathbf{d}_{n+1},\mathbf{f}_{n+1} \) are defined and \begin{equation}\label{eq:indab}
c_{n+1} (\mathbf{a})\in \mathbf{a}_{n+1}
\quad\text{ and } \quad c_{n+1} (\mathbf{b}) \in \mathbf{b}_{n+1}. \end{equation} If \( \mathbf{c}_{n}\) and \( \mathbf{f}_{n} \) have definite sign, then also \begin{equation}\label{eq:ind}
\quad c_{n+1}' (\omega) \subseteq \mathbf{c}_{n+1}, \quad (f^{n+1})' (\omega) \subseteq \mathbf{f}_{n+1},\quad
\bigl({c_{n+1}'}/{(f^{n+1})'}\bigr)(\omega) \subseteq \mathbf{d}_{n+1}. \end{equation} If, moreover, \( \mathbf{a}_{n+1}\cap \mathbf{b}_{n+1} =\emptyset \) and \( \mathbf{c}_{n+1} \) has definite sign, then \begin{equation}\label{eq:indn}
\underline{{\omega}_{n+1}}\subseteq {\omega}_{n+1}\subseteq \overline{{\omega}_{n+1}}. \end{equation} \end{theorem}
We emphasise the fact that the assumptions of the theorem for a fixed \( n \geq 0 \) can be verified by means of finite machine computation, which includes the computation of the various numbers and intervals. In the next section, we introduce Algorithm~\ref{alg:interval} that does precisely this. The conclusion of the theorem, however, provides nontrivial mathematical properties whose verification may not be obvious at all. In particular, the inner and outer bounds on \( \omega_n \), and the outer bounds on the various derivatives, computed in the inductive way using the formulas provided in this subsection, are nontrivial ingredients of the computations needed in the bigger project outlined in Section~\ref{sec:motivation}.
\begin{proof} We prove Theorem~\ref{thm:intervals} by induction on \( n \). For \( n = 0 \), \eqref{eq:indab} and \eqref{eq:ind} follow directly from \eqref{eq:ind0} and the fact that for every \( a\in\omega \), we have \( c_0(a)=a \), and thus \( c_0'(a)=1 \), and moreover, \( f^0 \) is the identity map, so \( (f^0)'(a) = 1 \). Condition \eqref{eq:indn} follows immediately from \eqref{eq:ind0} and \eqref{eq:innerouter}, and from the fact that \( \mathbf{a} < \mathbf{b} \). We therefore assume inductively the conclusions of the theorem for all \( 0\leq k \leq n \) under the corresponding assumptions.
To prove \eqref{eq:indab}, note that since \( \overline{\omega_n} \) is of definite sign, \( f_a \) is monotone on \( \overline{\omega_n} \) for every \( a \), in particular for \(a = \mathbf{a} \) and for \(a = \mathbf{b} \). Since \( \mathbf{a}_n \subset \overline{\omega_n} \), the direction of this monotonicity can be determined from the single number \( \mathbf{a}_n^+ \). If this number is negative then both \( f_{\mathbf{a}} \) and \( f_{\mathbf{b}} \) are increasing on \( \overline{\omega_n} \), and the numbers computed using \eqref{eq:indIncreasing} satisfy \( \mathbf{a}_{n+1}^- \leq \mathbf{a}_{n+1}^+ \) and \( \mathbf{b}_{n+1}^- \leq \mathbf{b}_{n+1}^+ \); the reasoning is analogous if \eqref{eq:indDecreasing} has to be used. This argument, combined with the formula \( c_{n+1} (\mathbf{a}) = f_{\mathbf{a}} (c_n (\mathbf{a})) \) that defines the critical orbit for \( \mathbf{a} \) (and similarly for \( \mathbf{b} \)), proves \( c_{n+1} (\mathbf{a})\in \mathbf{a}_{n+1} \) and \( c_{n+1} (\mathbf{b}) \in \mathbf{b}_{n+1}\).
The three terms in \eqref{eq:ind} are all proved by almost the same argument. For the first one, notice that the formula \eqref{eq:indC} can be seen as a numerical version of \eqref{eq:derivC}; specifically, if \( \mathbf{c}_{n} \) is an interval containing \( c'_{n}(a) \), as per our inductive assumptions, and if \( \mathbf c_{n} \) and \( \overline{\omega_{n}} \) are of definite sign, as per the assumptions in the theorem, then \eqref{eq:indC} provides an interval that contains \( c'_{n+1}(a) \), i.e. \(c_{n+1}' (\omega) \subseteq \mathbf{c}_{n+1}\). A very similar argument applies to the last two terms of \eqref{eq:ind} except we look at \eqref{eq:indF} as a numerical version of \eqref{eq:chain}, and \eqref{eq:indD} as a numerical version of \eqref{eq:quot}.
Finally, to prove \eqref{eq:indn}, first notice that \( c_{n+1} \) is monotone on \( \omega \): this follows from the fact that \( \mathbf{c}_{n+1} \) is of definite sign, as per the assumptions of the theorem, combined with the just proved property \eqref{eq:ind} stating that \( c'_{n+1}(\omega) \subseteq \mathbf{c}_{n+1} \). Thanks to this monotonicity, the image of the interval \( \omega \) by \( c_{n+1} \) lies entirely between the images of its endpoints, \( \mathbf{a} \) and \( \mathbf{b} \). The images of these points are contained in the corresponding intervals \( \mathbf{a}_{n+1} \) and \( \mathbf{b}_{n+1} \), respectively; the latter fact was just proved as \eqref{eq:indab}. Under the assumption that \( \mathbf{a}_{n+1} \) and \( \mathbf{b}_{n+1} \) are disjoint, the formula \eqref{eq:innerouter} clearly defines \( \overline{\omega_{n+1}} \) as the smallest interval containing both intervals \( \mathbf{a}_{n+1} \) and \( \mathbf{b}_{n+1} \), and therefore containing both endpoints of \( c_{n+1} (\omega) \), and thus the entire interval \( c_{n+1} (\omega) \). Moreover, \eqref{eq:innerouter} defines \( \underline{\omega_{n+1}} \) as the closure of \( \overline{\omega_{n+1}} \setminus (\mathbf{a}_{n+1} \cup \mathbf{b}_{n+1}) \), which is an interval contained between the images of the endpoints of \( \omega \), and therefore contained in \( c_{n+1} (\omega) \). \end{proof}
\section{The Algorithms} \label{sec:algorithms}
\vskip -3pt
In this section, we introduce algorithms that serve the purpose of conducting the computations described in Section~\ref{sec:procedure}. While introducing the algorithms, we are going to use the concept of a \emph{controller}. It is an object to which the progress of computations is reported, which submits obtained results for further processing if desired, and which is responsible for making decisions on how to proceed whenever problems are encountered.
\subsection{Algorithm for iterating a parameter interval} \label{sec:algorithm}
\vskip -3pt
Algorithm~\ref{alg:interval} below conducts the inductive computations described in Sections \ref{sec:indStart}--\ref{sec:indStep} for a single interval \( \omega \) of parameters, and verifies the assumptions of Theorem~\ref{thm:intervals} at each iteration. The algorithm is defined in the form of an iterative procedure that is in principle indefinite; therefore, in the actual computations described in Section~\ref{sec:results}, we impose some specific stopping criteria that are enforced by the controller. Note that there is no single object returned by the algorithm as its output; instead, the algorithm produces a multitude of data, and supplies this data to the controller, which might, for example, store it in a file or send it to another procedure for further processing. The details on how the controller reacts to the different events indicated by calling its various functions in Algorithm~\ref{alg:interval} are discussed and explained in Section~\ref{sec:problems}, and the procedure for splitting the interval \( \omega \) if its iteration hits the critical neighbourhood \( \Delta = (-\delta,\delta) \) is provided in Section~\ref{sec:chop}. The instruction ``break'' makes the algorithm exit the loop.
\begin{algorithm}\noindent\label{alg:interval} \rm \begin{tabbing} \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \kill \kw{function} process\_an\_interval \\ \kw{input:} \\ \> \( \omega = [\mathbf{a}, \mathbf{b}] \): an interval; \\ \kw{begin} \\ \> initialize the induction as defined by \eqref{eq:ind0}; \\ \> define \( \overline{\omega_0} \) and \( \underline{\omega_0} \) following \eqref{eq:innerouter}; \\ \> \kw{for} \( n := 0, 1, 2, 3, \ldots \) \kw{do}: \\ \>\> compute \( \mathbf{c}_{n+1} \) following \eqref{eq:indC}; \\ \>\> \kw{if} \( 0 \in \mathbf{c}_{n+1} \) \kw{then} \\ \>\>\> controller.problemC (\( \omega \), \( n \)); \kw{break}; \\ \>\> compute \( \mathbf{a}_{n+1} \) and \( \mathbf{b}_{n+1} \) following \eqref{eq:indIncreasing} or \eqref{eq:indDecreasing}, as appropriate; \\ \>\> \kw{if} \( \mathbf{a}_{n+1} \cap \mathbf{b}_{n+1} \neq \emptyset \) \kw{then} \\ \>\>\> controller.innerEmpty (\( \omega \), \( n \)); \kw{break}; \\ \>\> define \( \overline{\omega_{n+1}} \) and \( \underline{\omega_{n+1}} \) following \eqref{eq:innerouter}; \\ \>\> compute \( \mathbf{f}_{n+1} \) following \eqref{eq:indF}; \\ \>\> \kw{if} \( 0 \in \mathbf{f}_{n+1} \) \kw{then} \\ \>\>\> controller.problemF (\( \omega \), \( n \)); \kw{break}; \\ \>\> compute \( \mathbf{d}_{n+1} \) following \eqref{eq:indD}; \\ \>\> controller.notify (\(\omega\), \( n+1 \)); \\ \>\> \kw{if} \( \inter \overline{\omega_{n+1}} \cap \Delta \neq \emptyset \) \kw{then} \\ \>\>\> omegaHitDelta (\( \omega \), \( n+1 \)); \kw{break}; \\ \kw{end.} \end{tabbing} \end{algorithm}
The next result is an immediate consequence of the fact that Algorithm~\ref{alg:interval} follows the construction introduced in Sections \ref{sec:indStart}--\ref{sec:indStep} and verifies the assumptions of Theorem~\ref{thm:intervals}, except instead of checking that the interval \( \overline{\omega_{n+1}} \) is of definite sign, it checks a stronger condition; namely, given some \( \delta > 0 \), the algorithm verifies whether the distance of \( \overline{\omega_{n+1}} \) from the critical point \( c = 0 \) is at least \( \delta \).
\begin{corollary} \label{cor:intervals} Let Algorithm \ref{alg:interval} be called with a compact interval \( \omega = [\mathbf{a},\mathbf{b}] \subseteq [1,\infty) \) with \( \mathbf{a} < \mathbf{b} \). Assume that the radius \( \delta > 0 \) of the critical neighbourhood satisfies \( \delta < 1 \). Then, every time the algorithm makes a call to the procedure \emph{controller.notify} with \( \omega \) and \( n + 1 \), the quantities computed by this algorithm satisfy the properties \eqref{eq:indab}--\eqref{eq:indn}. \end{corollary}
\subsection{A queue of parameter intervals} \label{sec:queue}
In this section we introduce an algorithm for managing a collection of intervals that are waiting to be processed using Algorithm~\ref{alg:interval}. The intervals in some input collection are first added to the queue, together with the number of times they have been iterated so far, which is initially \( 0 \). We assume that the controller introduced in Algorithm~\ref{alg:interval} has unlimited access to this queue. In the framework of the computations, intervals are extracted from the queue one by one and processed individually by Algorithm~\ref{alg:interval}. This procedure is introduced in Algorithm \ref{alg:queue} below.
It is important to mention that the controller may add further intervals to the queue, with certain priorities, in response to the various situations encountered in Algorithm~\ref{alg:interval}. This makes determining which interval to process next more involved than simply sorting the initial intervals once and processing them in that order, and justifies using the structure of a queue for this purpose.
\begin{algorithm}\noindent\label{alg:queue} \rm \begin{tabbing} \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \kill \kw{function} process\_all\_intervals \\ \kw{input:} \\ \> \( \{\omega^i = [\mathbf{a}^i, \mathbf{b}^i]\}_{i = 1}^{M} \) for some natural \( M \ge 0 \) \\ \kw{begin} \\ \> \( Q \) := a queue of pairs (interval, integer); \\ \> \kw{for} \( i := 1 \) to \( M \): \\ \>\> \( Q \).enqueue \( (\omega^i \), \( 0 \)); \\ \> \kw{while} \( Q \) is not empty: \\ \>\> (\( \omega \), \( n \)) := \( Q \).dequeue(); \\ \>\> process\_an\_interval (\( \omega \)); \emph{// Algorithm~\ref{alg:interval}} \\ \kw{end.} \end{tabbing} \end{algorithm}
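The queue discipline of Algorithm~\ref{alg:queue}, together with the controller's ability to re-enqueue pieces of an interval, can be sketched as a standard worklist loop. Here the callback \texttt{process} is a stand-in for Algorithm~\ref{alg:interval} plus the controller: it returns the (subinterval, iteration count) pairs that the controller would enqueue.

```python
from collections import deque

def process_all_intervals(intervals, process):
    """Worklist sketch: each queue entry is (interval, iterations so far).
    `process` may return new (subinterval, count) pairs to enqueue,
    mirroring the controller's halving and chopping decisions."""
    queue = deque((omega, 0) for omega in intervals)
    while queue:
        omega, n = queue.popleft()
        queue.extend(process(omega, n))
```

For instance, with a \texttt{process} that halves every interval wider than some threshold and returns the halves with an incremented count, the loop terminates once all pieces are narrow enough.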
\subsection{Overestimate problems} \label{sec:problems}
Whenever the assumptions of Theorem~\ref{thm:intervals} cannot be successfully verified in Algorithm~\ref{alg:interval}, the controller is notified and must take some action. The two obvious choices are either to abandon the problematic interval and not consider it for further processing, or to subdivide it into smaller parts and put some or all of them in the queue \( Q \) defined in Algorithm~\ref{alg:queue} above. In this section we describe the actions that we chose to undertake in the cases shown in Algorithm~\ref{alg:interval}.
The two problems with verifying the various technical assumptions in Algorithm~\ref{alg:interval}, reported to the controller using the functions \emph{controller.problemC} and \emph{controller.problemF}, are of a similar nature. The first problem is reported when we fail to verify \eqref{eq:mon}, which would imply the monotonicity of \( c_n \), so that we cannot use our method for computing a rigorous bound on \( \omega_n \) introduced in Section~\ref{sec:iterate}. The failure to verify the monotonicity of \( c_n \) is likely caused in many cases by considerable overestimates in computing the rigorous bound \( \mathbf{c}_n \) for \( c'_n (\omega) \). The second problem appears if the overestimates in computing an outer bound \( \mathbf{f}_n \) for \( (f^{n})'(\omega) \) become so bad that the bound includes \( 0 \), which is obviously wrong. Algorithm~\ref{alg:halve} shows what one can do in these two situations. Our strategy is to halve the interval \( \omega \) in the hope that the problem will disappear, which indeed often happens, as illustrated in the case study described in Section~\ref{sec:study}. The controller puts both halves of \( \omega \) in the queue \( Q \) so that these smaller intervals can be processed later.
\begin{algorithm}\noindent\label{alg:halve} \rm \begin{tabbing} \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \kill \kw{function} controller.problemC, \kw{function} controller.problemF \\ \kw{input:} \\ \> \( \omega = [\mathbf{a}, \mathbf{b}] \): an interval; \\ \> \( n \): an integer; \\ \kw{begin} \\ \> \( \mathbf{c} :\approx (\mathbf{a} + \mathbf{b}) / 2 \); \\ \> \( Q \).enqueue (\( [\mathbf{a},\mathbf{c}] \), \( n \)); \\ \> \( Q \).enqueue (\( [\mathbf{c},\mathbf{b}] \), \( n \)); \\ \kw{end.} \end{tabbing} \end{algorithm}
The problem reported in Algorithm~\ref{alg:interval} by a call to the function \emph{controller.innerEmpty}, however, is of a different nature, and is directly related to the situation described in \textbf{(P2b)} in Section \ref{sec:assigning}. The interval must then be abandoned (moved to \( \mathcal{P}^- \)). We do not provide pseudocode for this action, because it is trivial.
\subsection{Subdivisions of parameter intervals} \label{sec:chop}
In this subsection we introduce an algorithm for subdividing a parameter interval \( \omega \) when the numerical computations indicate that \( \omega_n \) might intersect the critical neighbourhood \( \Delta \). The purpose of this subdivision is to cut out the part of \( \omega \) that falls onto \( \Delta \), and to leave as much as possible from the interval \( \omega \) in the form of one or two subintervals of \( \omega \) that can be iterated further.
We begin by introducing Algorithm~\ref{alg:bisection}, which uses the idea of the bisection method explained in Section~\ref{sec:avoid} to find a value of the parameter \( a \), as large or as small as possible, such that \( c_n (a) \) is proved numerically to be below or above a certain ``border'' value \( \mathbf{v} \); the boolean parameter called \textit{below} selects which of the two inequalities is requested. The course of action of the algorithm depends on whether \( c_n \) is increasing or decreasing, and this information is passed to the algorithm in the parameter called \textit{increasing}. The number of bisection steps to conduct is given by the parameter \( s > 0 \). The features of the algorithm are precisely stated in Proposition~\ref{prop:bisection} below.
\begin{algorithm}\noindent\label{alg:bisection} \rm \begin{tabbing} \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \kill \kw{function} bisection \\ \kw{input:} \\ \> \( \omega = [\mathbf{a}, \mathbf{b}] \): an interval; \\ \> \( \mathbf{v} \): real number; \\ \> \( n, s \): positive integers; \\ \> \( \textit{increasing}, \textit{below} \): boolean values (true or false); \\ \kw{begin} \\ \> \kw{repeat} \( s \) \kw{times}: \\ \>\> \( \mathbf{m} :\approx (\mathbf{a} + \mathbf{b}) / 2 \); \\ \>\> \( \mathbf{c}^- :\leq c_n (\mathbf{m}) \); \\ \>\> \( \mathbf{c}^+ :\geq c_n (\mathbf{m}) \); \\ \>\> \kw{if} \( \textit{increasing} \) \kw{and} \( \textit{below} \) \kw{then} \\ \>\>\> \kw{if} \( \mathbf{c}^+ \leq \mathbf{v} \) \kw{then} \( \mathbf{a} := \mathbf{m} \); \kw{else} \( \mathbf{b} := \mathbf{m} \); \\ \>\>\> \( \mathbf{p} := \mathbf{a} \); \\ \>\> \kw{if} \( \textit{increasing} \) \kw{and} \kw{not} \( \textit{below} \) \kw{then} \\ \>\>\> \kw{if} \( \mathbf{c}^- \geq \mathbf{v} \) \kw{then} \( \mathbf{b} := \mathbf{m} \); \kw{else} \( \mathbf{a} := \mathbf{m} \); \\ \>\>\> \( \mathbf{p} := \mathbf{b} \); \\ \>\> \kw{if} \kw{not} \( \textit{increasing} \) \kw{and} \( \textit{below} \) \kw{then} \\ \>\>\> \kw{if} \( \mathbf{c}^+ \leq \mathbf{v} \) \kw{then} \( \mathbf{b} := \mathbf{m} \); \kw{else} \( \mathbf{a} := \mathbf{m} \); \\ \>\>\> \( \mathbf{p} := \mathbf{b} \); \\ \>\> \kw{if} \kw{not} \( \textit{increasing} \) \kw{and} \kw{not} \( \textit{below} \) \kw{then} \\ \>\>\> \kw{if} \( \mathbf{c}^- \geq \mathbf{v} \) \kw{then} \( \mathbf{a} := \mathbf{m} \); \kw{else} \( \mathbf{b} := \mathbf{m} \); \\ \>\>\> \( \mathbf{p} := \mathbf{a} \); \\ \> \kw{return} \( \mathbf{p} \); \\ \kw{end.} \end{tabbing} \end{algorithm}
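For the increasing case, the four branches of Algorithm~\ref{alg:bisection} collapse to the following non-rigorous sketch, where a plain function evaluation stands in for the certified bounds \( \mathbf{c}^- \) and \( \mathbf{c}^+ \) (in the rigorous version the comparison is made against an upper or lower bound for \( c_n(\mathbf{m}) \), as above):

```python
def bisection_increasing(a, b, f, v, s, below=True):
    """s bisection steps for an increasing f with f(a) <= v <= f(b).
    Returns p with f(p) <= v if below is True, or f(p) >= v otherwise."""
    p = a if below else b
    for _ in range(s):
        m = (a + b) / 2.0
        if f(m) <= v:
            a = m          # m becomes the better candidate from below
        else:
            b = m          # m becomes the better candidate from above
        p = a if below else b
    return p
```

The invariant maintained by the loop is exactly the one used in the proof of Proposition~\ref{prop:bisection}: the current candidate always satisfies the requested inequality, and the bracketing interval halves at each step.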
\begin{proposition}\label{prop:bisection} Let \( \omega = [\mathbf{a}, \mathbf{b}] \subset [1,\infty) \) be a compact interval. Let \( n > 0 \) be an integer such that \( c_n \) is monotone on \( \omega \). Let the constant called ``\textit{increasing}'' have the value ``true'' if and only if \( c_n \) is increasing. Let \( s > 0 \) be an integer. Let \( \mathbf{v} \in \underline{\omega_n} \). Let \( \mathbf{p}^- \) be the number returned by Algorithm~\ref{alg:bisection} with the parameter ``\textit{below}'' set to ``true'', and let \( \mathbf{p}^+ \) be the number returned by Algorithm~\ref{alg:bisection} with the parameter ``\textit{below}'' set to ``false''.
Then \begin{align} \label{eq:bisection} & c_n ( \mathbf{p}^- ) \leq \mathbf{v} \leq c_n ( \mathbf{p}^+ ), \end{align} and the same holds true for the numerically computed bounds for \( c_n ( \mathbf{p}^- ) \) and \( c_n ( \mathbf{p}^+) \). \end{proposition}
\proof Consider the case in which \( c_n \) is increasing (the case of \( c_n \) decreasing is analogous), and thus assume \textit{increasing} is set to ``true.'' We shall prove that \( c_n ( \mathbf{p}^- ) \leq \mathbf{v} \) (the other part is analogous), and thus assume \textit{below} is set to ``true.'' By the assumptions, \( c_n (\mathbf{a}) \leq \mathbf{v} \leq c_n (\mathbf{b}) \), so \( \mathbf{a} \) is a good initial guess for \( \mathbf{p}^- \), but we are going to get a tighter bound. In the loop repeated \( s \) times, the approximate midpoint \( \mathbf{m} \) of the interval \( [\mathbf{a},\mathbf{b}] \) is computed. Then a lower bound \( \mathbf{c}^- \) and an upper bound \( \mathbf{c}^+ \) for the value of \( c_n (\mathbf{m}) \) are computed and compared with \( \mathbf{v} \). If it is proved numerically that \( c_n (\mathbf{m}) \leq \mathbf{v} \) then the interval \( [\mathbf{a},\mathbf{b}] \) is replaced with \( [\mathbf{m},\mathbf{b}] \), and \( \mathbf{m} \) becomes a new candidate for \( \mathbf{p}^- \). Otherwise, \( \mathbf{a} \) remains a candidate for \( \mathbf{p}^- \), but we tighten the interval \( [\mathbf{a},\mathbf{b}] \) by replacing it with \( [\mathbf{a},\mathbf{m}] \). After each step, \( c_n (\mathbf{p}^-) \leq \mathbf{v} \), and in particular this holds after all \( s \) steps.
The fact that the same inequalities hold true for the numerically computed bounds follows immediately from the fact that precisely these bounds are computed in Algorithm~\ref{alg:bisection} and the corresponding inequalities verified to obtain \eqref{eq:bisection}. We note, however, that the numerical method for computing these bounds must be identical each time, or otherwise this final conclusion may not hold true, due to rounding. \qed
\begin{remark}\label{rem:bisection} Since the interval \( [\mathbf{a},\mathbf{b}] \) is halved \( s \) times in Algorithm~\ref{alg:bisection}, the precision with which \( \mathbf{p}^- \) and \( \mathbf{p}^+ \) are estimated corresponds to \( 2^{-s} \) of the initial size of the interval \( \omega \). \end{remark}
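The control flow of Algorithm~\ref{alg:bisection} can be sketched in plain Python. This is only an illustration, not the C++/MPFR implementation described later: the hypothetical slack \texttt{eps} stands in for the rigorously rounded bounds \( \mathbf{c}^- \) and \( \mathbf{c}^+ \), and \texttt{c} is any monotone test function.

```python
# Illustrative Python sketch of Algorithm "bisection" (not the C++/MPFR
# implementation).  The slack `eps` stands in for the rigorous rounding
# bounds c^- <= c_n(m) <= c^+; `c` is any monotone function on [a, b].

def bisection(c, a, b, v, s, increasing=True, below=True, eps=1e-12):
    # the certified endpoint is a when (increasing and below) or
    # (decreasing and not below) hold together, and b otherwise
    p = a if increasing == below else b
    for _ in range(s):
        m = (a + b) / 2.0
        c_lo, c_hi = c(m) - eps, c(m) + eps   # stand-ins for c^-, c^+
        proved = (c_hi <= v) if below else (c_lo >= v)
        if increasing == below:
            if proved:
                a = m                          # m is a better certified point
            else:
                b = m
            p = a
        else:
            if proved:
                b = m
            else:
                a = m
            p = b
    return p
```

For instance, with \( c(x) = x \) on \( [0,1] \), \( \mathbf{v} = 0.5 \) and \textit{below} set to true, the returned point approaches \( 0.5 \) from below.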
Next, we are going to introduce Algorithm~\ref{alg:hitDelta} that uses Algorithm~\ref{alg:bisection} to chop the interval \( \omega \) into three pieces: \( \omega^1 \), \( \omega^2 \), and \( \omega^3 \), with mutually disjoint interiors, such that \( c_n (\omega^i) \cap \Delta = \emptyset \) for \( i = 1, 2 \) unless \( \omega^i \) is degenerate (a singleton). Conceptually, if \( c_n (\omega) \) intersects \( \Delta \) then we cut out the part \( \omega^3 \) that hits \( \Delta \) from the middle of \( \omega \), so that we can continue iterating the remaining two subintervals of \( \omega \). The two computed subintervals \( \omega^1 \) and \( \omega^2 \) are added to the queue, unless they are too small, which is defined in terms of a certain fraction of the length of the interval \( \omega \), and the controller is notified about the interval \( \omega^3 \) excluded from further computations.
\begin{algorithm}\noindent\label{alg:hitDelta} \rm \begin{tabbing} \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \hspace{.5 cm} {\, \equiv\, } \kill \kw{function} omega\_hit\_Delta \\ \kw{input:} \\ \> \( \omega = [\mathbf{a}, \mathbf{b}] \): an interval; \\ \> \( n \): an integer; \\
\> \( \underline{\omega_n} = [\mathbf{u}^-, \mathbf{u}^+] \): an interval; \\ \> \textit{increasing}: a boolean value (true or false); \\ \kw{begin} \\ \> \( [\mathbf{a}',\mathbf{b}'] := [\mathbf{a},\mathbf{b}] \); \\ \> let \( s > 0 \) be the number of bisection steps recommended by the controller; \\ \> \kw{if} \( \mathbf{u}^- < -\delta \) \kw{then} \\ \>\> \kw{if} \textit{increasing} \kw{then} \\ \>\>\> \( \mathbf{a}' \) := bisection ( \( \omega \), \( \mathbf{v} = -\delta \), \( n \), \( s \), \textit{increasing} = true, \textit{below} = true); \\ \>\> \kw{else} \\ \>\>\> \( \mathbf{b}' \) := bisection ( \( \omega \), \( \mathbf{v} = -\delta \), \( n \), \( s \), \textit{increasing} = false, \textit{below} = true); \\ \> \kw{if} \( \mathbf{u}^+ > \delta \) \kw{then} \\ \>\> \kw{if} \textit{increasing} \kw{then} \\ \>\>\> \( \mathbf{b}' \) := bisection ( \( \omega \), \( \mathbf{v} = \delta \), \( n \), \( s \), \textit{increasing} = true, \textit{below} = false); \\ \>\> \kw{else} \\ \>\>\> \( \mathbf{a}' \) := bisection ( \( \omega \), \( \mathbf{v} = \delta \), \( n \), \( s \), \textit{increasing} = false, \textit{below} = false); \\ \> \kw{if} \( \mathbf{a} \neq \mathbf{a}' \) \kw{then} \( Q \).enqueue (\( [\mathbf{a},\mathbf{a}'] \), \( n \)); \\ \> \kw{if} \( \mathbf{b} \neq \mathbf{b}' \) \kw{then} \( Q \).enqueue (\( [\mathbf{b}',\mathbf{b}] \), \( n \)); \\ \> controller.notify\_excluded\_interval (\( [\mathbf{a}',\mathbf{b}'] \)); \\ \kw{end.} \end{tabbing} \end{algorithm}
\begin{proposition}\label{prop:disjoint} Let \( \omega = [\mathbf{a}, \mathbf{b}] \subset [1,\infty) \) be a compact interval of parameters. Let \( n > 0 \) be an integer. Assume that \( \underline{{\omega}_{n}}\subseteq {\omega}_{n}\subseteq \overline{{\omega}_{n}} \), and that \( \overline{{\omega}_{n}} \cap \Delta \neq \emptyset \). Assume \( c_n \) is monotone on \( \omega \) and the value of the parameter ``\textit{increasing}'' is \emph{true} if and only if \( c_n \) is increasing.
Then all the intervals \( \omega^i \) added to the queue \( Q \) by Algorithm~\ref{alg:hitDelta} applied to these objects satisfy the following: \begin{align} & \omega^i \subset \omega. \\ & \overline{\omega^i_n} \cap \Delta = \emptyset, \end{align} where \( \overline{\omega^i_n} \) is the interval computed in Algorithm~\ref{alg:interval} for \( \omega^i \). \end{proposition}
\proof By construction, it is obvious that \( \omega^i \subset \omega \). We are going to prove that the outer bound \( \overline{\omega^1_n} \) for \( c_n ([\mathbf{a},\mathbf{a}']) \) does not intersect \( \Delta \) if \( \mathbf{a} \neq \mathbf{a}' \) (the argument about \( [\mathbf{b}',\mathbf{b}] \) is analogous). By Proposition~\ref{prop:bisection}, if \( c_n \) is increasing on \( \omega \) and \( \mathbf{u}^- < -\delta \) then the bisection method provides \( \mathbf{a}' \) for which the numerically computed upper bound \( \mathbf{w} \) for \( c_n (\mathbf{a}') \) satisfies \(\mathbf{w} \leq -\delta \), and then indeed \( c_n ([\mathbf{a},\mathbf{a}']) \cap \Delta = \emptyset \), also as computed in the numerical version (with rounding). \qed
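The chopping step of Algorithm~\ref{alg:hitDelta} can likewise be illustrated with a self-contained Python toy. Plain floating point replaces the certified MPFR bounds of the real implementation; the inner helper \texttt{bisect\_to} mimics Algorithm~\ref{alg:bisection}, and the monotone test function and endpoint tests are simplifying assumptions.

```python
# Self-contained toy version of the chopping step in "omega_hit_Delta":
# for a monotone function c on [a, b] whose range may cross
# Delta = [-delta, delta], cut out the middle piece whose values hit Delta
# and return the outer subintervals (to be enqueued for further work).

def chop(c, a, b, delta, s=40, increasing=True):
    def bisect_to(v, below):
        # simplified Algorithm "bisection" on [a, b], no rounding bounds
        lo, hi = a, b
        p = lo if increasing == below else hi
        for _ in range(s):
            m = (lo + hi) / 2.0
            proved = (c(m) <= v) if below else (c(m) >= v)
            if increasing == below:
                lo, hi = (m, hi) if proved else (lo, m)
                p = lo
            else:
                lo, hi = (lo, m) if proved else (m, hi)
                p = hi
        return p

    a1, b1 = a, b
    if min(c(a), c(b)) < -delta:       # part of the range lies below Delta
        if increasing:
            a1 = bisect_to(-delta, below=True)
        else:
            b1 = bisect_to(-delta, below=True)
    if max(c(a), c(b)) > delta:        # part of the range lies above Delta
        if increasing:
            b1 = bisect_to(delta, below=False)
        else:
            a1 = bisect_to(delta, below=False)
    kept = []
    if a1 > a:
        kept.append((a, a1))           # values certified outside Delta
    if b1 < b:
        kept.append((b1, b))
    return kept, (a1, b1)              # kept pieces, excluded middle
```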
\subsection{Software} \label{sec:software}
A software implementation of the algorithms introduced above is publicly available at \cite{software}. The program is a command-line utility (to be launched in a text terminal, or at the command prompt), written in C++. It compiles with the GNU C++ compiler (version 8.3.0, as of writing this paper). The GNU MPFR software library \cite{mpfr} is used for arithmetic operations on real numbers whenever high precision of the results and controlled rounding are necessary. In particular, all real numbers provided in the input in decimal form are rounded to the nearest representable numbers at the target precision by an MPFR function.
We additionally provide a web interface at \cite{software} that makes it possible to run the program and see the results directly from the web browser. One fills out a table with the arguments to be passed to the program, hits the button to submit the form, and obtains the output produced by the program directly in the web page. The web interface allows the user to specify the parameter interval of interest and set several of the parameters involved in the computations, such as \( \delta, N_{0} \) and other parameters discussed in Section \ref{sec:assigning}, and possibly some other parameters not documented here (related, for example to the form in which the output is presented). For further details, we refer the reader to~\cite{software}.
\section*{Acknowledgments} \label{sec:ack}
A.G. and C.E.K. are grateful to ICTP, where part of this research was carried out, for its generous hospitality.
\section*{Data availability statement} \label{sec:data}
The raw data generated by our software, which constitutes a basis for the figures and for the analysis conducted in Section~\ref{sec:results}, is available in~\cite{data20}.
\end{document}
\begin{document}
\title{Approximation of $N$-player stochastic games with singular controls by mean field games\\{\em Honoring Prof. Jin Ma's 65th birthday}}
\begin{abstract}
This paper establishes that a class of $N$-player stochastic games with singular controls, either of bounded velocity or of finite variation, can both be approximated by mean field games (MFGs) with singular controls of bounded velocity. More specifically, it shows (i) the optimal control to an MFG with singular controls of a bounded velocity $\theta$ is shown to be an $\epsilon_N$-NE to an $N$-player game with singular controls of the bounded velocity, with $\epsilon_N = O(\frac{1}{\sqrt{N}})$, and (ii) the optimal control to this MFG is an $(\epsilon_N + \epsilon_{\theta})$-NE to an $N$-player game with singular controls of finite variation, where $\epsilon_{\theta}$ is an error term that depends on $\theta$. This work generalizes the classical result on approximation $N$-player games by MFGs, by allowing for discontinuous controls. \end{abstract}
\section{Introduction} {$N$}{-player} non-zero-sum stochastic games are notoriously hard to analyze. The theory of Mean Field Games (MFGs), pioneered by \cite{LL2007} and \cite{HMC2006}, presents a powerful approach to study stochastic games of a large population with small interactions. (See the lecture notes and books \cite{BFY2013}, \cite{CDLL2015}, \cite{CarmonaDelarue}, \cite{GLL2010}, and the references therein for more details on MFGs.) The key idea behind MFGs is to avoid directly analyzing the difficult $N$-player stochastic games, and instead to approximate the dynamics and the objective function via the notion of the population's probability distribution flows, a.k.a. mean information processes. This idea is feasible if an MFG can approximate the corresponding $N$-player game under proper criteria. The seminal work of \cite{HMC2006} demonstrated that this is indeed the case, and showed that the value function of an $N$-player game under the criterion of Nash equilibrium (NE) can be approximated by the value function of an associated MFG with an error of order $\frac{1}{\sqrt{N}}$. There are also higher-order error analyses, through the central limit theorem and the large deviation principle for MFGs. For instance, \cite{DLR2018b} and \cite{DLR2019} studied diffusion-based models with common noise via the coupling approach, and \cite{BC2018} and \cite{CP2018} analyzed finite state space models without common noise using master equations. As such, MFGs provide an elegant and analytically feasible framework to approximate $N$-player stochastic games.
All existing works on the approximation of $N$-player stochastic games by MFGs are established within the framework of regular controls, where controls are absolutely continuous. However, most control problems from engineering and economics involve controls that are not absolutely continuous, or even continuous. A natural question is: will this relation between the MFG and the $N$-player game hold when controls may not be continuous?
The focus of this paper is to establish, within the singular control framework, the approximation of $N$-player stochastic games by their corresponding MFGs.
{\bf MFGs and stochastic games with singular controls.} Compared with regular controls, singular controls provide a more general and natural mathematical framework where both the controls and the state space may be discontinuous. However, it is well documented that the analysis of singular controls is much harder than that of regular controls. From a PDE perspective, the associated fully nonlinear PDE is coupled with possibly state and time dependent gradient constraints. From a control perspective, the Hamiltonian for singular controls of finite variation diverges \cite{Pham2009} and the standard stochastic maximum principle fails; even in the case of bounded velocity, the Hamiltonian is discontinuous. In contrast, the existence of solutions to MFGs relies on the assumption that the Hamiltonian $H(x,p)$ has sufficient regularity, especially with respect to $p$. For instance, \cite{LL2007} assumed that $H$ is of class $\mathcal{C}^1$ in $p$, and \cite{Cardaliaguet2013} assumed that $H$ is of class $\mathcal{C}^2$ and that the second-order derivative with respect to $p$ is Lipschitz continuous. The exception is \cite{Lacker2015}, which established in a general framework the existence of Markovian equilibrium solutions when controls are continuous but may not be Lipschitz. \cite{FH2016} adopted the notion of relaxed controls for the existence of solutions to MFGs with singular controls and established their approximation by MFGs with regular controls.
Nevertheless, the question remains as to whether $N$-player games can be approximated by MFGs, when controls may not be absolutely continuous.
{\bf Our work.} There are two types of singular controls, namely, singular controls of finite variation and singular controls of bounded velocity. This paper establishes that $N$-player stochastic games with singular controls, {\it both} of finite variation and of bounded velocity, can be approximated under the NE criterion by MFGs with singular controls of bounded velocity. This result suggests that one may completely circumvent the more difficult MFGs with singular controls of finite variation when analyzing stochastic games of singular type, and instead focus on singular control games of bounded velocity.
Indeed, singular controls of bounded velocity share some nice properties with regular controls and are easier to analyze than singular controls of finite variation. This conviction underlies the main idea in our analysis of the relation between MFGs and the associated $N$-player stochastic games. The analysis starts with two basic components. The first is the relationship between the underlying singular control problems, bounded velocity vs finite variation: Theorem~\ref{thetainfty} shows that, under proper assumptions, the value function of the former converges to that of the latter. The second is the existence, uniqueness, and regularity of the solution to the MFG with singular controls of bounded velocity, established in Theorem~\ref{mainthm}. These two ingredients lead to the main theorem on the approximation of the corresponding $N$-player games by MFGs. Specifically, (i) given a bounded velocity $\theta$, the optimal control to the MFG with singular controls of bounded velocity is an $\epsilon_N$-NE to an $N$-player game with singular controls of bounded velocity, with $\epsilon_N = O(\frac{1}{\sqrt{N}})$, and (ii) the optimal control to the MFG is an $(\epsilon_N + \epsilon_{\theta})$-NE to an $N$-player game with singular controls of finite variation, where $\epsilon_{\theta}$ is an error term that depends on $\theta$.
{\bf Other related work.} There are earlier works relating singular controls with bounded velocity and with finite variation. For instance, exploiting this relation enables \cite{MT1989} to establish the existence of the optimal singular control of finite variation for a controlled Brownian motion. This relation is also analyzed in \cite{HPY2016} for a monotone follower type of singular controls. None of these works is in a game setting. Moreover, to establish the relation between MFGs and $N$-player games in a singular control framework, one needs more explicit construction for the optimal control policies.
A Markov chain based approximation approach for numerically solving MFGs with reflecting barriers was proposed in \cite{BBC2018}, where its convergence was also established. Then in \cite{DF2018} it was shown that, under the notion of weak (distributional) NE, $N$-player stochastic games with singular controls of finite variation can be approximated by those of bounded velocity, if the set of Nash equilibria for the latter is relatively compact under an appropriate topology. The focus and approach of these works are different from ours.
Finally, the existence of a Markovian NE solution for MFGs in Theorem \ref{mainthm} was established in \cite{Lacker2015} for a more general class of MFGs. His approach is sophisticated and consists of two main steps. The first step is the existence of a weak solution under a convexity assumption, and the second step is a measurable selection argument showing that this weak solution is in fact the desired one. Our approach is instead to directly construct the Markovian NE via a fixed point argument, based on the special structure of the game. This yields a more explicit solution structure with additional regularity properties, which are necessary for the subsequent analysis connecting MFGs and the associated $N$-player games.
\section{Problem formulations and main results} \label{setup}
We start with a probability space $(\Omega, \mathcal{F}, \mathbb{F} =(\mathcal{F}_t)_{0\leq t \leq \infty}, P)$ on which {$\mathbf{W}^i = \{W_t^i\}_{0\leq t\leq \infty}$}, $i=1,\ldots,N<\infty$, are i.i.d. standard Brownian motions. Let $\mathcal{P} (\mathbb{R}) $ be the set of all probability measures on $\mathbb{R}$, and
$\mathcal{P}_p (\mathbb{R}) $ be the set of all probability measures on $\mathbb{R}$ with a finite $p$th moment. That is,
$$\mathcal{P}_p (\mathbb{R}) = \biggl\lbrace \mu \in \mathcal{P} (\mathbb{R}) \biggl| \left(\int_\mathbb{R} |x|^p \mu( dx)\right )^\frac{1}{p} < \infty \biggr\rbrace.$$ To define the flow of probability measures $\{\mu_t\}_{t\ge 0}$, let us recall the $p$th order Wasserstein metric on $ \mathcal{P}_p(\mathbb{R})$ defined as
$$D^p(\mu,\mu')= \inf_{\tilde{\mu} \in \Gamma(\mu, \mu') } \limits \left(\int_{\mathbb{R}\times\mathbb{R}} |y-y'|^p \tilde{\mu} (dy,dy') \right)^{\frac{1}{p}},$$ where $\Gamma(\mu, \mu')$ is the set of all couplings of $\mu $ and $\mu' $. Denote by $C([0, T], \mathcal{P}_2 (\mathbb{R}))$ the set of all continuous mappings from $[0, T]$ to $\mathcal{P}_2 (\mathbb{R})$. Then $\mathcal{M}_{[0,T]} \subset C([0, T],\mathcal{P}_2 (\mathbb{R}))$ is a class of flows of probability measures such that there exists a positive constant $c$ so that \begin{align*}
\mathcal{M}_{[0,T]} = \biggl\lbrace \{ \mu_t\}_{0\le t\le T} ~\biggl|~ \sup_{s\neq t}\frac{ D^1(\mu_t,\mu_s) }{|t-s|^{\frac{1}{2}}} \leq c, \sup_{t \in [0,T]} \int_\mathbb{R} |x|^2 \mu_t(dx) \leq c\biggr\rbrace. \end{align*} $\mathcal{M}_{[0,T]}$ is a metric space endowed with the metric \begin{align}\label{metric1} d_\mathcal{M}\biggl(\{\mu_t\}_{0\le t\le T},\{\mu_t'\}_{0\le t\le T}\biggr) = \sup_{0\le t\le T} D^2(\mu_t,\mu_t'). \end{align}
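As a side remark, for two empirical measures on \( \mathbb{R} \) with the same number of atoms, the infimum in \( D^p \) is attained by the monotone coupling that matches sorted samples; the following sketch (an illustration, not part of the paper's analysis) computes \( D^p \) in this way.

```python
# Monotone-coupling computation of the p-Wasserstein distance D^p between
# two empirical measures on the real line with equally many atoms: in one
# dimension the optimal coupling matches the sorted samples.

def wasserstein_p(xs, ys, p=2):
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    n = len(xs)
    return (sum(abs(x - y) ** p for x, y in zip(xs, ys)) / n) ** (1.0 / p)
```

For example, translating a sample by a constant \( c \) yields a distance of \( |c| \) for every \( p \).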
Throughout, we will denote by $Lip(\psi)$ a Lipschitz constant of a given Lipschitz function $\psi$. That is, $|\psi(x)-\psi(y) | \leq Lip(\psi) |x-y|$ for any $x,y \in \mathbb{R}$. {For any $\psi(x)\in\mathcal{C}^2$, we will write \[\mathcal{L} \psi (x) = b(x) \partial_x \psi (x)+\frac{1}{2} \sigma^2 (x) \partial_{xx} \psi (x)\] for the infinitesimal generator of a stochastic process \[dx_t = b(x_t) dt + \sigma (x_t) dW_t,\]
where $b$ and $\sigma$ are Lipschitz continuous and of linear growth. We say that a function $f$ is of polynomial growth if $|f (x)| \leq c(|x|^k+1) $ for some positive constant $c$ and some $k\in\mathbb{N}$, for all $x\in\mathbb R$.}
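As a quick numerical illustration of the generator (not part of the paper's argument), one may check by Monte Carlo that \( \mathbb{E}[\psi(x_{dt})] - \psi(x_0) \approx \mathcal{L}\psi(x_0)\, dt \) for small \( dt \). The choices \( b(x) = -x \), \( \sigma \equiv 1 \), and \( \psi(x) = x^2 \), for which \( \mathcal{L}\psi(x) = -2x^2 + 1 \), are hypothetical.

```python
# Monte Carlo check of E[psi(x_dt)] - psi(x0) ~ L psi(x0) * dt for the
# hypothetical choices b(x) = -x, sigma = 1, psi(x) = x^2, which give
# L psi(x) = b(x) psi'(x) + (1/2) sigma^2 psi''(x) = -2 x^2 + 1.

import math
import random

def generator_check(x0=0.5, dt=1e-2, n=200_000, seed=0):
    rng = random.Random(seed)
    psi = lambda x: x * x
    # one Euler step of dx = -x dt + dW started at x0, averaged over n draws
    est = sum(psi(x0 - x0 * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0))
              for _ in range(n)) / n
    mc = (est - psi(x0)) / dt          # Monte Carlo estimate of L psi(x0)
    exact = -2.0 * x0 * x0 + 1.0
    return mc, exact
```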
\subsection{Problems of N-player stochastic games and MFGs}
{\bf ${N}$-player game with singular controls of finite variation.} Fix a time horizon $T <\infty$ and suppose that there are $N$ {rational and indistinguishable} players in the game. Denote by $ \{x_t^i\}_{s \leq t \leq T}$ the state process in $\mathbb{R}$ of player $i$ ($i = 1, \ldots, N$), starting from time $s\in [0,T]$ with $x_{s-}^i=x^i$. Assume that the dynamics of $\{x_t^i\}$ follow, for $s\le t \le T$, \begin{align} \label{nSDE}
dx_t^i = \frac{1}{N}\sum_{j=1}^N b_0(x_t^i,x_t^j) dt + \sigma dW_t^i +d\xi_t^{i+}-d\xi_t^{i-}, \ \ \ x_{s-}^i=x^i,
\end{align} where $b_0: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is bounded, Lipschitz continuous, and $\sigma$ is a positive constant. Here $ \xi_\cdot^i = (\xi_\cdot^{i+},\xi_\cdot^{i-}) $ is the control by player $i$ with $ (\xi_\cdot^{i+},\xi_\cdot^{i-}) $ nondecreasing, c\`adl\`ag, $\xi_{s-}^{i+}=\xi_{s-}^{i-} = 0$, $\mathbb{E} \biggl [\int_s^T d\xi_t^{i+} \biggr]< \infty, $ and $\mathbb{E} \biggl[\int_s^T d\xi_t^{i-} \biggr] < \infty $.
Given Eqn. (\ref{nSDE}), the objective of player $i$ is to minimize, over an appropriate control set $\mathcal{U}^N$, her cost function $J^{i,N}(s,x^i , \xi_\cdot^{i+},\xi_\cdot^{i-};\xi_\cdot^{ -i} )$. That is
\begin{align}\label{Nsingular}\tag{N-FV} \begin{split} \inf_{ (\xi_\cdot^{i +}, \xi_\cdot^{i -}) \in \mathcal{U}^N} J^{i,N}(s,x^i,\xi_\cdot^{i+},\xi_\cdot^{i-};\xi_\cdot^{ -i}) & = \inf_{ (\xi_\cdot^{i +}, \xi_\cdot^{i-}) \in \mathcal{U}^N} \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(x_t^i, x_t^j)dt + \gamma_1 d\xi_t^{i+}+ \gamma_2 d\xi_t^{i-} \right].
\end{split} \end{align}
Here $\xi_\cdot^{ -i}=\{ (\xi_\cdot^{j +}, \xi_\cdot^{j -})\}_{j=1, j \neq i }^N$ denotes the set of controls for all the players except for player $i$, the cost function $f_0: \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz continuous, $\gamma_1$ and $\gamma_2 $ are constants, and
\begin{align*}
\mathcal{U}^N = \biggl\lbrace (\xi_\cdot^+,\xi_\cdot^-) ~\biggl|~ & \xi_t^+ \text{ and }\xi_t^- \text{ are } \mathcal{F}_t^{(x_t^1,\ldots, x_t^N)} \text{-adapted, c\`adl\`ag, nondecreasing, } \\ &
\xi_{s-}^+ = \xi_{s-}^- =0, \mathbb{E} \biggl[\int_s^T d\xi_t^+ \biggl]<\infty, \text{ and } \mathbb{E} \biggl[\int_s^T d\xi_t^- \biggl] < \infty ,\text{ for } 0\le s\le t\le T \biggl\rbrace, \end{align*} with {$\{ \mathcal{F}_t^{(x_t^1,\ldots, x_t^N)}\}_{s\le t\le T}$} the natural filtration of $\{x_t^1,\ldots, x_t^N\}_{s\le t\le T}.$
{\bf $N$-player game with singular controls of bounded velocity.} Now restrict the controls $(\xi_\cdot^{i+},\xi_\cdot^{i-})$ to have bounded velocity: for a given constant $\theta >0 $, $$d\xi_t^{i+} = \dot{\xi}_t^{i+}dt, \ \ \ d\xi_t^{i-} = \dot{\xi}_t^{i-} dt, $$ with $0 \leq \dot{\xi}_t^{i+}, \dot{\xi}_t^{i-} \leq \theta$. Then game (\ref{Nsingular}) becomes
\begin{align}\label{Nbound}\tag{N-BD} \inf_{(\xi_\cdot^{i +}, \xi_\cdot^{i -}) \in \mathcal{U}_{\theta}^N} J^{i,N}_{\theta} (s,x^i,\xi_\cdot^{i+},\xi_\cdot^{i-};\xi_\cdot^{ -i} ) & = \inf_{ (\xi_\cdot^{i +}, \xi_\cdot^{i -}) \in \mathcal{U}_\theta^N} \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(x_t^i, x_t^j)dt + \gamma_1 \dot{\xi}_t^{i+}dt +\gamma_2\dot{\xi}_t^{i-}dt \right],
\\ \text{subject to } \quad dx_t^i &= \frac{1}{N}\sum_{j=1}^N b_0(x_t^i,x_t^j) dt + \sigma dW_t^i +\dot{\xi}_t^{i+}dt-\dot{\xi}_t^{i-}dt, \ \ \ x_s^i=x^i. \end{align}
Here the admissible set is given by
$$\mathcal{U}_{\theta}^N = \biggl\lbrace (\xi_\cdot^+,\xi_\cdot^-) ~\biggl|~ (\xi_\cdot^+,\xi_\cdot^-) \in \mathcal{U}^N , 0\le \dot{\xi}_t^+,\dot{\xi}_t^- \le \theta, \text{ for } 0 \le s\le t\le T \biggl\rbrace.$$
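To fix ideas, the state dynamics in game (\ref{Nbound}) can be simulated with an Euler--Maruyama scheme. The sketch below is purely illustrative: the interaction kernel \( b_0(x,y) = \tanh(y-x) \) and the bang-bang feedback of velocity \( \theta \) are hypothetical choices, not controls derived from the game.

```python
# Euler--Maruyama simulation of the N-player dynamics (N-BD) with the
# hypothetical interaction b0(x, y) = tanh(y - x) and a hypothetical
# bang-bang control of velocity theta pushing each state toward 0.

import math
import random

def simulate(N=50, T=1.0, steps=100, sigma=0.3, theta=1.0, seed=0):
    rng = random.Random(seed)
    dt = T / steps
    x = [rng.gauss(0.0, 1.0) for _ in range(N)]
    for _ in range(steps):
        # empirical mean-field drift (1/N) sum_j b0(x^i, x^j)
        drift = [sum(math.tanh(xj - xi) for xj in x) / N for xi in x]
        # hypothetical control xidot^+ - xidot^-, bounded by theta
        ctrl = [theta if xi < 0.0 else -theta for xi in x]
        x = [xi + (di + ci) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for xi, di, ci in zip(x, drift, ctrl)]
    return x
```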
There are several criteria to analyze stochastic games. Two standard ones are Pareto optimality and the {Nash equilibrium (NE)}. In this paper we will focus on the NE. Depending on the problem setting, and in particular on the admissible controls, there are several forms of Nash equilibria (NEs), including the open loop NE, the closed loop NE, and the closed loop NE in feedback form (a.k.a., the Markovian NE). Throughout the paper, we will consider the Markovian NE, meaning that the controls are deterministic functions of the time $t$, the current state $x_t$, and a fixed measure $\mu_t$. More precisely,
\begin{definition}[Markovian $\epsilon$-Nash equilibrium to (\ref{Nsingular})] A Markovian control $(\xi_\cdot^{i*+}, \xi_\cdot^{i*-}) \in \mathcal{U}^N$ for $i = 1,\ldots, N$ is a \emph{Markovian $\epsilon$-Nash equilibrium} to \emph{(\ref{Nsingular})} if for any $i \in \{1, \ldots, N\}$, any $(s,x) \in [0,T]\times \mathbb{R}$ and any Markovian $(\xi_\cdot^{i'+},\xi_\cdot^{i'-}) \in \mathcal{U}^N$, $$E_{x_{s-}^{N}}\left[J^{i,N} (s,x_{s-}^{N},\xi_\cdot^{i'+},\xi_\cdot^{i'-};\xi_\cdot^{*-i} )\right] \geq E_{x_{s-}^{N}}\left[J^{i,N} (s,x_{s-}^N,\xi_\cdot^{i*+},\xi_\cdot^{i*-};\xi_\cdot^{*-i} )\right] -\epsilon.$$ \end{definition}
{\begin{definition}[Markovian $\epsilon$-Nash equilibrium to (\ref{Nbound})] A Markovian control $(\xi_\cdot^{i*+}, \xi_\cdot^{i*-}) \in \mathcal{U}_{\theta}^N$ for $i = 1,\ldots, N$ is a \emph{Markovian $\epsilon$-Nash equilibrium} to \emph{(\ref{Nbound})} if for any $i \in \{1, \ldots, N\}$, any $(s,x) \in [0,T]\times \mathbb{R}$ and any Markovian $(\xi_\cdot^{i'+},\xi_\cdot^{i'-}) \in \mathcal{U}_{\theta}^N$, $$E_{x_{s,\theta}^N}\left[J^{i,N}_{\theta} (s,x_{s,\theta}^N,\xi_\cdot^{i'+},\xi_\cdot^{i'-};\xi_\cdot^{*-i} )\right] \geq E_{x_{s,\theta}^N}\left[J^{i,N}_{\theta} (s,x_{s,\theta}^N,\xi_\cdot^{i*+},\xi_\cdot^{i*-};\xi_\cdot^{*-i} ) \right]-\epsilon.$$ \end{definition}}
We will show that both $N$-player games, game (\ref{Nbound}) and game (\ref{Nsingular}), can be approximated by MFGs with singular controls of bounded velocity, as introduced below.
{\bf MFGs with singular controls of bounded velocity.} Assume that all $N$ players are identical. That is, for each time $t \in [0,T]$, all $x_t^i$ have the same probability distribution. Define $ \mu_t = \lim_{N \rightarrow \infty} \limits \frac{1}{N} \sum_{i =1}^N \limits \delta_{x_t^i}$ as the limit of the empirical distributions of the $x_t^i$. Then, by the strong law of large numbers (SLLN), as $N\rightarrow \infty$, \begin{align*} &\frac{1}{N}\sum_{j=1}^N b_0(x_t ,x_t^j) \rightarrow \int_\mathbb{R} b_0(x_t ,y) \mu_t(dy)=b(x_t ,\mu_t) , \ \ \mathbb{P}-a.s., \\ &\frac{1}{N}\sum_{j=1}^N f_0(x_t , x_t^j) \rightarrow \int_\mathbb{R} f_0(x_t , y) \mu_t(dy) =f(x_t , \mu_t),\ \ \mathbb{P}-a.s., \end{align*} subject to appropriate technical conditions. Here $b, f:\mathbb{R} \times \mathcal{P}_1(\mathbb{R}) \rightarrow \mathbb{R}$ are functions satisfying assumptions to be specified later. That is, instead of game (\ref{Nbound}), one can solve for a pair of a control $\{\xi^*_t\}_{t\in[0,T]}$ and a mean information process $\{\mu^*_t\}_{t\in[0,T]}$ such that \begin{enumerate}
\item Under $\{\mu^*_t\}_{t\in[0,T]}$, $\{\xi^*_t\}_{t\in[0,T]}=\{(\xi_t^{*+},\xi_t^{*-})\}_{t\in[0,T]}$ is an optimal strategy for
\begin{align}\label{MFGbounded1}\tag{MFG-BD} \begin{split}
v_{\theta} (s, x|\{\mu^*_t\} ) & := \inf_{(\xi_\cdot^{ +}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta}} J_{\theta} (s, x , \xi_\cdot^{ +}, \xi_\cdot^{ -} |\{\mu^*_t\}) \\&: = \inf_{ (\xi_\cdot^{+}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta}} \mathbb{E}_{\mu^*_s}\left[ \int_s^T \left( f(x_t, \mu_t)+ \gamma_1\dot{\xi}_t^{+}+\gamma_2\dot{\xi}_t^{-} \right) dt|x_s=x\right], \end{split} \end{align} subject to \begin{equation}\label{dynamics-bdd}
dx_t = \left( b(x_t,\mu_t^*) + \dot{\xi}_t^{+}- \dot{\xi}_t^{-} \right) dt + \sigma dW_t , \quad x^*_s \sim \mu^*_s, \end{equation} \begin{align*}
\mathcal{U}_{\theta} = \biggl\lbrace (\xi_\cdot^+,\xi_\cdot^-) \biggl|& \xi_t^+\text{ and }\xi_t^- \text{ are } \mathcal{F}_t^{(x_{t-})} \text{-adapted, c\`adl\`ag, nondecreasing, }
\xi_{s}^+=\xi_{s}^-=0, \\& 0\le \dot{\xi}_t^+,\dot{\xi}_t^-\le \theta,\mathbb{E} \biggl[\int_s^T d\xi_t^+ \biggl] <\infty , \text{ and } \mathbb{E} \biggl[\int_s^T d\xi_t^- \biggl] < \infty, \text{ for } 0\le s\le t \le T \biggl\rbrace, \end{align*} with $ \{ \mathcal{F}_t^{(x_{t-})} \}_{s\le t\le T} $ the filtration of $ \{(x_{t-})\}_{s\le t\le T} $. When $\theta\to \infty$, we simply write $\mathcal{U}$ instead of $\mathcal{U}_{\infty}$ for notational simplicity. \item $\mu_t^*$ is the probability distribution of $x_t^*$ which is given by $$ dx_t^{*} = \left( b(x_t^{*},\mu_t^{*}) + \dot{\xi}_t^{*+}- \dot{\xi}_t^{*-} \right) dt + \sigma dW_t ,\quad s\le t\le T, \quad x_s^{*} \sim \mu_s^*. $$ \end{enumerate} Such a pair $(\xi^{*,+}_\cdot,\xi^{*,-}_\cdot)\in\mathcal{U}_{\theta}$ and $\{\mu_t^*\}\in\mathcal{M}_{[0,T]}$ constitute a solution of \eqref{MFGbounded1}. \begin{remark} \label{remark-fixedinitial}
Note that here the game value is $v_{\theta}(s, x|\{\mu^*_t\})=\inf_{(\xi_\cdot^{ +}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta}} J_{\theta} (s, x , \xi_\cdot^{ +}, \xi_\cdot^{ -} |\{\mu^*_t\})$, with the initial state $x_s^*=x$ a sample from $\mu^*_s$. An alternative definition of the game is to solve
$\tilde{v}_{\theta}(s, \mu^*_s)$ with $\tilde{v}_{\theta}(s, \mu^*_s)=\mathbb{E}_{\mu^*_s}[v_{\theta}(s, x_s)]$. This game value can be easily recovered from
$v_{\theta}(s,x)$. (See also \cite{GX2019} and \cite[Section 2.2.2]{LZ2018} for a similar set up.) \end{remark} For ease of exposition, we will use the following notion of control function, for a fixed $\mu_t$. \begin{definition} [Control function] A control of bounded velocity $\xi_t$ is called \emph{Markovian}
if $ d\xi_t = \dot{\xi}_t dt = \varphi (t,x_t|\{\mu_t\}) dt$ for some function $\varphi:[0,T]\times \mathbb{R}\rightarrow \mathbb{R}$. A control of finite variation $\xi_t$ is called \emph{Markovian} if $ d\xi_t = d\varphi(t,x_t|\{\mu_t\})$ for some function $\varphi$. In either case, $\varphi$ is called the \emph{control function} for the fixed $\{\mu_t\}$. \end{definition}
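The control-measure consistency in the definition of \eqref{MFGbounded1} suggests a Picard (fixed point) iteration: freeze a measure flow, plug in a response, simulate the induced flow, and repeat. The following schematic sketch uses a particle approximation and a hypothetical threshold feedback in place of the true best response obtained from the HJB equation; it only illustrates the loop, not the solution of the MFG.

```python
# Schematic Picard iteration for the MFG consistency condition: the
# measure flow {mu_t} is summarized by its mean, approximated by M
# particles.  The threshold feedback `response` is a hypothetical
# stand-in for the actual HJB best response.

import math
import random

def mean_flow(control, flow, M=200, T=1.0, steps=50, sigma=0.3, seed=1):
    """Empirical mean flow of M particles driven by the frozen `flow`."""
    rng = random.Random(seed)
    dt = T / steps
    x = [rng.gauss(0.0, 1.0) for _ in range(M)]
    means = []
    for k in range(steps):
        means.append(sum(x) / M)
        m = flow[k]                     # frozen mean from the previous iterate
        x = [xi + (math.tanh(m - xi) + control(xi, m)) * dt
             + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0) for xi in x]
    return means

def solve_mfg(theta=1.0, iters=20, steps=50):
    flow = [0.0] * steps                # initial guess for the mean flow
    response = lambda x, m: theta if x < m else -theta  # hypothetical feedback
    for _ in range(iters):
        new_flow = mean_flow(response, flow, steps=steps)
        gap = max(abs(u - w) for u, w in zip(flow, new_flow))
        flow = new_flow
        if gap < 1e-6:                  # flow is (numerically) consistent
            break
    return flow
```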
\subsection{Main results} The main results are derived based on the following assumptions.
\begin{itemize}
\item[(A1).] $b_0(x,y)$ and $f_0(x,y)$ are Lipschitz continuous in both $x$ and $y$. That is, $|b_0(x_1,y_1)-b_0(x_2,y_2)|\leq Lip(b_0)(|x_1-x_2|+|y_1-y_2|)$ and $|f_0(x_1,y_1)-f_0(x_2,y_2)|\leq Lip(f_0)(|x_1-x_2|+|y_1-y_2|)$ for some $Lip(b_0),Lip(f_0)>0$. Moreover, $|b_0(x,y)|\leq c_1$ for some $c_1$. $b(x,\mu)$ and $ f(x,\mu) $ are Lipschitz continuous in $x$ and $\mu$, and $b(x,\mu)$ is bounded. That is, $| b(x_1,\mu^1)- b(x_2,\mu^2)| \le Lip(b)( |x_1-x_2| + D^1(\mu^1,\mu^2)) $ for some $Lip(b) >0$, and $| f(x_1,\mu^1)- f(x_2,\mu^2)| \le Lip(f)( |x_1-x_2| + D^1(\mu^1,\mu^2)) $ for some $Lip(f) >0$, and $|b(x,\mu)| \le c_2$ for some $c_2$.
\item[(A2).] $f(x,\mu) $ has a first-order derivative in $x$ with $f(x,\mu)$ and $\partial_x f(x,\mu)$ satisfying the polynomial growth condition. Moreover, for any fixed $\mu \in \mathcal{P}_2(\mathbb{R})$, $f(x,\mu)$ is convex and nonlinear in $x$. In addition, there exists some constant $c_f$ satisfying $|f(x,\mu)| \leq c_f\biggl(1 + |x|^2+ \int_\mathbb{R} y^2 \mu(dy) \biggl)$ for any $x\in \mathbb{R}, \mu \in \mathcal{P}_2(\mathbb{R})$.
Note that this assumption is well-posed: by definition of $\mathcal{M}_{[0,T]}$, $\mu \in \mathcal{P}_2$.
\item[(A3).] $b(x,\mu) $ has first- and second-order derivatives with respect to $x$ with uniformly continuous and bounded derivatives in $x$. \item[(A4).] $-\gamma_1<\gamma_2$. This ensures the finiteness of the value function. Indeed, consider game (\ref{Nsingular}) with $ -\gamma_1 > \gamma_2$. Taking $\xi_t^{i+} = \xi_t^{i-} = Mt$ and letting $M \rightarrow \infty$ yields $J^{i,N}\rightarrow -\infty$.
\item[(A5).] (Monotonicity of the cost function) $f$ satisfies either \begin{align*} \mbox{(i).} \int_\mathbb{R} (f(x,\mu^1) - f(x,\mu^2)) (\mu^1 -\mu^2) (dx) \geq 0, \text{ for any } \mu^1 , \mu^2 \in \mathcal{P}_2 (\mathbb{R}) , \end{align*} and $H(x,p ) = \inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta] } \limits \{ (\dot{\xi}^+ -\dot{\xi}^- )p +\gamma_1 \dot{\xi}^+ + \gamma_2 \dot{\xi}^- \} $ satisfies the following condition for any $x,p,q \in \mathbb{R}$ \begin{align*} \text{if }H(x,p+q) - H(x,p) - \partial_p H(x,p) q = 0, \text{ then } \partial_p H(x,p+q) = \partial_p H(x,p), \ \ \ \mbox{or} \end{align*} \begin{align*} \mbox{(ii).} \int_\mathbb{R} (f(x,\mu^1) - f(x,\mu^2)) (\mu^1 -\mu^2) (dx)> 0, \text{ for any } \mu^1 \neq \mu^2 \in \mathcal{P}_2 (\mathbb{R}). \end{align*}
As in \cite{LL2007, Cardaliaguet2013}, Assumption (A5) is critical to ensure the uniqueness for the solution of (\ref{MFGbounded1}), as will be clear from the proof of Proposition \ref{uniq} for the uniqueness of the fixed point.
\item[(A6).] (Rationality of players) For any control function $ \varphi $, any $t \in [0,T], $ any fixed $\{\mu_t\}$, and any $ x,y \in \mathbb{R}$, $(x-y)\biggl( \varphi (t,x|\{\mu_t\})- \varphi (t,y|\{\mu_t\})\biggr) \leq 0 $.
Intuitively, this assumption says that the better off an individual player's state is, the less the player exercises control, so as to minimize her cost. This assumption first appeared in \cite{EKPPQ1997} in the analysis of BSDEs. \end{itemize}
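\begin{remark} As a simple illustration of the monotonicity condition in (A5) (this example is for illustration only and is not used elsewhere), consider the cost $f(x,\mu) = x^2 + x \int_\mathbb{R} y\, \mu(dy)$, which is convex and nonlinear in $x$ as required by (A2). Writing $m_i = \int_\mathbb{R} y\, \mu^i(dy)$, we have $f(x,\mu^1) - f(x,\mu^2) = x(m_1 - m_2)$, hence \begin{align*} \int_\mathbb{R} (f(x,\mu^1) - f(x,\mu^2)) (\mu^1 -\mu^2) (dx) = (m_1-m_2) \int_\mathbb{R} x\, (\mu^1 -\mu^2) (dx) = (m_1-m_2)^2 \geq 0 , \end{align*} so condition (i) holds; condition (ii) fails here, since distinct measures with equal means give equality. \end{remark}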
\begin{mainthm} Assume \emph{(A1)--(A6)}. Then, \begin{itemize} \item[a).] For any fixed $\theta$, the optimal control to \emph{(\ref{MFGbounded1})} is an $\epsilon_{N }$-NE to \emph{(\ref{Nbound})}, provided that the distribution of $x_{s,\theta}^N$ at any given initial time $s\in[0,T]$ among the $N$ players is permutation invariant. Here $\epsilon_{N } = O\biggl(\frac{1}{\sqrt{N}}\biggr)$; \item[b).] The optimal control to \emph{(\ref{MFGbounded1})} is an $(\epsilon_N + \epsilon_\theta)$-NE to \emph{(\ref{Nsingular})}, provided that the distribution of $x_{s}^N$ at any given initial time $s\in[0,T]$ among the $N$ players is permutation invariant. Here $\epsilon_{N } = O\biggl(\frac{1}{\sqrt{N}}\biggr)$, and $\epsilon_\theta \rightarrow 0$ as $\theta \rightarrow \infty $. \end{itemize} \end{mainthm}
\section{Derivation of the Main Theorem}
The relationship between the stochastic games (\ref{Nsingular}), (\ref{Nbound}), and (\ref{MFGbounded1}) is built in three steps.
The first step concerns the analysis of the associated stochastic control problem for (\ref{MFGbounded1}).
\subsection{Control problems}
To start, we introduce the underlying stochastic control problems.
{\bf Control problem of a bounded velocity.} Let $\{\mu_t\} \in \mathcal{M}_{[0,T]}$ be a fixed exogenous flow of probability measures, and consider the following control problem, \begin{align} \label{Control}\tag{Control-BD} \begin{split}
v_{\theta} (s,x |\{\mu_t\}) & \triangleq \inf_{(\xi_\cdot^{ +}, \xi_\cdot^{ -}) \in \mathcal{U}_{\theta} } J_{\theta} (s,x , \xi_\cdot^{ +}, \xi_\cdot^{ -}| \{\mu_t\}) \\& = \inf_{ (\xi_\cdot^{+}, \xi_\cdot^{-})\in \mathcal{U}_{\theta} } \mathbb{E} \left[ \int_s^T \left( f(x_t, \mu_t)+ \gamma_1 \dot{\xi}_t^{+}+\gamma_2 \dot{\xi}_t^{-} \right) dt \right], \end{split} \end{align} subject to $dx_t=\bigl(b(x_t, \mu_t)+\dot{\xi}_t^{+}-\dot{\xi}_t^{-}\bigr)dt+\sigma dW_t$, $x_s=x$.
If controls are of finite variation, that is, $\theta=\infty$, then we have the following control problem.
{\bf Control problem of finite variation.} \begin{align} \label{Control-FV} \tag{Control-FV}
v(s, x |\{\mu_t\} ) & \triangleq \inf_{ (\xi_\cdot^{+}, \xi_\cdot^{-}) \in \mathcal{U}} \mathbb{E} \left[ \int_s^T f(x_t, \mu_t) dt + \gamma_1 d{\xi}_t^{+}+\gamma_2 d{\xi}_t^{-} \right],
\end{align} subject to \begin{equation*}
dx_t = b(x_t,\mu_t) dt + \sigma dW_t+ d\xi_t^{+}- d\xi_t^{-} , \quad x_{s-} = x. \end{equation*}
Note that
problem (\ref{Control})
is a classical stochastic control problem. The associated HJB equation with the terminal condition is given by \begin{align} \begin{split} - \partial_t v_{\theta} &= \inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]} \left\lbrace \left( b(x,\mu ) + (\dot{\xi}^+-\dot{\xi}^-) \right) \partial_x v_{\theta} + \left(f(x ,\mu ) +\gamma_1\dot{\xi} ^+ + \gamma_2 \dot{\xi}^- \right) \right\rbrace + \frac{\sigma^2}{2}\partial_{xx} v_{\theta} \\&=\min \biggl\lbrace ( \partial_x v_{\theta}+ \gamma_1)\theta,(- \partial_x v_{\theta} + \gamma_2)\theta, 0 \biggl\rbrace +b(x,\mu ) \partial_x v_{\theta} +f(x ,\mu )+ \frac{\sigma^2}{2}\partial_{xx} v_{\theta}.
\\ &\text{with } v_{\theta} (T, x|\{\mu_t\})=0, \quad \forall x \in \mathbb{R} . \end{split} \label{HJBHJBHJB} \end{align}
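The second expression in (\ref{HJBHJBHJB}) follows from a pointwise minimization, which we spell out for completeness. For fixed $p = \partial_x v_{\theta}$, the map $(\dot{\xi}^+,\dot{\xi}^-) \mapsto (\dot{\xi}^+ -\dot{\xi}^-)p + \gamma_1 \dot{\xi}^+ + \gamma_2 \dot{\xi}^-$ is linear, so its infimum over the box $[0,\theta]^2$ is attained at a vertex: \begin{align*} \inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]} \bigl\{ (\dot{\xi}^+ -\dot{\xi}^-)p + \gamma_1 \dot{\xi}^+ + \gamma_2 \dot{\xi}^- \bigr\} &= \min\{(p+\gamma_1)\theta, 0\} + \min\{(\gamma_2 - p)\theta, 0\} \\ &= \min \bigl\{ (p+\gamma_1)\theta, (\gamma_2-p)\theta, 0 \bigr\}, \end{align*} where the last equality uses (A4): since $-\gamma_1 < \gamma_2$, the inequalities $p < -\gamma_1$ and $p > \gamma_2$ cannot hold simultaneously, so at most one of the two minima is negative.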
\begin{proposition}\label{optimization} Assume \emph{(A1)--(A4)}. The HJB Eqn. \emph{(\ref{HJBHJBHJB})} has a unique solution $v_{\theta}$ in {$C^{1,2}( [0,T) \times \mathbb{R})\bigcap C( [0,T] \times \mathbb{R})$} with polynomial growth. Furthermore, this solution is the value function to problem \emph{(\ref{Control})}, and the corresponding optimal control function is \begin{align}\label{optcontrols}
\varphi_\theta (t,x_t|\{\mu_t\})=\dot{\xi}^+_{t,\theta}-\dot{\xi}^-_{t,\theta} = \left\{ \begin{array}{c l}
\theta & \text{if} \quad \partial_x v_{\theta} (t,x_t|\{\mu_t\}) \leq -\gamma_1,
\\ 0 & \text{if} \quad -\gamma_1 < \partial_x v_{\theta} (t,x_t|\{\mu_t\}) < \gamma_2,
\\ -\theta & \text{if} \quad \gamma_2 \leq \partial_x v_{\theta} (t, x_t|\{\mu_t\}). \end{array}\right. \end{align}
Moreover, the optimal control function $\varphi_\theta (t,x|\{\mu_t\})$ is unique and so is the optimally controlled state process $x_{t,\theta}$
with $$ dx_{t,\theta} = \biggl( b(x_{t,\theta},\mu_t) + \varphi_\theta (t,x_{t,\theta}|\{\mu_t\}) \biggr) dt + \sigma dW_t , \quad x_{s,\theta} = x.$$ \end{proposition} \begin{proof} By~\cite[Theorem 6.2, Chapter VI]{FR2012}, the HJB Eqn. (\ref{HJBHJBHJB}) has a unique solution $v_{\theta}$ in {$C^{1,2}( [0,T) \times \mathbb{R})\bigcap C( [0,T] \times \mathbb{R})$} with polynomial growth. A standard verification argument shows that this solution is the value function to problem (\ref{Control}). Moreover, the optimal control function is
$$\varphi_\theta (t,x_t|\{\mu_t\})= \left\{ \begin{array}{c l}
\theta & \text{if} \quad \partial_x v_{\theta} (t,x_t|\{\mu_t\}) \leq -\gamma_1,
\\ 0 & \text{if} \quad -\gamma_1 < \partial_x v_{\theta} (t,x_t|\{\mu_t\}) < \gamma_2,
\\ -\theta & \text{if} \quad \gamma_2 \leq \partial_x v_{\theta} (t,x_t|\{\mu_t\}). \end{array}\right. $$
Thus the value function $v_{\theta} (t,x|\{\mu_t\}) $ to problem (\ref{Control}) is unique and, by (\ref{optcontrols}), so is the optimal control function $\varphi_\theta (t,x|\{\mu_t\})$. It remains to prove that the optimally controlled state process $x_{t,\theta}$ exists and is unique.
For any given fixed $x_{t,\theta}^n$, consider a mapping $\Phi$ such that $\Phi(x_{t,\theta}^n) = x_{t,\theta}^{n+1}$ where $x_{t,\theta}^{n+1}$ is a solution to the following SDE: \begin{align} \label{mapeqn}
dx_{t,\theta}^{n+1} & = \biggl( b(x_{t,\theta}^n,\mu_t) + \varphi_\theta (t,x_{t,\theta}^{n+1}|\{\mu_t\}) \biggl) dt + \sigma dW_t , \quad x_{s,\theta}^{n+1} = x. \end{align} By~\cite{Z1974}, for any given $x_{t,\theta}^n$, the SDE (\ref{mapeqn}) has a unique solution $x_{t,\theta}^{n+1}$, so the mapping $\Phi$ is well defined. Then, for any $n\in \mathbb{N}$, \begin{align*}
d(x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}) = \biggl( b(x_{t,\theta}^n,\mu_t)-b(x_{t,\theta}^{n+1} ,\mu_t) + \varphi_\theta (t,x_{t,\theta}^{n+1 }|\{\mu_t\}) - \varphi_\theta (t,x_{t,\theta}^{n+2} |\{\mu_t\}) \biggl) dt. \end{align*}
Because $\varphi_\theta (t,x|\{\mu_t\}) $ is nonincreasing in $x$, \begin{align*} &d(x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2})^2
\\& =2(x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2})\biggl( b(x_{t,\theta}^n,\mu_t)-b(x_{t,\theta}^{n+1} ,\mu_t) + \varphi_\theta (t,x_{t,\theta}^{n+1}|\{\mu_t\}) - \varphi_\theta (t,x_{t,\theta}^{n+2} |\{\mu_t\}) \biggl) dt
\\& \leq 2 Lip(b) |x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}| | x_{t,\theta}^{n}-x_{t,\theta}^{n+1}| dt
\\& \leq Lip(b) \biggl( |x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}|^2+ | x_{t,\theta}^{n}-x_{t,\theta}^{n+1}|^2 \biggl) dt. \end{align*} By Gronwall's inequality, for any $t\in [0,T]$, \begin{align*}
|x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}|^2
\le Lip(b)\exp\biggl( Lip(b)t\biggr) \int_0^t | x_{s,\theta}^{n}-x_{s,\theta}^{n+1}|^2 ds. \end{align*} Hence, for any $n\in\mathbb{N}$, \begin{align*}
|x_{t,\theta}^{n+1}-x_{t,\theta}^{n+2}|^2
\le \frac{\biggl(Lip(b)t\biggr)^n \exp\biggl( nLip(b)t\biggr)}{n!} \sup_{s\le t}| x_{s,\theta}^{1}-x_{s,\theta}^{2}|^2 . \end{align*} Since the right-hand side vanishes as $n \rightarrow \infty$, $\Phi^n$ is a contraction for $n$ large enough, so $\Phi$ has a unique fixed point, which is the unique solution of the SDE (\ref{mapeqn}). Therefore, there exists a unique optimally controlled state process $x_{t,\theta}$ for problem (\ref{Control}). Furthermore, the optimal Markovian control $(\xi_{\cdot,\theta}^+,\xi_{\cdot,\theta}^- )$ to (\ref{Control}) also exists uniquely.
\end{proof}
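\begin{remark} The factorial bound in the proof above can be verified by induction. Setting $a_n(t) = \sup_{s\le t} |x_{s,\theta}^{n}-x_{s,\theta}^{n+1}|^2$ and $C = Lip(b) \exp( Lip(b) T)$, the Gronwall estimate reads $a_{n+1}(t) \le C \int_0^t a_n(s)\, ds$, whence, iterating, \begin{align*} a_{n+1}(t) \le C^n \int_0^t \int_0^{s_1} \cdots \int_0^{s_{n-1}} a_1(s_n)\, ds_n \cdots ds_1 \le \frac{(Ct)^n}{n!}\, a_1(T) \rightarrow 0 \quad \text{as } n \rightarrow \infty . \end{align*} In particular, $\sum_n \sup_{t \le T} a_n(t)^{1/2} < \infty$, so $(x_{\cdot,\theta}^n)_{n}$ is a Cauchy sequence in the uniform norm, and its limit is the unique fixed point of $\Phi$. \end{remark}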
Next, we establish the regularity of the value function to problem (\ref{Control}). \begin{proposition}\label{strictconvex}
Assume \emph{(A1)--(A4)}. For any fixed $t \in [0,T]$, the value function $v_{\theta} (t, x|\{\mu_t\}) $ for problem (\ref{Control}) is strictly convex in $x$. \end{proposition} \begin{proof} Fix any $x_1,x_2\in \mathbb{R}$ and any $\lambda \in [0,1]$. For any $ (\xi_\cdot^{1,+}, \xi_\cdot^{1,-}) \in \mathcal{U}_{\theta} $ and $ (\xi_\cdot^{2,+}, \xi_\cdot^{2,-}) \in \mathcal{U}_{\theta} $, by the convexity of $f$, \begin{align*}
& \lambda J_{\theta} (s,x_1 , \xi_\cdot^{1,+}, \xi_\cdot^{1,-}|\{\mu_t\}) + (1-\lambda) {J_{\theta}} (s,x_2 , \xi_\cdot^{2,+}, \xi_\cdot^{2,-}| \{\mu_t\})
\\ \geq & J_{\theta} (s,\lambda x_1 + (1-\lambda) x_2 ,\lambda \xi_\cdot^{1,+} + (1-\lambda) \xi_\cdot^{2,+},\lambda\xi_\cdot^{1,-} + (1-\lambda)\xi_\cdot^{2,-}| \{\mu_t\})
\\\geq & v_{\theta} (s, \lambda x_1 + (1-\lambda) x_2| \{\mu_t\}). \end{align*} Since this holds for any $ (\xi_\cdot^{1,+}, \xi_\cdot^{1,-}) \in \mathcal{U}_{\theta} $ and $ (\xi_\cdot^{2,+}, \xi_\cdot^{2,-}) \in \mathcal{U}_{\theta} $, taking the infimum over $ (\xi_\cdot^{1,+}, \xi_\cdot^{1,-}) $ gives
\begin{align*}
\lambda v_{\theta} (s,x_1|\{\mu_t\}) + (1-\lambda) J_{\theta} (s,x_2 , \xi_\cdot^{2,+}, \xi_\cdot^{2,-}|\{\mu_t\})
\geq v_{\theta} (s, \lambda x_1 + (1-\lambda) x_2|\{\mu_t\}), \end{align*}
and then, taking the infimum over $ (\xi_\cdot^{2,+}, \xi_\cdot^{2,-}) $,
\begin{align*}
\lambda v_{\theta} (s,x_1|\{\mu_t\}) + (1-\lambda) v_{\theta} (s,x_2| \{\mu_t\})
\geq v_{\theta} (s, \lambda x_1 + (1-\lambda) x_2|\{\mu_t\}). \end{align*}
Hence, $v_{\theta} (s, x|\{\mu_t\})$ is convex in $x$. By Proposition \ref{optimization}, $v_{\theta} (s, x| \{\mu_t\})$ is a $\mathcal{C}^{1,2} ([0,T]\times \mathbb{R})$ solution to the equation \begin{align*} - \partial_t v_{\theta} =\min \biggl\lbrace ( \partial_x v_{\theta} + \gamma_1)\theta,(- \partial_x v_{\theta} + \gamma_2)\theta, 0 \biggl\rbrace +b(x,\mu ) \partial_x v_{\theta} +f(x ,\mu )+ \frac{\sigma^2}{2}\partial_{xx} v_{\theta}. \end{align*}
Since $f(x,\mu)$ is nonlinear in $x$, the solution to this equation is nonlinear in $x$ as well; together with the convexity established above, this shows that $v_{\theta} (s, x|\{\mu_t\})$ is strictly convex in $x$. \end{proof}
With this convexity, we have \begin{theorem} \label{thetainfty}
Assume \emph{(A1)--(A4)}. Then for any $(s,x) \in [0,T]\times \mathbb{R}$, as $\theta\rightarrow \infty$, the value function $v_{\theta} (s,x|\{ \mu_t\}) $ of \emph{(\ref{Control})} converges to the value function $v(s,x|\{ \mu_t\}) $ of \emph{(\ref{Control-FV})}. Moreover, there exists an optimal control of a feedback form for \emph{(\ref{Control-FV})}. \end{theorem} \begin{proof}Fix $\{\mu_t\} \in \mathcal{M}_{[0,T]}$.
For any $(\zeta_{\cdot}^+,\zeta_{\cdot}^-) \in \mathcal{U}$, since each path of a process of finite variation is differentiable almost everywhere, it can be approximated by controls of bounded velocity as $\theta \rightarrow \infty$. Hence, there exists a sequence $\{(\zeta_{\cdot,\theta}^+ ,\zeta_{\cdot,\theta}^-)\}_{\theta \in [0,\infty)}$ such that $(\zeta_{\cdot,\theta}^+,\zeta_{\cdot,\theta}^- ) \in \mathcal{U}_{\theta}$ and $\mathbb{E} \int_0^T |\dot{\zeta}_{t,\theta}^+ dt - d\zeta_{t}^+ | \rightarrow 0$, $\mathbb{E} \int_0^T |\dot{\zeta}_{t,\theta}^- dt - d\zeta_{t}^- | \rightarrow 0$ as $\theta \rightarrow \infty$.
Define \begin{align}\label{epsilontheta}
\epsilon_\theta = O\biggl( \mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^- dt-d\zeta_{t}^{-}| \biggr), \end{align} so that $\epsilon_\theta \rightarrow 0$ as $\theta \rightarrow \infty$.
Denote \begin{align*} d\hat{x}_{t,\theta} & = (b(\hat{x}_{t,\theta}, \mu_t ) +\dot{\zeta}_{t,\theta}^+ - \dot{\zeta}_{t,\theta}^-) dt + \sigma dW_t, \quad \hat{x}_{s,\theta}= x, \text{ and } \\ d\hat{x}_t & = b(\hat{x}_t,\mu_t) dt+ \sigma dW_t + d\zeta_{t}^{+}- d\zeta_{t}^{-} , \quad \quad \hat{x}_{s-} = x. \end{align*} Then, for any $\tau \in [s,T]$, \begin{align*}
|\hat{x}_{\tau,\theta} - \hat{x}_\tau| &\le \int_s^\tau | b(\hat{x}_{t,\theta},\mu_t) - b(\hat{x}_t,\mu_t)| dt + \int_s^\tau |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \int_s^\tau |\dot{\zeta}_{t,\theta}^- dt - d\zeta_{t}^{-}|
\\ & \le \int_s^\tau Lip(b)|\hat{x}_{t,\theta} - \hat{x}_t| dt + \int_s^\tau |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \int_s^\tau |\dot{\zeta}_{t,\theta}^- dt - d\zeta_{t}^{-}| . \end{align*} By Gronwall's inequality, \begin{align*}
\mathbb{E} |\hat{x}_{\tau,\theta} - \hat{x}_\tau| \le O\left(\mathbb{E}\int_0^\tau |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \mathbb{E}\int_0^\tau |\dot{\zeta}_{t,\theta}^- dt - d\zeta_{t}^{-}| \right). \end{align*} Consequently, \begin{align*}
& \biggl|J (s,x,\zeta_{t}^+,\zeta_{t}^- |\{\mu_t\} ) - J_{\theta} (s,x,\zeta_{t,\theta}^+,\zeta_{t,\theta}^-|\{\mu_t\} )\biggr|
\\ \le & \ \mathbb{E}\biggl[ \biggl| \int_s^T f(\hat{x}_t, \mu_t) - f(\hat{x}_{t,\theta},\mu_t) +\gamma_1 d\zeta_{t}^+ + \gamma_2 d\zeta_{t}^- - \gamma_1 \dot{\zeta}_{t,\theta}^+dt - \gamma_2 \dot{\zeta}_{t,\theta}^-dt \biggl| \biggl]
\\ \le & \ \mathbb{E}\biggl[ \int_s^T Lip(f)|\hat{x}_t -\hat{x}_{t,\theta}| + \gamma_1 |d\zeta_{t}^+ -\dot{\zeta}_{t,\theta}^+ dt| + \gamma_2 |d\zeta_{t}^- - \dot{\zeta}_{t,\theta}^-dt | \biggl]
\\\le & \ O\biggl(\mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^+dt - d\zeta_{t}^{+} |+ \mathbb{E}\int_0^T |\dot{\zeta}_{t,\theta}^- dt-d\zeta_{t}^{-}| \biggl). \end{align*} Therefore,
$\biggl |v (s,x|\{ \mu_t\}) - v_{\theta} (s,x|\{ \mu_t\})\biggr| \rightarrow 0 \text{ as } \theta \rightarrow \infty$.
Now a similar argument as in \cite[Corollary 4.11]{MT1989} shows the existence of a feedback control for \emph{(\ref{Control-FV})}. \end{proof}
\subsection{Game (MFG-BD) }\label{proof} Our next step is to analyze the game (MFG-BD). In particular, we show that \begin{theorem} Assume \emph{(A1)--(A6)}. Then there exists a unique solution $((\xi_\cdot^{*+},\xi_\cdot^{*-}),\{\mu_t^*\} )$ of \emph{(\ref{MFGbounded1})}. Moreover, the corresponding value function $v_{\theta}(s,x)$ for \emph{(\ref{MFGbounded1})} is in {$C^{1,2}( [0,T) \times \mathbb{R})\bigcap C( [0,T] \times \mathbb{R})$} with polynomial growth. \label{mainthm} \end{theorem}
The proof of the existence of the MFG solution proceeds as follows.
First, from Proposition \ref{optimization} we see that for any given fixed $\{\mu_t\}$ there exists a unique optimal control function $\varphi_\theta(t,x |\{\mu_t\} ) $. Now, one can define a mapping $\Gamma_1 $ from
$\mathcal{M}_{[0,T]}$ to a class of pairs of the optimal control function $\varphi_{\theta}$ and the fixed flow of probability measures $\{\mu_t\}$ such that $$\Gamma_1 (\{\mu_t\}) = \biggl( \varphi_\theta(t,x|\{\mu_t\}) , \{\mu_t\}\biggl).$$ Moreover, by Proposition \ref{optimization} the optimally controlled process $x_{t,\theta} $ under the fixed $\{\mu_t\}$ exists uniquely with \begin{align*}
d x_{t,\theta} = \biggl( b(x_{t,\theta},\mu_t) + \varphi_\theta(t,x_{t,\theta}|\{\mu_t\}) \biggl) dt + \sigma dW_t, \quad \quad x_{s,\theta} = x. \end{align*}
Consequently, we can define $\Gamma_2 $ so that $$\Gamma_2 \biggl( \varphi_\theta(t,x|\{\mu_t\}), \{\mu_t\}\biggl) = \{ \tilde{\mu}_t \} ,$$ where $ \tilde{\mu}_t $ is the probability measure of $x_{t,\theta}$ for each $t\in [0,T]$.
Now, define a mapping $\Gamma$ as $$\Gamma(\{ \mu_t\})= \Gamma_2 \circ \Gamma_1 (\{\mu_t\}) = \{ \tilde{\mu}_t\}.$$ We will use the Schauder fixed point theorem~\cite[Theorem 4.1.1]{Smart1980} to show the existence of a fixed point.
The key is to prove that $\Gamma$ is a continuous mapping of $\mathcal{M}_{[0,T]} $ into $ \mathcal{M}_{[0,T]}$, and the range of $\Gamma$ is relatively compact \cite{B2013}.
\begin{proposition}\label{MM}
Assume \emph{(A1)--(A4)}. $\Gamma$ is a mapping from $\mathcal{M}_{[0,T]}$ to $\mathcal{M}_{[0,T]}$. \end{proposition} \begin{proof}
For any $\{\mu_t\}$ in $\mathcal{M}_{[0,T]}$, let us prove that $\{\tilde{\mu}_t\} = \Gamma (\{\mu_t\})$ is also in $\mathcal{M}_{[0,T]}$. Without loss of generality, suppose $s > t$, and $$x_{s} = x_t + \int_t^{s} \biggl(b(x_r,\mu_r)+ \varphi_\theta(r,x_r|\{\mu_t\} )\biggl) dr + \int_t^{s}\sigma dW_r.$$
Since $b(x, \mu ) $ is bounded and $|\varphi_\theta(s,x_s|\{\mu_t\} )| \leq \theta$, we have $\mathbb{E}\biggl| b(x_r,\mu_r)+ \varphi_\theta(r,x_r|\{\mu_t\} )\biggr| \le M $ for some constant $M$ and for any $r \in [0,T]$, \begin{align*}
D^1(\tilde{\mu}_s,\tilde{\mu}_t ) &\leq \mathbb{E} | x_s -x_t |
\\ & \leq \mathbb{E} \int_t^s \biggl|b(x_r,\mu_r)+ \varphi_\theta(r,x_r |\{\mu_t\})\biggr| dr + \sigma \mathbb{E} \sup_{r \in [t,s]} |W_r-W_t |
\\ & \leq M|s-t| + \sigma \mathbb{E} \sup_{r \in [t,s]} |W_r-W_t | \leq M|s-t| + 2\sqrt{\tfrac{2}{\pi}}\,\sigma |s-t|^{\frac{1}{2}}. \end{align*}
Therefore,
$ \sup_{s\neq t}\frac{ D^1(\tilde{\mu}_t,\tilde{\mu}_s) }{|t-s|^{\frac{1}{2}}} \leq c$.
For any $t \in [0,T]$, since $|b(x,\mu)| \le c_2$ and $|\varphi_\theta| \le \theta$, \begin{align*}
\int_\mathbb{R} |x|^2 \tilde{\mu}_t(dx) \leq 3 \biggl[ \int_\mathbb{R} |x|^2 \tilde{\mu}_0 (dx) + (c_2+\theta)^2 t^2 + \sigma^2 t \biggr] \leq 3 \biggl[ \int_\mathbb{R} |x|^2 \tilde{\mu}_0 (dx) + (c_2+\theta)^2 T^2 + \sigma^2 T \biggr], \end{align*}
and $\sup_{t \in [0,T]} \limits \int_\mathbb{R} |x|^2 \tilde{\mu}_t(dx) \leq c$.
\end{proof}
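\begin{remark} The bound on the expected running maximum of the Brownian increment used in the proof above is standard; we record it for completeness. By the reflection principle, $\sup_{r \in [t,s]} (W_r - W_t)$ has the same law as $|W_s - W_t|$, and since $\sup_{r \in [t,s]} |W_r - W_t| \le \sup_{r \in [t,s]} (W_r - W_t) + \sup_{r \in [t,s]} \bigl(-(W_r - W_t)\bigr)$, \begin{align*} \mathbb{E} \sup_{r \in [t,s]} |W_r - W_t| \le 2\, \mathbb{E} |W_s - W_t| = 2 \sqrt{\frac{2(s-t)}{\pi}} , \end{align*} which is of order $|s-t|^{1/2}$, as claimed. \end{remark}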
\begin{proposition} \label{continuous} Assume \emph{(A1)--(A6)}. $\Gamma : \mathcal{M}_{[0,T]} \rightarrow \mathcal{M}_{[0,T]}$ is continuous. \end{proposition} \begin{proof} Let $ \{ \mu_t^n \} \in \mathcal{M}_{[0,T]}$, $n \in \mathbb{N}$, be a sequence of flows of probability measures such that $d_\mathcal{M}(\{\mu_t^n\},\{\mu_t\}) \rightarrow 0 $ as $n \rightarrow \infty$, for some $\{\mu_t\} \in \mathcal{M}_{[0,T]}$. Fix $\tau \in [0,T)$. By Proposition \ref{optimization},
for each $\{\mu_t^n\} $, problem (\ref{Control}) has a value function $v_{\theta}^n(s,x | \{\mu_t^n\})$ with the optimal control function $\varphi^n_\theta(t,x | \{\mu_t^n\})$.
Let $\{x_t^n\}$ be the corresponding optimally controlled process:
$$dx_t^n = \biggl(b(x_t^n ,\mu_t^n)+\varphi^n_\theta(t,x_t^n | \{ \mu_t^n \} )\biggl)dt + \sigma dW_t, \quad \tau \leq t \leq T, \quad x_\tau ^n =x.$$ Let $ \{\tilde{\mu}_t^n\}$ be a flow of probability measures of $\{x_t^n\}$, then $\Gamma (\{\mu_t^n\} ) = \{\tilde{\mu}_t^n\}$.
Similarly, for each $\{\mu_t \} $, problem (\ref{Control}) has a value function $v_{\theta} (s,x|\{\mu_t \})$ with the optimal control function $\varphi_\theta(t,x | \{ \mu_t \})$. Let $\{x_t\}$ be the corresponding optimally controlled process:
$$dx_t = \biggl(b(x_t ,\mu_t)+ \varphi_\theta(t,x_t | \{ \mu_t \})\biggl)dt + \sigma dW_t, \quad \tau \leq t \leq T, \quad x_\tau =x.$$ Let $ \{\tilde{\mu}_t\}$ be a flow of probability measures of $\{x_t\}$, then $\Gamma (\{\mu_t\} ) = \{\tilde{\mu}_t\}$.
To show that $\Gamma$ is continuous,
we need to show $$d_{\mathcal{M}} \biggl(\{\tilde{\mu}_t^n \}, \{\tilde{\mu}_t\}\biggl) \rightarrow 0 \text{ as } n\rightarrow \infty.$$
This is established in four steps.
Step 1. We first establish a relation between $D^2 (\{\tilde{\mu}_t^n \}, \{\tilde{\mu}_t\})$ and $D^2(\{\mu_t^n \}, \{\mu_t\})$. Note that $ D^1 (\tilde{\mu}_t,\tilde{\mu}_t^n) \le D^2(\tilde{\mu}_t,\tilde{\mu}_t^n) $.
For any $s \in [\tau ,T]$, \begin{align*}
d(x_s -x_s^n)= \biggl(b(x_s,\mu_s)-b(x_s^n,\mu_s^n) + \varphi_\theta(s, x_s | \{ \mu_t \})- \varphi^n_\theta(s,x^n_s | \{ \mu_t^n \})\biggl) ds. \end{align*} Then, for any $t \in [\tau ,T]$, \begin{align*}
|x_t-x^n_t|^2 & = 2 \int_\tau^t \biggl(b(x_s ,\mu_s )-b(x^n_s, \mu_s^n)+ \varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta (s,x_s^n | \{ \mu^n_t \}) \biggl)(x_s-x_s^n)ds
\\& \leq 2\int_\tau^t Lip(b) \biggl(|x_s-x_s^n|+D^1(\mu_s,\mu_s^n)\biggl)|x_s-x_s^n|
\\& \quad +\biggl( \varphi_\theta(s,x_s | \{ \mu_t \})- \varphi^n_\theta (s,x_s^n | \{ \mu^n_t \})\biggr)(x_s -x_s^n)ds. \end{align*} Note that, by Young's inequality, \begin{align*}
& Lip(b)\biggl(|x_s-x_s^n|+D^1(\mu_s,\mu_s^n)\biggl)|x_s-x_s^n|
\leq Lip(b)|x_s - x_s^n|^2 + \frac{Lip(b)}{2}\biggl((D^1(\mu_s,\mu_s^n))^2+ |x_s-x_s^n|^2\biggr). \end{align*} By Assumption (A6), \begin{align*}
& ( \varphi_\theta(s,x_s | \{ \mu_t \})- \varphi^n_\theta (s,x_s^n | \{ \mu^n_t \}))(x_s -x_s^n)
\\ \leq & \ \biggl( \varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )+ \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )- \varphi^n_\theta (s,x_s^n | \{ \mu^n_t \})\biggl)(x_s -x_s^n)
\\ \leq & \ ( \varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} ) )(x_s -x_s^n)
\\ \leq & \ \frac{1}{2}\biggl(|\varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )|^2 +|x_s -x_s^n|^2\biggl). \end{align*}
Consequently, \begin{align*}
|x_t-x_t^n|^2 \leq & \int_\tau^t ( 3Lip(b )+1) |x_s-x_s^n|^2 + Lip(b) (D^1(\mu_s,\mu_s^n))^2 + |\varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )|^2 ds . \end{align*}
By Gronwall's inequality, \begin{align}\label{ineqcontinuity1}
(D^2(\tilde{\mu}_t,\tilde{\mu}_t^n))^2 &\leq c_2 \int_\tau^t Lip(b ) (D^1(\mu_s,\mu_s^n))^2 + \mathbb{E}\biggl[|\varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )|^2 \biggl] ds , \end{align} for some constant $c_2$ depending on $T$ and $Lip(b )$.
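To obtain (\ref{ineqcontinuity1}) from the pathwise estimate above, take expectations and note that $(x_t, x_t^n)$ is a coupling of $\tilde{\mu}_t$ and $\tilde{\mu}_t^n$, so that $(D^2(\tilde{\mu}_t,\tilde{\mu}_t^n))^2 \le \mathbb{E} |x_t - x_t^n|^2$; Gronwall's inequality then gives \begin{align*} (D^2(\tilde{\mu}_t,\tilde{\mu}_t^n))^2 \le \mathbb{E}|x_t-x_t^n|^2 \le e^{(3Lip(b)+1)T} \int_\tau^t \Bigl( Lip(b) (D^1(\mu_s,\mu_s^n))^2 + \mathbb{E}\bigl[|\varphi_\theta(s,x_s | \{ \mu_t \} )- \varphi^n_\theta(s,x_s | \{ \mu^n_t \} )|^2 \bigr] \Bigr) ds , \end{align*} so $c_2 = e^{(3Lip(b)+1)T}$ is one admissible choice of the constant.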
Step 2. Now we prove that for any $(t,x)\in [\tau,T]\times \mathbb{R} $, $$\partial_x v_{\theta}^n(t,x|\{\mu_t^n\})\rightarrow \partial_x v_{\theta}(t,x|\{\mu_t \})\text{ as } n\rightarrow \infty.$$ By Proposition \ref{optimization}, $v_{\theta}$ and $v_{\theta}^n$ are the solutions to the HJB Eqn. (\ref{HJBHJBHJB}) with $\{\mu_t\}$ and $\{\mu_t^n\}$, respectively.
For notation simplicity, let us denote
$$\varphi_{1, \theta}(s,x| \{ \mu_t \}) = \max \{\varphi_\theta(s,x| \{ \mu_t \}),0\}, \ \
\varphi_{2,\theta}(s,x| \{ \mu_t \}) =- \min \{\varphi_\theta(s,x| \{ \mu_t \}),0\},$$
$$\varphi^n_{1, \theta}(s,x | \{ \mu^n_t \}) = \max \{\varphi^n_\theta(s,x | \{ \mu^n_t \}),0\}, \ \ \varphi^n_{2, \theta}(s,x | \{ \mu^n_t \}) =- \min \{\varphi^n_\theta(s,x | \{ \mu^n_t \}),0\}.$$
Since $\varphi_{1, \theta}(\cdot,\cdot| \{ \mu_t \}),\varphi_{2, \theta}(\cdot,\cdot| \{ \mu_t \})$ are optimal controls, using It\^o's formula and the HJB Eqn. (\ref{HJBHJBHJB}), we obtain
\begin{align}\label{eqeqeq} \begin{split}
& -v_{\theta}(\tau,x|\{\mu_t\} ) \\&= v_{\theta}(T,x_T|\{\mu_t \}) - v_{\theta}(\tau,x|\{\mu_t\} )
\\& = - \int_\tau^T \biggl( f(x_s,\mu_s) + \gamma_1 \varphi_{1, \theta} (s,x_s| \{ \mu_t \})+ \gamma_2 \varphi_{2, \theta} (s,x_s| \{ \mu_t \})\biggl) ds + \int_\tau^T \sigma \partial_x v_{\theta} (s,x_s|\{\mu_t \}) dW_s. \end{split} \end{align}
Similarly, for any $n \in \mathbb{N}$, applying It\^o's formula to $v_{\theta}^n(s,x)$ and $\{x_t\}$ yields \begin{align*}
&v_{\theta}^n(T,x_T|\{\mu_t^n\}) - v_{\theta}^n(\tau,x|\{\mu_t^n\})
\\ = &\int_\tau^T \partial_t v^n_{\theta} (s,x_s|\{\mu_t^n\}) + ( b(x_s,\mu_s) + \varphi_\theta(s,x_s| \{ \mu_t \}) ) \partial_x v_{\theta}^n(s,x_s|\{\mu_t^n\}) + \frac{\sigma^2}{2} \partial_{xx} v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds
\\& + \int_\tau^T\sigma \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) dW_s
\\ =& \int_\tau^T \partial_t v^n_{\theta} (s,x_s|\{\mu_t^n\}) + ( b(x_s,\mu_s^n) + \varphi^n_\theta(s,x_s | \{ \mu^n_t \} ) ) \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) + \frac{\sigma^2}{2} \partial_{xx} v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds
\\& + \int_\tau^T\sigma \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) dW_s
\\& - \int_\tau^T(b(x_s,\mu_s^n) -b(x_s,\mu_s) +\varphi^n_\theta(s,x_s | \{ \mu^n_t \}) -\varphi_\theta (s,x_s| \{ \mu_t \})) \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds
\\ =& - \int_\tau^T \biggl( f(x_s,\mu_s^n) + \gamma_1 \varphi_{1, \theta}^n (s,x_s | \{ \mu^n_t \})+ \gamma_2 \varphi_{2, \theta}^n (s,x_s | \{ \mu^n_t \})\biggl) ds + \int_\tau^T\sigma \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) dW_s
\\& - \int_\tau^T(b(x_s,\mu_s^n) -b(x_s,\mu_s) + \varphi^n_\theta(s,x_s | \{ \mu^n_t \}) -\varphi_\theta (s,x_s| \{ \mu_t \})) \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds. \end{align*} The last equality is due to the HJB Eqn. (\ref{HJBHJBHJB}). Hence, \begin{align}\label{eqnn} \begin{split}
v_{\theta}^n(\tau,x|\{\mu_t^n\} ) & = \int_\tau^T \biggl( f(x_s ,\mu_s^n ) + \gamma_1 \varphi_{1, \theta}^n (s,x_s | \{ \mu^n_t \} )+ \gamma_2 \varphi_{2, \theta}^n (s,x_s | \{ \mu^n_t \} ) \biggr) ds - \int_\tau^T \sigma \partial_x v^n_{\theta} (s,x_s|\{\mu_t^n\} ) dW_s \\& + \int_\tau^T \biggl(b(x_s,\mu_s^n) -b(x_s,\mu_s) + \varphi^n_\theta(s,x_s | \{ \mu^n_t \}) -\varphi_\theta (s,x_s| \{ \mu_t \})\biggr) \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) ds. \end{split} \end{align} Denote $
H(s,x ) = \inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]} \{ ( \dot{\xi}^+-\dot{\xi}^-)\partial_x v_{\theta}(s,x|\{\mu_t\}) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^-\}, $ and\\ $H^n(s,x ) = \inf_{\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]} \{ ( \dot{\xi}^+-\dot{\xi}^-)\partial_x v^n_{\theta} (s,x|\{\mu_t^n\}) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^-\}. $ Then for any $\dot{\xi}^+,\dot{\xi}^- \in [0,\theta]$, \begin{align*}
&\left| \left(( \dot{\xi}^+-\dot{\xi}^-)\partial_x v_{\theta}(s,x|\{\mu_t \} ) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^- \right)- \left( (\dot{\xi}^+-\dot{\xi}^-)\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) + \gamma_1 \dot{\xi}^++ \gamma_2 \dot{\xi}^-\right) \right|
\\ &\leq \biggl| \dot{\xi}^+ \biggl( \partial_x v_{\theta}(s,x|\{\mu_t \})-{\partial_x v^n_{\theta}(s,x|\{\mu_t^n\})\biggl)} - \dot{\xi}^-\biggl( \partial_x v_{\theta}(s,x|\{\mu_t \})-{\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl)} \biggl|
\\& \leq 2\theta \left| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta} (s,x|\{\mu_t^n\}) \right|. \end{align*} Hence, for any $s,x \in [\tau,T]\times \mathbb{R}$,
$$ |H(s,x)-H^n(s,x)| \leq 2\theta \biggl| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\})\biggl|.$$ By definition, \begin{align*}
2\theta &\biggl| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl|
\\\geq & \biggl| \biggl( \varphi_{1, \theta}(t,x| \{ \mu_t \}) -\varphi_{2, \theta}(t,x| \{ \mu_t \}) \biggl)\partial_x v_{\theta}(s,x|\{\mu_t \}) + \gamma_1 \varphi_{1, \theta}(t,x| \{ \mu_t \})+ \gamma_2 \varphi_{2, \theta}(t,x| \{ \mu_t \})
\\& - \biggl( \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \}) -\varphi_{2, \theta}^n(t,x | \{ \mu^n_t \}) \biggl)\partial_x v^n_{\theta}(s,x|\{\mu_t^n\} ) + \gamma_1 \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \})+\gamma_2 \varphi_{2, \theta}^n(t,x | \{ \mu^n_t \}) \biggr|
\\ = & \biggl| \biggl(\gamma_1+\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl) \biggl(\varphi_{1, \theta}(t,x| \{ \mu_t \})- \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \})\biggl) \\
+ & \biggl(\gamma_2-\partial_x v_{\theta}(s,x|\{\mu_t\}) \biggr) \biggl(\varphi_{2, \theta}(t,x| \{ \mu_t \})- \varphi_{2, \theta}^n(t,x | \{ \mu^n_t \})\biggr)
\\+& \biggl(\partial_x v_{\theta} (s,x|\{\mu_t \})-\partial_x v^n_{\theta} (s,x|\{\mu_t^n\})\biggl)\biggl( \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \}) -\varphi_{2, \theta}^n(t,x | \{ \mu^n_t \})\biggl ) \biggr|
\\\geq & \biggl| \biggl( \gamma_1+\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl) \biggl(\varphi_{1, \theta}(t,x| \{ \mu_t \})- \varphi_{1, \theta}^n(t,x | \{ \mu^n_t \})\biggl) \\
+ &\biggl( \gamma_2 -\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl) \biggl(\varphi_{2, \theta}(t,x| \{ \mu_t \})- \varphi_{2, \theta}^n(t,x | \{ \mu^n_t \})\biggl) \biggr|
- \theta \biggl| \partial_x v_{\theta} (s,x|\{\mu_t \})-\partial_x v^n_{\theta} (s,x |\{\mu_t^n\}) \biggr|. \end{align*} Hence, \begin{align} \label{eqeqeqeq} \begin{split}
3\theta \biggl| \partial_x v_{\theta}(s,x|\{\mu_t \})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl|
& \geq \biggl| \biggl( \gamma_1 +\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl) \biggl(\varphi_{1, \theta}(s,x| \{ \mu_t \})- \varphi_{1, \theta}^n(s,x | \{ \mu^n_t \})\biggl)
\\& + \biggl( \gamma_2 -\partial_x v_{\theta}(s,x|\{\mu_t \}) \biggl) \biggl(\varphi_{2, \theta}(s,x| \{ \mu_t \})- \varphi_{2, \theta}^n(s,x | \{ \mu^n_t \})\biggl) \biggr|. \end{split} \end{align} Similarly, \begin{align} \label{eqeqeqeqeq} \begin{split}
3\theta \biggl| \partial_x v_{\theta}(s,x|\{\mu_t\})-\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl|
& \geq \biggl| \biggl( \gamma_1 +\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl) \biggl(\varphi_{1, \theta}(s,x| \{ \mu_t \})- \varphi_{1, \theta}^n(s,x | \{ \mu^n_t \})\biggl)
\\& + \biggl( \gamma_2 -\partial_x v^n_{\theta}(s,x|\{\mu_t^n\}) \biggl) \biggl(\varphi_{2, \theta}(s,x| \{ \mu_t \})- \varphi_{2, \theta}^n(s,x | \{ \mu^n_t \})\biggl) \biggr|. \end{split} \end{align}
Step 3. We can further show $\varphi^n_\theta( s,x | \{ \mu^n_t \}) \rightarrow \varphi_\theta(s,x| \{ \mu_t \})$ for any $ s,x \in [0,T]\times \mathbb{R}$ as $n \rightarrow \infty$.
Indeed, from Eqns. (\ref{eqeqeq}) and (\ref{eqnn}) and by It\^o's isometry and the Cauchy--Schwarz inequality, \begin{align*}
&\biggl( v_{\theta}(\tau,x |\{\mu_t\}) - v^n_{\theta}(\tau,x |\{\mu_t^n\})\biggl)^2 + \sigma^2 \mathbb{E}\biggl[\int_\tau^T\biggl(\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggl)^2 ds \biggl]
\\ \leq & 3(T-\tau) \mathbb{E} \biggl[ \int_\tau^T \biggl(f(x_s ,\mu_s) - f(x_s ,\mu_s^n )\biggr)^2 + \biggl( (b(x_s,\mu_s) -b(x_s,\mu_s^n) ) \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggr)^2 \\& + \biggl( (\gamma_1+ \partial_x v^n_{\theta}(s,{x_s}|\{\mu_t^n\})) (\varphi_{1, \theta} (s,x_s| \{ \mu_t \} )-\varphi_{1, \theta}^n (s,x_s | \{ \mu^n_t \} ))
\\& + ( \gamma_2 - \partial_x v^n_{\theta}(s,{x_s}|\{\mu_t^n\} ))(\varphi_{2, \theta}(s,x_s| \{ \mu_t \} )- \varphi_{2, \theta}^n (s,x_s | \{ \mu^n_t \} )) \biggl)^2 ds \biggl]
\\ \leq & 3(T-\tau) \mathbb{E} \biggl [ \int_\tau^T \biggl( Lip(f) D^1(\mu_s ,\mu_s^n) \biggl)^2 + \biggl( Lip(b) D^1(\mu_s ,\mu_s^n) \biggl| \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggl| \biggl)^2 \\& + \biggl( (\gamma_1+ \partial_x v^n_{\theta}(s,x_s |\{\mu_t^n\})) (\varphi_{1, \theta} (s,x_s| \{ \mu_t \} )-\varphi_{1, \theta}^n (s,x_s | \{ \mu^n_t \} ))
\\&+ (\gamma_2- \partial_x v^n_{\theta}(s,x_s |\{\mu_t^n\}))(\varphi_{2, \theta}(s,x_s| \{ \mu_t \} )- \varphi_{2, \theta}^n (s,x_s | \{ \mu^n_t \} )) \biggl) ^2 ds \biggl]
\\ \leq & 3(T-\tau) \mathbb{E} \biggl [ \int_\tau^T \biggl( Lip(f) D^1(\mu_s ,\mu_s^n) \biggr)^2 + \biggl( Lip(b) D^1(\mu_s ,\mu_s^n)| \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\})| \biggr)^2 \\& + \biggl (3\theta (\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\})) \biggr)^2 ds \biggr]. \end{align*} Let $\delta = \frac{\sigma^2}{54\theta^2}$. Then, for any $\tau \in [T-\delta, T]$, \begin{align*}
& \biggl( v_{\theta}(\tau,x|\{\mu_t \} ) - v^n_{\theta}(\tau,x |\{\mu_t^n\}) \biggl)^2 + \frac{\sigma^2}{2} \mathbb{E} \biggl[\int_\tau^T(\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) )^2 ds \biggl]
\\ \leq & 3(T-\tau) \mathbb{E} \biggl[ \int_\tau^T \biggl( Lip(f) D^1(\mu_s ,\mu_s^n) \biggr)^2 + \biggl( Lip(b) D^1(\mu_s ,\mu_s^n)| \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\})| \biggr)^2 ds \biggr]. \end{align*}
Hence, for any $\tau \in [T-\delta, T]$, $$ v_{\theta}(\tau,x|\{\mu_t \} ) - v^n_{\theta}(\tau,x|\{\mu_t^n\} ) \rightarrow 0, $$ and $$ \mathbb{E} \biggl[\int_\tau^T \biggl(\partial_x v_{\theta}(s,x_s|\{\mu_t \}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggr)^2 ds \biggr] \rightarrow 0 \text{ as } n \rightarrow \infty.$$
Since $\delta >0$, one can repeat this process for $[T-2\delta, T-\delta]$. Proceeding recursively, one can show that for any $(t,x) \in [0,T]\times \mathbb{R}$, $ v^n_{\theta}(t,x|\{\mu_t^n\} ) \rightarrow v_{\theta}(t,x |\{\mu_t \}), $ and $ \mathbb{E} \biggl[\int_0^T \biggl(\partial_x v_{\theta}(s,x_s|\{\mu_t\}) - \partial_x v^n_{\theta}(s,x_s|\{\mu_t^n\}) \biggl )^2 ds \biggl] \rightarrow 0 \text{ as } n \rightarrow \infty.$ Hence, for any $(s,x) \in [0,T]\times \mathbb{R}$,
$$\partial_x v^n_{\theta} (s,x|\{\mu_t^n\}) \rightarrow \partial_x v_{\theta}(s,x|\{\mu_t\})\text{ as } n\rightarrow \infty.$$
By Proposition \ref{strictconvex}, $\partial_x v^n_{\theta } (s,x|\{\mu_t^n\}),\partial_x v_{\theta}(s,x|\{\mu_t \}) $ are strictly increasing in $x$, and by definition of $\varphi^n_\theta$ and $\varphi_\theta$, $\varphi^n_\theta( s,x | \{ \mu^n_t \}) $ converges to $\varphi_\theta(s,x| \{ \mu_t \})$ for any $(s,x) \in [0,T]\times \mathbb{R} $.
Step 4. We are now ready to show that $d_\mathcal{M} \biggl(\{\tilde{\mu}_t\},\{\tilde{\mu}_t^n\} \biggr) \rightarrow 0$ as $n \rightarrow \infty$.
From the previous steps, $\varphi^n_\theta( s,x_s | \{ \mu^n_t \})\rightarrow\varphi_\theta(s,x_s| \{ \mu_t \})$ a.s. as $n\rightarrow \infty$, and by the dominated convergence theorem, for each $s\in [0,T]$, $\mathbb{E} \biggl|\varphi_\theta( s,x_s | \{ \mu_t \})-\varphi^n_\theta(s,x_s | \{ \mu^n_t \}) \biggr|^2 \rightarrow 0.$ Hence, by inequality (\ref{ineqcontinuity1}), $D^2(\tilde{\mu}_t,\tilde{\mu}_t^n) \rightarrow 0$ for any $t \in [0,T]$, and therefore $d_\mathcal{M} \biggl(\{\tilde{\mu}_t\},\{\tilde{\mu}_t^n\} \biggr) \rightarrow 0 \text{ as } n \rightarrow \infty.$ That is, $\Gamma $ is continuous.
\end{proof}
\begin{proposition}\label{uniq} Assume \emph{(A1)--(A6)}. Then $\Gamma:\mathcal{M}_{[0,T]}\rightarrow \mathcal{M}_{[0,T]}$ has a fixed point, and \emph{(\ref{MFGbounded1})} has a unique solution. \end{proposition} \begin{proof} As in the proof in Section 3.2 and the proof of Lemma 5.7 in \cite{Cardaliaguet2013}, the range of the mapping $\Gamma $ is relatively compact, and by Proposition \ref{continuous}, $\Gamma$ is a continuous mapping. Hence, by the Schauder fixed point theorem~\cite[Theorem 4.1.1]{Smart1980}, $\Gamma$ has a fixed point, i.e., some $\{\mu_t\} \in \mathcal{M}_{[0,T]}$ with $\Gamma (\{\mu_t\}) = \{\mu_t\}$. By Assumption (A5), there exists at most one fixed point \cite{Cardaliaguet2013, LL2007}. Therefore, there exists a unique fixed-point flow of probability measures $\{\mu_t^*\}$, which solves (\ref{MFGbounded1}). By the definition of the solution to a MFG and Proposition \ref{optimization}, the optimal control is also unique. \end{proof}
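The Schauder theorem invoked above is non-constructive. As a loose standalone illustration only (not the argument of the proposition, and under an extra contraction assumption that Schauder's theorem does not need), the following Python sketch finds the unique fixed point of a continuous self-map of a compact interval by Picard iteration:

```python
import math

def picard_fixed_point(Gamma, x0, tol=1e-12, max_iter=10_000):
    """Iterate x <- Gamma(x) until successive iterates are within tol.
    Converges when Gamma is a contraction on a complete metric space."""
    x = x0
    for _ in range(max_iter):
        x_next = Gamma(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

# cos maps [0, 1] into itself and is a contraction there (|cos'| <= sin 1 < 1),
# so it has a unique fixed point x* = cos(x*), approximately 0.739085.
x_star = picard_fixed_point(math.cos, 0.5)
print(x_star)
```

This is only an analogue: the map $\Gamma$ of the proposition acts on flows of probability measures, where compactness, rather than contractivity, yields existence.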
\subsection{Proof of the Main Theorem}
Suppose that $ \biggl((\xi_{\cdot,\theta}^+,\xi_{\cdot,\theta}^- ),\{\mu_{t,\theta} \} \biggl)$ is a solution to (\ref{MFGbounded1}) with a given bound $\theta$, and $x_{t,\theta}$ is the optimally controlled process: \begin{align*}
dx_{t,\theta} = \biggl(b(x_{t,\theta}, \mu_{t,\theta}) +\varphi_{1,\theta}(t,x_{t,\theta}|\{\mu_{t,\theta}\}) - \varphi_{2,\theta}(t,x_{t,\theta}|\{\mu_{t,\theta}\}) \biggl) dt + \sigma dW_t, \quad x_{s,\theta} = x, \end{align*}
where $\dot{\xi}_{t,\theta}^+-\dot{\xi}_{t,\theta}^- = \varphi_\theta (t,x|\{\mu_{t, \theta}\}) = \varphi_{1,\theta} (t,x |\{\mu_{t,\theta}\}) - \varphi_{2,\theta}(t,x |\{\mu_{t,\theta}\}) $ is the optimal control function. Note that we explicitly write $\mu_{t, \theta}$ here to emphasize the dependence on $\theta$ for the game (MFG-BD).
Given this $\{\mu_{t,\theta}\}$, let $v(s,x|\{ \mu_{t,\theta}\}) $ be the value function of the stochastic control problem (\ref{Control-FV}), and let $x_{t}$ be the optimally controlled process \begin{align*} dx_{t} = b(x_{t}, \mu_{t,\theta})dt + \sigma dW_t +d\xi_{t}^+ -d\xi_{t}^- , \quad x_{s-} = x, \end{align*} where the optimal control $\xi_{t}$ is in feedback form.
Hence, denote $$ d \varphi (t,x|\{\mu_{t,\theta}\})=d \varphi_{1} (t,x|\{\mu_{t,\theta}\})-d \varphi_{2} (t,x|\{\mu_{t,\theta}\}) = d\xi_{t}^+ -d\xi_{t}^-$$ as the optimal control function for the stochastic control problem of
(\ref{Control-FV}) with the fixed $\{\mu_{t,\theta}\}$.
Now define \begin{align*}
dx_{t,\theta}^i &= \biggl(b(x_{t,\theta}^i, \mu_{t,\theta}) +\varphi_{1,\theta}(t,x_{t,\theta}^i|\{\mu_{t,\theta}\} ) - \varphi_{2,\theta}(t,x_{t,\theta}^i|\{\mu_{t,\theta}\} ) \biggl) dt + \sigma dW_{t }^i, \quad x_{s,\theta}^i = x, \\
dx_{t}^i &= b(x_{t}^i, \mu_{t,\theta})dt +d \varphi_{1} (t,x_{t}^i|\{\mu_{t,\theta}\} )-d \varphi_{2} (t,x_{t}^i|\{\mu_{t,\theta}\} ) + \sigma dW_t^i, \quad x_{s-}^i = x, \\
dx_{t,\theta}^{i, N}& = \biggl( \frac{1}{N} \sum_{ j = 1}^N b_0(x_{t,\theta}^{i, N}, x_{t,\theta}^{j, N} ) +\varphi_{1,\theta}(t,x_{t,\theta}^{i, N}|\{\mu_{t,\theta}\} ) - \varphi_{2,\theta}(t,x_{t,\theta}^{i, N}|\{\mu_{t,\theta}\} ) \biggl) dt + \sigma dW_t^i, \quad x_{s,\theta}^{i, N} = x, \end{align*}
Recall that $(\mu_{t,\theta}, \varphi_\theta)$ is the solution to (\ref{MFGbounded1}), the $x_{t,\theta}^i$ are i.i.d., and $\mu_{t,\theta}$ is the probability measure of $x_{t,\theta}^i$ for any $i = 1,\ldots, N$. We first establish some technical lemmas. \begin{lemma} For any $ 1 \le i \le N$,
$ \mathbb{E} \sup_{s\leq t\leq T} \limits \biggl|x_{t,\theta}^i-x_{t,\theta}^{i,N} \biggr|^2 = O \biggl(\frac{1}{N} \biggr)$. \label{Nash1} \end{lemma} \begin{proof}
\begin{align*}
d(x_{t,\theta}^i-x_{t,\theta}^{i,N} ) = \left( \int_\mathbb{R} b_0(x_{t,\theta}^i,y) \mu_{t,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{t,\theta}^{i,N},x_{t,\theta}^{j,N}) + \varphi_\theta (t, x_{t,\theta}^i|\{\mu_{t,\theta}\} )- \varphi_\theta(t, x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} )\right) dt, \end{align*} and \begin{align*}
d(x_{t,\theta}^i-x_{t,\theta}^{i,N} )^2 &= \biggl\lbrace 2 (x_{t,\theta}^i-x_{t,\theta}^{i,N} ) \biggl(\int_\mathbb{R} b_0(x_{t,\theta}^i,y) \mu_{t,\theta} (dy) \\ &\quad -\frac{1}{N}\sum_{j=1}^N b_0(x_{t,\theta}^{i,N} ,x_{t,\theta}^{j,N} ) + \varphi_\theta (t, x_{t,\theta}^i|\{\mu_{t,\theta}\} )- \varphi_\theta(t, x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} ) \biggl) \biggr\rbrace dt. \end{align*}
By Assumption (A6), $(x_{t,\theta}^i-x_{t,\theta}^{i,N} ) \biggl(\varphi_\theta (t, x_{t,\theta}^i|\{\mu_{t,\theta}\} )- \varphi_\theta(t, x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} ) \biggl ) \leq 0$. Consequently, for any $t\in[s,T]$,
\begin{align*}
|x_{t,\theta}^i-x_{t,\theta}^{i,N}|^2 &
\leq \int_s^t 2 | x_{u,\theta}^i-{x_{u,\theta}^{i,N}}| \biggl| \int_\mathbb{R} b_0(x_{u,\theta}^i,y) \mu_{u,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( {x_{u,\theta}^{i,N}},x_{u,\theta}^{j,N}) \biggl|du
\\ & \leq\int_s^t2|x_{u,\theta}^i-x_{u,\theta}^{i,N}|\biggl| \int_\mathbb{R} b_0(x_{u,\theta}^i,y) \mu_{u,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^j) \biggl|du
\\& \quad +\int_s^t2|x_{u,\theta}^i-x_{u,\theta}^{i,N}|\biggl| \frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^j) - \frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^{j,N}) \biggl|du
\\& \quad +\int_s^t2|x_{u,\theta}^i-x_{u,\theta}^{i,N}|\biggl| \frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^{j,N}) - \frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i,N},x_{u,\theta}^{j,N}) \biggl|du
\\& \leq \int_s^t2|x_{u,\theta}^i-x_{u,\theta}^{i,N}|\biggl| \int_\mathbb{R} b_0(x_{u,\theta}^i,y) \mu_{u,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^j) \biggl|du
\\& \quad +\int_s^t2Lip(b_0)|x_{u,\theta}^i-x_{u,\theta}^{i,N}|^2du+\int_s^t\frac{Lip(b_0)}{N}\sum_{j=1}^N2|x_{u,\theta}^i-x_{u,\theta}^{i,N}||x_{u,\theta}^j-x_{u,\theta}^{j,N}|du
\\& \leq \int_s^t\biggl| \int_\mathbb{R} b_0(x_{u,\theta}^i,y) \mu_{u,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^j) \biggl|^2du
\\& \quad +\int_s^t[1+3Lip(b_0)]|x_{u,\theta}^i-x_{u,\theta}^{i,N}|^2du+\int_s^t\frac{Lip(b_0)}{N}\sum_{j=1}^N|x_{u,\theta}^j-x_{u,\theta}^{j,N}|^2du.
\end{align*} By the assumption that the initial distribution among $N$ players is permutation invariant, \begin{align*}
\mathbb{E} |x_{t,\theta}^{i }-x_{t,\theta}^{i,N}|^2
\leq & [1+ 4Lip(b_0)] \mathbb{E} \int_s^t | x_{u,\theta}^{i }-x_{u,\theta}^{i,N}|^2 du\\& + \mathbb{E} \int_s^t \biggl| \int_\mathbb{R} b_0(x_{u,\theta}^{i },y) \mu_{u,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{u,\theta}^{i},x_{u,\theta}^{j}) \biggl|^2du, \end{align*}
and the $x_{\cdot,\theta}^{i}$'s are i.i.d. Due to the boundedness of $b_0$, $$\mathbb{E} \biggl| \int_\mathbb{R} b_0(x_{t,\theta}^{i },y) \mu_{t,\theta} (dy)-\frac{1}{N}\sum_{j=1}^N b_0( x_{t,\theta}^{i},x_{t,\theta}^{j}) \biggr|^2=\epsilon_N^2=O\left(\frac{1}{N}\right).$$ Consequently, \begin{align*}
\mathbb{E} | x_{t,\theta}^{i }-x_{t,\theta}^{i,N} |^2
&\leq \mathbb{E} \int_s^t (1+4Lip(b_0) ) | x_{u,\theta}^{i}-x_{u,\theta}^{i,N} |^2 du+ \int_s^t \epsilon_N^2 \, du. \end{align*} By Gronwall's inequality,
$$ \mathbb{E} | x_{t,\theta}^{i }-x_{t,\theta}^{i,N} |^2\leq \int_s^t \epsilon_N^2 du \cdot \mathbb{E} \biggl[\exp(\int_s^t [1+4Lip(b_0) ] du) \biggl]\leq \epsilon_N^2\cdot T\cdot\exp\left\{T[1+4Lip(b_0)]\right\}, $$ and hence, \begin{equation*}
\mathbb{E} \sup_{s \leq t \leq T} |x_{t,\theta}^{i}-x_{t,\theta}^{i,N}|^2 \leq \epsilon_N^2\cdot T\cdot\exp\left\{T[1+4Lip(b_0)]\right\} = O \biggl( \frac{1}{N} \biggr). \end{equation*}
This proves the lemma. \end{proof}
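The estimate $\epsilon_N^2 = O(1/N)$ used in the proof is the classical mean-square error of an empirical average of i.i.d. bounded samples. As a standalone numerical illustration (not part of the proof; uniform samples stand in for the bounded interaction terms $b_0(x^i_{t,\theta},x^j_{t,\theta})$), the following Python sketch estimates this error:

```python
import random

def mse_of_empirical_mean(N, trials=4000, seed=0):
    """Monte Carlo estimate of E|(1/N) sum_j Y_j - E[Y]|^2 for
    i.i.d. bounded samples Y_j ~ Uniform(-1, 1), so E[Y] = 0."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        m = sum(rng.uniform(-1.0, 1.0) for _ in range(N)) / N
        acc += m * m
    return acc / trials

# For i.i.d. samples with variance v, the mean-square error is exactly v/N,
# i.e. eps_N^2 = O(1/N); here v = Var(Uniform(-1,1)) = 1/3.
for N in (10, 100, 400):
    print(N, mse_of_empirical_mean(N))
```

Multiplying the printed values by $N$ should give roughly the constant $1/3$, consistent with $\epsilon_N^2 = O(1/N)$.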
Suppose that the first player chooses a different control $\xi_t' $ of bounded velocity, while all other players $i=2,3,\ldots, N$ stay with the optimal control $\{\xi_{t,\theta}\}$. Denote $$d\xi_t' = \dot{\xi}_t' dt = \varphi'(t,x) dt, \quad \text{ and } \quad d\xi_{t,\theta} = \dot{\xi}_{t,\theta} dt = \varphi_\theta (t,x|\{\mu_{t,\theta}\} ) dt.$$
Then the corresponding dynamics for the MFG is \begin{align*} d \tilde{x}_{t,\theta}^1 &= \biggl( b (\tilde{x}_{t,\theta}^1,\mu_{t,\theta}) + \varphi'(t,\tilde{x}_{t,\theta}^1) \biggr) dt + \sigma dW_t^1. \end{align*}
The corresponding dynamics for the $N$-player game are \begin{align*} d\tilde{x}_{t,\theta}^{1,N} &= \left( \frac{1}{N}\sum_{j=1}^N b_0( \tilde{x}_{t,\theta}^{1,N} ,\tilde{x}_{t,\theta}^{j,N}) + \varphi'(t,\tilde{x}_{t,\theta}^{1,N})\right) dt + \sigma dW_t^1, \\d\tilde{x}_{t,\theta}^{i,N} &= \left( \frac{1}{N}\sum_{j=1}^N b_0 (\tilde{x}_{t,\theta}^{i,N},\tilde{x}_{t,\theta}^{j,N}) +
\varphi_\theta (t,\tilde{x}_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} )\right) dt + \sigma dW_t^i, \quad \quad \quad 2 \leq i \leq N. \end{align*} We first show the following. \begin{lemma}\label{Lemma-Nash}
$ \sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}| = O \biggl(\frac{1}{\sqrt{N}} \biggr) $. \end{lemma} \begin{proof} For any $2 \leq i \leq N$, \begin{align*}
d(x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}) = \left[ \frac{1}{N}\sum_{j=1}^N \left( b_0(x_{t,\theta}^{i,N},x_{t,\theta}^{j,N})-b_0(\tilde{x}_{t,\theta}^{i,N},\tilde{x}_{t,\theta}^{j,N}) \right) + \varphi_\theta (t,x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} ) -\varphi_\theta(t,\tilde{x}_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} )\right] dt. \end{align*}
Because $\varphi_\theta (t,x|\{\mu_{t,\theta}\} )$ is nonincreasing in $x$, \begin{align*}
|x_{T,\theta}^{i,N}-\tilde{x}_{T,\theta}^{i,N}|^2 &\leq \int_s^T 2 (x_{t,\theta}^{i,N}- \tilde{x}_{t,\theta}^{i,N})\left( \frac{1}{N}\sum_{j=1}^N \left(b_0(x_{t,\theta}^{i,N},x_{t,\theta}^{j,N})-b_0(\tilde{x}_{t,\theta}^{i,N},\tilde{x}_{t,\theta}^{j,N}) \right) \right) dt
\\ &\leq \int_s^T 2 (x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}) \frac{1}{N}\sum_{j=1}^N Lip(b_0) \biggl( |x_{t,\theta}^{i,N} - \tilde{x}_{t,\theta}^{i,N}|+ |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}| \biggl) dt\allowdisplaybreaks
\\ &\leq 2 Lip(b_0) \int_s^T |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 + |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}| \frac{1}{N}\sum_{j=1}^N |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}| dt
\\ &\leq 2 Lip(b_0) \int_s^T |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 + \frac{1}{2N}\sum_{j=1}^N \biggl( |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2+ |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}|^2 \biggl) dt
\\ &\leq Lip(b_0) \int_s^T 3 |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 + \frac{1}{ N}\sum_{j=1}^N |x_{t,\theta}^{j,N}-\tilde{x}_{t,\theta}^{j,N}|^2 dt , \end{align*} and \begin{align*}
\sup_{2\leq i\leq N} \limits & \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2
\\ & \leq Lip(b_0) \int_s^T [ \sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t'\leq t} \limits 3 |x_{t',\theta}^{i,N}-\tilde{x}_{t',\theta}^{i,N}|^2 \\& \quad \quad + \frac{N-1}{N} \sup_{2\leq j\leq N} \limits \mathbb{E} \sup_{s\leq t' \leq t} \limits |x_{t',\theta}^{j,N}-\tilde{x}_{t',\theta}^{j,N}|^2+ \frac{1}{N}\mathbb{E} |x_{t,\theta}^{1,N}-\tilde{x}_{t,\theta}^{1,N}|^2 ] dt
\\&= Lip(b_0) \int_s^T \left[ \frac{4N-1}{N} \sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t' \leq t} \limits |x_{t',\theta}^{i,N}-\tilde{x}_{t',\theta}^{i,N}|^2 + \frac{1}{N} \mathbb{E}|x_{t,\theta}^{1,N}-\tilde{x}_{t,\theta}^{1,N}|^2 \right] dt. \end{align*} By Gronwall's inequality, \begin{align*}
\sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|^2 \leq Lip(b_0) \int_s^T \frac{1}{N} \mathbb{E} |x_{t,\theta}^{1,N}-\tilde{x}_{t,\theta}^{1,N}|^2 dt \cdot e^{\int_s^T Lip(b_0) \frac{4N-1}{N} dt} =O \left( \frac{1}{N} \right). \end{align*}
So, $\sup_{2\leq i\leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t,\theta}^{i,N}|=O \biggl(\frac{1}{\sqrt{N}} \biggr). $ \end{proof}
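Gronwall's inequality, used in both lemmas above, states that $u(t) \le a + c\int_s^t u(r)\,dr$ implies $u(t) \le a\,e^{c(t-s)}$. As a standalone numerical sanity check (an illustration of ours, with arbitrary sample constants), the sketch below iterates the discrete worst case of the inequality, the Euler scheme of $u' = cu$, and compares it with the Gronwall bound:

```python
import math

def gronwall_bound(a, c, t):
    """Continuous Gronwall bound: u(t) <= a * exp(c * t)."""
    return a * math.exp(c * t)

def worst_case_sequence(a, c, h, steps):
    """Iterate the discrete inequality with equality:
    u_{k+1} = u_k + h * c * u_k (Euler step of u' = c u), u_0 = a."""
    u = a
    for _ in range(steps):
        u = u + h * c * u
    return u

a, c, T, steps = 2.0, 1.5, 1.0, 1000
h = T / steps
u_T = worst_case_sequence(a, c, h, steps)
# The discrete worst case a * (1 + c h)^steps never exceeds a * e^{cT},
# and approaches it as the step size h shrinks.
print(u_T, gronwall_bound(a, c, T))
```

The two printed numbers should be close, with the iterated value slightly below the exponential bound.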
\begin{proof}[Proof of Main Theorem a)]
By Lemma \ref{Nash1}, for any $2 \le i \le N$,
$ \sup_{s\leq t\leq T} \limits \mathbb{E} |x_{t,\theta}^{i }-x_{t,\theta}^{i,N} | = O \left(\frac{1}{\sqrt{N}} \right)$, and by Lemma \ref{Lemma-Nash} and the triangle inequality, $\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i}-\tilde{x}_{t,\theta}^{i,N}| = O(\frac{1}{\sqrt{N}})$. Therefore, \begin{equation*}
\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-\tilde{x}_{t,\theta}^{i, N}| + \sup_{1 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-x_{t,\theta}^{ i,N}| = O \left(\frac{1}{\sqrt{N}} \right). \end{equation*}
Finally, define
\begin{align*}
d\bar{x}_{t,\theta}^{1,N} &= \left( \frac{1}{N}\sum_{j=1}^N b_0( \bar{x}_{t,\theta}^{1,N} ,x_{t,\theta}^{j}) + \varphi'(t,\bar{x}_{t,\theta}^{1,N})\right) dt + \sigma dW_t^1.
\end{align*}
Since $(x-y)(\varphi'(t,x)-\varphi'(t,y)) \leq 0$ by Assumption (A6), a similar proof to that of Lemma~\ref{Nash1} shows
$\mathbb{E} \sup_{s\leq t\leq T} \limits |\tilde{x}_{t,\theta}^{1,N}-\bar{x}_{t,\theta}^{1,N}| = O \left(\frac{1}{\sqrt{N}} \right)$ and $ \mathbb{E} \sup_{s\leq t\leq T} \limits |\bar{x}_{t,\theta}^{1,N}-\tilde{x}_{t,\theta}^{1 }| = O\left( \frac{1}{\sqrt{N}} \right)$.
Therefore, \begin{align*}
&E_{x_{s-,\theta}^{N}}\left[J^{1,N}_{\theta}(s,x_{s-,\theta}^{N},\xi_\cdot^{'+},\xi_\cdot^{'-};\xi_{\cdot,\theta}^{-1}|\{\mu_{t,\theta}\} ) \right] \\&= \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(\tilde{x}_{t,\theta}^{1,N}, \tilde{x}_{t,\theta}^{j,N}) + \gamma_1 \varphi'_1(t,\tilde{x}_{t,\theta}^{1,N})+\gamma_2 \varphi'_2(t,\tilde{x}_{t,\theta}^{1,N}) dt \right] \\&\geq \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(\tilde{x}_{t,\theta}^{1,N},x_{t,\theta}^{j }) +\gamma_1 \varphi'_1(t,\tilde{x}_{t,\theta}^{1,N})+\gamma_2 \varphi'_2(t,\tilde{x}_{t,\theta}^{1,N}) dt \right] -O\left(\frac{1}{\sqrt{N}} \right) \\&\geq \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(\bar{x}_{t,\theta}^{1,N}, x_{t,\theta}^j) +\gamma_1 \varphi'_1(t,\bar{x}_{t,\theta}^{1,N})+\gamma_2 \varphi'_2(t,\bar{x}_{t,\theta}^{1,N}) dt \right] -O\left(\frac{1}{\sqrt{N}} \right) \\&\geq \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f_0 (\tilde{x}_{t,\theta}^{1},y) \mu_{t,\theta} (dy) + \gamma_1 \varphi'_1(t,\tilde{x}_{t,\theta}^{1}) + \gamma_2 \varphi'_2(t,\tilde{x}_{t,\theta}^{1}) dt \right] -O\left(\frac{1}{\sqrt{N}} \right)
\\&\geq \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f_0 (x_{t,\theta}^{1} ,y) \mu_{t,\theta} (dy) + \gamma_1 \varphi_{1,\theta} (t,{x}_{t,\theta}^{1}|\{\mu_{t,\theta}\} )+\gamma_2 \varphi_{2,\theta} (t,{x}_{t,\theta}^{1}|\{\mu_{t,\theta}\} ) dt \right] -O\left(\frac{1}{\sqrt{N}} \right)
\\& = \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(x_{t,\theta}^{1,N} ,x_{t,\theta}^{j,N} ) +\gamma_1 \varphi_{1,\theta} (t,{x}_{t,\theta}^{1,N}|\{\mu_{t,\theta}\} )+\gamma_2 \varphi_{2,\theta} (t,{x}_{t,\theta}^{1,N}|\{\mu_{t,\theta}\} ) dt \right] -O\left(\frac{1}{\sqrt{N}} \right)
\\& = E_{x_{s-,\theta}^{N}}\left[{J^{1,N}_{\theta}}(s,x_{s-,\theta}^N,\xi_{\cdot,\theta}^{ +},\xi_{\cdot,\theta}^{ -};\xi_{\cdot,\theta}^{ -1}|\{\mu_{t,\theta}\} )\right] -O\left(\frac{1}{\sqrt{N}} \right), \end{align*} where the last inequality is due to the optimality of $\varphi_\theta$ for problem (\ref{MFGbounded1}), and the last equality follows from a similar argument to that of Lemma \ref{Nash1}. \end{proof}
\begin{proof}[Proof of Main Theorem b)] Let all players except player 1 choose the optimal controls $(\xi_{\cdot,\theta}^+,\xi_{\cdot,\theta}^-) $, and let player 1 choose any other control $(\xi_{\cdot}^{'+},\xi_{\cdot}^{'-}) \in \mathcal{U}$. Denote $$d \xi_t'= d \varphi ' (t,x )= d \varphi_1' (t,x )-d \varphi_2' (t,x ), $$ and define
\begin{align*} d\tilde{x}_{t}^1 &= b (\tilde{x}_{t}^1, \mu_{t,\theta} ) dt +d\varphi_1'(t,\tilde{x}_{t}^1) - d\varphi_2'(t,\tilde{x}_{t}^1 ) + \sigma dW_t^1 \quad \tilde{x}_{s-}^1 = x,\\ d\tilde{x}_{t}^{1,N} &= \frac{1}{N} \sum_{ j = 1,\ldots, N} b_0(\tilde{x}_{t}^{1,N}, \tilde{x}_{t,}^{j,N} ) dt +d\varphi_1'(t,\tilde{x}_{t}^{1,N} ) -d \varphi_2'(t,\tilde{x}_{t}^{1,N}) + \sigma dW_t^1, \quad \tilde{x}_{s-}^{1,N} = x,
\\d\tilde{x}_{t}^{i,N} &= \biggl( \frac{1}{N} \sum_{ j = 1,\ldots, N} b_0(\tilde{x}_{t}^{i,N}, \tilde{x}_{t}^{j,N} ) +\varphi_{1,\theta}(t,\tilde{x}_{t}^{i,N}|\{\mu_{t,\theta}\} ) - \varphi_{2,\theta}(t,\tilde{x}_{t}^{i,N} |\{\mu_{t,\theta}\} ) \biggl) dt + \sigma dW_t^i, \quad x_{s-}^{i,N} = x, \\& \text{ for } i = 2,\ldots,N. \end{align*} Then, \begin{align*}
d(x_{t,\theta}^{i,N}-\tilde{x}_{t}^{i,N}) = \left[ \frac{1}{N}\sum_{j=1}^N \left( b_0(x_{t,\theta}^{i,N},x_{t,\theta}^{j,N})-b_0(\tilde{x}_{t}^{i,N},\tilde{x}_{t}^{j,N}) \right) + \varphi_\theta(t,x_{t,\theta}^{i,N}|\{\mu_{t,\theta}\} ) -\varphi_\theta (t,\tilde{x}_{t}^{i,N}|\{\mu_{t,\theta}\} )\right] dt. \end{align*}
By definition, $\varphi_\theta (t,x|\{\mu_{t,\theta}\} )$ is nonincreasing in $x$. Hence, a similar proof to the one for Lemma \ref{Lemma-Nash} yields
\begin{equation} \label{Lemma-Nash2}
\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i,N}-\tilde{x}_{t}^{i,N}| = O \biggl(\frac{1}{\sqrt{N}} \biggr). \end{equation}
From Lemma \ref{Nash1} and the triangle inequality, $\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{i}-\tilde{x}_{t}^{i,N}| = O \biggl(\frac{1}{\sqrt{N}} \biggl)$. Therefore, \begin{equation*}
\sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-\tilde{x}_{t}^{i, N}| + \sup_{2 \leq i \leq N} \limits \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta}^{ i}-x_{t,\theta}^{ i,N}| = O \left(\frac{1}{\sqrt{N}} \right). \end{equation*} Since $d\varphi'(t,x) $ is also nonincreasing in $x$, the same proof as that for Lemma~\ref{Nash1} shows
$$\mathbb{E} \sup_{s\leq t\leq T} \limits | \tilde{x}^{ 1 ,N}_{t}-\tilde{x}_{t}^1| = O \left(\frac{1}{\sqrt{N}} \right).$$ By the Lipschitz continuity of $f,f_0$, \begin{align*}
&E_{x_{s-}^N}\left[J^{1,N} (s,x_{s-}^N ,\xi_\cdot^{'+},\xi_\cdot^{'-};\xi_{\cdot,\theta}^{-1}|\{\mu_{t,\theta}\} ) \right] \\&= \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(\tilde{x}_{t}^{1, N}, \tilde{x}_{t}^{j, N})dt +\gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1, N} ) + \gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1, N} ) \right] \\&\geq \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j=1}^N f_0(\tilde{x}_{t}^{1, N} , x_{t,\theta}^{ j })dt +\gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1, N} ) + \gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1, N} ) \right] -O\left(\frac{1}{\sqrt{N}} \right) \\&\geq \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f(\tilde{x}_{t}^{1, N} , y) \mu_{t,\theta}(dy) dt + \gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1, N}) +\gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1, N} ) \right] -O\left(\frac{1}{\sqrt{N}} \right) \\&\geq \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f(\tilde{x}_{t}^{1 } , y) \mu_{t,\theta} (dy) dt + \gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1,N} ) +\gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) \right] -O\left(\frac{1}{\sqrt{N}} \right). \end{align*} By definitions of $\tilde{x}_{t }^{1 }$ and $\tilde{x}_{t}^{1,N }$, \begin{align}\label{notc} \begin{split}
& \mathbb{E} \left| d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) -d\varphi'_1(t,\tilde{x}_{t}^{1 } )-d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) +d\varphi'_2(t,\tilde{x}_{t}^{1 } ) \right|
\\& \leq \mathbb{E} d | \tilde{x}_{t}^{1,N } -\tilde{x}_{t}^{1} | + \mathbb{E} \left| \frac{1}{N} \sum_{ j = 1,\ldots, N} b_0(\tilde{x}_{t}^{1,N } , \tilde{x}_{t}^{j,N } ) - b (\tilde{x}_t^{1 }, \mu_{t,\theta } ) \right| dt
= O\left(\frac{1}{\sqrt{N}} \right),
\end{split} \end{align} and by definition of $\varphi_1',\varphi_2'$, \begin{align*}
&\left| \left( d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) -d\varphi'_1(t,\tilde{x}_{t}^{1 } ) \right)+\left( - d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) +d\varphi'_2(t,\tilde{x}_{t}^{1} ) \right)\right|
\\ = & \left| d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) -d\varphi'_1(t,\tilde{x}_{t}^{1 } ) \right|+\left| - d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) +d\varphi'_2(t,\tilde{x}_{t}^{1 } )\right|. \end{align*} Therefore,
$$ \mathbb{E} \sup_{s\leq t\leq T} \left| d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) -d\varphi'_1(t,\tilde{x}_{t}^{1 } ) \right| = O\left(\frac{1}{\sqrt{N}} \right),$$ $$ \mathbb{E} \sup_{s\leq t\leq T}\left| - d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) +d\varphi'_2(t,\tilde{x}_{t}^{1 } )\right|= O\left(\frac{1}{\sqrt{N}} \right),$$ and \begin{align*}
& \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f(\tilde{x}_{t}^{1 } , y) \mu_{t,\theta}(dy) dt + \gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1,N } ) + \gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1,N } ) \right] -O\left(\frac{1}{\sqrt{N}} \right)
\\ & \geq \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f(\tilde{x}_{t}^{1 } , y) \mu_{t,\theta} (dy) dt +\gamma_1 d\varphi'_1(t,\tilde{x}_{t}^{1} ) + \gamma_2 d\varphi'_2(t,\tilde{x}_{t}^{1 } ) \right] -O\left(\frac{1}{\sqrt{N}} \right)
\\ & \geq \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f(x_{t}^{1 } , y) \mu_{t,\theta } (dy) dt + \gamma_1 d\varphi_{1} (t,x_{t}^{1 } |\{\mu_{t,\theta}\} ) + \gamma_2 d\varphi_{2}(t,x_{t}^{1 } |\{\mu_{t,\theta}\} ) \right] -O\left(\frac{1}{\sqrt{N}} \right)
\\ & = v (s,x |\{\mu_{t,\theta } \}) -O\left(\frac{1}{\sqrt{N}} \right). \end{align*} The last inequality is due to the optimality of $\varphi$.
Now, by Theorem \ref{thetainfty}, $$ \biggl|v_{\theta} (s,x |\{\mu_{t,\theta}\}) - v (s,x |\{\mu_{t,\theta} \}) \biggl| \le \epsilon_\theta.$$
Hence, since $ \mathbb{E} \sup_{s\leq t\leq T} \limits |x_{t,\theta }^{i} -x_{t,\theta}^{i,N } | = \epsilon_N $, the same analysis as in the previous steps gives \begin{align*}
&E_{x_{s-}^N}\left[J^{1,N} (s,x_{s-}^N ,\xi_\cdot^{'+},\xi_\cdot^{'-};\xi_{\cdot,\theta}^{-1}|\{\mu_{t,\theta}\} ) \right] \geq E_{x_{s-}^N}[v(s,x_{s-}^N| \{\mu_{t,\theta } \})] -\epsilon_N
\\ & \geq E_{x_{s-}^N}[v_{\theta} ( s,x_{s-}^N|\{\mu_{t,\theta} \})] - (\epsilon_N+ \epsilon_\theta )
\\&= \mathbb{E} \left[ \int_s^T \int_\mathbb{R} f(x_{t,\theta}^{1 } , y) \mu_{t,\theta} (dy) dt + \gamma_1 d\varphi_{1,\theta}(t,x_{t,\theta}^{1 }|\{\mu_{t,\theta}\} ) + \gamma_2 d\varphi_{2,\theta } (t,x_{t,\theta}^{1 }|\{\mu_{t,\theta}\} ) \right] -(\epsilon_N+ \epsilon_\theta )
\\& \geq \mathbb{E} \left[ \int_s^T \frac{1}{N} \sum_{j = 1}^N f_0(x_{t,\theta}^{1,N } , x_{t,\theta}^{j,N} ) dt +\gamma_1 d\varphi_{1,\theta} (t,x_{t,\theta}^{1,N } |\{\mu_{t,\theta}\} ) + \gamma_2 d\varphi_{2,\theta}(t,x_{t,\theta}^{1,N } |\{\mu_{t,\theta}\} ) \right] -(\epsilon_N+ \epsilon_\theta )
\\& =E_{x_{s-}^N}\left[J^{1,N} (s,x_{s-}^N ,\xi_{\cdot,\theta}^{ +},\xi_{\cdot,\theta}^{ -};\xi_{\cdot,\theta}^{ -1}|\{\mu_{t,\theta}\} )\right]-(\epsilon_N+ \epsilon_\theta ) . \end{align*} \end{proof}
\section{Conclusion and discussion} In this paper, we study the approximation of $N$-player stochastic games with singular controls by a suitable MFG model with singular controls of bounded velocity. In particular, under a set of strategies derived from the MFG solution, the corresponding game value of the $N$-player game with singular controls deviates from that under NE strategies by at most an error term $\epsilon$. For $N$-player games with singular controls of bounded velocity, this error term $\epsilon = \epsilon_N$ depends solely on the number of players $N$, with $\epsilon_N=O\left(\frac{1}{\sqrt{N}}\right)$; with singular controls of finite variation, the error term can be decomposed as $\epsilon=\epsilon_N+\epsilon_\theta$, where $\epsilon_N=O\left(\frac{1}{\sqrt{N}}\right)$ and $\epsilon_\theta$ vanishes as the velocity bound $\theta$ tends to infinity. This finding enriches the literature on the relation between MFGs and $N$-player games, in terms of how well MFG models can approximate the corresponding $N$-player games even when the control processes are not continuous.
We also note that there is another direction of approximation one could study: whether NEs of $N$-player games converge to the MFG solutions as $N$ tends to infinity. There have been some works in this direction. For instance, it was shown in \cite{Lacker2016} that the $N$-player open-loop NEs converge to the mean-field limit in a weak sense of mixed mean-field equilibria; subsequently, a closed-loop case was considered in \cite{Lacker2020}. In \cite{Card2017}, the NE of the $N$-player game was viewed as the solution to a system of coupled HJB equations, and its limit was analyzed as a mean-field system with local coupling in terms of propagation of chaos. A special case of time games was studied in \cite{NMT2020}, where both the $N$-player game and the mean field game exhibit multiple NEs; it pointed out that a transversality condition plays an important role for the mean-field system to be the limit of the $N$-player game; concurrently, \cite{CPFP2019} also studied this convergence issue without uniqueness. More recently, \cite{LT2020} studied this convergence for both non-cooperative and cooperative games through propagation of chaos. The majority of existing works consider the case of continuous controls; it remains to be explored what happens when non-continuous controls are allowed.
\end{document}
\begin{document}
\sloppy
\begin{abstract} A hyperbolic set on a compact manifold $M$ satisfies the following property: given any two of its points $p$ and $q$ such that, for every $\epsilon>0$, there is a trajectory in the hyperbolic set from a point $\epsilon$-close to $p$ to a point $\epsilon$-close to $q$, there is a point in $M$ whose $\alpha$-limit set is that of $p$ and whose $\omega$-limit set is that of $q$. Bautista and Morales \cite{bm1} gave a version of this property for sectional-Anosov flows (vector fields whose maximal invariant set is sectional-hyperbolic), under some additional conditions, among them that the dimension of $M$ be three. In this paper, we prove a generalization of this result for sectional-hyperbolic sets of codimension one in higher dimensions.\\ \end{abstract}
\maketitle
\section{Introduction}
Sectional-hyperbolic sets form a more general class than hyperbolic sets, since this class includes the latter as well as non-hyperbolic sets such as the geometric Lorenz attractor; it is therefore relevant to study which properties valid for hyperbolic sets are also satisfied by sectional-hyperbolic sets. One property of Anosov flows (flows for which the whole manifold is a hyperbolic set) is the Anosov connecting lemma (Theorem \ref{teor1}). This result was extended by Bautista and Morales \cite{bm1} to sectional-Anosov flows (flows whose maximal invariant set is sectional-hyperbolic) in dimension three, under some necessary conditions; this extension is known as the Sectional-Anosov Connecting Lemma (Theorem \ref{teor3}).\\
Although the Anosov connecting lemma is very useful in hyperbolic dynamics, it has the limitation of requiring the flow to be Anosov; however, thanks to the theory of invariant manifolds (see \cite{hps}), the same property can be obtained for arbitrary hyperbolic sets (Theorem \ref{teor2}). In this paper, our main objective is to extend the sectional-Anosov connecting lemma to higher dimensions without requiring the flow to be sectional-Anosov, which allows one to apply it directly to sectional-hyperbolic sets that contain the unstable manifolds of their hyperbolic subsets. For this, we include some conditions, and we also generalize the characterization, given by Bautista and Morales \cite{bm2}, of omega-limit sectional-hyperbolic sets that are closed orbits, from dimension three to higher dimensions. Below we give the definitions necessary to state our objective precisely.\\
Hereafter $M$ will be a compact manifold, possibly with nonempty boundary, endowed with a Riemannian metric $\langle\cdot,\cdot\rangle$ with induced norm $||\cdot||$. Given a $C^1$ vector field $X$, inwardly transverse to the boundary (if nonempty), we call $X_t$ its induced {\em flow} on $M$. Define the {\em maximal invariant set} of $X$ by $$ M(X)=\displaystyle\bigcap_{t\geq0}X_t(M). $$
The \textit{orbit} of a point $p \in M(X)$ is defined by $\mathcal{O}(p)=\{X_t(p)\,|\,t\in\mathbb{R}\}$. A {\em singularity} is a point $q$ where $X$ vanishes, i.e. $X(q)=0$ (or equivalently $\mathcal{O}(q)=\{q\}$), and a {\em periodic orbit} is an orbit $\mathcal{O}(p)$ such that $X_T(p)=p$ for some minimal $T>0$ and $\mathcal{O}(p)\neq \{p\}$. A {\em closed orbit} is a singularity or a periodic orbit.\\
Given $p\in M$ we define the {\em omega-limit set},
$\omega_X(p)=\{x\in M\,|\,x=\lim_{n\rightarrow\infty}X_{t_n}(p)$ for some sequence $t_n\rightarrow\infty\}$ and, if $p \in M(X)$, the {\em alpha-limit set} $\alpha_X(p)=\{x\in M:x=\lim_{n\to{\infty}} X_{-t_n}(p)$ for some sequence $ t_n\to \infty\}$.\\
A compact subset $\Lambda$ of $M$ is called {\em invariant} if $X_t(\Lambda)=\Lambda$ for all $t\in\mathbb{R}$, and {\em transitive} if $\Lambda = \omega_X(p)$ for some $p\in \Lambda$. A compact invariant set $\Lambda$ is {\em attracting} if there is a neighborhood $U$ such that \[\Lambda=\cap_{t \geq 0}X_t(U),\] and an {\em attractor} of $X$ is an attracting set which is transitive. On the other hand, a compact invariant set $\Lambda$ is {\em Lyapunov stable} if for every neighborhood $U$ of $\Lambda$ there exists a neighborhood $W$ of $\Lambda$ such that $X_t(p)\in U$ for all $p\in W$ and $t\geq 0$.
\begin{defin}\label{defin1} A compact invariant set $\Lambda \subseteq M(X)$ is {\em hyperbolic} if there are positive constants $K,\lambda$ and a continuous $DX_t$-invariant splitting of the tangent bundle $T_{\Lambda}M=E^s_{\Lambda}\oplus E^X_{\Lambda}\oplus E^u_{\Lambda}$,
such that for every $x \in \Lambda$ and $t \geq 0$: \begin{enumerate}
\item [$(1)$] $\| DX_t(x)v^s_x \| \leq K e^{-\lambda t}\| v^s_x \|,\ \ \forall v^s_x \in E^s_x$;
\item [$(2)$] $ \| DX_t(x)v^u_x \| \geq K^{-1} e^{\lambda t} \| v^u_x \|,\ \ \forall v^u_x \in E^u_x$; \item [$(3)$] $E^{X}_{x}=\left\langle X(x)\right\rangle $. \end{enumerate} \end{defin}
If $E^s_x\neq 0$ and $E^u_x\neq 0$ for all $x\in \Lambda$, we say that $\Lambda$ is a {\em saddle-type hyperbolic set}. A closed orbit is hyperbolic if it is hyperbolic as a compact invariant set of $X$.
When $\Lambda =M$, we say that the flow generated by $X$ is an Anosov flow.\\
The invariant manifold theory \cite{hps} asserts that if $H\subseteq M$ is a hyperbolic set of $X$ and $p \in H$, then the sets $$W^{ss}(p)=\{q \in M: \lim_{t\to\infty} d(X_t(q),X_t(p))=0\}$$ and $$W^{uu}(p)=\{q \in M: \lim_{t\to- \infty} d(X_t(q),X_t(p))=0\}$$ are $C^1$ manifolds in $M$, the so-called strong stable and strong unstable manifolds, tangent at $p$ to the subbundles $E^s_p$ and $E^u_p$ respectively. Saturating them with the flow we obtain the stable and unstable manifolds $W^{s}(p)$ and $W^{u}(p)$ respectively, which are invariant. If $p,p' \in H$, then $W^{ss}(p)$ and $W^{ss}(p')$ are either equal or disjoint (and similarly for $W^{uu}$).\\
\begin{defin}\label{defin2} A compact invariant set $\Lambda \subseteq M(X)$ is {\em sectional-hyperbolic} if every singularity in $\Lambda$ is hyperbolic (as an invariant set) and there are a continuous $DX_t$-invariant splitting of the tangent bundle $T_{\Lambda} M = \mathsf{F}^s_{\Lambda}\oplus \mathsf{F}^c_{\Lambda}$ and positive constants $K,\lambda$ such that for every $x \in \Lambda$ and $t \geq 0$:
\begin{enumerate} \item [$(1)$]
$\| DX_t(x)v^s_x \| \leq K e^{-\lambda t}\| v^s_x \| ,\ \ \forall v^s_x \in \mathsf{F}^s_x$; \item [$(2)$]
$\| DX_t(x)v^s_x \| \cdot \| v^c_x \| \leq K e^{-\lambda t}
\| DX_t(x)v^c_x \| \cdot \| v^s_x \|,\ \ \forall v^s_x \in \mathsf{F}^s_x,\ \ \forall v^c_x \in \mathsf{F}^c_x$; \item [$(3)$]
$ \| DX_t(x)u^c_x , DX_t(x)v^c_x \|_{X_t(x)} \geq K^{-1} e^{\lambda t} \| u^c_x , v^c_x \|_x, \ \
\forall u^c_x , v^c_x \in \mathsf{F}^c_x$, where $||\cdot, \cdot ||_x$ is the $2$-norm induced by the Riemannian metric $\langle \cdot, \cdot \rangle_x$ of $T_x\Lambda$, given by
$$||v_x,u_x||_x=\sqrt{\langle v_x,v_x \rangle_x \cdot \langle u_x,u_x\rangle_x - \langle v_x,u_x \rangle_x^2}$$ for all $x \in \Lambda$ and every $u_x,v_x\in T_x\Lambda$. \end{enumerate} \end{defin}
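The quantity $||v_x,u_x||_x$ is the area of the parallelogram spanned by $u_x$ and $v_x$ (the square root of the Gram determinant). Purely as an informal illustration of ours, the following sketch computes it with the Euclidean inner product standing in for the Riemannian metric:

```python
import math

def inner(u, v):
    # Euclidean inner product, standing in for the Riemannian metric.
    return sum(a * b for a, b in zip(u, v))

def two_norm(v, u):
    # ||v, u|| = sqrt(<v,v><u,u> - <v,u>^2): the area of the
    # parallelogram spanned by v and u (Gram determinant).
    return math.sqrt(inner(v, v) * inner(u, u) - inner(v, u) ** 2)

# Orthogonal vectors of lengths 1 and 2 span a rectangle of area 2;
# collinear vectors span a degenerate parallelogram of area 0.
print(two_norm((1.0, 0.0, 0.0), (0.0, 2.0, 0.0)))  # 2.0
print(two_norm((1.0, 0.0), (2.0, 0.0)))            # 0.0
```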
The third condition guarantees the exponential growth of the area of parallelograms in the central subbundle $\mathsf{F}^c$. Since $ X(x) \in \mathsf{F}^c_x$ for all $x \in \Lambda$ (see Lemma 4 in \cite{bm}), the dimension of the central subbundle must be greater than or equal to $2$. In the particular case where $dim(\mathsf{F}^c_x)=2$ we say that $\Lambda$ is a sectional-hyperbolic set of \textit{codimension $1$}. \\
When $\Lambda=M(X)$, we say that the flow generated by $X$ is a sectional-Anosov flow.\\
The invariant manifold theory \cite{hps} also asserts that through any point $x$ of a sectional-hyperbolic set $\Lambda$ there passes a strong stable manifold $\mathcal{F}^{ss}(x)$, tangent at $x$ to the subbundle $\mathsf{F}^s_x$, and these manifolds induce a foliation over $\Lambda$; saturating them with the flow we obtain the invariant manifolds $\mathcal{F}^s(x)$.\\
Unlike hyperbolic sets, sectional-hyperbolic sets can have regular orbits accumulating on singularities. We have:
\begin{lema}\label{lema1} If $\Lambda \subseteq M(X)$ is a sectional-hyperbolic set and $\sigma$ is a singularity in $\Lambda$, then: $$\mathcal{F}^{ss}(\sigma) \cap \Lambda = \{\sigma\}$$ \end{lema}
\begin{proof} See Corollary 2 in \cite{bm}. \end{proof}
Every singularity $\sigma$ in a sectional-hyperbolic set is hyperbolic, so its invariant manifolds $W^{uu}(\sigma)$ and $W^{ss}(\sigma)$ are well defined. The sectional strong stable manifold $\mathcal{F}^{ss}(\sigma)$ is a submanifold of $W^{ss}(\sigma)$; with respect to its dimension, there are two possibilities:
\begin{enumerate} \item $dim(W^{ss}(\sigma)) = dim(\mathcal{F}^{ss} (\sigma))$, in this case $W^{ss}(\sigma) = \mathcal{F}^{ss} (\sigma)$; \item $dim(W^{ss}(\sigma)) = dim(\mathcal{F}^{ss}(\sigma)) + 1$, in this case, we say that the singularity is {\em Lorenz-like}. \end{enumerate}
Every Lorenz-like singularity is a saddle-type hyperbolic set with at least two negative eigenvalues, one of which is a real eigenvalue $\lambda$ of multiplicity one such that the real parts of the other eigenvalues lie outside the closed interval $[\lambda, -\lambda]$.\\
For a Lorenz-like singularity $\sigma \in \Lambda$, the manifold $\mathcal{F}^{ss}(\sigma)$ is tangent to the subspace associated to the eigenvalues with real part less than $\lambda$, and $\mathcal{F}^{ss}(\sigma)$ divides $W^{ss}(\sigma)$ into two connected components. If $\Lambda$ intersects just one connected component of $W^{ss}(\sigma) \setminus \mathcal{F}^{ss}(\sigma)$, we say that the Lorenz-like singularity is {\em of boundary-type}.\\
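As an informal illustration (our own example, not taken from the text), the singularity at the origin of the classical Lorenz equations with parameters $\sigma=10$, $b=8/3$, $r=28$ satisfies this eigenvalue condition with $\lambda=-b$: the linearization at the origin has eigenvalues $-b$ and the two roots of $\mu^2+(\sigma+1)\mu-\sigma(r-1)=0$, whose real parts lie outside $[\lambda,-\lambda]$. A quick numerical check:

```python
import math

# Classical Lorenz parameters, chosen here purely for illustration.
sigma, b, r = 10.0, 8.0 / 3.0, 28.0

# Eigenvalues of the linearization at the origin: one is -b, the
# other two solve mu^2 + (sigma + 1)*mu - sigma*(r - 1) = 0.
disc = math.sqrt((sigma + 1) ** 2 + 4 * sigma * (r - 1))
mu_plus = (-(sigma + 1) + disc) / 2   # ~  11.83 (unstable)
mu_minus = (-(sigma + 1) - disc) / 2  # ~ -22.83 (strong stable)
lam = -b                              # ~  -2.67 (weak stable, multiplicity one)

# Lorenz-like condition: the real parts of the other eigenvalues
# lie outside the closed interval [lam, -lam].
is_lorenz_like = mu_minus < lam and mu_plus > -lam
print(is_lorenz_like)  # True
```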
We say that a cross section $\Sigma$ of $X$ is associated to a Lorenz-like singularity $\sigma$ in a sectional-hyperbolic set $\Lambda$ if $\Sigma$ is very close to $\sigma$, $\Sigma \cap \Lambda \neq \emptyset$, and one of the connected components of $W^{ss}(\sigma) \setminus \mathcal{F}^{ss}(\sigma)$ contains a point of $int(\Sigma)$.\\ Another important result about sectional-hyperbolic sets is the \emph{\textbf{hyperbolic lemma}} (see Lemma 9 in \cite{bm}), which asserts that any invariant subset $H$ without singularities of a sectional-hyperbolic set $\Lambda$ is hyperbolic; in this case, $\mathsf{F}^s_H=E^s_H$ and $\mathsf{F}^c_H=E^u_H \oplus E^X_H$, so $W^{ss}(p)=\mathcal{F}^{ss}(p)$ for all $p \in H$.\\
Let $p,q\in M$. We say that $p\prec q$ if and only if for all $\epsilon>0$ there exists an orbit from a point $\epsilon$-close to $p$ to a point $\epsilon$-close to $q$.
\begin{teor}[Anosov Connecting Lemma] \label{teor1}
If $X$ is an Anosov flow on a compact manifold $M$ and $p, q \in M $ satisfy that $p \prec q$, then there is a point $x \in M$, such that $\alpha(x)=\alpha(p)$ and $\omega(x)=\omega(q)$. \end{teor}
The following theorem is a generalization of the Anosov Connecting Lemma which allows it to be used on hyperbolic sets, even when the flow is not Anosov.
\begin{teor}\label{teor2} Let $H$ be a hyperbolic set of a vector field $X$ on $M$. If $p, q \in H$ and there are sequences $z_n \in H$, $t_n \in \mathbb{R}^+$ such that $z_n \to p$ and $X_{t_n}(z_n) \to q$, then there is $x \in M$ such that $\alpha(x)=\alpha(p)$ and $\omega(x)=\omega(q)$.
\end{teor}
As previously mentioned, Bautista and Morales generalized Theorem \ref{teor1}, in the sectional-hyperbolic setting, to sectional-Anosov flows in dimension three:
\begin{teor}[Sectional-Anosov Connecting Lemma]\label{teor3} If $X$ is a sectional-Anosov flow on a compact 3-manifold $M$, and $p \in M(X)$ and $q \in M$ satisfy $p \prec q$ with $\alpha(p)$ containing no singularities, then there is $x \in M$ such that $\alpha(x)=\alpha(p)$ and either $\omega(x)=\omega(q)$ or $\omega(x)$ is a singularity. \end{teor}
As the main result of this paper, we prove the following generalization of the previous theorem:
\begin{teor}[Main: Sectional Connecting Lemma]\label{teor4} Let $\Lambda$ be a sectional-hyperbolic set of codimension $1$ of a vector field $X$ on $M$, such that $W^{u}(H) \subseteq \Lambda$ for every hyperbolic subset $H$ of $\Lambda$. If $p, q \in \Lambda$ satisfy $p \prec q$ and $\alpha(p)$ contains no singularities, then there is $x \in M$ such that $\alpha(x)=\alpha(p)$ and either $\omega(x)=\omega(q)$ or $\omega(x)$ is a singularity. \end{teor}
Note that in Theorem \ref{teor4}, two of the hypotheses of Theorem \ref{teor3} are replaced by more general ones: it is not required that $M$ have dimension three, but only that $\Lambda$ be a sectional-hyperbolic set of codimension one, and the hypothesis that $X$ is a sectional-Anosov flow is replaced by the condition that $\Lambda$ contains the unstable manifolds of its hyperbolic subsets. These variations require changes in the proof, although some of the arguments are similar.\\
As direct consequences of the main theorem we have:
\begin{coro}\label{coro1} Every sectional-hyperbolic Lyapunov stable set $\Lambda$ of codimension $1$ of a vector field $X$ on $M$ satisfies: if $p,q \in \Lambda$, $p \prec q$ and $\alpha(p)$ contains no singularities, then there is $x \in M$ such that $\alpha(x)=\alpha(p)$ and either $\omega(x)=\omega(q)$ or $\omega(x)$ is a singularity. \end{coro}
\begin{coro} \label{coro2} Every sectional-Anosov flow of codimension $1$ of a vector field $X$ on $M$ satisfies: if $p,q \in M(X)$, $p \prec q$ and $\alpha(p)$ contains no singularities, then there is $x \in M$ such that $\alpha(x)=\alpha(p)$ and either $\omega(x)=\omega(q)$ or $\omega(x)$ is a singularity. \end{coro}
To prove our main theorem, in Section 2 we introduce the definition of sectional partition\footnote{This definition results from a modification of the definition of singular partition introduced in \cite{bm2}.} and prove its existence for compact invariant sets; in Section 3 we prove some properties of these partitions on sectional-hyperbolic sets of codimension one; in Section 4 we use sectional partitions to characterize the omega-limit sectional-hyperbolic sets of codimension one which are closed orbits; and with this characterization, in Section 5 we finally prove the main theorem.
\section{Sectional Partition}
Denote by $\mathcal{R}'= \{S_1, S_2, \ldots, S_k\}$ a finite collection of cross sections; then we define: $$\mathcal{R}=\bigcup_{i=1}^kS_i \,\,\text{ , }\,\, \partial \mathcal{R}=\bigcup_{i=1}^k\partial S_i \,\,\text{ , } \,\,int(\mathcal{R})=\bigcup_{i=1}^k int(S_i).$$ The diameter of $\mathcal{R}$ is the maximum of the diameters of the elements of
$\mathcal{R}'$, and we say that $\mathcal{R}$ is of time $\epsilon$ if $\mathcal{R} \cap X_{[-\epsilon,\epsilon]}(y)=\{y\}$ for all $y \in \mathcal{R}$.
\begin{defin}\label{defin3} A {\em sectional partition} of a compact invariant set
$H$ of $X$ is a finite and disjoint collection of cross sections $\mathcal{R}'$ of $X$ with nonzero time, such that: $$Sing(X)\cap H = \{y \in H: X_t(y) \notin int(\mathcal{R}),\forall t \in \mathbb{R} \}.$$ \end{defin}
\begin{teor}[Existence of sectional partitions]\label{teor5}
Let $\Lambda$ be a compact invariant set of the vector field $X$ on $M$. If $\Lambda$ is not a singularity and every singularity of $\Lambda$ is hyperbolic, then for all $\delta>0$ there is a sectional partition of $\Lambda$ of diameter less than $\delta$. \end{teor}
\begin{proof} Let $\delta>0$. Since all the singularities of $\Lambda$ are hyperbolic, $\Lambda$ has a finite number of singularities, so there exists $\delta_0>0$ such that: $$Sing(X)\cap \Lambda=\bigcap_{t\in\mathbb{R}}X_t \left( \bigcup_{\sigma\in Sing(X)\cap \Lambda} B_{\delta_0} (\sigma)\right).$$ We define: $$H=\Lambda \backslash \left( \bigcup_{\sigma\in Sing(X)\cap\Lambda}B_{\delta_0}(\sigma)\right).$$ If $H=\emptyset$, then $\Lambda$ contains no regular orbits, hence it is a singularity, which contradicts the hypothesis; therefore $H\neq \emptyset$ and $H \cap Sing(X)=\emptyset$. Then for every $z\in H$ there is a cross section $R_z$ with $z \in int(R_z)$ of arbitrarily small diameter. We take this diameter much smaller than $\delta$ and define: $$V_z=\bigcup_{t\in(-1,1)} X_t(int(R_z));$$ clearly $z \in V_z$, so $\{V_z: z\in H\}$ is an open cover of $H$, and since $H$ is
compact (a closed set inside a compact set), there are $\{z_1,...,z_r\}$ such that: $$H\subseteq \bigcup_{i=1}^r V_{z_i}.$$ Consider the cross sections $R_{z_1},R_{z_2},...,R_{z_r}$; if necessary we can move them through the flow and assume that they are pairwise
disjoint. Now observe that $$\mathcal{R}'= \{R_{z_1}, ... ,R_{z_r}\}$$ satisfies the conditions of a sectional partition. Indeed, if $x\in \Lambda \setminus Sing(X)$, then $$x \notin \bigcap_{t\in\mathbb{R}}X_t \left( \bigcup_{\sigma\in Sing(X)\cap \Lambda} B_{\delta_0} (\sigma)\right)=Sing(X)\cap\Lambda,$$ so there is $t_0 \in \mathbb{R}$ such that $$X_{t_0}(x) \notin \bigcup_{\sigma\in Sing(X)\cap \Lambda} B_{\delta_0} (\sigma);$$ therefore $X_{t_0}(x) \in H$ and $X_{t_0}(x) \in V_{z_i} $ for some $ 1 \leq i \leq r $. We can write $ X_{t_0}(x) = X_{t_1}(w)$ for some
$-1<t_1<1$ and $ w \in int(R_{z_i}) $, so $ X_{t_0-t_1}(x) = w \in int(R_{z_i})\subseteq int(\mathcal{R})$, whence $$x \notin \{y \in \Lambda : X_t(y) \notin int(\mathcal{R}),\forall t \in \mathbb{R} \}.$$ Therefore $$\{y \in \Lambda : X_t(y) \notin int(\mathcal{R}),\forall t \in \mathbb{R} \} \subseteq Sing(X)\cap \Lambda;$$ since the other containment is immediate, we get the result.
\end{proof}
Given a sectional partition $\mathcal{R}'$ of a compact invariant set $H$ of a vector field $X$, we define the function $$ \Pi_{(\mathcal{R},int(\mathcal{R}))}: Dom(\Pi_{(\mathcal{R},int(\mathcal{R}))})\subseteq \mathcal{R} \rightarrow int(\mathcal{R})$$ with $$Dom(\Pi_{(\mathcal{R},int(\mathcal{R}))})=\{x \in \mathcal{R}: X_t(x)\in int(\mathcal{R}) \text{ for some } t > 0\},$$ given by $$\Pi_{(\mathcal{R},int(\mathcal{R}))}(x) = X_{t(x)}(x),$$ where $t(x)$ is the return time, i.e., the first $t>0$ for which $X_t(x)\in int(\mathcal{R})$. In the remainder of this section we shall denote $\Pi_{(\mathcal{R},int(\mathcal{R}))}$ simply by $\Pi$.\\
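As a toy illustration of the return map $\Pi$ (an informal sketch of ours; the choice of flow and section is hypothetical and far simpler than the sectional-hyperbolic setting), take the planar rotation flow $\dot{x}=-y$, $\dot{y}=x$ with the cross section $\{y=0,\ x>0\}$: every point of the section returns to it, with return time $2\pi$:

```python
import math

def flow(p, t):
    # Explicit solution of x' = -y, y' = x: rotation by angle t.
    x, y = p
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

def return_map(p, dt=1e-4, t_max=10.0):
    # Numerically locate the first t > 0 at which X_t(p) crosses
    # the section {y = 0, x > 0} in the upward direction.
    t = dt
    prev = flow(p, t)
    while t < t_max:
        t += dt
        cur = flow(p, t)
        if prev[1] < 0.0 <= cur[1] and cur[0] > 0.0:  # upward crossing of y = 0
            return cur, t
        prev = cur
    return None

q, t_ret = return_map((1.0, 0.0))
print(round(t_ret, 2))  # ~ 6.28 (= 2*pi), with q close to (1, 0)
```

Here the return time is constant; in the setting of the paper, $t(x)$ varies with $x$ and $\Pi$ is defined only on $Dom(\Pi)$.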
Given $x \in S_i \in \mathcal{R'}$ we define $B_{\epsilon}(x,\mathcal{R})=B_\epsilon (x) \cap S_i$.
\begin{lema}\label{lema2} Let $\mathcal{R}'$ be a sectional partition of a compact invariant set
$H$ of $X$, all of whose singularities are hyperbolic. Then we have the following properties:
\renewcommand{\arabic{enumi}.}{\arabic{enumi}.} \begin{enumerate} \item [$(1)$] $H\cap \mathcal{R} \cap Dom(\Pi)\subseteq int(Dom(\Pi))$ in $\mathcal{R}$ and $\Pi$ is $C^1$ in a neighborhood of $H\cap int(\mathcal{R})$ in $\mathcal{R}$; \item [$(2)$] $(H\cap \mathcal{R}) \backslash Dom(\Pi) \subseteq \bigcup_{\sigma \in Sing(X)\cap H} W^s(\sigma).$ \end{enumerate} \end{lema}
\begin{proof} For simplicity, we denote $H^0=H\cap \mathcal{R}$.\\
\renewcommand{\arabic{enumi}.}{\arabic{enumi}.} \begin{enumerate} \item [$(1)$] Let $x \in H^0\cap Dom(\Pi)$, so $X_{t(x) }(x)\in int(S_i)$ for some $S_i\in \mathcal{R}'$ and $t(x)>0$. By the continuous dependence of the flow on initial conditions, there is $\epsilon_x>0$ such that $\mathcal{O}^+(y) \cap int(S_i)\neq \emptyset$ for all $y\in B_{\epsilon_x}(x)$; then $B_{\epsilon_x}(x,\mathcal{R}) \subseteq Dom(\Pi)$, and thus $x \in int(Dom (\Pi))$ in $\mathcal{R}$. Define $$U = \bigcup_{x\in H^0\cap Dom(\Pi)} B_{\epsilon_x}(x,\mathcal{R});$$ then $U$ is a neighborhood of $H^0\cap Dom(\Pi)$ in $\mathcal{R}$ on which $\Pi$ is $C^1$.
\item [$(2)$] Let $p \in H^0 \backslash Dom(\Pi)$; thus $\mathcal{O}^+(p) \cap int(\mathcal{R}) = \emptyset$. Suppose there is a regular point $r \in \omega(p) \subseteq H$. Then, by the definition of sectional partition, there exists $t_0 \in \mathbb{R}$ such that $X_{t_0}(r)\in int(\mathcal{R})$, so $X_{t_0}(r) \in int(S_j)$ for some $S_j \in \mathcal{R}'$. Given that $X_{t_0}(r)\in \omega(p)$, there is a sequence $t_n \rightarrow \infty$ such that $X_{t_n}(p) \rightarrow X_{t_0}(r)$, and since
$S_j$ is transversal to the flow we have $\mathcal{O}^+(p)\cap int(S_j)\neq \emptyset$, which is a contradiction.
Thus $\omega(p)$ is a singularity and $p \in \bigcup_{\sigma \in Sing(X)\cap H} W^s(\sigma)$. \end{enumerate} \end{proof}
\begin{lema}\label{lema3} Given $q \in M$, if $\omega(q)$ is not a singularity and $\mathcal{R}'$ is a sectional partition of $\omega(q)$, then
$\mathcal{O}^+(q)\cap int(\mathcal{R})=\{q_1,q_2,...\}$ is an infinite sequence, ordered in such a way that
$\Pi(q_n)=q_{n+1}$.\\ \end{lema}
\begin{proof} Since the singularities are isolated and $\omega(q)$ is not a singularity,
$\omega(q)$ contains regular orbits and, by the definition of a sectional partition, every regular orbit of
$\omega(q)$ intersects $int(\mathcal{R})$. Given $x\in \omega(q)\cap int(\mathcal{R})$, we have $x\in int(S_j)$ for some $S_j \in \mathcal{R}'$. As $S_j$ is transverse to the flow and $\mathcal{O}^+(q)$ accumulates on $x$, there is a sequence of points in $\mathcal{O}^+(q)\cap int(S_j)$ accumulating on $x$. Then $\mathcal{O}^+(q)\cap int(\mathcal{R})$ is an infinite set and, since the cross sections in $\mathcal{R}'$ are finite in number, disjoint and transverse to the flow, it follows that $\mathcal{O}^+(q)\cap int(\mathcal{R})$ is a countable set, which we order according to the time of return to the interior of $\mathcal{R}$. \end{proof}
\section{Sectional partition and Sectional-hyperbolic sets}
Let $T_\Lambda M = \mathsf{F}^s_\Lambda \oplus \mathsf{F}^c_\Lambda$ be the sectional decomposition of a sectional-hyperbolic set $\Lambda$. It can be extended to $T_{U_\Lambda} M= \mathsf{F}^s_{U_\Lambda} \oplus \mathsf{F}^c_{U_\Lambda}$, where $U_\Lambda$ is a neighborhood of $\Lambda$; this extension is continuous for $\mathsf{F}^c_{U_\Lambda}$ and integrable for $\mathsf{F}^s_{U_\Lambda}$.\\
Let $\Sigma \subset U_\Lambda$ be a cross section. We denote by $\mathcal{F}^s_{\Sigma}$ the {\em vertical foliation} of $\Sigma$ obtained by projecting $\mathcal{F}^{ss}$ onto $\Sigma$ along the flow $X$ (i.e., $\mathcal{F}^s(x,\Sigma)$ is the leaf in $\Sigma$ obtained by projecting the leaf $\mathcal{F}^{ss}(x)$ onto $\Sigma$ along the flow $X$, for all $x \in \Sigma$).
We also denote by $\partial^v \Sigma$ and $\partial^h \Sigma$ the vertical and horizontal boundaries of $\Sigma$, respectively.
We assume that the components of the vertical boundary $\partial^v \Sigma$ are formed by leaves of the foliation $\mathcal{F}^s_\Sigma$, and that $\partial^h \Sigma$
is transversal to $\mathcal{F}^s_\Sigma$.\\
Given a sectional partition $\mathcal{R}'$ of a sectional-hyperbolic set $\Lambda$ such that $\mathcal{R} \subseteq U_\Lambda$, the foliation $\mathcal{F}^s_\mathcal{R}$ of $\mathcal{R}$ is determined by the foliations of the cross sections it contains.\\
\begin{teor}\label{teor6} Let $\omega(q)$ be a sectional-hyperbolic set of codimension $1$. If $\omega(q)$ is not a singularity, then for all $\alpha>0$ there is a sectional partition $\mathcal{R}'$ of $\omega(q)$, with diameter less than $\alpha$, such that: \begin{enumerate} \item[$(1)$] $int(\mathcal{R}) \cap \mathcal{O}^+(q)=\{q_1,q_2,...\}$ with $\Pi(q_{n-1})=q_n$, \item[$(2)$] there are $\delta > 0$ and $N \in \mathbb{N}$ such that if $n \geq N$ then one of the following statements is true:\\
$(A)$ $B_\delta(q_n,\mathcal{R})\subseteq Dom(\Pi)$ and $\Pi |_{B_\delta(q_n,\mathcal{R})}$ is $C^1$, or\\
$(B)$ $B_\delta^+(q_n,\mathcal{R})\subseteq Dom(\Pi)$ and $\Pi
|_{B_\delta^+(q_n,\mathcal{R})}$ is $C^1$,\\ where $B_\delta^+(q_n,\mathcal{R})$ denotes the connected component of $B_\delta(q_n,\mathcal{R}) \backslash s_n$ which contains $q_n$, and $s_n$ is a submanifold contained in the intersection of $\bigcup_{\sigma \in Sing(X)\cap \omega(q)}W^s(\sigma)$ with $B_\delta(q_n,\mathcal{R})$. \end{enumerate} \end{teor}
\begin{proof} By Theorem \ref{teor5} there exists a sectional partition $\mathcal{R}'$ of $\omega(q)$ of arbitrarily small diameter such that $\mathcal{R} \subseteq U_{\omega(q)}$, and by Lemma \ref{lema3} we obtain ($1$).\\
Now, since $\omega(q)$ is an omega-limit sectional-hyperbolic set, all the singularities of $\omega(q)$ are Lorenz-like, and by codimension $1$ we have that
$ W^u(\sigma)$ is one-dimensional; therefore $W^s(\sigma)$ is a manifold of dimension $n-1$. On the other hand, $\mathcal{O}^+(q)\cap W^s(\sigma) = \emptyset$ for every singularity $\sigma$, since $\omega(q)$ is not a singularity.\\
For simplicity we will denote: \begin{align*} A_1 & =\omega(q)\cap \mathcal{R} \cap Dom(\Pi)\\ A_2 & =\left( \omega(q) \cap int(\mathcal{R})\right)\setminus Dom(\Pi)\\ A_3 & = \left( \omega(q) \cap \partial(\mathcal{R}) \cap Cl \left[ \mathcal{O}^+(q) \cap int(\mathcal{R})\right] \right) \setminus Dom(\Pi)\\ A_4 &=\left( \omega(q) \cap \partial({\mathcal{R}})\right)\setminus \left( Dom(\Pi) \cup Cl \left[ \mathcal{O}^+(q)\cap int(\mathcal{R})\right] \right) \end{align*} Observe that $A_1\cup A_2 \cup A_3 \cup A_4=\omega(q)\cap \mathcal{R}$ and that these sets are pairwise disjoint. Next, to each point $x\in \omega(q) \cap \mathcal{R}$ we associate a $\delta_x>0$, according to the set to which it belongs, as follows:
\begin{enumerate}[{\textbf{Case} 1}.] \item If $x\in A_1$, then by Lemma \ref{lema2} we choose $\delta_x$ such that
\begin{align} \label{ecu3}
B_{\delta_x}(x,\mathcal{R}) \subseteq Dom(\Pi) \text{ and } \Pi|_{B_{\delta_x}(x,\mathcal{R})} \text{ is } C^1 \end{align} \end{enumerate}
For the cases $A_2$ and $A_3$, observe first that $x \in \left( \omega(q) \cap \mathcal{R}\right) \setminus Dom(\Pi)$ implies $x \in S_j$ for some $S_j\in \mathcal{R}'$. By Lemma \ref{lema2}, there is $\sigma_x \in \omega(q)\cap Sing(X)$ such that $x\in W^s(\sigma_x)$. We have that $S_j$ and $W^s(\sigma_x)$ are manifolds of dimension $n-1$ and $x\in W^s(\sigma_x)\cap S_j$; in addition, $W^s(\sigma_x)$ is invariant and $S_j$ is transversal to the flow, so the connected component of $S_j \cap W^s(\sigma_x)$ containing $x$ is a submanifold of dimension $n-2$, which we denote $s_x$. On the other hand, $\mathcal{F}^s(x)$ is an invariant manifold of dimension $n-1$, so $\mathcal{F}^s(x,S_j)$ has dimension $n-2$; since $\mathcal{F}^s(x) \subseteq W^s(\sigma_x)$ and $x\in W^s(\sigma_x)$, by Lemma \ref{lema1} we get $\mathcal{F}^s(x,S_j)=s_x$.\\
Now, as $W^u(\sigma_x)$ is one-dimensional, $W^u(\sigma_x)\backslash \{\sigma_x\}$ is divided into two connected components $W^+$ and $W^-$, and as $\mathcal{O}^+(q)\cap W^s(\sigma_x) = \emptyset$, $\mathcal{O}^+(q)$ must accumulate on at least one of these components. Therefore, one of the following statements holds: \begin{enumerate} \item[(a)] $W^+\subseteq \omega(q)$ and $W^-\subseteq \omega(q)$; \item[(b)] $W^+\subseteq \omega(q)$ and $W^-\nsubseteq \omega(q)$; \item[(c)] $W^+ \nsubseteq \omega(q)$ and $W^-\subseteq \omega(q)$. \end{enumerate} We take $\beta_x$ small enough that $\mathcal{O}^+(y)$ accumulates on $W^u(\sigma_x)$ for all $y \in B_{\beta_x}(x,\mathcal{R})\backslash s_x$.\\
\begin{enumerate}[{\textbf{Case} 1.}] \addtocounter{enumi}{1}
\item If $y \in A_2$ then $y \in int(S_j)$. If statement (a) occurs, then $W^+\subseteq \omega(q)$ and $W^-\subseteq \omega(q)$. Since $W^+$ and $W^-$ are regular orbits, by the definition of sectional partition we have $W^+\cap int(\mathcal{R})\neq \emptyset$ and $W^-\cap int(\mathcal{R})\neq \emptyset$. Then there are $S_i , S_k\in \mathcal{R}'$ such that $W^+\cap int(S_i)\neq \emptyset$ and $W^- \cap int(S_k)\neq \emptyset$. Using the continuous dependence of the flow, there is $\delta_y<\beta_y$ small enough that $\mathcal{O}^+(p) \cap int(S_i) \neq \emptyset$ or $\mathcal{O}^+(p) \cap int(S_k) \neq \emptyset$ for all $ p \in B_{\delta_y}(y,\mathcal{R})\setminus s_y$. Therefore \begin{align}\label{ecu1}
B_{\delta_y}(y,\mathcal{R})\backslash s_y \subseteq Dom(\Pi)\,\text{ and } \,\,\,\Pi|_{B_{\delta_y}(y,\mathcal{R}) \setminus s_y} \text{ is } C^1. \end{align} If statement (b) occurs, we have $W^+\subseteq \omega(q)$ and $W^-\nsubseteq \omega(q)$. Then there is $S_i\in \mathcal{R}'$ such that $W^+\cap int(S_i)\neq \emptyset$, and $\mathcal{O}^+(q)$ does not accumulate on $W^-$. For every $\gamma$, $s_y$ divides $B_{\gamma}(y,\mathcal{R})$ into two connected components; we call $B_{\gamma}^+(y,\mathcal{R})$ the component that accumulates on $W^+$ and $B_{\gamma}^-(y,\mathcal{R})$ the other. We take $\delta_y<\beta_y$ small enough that $\mathcal{O}^+(q)$ does not intersect $B_{\delta_y}^-(y,\mathcal{R})$ and $\mathcal{O}^+(p)\cap int(S_i)\neq \emptyset$ for all $p\in B_{\delta_y}^+(y,\mathcal{R})$. Therefore \begin{align}\label{ecu2} B_{\delta_y}^+(y,\mathcal{R})\backslash s_y\subseteq Dom(\Pi)\,\text{ and }
\,\,\,\Pi|_{B_{\delta_y}^+(y,\mathcal{R}) \setminus s_y} \text{ is } C^1. \end{align}
If statement (c) occurs, we consider $B_{\delta_y}^+(y,\mathcal{R})$ as the component that accumulates on $W^-$, and the result follows as in case (b).\\
\item If $z\in A_3$ then $z \in \partial(S_j)$. If $z \in \partial^v(S_j)$, then $s_z \subseteq \partial^v(S_j)$ and consequently $B_{\beta_z}(z,\mathcal{R})\setminus s_z$ accumulates on only one of the components $W^+$ or $W^-$. Without loss of generality we can assume that it accumulates on $W^+$. Let $S_i \in \mathcal{R}'$ be such that $W^+$ intersects the interior of $S_i$; we choose $\delta_z <\beta_z$ small enough that $\mathcal{O}^+(p) \cap int(S_i) \neq \emptyset$ for all $p\in B_{\delta_z}(z,\mathcal{R}) \setminus s_z$. Then we obtain \begin{align}
B_{\delta_z}(z,\mathcal{R})\backslash s_z \subseteq Dom(\Pi)\,\text{ and } \,\,\,\Pi|_{B_{\delta_z}(z,\mathcal{R}) \setminus s_z} \text{ is } C^1. \end{align} Now if $z \in \partial^h(S_j)$, then $s_z$ divides $B_{\beta_z}(z,\mathcal{R})$ into two connected components, and reasoning in the same way as in the case of $A_2$, we obtain that
\begin{align}
B_{\delta_z}(z,\mathcal{R})\backslash s_z \subseteq Dom(\Pi)\,\text{ and } \,\,\,\Pi|_{B_{\delta_z}(z,\mathcal{R}) \setminus s_z} \text{ is } C^1 \end{align} or \begin{align} B_{\delta_z}^+(z,\mathcal{R})\backslash s_z\subseteq Dom(\Pi)\,\text{ and }
\,\,\,\Pi|_{B_{\delta_z}^+(z,\mathcal{R}) \setminus s_z} \text{ is } C^1. \end{align}\\
\item If $w \in A_4$, then $w \in \partial(S_j)$ for some $S_j\in \mathcal{R}'$; we can choose $\delta_w< \frac{diam(S_j)}{2}$ such that $B_{\delta_w}(w,\mathcal{R}) \cap \left[\mathcal{O}^+(q)\cap int (\mathcal{R})\right] =\emptyset$. \end{enumerate}
Note that $ \omega(q)\cap \mathcal{R}\backslash Dom(\Pi)$ is contained in $$\left( \bigcup_{y_j\in A_2} B_{\frac{\delta_{y_j}}{2}}(y_j,\mathcal{R}) \right) \cup \left( \bigcup_{z_k\in A_3} B_{\frac{\delta_{z_k}}{2}}(z_k,\mathcal{R}) \right)\cup \left( \bigcup_{w_m\in A_4} B_{\frac{\delta_{w_m}}{2}}(w_m,\mathcal{R}) \right),$$ and since $\omega(q)\cap \mathcal{R} \backslash Dom(\Pi)$ is compact, it is contained in $$\left( \bigcup_{j=1}^{l_2} B_{\frac{\delta_{y_j}}{2}}(y_j,\mathcal{R}) \right) \cup \left( \bigcup_{k=1}^{l_3} B_{\frac{\delta_{z_k}}{2}}(z_k,\mathcal{R}) \right) \cup \left( \bigcup_{m=1}^{l_4} B_{\frac{\delta_{w_m}}{2}}(w_m,\mathcal{R}) \right).$$ We define $$ B_2 =\bigcup_{j=1}^{l_2} B_{\frac{\delta_{y_j}}{2}}(y_j,\mathcal{R}), \,\,\, B_3 = \bigcup_{k=1}^{l_3} B_{\frac{\delta_{z_k}}{2}}(z_k,\mathcal{R}), \,\,\, B_4 = \bigcup_{m=1}^{l_4} B_{\frac{\delta_{w_m}}{2}}(w_m,\mathcal{R})$$ and $$H =\omega(q)\cap \mathcal{R} \backslash (B_2 \cup B_3 \cup B_4).$$ Observe that $H \subseteq \omega(q)\cap \mathcal{R} \cap Dom(\Pi)$, and thus $$H \subseteq \bigcup_{x_i\in A_1}B_{\frac{\delta_{x_i}}{2}}(x_i,\mathcal{R}).$$ Since $H$ is compact, we obtain $$H\subseteq \bigcup_{i=1}^{l_1} B_{\frac{\delta_{x_i}}{2}}(x_i,\mathcal{R})= B_1.$$ Also, as $\omega(q)\cap \mathcal{R} \subseteq H\cup (\omega(q)\cap \mathcal{R} \backslash Dom(\Pi))$, the collection $(B_1 \cup B_2\cup B_3 \cup B_4)$ is an open cover of $\omega(q)\cap \mathcal{R}$. \\
Now, as $\mathcal{O}^+(q) \cap int(\mathcal{R}) = \{q_1, q_2, ...\}$, the sequence $\{q_n\}_{n\in \mathbb{N}}$ accumulates on $\omega(q)\cap \mathcal{R}$, so there exists $N\in\mathbb{N}$ large enough that for all $n>N$ we have $q_n \in B_1 \cup B_2 \cup B_3$; we exclude $B_4$ because of the way we defined
$A_4$ and the balls $B_{\delta_w}(w,\mathcal{R})$.\\
Take $\delta = \min \left\lbrace \dfrac{\delta_{x_i}}{8},\dfrac{\delta_{y_j}}{8},\dfrac{\delta_{z_k}}{8} : 1\leq i \leq l_1, 1\leq j \leq l_2, 1 \leq k \leq l_3 \right\rbrace$; then we have three possibilities: $B_\delta(q_n,\mathcal{R})\subseteq B_{\delta_{x_i}}(x_i,\mathcal{R})$, $B_\delta(q_n,\mathcal{R})\subseteq B_{\delta_{y_j}}(y_j,\mathcal{R})$ or $B_\delta(q_n,\mathcal{R})\subseteq B_{\delta_{z_k}}(z_k,\mathcal{R})$.\\
If $B_\delta(q_n,\mathcal{R})\subseteq B_{\delta_{x_i}}(x_i,\mathcal{R})$, then by \ref{ecu3} we have
$B_\delta(q_n,\mathcal{R})\subseteq Dom(\Pi)$ and $\Pi|_{B_\delta(q_n,\mathcal{R})}$ is $C^1$. In this case we obtain $(A)$.\\
If $B_\delta(q_n,\mathcal{R})\subseteq B_{\delta_{y_j}}(y_j,\mathcal{R})$, define $s_n = s_{y_j}\cap B_\delta(q_n,\mathcal{R})$; then $q_n \notin s_n$, because otherwise $\omega(q)$ would be a singularity. Therefore $q_n \in B_\delta(q_n,\mathcal{R})\backslash s_n$. We define $B_\delta^+(q_n,\mathcal{R})$ as the connected component of $B_\delta(q_n,\mathcal{R})\backslash s_n$ which contains $q_n$. Here we have two subcases, depending on whether \ref{ecu1} or \ref{ecu2} holds.\\
If \ref{ecu1} occurs, $B_\delta^+(q_n,\mathcal{R})\backslash s_n \subseteq B_{\delta_{y_j}}(y_j,\mathcal{R})\backslash s_{y_j}$, therefore $B_\delta^+(q_n,\mathcal{R})\subseteq Dom(\Pi)$, and $\Pi|_{B_\delta^+(q_n,\mathcal{R})}$ is $C^1$.\\
If \ref{ecu2} is satisfied, since $q_n \in \mathcal{O}^+(q)$, then $q_n \in B_{\delta_{y_j}}^+(y_j,\mathcal{R})$, whence $B_\delta^+(q_n,\mathcal{R})\subseteq B_{\delta_{y_j}}^+(y_j,\mathcal{R})$. Thus
$B_\delta^+(q_n,\mathcal{R})\setminus s_n \subseteq Dom(\Pi)$ and $\Pi|_{B_\delta^+(q_n,\mathcal{R})\setminus s_n}$ is $C^1$. In both subcases we obtain $(B)$.\\
If $B_\delta(q_n,\mathcal{R})\subseteq B_{\delta_{z_k}}(z_k,\mathcal{R})$, analogously to the previous case we obtain $(B)$.\\
Then, for all $n\geq N$ we have $(A)$ or $(B)$ which proves the theorem. \end{proof}
\section{Characterizing omega-limit sets which are closed orbits in codimension one}
\begin{defin}\label{defin4} A point $q\in M$ satisfies the property $(P)_\Sigma$ if there exist an interval $I\subseteq M$ with $q \in \partial I$ and a closed set $\Sigma \subseteq M$ such that: \begin{enumerate} \item $Cl(\mathcal{O}^+(q))\cap \Sigma = \emptyset$, \item $\mathcal{O}^+(p) \cap \Sigma \neq \emptyset$ for all $p\in I$. \end{enumerate} \end{defin}
\begin{lema}\label{lema4} Let $q \in M$ be a point satisfying the property $(P)_\Sigma$ for some closed subset $\Sigma$, with $\omega(q)$ a sectional-hyperbolic set of codimension $1$. If $\omega(q)$ is not a singularity, then there are a sectional partition $\mathcal{R}'$ of $\omega(q)$, $\delta>0$, $S\in \mathcal{R}'$, a sequence $\{\widehat{q}_n\}_{n\in \mathbb{N}}$ of points in $int(S) \cap \mathcal{O}^+(q)$ and a sequence of intervals $\widehat{J}_1,\widehat{J}_2,...\subseteq S$ in the positive orbit of $I$ with $\widehat{q}_i \in \partial(\widehat{J}_i)$ and $l(\widehat{J}_i)\geq \delta$ for all $i$. \end{lema}
\begin{proof} Without loss of generality we can assume that $q\in U_{\omega(q)}$ and that the interval $I$ from property $(P)_\Sigma$ is tangent to $\mathsf{F}^c$ and transverse to the flow. Since $Cl(\mathcal{O}^+(q))$ and $\Sigma$ are disjoint, there exists a compact neighborhood $W\subseteq U_{\omega(q)}$ of $\omega(q)$ such that $W\cap \Sigma = \emptyset$ and $\mathcal{O}^+(q)\subseteq W$. Then, by Theorem \ref{teor6}, we have a sectional partition $\mathcal{R}'=\{S_1, S_2, ... , S_k\}$ of $\omega(q)$ contained in $int(W)$ (since we can take the diameter of $\mathcal{R}$ arbitrarily small), and $N\in \mathbb{N}$ such that $\mathcal{O}^+(q)\cap int(\mathcal{R})=\{q_1,q_2,...\}$ and, for all $n \geq N$, $q_n$ satisfies $(A)$ or $(B)$. We will assume $N=1$.\\
For every $n$ there is $S_{j_n}\in \mathcal{R}'$ such that $q_n \in int(S_{j_n})$. As $q \in \partial(I)$, by the continuous dependence of the flow, $q_n$ must be a boundary point of the positive orbit of $I$, and since $S_{j_n}$, like $I$, is transverse to the flow, we can guarantee that there exists $I_1$ in the positive orbit of $I$ such that:
$$I_1\subseteq S_{j_1}\cap Dom(\Pi) \,\,\, \text{ and }\,\,\, q_1\in \partial(I_1).$$
If necessary we can shrink $I$ so that $I_1\subseteq int(B_\delta(q_1,\mathcal{R}))$ or $I_1\subseteq int(B_\delta^+(q_1,\mathcal{R}))$, depending on whether $(A)$ or $(B)$ occurs for $q_1$; we define $I_i = \Pi(I_{i-1}) = \Pi^i(I_1)$ for $i>1$ as long as $I_{i-1}\subseteq B_\delta(q_{i-1},\mathcal{R})$ or $I_{i-1}\subseteq B_\delta^+(q_{i-1},\mathcal{R})$, again depending on whether $(A)$ or $(B)$ occurs for $q_{i-1}$. Since $W \cap \Sigma = \emptyset$ and the positive orbit of $I$ intersects $\Sigma$, there exists a first index $i_1$ such that: $$I_{i_1} \nsubseteq B_\delta(q_{i_1},\mathcal{R})\,\,\, \text{ or } \,\,\, I_{i_1} \nsubseteq B_\delta^+(q_{i_1},\mathcal{R}).$$ We define $J_{i_1} \subseteq I_{i_1}$ as the connected component of $I_{i_1} \cap B_\delta(q_{i_1},\mathcal{R})$ (or of $I_{i_1} \cap B_\delta^+(q_{i_1},\mathcal{R})$) which is bounded by $q_{i_1}$ and some point in $\partial (B_\delta(q_{i_1},\mathcal{R}))$ (or in $\partial(B_\delta^+(q_{i_1},\mathcal{R}))$). Recalling that $s_{i_1}\subseteq \bigcup_{\sigma \in Sing(X)\cap \omega(q)}W^s(\sigma)$ and that $\mathcal{O}^+(I_{i_1})\cap \Sigma \neq \emptyset$, we have $I_{i_1}\cap s_{i_1} = \emptyset$, and we can conclude that $l(J_{i_1}) \geq \delta$.\\
Now, if necessary, we shrink $I_{i_1}$ so that $\Pi(I_{i_1}) \subseteq B_\delta(q_{i_1+1},\mathcal{R})$ (or $ \Pi(I_{i_1}) \subseteq B_\delta^+(q_{i_1+1},\mathcal{R})$) and repeat for $I_{i_1}$ the argument used for $I_1$; in this way we find an index $i_2$ and construct an interval $J_{i_2} \subseteq I_{i_2}$ satisfying the same conditions as $J_{i_1}$. We then shrink $I_{i_2}$ if necessary and repeat the process, and so on, obtaining a sequence $\{J_{i_m}\}_{m\in \mathbb{N}}$ such that $J_{i_m}\subseteq S_{j_{i_m}}$, $q_{i_m}\in \partial (J_{i_m})$ and $l(J_{i_m})\geq \delta$ for all $m$.\\
Since $\mathcal{R}'$ is a finite collection, this sequence has a subsequence $\{J_{i_{m_s}}\}_{s\in \mathbb{N}}$ such that $J_{i_{m_s}}\subseteq S_r$ for some $S_r \in \mathcal{R}'$. Setting $S=S_r$, $\widehat{J}_s=J_{i_{m_s}}$ and $\widehat{q}_s=q_{i_{m_s}}$, the result follows. \end{proof}
\begin{teor}\label{teor7} Let $q \in M$ be a point satisfying the property $P_{(\Sigma)}$ for some closed subset $\Sigma$, and suppose that $\omega(q)$ is a sectional-hyperbolic set of codimension $1$. If $\omega(q)$ is not a singularity, then $\omega(q)$ is a periodic orbit. \end{teor}
\begin{proof} Let $W$ and $\mathcal{R}'$ be as in the proof of Lemma \ref{lema4}. Since $\omega(q)$ is not a singularity, there are $S \in \mathcal{R}'$, $\widehat{q_i}$, $\widehat{J_i}$ and $\delta>0$ such that $\widehat{q_i}\in int(S) \cap \mathcal{O}^+(q)$, $\widehat{J_i} \subseteq \mathcal{O}^+(I) \cap S$, $\widehat{q_i}\in \partial \widehat{J_i}$, and $l(\widehat{J_i})\geq \delta$ for all $i\in \mathbb{N}$; moreover, passing to a subsequence if necessary, there is $x \in \omega(q) \cap S$ on which the $\widehat{q_i}$ accumulate.\\
If $x \in \partial^v(S)$, then $\widehat{q_i} \notin \mathcal{F}^s(x,S)$ for all $i$, since $\mathcal{F}^s(x,S) \subseteq \partial(S)$. Since $\widehat{J_i}$ is tangent to $\mathsf{F}^c_U$ and transverse to $X$, the angle between the arc $\widehat{J_i}$ and $\mathcal{F}^s_S$ is bounded away from zero for all $i$; moreover, as $l(\widehat{J_i}) \geq \delta$ and $\widehat{q_i}\to x$, there will eventually be a point
$z$ such that: $$z \in \widehat{J_r}\cap \mathcal{F}^s(\widehat{q_k},S)$$ for some pair $r, k \in \mathbb{N}$. Since $z\in \widehat{J_r} \subseteq \mathcal{O}^+(I)$, we have $\mathcal{O}^+(z)\cap \Sigma \neq \emptyset$; on the other hand, $z \in \mathcal{F}^s(\widehat{q_k},S)$, so $\mathcal{O}^+(z)$ is asymptotic to $\mathcal{O}^+(q)$ and, as $W$ is compact,
$\mathcal{O}^+(z) \subseteq W$, hence $\mathcal{O}^+(z) \cap \Sigma = \emptyset$; this is a contradiction, therefore $x\notin \partial^v(S)$.\\
Now, if $x \in \partial^h(S)$ or $x\in int(S)$, we have that $\{\widehat{q_1}, \widehat{q_2},\ldots\} \setminus \mathcal{F}^s(x,S)$ has a finite number of elements; otherwise we would again find $z \in \widehat{J_r}\cap \mathcal{F}^s(\widehat{q_k},S)$ for some pair $r, k \in \mathbb{N}$ and reach a contradiction. Thus $\{\widehat{q_1},\widehat{q_2}, \ldots\} \cap \mathcal{F}^s(x,S)$ is an infinite set and
we can arrange it as a sequence $\{q_n\}_{n\in \mathbb{N}}$ such that each $q_i$ belongs to the positive orbit of $q_{i-1}$; thus the hypotheses of Lemma 11\footnote{Although this lemma is stated in dimension three, it is valid in arbitrary dimension.} in \cite{s} are satisfied, so there is $p \in Per(X)\cap \omega(q)$ such that $q_n \in \mathcal{F}^s(p)$, and since $q_n \in \mathcal{O}^+(q)$ we conclude that $\omega(q) = \gamma=\mathcal{O}(p)$. \end{proof}
\begin{teor}\label{teor8} Let $q \in M$ be such that $\omega(q)$ is a sectional-hyperbolic set of codimension $1$. If $\omega(q)$ is a closed orbit, then $q$ satisfies the property $P_{(\Sigma)}$ for some closed subset $\Sigma$. \end{teor}
\begin{proof} See Theorem 4 in \cite{s}. \end{proof}
As a direct consequence of Theorems \ref{teor7} and \ref{teor8}, we obtain:
\begin{teor}\label{teor9} Let $q\in M$ be such that $\omega(q)$ is a sectional-hyperbolic set of codimension $1$ of a vector field $X$ on $M$. Then $q$ satisfies the property $P_{(\Sigma)}$ for some closed subset $\Sigma$ if and only if $\omega(q)$ is a closed orbit. \end{teor}
\section{Proof of the main theorem}
For the proof, we will first analyze two particular cases and then the general case.
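Throughout this section it may help to keep in mind the standard example of the setting (a well-known fact, recalled here only for illustration): the geometric Lorenz attractor in dimension $3$ is a sectional-hyperbolic set of codimension $1$, with splitting
$$T_\Lambda M = \mathsf{F}^s \oplus \mathsf{F}^c, \qquad \dim \mathsf{F}^s = 1,\quad \dim \mathsf{F}^c = 2,$$
whose unique singularity is Lorenz-like and whose flow expands area along the two-dimensional central subbundle $\mathsf{F}^c$.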
\begin{teor}\label{teor10} Let $\Lambda$ be a sectional-hyperbolic set of codimension $1$ of a vector field $X$ on $M$, such that $W^{u}(H) \subseteq \Lambda$ for every hyperbolic subset $H$ of $\Lambda$. If $p,\sigma \in \Lambda$ satisfy $p \prec \sigma$, where $p$ is a periodic point and $\sigma$ is a singularity, then there exists $x\in \Lambda$ such that $\alpha(x)=\alpha(p)$ and $\omega(x)$ is a singularity. \end{teor}
\begin{proof} Since $p \prec \sigma$, there are sequences $(z_n)_{n \in \mathbb{N}}$ with $z_n \to p$ and $(t_n)_{n\in \mathbb{N}}$ with $t_n > 0$ such that $X_{t_n}(z_n) \to \sigma$; we can take $t_n \to \infty$ and, without loss of generality, assume that $z_n \in U_\Lambda$ for all $n \in \mathbb{N}$.
We denote by $O=\mathcal{O}(p)$ the periodic orbit containing $p$. Since $O \subseteq \Lambda$ is hyperbolic, $W^{uu}(p)$ is well defined and, by hypothesis, contained in $\Lambda$; in addition $\mathcal{F}^{ss}(p)=W^{ss}(p)$. Now, by the continuity of $\mathcal{F}^{ss}$ and since $z_n \to p$, for $n$ sufficiently large $\mathcal{F}^{ss}(z_n)$ intersects $W^{uu}(p)$ at a point $z'_n$. Since $z_n$ and $z'_n$ have the same strong stable manifold, $X_{t_n}(z_n) \to \sigma$ and $t_n \to \infty$, we have $X_{t_n}(z'_n) \to \sigma$. But $z'_n \in W^u(O)$, which is invariant, so $\sigma \in Cl(W^u(O))$; therefore, $\sigma$ is Lorenz-like.\\
We choose two cross-sections $\Sigma_1 \subseteq U_\Lambda$ and $\Sigma_2 \subseteq U_\Lambda$ associated with $\sigma$, such that the intersection of $\Sigma_1$ with one of the connected components of $W^{ss}(\sigma)\setminus \mathcal{F}^{ss}(\sigma)$ is a point $y_1 \in int(\Sigma_1)$ and the intersection of $\Sigma_2$ with the other component is a point $y_2 \in int(\Sigma_2)$. We take these sections small enough that $O \cap (\Sigma_1 \cup \Sigma_2) = \emptyset$. Since $\Lambda \cap \mathcal{F}^{ss}(\sigma)=\{\sigma\}$, we can arrange that $(\partial^h\Sigma_1 \cup \partial^h\Sigma_2)\cap \Lambda = \emptyset$. Let $\mathcal{F}^s_{\Sigma_1}$ and $\mathcal{F}^s_{\Sigma_2}$ be the vertical foliations of $\Sigma_1$ and $\Sigma_2$.\\
On the other hand, since $W^u(O)$ accumulates on $\sigma$, it accumulates on $\mathcal{F}^s(y_1,\Sigma_1)$, on $\mathcal{F}^s(y_2,\Sigma_2)$, or on both. Assume without loss of generality that it accumulates on $\mathcal{F}^s(y_1,\Sigma_1)$. We can select a point $c \in W^{uu}(p) \cap int(\Sigma_1)$ as the first point at which $W^{uu}(p)$ intersects $int(\Sigma_1)$. Taking the negative orbit of $c$ and using the fact that $dim(W^{uu}(p))=1$, we obtain a fundamental domain $D^u = [a,b]$ of $W^{uu}(p)$ such that $D^u \cap \Sigma_1 = \emptyset$. Furthermore, $b$ is in the positive orbit of $a$ and in the negative orbit of $c$.\\
We define the function $\Pi_D: Dom(\Pi_D) \subseteq D^u \to int(\Sigma_1)$ with $$Dom(\Pi_D)=\{x \in D^u: X_t(x)\in int(\Sigma_1) \text{ for some } t > 0\}$$ given by $\Pi_D(x) = X_{t(x)}(x)$, where $t(x)$ is the return time, that is, the first $t>0$ for which $X_t(x)\in int(\Sigma_1)$.\\
By construction we have $a, b \in Dom(\Pi_D)$, and since $b$ is in the positive orbit of $a$ it follows that $\Pi_D(a) = \Pi_D(b) = c \in int(\Sigma_1)$. Define\\ \begin{align*} q^*=
\sup \{s \in [a,b] : [a,s] \subseteq Dom(\Pi_D), \Pi_D([a,s]) & \subseteq int(\Sigma_1) \\ & \text{ and } \Pi_D|_{[a,s]} \text{ is } C^1 \},
\end{align*} \begin{align*} q^{**}=
\inf \{s \in [a,b] : [s,b] \subseteq Dom(\Pi_D), \Pi_D([s,b]) & \subseteq int(\Sigma_1) \\ & \text{ and } \Pi_D|_{[s,b]} \text{ is } C^1 \}. \end{align*}
Since $\Pi_D(a), \Pi_D(b) \in int(\Sigma_1)$, by the continuous dependence of the flow, $q^*$ and $q^{**}$ are well defined and $a < q^*$ and $q^{**} < b$. Now, if $q^*=b$, $q^{**}=a$ or $q^*=q^{**}$, then $\Pi_D([a,b])$ would be a closed curve $l$ in $int(\Sigma_1)$ (minus a point in the third case) and therefore tangent to $\mathcal{F}^s_{\Sigma_1}$ at at least one point; but since $D^u \subseteq W^{uu}(p)$, the vectors tangent to $l$ belong to the central subspace $\mathsf{F}^c$, meaning that $l$ is transverse to the strong stable foliation in $\Lambda$
and therefore to the foliation $\mathcal{F}^s_{\Sigma_1}$, which contradicts that it is closed. Hence $a<q^*<q^{**}<b$; moreover, again by continuous dependence, we have $q^*,q^{**} \notin Dom(\Pi_D)$, otherwise $q^*$ would not be a supremum or $q^{**}$ an infimum.\\
Let $c^*,c^{**}\in int(\Sigma_1)$ be the open endpoints of the half-open arcs $\Pi_D([a,q^*))$ and $\Pi_D((q^{**},b])$ respectively, and let $l =\Pi_D([a,q^*)\cup (q^{**},b])$. Since $\Pi_D(a)=\Pi_D(b)=c$, $l$ is an open connected arc with endpoints $c^*$ and $c^{**}$; in addition $[a,q^*)\cup (q^{**},b]\subseteq W^{uu}(p)$, so $l$ is transverse to $\mathcal{F}^s_{\Sigma_1}$. On the other hand, since $\Lambda$ has codimension $1$, $\mathcal{F}^s(y_1,\Sigma_1)$ divides $\Sigma_1$ into two connected components. There are two cases:
\begin{enumerate} \item[{\bf Case 1:}] {If $c^*$ and $c^{**}$ lie on opposite sides of $\mathcal{F}^s(y_1,\Sigma_1)$}, then the arc $l$ intersects the leaf $\mathcal{F}^s(y_1,\Sigma_1)$, that is, there exists $x \in W^{s}(\sigma)\cap W^{uu}(p)$; then $\alpha(x)=\alpha(p)$ and $\omega(x) = \{\sigma\}$.\\
\item[{\bf Case 2:}] {If $c^*$ and $c^{**}$ are on the same side of $\mathcal{F}^s(y_1,\Sigma_1)$, or $c^* \in \mathcal{F}^s(y_1,\Sigma_1)$, or $c^{**} \in \mathcal{F}^s(y_1,\Sigma_1)$}. Assume without loss of generality that $c^*$ is closer to $\mathcal{F}^s(y_1,\Sigma_1)$ than $c^{**}$. Consider the cross-section $\Sigma_0 \subseteq \Sigma_1$ given by the leaves $\mathcal{F}^s(y_1,\Sigma_1)$, $\mathcal{F}^s(\Pi_D(r),\Sigma_1)$, and all the leaves of $\mathcal{F}^s_{\Sigma_1}$ between them, where $r \in (q^{**},b)$. We have that $\mathcal{O}^+(q^*)$ does not intersect the interior of $\Sigma_1$ and $\partial^h\Sigma_0 \cap \Lambda = \emptyset$, so $q^*$ satisfies the property $P_{(\Sigma_0)}$ by taking $I= (a,q^*)$.\\
Since $q^*$ belongs to the sectional-hyperbolic set $\Lambda$, $\omega(q^*)$ is also sectional-hyperbolic, so by Theorem \ref{teor7}, $\omega(q^*)$ is a periodic orbit or a singularity. If $\omega(q^*) = \mathcal{O}(\widehat{p})$ with $\widehat{p} \in \Lambda \cap Per(X)$, then it follows from the Inclination Lemma (see Lemma 2.15 in \cite{ap}) that the positive orbit of $I$ accumulates on $W^u(\mathcal{O}(\widehat{p}))$. In particular, the positive orbit of $I$ contains an open arc $I^*$ arbitrarily close to $\widehat{D^u}=[\widehat{a},\widehat{b}]$, where $\widehat{D^u}$ is a fundamental domain of $W^{uu}(\widehat{p})$.\\
We define the function $\Pi_{\widehat{D}}: Dom(\Pi_{\widehat{D}}) \subseteq \widehat{D^u} \to int(\Sigma_1)$ with $$Dom(\Pi_{\widehat{D}})=\{x \in \widehat{D^u}: X_t(x)\in int(\Sigma_1) \text{ for some } t > 0\}$$ given by $\Pi_{\widehat{D}}(x) = X_{t(x)}(x)$, where $t(x)$ is the return time, that is, the first $t>0$ for which $X_t(x)\in int(\Sigma_1)$.\\
By projecting $I^*$ onto $\widehat{D^u}$ along the strong stable manifolds of the points in $I^*$, we can conclude that $\widehat{D^u} \subseteq Dom(\Pi_{\widehat{D}})$, so $\Pi_{\widehat{D}}(\widehat{D^u})$ is a closed curve $\widehat{l} \subseteq int(\Sigma_1)$; but since $\widehat{D^u}$ is a fundamental domain of $W^{uu}(\widehat{p}) \subseteq \Lambda$, $\widehat{l}$ is transverse to $\mathcal{F}^s_{\Sigma_1}$, which contradicts that $\widehat{l}$ is closed. From this contradiction we conclude that $\omega(q^*)$ is a singularity, therefore $q^* \in W^s(\sigma^*) \cap W^{uu}(p)$ for some singularity $\sigma^* \in \Lambda$. Taking $x = q^*$, we get the result. \end{enumerate}
\end{proof}
\begin{teor}\label{teor11} Let $\Lambda$ be a sectional-hyperbolic set of codimension $1$ of a vector field $X$ on $M$, such that $W^{u}(H) \subseteq \Lambda$ for every hyperbolic subset $H$ of $\Lambda$. If $p,\sigma \in \Lambda$ satisfy $p \prec \sigma$, $\alpha(p)$ does not contain singularities and $\sigma$ is a singularity, then there exists $x\in M$ such that $\alpha(x)=\alpha(p)$ and $\omega(x)$ is a singularity. \end{teor}
\begin{proof} We have that $\alpha(p)$ is hyperbolic since it contains no singularities. We fix $y \in \alpha(p)$; then there is a sequence $(t_n)_{n\in \mathbb{N}}$ with $t_n \to \infty$ such that $X_{-t_n} (p) \to y$. We extend the hyperbolic decomposition of $\alpha(p)$ to a neighborhood $U_{\alpha(p)}$. As the negative orbit of $p$ becomes close to $\alpha(p)$, we can assume that $X_{-t_n} (p) \in U_{\alpha(p)}$, and we can use graph transform techniques \cite{hps,hps1} to find an $\epsilon > 0$ and a sequence of open intervals $(I_n)_{n\in \mathbb{N}}$, where $I_n = (X_{-t_n }(p) - \epsilon, X_{-t_n} (p) + \epsilon) \subseteq W^{uu}(X_{-t_n} (p))$, converging to the open interval $I = (y - \epsilon, y + \epsilon) \subseteq W^{uu}(y)$. \\
On the other hand, by applying the Shadowing Lemma \cite{kh} to the negative orbit of $p$, we can produce a sequence of hyperbolic periodic points $\{p_n\}_{n\in \mathbb{N}}$ such that $p_n \to y$ and $p_n \in U_{\alpha(p)}$. For $n$ large enough, by the continuity of $W^{ss}_{U_{\alpha(p)}}$, $W^{ss}(p_n)$ intersects $W^{uu}(y)$ at a point $q_n$; given that $p_n$ and $q_n$ have the same strong stable manifold, $\omega(q_n) = \omega(p_n)= \mathcal{O}(p_n)$, and as $q_n \in W^{uu}(y) \subseteq \Lambda$, it follows that $p_n \in \Lambda$. \\
In addition, the strong unstable manifolds $W^{uu}(p_n)$ have uniformly large size and approach $I$ as $n \to \infty$. Then both $W^{uu}(p_n)$ and $I_n$ approximate the interval $I$ when $n \to \infty$; this allows us to fix $n_0,n_1 \in \mathbb{N}$ such that $p_{n_1} \in \Lambda$ and the following property holds:\\
\textbf{Property $(Q)$.} The strong stable manifold of each point close to $X_{-t_{n_0}}(p)$ intersects $W^{uu}(p_{n_1})$ and, conversely, the strong stable manifold of any point close to $p_{n_1}$ intersects $I_{n_0}$.\\
Now, since $p \prec \sigma$, we also have $X_{-t_{n_0}} (p) \prec \sigma$, whence there are sequences $(z_m)_{m\in \mathbb{N}}$ with $z_m \to X_{-t_{n_0}} (p)$ and $(t_m)_{m \in \mathbb{N}}$ with $t_m > 0$ such that $X_{t_m} (z_m) \to \sigma$. Then property $(Q)$ implies that there is another sequence $(z_m')_{m \in \mathbb{N}}$ with $z'_m \in W^{uu}(p_{n_1} ) \cap W^{ss}(z_m)$; then $X_{t_m}(z_m') \to \sigma$, therefore $p_{n_1} \prec \sigma$. Applying Theorem \ref{teor10}, there exists $x^* \in \Lambda$ such that $\alpha(x^*) = \alpha(p_{n_1})$ and $\omega(x^*)$ is a singularity $\sigma^*$.\\
Taking the negative orbit of $x^*$, we can assume that $x^*$ is close enough to $p_{n_1}$. Then property $(Q)$ implies that $W^{ss}(x^*)$ intersects $I_{n_0}$ at some point $x$. Since $I_{n_0} \subseteq W^{uu}(X_{-t_{n_0}}(p))$ and $\alpha(X_{-t_{n_0}} (p)) = \alpha(p)$, we have $\alpha(x) = \alpha(p)$; and since $x \in W^{ss}(x^*)$, $\omega(x) = \omega(x^*) = \sigma^*$, which proves the result. \end{proof}
\begin{proof}[Proof of the main theorem] The result is immediate if $q \in \mathcal{O}^+(p)$, so assume that $q \notin \mathcal{O}^+(p)$. If $\omega(p)$ or $\omega(q)$ contains a singularity $\sigma$, then $p \prec \sigma$; similarly, if $\alpha(q)$ contains a singularity $\sigma$, then the continuity of the flow $X_t$ and the fact that $q \notin \mathcal{O}^+(p)$ imply that $p \prec \sigma$; in these cases the result follows from the previous theorem.\\ We will therefore assume that the set $\alpha(p) \cup \omega(p) \cup \alpha(q) \cup \omega(q)$ contains no singularities; then there is $\delta_1 > 0$ such that $p$ and $q$ are in $H_1$ defined by:
$$H_1 = \bigcap_{t \in \mathbb{R}} X_t(\Lambda \setminus B_{\delta_1} (Sing(X))).$$
Since $H_1$ has no singularities, it is a hyperbolic set, so $W^{uu}(p)$ is well defined and, by hypothesis, contained in $\Lambda$. Reasoning analogously to the beginning of the proof of Theorem \ref{teor10}, there exists a sequence $\{z'_n\}_{n \in \mathbb{N}}$ such that $z_n' \in W^{uu}(p)$ and $X_{t_n}(z'_n) \to q$, with $z'_n \to p$ and $t_n \to \infty$.\\
Suppose for a moment that for all $k \in \mathbb{N}$ there is $\sigma_k \in Sing(X)$ such that $$\sigma_k \in Cl\left( \bigcup_{n=k}^\infty \mathcal{O}^+(z'_n)\right).$$
Since the number of singularities in $\Lambda$ is finite, we can assume that $\sigma = \sigma_k$ does not depend on $k$. As $z_n' \to p$, we conclude that $p \prec \sigma$, and the result follows from Theorem \ref{teor11}. We can therefore assume that there are $k_0 \in \mathbb{N}$ and $0 < \delta_2 < \delta_1$ such that $$ \left( \bigcup_{n=k_0}^\infty \mathcal{O}^+(z'_n) \right) \cap B_{\delta_2}(Sing(X))=\emptyset.$$ Observe that $\mathcal{O}(z_n') \subseteq \Lambda$, hence $$\mathcal{O}^+(z'_{n}) \subseteq \Lambda \setminus B_{\delta_2}(Sing(X))$$ for all $n \geq k_0$. On the other hand, $z'_n \in W^{uu}(p)$, so $\alpha(z'_n) = \alpha(p)$, which has no singularities. Then, since $z'_{n} \to p$, we conclude that there exists $\delta_3 < \delta_2$ such that $\mathcal{O}^-(z'_n) \subseteq U \setminus B_{\delta_3}(Sing(X))$ for all $n \geq k_0$. Consequently $(z_n')_{n \geq k_0} \subseteq H$, where: $$H=\bigcap_{t \in \mathbb{R}}X_t(U \setminus B_{\delta_3}(Sing(X))),$$
which has no singularities and therefore is hyperbolic. Then, as $X_{t_n} (z_n') \to q$ and $H$ is hyperbolic, by Theorem \ref{teor2} there is $x \in M$ such that $\alpha(x) = \alpha(p)$ and $\omega(x) = \omega(q)$. \end{proof}
\end{document}
\begin{document}
\title{General polygamy inequality of multi-party quantum entanglement}
\author{Jeong San Kim} \email{freddie1@suwon.ac.kr} \affiliation{
Department of Mathematics, University of Suwon, Kyungki-do 445-743, Korea } \date{\today}
\begin{abstract} Using entanglement of assistance, we establish a general polygamy inequality of multi-party entanglement in arbitrary dimensional quantum systems. For multi-party closed quantum systems, we relate our result with the monogamy of entanglement to show that the entropy of entanglement is a universal entanglement measure that bounds both monogamy and polygamy of multi-party quantum entanglement. \end{abstract}
\pacs{ 03.67.Mn, 03.65.Ud }
\maketitle
Entanglement has been recognized as a key resource in many quantum information processing tasks such as quantum teleportation~\cite{tele} and dense coding~\cite{BW92}. Entanglement can be used to perform quantum key distribution where entangled-state analysis is used to prove the security of privacy amplification~\cite{BB84,Eke91}. Entanglement is also essential in certain models of quantum computing such as non-deterministic gate teleportation~\cite{NC+97} and one-way quantum computation~\cite{RB01}.
One distinct phenomenon of quantum entanglement, as opposed to classical correlation, is that it cannot be shared freely in multi-party systems. A simple example arises when a pair of parties in a multi-party quantum system share maximal entanglement, in which case they cannot have any entanglement or classical correlations with other parties. This restricted shareability of entanglement in multi-party quantum systems is known as the {\em Monogamy of Entanglement} (MoE)~\cite{T04}. MoE does not have any classical counterpart because all classical probability distributions can be shared among parties, and correlation between a pair of parties (whether they are perfectly correlated or not) does not restrict other parties' correlations. Thus MoE makes quantum physics fundamentally different from classical physics.
The first mathematical characterization of MoE was established by Coffman-Kundu-Wootters (CKW) for three-qubit systems using tangle as the bipartite entanglement measure~\cite{CKW}. For a three-qubit state $\rho_{ABC}$ with two-qubit reduced density matrices $\rho_{AB}=\mbox{$\mathrm{tr}$}_C\,\rho_{ABC}$ and $\rho_{AC}=\mbox{$\mathrm{tr}$}_B\,\rho_{ABC}$, \begin{equation} \tau\left(\rho_{A(BC)}\right)\geq \tau\left(\rho_{AB}\right)+\tau\left(\rho_{AC}\right), \label{eq: CKW} \end{equation} where $\tau\left(\rho_{A(BC)}\right)$ is the entanglement of $\rho_{ABC}$ with respect to the bipartition between $A$ and $BC$ measured by tangle, and $\tau\left(\rho_{AB}\right)$ and $\tau\left(\rho_{AC}\right)$ are the tangles of $\rho_{AB}$ and $\rho_{AC}$, respectively. Inequality~(\ref{eq: CKW}) (also referred to as the CKW inequality) shows the mutually exclusive nature of multi-party quantum entanglement in a quantitative way; more entanglement shared between two qubits ($A$ and $B$) necessarily implies less entanglement between the other two qubits ($A$ and $C$). Later, the CKW inequality was generalized to multi-qubit systems rather than just three qubits~\cite{OV}.
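Two standard examples make the content of Inequality~(\ref{eq: CKW}) concrete (the values below are well known and are recalled here only for illustration). For the GHZ state $(\ket{000}+\ket{111})/\sqrt{2}$ all the entanglement lies between $A$ and $BC$, while the W state $(\ket{100}+\ket{010}+\ket{001})/\sqrt{3}$ saturates the inequality:
\begin{align*}
\text{GHZ}:&\quad \tau\left(\rho_{A(BC)}\right)=1,\quad
\tau\left(\rho_{AB}\right)=\tau\left(\rho_{AC}\right)=0,
&1&\geq 0+0,\\
\text{W}:&\quad \tau\left(\rho_{A(BC)}\right)=\tfrac{8}{9},\quad
\tau\left(\rho_{AB}\right)=\tau\left(\rho_{AC}\right)=\tfrac{4}{9},
&\tfrac{8}{9}&=\tfrac{4}{9}+\tfrac{4}{9}.
\end{align*}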
However, the CKW inequality is known to fail in its generalization to higher-dimensional quantum systems due to the existence of quantum states violating Inequality~(\ref{eq: CKW})~\cite{KDS}. Moreover, this characterization of MoE in the form of an inequality is not generally true even in three-qubit systems for other entanglement measures; one can easily find an example of a three-qubit state that violates the CKW inequality if we use Entanglement of Formation~(EoF)~\cite{BDSW} instead of tangle. Thus having a proper way of quantifying bipartite entanglement is crucial in the study of MoE.
Later, monogamy of multi-qubit entanglement and some cases of higher-dimensional quantum systems were characterized in terms of various entanglement measures~\cite{KDS, KSRenyi, KT, KSU}. Toward a general monogamy inequality of multi-party entanglement, it was recently shown that squashed entanglement~\cite{CW04} is a faithful entanglement measure (it vanishes only for separable states)~\cite{BCY10}, which yields a monogamy inequality of multi-party entanglement in arbitrary dimensional quantum systems~\cite{KW}.
Whereas MoE is about the restricted shareability of bipartite entanglement in multi-party quantum systems, the dual concept of bipartite entanglement, namely {\em Entanglement of Assistance} (EoA), is known to have a dually monogamous (thus polygamous) property in multipartite quantum systems; for a three-qubit pure state $\ket{\psi}_{ABC}$, {\em Polygamy of Entanglement} (PoE) was characterized as a dual inequality~\cite{GMS, GBS} \begin{equation} \tau\left(\ket{\psi}_{A(BC)}\right)\le\tau_a\left(\rho_{AB}\right) +\tau_a\left(\rho_{AC}\right), \label{3dual} \end{equation} where $\tau_a\left(\rho_{AB}\right)$ and $\tau_a\left(\rho_{AC}\right)$ are the tangles of assistance~\cite{GBS} of $\rho_{AB}$ and $\rho_{AC}$, respectively.
For MoE characterized by the CKW inequality, the bipartite entanglement between $A$ and $BC$ measured by tangle is an upper bound for the sum of the two-qubit entanglement between $A$ and each of $B$ and $C$. Interestingly, the same quantity also serves as a lower bound for the sum of the two-qubit entanglement of assistance in the polygamy inequality. Later, PoE was generalized to multi-qubit systems~\cite{GBS, KT} and to three-party pure states of arbitrary dimension~\cite{BGK}. Recently, a tight upper bound on the polygamy inequality was also proposed in arbitrary-dimensional multi-party quantum systems~\cite{Kim09}. However, a general polygamy inequality for multi-party, higher-dimensional quantum systems was an open question.
The study of quantum entanglement in higher-dimensional quantum systems is important not only in a theoretical sense but for practical reasons as well; MoE can restrict the possible correlation between authorized users and the eavesdropper, which tightens security bounds in quantum cryptography (QC). Furthermore, to optimize the efficiency of entanglement usage as a resource in QC, higher-dimensional quantum systems rather than qubits are preferred in some physical systems for stronger security in quantum key distribution (QKD)~\cite{GJV+06}. However, generalization from qubits to qudits is not straightforward, as seen, for example, in the complexity of a no-go theorem for universal transversal gates in quantum error correction~\cite{CCCZC08}.
Here, we provide a polygamy inequality of entanglement that holds for multi-party quantum systems of arbitrarily high dimensions. For multi-party closed quantum systems, we relate our result with the monogamy inequality~\cite{KW}, and show that the entropy of entanglement serves as both upper and lower bounds for monogamy and polygamy of multi-party quantum entanglement. Thus the entropy of entanglement is a universal entanglement measure that bounds both MoE and PoE.
Let us recall the definitions of EoF and EoA for bipartite quantum systems. For a bipartite pure state $\ket{\psi}_{AB}$, its entropy of entanglement is \begin{align} E\left(\ket{\psi}_{AB}\right):=S\left(\rho_A\right) \label{pureE} \end{align} where $\rho_A=\mbox{$\mathrm{tr}$}_B \ket{\psi}_{AB}\bra{\psi}$ and $S\left(\rho \right)=-\mbox{$\mathrm{tr}$} \rho \log \rho$ is the von Neumann entropy. For a mixed state $\rho_{AB}$, its EoF is defined as the minimum average entanglement \begin{equation} E_f\left(\rho_{AB}\right)=\min \sum_{i}p_i E\left(\ket{\psi^i}_{AB}\right), \label{EoF} \end{equation} where the minimization is taken over all possible pure-state decompositions of $\rho_{AB}=\sum_{i} p_i \ket{\psi^i}_{AB}\bra{\psi^i}$. This procedure of minimizing over all pure-state decompositions to determine mixed-state entanglement is known as the convex-roof extension. From the convex-roof nature inherent in the definition, the EoF of~$\rho_{AB}$ is considered as the minimum amount of entanglement needed to prepare $\rho_{AB}$, hence the terminology {\em formation}.
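For instance (a standard illustration, added here for concreteness), a two-qubit pure state written in its Schmidt form $\ket{\psi}_{AB}=\sqrt{\lambda}\,\ket{00}+\sqrt{1-\lambda}\,\ket{11}$ with $0\leq\lambda\leq 1$ has $\rho_A=\lambda\ket{0}\bra{0}+(1-\lambda)\ket{1}\bra{1}$, hence
$$E\left(\ket{\psi}_{AB}\right)=-\lambda\log\lambda-(1-\lambda)\log(1-\lambda),$$
which vanishes for product states ($\lambda\in\{0,1\}$) and, with $\log$ taken base $2$, attains its maximum of one ebit at the Bell state $\lambda=1/2$.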
As a dual quantity to EoF, the EoA of $\rho_{AB}$ is defined as the maximum average entanglement \begin{equation} E_a\left(\rho_{AB}\right)=\max \sum_{i}p_i E\left(\ket{\psi^i}_{AB}\right), \label{EoA} \end{equation} over all possible pure-state decompositions of $\rho_{AB}$~\cite{LVvE03}. If we consider $\rho_{AB}$ together with a purification $\ket{\psi}_{ABC}$ such that $\rho_{AB}=\mbox{$\mathrm{tr}$}_C \ket{\psi}_{ABC}\bra{\psi}$, the party $C$ holding the purifying system of $\rho_{AB}$ can help increase the entanglement of $\rho_{AB}$ by performing measurements on its own system $C$ and communicating the measurement results to $A$ and $B$.
Furthermore, the one-to-one correspondence between rank-one measurements of $C$ and pure-state ensembles of~$\rho_{AB}$ allows this maximum achievable average entanglement between $A$ and $B$ with the assistance of $C$ to be defined intrinsically: it is the maximum average entanglement over all possible pure-state decompositions of $\rho_{AB}$, which is the definition of EoA in Eq.~(\ref{EoA}). Thus, besides the mathematical duality between EoF and EoA (one takes the minimum whereas the other takes the maximum), they also have physical interpretations that are dual to each other; one is the concept of formation, and the other is the entanglement achievable with the assistance of the environment.
Before we discuss the polygamy relation of multi-party entanglement in terms of EoA, let us first consider a trade-off relation between EoA and {\em one-way unlocalizable entanglement} (UE) in three-party quantum systems~\cite{BGK}; for a three-party pure state $\ket{\psi}_{ABC}$ with reduced density matrices $\rho_{AB}$ and $\rho_{AC}$, we have \begin{align} S(\rho_A)&=E_u^{\leftarrow}(\rho_{AB})+E_a(\rho_{AC}), \label{eq: 3UEEA} \end{align} where $E_u^{\leftarrow}(\rho_{AB})$ is the UE of $\rho_{AB}$, \begin{equation} E_u^{\leftarrow}(\rho_{AB}):=\min_{\{M_x\}} \left[S(\rho_A)-\sum_x p_x S(\rho^x_A)\right], \label{eq:UE} \end{equation} with the minimum being taken over all possible rank-1 measurements $\{M_x\}$ applied on system $B$.
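To see how Eq.~(\ref{eq: 3UEEA}) splits $S(\rho_A)$ into these two parts, consider the GHZ state $\ket{\psi}_{ABC}=(\ket{000}+\ket{111})/\sqrt{2}$ (a standard example, added here only for illustration, with $\log$ taken base $2$). Measuring $B$ in the basis $\{\ket{+},\ket{-}\}$ yields $\rho^{\pm}_A=I/2$ with probability $1/2$ each, so
$$E_u^{\leftarrow}(\rho_{AB})\leq S(\rho_A)-\tfrac{1}{2}S(\rho^{+}_A)-\tfrac{1}{2}S(\rho^{-}_A)=1-1=0.$$
Since the minimized quantity is nonnegative by the concavity of the von Neumann entropy, $E_u^{\leftarrow}(\rho_{AB})=0$, and Eq.~(\ref{eq: 3UEEA}) gives $E_a(\rho_{AC})=S(\rho_A)=1$: all of the entanglement between $A$ and $BC$ can be localized onto $AC$ with the assistance of $B$.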
Eq.~(\ref{eq: 3UEEA}) implies that the entropy of entanglement of $\ket{\psi}_{ABC}$ with respect to the bipartition between $A$ and $BC$ consists of two distinct parts: one is the robust entanglement that can be localized onto $AC$ after the local measurement of $B$ (denoted by $E_a(\rho_{AC})$), and the other part is too sensitive to be localized onto $AC$ (denoted by $E_u^{\leftarrow}\left(\rho_{AB}\right)$).
UE is known to be subadditive under tensor products of quantum states, and bounded below by the coherent information~\cite{BGK}. Furthermore, from the quantitative relation between UE and mutual information \begin{equation} E_u^{\leftarrow}(\rho_{AB})\leq\frac{I(\rho_{AB})}{2}, \label{upper} \end{equation} Eq.~(\ref{eq: 3UEEA}) was shown to imply a trade-off relation of localizable entanglement measured by EoA in three-party quantum systems; for any tripartite pure state $\ket{\psi}_{ABC}$, \begin{equation} S(\rho_A)\leq E_a(\rho_{AB})+E_a(\rho_{AC}). \label{3poly} \end{equation} Thus, for a tripartite pure state of arbitrary dimension, there always exists a polygamy relation of localizable entanglement that can be quantified by the entropy of entanglement and EoA, which is illustrated in Figure~\ref{fig1}.
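The localization that underlies Inequality~(\ref{3poly}) can be checked numerically for the GHZ state. The following sketch (not part of the original argument; the helper functions \texttt{entropy} and \texttt{reduced\_first\_qubit} are our own naming) verifies that $S(\rho_A)=1$ and that a single $X$-basis measurement by $C$ localizes one ebit onto $AB$, so $E_a(\rho_{AB})\geq 1$:

```python
import numpy as np

def entropy(rho):
    """Base-2 von Neumann entropy S(rho) = -tr(rho log2 rho)."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def reduced_first_qubit(psi):
    """Trace out everything but the first qubit of a pure state vector."""
    rest = psi.size // 2
    rho = np.outer(psi, psi.conj())
    return np.trace(rho.reshape(2, rest, 2, rest), axis1=1, axis2=3)

# GHZ state |psi> = (|000> + |111>)/sqrt(2) on qubits A, B, C.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)

# S(rho_A): one ebit of entanglement between A and BC.
S_A = entropy(reduced_first_qubit(ghz))

# Party C measures in the X basis; the outcome |+> projects AB onto
# the Bell state (|00> + |11>)/sqrt(2), localizing one ebit onto AB.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
proj_C = np.kron(np.eye(4), np.outer(plus, plus))   # acts on C only
post = proj_C @ ghz
post /= np.linalg.norm(post)
rho_AB = np.trace(np.outer(post, post.conj()).reshape(4, 2, 4, 2),
                  axis1=1, axis2=3)                 # trace out C
E_AB = entropy(np.trace(rho_AB.reshape(2, 2, 2, 2),
                        axis1=1, axis2=3))          # entanglement of the AB pair

print(S_A, E_AB)   # both equal 1.0 (up to rounding)
```

By symmetry, the same measurement performed on $B$ localizes one ebit onto $AC$, so the right-hand side of Inequality~(\ref{3poly}) is at least $2$ while the left-hand side equals $1$.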
\begin{figure}
\caption{The entanglement between $A$ and $BC$ of a three-party pure state $\ket{\psi}_{A(BC)}$ measured by $S(\rho_A)$ is always bounded by the sum of localizable entanglement between $A$ and $B$ measured by $E_a(\rho_{AB})$, and between $A$ and $C$ measured by $E_a(\rho_{AC})$.}
\label{fig1}
\end{figure}
Now we generalize Inequality~(\ref{3poly}) to arbitrary mixed states of multi-party quantum systems rather than just three parties. We first show that Inequality~(\ref{3poly}) is true for tripartite mixed states, and use the result to establish a general polygamy inequality of EoA in arbitrary dimensional multi-party quantum systems. For a three-party mixed state $\rho_{ABC}$, let $\rho_{A(BC)}=\sum_j p_j \ket{\psi^j}_{A(BC)}\bra{\psi^j}$ be an optimal decomposition for EoA with respect to the bipartition between $A$ and $BC$, \begin{align} E_a\left(\rho_{A(BC)}\right)=\sum_j p_j E\left(\ket{\psi^j}_{A(BC)}\right), \label{3mixopt} \end{align} where $E\left(\ket{\psi^j}_{A(BC)}\right)=S\left(\rho^j_A \right)$ is the pure-state entanglement of $\ket{\psi^j}_{A(BC)}$ for each $j$, with $\rho^j_A=\mbox{$\mathrm{tr}$}_{BC}\ket{\psi^j}_{ABC}\bra{\psi^j}$.
For each $j$, $\ket{\psi^j}_{ABC}$ is a tripartite pure state, therefore Inequality~(\ref{3poly}) leads us to \begin{align} E\left(\ket{\psi^j}_{A(BC)}\right) =& S\left(\rho^j_A \right)\nonumber\\ \leq& E_a\left(\rho^j_{AB}\right)+E_a\left(\rho^j_{AC}\right) \label{unipolypsii} \end{align} with $\rho^j_{AB}=\mbox{$\mathrm{tr}$}_C \ket{\psi^j}_{ABC}\bra{\psi^j}$ and $\rho^j_{AC}=\mbox{$\mathrm{tr}$}_B \ket{\psi^j}_{ABC}\bra{\psi^j}$. The linearity of partial trace implies \begin{align} \sum_{j}p_j\rho_{AB}^j=\rho_{AB},~\sum_{j}p_j\rho_{AC}^j=\rho_{AC}, \label{rhosum} \end{align} and together with the definition of EoA, we have \begin{align} \sum_{j}p_jE_a\left(\rho_{AB}^j\right)&\leq E_a\left(\rho_{AB}\right),\nonumber\\ \sum_{j}p_jE_a\left(\rho_{AC}^j\right)&\leq E_a\left(\rho_{AC}\right). \label{EoAineq} \end{align} From Inequalities~(\ref{unipolypsii}) and (\ref{EoAineq}) together with Eq.~(\ref{3mixopt}), we thus have \begin{align} E_a\left(\rho_{A(BC)}\right)=&\sum_j p_j E\left(\ket{\psi^j}_{A(BC)}\right)\nonumber\\
\leq &\sum_j p_jE_a\left(\rho^j_{AB}\right)+\sum_j p_jE_a\left(\rho^j_{AC}\right) \nonumber\\ \leq& E_a\left(\rho_{AB}\right)+E_a\left(\rho_{AC}\right). \label{3polymixed} \end{align} In other words, Inequality~(\ref{3poly}) can be generalized for tripartite mixed states in terms of EoA.
Now let us consider a multi-party quantum state $\rho_{A_1A_2\cdots A_n}$ rather than just three parties. By letting $A_1=A$, $A_2=B$ and $A_3\cdots A_n=C$, we can consider $\rho_{A_1A_2\cdots A_n}$ as a tripartite quantum state, and Inequality~(\ref{3polymixed}) leads us to \begin{align} E_a\left(\rho_{A_1(A_2\cdots A_n)}\right) \leq& E_a\left(\rho_{A_1A_2}\right)+E_a\left(\rho_{A_1(A_3\cdots A_n)}\right), \label{polymixed1} \end{align} where $\rho_{A_1A_2}=\mbox{$\mathrm{tr}$}_{A_3\cdots A_n}\rho_{A_1A_2\cdots A_n}$, $\rho_{A_1A_3\cdots A_n}=\mbox{$\mathrm{tr}$}_{A_2}\rho_{A_1A_2\cdots A_n}$, and $E_a\left(\rho_{A_1(A_3\cdots A_n)}\right)$ is EoA of $\rho_{A_1A_3\cdots A_n}$ with respect to the bipartition between $A_1$ and $A_3\cdots A_n$. Because $\rho_{A_1A_3\cdots A_n}$ in Inequality~(\ref{polymixed1}) is an $(n-1)$-party quantum state, we can apply Inequality~(\ref{3polymixed}) to obtain $E_a\left(\rho_{A_1(A_3\cdots A_n)}\right) \leq E_a\left(\rho_{A_1A_3}\right)+E_a\left(\rho_{A_1(A_4\cdots A_n)}\right)$. Thus, by iterating Inequality~(\ref{3polymixed}) on the last term of Inequality~(\ref{polymixed1}), we obtain the following polygamy inequality of multi-party entanglement \begin{align} E_a\left(\rho_{A_1(A_2\cdots A_n)}\right) \leq& E_a\left(\rho_{A_1A_2}\right)+\cdots +E_a\left(\rho_{A_1A_n}\right). \label{npolymixed} \end{align} In contrast to the monogamy inequality, which provides an upper bound on the shareability of bipartite entanglement in multi-party quantum systems, the polygamy inequality in (\ref{npolymixed}) provides a lower bound on how much entanglement can be created on bipartite subsystems with the assistance of the other parties.
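For concreteness, unrolling this iteration once more in the four-party case (an illustrative expansion of Inequality~(\ref{npolymixed}), not part of the original derivation) gives \begin{align*} E_a\left(\rho_{A_1(A_2A_3A_4)}\right) \leq& E_a\left(\rho_{A_1A_2}\right)+E_a\left(\rho_{A_1(A_3A_4)}\right)\\ \leq& E_a\left(\rho_{A_1A_2}\right)+E_a\left(\rho_{A_1A_3}\right)+E_a\left(\rho_{A_1A_4}\right). \end{align*}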
Here we further note that these upper and lower bounds on entanglement distribution in multi-party quantum systems can be determined by a single quantity, the entropy of entanglement for closed systems. The result on the three-party monogamy inequality in terms of squashed entanglement (SE) in~\cite{KW} can easily be generalized to an arbitrary multi-party quantum system; for an $n$-party state $\rho_{A_1A_2\cdots A_n}$, \begin{align} E_{sq}\left(\rho_{A_1(A_2\cdots A_n)}\right)\geq E_{sq}\left(\rho_{A_1A_2}\right)+ \cdots +E_{sq}\left(\rho_{A_1A_n}\right), \label{nmonomix} \end{align} where $E_{sq}\left(\rho_{AB}\right)$ is SE of $\rho_{AB}$ defined as \begin{equation} \label{eq:squashed} E_{sq}\left(\rho_{AB}\right):=\frac{\inf\left\{S(\rho_{AE}) +S(\rho_{BE})-S(\rho_{ABE})-S(\rho_{E})\right\}}{2} \end{equation} with the infimum taken over all possible extensions $\rho_{ABE}$ of $\rho_{AB}$ such that $\mbox{$\mathrm{tr}$}_E \rho_{ABE}=\rho_{AB}$. The quantity inside the braces of Eq.~(\ref{eq:squashed}) is the quantum conditional mutual information of $\rho_{ABE}$, denoted by $I(A;B|E)$.
For a bipartite pure state~$\ket{\psi}_{AB}$, any possible extension $\rho_{ABC}$ such that $\mbox{$\mathrm{tr}$}_{C} \rho_{ABC}=\ket{\psi}_{AB}\bra{\psi}$ must be a product state $\ket{\psi}_{AB}\bra{\psi}\otimes\rho_{C}$ for some $\rho_{C}$ of subsystem $C$. From this fact, it is also straightforward to verify that SE in~Eq.~(\ref{eq:squashed}) coincides with $S(\rho_{A})$ for any pure state $\ket{\psi}_{AB}$ with reduced density matrix $\rho_{A}$. In other words, for a multi-party closed quantum system described by a pure state $\ket{\psi}_{A_1A_2\cdots A_n}$, the monogamy inequality in terms of SE in~(\ref{nmonomix}) becomes \begin{align} S\left(\rho_{A_1}\right)\geq E_{sq}\left(\rho_{A_1A_2}\right)+ \cdots +E_{sq}\left(\rho_{A_1A_n}\right), \label{nmonopure} \end{align} where $S\left(\rho_{A_1}\right)=E\left(\ket{\psi}_{A_1(A_2\cdots A_n)}\right)$ is the entropy of entanglement of the pure state $\ket{\psi}_{A_1A_2\cdots A_n}$ with respect to the bipartition between
$A_1$ and the other parties. Furthermore, from the definition of EoA in Eq.~(\ref{EoA}), we also note that the left-hand side of the polygamy inequality in~(\ref{npolymixed}) becomes the entropy of entanglement, \begin{align} S\left(\rho_{A_1}\right) \leq& E_a\left(\rho_{A_1A_2}\right)+\cdots +E_a\left(\rho_{A_1A_n}\right), \label{npolypure} \end{align} for this closed quantum system described by $\ket{\psi}_{A_1A_2\cdots A_n}$. Thus the entropy of entanglement quantifying bipartite pure-state entanglement is a universal entanglement measure that provides bounds for both monogamy and polygamy of multi-party quantum entanglement.
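As a small numerical illustration (a sketch added here, not part of the paper), the entropy of entanglement $S(\rho_A)$ appearing on both sides of these bounds can be computed directly from the reduced density matrix; the code below evaluates it for the three-qubit GHZ and W states across the bipartition $A|BC$:

```python
import numpy as np

def entropy_of_entanglement(psi, dA, dBC):
    """S(rho_A) in bits for a bipartite pure state psi in C^dA (x) C^dBC."""
    M = psi.reshape(dA, dBC)
    rho_A = np.einsum('ij,kj->ik', M, M.conj())  # partial trace over BC
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

# Three-qubit GHZ and W states, bipartition A | BC (dA = 2, dBC = 4).
ghz = np.zeros(8); ghz[0] = ghz[7] = 1/np.sqrt(2)      # (|000> + |111>)/sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1/np.sqrt(3)     # (|001>+|010>+|100>)/sqrt(3)

S_ghz = entropy_of_entanglement(ghz, 2, 4)  # 1 bit
S_w = entropy_of_entanglement(w, 2, 4)      # log2(3) - 2/3 ~ 0.9183 bits
```

For the GHZ state the reduced state is maximally mixed ($S=1$ bit), while for the W state the eigenvalues of $\rho_A$ are $1/3$ and $2/3$.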
To summarize, we have shown the polygamous nature of bipartite entanglement distribution in multipartite quantum systems; using EoA, we established a general polygamy inequality of multi-party entanglement in arbitrary high-dimensional quantum systems rather than just qubits. For multi-party closed quantum systems, we have related our polygamy inequality to the monogamy inequality in terms of SE, and clarified that the entropy of entanglement serves as both an upper and a lower bound for MoE and PoE.
Our result completely characterizes the polygamous nature of entanglement distribution in multi-party quantum systems of arbitrary high dimension. Given the importance of high-dimensional multipartite entanglement, our result can serve as a useful reference for future work on multipartite entanglement.
\section*{Acknowledgments} This work was supported by Emerging Technology R\&D Center of SK Telecom.
\end{document}
"id": "1202.2184.tex",
"language_detection_score": 0.7836737632751465,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{{\bf Exact Polynomial Eigensolutions of the Schr\"{o}dinger Equation for the Pseudoharmonic Potential }\\ Sameer M. Ikhdair\thanks{ sikhdair@neu.edu.tr} and \ Ramazan Sever\thanks{ sever@metu.edu.tr}} \address{$^{\ast }$Department of Physics, Near East University, Nicosia, North Cyprus, Mersin 10, Turkey.\\ $^{\dagger }$Department of Physics, Middle East Technical University, 06531 Ankara, Turkey.} \date{\today} \author{}
\begin{abstract} The polynomial solution of the Schr\"{o}dinger equation for the Pseudoharmonic potential is found for any arbitrary angular momentum $l$. The exact bound-state energy eigenvalues and the corresponding eigenfunctions are analytically calculated. The energy states for several diatomic molecular systems are calculated numerically for various principal and angular quantum numbers. By using a proper transformation, this problem can also be solved very simply using the known eigensolutions of the anharmonic oscillator potential.{\normalsize \newline}
Keywords{\normalsize :} Pseudoharmonic potential, anharmonic oscillator potential, Schr\"{o}dinger equation, diatomic molecules, eigenvalues and eigenfunctions. \end{abstract} \pacs {03.65.-w; 03.65.Fd; 03.65.Ge}
\maketitle
\section{Introduction}
\noindent The three-dimensional $(3D)$ anharmonic oscillators are of great importance in different physical phenomena with many applications in molecular physics [1]. The solutions of the Schr\"{o}dinger equation for any $l$-state for such potentials are also of much concern. The Morse potential is commonly used as an anharmonic oscillator. However, its wavefunction does not vanish at the origin, whereas the wavefunctions of the Mie-type and Pseudoharmonic potentials do. The Mie-type potential has the general features of the true interaction energy [1], as well as interatomic, inter-molecular and dynamical properties in solid-state physics [2]. The Pseudoharmonic potential may be used for the energy spectrum of linear and non-linear systems [3]. The Pseudoharmonic and Mie-type potentials [3,4] are two exactly solvable potentials other than the Coulombic and anharmonic oscillator ones.
The anharmonic oscillator and H-atom (Coulombic) problems have been thoroughly studied in $N$-dimensional quantum mechanics for any angular momentum $l.$ These two problems are related to each other, and hence the resulting second-order differential equation has a normalized orthogonal polynomial function solution (cf. Ref.[5] and the references therein).
In this brief letter we follow a solution parallel to that of Refs.[6,7,8] and give a complete normalized polynomial solution of the $3D$ Schr\"{o}dinger equation with the Pseudoharmonic potential, an anharmonic-oscillator-like potential with an additional centrifugal potential barrier, for any arbitrary $l$-state. Further, by a proper transformation, we obtain the eigensolutions of this problem from the well-known eigensolutions of the anharmonic oscillator potential. As an application, we present some numerical results for the energy states of the $N_{2},$ $CO,$ $NO$ and $CH$ molecules [9].
The content of this paper is as follows. In Section II, we give the eigensolutions of the $3D$ Schr\"{o}dinger equation with the Pseudoharmonic potential and calculate numerically the energy levels for various diatomic molecular systems. We also obtain the eigensolutions of the Pseudoharmonic potential from the known anharmonic-oscillator eigensolutions using a proper transformation. Finally, in Section III, we give our results and conclusions.
\section{SCHR\"{O}DINGER EQUATION WITH PSEUDOHARMONIC POTENTIAL}
\noindent We wish to solve the Schr\"{o}dinger equation for a pseudoharmonic potential [3] given by
\begin{equation} V(r)=D_{0}\left( \frac{r}{r_{0}}-\frac{r_{0}}{r}\right) ^{2}, \end{equation} where $D_{0}$ is the dissociation energy between two atoms in a solid and $ r_{0}$ is the equilibrium intermolecular separation.
We write the Schr\"{o}dinger equation as \begin{equation} \left[ -\frac{\hbar ^{2}}{2\mu }{\bf \nabla }^{2}+V(r)\right] \psi (r,\theta ,\varphi )=E_{nl}\psi (r,\theta ,\varphi ), \end{equation} and employ the transformation $\psi (r,\theta ,\varphi )=\frac{R_{nl}(r)}{ r}Y_{lm}(\theta ,\varphi ),$ to reduce it to the radial form [6,7] \begin{equation} \left\{ \frac{d^{2}}{dr^{2}}-\frac{l(l+1)}{r^{2}}+\frac{2\mu }{\hbar ^{2}} \left[ E_{nl}+2D_{0}-\frac{D_{0}r^{2}}{r_{0}^{2}}-\frac{D_{0}r_{0}^{2}}{r^{2} }\right] \right\} R_{nl}(r)=0. \end{equation} Furthermore, using the dimensionless abbreviations:
\begin{equation} \rho =r/r_{0};\text{ }\varepsilon ^{2}=\frac{2\mu r_{0}^{2}}{\hbar ^{2}} \left( E_{nl}+2D_{0}\right) ;\text{ }\gamma ^{2}=\frac{2\mu r_{0}^{2}}{\hbar ^{2}}D_{0}, \end{equation} gives the following simple form equation
\begin{equation} \frac{d^{2}R_{nl}(\rho )}{d\rho ^{2}}+\left[ \varepsilon ^{2}-\gamma ^{2}\rho ^{2}-\frac{\gamma ^{2}+l(l+1)}{\rho ^{2}}\right] R_{nl}(\rho )=0. \end{equation} The behaviour of the solution at $\rho =0,$ determined by the centrifugal term, and its asymptotic behaviour, determined by the oscillator terms, suggest writing: \begin{equation} R_{nl}(\rho )=\rho ^{q}\exp (-\frac{\gamma }{2}\rho ^{2})g(\rho ), \end{equation} where setting the coefficient of the $\rho ^{-2}$ term to zero leads to
\begin{equation} q=\frac{1}{2}\pm \sqrt{\left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2}}.\text{ \ \ } \end{equation} As $q>0,$ the above wavefunction vanishes at $\rho =0,$ corresponding to the strong repulsion between the two atoms. We now substitute Eq.(6) into Eq.(5) and use, instead of $\rho ,$ the variable: \begin{equation} s=\gamma \rho ^{2}, \end{equation} giving the general type of Kummer's (Confluent Hypergeometric) differential equation
\[ sg^{\prime \prime }(s)+\left[ 1+\sqrt{\left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2}}-s\right] g^{\prime }(s) \] \begin{equation} -\frac{1}{2}\left( 1+\sqrt{\left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2}}- \frac{\varepsilon ^{2}}{2\gamma }\right) g(s)=0, \end{equation} with the Kummer's function solution:
\[ g(\rho )=C_{11}F_{1}\left( \frac{1}{2}\left( 1+\sqrt{\left( l+\frac{1}{2} \right) ^{2}+\gamma ^{2}}-\frac{\varepsilon ^{2}}{2\gamma }\right) ,1+\sqrt{ \left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2}};\gamma \rho ^{2}\right) \]
\begin{equation} +C_{21}F_{1}\left( \frac{1}{2}\left( 1-\sqrt{\left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2}}-\frac{\varepsilon ^{2}}{2\gamma }\right) ,1-\sqrt{\left( l+ \frac{1}{2}\right) ^{2}+\gamma ^{2}};\gamma \rho ^{2}\right) \rho ^{-2\sqrt{ \left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2}}}. \end{equation} At $\rho =0,$ the second part of the solution diverges, so that $C_{2}=0.$ This clearly differs from the linear oscillator, where no boundary condition exists at the origin. A confluent series behaves asymptotically at large positive values of its argument as
\begin{equation} _{1}F_{1}(a,c;z)\rightarrow \frac{\Gamma (c)}{\Gamma (a)}\exp (z)z^{a-c}, \end{equation} which leads us to write
\begin{equation} R_{nl}(\rho )\rightarrow \rho ^{\frac{1}{2}\pm \sqrt{\left( l+\frac{1}{2} \right) ^{2}+\gamma ^{2}}}\exp (-\frac{\gamma }{2}\rho ^{2})\exp (\gamma \rho ^{2})\rho ^{-\left( 1+\sqrt{\left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2} }+\frac{\varepsilon ^{2}}{2\gamma }\right) }, \end{equation} which is an exponentially divergent wavefunction. This divergence can be avoided by cutting off the series in Eq.(10), i.e., by putting the parameter $ a=-n,$ with $n=0,1,2,...,$ thus transforming the series into a polynomial of degree $n.$ Hence
\begin{equation} \frac{1}{2}\left( 1+\sqrt{\left( l+\frac{1}{2}\right) ^{2}+\gamma ^{2}}- \frac{\varepsilon ^{2}}{2\gamma }\right) =-n, \end{equation} with
\begin{equation} \frac{\varepsilon ^{2}}{2\gamma }=\frac{r_{0}}{\hbar }\sqrt{\frac{\mu }{ 2D_{0}}}\left( E_{nl}+2D_{0}\right) . \end{equation} Thus, solving Eqs.(13) and (14) for the energy eigenvalues gives
\begin{equation} E_{nl}=-2D_{0}+\frac{\hbar }{r_{0}}\sqrt{\frac{2D_{0}}{\mu }}\left[ 2n+1+ \sqrt{\frac{2\mu D_{0}r_{0}^{2}}{\hbar ^{2}}+\left( l+\frac{1}{2}\right) ^{2} }\right] , \end{equation} and further from Eqs.(6), (10) and (13), we write the final form of the wavefunction as \[ \psi (r,\theta ,\varphi )=N_{nl}r^{-\frac{1}{2}+\sqrt{\frac{2\mu D_{0}r_{0}^{2}}{\hbar ^{2}}+\left( l+\frac{1}{2}\right) ^{2}}}\exp \left( - \sqrt{\frac{\mu D_{0}}{2\hbar ^{2}}}\frac{r^{2}}{r_{0}}\right) \times \] \begin{equation} _{1}F_{1}\left( -n,1+\sqrt{\frac{2\mu D_{0}r_{0}^{2}}{\hbar ^{2}}+\left( l+ \frac{1}{2}\right) ^{2}};\sqrt{\frac{2\mu D_{0}}{\hbar ^{2}}}\frac{r^{2}}{ r_{0}}\right) Y_{lm}(\theta ,\varphi ), \end{equation} where $N_{nl}$ is a normalization constant to be determined from the normalization condition and $Y_{lm}(\theta ,\varphi )=\sin ^{m}\theta P_{n}^{(m,m)}(\cos \theta )\exp (\pm im\varphi )$ is the angular part of the wave function.
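As a quick numerical cross-check (a sketch added here, not part of the paper; the unit conversions $\hbar c\simeq 1973.27$ eV$\cdot$\AA, $1$ amu $\simeq 931.494$ MeV$/c^{2}$ and $1$ cm$^{-1}\simeq 1.23984\times 10^{-4}$ eV are standard values supplied by us), Eq.(15) with the $CO$ parameters of Table 1 reproduces the corresponding entries of Table 2 to within about $10^{-4}$ eV:

```python
import math

# CO parameters (Table 1) converted to eV-based units.
D0 = 87471.42567 * 1.239841984e-4   # dissociation energy, eV
r0 = 1.1282                         # equilibrium separation, Angstrom
mu = 6.860586 * 931.49410242e6      # reduced mass, eV/c^2
hbarc = 1973.269804                 # hbar*c in eV*Angstrom

def E(n, l):
    """Pseudoharmonic energy levels, Eq. (15)."""
    gamma2 = 2.0 * mu * D0 * r0**2 / hbarc**2
    pref = (hbarc / r0) * math.sqrt(2.0 * D0 / mu)
    return -2.0 * D0 + pref * (2*n + 1 + math.sqrt(gamma2 + (l + 0.5)**2))

print(E(0, 0))  # ~ 0.1019 eV, cf. Table 2: 0.1019306 eV
print(E(1, 0))  # ~ 0.3057 eV, cf. Table 2: 0.3056722 eV
```

Small differences in the last digits relative to Table 2 reflect the precise values of the physical constants used.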
On the other hand, for the sake of simplicity, we can immediately obtain the energy eigenvalues and the corresponding wave functions of the Pseudoharmonic potential by transforming Eq.(3) to another Schr\"{o}dinger-like equation with $L(L+1)=l(l+1)+\frac{2\mu D_{0}r_{0}^{2}}{ \hbar ^{2}},$
\begin{equation} \left[ \frac{d^{2}}{dr^{2}}-\frac{L(L+1)}{r^{2}}+\frac{2\mu }{\hbar ^{2}} \left( E_{nL}^{\prime }-B^{2}r^{2}\right) \right] R_{nL}(r)=0, \end{equation} where
\begin{equation} E_{nL}^{\prime }=E_{nl}+2D_{0},\text{ }B^{2}=\frac{D_{0}}{r_{0}^{2}},\text{ \ and }L=\frac{1}{2}\left[ -1+\sqrt{\left( 2l+1\right) ^{2}+\frac{8\mu D_{0}r_{0}^{2}}{\hbar ^{2}}}\right] . \end{equation} At this point, we should report that Eq. (17) corresponds to the Schr\"{o}dinger equation of anharmonic oscillator potential, $ V(r)=B^{2}r^{2},$ with energy levels \begin{equation} E_{nL}^{\prime }=\sqrt{\frac{\hbar ^{2}}{2\mu }}B(4n+2L+3),\text{ } n=0,1,2,\cdots , \end{equation} and wave functions
\begin{equation} \psi (r,\theta ,\varphi )=A_{nL}r^{L}\exp \left( -\sqrt{\frac{\mu }{2\hbar ^{2}}}Br^{2}\right) L_{n}^{(L+\frac{1}{2})}(\sqrt{\frac{2\mu }{\hbar ^{2}}} Br^{2})\sin ^{m}\theta P_{n}^{(m,m)}(\cos \theta )\exp (\pm im\varphi ), \end{equation} where $m=-(n+L+1).$
Finally, in the light of transformation (18), the eigenvalues (15) and the eigenfunctions (16) can be easily determined from the traditional formulas (19) and (20), respectively, with the Laguerre function being expressed in terms of Kummer's function, that is, $L_{n}^{(\nu )}(z)=\binom{n+\nu }{n}\,_{1}F_{1}(-n,\nu +1;z).$
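As a consistency check (an illustrative sketch added here, not in the original), one can verify numerically, in dimensionless units $\hbar =\mu =r_{0}=1$, that the oscillator route of Eqs.(18)-(19) reproduces the direct spectrum of Eq.(15):

```python
import math

D0 = 3.7  # arbitrary positive test value (dimensionless units)

def E_direct(n, l):
    """Eq. (15) with hbar = mu = r0 = 1, so gamma^2 = 2*D0."""
    g2 = 2.0 * D0
    return -2*D0 + math.sqrt(2*D0) * (2*n + 1 + math.sqrt(g2 + (l + 0.5)**2))

def E_via_oscillator(n, l):
    """Eqs. (18)-(19): anharmonic-oscillator spectrum shifted by -2*D0."""
    B = math.sqrt(D0)                                     # B^2 = D0/r0^2
    L = 0.5 * (-1 + math.sqrt((2*l + 1)**2 + 8*D0))       # Eq. (18)
    E_prime = math.sqrt(0.5) * B * (4*n + 2*L + 3)        # Eq. (19)
    return E_prime - 2*D0

for n in range(4):
    for l in range(4):
        assert abs(E_direct(n, l) - E_via_oscillator(n, l)) < 1e-10
```

The agreement follows from the identity $4n+2L+3=2\bigl[2n+1+\sqrt{(l+\tfrac{1}{2})^{2}+\gamma ^{2}}\bigr]$.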
\section{RESULTS AND CONCLUSIONS}
In this work we have studied the analytical solution for the Pseudoharmonic potential. With this potential, the problem reduces to a harmonic oscillator potential plus an additional centrifugal potential barrier of order $1/r^{2}.$ The exact eigensolutions for this particular case have been obtained, in a way similar to the hydrogenic solutions [5,8]. We have calculated the energy eigenvalues and the corresponding wave functions for the bound states of any quantum-mechanical system of any angular momentum $l$ bound by a pseudoharmonic potential. For the potential parameter $\gamma =0,$ the present results reduce to the harmonic oscillator solution.
Finally, we calculate the binding energies of the Pseudoharmonic potential for the $N_{2},$ $CO,$ $NO$ and $CH$ diatomic molecules by means of Eq.(15) with the potential parameter values [9,10] given in Table 1. The explicit values of the energy for different values of $n$ and $l$ are shown in Table 2.
\acknowledgments This research was partially supported by the Scientific and Technological Research Council of Turkey. The authors wish to thank the referee(s) for the positive and invaluable suggestions. S.M. Ikhdair wishes to dedicate this work to his family for their love and assistance.
\baselineskip= 2\baselineskip
\begin{table}[tbp] \caption{Reduced masses and spectroscopically determined properties of $ N_{2},$ $CO,$ $NO$ and $CH$ diatomic molecules in the ground electronic state.} \begin{tabular}{lllll} Parameters\tablenotemark[1]\tablenotetext[1]{The parameter values here are taken from [10].} & $N_{2}$ & $CO$ & $NO$ & $CH$ \\ \tableline$D_{0}$ $(cm^{-1})$ & $96288.03528$ & $87471.42567$ & $64877.06229$ & $31838.08149$ \\ $r_{0}$ $(A^{\circ })$ & $1.0940$ & $1.1282$ & $1.1508$ & $1.1198$ \\ $\mu $ (amu) & $7.00335$ & $6.860586$ & $7.468441$ & $0.929931$ \end{tabular} \end{table}
\begin{table}[tbp] \caption{Calculated energy eigenvalues of the pseudoharmonic potential for $ N_{2},$ $CO,$ $NO$ and $CH$ diatomic molecules with different values of $n$ and $l$ in $eV.$} \begin{tabular}{llllll} State $(n)$ & $l$ & $N_{2}$ & $CO$ & $NO$ & $CH$ \\ \tableline$0$ & $0$ & $0.1091559$ & $0.1019306$ & $0.0824883$ & $0.1686344$ \\ $1$ & $0$ & $0.3273430$ & $0.3056722$ & $0.2473592$ & $0.5050072$ \\ & $1$ & $0.3278417$ & $0.3061508$ & $0.2477817$ & $0.5085903$ \\ $2$ & $0$ & $0.5455302$ & $0.5094137$ & $0.4122301$ & $0.841380$ \\ & $1$ & $0.5460288$ & $0.5098923$ & $0.4126526$ & $0.8449631$ \\ & $2$ & $0.5470260$ & $0.5108495$ & $0.4134977$ & $0.8521246$ \\ $4$ & $0$ & $0.9819045$ & $0.9168969$ & $0.7419718$ & $1.5141255$ \\ & $1$ & $0.9824031$ & $0.9173755$ & $0.7423944$ & $1.5177087$ \\ & $2$ & $0.9834003$ & $0.9183327$ & $0.7432395$ & $1.5248701$ \\ & $3$ & $0.9848961$ & $0.9197684$ & $0.7445070$ & $1.5356002$ \\ & $4$ & $0.9868903$ & $0.9216825$ & $0.7461969$ & $1.5498843$ \\ $5$ & $0$ & $1.2000916$ & $1.1206384$ & $0.9068427$ & $1.8504983$ \\ & $1$ & $1.2005902$ & $1.1211170$ & $0.9072653$ & $1.8540815$ \\ & $2$ & $1.2015875$ & $1.1220742$ & $0.9081104$ & $1.8612429$ \\ & $3$ & $1.2030832$ & $1.1235099$ & $0.9093779$ & $1.8719729$ \\ & $4$ & $1.2050774$ & $1.1254240$ & $0.9110678$ & $1.8862571$ \\ & $5$ & $1.2075699$ & $1.1278165$ & $0.9131799$ & $1.9040761$ \end{tabular} \end{table}
\end{document}
"id": "0611183.tex",
"language_detection_score": 0.6638618111610413,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title[Probability tails of local times]{On the probability distribution of the local times of diagonally operator-self-similar Gaussian fields with stationary increments}
\author[K. Kalbasi]{Kamran Kalbasi} \thanks{Institute of Mathematics, EPFL (Swiss Federal Institute of Technology Lausanne)}
\author[T. Mountford]{Thomas S. Mountford}
\address{Institute of Mathematics, EPFL (Swiss Federal Institute of Technology Lausanne)}
\keywords{ local times, probability tail decay, {G}aussian fields, operator-self-similar random fields, fractional {B}rownian fields}
\begin{abstract} In this paper we study the local times of vector-valued Gaussian fields that are `diagonally operator-self-similar' and whose increments are stationary. Denoting the local time of such a Gaussian field around the spatial origin and over the temporal unit hypercube by $Z$, we show that there exists $\lambda\in(0,1)$ such that under some quite weak conditions, $\lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}(Z^n)}}{n^\lambda}$ and $\lim_{x\rightarrow +\infty}\frac{-\log \mathbb{P}(Z>x)}{x^{\frac{1}{\lambda}}}$ both exist and are strictly positive (possibly $+\infty$).
Moreover, we show that if the underlying Gaussian field is `strongly locally nondeterministic', the above limits will be finite as well. These results are then applied to establish similar statements for the intersection local times of diagonally operator-self-similar Gaussian fields with stationary increments. \end{abstract}
\maketitle
\section{Introduction} Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and $\B{X}_{\B{t}}:=\bigl(X^1_{\B{t}}, \cdots, X^d_{\B{t}}\bigr)$, $\B{t}\in\mathbb{R}^N$ be an $N$-parameter $d$-dimensional centered Gaussian field on $(\Omega, \mathcal{F}, \mathbb{P})$, i.e. each component $X^i_{\B{t}}$ is a real-valued zero-mean Gaussian field indexed by $\mathbb{R}^N$. We call such a random field a centered Gaussian $(N,d)$-field.
We denote by $\mathbb{R}_+$, $\mathbb{N}$ and $\mathbb{Q}$ respectively the sets of strictly positive real numbers ($>0$), strictly positive integers ($\geq 1$), and finally the rational numbers. Evidently $\mathbb{R}_{\geq0}$ denotes the real numbers that are positive or zero. We denote the space of matrices of size $m\times n$ with real entries by $\mathbb{R}^{m\times n}$. For any two same-sized vectors $\B{u}=(u_1, \cdots, u_n)$ and $\B{v}=(v_1, \cdots, v_n)$ in $\mathbb{R}^n$, $\B{u}\circ\B{v}$ denotes their Schur product, i.e., the vector $\B{u}\circ\B{v}:=(u_1v_1, \cdots, u_nv_n)$. For any square matrix $\BU{Y}$, we denote its trace (i.e. the sum of all its diagonal entries) by $\textrm{tr}(\BU{Y})$. For any matrix $\BU{Y}$, we denote its transpose by $\BU{Y}^\dagger$. For any matrices $\BU{A_1}$, $\BU{A_2}$, ..., $\BU{A_n}$, we define $\textrm{diag}(\BU{A_1}, \BU{A_2}, \cdots, \BU{A_n})$ as the block diagonal matrix that has matrices $\BU{A_1}$, $\BU{A_2}$, ..., $\BU{A_n}$ on its diagonal (respecting the order) and is zero elsewhere.
For any $\B{p}\in\mathbb{R}^N$ and $\B{T}\in\mathbb{R}_+^N$, let $\mathcal{C}({\B{p}},\B{T})$ denote $\B{p}+\prod_{i=1}^N[0,T_i]$, i.e., the $N$-dimensional cube of side lengths equal to $\{T_i\}_{i=1}^N$ and based at point $\B{p}$. We also denote $[0,1]$ by $\mathcal{I}$.
For any measurable subset $\mathcal{B}\subset\mathbb{R}^d$, we denote its Lebesgue measure by $\vol(\mathcal{B})$. For any subset $\mathcal{A}$ of an arbitrary set $\mathcal{X}$, we denote its indicator function by $\mathbf{1}_{\{\mathcal{A}\}}$, i.e. $$ \mathbf{1}_{\{\mathcal{A}\}}(x):= \begin{cases} 1 \;,\quad \text{for} \;x\in \mathcal{A}\\ 0 \;,\quad \text{for} \;x\not\in \mathcal{A}. \end{cases} $$
For any $k$-dimensional Gaussian random vector $\B{Y}=(Y_1, \cdots, Y_k)$, we denote the determinant of its covariance matrix by $\detcov[\B{Y}]$; in other words $$\detcov[\B{Y}]:=\det \bigl[\mathbb{E} \bigl( \B{Y}\, \B{Y}^\dagger\bigr)-\mathbb{E}(\B{Y})\mathbb{E}(\B{Y}^\dagger)\bigr],$$ where $\B{Y}$ is regarded as a $k\times 1$ matrix. For any finite family of vectors $\B{y}_i=(y_1^i,\cdots,y_k^i)$, $i=1,\cdots,n$, we call the following vector as their adjoined vector: $$ [y_1^1,\cdots,y_k^1, y_1^2,\cdots,y_k^2, \cdots, y_1^n,\cdots,y_k^n], $$ and we denote it by $[\B{y}_1, \cdots, \B{y}_n]$.
Once a centered Gaussian $(N,d)$-field $\B{X}$ is fixed, for any positive integer $n$ and any $\B{t}_1, \cdots, \B{t}_n\in \mathbb{R}^N$ we define \begin{equation}\label{K_n Definition} \U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n):= (2\pi)^{-\frac{nd}{2}}\bigl(\det \cov \bigl[\B{X}_{\B{t}_1}, \cdots, \B{X}_{\B{t}_n}]\bigr)^{-\frac{1}{2}}. \end{equation}
We use the following definition of local times which provides a pointwise characterisation. The more common definition of local times as the Radon-Nikodym derivative of the occupation measure of a random process (if it is absolutely continuous), only provides an almost-sure characterisation of the occupation density. In section \ref{Two different definitions of local times}, we will see more on this and the link between the two definitions.
\begin{dfn} Let $\{\B{X}_{\B{t}}\}_{\B{t}}$ be a random field on $\mathbb{R}^N$ with values in $\mathbb{R}^d$. We define the local time of $\B{X}$ at $\B{x}\in\mathbb{R}^d$ and over the cube $\mathcal{C}(\B{p},\B{T})$ as the following limit (if it exists) $$ L_{\B{x}}(\B{X};\mathcal{C}(\B{p},\B{T})):= \lim_{\varepsilon\rightarrow 0} \int_{\mathcal{C}(\B{p},\B{T})}\frac{1}{\vol(\mathcal{B}_\varepsilon(\B{x}))}
\mathbf{1}_{\{\|\B{X}_{\B{t}}-\B{x}\|<\varepsilon\}}(\B{t})\mathrm{d}\B{t}, $$
where $\|\cdot\|$ is an arbitrary norm on $\mathbb{R}^d$, and $\mathcal{B}_\varepsilon(\B{x}):=\{\B{y}\in\mathbb{R}^d ; \|\B{x}-\B{y}\|<\varepsilon\}$.
\end{dfn}
We are interested in the tail-decay behavior of the probability distribution of $L_{\B{0}}(\B{X};\mathcal{I}^N)$. The first work in this direction goes back to \cite{KasahKonoOgawa99}. They consider a one-parameter one-dimensional ($N=d=1$) Gaussian process $X(t)$ with stationary increments satisfying the local nondeterminism condition \cite{Xiao08}. Moreover, defining $\sigma^2(t)=\mathbb{E}\bigl[(X_t-X_s)^2\bigr]$, they assume that $\sigma(t)$ is continuous and strictly increasing on the interval $[0,1]$, that $\frac{1}{\sigma(t)}$ is integrable over $\mathcal{I}=[0,1]$, and finally $\sigma(t)$ varies regularly at $0$ with some exponent $0<H<1$, i.e., $\lim_{t\rightarrow0}\frac{\sigma(\omega t)}{\sigma(t)}=\omega^H$ for every $\omega>0$. In fact this latter condition is a gauge for asymptotic self-similarity near the origin. Under these conditions they show that the local times of $X(\cdot)$ exist, and moreover $$ 0< \liminf_{x\rightarrow +\infty}\frac{-\log\mathbb{P}[L_{0}(X,\mathcal{I})>x]}{\sigma^{-1}(\frac{1}{x})}\leq \limsup_{x\rightarrow +\infty}\frac{-\log\mathbb{P}[L_{0}(X,\mathcal{I})>x]}{\sigma^{-1}(\frac{1}{x})} <+\infty. $$ When $\sigma(t)=t^H$, which corresponds to the fractional Brownian motion of Hurst parameter $H$, the exponential decay rate $\sigma^{-1}(\frac{1}{x})$ equals $x^{\frac{1}{H}}$.
More recently, \cite{ChenLiRosinskyShao11} considers the one-parameter $d$-dimensional fractional Brownian motion $\B{B}^H(t)=(B^H_1(t), \cdots,B^H_d(t))$ and also the $d$-dimensional fractional Riemann-Liouville process $\B{W}^H(t)=(W^H_1(t), \cdots,W^H_d(t))$ where $\{B^H_i\}_{i=1}^d$ ($\{W^H_i\}_{i=1}^d$) are $d$ independent copies of a fractional Brownian motion (fractional Riemann-Liouville process) with Hurst parameter $H$. They show that the following limits exist $$ \lim_{x\rightarrow +\infty} x^{-\frac{1}{dH}}\log\mathbb{P}[L_{\B{0}}(\B{B}^H,\mathcal{I})>x] \qquad \text{and}\qquad \lim_{x\rightarrow +\infty} x^{-\frac{1}{dH}}\log\mathbb{P}[L_{\B{0}}(\B{W}^H,\mathcal{I})>x]. $$
We will prove the existence of this exponential tail-decay limit for the class of Gaussian fields that have stationary increments (Property $\mathfrak{A}_2$ below) and are `diagonally self-similar' as defined in Property $\mathfrak{A}_3$ below.
Throughout the paper we assume that the random field $\B{X}$ has both of the following two properties ($\mathfrak{A}_0$ and $\mathfrak{A}_1$).
\noindent \textbf{Property $\mathfrak{A}_0$: } There exists a positive constant $c_0>0$ such that $\var(X_{\B{t}}^i)\leq c_0$ for every $\B{t}\in[0,1]^N$ and $i=1,\cdots,d$.\\ As we do not assume any kind of continuity of $\B{X}$ or its covariance matrix, the boundedness of its variance (Property $\mathfrak{A}_0$), seems inevitable.
\noindent \textbf{Property $\mathfrak{A}_1$: }
The (N,d)-Gaussian field $\B{X}$ has the property that for any positive integer $n$,
the expression $\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)$ is integrable over $\bigl(\mathcal{I}^N\bigr)^{n}$.\\
Property $\mathfrak{A}_1$ guarantees the existence of the local times at every point, i.e. $L_{\B{x}}(\B{X};\mathcal{I}^N)$, and the finiteness of all their moments, see Proposition \ref{existence and approximation}. In fact, $\mathfrak{A}_1$ is the weakest-known sufficient condition for the existence of local time at the origin and the finiteness of all its moments.
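As a concrete illustration of Property $\mathfrak{A}_1$ (a numerical sketch added here, not part of the paper): for a one-dimensional fractional Brownian motion with Hurst index $H\in(0,1)$ one has $\var(X_t)=t^{2H}$, so $\U{K}_1(t)=(2\pi t^{2H})^{-1/2}$ is integrable over $\mathcal{I}$ with $\int_0^1 \U{K}_1(t)\,\mathrm{d}t=(2\pi)^{-1/2}/(1-H)$, which is the first moment of the local time at the origin. The sketch below compares this closed form with a midpoint quadrature in the Brownian case $H=1/2$:

```python
import math

H = 0.5                                     # Brownian-motion case
closed_form = 1.0 / (math.sqrt(2 * math.pi) * (1 - H))

# Midpoint rule; the integrand is singular (but integrable) at t = 0.
m = 1_000_000
quad = sum(((k + 0.5) / m)**(-H) / math.sqrt(2 * math.pi)
           for k in range(m)) / m
# quad and closed_form are both close to 0.7979
```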
Next we have the following two properties that form our main framework.
\noindent \textbf{Property $\mathfrak{A}_2$(Stationary Increments): }The random field $\B{X}$ is zero at the origin and has stationary increments, i.e., for any $\B{p}\in\mathbb{R}^N$ we have the following equality for every $\B{s}, \B{t}\in\mathbb{R}^N$ and $i,j\in\{1, \cdots, d\}$ $$ \mathbb{E}\bigl[X^{(i)}_{\B{s}}X^{(j)}_{\B{t}}\bigr] =\mathbb{E}\bigl[(X^{(i)}_{\B{s+p}}- X^{(i)}_{\B{p}})(X^{(j)}_{\B{t+p}}-X^{(j)}_{\B{p}})\bigr]. $$
\noindent \textbf{Property $\mathfrak{A}_3$(Diagonal Self-Similarity): } There exist a vector $\B{\alpha}=(\alpha_1, \cdots, \alpha_N)\in \mathbb{R}_+^N$ and a matrix $\BU{H}\in\mathbb{R}^{d\times d}$ with positive trace ($\textrm{tr}(\BU{H})>0$) such that for every $\omega>0$ we have \begin{equation}\label{diagonal self-similarity equality in distribution} \B{X}_{\B{t}\circ\omega^{\B{\alpha}}} \overset{d}{=}\omega^{\BU{H}} \B{X}_{\B{t}} \;;\quad \forall \omega\in\mathbb{R}_+, \end{equation}
where $\omega^{\B{\alpha}}:=(\omega^{\alpha_1}, \cdots, \omega^{\alpha_N})$, the values of $\B{X}_{\B{t}}$ are considered as $d\times 1$ matrices, $\overset{d}{=}$ means equality in finite dimensional distributions for the two random fields, and $\omega^{\BU{H}}$ denotes matrix exponential with the usual definition, i.e. $$ \omega^{\BU{H}}:=e^{\ln(\omega)\,\BU{H}} =\sum_{n=0}^{\infty}\frac{\bigl(\ln(\omega) \,\BU{H}\bigr)^n}{n!} \;;\quad \forall \omega\in \mathbb{R}_+. $$
\begin{remark} This definition is a special case of the more general concept of operator-self-similar random fields, studied e.g. in \cite{LiXiao2011}. In the general case, the vector $\B{\alpha}$ and hence $\omega^{\B{\alpha}}$ are replaced by a matrix $\BU{E}$ and its matrix exponential $\omega^{\BU{E}}$, respectively. Evidently, in this more general setting the Schur product $\omega^{\B{\alpha}}\circ \B{t}$ should be replaced by the usual matrix multiplication $\omega^{\BU{E}} \B{t}$. This justifies calling Property $\mathfrak{A}_3$ `diagonal' self-similarity. \end{remark}
\begin{remark}\label{diagonal self-similarity covariance formulation} For zero-mean Gaussian fields, Equation \eqref{diagonal self-similarity equality in distribution} in Property $\mathfrak{A}_3$ is equivalent to the following equation $$ \mathbb{E}\bigl(\BU{X}_{\B{s}\circ\omega^{\B{\alpha}}} \BU{X}_{\B{t}\circ\omega^{\B{\alpha}}}^\dagger \bigr)= \omega^{\BU{H}} \, \mathbb{E}\bigl(\BU{X}_{\B{s}} \BU{X}_{\B{t}}^\dagger\bigr)\omega^{\BU{H}^\dagger}, \quad \forall \B{s}, \B{t}\in\mathbb{R}^N, $$ where $\B{X}_{\B{t}}$ is considered as a $d\times 1$ matrix as above, and $\BU{H}^\dagger$ denotes the transpose of matrix $\BU{H}$. For more on matrix exponential see e.g. \cite[ch.2]{Hall2003}. \end{remark}
An important special case of Property $\mathfrak{A}_3$ is the following condition.
\noindent \textbf{Property $\mathfrak{A}_3^\circ$ (Two-sided Diagonal Self-Similarity): } There exist $\B{\alpha}=(\alpha_1, \cdots, \alpha_N)\in \mathbb{R}_+^N$ and $(H_1, \cdots, H_d)\in \mathbb{R}_+^d$ such that for every $\omega>0$ we have \begin{equation}\label{diagonal self-similarity covarience formula} \mathbb{E}\bigl[X^{(i)}_{\B{s}\circ\omega^{\B{\alpha}}} X^{(j)}_{\B{t}\circ\omega^{\B{\alpha}}}\bigr]= \omega^{H_i+H_j} \, \mathbb{E}\bigl[X^{(i)}_{\B{s}}X^{(j)}_{\B{t}}\bigr], \quad \forall i,j\in\{1,\cdots,d\} \;,\, \forall \B{s}, \B{t}\in\mathbb{R}^N, \end{equation} where $\omega^{\B{\alpha}}:=(\omega^{\alpha_1}, \cdots, \omega^{\alpha_N})$.
\begin{remark} For a zero-mean Gaussian field $\B{X}_{\B{t}}$, Property $\mathfrak{A}_3^\circ$ is satisfied if and only if Property $\mathfrak{A}_3$ is satisfied with $\BU{H}=\textrm{diag}(H_1, H_2, \cdots, H_d)$, i.e., the diagonal matrix whose diagonal entries are $H_1$, ..., $H_d$ (respecting the order) and which is zero elsewhere. This is true because we have $$ \omega^{\textrm{diag}(H_1, H_2, \cdots, H_d)}=\textrm{diag}(\omega^{H_1},\cdots, \omega^{H_d}). $$ \end{remark}
\begin{remark}
A very important random field satisfying both Properties $\mathfrak{A}_2$ and $\mathfrak{A}_3^\circ$ is the multi-parameter fractional Brownian motion, i.e. the centered Gaussian field with stationary increments characterized by $\mathbb{E}[(X_{\B{s}}-X_{\B{t}})^2]=|\B{s-t}|^{2H}$ for every $\B{s}, \B{t}\in\mathbb{R}^N$, where $H\in(0,1]$ is the Hurst parameter of the Gaussian field. Furthermore, the centered Gaussian $(N,d)$-field consisting of $d$ independent multi-parameter fractional Brownian motions, each with its own Hurst parameter $H_i$, also satisfies $\mathfrak{A}_2$ and $\mathfrak{A}_3$, and hence falls within the scope of this paper as well. \end{remark}
\begin{remark} Let $c_1, \cdots, c_N\in \mathbb{R}_+$, $p_1, \cdots, p_N\in (0,2]$ and $H\in(0,1]$. Consider the $(N,1)$-Gaussian field that we call `anisotropic fractional Brownian motion', i.e., the $\mathbb{R}^N$-indexed centered Gaussian field $X_{\B{s}}$ with stationary increments given by $$ \mathbb{E}[(X_{\B{s}}-X_{\B{t}})^2]=\phi(\B{s}-\B{t}), $$ where $$
\phi(\B{s})=\bigl(\sum_{i=1}^N c_i|s_i|^{p_i}\bigr)^{2H}. $$ This Gaussian field satisfies both Properties $\mathfrak{A}_2$ and $\mathfrak{A}_3^\circ$ with $\B{\alpha}:=(\frac{1}{p_1}, \cdots, \frac{1}{p_N})$ and $H_1:=H$.
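Indeed, the scaling exponents can be read off directly from $\phi$, with $\B{\alpha}=(\frac{1}{p_1}, \cdots, \frac{1}{p_N})$:

```latex
\phi(\B{s}\circ\omega^{\B{\alpha}})
=\Bigl(\sum_{i=1}^N c_i\bigl|\omega^{1/p_i}s_i\bigr|^{p_i}\Bigr)^{2H}
=\Bigl(\omega\sum_{i=1}^N c_i|s_i|^{p_i}\Bigr)^{2H}
=\omega^{2H}\,\phi(\B{s}),
```

so the variance of the increments scales with exponent $2H$, and since the field is centered and vanishes at the origin, polarization yields \eqref{diagonal self-similarity covarience formula} with $H_1=H$.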
For other interesting examples of operator-self-similar random processes and fields with stationary increments, see e.g. \cite{MaejimaMason1994}. \end{remark}
In Section \ref{MainResults} we gather all the main results of this paper. In Section \ref{Two different definitions of local times}, we discuss the pointwise versus functional definitions of local times which are relevant to our work. In Section \ref{Formulation in moments growth rate} we state the relation between the exponential decay rate of the probability tail of local times and the exponential growth rate of their moments. Sections \ref{Section on Upper bounds} and \ref{Existence of the limit} contain the technical proofs.
\section{Main Results}\label{MainResults} In this section we give some technical definitions and state our results. The proofs will come in the subsequent sections.
For every $\B{\alpha}=(\alpha_1, \alpha_2, \cdots, \alpha_N)\in \mathbb{R}_+^N$,
we define the $\B{\alpha}$-length as follows \begin{equation}\label{alpha distance}
\|\B{t}\|_{\B{\alpha}}:=\sum_{i=1}^{N}|t_i|^{1/\alpha_i}\;:\quad \forall \B{t}=(t_1, \cdots, t_N)\in \mathbb{R}^N. \end{equation}
It is evident that $\|\B{t}\|_{\B{\alpha}}$ induces a translation invariant topology on $\mathbb{R}^N$. Moreover, if $\alpha_i\geq1$ for every $i=1, \cdots, N$, then $\|\B{t}\|_{\B{\alpha}}$ defines a translation invariant metric on $\mathbb{R}^N$, which we call the $\B{\alpha}$-distance. Nevertheless, it is not a norm except in the special case where all the exponents are equal to $1$.
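A minimal numerical sketch (illustrative only; the values of $\B{\alpha}$ and $\B{t}$ below are hypothetical) of the $\B{\alpha}$-length and its characteristic homogeneity $\|\B{t}\circ\omega^{\B{\alpha}}\|_{\B{\alpha}}=\omega\,\|\B{t}\|_{\B{\alpha}}$:

```python
def alpha_length(t, alpha):
    # ||t||_alpha = sum_i |t_i|^(1/alpha_i)
    return sum(abs(ti) ** (1.0 / ai) for ti, ai in zip(t, alpha))

alpha = (2.0, 3.0)        # hypothetical exponents with alpha_i >= 1
t = (0.5, -0.2)
omega = 4.0
# Schur-type scaling: t o omega^alpha = (t_1 omega^{alpha_1}, ..., t_N omega^{alpha_N})
scaled = tuple(ti * omega ** ai for ti, ai in zip(t, alpha))
```

Each coordinate picks up the factor $\omega^{\alpha_i}$, which the exponent $1/\alpha_i$ turns back into a common factor $\omega$, so the $\B{\alpha}$-length of `scaled` is exactly $\omega$ times that of `t`.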
We introduce the following definition which generalizes the idea of Strong Local Nondeterminism to vector-valued Gaussian fields.
\begin{dfn}[Strong Local Nondeterminism]\label{local nondeterminism definition} We call a centered Gaussian $(N,d)$-field $\B{X}_{\B{t}}$ strongly locally nondeterministic over a cube $\mathcal{J}\subseteq\mathbb{R}^N$ with scaling vector $\B{\xi}:=(\xi_1, \xi_2, \cdots, \xi_N)\in\mathbb{R}_+^N$ if there exist constants $H>0$ and $C>0$
such that for any positive integer $n$, and any arbitrary vectors $\B{u}, \B{t}_1, \cdots, \B{t}_n\in \mathcal{J}$, we have $$
\detcov[\B{X}_{\B{u}}|\B{X}_{\B{t}_1}, \B{X}_{\B{t}_2},\cdots, \B{X}_{\B{t}_n}]\geq C \min_{0\leq i\leq n}
\|\B{u}-\B{t}_i\|_{\B{\alpha}}^{2H}, $$ where $\B{t}_0:=\B{0}$, the expression
$\detcov[\B{X}_{\B{u}}|\B{X}_{\B{t}_1}, \B{X}_{\B{t}_2},\cdots, \B{X}_{\B{t}_n}]$ denotes the determinant of the conditional covariance matrix of the random vector $\B{X}_{\B{u}}$ conditioned on all the random vectors $\B{X}_{\B{t}_1}, \B{X}_{\B{t}_2},\cdots, \B{X}_{\B{t}_n}$, and finally, $\B{\alpha}=(\alpha_1, \alpha_2, \cdots, \alpha_N):=H \B{\xi}$.
\end{dfn}
\begin{remark} The reason why only the normalized vector $\B{\xi}=\frac{1}{H}\B{\alpha}$ is relevant is that for any $p>0$ there exist positive constants $c_1, c_2>0$ such that for every $\B{x}=(x_1, \cdots, x_N)\in \mathbb{R}^N$ $$
c_1(\sum_{i=1}^{N}|x_i|^{1/\alpha_i})^{2H}\leq
(\sum_{i=1}^{N}|x_i|^{p/{\alpha_i}})^{\frac{2H}{p}}\leq c_2(\sum_{i=1}^{N}|x_i|^{1/\alpha_i})^{2H}. $$ In fact we have the following proposition. \end{remark} \begin{prop}\label{Equivalence of self-similar functions} Let $f:\mathbb{R}^N\rightarrow\mathbb{R}^{\geq 0}$ be a continuous function such that $f(\B{x})=0$ if and only if $\B{x}=\B{0}$, and for some vector $\B{\alpha}=(\alpha_1, \alpha_2, \cdots, \alpha_N)\in\mathbb{R}_+^N$ and $H>0$, we have $f(\B{x}\circ \omega^{\B{\alpha}})=\omega^H f(\B{x})$ for every $\B{x}\in \mathbb{R}^N$ and $\omega>0$. Then there exist constants $c_1, c_2>0$ such that for every $\B{x}=(x_1, \cdots, x_N)\in \mathbb{R}^N$ $$
c_1(\sum_{i=1}^{N}|x_i|^{1/\alpha_i})^{H}\leq f(\B{x})\leq c_2(\sum_{i=1}^{N}|x_i|^{1/\alpha_i})^{H}. $$ \end{prop} \begin{proof} In Section \ref{Section on Upper bounds}. \end{proof} \begin{remark} Let $\B{X}_{\B{t}}$ be a diagonally self-similar centered Gaussian $(N,d)$-field that satisfies Property $\mathfrak{A}_3$ with matrix $\BU{H}\in \mathbb{R}^{d\times d}$ and vector $\B{\alpha}=(\alpha_1, \cdots, \alpha_N)\in \mathbb{R}_+^N$. If $\B{X}_{\B{t}}$ is strongly locally nondeterministic with the scaling vector $\B{\xi}:=(\xi_1, \xi_2, \cdots, \xi_N)$, then it is easy to verify that $\B{\xi}=\frac{1}{\textrm{tr}(\BU{H})}\B{\alpha}$. In other words, for diagonally self-similar centered Gaussian $(N,d)$-fields, strong local nondeterminism can hold with at most one scaling vector. \end{remark}
\begin{prop}\label{parameters restrictions} Let $\B{X}_{\B{t}}$ be a diagonally self-similar centered Gaussian $(N,d)$-field with stationary increments, i.e. it satisfies Properties $\mathfrak{A}_0$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ with some matrix $\BU{H}\in \mathbb{R}^{d\times d}$ and vector $(\alpha_1, \cdots, \alpha_N)\in \mathbb{R}_+^N$. Let $\beta$ be a positive real number. If the kernel $\big(\U{K}_n^{\B{X}}\bigr)^{\beta}$ is integrable over the cube $\bigl(\mathcal{I}^N\bigr)^{n}$ for some integer $n$, then the following inequality has to hold true $$ \sum_{i=1}^{N}\alpha_i > \beta \, \textrm{tr}(\BU{H}). $$ \end{prop} \begin{proof} In Section \ref{Section on Upper bounds}. \end{proof}
\begin{lem}\label{local nondeterminism theorem}
Let $\B{X}_{\B{t}}$ be a centered Gaussian $(N,d)$-field which is strongly locally nondeterministic over $\mathcal{I}^N$ with scaling vector $\B{\xi}=(\xi_1, \xi_2, \cdots, \xi_N)$ and constant $C_0$.
Then for any positive real number $\beta$ such that $\beta <\sum_{i=1}^{N}\xi_i$
, and any positive integer $n$, the kernel $\big(\U{K}_n^{\B{X}}\bigr)^{\beta}$ is integrable over the cube $\bigl(\mathcal{I}^N\bigr)^n$, and $$ \int_{(\mathcal{I}^N)^n}\big(\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\beta}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n\leq c^n \, (n!)^{\frac{\beta}{\sum_{i=1}^{N}\xi_i}} $$ where $c$ is a constant that depends only on $C_0$, $N$, $\B{\alpha}$, $\beta$, $H$ and $d$. \end{lem} \begin{proof} In Section \ref{Section on Upper bounds}. \end{proof}
\begin{thm} Let $\B{X}_{\B{t}}$ be a centered Gaussian $(N,d)$-field that is strongly locally nondeterministic over $\mathcal{I}^N$ with
scaling vector $\B{\xi}=(\xi_1, \xi_2, \cdots, \xi_N)\in \mathbb{R}_+^N$ such that $1<\sum_{i=1}^{N}\xi_i$. Then the local times $Z_{\B{x}}:=L_{\B{x}}(\B{X},\mathcal{I}^N)$ of the random field $\B{X}_{\B{t}}$ exist at every point $\B{x}\in \mathbb{R}^d$, and $$ \mathbb{E}(Z_{\B{x}}^n)\leq c^n \, (n!)^{\frac{ 1}{\sum_{i=1}^{N}\xi_i}}, $$ for some constant $c$ which does not depend on $n$. \end{thm} \begin{proof} It is immediate from Lemma \ref{local nondeterminism theorem}. \end{proof}
\begin{dfn}
Let $\{\B{X}_{k}(\B{t}_k)\,:\,\B{t}_k\in \mathbb{R}^{N_k}\}_{k=1}^m$ be a family of $m$ independent Gaussian fields such that for every $k=1, \cdots, m$, the random field $\B{X}_{k}(\B{t})$
is a centered Gaussian $(N_k,d)$-field. We define their ($m$-fold) intersection local time around the origin and over $\mathcal{I}$ as the local time of the following $(\sum_{k=1}^m N_k,(m-1)d)$-field at $\B{0}$ and over the cube $\mathcal{I}^{\sum_{k=1}^m N_k}$ (if it exists) $$ \bigl(\B{X}_1(\B{t_1})-\B{X}_2(\B{t_2}), \B{X}_2(\B{t_2})-\B{X}_3(\B{t_3}),\cdots, \B{X}_{m-1}(\B{t_{m-1}})-\B{X}_m(\B{t_m})\bigr). $$ \end{dfn}
\begin{thm} \label{Upper bound for intersection local times} Let $m$ be a positive integer, and $\{\B{X}_{k}(\B{t}_k)\,:\,\B{t}_k\in \mathbb{R}^{N_k}\}_{k=1}^m$ be a family of $m$ independent Gaussian fields such that for every $k=1, \cdots, m$, the random field $\B{X}_{k}(\B{t}_k)$ is a centered Gaussian $(N_k,d)$-field that is strongly locally nondeterministic with scaling vector $\B{\xi}_k=(\xi_{k,1}, \xi_{k,2}, \cdots, \xi_{k,N_k})\in \mathbb{R}_+^{N_k}$. If $\sum_{k=1}^{m}\sum_{i=1}^{N_k}\xi_{k,i}>m-1$, then the $m$-fold intersection local time of the family $\{\B{X}_{k}(\cdot)\}_{k=1}^m$ over the interval $\mathcal{I}$ exists. Moreover, denoting this intersection local time by $\mathfrak{I}_{\B{X}}$ and defining $\tilde{\xi}_k:=\sum_{i=1}^{N_k}\xi_{k,i}$, for any sequence of nonnegative numbers $q_1$, $q_2$, ..., $q_m$ such that $\sum_{k=1}^{m}q_k=m-1$, and such that $q_k\leq 1$ and $q_k<\tilde{\xi}_k$ (for every $k=1, \cdots, m$), we have $$ \mathbb{E}\bigl((\mathfrak{I}_{\B{X}})^n\bigr)\leq c^n \, (n!)^{\sum_{k=1}^{m} \frac{q_k}{\tilde{\xi}_k}}, $$ where $c$ is a positive constant that does not depend on $n$.
\end{thm} \begin{proof} In Section \ref{Section on Upper bounds}. \end{proof}
\begin{thm}\label{main theorem on local times} Suppose $\B{X}_{\B{t}}$ is a centered Gaussian $(N,d)$-field satisfying Properties $\mathfrak{A}_0$, $\mathfrak{A}_1$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ with some matrix $\BU{H}\in \mathbb{R}^{d\times d}$ and vector $(\alpha_1, \cdots, \alpha_N)\in \mathbb{R}_+^N$
such that the $\alpha_i$'s are mutually rational, i.e., $\frac{\alpha_i}{\alpha_j}\in \mathbb{Q}$ for every $i$ and $j$. Then the following limits exist in $\mathbb{R}_+\cup\{+\infty\}$
, and are strictly positive $$ \lim_{x\rightarrow +\infty}\frac{-\log \mathbb{P}(Z>x)}{x^{\frac{1}{\lambda}}}\quad \text{and}\quad \lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}(Z^n)}}{n^{\lambda}},
$$ where $Z:=L_{\B{0}}(\B{X},[0,1]^N)$ and $\lambda:=\frac{\textrm{tr}(\BU{H})}{\sum_{k=1}^{N}\alpha_k}$. Moreover, if $\B{X}_{\B{t}}$ is also strongly locally nondeterministic over $\mathcal{I}^N$,
then the above limits will be finite. \end{thm}
\begin{proof} In Section \ref{Existence of the limit}. \end{proof}
\begin{remark}\label{Exceptional Case} One should note that although Properties $\mathfrak{A}_0$, $\mathfrak{A}_1$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ guarantee the convergence of the sequence $\{\frac{\bigl(\mathbb{E}(Z^n)\bigr)^{\frac{1}{n}}}{n^{\lambda}}\}_n$, they do not imply the finiteness of the limit. Probably the simplest example would be the centered Gaussian (2,1)-field $X(s,t)$ ($s,t\in \mathbb{R}$)
characterized by $X(0,0)=0$ and $\mathbb{E}\bigl[\bigl(X(s_1,t_1)-X(s_2,t_2)\bigr)^2\bigr]=|s_1-s_2|^{2H}+|t_1-t_2|^{2H}$, where $H\in(0,1)$. It clearly satisfies all the properties $\mathfrak{A}_0$, $\mathfrak{A}_1$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ with the self-similarity scaling $(\B{\alpha},H)$, where $\B{\alpha}:=(1,1)$. So by Theorem \ref{main theorem on local times}, we know that $\{\frac{\sqrt[n]{\mathbb{E}(L_0^n)}}{n^{H/2}}\}_n$ converges, where $L_0$ is the local time of $X$ around the origin on the square $\mathcal{I}^2$. On the other hand, one can easily verify that $X$ has the same law as $\{B_1(s)-B_2(t)\,:\, (s,t)\in \mathbb{R}^2\}$, where $B_1$ and $B_2$ are two independent fractional Brownian motions of Hurst parameter $H$. So $L_0$, i.e., the local time of $X$ around the origin, is the same as $\mathfrak{I}_B$, i.e., the intersection local time of two independent fractional Brownian motions with the same Hurst parameter. By \cite[Theorem 2.4]{ChenLiRosinskyShao11}, we know that $\{\frac{\sqrt[n]{\mathbb{E}(\mathfrak{I}_B^n)}}{n^{H}}\}_n$ converges to a strictly positive finite constant. This shows that the right growth exponent of $\sqrt[n]{\mathbb{E}(L_0^n)}$ is $n^{H}$. \end{remark}
\begin{cor}\label{limit theorem on intersection local times} Let $m$ be a positive integer, and $\{\B{X}_{k}(\B{t}_k)\,:\,\B{t}_k\in \mathbb{R}^{N_k}\}_{k=1}^m$ be a family of $m$ independent Gaussian fields such that for every $k=1, \cdots, m$, the random field $\B{X}_{k}(\B{t})$ is a centered Gaussian $(N_k,d)$-field satisfying Properties $\mathfrak{A}_0$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ with the self-similarity scaling matrix $\BU{H}\in \mathbb{R}^{d\times d}$ and scaling vector $\B{\alpha}_k=(\alpha_{k,1}, \cdots, \alpha_{k,N_k})\in \mathbb{R}_+^{N_k}$. If for every positive integer $n$ and every $k=1, \cdots, m$, the kernel $\big(\U{K}_n^{\B{X}_k}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\frac{m-1}{m}}$ is integrable over $\bigl(\mathcal{I}^{N_k}\bigr)^{n}$, then the $m$-fold intersection local time of $\{\B{X}_{k}\}_{k=1}^m$ on the interval $[0,1]$ exists, and
if, moreover, every pair of $\alpha_{k,i}$ and $\alpha_{l,j}$ is mutually rational, i.e., $\frac{\alpha_{k,i}}{\alpha_{l,j}}\in \mathbb{Q}$, then, denoting the $m$-fold intersection local time of $\{\B{X}_k\}_{k}$ by $\mathfrak{I}_{m}$, the following limits exist $$ \lim_{y\rightarrow +\infty}\frac{-\log \mathbb{P}(\mathfrak{I}_{m}>y)}{y^{\frac{1}{\gamma}}} \quad \text{and} \quad \lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}\bigl((\mathfrak{I}_{m})^n\bigr)}} {n^{\gamma}}, $$ where $\gamma:=\frac{(m-1)\textrm{tr}(\BU{H})} {\sum_{k=1}^{m}\sum_{i=1}^{N_k}\alpha_{k,i}}$. \end{cor} \begin{proof} In Section \ref{Existence of the limit}. \end{proof}
\begin{remark} Although this corollary affirms the convergence of $\{\frac{\sqrt[n]{\mathbb{E}\bigl((\mathfrak{I}_{m})^n\bigr)}} {n^{\gamma}}\}_n$, it does not guarantee that $\gamma=\frac{(m-1)\textrm{tr}(\BU{H})} {\sum_{k=1}^{m}\sum_{i=1}^{N_k}\alpha_{k,i}}$ is the right exponent for the growth of $\sqrt[n]{\mathbb{E}\bigl((\mathfrak{I}_{m})^n\bigr)}$. For that, we would also need the finiteness of the above limit. In Remark \ref{Exceptional Case}, we saw an example where the right growth exponent is larger than the exponent given here ($H$ instead of $H/2$). \end{remark}
\section{Definition of local times: Pointwise versus functional}\label{Two different definitions of local times} Let $\B{X}$ be an $(N,d)$-random field over a cube $\mathcal{C}({\B{p}},\B{T})$ in $\mathbb{R}^N$. Let $\mu_{\B{X}}$ be the occupation measure of $\B{X}$, i.e. for every Borel subset $\mathcal{B}\subseteq\mathbb{R}^d$ we have $$ \mu_{\B{X}}(\mathcal{B})=\lambda_N(\{\B{t}\in\mathcal{C}({\B{p}},\B{T});\; \B{X}_{\B{t}}\in\mathcal{B}\}), $$ where $\lambda_N$ denotes the Lebesgue measure on $\mathbb{R}^N$. \begin{dfn}[Functional definition of local time] If the occupation measure of $\B{X}$ is almost-surely absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, i.e. when $\mu_{\B{X}}\ll \lambda_d$, its Radon-Nikodym derivative is called the local time (or occupation density) of $\B{X}$ over $\mathcal{C}({\B{p}},\B{T})$, and we denote this function by $\bar{L}_{\B{X}}$. \end{dfn} Clearly, this definition does not provide a unique pointwise definition for the local time but a set of functions that are equal to each other almost surely. It is also clear that for any positive (or bounded) measurable function $f:\mathbb{R}^d\rightarrow \mathbb{R}$, we have $$ \int_{\mathcal{C}(\B{p},\B{T})}f(\B{X}_{\B{t}})\,\mathrm{d}\B{t}= \int_{\mathbb{R}^d}f(\B{x}) \bar{L}_{\B{X}}(\B{x})\,\mathrm{d}\B{x}. $$ A sufficient condition for the existence of the local time as defined above is the following: \begin{equation}\label{sufficient condition for existence of functional local time} \int_{\mathcal{C}(\B{p},\B{T})}\int_{\mathcal{C}(\B{p},\B{T})} \frac{1}{\sqrt{\detcov(\B{X}_{\B{t}}-\B{X}_{\B{s}})}}\, \mathrm{d}\B{t}\,\mathrm{d}\B{s}<+\infty; \end{equation} see e.g. \cite{Pitt78} or \cite{GemanHorowitz1980}. By Corollary \ref{Reduction Inequality for detCov}, we have $$ \detcov[\B{X}_{\B{t}}, \B{X}_{\B{s}}]\leq \detcov(\B{X}_{\B{t}}) \detcov(\B{X}_{\B{t}}-\B{X}_{\B{s}}).
$$ Hence conditions $\mathfrak{A}_0$ and $\mathfrak{A}_1$ imply Equation \eqref{sufficient condition for existence of functional local time}, so under these two conditions the local times exist, both in the pointwise and in the functional sense. Moreover, for every $\B{x}\in\mathbb{R}^d$ we have $$ \int_{\mathcal{B}_\varepsilon(\B{x})} \bar{L}_{\B{X}}(\B{y})\,\mathrm{d}\B{y}= \int_{\mathcal{C}(\B{p},\B{T})}
\mathbf{1}_{\{\|\B{X}_{\B{t}}-\B{x}\|<\varepsilon\}}(\B{t}) \,\mathrm{d}\B{t}, $$ hence $$ \lim_{\varepsilon\rightarrow0}\frac{1}{\vol(\mathcal{B}_\varepsilon(\B{x}))} \int_{\mathcal{B}_\varepsilon(\B{x})} \bar{L}_{\B{X}}(\B{y})\,\mathrm{d}\B{y} =L_{\B{x}}(\B{X};\mathcal{C}(\B{p},\B{T})), $$ which means that, irrespective of the chosen version of $\bar{L}_{\B{X}}$, its local average around $\B{x}$ converges to the pointwise-defined local time.
\section{Formulation in moments growth rate}\label{Formulation in moments growth rate} The following theorem is a special case of Kasahara's Tauberian theorem \cite[Theorem 4]{Kasahara78} which relates the probability tail behavior to the moments asymptotic behavior.
\begin{thm}[Kasahara 1978]\label{Kasahara} For any positive random variable $Y$, any positive number $\lambda$, and any $A\in(0,+\infty]$,
the limit $$ \lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}(Y^n)}}{n^\lambda} $$ exists and equals $A$ if and only if the limit $$ \lim_{x\rightarrow +\infty}\frac{-\log \mathbb{P}(Y>x)}{x^{\frac{1}{\lambda}}} $$ exists and equals $\frac{\lambda}{e A^{\frac{1}{\lambda}}}$. \end{thm} We aim to prove that for any centered Gaussian $(N,d)$-field $\B{X}$ satisfying conditions $\mathfrak{A}_0$, $\mathfrak{A}_1$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$, the following limit exists $$ \lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}(Z^n)}}{n^\lambda}, $$ where $Z:=L_{\B{0}}(\B{X},\mathcal{I}^N)$ and $\lambda:=\frac{ \textrm{tr}(\BU{H})}{\sum_{k=1}^{N}\alpha_k}$. Along with the above theorem, this proves the existence of the following limit as well: $$ \lim_{x\rightarrow +\infty} \frac{-\log\mathbb{P}[L_{\B{0}}(\B{X},\mathcal{I}^N)>x]}{x^{\frac{1}{\lambda}}}. $$
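As a sanity check of the correspondence (an illustrative toy case, not one of our local times): for a standard exponential variable $Y$ one has $\mathbb{E}(Y^n)=n!$, so by Stirling's formula $\sqrt[n]{\mathbb{E}(Y^n)}/n\to 1/e$, i.e. $\lambda=1$ and $A=1/e$; Kasahara's theorem then predicts a tail exponent $\frac{\lambda}{eA^{1/\lambda}}=1$, matching $-\log\mathbb{P}(Y>x)=x$ exactly.

```python
from math import lgamma, exp, e

lam = 1.0
n = 500
# n-th moment of Y ~ Exp(1) is n!; compute (n!)^(1/n) / n^lam via lgamma
A_n = exp(lgamma(n + 1) / n) / n ** lam
A = 1.0 / e                        # Stirling limit of A_n as n -> infinity
predicted_tail = lam / (e * A ** (1.0 / lam))
exact_tail = 1.0                   # -log P(Y > x) / x^{1/lam} = 1 for Exp(1)
```

The finite-$n$ ratio `A_n` already agrees with the limit $1/e$ to about two digits at $n=500$.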
We have the following proposition on the existence of the local time of zero-mean Gaussian fields at the origin and its moments. In the proof we use some arguments of \cite{Pitt78}. \begin{prop}\label{existence and approximation} For any Gaussian field $\B{X}$ satisfying condition $\mathfrak{A}_1$, the local times $Z_{\B{x}}:=L_{\B{x}}(\B{X},\mathcal{I}^N)$ exist for every $\B{x}\in\mathbb{R}^d$, and we have $$ \mathbb{E}(Z_{\B{x}}^n)\leq \int_{\prod_{i=1}^n \mathcal{I}^N} \U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n) \,\mathrm{d}\B{t}_1\cdots \mathrm{d}\B{t}_n\;:\quad \forall \B{x}\in\mathbb{R}^d, $$ and $$ \mathbb{E}(Z_{\B{0}}^n)= \int_{\prod_{i=1}^n \mathcal{I}^N} \U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n) \,\mathrm{d}\B{t}_1\cdots \mathrm{d}\B{t}_n, $$
where $\prod_{i=1}^n\mathcal{I}^N$ denotes the $n$-times Cartesian product $\mathcal{I}^N\times \cdots\times \mathcal{I}^N$, and $\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)$ is as defined in Equation \eqref{K_n Definition}.
\end{prop} \begin{proof}
We prove the proposition for $\B{x}=\B{0}$. The proof for the general $\B{x}$ is similar. For the rest of the proof, we denote $Z_{\B{0}}$ simply by $Z$. Let $\|\cdot\|$ be some arbitrary norm on $\mathbb{R}^d$ and define $Z_\varepsilon:=\int_{\mathcal{I}^N}\frac{1}{V_\varepsilon}
\mathbf{1}_{\{\|\B{X}_{\B{t}}\|<\varepsilon\}}(\B{t})\mathrm{d}\B{t}$, where $V_\varepsilon$ denotes the volume of the $d$-dimensional ball $\{x\in\mathbb{R}^d ; \|x\|<\varepsilon\}$, and $\mathbf{1}_{\{\cdot\}}$ denotes the indicator function. First we show that $\{Z_\varepsilon\}$ is Cauchy in $\mathrm{L}^n(\Omega, \mathbb{P})$. Indeed, let $\mathfrak{S}$ be the set of all possible functions from $\{1, \cdots, n\}$ into $\{0,1\}$. For any function $\sigma\in \mathfrak{S}$, we define $\xi_i^\sigma$ to be equal to $\varepsilon$ if $\sigma(i)=0$ and equal to $\delta$ if $\sigma(i)=1$. It is then easy to verify the following equality $$ \mathbb{E}\bigl[(Z_\varepsilon-Z_\delta)^n\bigr]= \int_{\prod_{i=1}^n\mathcal{I}^N} \sum_{\sigma\in \mathfrak{S}} \frac{(-1)^{\sum_{i=1}^n\sigma(i)}}{\Pi_{i=1}^n V_{\xi_i^\sigma}}
\mathbb{P}\bigl(\bigcap_{i=1}^n \{\|\B{X}_{\B{t}_i}\|<\xi_i^\sigma\}\bigr)
\mathrm{d}\B{t}_1\cdots \mathrm{d}\B{t}_n. $$ As the variables $\{\B{X}_{\B{t}}\}_{{\B{t}}}$ are jointly normal with mean zero, for every $\sigma\in \mathfrak{S}$ and each fixed $\B{t}_1, \cdots, \B{t}_n$, we have $$ \frac{1}{\Pi_{i=1}^n V_{\xi_i^\sigma}}
\mathbb{P}\bigl(\|\B{X}_{\B{t}_1}\|<\xi_1^\sigma , \ldots, \|\B{X}_{\B{t}_n}\|<\xi_n^\sigma\bigr)\leq \U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)
$$ and $$ \frac{1}{\Pi_{i=1}^n V_{\xi_i^\sigma}}
\mathbb{P}\bigl(\|\B{X}_{\B{t}_1}\|<\xi_1^\sigma , \ldots, \|\B{X}_{\B{t}_n}\|<\xi_n^\sigma\bigr) \overset{\varepsilon, \delta\downarrow 0}{\longrightarrow} \U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n).
$$ Noting that $\sum_{\sigma\in \mathfrak{S}}(-1)^{\sum_{i=1}^n\sigma(i)}=(1-1)^n=0$, by dominated convergence and Property $\mathfrak{A}_1$ we have $$ \mathbb{E}\bigl[(Z_\varepsilon-Z_\delta)^n\bigr]\overset{\varepsilon, \delta\downarrow 0}{\longrightarrow} 0, $$ which proves that $Z_\varepsilon$ is $\mathrm{L}^n(\Omega, \mathbb{P})$-Cauchy, and its limit is nothing but $Z$. Similarly, one can show that $$ \mathbb{E}(Z^n)=\lim_{\varepsilon\rightarrow 0}\mathbb{E}(Z_\varepsilon^n)=\int_{\prod_{i=1}^n \mathcal{I}^N} \U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\, \mathrm{d}\B{t}_1\cdots \mathrm{d}\B{t}_n. $$ \end{proof}
\section{Upper bound}\label{Section on Upper bounds} In this section we prove Propositions \ref{Equivalence of self-similar functions} and \ref{parameters restrictions}, Lemma \ref{local nondeterminism theorem}, and finally Theorem \ref{Upper bound for intersection local times}.
\begin{proof}[Proof of Proposition \ref{Equivalence of self-similar functions}]
First we normalize the vector $\B{\alpha}=(\alpha_1, \alpha_2, \cdots, \alpha_N)$ and $H$ so that for every $i$ we have $\alpha_i\geq 1$. This is possible because $f(\B{x}\circ \omega^{\B{\alpha}/p})=\omega^{H/p} f(\B{x})$ for every $p>0$ as well. This turns the $\B{\alpha}$-distance $d(\B{x},\B{y}):=\|\B{x}-\B{y}\|_{\B{\alpha}}$ into a translation-invariant metric on $\mathbb{R}^N$.\\
i) We denote the $\ell^2$-norm on $\mathbb{R}^N$ by $\|\cdot\|_2$, and take the standard definition for the $\B{\alpha}$- and $\ell^2$-balls centered at the origin of radius $r>0$, i.e. $B_{\B{\alpha}}(\B{0},r):=\{\B{x}\in\mathbb{R}^N :\,\|\B{x}\|_{\B{\alpha}}<r\}$ and $B_{2}(\B{0},r):=\{\B{x}\in\mathbb{R}^N :\,\|\B{x}\|_{2}<r\}$. It is easy to verify that every $\B{\alpha}$-ball centered at the origin contains an $\ell^2$-ball centered at the origin, and vice versa. This shows that the ${\B{\alpha}}$-metric induces the same topology on $\mathbb{R}^N$ as the standard topology induced by the $\ell^2$-norm. In particular, this means that the function $f$ is also continuous under the ${\B{\alpha}}$-metric. So there exists $\varepsilon>0$ such that for every $\B{x}$ we have $|f(\B{x})|<1$ if $\|\B{x}\|_{\B{\alpha}}<\varepsilon$. Now for any $\B{x}\in \mathbb{R}^N$, choose $\omega>0$ such that $\|\B{x}\circ \omega^{\B{\alpha}}\|_{\B{\alpha}}=\omega \|\B{x}\|_{\B{\alpha}}=\varepsilon/2$. It follows that $|f(\B{x}\circ \omega^{\B{\alpha}})|<1$, which means $|f(\B{x})|<\frac{2^H}{\varepsilon^H}\|\B{x}\|_{\B{\alpha}}^H$.\\
ii) As the set $\mathcal{S}_{\B{\alpha}}:=\{\B{x}\in \mathbb{R}^N: \; \|\B{x}\|_{\B{\alpha}}=1\}$ is compact, the continuous function $f$ attains a strictly positive minimum over $\mathcal{S}_{\B{\alpha}}$. In other words, there exists $\delta>0$ such that for every $\B{x}\in \mathcal{S}_{\B{\alpha}}$ we have $f(\B{x})\geq \delta$. Now for a general $\B{x}\in \mathbb{R}^N$, we can choose $\omega>0$ such that $\|\B{x}\circ \omega^{\B{\alpha}}\|_{\B{\alpha}}=1$. So we have $|f(\B{x}\circ \omega^{\B{\alpha}})|\geq \delta$, hence $|f(\B{x})|\geq \delta \|\B{x}\|_{\B{\alpha}}^H$. \end{proof}
\begin{proof}[Proof of Proposition \ref{parameters restrictions}] We prove it for the case of $n=1$. The proof for larger $n$'s is similar.\\ Let the surface $\mathcal{S}_{\B{\alpha}}^+\subset\mathbb{R}^N$ be defined as follows $$ \mathcal{S}_{\B{\alpha}}^+:=\{(x_1, x_2, \cdots, x_N)\in \mathbb{R}_{+}^N: \; \sum_{i=1}^{N} \sqrt[\alpha_i]{x_i}=1\}. $$ By diagonal self-similarity (Property $\mathfrak{A}_3$) and noting Remark \ref{diagonal self-similarity covariance formulation}, for any $\B{t}\in \mathbb{R}^N$ and any $\omega>0$ we have $$ \detcov (\B{X}_{\omega^{\B{\alpha}}\circ\B{t}})=\det (\omega^{\BU{H}})\, \detcov(\B{X}_{\B{t}})\, \det(\omega^{\BU{H}^\dagger})= \omega^{2\textrm{tr}(\BU{H})}\,\detcov(\B{X}_{\B{t}}), $$ where we used the fact that the determinant of the exponential of a matrix equals the exponential of its trace, i.e., $\det(e^{\BU{H}})=e^{\textrm{tr}(\BU{H})}$ (see e.g. \cite[ch.2]{Hall2003}). Suppose $\B{s}\mapsto\B{\sigma}_{\B{s}}$ is an arbitrary parametrization of the surface $\mathcal{S}_{\B{\alpha}}^+$, where $\B{\sigma}_{\B{s}}=(\sigma_1(\B{s}), \cdots,\sigma_N(\B{s}))$ and $\B{s}=(s_1, \cdots, s_{N-1})$. Then using the change of variables $(\omega,\B{s})$ with $\B{t}=\omega^{\B{\alpha}}\circ\B{\sigma}_{\B{s}}$, we have $$ \begin{aligned} \int_{[0,1]^N} \bigl(\U{K}_1(\B{t})\bigr)^{\beta}\, \mathrm{d}\B{t}&\geq \int_{0}^1 \int_{\B{\sigma}^{-1}(\mathcal{S}_{\B{\alpha}}^+)} \omega^{\sum_{i=1}^N\alpha_i-1} \bigl(\U{K}_1(\omega^{\B{\alpha}}\circ\B{\sigma}_{\B{s}})\bigr)^{\beta}\, J_\sigma(\B{s}) \mathrm{d}\B{s}\,\mathrm{d}\omega\\ &=\int_{0}^1 \frac{\omega^{\sum_{i=1}^N\alpha_i-1}}{\omega^{\beta\, \textrm{tr}(\BU{H})}}\mathrm{d}\omega \int_{\B{\sigma}^{-1}(\mathcal{S}_{\B{\alpha}}^+)} \bigl(\U{K}_1(\B{\sigma}_{\B{s}})\bigr)^{\beta}\, J_\sigma(\B{s})\mathrm{d}\B{s}, \end{aligned} $$ where $J_\sigma(\B{s})$ is the absolute value of the following determinant \begin{equation}\label{J_sigma} \begin{vmatrix}
\alpha_1 \sigma_1 & \frac{\partial \sigma_1}{\partial s_1} & \frac{\partial \sigma_1}{\partial s_2} & \dots & \frac{\partial \sigma_1}{\partial s_{N-1}} \\
\vdots & \vdots & \vdots& \dots & \vdots \\
\alpha_N \sigma_N & \frac{\partial \sigma_N}{\partial s_1} & \frac{\partial \sigma_N}{\partial s_2}& \dots & \frac{\partial \sigma_N}{\partial s_{N-1}} \end{vmatrix}. \end{equation}
This implies that if $\bigl(\U{K}_1(\B{t})\bigr)^{\beta}$ is integrable over the cube $[0,1]^N$, then the following inequality has to hold true $$ \sum_{i=1}^N\alpha_i> \beta\, \textrm{tr}(\BU{H}). $$ \end{proof}
Now we turn to the proof of Lemma \ref{local nondeterminism theorem}. First we need a definition. \begin{dfn}\label{narrowing definition} Consider $n\in\mathbb{N}$, and a sequence of vectors $\B{t}_1$, ..., $\B{t}_n\in\mathbb{R}^N$, and let $d$ be a metric on $\mathbb{R}^N$. We say that the sequence is `narrowing' with respect to the metric $d$ if for any $k=1, \cdots, n-1$ we have $$ d(\B{t}_{k+1}, \B{t}_k)=\min_{i=1,\cdots, k}d(\B{t}_{k+1},\B{t}_{i}). $$ \end{dfn} \begin{remark}\label{arranging into narrowing} For any fixed metric $d$, every finite subset $\mathcal{F}\subset\mathbb{R}^N$ of $n$ points can be arranged in such a way that the ordered sequence is narrowing with respect to the metric $d$. Indeed, the procedure is simple: start with an arbitrary point in $\mathcal{F}$ and label it $\B{t}_{n}$. Having chosen $\B{t}_{n}$, $\B{t}_{n-1}$, ..., $\B{t}_{k}$, choose among the remaining points, i.e., from $\mathcal{F}\setminus\{\B{t}_{n}, \B{t}_{n-1}, \ldots,\B{t}_{k}\}$, the one closest to $\B{t}_{k}$ and label it $\B{t}_{k-1}$. \end{remark}
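The rearrangement procedure of Remark \ref{arranging into narrowing} is easily made algorithmic. The following sketch (illustrative only; the metric is passed as a function and the point set is arbitrary) builds the labelling backwards and then checks Definition \ref{narrowing definition} directly:

```python
def narrowing_order(points, d):
    # label an arbitrary point t_n, then repeatedly pick the closest
    # remaining point to the last chosen one (t_{k-1} closest to t_k)
    remaining = list(points)
    reversed_order = [remaining.pop(0)]      # arbitrary start: this is t_n
    while remaining:
        last = reversed_order[-1]
        nxt = min(remaining, key=lambda p: d(p, last))
        remaining.remove(nxt)
        reversed_order.append(nxt)
    return reversed_order[::-1]              # t_1, t_2, ..., t_n

def is_narrowing(seq, d):
    # d(t_{k+1}, t_k) = min_{i <= k} d(t_{k+1}, t_i) for every k
    return all(d(seq[k + 1], seq[k]) == min(d(seq[k + 1], seq[i]) for i in range(k + 1))
               for k in range(len(seq) - 1))

euclid = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
pts = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0), (7.0, 0.0), (2.0, 5.0)]
order = narrowing_order(pts, euclid)
```

The key observation is that when $\B{t}_{k}$ is chosen, the still-unlabelled points are exactly $\B{t}_{1}, \ldots, \B{t}_{k-1}$, so the greedy choice realises the minimum required by the definition.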
We will need a theorem from \cite{Tassiulas1997} which relates our problem to the nearest neighbor strategy for solving the travelling salesman problem. We proceed with some preliminary definitions from \cite{Tassiulas1997}.
For a given metric $d$ on $\mathbb{R}^N$, the diameter of a subset $\mathcal{C}\subseteq\mathbb{R}^N$ is defined as $$ D_d(\mathcal{C}):=\sup_{\B{x}, \B{y}\in \mathcal{C}} d(\B{x}, \B{y}). $$ A family $\mathcal{P}=\{\mathcal{C}_l\}_{l=1}^{P}$ of subsets of $\mathbb{R}^N$ is called a covering of subset $\mathcal{A}\subset\mathbb{R}^N$ if $\mathcal{A}\subseteq\bigcup_{l=1}^{P}\mathcal{C}_l$. The diameter of a covering $\mathcal{P}$, denoted by $D_d(\mathcal{P})$ is defined as the maximum diameter of its elements, i.e. $$ D_d(\mathcal{P})=\max_{l=1,\cdots,P} D_d(\mathcal{C}_l). $$ For any $n$ distinct points in $\mathbb{R}^N$, an arrangement (ordering) $\B{t}_{1}$, $\B{t}_{2}$, ..., $\B{t}_{n}$ of the points is a `nearest neighbor tour' if it satisfies the following property $$ d(\B{t}_{k+1}, \B{t}_{k})=\min_{i=k+1, \cdots, n} d(\B{t}_{k}, \B{t}_{i})\;: \quad \forall k=1,\cdots, n-1. $$ For every nearest neighbor tour $\mathfrak{T}$ of $n$ points, one defines its (loop) length as $$ \mathbb{L}_d(\mathfrak{T}):=\sum_{k=1}^{n}d(\B{t}_{k+1}, \B{t}_{k}), $$ where $\B{t}_{n+1}:=\B{0}$. We are interested in finding a general upper bound on this length when all the points of the tour are required to lie inside the unit cube $\mathcal{I}^N$. Indeed, the `worst case length' of $n$-point nearest neighbor (NN) tours over a subset $\mathcal{A}\subset\mathbb{R}^N$ is defined as follows $$
\mathbb{L}_d(n;\mathcal{A}):=\sup_{\substack{\mathcal{F}\subset\mathcal{A}\\|\mathcal{F}|=n}} \max_{\mathfrak{T}\in \textrm{NN}(\mathcal{F})} \mathbb{L}_d(\mathfrak{T}), $$
where $|\mathcal{F}|$ denotes the cardinality of the set $\mathcal{F}$, and $\textrm{NN}(\mathcal{F})$ is the set of all possible nearest neighbor tours of the points of $\mathcal{F}$. Then we have the following theorem from \cite{Tassiulas1997}. \begin{thm}\label{nearest neighbor covering bound} Let $d$ be a metric on $\mathbb{R}^N$, let $\mathcal{A}\subset\mathbb{R}^N$, and let $\{\mathcal{P}_m\}_{m=1}^M$ be a sequence of coverings of $\mathcal{A}$ with decreasing diameters, i.e., $D_d(\mathcal{P}_m)\geq D_d(\mathcal{P}_{m+1})$ for every $m=1,\cdots, M-1$. Then the worst case length of $n$-point nearest neighbor tours is bounded as follows $$
\mathbb{L}_d(n;\mathcal{A})\leq nD_d(\mathcal{P}_M)+\sum_{m=2}^M |\mathcal{P}_m|\bigl(D_d(\mathcal{P}_{m-1})-D_d(\mathcal{P}_m)\bigr)+
|\mathcal{P}_1|\bigl(D_d(\mathcal{A})-D_d(\mathcal{P}_1)\bigr), $$
where $|\mathcal{P}_m|$ denotes the cardinality of $\mathcal{P}_m$. \end{thm} We should mention that this theorem has been stated in \cite{Tassiulas1997} only for the case where the metric $d$ comes from a norm on $\mathbb{R}^N$. Nevertheless, with a careful examination of their proof one can verify that their arguments work even when $d$ is a general metric on $\mathbb{R}^N$.
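The nearest neighbor tours in the definition above can likewise be built greedily; the following Python sketch (our own illustration, names ours) constructs such a tour for an arbitrary metric and computes its loop length, which closes at the origin as in the definition of $\mathbb{L}_d(\mathfrak{T})$:

```python
import math

def nn_tour(points, d):
    """Build a nearest neighbor tour: start anywhere, then repeatedly move to
    the closest not-yet-visited point (the defining NN property)."""
    remaining = list(points)
    tour = [remaining.pop(0)]
    while remaining:
        j = min(range(len(remaining)), key=lambda i: d(tour[-1], remaining[i]))
        tour.append(remaining.pop(j))
    return tour

def loop_length(tour, d):
    """Loop length: sum of consecutive distances, with t_{n+1} := 0."""
    origin = tuple(0.0 for _ in tour[0])
    return sum(d(tour[k], tour[k + 1]) for k in range(len(tour) - 1)) \
        + d(tour[-1], origin)
```

By construction, every step of `nn_tour` goes to the closest remaining point, so the output satisfies the nearest neighbor property with respect to `d`.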
Now we are ready to prove the following theorem. \begin{thm}\label{nearest neigbor bound theorem}
Let $N\in\mathbb{N}$ with $N\geq 2$, and let $\B{\alpha}=(\alpha_1, \alpha_2, \cdots, \alpha_N)\in [1,\infty)^N$. Then the worst case length of nearest neighbor tours of $n$-point subsets of $[0,1]^N$ with respect to the metric $\|\cdot\|_{\B{\alpha}}$ defined in \eqref{alpha distance} is bounded as follows $$ \mathbb{L}_{\B{\alpha}}(n;[0,1]^N)\leq c_{\B{\alpha}}\, n^{1-\frac{1}{\sum_{i=1}^{N}\alpha_i}}, $$ where $c_{\B{\alpha}}$ is a constant that depends only on $\B{\alpha}$. \end{thm} \begin{proof} For every $m\in\mathbb{N}$, we define the set of points $\mathcal{G}_m\subset [0,1]^N$ as $$
\mathcal{G}_m:=\Bigl\{\,(\frac{\ell_1}{m^{\alpha_1}}, \frac{\ell_2}{m^{\alpha_2}}, \cdots, \frac{\ell_N}{m^{\alpha_N}})\,\Big|\, \ell_i\in \{0, 1, \cdots, \lceil m^{\alpha_i}\rceil-1\}:\, \forall i=1, 2, \cdots, N\Bigr\}, $$ where $\lceil\cdot\rceil$ denotes the ceiling function, i.e., $\lceil x\rceil$ is the smallest integer that is larger than or equal to $x$. For every $\B{p}\in\mathcal{G}_m$, we define the sub-cube $\mathcal{C}(\B{p},m^{-\B{\alpha}})$ as usual, i.e., $$ \mathcal{C}(\B{p},m^{-\B{\alpha}}) := \B{p}+\prod_{i=1}^{N}[0,\frac{1}{m^{\alpha_i}}]. $$ So for any positive integer $m$ we define the following covering of the set $[0,1]^N$ $$ \mathcal{P}_m:=\{\mathcal{C}(\B{p},m^{-\B{\alpha}})\;: \quad\B{p}\in \mathcal{G}_m\}. $$ We note that, with respect to the metric $\|\cdot\|_{\B{\alpha}}$, for every $m$ we have $$ D_{\B{\alpha}}(\mathcal{P}_m)=\frac{N}{m}, \qquad \text{and} \qquad
|\mathcal{P}_m|=\prod_{i=1}^{N}\lceil m^{\alpha_i}\rceil. $$ So by Theorem \ref{nearest neighbor covering bound}, for every $M\in\mathbb{N}$ we have $$ \begin{aligned} \mathbb{L}_{\B{\alpha}}(n;[0,1]^N) & \leq \frac{n N}{M}+N \sum_{m=2}^{M}\Bigl(\frac{1}{m-1}-\frac{1}{m}\Bigr)\prod_{i=1}^{N}\lceil m^{\alpha_i}\rceil \\& \leq \frac{n N}{M}+N 2^{N+1}\sum_{m=2}^{M} m^{\sum_{i=1}^N\alpha_i-2}, \end{aligned} $$ where we used $\lceil m^{\alpha_i}\rceil\leq 2 m^{\alpha_i}$ and $\frac{1}{m-1}-\frac{1}{m}\leq \frac{2}{m^2}$ for $m\geq 2$. Since $N\geq 2$ and $\alpha_i\geq 1$ for every $i$, we have $\sum_{i=1}^N\alpha_i\geq 2$, so each summand is bounded by the last one, and hence $$ \mathbb{L}_{\B{\alpha}}(n;[0,1]^N) \leq \frac{n N}{M}+N 2^{N+1} M^{\sum_{i=1}^N\alpha_i-1}. $$ Choosing $M:=\lceil n^{\frac{1}{\sum_{i=1}^N\alpha_i}}\rceil$, so that both terms are of order $n^{1-\frac{1}{\sum_{i=1}^{N}\alpha_i}}$, we get the desired bound. \end{proof}
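For the record, the choice of $M$ in the last step simply balances the two competing terms; writing $s:=\sum_{i=1}^{N}\alpha_i$ and letting $C_N$ stand for the constant in front of the second term, the one-line optimization reads:

```latex
% The bound has the form  \mathbb{L}_{\B{\alpha}}(n;[0,1]^N) \le nN/M + C_N M^{s-1},
% and the two terms are of the same order exactly when
\frac{nN}{M}\asymp M^{s-1}
\quad\Longleftrightarrow\quad
M\asymp n^{1/s},
\qquad\text{which gives}\qquad
\mathbb{L}_{\B{\alpha}}(n;[0,1]^N)\le c_{\B{\alpha}}\, n^{1-\frac{1}{s}}.
```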
We are now ready to prove Lemma \ref{local nondeterminism theorem}.
\begin{proof}[Proof of Lemma \ref{local nondeterminism theorem}]
Choose $\B{\alpha}=(\alpha_1, \alpha_2, \cdots, \alpha_N)\in\mathbb{R}^N_+$ and $H>0$ such that for every $i=1, \cdots, N$ we have $\alpha_i\geq 1$ and $\alpha_i=H\xi_i$. First we prove the lemma in the case $N\geq 2$. We define $\mathcal{N}_{n, \B{\alpha}}$ as the set of all $n$-tuples $(\B{t}_{1}, \B{t}_{2}, \cdots, \B{t}_{n})\in \mathcal{I}^N\times\cdots\times\mathcal{I}^N$ such that the sequence $\B{t}_{1}$, $\B{t}_{2}$, ..., $\B{t}_{n}$ is narrowing with respect to the $\B{\alpha}$-distance defined by $\|\cdot\|_{\B{\alpha}}$ in Equation \eqref{alpha distance}. Then by Remark \ref{arranging into narrowing}, and the fact that $\U{K}_n^{\B{X}}(\B{t}_1,\B{t}_2, \cdots, \B{t}_n)$ is symmetric with respect to its arguments, i.e., permutation-invariant, we have \begin{equation}\label{permutation sum} \int_{(\mathcal{I}^N)^n}\bigl(\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\beta}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n\leq n! \int_{\mathcal{N}_{n, \B{\alpha}}}\bigl(\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\beta}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n. \end{equation} By strong local nondeterminism, for any narrowing sequence $\B{t}_1$, $\B{t}_2$, ..., $\B{t}_n$ we have $$
\detcov[\B{X}_{\B{t}_k}|\B{X}_{\B{t}_1}, \B{X}_{\B{t}_2},\cdots, \B{X}_{\B{t}_{k-1}}]\geq C \min\{\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}}^{2H}, \|\B{t}_k\|_{\B{\alpha}}^{2H}\}\;:\quad \forall k=2, \cdots,n. $$ So by Proposition \ref{Inequality for detCov}, with the convention $\B{t}_0:=\B{0}$, we have \begin{equation}\label{bounding the kernel by strong nondeterminism} \U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\leq c_1^{n}\prod_{k=1}^{n} \frac{1}{\min\{\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}}^H, \|\B{t}_k\|_{\B{\alpha}}^H\}}, \end{equation} where $c_1:=\frac{1}{\sqrt{C(2\pi)^d}}$.
On the other hand, it is easy to verify that if a sequence $\B{t}_{1}$, $\B{t}_{2}$, ..., $\B{t}_{n}$ is narrowing, then its reversal $\B{t}_{n}$, $\B{t}_{n-1}$, ..., $\B{t}_{1}$ is a nearest neighbor tour; the same holds for the first $k$ points, for every $k\leq n$. Hence by Theorem \ref{nearest neigbor bound theorem}, for any $(\B{t}_{1}, \B{t}_{2}, \cdots, \B{t}_{n})\in \mathcal{N}_{n, \B{\alpha}}$ and every $k=2,\cdots,n$, we have $$ \sum_{j=2}^{k}\|\B{t}_j-\B{t}_{j-1}\|_{\B{\alpha}}\leq \mathbb{L}_{\B{\alpha}}(k; \mathcal{I}^N)\leq c_{\B{\alpha}}\, k^{1-\frac{1}{\sum_{i=1}^{N}\alpha_i}}, $$ so, summing by parts (Abel summation) and using $\|\B{t}_1-\B{t}_0\|_{\B{\alpha}}=\|\B{t}_1\|_{\B{\alpha}}\leq N$ (with the convention $\B{t}_0:=\B{0}$), \begin{equation}\label{linear in n upper bound on the exponent} \sum_{k=1}^{n}k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}}\leq c_2 \, n, \end{equation} where $c_2$ is a constant that only depends on $N$ and $\B{\alpha}$. So \eqref{linear in n upper bound on the exponent} and \eqref{bounding the kernel by strong nondeterminism} imply $$ \begin{aligned} &\int_{\mathcal{N}_{n, \B{\alpha}}}\bigl(\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\beta}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n\\ &\quad \leq e^{c_2\,n} \int_{\mathcal{N}_{n, \B{\alpha}}} c_1^{n\beta}\prod_{k=1}^{n} \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}\min\{
\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}},
\|\B{t}_k\|_{\B{\alpha}}\}\bigr)} {\min\{
\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}}^{\beta H},
\|\B{t}_k\|_{\B{\alpha}}^{\beta H}\}}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n\\ &\quad \leq c_3^n \int_{\mathbb{R}^{nN}} \prod_{k=1}^{n} \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}\min\{
\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}},
\|\B{t}_k\|_{\B{\alpha}}\}\bigr)} {\min\{
\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}}^{\beta H},
\|\B{t}_k\|_{\B{\alpha}}^{\beta H}\}} \mathrm{d}\B{t}_1 \cdots \mathrm{d}\B{t}_n\\
&\quad \leq c_3^n \int_{\mathbb{R}^{nN}} \prod_{k=1}^{n} \biggl( \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}
\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}}\bigr)}
{\|\B{t}_k-\B{t}_{k-1}\|_{\B{\alpha}}^{\beta H}}+ \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}
\|\B{t}_k\|_{\B{\alpha}}\bigr)}
{\|\B{t}_k\|_{\B{\alpha}}^{\beta H}}\biggr) \mathrm{d}\B{t}_1 \cdots \mathrm{d}\B{t}_n \end{aligned} $$ where $c_3:=e^{c_2}c_1^\beta$. Let $\mathfrak{S}$ be the set of all possible functions from $\{1, \cdots, n\}$ into $\{0,1\}$, and for any function $\vartheta\in \mathfrak{S}$ and any $k\in\{1,\cdots, n\}$, define $$ \B{y}_k^\vartheta= \left\{ \begin{array}{lr} \B{t}_k, & \text{if } \vartheta(k)=0\\ \B{t}_k-\B{t}_{k-1}, & \text{if } \vartheta(k)=1. \end{array}\right. $$ Then the last inequality can be written as follows \begin{equation}\label{upperbound on integral of kernel} \begin{aligned} &\int_{\mathcal{N}_{n, \B{\alpha}}}\big(\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\beta}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n\\ &\quad \leq c_3^n \sum_{\vartheta\in \mathfrak{S}} \int_{\mathbb{R}^{nN}} \prod_{k=1}^{n} \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}
\|\B{y}_k^\vartheta\|_{\B{\alpha}}\bigr)}
{\|\B{y}_k^\vartheta\|_{\B{\alpha}}^{\beta H}} \mathrm{d}\B{t}_1 \cdots \mathrm{d}\B{t}_n\\ &\quad = c_3^n \sum_{\vartheta\in \mathfrak{S}} \int_{\mathbb{R}^{nN}} \prod_{k=1}^{n} \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}
\|\B{y}_k^\vartheta\|_{\B{\alpha}}\bigr)}
{\|\B{y}_k^\vartheta\|_{\B{\alpha}}^{\beta H}} \mathrm{d}\B{y}_1^\vartheta \cdots \mathrm{d}\B{y}_n^\vartheta\\ &\quad = c_3^n \sum_{\vartheta\in \mathfrak{S}} \prod_{k=1}^{n} \int_{\mathbb{R}^{N}} \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}
\|\B{y}\|_{\B{\alpha}}\bigr)}
{\|\B{y}\|_{\B{\alpha}}^{\beta H}} \mathrm{d}\B{y}. \end{aligned} \end{equation} Define the surface $\mathcal{S}_{\B{\alpha}}\subset\mathbb{R}^N$ as follows $$
\mathcal{S}_{\B{\alpha}}:=\{(x_1, x_2, \cdots, x_N)\in \mathbb{R}^N: \; \sum_{i=1}^{N} \sqrt[\alpha_i]{|x_i|}=1\}, $$ and take an arbitrary parametrization $\B{s}\mapsto\B{\sigma}_{\B{s}}$ of $\mathcal{S}_{\B{\alpha}}$, where $\B{\sigma}_{\B{s}}=(\sigma_1(\B{s}), \cdots,\sigma_N(\B{s}))$ and $\B{s}=(s_1, \cdots, s_{N-1})$. Then using the change of variables $(r,\B{s})$ with $\B{y}=r^{\B{\alpha}}\circ\B{\sigma}_{\B{s}}$, we have \begin{equation}\label{passing to radial and factorizing k out} \begin{aligned} \int_{\mathbb{R}^{N}} \frac{\exp\bigl(-k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}}
\|\B{y}\|_{\B{\alpha}}\bigr)}
{\|\B{y}\|_{\B{\alpha}}^{\beta H}} \mathrm{d}\B{y} &=
|\mathcal{S}_{\B{\alpha}}^\circ| \int_{0}^{+\infty} \frac{\exp\bigl(-r k^{\frac{1}{\sum_{i=1}^{N}\alpha_i}} \bigr) r^{\sum_{i=1}^{N}\alpha_i}} {r^{\beta H+1}} \mathrm{d}r\\
&=|\mathcal{S}_{\B{\alpha}}^\circ| \frac{k^{\frac{\beta H}{\sum_{i=1}^{N}\alpha_i}}}{k} \int_{0}^{+\infty} \frac{e^{-r}}{r} r^{\sum_{i=1}^{N}\alpha_i-\beta H} \mathrm{d}r \end{aligned} \end{equation} where $$
|\mathcal{S}_{\B{\alpha}}^\circ|:= \int_{\B{\sigma}^{-1}(\mathcal{S}_{\B{\alpha}})} J_\sigma(\B{s})\mathrm{d}\B{s}, $$ and $J_\sigma$ is the absolute value of the determinant given in Equation \eqref{J_sigma}.
It is important to note that the right-hand side of \eqref{passing to radial and factorizing k out} is finite only if $\sum_{i=1}^{N}\alpha_i>\beta H$. So by Equations \eqref{upperbound on integral of kernel} and \eqref{passing to radial and factorizing k out} we get \begin{equation}\label{final upperbound on restricted integral of kernel} \int_{\mathcal{N}_{n, \B{\alpha}}}\big(\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\beta}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n \leq \, c_4^n \, \frac{(n!)^{\frac{\beta H}{\sum_{i=1}^{N}\alpha_i}}}{n!}, \end{equation} where $$ c_4:=2 c_3
|\mathcal{S}_{\B{\alpha}}^\circ| \int_{0}^{+\infty} \frac{e^{-r}}{r} r^{\sum_{i=1}^{N}\alpha_i-\beta H}\,\mathrm{d}r. $$ So finally, plugging Equation \eqref{final upperbound on restricted integral of kernel} into Equation \eqref{permutation sum}, we get $$ \int_{(\mathcal{I}^N)^n}\bigl(\U{K}_n^{\B{X}}(\B{t}_1, \cdots, \B{t}_n)\bigr)^{\beta}\mathrm{d}\B{t}_1\cdots\mathrm{d}\B{t}_n\leq c_4^n \, (n!)^{\frac{\beta H}{\sum_{i=1}^{N}\alpha_i}}. $$ The proof in the case $N=1$ is similar, except that instead of arranging the $\B{t}_i$'s according to the narrowing property, we order them by the natural ordering on $\mathbb{R}$; the rest of the proof remains essentially the same. \end{proof}
\begin{proof}[Proof of Theorem \ref{Upper bound for intersection local times}] Let us define $\tilde{d}:=(m-1)d$, $\tilde{N}:=\sum_{i=1}^{m}N_i$, and the following vector $$ \B{\Delta}_{\B{\tilde{t}}}:=\bigl(\B{X}_1(\B{t_1})-\B{X}_2(\B{t_2}), \B{X}_2(\B{t_2})-\B{X}_3(\B{t_3}),\cdots, \B{X}_{m-1}(\B{t_{m-1}})-\B{X}_m(\B{t_m})\bigr)\in\mathbb{R}^{\tilde{d}}. $$ Evidently, $\B{\Delta}_{\B{\tilde{t}}}$ is a centered Gaussian $(\tilde{N},\tilde{d})$-field.
Take an arbitrary positive integer $n$, and consider any family of points $\{\B{t}_k^i\}_{i,k}$ where $k=1,\cdots,m$, $i=1,\cdots, n$, such that for every $i$ and $k$ we have $\B{t}_k^i\in\mathcal{I}^{N_k}$. Note that the superscript $i$ in $\B{t}_k^i$ is simply an index and should not be confused with an exponent. For any such family of points, and for any $i=1,\cdots, n$, we define the vector $\B{\tilde{t}}_i:=(\B{t}_1^i, \cdots, \B{t}_m^i)\in \mathbb{R}^{\tilde{N}}$. So we are interested in $$ \detcov[\B{\Delta}_{\B{\tilde{t}}_1}, \B{\Delta}_{\B{\tilde{t}}_2}, \cdots, \B{\Delta}_{\B{\tilde{t}}_n}], $$ where $\B{\tilde{t}}_1, \cdots, \B{\tilde{t}}_n\in \mathcal{I}^{\tilde{N}}$.
We first note that the determinant of the covariance matrix of a random vector is invariant under permutations of the entries of the random vector. In other words, for any $d$ dimensional random vector $(Y_1,\cdots,Y_d)$, and $\sigma$ an arbitrary permutation of the set $\{1,2,\cdots,d\}$, we have $$ \detcov[Y_{\sigma(1)}, Y_{\sigma(2)}, \cdots, Y_{\sigma(d)}]= \detcov[Y_{1}, Y_{2}, \cdots, Y_{d}]. $$ This is true because any permutation of a vector is equivalent to multiplying it with a permutation matrix, and the determinant of any permutation matrix is $\pm1$. Using this fact, it is easy to verify the following equality $$ \detcov[\B{\Delta}_{\B{\tilde{t}}_1}, \B{\Delta}_{\B{\tilde{t}}_2}, \cdots, \B{\Delta}_{\B{\tilde{t}}_n}]=\detcov[\B{A}_1-\B{A}_2, \B{A}_2-\B{A}_3, \cdots, \B{A}_{m-1}-\B{A}_m], $$ where the random vectors $\B{A}_1$, ..., $\B{A}_m$ are defined as follows $$ \B{A}_k:= \bigl(\B{X}_k(\B{t}_k^1),\B{X}_k(\B{t}_k^2),\cdots,\B{X}_k(\B{t}_k^n)\bigl)\in \mathbb{R}^{nd}\;; \quad \forall k=1,\cdots,m. $$
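The permutation invariance used above is immediate to confirm numerically; a standalone sketch (assuming numpy is available; an illustration, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random covariance matrix C = B B^T (positive semidefinite, here full rank).
B = rng.standard_normal((5, 5))
C = B @ B.T

# Permuting the entries of the random vector conjugates C by a permutation
# matrix P; since det(P) = +-1, det(P C P^T) = det(C).
perm = rng.permutation(5)
P = np.eye(5)[perm]
assert np.isclose(np.linalg.det(P @ C @ P.T), np.linalg.det(C))
```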
Note that for any positive integer $m$, any sequence of Gaussian random vectors of the same size $\B{A}_1, \B{A}_2, \cdots,\B{A}_m$, and for any $k=1,\cdots, m$, we have $$ \detcov\bigl[\B{A}_1, \cdots, \B{A}_m]=\detcov[\B{A}_1, \cdots, \B{A}_{k-1}, \B{A}_k+\sum_{j\neq k} c_j\B{A}_j, \B{A}_{k+1}, \cdots, \B{A}_m\bigr] $$ where $\sum_{j\neq k}c_j\B{A}_j$ is any arbitrary linear combination of all the involved vectors excluding $\B{A}_k$. Using this simple fact we have $$ \begin{aligned} &\detcov\bigl[\B{A}_1-\B{A}_k, \cdots, \B{A}_{k-1}-\B{A}_k, \B{A}_{k+1}-\B{A}_k, \cdots, \B{A}_{m}-\B{A}_k\bigr]=\\ &\qquad=\detcov\bigl[\B{A}_1-\B{A}_2, \B{A}_2-\B{A}_3, \cdots, \B{A}_{m-1}-\B{A}_m\bigr]\quad:\quad \forall k=1, \cdots, m. \end{aligned} $$ So using Proposition \ref{Reduction Inequality for detCov}, for every $k$ we have \begin{equation}\label{simple lower bound for intersection detCov} \detcov[\B{A}_1-\B{A}_2, \B{A}_2-\B{A}_3, \cdots, \B{A}_{m-1}-\B{A}_m]\geq \frac{\detcov[\B{A}_1, \B{A}_2, \cdots, \B{A}_m]}{\detcov[\B{A}_k]}. \end{equation} Let $p_k:=1-q_k$ for every $k=1,\cdots, m$.
By \eqref{simple lower bound for intersection detCov}, and noting that $\sum_{k=1}^{m}p_k=1$, we have $$ \detcov[\B{A}_1-\B{A}_2, \B{A}_2-\B{A}_3, \cdots, \B{A}_{m-1}-\B{A}_m]\geq \prod_{k=1}^{m}\Bigl(\frac{\detcov[\B{A}_1, \B{A}_2, \cdots, \B{A}_m]}{\detcov[\B{A}_k]}\Bigr)^{p_k}. $$ Using the independence of the $\B{A}_k$'s, we get $$ \detcov\bigl[\B{A}_1-\B{A}_2, \B{A}_2-\B{A}_3, \cdots, \B{A}_{m-1}-\B{A}_m\bigr]\geq \prod_{k=1}^{m}\bigl(\detcov[\B{A}_k]\bigr)^{\sum_{j\neq k}p_j}. $$ As $\sum_{j\neq k}p_j=q_k$, we come to \begin{equation}\label{upper bound on intersection kernel} \U{K}_n^{\B{\Delta}}(\B{\tilde{t}}_1, \cdots, \B{\tilde{t}}_n)\leq \prod_{k=1}^{m}\bigl(\U{K}_n^{\B{X}_k}(\B{t}_k^1, \cdots, \B{t}_k^n)\bigr)^{q_k}. \end{equation}
Applying Lemma \ref{local nondeterminism theorem} with $\beta=q_k$ to each factor in \eqref{upper bound on intersection kernel} (note that the integral over $\mathcal{I}^{n\tilde{N}}$ factorizes over the $m$ groups of variables $\{\B{t}_k^i\}_{i=1}^n$), we see that $\U{K}_n^{\B{\Delta}}(\B{\tilde{t}}_1, \cdots, \B{\tilde{t}}_n)$ is integrable over $\mathcal{I}^{n\tilde{N}}$ and its integral is bounded from above by $c^n \,(n!)^{\sum_{k=1}^{m} \frac{q_k}{\tilde{\xi}_k}}$, where $c$ is a positive constant that does not depend on $n$.
\end{proof}
\section{Existence of the limit}\label{Existence of the limit} In this section we will prove that for any centered Gaussian $(N,d)$-field $\B{X}_{\B{t}}$ satisfying conditions $\mathfrak{A}_0$, $\mathfrak{A}_1$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ (with mutually rational $\alpha_i$'s), the following limit exists $$ \lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}(Z^n)}}{n^\lambda}, $$ where $Z:=L_{\B{0}}(\B{X},\mathcal{I}^N)$ and $\lambda:=\frac{ \textrm{tr}(\BU{H})}{\sum_{k=1}^{N}\alpha_k}$, as in the last section. In the sequel, we prove Theorem \ref{main theorem on local times} and Corollary \ref{limit theorem on intersection local times}.
We use the standard notation $\lfloor\cdot\rfloor$ for the floor function, i.e., for every $x\in \mathbb{R}$, $\lfloor x\rfloor$ is the largest integer that is smaller than or equal to $x$.
\begin{lem}\label{The Inequality} For any $\omega\in \mathbb{R}_+$, there exist strictly positive constants $r_1$ and $\kappa$ that only depend on $N$, the $\alpha_i$'s and $\BU{H}$, such that for every $r>r_1$ we have $$ \mathbb{E} (Z^{M(r+1)})\geq \frac{\kappa^M}{r^{M(N+1)}} \bigl(\frac{M}{\omega^{\sum_{k=1}^{N}\alpha_k}}\bigr)^{rM} \omega^{rM \textrm{tr}(\BU{H})} \Bigl(\mathbb{E}( Z^{r})\Bigr)^M, $$ where $M:=\prod_{i=1}^N \lfloor \omega^{\alpha_i}\rfloor$. \end{lem}
\begin{proof}
\noindent \textbf{Step 1: }
Let $\mu:=M(r+1)$. Using Proposition \ref{existence and approximation}, we have the following probabilistic representation \begin{equation}\label{S1E1} \mathbb{E} (Z^{M(r+1)}) =\mathbb{E}^{\B{\tau}}\bigl(\U{K}_\mu(\B{\tau}_1, \cdots, \B{\tau}_\mu)\bigr),
\end{equation} where $\{\B{\tau}_i\}_{i=1}^\mu$ is a family of independent identically distributed (i.i.d.) random variables, each $\B{\tau}_i$ being uniformly distributed over $[0,1]^N$, and $\mathbb{E}^{\B{\tau}}$ denotes expectation with respect to the family $\{\B{\tau}_i\}_{i=1}^\mu$.
We define the set of points $\mathcal{P}\subset [0,1]^N$ as $$
\mathcal{P}:=\Bigl\{\,(\frac{i_1}{\omega^{\alpha_1}}, \frac{i_2}{\omega^{\alpha_2}}, \cdots, \frac{i_N}{\omega^{\alpha_N}})\,\Big|\, i_k\in \{0, 1, \cdots, \lfloor \omega^{\alpha_k}\rfloor-1\}:\, \forall k=1, 2, \cdots, N\Bigr\}, $$ where $\lfloor\cdot\rfloor$ denotes the floor function, as usual.
For every $\B{p}\in\mathcal{P}$, we define the sub-cube $\mathcal{C}^{\circ}(\B{p},\omega^{-\B{\alpha}}) $ as $$ \mathcal{C}^{\circ}(\B{p},\omega^{-\B{\alpha}}) := \B{p}+\prod_{k=1}^{N}(0,\frac{1}{\omega^{\alpha_k}}). $$
We define $\Xi$ as the event that every sub-cube $\mathcal{C}^{\circ}(\B{p},\omega^{-\B{\alpha}})$, $\B{p}\in\mathcal{P}$, contains exactly $r+1$ points of the set $\{\B{\tau}_i\}_{i=1}^{M(r+1)}$. We also define $\Omega^{\mu}_{M}$ as the set of all functions $\theta:\{1, 2, \cdots, M(r+1)\}\rightarrow \mathcal{P}$ such that the inverse image of every point $\B{p}\in\mathcal{P}$ has cardinality $r+1$, i.e., $|\theta^{-1}(\B{p})|=r+1$; in other words, every member $\theta\in \Omega^{\mu}_{M}$ is a partitioning of the set $\{1, 2, \cdots, M(r+1)\}$ into $M$ distinct boxes indexed by $\mathcal{P}$ such that every box contains exactly $r+1$ elements of $\{1, 2, \cdots, M(r+1)\}$. For every $\theta\in \Omega^{\mu}_{M}$, let $\Xi_\theta$ be the event that the points of $\{\B{\tau}_i\}_{i=1}^{M(r+1)}$ are distributed among the sub-cubes $\{\mathcal{C}^{\circ}(\B{p},\omega^{-\B{\alpha}}) \}_{\B{p}\in\mathcal{P}}$ according to $\theta$. It is clear that $\Xi=\bigcup_{\theta\in \Omega^{\mu}_{M}}\Xi_\theta$, and that this union is disjoint. So by Equation \eqref{S1E1}, we have $$ \mathbb{E} (Z^{M(r+1)})\geq \mathbb{E}^{\B{\tau}}\bigl[\U{K}_\mu(\B{\tau}_1, \cdots, \B{\tau}_\mu)\,\mathbf{1}_{\Xi}\bigr] = \sum_{\theta\in \Omega^{\mu}_{M}}\mathbb{E}^{\B{\tau}}\bigl[\U{K}_\mu(\B{\tau}_1, \cdots, \B{\tau}_\mu)\,\mathbf{1}_{\Xi_\theta}\bigr]. $$ For any $\theta\in \Omega^{\mu}_{M}$ and $\B{p}\in\mathcal{P}$, we use the following notation $$ \U{K}_{r+1}(\tau, \theta, \B{p}):=\U{K}_{r+1}(\B{\tau}_{i_1}, \cdots, \B{\tau}_{i_{r+1}}), $$ where $i_1$, ..., $i_{r+1}$ denote all the distinct elements of $\theta^{-1}(\B{p})$.
Using Proposition \ref{Ineuqlity for conditional variance}, for every $\theta \in \Omega^{\mu}_{M}$ we have the following inequality $$ \U{K}_{M(r+1)}(\B{\tau}_{1}, \cdots, \B{\tau}_{{M(r+1)}})\geq \prod_{\B{p}\in\mathcal{P}} \U{K}_{r+1}(\tau, \theta, \B{p}). $$ Hence we obtain $$ \mathbb{E} (Z^{M(r+1)})\geq \sum_{\theta\in \Omega^{\mu}_{M}}\mathbb{E}^{\B{\tau}}\bigl[\prod_{\B{p}\in\mathcal{P}} \U{K}_{r+1}(\tau, \theta, \B{p})\,\mathbf{1}_{\Xi_\theta}\bigr]. $$
For any $\theta\in \Omega^{\mu}_{M}$ and $\B{p}\in\mathcal{P}$, let $\Xi^{\B{p}}_\theta$ denote the event that the points $\B{\tau}_{i_1}, \cdots, \B{\tau}_{i_{r+1}}$ lie in the sub-cube $\mathcal{C}^{\circ}(\B{p},\omega^{-\B{\alpha}})$, where $\{i_1,\cdots,i_{r+1}\}:=\theta^{-1}(\B{p})$. It is evident that $$ \mathbf{1}_{\Xi_\theta}=\prod_{\B{p}\in\mathcal{P}}\mathbf{1}_{\Xi^{\B{p}}_\theta}. $$ So we get \begin{equation}\label{StepOneResult} \mathbb{E} (Z^{M(r+1)})\geq \sum_{\theta\in \Omega^{\mu}_{M}}\prod_{\B{p}\in\mathcal{P}}\mathbb{E}^{\B{\tau}}\bigl[ \U{K}_{r+1}(\tau, \theta, \B{p})\,\mathbf{1}_{\Xi^{\B{p}}_\theta}\bigr]. \end{equation}
\noindent \textbf{Step 2: } For any $\theta$ and $\B{p}$ fixed, let $\{i_1, \cdots, i_{r+1}\}:=\theta^{-1}(\B{p})$, i.e., $\{i_1, \cdots, i_{r+1}\}$ is the set of indices such that $\B{\tau}_{i_1}, \cdots, \B{\tau}_{i_{r+1}}\in\mathcal{C}^{\circ}(\B{p},\omega^{-\B{\alpha}})$. Using Proposition \ref{Reduction Inequality for detCov} we have $$ \begin{aligned} &\detcov (\B{X}_{\B{\tau}_{i_{1}}}, \cdots, \B{X}_{\B{\tau}_{i_{r+1}}})\\ &\quad \quad\leq \detcov(\B{X}_{\B{\tau}_{i_{r+1}}})\, \detcov (\B{X}_{\B{\tau}_{i_{1}}}-\B{X}_{\B{\tau}_{i_{r+1}}}, \cdots, \B{X}_{\B{\tau}_{i_{r}}}-\B{X}_{\B{\tau}_{i_{r+1}}}). \end{aligned} $$ By Property $\mathfrak{A}_0$ and using Corollaries \ref{detCov formula} and \ref{Ineuqlity for conditional variance}, we have $$ \detcov(\B{X}_{\B{\tau}_{i_{r+1}}})\leq c_0^d. $$ Using the fact that $\B{X}$ has stationary increments, i.e., Property $\mathfrak{A}_2$, we have $$ \detcov (\B{X}_{\B{\tau}_{i_{1}}}-\B{X}_{\B{\tau}_{i_{r+1}}}, \cdots, \B{X}_{\B{\tau}_{i_{r}}}-\B{X}_{\B{\tau}_{i_{r+1}}})= \detcov (\B{X}_{\B{\tau}_{i_{1}}-\B{\tau}_{i_{r+1}}}, \cdots, \B{X}_{\B{\tau}_{i_{r}}-\B{\tau}_{i_{r+1}}}). $$ So we have $$ \begin{aligned} \U{K}_{r+1}(\tau, \theta, \B{p})&= \U{K}_{r+1}(\B{\tau}_{i_{1}}, \cdots, \B{\tau}_{i_{r+1}})\\ &\geq (2\pi)^{-\frac{d(r+1)}{2}}c_0^{-\frac{d}{2}}\, \bigl(\detcov (\B{X}_{\B{\tau}_{i_{1}}-\B{\tau}_{i_{r+1}}}, \cdots, \B{X}_{\B{\tau}_{i_{r}}-\B{\tau}_{i_{r+1}}})\bigr)^{-\frac{1}{2}}\\ &=c_1 \,\U{K}_{r}(\B{\tau}_{i_{1}}-\B{\tau}_{i_{r+1}}, \cdots, \B{\tau}_{i_{r}}-\B{\tau}_{i_{r+1}}), \end{aligned} $$ where $c_1:=(2\pi c_0)^{-\frac{d}{2}}$.
Therefore, we have \begin{equation}\label{S2E1} \begin{aligned} &\mathbb{E}^{\B{\tau}}\bigl[ \U{K}_{r+1}(\tau, \theta, \B{p})\,\mathbf{1}_{\Xi^{\B{p}}_\theta}\bigr]\\ &\quad\quad \geq c_1\int_{\B{t}_{1},\cdots,\B{t}_{r+1}\in \mathcal{C}^{\circ}(\B{p},\omega^{-\B{\alpha}}) }\U{K}_{r}(\B{t}_{1}-\B{t}_{r+1}, \cdots, \B{t}_{r}-\B{t}_{r+1})\,\mathrm{d}\B{t}_{1}\cdots\mathrm{d}\B{t}_{r+1}\\
&\quad\quad =c_1\int_{\B{z}\in \mathcal{C}^{\circ}(\B{0},\omega^{-\B{\alpha}}) } \int_{\B{t}_{1},\cdots,\B{t}_{r}\in \mathcal{C}^{\circ}(\B{0},\omega^{-\B{\alpha}})
}\U{K}_{r}(\B{t}_{1}-\B{z}, \cdots, \B{t}_{r}-\B{z})\,\mathrm{d}\B{t}_{1}\cdots\mathrm{d}\B{t}_{r}\mathrm{d}\B{z}, \end{aligned} \end{equation} where we used a change of variables in the last line.
For every $\B{z}=(z_1, \cdots, z_N)\in \mathcal{C}^{\circ}(\B{0},\omega^{-\B{\alpha}})$, we define $$ \zeta_{\B{z}}:=\min_{k}\{(\omega^{-\alpha_k}-z_k)^{\frac{1}{\alpha_k}}\}, $$ and $$ \B{\tilde{\zeta}}_{\B{z}}:=\zeta_{\B{z}}^{\B{\alpha}}= (\zeta_{\B{z}}^{\alpha_1}, \cdots, \zeta_{\B{z}}^{\alpha_N}). $$ For every such $\B{z}$, we introduce the new variables $\{\B{s}_k\}_{k=1,\cdots,r}$ in the following way $$ \B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{k}:=\B{t}_{k}-\B{z}:\;\forall k=1, \cdots, r, $$ where $\circ$, as usual, denotes the Schur product of two vectors, i.e., the vector formed by entry-wise multiplication of the two vectors. It can be easily verified that for every such $\B{z}$ we have \begin{equation}\label{S2E2} \begin{aligned} &\int_ {\B{t}_{1},\cdots,\B{t}_{r}\in \mathcal{C}^{\circ}(\B{0},\omega^{-\B{\alpha}})} \U{K}_{r}(\B{t}_{1}-\B{z}, \cdots, \B{t}_{r}-\B{z})\,\mathrm{d}\B{t}_{1}\cdots\mathrm{d}\B{t}_{r}\\ &\quad \quad\geq \zeta_{\B{z}}^{r(\alpha_1+\cdots+\alpha_N)} \int_{\B{s}_{1},\cdots,\B{s}_{r}\in (0,1)^N} \U{K}_{r}(\B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{1}, \cdots, \B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{r})\,\mathrm{d}\B{s}_{1}\cdots\mathrm{d}\B{s}_{r}. \end{aligned} \end{equation} On the other hand, by diagonal self-similarity (Property $\mathfrak{A}_3$) and noting Remark \ref{diagonal self-similarity covariance formulation}, we have $$ \cov (\B{X}_{\B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{1}}, \cdots, \B{X}_{\B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{r}})=\BU{\Lambda}_{r} \cov (\B{X}_{\B{s}_{1}}, \cdots, \B{X}_{\B{s}_{r}})\,\BU{\Lambda}_{r}^\dagger, $$ where $\BU{\Lambda}_{r}\in\mathbb{R}^{rd\times rd}$ is the block diagonal matrix consisting of $r$ copies of the matrix $\zeta_{\B{z}}^{\BU{H}}$ on its main diagonal, and filled with zeros elsewhere; in other words, $\BU{\Lambda}_{r}=\textrm{diag}[\zeta_{\B{z}}^{\BU{H}}, \cdots, \zeta_{\B{z}}^{\BU{H}}]$.
Now we notice the fact that the determinant of the exponential of a matrix equals the exponential of its trace, i.e., $\det(e^{\BU{A}})=e^{\textrm{tr}(\BU{A})}$; see e.g. \cite[ch.2]{Hall2003}. So we get $$ \det \cov (\B{X}_{\B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{1}}, \cdots, \B{X}_{\B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{r}})= \zeta_{\B{z}}^{2r\,\textrm{tr}(\BU{H})}\, \det \cov (\B{X}_{\B{s}_{1}}, \cdots, \B{X}_{\B{s}_{r}}), $$ which implies that \begin{equation}\label{S2E3} \U{K}_{r}(\B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{1}, \cdots, \B{\tilde{\zeta}}_{\B{z}}\circ\B{s}_{r})= \zeta_{\B{z}}^{-r\,\textrm{tr}(\BU{H})} \U{K}_{r}(\B{s}_{1}, \cdots, \B{s}_{r}). \end{equation} So, by Equations \eqref{S2E1}, \eqref{S2E2}, and \eqref{S2E3} we have \begin{equation}\label{S2E4} \mathbb{E}^{\B{\tau}}\bigl[ \U{K}_{r+1}(\tau, \theta, \B{p})\,\mathbf{1}_{\Xi^{\B{p}}_\theta}\bigr] \geq c_1 \,\mathbb{E}( Z^{r}) \int_{\B{z}\in \mathcal{C}^{\circ}(\B{0},\omega^{-\B{\alpha}})} \zeta_{\B{z}}^{r(\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H}))} \mathrm{d}\B{z}, \end{equation} where we used the following equality, which is a result of Proposition \ref{existence and approximation}: $$ \mathbb{E}( Z^{r})=\int_{\B{s}_{1},\cdots,\B{s}_{r}\in (0,1)^N} \U{K}_{r}(\B{s}_{1}, \cdots, \B{s}_{r})\,\mathrm{d}\B{s}_{1}\cdots\mathrm{d}\B{s}_{r}. $$ Now we define $\{x_k\}_{k=1}^{N}$ as $x_k:=1-\omega^{\alpha_k}z_k$. By this change of variables, we have $$ \zeta_{\B{z}}=\frac{1}{\omega} \min_{k}\{(x_k)^{\frac{1}{\alpha_k}}\}. $$ So we have \begin{equation}\label{S2E5} \int_{\B{z}\in \mathcal{C}^{\circ}(\B{0},\omega^{-\B{\alpha}})} \zeta_{\B{z}}^{r(\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H}))} \mathrm{d}\B{z} =\omega^{-\sum_{k=1}^{N}\alpha_k} \omega^{-r(\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H}))} J(\B{\alpha},\BU{H},r),
\end{equation} where $$ J(\B{\alpha},\BU{H},r):= \int_0^1\cdots\int_0^1 \bigl(\min_{k}\{(x_k)^{\frac{1}{\alpha_k}}\}\bigr)^{r(\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H}))} \,\mathrm{d}x_1\cdots \mathrm{d}x_N. $$ This shows that if $\sum_{k=1}^{N}\alpha_k$ is smaller than $\textrm{tr}(\BU{H})$, then property $\mathfrak{A}_1$ cannot hold. So we assume $\sum_{k=1}^{N}\alpha_k\geq \textrm{tr}(\BU{H})$.
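As a quick numerical sanity check of the identity $\det(e^{\BU{A}})=e^{\textrm{tr}(\BU{A})}$ invoked in this step (a standalone illustration assuming numpy; for a symmetric matrix the exponential can be computed exactly from the eigendecomposition):

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric matrix, so that expm(A) = V diag(exp(w)) V^T follows from the
# eigendecomposition A = V diag(w) V^T (numpy only, no scipy needed).
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2
w, V = np.linalg.eigh(A)
expA = V @ np.diag(np.exp(w)) @ V.T

# det(e^A) = prod(exp(w_i)) = exp(sum(w_i)) = exp(tr(A))
assert np.isclose(np.linalg.det(expA), np.exp(np.trace(A)))
```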
Denote $\eta:=r(\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H}))$ and $\alpha_0:=\min \{\alpha_1,\cdots,\alpha_N\}$. One can easily verify that for $r$ larger than $r_1:=\frac{2\alpha_0}{\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H})}$ (so that $\eta>2\alpha_0$), we have $$ \begin{aligned} &\int_0^1\cdots\int_0^1 \bigl(\min_{k}\{(x_k)^{\frac{1}{\alpha_k}}\}\bigr)^{\eta} \,\mathrm{d}x_1\cdots \mathrm{d}x_N \geq \int_0^1\cdots\int_0^1 \bigl(\min_{k}\{x_k\}\bigr)^{\frac{\eta}{\alpha_0}} \,\mathrm{d}x_1\cdots \mathrm{d}x_N\\ &\quad \quad \geq \int_{1-\frac{\alpha_0}{\eta}}^1\cdots\int_{1-\frac{\alpha_0}{\eta}}^1 \bigl(\min_{k}\{x_k\}\bigr)^{\frac{\eta}{\alpha_0}} \,\mathrm{d}x_1\cdots \mathrm{d}x_N \geq (\frac{\alpha_0}{\eta})^N (1-\frac{\alpha_0}{\eta})^{\frac{\eta}{\alpha_0}}\\ &\quad \quad\geq C (\frac{\alpha_0}{\eta})^N, \end{aligned} $$ where $C>0$ is a global constant. So for $r$ large enough ($r\geq r_1$), we have \begin{equation}\label{S2E6} J(\B{\alpha},\BU{H},r)\geq \frac{c_2}{r^N}, \end{equation} where $c_2>0$ is a constant that only depends on the $\alpha_i$'s and $N$. When $\sum_{k=1}^{N}\alpha_k= \textrm{tr}(\BU{H})$, Equation \eqref{S2E6} remains valid for every $r\in \mathbb{N}$, so in this case we define $r_1$ to be equal to $1$.
So, plugging Equations \eqref{S2E5} and \eqref{S2E6} into Equation \eqref{S2E4}, we get \begin{equation}\label{S2E7} \mathbb{E}^{\B{\tau}}\bigl[ \U{K}_{r+1}(\tau, \theta, \B{p})\,\mathbf{1}_{\Xi^{\B{p}}_\theta}\bigr] \geq \frac{c_1 c_2}{r^N \omega^{\sum_{k=1}^{N}\alpha_k}} \omega^{-r(\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H}))} \mathbb{E}( Z^{r}). \end{equation}
\noindent \textbf{Step 3: } Applying Equation \eqref{S2E7} to \eqref{StepOneResult} we get \begin{equation}\label{S3E1} \mathbb{E} (Z^{M(r+1)})\geq \frac{c_1^{M}c_2^M \mathfrak{N}^{M(r+1)}_{M}}{r^{MN} \omega^{M\sum_{k=1}^{N}\alpha_k}} \omega^{-rM(\sum_{k=1}^{N}\alpha_k- \textrm{tr}(\BU{H}))} \Bigl(\mathbb{E}( Z^{r})\Bigr)^M, \end{equation}
where $\mathfrak{N}^{M(r+1)}_{M}:=|\Omega^{M(r+1)}_{M}|$ is the cardinality of $\Omega^{M(r+1)}_{M}$. Using Lemma \ref{Partitioning a set}, we have $$ \mathfrak{N}^{M(r+1)}_{M}\geq \frac{\kappa_1 \sqrt{M}}{\kappa_2^M \sqrt{(r+1)^M}} M^{M(r+1)}, $$ where $\kappa_1$ and $\kappa_2$ are global constants. So we have $$ \mathbb{E} (Z^{M(r+1)})\geq \frac{\kappa_1 c_1^{M}c_2^M \sqrt{M}}{\kappa_2^M r^{MN} \sqrt{(r+1)^{M}} } \bigl(\frac{M}{\omega^{\sum_{k=1}^{N}\alpha_k}}\bigr)^{M(r+1)} \omega^{rM \textrm{tr}(\BU{H})} \Bigl(\mathbb{E}( Z^{r})\Bigr)^M, $$ which clearly implies the statement of the lemma. \end{proof}
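The cardinality $\mathfrak{N}^{M(r+1)}_{M}$ counts the ways of distributing $M(r+1)$ labelled indices into $M$ labelled boxes of $r+1$ elements each, i.e., the multinomial coefficient $\frac{(M(r+1))!}{((r+1)!)^M}$; a quick brute-force check in Python (our own illustration, feasible only for tiny $M$ and $r$):

```python
from itertools import product
from math import factorial

def partition_count(M, r):
    """|Omega| = number of functions theta: {1,...,M(r+1)} -> {1,...,M} with
    every fiber of size exactly r+1, i.e. the multinomial coefficient."""
    return factorial(M * (r + 1)) // factorial(r + 1) ** M

def brute_force(M, r):
    """Enumerate all M^(M(r+1)) functions and keep the balanced ones."""
    n = M * (r + 1)
    return sum(
        1
        for theta in product(range(M), repeat=n)
        if all(theta.count(p) == r + 1 for p in range(M))
    )
```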
\begin{lem}\label{Iterated Inequality} There exists a positive number $r_1$ that only depends on $N$, the $\alpha_i$'s and $\BU{H}$, such that for every $r>r_1$, any $\omega\in \mathbb{R}_+$ with the property that the $\omega^{\alpha_i}$'s are all integers, and any positive integer $q$, we have \begin{equation}\label{L3E1} \frac{\Bigl(\mathbb{E} (Z^{r M^q(1+o_r)})\Bigr)^{\frac{1}{r M^q(1+o_r)}}} {(r M^q(1+o_r))^{\lambda}} \geq B_{\omega}(r)\, \Bigl(\frac{ \sqrt[r]{\mathbb{E}( Z^{r})} }{r^{\lambda}}\Bigr)^{\frac{1}{1+o_r}}, \end{equation} where $M:=\prod_{i=1}^{N}\lfloor\omega^{\alpha_i}\rfloor$, $o_r:=\frac{1}{r}\sum_{k=0}^{q-1}\frac{1}{M^k}$, and $B_{\omega}(r)$ is a strictly positive function that depends only on $\omega$, $r$, $N$ and $\BU{H}$, such that $\lim_{r\rightarrow+\infty} B_{\omega}(r)=1$. \end{lem} \begin{proof} By Lemma \ref{The Inequality} we have $$ \mathbb{E} (Z^{M(r+1)})\geq \frac{\kappa^M}{r^{M(N+1)}} (\rho \omega^{ \textrm{tr}(\BU{H})})^{rM} \Bigl(\mathbb{E}( Z^{r})\Bigr)^M, $$ where $M=\prod_{i=1}^N \lfloor \omega^{\alpha_i}\rfloor$, and $\rho:=\frac{M}{\omega^{\sum_{k=1}^{N}\alpha_k}}$. Iterating this inequality $q$ times, and using the inequality $M^k r+\sum_{i=1}^{k}M^i\leq M^k(r+2)$, we get $$ \mathbb{E} (Z^{r M^q+\sum_{i=1}^{q}M^i})\geq A_{\omega,r,q}\, (\rho \omega^{ \textrm{tr}(\BU{H})})^{(q r M^q+\sum_{i=1}^{q-1}i M^{i+1})} \Bigl(\mathbb{E}( Z^{r})\Bigr)^{M^q}, $$ where $$ A_{\omega,r,q}:=\frac{\kappa^{\sum_{i=1}^q M^i}}{M^{(N+1)\sum_{i=1}^{q-1}i M^{q-i}}\, (r+2)^{(N+1)\sum_{i=1}^q M^i}}. $$
For $\lambda:=\frac{\textrm{tr}(\BU{H})}{\sum_{i=1}^N\alpha_i}$, and using the notation $o_r:=\frac{1}{r}\sum_{i=0}^{q-1}\frac{1}{M^i}$, we have \begin{equation}\label{T1S1E1} \frac{\Bigl(\mathbb{E} (Z^{r M^q+\sum_{i=1}^{q}M^i})\Bigr)^{\frac{1}{r M^q+\sum_{i=1}^{q}M^i}}} {(r M^q+\sum_{i=1}^{q}M^i)^{\lambda}} \geq B'_{\omega,r,q}\, \Bigl(\frac{ \sqrt[r]{\mathbb{E}( Z^{r})} }{r^{\lambda}}\Bigr)^{\frac{1}{1+o_r}}, \end{equation}
where $$ B'_{\omega,r,q}:=(A_{\omega,r,q})^{\frac{1}{r M^q(1+o_r)}} (\rho \omega^{ \textrm{tr}(\BU{H})})^{ \frac{q r M^q+\sum_{i=1}^{q-1}i M^{i+1}}{r M^q+\sum_{i=1}^{q}M^i} } M^{-q\lambda} r^{-\lambda\frac{o_r}{1+o_r}} (1+o_r) ^{-\lambda}. $$ Using the identities $\sum_{i=0}^{+\infty} x^{i}=\frac{1}{1-x}$ and $\sum_{i=1}^{+\infty}i x^{i-1}=\frac{1}{(1-x)^2}$, valid for $0<x<1$, and noting that $M\geq2$, we can easily verify that as $r$ goes to $+\infty$, the function $o_r$ converges to zero uniformly in $q$ and $\omega$, and \begin{equation} M^{q-1}\leq \sum_{i=1}^{q-1}i M^{q-i} \leq 4 M^{q-1} \quad \text{and} \quad M^{q}\leq \sum_{i=1}^{q} M^{i} \leq 2 M^{q}. \end{equation} Using these inequalities, we can easily show that \begin{equation}\label{T1S1E2} (A_{\omega,r,q})^{\frac{1}{r M^q(1+o_r)}}\geq A_r, \end{equation} where $A_r>0$ is only a function of $r$ and $N$ such that $\lim_{r\rightarrow+\infty} A_r=1$. It is also easy to verify that $$ q\sum_{i=1}^{q}M^i-\sum_{i=1}^{q-1}i M^{i+1}=M^q \sum_{i=1}^{q} \frac{i}{M^{i-1}}, $$ and hence \begin{equation}\label{T1S1E3} q-\frac{q r M^q+\sum_{i=1}^{q-1}i M^{i+1}}{r M^q+\sum_{i=1}^{q}M^i}=\frac{\sum_{i=1}^{q} \frac{i}{M^{i-1}}}{r(1+o_r)}\leq \frac{1}{r(1+o_r)(1-\frac{1}{M})^2}\,. \end{equation} Noting that under the assumptions of the lemma, $\rho$ is equal to $1$, and using Equations \eqref{T1S1E2} and \eqref{T1S1E3}, we obtain $$ B'_{\omega,r,q}\geq B_{\omega}(r), $$ where $B_{\omega}(r)$ is a strictly-positive-valued function that only depends on $N$, $r$, $\omega$, and $\BU{H}$ such that $\lim_{r\rightarrow+\infty} B_{\omega}(r)=1$. This completes the proof. \end{proof}
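The elementary summation bounds and the identity used in this proof can be sanity-checked numerically. The following standalone snippet is only an illustration (not part of the proof); it verifies the two displayed bounds and the identity $q\sum_{i=1}^{q}M^i-\sum_{i=1}^{q-1}i M^{i+1}=M^q \sum_{i=1}^{q} i/M^{i-1}$ exactly, for small values of $M\geq 2$ and $q$.

```python
from fractions import Fraction

# Sanity check (illustration only, not part of the proof) of the bounds
#   M^{q-1} <= sum_{i=1}^{q-1} i M^{q-i} <= 4 M^{q-1}   and
#   M^q     <= sum_{i=1}^{q}   M^i       <= 2 M^q        for M >= 2,
# and of the exact identity
#   q sum_{i=1}^q M^i - sum_{i=1}^{q-1} i M^{i+1} = M^q sum_{i=1}^q i / M^{i-1}.
def check(M, q):
    s1 = sum(i * M**(q - i) for i in range(1, q))
    assert M**(q - 1) <= s1 <= 4 * M**(q - 1)
    s2 = sum(M**i for i in range(1, q + 1))
    assert M**q <= s2 <= 2 * M**q
    lhs = q * s2 - sum(i * M**(i + 1) for i in range(1, q))
    rhs = M**q * sum(Fraction(i, M**(i - 1)) for i in range(1, q + 1))
    assert lhs == rhs   # exact rational arithmetic, no rounding

for M in (2, 3, 5):
    for q in (2, 3, 5, 8):
        check(M, q)
```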
\begin{lem}\label{existence of moments limit} Let $\B{X}_{\B{t}}$ be an $(N,d)$-Gaussian random field satisfying Properties $\mathfrak{A}_1$, $\mathfrak{A}_0$, $\mathfrak{A}_2$, and $\mathfrak{A}_3$ with self-similarity vector $\B{\alpha}:=(\alpha_1,\cdots,\alpha_N)$ such that for every $i$ and $j$, the quotient $\alpha_i/\alpha_j$ is a rational number. Then the following limit exists: \begin{equation}\label{Theorem Limit Equation} \lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}(Z^n)}}{n^\lambda}, \end{equation} where $Z:=L_{\B{0}}(\B{X},[0,1]^N)$ and $\lambda:=\frac{ \textrm{tr}(\BU{H})}{\sum_{k=1}^{N}\alpha_k}$. \end{lem} \begin{proof}
Define $$ \overline{\ell}:=\limsup_{n\rightarrow+\infty} \frac{ \sqrt[n]{\mathbb{E}(Z^{n})}}{n^{\lambda}} \quad \text{and} \quad \underline{\ell}:=\liminf_{n\rightarrow+\infty} \frac{ \sqrt[n]{\mathbb{E}(Z^{n})}}{n^{\lambda}}. $$ Consider any positive real number $\ell$ that is strictly less than $\overline{\ell}$. Let $\ell_1$ and $\ell_2$ be real numbers satisfying $\ell<\ell_1<\ell_2<\overline{\ell}$.
\noindent \textbf{Step 1: } As for every $i$ and $j$, the quotient $\alpha_i/\alpha_j$ is a rational number, we can find a real number $\alpha>0$ such that for every $i$, the quotient $\frac{\alpha_i}{\alpha}$ is an integer. Now choose $\omega_1$ and $\omega_2$ such that $\omega_1^\alpha=2$ and $\omega_2^\alpha=3$. Clearly, in this case all $\omega_j^{\alpha_i}$'s are integer-valued for every $j=1,2$ and $i=1,\cdots,N$, hence we may apply Lemma \ref{Iterated Inequality}. Also note that in this case, there exists a positive integer $m_0$ such that $M_1:=\prod_{i=1}^{N}\lfloor\omega_1^{\alpha_i}\rfloor=2^{m_0}$ and $M_2:=\prod_{i=1}^{N}\lfloor\omega_2^{\alpha_i}\rfloor=3^{m_0}$. Let $r$ be any integer larger than $r_1$, and $p$ and $q$ be two arbitrary positive integers. Applying Equation \eqref{L3E1} first with $\omega_1$ and $p$, and then repeating it with $\omega_2$ and $q$, we get \begin{equation}\label{T2S1E1} \frac{\Bigl(\mathbb{E} (Z^{\Phi_{r,p,q}})\Bigr)^{\frac{1}{\Phi_{r,p,q}}}} {(\Phi_{r,p,q})^{\lambda}} \geq B_{\omega_2}(R)\,\bigl({B_{\omega_1}(r)}\bigr)^{\frac{1}{1+\bar{o}_R}} \Bigl(\frac{ \sqrt[r]{\mathbb{E}( Z^{r})} }{r^{\lambda}}\Bigr)^{\frac{1}{(1+o_r)(1+\bar{o}_R)}}, \end{equation} where $R:=r M_1^p+\sum_{k=1}^{p}M_1^k$, $o_r:=\frac{1}{r}\sum_{k=0}^{p-1}\frac{1}{M_1^k}$, $\bar{o}_R:=\frac{1}{R}\sum_{k=0}^{q-1}\frac{1}{M_2^k}$, and $$ \Phi_{r,p,q}:=R M_2^q+\sum_{k=1}^{q}M_2^k=r 2^{m_0p} 3^{m_0q} (1+\bar{o}_R)(1+o_r). $$ We note that $B_{\omega_1}$ and $B_{\omega_2}$ converge to one uniformly in $p$ and $q$, and $o_r$ and $\bar{o}_R$ converge to zero uniformly in $p$ and $q$.
\noindent \textbf{Step 2: } Choose $\varepsilon>0$ such that $(1+\varepsilon)<\frac{\ell_1}{\ell}$. Clearly, there exists $r_2>0$ such that for every $R, r\geq r_2$ we have $$ (1+o_r)(1+\bar{o}_R)<1+\varepsilon\quad\text{and}\quad B_{\omega_2}(R)\,\bigl({B_{\omega_1}(r)}\bigr)^{\frac{1}{1+\bar{o}_R}} \ell_2^{\frac{1}{(1+o_r)(1+\bar{o}_R)}}>\ell_1. $$ By the definition of $\limsup$, there exists an integer $r>\max\{r_1,r_2\}$ such that $\frac{\sqrt[r]{\mathbb{E}(Z^{r})}}{r^{\lambda}}>\ell_2$. Now we apply this $r$ to Equation \eqref{T2S1E1}, along with any arbitrary integers $p$ and $q$. Noting that $R=r M_1^p+\sum_{k=1}^{p}M_1^k>r>r_2$, we have \begin{equation}\label{T2S1E2} \frac{\Bigl(\mathbb{E} (Z^{\Phi_{r,p,q}})\Bigr)^{\frac{1}{\Phi_{r,p,q}}}} {(\Phi_{r,p,q})^{\lambda}} \geq \ell_1 \;;\quad \forall p,q\in \mathbb{N}. \end{equation} We also have \begin{equation}\label{T2S1E3} r 2^{m_0p} 3^{m_0q}\leq \Phi_{r,p,q}=r 2^{m_0p} 3^{m_0q} (1+\bar{o}_R)(1+o_r)\leq r 2^{m_0p} 3^{m_0q} (1+\varepsilon). \end{equation}
As $\log_{2}3$ is not a rational number, by Dirichlet's approximation theorem (Theorem \ref{Dirichlet}) there exist $p_0,q_0\in \mathbb{N}$ such that \begin{equation}\label{T2S1E4}
0<|p_0-q_0\log_23|<\frac{1}{m_0} \log_2 (\frac{\ell_1}{\ell(1+\varepsilon)}). \end{equation} We proceed with the assumption that $p_0>q_0\log_23$; when $p_0<q_0\log_23$, the proof is similar. So by Equation \eqref{T2S1E4} we have \begin{equation}\label{T2S1E41} 1<\nu:=\frac{2^{m_0p_0}}{3^{m_0q_0}}<\frac{\ell_1}{\ell(1+\varepsilon)}. \end{equation} We choose $k_0\in \mathbb{N}$ such that $\nu^{k_0}>3$, and define $n_1:=r(1+\varepsilon) 3^{m_0 q_0 k_0}$. Take any arbitrary integer $n\geq n_1$, and define $$ q_1:=\max\{k; \; n\geq r (1+\varepsilon) 3^{m_0 k}\}\quad \text{and} \quad k_1:=\max\{k; \; n\geq r (1+\varepsilon) 3^{m_0 q_1}\nu^{k}\}. $$ Since $$ 3^{m_0 q_1}\nu^{k_1}=3^{m_0 (q_1-k_1 q_0)} 2^{m_0 k_1 q_0}, $$ we obtain \begin{equation}\label{T2S1E5} r(1+\varepsilon)3^{m_0 (q_1-k_1 q_0)} 2^{m_0 k_1 q_0}\leq n <\nu r(1+\varepsilon) 3^{m_0 (q_1-k_1 q_0)} 2^{m_0 k_1 q_0}. \end{equation}
We note that $q=q_1-k_1 q_0\geq 0$, because: \\ (I) since $n>n_1$, by definition $q_1\geq q_0 k_0$, and \\ (II) $k_1\leq k_0$, for otherwise $\nu^{k_1}>3$, and hence $n>r(1+\varepsilon)3^{m_0 (q_1+1)}$, which contradicts the definition of $q_1$.\\
So we can apply Equation \eqref{T2S1E2} to $q=q_1-k_1 q_0$ and $p=k_1 q_0$. By Equations \eqref{T2S1E3} and \eqref{T2S1E5}, we get \begin{equation}\label{T2S1E6} \Phi_{r,p,q}\leq n \leq \nu (1+\varepsilon) \Phi_{r,p,q}. \end{equation}
\noindent \textbf{Step 3: } As $\Phi_{r,p,q}\leq n$, by H\"older's inequality we have $$ \Bigl(\mathbb{E} (Z^{\Phi_{r,p,q}})\Bigr)^{\frac{1}{\Phi_{r,p,q}}}\leq \Bigl(\mathbb{E} (Z^{n})\Bigr)^{\frac{1}{n}}. $$ Hence, by Equation \eqref{T2S1E6} we have \begin{equation}\label{T2S3E1} \bigl(\nu(1+\varepsilon)\bigr)^{-\lambda}\frac{\Bigl(\mathbb{E} (Z^{\Phi_{r,p,q}})\Bigr)^{\frac{1}{\Phi_{r,p,q}}}} {(\Phi_{r,p,q})^{\lambda}}\leq \frac{\Bigl(\mathbb{E} (Z^{n})\Bigr)^{\frac{1}{n}}} {n^{\lambda}}. \end{equation} But by \eqref{T2S1E41}, we have $$ \frac{\ell}{\ell_1}\leq \bigl(\nu(1+\varepsilon)\bigr)^{-1}\leq \bigl(\nu(1+\varepsilon)\bigr)^{-\lambda}. $$ So by Equations \eqref{T2S3E1} and \eqref{T2S1E2} we finally get $$ \frac{\Bigl(\mathbb{E} (Z^{n})\Bigr)^{\frac{1}{n}}} {n^{\lambda}}\geq \ell. $$ This means that $\underline{\ell}$ is larger than or equal to $\ell$. As this is true for any positive number $\ell$ that is strictly less than $\overline{\ell}$, this implies $\overline{\ell}=\underline{\ell}$; in other words the limit in \eqref{Theorem Limit Equation} exists. \end{proof}
\begin{proof}[Proof of Theorem \ref{main theorem on local times}] Lemma \ref{existence of moments limit} guarantees the convergence of $\{\frac{\sqrt[n]{\mathbb{E}(Z^n)}}{n^\lambda}\}_n$, so Theorem \ref{Kasahara} can be applied if we show that the limit is strictly positive. This can indeed be easily verified applying Lemma \ref{Iterated Inequality} with some arbitrary $r>r_1$ and $\omega\in \mathbb{R}_+$ such that $\omega^{\alpha_i}$'s are all integers, and then letting $q$ converge to $+\infty$. Clearly the left-hand side of Equation \eqref{L3E1} converges to $\lim_{n\rightarrow +\infty}\frac{\sqrt[n]{\mathbb{E}(Z^n)}}{n^\lambda}$ whereas the right-hand side is strictly positive and independent of $q$. \end{proof}
Now we can easily prove Corollary \ref{limit theorem on intersection local times}.
\begin{proof}[Proof of Corollary \ref{limit theorem on intersection local times}]
We define the Gaussian field $\B{\Delta}_{\B{\tilde{t}}}$ by $$ \B{\Delta}_{\B{\tilde{t}}}:=\bigl(\B{X}_1(\B{t_1})-\B{X}_2(\B{t_2}), \B{X}_2(\B{t_2})-\B{X}_3(\B{t_3}),\cdots, \B{X}_{m-1}(\B{t_{m-1}})-\B{X}_m(\B{t_m})\bigr), $$ where $\B{\tilde{t}}:=(\B{t}_1, \cdots, \B{t}_m)$ and $\B{t}_k\in \mathcal{I}^{N_k}$ for every $k=1,\cdots,m$.
It is evident that $\B{\Delta}_{\B{\tilde{t}}}$ is a centered Gaussian $(\tilde{N},\tilde{d})$-field, where $\tilde{d}:=(m-1)d$ and $\tilde{N}:=\sum_{k=1}^{m}N_k$.
The proof of the existence of the local time of $\B{\Delta}_{\B{\tilde{t}}}$ around $\B{0}$ over the cube $\mathcal{I}^{\tilde{N}}$ and the finiteness of all its moments is similar to the proof of Theorem \ref{Upper bound for intersection local times}. Indeed, using Equation \eqref{upper bound on intersection kernel} with $q_i=\frac{m-1}{m}$ ($p_i=\frac{1}{m}$) for every $i=1,\cdots,m$, we obtain
$$ \U{K}_n^{\B{\Delta}}(\B{\tilde{t}}_1, \cdots, \B{\tilde{t}}_n)\leq \prod_{k=1}^{m}\bigl(\U{K}_n^{\B{X}_k}(\B{t}_k^1, \cdots, \B{t}_k^n)\bigr)^{\frac{m-1}{m}}. $$ So the Gaussian field $\B{\Delta}_{\B{\tilde{t}}}$ satisfies Property $\mathfrak{A}_1$.
As all the random fields $\B{X}_k$, $k=1,\cdots,m$, satisfy Properties $\mathfrak{A}_0$ and $\mathfrak{A}_2$, so does the Gaussian field $\B{\Delta}_{\B{\tilde{t}}}$. As every $\B{X}_k$ is diagonally self-similar (i.e., it satisfies Property $\mathfrak{A}_3$) with scaling vector $\B{\alpha}_k:=(\alpha_{k,1}, \cdots, \alpha_{k,N_k})\in \mathbb{R}_+^{N_k}$ and scaling matrix $\BU{H}\in\mathbb{R}^{d\times d}$, it can be easily verified that $\B{\Delta}_{\B{\tilde{t}}}$ is also diagonally self-similar with the scaling vector $\B{\tilde{\alpha}}\in\mathbb{R}_+^{\tilde{N}}$ constructed by adjoining all the vectors $\B{\alpha}_k$ together, i.e., $\B{\tilde{\alpha}}:=(\B{\alpha}_1, \B{\alpha}_2, \cdots, \B{\alpha}_m)$, and with the scaling matrix $\BU{\tilde{H}}\in\mathbb{R}^{(m-1)d\times(m-1)d}$, which is a block diagonal matrix containing $m-1$ copies of $\BU{H}$ on its main diagonal and zero elsewhere; in other words, $\BU{\tilde{H}}:=\textrm{diag}(\BU{H}, \BU{H}, \cdots, \BU{H})$. Clearly in this case we have $\textrm{tr}(\BU{\tilde{H}})=(m-1)\textrm{tr}(\BU{H})$. Now the desired conclusion is evident by applying Theorem \ref{main theorem on local times}.
\end{proof}
\section{Appendix} \begin{lem}\label{conditional variance formula} Let $\mathcal{H}$ be a Gaussian Hilbert space, i.e., for any $n\in\mathbb{Z}^{+}$, and any elements $X_1, \cdots, X_n\in \mathcal{H}$, the set $\{X_i\}_{i=1}^n$ is a family of jointly Gaussian zero-mean random variables, and $\mathcal{H}$ forms a Hilbert space with respect to the inner product $\langle X,Y\rangle:=\mathbb{E}(X Y)$. Let $\mathcal{G}$ be a subspace of $\mathcal{H}$, and $X$ be an element of $\mathcal{H}$. Then we have $$
\var(X\big|\mathcal{G})=\|Q_{\mathcal{G}}(X)\|^2, $$ where $Q_{\mathcal{G}}(X):=X-P_{\mathcal{G}}(X)$, and $P_{\mathcal{G}}(X)$ is the orthogonal projection of $X$ over the subspace $\mathcal{G}$. \end{lem} \begin{proof} By definition, we have $$
\var(X\big|\mathcal{G})=
\mathbb{E}\bigl[\bigl(X-\mathbb{E}(X\big|\mathcal{G})\bigr)^2\big |\mathcal{G}\bigl]. $$ Replacing $X$ by $P_{\mathcal{G}}(X)+Q_{\mathcal{G}}(X)$ on the right-hand side of the above equation, and noting that $Q_{\mathcal{G}}(X)$ is independent of $\mathcal{G}$, we can easily derive the desired result. \end{proof} \begin{cor}\label{conditionging decreases variance} An immediate implication of the previous lemma is the following inequality $$
\var(X\big|\mathcal{G})\leq \var(X). $$ \end{cor}
\begin{lem}\label{Ineuqlity for conditional variance} Let $\mathcal{Y}$ be an arbitrary inner-product space. Then for any $\B{y}_1,\cdots,\B{y}_n \in \mathcal{Y}$, $n\in\mathbb{N}$, we have $$ \det \begin{bmatrix} \B{y}_1\\ \vdots\\ \B{y}_n \end{bmatrix} \begin{bmatrix} \B{y}_1&\cdots&\B{y}_n
\end{bmatrix}=\|\B{y}_1\|^2
\prod_{k=2}^{n} \|Q_{\langle \B{y}_1, \cdots, \B{y}_{k-1}\rangle}(\B{y}_k)\|^2,
$$ where $\langle \B{y}_1, \cdots, \B{y}_{k-1}\rangle$ is the subspace generated by $\{\B{y}_1, \cdots, \B{y}_{k-1}\}$. \end{lem} \begin{proof}
We assume that $\{\B{y}_i\}_{i=1}^n$ are linearly independent, because otherwise the equality is trivially true. By orthogonal decomposition of each $\B{y}_k$ over the subspace $\langle \B{y}_1, \cdots, \B{y}_{k-1}\rangle$, we can obtain the sequences $\{\B{f}_i\}_{i=1}^n$, $\{\eta_i\}_{i=1}^n$, and $\{\B{p}_i\}_{i=1}^n$ such that for every $k=1, \cdots, n$, we have $\B{y}_k=\eta_k \B{f}_k+\B{p}_k$ where $\eta_k\in \mathbb{R}_+$, $\B{p}_k\in \langle \B{y}_1, \cdots, \B{y}_{k-1}\rangle$, $\B{f}_k\perp \langle \B{y}_1, \cdots, \B{y}_{k-1}\rangle$, and ${\|\B{f}_i\|=1}$. In fact for each $k=2, \cdots, n$, $\B{p}_k=P_{\langle \B{y}_1, \cdots, \B{y}_{k-1}\rangle}(\B{y}_k)$ and $\eta_k \B{f}_k=Q_{\langle \B{y}_1, \cdots, \B{y}_{k-1}\rangle}(\B{y}_k)$. Let $f:\mathbb{R}^n\rightarrow \mathcal{Y}$ be the linear isometry such that $f(\B{e}_i)=\B{f}_i$ for every $i=1, \cdots, n$, where $\{\B{e}_i\}_{i=1}^n$ is the standard basis for the Euclidean space $\mathbb{R}^n$, i.e., $\B{e}_i$ is the \underline{column vector} that is $1$ in the $i$-th entry and $0$ elsewhere.
For each $i=1, \cdots,n$, define $\B{x}_i\in\mathbb{R}^n$ as the inverse image of $\B{y}_i$, i.e., $f(\B{x}_i)=\B{y}_i$. As $f$ is an isometry, we have $$ \det \begin{bmatrix} \B{y}_1\\ \vdots\\ \B{y}_n \end{bmatrix} \begin{bmatrix} \B{y}_1&\cdots&\B{y}_n \end{bmatrix}= \det \begin{bmatrix} \B{x}_1^T\\ \vdots\\ \B{x}_n^T \end{bmatrix} \begin{bmatrix} \B{x}_1&\cdots&\B{x}_n \end{bmatrix}= \Bigl(\det \begin{bmatrix} \B{x}_1&\cdots&\B{x}_n \end{bmatrix}\Bigr)^2. $$ Again due to the fact that $f$ is an isometry, for every $k$ we have $\B{x}_k\in\langle \B{e}_1, \cdots, \B{e}_{k} \rangle$, i.e., the matrix $\begin{bmatrix} \B{x}_1&\cdots&\B{x}_n \end{bmatrix}$ is upper triangular with $\eta_i$'s on its diagonal. So we have $$ \det \begin{bmatrix} \B{x}_1&\cdots&\B{x}_n \end{bmatrix} =\prod_{i=1}^n \eta_i. $$
But $\eta_1=\|\B{y}_1\|$, and $\eta_i=\|Q_{\langle \B{y}_1, \cdots, \B{y}_{i-1}\rangle}(\B{y}_i)\|$ for every $i=2, \cdots, n$. So the proof is complete. \end{proof}
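The factorization of the Gram determinant into squared residual norms can be illustrated numerically: in Euclidean space, the residual norms $\|Q_{\langle \B{y}_1,\cdots,\B{y}_{k-1}\rangle}(\B{y}_k)\|$ are computable by successive least-squares projections, and their squared product recovers $\det(Y Y^T)$. The snippet below is an illustration only, not part of the text.

```python
import numpy as np

# Illustration (not part of the text) of the Gram-determinant factorization
#   det(Y Y^T) = ||y_1||^2 * prod_{k>=2} ||Q_{<y_1,...,y_{k-1}>}(y_k)||^2,
# where Q is the residual of the orthogonal projection onto the span of the
# previously listed vectors.
rng = np.random.default_rng(0)
n, dim = 4, 7
Y = rng.normal(size=(n, dim))          # rows y_1, ..., y_n in R^dim
lhs = np.linalg.det(Y @ Y.T)           # Gram determinant

residual_norms = [np.linalg.norm(Y[0])]
for k in range(1, n):
    span = Y[:k].T                     # columns span the first k vectors
    coef, *_ = np.linalg.lstsq(span, Y[k], rcond=None)
    residual_norms.append(np.linalg.norm(Y[k] - span @ coef))

rhs = np.prod(np.array(residual_norms) ** 2)
assert np.isclose(lhs, rhs)
```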
\begin{cor}\label{detCov formula} Suppose $\{X_i\}_{i=1}^{n}$ is a family of jointly Gaussian random variables. Using Lemmas \ref{conditional variance formula} and \ref{Ineuqlity for conditional variance}, we obtain the following formula: $$
\det \cov (X_1, \cdots, X_n)=\var(X_1) \prod_{k=2}^{n} \var(X_k\big| X_1, \cdots, X_{k-1}). $$ \end{cor} Using Lemma \ref{Ineuqlity for conditional variance} we can easily verify the following two propositions. \begin{prop}\label{Inequality for detCov} Suppose that $\{Y_i^j;\; i=1, \cdots,n, j=1,\cdots, m_i\}$ is a family of jointly Gaussian random variables, where $m_i\in\mathbb{N}$ for every $i=1, \cdots, n$. Also, for each $i=1, \cdots, n$, let $\B{Y}_i:=(Y_i^1,\cdots, Y_i^{m_i})$. Then we have $$ \begin{aligned} \detcov(\B{Y}_1, \cdots, \B{Y}_n)
=\detcov(\B{Y}_1) \prod_{k=2}^{n} \detcov(\B{Y}_k\big| \B{Y}_1, \cdots, \B{Y}_{k-1}) \leq \prod_{i=1}^{n} \detcov(\B{Y}_i), \end{aligned} $$
where $\detcov(\B{Y}_{k}\big| \B{Y}_1, \cdots, \B{Y}_{k-1})$ is the determinant of the conditional covariance matrix of $\B{Y}_{k}$ conditioned on the random vectors $\B{Y}_1$, ..., $\B{Y}_{k-1}$. \end{prop}
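Proposition \ref{Inequality for detCov} is a Fischer-type inequality, and its equality part is the Schur-complement formula for block determinants. The following numerical check is an illustration only (not part of the text), run on a random positive-definite covariance matrix partitioned into two blocks.

```python
import numpy as np

# Illustration (not part of the text): for a positive-definite covariance S
# split into blocks of sizes 2 and 3,
#   det S <= det S11 * det S22                       (Fischer-type inequality)
#   det S  = det S11 * det(S22 - S21 S11^{-1} S12)   (Schur complement = the
#                                                     conditional covariance).
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
S = A @ A.T + np.eye(5)                # positive-definite covariance matrix
full = np.linalg.det(S)
blocks = np.linalg.det(S[:2, :2]) * np.linalg.det(S[2:, 2:])
assert full <= blocks * (1 + 1e-12)

S11, S12, S21, S22 = S[:2, :2], S[:2, 2:], S[2:, :2], S[2:, 2:]
schur = S22 - S21 @ np.linalg.solve(S11, S12)
assert np.isclose(full, np.linalg.det(S11) * np.linalg.det(schur))
```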
\begin{prop}\label{Reduction Inequality for detCov} Let $m\in\mathbb{N}$, and consider any family of $n$ jointly Gaussian random vectors of size $m$, i.e.,
$\BU{Y}_i:=(Y_i^1,\cdots, Y_i^{m})$ for every $i$. Then, for every $k=1,\cdots,n$, we have $$ \begin{aligned} &\detcov(\BU{Y}_1, \cdots, \BU{Y}_{n})\\ &\quad\quad \leq \detcov(\BU{Y}_{k}) \detcov(\BU{Y}_1-\BU{Y}_{k}, \cdots, \BU{Y}_{k-1}-\BU{Y}_{k}, \BU{Y}_{k+1}-\BU{Y}_{k}, \cdots, \BU{Y}_{n}-\BU{Y}_{k}). \end{aligned} $$ \end{prop}
We have the following lemma which can be proved by elementary probability and then Stirling's approximation, i.e., the fact that $\kappa_1\leq \frac{n!}{(\frac{n}{e})^n \sqrt{n}}\leq \kappa_2$, where $\kappa_1$ and $\kappa_2$ are strictly positive global constants.
\begin{lem}\label{Partitioning a set}
Let $n,m\in\mathbb{N}$, and suppose that we have $nm$ distinct balls and $n$ distinct baskets. Let $\mathfrak{N}^{nm}_{n}$ be the number of different ways one can distribute the balls among the baskets such that each basket contains exactly $m$ balls. In other words, $\mathfrak{N}^{nm}_{n}$ is the cardinality of $\Omega^{mn}_{n}$ where $\Omega^{mn}_{n}$ is the set of all functions $\sigma:\{1, 2, \cdots, mn\}\rightarrow \{1, 2, \cdots, n\}$ such that for every $p\in\{1, 2, \cdots, n\}$, the cardinality of the inverse image of $p$ under $\sigma$ equals $m$, i.e., $|\sigma^{-1}(p)|=m$. We have $$ \mathfrak{N}^{nm}_{n}=\frac{(nm)!}{(m!)^n} \geq \frac{\kappa_1 \sqrt{n}}{\kappa_2^n \sqrt{m^n}} n^{nm}, $$ where $\kappa_1$ and $\kappa_2$ are strictly positive global constants. \end{lem}
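The exact count in this lemma is the multinomial coefficient $(nm)!/(m!)^n$. The snippet below is an illustration only (not part of the text): it confirms the count by brute force for small $n,m$, and checks a lower bound of the stated shape with the explicit sample constants $\kappa_1=\sqrt{2\pi}$, $\kappa_2=e$ coming from Stirling's bounds $\sqrt{2\pi}\leq n!/((n/e)^n\sqrt{n})\leq e$ (these particular constants are an assumption for the demo, not claimed to be the lemma's).

```python
from math import factorial, sqrt, pi, e
from itertools import product

# Brute-force check (illustration only) that N^{nm}_n = (nm)! / (m!)^n counts
# the maps sigma: {1,...,nm} -> {1,...,n} with every fiber of size exactly m.
def brute(n, m):
    return sum(1 for f in product(range(n), repeat=n * m)
               if all(f.count(p) == m for p in range(n)))

def formula(n, m):
    return factorial(n * m) // factorial(m) ** n

for n, m in [(2, 2), (2, 3), (3, 2)]:
    assert brute(n, m) == formula(n, m)

# Lower bound of the lemma's shape with sample constants kappa_1 = sqrt(2*pi),
# kappa_2 = e (an assumption for this demo only).
for n, m in [(2, 2), (3, 4), (5, 6)]:
    bound = sqrt(2 * pi) * sqrt(n) / (e ** n * sqrt(float(m) ** n)) * float(n) ** (n * m)
    assert formula(n, m) >= bound
```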
The following theorem, also known as Dirichlet's theorem on Diophantine approximation, is a direct application of the pigeonhole principle, which itself was first used by Dirichlet \cite{Dirichlet1863}. For completeness we provide the proof. \begin{thm}[Dirichlet's approximation theorem]\label{Dirichlet} For any real number $\alpha$ and any positive integer $n$, there exist integers $p$ and $q$ such that $1\leq q \leq n$ and $
|p-q \alpha|\leq \frac{1}{n}. $ \end{thm} \begin{proof} Consider the numbers $\alpha-\lfloor\alpha\rfloor$, $2\alpha-\lfloor2\alpha\rfloor$, ..., $n\alpha-\lfloor n\alpha\rfloor$, and the intervals $[\frac{i}{n},\frac{i+1}{n})$, for $i=0,\cdots,n-1$. Either one of the numbers falls into the first interval $[0,\frac{1}{n})$, or otherwise, by the pigeonhole principle, the $n$ numbers lie in the remaining $n-1$ intervals and some interval contains more than one of them. In either case we can find the desired $p$ and $q$. \end{proof}
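For the value actually used earlier, $\alpha=\log_2 3$, the guaranteed approximations are easy to exhibit by direct search. The snippet below is an illustration only (not part of the proof).

```python
from math import log2

# Illustration (not part of the proof): for alpha = log_2(3) and n = 50, find
# integers p, q with 1 <= q <= n and |p - q*alpha| <= 1/n, whose existence is
# guaranteed by Dirichlet's approximation theorem.
def dirichlet_pair(alpha, n):
    for q in range(1, n + 1):
        p = round(q * alpha)           # best integer p for this q
        if abs(p - q * alpha) <= 1.0 / n:
            return p, q
    raise AssertionError("would contradict Dirichlet's theorem")

alpha = log2(3)
p, q = dirichlet_pair(alpha, 50)
assert 1 <= q <= 50 and abs(p - q * alpha) <= 1.0 / 50
```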
\end{document}
\begin{document}
\title{No-go for fully unitary quantum mechanics from Bell's Theorem; comment on ``Physics and Metaphysics of Wigner's Friends: Even performed pre-measurements have no results''} \author{Konrad Schlichtholz} \affiliation{International Centre for Theory of Quantum Technologies, University of Gdansk, 80-308 Gda{\'n}sk, Poland} \email[Corresponding author: ]{konrad.schlichtholz@phdstud.ug.edu.pl}
\pagenumbering{arabic}
\begin{abstract}
The purpose of this comment is to show that a reinterpretation of the results from the Letter: ``Physics and Metaphysics of Wigner's Friends: Even performed pre-measurements have no results'' allows for reaching the conclusion ``pre-measurements have no results'' based only on postulates of quantum mechanics without additional assumptions on irreversibility. Additionally, with supplementary reasoning based on Bell's theorem, one can show that unitary decoherence cannot be solely responsible for the quantum-to-classical transition, and an additional irreversibility model is required for its full description. Consequently, the black hole information paradox has no physical basis. \end{abstract}
\maketitle
This comment presents a reinterpretation of the results from the Letter \cite{ZM}. The authors of \cite{ZM} point out that, in a restatement of Wigner's Friend proposed in \cite{FR}, a paradox appears due to equating irreversible measurements with reversible pre-measurements, to which one should not assign outcomes. They argue that the inclusion of decoherence in the Friend's system rules out the paradox. Based on this, they reject the statement from \cite{FR} that ``quantum theory cannot consistently describe the use of itself''.
Let me start with the reanalysis of the GHZ-like reasoning (`Step three' in the Letter). In this thought experiment, based on Born's rule, `outcomes' of Friends' pre-measurements (unitary evolution of Friends' labs) $f_i=\pm1$ and Wigners' measurements $w_i=\pm1$ are forced to obey the relation $(f_1 f_2 f_3)^2 (w_1 w_2 w_3)^2=-1$. The authors argue that the Friends cannot be associated with the outcomes $f_i$, as this leads to the paradox. However, this argument is based on the assumption that Wigners' outcomes are assigned correctly. The motivation for this is that Friends' outcomes can be wiped out by Wigners' actions, while Wigners' results cannot be undone due to effectively irreversible, yet still unitary, decoherence. Irreversibility arises since control over the degrees of freedom of macroscopic objects is effectively impossible. This is not satisfactory, as this irreversibility is not intrinsic to the theory itself, and mathematical consistency requires the absence of contradiction between all of the theory's constituents --- not only between those practically implementable. Therefore, at this stage, there is no particular mathematical reason for the irreversibility assumption, and this reasoning is just an improved version of the original paradox from \cite{FR}. However, one can modify the experiment by assuming that the Wigners also perform pre-measurements. In this case, the probabilities are assigned by Born's rule in the same way, and thus the relation for the outcomes still holds. Therefore, even if only pre-measurements are done, we get the paradox. This shows, without additional assumptions, that one cannot associate outcomes with pre-measurements. However, associating outcomes with collapse does not cause paradoxes in such scenarios.
One can see decoherence emerging as a non-unitary Kraus representation of the evolution \cite{Deco} of the reduced density matrix of a subsystem (the Friend) coupled with an environment (the Friend's lab), where the whole system evolves unitarily. Therefore, such decoherence is already taken into account in this modified setup, as it is in fact a pre-measurement. This can be seen from Eq. (2) in \cite{ZM}, as the reduced density matrix of the Friend after unitary evolution of the entire lab is a classical mixture of states corresponding to possible results. At this point, one could try to assume that the Friend assigned some outcome. However, as discussed, this causes a paradox. Thus, unitary decoherence does not provide a consistent way of describing measurement; rather, it is its precursor, and therefore it does not fully describe the quantum-to-classical transition.
This can also be seen in another way. In general, evolving to a classical mixture in a subsystem is not equivalent to assigning the result. Consider entanglement swapping \cite{Swapp}, in which two singlets decoupled from the environment are recombined to form a new singlet. Treating the second singlet as part of the environment, the reduced density matrix of the modes of the first singlet after the procedure is simply the classical mixture. However, we know that there is no local hidden-variable description of the system, and thus no meaningful outcome can be ascribed to it. This idea can be generalized to GHZ state generation \cite{GHZ,GHZ2} and scaled to arbitrary entangled system size. In general, unitary evolution entangles the system with the environment, whereas from Bell's theorem we know that the phenomenon of entanglement cannot be described by local hidden variables (outcomes). Therefore, for a theory to unambiguously describe a measurement on a subsystem, there has to be a moment when entanglement with the subsystem is completely removed on all levels, leading to all reduced density matrices containing the subsystem being block-diagonal with respect to the measurement basis in this subsystem. This requires nonunitary behavior of the system at some scale of dilution of entanglement (e.g., a discontinuous cut of coherences). Therefore, observable multiparty Bell non-classicality provides indirect experimental proof that current unitary quantum mechanics does not describe measurement and is only an effective statistical approximation, not fundamental for macroscopic systems. Simply put, outcomes are not well defined in unitary evolution, and one needs to impose irreversibility to overcome that.
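The distinction invoked here, that an exactly classical reduced state does not license assigning outcomes, can be made concrete with a standard textbook computation (an illustration added here, not part of the original comment): each qubit of a singlet has reduced density matrix $\mathbb{1}/2$, yet the pair violates the CHSH inequality up to $2\sqrt{2}$, so no local hidden variables reproduce its correlations.

```python
import numpy as np

# Illustration (not part of the comment): the one-qubit reduced state of the
# singlet is the maximally mixed classical mixture I/2, while the joint state
# violates the CHSH inequality, ruling out local hidden variables (outcomes).
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)       # (|01> - |10>)/sqrt(2)
rho = np.outer(psi, psi)
rho_1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)  # trace out qubit 2
assert np.allclose(rho_1, np.eye(2) / 2)                 # exactly I/2

def spin(theta):                                         # cos(t) Z + sin(t) X
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(a, b):                                             # correlation <A(a) B(b)>
    return psi @ np.kron(spin(a), spin(b)) @ psi

S = E(0, np.pi/4) + E(0, -np.pi/4) + E(np.pi/2, np.pi/4) - E(np.pi/2, -np.pi/4)
assert abs(abs(S) - 2 * np.sqrt(2)) < 1e-12              # Tsirelson bound, > 2
```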
The above results show that information is irreversibly lost during evolution, and the black hole information paradox \cite{black} that builds upon the unitarity of evolution is not of fundamental concern.
\end{document}
\begin{document}
\title {Nef reduction and anticanonical bundles} \author{Thomas Bauer and Thomas Peternell} \date{ } \maketitle
\vspace*{-0.5in}\section*{Introduction}
Projective manifolds $X$ with nef anticanonical bundles (i.e. $-K_X \cdot C = \det T_X \cdot C \geq 0$ for all curves $C \subset X$) can be regarded as an interpolation between Fano manifolds (ample anticanonical bundle) and Calabi-Yau manifolds resp.\ tori and symplectic manifolds (trivial canonical bundle). A differential-geometric analogue is given by varieties with semi-positive Ricci curvature, although this class is strictly smaller -- to get the correct picture one has to consider sequences of metrics and make the negative part smaller and smaller. However we will work completely in the context of algebraic geometry. \\ Our aim is twofold: classification and, as a consequence, boundedness in case of dimension 3. We shall not consider threefolds with trivial canonical bundles, the eventual boundedness of Calabi-Yau threefolds still being unknown. Fano threefolds were classified long ago and threefolds with big and nef anticanonical bundle are closely related to $\bQ$-Fano threefolds; therefore we will concentrate here on {\it projective threefolds $X$ with $-K_X$ nef and $K_X^3 = 0,$ but $K_X \not \equiv 0.$} \\ The essential problem is to distinguish the positive and flat directions in $X.$ There are three main tools to do that: \begin{itemize} \item the Albanese map \item Mori theory \item the nef reduction. \end{itemize}
Given a normal projective variety $X$ and a nef line bundle $L$, the nef reduction produces an almost holomorphic dominant meromorphic map $f: X \rightharpoonup B$ with connected fibers such that \begin{enumerate} \item $L$ is numerically trivial on all compact fibers $F$ of $f$ of dimension $\dim X - \dim B$ \item for a general point $x \in X$ and every irreducible curve $C$ passing through $x$ such that $\dim f(C) > 0,$ we have $L \cdot C > 0.$ \end{enumerate}
The number $\dim B$ is an invariant of $L$ (actually $f$ is birationally determined) and is called the nef dimension $n(L)$. We will apply this to $L = -K_X$ and find that in case $1 \leq n(-K_X) \leq 2,$ some multiple of $-K_X$ is spanned and provides the nef reduction (Theorem 2.1). This theorem is actually first established in the case when $X$ is rationally connected, while the other cases are done a posteriori with further knowledge of the structure of the variety. \\ The first thing in classification is of course the study of the Albanese. It is known that $\alpha: X \to {\rm Alb}(X)$ is a surjective submersion. Since we are interested only in classification up to finite \'etale cover, we will assume that the irregularity $q(X)$ is maximal (with respect to finite \'etale covers) and then, possibly after another cover, Theorem 4.2 provides a precise structure if $n(-K_X) = 1$ or $2$ -- essentially $X$ is a product -- and allows us to show that threefolds with positive irregularity are bounded up to finite \'etale cover, also if $n(-K_X) = 3.$ \\ The Albanese theory being settled, we may now assume that $q = 0,$ even after finite \'etale cover. This means that $X$ is simply connected. If now $X$ is not rationally connected, we can use the rational quotient and it turns out that after finite \'etale cover, $X$ is a product $\bP_1 \times $K3 (Theorem 3.1). \\ So we are reduced to rationally connected threefolds. Combining the holomorphic nef reduction and Mori theory which are so to speak ``perpendicular'', we arrive at several structure theorems (sections 5 and 6) if $n(-K_X) = 1$ or $2.$ \\ The case $n(-K_X) = 3$ is studied in sect.\ 7. This condition means that $-K_X$ is ample on all irreducible members of covering families of curves.
Recalling our general assumption that $K_X^3 = 0,$ we show that $K_X^2 \ne 0$ and -- although it has a non-zero fixed part -- the anticanonical system $\vert -K_X \vert $ induces a fibration $f: X \to \bP_1.$ The general fiber $F$ has $n(-K_F) = 2$ and therefore it is either $\bP_2$ blown up in 9 points but without elliptic fibration or a special $\bP_1$-bundle over an elliptic curve. We study in detail the first case under the genericity assumption that the unique element in $\vert -K_F \vert$ is a smooth elliptic curve (and not a configuration of rational curves). The remaining cases will be studied in a second part. Notice that $n(-K_X) = 3$ and $K_X^3 = 0$ is the only case when ``abundance'' $\kappa (-K_X) = \nu (-K_X)$ does not hold. The surface analogues are $\bP_2$ blown up in 9 points without elliptic fibration resp.\ elliptic ruled surfaces of very special type.\\ Concerning boundedness (in the rationally connected case), we are immediately done by classification if $X$ admits a Mori contraction $\varphi: X \to Y$ of fiber type, i.e. $\dim Y \leq 2.$ If $\varphi$ is birational, we want to proceed by induction on the Picard number. In most cases $-K_Y$ is again nef, so that the induction is no problem. However there are two exceptions, namely when $X \to Y$ is the blow up of a rational curve $C \subset Y$ with normal bundle $\sO(-1) \oplus \sO(-2)$ resp.\ $\sO(-2) \oplus \sO(-2).$ The first case does not create any difficulty because after some birational transformation it leads to a bounded situation. However the $(-2,-2)$-case needs further consideration which will be carried out in the second part of this paper. Therefore at the moment we obtain boundedness modulo boundedness of threefolds with $n(-K_X) = 3$ resp.\ of those threefolds carrying a ``$(-2,-2)$''-contraction.
\setcounter{tocdepth}{1} \tableofcontents
\section{Preliminaries}
In \cite{8authors} the following reduction theorem is proved.
\begin{theorem} Let $L$ be a nef line bundle on a normal projective variety $X.$ Then there exists an almost holomorphic dominant meromorphic map $f: X \rightharpoonup B$ with connected fibers such that \begin{enumerate} \item $L$ is numerically trivial on all compact fibers $F$ of $f$ of dimension $\dim X - \dim B$ \item for a general point $x \in X$ and every irreducible curve $C$ passing through $x$ such that $\dim f(C) > 0,$ we have $L \cdot C > 0.$ \end{enumerate} The map $f$ is unique up to birational equivalence of $B.$ \end{theorem}
Recall that a meromorphic map $f: X \rightharpoonup Y$ is almost holomorphic if there exists an open non-empty set $U \subset X$ such that $f \vert U$ is holomorphic and proper. In particular $\dim B$ is an invariant of $L$ and we set $n(L) = \dim B,$ the nef dimension of $L$.
\begin{proposition} Let $p: X \to Y$ be a proper surjective morphism of normal projective varieties and let $L$ be a nef line bundle on $Y$ with nef reduction $f: Y \rightharpoonup B$. Then the Stein factorization of $ f \circ p$ gives a nef reduction for $p^* L$; in particular $n(p^* L) = n(L)$. \end{proposition}
\proof Obviously $p^* L$ is numerically trivial on compact fibers of $ f \circ p$ of the expected dimension. Let $X \stackrel{g}{\rightharpoonup} A \stackrel{q}{\to} B$ be the Stein factorization of (a desingularization of) $f \circ p$. Then $p^* L$ is numerically trivial on the general fiber of $g$ so the nef reduction of $p^* L$ must factor via $g$. Let $x \in X$ be a general point and $C \subset X$ an irreducible curve through $x$ with $\dim g(C) > 0$. As $q$ is finite, $q(g(C))$ is again a curve, so $p(C)$ is a curve which is not contracted by $f$, i.~e.\ $L \cdot p_*(C) >0$. Now the projection formula implies $p^* L \cdot C >0$ and $g$ is a nef reduction for $p^* L$. \qed
\begin{corollary} Let $p: X \to Y$ be an \'etale covering of projective manifolds. Then $n(\pm K_X) = n(\pm K_Y)$. \end{corollary}
\begin{defprop} Let $X$ be a projective manifold (or a variety with $\bQ$-factorial canonical singularities say) and let $D$ be a nef divisor on $X$. We define the numerical dimension of $D$ to be
$\nu(D) = \max \, \{ n \, | \, D^n \not\equiv 0 \}$. Then we always have the inequalities $\kappa(D) \leq \nu(D) \leq n(D)$. We call $D$ \emph{good} if $\kappa(D) = \nu(D)$, otherwise we call it \emph{bad}. If $D=\pm K_X$ is good then it is semi-ample, i.~e.\ some multiple is generated by global sections. \end{defprop}
\proof \cite[2.2 and 6.1]{Ka85}, resp. \cite[2.8]{8authors} for the inequality $\nu (D) \leq n(D).$ \qed
By the Abundance Conjecture, the canonical bundle is never expected to be bad, whereas the anticanonical bundle \emph{can} be bad.
The classification of algebraic surfaces with nef anticanonical bundle is of course an easy consequence of the Kodaira-Enriques classification:
\begin{proposition} Let $X$ be a smooth projective surface with $-K_X$ nef and not numerically trivial. Then the following assertions are equivalent \begin{enumerate} \item $X$ admits an elliptic fibration \item $n(-K_X) = 1$ \item either after a finite \'etale cover $X \simeq \bP_1 \times A$ with an elliptic curve $A$ or $X$ is $\bP_2$ blown up in 9 points such that some multiple $-mK_X$ is generated by global sections. \end{enumerate} In particular $\kappa(-K_X)=\nu(-K_X)=n(-K_X)=1$ and the nef reduction can be chosen holomorphic, not only almost holomorphic. \end{proposition}
As a corollary, $n(-K_X) = 2$ if and only if either $-K_X$ is big or $X$ is $\bP_2$ blown up in 9 points without elliptic fibration or $X = \bP(E)$ with a semi-stable bundle $E$ over an elliptic curve $A$ which cannot be written -- after twist -- in the form $\sO \oplus L$ with $L$ torsion or as a non-split extension of a trivial line bundle with a line bundle of degree $1$. Hence:
\begin{proposition} Let $X$ be a smooth projective surface with $-K_X$ bad. Then $\kappa(-K_X)=0$, $\nu(-K_X)=1$, $n(-K_X)=2$ and $X$ is one of the following: \begin{list}{}{} \item {\bf Case A)} $X$ is $\bP_2$ blown up in 9 points in sufficiently general position (possibly infinitely near) or \item {\bf Case B)} $X = \bP(E)$, $E$ a rank 2 vector bundle over an elliptic curve which is defined by an extension $0 \to \sO \to E \to L \to 0$ with $L$ a line bundle of degree 0 and either \begin{list}{}{} \item {\bf B.1)} $L = \sO$ and the extension is non-split or \item {\bf B.2)} $L$ is not torsion. \end{list} \end{list} \end{proposition}
\begin{remark} In these cases, the structure of the unique element $D$ in $|-K|$ is as follows: \begin{list}{}{} \item {\bf Case A)} The image of $D$ in $\bP_2$ is the unique cubic curve containing the 9 points and every point is a simple point on the cubic. $D$ is either smooth elliptic or a configuration of rational curves. Every component contains exactly $3d$ points where $d$ is the degree of the component. \item {\bf B.1)} $D=2C$ and $C$ is smooth elliptic. \item {\bf B.2)} $D=C_1 + C_2$ where the $C_i$ are smooth elliptic curves which do not meet. \end{list} \end{remark}
The remaining case is $-K_X$ big and nef which implies that $X$ is $\bP_2$ blown up in at most 8 points in almost general position, $\bP_1 \times \bP_1$ or the Hirzebruch surface $F_2$.
\section{Nef reduction for the anticanonical bundle}
In this section we study the nef reduction of a projective {\it three}fold $X$ with nef anticanonical bundle $-K_X$ and prove that the reduction map can be taken holomorphic.
\begin{theorem} Let $X$ be a projective threefold with $-K_X$ nef. Then there exists a holomorphic map $f: X \to B$ to a normal projective variety $B$ such that \begin{enumerate} \item $-K_X$ is numerically trivial on all fibers of $f$ \item for $x \in X$ general and every irreducible curve $C$ passing through $x$ such that $\dim f(C) > 0,$ we have $-K_X \cdot C > 0.$ \end{enumerate} In case $X$ is rationally connected and $n(-K_X) = 1$ or $2$ then even some multiple $-mK_X$ is spanned by global sections, so that we can take $f$ to be (the Stein factorisation of) the map defined by the sections of $-mK_X.$ \end{theorem}
\begin{definition} Let $X$ be a smooth projective threefold. A $(-2,-2)$-contraction on $X$ is a blow-up $\varphi: X \to Y$ of the smooth threefold $Y$ along a smooth rational curve $C$ with normal bundle $N_{C/Y} = \sO(-2) \oplus \sO(-2).$ \end{definition}
\begin{remark} The background of this definition is the following. Let $X$ be a smooth projective threefold, $\phi: X \to Y$ the blow-up of a smooth curve $C$ in the projective threefold $Y.$ Then $-K_Y$ is nef unless possibly $\phi$ is a $(-2,-2)$-contraction or $C = \bP_1$ with normal bundle $\sO(-1) \oplus \sO(-2).$ This last case is usually easy to deal with. \end{remark}
\proof If $X$ is not rationally connected, then the assertions follow from direct classification, see sect.\ 3 and 4. So we will assume $X$ to be rationally connected -- {\it actually we shall use rational connectedness only if $K_X^2 = 0.$} We start with an almost holomorphic nef reduction $f: X \rightharpoonup B$ over the normal projective variety $B$ of dimension $b.$ If $b = 1,$ then $f$ is automatically holomorphic, the spannedness being proved in (5.2), so we may assume $b = 2.$ Consider the general fiber $C$ of $f$, an elliptic curve, and form the associated family $(C_t)_{t \in T}.$ To be precise, we consider the graph of this family $$q: \sC \to T$$ with projection $ p: \sC \to X.$ Then $p(q^{-1}(t)) $ is a compact fiber $C_t$ of $f$ for general $t$ and of course $p : q^{-1}(t) \to C_t$ is an isomorphism for all $t.$ After a base change we may assume $T$ smooth and also we may assume $\sC$ normal. \\ Since $p^*(-K_X)$ is $q$-nef, we easily find a line bundle $\tilde L$ on $T$ and a positive integer $m$ such that $$ p^*(-mK_X) = q^*(\tilde L).$$ Indeed, we let $L = q_*(p^*(-K_X))^{**};$ notice here that $p^*(-K_X)$ is trivial on the general fiber of $q$ since $K_X \vert C_t $ is trivial and since $p$ is an isomorphism near the general $C_t.$ Thus $q_*(p^*(-K_X))^{**}$ is really a line bundle on $T$ and we obtain an injection $q^*(L) \to p^*(-K_X).$ This yields a decomposition $$ p^*(-K_X) = q^*(L) + D $$ with $D$ effective coming from multiple fibers. Hence $mD = q^*(D')$ and we put $\tilde L = mL + D'.$ \\ If now $K_X^2 \ne 0,$ then $L^2 > 0$ and therefore $\kappa (L) = 2. $ This implies $\kappa (-K_X) = 2$ (since $p$ has degree 1) so that the numerical Iitaka dimension and the Iitaka dimension coincide. Thus $-K_X$ is {\it good}. By [Ka85,6.1], $-K_X$ is therefore semi-ample, i.e. some $-mK_X$ is spanned.
It remains to treat the case $K_X^2 = 0.$ Here $L^2 = 0,$ but of course we cannot say that $\kappa (L) = 1.$ If however we know that $\kappa (-K_X) = 1,$ then the same arguments as above show that $-K_X$ is semi-ample. To get more information, we consider a Mori contraction $\varphi: X \to Y.$ Since $K_X^2 = 0, $ we rule out $\dim Y = 1$ and also $\varphi$ cannot contract a divisor to a point. In other words, $\varphi$ is a conic bundle or the blow-up of a smooth curve $C$ in the smooth 3-fold $Y.$ \vskip .2cm \noindent (1) Suppose first that $\varphi $ is birational. Then $-K_Y$ is again nef unless $\varphi$ is a $(-2,-2)$-contraction since by $K_X^2 = 0,$ the second exception in (2.2) cannot occur. We also note that $K_Y^2 = [C].$ If $-K_Y$ is nef, the Kawamata-Viehweg vanishing theorem gives $$H^2(-K_Y) = 0.$$ On the other hand, $\varphi_*(-K_X) = \sI_C \otimes -K_Y,$ hence by virtue of $R^q\varphi_*(-K_X) = 0$ for $q > 0$, we obtain the exact sequence $$ H^1(C,-K_Y \vert C) \to H^2(-K_X) \to H^2(-K_Y).$$ Since $K_Y \cdot C = 2 - 2g(C) = \deg (-K_C) $ (see the proof of 5.3 for the detailed computations), we obtain $ h^1(-K_Y \vert C) = h^0(K_Y \vert C + K_C) \leq 1.$ Now Riemann-Roch gives $$ \chi(-K_X) = 3.$$ Here we used the rational connectedness to obtain $\chi(\sO_X) = 1.$ Putting things together, we obtain $h^0(-K_X) \geq 2$ and therefore $\kappa (-K_X) = 1.$ In case of a $(-2,-2)$-contraction, we verify the vanishing $H^2(-K_Y) = 0$ by hand; then all arguments are the same. By duality, we check $H^1(2K_Y) = 0.$ Let $H$ be a general hyperplane section. Then $-2K_Y+H$ is ample, at least after substituting $H$ by a multiple; hence $H^1(2K_Y-H) = 0.$ Since $H$ does not contain the curve $C$ and since $K_Y^2 = C,$ the restriction $-K_Y \vert H$ is big and nef, hence $H^1(2K_Y \vert H ) = 0.$ Thus $H^1(2K_Y) = 0.$ \vskip .2cm \noindent (2) Now consider the case of a conic bundle $\varphi: X \to Y$ with discriminant locus $\Delta \subset Y$. 
As $X$ is rationally connected, $Y$ must be rational. By the formula $\varphi_* (K_X^2) \equiv -(4K_Y + \Delta)$ we know that
$|-4K_Y|$ contains the reduced element $\Delta$, hence $-K_Y$ must be nef (cf.\ Lemma 2.5). So $Y$ is either $\bP_1 \times \bP_1$, $F_2$ or $\bP_2$ blown up in at most 9 points in almost general position. As above, Riemann-Roch gives $\chi(-K_X)=3$. If $h^0(-K_X) \geq 2$, then $\kappa(-K_X) = \nu(-K_X) = 1$ and we are done. So we have to rule out the two possibilities
{\bf A)} $h^2(-K_X) \geq 3$ resp.
{\bf B)} $h^0(-K_X)=1$ and $h^2(-K_X)=2$.
\noindent We consider the rank 3 vector bundle $V = \varphi_*(-K_X)(-K_Y)$. Using duality and the vanishing of the higher direct image sheaves $R^i\varphi_*(-K_X)$, $i > 0$, we calculate $h^0(V^*) = h^2(\varphi_*(-K_X)) = h^2(-K_X)$ so we know in case {\bf A)} that $V^*$ has at least 3 sections and in case {\bf B)} that $V^*$ has 2 sections.\\ As $-K_X$ restricted to a fiber of $\varphi$ is $\sO(1)$ on the conic, we can recover $X$ as a hypersurface in $\bP(V)$. By construction $X$ is linearly equivalent to $2 \xi + \pi^*D$ for some divisor $D$ on $Y$ and
$(\xi + \pi^*K_Y)_{|X} = -K_X$ which determines $D$: Using the canonical bundle formula we calculate
$$ -K_X = (- K_{\bP(V)} - X)_{|X} = (3 \xi - \pi^*(K_Y + \det V) - 2\xi -\pi^*D)_{|X}$$
$$ = (\xi - \pi^*(K_Y + \det V + D))_{|X}$$ hence $ D = -2 K_Y - \det V$. Second, the condition $K_X^2=0$ reads as $$ 0 = (\xi + \pi^*K_Y)^2 \cdot (2 \xi + \pi^*D)$$ $$ = \xi^2 \cdot \pi^*(2 \det V + 4K_Y + D) + \xi \cdot \pi^*(-2 c_2 + 2K_Y^2 + 2K_Y \cdot D)$$ which implies $D = -2 \det V - 4K_Y = 2 D$ i.~e.\ $D=0$ as well as $c_2 = K_Y^2$.\\ In case {\bf A)} $V^*$ has 3 sections $s_1, s_2, s_3$. We consider a general line $C \subset Y$. Lemma 2.6 implies that $V$ restricted to $C$ is nef hence of type $\sO(a) \oplus \sO(b) \oplus \sO(c)$ with nonnegative integers $a,b,c$. As $V^*$ has 3 sections we conclude $a=b=c=0$ hence $\det V \cdot C =0$. But then $K_Y \cdot C = 0$ using our numerical condition, which is impossible.\\ In case {\bf B)} we still know that $V^*$ has two sections $s_1, s_2$. Again $V$ restricted to a general $\bP_1$ is nef and therefore $V$ splits as $\sO \oplus \sO \oplus \sO(a)$ with $a \geq 0$. In particular $s_1 \wedge s_2$ does not vanish identically. This means that the cokernel $\sL$ in the sequence $$ 0 \to \sO \oplus \sO \stackrel{(s_1,s_2)}{\longrightarrow} V^* \to \sL \to 0$$ has generic rank 1. From the sequence and our numerical conditions above we calculate $c_1 (\sL) = -c_1 (V) = 2K_Y$ and $c_2(\sL) = c_2(V) = K_Y^2$. Dualising we obtain an injection $ 0 \to \sL^* \to V$ with $\sL^*$ locally free and in fact $\sL^* = -2K_Y$ because on a rational surface numerical and linear equivalence coincide. Twisting by $K_Y$ we finally obtain an injection $$ 0 \to \sO(-K_Y) \to \varphi_*(-K_X).$$ If $-K_X$ has more than one section, $\kappa(-K_X)=1$ and we are done. Therefore $h^0(-K_Y)=1$. An application of Riemann-Roch shows that this is only possible if $K_Y^2=0$, hence $c_2(V)=0$ and already $\sL$ is locally free. 
So we get: $$ 0 \to \sO(-K_Y) \to \varphi_*(-K_X) \to \sO(K_Y) \oplus \sO(K_Y) \to 0$$ Now we have two possibilities: Either this splits i.~e.\ $\varphi_*(-K_X) = \sO(-K_Y) \oplus \sO(K_Y)^{\oplus 2}$ and $-2K_X$ has at least 3 sections and again we are done. Or the sequence doesn't split which is only possible if $h^1(-2K_Y)=1$ which is equivalent to $h^0(-2K_Y)=2$ using Riemann-Roch. Writing down the usual filtration for $S^2(\varphi_*(-K_X))$ we obtain $$ 0 \to F^1 \to S^2(\varphi_*(-K_X)) \to \sO(2K_Y)^{\oplus 3} \to 0$$ $$ 0 \to \sO(-2K_Y) \to F^1 \to \sO^{\oplus 2} \to 0$$ and $\kappa(-K_X)=1$ once again. \qed
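For the reader's convenience, here is the Riemann-Roch computation behind $\chi(-K_X) = 3,$ which was used in both cases of the proof. By Hirzebruch-Riemann-Roch on the threefold $X$, with $c_1 = c_1(X) = -K_X$ and $\chi(\sO_X) = {{c_1 \cdot c_2} \over {24}},$
$$ \chi(-K_X) = {{(-K_X)^3} \over {6}} + {{(-K_X)^2 \cdot c_1} \over {4}} + {{(-K_X) \cdot (c_1^2 + c_2)} \over {12}} + \chi(\sO_X) = -{{K_X^3} \over {2}} + 3\chi(\sO_X).$$
Since $K_X^2 = 0$ implies $K_X^3 = 0,$ and since $\chi(\sO_X) = 1$ for the rationally connected $X$, this gives $\chi(-K_X) = 3.$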
In the proof of Theorem 2.1 we have actually shown the following
\begin{corollary} Let $X$ be a smooth projective threefold with $-K_X$ nef. Suppose $K_X^2 = 0$ and $X$ rationally connected. Then $\nu(-K_X) = \kappa (-K_X) = 1$ and therefore $-K_X$ is semi-ample, inducing an abelian or K3-fibration over $\bP_1.$ \end{corollary}
During the proof of Theorem 2.1 we used the fact that for a conic bundle $X \to Y$ with $K_X^2=0$ we know that $-K_Y$ is nef. This follows from the following lemma:
\begin{lemma} Let $Y$ be a smooth projective surface and assume that there exists a reduced divisor $\Delta \equiv -mK_Y$ for some $m \geq 4$. Then $-K_Y$ is nef. \end{lemma}
\proof Let $C \subset Y$ be a curve which is contained in $\Delta$, i.~e.\ $\Delta = C + \Delta'$ for some effective divisor $\Delta'$ which does not contain $C$ (if $C$ is not contained in $\Delta$, then $-mK_Y \cdot C = \Delta \cdot C \geq 0$ directly). Then $(1-m) K_Y \cdot C = K_Y \cdot C + \Delta \cdot C = \deg K_C + \Delta' \cdot C \geq -2$. Therefore $-K_Y \cdot C \geq -2 / (m-1) > -1$ since $m \geq 4$, and the integer $-K_Y \cdot C$ is nonnegative. \qed
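Spelled out, the displayed chain combines $\Delta \cdot C = C^2 + \Delta' \cdot C$ with adjunction for the (possibly singular) curve $C$:
$$ (1-m)\, K_Y \cdot C = K_Y \cdot C + \Delta \cdot C = (K_Y \cdot C + C^2) + \Delta' \cdot C = (2p_a(C) - 2) + \Delta' \cdot C \geq -2,$$
where $\Delta' \cdot C \geq 0$ because $\Delta'$ does not contain $C.$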
Another ingredient of the proof is the following specialization of \cite[3.21]{DPS94}.
\begin{lemma} Let $\varphi: X \to Y$ be a Mori contraction which is a conic bundle, $X$ a smooth projective threefold with $-K_X$ nef and let $V$ be the vector bundle $\varphi_*(-K_X)(-K_Y)$. Furthermore let $\bP_1 \cong C \subset Y$ such that $X_C = \varphi^{-1}(C)$
is smooth. Then $V_{|C}$ is generated by global sections, in particular it is nef.
\end{lemma}
\proof We consider the induced conic bundle $\varphi_C: X_C \to C$. Let $l = \varphi_C^{-1}(y)$ be any fiber.
We want to show that $(\varphi_C)_*(-K_{X|X_C})(-K_{Y|C})$ is generated by its global sections, i.~e.\
that every section of $( -K_X \otimes \varphi^* (-K_Y) )_{|l}$ lifts to $X_C$. We will show the vanishing of
$H^1(X_C, -K_{X|X_C} + \varphi_C^*(-K_{Y|C}) -l)$ which gives the desired extension property. If we write
$$-K_{X|X_C} + \varphi_C^*(-K_{Y|C}) -l = K_{X_C} + L$$ then an easy calculation gives
$$ L = -2 K_{X|X_C} + \varphi_C^*(-K_C -y).$$ As $-K_{X}$ is nef and $\varphi$-ample, $L$ is ample and Kodaira vanishing gives $H^1(K_{X_C} + L)=0$. \qed
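Spelled out, the computation of $L$ is adjunction on $X_C = \varphi^*(C)$: with $l = \varphi_C^*(y)$ and $K_{X_C} = (K_X + \varphi^* C)_{|X_C}$ we get
$$ L = -K_{X|X_C} + \varphi_C^*(-K_{Y|C}) - l - K_{X_C} = -2K_{X|X_C} + \varphi_C^*(-(K_Y + C)_{|C} - y) = -2 K_{X|X_C} + \varphi_C^*(-K_C - y),$$
using adjunction $(K_Y + C)_{|C} = K_C$ in the last step. Since $C \cong \bP_1,$ the divisor $-K_C - y$ has degree $1,$ so $L$ is the sum of the nef and $\varphi_C$-ample divisor $-2K_{X|X_C}$ and the pullback of an ample divisor, hence ample.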
\begin{remark} We used several times the fact that the restriction of $\xi = \sO_{\bP(V)}(1)$ to $X$ is $-K_X$.
If $l$ is a conic then $-K_X \cdot l = \xi \cdot l = 2$ so $\xi_{|X}$ and $-K_X$ only differ by $\varphi^* M$ for some $M$. Consider the relative Euler sequence $$ 0 \to \Omega^1_{\bP(V)/Y}(1) \to \pi^* V \to \sO_{\bP(V)} (1) \to 0$$
Since $H^i( l, \Omega^1_{\bP_2}(1)_{|l}) =0$ for $i=0,1$, which follows from the Bott formula, we know that
$\varphi_*( \Omega^1_{\bP(V)/Y}(1)_{|X}) = R^1\varphi_*( \Omega^1_{\bP(V)/Y}(1)_{|X}) = 0$. So if we restrict the sequence to $X$ and push it down to $Y$ we obtain an isomorphism
$$\varphi_*(-K_X) = V = \varphi_* \varphi^* V = \varphi_* (\pi^*V_{|X}) \stackrel{\cong}{\to} \varphi_*(\xi_{|X})$$ which implies that $M$ is torsion hence trivial as $Y$ is simply connected. \end{remark}
\section{Non-rationally connected threefolds with vanishing irregularity}
\begin{theorem} Let $X$ be a smooth projective threefold with $\tilde q(X) = 0.$ Suppose that $-K_X$ is nef but $K_X \not \equiv 0.$ Then either $X$ is rationally connected or after finite \'etale cover, $X \simeq \bP_1 \times S$ with $S$ a K3 surface. \end{theorem}
\proof Since $K_X \not \equiv 0,$ its Kodaira dimension $\kappa (X) = - \infty,$ hence $X$ is uniruled. Let $(C_t)$ be a covering family of rational curves, providing an almost holomorphic quotient $f: X \rightharpoonup S$ to a smooth variety $S$ of dimension 1 or 2. If $\dim S = 1,$ then $S \simeq \bP_1$ by $\tilde q(X) = 0.$ Since the fibers of $f$ are rationally connected, we conclude that $X$ is rationally connected. So let $\dim S = 2;$ we may assume $S$ minimal. Since $\tilde q(X) = 0, $ also $\tilde q(S) = 0$. Moreover by Zhang [Zh96], $\kappa (S) \leq 0,$ see [DPS01,4.12]. If $\kappa (S) = - \infty,$ then $S$ must be rational, and hence $X$ is rationally connected. Thus we are reduced to $S$ being K3 or an Enriques surface. After a finite \'etale cover we may assume $S$ to be K3. \\ By Mori theory (in the relative version for $f$) we can find a sequence $g: X \rightharpoonup X'$ of birational contractions and flips such that $X'$ has a $\bP_1$-fibration $h: X' \to S,$ which is the contraction of an extremal ray. By [PS98,2.1,2.2] $-K_{X'}$ is almost nef; i.e. $-K_{X'} $ is nef except for finitely many rational curves. Hence the arguments of [PS98,1.9,1.10] apply and show that $X' \to S$ is actually a $\bP_1$-bundle. In particular, $X'$ is smooth. Then we can write $$ X' = \bP(E) $$ with a rank 2 vector bundle $E$ over $S.$ Now the almost nefness of $-K_{X'}$ implies that $S^2E \otimes \det E^* $ is nef on all curves except for finitely many rational curves $C_i \subset S.$ Using $\bQ$-bundles we can say that $$ E_0 := E \otimes {{\det E^*} \over {2}} $$ is almost nef. Since moreover $-K_X$ is pseudo-effective, also $-K_{X'} $ is pseudo-effective, so that $\sO_{\bP(E_0)}(1) $ is pseudo-effective. Then by [DPS01,6.7(c)], $E_0$ is numerically flat. \\ NB. 
It is clear that we may argue on the level of $\bQ$-bundles; alternatively note that, fixing $A$ ample on $S,$ then $$ H^0(S^m(S^2E \otimes \det E^*) \otimes A) \ne 0 $$ for large $m$, hence $\sO_{\bP(S^2E \otimes \det E^*)}(1) $ is pseudo-effective and also almost nef in the sense of [DPS01], so that [DPS01,6.7(c)] applies to give the numerical flatness of $S^2E \otimes \det E^*.$ \\ Now, $S$ being simply connected, it follows that $S^2E \otimes \det E^* = \sO_S^{\oplus 3},$ and we claim that then $\bP(E) \simeq \bP_1 \times S$: consider a smooth member $D \in \vert -K_{X'} \vert, $ given by a general section in $S^2E \otimes \det E^*.$ Now $D$ maps $2:1$ onto $S$ and $K_D = \sO_D$, so $D \to S$ is \'etale. Since $\pi_1(S) = 0,$ $D$ must be disconnected with $2$ components, hence $-K_{X'}$ is divisible by $2$ and therefore $E \otimes {{\det E^*} \over {2}}$ exists as a vector bundle and is trivial. \\ Alternatively, the three sections of $-K_{X'}$ give a map $X' \to \bP_2$. Since $-K_{X'}^2 = 0,$ the image of this map must be $ \bP_1$ and we conclude. \\ Finally we show that $X = X'.$ So let $g_m: X_m \rightharpoonup X'$ be the last contraction of $g.$ We know again that $-K_{X_m} $ is almost nef. This leads immediately to a contradiction by considering a surface $S_x = \{x\} \times S$ such that $S_x$ meets the center of $g_m$ and by computing canonical bundles. \qed
\begin{corollary} Let $X$ be a smooth projective threefold with $\tilde q(X) = 0.$ Suppose that $-K_X$ is nef but $K_X \not \equiv 0.$ If $X$ is not rationally connected, then either $X = \bP_1 \times S$ with $S$ a K3 or an Enriques surface or $X$ is a non-trivial $\bP_1$-bundle over an Enriques surface $S$ which is trivialized by the universal 2:1-cover $\tilde S \to S$. In all cases we have $n(-K_X) = n(-K_{\tilde X}) = 1$. \end{corollary}
\proof Let $\tilde X \to X$ be the universal cover; then $\tilde X = \bP_1 \times \tilde S$ with a K3 surface $\tilde S$ by (3.1). Let $\varphi: X \to S$ be a Mori contraction; then $\varphi $ lifts to $\tilde X$ and therefore must be a $\bP_1$-bundle. Moreover we obtain an \'etale cover $\tilde S \to S.$ So $S$ is K3 or Enriques and we are in the situation as described in the corollary. \qed
\section{Threefolds with positive irregularity}
\begin{setup} {\rm Here we consider smooth projective threefolds $X$ with $-K_X$ nef such that $\tilde q(X) > 0.$ After passing to a finite \'etale cover we may and will assume that $q(X) = \tilde q(X).$ We let $\alpha: X \to A$ denote the Albanese, which is a surjective submersion by [PS98]. In particular $q \leq 3$ and $q = 3$ iff $X$ is abelian. So we need only to consider the cases $q = 1$ and $q = 2.$ Additionally we consider the invariant $n(-K_X).$ If $n(-K_X) = 0, $ then $K_X \equiv 0,$ so we suppose $n(-K_X) > 0.$ Then we have a non-trivial nef reduction $f: X \rightharpoonup S$, at least if $n(-K_X) \ne 3$ (for $n(-K_X) = 3$ we have $S = X$). } \end{setup}
\begin{theorem} Let $X$ be a smooth projective threefold with $-K_X$ nef. Suppose $$1 \leq q = \tilde q \leq 2$$ and $$0 < n(-K_X) < 3.$$ Then the nef reduction $f: X \to S$ can be taken holomorphic and after a finite \'etale cover, $X$ is one of the following \begin{enumerate} \item $q(X) = 1,$ $n(-K_X) = 2,$ and $X \simeq A \times S$ ($A$ elliptic, $S$ rational with $n(-K_S)= 2$, i.e. $\bP_2,$ $\bP_1 \times \bP_1$, del Pezzo or $-K_S$ nef without elliptic fibration). \item $q(X) = 1,$ $n(-K_X) = 1,$ and $X \simeq A \times F$ where $F$ is $\bP_2$ blown up in 9 points such that $-K_F$ is nef and $F$ admits an elliptic fibration. Here $S$ is the image of the elliptic fibration on $F.$ \item $q(X) = 2,$ $n(-K_X) = 2.$ Then $X = \bP(E) \times B_2$ with $B_i$ elliptic curves and $E$ a numerically flat bundle over $B_1$ with $n(-K_{\bP(E)}) = 2$ (i.e. $E$ is non-trivial even after finite \'etale cover). Here $S = \bP(E).$ \item $q(X) = 2,$ $n(-K_X) = 1.$ Then $X = A \times \bP_1 $ ($A$ abelian), and $S = \bP_1.$ \end{enumerate} \end{theorem}
\proof We consider the almost holomorphic nef reduction $f: X \rightharpoonup S$ to a smooth curve or a normal surface. \\ \\ {\bf Case I:} $q = 1.$ \\ Then all fibers $F$ of $\alpha$ are smooth rational surfaces with $-K_F$ nef and in particular $K_F^2 \geq 0.$ \\ {\it Subcase I.1:} $F = \bP_2.$ So $\alpha$ is a $\bP_2$-bundle; we write $X = \bP(E)$ with a rank 3-bundle $E$ over $A.$ Then $E \otimes {{\det E^*} \over {3}}$ is nef with $c_1 = 0,$ hence numerically flat. If $n(-K_X) = 2,$ then $\dim S = 2;$ consider a general fiber $C$ of $f.$ Since $f$ is holomorphic with $K_X \cdot C = 0,$ $C$ is a smooth elliptic curve and therefore an \'etale multi-section of $\alpha,$ hence a section after a finite \'etale cover of $A.$ Then we obtain a 2-dimensional family of disjoint sections, and hence $X = A \times S.$ \\ The case $n(-K_X) = 1$ is obviously impossible for dimension reasons since $-K_X$ is $\alpha$-ample and trivial on the general fiber of $f.$ \\ {\it Subcase I.2:} $F = \bP_1 \times \bP_1.$ Then we have a factorization (see [PS98]) $$ X {\buildrel {g} \over {\la }} W {\buildrel {h} \over {\la}} A $$ with $g$ and $h$ both $\bP_1$-bundles. Then $X = \bP(E) $ with a rank 2-bundle $E$ over $W$; moreover $-K_W$ is nef. Since $-K_X$ is $\alpha$-ample, it is clear that (as in case I.1) $\dim f(F_{\alpha}) = 2,$ hence $n(-K_X) = 2.$ Let again $C$ be a general fiber of $f,$ a smooth elliptic curve. Then after a finite \'etale cover of $A$, $C$ is a section of $\alpha,$ hence already $W = \bP_1 \times A.$ These elliptic curves define an irreducible family $(C_t)_{t \in T}$ with $T$ compact; a priori only the general $C_t$ is a smooth elliptic curve. However $-K_X \cdot C_t = 0$ for all $t;$ hence $C_t$ cannot contain a component in a fiber $F$ of $\alpha.$ On the other hand, $C_t \cdot F = 1,$ hence every $C_t$ is a section of $\alpha.$ Now consider the family $(g(C_t)).$ This is a complete family of elliptic curves, i.e. with no degeneracies. 
Therefore $W = \bP_1 \times A$ and $g(C_t)$ is a fiber of the projection to $\bP_1.$ Also $\bP(E \vert \{c\} \times A) = \bP_1 \times A,$ hence after renormalizing, $E \vert \{c\} \times A$ is trivial for all $c$ and so $E = p_1^*(E') $ with a vector bundle $E'$ over $\bP_1.$ Hence $X = \bP(E') \times A$, and consequently $\bP(E') = \bP_1 \times \bP_1.$ Therefore $X = \bP_1 \times \bP_1 \times A.$ \\ {\it Subcase I.3:} $F$ is del Pezzo with $K_F^2 \leq 7.$ Then we have a factorization $$ X {\buildrel {g} \over {\la }} W {\buildrel {h} \over {\la}} A $$ with $g$ the blow-up of some multi-sections and $h$ a $\bP_2$-bundle. For the same reason as before, $n(-K_X) = 2.$ Again we get many multi-sections of $h$, which gives $W = \bP_2 \times A$ (after \'etale cover), hence $X = \bP_2(x_1, \ldots, x_r) \times A.$ \\ {\it Subcase I.4:} The same arguments still work if $-K_F$ is just nef as long as $\dim S = 2,$ i.e. $n(-K_X) = 2.$ So suppose $\dim S = 1.$ Note that this is only possible if $-K_F$ is not big, i.e. $K_F^2 = 0$ and if $F$ admits an elliptic fibration to $S = \bP_1.$ We still have a factorization $$ X {\buildrel {g} \over {\la }} W {\buildrel {h} \over {\la}} A $$ with $g$ the blow-up of some multi-sections and $h$ a $\bP_2$-bundle. Consider the nef reduction $f: X \to S \simeq \bP_1.$ Then the general fiber $F$ is a smooth surface with $K_F \equiv 0.$ Since $F$ projects onto the elliptic curve $A,$ the surface $F$ must be hyperelliptic or a torus and actually it is a product after finite \'etale cover. Let $E$ be the exceptional locus of $g.$ Then $E \cap F$ is a union of multi-sections of $F \to A$ and we are going to determine its structure. \\ Consider the last blow-up $g_r : X = X_r \to X_{r-1},$ blowing up the \'etale multi-section $C_r$ of $X_{r-1} \to A.$ Let $E_r$ be the corresponding divisor in $X.$ Since $f(E_r) = S,$ we have an \'etale cover $E_r \to S \times A$ given by $(f \vert E_r) \times (h \circ g \vert E_r)$. 
Hence $E_r \cap F$ is an elliptic curve, an \'etale multi-section of $F \to A.$ Therefore we find a 1-dimensional (non-complete) family of disjoint multi-sections of $F \to A$ not meeting $E \cap F.$ Varying $F$ and proceeding by induction on the blow-ups belonging to $g,$ we obtain a 2-dimensional family of disjoint \'etale multi-sections of $W \to A$ (not meeting the exceptional locus of $g$ in $W$). Hence $W$ is a product after finite \'etale cover and we want to conclude that then $X$ is already a product (up to finite \'etale cover). This is clear if we always blow up curves of type $A_p = A \times \{p\}$. So assume this is not the case, i.~e.\ we blow up a curve $C_i$ which is not of this type. Then we may find some curve $A_p$ such that $C_i \cap A_p$ is finite and non-empty. But if $\hat{A}_p$ denotes the strict transform we then calculate $$ -K_{X_i} \cdot \hat{A}_p = ( g_i^* (-K_{X_{i-1}}) - E_i) \cdot \hat{A}_p = - K_{X_{i-1}} \cdot A_p - E_i \cdot \hat{A}_p = - E_i \cdot \hat{A}_p <0$$ (here $-K_{X_{i-1}} \cdot A_p = 0$ because inductively we may assume that $X_{i-1}$ is a product). Now this contradicts $-K_{X_i}$ nef.\\ {\bf Case II:} $q = 2$. Now $\alpha: X \la A$ is a $\bP_1$-bundle. After a finite \'etale cover, we can write $X = \bP(E)$ with a numerically flat rank 2-bundle $E$. \\ {\it Subcase II.1:} \ $n(-K_X) = 1.$ In that case we have again a holomorphic map $f: X \la S = \bP_1$. The general fiber $F_f$ is an \'etale cover of $A,$ so after another \'etale base change, general fibers of $f$ and $\alpha$ meet transversally at one point. In other words, $\alpha$ has many disjoint sections and thus $E$ is trivial (after a twist). So $X = S \times A = \bP_1 \times A.$\\ {\it Subcase II.2:} \ $n(-K_X) = 2.$ Then $\alpha$ has a 2-dimensional family of sections so that $A$ carries a family of elliptic curves. Now by Poincar\'e reducibility, $A = B_1 \times B_2$ is a product of elliptic curves $B_i,$ possibly after finite \'etale cover. 
Then we argue as in Subcase I.2 to obtain the product structure $X = \bP(F) \times B_2$ with $F$ a semi-stable bundle on $B_1 $ (such that $E = p_1^*(F)$). \qed
\begin{re} {\rm In the case $n(-K_X) = 3$ we cannot expect such precise results. Here $-K_X$ is positive on all covering families of generically irreducible curves. Let us consider e.g. the situation that $q(X) = 2.$ With the notations as before and after a finite \'etale cover, $X = \bP(E)$ with a rank 2-bundle over the Albanese torus $A$ such that $E$ is nef with $c_1(E) = 0 $ (see [CP91]). So $E$ is numerically flat. Fix a curve $C \subset A.$ Then there exists a moving curve $B \subset \bP(E \vert C)$ with $K_X \cdot B = 0$ if and only if after normalizing $C$ and after a finite \'etale cover of the normalization, the bundle $E \vert C$ splits. Let us say that $E$ is almost trivial on $C.$ Then we obtain: \\ $n(-K_X) = 3$ if and only if there is at most a countable number of curves $C \subset A$ such that $E \vert C$ is almost trivial. } \end{re}
\begin{corollary} Let $X$ be a smooth projective threefold with $-K_X$ nef and $K_X \not \equiv 0.$ If $\tilde q > 0,$ then $q > 0.$ The nef reduction is holomorphic, and if $q < \tilde q$ then $X$ is a $\bP_1$-bundle over a hyperelliptic surface. \end{corollary}
\proof Since $K_X \not \equiv 0,$ we have $\tilde q \leq 2.$ Let $\tilde X \to X$ be a finite \'etale cover such that $q(\tilde X) = \tilde q.$ From the classification in (4.2) we deduce $\chi(\sO_{\tilde X}) = 0.$ If $q(X) = 0$ then $\chi(\sO_X) \geq 1,$ contradicting $\chi(\sO_X) = \frac{1}{m} \chi(\sO_{\tilde X}).$ So we only have to investigate threefolds $X$ with $q = 1$ and $\tilde q = 2.$ Looking at the classification, we see that then the Albanese map $\alpha: X \to A$ factors as $g \circ f$ with a $\bP_1$-bundle $f: X \to A'$ and an elliptic bundle $g: A' \to A$ where $q(A') = 1.$ So $A'$ is hyperelliptic (since $\kappa (A') \leq 0$). The fibers $F$ of $\alpha$ are $\bP_1$-bundles over elliptic curves with $-K_F$ nef. If $\dim S = 2,$ $X$ carries a 2-dimensional family of elliptic curves. Then $F$ is necessarily a product, and $S = \bP(V) $ with a rank 2-bundle over $A$ while $X = \bP(g^*(V)).$ The bundle $V$ is moreover non-trivial, even after twists. If $\dim S = 1,$ then $S = \bP_1$ and $X = S \times A'.$ In all other cases, $\dim S = 0.$ \qed
\begin{theorem} Smooth projective threefolds $X$ with $-K_X $ nef, $K_X \not\equiv 0$ and $\tilde q > 0$ form a bounded family up to finite \'etale cover. \end{theorem}
\proof By virtue of (4.1) we are a priori reduced to the case $n(-K_X) = 3;$ however, we will not use this information. Let $\alpha: X \to A $ be the smooth surjective Albanese map. \\ First assume that $q = 2.$ As already noticed, after possibly a finite \'etale cover, $X$ is of the form $\bP(E)$ with a numerically flat rank-2 bundle $E$ over $A$. In particular $c_1(E) = 0$ and $E$ is semi-stable. This gives boundedness. \\ So from now on we assume $q = 1.$ If $\alpha$ is a $\bP_2$-bundle, the same argument applies. In all other cases we have the following picture [PS98]. There exists a $\bP_1$-bundle $p: W \to A$ with $-K_W$ nef and a rank 2-bundle $E$ over $W$ such that the 3-fold $X' = \bP(E)$ has nef anticanonical bundle and such that $X$ arises from $X'$ by blowing up some \'etale multisection of $\alpha$ (including the case $X = X'$). The surfaces with $-K$ nef are bounded, so we may fix $W.$ Next we have to bound $E$ up to twists by line bundles. Let $F$ be a fiber of $p$ and $C_0$ a section with $C_0^2 $ minimal. We normalize $E$ such that $$ 0 \leq c_1(E \vert F) \leq 1 $$ and $$ 0 \leq c_1(E \vert C_0).$$ Let $e = -C_0^2. $ Since $-K_W$ is nef, we have $-1 \leq e \leq 0.$ If $e = -1,$ then there exists an \'etale cover of degree $2$ which has $e = 0.$ So we can restrict to $e = 0.$ Writing $$ c_1(E) = aC_0 + bF,$$ we have $a = c_1(E \vert F) $ and $b = c_1(E \vert C_0),$ so $0 \leq a,b \leq 1.$ In particular $c_1(E)^2 = 2ab = 0$ or $2.$ The fact that $-K_{X'} $ is nef is translated into the nefness of $$ E \otimes {{\det E^*} \over {2}} \otimes {{-K_W} \over {2}}.$$ Using $K_W^2 = 0$ and the inequalities $c_1^2 \geq c_2$ and $c_2 \geq 0$ for a nef bundle, the equality $c_2(E \otimes {{\det E^*} \over {2}}) = 0$ is established. This means $$ c_1^2(E) = 4c_2(E), $$ hence $c_1^2 (E) = c_2(E) = 0$ and $ab = 0.$ If now $E$ is semi-stable with respect to the ample divisor $H = C_0+F$, then we obtain boundedness. 
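The Chern class argument can be made explicit. For a rank 2 bundle $F$ and a ($\bQ$-)line bundle $M$ one has $c_2(F \otimes M) = c_2(F) + c_1(F) \cdot c_1(M) + c_1(M)^2$; applied with $M = {{\det E^*} \over {2}} \otimes {{-K_W} \over {2}},$ the mixed terms in $c_1(E) \cdot K_W$ cancel and, using $K_W^2 = 0,$
$$ c_2\Bigl(E \otimes {{\det E^*} \over {2}} \otimes {{-K_W} \over {2}}\Bigr) = c_2(E) - {{c_1(E)^2} \over {4}} + {{K_W^2} \over {4}} = c_2(E) - {{c_1(E)^2} \over {4}}.$$
This twisted bundle is nef with first Chern class $-K_W,$ so the inequalities $0 \leq c_2 \leq c_1^2 = K_W^2 = 0$ force $c_1(E)^2 = 4c_2(E).$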
So suppose $E$ is unstable with respect to $H.$ Let $S$ be the maximal destabilising subsheaf, a line bundle. Then we have an exact sequence $$ 0 \to S \to E \to \sI_Z \otimes Q \to 0 \eqno (S)$$ with a finite set $Z$ and a line bundle $Q$. The destabilising property gives $S \cdot H \geq 1.$ Since $K_W \cdot C_0 = 0$, $$E \otimes {{\det E^*} \over {2}} \vert C_0$$ is numerically flat, hence $E \vert C_0$ is semi-stable. Thus $$S \cdot C_0 \leq {{1} \over {2}} c_1(E) \cdot C_0 = {{b} \over {2}}, $$ so $S \cdot C_0 = 0$ and therefore $S \cdot F \geq 1.$ \\ The nefness of $$E \otimes {{\det E^*} \over {2}} \otimes {{-K_W} \over {2}} \vert F$$ yields $ E \vert F = \sO \oplus \sO $ or $\sO(1) \oplus \sO(-1)$ in case $a = 0$ and $ E \vert F = \sO \oplus \sO(1) $ in case $a = 1.$ Hence $S \cdot F = 1$ and consequently we have $S \equiv C_0.$ After tensoring with a topologically trivial line bundle we have $S = C_0.$ If $a = b = 0,$ then $Q \equiv -C_0.$ Since $c_2(E) = 0,$ $Z = \emptyset $ and now the sequence (S) proves the boundedness. The cases $a = 1, b = 0$ and $a = 0, b = 1$ are done in the same way. \\ Finally we have to deal with the multi-sections (of degree at most $9$) to be successively blown up. After some \'etale cover of $A$, the first multisection $C$ is a section of $X' \to A$ and we have to bound $C.$ Let $f: Z \to X'$ be the blow-up of $C$ and $D = f^{-1}(C).$ The equation $$ 0 = K_Z^3 = (K_{X'} + D)^3 $$ together with $K_{X'}^3 = 0$ yields $c_1(N_C) = - 3 (K_{X'} \cdot C).$ On the other hand, $c_1(N_C) = - K_{X'} \cdot C $ since $C$ is an elliptic curve. Thus $K_{X'} \cdot C = c_1(N_C) = 0$. The nefness of $-K_Z \vert D = -K_D + N_D$ leads easily to the statement that $N_{C/X'} $ is numerically flat.
Let $\tilde C = p(C) \subset W.$ Then $\tilde C$ is a section of $W \to A.$ Let $\tilde e$ be the invariant of $E \vert \tilde C.$ Let $c = \tilde C^2.$ Then the nefness of $-K_{X'} \vert \bP(E \vert \tilde C)$ is translated into $c \geq \tilde e$ if $\tilde e \geq 0$ and into $c \geq 0$ if $\tilde e < 0,$ i.e. $\tilde e = -1.$ Let $C_0 $ be a section of minimal self-intersection in $\bP(E \vert \tilde C)$. Since $-K_{X'}$ is nef and since $-K_{X'} \cdot C = 0,$ we must have $C \equiv C_0.$ Since $-K_{X'} \vert \bP(E \vert \tilde C) \equiv 2C_0 + (\tilde e+c)F,$ we conclude that $\tilde e = -c.$ Therefore, if $\tilde e \geq 0,$ then $c \geq \tilde e = -c$ forces $c = \tilde e = 0,$ while $\tilde e = -1$ gives $c = 1.$ Hence $\tilde C^2 = 0$ or $1,$ which proves boundedness of $\tilde C$ and hence of $C.$ \\ The other centers to be blown up are treated in the same way; we leave the details to the reader.
\qed
\section{Rationally connected threefolds I}
In this section we investigate rationally connected threefolds $X$ with $n(-K_X) = 1;$ they can be viewed as the 3-dimensional analogues of the surfaces $\bP_2(x_1, \cdots, x_9)$ carrying an elliptic fibration. The first proposition improves Theorem 2.1 in case $n(-K_X) = 1,$ compare also (2.11) in [8authors].
\begin{proposition} Let $X$ be a smooth projective 3-fold with $-K_X$ nef. Suppose $n(-K_X) = 1$ and let $f: X \to B = \bP_1$ be the nef reduction. Then there exists $m_0 \in \bN$ such that $-m_0 K_X$ is spanned by global sections and such that the sections define the map $f.$ In particular $K_X^2 = 0.$ \end{proposition}
\proof Let $F$ be the general fiber of $f.$ Then $K_F = K_X \vert F \equiv 0,$ hence there exists $m$ such that $mK_F = \sO_F$ for the general fiber. Thus $f_*(-mK_X)$ is a line bundle over $B$ and there is an inclusion $$ f^*f_*(-mK_X) \to -mK_X.$$ Hence we can write $$ -mK_X = f^*(\sO_B(a)) + \sum \lambda_i F_i $$ with fiber components $F_i.$ It follows that $ \sum \lambda_i F_i$ is $f$-nef, which is only possible if $n \sum \lambda_i F_i = f^*(\sO_B(b))$ for some positive integer $n$ (cut e.g. by a general hyperplane section). Hence we find $m_0$ such that $$ -m_0 K_X = f^*(\sO_B(c)) $$ and our claim follows. \qed
\begin{theorem} In the situation of (5.1), suppose that there exists a Mori contraction $\varphi: X \to S$ with $\dim S \leq 2.$ Then $-K_S$ is nef. Moreover $$f \times \varphi: X \to B \times S \simeq \bP_1 \times S$$ is a two-sheeted cover ramified over some $D \in \vert \sO_B(2) \hat \otimes -2K_S \vert.$ The projection $\varphi$ is a conic bundle with discriminant locus $\Delta $ being a member of the linear system $\vert -4K_S \vert$, and the map $f$ is a K3-fibration over $B = \bP_1$ with $-K_X = f^*(\sO_B(1)).$ In particular $-K_X$ is spanned by global sections and $-K_X$ is hermitian semi-positive. \end{theorem}
Of course, examples as in the theorem exist: just start with $X$ as a smooth two-sheeted covering of $S$, ramified over $D$ as in the theorem (since $D$ is divisible by 2, the cyclic cover exists). Naturally, the existence of a smooth $D \in \vert \sO_B(2) \hat \otimes -2K_S \vert$ must be guaranteed. \\ \\
\proof Let $F$ be a general smooth fiber of $f.$ Since $K_F \equiv 0,$ we must have $\dim \varphi(F) = 2,$ so $S$ is a (smooth) surface, and $\varphi$ is a conic bundle by Mori's classification. By (2.5) $-K_S$ is nef. Let $\Delta$ denote its discriminant locus. Then we have the well-known and easy formula $$ \varphi_*(K_X^2) \equiv -(4K_S+\Delta).$$ Since $K_X^2 = 0$ by (5.1), $\Delta \equiv -4K_S,$ hence $\Delta = -4K_S$, since $X$ is rationally connected, hence simply connected. In particular $\Delta \ne 0$, since $S$ is necessarily a rational surface. \\ Let $l$ be the fiber over a general point of $\Delta.$ Then $l$ is a reducible conic $l = l_1+l_2$ and the $l_i$ are homologous in $X$, since $\varphi$ is the contraction of an extremal ray. Thus $-K_X \cdot l_i = 1.$ Let $d = \deg(f \vert l)$ for a general conic $l = \varphi^{-1}(s);$ then $d \geq 2.$ We will show that $d = 2$ so that $-K_X = f^*(\sO_B(1)) $ (use again rational connectedness to pass from numerical equivalence to linear equivalence). \\ \\ {\bf (I)} We first assume that $K_F = \sO_F.$ \\ Then $f_*(-K_X) $ is a line bundle $\sO_B(a),$ and as in the proof of 5.1, we can write $$ -K_X = f^*(\sO_B(a)) + \sum_{i=1}^k \lambda_i F_i, $$ where $F_i$ are fiber components and $\lambda_i$ are positive integers. We take $a$ maximal with such a decomposition and also note that numerical and linear equivalence coincide, $X$ being simply connected. Since $-K_X$ is nef, $\sum \lambda_i F_i = cF$ with a positive rational number $c.$ Since $f_*(\sO_X(\sum \lambda_i F_i)) = \sO_B,$ it follows $0 \leq c < 1.$ Thus $a \geq 0,$ again by nefness of $-K_X.$ \\ Suppose next $a = 0.$ Since $ \varphi \vert F_i$ is finite, we have $F_i \cdot l_j \geq 1$ (remember that $l_1 $ and $l_2$ are homologous), hence $F_i \cdot l \geq 2.$ By virtue of $-K_X \cdot l = 2,$ we must have $k = 1$ and $\lambda_1 = 1,$ so that $-K_X = F_1$.
So $h^0(-K_X) = 1$ and the sequence $$ H^2(\sO_X) \to H^2(\sO_X(F_1)) \to H^2(N_{F_1}) $$ together with $h^2(N_{F_1}) = h^0(K_X \vert F_1)$ shows that $h^2(-K_X) \leq 1.$ Thus $\chi(-K_X) \leq 2.$ On the other hand, Riemann-Roch plus $K_X^3 = 0$ gives $$ \chi(-K_X) = 3 \chi(\sO_X) = 3,$$ contradiction. Hence $a \geq 1.$ Since $$ 2 = -K_X \cdot l = da + (\sum_{i=1}^{k} \lambda_i F_i) \cdot l, $$ we must have $da = 2$ and $k = 0$, therefore $d = 2$ and $ a = 1 $ and $$ -K_X = f^*(\sO_B(1)).$$ Moreover $f \vert l$ and $\varphi \vert F$ are two-sheeted coverings. Now we consider $$\tau = f \times \varphi: X \to B \times S,$$ which is a cyclic cover of degree 2, ramified over say $D \subset B \times S.$ Then $$ K_X = \tau^*(K_{B\times S} + {{1} \over {2}}D) $$ together with $-K_X = f^*(\sO_B(1))$ proves that $D \in \vert \sO(2) \hat \otimes -2K_S \vert.$ Finally, $h = \varphi \vert F: F \to S$ is a two-sheeted cover which easily shows that $F$ must be K3 (consider $h_*(\sO_F) = \sO_S \oplus K_S$ and take cohomology resp.\ take preimages of exceptional curves in $S$).\\ \\ {\bf (II)} If $K_F \ne \sO_F,$ then $K_F$ is torsion, write $\lambda K_F = \sO_F.$ Actually $F$ is a hyperelliptic surface or an Enriques surface and $\lambda = 2,3,4,6$ in the first resp. $\lambda = 2$ in the second case. \\ \\ {\bf (IIa)} Suppose that $\lambda = 2, $ i.e. $2K_F = \sO_F.$ \\ Arguing as in (I), we have $$ -2K_X = f^*(\sO_B(a)) + \sum_{i=1}^k \lambda_i F_i.$$ As before, $a \geq 0.$ Assuming $a = 0,$ we conclude that either $k = 2$ and $-2K_X = F_1+F_2$ or $k = 1$ and $-2K_X = m F_1$ with $m = 1,2.$ Now the contradiction is derived in the same way as before: the cohomology groups $H^2(N_{F_1+F_2}) = H^0(K_X \vert F_1+F_2)$ resp.
$H^2(N_{mF_1})$ for $1 \leq m \leq 2$ have dimension at most 2 and Riemann-Roch gives $\chi(-2K_X) = 5.$ \\ Therefore $a \geq 1$ and the equation $$ -2K_X = f^*(\sO_B(a)) + \sum_{i=1}^k \lambda_i F_i$$ leads to either $da = 2$ and $k = 1,$ $\lambda_1 = 1$ or to $da = 4$ and $k = 0.$ In the first case $a = 1$ and $$ -2K_X = f^*(\sO_B(1)) + F_1 = F + F_1,$$ in the second (when $a = 1,$ $d = 4$) $$ -2 K_X = f^*(\sO_B(1)) = F$$ resp. $$-2 K_X = f^*(\sO_B(2))$$ (when $a = 2,$ $d = 2$). This last case clearly contradicts $K_F \ne \sO_F$. In the two remaining cases we have $h^0(-2K_X) = 2$ and $h^2(-2K_X) \leq 2$ thanks to $-2K_X = F+F_1$ resp. $-2K_X = F$ and $$H^2(\sO_X) \to H^2(\sO_X(F+F_1)) \to H^2(N_{F+F_1}) $$ resp. $$H^2(\sO_X) \to H^2(\sO_X(F)) \to H^2(N_F).$$ This contradicts again Riemann-Roch. So $\lambda = 2$ is impossible; in particular $F$ cannot be an Enriques surface.\\ \\ {\bf (IIb)} Now let $\lambda \geq 3,$ in particular $F$ is hyperelliptic. We shall rule out this case. First suppose that $S$ is ruled: $S = \bP(E).$ Then either $S = \bP_1 \times \bP_1$ or $S$ contains a (rational) curve $C$ with $C^2 < 0.$ In the first case the covering $F \to S$ produces 2 different fibrations on $F$ which is impossible, in the second $F$ contains a curve $C'$ with $C'^2 < 0,$ which is also absurd. Hence $S = \bP_2,$ in particular the Picard number $\rho(X) = 2.$ Now consider the relative Albanese map associated with $f,$ $$ \sigma: X \rightharpoonup W. $$ $\sigma \vert F$ is the Albanese of the general smooth hyperelliptic surface $F.$ Let $\pi: \hat X \to X$ be a sequence of blow-ups such that the induced map $\hat \sigma: \hat X \to W$ is holomorphic. Let $A$ be ample on $W$ and consider $$L = (\pi_*(\hat \sigma^*(A)))^{**}.$$ Then $L$ is a line bundle such that $L \vert F$ is nef, non-trivial but not ample.
Since $\rho (X) = 2,$ we can write, at least as $\bQ$-divisors: $$ L \equiv f^*(\sO_B(a)) \otimes \varphi^*(\sO(b)).$$ Thus $$ L \vert F = \varphi^*(\sO_S(b)),$$ hence $L \vert F$ is ample, trivial or negative, contradiction. \qed
\begin{theorem} In the situation of (5.1) suppose that there is a birational Mori contraction $\varphi: X \to Y.$ Then the following holds: \begin{enumerate} \item $\varphi $ is the blow-up of a smooth curve $C$ in the smooth threefold $Y.$ \item $-K_X = f^*(\sO_B(1))$ and $f$ is a K3-fibration. \item The normal bundle $N_{C/Y} $ is of the form $N_{C/Y} = L \oplus L$ with some line bundle $L$ on $C.$ \item $-K_Y \cdot C = 2g(C)-2 = \deg L.$ \item If $\deg L > 0,$ then $-K_Y$ is big and nef. \item If $\deg L = 0,$ then $-K_Y$ is nef with $K_Y^3 = 0$ and $C$ is elliptic. We have a nef reduction $g: Y \to \bP_2$ such that $-K_Y = g^*(\sO(1))$ and $C$ is a fiber of $g.$ \item If $\deg L < 0,$ then $-K_Y$ is not nef and $C = \bP_1$ with $N_C = \sO(-2) \oplus \sO(-2).$ \end{enumerate} \end{theorem}
\proof The birational map $\varphi$ contracts a prime divisor $E$ either to a point or to a curve. The first alternative however cannot appear since then $f \vert E$ would have to be finite. So $E$ is contracted to a curve $C$, and $Y$ is automatically smooth with $\varphi $ being the blow-up of $C$ in $Y.$ \\ Let $l \simeq \bP_1$ be a non-trivial fiber of $\varphi.$ Since $-K_X \cdot l = 1,$ we see with the same methods as in (5.2) that $$ \deg (f\vert l) = 1 $$ and $$ -K_X = f^*(\sO_B(1)).$$ Denoting $F' = \varphi (F)$ for a fiber $F$ of $f,$ we conclude that $\varphi \vert F: F \to F' $ is an isomorphism. Since $H^1(\sO_X) = 0 $ and $H^2(\sO_X(-F)) = H^2(K_X) = H^1(\sO_X) = 0$, the general $F$ is a K3-surface. Now the exceptional divisor $E$ has two contractions $\varphi \vert E$ and $f \vert E$ so that $E \simeq B \times C.$ In particular we can write $N_C = L \oplus L.$ \\ Let $$C_b = \{b\} \times C;$$ then $K_X \cdot C_b = 0$ since $C_b$ is contracted by $f.$ From $K_X = \varphi^*(K_Y) + E,$ we deduce $$ K_Y \cdot C = - E \cdot C_b.$$ Now $N_E^* = \sO_{\bP(N^*_C)}(1) = C_b + \varphi_E^*(L^*).$ Hence $$ -E \cdot C_b = (C_b + \varphi_E^*(L^*)) \cdot C_b = \deg L^*,$$ and in total $$ K_Y \cdot C = \deg L^*. \eqno (1) $$ So the adjunction formula gives $$ 2g(C)-2 = \deg L. \eqno (2) $$ From the exact sequences $$ 0 \to N_{C_b/E} = \sO \to N_{C_b/X} \to N_{E/X} \vert C_b = L \to 0$$ and $$ 0 \to N_{C_b/F} \to N_{C_b/X} \to N_{F/X} \vert C_b = \sO \to 0$$ we obtain $$ N_{C_b/F} = L. \eqno (3)$$ We also notice $$ \sO_X(F) = \varphi^*(\sO_Y(F')) - E $$ and $$ N_{F/X} = N_{F'/Y} - \sO_{F'}(C), $$ thus $$ N_{F'/Y} = \sO_{F'}(C). \eqno (4) $$ Since $-K_Y$ is nef on every curve $\ne C,$ the bundle $-K_Y$ is nef precisely when $\deg L \geq 0 $ by virtue of (1). The equation $$0 = K_X^3 = (\varphi^*(K_Y) + E)^3 $$ gives $$ K_Y^3= \deg L^*.
\eqno (5)$$ Finally we observe that because of $-K_X = F,$ the formula $$ -K_Y = F' \eqno (6)$$ holds.\\ \\ {\bf Case I:} $\deg L > 0.$ \\ Then $-K_Y$ is nef and big by (5) and the previous remark. The normal bundle $N_{F'/Y} $ is big and nef by (6).\\ \\ {\bf Case II:} $\deg L = 0.$ \\ Here $-K_Y$ is nef with $K_Y^3 = 0.$ The normal bundle $N_{F'/Y} $ is effective (and nef); actually $N_{F'/Y} = \sO_{F'}(C).$ Furthermore it is clear that $n(-K_Y) = 2.$ In fact, $E \cap F$ is an elliptic curve $l$ for general $F$ and $0 = K_X \cdot l = K_Y \cdot \varphi(l) $ giving a 2-dimensional $K_Y$-trivial family of elliptic curves on $Y.$ To be more precise, we consider the exact sequence $$ 0 \to H^0(\sI_C \otimes (- K_Y)) \to H^0(-K_Y) \to H^0(-K_Y \vert C) \to H^1(\sI_C \otimes (- K_Y)).$$ Now $\sI_C \otimes (- K_Y) = \varphi_*(-K_X),$ hence $h^0(\sI_C \otimes (- K_Y)) = 2$ and $h^1(\sI_C \otimes (- K_Y)) = 0,$ as one checks immediately (in fact, $h^1(-K_X) = 0)$. Using the normal bundle sequence for $C \subset F' \subset Y$ and $N_{C/F'} = \sO,$ the normal bundle $N_{C/Y}$ must be trivial or the non-split extension of two trivial bundles, hence $-K_Y \vert C = \sO_C.$ Putting this into the exact sequence, we conclude that $h^0(-K_Y) = 3$ and $-K_Y$ is spanned. Let $\tilde h: Y \to \bP_2$ be the associated map (which contracts $C$) and let $h: Y \to T$ be its Stein factorisation. Then $(-K_Y)^2 \equiv \tilde h^{-1}(x)$ for any $x \in \bP_2,$ on the other hand $K_Y^2 \equiv C.$ We conclude that the finite part $T \to \bP_2$ of the Stein factorisation must be an isomorphism, hence $T = \bP_2.$ \\ \\ {\bf Case III:} $\deg L < 0.$ \\ Then (2) shows that $C = \bP_1$ and that $N_C = \sO(-2) \oplus \sO(-2).$ In this case $-K_Y$ is not nef. \qed
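The expansion behind (5) uses the standard blow-up identities: on $E = \bP(N^*_{C/Y})$ we have $\sO_E(E) = \sO_E(-1),$ the Grothendieck relation gives $E^3 = \deg c_1(N^*_{C/Y}) = -2\deg L,$ and moreover $\varphi_*(E^2) = -C.$ Together with (1) this yields $$ 0 = K_X^3 = (\varphi^*(K_Y) + E)^3 = K_Y^3 + 3\varphi^*(K_Y) \cdot E^2 + E^3 = K_Y^3 + 3\deg L - 2\deg L $$ (the term $(\varphi^*K_Y)^2 \cdot E$ vanishes since $E$ is mapped to a curve), hence $K_Y^3 = -\deg L = \deg L^*,$ which is (5).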
\begin{example} {\rm (1) Let $Z = \bP(E)$ be a $\bP_3$-bundle over $\bP_1$ such that $Z$ is Fano; let $\pi$ denote its projection. We suppose furthermore that $-K_Z \otimes \pi^*(\sO(-1))$ is generated by global sections (the existence of a smooth section would be sufficient). Take a general $$ X \in \vert -K_Z \otimes \pi^*(\sO(-1)) \vert.$$ Let $\psi: Z \to W$ denote the second projection and let $f = \pi \vert X$ and $\varphi = \psi \vert X.$ Then $$ -K_X = f^*(\sO(1)),$$ the general fiber of $f$ is a quartic in $\bP_3$ and $n(-K_X) = 1.$ The condition on $-K_Z \otimes \pi^*(\sO(-1))$ is translated into $$ S^4E \otimes \det E^* \otimes \sO(-1) $$ being generated by sections. If we write $$ E = \bigoplus \sO(a_i) $$ with $a_1 \geq \ldots \geq a_4,$ then this condition comes down to $$ 4a_4 \geq \sum a_i + 1 = c_1(E) + 1.\eqno (*)$$ This implies that up to a twist $E$ can only be of type $(0,0,0,0)$ or of type $(1,0,0,0).$ In the first case $Z = \bP_1 \times \bP_3.$ It is immediately checked that $\varphi: X \to \bP_3$ is birational and that $\varphi$ is the blow-up of a curve $C$ with $\deg C = 16 $ and $g(C) = 33.$ \\ In the second case, the second contraction $\psi: Z \to W = \bP_4$ is the blow-up in a plane $S$ and $X$ is the blow-up of a quartic $W'$ in $\bP_4$ along $W' \cap S.$ \\ \\ (2) In order to get an example for 5.3(6), we take a threefold $Y$ such that $-K_Y$ is spanned by global sections and such that $K_Y^3 = 0.$ Then let $X \subset \bP_1 \times Y$ be a general element of $\vert \sO(1) \hat \otimes -K_Y \vert.$ \\ \\ (3) At the moment we do not have an example for 5.3(7).}
\end{example}
\begin{theorem} Rationally connected threefolds $X$ with the following properties are bounded modulo boundedness of threefolds $Y$ with $-K_Y$ big and nef, resp.\ of threefolds with $-K_Y$ nef and $n(-K_Y) = 3.$ \begin{itemize} \item $-K_X$ is nef; \item $n(-K_X) = 1;$ \item $X$ does not admit a contraction which is of type $(-2,-2).$ \end{itemize} \end{theorem}
The boundedness (actually the classification) of threefolds with $-K$ big and nef is currently under investigation; contractions of type $(-2,-2)$ are excluded merely for technical reasons and for reasons of length. We come back to this in a separate paper.
\proof (1) First suppose that $X$ admits a contraction $\varphi: X \to S$ with $\dim S \leq 2.$ Then Theorem 5.2 applies and, using the notations of (5.2), it only remains to bound the bundle $E,$ i.e. to bound $S.$ But since $K_X^2 = 0,$ we actually have $-K_S$ nef (see the proof of (2.2)), thus $S$ is bounded. \\ (2) If $\varphi$ is birational onto the threefold $Y$, then (5.3) applies. Using the notations of (5.3) and ruling out the case of a $(-2,-2)$-contraction, we either have $(-K_Y)^3 > 0 $ or $K_Y^3 = 0$ with $C$ elliptic. In the second case, we have an elliptic fibration $g: Y \to \bP_2$ and $C$ is a fiber. In order to proceed by induction on the Picard number, we want to apply Theorem 6.9. For that we verify that $Y$ does not admit a $(-2,-2)$-contraction. In fact, if $D$ is the exceptional divisor of such a contraction, then $D$ meets only singular fibers of $g$ and therefore $D \cap C = \emptyset. $ But then $D$ defines already a $(-2,-2)$-contraction on $X$ which was ruled out by assumption. \\ If however $(-K_Y)^3 > 0,$ then $Y$ is bounded (by assumption), hence it remains to bound $C$ for fixed $Y.$ Let $a = K_Y^3.$ From $$ 0 = K_X^3 = K_Y^3 + 3 \varphi^*K_Y \cdot E^2 + E^3 $$ and $\varphi^*(K_Y) \cdot E^2 = - K_Y \cdot C$ and $E^3 = - c_1(N_{C/Y})$ we obtain $$ 2 K_Y \cdot C + 2g-2 = K_Y^3 = a. \eqno (*)$$ Here $g$ denotes the genus of $C.$ Now consider $\vert -mK_Y \vert $ for some large $m$ and the associated birational embedding $$ \psi: Y \to Y' \subset \bP_N.$$ Let $\lambda: Y \to Y'$ be the birational part of $\psi$ and put $C' = \lambda (C).$ Also notice $K_Y = \lambda^*(K_{Y'}).$ By the theory of Hilbert schemes it suffices to bound the degree of $C'$ and the genus of $C'$ ($= g$) (if $\dim \lambda (C) = 0,$ then $(*)$ proves that $C = \bP_1$; on the other hand, its normal bundle is ample due to (5.3), so this case cannot occur). Due to $(*)$ we only need to bound $-K_Y \cdot C.$ But this follows from $K_X^2 = 0,$ i.e.
$K_Y^2 \equiv C,$ hence $-K_Y \cdot C = (-K_Y)^3.$
\qed
\section{Rationally connected threefolds II}
\begin{setup} {\rm We are now turning to the case of rationally connected threefolds $X$ with $-K_X$ nef and $n(-K_X) = 2.$ Here we have a holomorphic elliptic fibration $f: X \to B$ to a normal projective surface $B$ by (2.1). Since $-mK_X = f^*(G)$ for some ample line bundle $G$, there exists an effective $\bQ$-divisor $D$ on $B$ such that $(B,D)$ is log-terminal and $K_X \equiv f^*(K_B+D)$ [Na88,0.4]. In particular $B$ has only quotient singularities. Again we consider a Mori contraction $\varphi: X \to Y.$ \\ We note that by Riemann-Roch $\chi(-K_X) = 3$ since $K_X^3 = 0.$ Since $K_X^2 \ne 0$, by Kodaira vanishing $H^2(-K_X) = 0,$ therefore $$ h^0(-K_X) \geq 3. \eqno (6.1.1) $$} \end{setup}
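For the reader's convenience, the Riemann-Roch computation in (6.1) reads as follows: on the smooth threefold $X,$ $$ \chi(-K_X) = {{(-K_X)^3} \over {6}} - {{(-K_X)^2 \cdot K_X} \over {4}} + {{(-K_X) \cdot (K_X^2 + c_2(X))} \over {12}} + \chi(\sO_X) = -{{K_X^3} \over {2}} + 3\chi(\sO_X), $$ using $\chi(\sO_X) = -{{K_X \cdot c_2(X)} \over {24}}.$ Since $X$ is rationally connected, $\chi(\sO_X) = 1,$ so $K_X^3 = 0$ gives $\chi(-K_X) = 3.$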
\begin{theorem} Suppose in (6.1) that $\dim Y = 1.$ Then $B = \bP_2$ and the general fiber $F$ of $\varphi$ is $\bP_1 \times \bP_1$ or a del Pezzo surface. In particular $X$ is a ramified cover over $\bP_1 \times \bP_2$ of degree $d$ at most 8. Moreover we have $$ -K_X = f^*(\sO_{\bP_2}(a)) $$ with $1 \leq a \leq 2$ and $a = 2$ can only happen when $F = \bP_1 \times \bP_1.$ The elliptic fibration $f$ is equidimensional. Finally, if $a = 1,$ then $K_F^2 = d$ and $8 \geq K_F^2 \geq 2.$ \end{theorem}
\proof Notice that $f \vert F$ is finite and $\varphi$ is finite on every $f$-fiber $F_f.$ Therefore the equidimensionality of $f$ is clear: if $F_0$ is a 2-dimensional fiber of $f,$ then $F_0 \cap F$ would be a curve. \\ (1) First we show that $\varphi$ cannot be a $\bP_2$-bundle. This already settles the second assertion. Suppose $X = \bP(E)$ with a rank-3 bundle $E$ over $Y = \bP_1.$ Then $K_X^3 = 0$ translates into $$ c_1(S^3E \otimes \det E^* \otimes \sO(2)) = 0$$ which is absurd. \\ (2) Next observe that $\rho (B) = 1$ and also the group of Weil divisors modulo linear equivalence is $\zed$, simply because $\rho(X) = 2.$ Let $C \subset F$ be a smooth rational curve with $C^2 = 0$ and let $C' = h(C),$ where $h = f \vert F.$ Then $C'$ is a moving rational curve in $B$ not meeting the singularities of $B.$ Hence $H^0(\omega_B) = 0$, so that $H^2(\sO_B) = 0.$ Since anyway $H^1(\sO_B) = 0,$ the surface $B$ has only rational singularities (and in particular again $K_B$ is $\bQ$-Cartier). This is also clear from the fact that $B$ has only log-terminal singularities. \\ (3) Consider the torsion free sheaf $f_*(-K_X)$ and let $\sL$ be its reflexive hull. Then at least outside a set of codimension at least 2 in $X$ we can write $$ -K_X = f^*(\sL) + D_0$$ where $f_*(\sO_X(D_0)) = \sO_B.$ This can also be considered as an equation of $\bQ$-Cartier divisors on all of $X$, resp. as $-K_X = f^*(\sL)^{**} + D_0$ on all of $X.$ Now we have $$2 = -K_X \cdot C = (\deg h_C) (\sL \cdot C') + D_0 \cdot C.$$ Thus we are in one of the following cases: \begin{enumerate} \item $D_0 \cdot C = \deg h_C = \sL \cdot C' = 1$; \item $D_0 \cdot C = 0; \ \deg h_C = 2; \ \sL \cdot C' = 1$; \item $D_0 \cdot C = 0; \ \deg h_C = 1; \ \sL \cdot C' = 2$.
\end{enumerate} But we know that $K_X \equiv f^*(K_B+D),$ hence as a $\bQ$-divisor, $D_0 \equiv f^*(G).$ This shows that $D_0 \cdot C > 0$ unless $D_0 = 0.$ Hence in cases (2) and (3) we have $D_0 = 0.$ By (6.1.1) we have $h^0(-K_X) \geq 3,$ hence $h^0(\sL) \geq 3$. \\ \noindent \vskip .2cm Now suppose that we are in one of the first two cases. Therefore always $\sL \cdot C' = 1.$ Hence any effective Weil divisor on $B$ is a positive (integer) multiple of $\sL$, having in mind that $C'$ lies in the regular part of $B.$ Considering the exact sequence $$ H^0(\sL \otimes \sO_B(-C')) \to H^0(\sL) \to H^0(\sL \vert C') $$ and having in mind that $C' = \bP_1$ (the $C'$ are linearly equivalent, thus the general $C'$ is smooth), we conclude that $\sL = \sO_B(C')$ and $h^0(\sL) = 3.$ In particular $\sL$ is locally free, $C'^2 = 1,$ $K_B = \sO_B(-3C')$ and thus $B = \bP_2$ (use e.g. Fujita's $\Delta$-genus, [Fu90], [BS95]). \\ \vskip .2cm \noindent (4) We still have to consider the case $\sL \cdot C' = 2.$ \vskip .2cm \noindent (4a) Arguing in the same way, suppose first that $H^0(\sL \otimes \sO_B(-C')) \ne 0.$ Then we either have $\sL = \sO_B(C')$ with $C'^2 = 2$ or $\sL = \sO_B(2C')$ with $C'^2 = 1.$ In the first case $K_B$ is divisible by 2 and $B$ is a quadric cone; in the second $K_B$ is divisible by 3 and $B = \bP_2.$ We rule out the case that $B$ is a quadric cone as follows. We can write $$ h_*(\omega_{F/B}) = \sE \oplus \sO_B \; ;$$ here $\sE$ is at least a reflexive sheaf (it is not clear whether $h$ is flat; we shall not need this). To proceed we first notice that $$ H^1(h_*(\omega_{F/B}) \otimes \sO_B(-2)) \ne 0. \eqno (*)$$ On the other hand, we are going to prove $$ H^1(h_*(\omega_{F/B}) \otimes \sO(-2)) = 0.
\eqno (**)$$ This comes down to the vanishing $$ H^1(F,\omega_{F/B} \otimes h^*(\sO_B(-2))) = 0.$$ Now $-K_X = f^*(\sO_B(1))$ in our situation, hence $\omega_{F/B} = h^*(\sO_B(1)).$ Since $\omega_F = h^*(\sO_B(-1)),$ our claim therefore comes down to proving that $$H^1(F,\omega_F) = 0.$$ This is however clear by duality and $(**)$ is verified so that in case (4a) $B$ must be a plane. \\ \\ (4b) If now $H^0(\sL \otimes \sO_B(-C')) = 0,$ then $h^0(\sL) = h^0(\sL \vert C') = 3.$ So $H^0(\sL)$ defines a meromorphic map $$ g: B \rightharpoonup \bP_2$$ which is holomorphic near $C'$; moreover $g(C')$ is a conic and thus $C'^2 = 4.$ \\ Suppose that $\sO_B(C')$ generates ${\rm Pic}(B) = \zed.$ Then, having in mind that $\sL \cdot C' = 2, $ we must have $$ \sL = \sO_B({{1} \over {2}}C').$$ Hence $\sO_B(C') = g^*(\sO_{\bP_2}(2)) $ near $C'$ and since $C'^2 = 4,$ we conclude that $g$ must be generically $1:1.$ Thus $g$ is an isomorphism outside the finite set of indeterminacies. But now the linear system $\vert C' \vert $ defines an embedding into $\bP_5$ which factors via $g$ over the Veronese embedding of $\bP_2$ and therefore proves that $g$ is an isomorphism. \\ If $\sO_B(C')$ is not the ample generator $\sO_B(1)$, then necessarily $\sO_B(C') = \sO_B(2),$ and then $\sL = \sO_B(1)$, in particular $\sL$ is locally free. Now $c_1(\sL)^2 = 1$ and hence again $g$ is an isomorphism.\\ \vskip .2cm \noindent (5) We now show that $D_0 = 0$ which has to be proved only in the first case. Supposing $D_0 \ne 0,$ we have $$ -K_X = f^*(\sO_B(1)) + D_0$$ and we first consider the case that $F$ is a del Pezzo surface (different from the quadric). Then take a $(-1)$-curve $l \subset F$, and we obtain $$ 1 = -K_X \cdot l = f^*(\sO_B(1)) \cdot l + D_0 \cdot l. $$ Since $l$ and $C$ are homologous in $X$, we have $D_0 \cdot l > 0,$ contradiction.
\\ If $F = \bP_1 \times \bP_1,$ then using the equation of $\bQ$-divisors $D_0 = f^*(\sO_B(b))$ and $D_0 \vert F = (1,1)$ (since $D_0 \cdot C = 1$), we obtain $D_0 \vert F = h^*(\sO_B(1)) $, so that $b = 1.$ This is absurd.\\ \noindent \vskip .2cm (6) Finally, if $a = 2$ then $-K_F$ is divisible by 2, which is only possible if $F = \bP_1 \times \bP_1$. The last statement of the theorem is clear.
\qed
\begin{example} {\rm (1) Let $g: X \to \bP_1 \times \bP_2$ be the cyclic cover of degree 2, ramified over a smooth divisor $R$ of type $(4,2)$. Then $-K_X = p_2^*(\sO(2)),$ so that $-K_X$ is nef but not big. Moreover $p_2 $ is the nef reduction and $p_1$ is a quadric bundle. \\ (2) We modify the previous example by taking $R$ of type $(4,4)$. Then $-K_X = p_2^*(\sO_B(1))$ and $p_1$ is a del Pezzo fibration whose general fiber $F$ has $K_F^2 = 2$ (hence $F$ is $\bP_2$ blown up in 7 points). \\ (3) Let $h: X \to \bP_1 \times \bP_2$ be the cyclic cover of degree 3 ramified over a smooth divisor of type (3,3). Then $-K_X = p_2^*(\sO_B(1))$ and $p_1$ is a del Pezzo fibration with $K_F^2 = 3.$ \\ In all examples it is easily checked that indeed $b_2(X) = 2$ so that $p_1$ is the contraction of an extremal ray. } \end{example}
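The anticanonical bundles in these examples follow from the ramification formula for cyclic covers; writing $R$ for the branch divisor, in (1) we obtain $$ K_X = g^*\Bigl(K_{\bP_1 \times \bP_2} + {{1} \over {2}}R\Bigr) = g^*(\sO(-2+2,-3+1)) = g^*(\sO(0,-2)), $$ hence $-K_X = p_2^*(\sO(2)),$ and similarly in (3) $$ K_X = h^*\Bigl(K_{\bP_1 \times \bP_2} + {{2} \over {3}}R\Bigr) = h^*(\sO(0,-1)). $$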
\begin{theorem} In the setup (6.1) suppose that $\dim Y = 2.$ Let $\Delta$ be the discriminant locus of the conic bundle $\varphi: X \to Y$. Suppose that $\Delta \ne 0.$ Then \begin{enumerate} \item The nef reduction (given by $\vert -mK_X \vert$ for suitable large $m$) is equidimensional; \item $-(4K_Y+\Delta) $ is nef; \item either $f \times \varphi: X \to B \times Y$ is an embedding and $B = \bP_2,$ \item or $f \times \varphi: X \to B \times Y$ is a 2:1-covering onto its image and $B = \bP_2.$ \end{enumerate} \end{theorem}
\proof By (2.1) $-mK_X$ is spanned for suitable large $m$ and therefore defines the nef reduction. $l = l_y$, $y \in Y,$ will always denote a fiber of $\varphi$ and we set $l' = l_y' = f(l_y).$ \noindent \vskip .2cm (A) Assume that there is a 2-dimensional fiber component $S$ of $f$. Since $h^0(-K_X) > 0,$ we can write $$ -K_X = aS + D_0 + E$$ with effective divisors $D_0$ and $E$ such that $E = f^*(E')$ and $f_*(\sO_X(D_0)) = \sO_B$ and with a positive integer $a.$ In fact, choose a point $p \in S$ and a section $s \in H^0(-K_X)$ vanishing at $p;$ then notice that $K_X \vert S \equiv 0$ so that $s \vert S = 0.$ Let $a$ be the vanishing order and then consider $-K_X - aS.$ \\ Now let $l$ be a general fiber of $\varphi.$ Since $\Delta \ne 0$, we must have $S \cdot l \geq 2.$ Thus $a = 1,$ $S \cdot l = 2,$ $E \cdot l = D_0 \cdot l = 0$ so that $D_0 = \varphi^*(\tilde D)$ and $E = \varphi^*(\tilde E).$ Let $F$ be a general fiber of $f.$ Then $(E + D_0) \cdot F = 0, $ hence $(\tilde E + \tilde D) \cdot \varphi(F) = 0.$ But $\kappa ( \tilde E + \tilde D) = 2$ since $\kappa (-K_X) = 2$, contradicting the fact that $\varphi(F)$ moves in $Y.$ This proves (1).\\ Notice that as a consequence of (1) the general $l'$ does not meet any singularity of $B.$
\noindent \vskip .2cm \noindent (B) Claim (2) follows from $$ -(4K_Y+\Delta) \cdot C = K_X^2 \cdot \varphi^{-1}(C) \geq 0.$$ Note that $-(4K_Y+\Delta) \ne 0,$ since $K_X^2 \ne 0.$ \noindent \vskip .2cm \noindent (C) Approaching (3) and (4) we write as in (A) $$ -K_X = E + D_0 = f^*(E') + D_0.$$ The second equation is an equation of $\bQ$-divisors; in terms of sheaves it reads $$ \sO_X(-K_X) = f^*(\sO_B(E'))^{**} \otimes \sO_X(D_0).$$ Notice that $E' \ne 0;$ actually $h^0(\sO_B(E')) \geq 3.$ By intersecting with irreducible components of reduced conics we obtain $D_0 \cdot l = 0$ for all $l.$ Since $E \cdot l = 2,$ we must have $\deg (f \vert l) \leq 2.$ \\ Next we show that $\rho(B) \leq 2$ and actually $\rho(B) = 1$ if $(l')_{y\in Y}$ is a 2-dimensional family. In fact, if $(l')$ is 1-dimensional, then take a general curve $C \subset Y$ and set $X_C = \varphi^{-1}(C).$ Clearly $X_C$ projects onto $B.$ But all fiber components of $X_C$ are homologous in $X$ (not in $X_C$), thus $\rho(B) \leq 2.$ If $(l')$ is 2-dimensional, then choose $x_0 \in B$ general and pick an irreducible curve $C \subset Y$ such that $x_0 \in l'_y$ for all $y \in C.$ Then $X_C \to B$ contracts some curve and therefore $\rho (B) = 1.$ \\
\vskip .2cm \noindent (D) First we assume $\rho (B) = 1.$ Let $\sO_B(1)$ be the ample generator on $B.$ We write $$ \sO_B(l') = \sO_B(a) \, , \, \, \sO_B(E') = \sO_B(b) \ {\rm and} \ -K_B = \sO_B(c) $$ with a positive integer $a$ and positive rational numbers $b,c.$ Let $d = c_1(\sO_B(1))^2. $\\ \vskip .2cm \noindent (D.1) Suppose that $E' \cdot l' = 2.$ Suppose first that $b < a.$ Then $E' \cdot l' = 2$ implies $h^0(\sO_B(E')) \leq 3,$ hence $h^0(\sO_B(E')) = h^0(-K_X) = 3.$ To see this, we are going to show that for $E'$ and $l'$ general, $E' \cap l'$ is contained in the smooth locus of $l'.$ Once we know this, $h^0(\sO_B(E') \vert l') \leq 3$ by considering the normalization of $l'.$ If $\vert E' \vert $ has no fixed components, the intersection statement is clear. Otherwise we could write $E' = M + E''$ with $M$ fixed. Since $B$ is easily seen to be $\bQ$-factorial (since $\rho (B) = 1$) and since $M \cdot l' > 0,$ we conclude that $\sO_B(M) = \sO_B(E'')$ which is absurd. \\ To continue, let $$g: B \rightharpoonup \bP_2$$ be the associated rational map. Then $g$ is holomorphic near the general $l'$ since sections on $l'$ lift to $B.$ Moreover $g$ maps $l'$ biholomorphically onto a conic in $\bP_2,$ in particular $l'$ is smooth. Since two conics in $\bP_2$ meet in 4 points generally, we must have $l'^2 \leq 4,$ in other words $$ a^2 d \leq 4.$$ Thus $a = 1$ and $d \leq 4$ or $a = 2$ and $d = 1.$ Now the equation $E' \cdot l' = 2$ translates into $$ 2 = abd.$$ Putting things together, either $a = 1$ or $(a,b) = (2,1)$ and if $a = 1,$ then $d = 4$ and $b = {{1} \over {2}}.$ In this case we compute the $\Delta$-genus $$ \Delta(\sO_B(1)) = 2 + d - h^0(\sO_B(1)) = 6 - h^0(\sO_B(1)).$$ Namely, Kawamata-Viehweg and Riemann-Roch give $$ h^0(\sO_B(1)) = \chi(\sO_B(1)) = 6.$$ Hence $\Delta(\sO_B(1)) = 0$ and Fujita's classification (see e.g.
[BS95,3.1.2]) implies that there is a smooth rational curve $C \subset B$ such that $B $ is $\bP(\sO_C \oplus \sO_B(1) \vert C)$ with the zero-section blown down. Then an easy explicit calculation on $\bP(\sO_C \oplus \sO_B(1) \vert C)$ shows that the invariant $e = 4$. Let $\pi: \hat B \to B$ be the canonical desingularization, i.e. $\hat B = \bP(\sO \oplus \sO(-4))$ over $\bP_1.$ Let $C_0 $ be the negative section and $F$ a fiber of $\pi.$ Then $\pi^*(E') = \alpha C_0 + 2F$ for some $\alpha$ (pull-back as reflexive sheaf) since $\pi^*(\sO_B(1)) = C_0 + 4F.$ We also know that $f \vert l$ is biholomorphic, $f(l) $ being a smooth rational curve. Thus $f \times \varphi$ is an embedding and $X \cap (B \times y)$ is defined by $\sO_B(E').$ In other words, $l' \in \vert \sO_B(E') \vert.$ But there is no irreducible reduced member in $\vert E' \vert,$ which follows immediately by considering $\vert \alpha C_0 + 2F \vert$ for any $\alpha$, a contradiction. \vskip .2cm \noindent We turn now to the case $a = 2$ and $b = 1.$ Again we compute $$ \Delta(\sO_B(1)) = 0.$$ Arguing as before and using again $h^0(\sO_B(1)) = 3,$ we see that $e = 1$ and therefore $B = \bP_2.$\\
If $b \geq a,$ then $a = 1$ and $(b,d) = (1,2)$ or $(2,1)$. Suppose we know that $l'$ is smooth for general $l'.$ Then the adjunction formula for $l' \subset B$ gives $$ -2 = -cad + a^2d = d(1-c).$$ Thus in case $(b,d) = (1,2)$ we get $c = 2$ and $B$ is the quadric cone by the (generalized) Kobayashi-Ochiai theorem. If $(b,d) = (2,1),$ then $c = 3$ and $B = \bP_2$ by the same theorem. We exclude the case of the quadric cone: it is clear that $f \times \varphi$ is an embedding and that $X \subset B \times Y$ is defined by $\sO_B(1) \hat \otimes -K_Y.$ In particular $X$ is Cartier in $B \times Y$. But $X$ is smooth and must meet the singular locus of $B \times Y,$ contradiction. \\ It remains to check the smoothness of $l'$. If $c > a = 1,$ then $$ 0 = H^1(\sO_B) \to H^1(\sO_{l'}) \to H^2(\sO_B(-l')) = H^0(\sO_B(l') + K_B) = 0$$ proves $H^1(\sO_{l'}) = 0$ so that $l' = \bP_1.$ If $c \leq 1,$ then Riemann-Roch for $\chi(\sO_B(1))$ yields $c = 1.$ Thus $B$ is Gorenstein (with ample anticanonical class). Then it is well-known that $-K_B$ is spanned (e.g. by classification). Hence $\sO_B(l')$ is spanned and $l'$ is smooth (of course this immediately contradicts $l' \in \vert -K_B \vert$). \\ \vskip .2cm \noindent (D.2) If $E' \cdot l' = 1,$ then from $h^0(\sO_B(l')) \geq 3,$ we obtain $h^0(\sO_B(E'-l')) \ne 0.$ Unless $E' = l'$ we can write $E' = l' + R$ which clearly contradicts $\rho (B) = 1.$ Now we have $E' = l'$ which means $b = a.$ Moreover $E'\cdot l' = 1$ translates into $ 1 = abd,$ hence $a = b = d = 1.$ Computing $\chi(\sO_B(1))$ we get $c = 1$ or $3$, the case $c = 1$ being excluded as in (D.1). Hence $c = 3$ and $B = \bP_2.$
\vskip .2cm \noindent (E) Finally we treat the case $\rho (B) = 2.$ Then the family $(l')$ is 1-dimensional.
\vskip .2cm \noindent (E.1) Case $\deg f \vert l = 1$: \\ Let $l'$ be general and choose $b \in l'$ such that $f^{-1}(b)$ is elliptic. Let $$C = \varphi(f^{-1}(l')).$$ Notice that $\dim C = 1$ by our assumption that the family $(l')$ is 1-dimensional. By the degree assumption, $C$ is a (possibly singular) elliptic curve. Also, every fiber of $f$ over $l'$ must be elliptic since it is mapped onto $C.$ Let $$X_C = \varphi^{-1}(C).$$ Then both $\varphi({\rm Sing}(X_C))$ and $f({\rm Sing}(X_C))$ are finite. Hence ${\rm Sing}(X_C) $ is finite and thus $X_C$ is normal. Thus $C$ and $l'$ are smooth and $X_C \simeq C \times l'.$ Therefore we are in one of the two following situations. \noindent \vskip .2cm (I) $f: X \to B$ is an elliptic fiber bundle outside a finite set in $B.$ \noindent \vskip .2cm (II) There are finitely many $l_1', \ldots , l_k'$ such that all $f$-fibers over every $l'_i$ are singular and moreover every $l'$ different from $l'_1, \ldots , l'_k$ is disjoint from $\bigcup l_j'.$ \noindent \vskip .2cm In case (I) we have $R^1f_*(\sO_X) = \sO_B$ in codimension 1, whence $q(X) > 0$ as $f$ is equidimensional, contradicting the rational connectedness of $X.$ \\ In case (II) we consider the graph of the family $(l')$ and deduce immediately the existence of a map $g: B \to T = \bP_1$ contracting all $l'.$ Since all fibers of $\varphi$ are contracted by $X \to T,$ there is a map $h: Y \to T$ such that $$ g \circ f = h \circ \varphi.$$ Since $B$ is $\bQ$-factorial and $\rho (B) = 2,$ all fibers of $g$ must be irreducible. Also, no fiber can be multiple, otherwise by base change $\varphi$ would have multiple fibers in codimension 1. Thus all fibers of $g$ are irreducible reduced and therefore $\Delta = \emptyset, $ contradiction.
\vskip .2cm \noindent (E.2) Case $\deg (f \vert l) = 2$: \\ Then $E' \cdot l' = 1$ and the $l'$ form a 1-dimensional family. Let $l'$ be general, so that $l'$ is contained in the regular part of $B.$ Consider $X_l = f^*(l').$ Then $\dim \varphi (X_l) = 1$ and $N_{l/X_l} = \sO_l.$ Now consider the exact sequence $$ 0 \to f^*(N^*_{l'/B}) \to N^*_{l/X} \to N^*_{l/X_l} \to 0.$$ Since $N^*_{l/X} = \sO_l \oplus \sO_l,$ we conclude that $N^*_{l'/B} = \sO_{l'},$ in particular $(l')^2 = 0.$ Since $-K_B-D$ is ample, $K_B \cdot l' < 0.$ Hence we conclude $l' \simeq \bP_1$. Thus $H^0(\sO_B(l'))$ defines a holomorphic map $g: B \to C \simeq \bP_1$ contracting $l'.$ The general fiber $F$ of the induced map $X \to \bP_1 = C$ has $-K$ nef (and not ample) and is therefore either $\bP_1 \times {\rm elliptic} $ or $\bP_2$ blown up in 9 points. In any case $g $ induces a map $h: Y \to C $ such that $h \circ \varphi = g \circ f.$ Now $g$ has irreducible fibers due to $\rho (B) = 2$ and also $g$ cannot have multiple fibers as in (E.1 (II)). Hence $g$ is a $\bP_1$-bundle. If we write $B = \bP(V) \to C$, then $X = \bP(h^*(V)) \to Y,$ and thus $\varphi$ is not a proper conic bundle.
\vskip .2cm \noindent (F) Finally we observe that in case $\deg f \times \varphi = 1$, this map is an embedding by Zariski's Main Theorem. In case of degree 2, we already saw that $B = \bP_2$. Also it follows that $f \vert l$ is a degree $2$ covering for all smooth $l$, resp. an isomorphism on all components of singular conics. Therefore $f \times \varphi$ is a {\it covering} of degree 2.
\qed
\begin{example} {\rm (1) Let $X \subset \bP_2 \times \bP_2$ be a smooth divisor of type $(3,2)$. Then $-K_X$ is nef, and one projection defines an elliptic fibration while the other is a conic bundle. It is not difficult to see directly that the discriminant locus $\Delta$ is always non-empty.\\ (2) If we take in (1) $X$ of type $(3,1)$, then instead of a conic bundle we have a $ \bP_1$-bundle coming from a vector bundle, a situation we study next. Of course we can take $X$ more generally as a smooth hypersurface in $Y \times \bP_2$ of type $(-K_Y,1)$, where the requirement on $Y$ is just that $\vert -K_Y \hat \otimes \sO(2) \vert$ contains a smooth member. \\ (3) To get an example with a $2:1$-covering, let $Z = \bP(T_{\bP_2})$ with projection $p: Z \to \bP_2$ (i.e. $Z \subset \bP_2 \times \bP_2$ has degree $(1,1)$). Let $G = \sO_Z(1) \otimes p^*(\sO(1));$ then $G$ defines the second contraction $q: Z \to \bP_2$ of the Fano manifold $Z \subset \bP_2 \times \bP_2.$ Now take a smooth element $R \in \vert 4G \otimes p^*(\sO(2)) \vert$. Let $h: X \to Z$ be the 2:1-covering ramified over $R.$ Then $K_X = h^*p^*(\sO(-1)),$ so that $-K_X$ is nef with an elliptic fibration $X \to \bP_2.$ The map $q \circ h$ defines a conic bundle structure over $\bP_2.$ } \end{example}
\begin{theorem} In (6.1) suppose that $\dim Y = 2$ and that $\varphi: X \to Y$ has discriminant locus $\Delta = \emptyset. $ Then $\varphi$ is a $\bP_1$-bundle and of the form $X = \bP(E)$ with a rank 2 vector bundle $E$ over $Y$. In particular $-K_Y$ is nef. Furthermore: \begin{enumerate} \item $B = \bP_2, \bP_1 \times \bP_1$ or $\bP_2$ blown up in one point. \item If $B = \bP_2$, then $X \subset \bP_2 \times Y$ is given by $\sO(1) \hat \otimes (-K_Y)$ and $K_Y^2 > 0.$ \item If $B = \bP_1 \times \bP_1$ or $\bP_2$ blown up in one point, then $Y$ is $\bP_2$ blown up in 9 points in such a way that $Y$ carries an elliptic fibration $g: Y \to D = \bP_1$ and such that $E = g^*(\sO(a) \oplus \sO)$ with $a = 0,1.$ \end{enumerate} \end{theorem}
\proof Since $Y$ is a smooth rational surface, $H^2(Y,\sO_Y^*) = H^3(Y,\zed)$ is torsion free; hence every analytic $\bP_1$-bundle over $Y$ is of the form $\bP(V).$ Then $K_X^3 = 0$ translates into $$ 3K_Y^2 = 4c_2(V) - c_1^2(V). \eqno (*)$$ \vskip .2cm \noindent (A) First we prove that $f$ is equidimensional. So suppose to the contrary that $S$ is a 2-dimensional fiber component. Then $S$ must be a section of $\varphi$: consider a general curve $C \subset Y$ and let $X_C = \varphi^{-1}(C).$ Then $f \vert X_C$ is generically finite; on the other hand $S$ projects onto $Y$, so $S \cap X_C$ is a curve, i.e. $X_C$ contains a contractible curve, which is necessarily a section. Hence $S$ is a section of $\varphi$. (Of course $S \to Y$ is finite!) \\ This section corresponds to an exact sequence (after a suitable twist of $V$) $$ 0 \to \sO_Y \to V \to L \to 0 \eqno (S_1) $$ such that $S = \bP(L).$ Notice that $h^0(L) = 0,$ since $S$ does not move, and $\sO_X(S) = \sO_{\bP(V)}(1).$ Since $f$ contracts the surface $S$, we have $K_X \cdot S = 0,$ in particular $$K_X \cdot S \cdot \varphi^*(H) = 0$$ for all ample divisors $H$ on $Y.$ This comes down to $$L \cdot H - K_Y \cdot H = 0,$$ hence $L = K_Y.$ Putting this into $(*)$, it follows that $3 L^2 = - L^2,$ hence $L^2 = 0 = K_Y^2.$ In particular $Y$ is $\bP_2$ blown up in 9 points in almost general position. \\ Suppose first that $(S_1)$ splits. Since $-mK_X$ is spanned for suitable $m,$ we easily see that $-mK_Y$ is spanned and therefore $Y$ has an elliptic fibration. This elliptic fibration induces the elliptic fibration on $X$ and it is clear that $f$ cannot have a 2-dimensional fiber. It remains to rule out the case that $Y$ does not carry an elliptic fibration and at the same time $(S_1)$ does not split. Then however, taking symmetric powers of $(S_1),$ $h^0(S^m(V \otimes -K_Y))$ can grow at most linearly, contradicting the spannedness of $-mK_X$ and $K_X^2 \ne 0.$ Thus $f$ does not have a 2-dimensional fiber. 
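\vskip .2cm \noindent For the reader's convenience we spell out how $K_X^3 = 0$ translates into $(*).$ Write $\xi = \sO_{\bP(V)}(1),$ so that $K_X = -2\xi + \varphi^*(K_Y + c_1(V)),$ and recall the Grothendieck relation $\xi^2 = \varphi^*(c_1(V)) \cdot \xi - \varphi^*(c_2(V)).$ Setting $L_0 = K_Y + c_1(V)$ and using $\xi^3 = c_1^2(V) - c_2(V),$ $\xi^2 \cdot \varphi^*(L_0) = c_1(V) \cdot L_0$ and $\xi \cdot \varphi^*(L_0^2) = L_0^2$ (the term $\varphi^*(L_0^3)$ vanishes since $Y$ is a surface), we obtain $$ K_X^3 = -8\xi^3 + 12\, \xi^2 \cdot \varphi^*(L_0) - 6\, \xi \cdot \varphi^*(L_0^2) = -8(c_1^2(V) - c_2(V)) + 12\, c_1(V) \cdot L_0 - 6 L_0^2 = 8c_2(V) - 2c_1^2(V) - 6K_Y^2,$$ so that $K_X^3 = 0$ is exactly $(*).$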
\vskip .2cm \noindent We consider the ruling family $(l_y)_{y \in Y}$ in $X.$ As in (6.4) the image family $( l'_y) = (f(l_y))$ in $B$ is either 2- or 1-dimensional. Using the decomposition $-K_X = E + D_0$ as in (6.4) and the obvious fact that $E \cdot l_y > 0,$ the degree $\deg (f \vert l_y)$ is 1 or 2. Also we have $\rho (B) \leq 2$ and $\rho(B) = 1$ if $(l')$ is 2-dimensional.
\vskip .2cm \noindent (B) Case $\rho (B) = 1$: \\ Then we can argue exactly as in (6.4) and conclude that $B = \bP_2$. Moreover $f \times \varphi$ is an embedding or a degree 2 covering over its image in $B \times Y.$ \\ First suppose that $f \times \varphi$ is an embedding. Then $X \in \vert \sO(1) \hat \otimes H \vert$ for some line bundle $H$ on $Y$. $X$ cannot be of degree $2$ in $\bP_2$ because then $\varphi$ would be a conic bundle with $\Delta = \emptyset;$ on the other hand the reducible conics in $\bP_2$ have codimension 1 in the parameter space $\bP_5$ of all conics, so we must have $X = l \times Y$ with a fixed conic $l$. Then however $Y$ must have an elliptic fibration $g: Y \to C$ and $B = l \times C$ contradicting $\rho (B) = 1.$ \\ So $X$ is linearly embedded in the trivial $\bP_2$-bundle over $Y.$ Therefore after a suitable twist $V$ is a quotient of the trivial rank 3-bundle, so that there is an exact sequence $$ 0 \to M \to \sO_Y^3 \to V \to 0 \eqno (S') $$ with a line bundle $M$ on $Y$. Notice $M = \det V^*.$
Computing $K_X$ by the adjunction formula for $X \subset B \times Y$ and via the $\bP_1$-bundle structure shows by comparison that $H = \det V.$ Then $(S')$ gives $c_1^2(V) = c_2(V).$ Let $l_b = \varphi(f^{-1}(b)),$ an elliptic curve for general $b \in B.$ We conclude that $l_b \in \vert \det V \vert.$ This also shows $h^0(V) = 3$ thanks to $h^1(\det V^*) = 0.$ Now the adjunction formula yields $$ 0 = K_Y \cdot l_b + c_1^2(V),$$ hence $c_1^2(V) = - K_Y \cdot \det V.$ On the other hand, Riemann-Roch gives $$ \chi (V) = -{{1}\over {2}} c_1^2(V) - {{1} \over {2}} c_1(V) \cdot K_Y + 2 = 2,$$ and $\chi (V) = h^0(V) - h^1(V).$ Thus $h^1(V) = 1$ and therefore $h^2(\det V^*) = h^0(\det V + K_Y) = 1.$ Take $0 \ne D \in \vert K_Y + \det V \vert.$ \\ First we assume $K_Y^2 > 0.$ Then from $D \cdot (-K_Y) = 0$ we deduce $$ D = \sum a_i C_i$$ with $a_i \geq 0$ and $C_i$ some $(-2)$-curves. Therefore $$ \det V = -K_Y + \sum a_i C_i.$$ Since $\det V $ is nef, this is only possible if all $a_i = 0.$ Hence $\det V = -K_Y$ if $K_Y^2 > 0.$ \\ Now suppose $K_Y^2 = 0.$ So $c_1(V)^2 = c_2(V) = K_Y^2 = 0,$ and $Y$ has an elliptic fibration $g: Y \to C;$ moreover $\det V = - a K_Y.$ Using (S') and taking into account possible multiple fibers we find a rank 2-bundle $V'$ over $C$ and a line bundle $L$ over $Y$ such that $V = g^*(V') \otimes L.$ In particular we conclude that $B = \bP(V'),$ a contradiction to our assumption that $\rho (B) = 1.$ \\ If $f \times \varphi$ has degree $2,$ then consider its image $X' \subset B \times Y.$ Consider the ramification divisor $R.$ Then $R \cdot p_Y^{-1}(y) = 2,$ i.e.\ $R \to Y$ is a degree 2 covering and it must be ramified since $Y$ is simply connected. Over the ramification points in $Y$ we therefore have reducible conics. Contradiction.
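\vskip .2cm \noindent We briefly spell out the Riemann-Roch computation used in (B): for the rank 2 bundle $V$ on the rational surface $Y$ we have $\chi(\sO_Y) = 1,$ hence $$ \chi(V) = 2\chi(\sO_Y) + {{1} \over {2}}\, c_1(V) \cdot (c_1(V) - K_Y) - c_2(V) = 2 + {{1} \over {2}}\, c_1^2(V) - {{1} \over {2}}\, c_1(V) \cdot K_Y - c_2(V),$$ and inserting $c_2(V) = c_1^2(V)$ gives the formula $\chi(V) = -{{1} \over {2}} c_1^2(V) - {{1} \over {2}} c_1(V) \cdot K_Y + 2$ stated above, which equals 2 by $c_1^2(V) = -K_Y \cdot \det V.$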
\noindent \vskip .2cm \noindent (C) Case $\rho (B) = 2$: \\ Thus $(l')$ is 1-dimensional. Arguing as in (E.1) of the proof of (6.4), we obtain a map $g: B \to C = \bP_1$ and a map $h: Y \to C$ such that $ g \circ f = h \circ \varphi.$ Now we argue as in (B) above to get $B = \bP(V').$ The nefness of $-K_X$ is equivalent to the nefness of $$ g^*(V' \otimes {{\det V'^*} \over {2}}) \otimes {{-K_B } \over {2}}.$$ This means that, up to normalization, $V' = \sO \oplus \sO $ resp. $V' = \sO \oplus \sO(1),$ so that $B = \bP_1 \times \bP_1$ resp. $\bP_2(x),$ the blow-up of $\bP_2$ in a point.
\qed
\begin{proposition} In (6.1) suppose that $\dim Y = 3.$ Let $E$ denote the exceptional divisor and suppose that $\dim \varphi (E) = 0.$ Then $-K_Y$ is big and nef and we are in one of the following situations. \begin{enumerate} \item $E = \bP_2$ with normal bundle $N_E = \sO(-1);$ $Y$ is smooth with $(-K_Y)^3 = 8;$ $B = \bP_2$ with $\deg f_E = 1 $ or $ 4.$ \item $E = \bP_2$ with normal bundle $N_E = \sO(-2);$ $(-K_Y)^3 = {{1} \over {2}};$ $B = \bP_2$ with $f_E$ an isomorphism, i.e. $E$ is a section of $f.$ \item $E = \bP_1 \times \bP_1$ with $N_E = \sO(-1,-1);$ $B = \bP_1 \times \bP_1$ or $\bP_2$ and $f_E$ is an isomorphism resp. $\deg f_E = 2.$ \item $E = Q_0$, the quadric cone, with $N_E = \sO(-1);$ $B = Q_0$ or $\bP_2$ with $f_E$ an isomorphism resp. $\deg f_E = 2.$ \end{enumerate} \end{proposition}
\proof It is clear that $-K_Y$ is nef and the classification of $(E,N_E)$ is provided by [Mo82]; from this information also the computation of $(-K_Y)^3$ is clear. It remains to determine $B$ and $\deg f_E$. It is clear that $E$ maps onto $B$ and also that $f_E$ is finite. In particular any 2-dimensional fiber component of $f$ is disjoint from $E.$ Again we write in codimension 1: $$ -K_X = f^*(\sL) + D \eqno (*)$$ where $D$ is the contribution from the multiple fibers. \vskip .2cm \noindent (1) Here $-K_X \vert E = \sO(2).$ Restricting to a general line $l \subset E$ we obtain either $\deg f \vert l = 1$, $\sL \cdot l' = 1$ and $D \cdot l = 1$ or $\deg f \vert l = 1$, $\sL \cdot l' = 2$, $D = 0$ or $\deg f \vert l = 2$, $\sL \cdot l' = 1$ and $D = 0.$ Here $l' = f(l).$ \\ Suppose first that $\deg f \vert l = 1$. By Bertini $f^{-1}(l')$ must be irreducible for general $l'$, hence $\deg f_E = 1$ and $B = \bP_2.$ Now $f$ cannot have multiple fibers and therefore $D = 0,$ ruling out the first case. 
\\ So $\deg f \vert l = 2$ and $\sL \cdot l' = 1.$ Then $\sO_B(l') \in {\rm Pic}(B) = \zed$ and the $\bQ$-line bundle $\sL$ can be written as $\sL = \sO_B(\mu l')$ with a positive rational number $\mu.$ Since we know by (6.1.1) that $h^0(\sL) \geq 3,$ the exact sequence $$ H^0(B,\sL \otimes \sO_B(-l')) \to H^0(B,\sL) \to H^0(\sL \vert l')$$ together with $\sL \cdot l' = 1$ shows that $\mu \geq 1.$ Then $1 = \sL \cdot l' = \mu l'^2,$ hence $\mu = 1.$ So $\sL = \sO_B(l')$ and the three sections of $\sL$ give an isomorphism $B \to \bP_2.$ Squaring $(*)$ we see that $\deg f_E = 4.$ \vskip .2cm \noindent(2) Here $-K_X \vert E = \sO(1),$ so that $D = 0$ and $\deg f \vert l = 1.$ Again we conclude $\deg f_E = 1$, in particular $B = \bP_2.$ \vskip .2cm \noindent (3) Since $-K_X \vert E = \sO(1,1)$ and $h^0(\sL) \geq 3,$ we have $D = 0.$ Restricting to a ruling line $l$, we obtain $\deg f \vert l = 1$ and $\sL \cdot l' = 1.$ Distinguishing the cases $\rho (B) = 1$ and $2$ and using both ruling families, similar calculations as in (1) lead to $B = \bP_2$ resp. $\bP_1 \times \bP_1.$ In the first case $\deg f_E = 2,$ in the second $f_E$ is an isomorphism. \vskip .2cm \noindent (4) The case of a quadric cone is similar. \qed
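\vskip .2cm \noindent We make the squaring argument in (1) explicit: restricting $(*)$ (with $D = 0$) to $E = \bP_2$ gives $\sO_E(2) = f_E^*(\sO_{\bP_2}(1)),$ and squaring yields $$ 4 = \sO_E(2)^2 = (f_E^*\sO_{\bP_2}(1))^2 = \deg f_E \cdot (l')^2 = \deg f_E,$$ since $l'$ is a line in $B = \bP_2.$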
\begin{proposition} In (6.1) suppose that $\dim Y = 3;$ let $E$ be the exceptional divisor and suppose that $\dim \varphi (E) = 1;$ i.e. $Y$ is smooth and $\varphi$ is the blow-up of a smooth curve $C.$ Then we are in one of the following 2 cases: \begin{enumerate} \item $\dim f(E) \leq 1$. Then $-K_Y$ is nef with $n(-K_Y) = 2$ unless $\varphi$ is of type $(-2,-2);$ if, in case $-K_Y$ is nef, $g: Y \to B'$ denotes the nef reduction of $Y,$ then there exists a birational map $\tau: B \to B'$ such that $\tau \circ f = g \circ \varphi $ and $C$ is a smooth (elliptic) fiber of $g.$ \item $\dim f(E) = 2$. Then $B$ is a rational ruled surface and there exists another contraction $\psi: X \to Y'$ such that either $\dim Y' \leq 2$ or $\dim Y' = 3$ but the exceptional divisor $E$ does not project onto $B.$ \end{enumerate} \end{proposition}
\proof (1) Suppose first that $\dim f(E) \leq 1.$ Let $A' = f(E)$ and let $A$ be the image of the Stein factorization of $E \to A';$ in particular $A$ is a smooth rational curve. Then $E$ admits two different projections, hence $E \simeq C \times A$ (at least after a finite \'etale cover, which we will ignore for simplicity of notation). We write $E = \bP(N^*),$ where $N^*$ is the conormal bundle of $C \subset Y.$ Then $N$ decomposes: $N = L \oplus L$ with a line bundle $L$ on $C.$ Let $g$ be the genus of $C$ and let $C_0 = C \times a $ for some $a \in A.$ Then by the standard theory of ruled surfaces and adjunction we can write $$ -K_X \vert E = C_0 + (2-2g + \deg L) F,$$ with $F$ a fiber of $\varphi \vert E.$ Since $-K_X$ is nef, we conclude that $\deg L \geq 2g-2.$ Now $C_0$ is contracted by $f$ so that $-K_X \cdot C_0 = 0.$ This is translated into $\deg L = 2g-2.$ Since $C_0$ is contained in a fiber of $f$, $L$ cannot be ample and thus we have $g \leq 1.$ If $g = 0$ then $L = \sO(-2).$ This is the $(-2,-2)$ case and $-K_Y$ is nef except on $ C.$ Thus we suppose that $g = 1.$ Then $K_Y \cdot C = 0$, so that $-K_Y$ is nef and $K_Y^3 = 0 $. Clearly $n(-K_Y) = 2;$ let $g: Y \to B'$ be the nef reduction. Now $f \vert E $ is just the projection onto $A = A' = \bP_1$, $-K_X$ being nef. Moreover $N_{A/B} $ is negative and $B'$ is just the blow-down of $A \subset B.$ Thus $g$ contracts $C$ and $C$ is just a fiber of $g.$ \\ (2) Let $l$ be a general fiber of $\varphi \vert E.$ As in the proof of (6.4)(1) we see that $E$ is disjoint from all potential 2-dimensional fiber components. Now use the decomposition $-K_X = f^*(\sL) + D$ in codimension 1 and intersect with $l$ to obtain $D = 0$, $\deg f \vert l = 1$ and $\sL \cdot l' = 1$ for $l' = f(l).$ \\ If $\rho (B) = 1,$ then $l'^2 > 0$ so that $f_E^*(l') $ is ample. But $l \subset f^*_E(l')$, thus Bertini gives a contradiction. 
\\ So $\rho (B) = 2.$ Then $l'^2 = 0$ and the linear system $\vert l' \vert $ gives a morphism $\tau: B \to T$ to a smooth curve $T$ such that $\tau \circ f_E = h \circ \varphi_E $ with some covering $h: C \to T.$ Since $\tau$ is flat with all fibers irreducible and reduced, it must be a $\bP_1$-bundle. So $B$ is ruled and of course rational. Let $p: X \to \bP_1$ be some projection, $p = q \circ f$ with $q: B \to \bP_1$ a $\bP_1$-bundle structure. Then $K_X$ is not $p$-nef, hence there exists a relative contraction $\psi: X \to Y'$. This is the contraction we are looking for. \qed
\begin{theorem} Smooth rationally connected threefolds subject to the following conditions are bounded modulo boundedness of threefolds $Y$ with $-K_Y$ nef and $n(-K_Y) = 3:$ \begin{itemize} \item $-K_X$ is nef \item $X$ has a contraction not of type $(-2,-2)$ \item $n(-K_X) = 2.$ \end{itemize} \end{theorem}
\proof Let $\varphi: X \to Y$ be a contraction. If $\dim Y = 1,$ we have boundedness by (6.2). In case $\dim Y = 2$ and the discriminant locus $\Delta \ne \emptyset,$ (6.4) clearly gives boundedness once we can bound $Y.$ But $-(4K_Y + \Delta) $ is nef, and therefore $Y$ is bounded. If $\Delta = \emptyset, $ then boundedness follows from (6.6). So we are reduced to the case that $\varphi$ is birational. By assumption, $\varphi$ is not of type $(-2,-2).$ If $\varphi$ is a $(-1,-2)$-contraction, then by [DPS93, 3.5/3.6/3.7] $X$ is canonically isomorphic to a $\bQ$-Fano threefold with a fixed type of singularities, hence $X$ is bounded. So we may assume that $-K_Y$ is nef. If the exceptional divisor $E$ contracts to a point, then (6.7) applies and we conclude by assumption. Finally let $\dim \varphi(E) = 1.$ If $\dim f(E) = 2,$ then by (6.8) we can switch to another contraction of a different type and work with that one. If $\dim f(E) \leq 1$ and if $\varphi$ is not of type $(-2,-2)$, then we conclude by induction on $\rho (X).$ \qed
\section{Rationally connected threefolds III}
\begin{setup} {\rm The last case to deal with is $n(-K_X) = 3$ for a rationally connected threefold $X$ with $-K_X$ nef. Hence there is no covering family $(C_t)$ such that $K_X \cdot C_t = 0.$ Of course this is the case when $(-K_X)^3 > 0,$ i.e. $-K_X$ is big and nef. Then $-mK_X$ is generated for suitably large $m$ and the curves $C$ with $-K_X \cdot C = 0$ are just those which are contracted by the morphism associated with $\vert -mK_X \vert.$ So we shall assume $K_X^3 = 0.$ From the proof of theorem 2.1 we know that in this case $K_X^2 \ne 0$, i.~e.\ $\nu(-K_X)=2$. } \end{setup}
\subsection{The structure of the anticanonical system}
\begin{proposition} \label{prop72} Let $X$ be a smooth projective rationally connected threefold with $-K_X$ nef, $n(-K_X)=3$ and $\nu(-K_X)=2$. Then
the anticanonical system $|- K_X|$ has non-empty fixed part
$A$. Its movable part induces a fibration $f: X \to \bP_1$. If $F$ is a fiber of $f$ then $|-K_X| = A + |kF|$ with $k \geq 2$. Furthermore $A^3 = A^2 \cdot F = 0$. \end{proposition}
\proof We know $h^0(-K_X) \geq 3$ by Riemann-Roch and Kawamata-Viehweg
vanishing. Assume that $|-K_X|$ is without fixed part. Then for two general members
$D,D' \in |-K_X|$ we have $D \cdot D' = C$ with an effective curve $C$ on $D$ and $-K_X \cdot C = 0$ as $K_X^3 = 0$. This also implies that $-K_X \cdot C' = 0$ for any component $C'$ of $C$. Now we assumed $n(-K_X)=3$ and therefore the $-K$-trivial curves do not cover $X$. Hence, as $D$ moves, we conclude that no component of $C$ moves on $D$.\\ An easy calculation using $q(X)=0$ shows that $h^0(\sO_D(C)) \geq 2$ which implies $h^0(\sO_D) \geq 2$ and $D$ has at least two connected components $D=P+Q$. Let $H \subset X$ be very ample and let $D_H, P_H, Q_H$ denote the restrictions to $H$. First we note that $P_H$ and $Q_H$ are nef divisors on $H$: If $C$ is a curve in $H$ which is contained in $P_H$ then $Q_H \cdot C = 0$ as $P_H$ and $Q_H$ do not meet and therefore $P_H \cdot C = D_H \cdot C \geq 0$. Now $D_H^2 > 0$ as $K_X^2 \neq 0$ and we may assume that $P_H^2 >0$. As $Q_H$ is orthogonal to $P_H$ and nef the Hodge index theorem implies that $P_H$ and $Q_H$ are proportional. In particular $P_H^2 = 0$ as $P_H \cdot Q_H = 0$ which gives the desired contradiction.\\
Now write $|-K_X| = A + |B|$ with non-empty fixed part $A$ and movable part $B$ and $h^0(B) = h^0(-K_X) \geq 3$.\\ As $-K_X$ is nef we know that $K_X^2 \cdot A \geq 0$ and $K_X^2 \cdot B \geq 0$. Now $0 = -K_X^3 = K_X^2 \cdot (A + B)$ gives $K_X^2 \cdot A = K_X^2 \cdot B = 0$. From this we further conclude that $-K_X \cdot (A \cdot B + B^2)=0$. As $B$ moves, $A \cdot B$ and $B^2$ are effective cycles which implies $-K_X \cdot A \cdot B = -K_X \cdot B^2 =0$. From the last equation and $-K_X^2 \cdot A = 0$ we finally deduce that $-K_X \cdot A^2=0$.\\ In the next step $-K_X \cdot B^2 = 0$ in combination with $n(-K_X)=3$ gives the further structure of $B$. As $B$ has obviously no fixed part we may repeat the argument in the first part of the proof with $B$ playing the role of $D$ to prove that $B$ has at least two connected components. Now $B_H$ has no fixed part and is therefore nef. So following the proof above a few lines more shows that $B^2 \neq 0$
leads to a contradiction. Hence $B^2 = B_H^2 =0$ and as $|B_H|$ has at most finitely many base points, some multiple of $B_H$ defines a morphism sending $H$ to a curve. In particular every connected component of $B_H$ (being nef) is equivalent to a rational multiple of a general fiber of this map so that two connected components of $B$ are equivalent up to a rational factor. As $B$ has at least two such components this implies that some multiple of $B$ is generated by its global sections.\\ Let $f: X \to \bP_1$ be the Stein factorization of the morphism defined by
$|mB|$ for sufficiently large $m$ and let $F$ be a fiber. Then $B$ is
equivalent to a rational multiple of $F$. As $|B|$ has no fixed part we can actually conclude that $B$ consists of $k$ fibers with $k$ an integer and as $B$ has at least two connected components $k \geq 2$. Finally $-K_X \cdot A^2 = 0$ gives $A^3 + A^2 \cdot B = 0$ and $K_X^3=0$ gives $A^3 + 3 A^2 \cdot B=0$ (as $B^2=0$) which together imply that $A^2 \cdot B = A^3 = 0$. \qed
\begin{corollary} In this situation $\rho(X) \geq 3$. \end{corollary}
\proof Assume that $\rho(X) = 2$ and let $H$ be an ample divisor. As $A$ and $F$ are not proportional we can write $H = \alpha A + \beta F$ with some rational numbers $\alpha, \beta$. As $A^3 = A^2 \cdot F = F^2 = 0$ we calculate $H^3 = 0$ which is impossible. \qed
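\vskip .2cm \noindent Explicitly, $$ H^3 = \alpha^3 A^3 + 3\alpha^2 \beta \, A^2 \cdot F + 3\alpha \beta^2 \, A \cdot F^2 + \beta^3 F^3 = 0,$$ every term vanishing because of $A^3 = A^2 \cdot F = F^2 = 0.$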
Next we study the fibration defined by $-K_X$.
\begin{lemma} Let $F$ be a general fiber of $f$. Then $F$ is a smooth surface with $-K_F$ nef and effective, $K_F^2=0$ and $n(-K_F)=2$. \end{lemma}
\proof Since $\sO_F(F)$ is trivial, the adjunction formula shows that $-K_F = -K_{X|F} = A_{|F}$. As $A^2 \cdot F = 0$ we have $K_F^2=0$. Furthermore $n(-K_F)=2$, since otherwise $F$ and therefore $X$ would be covered by $K_X$-trivial curves. \qed
Now we consider the different cases corresponding to the type of a general fiber $F$. There are three different cases: \begin{enumerate} \item $F$ is $\bP_2$ blown up in 9 (not necessarily distinct) points without elliptic fibration and the anticanonical divisor is a smooth elliptic curve. \item As before, but this time the anticanonical divisor consists of rational curves. \item $F$ is a $\bP_1$-bundle over an elliptic curve (with $n(-K_F) = 2)$. \end{enumerate}
\begin{proposition}
In the setting of proposition \ref{prop72} write $|-K_X| = A_1 + A' + |kF|$ ($k\geq2$)
where $A_1, A'$ are effective divisors with ${A_1}_{|F} = -K_F$ resp.\ $A'_{|F} = 0$ for a general fiber $F$. Furthermore, let $\sL = \sO_X(-A'-kF) = \sO_X(K_X+A_1)$. Then: \begin{enumerate} \item $h^0(\sO_{A_1})= 1 + h^1(R^1 f_* \sL)$ and $h^2(\sO_{A_1})=0$ \item $h^1(\sO_{A_1})= h^0(\sO_{\bP_1}(l+k-2)) + h^0(R^1 f_* \sL) \geq 1$ where the nonnegative number $l$ is defined via $f_* (\sO(-A')) = \sO_{\bP_1}(-l)$. In particular, $h^1(\sO_{A_1})= 1$ iff $k=2$, $A'=0$ and $h^0(R^1 f_* \sL)=0$. \end{enumerate} \end{proposition}
\begin{remark} If $F$ is rational, $R^1 f_* \sL$ is torsion. So in this case $h^0(R^1 f_* \sL)=0$ iff $R^1 f_* \sL=0$. \end{remark}
\proof Everything follows from direct computations: As $X$ is rationally connected $H^i(\sO_X)=0$ for $i>0$. The sequence $$ 0 \to \sO_X(-A_1) \to \sO_X \to \sO_{A_1} \to 0 $$ then implies that $$ h^0(\sO_{A_1}) = 1 + h^1(\sO_X(-A_1))$$ $$ h^1(\sO_{A_1}) = h^2(\sO_X(-A_1))$$ $$ h^2(\sO_{A_1}) = h^3(\sO_X(-A_1))$$ By duality, $h^i(\sO_X(-A_1)) = h^{3-i}(\sO_X(-A'-kF)) = h^{3-i}(\sL)$, in particular
$h^2(\sO_{A_1}) = h^0(-A'-kF) = 0$. The sheaf $f_* \sL$ is torsion free hence locally free and of rank one as $\sL_{|F}=\sO_F$. In fact $f_* \sL = \sO_{\bP_1}(-l-k)$ where $l$ is the number of fibers containing components of $A'$. For the further calculations we use the Leray spectral sequence which collapses at the $E_2$ level as the base is a curve. So $$ h^1(\sO_{A_1}) = h^1(X,\sL) = h^0(R^1 f_* \sL) + h^1 (f_* \sL)$$ and $h^1(f_* \sL) = h^1(\sO_{\bP_1}(-l-k)) = h^0(\sO_{\bP_1}(l+k-2))$ which gives the claim. Finally $$ h^0(\sO_{A_1}) = 1 + h^2(X,\sL) = 1 + h^0(R^2 f_* \sL) + h^1 (R^1 f_* \sL)$$
and $R^2 f_* \sL = 0$ as $h^2(\sL_{|F}) = h^2(\sO_F) = h^0(-K_F) = 0$ for every fiber $F$, again by duality. \qed
{\it For the rest of the paper we will concentrate on the case where a general fiber
$F$ is $\bP_2$ blown up in 9 points such that $|-K_F|$ consists of a single smooth elliptic curve $C$.}
As $-K_X$ is nef and not numerically trivial we have $\kappa(X) = -\infty$. In particular $K_X$ is not nef and we can study $X$ using some Mori contraction $\varphi$. If $\varphi: X \to X'$ is birational then $X'$ is again smooth and we stay within the small list of \cite{Mo82a}; in particular we do not
encounter small contractions or flips. Moreover, the very special structure of $|-K|$ is preserved under these contractions. The final outcome is some Mori fiber space $\varphi: X \to Y$.
\subsection{The case where $F$ is rational}
\subsubsection{The setup}
\begin{proposition} \label{prop75} Consider as above a smooth projective rationally connected threefold $X$ with $-K_X$ nef,
$n(-K_X)=3$, $\nu(-K_X)=2$ and let $f:X \to \bP_1$ be the fibration induced by $|-K_X| = A + |kF|$, $k \geq 2$ and $F$ a general fiber. We further assume that $F$ is $\bP_2$ blown up in 9 points such that $-K_F$ is nef and
$|-K_F|$ consists of a single smooth elliptic curve $C$. Then $k=2$, $A = C \times \bP_1$ and $f$ restricted to $A$ is the second projection. \end{proposition}
\proof As $-K_F = A_{|F} = C$ is an irreducible reduced curve, we find a divisor $A_1$ which occurs in $A$ with multiplicity one such that the rest $A'$ does not meet $F$. Furthermore the restriction of $f$ to $A_1$ is an
elliptic fibration and the anticanonical bundle $-K_{A_1} = (A'+kF)_{|A_1}
\geq kF_{|A_1}$ contains $f_{|A_1}^* \sO(2)$ which will imply our other assertions:\\ Let $\nu: \hat{A}_1\to A_1$ be the normalization and let $\mu: \tilde{A}_1 \to \hat{A}_1$ be the minimal desingularization. Let $h:\bar{A}_1 \to \bP_1$ be a relative minimal model of the induced elliptic fibration $g:\tilde{A}_1 \to \bP_1$ i.~e.\ we take the successive blow-down $\lambda$ of ($-1$)-curves contained in fibers. Computing the (anti-)canonical bundles we get $$ -K_{\hat{A}_1} = \nu^* (-K_{A_1}) + Z$$ with some effective Weil divisor $Z$ supported on the zero locus of the conductor ideal and $$ -K_{\tilde{A}_1} = \mu^* (-K_{\hat{A}_1}) + E_1$$ with some effective divisor $E_1$ and finally $$ -K_{\tilde{A}_1} + E_2 = \lambda^* (-K_{\bar{A}_1})$$ with another effective divisor $E_2$. In particular we still have that $-K_{\bar{A}_1} \geq h^* \sO(2)$ and the dual of the relative dualizing sheaf for $h$ is effective. By \cite[Theorem III.18.2]{BPV84} this is only possible if all smooth fibers are isomorphic and the only singular fibers are multiple fibers of the form $m_i F_i$. Then the weak canonical bundle formula for elliptic fibrations \cite[Corollary V.12.3]{BPV84} shows that $$ K_{\bar{A}_1}=h^* L + \sum (m_i -1)F_i $$ with some line bundle $L$ of degree $$\deg L = \chi(\sO_{\bar{A}_1}) - 2 \chi(\sO_{\bP_1}) = 1 - q(\bar{A}_1) - 2$$ which reads in total as $$ -K_{\bar{A}_1} = h^* \sO(1+q(\bar{A}_1)) + \sum (1 - m_i) F_i $$ By the argument above and the Leray spectral sequence we conclude that $q(\bar{A}_1) \leq 1$. Together with $-K_{\bar{A}_1} \geq h^* \sO(2)$ this shows that there are no multiple fibers and $h$ is a $C$-bundle. But the base is $\bP_1$ hence $\bar{A}_1 = C \times \bP_1$. In particular $-K_{\bar{A}_1} = h^*\sO(2)$ and every inequality above was in fact an equality. 
This implies that $$ \bar{A}_1 = \tilde{A}_1 = \hat{A}_1 = A_1 = C \times \bP_1$$ -- here we use that $C \times \bP_1$ has no curves with negative self-intersection; also $Z=0$ implies that $A_1$ is regular in codimension 1. As we already know that $A_1$ is Cohen-Macaulay, it is normal. By Proposition 7.5, $q(A_1)=1$ now implies $A' = 0$ and $k = 2$, which settles the claim. \qed
\begin{corollary} Under the assumptions of \ref{prop75} we have
$|-mK_X| = mA + |2m F|$. \end{corollary}
\proof This follows immediately from the fact that $A$ has just one component, $A_{|F} = -K_F$ and $\kappa(-K_F) = 0$. \qed
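\vskip .2cm \noindent In more detail: let $D \in |-mK_X|$ and let $F$ be a general fiber. As $\kappa(-K_F) = 0,$ the system $|-mK_F|$ consists of the single member $mC = mA_{|F},$ hence $D_{|F} = mA_{|F}.$ Since $A$ is irreducible, the horizontal part of $D$ equals $mA,$ so $D = mA + V$ with $V$ effective and vertical, and $V \in |-mK_X - mA| = |2mF|.$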
\subsubsection{Running the MMP -- birational contractions}
We start with divisorial Mori contractions $\varphi: X \to X'$ where $X$ has the following special structure provided by proposition 7.7:
\begin{definition} Let $X$ be a smooth projective threefold. We say that $X$ has structure \emph{(A)} if there exists a fibration $f: X \to \bP_1$ such that a general fiber $F$ is a smooth rational surface with $-K_F$ nef,
$|-K_F|$ contains a smooth elliptic curve $C$ and $-K_X = A + 2F$
where $A \cong C \times \bP_1$ and $f_{|A}$ is the second projection. Furthermore we require that $-K_X$ is almost nef which means that there are at most finitely many rational curves $D_i$ such that $-K_X \cdot D \geq 0$ for all curves $D \neq D_i$. \end{definition}
\begin{remark} Let $\varphi: X \to X'$ be a divisorial contraction. By \cite[Prop.\ 3.3]{DPS93} if $-K_X$ is nef then in general $-K_{X'}$ is merely almost nef (and this conclusion also holds if we just require $-K_X$ to be almost nef). \end{remark}
\begin{proposition} Assume that $X$ has structure (A). Then there is no birational Mori contraction $\varphi:X \to X'$ where a divisor is mapped to a point. \end{proposition}
\proof Assume that we have such a contraction and let $E$ be the exceptional divisor. We first treat the case $E \cong \bP_2$. By Mori's list we have two possibilities for the normal bundle: either $\sO_E(E) = \sO_E(-1)$ or $\sO_E(-2)$. As $E$ itself is not fibered we must have $F \cdot E = 0$ and $E$ is contained in some fiber $F_0$ of $f$. Obviously $E$ is different from $A$, therefore $A \cdot E$ is an effective cycle of curves and must be either the elliptic curve $C_0 = A \cdot F_0$ or zero. On the other hand computing
the canonical bundle $K_E = (K_X + E)_{|E} = (-A + E)_{|E}$ we see that
$A_{|E}$ has degree 1 or 2. Contradiction.\\ Another case is $E \cong Q$ a singular quadric in $\bP_3$. By Mori's classification the normal bundle is $\sO_E(-1)$ in this case and we can conclude as above.\\ The last case is $E \cong \bP_1 \times \bP_1$ with normal bundle $\sO_E(-1,-1)$ hence
$-K_{X|E} = \sO_E(1,1)$. If $l$ is a (general) fiber of the first projection of $E$ we have $-K_X \cdot l = 1$. As $E \neq A$ we know that $l\not\subset A$ and therefore $A \cdot l \geq 0$. This gives the only possibility $F \cdot l = 0$, $A \cdot l = 1$
which implies that $F_{|E} = \sO_E(a,0)$ with $a \geq 0$ hence $A_{|E} = \sO_E(1-2a,1)$. But this is an effective cycle on $E$ which means that $1-2a \geq 0$ therefore $a=0$ and
$F_{|E}$ is trivial. So $E$ is contained in some special fiber $F_0$ and
$A_{|E}$ is either the elliptic curve $C_0 = A_{|F_0}$ or zero. Both cases are not possible as
$A_{|E} = \sO_E(1,1)$. \qed
\begin{proposition} \label{prop710} Assume that $X$ has structure (A) and let $\varphi: X \to X'$ be a birational Mori contraction where the exceptional divisor $E$ is mapped to a curve. Then one of the following two cases occurs: \begin{description} \item (i) $E \neq A$ and we have $-K_{X'} = A' + 2F'$ with $A' = \varphi(A) \cong A, F' = \varphi(F)$ and either $F' \cong F$ or $F \to F'$ is the blow-down of some (--1)-curves in $F$. The fibration $f$ factors as $f = f' \circ \varphi$ and $f': X' \to \bP_1$ gives $X'$ the structure (A). \item (ii) $E = A$ and we have $-K_{X'} = 2G$ with $G = \varphi(F) \cong F$
and $\varphi$ is the blow-up of the elliptic curve $G^2$ which is in $|-K_G|$. In this case $X'$ has structure (O). \end{description} \end{proposition}
\begin{definition} Let $X$ be a smooth projective threefold. We say that $X$ has structure \emph{(O)} if $-K_X = 2G$ is almost nef where $G$ is a
smooth rational surface with $-K_G$ nef, $|-K_G|$ contains a smooth elliptic curve $D = G^2$ and $G$ moves in a linear system without fixed components. \end{definition}
\begin{re} In case (ii) of proposition \ref{prop710} $f$ does not factor via $\varphi$ as the images of two fibers $G_1, G_2$ meet in $D = \varphi(E)$. Since $A \cdot l = -1$ this case can only occur if $A$ is not nef. \end{re}
\proof The extremal ray corresponding to the contraction is generated by a rational curve $l$ with $-K_X \cdot l = 1$. In fact $\varphi$ is the blow-up of the smooth curve $D \subset X'$ and $l$ is a fiber of
the $\bP_1$-bundle $\pi = \varphi_{|E}$. We also know that $X'$ is smooth.\\ We first treat the case $E \neq A$. Then $A \cdot l \geq 0$ and the intersection numbers are $A \cdot l = 1$ and $F \cdot l = 0$. Hence if we pick some $l_0$ it must be contained in a fiber $F_0$. We now have two different subcases:
$\dim f_{|E} = 0$ or $1$. If $f_{|E}$ maps onto $\bP_1$ then $E \cdot F$ is
non-empty and in fact $F_{|E}=b\:\!l$ with $b>0$. Restricted to a general fiber
$\varphi_{|F}$ blows down some (--1)-curves $l$ in $F$ each of them meeting $C = -K_F$ transversally in one point. In particular all the curves $l$ meet
$A$ transversally in one point which implies that $\varphi_{|A}$ is an isomorphism.\\ If $f(E)$ is a point, $E$ is contained in some special fiber $F_0$ and in particular $A \cdot E$ is either $C_0 = -K_{F_0}$ or zero and in fact it must be $C_0$ as $A \cdot l = 1$. This also shows that $\varphi(A) \cong A$. The rest is obvious because $\varphi$ is an isomorphism outside $F_0$.\\ The other case is $E = A$. Since $A = C \times \bP_1$ we know that $l$ is a
fiber of the first projection of $A$ and $F \cdot l = 1$ as $A_{|F}=C$. As we contract the curves $l$ meeting $F$ transversally we conclude $\varphi(F) \cong F$. The other assertions are evident. \qed
\begin{proposition} \label{prop713} Assume that $X$ has structure (O) and let $\varphi: X \to X'$ be a birational Mori contraction. Then $\varphi$ contracts a divisor
$E \cong \bP_2$ with normal bundle $\sO_E(-1)$ to a point. $X'$ has structure (O) and $\varphi_{|G}$ blows down one (--1)-curve in $G$. \end{proposition}
\proof As $-K_X$ is divisible by two, $\varphi$ is induced by a rational curve $l$ with $-K_X \cdot l = 2$ and contracts $E \cong \bP_2$ to
a point on the smooth threefold $X'$. Since $G \cdot l = 1$ and $l$ is a line on the exceptional $\bP_2$ we have $G_{|E} = \sO_E(1)$. So $G^2 \cdot E = 1$ which also means that $E$ intersects $D = G^2$ (which is an anticanonical divisor for $G$) in one point. Hence a general $l$ is a (--1)-curve in the appropriate $G$ containing it. \qed
\subsubsection{The outcome -- Mori fiber spaces}
Since $\kappa(X)=-\infty$ the Mori program must terminate with a fiber space. We start with a smooth 3-fold $X$ having structure (A). After a finite number of blow-downs described in \ref{prop710} and \ref{prop713} we obtain a smooth 3-fold (which we call again $X$) with structure (A) or (O). The final step in the Mori program is a fiber type contraction $\varphi: X \to Y$. We first consider the case where $\varphi$ is a conic bundle.
\begin{proposition} Assume that $X$ has structure (A) and let $\varphi: X \to Y$ be a Mori contraction to a smooth surface $Y$, i.~e.\ $\varphi$ is a conic bundle. Let $l$ be a general conic. Then \begin{description} \item either $F \cdot l = 1$ and $Y = F$, $X = \bP_1 \times F$ and $\varphi$
is the second projection. In this case $A = \varphi^* (C)$ for some elliptic curve $C \in |-K_F|$. \item or $F \cdot l = 0$. In this case $Y = \bP_1 \times \bP_1$ and $\varphi$
is a $\bP_1$-bundle; in particular $\varphi_{|F}$ gives $F$ the structure of a ruled surface. \end{description} \end{proposition}
\proof a) We first consider the case $F \cdot l \neq 0$. As $-K_X \cdot l = A \cdot l + 2 F \cdot l = 2$ the only possibility is $F \cdot l = 1$
which also implies that $\varphi$ is a $\bP_1$-bundle and $\varphi_{|F}$ is an isomorphism. Consider the product map $p = f \times \varphi: X \to \bP_1 \times Y$ which is generically one to one. If $D \subset X$ is a curve which is contracted by $p$ then $D$ is also contracted by $\varphi$. Therefore $D$ is a fiber of $\varphi$ and the rigidity lemma shows that $D$ cannot be contracted by $p$. It follows that $p$ is an isomorphism $X \cong \bP_1 \times F$ and $\varphi$ is the projection to the second factor.\\ b) The other case is $F \cdot l = 0$. This implies $F = \varphi^* F'$ which gives a factorization
$f: X \stackrel{\varphi}{\la} Y \stackrel{pr_2}{\la} \bP_1$. We notice that $pr_2 \circ \varphi_{|A}$ equals the projection of $A$ to the second factor. We also have $A \cdot l = 2$ which means that $\varphi$ restricted to $A$ is generically 2:1. It is also finite as $A = C \times \bP_1$ has no contractible curves. In fact if we look
what happens for a general fiber $F$ then $\varphi_{|F}$ maps $C$ 2:1 on
$pr_2^{-1}(f(F)) \cong \bP_1$. As $pr_2 \circ \varphi_{|A}$ is the second
projection of $A$ the ramification locus of $\varphi_{|A}$ must be equal to some fibers of the first projection of $A$ which gives $Y = \bP_1 \times \bP_1$.\\ Let $Q = \{pt\} \times \bP_1$ be a general fiber of the first projection and
let $X_Q = \varphi^{-1}(Q)$. Then $A_{|Q}$ consists of two sections
$Q_1, Q_2$ of $\varphi_{|X_Q}$. As $A \cdot l = 2$ we conclude that
$Q_i \cdot l = 1$ and $\varphi_{|X_Q}$ is a smooth conic bundle, i.~e.\ $X_Q$ is a ruled surface. (In fact $X_Q$ must be $\bP_1 \times \bP_1$ as $Q_1$ and $Q_2$ do not meet.) In particular the discriminant locus $\Delta$ of $\varphi$ is contained in some fibers of the first projection of $Y$. As $\varphi$ is an extremal contraction every nonsingular rational curve in $\Delta$ must meet the rest of $\Delta$ in at least two points (see for example \cite[p. 83]{Mi83}). In our situation this implies that $\Delta$
is empty and $\varphi$ is a $\bP_1$-bundle so $\varphi_{|F}$ exhibits $F = f^{-1}(pt)$ as a ruled surface over $pr_2^{-1}(pt)$. \qed
\begin{proposition} Assume that $X$ has structure (O) and let $\varphi: X \to Y$ be a Mori contraction to a smooth surface $Y$. Then $Y \cong G$ and $\varphi$ is a $\bP_1$-bundle. \end{proposition}
\proof Let $l$ be a general conic. As $-K_X \cdot l = 2$ we get $G \cdot l = 1$, so every fiber of $\varphi$ is irreducible and reduced, i.~e.\ $\varphi$ is a $\bP_1$-bundle, and $G$ is a section; hence $Y \cong G$. \qed
Next we consider del Pezzo fibrations over some curve which must be $\bP_1$ as $X$ is rationally connected. Since $\varphi$ is a Mori contraction we also know $\rho(X)=2$.
\begin{proposition} Assume that $X$ has structure (A) and let $\varphi: X \to \bP_1$ be a Mori contraction. Then $\varphi = f$ and in particular $F$ is a del Pezzo surface. \end{proposition}
\proof The contraction is generated by a rational curve $l$ with $-K_X \cdot l \in \{1, 2, 3\}$ and $l$ is numerically effective. Therefore $A \cdot l \geq 0$ and in fact $A \cdot l > 0$ as $A$ is not a fiber of $\varphi$. If $-K_X \cdot l = 1$ or $2$ this implies that $F \cdot l = 0$ so $F = \varphi^*(pt)$ and $f$ and $\varphi$ coincide. In the last case $-K_X \cdot l = 3$ and $\varphi$ is a $\bP_2$-bundle and $l$ a line. As $\bP_2$ is not fibered, $F$ restricted to a $\bP_2$ is trivial and again the two fibrations coincide. \qed
\begin{proposition}
Assume that $X$ has structure (O) and let $\varphi: X \to \bP_1$ be a Mori contraction. Then $\varphi$ is a quadric bundle and $\varphi_{|G}$ defines a $\bP_1$-fibration on $G$. \end{proposition}
\proof As $-K_X$ is divisible by 2, $\varphi$ is always a quadric bundle. Let
$Q \cong \bP_1 \times \bP_1$ be a fiber. The canonical bundle formula shows that $G_{|Q}$ has type $(1,1)$ so $Q$ intersects $G$ in a smooth rational curve. \qed
The last possibility is the case where the base of the Mori fiber space is a point, i.~e.\ $X$ is a Fano threefold with $\rho = 1$.
\begin{proposition} Let $X$ be a Fano manifold with $\rho(X) = 1$. Then $X$ does not have structure (A). \end{proposition}
\proof This is obvious because $F^2 = 0$ but $A_{|F}$ is the anti-canonical bundle of $F$, which is not zero. \qed
\begin{proposition} Assume that $X$ is a Fano manifold with $\rho(X)=1$ and that $X$ has structure (O). Then either $X \cong \bP_3$ and $G$ is a smooth quadric or $X$ has index two, $G$ is the generator of $\mathrm{Pic} (X)$ and is therefore a del Pezzo surface of degree $1 \leq G^3 \leq 5$. \end{proposition}
\proof We just use the fact that $-K_X$ is divisible by two and cite Iskovskih's classification. \qed
\small \begin{tabular}{lcl} Thomas Bauer and Thomas Peternell\\ Mathematisches Institut \\ Universit\" at Bayreuth \\ D-95440 Bayreuth, Germany \\ thomas.bauer@uni-bayreuth.de \\ thomas.peternell@uni-bayreuth.de \\ \end{tabular}
\end{document}
\begin{document}
\title{Characteristic polynomials of Linial arrangements
for exceptional root systems}
\begin{abstract} The (extended) Linial arrangement $\mathcal{L}_{\Phi}^m$ is a certain finite truncation of the affine Weyl arrangement of a root system $\Phi$ with a parameter $m$. Postnikov and Stanley conjectured that all roots of the characteristic polynomial of $\mathcal{L}_{\Phi}^m$ have the same real part, and this has been proved for the root systems of classical types.
In this paper we prove that the conjecture is true for exceptional root systems when the parameter $m$ is sufficiently large.
The proof is based on representations of the characteristic quasi-polynomials in terms of Eulerian polynomials.
\end{abstract}
\tableofcontents
\section{Introduction} \label{sec:intro}
\subsection{Background} \label{subsec:background}
A hyperplane arrangement $\mathcal{A}=\{H_1, \dots, H_n\}$ is a finite collection of affine hyperplanes in an $\ell$-dimensional vector space $\ensuremath{\mathbb{K}}^\ell$. Despite its simplicity, the theory of hyperplane arrangements has fruitful connections with many areas in mathematics (\cite{ot, st-lect}). One of the most important invariants of an arrangement $\mathcal{A}$ is the \emph{characteristic polynomial} $\chi(\mathcal{A}, t)\in\mathbb{Z}[t]$. Indeed, the characteristic polynomial is related to several other invariants, such as the Poincar\'e polynomial of the complexified complement $M(\mathcal{A})$ \cite{os}, the number of chambers for real arrangements \cite{zas-face}, the number of $\ensuremath{\mathbb{F}}_q$-rational points \cite{cra-rot, ter-jac}, Chern classes of certain vector bundles \cite{mus-sch, alu}, and lattice point counting \cite{bl-sa, ktt-cent, ktt-noncent, ktt-quasi, yos-worp}.
\subsection{Main results} \label{subsec:result}
Let $V=\mathbb{R}^\ell$ be an $\ell$-dimensional Euclidean space. Let $\Phi\subset V^*$ be an irreducible root system. Fix a positive system $\Phi^+\subset\Phi$. For a positive root $\alpha\in\Phi^+$ and $k\in\mathbb{Z}$, define \[ H_{\alpha, k}=\{x\in V\mid \alpha(x)=k\}. \] The set of all such hyperplanes is called the affine Weyl arrangement. Finite truncations of the affine Weyl arrangement have received considerable attention (\cite{ath-adv, ath-survey, ath-lin, ath-gen, ede-rei, ps-def, shi-kl, ter-multi, yos-char}). Among others, the (extended) Linial arrangement $\ensuremath{\mathcal{L}}_\Phi^m$ is defined by \[ \ensuremath{\mathcal{L}}_\Phi^m=\{H_{\alpha, k}\mid \alpha\in\Phi^+, k=1, 2, \dots, m\}, \] (where $\ensuremath{\mathcal{L}}_\Phi^0=\emptyset$ by convention). In \cite{ps-def}, Postnikov and Stanley studied combinatorial aspects of Linial arrangements. They posed the following conjecture.
\begin{conjecture} \label{conj:rh} (\cite[Conjecture 9.14]{ps-def}) Suppose $m\geq 1$. Then every root $\alpha\in\mathbb{C}$ of the equation $\chi(\ensuremath{\mathcal{L}}_{\Phi}^{m}, t)=0$ satisfies $\operatorname{Re} \alpha=\frac{mh}{2}$, where $h$ denotes the Coxeter number of $\Phi$. \end{conjecture} The conjecture was verified for $\Phi=A_\ell$ by Postnikov and Stanley \cite{ps-def}, and for $\Phi=B_\ell, C_\ell$, and $D_\ell$ by Athanasiadis (\cite{ath-lin}). These works are based on explicit representations of $\chi(\ensuremath{\mathcal{L}}_{\Phi}^{m}, t)$ for the corresponding root systems. (The case $\Phi=G_2$ is also easy.)
For exceptional root systems, some partial answers have recently been reported in \cite{yos-worp}. Namely, for $\Phi\in\{E_6, E_7, E_8, F_4\}$, Conjecture \ref{conj:rh} has been verified when the parameter $m>0$ satisfies \[ m\equiv -1 \left\{ \begin{array}{ll} \mod 6, & \Phi=E_6, E_7, F_4, \\ \mod 30, & \Phi=E_8. \end{array} \right. \]
The purpose of this paper is to prove Conjecture \ref{conj:rh} for exceptional root systems when $m\gg 0$. The main result is the following.
\begin{theorem} \label{thm:main} (Corollary \ref{cor:main}) Let $\Phi\in\{E_6, E_7, E_8, F_4\}$. Suppose $m\gg 1$. Then, every root $\alpha\in\mathbb{C}$ of the equation $\chi(\ensuremath{\mathcal{L}}_{\Phi}^{m}, t)=0$ satisfies $\operatorname{Re} \alpha=\frac{mh}{2}$. \end{theorem}
\subsection{What makes roots lie on a line?}
The proof of Theorem \ref{thm:main} relies on the expression of the characteristic quasi-polynomial $\chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)$ in terms of the Ehrhart quasi-polynomials and Eulerian polynomials developed in \cite{yos-worp}. (See \S \ref{sec:pre}.) However, the key result that enables us to conclude ``having the same real part'' is the following elementary lemma.
\begin{lemma} \label{lem:elem} Let $f(t)\in\mathbb{R}[t]$. Suppose $M$ is a real number that satisfies the inequality \[ M>2\cdot\max\{\operatorname{Re}\alpha\mid\alpha\in\mathbb{C}, f(\alpha)=0\}. \]
Let $\omega\in\mathbb{C}$ be a complex number with $|\omega|=1$. Then, every root $\alpha$ of the equation \[ f(t)-\omega\cdot f(M-t)=0 \] satisfies $\operatorname{Re} \alpha=\frac{M}{2}$. \end{lemma} \begin{proof} Set $f(t)=a(t-\alpha_1)(t-\alpha_2)\cdots(t-\alpha_n)$, ($a\neq 0$). As $f(t)$ is a real polynomial, $\overline{\alpha_i}$ is also a root of $f(t)$. Set $\beta_i=M-\overline{\alpha_i}$. Then, $\alpha_i$ and $\beta_i$ are symmetric with respect to the line $\operatorname{Re} z=\frac{M}{2}$, and we have $f(M-t)=(-1)^n\cdot a\cdot\prod_{i=1}^n(t-\beta_i)$.
If $\operatorname{Re} z<\frac{M}{2}$, then $|z-\alpha_i|<|z-\beta_i|$ for all $i$,
and hence $|f(z)|<|f(M-z)|$.
Similarly, $\operatorname{Re} z>\frac{M}{2}$ implies $|f(z)|>|f(M-z)|$. Therefore, $f(z)=\omega f(M-z)$ implies that $\operatorname{Re} z=\frac{M}{2}$. \end{proof}
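The lemma is easy to illustrate numerically. The following sketch (not part of the proof; the polynomial, $M$, and $\omega$ are arbitrary choices of ours) takes $f(t)=t^2+t+1$, whose roots have real part $-\frac{1}{2}$, so any $M>-1$ qualifies; the roots of $f(t)-\omega f(M-t)$ should then all lie on the line $\operatorname{Re} t=\frac{M}{2}$:

```python
import math
import numpy as np

# f(t) = t^2 + t + 1: both roots have real part -1/2, so any M > -1 works.
f = np.array([1.0, 1.0, 1.0])        # coefficients, highest degree first
M = 4.0
omega = np.exp(1j * np.pi / 3)       # any complex number of modulus 1

def subst_M_minus_t(coeffs, M):
    """Coefficients of t -> f(M - t), highest degree first."""
    n = len(coeffs) - 1
    out = np.zeros(n + 1, dtype=complex)
    for k, c in enumerate(coeffs):   # c is the coefficient of t^(n-k)
        d = n - k
        for j in range(d + 1):       # (M-t)^d = sum_j C(d,j) M^(d-j) (-t)^j
            out[n - j] += c * math.comb(d, j) * M ** (d - j) * (-1) ** j
    return out

roots = np.roots(f - omega * subst_M_minus_t(f, M))
print(np.real(roots))                # every real part equals M/2 = 2.0
```

The same check with other unimodular $\omega$ and other real polynomials behaves identically, as the lemma predicts.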
The basic strategy of the proof of Theorem \ref{thm:main} is to construct $F^{(m)}(t)\in\mathbb{Q}[t]$ such that \[ \chi(\ensuremath{\mathcal{L}}_{\Phi}^m, t)=F^{(m)}(t)+(-1)^\ell\cdot F^{(m)}(mh-t), \] where $\ell$ is the rank of $\Phi$, then apply Lemma \ref{lem:elem}.
The remainder of this paper is organized as follows. In \S \ref{sec:pre}, we recall the notion of the characteristic quasi-polynomial $\chi_{\operatorname{quasi}}(\mathcal{A}, q)$ for an integral arrangement $\mathcal{A}$. The characteristic quasi-polynomial $\chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_\Phi^m, q)$ of the Linial arrangement $\ensuremath{\mathcal{L}}_\Phi^m$ can be expressed in terms of the Ehrhart quasi-polynomial $L_\Phi(t)$ of the fundamental alcove and the Eulerian polynomial $R_\Phi(t)$ of $\Phi$. The relation between these objects is described as follows (Theorem \ref{thm:yos-worp}) \[ \chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_\Phi^m, q)=R_{\Phi}(S^{m+1})L_{\Phi}(q), \] where $S$ is the shift operator. In \S \ref{sec:limpoly}, using the symmetry of the Eulerian polynomial (Proposition \ref{prop:properties}), we introduce the truncated characteristic quasi-polynomial $\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_\Phi^m, q)$, which satisfies \[ \chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)=\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)+ (-1)^\ell\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, mh-q), \] (Proposition \ref{prop:decquasi}). Since these functions are quasi-polynomials, it does not make sense to consider the roots. However, we will see that the limit \[ F_\Phi(q):=R_\Phi^{1/2}(S)q^\ell= \lim_{m\rightarrow\infty} \frac{\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_\Phi^m, mq)}{m^\ell} \] becomes a polynomial in $q$ (Proposition \ref{prop:normalim}). The location of the zeros of $F_\Phi(t)$ is crucial for $m\gg 0$. Indeed, we will check, case-by-case, that the real parts of the zeros are less than $\frac{h}{2}$ (Proposition \ref{prop:Remax}).
Because the proof for the quasi-polynomials is complicated, we will first give a simplified ``polynomial version'' of the main result as a ``toy-case'' in \S \ref{sec:toy}.
In \S \ref{subsec:settings}, we summarize those properties of quasi-polynomials and Eulerian polynomials that are necessary for the proof of the main result. This is mainly to simplify the notation. In \S \ref{subsec:asympt}, we present a weaker version of the main result for the asymptotic behavior of the real parts of roots. In \S \ref{subsec:exact}, we prove the main result.
\section{Preliminaries}
\label{sec:pre}
\subsection{Quasi-polynomials with the $GCD$-property}
\label{subsec:quasi}
A map $F:\mathbb{Z}\longrightarrow\mathbb{C}$ is called a \emph{quasi-polynomial} with a period $\rho>0$ if $F(q)$ can be expressed as a polynomial in $q$ that depends only on the residue class $q \mod \rho$. In other words, there exist polynomials $f_0, f_1, \dots, f_{\rho-1}\in\mathbb{C}[t]$ such that \[ F(q)=f_i(q) \] if $q\equiv i\mod\rho$. The polynomials $f_0, \dots, f_{\rho-1}$ are called the constituents of $F$. The period $\rho$ is said to be the minimal period if $F$ does not have smaller periods than $\rho$. The quasi-polynomial $F$ can be expressed as \begin{equation} \label{eq:periodicpoly} F(q)=c_0(q)q^d+c_1(q)q^{d-1}+\dots+c_d(q), \end{equation} where $c_i:\mathbb{Z}\longrightarrow\mathbb{C}$ is a periodic function with a period $\rho$ (i.e., $c_i(q+\rho)=c_i(q)$ for all $q\in\mathbb{Z}$).
We say that the quasi-polynomial $F$ has a constant leading term if $c_0(q)$ in (\ref{eq:periodicpoly}) is a nonzero constant function. In this case, $d$ is called the degree of the quasi-polynomial $F$.
The quasi-polynomial $F$ is said to have the $GCD$-property if the constituents satisfy: $\gcd(i, \rho)=\gcd(j, \rho)\Longrightarrow f_i(t)=f_j(t)$.
\begin{remark} In this paper, we distinguish the roles of the variables $q$ and $t$. The variable $q$ always runs through $\mathbb{Z}$ (or $\mathbb{Z}_{>0}$), whereas $t$ runs through $\mathbb{R}$ or $\mathbb{C}$. Under this convention, the variable of a quasi-polynomial should be $q$, and its constituents may have a variable $t$. \end{remark}
\subsection{Characteristic quasi-polynomials}
\label{subsec:charquasipoly}
Let $\mathcal{A}=\{H_1, \dots, H_n\}$ be an arrangement of affine hyperplanes in $\mathbb{R}^\ell$. Throughout this paper, we assume that the hyperplanes are defined over $\mathbb{Z}$. More precisely, for each $H_i$ there exists an integral linear equation \[ \alpha_i(x_1, \dots, x_\ell)=a_{i1}x_1+\dots+a_{i\ell}x_\ell+b_i \] ($a_{ij}, b_i\in\mathbb{Z}$) that satisfies $H_i=\alpha_i^{-1}(0)\subset\mathbb{R}^\ell$. To an arrangement $\mathcal{A}$, we can associate the modulo $q>0$ complement: \[ M_q(\mathcal{A})=\left(\Z/q\Z\right)^\ell\smallsetminus \bigcup_{i=1}^n\overline{H}_i, \] where $\overline{H}_i=\{x\in\left(\Z/q\Z\right)^\ell\mid\alpha_i(x)\equiv 0\mod q\}$.
The following theorem was given by Kamiya, Takemura and Terao.
\begin{theorem} (\cite{ktt-cent, ktt-noncent, ktt-quasi}) $\#M_q(\mathcal{A})$ is a quasi-polynomial with the $GCD$-property for sufficiently large $q\gg 0$. \end{theorem} We denote the quasi-polynomial by $\chi_{\operatorname{quasi}}(\mathcal{A}, q)$, which is called the \emph{characteristic quasi-polynomial} of $\mathcal{A}$. The characteristic quasi-polynomial has a constant leading term; hence, it is of the form \[ \chi_{\operatorname{quasi}}(\mathcal{A}, q)=q^\ell+c_1(q)\cdot q^{\ell-1}+\dots+c_\ell(q), \] where $c_i:\mathbb{Z}\longrightarrow\mathbb{Z}$, $i=1, \dots, \ell$ are periodic functions. It is also known that the prime constituent of $\chi_{\operatorname{quasi}}(\mathcal{A}, q)$ is equal to the characteristic polynomial of $\mathcal{A}$ \cite{ath-adv, ath-lin}, i.e., the characteristic polynomial $\chi(\mathcal{A}, t)$ has the form \[ \chi(\mathcal{A}, t)=t^\ell+c_1(1)\cdot t^{\ell-1}+\dots+c_\ell(1). \]
\subsection{Eulerian polynomials for root systems}
\label{subsec:epoly}
We first recall the terminology of \cite{bour, hum}.
Let $V=\mathbb{R}^\ell$ be the Euclidean space with inner product $(\cdot, \cdot)$. Let $\Phi\subset V$ be an irreducible root system with exponents $e_1, \dots, e_\ell$, Coxeter number $h$, and Weyl group $W$. For any integer $k\in\mathbb{Z}$ and $\alpha\in\Phi^+$, the affine hyperplane $H_{\alpha, k}$ is defined by \begin{equation} \label{eq:affinehyperp} H_{\alpha, k}=\{x\in V\mid (\alpha, x)=k\}. \end{equation}
Fix a positive system $\Phi^+\subset \Phi$ and the set of simple roots $\Delta=\{\alpha_1, \dots, \alpha_\ell\} \subset\Phi^+$. The highest root, denoted by $\widetilde{\alpha}\in\Phi^+$, can be expressed as a linear combination $\widetilde{\alpha}=\sum_{i=1}^\ell c_i\alpha_i$ ($c_i\in\mathbb{Z}_{>0}$). We also set $\alpha_0:=-\widetilde{\alpha}$ and $c_0:=1$. Then, we have the linear relation \begin{equation} \label{eq:linrel} c_0\alpha_0+c_1\alpha_1+\dots+c_\ell\alpha_\ell=0. \end{equation} The coweight lattice $Z(\Phi)$ and the coroot lattice $\check{Q}(\Phi)$ are defined as \[ \begin{split} Z(\Phi) &= \{x\in V\mid (\alpha_i, x)\in\mathbb{Z}, \alpha_i\in\Delta\}, \\ \check{Q}(\Phi) &= \sum_{\alpha\in\Phi}\mathbb{Z}\cdot \frac{2\alpha}{(\alpha, \alpha)}. \end{split} \] The coroot lattice $\check{Q}(\Phi)$ is a finite index subgroup of the coweight lattice $Z(\Phi)$. The index $\#\frac{Z(\Phi)}{\check{Q}(\Phi)}=f$ is called the \emph{index of connection}.
Let $\varpi_i^\lor\in Z(\Phi)$ be the dual basis of the simple roots $\alpha_1, \dots, \alpha_\ell$, that is, $(\alpha_i, \varpi_j^\lor)=\delta_{ij}$. Then, $Z(\Phi)$ is a free abelian group generated by $\varpi_1^\lor, \dots, \varpi_\ell^\lor$. We also have $c_i=(\varpi_i^\lor, \widetilde{\alpha})$.
Each connected component of $V\smallsetminus \bigcup\limits_{\substack{\alpha\in\Phi^+\\ k\in\mathbb{Z}}}H_{\alpha, k}$ is an open simplex, called an \emph{alcove}. Define the fundamental alcove ${\sigma_\Phi^\circ}$ by \[ {\sigma_\Phi^\circ} = \left\{ x\in V
\left| \begin{array}{ll} (\alpha_i, x)>0,&(1\leq i\leq \ell)\\ (\widetilde{\alpha}, x)<1& \end{array} \right. \right\} \] The closure $\overline{\alcov}=\{x\in V\mid (\alpha_i, x)\geq 0\ (1\leq i\leq \ell),\ (\widetilde{\alpha}, x)\leq 1\}$ is a simplex with vertices $0, \frac{\varpi_1^\lor}{c_1}, \dots, \frac{\varpi_\ell^\lor}{c_\ell}\in V$. The supporting hyperplanes of facets of $\overline{\alcov}$ are $H_{\alpha_1, 0}, \dots, H_{\alpha_\ell, 0}, H_{\widetilde{\alpha}, 1}$.
Using the linear relation (\ref{eq:linrel}), we define the function $\operatorname{asc}:W\longrightarrow\ensuremath{\mathbb{Z}}$.
\begin{definition} \label{def:asc} Let $w\in W$. Then, $\operatorname{asc}(w)$ is defined by \[ \operatorname{asc}(w)=\sum_{\substack{0\leq i\leq \ell\\ w(\alpha_i)>0}}c_i. \] \end{definition}
\begin{definition} \label{def:geneul} The generalized Eulerian polynomial $R_\Phi(x)$ is defined by \[ R_\Phi(x)=\frac{1}{f}\sum_{w\in W}x^{\operatorname{asc}(w)}. \] \end{definition} The following proposition gives some basic properties of $R_\Phi(x)$. \begin{proposition} \label{prop:properties} (\cite{lp-alc2}) \begin{itemize} \item[(1)] $\deg R_\Phi(x)=h-1$. \item[(2)] (Duality) $x^h\cdot R_\Phi(\frac{1}{x})=R_\Phi(x)$. \item[(3)] $R_\Phi(x)\in\ensuremath{\mathbb{Z}}[x]$. \item[(4)] $R_{A_\ell}(x)$ is equal to the classical Eulerian polynomial. (See \cite{comtet, foa-hist, st-ec1} for classical Eulerian polynomials.) \end{itemize} \end{proposition}
The polynomial $R_\Phi(x)$ was introduced by Lam and Postnikov in \cite{lp-alc2}. They proved that $R_\Phi(x)$ can be expressed in terms of cyclotomic polynomials and classical Eulerian polynomials. \begin{theorem} \label{thm:lp} (\cite[Theorem 10.1]{lp-alc2}) Let $\Phi$ be a root system of rank $\ell$. Then, \begin{equation} \label{eq:lp} R_\Phi(x)=[c_0]_x\cdot [c_1]_x\cdot [c_2]_x\cdots [c_\ell]_x\cdot R_{A_\ell}(x), \end{equation} where $[c]_x=\frac{x^c-1}{x-1}$. \end{theorem} We will give an alternative proof of Theorem \ref{thm:lp} in \S \ref{subsec:cqpLinial} using Ehrhart series.
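The factorization (\ref{eq:lp}) is easy to test numerically. The following sketch (ours, not from \cite{lp-alc2}; `polymul` and `bracket` are hypothetical helper names) checks it for $\Phi=G_2$, where $(c_0,c_1,c_2)=(1,2,3)$ and $R_{A_2}(x)=x+x^2$, and also checks the duality $x^h R_\Phi(\frac{1}{x})=R_\Phi(x)$ of Proposition \ref{prop:properties}:

```python
def polymul(a, b):
    """Multiply two polynomials given as coefficient lists, low degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def bracket(c):
    """[c]_x = 1 + x + ... + x^(c-1)."""
    return [1] * c

# G2: marks (c_0, c_1, c_2) = (1, 2, 3), and R_{A_2}(x) = x + x^2.
R = [0, 1, 1]                   # coefficients of R_{A_2}, low degree first
for c in (1, 2, 3):
    R = polymul(R, bracket(c))
print(R)   # [0, 1, 3, 4, 3, 1], i.e. x + 3x^2 + 4x^3 + 3x^4 + x^5 = R_{G2}(x)

# Duality: x^h R(1/x) = R(x) with h = 6, i.e. the coefficients are palindromic.
assert R == [0] + list(reversed(R[1:]))
```

The printed coefficient list agrees with the expansion of $R_{G_2}(x)$ computed in Example \ref{ex:g2} below.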
\subsection{Ehrhart quasi-polynomials for root systems}
\label{subsec:eqpoly}
It is known that the number of lattice points $L_\Phi(q)=\#\left(q\cdot\overline{\alcov}\cap Z(\Phi)\right)$ in the dilate $q\cdot\overline{\alcov}$ is a quasi-polynomial in $q$, called the \emph{Ehrhart quasi-polynomial} of $\overline{\alcov}$. (See \cite{be-ro, st-ec1} for details on Ehrhart theory.) Suter \cite{sut} explicitly computed the Ehrhart quasi-polynomial $L_{\overline{\alcov}}(q)$. Several useful conclusions may be summarized as follows. \begin{theorem} \label{thm:suter} (Suter \cite{sut}) \begin{itemize} \item[(i)] The Ehrhart quasi-polynomial $L_{\overline{\alcov}}(q)$ has the $GCD$-property. \item[(ii)] $L_{\overline{\alcov}}(q)$ has a constant leading term whose
leading coefficient is $\frac{f}{|W|}$. \item[(iii)] The minimal period is $\widetilde{n}=\operatorname{lcm}(c_1, c_2, \dots, c_\ell)$. (See Table \ref{fig:table} for explicit values.) \item[(iv)] If $q\in\mathbb{Z}$ is relatively prime to the period $\widetilde{n}$, then \[
L_{\overline{\alcov}}(q)=\frac{f}{|W|}(q+e_1)(q+e_2)\cdots(q+e_\ell). \]
\item[(v)] $\operatorname{rad}(\widetilde{n})|h$, where $\operatorname{rad}(\widetilde{n})=
\prod_{p: \mbox{\scriptsize prime}, p|\widetilde{n}}p$ is the radical of $\widetilde{n}$. \item[(vi)] The Ehrhart series $\operatorname{Ehr}_{\Phi}(z)$ of $L_\Phi(q)$ is \begin{equation} \label{eq:ehrser} \operatorname{Ehr}_{\Phi}(z):=\sum_{q=0}^\infty L_\Phi(q)z^q= \frac{1}{(1-z^{c_0})(1-z^{c_1})\cdots(1-z^{c_\ell})}. \end{equation} \end{itemize} \end{theorem}
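Items (iv) and (vi) can be cross-checked numerically. For $\Phi=G_2$ the coefficient of $z^q$ in $\frac{1}{(1-z)(1-z^2)(1-z^3)}$ counts partitions of $q$ into parts $1,2,3$, and for $q$ coprime to $\widetilde{n}=6$ it must equal $\frac{f}{|W|}(q+e_1)(q+e_2)=\frac{1}{12}(q+1)(q+5)$. A minimal sketch of this check (ours, using the standard partition-counting recurrence):

```python
import math

# Power-series coefficients of Ehr_{G2}(z) = 1/((1-z)(1-z^2)(1-z^3)),
# i.e. the number of partitions of q into parts 1, 2, 3.
N = 120
coeff = [0] * (N + 1)
coeff[0] = 1
for part in (1, 2, 3):               # the marks c_0, c_1, c_2 of G2
    for q in range(part, N + 1):
        coeff[q] += coeff[q - part]

# Item (iv): for q coprime to 6, L(q) = (1/12)(q+1)(q+5)
# (f = 1, |W| = 12, exponents e_1 = 1, e_2 = 5 for G2).
for q in range(1, N + 1):
    if math.gcd(q, 6) == 1:
        assert coeff[q] == (q + 1) * (q + 5) // 12
```

The same recurrence with the marks of any other root system from Table \ref{fig:table} reproduces the corresponding constituents.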
\begin{table}[htbp] \centering {\footnotesize
\begin{tabular}{c|c|c|c|c|c|c|c}
$\Phi$&$e_1, \dots, e_\ell$&$c_1, \dots, c_\ell$&$h$&$f$&$|W|$&$\widetilde{n}$&$\operatorname{rad}(\widetilde{n})$\\ \hline\hline $A_\ell$&$1,2,\dots,\ell$&$1,1,\dots,1$&$\ell+1$&$\ell+1$&$(\ell+1)!$&$1$&$1$\\ $B_\ell, C_\ell$&$1,3,5,\dots,2\ell-1$&$1,2,2,\dots,2$&$2\ell$&$2$&$2^\ell\cdot \ell!$&$2$&$2$\\ $D_\ell$&$1,3,5,\dots,2\ell-3,\ell-1$&$1,1,1,2,\dots,2$&$2\ell-2$&$4$&$2^{\ell-1}\cdot\ell!$&$2$&$2$\\ $E_6$&$1,4,5,7,8,11$&$1,1,2,2,2,3$&$12$&$3$&$2^7\cdot 3^4\cdot 5$&$6$&$6$\\ $E_7$&$1,5,7,9,11,13,17$&$1,2,2,2,3,3,4$&$18$&$2$&$2^{10}\cdot 3^4\cdot 5\cdot 7$&$12$&$6$\\ $E_8$&$1,7,11,13,17,19,23,29$&$2,2,3,3,4,4,5,6$&$30$&$1$&$2^{14}\cdot 3^5\cdot 5^2\cdot 7$&$60$&$30$\\ $F_4$&$1,5,7,11$&$2,2,3,4$&$12$&$1$&$2^7\cdot 3^2$&$12$&$6$\\ $G_2$&$1,5$&$2,3$&$6$&$1$&$2^2\cdot 3$&$6$&$6$ \end{tabular} } \caption{Table of root systems.} \label{fig:table} \end{table}
The following proposition is a consequence of the Ehrhart--Macdonald reciprocity. \begin{proposition} \label{prop:hshift} \begin{equation} L_{\Phi}(-q)=(-1)^\ell\cdot L_\Phi(q-h). \end{equation} (See \cite[Corollary 3.4]{yos-worp} for a more general formula.) \end{proposition}
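For $\Phi=G_2$ ($\ell=2$, $h=6$) the reciprocity can be checked directly against the explicit quasi-polynomial of Example \ref{ex:g2} below. The sketch below (our helper `L_G2`, evaluating each constituent on its residue class) verifies $L_\Phi(-q)=(-1)^\ell L_\Phi(q-h)$ on a range of integers:

```python
def L_G2(q):
    """Ehrhart quasi-polynomial of the closed G2 alcove; each constituent
    is evaluated on its residue class mod 6, for any integer q."""
    r = q % 6                         # Python's % always returns 0..5
    if r in (1, 5):
        return (q + 1) * (q + 5) // 12
    if r in (2, 4):
        return (q + 2) * (q + 4) // 12
    if r == 3:
        return (q + 3) ** 2 // 12
    return (q * q + 6 * q + 12) // 12  # q = 0 mod 6

# Ehrhart--Macdonald reciprocity: L(-q) = (-1)^2 * L(q - 6) for G2.
for q in range(-60, 61):
    assert L_G2(-q) == L_G2(q - 6)
```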
\subsection{Characteristic quasi-polynomials of Linial arrangements}
\label{subsec:cqpLinial}
Let $S$ be the shift operator that replaces the variable $q$ by $q-1$ (or $t$ by $t-1$). More generally, the polynomial $\sigma(S)=a_0+a_1S+\cdots+a_dS^d$ acts on a function $f(q)$ as \[ \sigma(S)f(q)=a_0f(q)+a_1f(q-1)+\cdots+a_df(q-d). \] The characteristic quasi-polynomial $\chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)$ can be expressed in terms of the Eulerian polynomial $R_\Phi(x)$ and the Ehrhart quasi-polynomial $L_\Phi(q)$ of the fundamental alcove.
\begin{theorem} \label{thm:yos-worp} (\cite{yos-worp}) \begin{equation} \label{eq:yos-worp} \chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_\Phi^m, q)= R_\Phi(S^{m+1})L_\Phi(q). \end{equation} \end{theorem}
\begin{remark} \label{rem:worp} The formula (\ref{eq:yos-worp}) holds for $m=0$ if we consider $\ensuremath{\mathcal{L}}_\Phi^0$ to be the empty arrangement. In that case, we have \begin{equation} \label{eq:L0} q^\ell=R_\Phi(S)L_\Phi(q). \end{equation} This can be considered as a root system generalization of the so-called Worpitzky identity \cite{wor}.
Note that Theorem \ref{thm:lp} by Lam and Postnikov follows from the Worpitzky identity (\ref{eq:L0}) and Theorem \ref{thm:suter} (vi). Indeed, equation (\ref{eq:L0}) is equivalent to \begin{equation} \label{eq:eulerianser} R_\Phi(z)\operatorname{Ehr}_{\Phi}(z)=\sum_{k=0}^\infty k^\ell z^k. \end{equation} Note that the right-hand side depends only on the rank $\ell$. A comparison of (\ref{eq:eulerianser}) with $\Phi=A_\ell$ shows that \[ \frac{R_\Phi(z)}{(1-z^{c_0})(1-z^{c_1})\cdots(1-z^{c_\ell})} = \frac{R_{A_\ell}(z)}{(1-z)^{\ell+1}}. \] This yields Theorem \ref{thm:lp}. \end{remark}
\begin{example} \label{ex:g2} Let $\Phi=G_2$. Since $R_{A_2}(x)=x+x^2$, we have \[ R_{G_2}(x)=(1+x)(1+x+x^2)(x+x^2)=x+3x^2+4x^3+3x^4+x^5. \] The closed fundamental alcove $\overline{\alcov}$ is the convex hull of $0, \frac{\varpi_1^\lor}{2}, \frac{\varpi_1^\lor}{3}$. The period is $\widetilde{n}=6$. The Ehrhart quasi-polynomial is \[
L_{\overline{\alcov}}(q)= \left\{ \begin{array}{ll} \frac{1}{12}(q+1)(q+5), &\mbox{ if $q\equiv 1, 5 \mod 6$},\\ &\\ \frac{1}{12}(q+2)(q+4), &\mbox{ if $q\equiv 2, 4 \mod 6$},\\ &\\ \frac{1}{12}(q+3)^2, &\mbox{ if $q\equiv 3 \mod 6$},\\ &\\ \frac{1}{12}(q^2+6q+12), &\mbox{ if $q\equiv 0 \mod 6$.} \end{array} \right. \] It is known that the characteristic quasi-polynomial of the Weyl arrangement $\mathcal{A}_{\Phi}=\{H_{\alpha, 0}\mid \alpha\in \Phi^+\}$ can be expressed as $\chi_{\operatorname{quasi}}(\mathcal{A}_\Phi, q)= (-1)^\ell\frac{\#W}{f}L_{\overline{\alcov}}(-q)$ \cite{ath-gen, ktt-quasi}. Thus, we have \[ \chi_{\operatorname{quasi}}(\mathcal{A}_{G_2}, q)= \left\{ \begin{array}{ll} (q-1)(q-5), &\mbox{ if $q\equiv 1, 5 \mod 6$},\\ &\\ (q-2)(q-4), &\mbox{ if $q\equiv 2, 4 \mod 6$},\\ &\\ (q-3)^2, &\mbox{ if $q\equiv 3 \mod 6$},\\ &\\ q^2-6q+12, &\mbox{ if $q\equiv 0 \mod 6$.} \end{array} \right. \]
The characteristic quasi-polynomial of the Linial arrangement is \[ \chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{G_2}^1, q)= \left\{ \begin{array}{ll} q^2-6q+11 & q\equiv 1\mod 2, \\ q^2-6q+14 & q\equiv 0\mod 2. \end{array} \right. \] \end{example}
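The product formula for $R_{G_2}(x)$ in Example \ref{ex:g2} is a finite computation; for readers who wish to re-derive it, the following sketch (in Python with \texttt{sympy}, our choice of tool and not part of the paper) expands the product of Eulerian factors.

```python
import sympy as sp

x = sp.symbols('x')
# R_{G_2}(x) = (1 + x)(1 + x + x^2)(x + x^2), as in Example ex:g2.
R_G2 = sp.expand((1 + x) * (1 + x + x**2) * (x + x**2))
print(R_G2)  # x**5 + 3*x**4 + 4*x**3 + 3*x**2 + x
```

The same one-liner, with the factors read off from Theorem \ref{thm:lp} and the table of root systems, reproduces $R_\Phi(x)$ for the other types.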
\section{Limit Polynomials}
\label{sec:limpoly}
\subsection{Normalized limit polynomials}
\label{subsec:normaliz}
Let $f(S)\in\mathbb{C}[S]$ be a polynomial of the shift operator $S$, and $g(t)\in\mathbb{C}[t]$. Assume $\deg g(t)=\ell$. Let us consider the polynomial \begin{equation} g_m(t):=f(S^{m+1})g(t) \end{equation} for $m\geq 0$.
\begin{proposition} \label{prop:normalim} \begin{equation} \lim_{m\to\infty}\frac{g_m(mt)}{m^\ell}=f(S)t^\ell. \end{equation} \end{proposition}
\begin{proof} Write $f(S)=\sum_{k=0}^N a_kS^k$ and $g(t)=\sum_{i=0}^\ell c_it^{\ell-i}$ ($c_0\neq 0)$. Then, we have, \[ g_m(t)=f(S^{m+1})g(t)= \sum_{k=0}^N\sum_{i=0}^\ell a_k c_i\cdot (t-k(m+1))^{\ell-i}. \] Hence, \[ \begin{split} \lim_{m\to\infty} \frac{g_m(mt)}{m^\ell} &= \lim_{m\to\infty} \frac{1}{m^\ell} \sum_{i=0}^\ell\sum_{k=0}^N a_k c_i\cdot (mt-k(m+1))^{\ell-i}. \\ &= \lim_{m\to\infty} \sum_{i=0}^\ell\frac{1}{m^i} \sum_{k=0}^N a_k c_i\cdot (t-k\frac{m+1}{m})^{\ell-i}. \\ &= \sum_{k=0}^N a_k c_0\cdot (t-k)^{\ell}\\ &= f(S)t^\ell. \end{split} \] \end{proof}
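Proposition \ref{prop:normalim} can be illustrated on a small instance; the sketch below (Python with \texttt{sympy}, purely an illustration) takes $f(S)=1+S$ and $g(t)=t^2+t$, so $\ell=2$, and checks that $g_m(mt)/m^2 \to t^2+(t-1)^2 = f(S)t^2$.

```python
import sympy as sp

t, m = sp.symbols('t m', positive=True)

# Toy instance of the proposition: f(S) = 1 + S, g(t) = t^2 + t (so ell = 2).
g = lambda u: u**2 + u
# g_m(mt) = g(mt) + g(mt - (m+1)), since f(S^{m+1}) shifts the argument by m+1.
g_m_at_mt = g(m * t) + g(m * t - (m + 1))

lim = sp.limit(g_m_at_mt / m**2, m, sp.oo)
print(sp.expand(lim))  # 2*t**2 - 2*t + 1, i.e. t^2 + (t-1)^2 = f(S) t^2
```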
\subsection{Truncated Eulerian polynomials}
\label{subsec:trunc}
Suppose $R_\Phi(x)=\sum_{i=1}^{h-1}a_ix^i$. Define the truncated Eulerian polynomial $R_\Phi^{1/2}(x)$ by \begin{equation} R_\Phi^{1/2}(x)= \left\{ \begin{array}{ll} \sum\limits_{1\leq i<\frac{h}{2}}a_ix^i, & \mbox{ if $h$ is odd},\\ &\\ \sum\limits_{1\leq i<\frac{h}{2}}a_ix^i+\frac{a_{h/2}}{2}x^{h/2}, & \mbox{ if $h$ is even}. \end{array} \right. \end{equation}
\begin{example} Let $\Phi=G_2$. Then, $R_\Phi^{1/2}(x)=x+3x^2+2x^3$. \end{example}
The following is straightforward from Proposition \ref{prop:properties} (2).
\begin{proposition} \label{prop:dectrunc} $R_\Phi(x)=R_\Phi^{1/2}(x)+x^h\cdot R_\Phi^{1/2}(x^{-1})$. \end{proposition} Using the truncated Eulerian polynomial $R_\Phi^{1/2}(x)$, we define the half characteristic quasi-polynomial $\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_\Phi^m, q)$ as follows. \begin{equation} \chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_\Phi^m, q):=R_\Phi^{1/2}(S^{m+1})L_\Phi(q). \end{equation} Note that $\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_\Phi^m, q)$ is a quasi-polynomial with period $\widetilde{n}$; however, it does not have the $GCD$-property.
\begin{example} For $\Phi=G_2$, we have \[ R_{G_2}^{1/2}(S)L_{G_2}(q)= \left\{ \begin{array}{ll} \frac{6q^2+10q}{12}&q\equiv 0\mod 3, \\ \frac{6q^2+10q-4}{12}&q\equiv 1\mod 3, \\ \frac{6q^2+10q+4}{12}&q\equiv 2\mod 3, \end{array} \right. \] and \[ \chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{G_2}^1, q)= R_{G_2}^{1/2}(S^2)L_{G_2}(q)= \left\{ \begin{array}{ll} \frac{3q^2-8q+12}{6} & q\equiv 0\mod 6, \\ \frac{3q^2-8q+5}{6} & q\equiv 1\mod 6, \\ \frac{3q^2-8q+10}{6} & q\equiv 2\mod 6, \\ \frac{3q^2-8q+3}{6} & q\equiv 3\mod 6, \\ \frac{3q^2-8q+14}{6} & q\equiv 4\mod 6, \\ \frac{3q^2-8q+1}{6} & q\equiv 5\mod 6. \end{array} \right. \] \end{example}
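The six constituents displayed in this example can be cross-checked by direct evaluation. Below is a sketch in Python (exact rational arithmetic; the helper names are ours) that evaluates $R_{G_2}^{1/2}(S^2)L_{G_2}(q)=L_{G_2}(q-2)+3L_{G_2}(q-4)+2L_{G_2}(q-6)$ and compares it with the displayed constituents.

```python
from fractions import Fraction

def L_G2(q):
    """Ehrhart quasi-polynomial of the closed fundamental alcove of G_2 (period 6)."""
    r = q % 6
    if r in (1, 5):
        return Fraction((q + 1) * (q + 5), 12)
    if r in (2, 4):
        return Fraction((q + 2) * (q + 4), 12)
    if r == 3:
        return Fraction((q + 3) ** 2, 12)
    return Fraction(q * q + 6 * q + 12, 12)

def chi_half(q):
    """R^{1/2}_{G_2}(S^2) L_{G_2}(q), with R^{1/2}_{G_2}(x) = x + 3x^2 + 2x^3."""
    return L_G2(q - 2) + 3 * L_G2(q - 4) + 2 * L_G2(q - 6)

# Constant terms of the displayed constituents (3q^2 - 8q + const)/6, by q mod 6.
const = {0: 12, 1: 5, 2: 10, 3: 3, 4: 14, 5: 1}
for q in range(1, 61):
    assert chi_half(q) == Fraction(3 * q * q - 8 * q + const[q % 6], 6)
```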
\begin{proposition} \label{prop:decquasi} The half characteristic quasi-polynomial $\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)$ satisfies the following. \begin{equation} \chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)=\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)+ (-1)^\ell\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, mh-q). \end{equation} \end{proposition} \begin{proof}
Write $R_{\Phi}^{1/2}(x)=\sum_{i=1}^{\lfloor h/2\rfloor}a_i'x^i$. From Theorem \ref{thm:yos-worp} and Proposition \ref{prop:dectrunc}, it follows that \begin{equation} \chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)= (R_{\Phi}^{1/2}(S^{m+1})L_\Phi)(q)+ (S^{h(m+1)}R_{\Phi}^{1/2}(S^{-m-1})L_\Phi)(q). \end{equation} The first term is equal to $\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)$. We shall prove that the second term is equal to $(-1)^\ell\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, mh-q)$. Indeed, using Proposition \ref{prop:hshift}, we have \begin{equation} \begin{split} (S^{h(m+1)}R_{\Phi}^{1/2}(S^{-m-1})L_\Phi)(q) &= \sum_{i=1}^{\lfloor h/2\rfloor}a_i'S^{h(m+1)-(m+1)i}L_\Phi(q)\\ &= \sum_{i=1}^{\lfloor h/2\rfloor}a_i'L_\Phi(q-h(m+1)+(m+1)i)\\ &= (-1)^\ell \sum_{i=1}^{\lfloor h/2\rfloor}a_i' L_\Phi(-q+hm-(m+1)i)\\ &= (-1)^\ell \sum_{i=1}^{\lfloor h/2\rfloor}a_i' S^{(m+1)i}L_\Phi(-q+hm)\\ &= (-1)^\ell\chi_{\operatorname{quasi}}^{1/2}(\ensuremath{\mathcal{L}}_{\Phi}^m, mh-q). \end{split} \end{equation} \end{proof} The next corollary follows immediately.
\begin{corollary} $\chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{\Phi}^m, q)=(-1)^\ell \chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_{\Phi}^m, mh-q)$. \end{corollary}
Consider $F_\Phi(t):= R_{\Phi}^{1/2}(S)t^\ell=\sum_{i=1}^{\lfloor h/2\rfloor}a_i'(t-i)^\ell$ as a polynomial in $t$. The distribution of the roots of $F_\Phi(t)=0$ will play a crucial role.
\begin{proposition} \label{prop:Remax} Let $\Phi\in\{E_6, E_7, E_8, F_4, G_2\}$ be an exceptional root system. Suppose $F_\Phi(\alpha)=0, \alpha\in\mathbb{C}$. Then, $\operatorname{Re}\alpha<\frac{h}{2}$. \end{proposition} \begin{proof} This proposition can be verified computationally. We describe the method for the case $\Phi=E_6$. The other cases are similar. First, using Table \ref{fig:table}, Theorem \ref{thm:lp}, and $R_{A_6}(x)=x + 57 x^2 + 302 x^3 + 302 x^4 + 57 x^5 + x^6$, we have \[ \begin{split} R_{E_6}(x) =& (1 + x)^3 (1 + x + x^2) (x + 57 x^2 + 302 x^3 + 302 x^4 + 57 x^5 + x^6) \\ =& x + 61 x^2 + 537 x^3 + 1916 x^4 + 3782 x^5 + 4686 x^6 + 3782 x^7 \\ & + 1916 x^8 + 537 x^9 + 61 x^{10} + x^{11}. \end{split} \] Therefore, we have \[ R_{E_6}^{1/2}(x)=x + 61 x^2 + 537 x^3 + 1916 x^4 + 3782 x^5 + 2343 x^6 \] and \[ R_{E_6}^{1/2}(S)t^6=(t - 1)^6 + 61 (t - 2)^6 + 537 (t - 3)^6 + 1916 (t - 4)^6 + 3782 (t - 5)^6 + 2343 (t - 6)^6. \] The roots of $R_{E_6}^{1/2}(S)t^6=0$ (approximation by Mathematica) are $ t=4.55334\pm 0.465487\sqrt{-1}, 4.78675\pm 1.55735\sqrt{-1}$, and $5.37033\pm 3.11072\sqrt{-1}$. All roots have real parts that are less than $\frac{h}{2}=6$.
The maximum real parts of the roots are presented in Table \ref{fig:maxreal}. \begin{table}[htbp] \centering
\begin{tabular}{c|c|c} $\Phi$&$\max\{\operatorname{Re}\alpha\}$&$h/2$\\ \hline\hline $E_6$&$5.3703$&$6$\\ $E_7$&$8.4367$&$9$\\ $E_8$&$14.6604$&$15$\\ $F_4$&$4.8967$&$6$\\ $G_2$&$2.166$&$3$ \end{tabular} \caption{The maximal real parts of roots.} \label{fig:maxreal} \end{table} \end{proof}
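The Mathematica computation in the proof of Proposition \ref{prop:Remax} can be reproduced with any numerical package. Here is a sketch using \texttt{numpy} (our choice of tool) for the case $\Phi=E_6$, confirming that every root of $R_{E_6}^{1/2}(S)t^6=0$ has real part strictly below $h/2=6$.

```python
import numpy as np

# Coefficients a'_i of R^{1/2}_{E_6}(x) = x + 61x^2 + 537x^3 + 1916x^4 + 3782x^5 + 2343x^6.
a_prime = {1: 1, 2: 61, 3: 537, 4: 1916, 5: 3782, 6: 2343}

# Build sum_i a'_i (t - i)^6 as a coefficient vector (highest degree first).
poly = np.zeros(7)
for i, ai in a_prime.items():
    poly += ai * np.poly([i] * 6)  # np.poly: monic polynomial with root i of multiplicity 6

roots = np.roots(poly)
print(sorted(roots.real))  # maximal real part is about 5.37, strictly less than h/2 = 6
```

Replacing the coefficient dictionary by the truncated Eulerian coefficients of $E_7$, $E_8$, $F_4$, or $G_2$ yields the remaining rows of Table \ref{fig:maxreal}.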
\section{A Toy Case} \label{sec:toy}
Let $\Phi\in\{E_6, E_7, E_8, F_4, G_2\}$. Denote its exponents by $e_1, \dots, e_\ell$ and the Coxeter number by $h$. Let $g(t)\in\mathbb{R}[t]$ be a real polynomial of $\deg g=\ell$ that satisfies $g(t-h)=(-1)^\ell g(-t)$. (Note that such a polynomial exists, e. g., $g(t)=\prod_{i=1}^\ell(t+e_i)$.)
\begin{theorem} \label{thm:toyRH} With the above notation, for sufficiently large $m\gg 0$, every root $\alpha\in\mathbb{C}$ of the equation $(R_\Phi(S^{m+1})g)(t)=0$ satisfies $\operatorname{Re} \alpha=\frac{mh}{2}$. \end{theorem}
\begin{proof} Set $g_m(t):=(R_{\Phi}^{1/2}(S^{m+1})g)(t)$. Then, we have \[ (R_\Phi(S^{m+1})g)(t)=g_m(t)+(-1)^\ell g_m(mh-t). \] (The proof is similar to that of Proposition \ref{prop:decquasi}.) By Proposition \ref{prop:normalim}, \[ \lim_{m\rightarrow\infty} \frac{g_m(mt)}{m^\ell}= R_{\Phi}^{1/2}(S)t^\ell=:F_\Phi(t). \] From Proposition \ref{prop:Remax}, it follows that the real parts of the roots of the equation $F_\Phi(t)=0$ are less than $\frac{h}{2}$. By the continuity of the roots of polynomials, for sufficiently large $m\gg 0$, the real parts of the roots of $g_m(mt)=0$ are also less than $\frac{h}{2}$, which is equivalent to saying that the real parts of the roots of $g_m(t)=0$ are less than $\frac{mh}{2}$. Then Lemma \ref{lem:elem} completes the proof. \end{proof}
\section{Main Results}
\label{sec:main}
In this section, by generalizing the argument in \S \ref{sec:toy}, we will prove the main result. The main difficulty is related to the fact that $L_\Phi(q)$ is a quasi-polynomial. We will use the idea of averaging constituents to resolve this problem. We will work in the generalized setting described in \S \ref{subsec:settings} for the sake of notational simplicity.
\subsection{Settings} \label{subsec:settings}
Let $\widetilde{n}, h>0$ be positive integers and $L(q)$ be a quasi-polynomial with period $\widetilde{n}$.
\begin{assumption} \label{assump:1} $L(q)$ has a constant leading term of degree $\ell>0$. In other words, $L(q)$ has an expression of the form \[ L(q)=c_0q^\ell+c_1(q)q^{\ell-1}+\cdots+c_\ell(q), \] where $c_i:\mathbb{Z}\longrightarrow\mathbb{Q}$ ($i=1, \dots, \ell$) is a periodic function and $c_0\neq 0$. \end{assumption}
\begin{assumption} \label{assump:2} $L(q)$ satisfies the following. \begin{equation} \label{eq:Ldual} L(-q)=(-1)^\ell L(q-h). \end{equation} \end{assumption} Let $R(x)\in\mathbb{Q}[x]$ be a polynomial of $\deg R(x)=h-1$.
\begin{assumption} \label{assump:3} $R(x)$ satisfies the following. \begin{equation} \label{eq:Rdual} x^hR(x^{-1})=R(x). \end{equation} \end{assumption} Write $R(x)=a_1x+a_2x^2+\cdots+a_{h-1}x^{h-1}$. Define the truncation $R'(x)$ of $R(x)$ by \[ R'(x)= \left\{ \begin{array}{ll} \sum\limits_{1\leq i<\frac{h}{2}}a_ix^i, & \mbox{ if $h$ is odd},\\ &\\ \sum\limits_{1\leq i<\frac{h}{2}}a_ix^i+\frac{a_{h/2}}{2}x^{h/2}, & \mbox{ if $h$ is even}. \end{array} \right. \] We also write $R'(x)=\sum_{i=1}^{\lfloor h/2\rfloor}a_i'x^i$. It is easy to see that $R(x)$ satisfies \[ R(x)=R'(x)+x^hR'(x^{-1}). \] Next, we make an assumption on the location of the roots of the polynomial \begin{equation} R'(S)t^\ell=\sum_{i=1}^{\lfloor h/2\rfloor}a_i'(t-i)^\ell. \end{equation}
\begin{assumption} \label{assump:4} Every root $\alpha\in\mathbb{C}$ of $R'(S)t^\ell=0$ satisfies \begin{equation} \operatorname{Re}\alpha<\frac{h}{2}. \end{equation} \end{assumption}
The following is our main example. \begin{example} \label{ex:mainex}
Let $\Phi\in\{E_6, E_7, E_8, F_4, G_2\}$. Then, the Ehrhart quasi-polynomial $L_\Phi(q)$, period $\widetilde{n}$, Coxeter number $h$, and Eulerian polynomial $R_\Phi(x)$ satisfy Assumptions \ref{assump:1}, \ref{assump:2}, \ref{assump:3}, and \ref{assump:4}. \end{example}
\begin{remark} Although Example \ref{ex:mainex} is the main example, we can construct many other examples that satisfy the above assumptions. For instance, for $\widetilde{n}=1$, an arbitrary $h\geq 3$, and $\ell>0$, the choices $L(q)=(q+\frac{h}{2})^\ell$ and $R(x)=x+x^{h-1}$ satisfy the above assumptions. \end{remark}
\subsection{Asymptotic behavior of roots}
\label{subsec:asympt}
For $m>0$, define the quasi-polynomials $L^{(m)}(q)$ and $L'^{(m)}(q)$ of period $\widetilde{n}$ by \[ L^{(m)}(q)=(R(S^{m+1})L)(q) \] and \[ L'^{(m)}(q)=(R'(S^{m+1})L)(q). \] By Assumption \ref{assump:2} (and Assumption \ref{assump:3}), \begin{equation} \label{eq:decLprim} L^{(m)}(q)=L'^{(m)}(q)+(-1)^\ell L'^{(m)}(mh-q). \end{equation}
Let us denote the constituents by $L_d^{(m)}(t)\in\mathbb{Q}[t]$ for each residue class $d\mod\widetilde{n}$ (or $0\leq d<\widetilde{n}$), namely, \[ L^{(m)}(q)= L_d^{(m)}(q), \mbox{ when }q\equiv d\mod\widetilde{n}. \] Note that the constituents have the following expression: \[ L_d^{(m)}(t)= \sum_{i=1}^{h-1} a_i\sum_{j=0}^\ell c_j(d-(m+1)i)\cdot (t-(m+1)i)^{\ell-j}. \] The relation (\ref{eq:decLprim}) can also be written as \begin{equation} \label{eq:decconst} L_d^{(m)}(t)=L_d'^{(m)}(t)+(-1)^\ell\cdot L_{mh-d}'^{(m)}(mh-t) \end{equation} at the level of the constituents.
\begin{proposition} \label{prop:Lprim} \[ \lim_{m\rightarrow\infty}\frac{L_d'^{(m)}(mt)}{m^\ell}=c_0\cdot R'(S)t^\ell. \] In particular, the limit does not depend on the residue $d$. \end{proposition}
\begin{proof} Recall $L_d'^{(m)}(t)= \sum_{i=1}^{\lfloor h/2\rfloor}a_i' \sum_{j=0}^\ell c_j(d-(m+1)i)(t-(m+1)i)^{\ell-j}$. Hence, \[ \frac{L_d'^{(m)}(mt)}{m^\ell}= \sum_{i=1}^{\lfloor h/2\rfloor}a_i' \sum_{j=0}^\ell \frac{c_j(d-(m+1)i)}{m^j} \left(t-\frac{m+1}{m}i\right)^{\ell-j}. \] When $j>0$, since $c_j$ is a periodic function, we have $\lim_{m\rightarrow\infty}\frac{c_j(d-(m+1)i)}{m^j}=0$. By the assumption, $c_0(d-(m+1)i)= c_0$ is a nonzero constant. We have \[ \begin{split} \lim_{m\rightarrow\infty}\frac{L_d'^{(m)}(mt)}{m^\ell} &= c_0\cdot\sum_{i=1}^{\lfloor h/2\rfloor}a'_i(t-i)^\ell\\ &= c_0\cdot R'(S)t^\ell. \end{split} \] \end{proof}
\begin{definition} \label{def:infsup} \[ \begin{split} \overline{r}_d^{(m)}&:= \max\{\operatorname{Re}\alpha\mid \alpha\in\mathbb{C}, L_d^{(m)}(\alpha)=0\}, \\ \underline{r}_d^{(m)}&:= \min\{\operatorname{Re}\alpha\mid \alpha\in\mathbb{C}, L_d^{(m)}(\alpha)=0\}. \end{split} \] \end{definition} If $L(q)$ and $R(x)$ satisfy Assumptions \ref{assump:1}--\ref{assump:4}, then $\overline{r}_d^{(m)}$ and $\underline{r}_d^{(m)}$ approach $\frac{mh}{2}$ as $m\rightarrow\infty$. More precisely, we have the following.
\begin{theorem} \label{thm:asymptr} For any $0\leq d<\widetilde{n}$, \begin{equation} \label{eq:limithhalf} \lim_{m\rightarrow\infty} \frac{\overline{r}_d^{(m)}}{m}= \lim_{m\rightarrow\infty} \frac{\underline{r}_d^{(m)}}{m}=\frac{h}{2}. \end{equation} \end{theorem}
\begin{proof} Let $F(t):=c_0\cdot R'(S)t^\ell$. Then, by (\ref{eq:decconst}) and Proposition \ref{prop:Lprim}, we have \begin{equation} \lim_{m\rightarrow\infty} \frac{L_d^{(m)}(mt)}{m^\ell}=F(t)+(-1)^\ell\cdot F(h-t). \end{equation} Choose a root $\alpha_m\in\mathbb{C}$ of $L_d^{(m)}(t)=0$. Then, obviously, $\frac{\alpha_m}{m}$ satisfies the equation $L_d^{(m)}(mt)=0$. Hence, $\frac{\alpha_m}{m}$ approaches the (set of) roots of $F(t)+(-1)^\ell F(h-t)=0$. Equation (\ref{eq:limithhalf}) follows from Assumption \ref{assump:4} and Lemma \ref{lem:elem}. \end{proof}
\begin{corollary} \label{cor:limRH} Let $\Phi\in\{E_6, E_7, E_8, F_4\}$. Fix $0\leq d<\widetilde{n}$. Let $\alpha_m\in\mathbb{C}$ be a root of the constituent $\chi_{\operatorname{quasi}, d}(\ensuremath{\mathcal{L}}_\Phi^m, t)$. Then, \[ \lim_{m\rightarrow\infty} \frac{\operatorname{Re}\alpha_m}{m}=\frac{h}{2}. \] \end{corollary}
\subsection{Exact arrangement of roots}
\label{subsec:exact}
In the previous subsection, we proved that the real part of a root $\alpha$ of a constituent $L_d^{(m)}(t)=0$ is asymptotically close to $\frac{mh}{2}$. Here, we prove a stronger result for some special constituents.
\begin{definition} \label{def:adm} (Using the notation in \S \ref{subsec:settings}), the residue $d\mod \widetilde{n}$ ($0\leq d<\widetilde{n}$) is called an \emph{admissible residue} if the constituents of $L^{(m)}(q)$ satisfy \begin{equation} \label{eq:adm} L_d^{(m)}(t)= L_{d+kh}^{(m)}(t)= L_{-d+kh}^{(m)}(t) \end{equation} for all $k\in\mathbb{Z}$ and $m>0$. \end{definition}
\begin{example} Consider the situation in Example \ref{ex:mainex}. Then, $L^{(m)}(q)=(R(S^{m+1})L)(q)$ is equal to $\chi_{\operatorname{quasi}}(\ensuremath{\mathcal{L}}_\Phi^m, q)$, which has the $GCD$-property. Hence, $d$ is an admissible residue if and only if \[ \gcd(d, \widetilde{n})= \gcd(d+kh, \widetilde{n}) \]
for all $k\in\mathbb{Z}$. As $\operatorname{rad}(\widetilde{n})|h$ (Theorem \ref{thm:suter} (v)), $d=1$ is an admissible residue. Other admissible residues (divisors of $\widetilde{n}$) for $\Phi\in\{E_6, E_7, E_8, F_4, G_2\}$ are listed in Table \ref{tab:ar}. \begin{table}[htbp] \centering
\begin{tabular}{c|c|c|c|c|c} $\Phi$&$\widetilde{n}$&$\operatorname{rad}(\widetilde{n})$&$h$&\mbox{admissible divisor of $\widetilde{n}$}&$m_0$\\ \hline\hline $E_6$&$6$&$6$&$12$&$1,2,3,6$&$1$\\ $E_7$&$12$&$6$&$18$&$1,3$&$2$\\ $E_8$&$60$&$30$&$30$&$1,3,5,15$&$2$\\ $F_4$&$12$&$6$&$12$&$1,2,3,4,6,12$&$1$\\ $G_2$&$6$&$6$&$6$&$1,2,3,6$&$1$ \end{tabular} \caption{Admissible divisors.} \label{tab:ar} \end{table} \end{example}
The following is the main result.
\begin{theorem} \label{thm:maingeneral} (Using the same notation as in \S \ref{subsec:settings}.) Suppose $d$ ($0\leq d<\widetilde{n}$) is an admissible residue. Let $\alpha_m$ be a root of $L_d^{(m)}(t)=0$. Then, for sufficiently large $m\gg 0$, $\operatorname{Re}\alpha_m=\frac{mh}{2}$ holds. \end{theorem}
\begin{proof} Let $m_0:=\frac{\widetilde{n}}{\gcd(h, \widetilde{n})}$. Then, $L_d^{(m)}(t)=L_{d+kh}^{(m)}(t)=L_{-d+kh}^{(m)}(t)$ for $k=0, 1, \dots, m_0-1$. Hence, using (\ref{eq:decconst}), \begin{equation} \label{eq:2m01} \begin{split} L_d^{(m)}(t) &= \frac{1}{m_0}\cdot\sum_{k=0}^{m_0-1}L_{d+kh}^{(m)}(t)\\ &= \frac{1}{m_0}\cdot\sum_{k=0}^{m_0-1}L_{d+kh}'^{(m)}(t) +\frac{(-1)^\ell}{m_0}\cdot\sum_{k=0}^{m_0-1}L_{-d+(m-k)h}'^{(m)}(mh-t)\\ &= \frac{1}{m_0}\cdot\sum_{k=0}^{m_0-1}L_{d+kh}'^{(m)}(t) +\frac{(-1)^\ell}{m_0}\cdot\sum_{k=0}^{m_0-1}L_{-d+kh}'^{(m)}(mh-t). \end{split} \end{equation} As $L_d^{(m)}(t)=L_{-d}^{(m)}(t)$, we also have \begin{equation} \label{eq:2m02} L_d^{(m)}(t) = \frac{1}{m_0}\cdot\sum_{k=0}^{m_0-1}L_{-d+kh}'^{(m)}(t) +\frac{(-1)^\ell}{m_0}\cdot\sum_{k=0}^{m_0-1}L_{d+kh}'^{(m)}(mh-t). \end{equation} Define the polynomial $F_d^{(m)}(t)$ by \[ F_d^{(m)}(t):= \frac{1}{2m_0}\cdot \sum_{k=0}^{m_0-1} \left\{ L_{d+kh}'^{(m)}(t)+L_{-d+kh}'^{(m)}(t) \right\}. \] Then, combining (\ref{eq:2m01}) and (\ref{eq:2m02}), $L_d^{(m)}(t)$ can be expressed as \[ L_d^{(m)}(t)=F_d^{(m)}(t)+(-1)^\ell\cdot F_d^{(m)}(mh-t). \] From Proposition \ref{prop:Lprim}, it follows that \begin{equation} \lim_{m\rightarrow\infty} \frac{F_d^{(m)}(mt)}{m^\ell}=c_0\cdot R'(S)t^\ell. \end{equation} Hence, for sufficiently large $m\gg 0$, every root $\alpha\in\mathbb{C}$ of $F_d^{(m)}(t)=0$ satisfies \[ \operatorname{Re}\alpha<\frac{mh}{2}. \] Applying Lemma \ref{lem:elem}, every root of $L_d^{(m)}(t)=0$ has the real part $\frac{mh}{2}$. \end{proof}
\begin{corollary} \label{cor:main} Let $\Phi\in\{E_6, E_7, E_8, F_4, G_2\}$. For sufficiently large $m\gg 0$, every root $\alpha\in\mathbb{C}$ of the characteristic polynomial $\chi(\ensuremath{\mathcal{L}}_\Phi^m, t)$ of the Linial arrangement $\ensuremath{\mathcal{L}}_\Phi^m$ satisfies $\operatorname{Re}\alpha=\frac{mh}{2}$. \end{corollary}
\begin{proof} Recall that the characteristic polynomial is the constituent of the characteristic quasi-polynomial corresponding to $d=1$. Since $d=1$ is an admissible residue, we can apply Theorem \ref{thm:maingeneral}. \end{proof}
\noindent {\bf Acknowledgements.} The author was partially supported by JSPS KAKENHI Grant Number 25400060, 15KK0144, and 16K13741.
\end{document}
\begin{document}
\title{GENERALIZED WATSON TRANSFORMS II: THE COMPLEMENTARY SERIES OF $\boldsymbol{GL(2,\mathbb{R})}$}
\author{Qifu Zheng} \address{Department of Mathematics and Statistics, The College of New Jersey, Ewing, NJ 08618} \email{zheng@tcnj.edu} \subjclass[2010]{Primary 22E30, 43A32, 44A15; Secondary 43A65, 42A38} \keywords{Complementary series, unitary representations, Watson transform.}
\begin{abstract} We apply the theory of generalized Watson transforms developed in \cite{zheng00} to construct the complementary series of $GL(2,\mathbb{R})$. \end{abstract}
\maketitle
\parindent=25pt \pagestyle{myheadings}
\markboth{
\sc{Q. Zheng}
}{
\sc{Generalized Watson Transforms, II: The Complementary Series of }$GL(2,\mathbb{R})$
}
\section{Introduction} \label{Intro}
This article is the second in a series of articles in which we develop the theory of generalized Watson transforms and make applications of those results to the representation theory of the general linear groups over $\mathbb{R}$. It is well known \cite{bargmann,knapp} that the irreducible representations of $GL(2,\mathbb{R})$, the general linear group of $2 \times 2$ real matrices, are classified according to three distinct constructions: (1) The principal series are usually constructed by unitary induction from its parabolic subgroup. (2) The complementary series are constructed by a form of analytic continuation from the principal series. (3) The (relative) discrete series are usually constructed in spaces of holomorphic functions on the unit disk or upper half complex plane.
On the other hand, by applying the results developed in \cite{zheng00}, we can obtain all three series using the method of generalized Watson transforms. That this method is able to achieve these results is due to the fact that the group $G = GL(2,\mathbb{R})$ is generated by its upper triangular (Borel) subgroup $Q$ and the Weyl reflection, \begin{equation} \label{pmatrix} p = \begin{bmatrix}
0 & 1 \\
-1 & 0 \\
\end{bmatrix}
\end{equation} Consequently, any irreducible unitary representation $\pi$ of $G$ is determined by its restriction to $Q$ and by the operator assigned to $p$, which in fact corresponds to a generalized Watson transform. In this paper, we will illuminate this approach by applying the method of generalized Watson transforms to construct the complementary series of $G$, and in a subsequent article \cite{zheng_in_prep}, we will use the generalized Watson transform method to construct unitary representations of higher rank groups.
This paper is organized as follows: In Section \ref{Review}, we review briefly some concepts and
theorems related to generalized Watson transforms from \cite{zheng00}. In Section \ref{Subgroup}, we will describe the subgroups of $G$ and its non-unitary representations realized on the Hilbert space $L^2(\mathbb{R}, (1+ x^2)^s dx)$. Then, in Section \ref{pitt} we will use Pitt's theorem \cite{pitt} to realize the representations on the space $H_s = L^2(\mathbb{R}, |x|^{-s}dx)$, where $0 < s < 1$. Finally, in Section \ref{Proof} we will show that the representations realized on $H_s$ in Section \ref{pitt} are unitary.
\section{Some remarks on the generalized Watson transforms}
\label{Review}
Let $G_0$ be a topological group, $R$ and $L$ be unitary representations of $G_0$ on a Hilbert space $H$, and let $I$ denote the identity operator on $H$. A unitary operator $W$ that intertwines $R$ and $L$ is called a {\it generalized Watson transform with respect to $R$ and $L$} if $W^2 = I$. The operator $W$ is called a {\it generalized skew Watson transform with respect to $R$ and $L$} if $W^2 = -I$. The results in \cite{zheng00} provide several theorems on the construction of generalized Watson transforms. Here, we list one corollary that is needed in the proof of the unitarity of the complementary series.
\begin{prop} {\rm(Zheng \cite{zheng00})}
\label{RandL}
Suppose that $G_0$ is Abelian, let $R$ be a unitary representation of $G_0$ on a Hilbert space $H$, and set $L( g ) = R(g^{-1})$ for all $g \in G_0$. For $\phi\in H$, suppose that $\phi^{\circ}=\left\{R(g)\phi : g\in G_0\right\}$ spans a dense subspace of $H$. Then there exists a generalized Watson transform $W$ on $H$ with respect to $R$ and $L$ such that $W\phi = \pm\phi$ if and only if $\langle\phi|R(g)\phi\rangle $ is real for all $g \in G_0$.
\end{prop}
\section {Subgroups of \texorpdfstring{$G$}{G} and its non-unitary representations}
\label{Subgroup}
Denote the elements of $G$ by $$
g=\begin{bmatrix}
a & c \\
b & d \\
\end{bmatrix}, $$
where $a, b, c, d \in\mathbb{R}$ and $ad - bc \ne 0$. Let $Q$ be the full upper-triangular subgroup of matrices \begin{equation}
\label{qmatrix}
q := q(a,c,d)=\begin{bmatrix}
a & c \\
0 & d \\
\end{bmatrix}, \end{equation}
and $Q^t$ be the analogous full lower-triangular subgroup. Then $Q$ is the semi-direct product of the normal Abelian subgroup $N$ of unipotent matrices \begin{equation}
\label{nmatrix}
n := n( c) = \begin{bmatrix}
1 & c \\
0 & 1 \\
\end{bmatrix}, \end{equation} $c \in \mathbb{R}$, and the diagonal subgroup $D$ of matrices \begin{equation}
\label{gammamatrix}
\gamma(a, d)=
\begin{bmatrix}
a & 0 \\
0 & d \\
\end{bmatrix}, \end{equation}
with $a \ne 0, d\ne 0$. The matrix $p$ in (\ref{pmatrix}),
called the Weyl element, plays a special role in the representation theory of $G$. Since $p^2 = -I$, the generalized Watson transforms are operators associated by representations to $p$. This explains the importance of generalized Watson transforms in the representation theory of $G$, and more generally, of reductive Lie groups.
The following result is well known \cite{knapp}.
\begin{prop}
\label{NandD}
The subgroups $N$ and $D$ and the Weyl element $p$ generate the group $G$.
\end{prop}
The proof of the above result rests on two identities. First, when $b = 0$, $$g=\begin{bmatrix}
a & c \\
0 & d \\
\end{bmatrix}=q(a,c,d)=\gamma(a,d) \, n(\frac{c}{a});$$
and when $b\ne 0$,
$$g=\begin{bmatrix}
a & c \\
b & d \\
\end{bmatrix}=n(a/b) \, p \, \gamma(-b,-(\det{g})/b) \, n(d/b).
$$ Therefore, any representation of the group $G$ is completely determined by its restrictions to $N,D$ and $p$.
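The two matrix identities above are easy to verify numerically; the following sketch (in Python with \texttt{numpy}, our notation mirroring (\ref{pmatrix})--(\ref{gammamatrix})) checks the decomposition for a generic matrix with $b\neq 0$.

```python
import numpy as np

def n(c):
    return np.array([[1.0, c], [0.0, 1.0]])

def gamma(a, d):
    return np.array([[a, 0.0], [0.0, d]])

p = np.array([[0.0, 1.0], [-1.0, 0.0]])

rng = np.random.default_rng(0)
a, b, c, d = rng.normal(size=4)  # generic entries; b != 0 almost surely
g = np.array([[a, c], [b, d]])
det_g = a * d - b * c

# g = n(a/b) p gamma(-b, -det(g)/b) n(d/b) whenever b != 0.
decomposed = n(a / b) @ p @ gamma(-b, -det_g / b) @ n(d / b)
assert np.allclose(decomposed, g)
```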
For $d \in \mathbb{R}$, let $\sigma(d)$ denote the sign of $d$; also, let $\epsilon$ equal 0 or 1. Starting from the one-dimensional character of $Q^t$,
$$
\tau_s\left(\begin{bmatrix}
a & 0 \\
b & d \\
\end{bmatrix}\right)=\sigma(d)^{\epsilon}|a|^{-(\Re(s)+1)/2}|d|^{(\Re(s)+1)/2}$$ we can obtain the bounded representation $\pi_s$ of $G$ on the weighted norm space $V_s= L^2 (\mathbb{R},(1+x^2)^{\Re(s)}dx)$, defined by
\begin{multline}
\label{pis} (\pi_s(g)f)(x) = (\sigma(\det g))^{\epsilon}(\sigma(\det(a +xb)))^{\epsilon} \\
\times |\det g|^{(\Re(s)+1)/ 2} |\det(a + xb)|^{ -( \Re(s)+1 )}f\left(\frac{c+xd}{a+xb}\right).
\end{multline}
The representation $\pi_s$ on $V_s$ is unitary if and only if $s$ is pure imaginary, i.e., $\Re(s) = 0$. Since in this article we concentrate on the complementary series, we will only consider the case in which $\epsilon = 0$ and $0 < s < 1$, and in that case, (\ref{pis}) becomes \begin{equation} \label{special}
(\pi_s(g)f)(x) = |\det g|^{(s+1)/ 2} |\det(a + xb)|^{ -(s+1)}f\left(\frac{c+xd}{a+xb}\right). \end{equation}
\section{New realizations of non-unitary representations of \texorpdfstring{$G$}{G}}
\label{pitt}
In this section, we will make use of a well-known theorem of Pitt \cite{pitt} to realize the non-unitary representation $\pi_s$ on the weighted norm space $H_s = L^2(\mathbb{R}, |x|^{-s}dx)$ when $0 < s < 1$. We first state a special case of Pitt's theorem and refer readers to Stein \cite{stein} for more general versions.
\begin{thm}
\label{pitthm}
{\rm (Pitt \cite{pitt})} Let $s\in (0,1)$ and let $\hat{f}$ denote the Fourier transform of a function $f:\mathbb{R}\to\mathbb{C}$. Then there exists a constant $C$ such that
\begin{equation}
\label{inequ}
\int_{-\infty}^{\infty}|\hat{f}(x)|^2|x|^{-s}dx \le C\int_{-\infty}^{\infty}|f(x)|^2|x|^{s}dx
\end{equation}
for every function $f$ for which the integral on the right-hand side of (\ref{inequ}) is convergent.
\end{thm}
By applying Theorem \ref{pitthm}, we can define in terms of $\pi_s$ a representation $\rho_s$ on $H_s$. Indeed, denoting by $\mathcal{F}$ the Fourier transform, we have the following result.
\begin{lem} Define, for $g \in G$, the operator \begin{equation} \label{rhos} \rho_s(g) = \mathcal{F}\pi_s(g){\mathcal{F}}^{-1}. \end{equation} Then $\rho_s$ is a well-defined representation of $G$ on $H_s$, $0 < s < 1$.
\end{lem}
\begin{proof}
By Pitt's theorem, there exists a constant $C > 0$ such that for any $f \in V_s$ \begin{align*}
\int_{-\infty}^{\infty}|(\mathcal{F}{f})(x)|^2 \, |x|^{-s}dx &\le C\int_{-\infty}^{\infty}|f(x)|^2 \, |x|^{s}dx \\
&\le C\int_{-\infty}^{\infty}|f(x)|^2|1+x^2|^{s}dx, \end{align*} $0 < s < 1$. Hence, the Fourier transform $\mathcal{F}$ is a continuous map from $V_s$ to $H_s$, and this shows that $\mathcal{F}V_s \subset H_s$.
Define the functions $\lambda, \mu:\mathbb{R}\to\mathbb{R}$ by $\lambda(x) = e^{-x^2/2}$ and $ \mu(x) = xe^{-x^2/2}$.
Then it is a simple calculation to show that both $\lambda$ and $\mu$ are in $V_s$ and $H_s$, and also that $\mathcal{F}\lambda=\lambda$ and $\mathcal{F}\mu=-i\mu$ (for the unitarily normalized Fourier transform). Define a homomorphism $T$ of $\mathbb{R}^{\times}$, the multiplicative group of non-zero real numbers, by
$$
(T(a)f)( x) = a^{1/2}f(ax)
$$
for any $a \in \mathbb{R}^{\times} $ and any function $f$ on $\mathbb{R}$. Then for $f \in V_s$, we obtain the relation,
\begin{equation}
\mathcal{F}T(a)=T(a^{-1})\mathcal{F}.
\end{equation}
By the uniqueness property of the Laplace transform, it follows that if $f \in V_s$ is such that for any $a \in \mathbb{R}^{\times} $
$$
\int_{-\infty}^{\infty}f(x)(T(a)\lambda)(x)(1+x^2)^{s}dx=0
$$
and
$$
\int_{-\infty}^{\infty}f(x)(T(a)\mu)(x)(1+x^2)^{s}dx=0
$$
then $f = 0$. Hence, $\hbox{Span}\!\left\{T(a)\lambda, T(a)\mu |a \in \mathbb{R}^{\times}\right\}$ is a dense subspace of $V_s$. Similarly, the space generated by its image, {\it viz.}, $\hbox{Span}\!\left\{T(a^{-1})\lambda, T(a^{-1})\mu |a \in \mathbb{R}^{\times}\right\}$, is dense in $H_s$. Therefore, it follows from the continuity of $\mathcal{F}$ that $\mathcal{F}V_s = H_s$.
Consequently, for any $f \in H_s$ and $g \in G$, we obtain ${\mathcal{F}}^{-1}f \in V_s,$ $\pi_s (g){\mathcal{F}}^{-1}f\in V_s,$ and $ \mathcal{F}\pi_s(g){\mathcal{F}}^{-1}f \in H_s.$
Therefore, we have proved that $\rho_s(g ) = \mathcal{F}\pi_s(g){\mathcal{F}}^{-1}$ is well-defined for any $g \in G$.
Finally, since $\pi_s$ is a representation of $G$ on $V_s$, it follows immediately that $\rho_s$ is also a well-defined
representation of $G$ on $H_s.$
\end{proof}
\section {Unitarity of the complementary series} \label{Proof}
Throughout this section, we will use the notation defined in (\ref{qmatrix})-(\ref{gammamatrix}) for the elements of the subgroups of $G$. The main aim of this section is to establish the following result.
\begin{thm}
\label{main}
For $s \in (0,1)$, the operator $\rho_s$ defined in ($\ref{rhos}$) is a unitary representation of $G$ on $H_s$.
\end{thm}
Before embarking on the proof of Theorem \ref{main}, we shall establish several preliminary results.
\begin{lem}
\label{preserve}
The subgroup $Q$ of upper-triangular matrices preserves the norm of $H_s$ when $\rho_s$ is restricted to $Q$.
\end{lem}
\begin{proof}
Notice that if $q = q( a, c, d ) \in Q$ then, by $(\ref{special})$,
\begin{equation}
(\pi_s(q)f)(x) = |a|^{-(s+1)/2}|d|^{(s+1)/2}f(a^{-1}(c+xd )).
\end{equation}
Applying the Fourier transform, we obtain
\begin{align*}
(\rho_s(q)f)(x) &= (\mathcal{F}\pi_s(q)\mathcal{F}^{-1}f)(x) \\
&= e^{icd^{-1}x}|a|^{(1-s)/2}|d|^{(s-1)/2}f(d^{-1}xa).
\end{align*}
Hence the result follows by a simple calculation.
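Explicitly, since $|e^{icd^{-1}x}|=1$, the substitution $y = d^{-1}xa$ (so that $|x|^{-s}=|a|^{s}|d|^{-s}|y|^{-s}$ and $dx=|a|^{-1}|d|\,dy$) yields
\begin{align*}
\langle\rho_s(q)f|\rho_s(q)f\rangle_s &= |a|^{1-s}|d|^{s-1}\int_{-\infty}^{\infty}|f(d^{-1}xa)|^2 \, |x|^{-s}dx \\
&= |a|^{1-s}|d|^{s-1}\cdot|a|^{s}|d|^{-s}\cdot|a|^{-1}|d|\int_{-\infty}^{\infty}|f(y)|^2 \, |y|^{-s}dy = \langle f|f\rangle_s.
\end{align*}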
\end{proof}
Define
$
R(\gamma)=\rho_s(\gamma)
$
for $\gamma= \gamma(a, d) \in D$. Then, it follows from the above lemma that $R$ is a unitary representation of $D$ on $H_s$ and
$$
(R(\gamma)f)(x) = |a|^{(1-s)/2}|d|^{(s-1)/2}f(d^{-1}xa).
$$
\begin{lem} \label{phi}
For $s \in \mathbb{C}$ such that $0< \Re(s) < 1$, define
\begin{equation}
\phi_s(x)=\int_{-\infty}^{\infty} e^{-ixy} \, (1+y^2)^{-(s+1)/2} \, dy.
\end{equation}
Then $\phi_s \in H_s$, and the set ${\mathbf{\Phi}}^{\circ} = \{R(\gamma)\phi_s:\gamma \in D\}$ spans a dense subspace of $H_s^+ =\{f \in H_s: f \text{ is even}\}$. \end{lem}
\begin{proof} Since $0 < \Re(s)< 1$, the functions $(1+y^2)^{-(s+1)/2}$ and $(1+y^2)^{-(s+1)}$ are elements of $L^1(\mathbb{R})$. Therefore, by the $L^1$ and $L^2$ properties of the Fourier transform, it follows that $\phi_s \in L^2 (\mathbb{R})\cap L^{\infty}(\mathbb{R})$, and
\begin{align*}
\langle\phi_s|\phi_s\rangle_s &= \int_{-\infty}^{\infty}|\phi_s(x)|^2|x|^{-s}dx \\
&= \int_{|x|\le 1}|\phi_s(x)|^2|x|^{-s}dx +\int_{|x|>1}|\phi_s(x)|^2|x|^{-s}dx \\
&\le \|\phi_s\|_{\infty}^2 \int_{|x|\le 1}|x|^{-s}dx+\int_{|x|>1}|\phi_s(x)|^2dx\\
&\le 2(1-s)^{-1} \|\phi_s\|_{\infty}^2 + \|\phi_s\|_2^2,
\end{align*}
where $\|\phi_s\|_{\infty}$ and $\|\phi_s\|_2$ are the norms of $\phi_s$ in $L^{\infty} (\mathbb{R} )$ and $L^2 (\mathbb{R})$, respectively.
Therefore $\phi_s \in H_s$, and
\begin{align*}
\phi_s(x) &= \int_{-\infty}^{\infty} e^{-ixy} \, (1+y^2)^{-(s+1)/2} \, dy \nonumber \\
&= \frac{1}{\Gamma((1+s)/2)} \int_{-\infty}^{\infty} e^{-ixy} \left[\int_{0}^{\infty} \xi^{(s-1)/2} e^{-\xi(1+y^2)}d\xi\right]dy.
\end{align*}
As these integrals are absolutely convergent, we now apply Fubini's theorem to reverse the order of integration; then the inner integral with respect to $y$ is seen to be the Fourier transform of the Gaussian; on evaluating that integral we obtain
\begin{align}
\label{Gammaequ}
\phi_s(x) &= \frac{\sqrt{\pi}}{\Gamma((1+s)/2)} \int_{0}^{\infty}\xi^{(s/2)-1} e^{-\xi} e^{-x^2/4\xi}d\xi \nonumber \\
&=\frac{\sqrt{\pi}}{\Gamma((1+s)/2)} \, |x/2|^{s/2}\int_{0}^{\infty} \xi^{(s/2)-1} \exp(-|x|(\xi+\xi^{-1})/2) d\xi.
\end{align}
In order to prove that ${\mathbf{\Phi}}^{\circ}$ spans a dense subspace in $H_s^+$, it suffices to show that if $f \in H_s$ is even and such that $\langle R(\gamma(a, 1))\phi_s |f\rangle_s= 0 $ for any $ a \in \mathbb{R}^{\times}$, then $f = 0$ almost everywhere. It follows from the condition $\langle R(\gamma(a, 1))\phi_s |f\rangle_s = 0$ that
$$
\int_{-\infty}^{\infty} |a|^{(1-s)/2}\phi_s(xa)\overline{f(x)}|x|^{-s}dx=0
$$
and that
$$
\int_{-\infty}^{\infty}\phi_s(xa)\overline{f(x)}|x|^{-s}dx=0
$$
for all $a \in \mathbb{R}^{\times}$. From (\ref{Gammaequ}), we have
$$
\phi_s(xa) = \frac{\sqrt{\pi}}{\Gamma((1+s)/2)} |xa/2|^{s/2} \int_{0}^{\infty} \xi^{(s/2)-1} \exp(-|xa|(\xi+\xi^{-1})/2) d\xi,
$$
and replacing $\xi$ by $2|a|\xi/|x|$ in the latter integral, we obtain
$$
\phi_s(xa) =\frac{\sqrt{\pi}}{\Gamma((1+s)/2)} \, |a|^{s} \int_{0}^{\infty} \xi^{\frac{s}{2}-1} \exp\Big(-a^2\xi-\frac{x^2}{4\xi}\Big) d\xi.
$$
Therefore,
$$
\int_{-\infty}^{\infty}\overline{f(x)}|x|^{-s}\left[\int_{0}^{\infty} \xi^{\frac{s}{2}-1} \exp\Big(-a^2\xi-\frac{x^2}{4\xi}\Big)d\xi \right]dx=0
$$
for all $a \in \mathbb{R}^\times$. Again applying Fubini's theorem to interchange the order of integration, we obtain
$$
\int_{0}^{\infty} \xi^{(s/2)-1} \left[\int_{-\infty}^{\infty}\overline{f(x)}|x|^{-s} \exp(-x^2/4\xi) dx \right]e^{-a^2 \xi}d\xi=0.
$$
As the latter integral is a Laplace transform, it follows that
$$
\int_{-\infty}^{\infty}\overline{f(x)}|x|^{-s} \exp(-x^2/4\xi) dx = 0
$$
$\xi$-almost everywhere. Since $f$ is even, it follows that
$$
\int_{0}^{\infty}\overline{f(x)}|x|^{-s} \exp(-x^2/4\xi) dx = 0
$$
almost everywhere in $\xi > 0$; equivalently,
$$
\int_{0}^{\infty}\overline{f(\sqrt{x})} |x|^{-(s+1)/2} \exp(-x/4\xi) dx = 0
$$
almost everywhere in $\xi > 0$. Hence, the Laplace transform of $\overline{f(\sqrt{x})}|x|^{-(s+1)/2}$ vanishes almost everywhere on $(0,\infty)$. Therefore
$\overline{f(\sqrt{x})}=0$ for almost every $x > 0$, which implies that $f ( x ) = 0$ for a.e. $x > 0$. Because $f$ is even, we deduce that $f ( x ) = 0$ a.e. This completes the proof of the lemma.
\end{proof}
\begin{lem} \label{psis}
For $0 < \Re(s) < 1$, define
\begin{equation}
\psi_s(x)=\int_{-\infty}^{\infty} ye^{-ixy} (1+y^2)^{-(s+1)/2} dy.
\end{equation}
Then $\psi_s \in H_s$, and the set ${\mathbf{\Psi}}^{\circ} = \left\{ R(\gamma)\psi_s : \gamma \in D\right\}$ spans a dense subspace of $H_s^- = \left\{f \in H_s : f \text{ is odd} \right\}$.
\end{lem}
The proof of this result is similar to that of Lemma \ref{phi}.
\noindent{\it Proof of Theorem \ref{main}}. By Lemma \ref{preserve}, $\rho_s$ is unitary when restricted to $Q$. Hence, by Lemma \ref{Subgroup}, it remains only to prove that $\rho_s(p)$ is a unitary operator on $H_s$.
Denoting $\rho_s(p)$ by $W$, it is a straightforward calculation to verify the following properties for $W$:
\noindent (i) $W^2 = I$, the identity operator of $H_s$.
\noindent (ii) $WR(\gamma)= R(\gamma^{-1})W$ for $\gamma= \gamma(a, d) \in D$.
\noindent (iii) $W\phi_s =\phi_s$.
\noindent It is also a simple calculation to verify that $\int_{-\infty}^{\infty}\phi_s(x)\overline{(R(\gamma)\phi_s)(x)}|x|^{-s}dx$ is real
for any $\gamma \in D$: for $s\in(0,1)$ we have $\phi_s(x)=\int_{-\infty}^{\infty}\cos(xy)(1+y^2)^{-(s+1)/2}dy$, so $\phi_s$, and hence the whole integrand, is real-valued. Hence by Proposition \ref{RandL} and Lemma \ref{phi}, $W$ is a generalized Watson transform of $H_s^+$ with respect to the unitary representations $R(\gamma)$ and $R(\gamma^{-1})$. Therefore, $W$ is unitary on $H_s^+$.
Similarly, from Proposition \ref{RandL} and Lemma \ref{psis}, it follows that $W$ is unitary on $H_s^-$. Consequently, $W$
is unitary on $H_s = H_s^+ \oplus H_s^-$. This completes the proof of the unitarity of the complementary series of $G$.
\qed
\end{document}
\begin{document}
\begin{frontmatter} \title{Regularity of solutions of abstract linear evolution equations} \runtitle{Linear evolution equations} \thankstext{T1}{This work was supported by JSPS KAKENHI Grant Number 20140047.}
\begin{aug} \author{Vi$\hat{\d{e}}$t T$\hat{\text{o}}$n T\d{a}\thanksref{T1} \ead[label=e1]{taviet.ton[at]ist.osaka-u.ac.jp}}
\address{Department of Information and Physical Sciences\\
Graduate School of Information Science and Technology, Osaka University\\
Suita, Osaka 565-0871, Japan\\ \printead{e1}}
\runauthor{V. T. T\d{a}}
\affiliation{Osaka University}
\end{aug}
\begin{abstract} In this paper, we study regularity of solutions to linear evolution equations of the form $dX+AXdt=F(t)dt$ in a Banach space $H$, where $A$ is a sectorial operator in $H$ and $A^{-\alpha} F \, (\alpha>0)$ belongs to a weighted H\"{o}lder continuous function space. Similar results are obtained for linear evolution equations with additive noise of the form $dX+AXdt=F(t)dt+G(t)dW(t)$ in a separable Hilbert space $H$, where $W(t)$ is a cylindrical Wiener process. Our results are applied to a model arising in neurophysiology, which has been proposed by Walsh \cite{Walsh}. \end{abstract}
\begin{keyword}[class=MSC] \kwd[Primary ]{60H15} \kwd{35R60} \kwd[; secondary ]{47D06} \end{keyword}
\begin{keyword} \kwd{Analytic semigroups} \kwd{Stochastic linear evolution equations} \kwd{Regularity} \end{keyword} \tableofcontents \end{frontmatter}
\section{Introduction and main results}
In recent years, existence, uniqueness and regularity of solutions to stochastic partial differential equations have been extensively studied by many authors. These topics have been developed mainly by using three approaches, that is, the semigroup methods (see Da Prato-Zabczyk \cite{prato}, T\d{a}-Yagi \cite{Ton2}, T\d{a} \cite{Ton3}, and references therein), the variational methods (see Rozovskii \cite{Rozovskii}, Pr\'{e}v\^{o}t-R\"{o}ckner \cite{PrevotRockner}, and references therein), and the martingale measure methods (see Walsh \cite{Walsh}).
Among others, stochastic linear evolution equations have been studied by many authors in Hilbert or $L_2$ spaces (see Rozovskii \cite{Rozovskii}, Da Prato et al. \cite{prato0,prato}, T\d{a} \cite{Ton1}), in weighted Sobolev $L_p$ spaces (see Krylov-Lototsky \cite{Krylov1}), in weighted H\"{o}lder spaces (see Mikulevicius \cite{Mikulevicius2}), and in M-type 2 Banach space (Brze\'{z}niak \cite{Brzezniak}, T\d{a}-Yagi \cite{Ton0}, T\d{a}-Yamamoto-Yagi \cite{Ton4}).
In this paper, we shall study regularity of solutions to both deterministic and stochastic linear evolution equations whose coefficients belong to weighted H\"{o}lder continuous function spaces by using the semigroup methods. Our results can be applied to a class of stochastic partial differential equations such as the Zakai equation in the nonlinear filtering problem (see Rozovskii \cite{Rozovskii}), the stochastic heat equation (see Hairer \cite{Hairer}), Nagumo's equation, Hodgkin-Huxley equations (see \cite{Walsh,prato}), which can be approximated by the equation: $$\frac{\partial X}{\partial t}=\Delta X -aX + F(t) +G(t)\dot W.$$ As a consequence, some results obtained by Walsh \cite{Walsh} for a model arising in neurophysiology are improved.
Let us introduce the framework in the deterministic case.
We consider the Cauchy problem for a linear evolution equation
\begin{equation} \label{P1} \begin{cases} dX+AXdt=F(t)dt,\hspace{1cm} 0<t\leq T,\\ X(0)=\xi \end{cases} \end{equation}
in a Banach space $H$ with norm $\|\cdot\|,$ where $T>0$, $A\colon\mathcal D(A)\subset H\to H$ is a densely defined, closed linear operator in $H$,
and $F$ is an $H$-valued external force. \begin{definition} Let $F$ satisfy the condition $$\int_0^t \|S(t-s) F(s)\|ds<\infty, \hspace{2cm} t\in [0,T],$$ where $S(t)$ is the $C_0$-semigroup in $H$ generated by the operator $(-A)$. Then the function $X$ defined by $$X(t)=S(t)\xi + \int_0^t S(t-s) F(s)ds, \hspace{2cm} t\in [0,T]$$ is called a mild solution of \eqref{P1}. \end{definition} (In Remark \ref{remark2} below, we discuss the relation between weak, mild and strong solutions.)
When the initial value $\xi\in H$ is arbitrary and $F\in L^1([0,T];H)$, the Cauchy problem \eqref{P1} possesses a unique continuous mild solution. Meanwhile, when $\xi\in \mathcal D(A)$ and $F\in W^{1,p}([0,T];H) $ with $p\geq 1$, the mild solution possesses the regularity $$X\in \mathcal C^1([0,T];H)\cap \mathcal C([0,T]; \mathcal D(A))\cap W^{1,p}([0,T];H),$$ where $W^{1,p}([0,T];H)$ denotes the Sobolev space of all $H$-valued functions $u$ on $[0,T]$ whose weak derivative $u'$ belongs to $L^p([0,T];H)$ (see e.g., \cite{Ball,prato}).
When $A$ is a sectorial operator (i.e. it satisfies conditions (H1) and (H2) below), $(-A)$ generates an analytic semigroup $S(t)$. In \cite{Pazy}, the author considered the case $\xi=0$ and $F\in \mathcal C^\alpha([0,T];H)$ for some $\alpha \in (0,1),$ where $\mathcal C^\alpha([0,T];H)$ denotes the space of $\alpha$-H\"{o}lder continuous functions on $[0,T]$. Then the mild solution of \eqref{P1} satisfies $$X\in \mathcal C^1([0,T];H)\cap \mathcal C([0,T];\mathcal D(A)).$$
In \cite{Sinestrari}, under the assumption $F(0)\in \mathcal D_A(\alpha,\infty)$, it has been verified that $$X\in \mathcal C^{1,\alpha}([0,T];H)\cap \mathcal C^\alpha([0,T];\mathcal D(A)),$$ where $\mathcal C^{1,\alpha}$ is the space of $\mathcal C^1$ functions with derivatives in $\mathcal C^\alpha.$ If $F$ is taken from $ \mathcal C([0,T];\mathcal D_A(\alpha,\infty)) $ then (see \cite{prato-2}) $$X\in \mathcal C^1([0,T];\mathcal D_A(\alpha,\infty))\cap \mathcal C([0,T];\mathcal D_A(\alpha+1,\infty)),$$ where $ \mathcal D_A(\alpha,\infty)$ is the space of all $x\in H$ such that
$\sup_{t>0} \frac{\|S(t)x-x\|}{t^\alpha}<\infty$, and $\mathcal D_A(\alpha+1,\infty)=\{x\in \mathcal D(A)\colon Ax\in \mathcal D_A(\alpha,\infty)\}$. When $\xi\in H$ is arbitrary and $F$ belongs to a weighted H\"{o}lder continuous function space $\mathcal F^{\beta, \sigma}((0,T]; H)$ (see the definition below), Yagi \cite{yagi0} showed that $$X\in \mathcal C^1((0,T];H)\cap \mathcal C([0,T];H) \cap \mathcal C((0,T];\mathcal D(A)).$$ He also obtained the maximal regularity for both initial value $\xi \in \mathcal D(A^\beta)$ and external force function $F\in \mathcal F^{\beta, \sigma}((0,T]; H)$ (see \cite{yagi1}-\cite{yagi}):
$$A^\beta X\in \mathcal C([0,T];H),$$ $$\frac{dX}{dt}, AX \in \mathcal F^{\beta, \sigma}((0,T]; H).$$
In the present paper, we assume that $A^{-\alpha}F$ belongs to the weighted H\"{o}lder continuous function space $\mathcal F^{\beta, \sigma}((0,T]; H)$ with some positive $\alpha$, i.e. $F$ satisfies both temporal and spatial regularity. We will show both temporal and spatial regularity of solutions to \eqref{P1}. Similar results are obtained for linear evolution equations in Hilbert spaces with additive noise. Our result will be applied to a model arising in neurophysiology.
Let us review the function space $\mathcal F^{\beta, \sigma}((0,T]; H)$
for two exponents $0<\sigma<\beta\leq 1$ (see \cite{yagi}). The space $\mathcal F^{\beta, \sigma}((0,T]; H)$ consists of all $H$-valued continuous functions $f(t)$ on $(0,T]$ (resp. $[0,T]$) when $0<\beta<1$ (resp. $\beta=1$) with the following three properties: \begin{itemize}
\item [\rm (i)] When $\beta<1$,
\begin{equation} \label{P2}
t^{1-\beta} f(t) \text{ has a limit as } t\to 0.
\end{equation}
\item [\rm (ii)] The function $f$ is H\"{o}lder continuous with the exponent $\sigma$ and with the weight $s^{1-\beta+\sigma}$, i.e.
\begin{equation} \label{P3} \begin{aligned}
&\sup_{0\leq s<t\leq T} \frac{s^{1-\beta+\sigma}\|f(t)-f(s)\|}{(t-s)^\sigma}\\
&=\sup_{0\leq t\leq T}\sup_{0\leq s<t}\frac{s^{1-\beta+\sigma}\|f(t)-f(s)\|}{(t-s)^\sigma}<\infty. \end{aligned} \end{equation}
\item [\rm (iii)]
\begin{equation} \label{P4}
\begin{aligned}
\lim_{t\to 0} & w_f(t)=0, \\
& \text{ where } w_f(t)=\sup_{0\leq s <t}\frac{s^{1-\beta+\sigma}\|f(t)-f(s)\|}{(t-s)^\sigma}. \end{aligned}
\end{equation}
\end{itemize} Then $\mathcal F^{\beta, \sigma}((0,T]; H)$ becomes a Banach space with norm
$$\|f\|_{\mathcal F^{\beta, \sigma}((0,T]; H)}=\sup_{0\leq t\leq T} t^{1-\beta} \|f(t)\|+ \sup_{0\leq s<t\leq T} \frac{s^{1-\beta+\sigma}\|f(t)-f(s)\|}{(t-s)^\sigma}.$$
For simplicity, when no confusion can arise, the norm of $f$ in $\mathcal F^{\beta, \sigma}((0,T];H)$ will be denoted by $\|f\|_{\mathcal F^{\beta, \sigma}}$. The following useful inequality follows directly from the definition. For every $ f\in \mathcal F^{\beta, \sigma}((0,T]; H)$ and $0<s<t\leq T$ we have
\begin{equation} \label{P5} \begin{cases}
\|f(t)\|\leq \|f\|_{\mathcal F^{\beta, \sigma}} t^{\beta-1}, \\
\|f(t)-f(s)\| \leq w_f(t) (t-s)^{\sigma} s^{\beta-\sigma-1}\leq \|f\|_{\mathcal F^{\beta, \sigma}} (t-s)^{\sigma} s^{\beta-\sigma-1}. \end{cases} \end{equation} In addition, it is not hard to show that \begin{equation} \label{P6} \mathcal F^{\gamma,\sigma} ((0,T];H)\subset \mathcal F^{\beta,\sigma} ((0,T];H), \hspace{2cm} 0<\sigma<\beta<\gamma\leq 1. \end{equation} \begin{remark} The space $\mathcal F^{\beta, \sigma}((0,T]; H)$ is not a trivial space. When $0<\sigma<\beta<1$, $f(t) =t^{\beta-1} g(t) \in \mathcal F^{\beta, \sigma}((0,T]; H),$ where $g(t)$ is any $H$-valued function on $[0,T]$ such that
$g\in \mathcal C^\sigma([0,T];H)$
and $g(0)=0.$ When $0<\sigma<\beta=1$, the space $ \mathcal F^{1, \sigma}((0,T]; H)$ includes the space of all H\"{o}lder continuous functions with the exponent $\sigma.$ \end{remark}
Let us now formulate the precise conditions on the coefficients in \eqref{P1}. \begin{itemize}
\item [(\rm{H1})] The spectrum $\sigma(A)$ of $A$ is contained in an open sectorial domain $\Sigma_{\varpi}$:
\begin{equation*}
\sigma(A) \subset \Sigma_{\varpi}=\{\lambda \in \mathbb C: |\arg \lambda|<\varpi\}, \quad \quad 0<\varpi<\frac{\pi}{2}.
\end{equation*}
\item [(\rm{H2})] The resolvent satisfies the estimate
\begin{equation*} \label{H2}
\|(\lambda-A)^{-1}\| \leq \frac{M_{\varpi}}{|\lambda|}, \quad\quad\quad \quad \lambda \notin \Sigma_{\varpi}
\end{equation*}
with some constant $M_{\varpi}>0$ depending only on the angle $\varpi$.
\item [(\rm{H3})] \begin{equation*} \label{H6} \begin{aligned}
& A^{-\alpha}F\in \mathcal F^{\beta, \sigma}((0,T];H) \hspace{0.5cm} \text{ with } 0<\sigma< \beta\leq 1 \text{ and } \\
& \frac{1+\sigma}{4}<\alpha\leq \frac{\beta}{2}. \end{aligned}
\end{equation*}
\end{itemize}
Under the assumptions (H1) and (H2), the following facts are well-known. \begin{proposition}[see e.g., \cite{yagi}] Let {\rm (H1)} and {\rm (H2)} be satisfied. Then \begin{itemize} \item [\rm (i)] $(-A)$ generates a semigroup $S(t)=e^{-tA}$, i.e. $S(t)$ enjoys the semigroup property \begin{equation*} \begin{aligned} \begin{cases} S(t+s)=S(t)S(s), &\hspace{1cm} 0\leq t, s<\infty,\\ S(0)=I. & \end{cases} \end{aligned} \end{equation*} \item [\rm (ii)] For every $t>0$ and $\theta\geq 0$
\begin{equation} \label{P7} \begin{aligned}
& \|A^\theta S(t)\| \leq \iota_\theta t^{-\theta}, \\
& \text{ where } \iota_\theta:=\sup_{0\leq t<\infty} t^\theta \|A^\theta S(t)\|<\infty. \end{aligned} \end{equation}
In particular, \begin{equation} \label{P8}
\|S(t)\|\leq \iota_0, \hspace{2cm} 0\leq t<\infty. \end{equation} \item [\rm (iii)] For every $\theta>0$ there exists a constant $\upsilon_\theta$ such that
\begin{equation} \label{P9}
\|A^{-\theta}\|\leq \upsilon_\theta.
\end{equation} \item[\rm (iv)] For every $0< \theta\leq 1 $
\begin{equation} \label{P10}
t^\theta A^{\theta} S(t) \text{ converges to } 0 \text{ strongly on } H \text { as } t\to 0.
\end{equation}
\end{itemize} \end{proposition} From now on, the notations $\iota_\theta$ and $\upsilon_\theta$ always refer to the constants in \eqref{P7} and \eqref{P9}, respectively.
Now we can state the main result for \eqref{P1}. \begin{theorem}\label{theorem1} Let {\rm (H1)}, {\rm (H2)} and {\rm (H3)} be satisfied. Let $\xi\in \mathcal D(A^\beta).$ Then there exists a unique mild solution of \eqref{P1} possessing the regularity: $$X\in \mathcal C((0,T];\mathcal D(A^{1-\alpha})), $$ $$ A^{\alpha} X\in \mathcal C([0,T];H)\cap \mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H) \hspace{1cm} \text{ for every }\gamma\in [0, \sigma].$$ Furthermore, $X$ satisfies the estimate \begin{equation} \label{P11}
\|X(t)\|\leq \iota_\alpha B(\beta,1-\alpha) \|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} t^{\beta-\alpha} +\iota_0 \|\xi\|, \hspace{1cm} t\in [0,T], \end{equation} where $B(\cdot,\cdot)$ is the beta function. \end{theorem}
Let us next formulate the analogous result for linear evolution equations with additive noise. We proceed to study the Cauchy problem for an abstract stochastic evolution equation
\begin{equation} \label{P12} \begin{cases} dX+AXdt=F(t)dt+ G(t)dW(t),\hspace{1cm} 0<t\leq T,\\ X(0)=\xi \end{cases} \end{equation} in a separable Hilbert space $H$. Here, $W(t)$ is a cylindrical Wiener process on a separable Hilbert space $U$, which is defined on a filtered probability space $(\Omega, \mathcal F,\mathcal F_t,\mathbb P).$ The coefficient $G$ is a measurable process from $([0,T],\mathcal B([0,T]))$ to $(L_2(U;H),\mathcal B(L_2(U;H))),$ where $L_2(U;H)$ denotes the space of all Hilbert-Schmidt operators from $U$ to $H$. The linear operator $A$ and the function $F$ are the same as in \eqref{P1}, and the initial value $ \xi$ is an $\mathcal F_0$-measurable random variable.
\begin{definition}\label{Def1} Let $F$ and $G$ satisfy the conditions
$$\int_0^T \|S(t-s)F(s)\|ds<\infty$$ and
$$ \int_0^T \|S(t-s)G(s)\|_{L_2(U;H)}^2ds<\infty.$$ Then an $H$-valued predictable process $X(t), t\in [0,T]$ defined by \begin{align*} X(t)=&S(t)\xi +\int_0^tS(t-s) F(s)ds+ \int_0^t S(t-s) G(s)dW(s) \end{align*}
is called a mild solution of \eqref{P12}. \end{definition} \begin{remark} \label{remark2} In the semigroup approach to parabolic evolution equations, mild (and \textit{weak}) solutions are mainly studied, because the existence of \textit{strong} solutions (or \textit{real} solutions) is very rare \cite{prato}. Let us review the notions of strong and weak solutions. Suppose that $F$ and $G$ satisfy the conditions
$$\int_0^T \|F(s)\|ds<\infty$$ and
$$ \int_0^T \|G(s)\|_{L_2(U;H)}^2ds<\infty.$$ Then an $H$-valued predictable process $X$ on $ [0,T]$ is called a strong solution to \eqref{P12} if $X$ takes values in $\mathcal D(A)$ a.s. and satisfies
$$\int_0^T \|AX(s)\|ds<\infty, \hspace{2cm} \text{a.s.}$$ and almost surely $$X(t)=\xi+\int_0^t [F(s)-AX(s)]ds+ \int_0^t G(s)dW(s), \hspace{2cm} t\in [0,T]. $$
The process $X$ is called a weak solution if for every $h\in \mathcal D(A^*)$ and $t\in [0,T]$ we have \begin{align*} \langle X(t),h\rangle=&\langle \xi,h\rangle+ \int_0^t [\langle F(s),h\rangle-\langle X(s),A^*h\rangle]ds\\ &+\int_0^t\langle G(s),h\rangle dW(s). \end{align*} It is clear that a strong solution is also a weak solution. In addition, a weak solution is a mild solution \cite{prato}. The converse implications are not true in general; they hold, however, under some special conditions. In \cite{prato}, the authors showed that a mild solution becomes a weak solution under some conditions on $G$. In \cite{Ton}, under the assumption
$$\|AS(t)\| \leq C t^{-\delta}, \hspace{2cm} t>0$$ with some constants $C>0$ and $\delta\in (0,\frac{1}{2})$, we proved that a mild solution is also a strong solution. \end{remark} We suppose that the process $G$ in the equation \eqref{P12} belongs to a weighted H\"{o}lder continuous function space:
\begin{equation*} \rm{(H4)} \hspace{2cm} G\in \mathcal F^{\beta, \sigma} ((0,T];L_2(U;H)) \hspace{1cm} \text{with } 0<\sigma<\beta-\frac{1}{2}.
\end{equation*} Our results for the stochastic case are as follows. In the case there is no external force, we have \begin{theorem} \label{theorem2}
Assume that $F\equiv 0$ on $[0,T]$. Let {\rm (H1)}, {\rm (H2)} and {\rm (H4)} be satisfied. Let $\xi$ take values in $\mathcal D(A^\beta) $ a.s. such that $ \mathbb E\|A^\beta \xi\| <\infty$. Then there exists a unique mild solution of \eqref{P12} possessing the regularity: $$X\in \mathcal C((0,T];\mathcal D(A^\nu)), \quad A^{\alpha_1} X\in \mathcal C^\gamma([\epsilon,T];H) \hspace{1cm}\text{ a.s.},$$ and
$$\mathbb E \|A^{\alpha_1} X\|\in \mathcal F^{\beta,\sigma}((0,T];\mathbb R) $$
for every $0<\nu<\frac{1}{2}, 0<\alpha_1\leq \frac{1}{2}-\sigma, 0<\gamma<\sigma$ and $\epsilon\in (0,T]$. Furthermore, $X$ satisfies the estimate
\begin{equation} \label{P13}
\mathbb E\|X(t)\|\leq C[\mathbb E\|A^\beta\xi\|+\|G\|_{\mathcal F^{\beta, \sigma} ((0,T];L_2(U;H))} t^{\beta-\frac{1}{2}}], \hspace{1cm} t\in [0,T],
\end{equation}
where $C$ is some constant depending only on the exponents and constants $\iota_\theta,$ $ \upsilon_\theta \,(\theta\geq 0)$. \end{theorem} In general, we have \begin{theorem} \label{theorem3}
Let {\rm (H1)}, {\rm (H2)}, {\rm (H3)} and {\rm (H4)} be satisfied. Let $\xi$ take values in $\mathcal D(A^\beta) $ a.s. such that $ \mathbb E\|A^\beta \xi\| <\infty$. If $\alpha\leq \frac{1}{2}-\sigma$
then there exists a unique mild solution of \eqref{P12} possessing the regularity: $$X\in \mathcal C((0,T];\mathcal D(A^\nu)), \quad A^{\alpha} X\in \mathcal C^\gamma([\epsilon,T];H) \hspace{1cm}\text{ a.s.},$$ and
$$\mathbb E \|A^{\alpha} X\|\in \mathcal F^{\beta,\sigma}((0,T];\mathbb R) $$
for every $0<\nu<\frac{1}{2}, 0<\gamma<\sigma$ and $\epsilon\in (0,T]$. Furthermore, the estimate
\begin{equation*}
\mathbb E\|X(t)\|\leq C[\mathbb E\|A^\beta\xi\|+\|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} t^{\beta-\alpha}+\|G\|_{\mathcal F^{\beta, \sigma} ((0,T];L_2(U;H))} t^{\beta-\frac{1}{2}}]
\end{equation*}
holds true for every $t\in [0,T]$, where $C$ is some constant depending only on the exponents and constants $\iota_\theta, \upsilon_\theta \,(\theta\geq 0)$. \end{theorem} \begin{remark} According to the proof of the above theorems in the next section, we can see that the results in this paper hold true for not only non-random functions $F(t)$ and $ G(t)$ but also random ones, i.e. for $F$ and $ G$ depending on both $t\in [0,T]$ and $\omega\in \Omega$ and satisfying {\rm (H3)} and {\rm (H4)} almost surely. In that case, we shall need more conditions on $F$ and $G$ such as \begin{itemize}
\item [{\rm (i)}] $F$ and $G$ are measurable with respect to $(t,\omega).$
\item [{\rm (ii)}] $\mathbb E\|G\|_{\mathcal F^{\beta, \sigma}((0,T];L_2(U;H))}^2 <\infty.$ \end{itemize} \end{remark} The rest of the present paper is organized as follows. In the next section, we shall present the proofs of Theorems \ref{theorem1}, \ref{theorem2} and \ref{theorem3}. In Section \ref{section3}, we shall apply the main results to a model arising in neurophysiology. \section{Proof of the main theorems} \label{section2} \begin{proof}[Proof of Theorem \ref{theorem1}]
Let us show that \eqref{P1} possesses a unique mild solution in the space
$$X\in \mathcal C((0,T];\mathcal D(A^{1-\alpha})), $$
which satisfies the estimate \eqref{P11}. By \eqref{P5}, \eqref{P7} and {\rm (H3)}, we have \begin{equation} \label{P14} \begin{aligned}
\int_0^t\|S(t-s) F(s)\|ds&=\int_0^t\|A^\alpha S(t-s) A^{-\alpha} F(s)\|ds\\
&\leq \iota_\alpha \|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} \int_0^t (t-s)^{-\alpha}s^{\beta-1}ds\\
& =\iota_\alpha \|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} B(\beta,1-\alpha)t^{\beta-\alpha} <\infty, \hspace{1cm} t\in [0,T]. \end{aligned} \end{equation} Then the integral $\int_0^tS(t-s) F(s)ds$ is well-defined. Consequently, \eqref{P1} possesses a mild solution given by $$X(t)=S(t)\xi +\int_0^tS(t-s) F(s)ds$$ (the uniqueness is obvious). On the other hand, \begin{align} &\int_0^t A^{1-\alpha}S(t-s) F(s)ds \notag\\ &=\int_0^tA S(t-s) [A^{-\alpha} F(s)-A^{-\alpha} F(t)]ds+ \int_0^tA S(t-s) dsA^{-\alpha} F(t) \notag\\ &=\int_0^tA S(t-s) [A^{-\alpha} F(s)-A^{-\alpha} F(t)]ds+ [I-S(t)]A^{-\alpha} F(t). \label{P15} \end{align} Thanks to \eqref{P5}, \eqref{P7} and {\rm (H3)}, \begin{align*}
&\int_0^t\|A S(t-s) [A^{-\alpha} F(s)-A^{-\alpha} F(t)]\|ds\\
&\leq \int_0^t\|A S(t-s)\| \|A^{-\alpha} F(s)-A^{-\alpha} F(t)\|ds\\
&\leq \iota_1 \|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} \int_0^t (t-s)^{\sigma-1}s^{\beta-\sigma-1}ds\\
& =\iota_1 \|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} B(\beta-\sigma,\sigma)t^{\beta-1} <\infty, \hspace{1cm} t\in (0,T]. \end{align*} Hence, it is easy to see that $\int_0^t AS(t-s) [A^{-\alpha} F(s)-A^{-\alpha} F(t)]ds$ is continuous in $t\in (0,T].$ In addition, since $A$ is closed, we obtain $$A\int_0^t S(t-s) [A^{-\alpha} F(s)-A^{-\alpha} F(t)]ds=\int_0^tA S(t-s) [A^{-\alpha} F(s)-A^{-\alpha} F(t)]ds.$$
Therefore, by \eqref{P15}, it is seen that $$A^{1-\alpha} \int_0^t S(t-s) F(s)ds=\int_0^t A^{1-\alpha}S(t-s) F(s)ds$$ and $A^{1-\alpha} \int_0^t S(t-s) F(s)ds$ is continuous in $t\in (0,T]$. As a consequence, $$A^{1-\alpha}X(t)=A^{1-\alpha}S(t)\xi+A^{1-\alpha} \int_0^t S(t-s) F(s)ds$$ belongs to $\mathcal C((0,T];H)$. Furthermore, by \eqref{P8} and \eqref{P14}, $X$ satisfies the estimate \eqref{P11}.
Let us next prove that $A^{\alpha} X(t)=A^{\alpha} S(t)\xi +A^{\alpha} \int_0^tS(t-s) F(s)ds$ is continuous in $t\in [0,T]$. Indeed, the first term in the right-hand side of the latter equality is continuous on $[0,T]$, because $\alpha<\beta$ and $A^{\alpha} S(t)\xi=A^{\alpha-\beta} S(t)A^\beta \xi.$ So, it suffices to show that the second term is also continuous on $[0,T]$.
Similarly to \eqref{P14}, we have \begin{equation*} \begin{aligned}
\int_0^t\|A^\alpha S(t-s) F(s)\|ds\leq
\iota_{2\alpha} \|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} B(\beta,1-2\alpha)t^{\beta-2\alpha} <\infty, \hspace{0.2cm} t\in [0,T]. \end{aligned} \end{equation*} In addition, since $A^{\alpha}$ is closed, we obtain $$A^{\alpha} \int_0^tS(t-s) F(s)ds=\int_0^tA^{\alpha} S(t-s) F(s)ds.$$ Hence, it is easy to see that $A^{\alpha} \int_0^tS(t-s) F(s)ds$ is continuous in $t\in [0,T].$
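In the estimates above we repeatedly used the elementary Beta-function identity (also behind \eqref{P14}): for exponents $p, q>0$, the substitution $s=tu$ gives
$$
\int_0^t (t-s)^{p-1}s^{q-1}ds = t^{p+q-1}\int_0^1 (1-u)^{p-1}u^{q-1}du = B(q,p) \, t^{p+q-1};
$$
taking $p=1-2\alpha$ and $q=\beta$ yields the computation used for $\int_0^t (t-s)^{-2\alpha}s^{\beta-1}ds$ just above.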
Let us now verify that $A^{\alpha} X\in \mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H)$ for every $\gamma\in [0,\sigma]$. We use the expression \begin{align*} A^{\alpha} X(t)=& A^{\alpha}S(t) \xi+\int_0^t A^{2\alpha} S(t-s)[A^{-\alpha}F(s)-A^{-\alpha}F(t)]ds\\ &+\int_0^t A^{2\alpha} S(t-s)dsA^{-\alpha}F(t)\\ =& A^{\alpha}S(t) \xi+\int_0^t A^{2\alpha} S(t-s)[A^{-\alpha}F(s)-A^{-\alpha}F(t)]ds\\ &+A^{2\alpha-1}[I-S(t)]A^{\alpha-1} F(t)\\ =&J_1(t)+J_2(t)+J_3(t). \end{align*} We will show that $J_1, J_2$ and $ J_3$ belong to $\mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H)$.
{\it Proof for $J_1$}. We prove that $$J_1\in \mathcal F^{\beta,\gamma}((0,T];H) \subset \mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H)\hspace{2cm} (\text{see }\eqref{P6}). $$ Indeed, applying \eqref{P10} to $$t^{1-\beta} A^{\alpha}S(t) \xi=A^{\alpha-1} t^{1-\beta} A^{1-\beta}S(t) A^\beta \xi,$$
we obtain $\lim_{t\to 0} t^{1-\beta} A^{\alpha}S(t) \xi=0.$ Hence, the condition \eqref{P2} is fulfilled.
On the other hand, by using \eqref{P7}, for $0<s<t\leq T$ we have \begin{align}
&\frac{s^{1-\beta+\gamma} \|A^\alpha[S(t)-S(s)] \xi\|}{(t-s)^\gamma} \notag\\
&\leq \frac{\|A^{\alpha-\gamma-1}[S(t-s)-I]\|}{(t-s)^\gamma} s^{1-\beta+\gamma}\|A^{1-\beta+\gamma}S(s) A^\beta \xi\|\notag\\
&\leq \frac{\|\int_0^{t-s} A^{\alpha-\gamma}S(u)du\|}{(t-s)^\gamma} \iota_{1-\beta+\gamma} \|A^\beta \xi\|. \label{P16} \end{align} If $\alpha\geq \gamma$ then \begin{align}
\frac{s^{1-\beta+\gamma} \|A^\alpha[S(t)-S(s)] \xi\|}{(t-s)^\gamma}&\leq \frac{ \int_0^{t-s} \iota_{\alpha-\gamma} u^{\gamma-\alpha}du}{(t-s)^\gamma} \iota_{1-\beta+\gamma} \|A^\beta \xi\|\notag\\
&= \frac{\iota_{\alpha-\gamma} \iota_{1-\beta+\gamma} \|A^\beta \xi\| }{1+\gamma-\alpha}(t-s)^{1-\alpha}. \label{P17} \end{align} If $\alpha< \gamma$ then by \eqref{P8} and \eqref{P9} \begin{align}
\frac{s^{1-\beta+\gamma} \|A^\alpha[S(t)-S(s)] \xi\|}{(t-s)^\gamma}&\leq \frac{ \int_0^{t-s} \upsilon_{\gamma-\alpha} \iota_0du}{(t-s)^\gamma} \iota_{1-\beta+\gamma} \|A^\beta \xi\|\notag\\
&= \upsilon_{\gamma-\alpha} \iota_0\iota_{1-\beta+\gamma} \|A^\beta \xi\| (t-s)^{1-\gamma}. \label{P18} \end{align} Hence,
$$\sup_{0\leq s<t\leq T}\frac{s^{1-\beta+\gamma} \|A^\alpha[S(t)-S(s)] \xi\|}{(t-s)^\gamma}<\infty$$ and \begin{align*}
&\lim_{t\to 0} \sup_{0<s<t}\frac{s^{1-\beta+\gamma} \|A^\alpha[S(t)-S(s)] \xi\|}{(t-s)^\gamma}=0. \end{align*} Therefore, the conditions \eqref{P3} and \eqref{P4} are also satisfied. We have thus verified that $J_1\in \mathcal F^{\beta,\gamma}((0,T];H). $
{\it Proof for $J_2$}. Using \eqref{P5} and {\rm (H3)} we have \begin{align*}
\|J_2(t)\|&\leq \int_0^t \|A^{2\alpha} S(t-s)\|\|A^{-\alpha}F(s)-A^{-\alpha}F(t)\|ds\notag\\ &\leq \iota_{2\alpha} w_{A^{-\alpha}F}(t)\int_0^t (t-s)^{\sigma-2\alpha} s^{\beta-\sigma-1}ds\notag\\ &=\iota_{2\alpha} w_{A^{-\alpha}F}(t) B(\beta-\sigma,1+\sigma-2\alpha)t^{\beta-2\alpha}, \end{align*} here
$$
w_{A^{-\alpha}F}(t)=\sup_{0\leq s <t}\frac{s^{1-\beta+\sigma}\|A^{-\alpha}F(t)-A^{-\alpha}F(s)\|}{(t-s)^\sigma} \hspace{1cm} (\text{see } \eqref{P4}). $$ This implies that \begin{equation} \label{P19} \lim_{t\to 0} t^{1-(\beta-\sigma+\gamma)} J_2(t)=0 \hspace{1cm} \text{ in } H. \end{equation}
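As a side check (not part of the proof), the Beta-function evaluation of the convolution integral used above, $\int_0^t (t-s)^{\sigma-2\alpha} s^{\beta-\sigma-1}ds = B(\beta-\sigma,1+\sigma-2\alpha)t^{\beta-2\alpha}$, can be verified numerically. The sketch below uses hypothetical exponent values chosen only so that $\beta-\sigma>0$ and $1+\sigma-2\alpha>0$, the convergence conditions of this integral; SciPy is assumed available.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as euler_beta

# Hypothetical exponents with beta - sigma > 0 and 1 + sigma - 2*alpha > 0,
# so that the endpoint-singular integrand is integrable.
alpha, beta, sigma, t = 0.3, 0.8, 0.4, 1.7

# Left-hand side: direct quadrature of  (t-s)^(sigma-2a) * s^(beta-sigma-1).
lhs, _ = quad(lambda s: (t - s)**(sigma - 2*alpha) * s**(beta - sigma - 1),
              0.0, t)

# Right-hand side: the closed form  B(beta-sigma, 1+sigma-2alpha) * t^(beta-2alpha).
rhs = euler_beta(beta - sigma, 1 + sigma - 2*alpha) * t**(beta - 2*alpha)

print(abs(lhs - rhs) < 1e-6)  # True
```

The substitution $s=tu$ reduces the integral to the standard Beta integral, which is why the same closed form reappears in {\bf Step 2} of the proof of Theorem \ref{theorem2}.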
We next observe that for $0<s<t\leq T$ \begin{align} J_2(t)-J_2(s)=&\int_s^t A^{2\alpha} S(t-u)[A^{-\alpha} F(u)-A^{-\alpha} F(t)] du \notag\\ &+ [S(t-s) -I] \int_0^s A^{2\alpha} S(s-u)[A^{-\alpha} F(u)-A^{-\alpha} F(s)] du \notag\\ &+\int_0^s A^{2\alpha} S(t-u)[A^{-\alpha} F(s)-A^{-\alpha} F(t)] du\notag\\ =&J_{21}(t,s)+J_{22}(t,s)+J_{23}(t,s). \label{E19} \end{align}
For $J_{21}(t,s)$, by \eqref{P5} and \eqref{P7} we have the estimate \begin{align}
\|J_{21}(t,s)\|\leq & \int_s^t \|A^{2\alpha} S(t-u)\| \|A^{-\alpha} F(u)-A^{-\alpha} F(t)\| du \notag\\ \leq &\int_s^t \iota_{2\alpha} w_{A^{-\alpha} F}(t)(t-u)^{\sigma-2\alpha} u^{\beta-\sigma-1}du\notag\\
\leq &\iota_{2\alpha} w_{A^{-\alpha} F}(t)s^{\beta-\sigma-1}\int_s^t (t-u)^{\sigma-2\alpha} du\notag\\ =&\frac{\iota_{2\alpha} w_{A^{-\alpha} F}(t)s^{(\beta-\sigma+\gamma)-\gamma-1} (t-s)^{1+\sigma-2\alpha}}{1+\sigma-2\alpha}\notag\\ \leq &C_1 w_{A^{-\alpha} F}(t)s^{(\beta-\sigma+\gamma)-\gamma-1} (t-s)^\gamma, \label{P20} \end{align} where $C_1>0$ is some positive constant.
For $J_{22}(t,s)$, we estimate its norm as follows. \begin{align}
&\|J_{22}(t,s)\| \notag\\
&=\Big \|\int_0^{t-s} AS(r)dr \int_0^s A^{2\alpha} S(s-u)[A^{-\alpha} F(u)-A^{-\alpha} F(s)] du \Big \|\notag\\
&=\Big \|\int_0^{t-s} \int_0^s A^{1+2\alpha} S(r+s-u)[A^{-\alpha} F(u)-A^{-\alpha} F(s)] du dr\Big \|\notag\\ &\leq \iota_{1+2\alpha} w_{A^{-\alpha} F}(s) \int_0^{t-s} \int_0^s (r+s-u)^{-1-2\alpha} (s-u)^{\sigma} u^{\beta-\sigma-1}du dr\notag\\ &=\frac{\iota_{1+2\alpha}w_{A^{-\alpha} F}(s)}{2\alpha} \int_0^s [(s-u)^{-2\alpha}-(t-u)^{-2\alpha}] (s-u)^{\sigma} u^{\beta-\sigma-1}du \notag\\ &=\frac{\iota_{1+2\alpha}w_{A^{-\alpha} F}(s)}{2\alpha} \int_0^s [(t-u)^{2\alpha}-(s-u)^{2\alpha}] (t-u)^{-2\alpha} (s-u)^{\sigma-2\alpha} \notag\\ &\hspace{4cm} \times u^{\beta-\sigma-1}du \notag\\ &\leq \frac{\iota_{1+2\alpha}(t-s)^{2\alpha} w_{A^{-\alpha} F}(s)}{2\alpha} \int_0^s (t-u)^{-2\alpha} (s-u)^{\sigma-2\alpha} u^{\beta-\sigma-1}du \notag\\ &= \frac{\iota_{1+2\alpha} w_{A^{-\alpha} F}(s)}{2\alpha} (t-s)^{2\alpha} \int_0^s (t-s+u)^{-2\alpha} u^{\sigma-2\alpha} (s-u)^{\beta-\sigma-1}du, \label{E21} \end{align} here we used the inequality $(t-u)^{2\alpha}-(s-u)^{2\alpha}\leq (t-s)^{2\alpha}.$ We have \begin{align*} &(t-s)^{2\alpha}\int_{\frac{s}{2}}^s (t-s+u)^{-2\alpha} u^{\sigma-2\alpha} (s-u)^{\beta-\sigma-1}du\\
=& (t-s)^{\gamma}\int_{\frac{s}{2}}^s (t-s)^{2\alpha-\gamma} (t-s+u)^{-2\alpha} u^{1+\sigma-2\alpha} u^{-1} (s-u)^{\beta-\sigma-1}du\\
\leq & 2(t-s)^{\gamma} s^{-1}\int_{\frac{s}{2}}^s [ (t-s)^{2\alpha-\gamma} (t-s+u)^{-2\alpha} u^{1+\sigma-2\alpha}] (s-u)^{\beta-\sigma-1}du. \end{align*} Since there exists $C_2>0$ such that for every $\frac{s}{2}\leq u\leq s$ \begin{align*} (t-s)^{2\alpha-\gamma}& (t-s+u)^{-2\alpha} u^{1+\sigma-2\alpha}\\ &=\Big(\frac{t-s}{t-s+u}\Big)^{2\alpha-\gamma} \Big(\frac{u}{t-s+u}\Big)^{\gamma} u^{1+\sigma-2\alpha-\gamma} \leq C_2, \end{align*} we obtain \begin{align} &(t-s)^{2\alpha}\int_{\frac{s}{2}}^s (t-s+u)^{-2\alpha} u^{\sigma-2\alpha} (s-u)^{\beta-\sigma-1}du \notag\\
\leq & 2C_2(t-s)^{\gamma} s^{-1}\int_0^s (s-u)^{\beta-\sigma-1}du \notag\\
= & \frac{2C_2(t-s)^{\gamma} s^{(\beta-\sigma+\gamma)-\gamma-1}}{\beta-\sigma}. \label{E23} \end{align} Meanwhile, \begin{align} &(t-s)^{2\alpha}\int_0^{\frac{s}{2}} (t-s+u)^{-2\alpha} u^{\sigma-2\alpha} (s-u)^{\beta-\sigma-1}du \notag \\
\leq &2^{1-\beta+\sigma} s^{\beta-\sigma-1} (t-s)^{2\alpha}\int_0^{\frac{s}{2}} (t-s+u)^{-2\alpha} u^{\sigma-2\alpha} du \notag \\
\leq &2^{1-\beta+\sigma} s^{\beta-\sigma-1} (t-s)^{1+\sigma-2\alpha}\int_0^\infty (1+r)^{-2\alpha} r^{\sigma-2\alpha} dr \notag \\
= &2^{1-\beta+\sigma} (t-s)^{1+\sigma-2\alpha-\gamma}\int_0^\infty (1+r)^{-2\alpha} r^{\sigma-2\alpha} dr s^{\beta-\sigma-1} (t-s)^\gamma \notag \\
\leq &2^{1-\beta+\sigma} T^{1+\sigma-2\alpha-\gamma}\int_0^\infty (1+r)^{-2\alpha} r^{\sigma-2\alpha} dr s^{\beta-\sigma-1} (t-s)^\gamma \notag \\
\leq & C_3s^{\beta-\sigma-1} (t-s)^\gamma, \label{E25} \end{align} where $$C_3=2^{1-\beta+\sigma} T^{1+\sigma-2\alpha-\gamma}\int_0^\infty (1+r)^{-2\alpha} r^{\sigma-2\alpha} dr <\infty \hspace{0.4cm} \text{(since } 1+\sigma-4\alpha<0). $$ Taking the sum of \eqref{E23} and \eqref{E25} and substituting it for the integral in the right-hand side of \eqref{E21}, we obtain \begin{equation} \label{P21}
\|J_{22}(t,s)\|\leq \Big(\frac{2C_2}{\beta-\sigma}+C_3\Big) \frac{\iota_{1+2\alpha}}{2\alpha} w_{A^{-\alpha} F}(s)s^{(\beta-\sigma+\gamma)-\gamma-1} (t-s)^{\gamma}. \end{equation}
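The elementary inequality $(t-u)^{2\alpha}-(s-u)^{2\alpha}\leq (t-s)^{2\alpha}$ invoked in the estimate of $J_{22}$ is the subadditivity of the concave map $x\mapsto x^{\theta}$ for $\theta=2\alpha\in(0,1]$. As a side remark, it can be spot-checked on random data (the values below are hypothetical and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.01, 1.0, 10_000)   # plays the role of 2*alpha in (0, 1]
y = rng.uniform(0.0, 5.0, 10_000)        # plays the role of s - u >= 0
d = rng.uniform(0.0, 5.0, 10_000)        # plays the role of t - s >= 0
x = y + d                                # plays the role of t - u

# Subadditivity of concave powers: (y + d)^theta <= y^theta + d^theta,
# equivalently  x^theta - y^theta <= (x - y)^theta.
gap = (x - y)**theta - (x**theta - y**theta)
print(bool((gap >= -1e-12).all()))  # True
```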
For $J_{23}(t,s)$, by using \eqref{P5} \begin{align*}
\|J_{23}(t,s)\|&= \|A^{2\alpha-1} [S(t-s)-S(t)] [A^{-\alpha} F(s)-A^{-\alpha} F(t)]\| \\
&\leq \|A^{2\alpha-1} [S(t-s)-S(t)]\| (t-s)^{\sigma -\gamma}w_{A^{-\alpha} F}(t)s^{\beta-\sigma-1} (t-s)^{\gamma}. \end{align*} Then it is clear that there exists $C_4>0$ such that \begin{equation} \label{P22}
\|J_{23}(t,s)\| \leq C_4 w_{A^{-\alpha} F}(t)s^{(\beta-\sigma+\gamma)-\gamma-1} (t-s)^\gamma. \end{equation} Thanks to \eqref{P19}, \eqref{E19}, \eqref{P20}, \eqref{P21} and \eqref{P22}, we conclude that $$J_2\in \mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H).$$
{\it Proof for $J_3$}. Since $t^{1-\beta} A^{-\alpha}F(t)$ has a limit as $t\to 0$, \begin{equation} \label{E27} \lim_{t\to 0} t^{1-\beta}J_3(t)=\lim_{t\to 0} A^{2\alpha-1}[I-S(t)]t^{1-\beta} A^{-\alpha}F(t)=0. \end{equation}
We next write \begin{align} J_3(t)-J_3(s)=&A^{2\alpha-1}[I-S(t)][A^{-\alpha}F(t)-A^{-\alpha}F(s)] \notag\\ &+A^{2\alpha-1}[I-S(t-s)]S(s)A^{-\alpha}F(s). \label{E29} \end{align}
Let us estimate the norm of the first term on the right-hand side of the latter equality. Due to \eqref{P5}, \eqref{P8} and \eqref{P9} there exists $C_5>0$ such that \begin{align}
&\|A^{2\alpha-1}[I-S(t)][A^{-\alpha}F(t)-A^{-\alpha}F(s)]\| \notag\\
&\leq \|A^{2\alpha-1}[I-S(t)]\| w_{A^{-\alpha}F}(t) s^{\beta-\sigma-1} (t-s)^\sigma \notag\\ &\leq C_5 w_{A^{-\alpha}F}(t) s^{(\beta-\sigma+\gamma)-\gamma-1} (t-s)^\gamma. \label{P23} \end{align}
For the norm of the second term, we have \begin{align*}
&\|A^{2\alpha-1}[S(t-s)-I]S(s)A^{-\alpha}F(s)\|\\
&\leq \|[S(t-s)-I]A^{2\alpha-\sigma-1}\| s^{\beta-\sigma-1} \|s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|\\
&\leq \Big\|\int_0^{t-s} A^{2\alpha-\sigma} S(r) dr\Big\| s^{\beta-\sigma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|. \end{align*} If $2\alpha\geq \sigma$ then \begin{align*}
&\|A^{2\alpha-1}[S(t-s)-I]S(s)A^{-\alpha}F(s)\|\\
&\leq \int_0^{t-s} \iota_{2\alpha-\sigma} r^{\sigma-2\alpha}dr s^{\beta-\sigma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|\\
&= \frac{\iota_{2\alpha-\sigma}}{1-2\alpha+\sigma} (t-s)^{1-2\alpha+\sigma-\gamma } (t-s)^\gamma s^{\beta-\sigma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|\\
&\leq \frac{\iota_{2\alpha-\sigma}T^{1-2\alpha+\sigma-\gamma }}{1-2\alpha+\sigma} (t-s)^\gamma s^{(\beta-\sigma+\gamma) -\gamma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|. \end{align*} Meanwhile, if $2\alpha < \sigma$ then by using \eqref{P7} and \eqref{P9} \begin{align*}
&\|A^{2\alpha-1}[S(t-s)-I]S(s)A^{-\alpha}F(s)\|\\
&\leq \int_0^{t-s} \upsilon_{\sigma-2\alpha} \iota_0 dr s^{\beta-\sigma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|\\
&=\upsilon_{\sigma-2\alpha} \iota_0 (t-s)^{1-\gamma} (t-s)^\gamma s^{(\beta-\sigma+\gamma) -\gamma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|\\
&\leq \upsilon_{\sigma-2\alpha} \iota_0 T^{1-\gamma} (t-s)^\gamma s^{(\beta-\sigma+\gamma) -\gamma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|. \end{align*} Hence, there exists $C_6>0$ such that \begin{align}
&\|A^{2\alpha-1}[S(t-s)-I]S(s)A^{-\alpha}F(s)\| \notag\\
&\leq C_6 (t-s)^\gamma s^{(\beta-\sigma+\gamma) -\gamma-1} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|. \label{P24} \end{align}
On the other hand, since $ s^{1-\beta}A^{-\alpha}F(s)$ has a limit as $s\to 0$, by \eqref{P10}
$$\lim_{s\to 0} \| s^\sigma A^\sigma S(s) s^{1-\beta}A^{-\alpha}F(s)\|=0.$$ According to \eqref{E27}, \eqref{E29}, \eqref{P23} and \eqref{P24}, $J_3$ is then verified to be in $\mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H).$
We have thus proved that $A^{\alpha} X$ belongs to $ \mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H)$ for every $0\leq \gamma \leq \sigma$. This completes the proof of the theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem2}] Throughout this proof, the notation $C$ will stand for a universal constant which is determined in each occurrence by the exponents and by the constants $\iota_\theta, \upsilon_\theta \,(\theta\geq 0)$ in a specific way. We will use the following property of stochastic integrals (see \cite{prato} for the proof). \begin{lemma} \label{lemma1.5} Let $ \phi $ be a measurable function from $ ([0,T],\mathcal B([0,T]))$ to $(L_2(U;H),$ $\mathcal B(L_2(U;H)))$ satisfying the condition
$$\int_0^t \|\phi(s)\|_{L_2(U;H)}^2 ds<\infty, \hspace{2cm} t\in [0,T]. $$ Then the stochastic integral $\int_0^t \phi(s) dW(s)$ is well-defined and is continuous in $ t\in [0,T]. $ Furthermore,
$$ \mathbb E\Big\| \int_0^t \phi(s) dW(s)\Big\|^2=\int_0^t\|\phi(s)\|_{L_2(U;H)}^2 ds.$$ In fact, the integral is a continuous square integrable martingale on $[0,T]$. Here and in what follows, continuity means that, almost surely, the trajectories of the stochastic process are continuous in time. \end{lemma} We divide the proof into several steps.
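The isometry in Lemma \ref{lemma1.5} can be illustrated by a scalar Monte Carlo experiment: for a deterministic integrand $\phi$ and a one-dimensional Brownian motion, the Euler approximation of $\int_0^t \phi\, dW$ has mean square equal to the Riemann sum of $\int_0^t \phi^2\, ds$. The sketch below uses the arbitrary test integrand $\phi(s)=\cos s$ and illustrative sample sizes; it is a heuristic check, not a proof.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, t = 20_000, 200, 1.0
h = t / n_steps
s = np.arange(n_steps) * h               # left endpoints of the time partition

phi = np.cos(s)                           # a simple deterministic integrand
dW = rng.normal(0.0, np.sqrt(h), (n_paths, n_steps))  # Brownian increments

ito = (phi * dW).sum(axis=1)              # Euler approximation of  int_0^t phi dW
lhs = (ito**2).mean()                     # Monte Carlo estimate of  E|int phi dW|^2
rhs = (phi**2).sum() * h                  # discrete version of  int_0^t phi^2 ds

print(abs(lhs - rhs) / rhs < 0.05)
```

For a deterministic step integrand the discrete identity $\mathbb E[(\sum_i \phi_i \Delta W_i)^2]=\sum_i \phi_i^2 h$ is exact, so the only discrepancy above is Monte Carlo error.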
{\bf Step 1}. Let us observe that there exists a unique continuous mild solution of \eqref{P12}, which satisfies the estimate \eqref{P13}. Indeed, due to \eqref{P5} and \eqref{P8}, we have \begin{align}
&\int_0^t\|S(t-s)G(s)\|_{L_2(U;H)}^2 ds \notag\\
&\leq \int_0^t\| S(t-s)\|^2 \|G(s)\|_{L_2(U;H)}^2 ds\notag\\
& \leq \int_0^t \iota_0^2 \|G\|_{\mathcal F^{\beta,\sigma}((0,T];L_2(U;H))}^2 s^{2(\beta-1)}ds\notag\\
&=\frac{\iota_0^2 \|G\|_{\mathcal F^{\beta,\sigma}((0,T];L_2(U;H))}^2 t^{2\beta-1}}{2\beta-1}<\infty, \hspace{2cm} t\in [0,T]. \label{P25} \end{align} Therefore, by using Lemma \ref{lemma1.5}, it is easy to see that the stochastic convolution $$W_G(t)=\int_0^t S(t-s)G(s)dW(s)$$
is continuous in $t\in [0,T]$. Thus, $X(t)=S(t) \xi+ W_G(t)$ is the unique continuous mild solution of \eqref{P12}. Furthermore, by \eqref{P8}, \eqref{P9} and \eqref{P25}, \begin{align*}
\mathbb E\|X(t)\|&\leq \mathbb E\|S(t)\xi\|+\mathbb E\|W_G(t)\|\\
&\leq \mathbb E\|S(t)\xi\|+ \sqrt{\mathbb E\|W_G(t)\|^2}\\
&= \mathbb E\|A^{-\beta}S(t) A^\beta \xi\|+ \sqrt{\int_0^t \|S(t-s)G(s)\|_{L_2(U;H)}^2ds}\\
&\leq C[\mathbb E\|A^\beta\xi\|+\|G\|_{\mathcal F^{\beta, \sigma} ((0,T];L_2(U;H))} t^{\beta-\frac{1}{2}}], \hspace{1cm} t\in [0,T].
\end{align*}
{\bf Step 2}. Let us show that for every $\nu\in (0,\frac{1}{2}),$ $X\in \mathcal C((0,T];\mathcal D(A^\nu))$ a.s. Indeed, we have \begin{align*}
&\int_0^t\|A^\nu S(t-s)G(s)\|_{L_2(U;H)}^2 ds\\
&\leq \int_0^t\| A^\nu S(t-s)\|^2 \|G(s)\|_{L_2(U;H)}^2 ds\\
& \leq \int_0^t \iota_\nu^2 \|G\|_{\mathcal F^{\beta,\sigma}((0,T];L_2(U;H))}^2 (t-s)^{-2\nu} s^{2(\beta-1)}ds\\
&=\iota_\nu^2 \|G\|_{\mathcal F^{\beta,\sigma}((0,T];L_2(U;H))}^2 B(2\beta-1,1-2\nu)t^{2(\beta-\nu)-1}\\ &<\infty, \hspace{3cm} t\in (0,T]. \end{align*} Lemma \ref{lemma1.5} then provides that the stochastic integral $\int_0^tA^\nu S(t-s)G(s)dW(s)$ is continuous on $(0,T]$. Since $A^\nu$ is closed, we obtain $$A^\nu W_G(t)=\int_0^tA^\nu S(t-s)G(s)dW(s).$$ Hence $A^\nu W_G$ is continuous on $(0,T]$. On the other hand, since $A^\nu S(t)\xi=A^{\nu-\beta} S(t) A^\beta \xi,$ $A^\nu S(\cdot)\xi$ is continuous on $[0,T]$. In this way, we conclude that $A^\nu X(t)=A^\nu S(t)\xi+ A^\nu W_G(t)$ is continuous in $t\in (0,T]$, i.e. for every $\nu\in (0,\frac{1}{2}),$ $X\in \mathcal C((0,T];\mathcal D(A^\nu))$ a.s..
{\bf Step 3}. Let us verify that for every $\alpha_1\in (0, \frac{1}{2}-\sigma]$ \begin{align*}
\mathbb E\|A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)\|^2 \leq & m(t)^2 s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}, \hspace{1cm} 0< s\leq t\leq T \end{align*} where $m(t)$ is some increasing function on $(0,T]$ such that $\lim_{t\to 0} m(t)=0.$
From the expression $$A^{\alpha_1} W_G(t)=\int_0^t A^{\alpha_1} S(t-r)[G(r)-G(t)]dW(r)+\int_0^t A^{\alpha_1} S(t-r) G(t)dW(r),$$ we have \begin{align*} &A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)\\ =&\int_s^t A^{\alpha_1} S(t-r)[G(r)-G(t)]dW(r)\\ &+\int_0^sA^{\alpha_1} S(t-r)[G(r)-G(t)]dW(r)\\ &-\int_0^s A^{\alpha_1} S(s-r)[G(r)-G(s)]dW(r)\\ &+\int_s^t A^{\alpha_1} S(t-r)G(t)dW(r)+\int_0^s A^{\alpha_1} S(t-r)G(t)dW(r)\\ &-\int_0^s A^{\alpha_1} S(s-r)G(s)dW(r)\\
=&\int_s^t A^{\alpha_1} S(t-r)[G(r)-G(t)]dW(r)\\ &+\int_0^s A^{\alpha_1} S(t-s)S(s-r)[G(r)-G(s)+G(s)-G(t)]dW(r)\\ &-\int_0^s A^{\alpha_1} S(s-r)[G(r)-G(s)]dW(r)+\int_s^t A^{\alpha_1} S(t-r)G(t)dW(r)\\ &+\int_0^s [A^{\alpha_1} S(t-r)G(t)-A^{\alpha_1} S(s-r)G(s)]dW(r)\\
=&\int_s^t A^{\alpha_1} S(t-r)[G(r)-G(t)]dW(r)\\ &+\int_0^s [S(t-s)-I]A^{\alpha_1} S(s-r)[G(r)-G(s)]dW(r)\\ &+\int_0^s A^{\alpha_1} S(t-r)[G(s)-G(t)]dW(r)+\int_s^t A^{\alpha_1} S(t-r)G(t)dW(r)\\ &+\int_0^s A^{\alpha_1} S(s-r)[G(t)-G(s)]dW(r)\\ &+\int_0^s A^{\alpha_1} [S(t-r)-S(s-r)]G(t)dW(r)\\ =&K_1+K_2+K_3+K_4+K_5+K_6. \end{align*} Let us give estimates for the expectation of the square of norm of $K_i \, (i=1,\dots,6)$. For $K_1$, thanks to \eqref{P5} and \eqref{P7} we have \begin{align*}
\mathbb E\|K_1\|^2 = &\int_s^t \|A^{\alpha_1} S(t-r)[G(r)-G(t)]\|_{L_2(U;H)}^2dr\\
\leq &\int_s^t \|A^{\alpha_1} S(t-r)\|^2 \|G(r)-G(t)\|_{L_2(U;H)}^2dr\\
\leq &\iota_{{\alpha_1}}^2 w_{G}(t)^2 \int_s^t (t-r)^{2(\sigma-\alpha_1)} r^{2(\beta-\sigma-1)}dr\\
\leq &\iota_{\alpha_1}^2 w_{G}(t)^2 s^{2(\beta-\sigma-1)} \int_s^t (t-r)^{2(\sigma-\alpha_1)}dr\\
= &\iota_{\alpha_1}^2 w_{G}(t)^2 s^{2(\beta-\sigma-1)} \frac{(t-s)^{1+2(\sigma-\alpha_1)}}{1+2(\sigma-\alpha_1)}\\
\leq &C w_{G}(t)^2 s^{2(\beta-\sigma-1)}(t-s)^{2\sigma}, \end{align*} where
$$w_{G}(t)=\sup_{0\leq s\leq t}\frac{s^{1-\beta+\sigma}\|G(t)-G(s)\|_{L_2(U;H)}}{(t-s)^\sigma}.$$ For $K_2$ we have \begin{align*}
\mathbb E\|K_2\|^2 = & \int_0^s \Big\|\int_0^{t-s} AS(\rho) d\rho A^{\alpha_1} S(s-r) [G(r)-G(s)]\Big\|_{L_2(U;H)}^2 dr\\
\leq & \int_0^s \Big\|\int_0^{t-s} A^{1-\sigma} S(\rho) d\rho\Big\|^2 \|A^{\alpha_1+\sigma} S(s-r)\|^2\\
& \hspace{2cm} \times \|G(r)-G(s)\|_{L_2(U;H)}^2 dr\\
\leq & \iota_{1-\sigma}^2\iota_{\alpha_1+\sigma}^2 w_{G}(s)^2\\ &\times \int_0^s \Big(\int_0^{t-s} \rho^{\sigma-1}d\rho\Big)^2 (s-r)^{-2(\alpha_1+\sigma)} (s-r)^{2\sigma} r^{2(\beta-\sigma-1)}dr\\
\leq &C w_{G}(s)^2 \int_0^s (t-s)^{2\sigma} (s-r)^{-2\alpha_1} r^{2(\beta-\sigma-1)}dr\\
= & C B(2\beta-2\sigma-1,1-2\alpha_1)s^{2(\beta-\sigma-\alpha_1)-1} w_{G}(s)^2 (t-s)^{2\sigma}\\
\leq & C w_{G}(s)^2 s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}. \end{align*} For $K_3$, \begin{align*}
\mathbb E\|K_3\|^2 = & \int_0^s \|A^{\alpha_1} S(t-r) [G(s)-G(t)]\|_{L_2(U;H)}^2dr\\
\leq & \int_0^s \|A^{\alpha_1} S(t-r)\|^2 \|G(s)-G(t)\|_{L_2(U;H)}^2dr\\
\leq & C \int_0^s (t-r)^{-2{\alpha_1}}dr w_{G}(t)^2 (t-s)^{2\sigma} s^{2(\beta-\sigma-1)}\\
\leq & C [t^{1-2\alpha_1}-(t-s)^{1-2\alpha_1} ] w_{G}(t)^2 (t-s)^{2\sigma} s^{2(\beta-\sigma-1)}\\
\leq & C w_{G}(t)^2 s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}. \end{align*} For $K_4$, \begin{align*}
\mathbb E\|K_4\|^2= & \int_s^t \|A^{\alpha_1} S(t-r) G(t)\|_{L_2(U;H)}^2dr\\
\leq & \int_s^t \|A^{\alpha_1} S(t-r)\|^2 \|G(t)\|_{L_2(U;H)}^2dr\\
\leq & C \int_s^t (t-r)^{-2\alpha_1}dr \|G\|_{\mathcal F^{\beta,\sigma}((0,T];L_2(U;H))}^2 t^{2(\beta-1)} \\
\leq & C t^{2(\beta-1)} (t-s)^{1-2\alpha_1}\\
= & C t^{2\sigma} (t-s)^{1-2\alpha_1-2\sigma} t^{2(\beta-\sigma-1)} (t-s)^{2\sigma}\\
\leq & C t^{2\sigma} T^{1-2\alpha_1-2\sigma} s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}\\
\leq & C t^{2\sigma} s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}.\end{align*} For $K_5$, \begin{align*}
\mathbb E\|K_5\|^2 =& \int_0^s \|A^{\alpha_1} S(s-r)[G(t)-G(s)]\|_{L_2(U;H)}^2dr\\
\leq & \int_0^s \|A^{\alpha_1} S(s-r)\|^2 \|G(t)-G(s)\|_{L_2(U;H)}^2dr\\
\leq &C \int_0^s (s-r)^{-2\alpha_1}dr w_{G}(t)^2 s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}\\
\leq &C s^{2(\beta-\sigma-\alpha_1)-1} w_{G}(t)^2 (t-s)^{2\sigma}\\
\leq &C w_{G}(t)^2 s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}. \end{align*} Finally, for $K_6$ we have \begin{align*}
\mathbb E\|K_6\|^2=& \int_0^s \|A^{\alpha_1} [S(t-r)-S(s-r)]G(t)\|_{L_2(U;H)}^2 dr\\
\leq & \int_0^s \|A^{\alpha_1+\sigma}S(s-r)\|^2 \|[S(t-s)-I]A^{-\sigma}\|^2 \|G(t)\|_{L_2(U;H)}^2 dr\\
= & \int_0^s \|A^{\alpha_1+\sigma}S(s-r)\|^2 \Big\|\int_0^{t-s} A^{1-\sigma} S(\rho)d\rho\Big\|^2 dr\|G(t)\|_{L_2(U;H)}^2\\
\leq & \iota_{\alpha_1+\sigma}^2 \iota_{1-\sigma}^2 \int_0^s (s-r)^{-2(\alpha_1+\sigma)} dr \Big[\int_0^{t-s} \rho^{\sigma-1}d\rho\Big]^2 \\
&\times \|G\|_{\mathcal F^{\beta,\sigma}((0,T]; L_2(U;H))}^2 t^{2(\beta-1)} \\
\leq &C s^{1-2(\alpha_1+\sigma)}t^{2\sigma} t^{2(\beta-\sigma-1)} (t-s)^{2\sigma}\\
\leq &C T^{1-2(\alpha_1+\sigma)} t^{2\sigma} s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}\\ \leq & Ct^{2\sigma} s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}. \end{align*} In this way, we conclude that \begin{align*}
\mathbb E\|A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)\|^2&\leq 6\sum_{i=1}^6 \mathbb E\|K_i\|^2\\
&\leq m(t)^2 s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}, \end{align*} where $m(\cdot)$ is some increasing function on $(0,T]$ such that $\lim_{t\to 0} m(t)=0$.
{\bf Step 4}. Let us verify that for $ 0<\gamma<\sigma,$ $0<\alpha_1\leq \frac{1}{2}-\sigma$ and $\epsilon\in (0,T]$ $$A^{\alpha_1} X\in \mathcal C^\gamma([\epsilon,T];H) \hspace{1cm}\text{ a.s.},$$ and
$$\mathbb E \|A^{\alpha_1} X\|\in \mathcal F^{\beta,\sigma}((0,T];\mathbb R). $$ On account of {\bf Step 2}, $A^{\alpha_1} W_G$ is a Gaussian process. Using the estimate in {\bf Step 3}, we apply the Kolmogorov continuity theorem to the process $A^{\alpha_1} W_G.$ We then obtain $$A^{\alpha_1} W_G\in \mathcal C^\gamma([\epsilon,T];H).$$ Furthermore, it has already been verified in Theorem \ref{theorem1} that $$J_1=A^{\alpha_1} S(t) \xi\in \mathcal F^{\beta,\gamma}((0,T]; H)$$
(this result holds true for every $\alpha_1\in (0,1)).$ Noting that $$ \mathcal F^{\beta,\gamma}((0,T];H) \subset \mathcal C^\gamma([\epsilon,T];H), \hspace{1cm} 0<\epsilon\leq T,$$ we arrive at the first statement, i.e. $$A^{\alpha_1} X=J_1+A^{\alpha_1} W_G\in \mathcal C^\gamma([\epsilon,T];H).$$
We shall show the second one. Due to the estimate in {\bf Step 3}, \begin{align*}
[\mathbb E\|A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)\|]^2 &\leq \mathbb E\|A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)\|^2\\
&\leq m(t)^2 s^{2(\beta-\sigma-1)} (t-s)^{2\sigma}. \end{align*} Hence, \begin{equation} \label{P26}
\frac{ s^{1-\beta+\sigma} \mathbb E\|A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)\| }{ (t-s)^{\sigma}} \leq m(t). \end{equation} Furthermore, by choosing $\gamma=\sigma$ in the estimates \eqref{P16}, \eqref{P17} and \eqref{P18}, which hold true for every ${\alpha_1}\in (0,1)$, we obtain \begin{align*}
&\frac{s^{1-\beta+\sigma} \|A^{\alpha_1}[S(t)-S(s)] \xi\|}{(t-s)^\sigma} \leq C \|A^\beta \xi\| \max\{(t-s)^{1-\alpha_1},(t-s)^{1-\sigma}\}. \end{align*} Taking the expectation of both sides of the above estimate, it follows that \begin{align}
&\frac{s^{1-\beta+\sigma}\mathbb E \|J_1(t)-J_1(s)\|}{(t-s)^\sigma} \leq C \mathbb E \|A^\beta \xi\| \max\{t^{1-\alpha_1},t^{1-\sigma}\}. \label{P27} \end{align} Combining \eqref{P26} and \eqref{P27} yields that \begin{align*}
&\frac{ s^{1-\beta+\sigma} |\mathbb E\|A^{\alpha_1} X(t)\|-\mathbb E\|A^{\alpha_1} X(s)\| |}{ (t-s)^{\sigma}} \notag\\
&\leq \frac{ s^{1-\beta+\sigma} \mathbb E\|A^{\alpha_1} X(t)-A^{\alpha_1} X(s)\| }{ (t-s)^{\sigma}} \notag\\
&= \frac{ s^{1-\beta+\sigma} \mathbb E\|[A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)] +[J_1(t)-J_1(s)]\| }{ (t-s)^{\sigma}} \notag\\
& \leq \frac{ s^{1-\beta+\sigma} [\mathbb E\|A^{\alpha_1} W_G(t)-A^{\alpha_1} W_G(s)\| + \mathbb E \|J_1(t)-J_1(s)\|]}{ (t-s)^{\sigma}}\notag\\
& \leq m(t)+ C \mathbb E\|A^\beta \xi\| \max\{t^{1-\alpha_1},t^{1-\sigma}\}. \end{align*} Therefore, \begin{equation} \label{P28}
\sup_{0\leq s<t\leq T} \frac{ s^{1-\beta+\sigma} |\mathbb E\|A^{\alpha_1} X(t)\|-\mathbb E\|A^{\alpha_1} X(s)\| |}{ (t-s)^{\sigma}}<\infty \end{equation} and \begin{equation} \label{P29}
\lim_{t\to 0} \sup_{0\leq s<t} \frac{ s^{1-\beta+\sigma} |\mathbb E\|A^{\alpha_1} X(t)\|-\mathbb E\|A^{\alpha_1} X(s)\| |}{ (t-s)^{\sigma}}=0. \end{equation}
On the other hand, by using the equality
$$\mathbb E\|A^{\alpha_1} W_G(t)\|^2=\int_0^t \|A^{\alpha_1} S(t-s) G(s)\|_{L_2(U;H)}^2 ds$$ and the estimate in {\bf Step 2}, we have \begin{align*}
t^{1-\beta} \mathbb E\|A^{\alpha_1} W_G(t)\| & \leq t^{1-\beta} \sqrt{\mathbb E\|A^{\alpha_1} W_G(t)\|^2}\\
&\leq \iota_{\alpha_1} \|G\|_{\mathcal F^{\beta,\sigma}((0,T];L_2(U;H))} t^{\frac{1}{2}-\alpha_1}\sqrt{ B(2\beta-1,1-2\alpha_1)}. \end{align*} Hence,
$$\lim_{t\to 0} t^{1-\beta} \mathbb E\|A^{\alpha_1} W_G(t)\|=0.$$ In addition, thanks to \eqref{P10}, \begin{align*}
\lim_{t\to 0} t^{1-\beta} \mathbb E\|J_1(t)\|=\lim_{t\to 0} \mathbb E\|A^{\alpha_1-1} t^{1-\beta} A^{1-\beta}S(t) A^\beta \xi\|=0. \end{align*} In this way, we have proved that \begin{equation} \label{P30}
\lim_{t\to 0} t^{1-\beta} \mathbb E\|A^{\alpha_1} X(t)\| =\lim_{t\to 0} t^{1-\beta} \mathbb E\|J_1(t)+A^{\alpha_1} W_G(t)\|=0. \end{equation} By \eqref{P28}, \eqref{P29} and \eqref{P30}, the second statement has been verified. This completes the proof of the theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem3}] Clearly, the assumptions of Theorem \ref{theorem1} and Theorem \ref{theorem2} are fulfilled. Therefore,
it is easy to see that Theorem \ref{theorem3} follows from these two theorems. \end{proof}
\section{An application to neurophysiology} \label{section3}
In this section, we will apply our results to a model arising in neurophysiology, which was proposed by Walsh \cite{Walsh}. Nerve cells operate by a mixture of chemical, biological and electrical properties. We shall regard them as long thin cylinders, which act much like electrical cables. Denote by $V(t,x)$ the electrical potential at time $t$ and point $x$. If we identify such a cylinder with the interval $[0,L]$, then $V$ satisfies a nonlinear equation coupled with a system of ordinary differential equations, called the Hodgkin-Huxley equations. In certain ranges of the potential, these equations are approximated by the cable equation:
\begin{equation} \label{P31}
\begin{cases} \frac{\partial V}{\partial t}=\frac{\partial^2 V}{\partial x^2} -V, \\
V\colon [0,\infty)\times [0,L] \to \mathbb R, \\
t\geq 0, x\in [0,L].
\end{cases}
\end{equation}
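For readers who wish to experiment with \eqref{P31}, a minimal explicit finite-difference sketch is given below. It evolves the eigenmode $V_0(x)=\cos(\pi x/L)$ under insulated (Neumann) ends, anticipating the boundary conditions imposed in \eqref{P32}; the grid size and final time are arbitrary illustrative choices. For this initial datum the exact solution is $e^{-(1+\pi^2/L^2)t}\cos(\pi x/L)$.

```python
import numpy as np

L, n = 1.0, 100
h = L / n
x = np.linspace(0.0, L, n + 1)
dt = 0.4 * h**2                  # respects the explicit-scheme stability bound dt <= h^2/2
steps = int(0.1 / dt)            # integrate up to time T = steps*dt = 0.1

V = np.cos(np.pi * x / L)        # initial potential: a Neumann eigenmode

for _ in range(steps):
    lap = np.empty_like(V)
    lap[1:-1] = (V[:-2] - 2*V[1:-1] + V[2:]) / h**2
    lap[0] = 2*(V[1] - V[0]) / h**2        # ghost-point Neumann condition at x = 0
    lap[-1] = 2*(V[-2] - V[-1]) / h**2     # ghost-point Neumann condition at x = L
    V = V + dt * (lap - V)                 # discrete version of  V_t = V_xx - V

exact = np.exp(-(1 + (np.pi / L)**2) * steps * dt) * np.cos(np.pi * x / L)
print(float(np.abs(V - exact).max()) < 1e-2)  # True
```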
Let us consider \eqref{P31} with an additional external force, called impulses of current. Neurons receive these impulses of current via synapses on their surface. Denote by $F(t)$ the current arriving at time $t$, then the electrical potential satisfies the equation:
\begin{equation*} \begin{cases} \frac{\partial V}{\partial t}=\frac{\partial^2 V}{\partial x^2} -V +F(t), \\
V\colon [0,\infty)\times [0,L] \to \mathbb R, \\
t\geq 0, x\in [0,L].
\end{cases}
\end{equation*} Suppose now that the current $F$ is perturbed by a space-time white noise $\dot W$, i.e. $$F(t)\leadsto F(t) + G(t) \dot W.$$
This leads us to the stochastic partial differential equation:
\begin{equation} \label{P32}
\begin{cases} \begin{aligned} &\frac{\partial V}{\partial t}(t,x)=\frac{\partial^2 V}{\partial x^2}(t,x) -V(t,x) +F(t) + G(t) \frac{\partial W(t)}{\partial t},&\hspace{0.5cm} t> 0, x\in [0,L],\\ & \frac{\partial V}{\partial x}(t,0)= \frac{\partial V}{\partial x}(t,L)=0, &\hspace{0.5cm} t>0,\\ &V(0,x)=V_0(x), &\hspace{0.5cm}x\in (0,L), \end{aligned}
\end{cases}
\end{equation} where $W$ is a cylindrical Wiener process defined on a filtered probability space $(\Omega, \mathcal F,\mathcal F_t,\mathbb P),$ $V_0$ is an $\mathcal F_0$-measurable random variable. More explanations for the construction of the system \eqref{P32} can be found in \cite{Walsh}.
Let us next construct an abstract formulation for \eqref{P32}. We will handle the equation in the Hilbert space $H=L_2([0,L])$. We assume that the cylindrical Wiener process $W$ takes values in a separable Hilbert space $U$, and that $F$ and $G$ are measurable from $([0,\infty),\mathcal B([0,\infty)))$ to $ (H,\mathcal B(H))$ and $(L_2(U;H),\mathcal B(L_2(U;H))), $ respectively. Denote by $A$ the realization of the operator $-\frac{\partial^2}{\partial x^2}+I$
in $H$ under the Neumann boundary conditions on the boundary $\{0,L\}$. We formulate the equation \eqref{P32} as the Cauchy problem for an abstract stochastic linear equation \begin{equation} \label{P33} \begin{cases} dV+AV=F(t)+G(t)dW(t), \hspace{1cm} 0<t<\infty,\\ V(0)=V_0 \end{cases} \end{equation} in $H$. \begin{lemma}\label{lemma2} The operator $A$ is a positive definite self-adjoint operator on $H$ with domain $$\mathcal D(A)=H_N^2([0,L]):=\{u\in H^2([0,L]) \text{ such that } \frac{\partial u}{\partial x}(0)= \frac{\partial u}{\partial x}(L)=0\}.$$
In addition, the domains of fractional powers $A^\theta, 0\leq \theta\leq 1,$ are characterized by \begin{equation*} \mathcal D(A^\theta)= \begin{cases} H^{2\theta}([0,L]) \hspace{1cm} \text{ for } 0\leq \theta<\frac{3}{4},\\ H_N^{2\theta}([0,L]) \hspace{1cm} \text{ for } \frac{3}{4}< \theta\leq 1. \end{cases} \end{equation*} \end{lemma} Lemma \ref{lemma2} is a special case of Theorem 16.7 and Theorem 16.9 in \cite{yagi}. It then follows that $A$ is a sectorial operator.
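As a numerical illustration of Lemma \ref{lemma2}: the spectrum of $A=-\frac{\partial^2}{\partial x^2}+I$ under Neumann conditions consists of the eigenvalues $1+(k\pi/L)^2$, $k=0,1,2,\dots$, with eigenfunctions $\cos(k\pi x/L)$, which makes the positivity of $A$ explicit. The sketch below recovers the first few eigenvalues from a ghost-point finite-difference discretization (the grid size is an arbitrary choice).

```python
import numpy as np

L, n = 1.0, 400
h = L / n

# Ghost-point finite-difference matrix for -d^2/dx^2 with Neumann
# conditions at x = 0 and x = L, plus the identity (discretizing A).
M = np.zeros((n + 1, n + 1))
for i in range(1, n):
    M[i, i - 1:i + 2] = [-1.0, 2.0, -1.0]
M[0, 0], M[0, 1] = 2.0, -2.0
M[n, n - 1], M[n, n] = -2.0, 2.0
A = M / h**2 + np.eye(n + 1)

eigs = np.sort(np.linalg.eigvals(A).real)[:4]
exact = 1 + (np.arange(4) * np.pi / L)**2   # 1 + (k*pi/L)^2, k = 0..3
print(bool(np.abs(eigs - exact).max() < 1e-2))  # True
```

The discrete eigenvalues are exactly $1+\frac{4}{h^2}\sin^2\!\big(\frac{k\pi h}{2L}\big)$, which converge at rate $O(h^2)$ to the continuous ones.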
The following theorem follows from Theorem \ref{theorem1}, Theorem \ref{theorem2} and Theorem \ref{theorem3}. It shows global existence and regularity of solutions to \eqref{P33}. \begin{theorem} \label{theorem5}
Assume that $V_0\in \mathcal D(A^\beta) $ a.s. and $\mathbb E\|A^\beta V_0\|<\infty.$ \begin{itemize} \item [\rm (i)] If $G\equiv 0$ on $[0,\infty)$ (i.e. \eqref{P33} is a deterministic equation) and $F$ satisfies {\rm (H3)} for every $T>0,$ then the equation \eqref{P33} has a unique continuous mild solution $V$ on $[0,\infty)$ possessing the regularity: $$V\in \mathcal C((0,\infty);\mathcal D(A^{1-\alpha})), $$ $$A^{\alpha} V\in \mathcal C([0,\infty);H)\cap \mathcal F^{\beta-\sigma+\gamma,\gamma}((0,T];H), $$
and satisfying the estimate \begin{equation*}
\|V(t)\|\leq \iota_\alpha B(\beta,1-\alpha) \|A^{-\alpha}F\|_{\mathcal F^{\beta,\sigma}} t^{\beta-\alpha} +\iota_0 \|V_0\| \end{equation*} for every $T>0$, $t\in [0,T]$ and $\gamma\in [0, \sigma]$. \item [\rm (ii)] If $F\equiv 0$ on $[0,\infty)$ and $G$ satisfies {\rm (H4)} for every $T>0,$ then \eqref{P33} has a unique mild solution $V$ on $[0,\infty)$ possessing the regularity: $$V\in \mathcal C((0,\infty);\mathcal D(A^\nu)), \quad A^{\alpha_1} V\in \mathcal C^\gamma([\epsilon,T];H) \hspace{1cm}\text{ a.s.}$$ and
$$\mathbb E \|A^{\alpha_1} V\|\in \mathcal F^{\beta,\sigma}((0,T];\mathbb R), $$
and satisfying the estimate
\begin{equation*}
\mathbb E\|V(t)\|\leq C[\mathbb E\|A^\beta V_0\|+\|G\|_{\mathcal F^{\beta, \sigma} ((0,T];L_2(U;H))} t^{\beta-\frac{1}{2}}], \hspace{1cm} t\in [0,T]
\end{equation*}
for every $T>0, \nu\in (0,\frac{1}{2}), \alpha_1\in (0, \frac{1}{2}-\sigma], \gamma\in (0,\sigma)$ and $\epsilon\in (0,T]$, where $C$ is some constant depending only on the exponents and constants $\iota_\theta, \upsilon_\theta \,(\theta\geq 0)$. \item [\rm (iii)] If $F$ and $G$ satisfy {\rm (H3)} and {\rm (H4)} for every $T>0$ and $\alpha\in (0, \frac{1}{2}-\sigma],$ then there exists a unique mild solution of \eqref{P33} possessing the regularity: $$V\in \mathcal C((0,\infty);\mathcal D(A^\nu)), \quad A^{\alpha} V\in \mathcal C^\gamma([\epsilon,T];H) \hspace{1cm}\text{ a.s.}$$ and
$$\mathbb E \|A^{\alpha} V\|\in \mathcal F^{\beta,\sigma}((0,T];\mathbb R), $$
and satisfying the estimate
\begin{equation*}
\mathbb E\|V(t)\|\leq C[\mathbb E\|A^\beta V_0\|+\|A^{-\alpha} F\|_{\mathcal F^{\beta,\sigma}} t^{\beta-\alpha}+\|G\|_{\mathcal F^{\beta, \sigma} ((0,T];L_2(U;H))} t^{\beta-\frac{1}{2}}]
\end{equation*}
for every $t\in [0,T],$ $T>0, $ $ \nu\in (0,\frac{1}{2}), \gamma\in (0,\sigma)$ and $\epsilon\in (0,T]$, where $C$ is some constant depending only on the exponents and constants $\iota_\theta, \upsilon_\theta \,(\theta\geq 0)$. \end{itemize} \end{theorem} \begin{remark} A result in \cite{Walsh} shows that $V$ is a H\"{o}lder continuous function with exponent $\frac{1}{4}-\epsilon$, for any $\epsilon>0$. We have improved this result by showing that $A^\alpha V$ is H\"{o}lder continuous with an arbitrary exponent smaller than $\sigma$ (noting that $\sigma$ in Theorem \ref{theorem5} is possibly larger than $\frac{1}{4}$). Using the same framework, we can treat the equation \eqref{P32} in higher dimensions and obtain similar results. \end{remark}
\end{document}
\begin{document}
\draft
\title{Entangling macroscopic oscillators exploiting radiation pressure}
\author{ Stefano Mancini$^{1,3}$, Vittorio Giovannetti$^{2}$, David Vitali$^{3}$ and Paolo Tombesi$^{3}$}
\address{ $^{1}$INFM, Dipartimento di Fisica, Universit\`a di Milano, Via Celoria 16, I-20133 Milano, Italy \\ $^{2}$Research Laboratory of Electronics, MIT - Cambridge, MA 02139, USA \\ $^{3}$INFM, Dipartimento di Matematica e Fisica, Universit\`a di Camerino, I-62032 Camerino, Italy}
\date{\today}
\maketitle
\begin{abstract} It is shown that radiation pressure can be profitably used to entangle {\it macroscopic} oscillators like movable mirrors, using present technology. We prove a new sufficient criterion for entanglement and show that the achievable entanglement is robust against thermal noise. Its signature can be revealed using common optomechanical readout apparatus. \end{abstract}
\pacs{03.65.Ud, 42.50.Vk, 03.65.Ta}
\begin{multicols}{2}
The fundamental role of {\it entanglement} \cite{SCH} in quantum mechanics has been re-emphasized in recent years \cite{ENT}. In this context, an important point is to assess whether this peculiarity of the quantum world could be applicable to macroscopic bodies and, moreover, be measurable.
A substantial literature on methods to prepare atoms in entangled states already exists \cite{SAK}, and the recent experiment generating entanglement of two gas samples \cite{JUL} is a striking achievement. A real challenge, then, is to apply similar arguments to macroscopic, massive oscillators. Here we propose an experiment, realizable with present technology, showing that it is possible to entangle massive oscillators by exploiting the radiation pressure force.
It is indeed usually believed that, being a superposition of states, entanglement between massive, macroscopic objects is practically impossible to detect because of the fast diagonalization of the system's density matrix due to the coupling with the environment \cite{ZUR}. On the contrary, by using a new sufficient criterion for entanglement, we shall derive the parameter region for which two massive, movable cavity mirrors can be entangled by the radiation pressure exerted by a cavity mode, and we shall show how to measure the degree of entanglement. Besides its foundational interest, the ability to place such oscillators in entangled states may even prove useful in applications, as in high precision measurements \cite{HOLL}.
To be concrete, as a specific model we consider two end mirrors of an optical cavity, which can both oscillate under the effect of radiation pressure force. Cavities with one movable mirror have already been studied \cite{VAR}, and a wide class of quantum states resulting from optomechanical coupling was proposed \cite{NCS}. Furthermore, due to recent technological developments in optomechanics, this area is now becoming experimentally accessible \cite{EXP}.
As pointed out in Ref. \cite{BRAG}, under the assumption that the measurement time is less than or of the order of the mechanical relaxation time, it is possible to consider a macroscopic oscillator, i.e., a movable mirror in our case, as a quantum oscillator. Then, provided the oscillation frequency is not too high with respect to the inverse round-trip time of photons within the cavities, we can write the Hamiltonian of the system sketched in Figure \ref{fig1} as \begin{eqnarray}\label{H} {\cal H}&=&\sum_{i=1}^{2}\hbar\omega_{a} a_{i}^{\dag}a_{i} +\hbar\omega_{b} b^{\dag}b +\hbar\Omega\sum_{i=1}^{2}\left( \frac{p_{i}^{2}}{2}+\frac{q_{i}^{2}}{2}\right) \nonumber\\ &&-\hbar g a^{\dag}_{1} a_{1} q_{1} +\hbar g a^{\dag}_{2} a_{2} q_{2} +\hbar G b^{\dag} b \left(q_{1}-q_{2}\right) \nonumber\\ &&+i\hbar\sqrt{\gamma_a}\sum_{i=1}^{2} \left(\alpha^{in}e^{-i\omega_{a0}t}a_i^{\dag} -\alpha^{in\,*}e^{i\omega_{a0}t}a_i\right) \nonumber\\ &&+i\hbar\sqrt{\gamma_b}\left(\beta^{in}e^{-i\omega_{b0}t}b^{\dag} -\beta^{in\,*}e^{i\omega_{b0}t}b\right)\,, \end{eqnarray} where $a_{i}$, $a_{i}^{\dag}$ are the destruction and creation operators of the electromagnetic fields corresponding to the meter modes and $\omega_a$ their frequency (assumed equal for simplicity). Instead, $b$, $b^{\dag}$ are those of the {\it entangler} mode (the use of this terminology will become clear in the following) and $\omega_b$ its frequency. Finally, $q_{i}$ and $p_{i}$ are the dimensionless position and momentum operators of the mirrors $M_i$, both oscillating at frequency $\Omega$ and having mass $m$. The first row of equation (\ref{H}) simply represents the free Hamiltonian, whereas the second represents the effect of the radiation pressure force, which causes the instantaneous displacement of the mirrors \cite{LAW}. 
The coupling constants are $g={\tilde g}/\sqrt{m\Omega}$ and $G={\tilde G}/\sqrt{m\Omega}$ where ${\tilde g}$, ${\tilde G}$ are related to the cavity mode frequencies, to the equilibrium length of the cavities, and to the reflection angles \cite{VAR,LAW}. The last two rows represent the driving fields' action in the usual rotating wave approximation. We assume that both meters ($a_1$, $a_2$) are driven at frequency $\omega_{a0}$, while the entangler mode $b$ is driven at frequency $\omega_{b0}$; $\alpha^{in}$, $\beta^{in}$ are the classical fields characterizing the input laser powers
$P_a^{in}= \hbar \omega_{a0} |\alpha^{in}|^{2}$,
$P_b^{in}= \hbar \omega_{b0} |\beta^{in}|^{2}$, and $\gamma_{a}$, $\gamma_b$ are the cavity linewidths.
\begin{figure}
\caption{\narrowtext Schematic description of the system under study. For the sake of simplicity the oscillating mirrors $M1$ and $M2$ are assumed to be identical. An intense light field $b$ (entangler) couples the moving mirrors. Their tiny movements (indicated by the arrows) are then detected through the meter modes $a_1$ and $a_2$, which are subjected to homodyne measurement at $D1$ and $D2$. Finally, the two output currents are combined to obtain the center-of-mass or relative mirror coordinate. }
\label{fig1}
\end{figure}
By considering the unitary evolution of the two mirrors and the entangler, neglecting the meter modes and the driving terms in Eq. (\ref{H}), it can be easily checked that the former become entangled once a von Neumann projection onto the $b$-mode quadrature is performed (this is why we named such mode the entangler). A detailed analysis of the problem, however, must include photon losses, the thermal noise on the mirrors, and the measurement backaction. This means that the interaction of all optical modes with their respective reservoirs and the effect of thermal fluctuations on the two mirrors, not considered in Hamiltonian (\ref{H}), must be added to this equation. This can be accomplished in the standard way \cite{milwal,VIT}. The resulting Hamiltonian gives rise to nonlinear Langevin equations whose linearization around the steady state leads to \begin{equation}\label{LINEQS} \begin{array}{l} \dot{a}_j = i\Delta_a a_j + (-)^{(j+1)} i g \alpha q_j -\frac{\gamma_a}{2} a_j+\sqrt{\gamma_a} a^{in}_j \,, \\ \\ \dot{b} = i\Delta_b b-i G\beta (q_1-q_2) - \frac{\gamma_b}{2} b+\sqrt{\gamma_b} b^{in}\,, \\ \\ \dot{q}_j=\Omega p_j \,, \\ \\ \dot{p}_j=-\Omega q_j+(-)^{(j+1)}g\alpha(a_j+a^{\dag}_j) \\ \\ \hspace{0.8 in} +(-)^j G (\beta^* b+\beta b^{\dag})-\Gamma p_j+\xi_j \,, \end{array} \end{equation} where $j=1,2$, and all the operators now represent small fluctuations around the steady-state values. These are \begin{eqnarray} \begin{array}{l} \langle q_j \rangle_{ss}
= (-)^{j}[G|\beta|^2-g|\alpha|^2]/\Omega, \\ \\ \langle p_j \rangle_{ss} = 0, \\ \\ \alpha \equiv \langle a_j \rangle_{ss} =\sqrt{\gamma_{a}} \alpha^{in}/ [\gamma_a/2 -i\Delta_a], \\ \\ \beta \equiv \langle b \rangle_{ss} =\sqrt{\gamma_b} \beta^{in}/ [\gamma_b/2-i\Delta_b]. \end{array} \end{eqnarray} Moreover, $\Delta_a\equiv\omega_{a0}-\omega_a +g\langle q_1 \rangle_{ss}$, $\Delta_b\equiv\omega_{b0}-\omega_b -G(\langle q_1 \rangle_{ss}-\langle q_2 \rangle_{ss})$, are the radiation phase shifts due to the detuning and to the stationary displacement of the mirrors. Both radiation fields used as meters ($a_1$, $a_2$) are damped through output fixed mirrors at the same rate $\gamma_a$, while the entangler mode $b$ is damped at rate $\gamma_b$. Furthermore, $\Gamma$ is the mechanical damping rate for the mirrors' Brownian motion. Without loss of generality, we choose $\alpha$ real and $\Delta_a=0$. The operators $a^{in}_{j}(t)$ and $b^{in}(t)$ represent the vacuum (white) noise operators at the cavity inputs. The noise operator for the quantum Brownian motion of the mirrors is $\xi_j(t)$. The non-vanishing noise correlations are \begin{eqnarray}\label{NOISE} &&\langle a^{in}_j(t) a^{in\,\dag}_k(t') \rangle = \delta(t-t')\,\delta_{j,k}\,,\quad j,k=1,2\,, \nonumber\\ &&\langle b^{in}(t) b^{in\,\dag}(t') \rangle = \delta(t-t')\,, \\ &&\langle {\xi_j}(t) {\xi_k}(t') \rangle = \delta_{j,k}\,\frac{\Gamma}{2\Omega}\int \, d\omega \, \omega\, e^{-i\omega(t-t')} \left[\coth\left( \hbar\omega/2k_BT\right)-1 \right] \,,\nonumber \end{eqnarray} where $k_B$ is the Boltzmann constant and $T$ the equilibrium temperature (the two mirrors are considered in equilibrium with their respective baths at the same temperature). Notice that the approach used for the Brownian motion is quantum mechanically consistent at every temperature \cite{VIT}.
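As a consistency check, the linear system of Eqs.~(\ref{LINEQS}) can be cast as a ten-dimensional first-order system for $(a_1,a_1^{\dag},a_2,a_2^{\dag},b,b^{\dag},q_1,p_1,q_2,p_2)$, whose drift matrix must have eigenvalues with negative real parts for a well-defined steady state. A minimal numerical sketch with the parameters of Fig.~\ref{fig2}; the intracavity amplitudes $\alpha$, $\beta$ below are our own small illustrative choices (the actual values follow from the input powers), picked so that the linearized dynamics is comfortably stable:

```python
import numpy as np

# Fig. 2 parameters (units s^-1); alpha and beta are illustrative
# assumptions, not values fixed by the text
gamma_a = gamma_b = Delta_b = 1e5
Omega, Gamma, g, G = 1e5, 1.0, 0.5, 5.0
alpha, beta = 10.0, 10.0        # intracavity amplitudes, taken real

# basis ordering: (a1, a1^+, a2, a2^+, b, b^+, q1, p1, q2, p2)
M = np.zeros((10, 10), dtype=complex)
for k, (ia, iq, ip) in enumerate([(0, 6, 7), (2, 8, 9)]):
    sgn = 1 if k == 0 else -1                      # (-)^{j+1}, j = 1, 2
    M[ia, ia] = M[ia + 1, ia + 1] = -gamma_a / 2   # Delta_a = 0
    M[ia, iq] = sgn * 1j * g * alpha               # da_j/dt radiation term
    M[ia + 1, iq] = -sgn * 1j * g * alpha
    M[iq, ip] = Omega                              # dq_j/dt = Omega p_j
    M[ip, iq] = -Omega
    M[ip, ia] = M[ip, ia + 1] = sgn * g * alpha    # meter backaction on p_j
    M[ip, 4] = M[ip, 5] = -sgn * G * beta          # entangler force, (-)^j
    M[ip, ip] = -Gamma
M[4, 4] = 1j * Delta_b - gamma_b / 2
M[5, 5] = -1j * Delta_b - gamma_b / 2
M[4, 6], M[4, 8] = -1j * G * beta, 1j * G * beta
M[5, 6], M[5, 8] = 1j * G * beta, -1j * G * beta

eigs = np.linalg.eigvals(M)
print(eigs.real.max() < 0)  # True: fluctuations relax to the steady state
```

With $\Delta_a=0$ the meter quadrature $a_j+a_j^{\dag}$ decouples from the mirror motion, so only the entangler mode can modify the mechanical damping; for small $\beta$ this correction is far below $\Gamma$ and stability is manifest.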
The unitary evolution under the linearized Hamiltonian leading to system of Eqs. (\ref{LINEQS}) gives entanglement, as in the non-linearized case discussed above. Hence, the main task is to see whether such quantum correlations are visible or blurred by noisy effects. To accomplish this task, we first solve the system (\ref{LINEQS}) in the frequency domain by introducing the pseudo Fourier transform ${\cal O}(\omega)=\tau^{-1/2} \int_{-\tau/2}^{\tau/2} \, dt\, e^{i\omega t} {\cal O}(t)$ for each operator ${\cal O}$, where $\tau$ is the measurement time assumed to be large compared to all the relevant timescales of the system dynamics.
Let us now consider the measured current at each meter output. The boundary relations for the meters radiation fields \cite{GAR}, i.e., $a^{out}_j = \sqrt{\gamma_a}a_j-a^{in}_j$, yield the phase quadratures $Y_j=-i(a_j-a^{\dag}_j)$ at the output, namely \begin{equation}\label{YOUT} Y^{out}_j(\omega)=\frac{2g\alpha\sqrt{\gamma_a}} {\gamma_a/2-i\omega}q_j(\omega) +\frac{\gamma_a/2+i\omega}{\gamma_a/2-i\omega} Y^{in}_j(\omega)\,. \end{equation} Thus, the measurement of the output quadrature $Y^{out}_j$, in the detection box $D_j$, indirectly gives the mirror position $q_j$. More precisely, in homodyne detection, the positive and negative frequency components of the quadrature being measured are combined through a proper modulation, in order to achieve the measurement of a hermitian operator \cite{GAR}. Then, it would be possible to indirectly measure either $[q_j(\omega)+q_j(-\omega)]$ or $i[ q_j(-\omega)-q_j(\omega)] $, which implies the possibility of measuring position or momentum for each macroscopic oscillator.
These measurements can be used to establish when the two oscillating cavity mirrors are entangled. Sufficient criteria for entanglement of continuous variable systems already exist \cite{DUAN,SIMON}, but here we shall introduce a {\em new} sufficient inseparability criterion, involving the {\em product} of variances of continuous observables:
{\em Theorem}. If we define $u=q_{1}+q_{2}$ and $v=p_{1}-p_{2}$, then, for any separable quantum state $\rho $, one has \begin{equation} \label{e3} \left\langle \left( \Delta u\right) ^{2}\right\rangle \left\langle \left( \Delta v\right) ^{2}\right\rangle
\geq |\langle[q_1,p_1]\rangle|^2 \,. \end{equation} See the appendix for the proof.
This theorem allows us to establish a connection with Refs.~\cite{REID}, which showed that when the inequality \begin{equation}\label{INFER} \left\langle \left( \Delta u\right) ^{2}\right\rangle \left\langle \left( \Delta v\right) ^{2}\right\rangle
< \frac{1}{4}|\langle[q_1,p_1]\rangle|^2\,, \end{equation} is satisfied, an EPR-like paradox arises \cite{EPR}, based on the inconsistency between quantum mechanics and local realism. Notice that the sufficient condition for inseparability of Eq.~(\ref{e3}) is weaker than condition (\ref{INFER}), but this is not surprising, since entanglement is only a necessary condition for the realization of an EPR-like paradox.
The theorem (\ref{e3}) can then be used to establish the conditions under which the two massive oscillators are entangled. In fact, defining the hermitian operator ${\cal R}_{\{{\cal O}\}}(\omega)=[{\cal O}(\omega)+{\cal O}(-\omega)]/2$ for any operator ${\cal O}(\omega)$ in the frequency domain, and using Eq.~(\ref{e3}), we define the degree of entanglement ${\cal E}(\omega)$ as \begin{equation}\label{EDEF} {\cal E}(\omega)=\frac{ \langle {\cal R}^2_{\{u\}}(\omega) \rangle \, \langle {\cal R}^2_{\{v\}}(\omega) \rangle}
{ \left| \langle \left[{\cal R}_{\{q_1\}}(\omega),{\cal R}_{\{p_1\}}(\omega)\right]
\rangle\right|^2}\,, \end{equation} (we use the fact that $\langle u\rangle = \langle v \rangle =0$ in our case), which is a marker of entanglement whenever ${\cal E}(\omega) < 1$. If, moreover, it falls below $1/4$, this indicates the presence of EPR correlations.
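To illustrate the criterion, the product of variances can be evaluated directly from the covariance matrix of a Gaussian state. The sketch below (our own toy example, in units with $\hbar=1$ so that $|\langle[q_1,p_1]\rangle|^2=1$ and the vacuum variance is $1/2$) compares a separable two-mode thermal product state with a two-mode squeezed state whose $q_1+q_2$ and $p_1-p_2$ combinations are squeezed; the latter violates the bound of Eq.~(\ref{e3}) for any $r>0$ and enters the EPR regime of Eq.~(\ref{INFER}) for $r>(\ln 2)/2\approx 0.347$:

```python
import numpy as np

def uv_variance_product(cov):
    """<(Delta u)^2><(Delta v)^2> for a zero-mean Gaussian state with
    covariance matrix cov in the basis (q1, p1, q2, p2)."""
    u = np.array([1.0, 0.0, 1.0, 0.0])   # u = q1 + q2
    v = np.array([0.0, 1.0, 0.0, -1.0])  # v = p1 - p2
    return (u @ cov @ u) * (v @ cov @ v)

nbar = 1.0                                  # thermal occupation
thermal = (2 * nbar + 1) / 2 * np.eye(4)    # separable product state

r = 0.5                                     # squeezing parameter
c, s = np.cosh(2 * r) / 2, np.sinh(2 * r) / 2
tmss = np.array([[ c, 0, -s, 0],            # two-mode squeezed state with
                 [ 0, c,  0, s],            # q1+q2 and p1-p2 squeezed
                 [-s, 0,  c, 0],
                 [ 0, s,  0, c]])

print(uv_variance_product(thermal))  # 9.0 >= 1: separability bound respected
print(uv_variance_product(tmss))     # e^{-4r} ~ 0.135 < 1/4: EPR regime
```

For $r=0$ the squeezed state reduces to the vacuum and the product equals $1$, saturating the bound.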
To calculate the function ${\cal E}(\omega)$ we evaluate the correlations \begin{equation} \langle {\cal O}(\omega){\cal O}(\pm\omega) \rangle =\int_{-\tau/2}^{\tau/2} \, \frac{dt}{\tau}\, \int_{-\infty}^{\infty} \, dt' \, e^{i\omega t'} \langle {\cal O}(t){\cal O}(t'\mp t) \rangle\,, \end{equation} and use the solutions of (\ref{LINEQS}) and correlations (\ref{NOISE}) in the frequency domain. In doing that, we require $G>g$ and $P_b^{in}>P_a^{in}$ because a strong interaction between mirrors and entangler is desirable. The strength of the system-meter interaction, instead, has to guarantee only a sufficient measurement gain. This condition, by referring to Eq.(\ref{YOUT}), corresponds to $g^2\alpha^2\gg(\gamma_a^2/4+\omega^2)/4$.
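The measurement-gain condition is easy to check numerically. In the sketch below we assume a near-infrared drive ($\lambda\approx 800$ nm is our assumption; the text does not fix the laser wavelength) together with the parameters of Fig.~\ref{fig2}, and obtain the intracavity amplitude from the steady-state relation $\alpha=\sqrt{\gamma_a}\,\alpha^{in}/(\gamma_a/2)$ at $\Delta_a=0$:

```python
import numpy as np

hbar = 1.054571817e-34           # J s
wavelength = 800e-9              # m, assumed drive wavelength
omega_a0 = 2 * np.pi * 2.99792458e8 / wavelength  # rad/s

P_a_in, gamma_a = 5e-4, 1e5      # W, s^-1 (Fig. 2 parameters)
g, Omega = 0.5, 1e5              # s^-1

alpha_in2 = P_a_in / (hbar * omega_a0)              # photon flux |alpha_in|^2
alpha2 = gamma_a * alpha_in2 / (gamma_a / 2) ** 2   # |alpha|^2 at Delta_a = 0

lhs = g ** 2 * alpha2
rhs = (gamma_a ** 2 / 4 + Omega ** 2) / 4  # evaluated at omega = Omega
print(lhs / rhs)  # gain margin; > 1 means the condition holds
```

Under these assumptions the margin at $\omega=\Omega$ is modest (of order a few) and grows at lower frequencies; a higher input power would increase it further.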
In Fig.\ref{fig2} we show the behavior of the degree of entanglement (\ref{EDEF}) as a function of frequency and temperature for massive oscillators with $m=10^{-5}$ ${\rm kg}$ and $\Omega=10^5$ ${\rm s}^{-1}$. The maximum entanglement is always obtained at the frequency $\Omega$ of the oscillating mirrors, where the mechanical response is maximum. The useful bandwidth becomes narrower and tends to disappear as the temperature increases. Nevertheless, a large amount of entanglement is available at reasonable temperatures, e.g., $4$ ${\rm K}$. This means that purely quantum effects survive at the macroscopic scale notwithstanding $k_BT\gg\hbar\Omega$. It is also worth noting that the values of the parameters employed here are essentially those already used in experiments \cite{EXP}. Moreover, considering single-mode oscillators, as we have done here, is not a restrictive assumption, because the various internal and external oscillating modes of the mirrors have different oscillation frequencies and can be easily distinguished and addressed when measurements are performed in the frequency domain. Other promising candidates for the realization of entanglement between two massive objects are mesoscopic resonators, such as microfabricated cantilevers \cite{CLELAND}.
\begin{figure}
\caption{\narrowtext Degree of entanglement ${\cal E}$ as a function of frequency $\omega$ and temperature $T$. The plot has been cut at ${\cal E}=1$, and the part of the surface for which $0\le {\cal E}<1/4$ is black coloured. The values of the parameters are: $\gamma_a=\gamma_b=\Delta_b=10^{5}$ ${\rm s}^{-1}$; $P^{in}_a=5\times 10^{-4}$ $W$; $P^{in}_b=5\times 10^{-3}$ $W$; $\Omega=10^{5}$ ${\rm s}^{-1}$; $m=10^{-5}$ ${\rm kg}$; $\Gamma=1$ ${\rm s}^{-1}$; $g=0.5$ ${\rm s}^{-1}$; $G=5$ ${\rm s}^{-1}$. With these parameters the cavity lengths are $\approx 10^{-2}$ ${\rm m}$ for the $b$ mode and $\approx 10^{-1}$ ${\rm m}$ for the $a$ modes. }
\label{fig2}
\end{figure}
As can be seen from Fig.\ref{fig2}, at low temperatures ${\cal E}(\Omega)$ lies below the limit $1/4$. Thus, the studied system also provides an example of {\it macroscopic} EPR correlations, though with the experimental set-up of Ref.\cite{EXP} a further condition, concerning the spatial separation between the two systems, is required to test the paradox \cite{EPR}. However, other possible set-ups could be devised permitting even such a test. Demonstrating entanglement is instead much less demanding.
In conclusion, we have exploited the ponderomotive force to entangle macroscopic oscillators. Reliable conditions to achieve this goal have been established, also accounting for a measurement of the degree of entanglement. The obtained results appear quite robust against thermal noise and could be tested, albeit challengingly, with current technologies, opening new perspectives towards the use of quantum mechanics in the macroscopic world. Moreover, the possibility of preparing entangled states at the macroscopic level may prove useful for high precision and metrology applications. For example, it is possible to see that a scheme similar to that of Fig.~1 can be used to improve the detection of weak forces \cite{stefa}.
{\em Appendix}
We prove the sufficient criterion for inseparability for the pair of continuous variable operators
$u =|a|q_{1}+\frac{1}{a}q_{2}$ and
$v =|a|p_{1}-\frac{1}{a}p_{2}$, where $a$ is an arbitrary (nonzero) real number. Assuming $\rho = \sum_{i}w_{i}\;\rho _{i1}\otimes \rho _{i2}$ and using the same first steps of the proof of Ref.~\cite{DUAN}, we have \begin{eqnarray} &&\left\langle \left( \Delta u\right) ^{2}\right\rangle \left\langle \left( \Delta v\right) ^{2}\right\rangle = \left\{\sum _{i}w_{i}\left( a^{2}\left\langle \left( \Delta q_{1}\right) ^{2}\right\rangle _{i} \right. \right. \nonumber \\ &&\left. \left. +\frac{1}{a^{2}} \left\langle \left( \Delta q_{2}\right) ^{2}\right\rangle _{i}\right) + \sum_{i}w_{i}\left\langle u \right\rangle _{i}^{2}-\left( \sum_{i} w_{i}\left\langle u\right\rangle _{i}\right) ^{2}\right\} \nonumber \\ &&\times \left\{\sum _{i}w_{i}\left( a^{2}\left\langle \left( \Delta p_{1}\right) ^{2}\right\rangle _{i}+\frac{1}{a^{2}} \left\langle \left( \Delta p_{2}\right) ^{2}\right\rangle _{i}\right) \right.\nonumber \\ &&+ \left. \sum_{i}w_{i}\left\langle v \right\rangle _{i}^{2}-\left( \sum_{i} w_{i}\left\langle v\right\rangle _{i}\right) ^{2}\right\}, \end{eqnarray} where the symbol $\left\langle \cdots \right\rangle _{i}$ denotes average over the product density operator $\rho _{i1}\otimes \rho _{i2}$. By applying the Cauchy-Schwarz inequality $\left( \sum_{i}w_{i}\right) \left( \sum_{i} w_{i}\left\langle u\right\rangle _{i}^{2}\right) \geq \left(
\sum_{i}w_{i}\left| \left\langle u\right\rangle _{i}\right| \right) ^{2},$ we can rewrite \begin{eqnarray} &&\left\langle \left( \Delta u\right) ^{2}\right\rangle \left\langle \left( \Delta v\right) ^{2}\right\rangle \geq \nonumber \\ && \left\{\sum _{i}w_{i}\left( a^{2}\left\langle \left( \Delta q_{1}\right) ^{2}\right\rangle _{i}+\frac{1}{a^{2}} \left\langle \left( \Delta q_{2}\right) ^{2}\right\rangle _{i}\right) \right\} \nonumber \\ &&\times \left\{\sum _{i}w_{i}\left( a^{2}\left\langle \left( \Delta p_{1}\right) ^{2}\right\rangle _{i}+\frac{1}{a^{2}} \left\langle \left( \Delta p_{2}\right) ^{2}\right\rangle _{i}\right) \right\}. \end{eqnarray} Then, using the fact that $x^{2}+y^{2}\geq 2xy$ for real $x$, $y$ (we avoid the symbols $\alpha$, $\beta$, which already denote the field amplitudes), we have \begin{eqnarray} &&\left\langle \left( \Delta u\right) ^{2}\right\rangle \left\langle \left( \Delta v\right) ^{2}\right\rangle \geq 4\left\{\sum _{i}w_{i}\sqrt{ \left\langle \left( \Delta q_{1}\right) ^{2}\right\rangle _{i} \left\langle \left( \Delta q_{2}\right) ^{2}\right\rangle _{i}} \right\} \nonumber \\ &&\times \left\{\sum _{i}w_{i}\sqrt{ \left\langle \left( \Delta p_{1}\right) ^{2}\right\rangle _{i} \left\langle \left( \Delta p_{2}\right) ^{2}\right\rangle _{i}} \right\}. \end{eqnarray} We then use the Cauchy-Schwarz inequality again and get \begin{eqnarray} &&\left\langle \left( \Delta u\right) ^{2}\right\rangle \left\langle \left( \Delta v\right) ^{2}\right\rangle \geq 4\left(\sum _{i}w_{i}\left[ \left\langle \left( \Delta q_{1}\right) ^{2}\right\rangle _{i} \left\langle \left( \Delta q_{2}\right) ^{2}\right\rangle _{i} \right. \right. \nonumber \\ && \left. \left. \times \left\langle \left( \Delta p_{1}\right) ^{2}\right\rangle _{i} \left\langle \left( \Delta p_{2}\right) ^{2}\right\rangle _{i} \right]^{1/4}\right)^{2}, \end{eqnarray} which gives the final inequality of Eq.~(\ref{e3}) when the Heisenberg uncertainty principle is applied.
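The conclusion of the proof can also be spot-checked numerically: for random mixtures of product states whose components each satisfy the Heisenberg relation $\langle(\Delta q)^2\rangle\langle(\Delta p)^2\rangle\geq 1/4$ (units $\hbar=1$), the product $\langle(\Delta u)^2\rangle\langle(\Delta v)^2\rangle$ computed with $a=1$ never drops below $1$. A minimal Monte-Carlo sketch (the sampling scheme is our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_separable_product(n_components):
    """<(Delta u)^2><(Delta v)^2> for a random mixture sum_i w_i rho_i1 x rho_i2."""
    w = rng.random(n_components)
    w /= w.sum()
    # per-component variances obeying Var(q) Var(p) >= 1/4 for each mode
    vq = 0.5 * np.exp(rng.uniform(-1, 1, (n_components, 2)))
    vp = (0.25 / vq) * (1 + rng.random((n_components, 2)))
    # random per-component means for q and p of each mode
    mq = rng.normal(0, 1, (n_components, 2))
    mp = rng.normal(0, 1, (n_components, 2))
    mu = mq[:, 0] + mq[:, 1]             # <u>_i,  u = q1 + q2
    mv = mp[:, 0] - mp[:, 1]             # <v>_i,  v = p1 - p2
    var_u = np.sum(w * (vq[:, 0] + vq[:, 1] + mu**2)) - np.sum(w * mu) ** 2
    var_v = np.sum(w * (vp[:, 0] + vp[:, 1] + mv**2)) - np.sum(w * mv) ** 2
    return var_u * var_v

products = [random_separable_product(5) for _ in range(500)]
print(min(products) >= 1.0)  # True: the separability bound always holds
```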
\begin{references}
\bibitem{SCH} E. Schroedinger, Naturwiss. {\bf 23}, 807; 823; 844 (1935).
\bibitem{ENT} See, e.g., A. Peres, {\it Quantum Theory: Concepts and Methods}, (Kluwer, Dordrecht, 1993); M. A. Nielsen and I. L. Chuang, {\it Quantum Computation and Quantum Information}, (Cambridge University Press, Cambridge, 2000).
\bibitem{SAK} C. S. Sackett, {\it et al.}, Nature (London) {\bf 404}, 256 (2000).
\bibitem{JUL} B. Julsgaard, {\it et al.}, quant-ph/0106057.
\bibitem{ZUR} W. H. Zurek, Phys. Today {\bf 44}(10), 36 (1991).
\bibitem{HOLL} J. N. Hollenhorst, Phys. Rev. D {\bf 19}, 1669 (1979).
\bibitem{VAR} A. F. Pace, {\it et al.}, Phys. Rev. A {\bf 47}, 3173 (1993); K. Jacobs, {\it et al.}, Phys. Rev. A {\bf 49}, 1961 (1994); S. Mancini and P. Tombesi, Phys. Rev. A {\bf 49}, 4055 (1994); C. Fabre, {\it et al.}, Phys. Rev. A {\bf 49}, 1337 (1994); G. J. Milburn, {\it et al.}, Phys. Rev. A {\bf 50}, 5256 (1994).
\bibitem{NCS} S. Mancini, {\it et al.}, Phys. Rev. A {\bf 55}, 3042 (1997); S. Bose, {\it et al.}, Phys. Rev. A {\bf 56}, 4175 (1997).
\bibitem{EXP} I. Tittonen, {\it et al.}, Phys. Rev. A {\bf 59}, 1038 (1999); Y. Hadjar, {\it et al.} Europhys. Lett. {\bf 47}, 545 (1999); P. F. Cohadon, {\it et al.}, Phys. Rev. Lett. {\bf 83}, 3174 (1999).
\bibitem{BRAG} V. B. Braginsky, and F. K. Khalili, {\em Quantum Measurements}, Ed. by K.S. Thorne (Cambridge Univ. Press, Cambridge, 1992), pp. 12-15.
\bibitem{LAW} C. K. Law, Phys. Rev. A {\bf 51}, 2537 (1995).
\bibitem{milwal}D. F. Walls and G. J. Milburn, {\it Quantum Optics}, (Springer, Berlin, 1994).
\bibitem{VIT} V. Giovannetti and D. Vitali, Phys. Rev. A {\bf 63}, 023812 (2001).
\bibitem{GAR} C. W. Gardiner, {\it Quantum Noise}, (Springer, Berlin, 1991), pp.158-164.
\bibitem{DUAN} L. M. Duan, {\it et al.}, Phys. Rev. Lett. {\bf 84}, 2722 (2000).
\bibitem{SIMON} R. Simon, Phys. Rev. Lett. {\bf 84}, 2726 (2000).
\bibitem{REID} M. D. Reid and P. D. Drummond, Phys. Rev. Lett. {\bf 60}, 2731 (1988); M. D. Reid, Phys. Rev. A {\bf 40}, 913 (1989).
\bibitem{EPR} A. Einstein, {\it et al.}, Phys. Rev. {\bf 47}, 777 (1935).
\bibitem{CLELAND} A. N. Cleland, and M. L. Roukes, Appl. Phys. Lett. {\bf 69}, 2653 (1996).
\bibitem{stefa} S. Mancini and P. Tombesi, unpublished.
\end{references}
\end{multicols}
\end{document}
\begin{document}
\title{Experimental implementation of fully controlled dephasing dynamics and synthetic spectral densities}
\author{Zhao-Di Liu} \thanks{These authors contributed equally to this work.} \affiliation{CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, 230026, People's Republic of China}
\author{Henri Lyyra} \thanks{These authors contributed equally to this work.} \affiliation{Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turun yliopisto, Finland}
\author{Yong-Nan Sun} \affiliation{CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, 230026, People's Republic of China}
\author{Bi-Heng Liu} \affiliation{CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, 230026, People's Republic of China}
\author{Chuan-Feng Li} \email{cfli@ustc.edu.cn} \affiliation{CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, 230026, People's Republic of China}
\author{Guang-Can Guo} \affiliation{CAS Key Laboratory of Quantum Information, University of Science and Technology of China, Hefei, 230026, China} \affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, 230026, People's Republic of China}
\author{Sabrina Maniscalco} \affiliation{Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turun yliopisto, Finland} \affiliation{Centre for Quantum Engineering, Department of Applied Physics, Helsinki, P.O. Box 11000, FI-00076 Aalto, Finland}
\author{Jyrki Piilo} \email{jyrki.piilo@utu.fi} \affiliation{Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turun yliopisto, Finland}
\date{\today}
\maketitle
\section*{ABSTRACT} \textbf{ Engineering, controlling, and simulating quantum dynamics is a strenuous task. However, these techniques are crucial to develop quantum technologies, preserve quantum properties, and engineer decoherence. Earlier results have demonstrated reservoir engineering, the construction of a quantum simulator for Markovian open systems, and a controlled transition from the Markovian to the non-Markovian regime. Dephasing is a ubiquitous mechanism that degrades the performance of quantum computers. However, an all-purpose quantum simulator for generic dephasing is still missing. Here we demonstrate full experimental control of dephasing allowing us to implement arbitrary decoherence dynamics of a qubit. As examples, we use a photon to simulate the dynamics of a qubit coupled to an Ising chain in a transverse field and also demonstrate a simulation of a non-positive dynamical map. Our platform opens the possibility to simulate dephasing of any physical system and study fundamental questions on open quantum systems.}
\section*{INTRODUCTION}
When a quantum system of interest interacts with an environment, its evolution becomes non-unitary and displays decoherence~\cite{Breuer2007}. This loss of quantum properties is interesting in itself for fundamental aspects -- such as quantum to classical transition~\cite{Schloss2007} -- but it is also important when developing applications of quantum physics for technological purposes~\cite{Suter2016}. Therefore, the dynamics of open quantum systems has become a major research area in modern quantum physics incorporating a multitude of physical systems and platforms.
Since it is hard, or even impossible, to avoid decoherence in realistic quantum systems, it is important to find means to control noise, and to develop new theoretical and simulation tools for open quantum systems. Indeed, already quite some time ago reservoir engineering was demonstrated experimentally with trapped ions by applying noise to trap electrodes~\cite{Myatt}, and thereby also influencing how the open system evolves. It is also possible to monitor in time the decoherence of field-states in a cavity~\cite{Deleglise}. More recently, a quantum simulator for Lindblad or Markovian dynamics was constructed, motivated by the studies of open many-body systems~\cite{Barreiro,Schindler}, and a simulator for noise induced by fluctuating fields was introduced in~\cite{Cialdi}. There has also been a large amount of activities dealing with non-Markovian quantum dynamics~\cite{rivas-2014,breuer-2016,de-vega-2017} including an experiment for controlled Markovian to non-Markovian transition with dephasing in a photonic system~\cite{Liu} and others induced by similar motivations~\cite{chiuri-2012,bernardes-2016}.
We focus on dephasing, or pure decoherence, which is a ubiquitous mechanism leading to a loss of quantum properties and degrading the performance of quantum computers~\cite{Palma}. Indeed, dephasing appears naturally in multiple physical systems and processes including a qubit coupled to harmonic oscillators in thermal equilibrium~\cite{Breuer2007}, a central spin coupled to an Ising chain in a transverse field~\cite{Quan}, excitons in quantum dots~\cite{Fan,Roszak}, superconducting qubits influenced by fluctuating magnetic dipoles~\cite{Cywinski}, and particles in a spatial superposition in a gravitational field~\cite{Pikovski,Sokolov} -- to name a few examples.
However, despite all the earlier theoretical and technological progress, full experimental control of decoherence -- allowing one to emulate arbitrary open-system dephasing dynamics -- has turned out to be an elusive goal. Having complete freedom to induce any non-unitary dynamics for a given system in the laboratory would allow one to simulate complex dynamical phenomena from a wide variety of fields, e.g., spin systems. This would also allow one to find out what the ultimate limits of decoherence control are. Here we implement arbitrary and fully controlled dephasing dynamics in the laboratory, which also opens the prospect of simulating open-system qubit dynamics in essentially any physical system, including those mentioned above. Moreover, our results demonstrate that it is possible to induce decoherence patterns that are not produced by ambient reservoirs and their spectral densities, i.e., to manufacture artificial, or synthetic, spectral densities. On the most fundamental level, full control of open system dynamics allows the simulation of dynamical maps that are not completely positive or positive. These concepts and the problematics of appropriate properties of dynamical maps have been extensively debated in open-system theory for a long time~\cite{Pechukas,Alicki,Shaji}.
\section*{RESULTS}
{\noindent\textbf{Theoretical description of dephasing control}}\\ Our goal is to control and simulate dynamical maps described by a family of $t$-parametrized pure dephasing channels $\Lambda_t$ such that \begin{equation}\label{dephdyna} \Lambda_t(\rho(0)) =: \rho(t) = \begin{pmatrix} \rho_{00} & D^*(t)\rho_{01}\\ D(t)\rho_{10} & \rho_{11} \end{pmatrix}. \end{equation} Here, $\rho_{ij}$ ($i,j=0,1$) are the elements of the qubit density matrix $\rho$ at initial time $t=0$ and $D(t)\in\mathbb{C}$ is the so-called decoherence function. In dephasing, the diagonal elements $\rho_{00}$ and $\rho_{11}$, corresponding to the populations, do not evolve, whereas $D(t)$ contains the information on how the coherences $\rho_{01}$ and $\rho_{10}$ of the qubit evolve. If $\vert D(t)\vert $ decreases monotonically, so does the magnitude of the coherences. However, to develop a generic simulator for dephasing, we need to implement an arbitrary $\vert D(t)\vert $, and the subsequent evolution of the magnitude of the coherences, without influencing the populations.
\begin{figure*}
\caption{The experimental setup. {\bf (a):} Key to the components: FC--fiber connector, PBS--polarizing beam splitter, HWP--half-wave plate, BD--beam displacer, GCHWP--glass-cemented half-wave plate, PCCL--plano-convex cylindrical lens, SLM--spatial light modulator, QP--quartz plate, and QWP--quarter-wave plate. The photon is guided from the source to the device via the lower FC. Then the photon goes through the gratings (the dark red lines) to the SLM, where the state is manipulated and the photon is reflected back (light red lines). A mirror guides the returning photon through the quartz plate combination. Finally a combination of QWP, HWP, and PBS is used to run a tomographic measurement at the end of the device. {\bf (b--e):} The holograms used in the experiment. }
\label{setup}
\end{figure*}
The open system qubit in our simulator is the polarization of a photon and the environment consists of its frequency degree of freedom. The scheme is based on full control over the total initial polarization-frequency state which then dictates the subsequent polarization dephasing dynamics when the interaction between the polarization and frequency degrees of freedom -- and the open system time evolution -- begins. An initial pure polarization-frequency state for the photon can be written as \begin{equation}\label{state} \ket{\Psi} = C_V\ket{V}\int g(\omega)\ket{\omega}d\omega + C_H\ket{H}\int e^{i\theta(\omega)} g(\omega)\ket{\omega}d\omega. \end{equation} Here $V$ ($H$) corresponds to vertical (horizontal) polarization with amplitude $C_V$ ($C_H$), $\omega$ are the frequency values with amplitude $g(\omega)$, and $\theta(\omega)$ is the frequency dependent phase factor for polarization component $H$. The probabilities are normalized in the usual manner with $\vert C_H\vert^2 + \vert C_V\vert^2 = 1$ and
$\int\vert g(\omega)\vert^2 d\omega = 1$. It is important to note here that having a limited control over the initial frequency distribution $P(\omega)=|g(\omega)|^2$,
e.g.~implementing a double-peak structure, allows some degree of engineering of the dephasing dynamics~\cite{Liu}. However, for a generic simulator we need full control over both the frequency distribution and the frequency-dependent phase distribution $\theta(\omega)$ of the polarization components. This also means that our simulator exploits initial polarization-frequency correlations, which arise as soon as the distribution $\theta(\omega)$ is non-constant. In this case, the initial state Eq.~\eqref{state} cannot be written as a polarization-frequency product state.
Once the initial state given by Eq.~\eqref{state} has been prepared, the simulator dynamics occurs when polarization and frequency interact in birefringent medium, such as quartz or calcite. The evolution of the total state is governed by the Hamiltonian
\begin{equation}\label{H} \text{H} = (n_H\ket{H}\bra{H} + n_V\ket{V}\bra{V})\int 2\pi \omega\ket{\omega}\bra{\omega}d\omega,
\end{equation} where $n_H$ ($n_V$) is the refractive index of the medium in the direction $H$ ($V$). By tracing out from the total system evolution the frequency degree of freedom, the polarization state undergoes the following dephasing dynamics \begin{equation}\label{dynamics} \rho(t) = \begin{pmatrix} \vert C_H\vert^2 & \kappa^*(t)C_{H}C_{V}^*\\ \kappa(t)C_{V}C_{H}^* & \vert C_V\vert^2 \end{pmatrix}\,, \end{equation} where \begin{equation}\label{decocoef} \kappa(t)=\int \vert g(\omega)\vert^2e^{i\theta(\omega)}e^{i 2 \pi \Delta n \omega t} d\omega\,, \end{equation} $\Delta n = n_H - n_V$ and $t$ is the interaction time.
Equation~\eqref{decocoef} shows in a clear manner that the decoherence function $\kappa(t)$ is the Fourier transform of the distribution $\vert g(\omega)\vert^2e^{i\theta(\omega)}$ used to prepare the tailored initial total system state. Since the Fourier transform is invertible, this connection tells us how the distributions $g(\omega)$ and $\theta(\omega)$ should be chosen to induce any desired polarization dephasing dynamics defined by any complex function $\kappa(t)$. On the other hand, for each $\kappa(t)$ the corresponding complex distribution $\vert g(\omega)\vert^2e^{i\theta(\omega)}$ is unique, and thus the implementation of a non-trivial $\theta(\omega)$ is necessary for full freedom in choosing the dephasing dynamics. For generic open quantum systems, specifying the spectral density (i.e., the coupling with the environment) is not equivalent to specifying the analytic expression of the dynamical map (the solution of the master equation). In fact, in general, one may not even be able to solve the master equation analytically and obtain a closed analytical form of the dynamical map. However, the case of pure dephasing dynamics is different because specifying the spectral density uniquely fixes the analytical form of the solution, since the decoherence function (the off-diagonal term of the density matrix) only depends on the spectral density, see Eqs.~(2), (4) and (5).
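As a numerical illustration of the Fourier relation in Eq.~\eqref{decocoef}, the sketch below evaluates $\kappa(t)$ for a hypothetical Gaussian frequency distribution with a constant phase $\theta(\omega)$. The grid, spectral width, and birefringence value $\Delta n$ are illustrative assumptions, not the experimental settings.

```python
import numpy as np

# Sketch of kappa(t), Eq. (4): the Fourier transform of |g(w)|^2 e^{i theta(w)}.
# Gaussian |g|^2 and theta(w) = 0 are hypothetical choices, not the
# distributions implemented on the SLM in the experiment.
omega = np.linspace(370.0, 380.0, 4001)          # frequency grid (THz)
dw = omega[1] - omega[0]
g2 = np.exp(-(omega - 375.0)**2 / (2.0 * 0.5**2))  # |g(w)|^2, Gaussian
g2 /= g2.sum() * dw                                # normalize: int |g|^2 dw = 1
theta = 0.0 * omega                                # constant (zero) phase
dn = 0.009                                         # approximate quartz Delta n

def kappa(t):
    """kappa(t) = int |g(w)|^2 e^{i theta(w)} e^{i 2 pi dn w t} dw."""
    return np.sum(g2 * np.exp(1j * theta)
                  * np.exp(2j * np.pi * dn * omega * t)) * dw
```

With a constant $\theta(\omega)$ the state is uncorrelated and $|\kappa(0)|=1$; a non-constant phase would reduce $|\kappa(0)|$ below one, which is the mechanism exploited in the main text to reach $|D(t)|>1$.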
The complete positivity (CP) and positivity (P) conditions for the single-qubit dephasing channel in Eq.~\eqref{dephdyna} are the same, namely
$|D(t)| \le 1$. However, the versatility of our simulator and the control over $\kappa(t)$ permit us to simulate non-positive channels in the following way. Due to initial system-environment correlations induced by a non-trivial distribution $\theta(\omega)$, we have cases with $\vert\kappa(0)\vert < 1$, i.e., the simulator uses a restricted domain of initial polarization states. To simulate the channel in Eq.~\eqref{dephdyna} and its decoherence function $D(t)$ with the simulator function $\kappa(t)$ in Eq.~\eqref{decocoef},
we need to use the scaling $|D(t)| = |\kappa(t)|/|\kappa(0)|.$
Therefore, with full control of the simulator and using the initial system-environment correlations, we can also generate dynamics with $|\kappa(t)| > |\kappa(0)|$, i.e., $|D(t)| = |\kappa(t)|/|\kappa(0)| > 1$, and hence also simulate non-positive maps.
{\noindent\textbf{Experimental set-up}}\\ In the experiment, a photon pair is produced in a spontaneous parametric down conversion (SPDC) process by pumping a type-II beta-barium-borate crystal (9.0$\times$7.0$\times$1.0 mm$^3$, $\theta$ = 41.44$^\circ$) with a frequency-doubled femtosecond pulse (400 nm, 76 MHz repetition rate) from a mode-locked Ti:sapphire laser. After passing through the interference filter (3 nm FWHM, centered at 800 nm), the photons of each pair are coupled into separate single-mode fibers. One of the photons is sent to the experimental device described in Fig.~\ref{setup}, and the other is used as a trigger for data collection. The coincidence counting rate collected by the avalanche photodiodes (APDs) is about $1.8\times10^5$ in 60 s, and the measurement time for each experiment was 10 s.
\begin{figure*}
\caption{ Model decoherence functions and their simulation. {\bf (a--c):} The dynamics of the decoherence function $D(t)$ in the spin--Ising chain model as a function of time in seconds. (a), (b), and (c) correspond to $\lambda = 0.01$, $\lambda = 0.9$, and $\lambda = 1.8$, respectively. {\bf (d):} The dynamics of the decoherence function $D(t)$ for the dephasing channel Eq.~\eqref{dephdyna} in the case when positivity is broken. {\bf (e--g):} Experimental dynamics of the decoherence function $\vert\kappa(t)\vert$ in the simulator corresponding to panels (a--c) as a function of effective path difference in units of 800 nm. The black dots correspond to measurement data and the error bars are mainly due to the counting statistics; they are standard deviations calculated by the Monte Carlo method. The solid curves are theoretical fits to the measurement data, obtained by using the width of the photon frequency window as a fitting parameter.
{\bf (h):} The dynamics of the decoherence function $\vert\kappa(t)\vert$ when simulating the non-positive map of panel (d). The results clearly display the dynamical property $|\kappa(t)| > |\kappa(0)|$ over a long interval of time. }
\label{dyna}
\end{figure*}
\begin{figure*}
\caption{ Implemented photon frequency and phase distributions.
{\bf (a-d):} Probability distributions $ |g(\omega)|^2$ of the photon frequency. (a-c) correspond to spin-Ising model simulation and (d) corresponds to non-positive map simulation. Inset in panel (c) shows the experimentally measured distribution for this case after the SLM implementation. {\bf (e-h):} Phase distributions $\theta(\omega)$ of the photon frequency corresponding to (a-d). These distributions were implemented pairwise in the experiment with a two-dimensional SLM using the effective resolution of 900 pixels in frequency modes. The bandwidth of each distribution is approximately 3 nm and they are centered at 800 nm. The measurement data of the simulator dynamics corresponding to implemented distribution pairs (a,e), (b,f), (c,g), and (d,h) are shown in Fig.~\ref{dyna}(e), (f), (g), and (h), respectively.
}
\label{dist}
\end{figure*}
In the device of Fig.~\ref{setup}, a half-wave plate (HWP) is used to maximize the $H$ polarized component and a polarizing beam splitter (PBS) completely filters out the $V$ polarized component of the photon. Another HWP rotates the polarization from $\ket{H}$ to the balanced superpositions $\frac{1}{\sqrt{2}}(\ket{H} \pm \ket{V})$. A beam displacer displaces the $V$ polarized component to the lower branch, allowing us to manipulate the polarization components independently. Before the photon goes through three gratings, the $V$ polarized component passes the HWP core of the composite component GCHWP and gets rotated to $H$. This avoids errors caused by the polarization dependence of the grating efficiency and by the fact that the SLM can modulate only the $H$ polarization.
Then the photon is diffracted in the horizontal direction by three cascaded gratings (1200 l/mm), and thus the frequency modes are converted into spatial modes. A collimating lens (PCCL) transforms the spatial modes into parallel beams incident on the phase-only spatial light modulator (SLM). In our work, we need to implement the dephasing dynamics shown in Fig.~\ref{dyna}. This is achieved by engineering the photon frequency and phase distributions displayed in Fig.~\ref{dist}. For the latter, the SLM can introduce complex phase factors for the spatial modes [Fig.~\ref{dist}(e-h)]. In order to tune the intensity distribution of the frequency, gratings (25 l/mm, with parallel horizontal lines) are written in the hologram of the SLM [Fig.~\ref{setup}(b-d)]. The horizontal profile (pixel number in every column) of the gratings of the hologram (GH) is designed to match the frequency intensity distribution [Fig.~\ref{dist}(a-d)], which ensures that the area of the GH is proportional to the intensity of the frequency. If photons are not incident on the GH, they are reflected directly by the screen of the SLM. On the other hand, if photons cover the GH, they are diffracted vertically into the first order of the GH with a fixed efficiency of about 60\%. By collecting only the photons diffracted into the first order of the GH we achieve the intensity modulation of the frequency. These manipulations can be performed independently for the upper and lower branches. For simplicity, we choose the reflected intensity distribution to be the same for both branches and implement the complex phase distributions on the lower branch only.
From the SLM the photon is reflected back (the light red lines in Fig.~\ref{setup}) through the PCCL and three gratings, which collimate and combine the spatial modes into one mode in each branch. The branches go again through the GCHWP which rotates this time the polarization of the upper branch from $H$ to $V$. A mirror guides both branches through another BD which recombines them into one. This way we have prepared the total polarization--frequency state in the shape of Eq.~\eqref{state}. Controlling the reflected intensity and complex phase distributions with SLM this way gives us directly full freedom to implement the distributions $g(\omega)$ and $\theta(\omega)$ in Eq.~\eqref{state}, respectively. Thus, the setup gives us full control of the dephasing dynamics of the polarization state as shown in Eq.~\eqref{decocoef}. Note that SLMs have been recently used also for quantum computing and information purposes, see, e.g., Refs.~\cite{Hor,Kagalwala,Tentrup}, and that a 4f-line is a standard way to manipulate the spectrum and implement pulse shaping by optical means~\cite{4f}. Finally, the recombined photon goes through a combination of quartz plates (QP) which couple the polarization with frequency according to interaction Hamiltonian~\eqref{H}. The total interaction time is controlled by changing the thickness of the QP combination. For each selected interaction time $t$, a combination of a quarter-wave plate, HWP, and PBS is used to run a tomographic measurement to determine the density matrix $\rho(t)$ of the polarization qubit.
{\noindent\textbf{Using a photon to simulate qubit coupled to Ising chain}}\\ To give an experimental demonstration of our optical simulator, we focus on the dynamics of an open system qubit interacting with an environment whose ground state exhibits a quantum phase transition. We consider a central spin coupled to an Ising spin chain in a transverse field. This is a widely used complex spin interaction model~\cite{Quan,Haikka} where one can induce the ground state phase transition by changing the magnitude of the transverse magnetic field with respect to the Ising chain spin-spin coupling. It is also worth mentioning that for this model, by quantifying the non-Markovianity of the central spin dynamics, one can identify the point of the phase transition in the environment~\cite{Haikka}.
The dynamics of the total spin--chain system is described by the Hamiltonian~\cite{Quan} \begin{align}
\text{H}(J,\,\lambda,\,\delta) = \,-J\Sigma_j\big( \sigma_3^{(j)} \sigma_3^{(j+1)} + \lambda \, \sigma_1^{(j)} + \delta \, |e\rangle\langle e| \sigma_1^{(j)} \big), \end{align} where $J$, $\delta$, and $\lambda$ correspond to the strengths of the nearest neighbor coupling in the chain, the spin--chain coupling, and the transverse field, respectively, while $ \sigma_1$ and $ \sigma_3$ are the Pauli spin operators. When the Ising chain is initially in the ground state of the environmental Hamiltonian, the dynamics of the central spin
is described by the dynamical map in Eq.~\eqref{dephdyna}, where the time-dependent decoherence function becomes \begin{align} D(t) = \Pi_{k>0} \big(1-\sin^2(2 \alpha_k) \sin^2(\varepsilon_k t) \big). \end{align} Here, $\varepsilon_k$ are the single quasiexcitation energies, and $\alpha_k$ are the Bogoliubov angles, both of which depend on $\lambda$~\cite{Quan}.
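A numerical sketch of Eq.~(7) can be written down by assuming the standard transverse-field Ising diagonalization of the central-spin model (cf.~\cite{Quan}): quasiexcitation energies $\varepsilon_k = 2J\sqrt{1+\lambda_e^2-2\lambda_e\cos k}$ with $\lambda_e=\lambda+\delta$, and Bogoliubov angles $\theta_k$ with $\tan(2\theta_k)=\sin k/(\lambda-\cos k)$ and $\alpha_k=\theta_k(\lambda_e)-\theta_k(\lambda)$. These conventions are our assumption and may differ in detail from the authors' numerics.

```python
import numpy as np

# Hedged sketch of Eq. (7) for the central spin coupled to a transverse-field
# Ising chain.  Dispersion and Bogoliubov-angle conventions as stated in the
# lead-in; they are assumptions, not taken from the authors' code.
def decoherence(t, lam, J=1.0, delta=0.1, N=4000):
    m = np.arange(1, N // 2)                      # modes with k > 0
    k = 2.0 * np.pi * m / N
    theta = 0.5 * np.arctan2(np.sin(k), lam - np.cos(k))
    lam_e = lam + delta                           # perturbed field lam_e
    theta_e = 0.5 * np.arctan2(np.sin(k), lam_e - np.cos(k))
    alpha = theta_e - theta                       # Bogoliubov-angle difference
    eps = 2.0 * J * np.sqrt(1.0 + lam_e**2 - 2.0 * lam_e * np.cos(k))
    factors = 1.0 - np.sin(2.0 * alpha)**2 * np.sin(eps * t)**2
    return float(np.prod(factors))
```

Each factor lies in $[0,1]$, so $D(0)=1$ and $|D(t)|\le 1$ for all $t$, consistent with the decay-and-revival structure of Fig.~\ref{dyna}(a)-(c).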
What makes this model especially interesting to simulate is the variety of the dynamics it can induce. Specifically, fixing the parameters $J = 1$ and $\delta = 0.1$, different choices of $\lambda$ lead to very different behaviors. Figures~\ref{dyna}(a), (b), and (c) display the dynamics of the decoherence function for the parameters $\lambda = 0.01$, $\lambda = 0.9$, and $\lambda = 1.8$, using 4000 spins in the environment. Here, $\lambda = 0.01$ ($\lambda = 1.8$) corresponds to the paramagnetic (ferromagnetic) phase of the environment, and the phase transition between the two happens at $\lambda = 0.9$. When the environment is in the paramagnetic phase, the decoherence function in Fig.~\ref{dyna}(a) decreases quite quickly, destroying the coherences, which, however, revive after a long time interval, displaying also non-Markovian effects. At the phase transition point, corresponding to Fig.~\ref{dyna}(b), the coherences quickly decay. In the ferromagnetic phase of the environment, the magnitude of the coherences oscillates and displays trapping, see Fig.~\ref{dyna}(c).
To simulate the dephasing dynamics displayed in Fig.~\ref{dyna} (a)-(c), we use the inverse of the transformation in Eq.~\eqref{decocoef} to obtain the distributions
$|g(\omega)|^2$ and $\theta(\omega)$ which need to be experimentally realized to prepare the appropriate initial total state of the simulator. The corresponding distributions for $|g(\omega)|^2$ are shown in Fig.~\ref{dist}(a)-(c) and for $\theta(\omega)$ in Fig.~\ref{dist}(e)-(g). We have prepared and used the initial values $C_H=1/\sqrt{2}$ and $C_V=\pm 1/\sqrt{2}$ for the polarization in the initial states of Eq.~\eqref{state}. The values of $|\kappa(t)|$ during the evolution are obtained via state tomography and by using the trace distance (cf.~\cite{Liu}). The experimental results for the dephasing dynamics are displayed in Fig.~\ref{dyna}(e)-(g). By comparing the spin-Ising model dephasing dynamics of Fig.~\ref{dyna}(a)-(c) to their experimental simulation in Fig.~\ref{dyna}(e)-(g), we observe that the simulator faithfully reproduces the dynamics of the central spin in the Ising model, both for the different phases of the environmental ground state and at the phase transition point. Our results demonstrate the high level of control and versatility of the simulator and, in the considered exemplary cases, the ability to emulate dephasing in three distinct dynamical regimes: fast decoherence with revival of coherences (paramagnetic environment), fast and monotonic loss of coherences (phase transition of the environment), and coherence oscillations with trapping (ferromagnetic environment).
It is worth noting that systematic errors have a non-negligible effect. Figure~\ref{dyna}(g) is a typical example of the resolution issues that lead to these errors; the corresponding hologram is shown in Fig.~\ref{setup}(d). It becomes more difficult to modulate the frequency amplitude with high fidelity when the spectrum gets narrower (the beam mode also deteriorates). Although the setup was carefully optimized, these factors still decrease the fidelity of the initial state preparation. This is the reason why the experimental result [Fig.~\ref{dyna}(g)] does not agree with the theory [Fig.~\ref{dyna}(c)] as well as in the other cases.
Having too wide a spectrum also leads to systematic errors. Although three gratings (1200 l/mm) are used, the 3 nm FWHM (full width at half maximum) of the SPDC photons can only cover 900 pixels of the SLM. This effectively means that we can simulate 900 out of the 4000 environmental spins; the 3100 spins whose amplitude and phase are close to 0 are ignored in the set-up. As a result, the experimental results [Fig.~\ref{dyna}(e), (f), and (g)] differ slightly from the theoretical ones [Fig.~\ref{dyna}(a), (b), and (c)]. In principle, this systematic error can be reduced by using an SLM with smaller pixels and a filter with wider FWHM. Note that, unless the spectrum is too narrow, the beam mode is good enough to achieve simulation with high fidelity.
Three 1200 l/mm gratings are used in our setup as the result of a tradeoff: an 1800 l/mm grating would increase the divergence of the spectrum, but the fidelity of the polarization states would be significantly lower than with 1200 l/mm gratings. Therefore, increasing the divergence angle of the spectrum by increasing the grating density is somewhat challenging.
It is evident from Fig.~\ref{dist} that producing this high level of control of dephasing, and being able to simulate a wide variety of dynamical features, indeed requires very challenging control of the photon frequency and frequency-dependent phase distributions. This precision is exactly what allows for accurate mimicking of the dynamics in the different dephasing regimes, even though the number of SLM pixels, while large, is still limited.
{\noindent\textbf{Implementing non-positive dynamical map}}\\ To demonstrate the ability to simulate a non-positive dynamical map, we choose the decoherence function dynamics displayed in Fig.~\ref{dyna}(d) for the map of Eq.~\eqref{dephdyna}. This requires implementing the initial frequency and phase distributions shown in Fig.~\ref{dist}(d) and (h), respectively. Comparing the decoherence function of Fig.~\ref{dyna}(d) to the experimental simulation in Fig.~\ref{dyna}(h), we observe again the accuracy and power of the simulator. Therefore, our results give a proof-of-principle demonstration that, with our scheme, it is possible to simulate a class of dynamical maps which break a property traditionally considered as the ultimate criterion for discriminating between the type of open system dynamics that could occur naturally (or be engineered) and those which were considered unphysical. It is also interesting to note here that the initial frequency distribution used to simulate the paramagnetic phase of the spin-Ising model [Fig.~\ref{dist}(a)] is very similar to the distribution used to simulate the non-positive map [Fig.~\ref{dist}(d)], even though the dynamics is quite different, see Figs.~\ref{dyna}(e) and (h). The difference between the two arises from the completely different types of phase distributions $\theta(\omega)$, shown in Fig.~\ref{dist}(e) and (h) for the two cases. This again reflects the crucial role that $\theta(\omega)$ plays in developing and implementing a generic simulator for dephasing.
{\noindent\textbf{Synthetic spectral densities and other extensions}}\\ It is worth noting that our results make it possible to produce synthetic spectral densities when considering the environments that an open system interacts with. Consider, for example, a qubit interacting with a bosonic environment via the interaction $H_i=\sum_k \sigma_3 (g_ka_k + g_k^*a_k^{\dagger})$, where $ \sigma_3$ is the qubit Pauli operator, $g_k$ the coupling constant to the bosonic mode $k$, and $a_k$ ($a_k^{\dagger}$) the annihilation (creation) operator for mode $k$. Then the decoherence function can be written as \begin{equation}
D(t)=\exp\!\left[ -\!\!\int_0^{\infty} d\omega \, J(\omega)
\coth\!\left(\! \frac{\beta\omega}{2} \!\right)\!
\frac{1\!-\!\cos\left(\omega t\right)}{\omega^2}\right]\!,
\end{equation} where $\beta$ is the inverse temperature and $J(\omega)$ is the spectral density, which contains information about the properties of the environment~\cite{Breuer2007}. Therefore, having a simulator for any behavior of $D(t)$ allows us, via the connection above, also to simulate generic spectral densities $J(\omega)$. This means that open system dynamics and spectral densities which do not otherwise appear in nature, i.e., synthetic spectral densities, can be created.
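As a sketch of how Eq.~(8) maps a spectral density onto a decoherence function, the following evaluates $D(t)$ numerically at zero temperature ($\coth\to 1$) for a hypothetical Ohmic density $J(\omega)=a\,\omega\,e^{-\omega/\omega_c}$; all parameter values are illustrative, not the synthetic $J(\omega)$ of Fig.~\ref{Fig:4}(a).

```python
import numpy as np

# Numerical sketch of Eq. (8) at zero temperature (coth -> 1) with a
# hypothetical Ohmic spectral density J(w) = a * w * exp(-w/wc).
def D(t, a=0.1, wc=1.0, wmax=60.0, n=60000):
    w = np.linspace(1e-6, wmax, n)        # avoid the removable w = 0 point
    dw = w[1] - w[0]
    Jw = a * w * np.exp(-w / wc)
    integrand = Jw * (1.0 - np.cos(w * t)) / w**2
    return float(np.exp(-np.sum(integrand) * dw))
```

Since the integrand vanishes at $t=0$ and is non-negative, $D(0)=1$ and $0<D(t)\le 1$; for this Ohmic choice at zero temperature the exponent grows like $(a/2)\ln(1+\omega_c^2 t^2)$, so $D(t)$ decays monotonically.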
An Ohmic spectral density is often used for dephasing dynamics. However, as an example of a synthetic spectral density, we choose the one shown in Fig.~\ref{Fig:4}(a), which produces the decoherence dynamics displayed both theoretically and experimentally in Fig.~\ref{Fig:4}(b). The theoretical dynamics is obtained numerically from Eq.~(8) by using a zero-temperature environment and the $J(\omega)$ displayed in Fig.~\ref{Fig:4}(a). The experimental result has been obtained by using the frequency distribution $|g(\omega)|^2$ of the simulator photon displayed in Fig.~\ref{dist}(a), while the initial phase distribution $\theta(\omega)$ is, instead of that in Fig.~\ref{dist}(e), a constant function. This result gives experimental evidence of the realizability of arbitrary synthetic spectral densities, which we plan to study in more detail in the future.
\begin{figure}
\caption{ Synthetic spectral density and corresponding decoherence dynamics of a qubit. (a) A chosen spectral density $J(\omega)$ used in Eq.~(8). The unit of frequency is $\lambda_0 /c$ with $\lambda_0=800$ nm. (b) Corresponding theoretical and experimental dynamics of the decoherence function (for more details, see the main text). The black dots correspond to measurement data and the error bars are mainly due to the counting statistics, which are standard deviations calculated by the Monte Carlo method. The evolution time is given by effective path difference in units of $\lambda_0$. }
\label{Fig:4}
\end{figure}
The current framework can be extended to the multi-qubit case by using the presented dephasing engineering scheme for each of the qubits. By using both of the SPDC photons, the initial correlations between the environments (frequencies) of the qubits can also be controlled to a certain extent, allowing us to combine dephasing control with nonlocal features of the dynamical map~\cite{nlnm,nlnm-erratum}. Moreover, it is also possible to include coherent operations in the existing set-up, allowing for the exploitation of sequences of coherent operations combined with controllable decoherent operations, in a many-body scenario, for quantum control and simulation purposes (see also~\cite{Barreiro,Schindler}). Lastly, activities using structured light have increased significantly during recent years~\cite{sl}, including also photonic experiments~\cite{spint}. Here, our results open the possibility to combine interferometry with fully controllable noise and structured photons.
\section*{DISCUSSION}
Summarizing, we have introduced and realized experimentally a generic simulator for one-qubit dephasing.
The features of the simulator include full control of the dephasing, therefore allowing us in principle to simulate any pure-decoherence dynamics. As examples we considered dephasing for an Ising model in a transverse field, where the environment exhibits a phase transition and the dynamics of the qubit displays three distinct and highly different features. Moreover, we also showed how to simulate a non-positive dynamical map. The ability to synthesize arbitrary dephasing dynamics establishes an experimental testbed for fundamental studies on long-debated but not yet settled questions. In general, our results have implications in all fields and physical contexts where dephasing plays a key role. These include, among others, quantum probing of many-body systems, exciton transfer in light-harvesting complexes, and numerous experimental platforms for quantum technologies.
{\bf Data Availability} The data that support the findings of this study are available from the authors upon reasonable request.
{\bf Acknowledgments} The Hefei group was supported by the National Key Research and Development Program of China (No. 2017YFA0304100), the National Natural Science Foundation of China (Nos. 61327901,11774335), Key Research Program of Frontier Sciences, CAS (No. QYZDY-SSW-SLH003), the Fundamental Research Funds for the Central Universities (No. WK2470000026), Anhui Initiative in Quantum Information Technologies (AHY020100). The Turku group acknowledges the financial support from Horizon 2020 EU collaborative project QuProCS (Grant Agreement 641277), Magnus Ehrnrooth Foundation, and the Academy of Finland (Project no. 287750). H.L. acknowledges also the financial support from the University of Turku Graduate School (UTUGS).
{\bf Author Contributions} Z.-D.L., Y.-N.S, B.-H.L., C.-F.L., and G.-C.G. planned, designed and implemented the experiments under the supervision of C.-F.L. and G.-C.G. Most of the theoretical analysis was performed by H.L. under the supervision of S.M. and J.P. The paper was written by Z.-D.L., H.L., C.-F.L., S.M., and J.P., while all authors discussed the contents.
{\bf Competing Interests} The Authors declare no competing interests.
\end{document}
\begin{document}
\draft
\title{Application of topological resonances in experimental investigation of a Fermi golden rule in microwave networks}
\author{Micha{\l} {\L}awniczak,$^{1}$ Ji\v{r}\'{\i} Lipovsk\'{y},$^{2}$ Ma{\l}gorzata Bia{\l}ous,$^{1}$ and Leszek Sirko$^{1}$} \address{$^{1}$Institute of Physics, Polish Academy of Sciences, Aleja Lotnik\'{o}w 32/46, 02-668 Warszawa, Poland\\ $^{2}$Department of Physics, Faculty of Science, University of Hradec Kr\'alov\'e, Rokitansk\'eho 62, 500 03 Hradec Kr\'alov\'e, Czechia\\ } \date{\today}
\begin{abstract} We investigate experimentally a Fermi golden rule in two-edge and five-edge microwave networks with preserved time reversal invariance. A Fermi golden rule gives rates of decay of states obtained by perturbing embedded eigenvalues of graphs and networks. We show that the embedded eigenvalues are connected with the topological resonances of the analyzed systems and we find the trajectories of the topological resonances in the complex plane. \end{abstract}
\pacs{03.65.Nk,02.40.-k}
\maketitle
\section{Introduction} There are many processes in physics that can be described as a perturbation of a certain, usually quite symmetric system. One example of this behaviour is given by the eigenvalues of quantum systems, which become resolvent resonances after a perturbation of the initial Hamiltonian. A simple system in which this phenomenon can be studied is the model of quantum graphs (see \cite{BerkolaikoKuchment13}) -- a net-like structure equipped with the Hamiltonian of a single quantum particle. One can consider an infinite quantum graph, consisting of internal (finite) edges and external (infinite) edges. For rationally related lengths of the internal edges, certain eigenfunctions corresponding to eigenvalues embedded into the continuous spectrum can be constructed as sine functions on some of the edges, with zeros at the vertices, vanishing on the infinite edges. However, if the rationality of the edge lengths is perturbed, the former eigenvalues travel into the complex plane and become resonances. Due to the topological nature of the former eigenstates, these resonances are usually called \emph{topological resonances} in the literature. The behaviour of the topological resonances and their trajectories near the initial eigenvalue have recently been of interest in many papers \cite{ExnerLipovsky10,GnutzmannSchanzSmilansky13,LeeZworski16,ColindeVerdiereTruc18}. In \cite{ExnerLipovsky10}, the trajectories of topological resonances depending on the edge lengths were found and the circumstances under which a resonance again becomes an eigenvalue were studied. In \cite{GnutzmannSchanzSmilansky13} the term ``topological resonances'' was first used and the statistical properties of these resonances were investigated. The paper \cite{ColindeVerdiereTruc18} proved that for tree graphs with at most one vertex of valency 1, the resonances are far from the real axis (and hence there are no ``narrow'' resonances), which is not the case for the other types of graphs.
In the paper \cite{LeeZworski16}, the authors pointed out the link between the so-called Fermi golden rule in physics and the speed with which the former eigenvalue moves in the complex plane. To be more precise, if the lengths of the edges are parametrized by a parameter $t$, they found a formula for $\mathrm{Im}\,\ddot k$, the imaginary part of the second derivative of the wave vector~$k$ (the square root of the complex energy of the resonance) with respect to the parameter~$t$. Another version of the formula, obtained by Lee and Zworski in the case of standard (Kirchhoff's) coupling conditions, was later found in \cite{ExnerLipovsky17} using the pseudo-orbit expansion method for general coupling conditions.
In the present paper, we put to the experimental test the results of Lee and Zworski \cite{LeeZworski16} using microwave networks. We find the trajectories of the resonances for two particular networks and compare them with the numerical simulations. As opposed to abstract open quantum graphs, microwave networks \cite{Hul2004} are real-world open systems which are additionally characterized by intrinsic absorption. Yet, we find a good correspondence of the experimental and theoretical results; the experimental trajectories match the theoretical ones \cite{LeeZworski16}. Moreover, we verify a Fermi golden rule by computing $\mathrm{Im}\,\ddot k$ theoretically and comparing it with the fit of the experimental data. We find a good correspondence for both systems.
\section{Theoretical model and formulation of a Fermi golden rule}\label{sec2} In this section, we introduce the well-known model of quantum graphs. Since the telegraph equation for microwave networks has a similar form as the Schr\"odinger equation for quantum graphs, our experiment can very precisely simulate the theoretical predictions made for quantum graphs.
Let us consider a metric graph consisting of a set of vertices $\mathcal{V}$ and a set of edges $\mathcal{E}$ comprising $M$ external (infinite) edges $\{e_s\}_{s = 1}^M$, parametrized by the intervals $(0,\infty)$, and $N$ internal (finite) edges $\{e_j\}_{j = M+1}^{M+N}$, each of which connects two vertices and is parametrized by an interval $(0,\ell_j)$, $j = M+1,\dots, M+N$. We denote $\ell_s = \infty$ for $s = 1, \dots, M$. In the Hilbert space $\mathcal{H} = \oplus_{j = 1}^{M+N} L^2(0,\ell_j)$ we define the operator $H$ acting as the negative second derivative on the domain consisting of functions belonging to the Sobolev space $\oplus_{j = 1}^{M+N} W^{2,2}(0,\ell_j)$ and satisfying the standard (Kirchhoff's) coupling conditions at each vertex $v$, namely the continuity of the function values and vanishing of the sum of the derivatives \begin{equation}
u_i(v) = u_j(v)\,,\ v \in e_i\cap e_j\,,\quad \sum_{e_j \ni v} \partial_\nu u_j(v) = 0\,,\label{eq:cc} \end{equation} where $\partial_\nu u_j(v)$ is the derivative at the vertex $v$ in the direction into the vertex: $\partial_\nu u_j(0) = -u_j'(0)$, $\partial_\nu u_j(\ell_j) = u_j'(\ell_j)$. For more details on quantum graphs we refer the reader to the monograph \cite{BerkolaikoKuchment13}.
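As a concrete check of the coupling conditions \eqref{eq:cc}, consider the embedded-eigenvalue construction mentioned in the Introduction on a hypothetical ``lasso'' graph: a loop of length $\ell$ attached at a single vertex $v$ to one infinite lead. The sine eigenfunction below vanishes at the vertex (hence agrees with the identically vanishing lead component), and the inward derivatives at $v$ cancel, so $k^2=(2\pi n/\ell)^2$ is an eigenvalue embedded in the continuous spectrum. The graph and the numbers are illustrative, not one of the experimental networks.

```python
import math

# Sanity check of Eq. (1) for a loop of length ell with one infinite lead.
ell, n = 1.0, 3                  # loop length and mode number (hypothetical)
k = 2.0 * math.pi * n / ell      # candidate: k^2 is the embedded eigenvalue

def u(x):
    """Eigenfunction on the loop, parametrized by x in [0, ell]."""
    return math.sin(k * x)

def du(x):
    return k * math.cos(k * x)

# Continuity at v: both loop endpoints and the (zero) lead component agree.
continuity_gap = max(abs(u(0.0)), abs(u(ell)))

# Kirchhoff condition at v: inward derivatives sum to zero; the lead
# component vanishes identically and contributes nothing.
kirchhoff_sum = -du(0.0) + du(ell)
```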
Before stating the main formula, let us define one of its components, the generalized eigenfunctions. For $k^2$ not belonging to the spectrum of $H$, let $e^s(k,x)$, $s=1,\dots, M$, where $k$ is the square root of energy, $k^2 = E$, be characterized as follows \begin{enumerate}
\item $e^s(k,x)$ locally belongs to the domain of $H$,
\item $(H-k^2) e^s(k,x) = 0$,
\item $e^s_j(k,x) = \delta_{js} \mathrm{e}^{-\mathrm{i}kx}+ s_{js}\mathrm{e}^{\mathrm{i}kx}$, $j = 1,\dots, M$, where $e^s_j$ is the component of $e^s$ on the $j$-th infinite edge and $\delta_{js}$ is the Kronecker delta,
\item We holomorphically extend $e^s_j(k,x)$ to all $k\in \mathbb{R}$. \end{enumerate}
A Fermi golden rule can be formulated as follows \cite[Thm. 1]{LeeZworski16}:
Let $H(t)$ be the above defined Hamiltonian, for which the lengths of the internal edges depend on the parameter $t$ via $\ell_j (t) = \ell_j\mathrm{e}^{-a_j(t)}$, $a_j(0) = 0$. Let $\dot a_j = \left.\frac{\partial a_j(t)}{\partial t}\right|_{t=0}$, $j = M+1,\dots, M+N$, be the components of $\dot a = \{\dot a_j\}_{j=M+1}^{M+N}$. Let $k^2>0$ be a simple eigenvalue of $H(t)$ for $t=0$ and let $u$ be its corresponding normalized eigenfunction. Then there is a smooth function $k(t)$, where $k(0) = k$ corresponds to the eigenvalue $k^2$ and $k^2(t)$, $t\ne 0$, are resolvent resonances. For the second derivative $\ddot k$ of $k(t)$ with respect to $t$ at $t=0$, it holds that \begin{equation}
\mathrm{Im}\, \ddot k = -\sum_{s=1}^M |F_s|^2\,,
\end{equation}
where
\begin{equation}
F_s = k \left<\dot a u(x),e^s(k, x)\right>+ \frac{1}{k}\sum_v\sum_{e_j\ni v}\frac{1}{4}\dot a_j[3\partial_\nu u_j(v) \overline{e^s(k, v)}-u(v)\partial_\nu\overline{e^s_j(k,v)}]\,. \label{Eq:F_s} \end{equation} In Eq.~\eqref{Eq:F_s} the inner product on $\mathcal{H}$ is denoted by $\left<\cdot, \cdot\right>$ and the overline denotes complex conjugation.
In order to demonstrate the connection of this formulation of a Fermi golden rule to its usual form in the physics literature let us consider a perturbation $H(t) = H_0+tV$ of the Hamiltonian $H_0$. Let $E_0 = E(0)$ be a simple eigenvalue of $H_0$ and let $E(t) = \sum_{n=0}^\infty a_n t^n$ be the complex resonance for the operator $H(t)$. Then the second derivative of $ \Gamma(t) = 2 \,\mathrm{Im\,}E(t)$ can be expressed by an integral expression (18.3) in Ref. \cite{Lipovsky16}. Moreover, $\Gamma(t)$ can be related to the transition probability between the initial and final state, which is connected to the usual definition of a Fermi golden rule (for details see \cite{ReedSimon78}).
\section{Simulation of quantum graphs by microwave networks}\label{sec3}
Quantum graphs can be considered as idealizations of physical networks in the limit where the widths of the wires are much smaller than their lengths. They can be
successfully used to model theoretically a broad range of physical problems, see, e.g., \cite{Gnutzmann2006}. From the experimental point of view, recent epitaxial techniques allow for the design and fabrication of relatively simple quantum nanowire graphs \cite{Samuelson2004,Heo2008}. However, to deal experimentally with much more complex systems characterized by many controllable parameters, one has to use microwave networks. Hul et al.\ \cite{Hul2004} demonstrated that quantum graphs with preserved time ($T$) reversal invariance can be successfully simulated by microwave networks containing microwave junctions and coaxial cables. In the investigations presented here the microwave networks were built of microwave T-junctions and the SMA-RG402 coaxial cables. The SMA-RG402 cables consist of two conductors. The inner conductor of a cable with radius $r_1=0.05$~cm is surrounded by a concentric conductor of inner radius $r_2=0.15$~cm. The space between the conductors is filled with Teflon with a dielectric constant $\varepsilon\simeq 2.06$. For the specified parameters of the cables, only the fundamental TEM mode propagates inside them below the cut-off frequency of the TE$_{11}$ mode, $\nu_{c}\simeq\frac{c}{\pi (r_1+r_2)\sqrt{\varepsilon}} \simeq 33$~GHz~\cite{Savytskyy2001}, where $c$ is the speed of light in vacuum. The lengths of the edges in the corresponding quantum graph are defined by the optical lengths $\ell_i=\ell^g_i\sqrt{\varepsilon}$, where $\ell^g_i$ are the geometric lengths of the coaxial cables.
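As a quick numerical sanity check (not part of the original derivation), the quoted cut-off frequency can be reproduced directly from the cable parameters given above:

```python
import math

# SMA-RG402 cable parameters quoted in the text
c = 2.9979e10   # speed of light in vacuum [cm/s]
r1 = 0.05       # radius of the inner conductor [cm]
r2 = 0.15       # inner radius of the outer conductor [cm]
eps = 2.06      # dielectric constant of Teflon

# Cut-off frequency of the TE11 mode: nu_c ~ c / (pi * (r1 + r2) * sqrt(eps))
nu_c = c / (math.pi * (r1 + r2) * math.sqrt(eps))
print(f"nu_c = {nu_c / 1e9:.1f} GHz")   # ~33 GHz, as quoted in the text
```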
Microwave networks provide a very rich platform for the experimental and the theoretical study of quantum graphs possessing the same topology and boundary conditions at the vertices. The spectral and scattering properties of microwave networks have been studied in Refs. \cite{Hul2004,Hul2005,Hul2005a,Lawniczak2008,Lawniczak2010,Lawniczak2012,Sirko2016,Bialous2016,Stockmann2016,Dietz2017}. The microwave networks allow for simulations of a variety of chaotic systems whose spectral properties can be described by the three main symmetry classes of Random Matrix Theory: Gaussian orthogonal ensemble (GOE) \cite{Hul2004,Sirko2016,Lawniczak2019}, Gaussian unitary ensemble (GUE) \cite{Hul2004,Lawniczak2010,Bialous2016,Lu2020,Yunko2020} and Gaussian symplectic ensemble (GSE) \cite{Stockmann2016,Lu2020}.
In this way microwave networks have become another important class of model systems, alongside flat microwave cavities \cite{Stockmann1990,Sridhar1994,Sirko1997,Hlushchuk2000,Hlushchuk2001,Blumel2001,Dhar2003,HemmadyPRL2005,Hul2005,Hemmady2006,Dietz2015,BialousPRE2019,Dietz2019} and experiments using Rydberg atoms strongly driven by microwave fields \cite{Blumel1991,Jensen1991,Bellerman1992,Sirko1993,Buchleitner1993,SirkoPRL1993,Bayfield1995,Sirko1995,Sirko1996,Bayfield1999,Sirko2001,Sirko2002,Galagher2016}, that are successfully used to simplify the experimental analysis of complex quantum systems.
In order to test experimentally a Fermi golden rule in microwave networks we consider two examples of quantum graphs shown in Fig.~1(a) and Fig.~1(c). The two-edge graph in Fig.~1(a) consists of two vertices, two internal edges and two infinite leads. The second graph, the five-edge graph in Fig.~1(c), is more complex. It contains four vertices, five internal edges and two infinite leads. The corresponding microwave networks constructed from microwave coaxial cables are shown in Fig.~1(b) and Fig.~1(d).
\section{Theoretical results for a Fermi rule}\label{sectheor} \subsection{A two-edge graph}
Let us consider a graph consisting of two vertices, two internal and two external edges (see Fig.~1(a)). Let the lengths of the internal edges $e_3$ and $e_4$ be $\ell_3<\infty$ and $\ell_4 <\infty$, respectively, while the edges $e_1$ and $e_2$ have infinite lengths. We will consider the dependence of the edge lengths on the parameter $t$ as $\ell_3 = \ell_0(1-t)$, $\ell_4 = \ell_0$ and the eigenvalue for $t=0$ with $k=\frac{2\pi}{\ell_0}$. In the appendix we prove that for a two-edge graph a Fermi golden rule is expressed by the formula
\begin{equation}
\mathrm{Im\,}\ddot k = -\frac{\pi^2}{2\ell_0}\,. \end{equation} Furthermore, we show that the imaginary part of $k(t)$ near the eigenvalue behaves as \begin{equation}
\mathrm{Im}\,k \approx -\frac{\pi^2}{4\ell_0} t^2\,. \end{equation}
\subsection{A five-edge graph}
Let us consider a graph in Fig.~1(c), having five internal edges and two external edges. Let the edge lengths be $\ell_3 = \ell_0(1-t)$, $\ell_4 = \ell_0(1+t)$, $\ell_5 = \ell_0(1-t)$, $\ell_6 = \ell_0(1+t)$, $\ell_7 = \ell_0(1+t)$ (this corresponds to the case \cite[Figure 4 c)]{LeeZworski16}). Let us start from the eigenvalue with $k\ell_0 = \arccos{(-1/3)} = 1.9106$. For our choice we have \begin{equation}
\dot a_3 = 1\,,\quad \dot a_4 = -1\,,\quad \dot a_5 = 1\,,\quad \dot a_6 = -1\,. \end{equation} The computation of $\mathrm{Im\,}\ddot k$ is given in \cite[Sec. 18.2]{Lipovsky16} and a Fermi golden rule takes the form \begin{equation}
\mathrm{Im\,}\ddot k = -\frac{1}{\ell_0}[(\dot a_3-\dot a_6)^2+(\dot a_4-\dot a_5)^2]0.1711-\frac{1}{\ell_0}(\dot a_3-\dot a_6)(\dot a_4-\dot a_5)0.1141\,=-\frac{0.9124}{\ell_0}. \end{equation}
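The numerical value above can be verified directly from the rates $\dot a_j$; the constants $0.1711$ and $0.1141$ are those quoted from the cited computation, so the following short script is only an arithmetic check:

```python
# Rates for the five-edge graph: a3_dot = 1, a4_dot = -1, a5_dot = 1, a6_dot = -1
a3, a4, a5, a6 = 1.0, -1.0, 1.0, -1.0
c1, c2 = 0.1711, 0.1141   # constants quoted from the cited computation

# Fermi golden rule for this graph, multiplied by l0:
# Im(k'') * l0 = -[(a3 - a6)^2 + (a4 - a5)^2] * c1 - (a3 - a6)(a4 - a5) * c2
im_ddk_l0 = -((a3 - a6)**2 + (a4 - a5)**2) * c1 - (a3 - a6) * (a4 - a5) * c2
print(round(im_ddk_l0, 4))   # -0.9124
```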
\section{Experimental results}
Both microwave networks shown in Fig.~1(b) and Fig.~1(d) can be described in terms of the $2\times 2$ scattering matrix $\hat S(\nu)$: \begin{equation} \label{eq:scatt_matrix} \hat S(\nu)=\left( \begin{array}{cc} S_{11}(\nu)&S_{12}(\nu)\\ S_{21}(\nu)&S_{22}(\nu)\end{array} \right) \mbox{,} \end{equation} relating the amplitudes of the incoming and outgoing waves of frequency $\nu$ in both infinite edges (leads). It should be emphasized that it is customary for microwave systems to measure the scattering matrices as a function of the microwave frequency $\nu$, which is related to the real part of the wave number by $\mathrm{Re\,}k=\frac{2\pi }{c}\nu$.
To measure the two-port scattering matrix $\hat S(\nu)$ the vector network analyzer (VNA) Agilent E8364B was connected to the microwave networks shown in Fig.~1(b) and Fig.~1(d). The microwave test cables connecting
microwave networks to the VNA are equivalent to attaching two infinite leads $e_1$ and $e_2$ to the quantum graphs in Fig.~1(a) and Fig.~1(c).
\subsection{The two-edge network}
The internal edge lengths of the two-edge network (see Figs.~1(a-b)) were parameterized by the parameter $t$ as $\ell_3 = \ell_0(1-t)$ and $\ell_4 = \ell_0$, with $\ell_0=1.0068\pm 0.0002$ m. The length of the edge $e_3$ was changed using microwave cables and a microwave phase shifter. The eigenvalue for $t=0$ is given by $k=\frac{2\pi}{\ell_0}$, which in the frequency domain defines the resonance at 0.2978 GHz. Therefore, in order to analyze the dynamics of the topological resonance as a function of the parameter $t$, the scattering matrix $\hat S(\nu)$ of the network was measured in the frequency range $\nu = 0.01-0.5$ GHz.
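As an illustrative check, the quoted resonance position follows from $k=2\pi/\ell_0$ together with $\mathrm{Re\,}k = 2\pi\nu/c$, i.e., $\nu = c/\ell_0$:

```python
c = 2.9979e8    # speed of light in vacuum [m/s]
l0 = 1.0068     # optical length of the internal edges [m]

# k = 2*pi/l0 together with Re k = 2*pi*nu/c gives nu = c/l0
nu = c / l0
print(f"nu = {nu / 1e9:.4f} GHz")   # 0.2978 GHz, as stated in the text
```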
As an example, the modulus of the determinant of the scattering matrix $|\det\bigr(\hat S(\nu)\bigl)|$ of the two-edge network for $t=-0.2$ is shown in Fig.~2(a) in the frequency range $0.30 - 0.36$ GHz (open circles). For $t\neq 0$ we deal for this network with two nearly degenerate resonances $r_m=\nu_m+ig_m$, $m=1,2$. Therefore, the parameters of the resonances, including real $\mathrm{Re}\,k = \frac{2\pi}{c}\nu_1$ and imaginary $\mathrm{Im}\,k= \frac{2\pi}{c}g_1$ parts of the topological resonance, were obtained from the fit of the modulus of a sum of two Lorentzian functions \cite{Moldover1999} \begin{equation} \label{Eq:2_Lorentz} f_2(\nu)= \sum^{2}_{m=1}\frac{i\nu A_{m}}{\nu^{2}-(\nu_{m}+ig_{m})^{2}}+B(\nu-\nu_{1})+C \end{equation}
to the modulus of the determinant of the scattering matrix $|\det\bigr(\hat S(\nu)\bigl)|$, where $A_m$, $B$, and $C$ are complex constants and $r_m=\nu_m+ig_m$, $m=1,2$, are the frequencies of the complex nearly degenerate resonances.
The fit of $|f_2(\nu)|$ (see Eq. \ref{Eq:2_Lorentz}) to the modulus $|\det\bigr(\hat S(\nu)\bigl)|$ in the frequency range $\nu = 0.314-0.347$ GHz is marked in Fig.~2(a) by the red line. The topological resonance of the network is marked with a red dot and the other resonance with a blue dot. The right vertical axis $g$ in Fig.~2(a) shows the imaginary part of the resonances in GHz.
In Fig.~3 full circles show the trajectory of the topological resonance obtained experimentally for the two-edge network. Even for the parameter $t=0$ the imaginary part of the experimental topological resonance $g_1 = -43 \pm 20$ kHz differs from 0, suggesting that the topological resonance is influenced by the intrinsic absorption of the network. To analyze this situation we performed the numerical calculations using the method of pseudo-orbits \cite{BHJ,Li6,Li7}. In the calculations we took into account the internal absorption of the microwave cables forming the edges of the microwave network. To do this we replaced the real wave vector $k$ by the complex one with absorption-dependent imaginary part $\mathrm{Im\,}k=\beta \sqrt{2\pi \nu/c }$ and the real part $\mathrm{Re\,}k = 2\pi \nu/c $, where $\beta=0.009\,\mathrm{m}^{-1/2}$ is the absorption coefficient and $c$ is the speed of light in vacuum. This method is described in detail in Ref. \cite{Hul2004}.
The results of the calculations are shown with diamonds in Fig.~3. The agreement between the experimental results (full circles) and the numerical ones (diamonds) is very good showing that the non-zero value of the imaginary part of the topological resonance at $t=0$ is due to intrinsic absorption in the network.
Due to the presence of the intrinsic absorption we fitted the experimental dependence of $\mathrm{Im}\,k$ on $t$ to the function $\mathrm{Im\,}k = a t^2 +b$ (see inset in Fig.~3). Using 9 experimental points (the central point corresponding to the topological resonance and four points to the left and four to the right of it) we obtained the values $a_{\mathrm{exp}} = -2.11 \pm 0.40 \,\mathrm{m}^{-1}$ and $b = -0.00097 \pm 0.00051 \,\mathrm{m}^{-1}$. In the inset in Fig.~3 the theoretical fit is marked by the full red line.
The experimental value $a_{\mathrm{exp}} = -2.11 \pm 0.40 \,\mathrm{m}^{-1}$ is within the experimental error in agreement with the theoretical one $a_{\mathrm{th}} = -\frac{\pi^2}{4\ell_0} = -2.45\,\mathrm{m}^{-1}$ obtained for $\ell_0 = 1.0068 \pm 0.0002 \,\mathrm{m}$. Moreover, the value $b = -0.00097 \pm 0.00051 \,\mathrm{m}^{-1}$ ($-46 \pm 24$ kHz) is in agreement with the imaginary part of the experimental topological resonance $g_1 = -43 \pm 20$ kHz.
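The stated agreement can be confirmed with a few lines of code (an arithmetic check only; the experimental numbers are those quoted above):

```python
import math

l0 = 1.0068                       # optical edge length [m]
a_th = -math.pi**2 / (4 * l0)     # theoretical coefficient of t**2 in Im k
print(f"a_th = {a_th:.2f} 1/m")   # -2.45 1/m

# Quoted experimental fit value and its uncertainty
a_exp, da_exp = -2.11, 0.40
assert abs(a_exp - a_th) < da_exp   # agreement within one standard error
```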
\subsection{The five-edge network}
In the case of the five-edge network the internal edge lengths of the network (see Figs.~1(c-d)) were parameterized by the parameter $t$ as
$\ell_3 = \ell_0(1-t)$, $\ell_4 = \ell_0(1+t)$, $\ell_5 = \ell_0(1-t)$, $\ell_6 = \ell_0(1+t)$, and $\ell_7 = \ell_0(1+t)$, with $\ell_0=1.0025\pm 0.0002$ m. The lengths of the edges $e_3, e_4, e_5$, and $e_6$ were changed using microwave cables and microwave phase shifters. The eigenvalue for $t=0$ can be found from the equation $k\ell_0 = \arccos{(-1/3)} = 1.9106$ \cite{LeeZworski16}. In the frequency domain it specifies the resonance at 0.0912 GHz. Therefore, to analyze the dynamics of the topological resonance as a function of the parameter $t$, the scattering matrix $\hat S(\nu)$ of the five-edge network was measured in the frequency range $\nu = 0.01-0.5$ GHz. Fig.~2(b) shows the modulus of the determinant of the scattering matrix $|\det\bigr(\hat S(\nu)\bigl)|$ of the five-edge network for $t=-0.05$ in the frequency range $0.06 - 0.12$ GHz (open circles). Here, the situation is even more complicated than in the case of the two-edge network because we deal with a structure of three nearly degenerate resonances, with the topological resonance placed between the other two. Therefore, the parameters of the resonances, including real $\mathrm{Re}\,k = \frac{2\pi}{c}\nu_2$ and imaginary $\mathrm{Im}\,k= \frac{2\pi}{c}g_2$ parts of the topological resonance, were obtained from the fit of the modulus of a sum of three Lorentzian functions
\begin{equation} \label{Eq:3_Lorentz} f_3(\nu) = \sum^{3}_{m=1}\frac{i\nu A_{m}}{\nu^{2}-(\nu_{m}+ig_{m})^{2}}+B_1(\nu-\nu_{1})+ B_2(\nu-\nu_{2}) +C \end{equation}
to the modulus of the determinant of the scattering matrix $|\det\bigr(\hat S(\nu)\bigl)|$, where $A_m$, $B_{1,2}$, and $C$ are complex constants and $r_m=\nu_m+ig_m$, $m=1, 2, 3$, are the frequencies of the complex nearly degenerate resonances. The fit of $|f_3(\nu)|$ to the modulus $|\det\bigr(\hat S(\nu)\bigl)|$ in the frequency range $\nu = 0.074-0.116$ GHz for $t=-0.05$ is denoted in Fig.~2(b) by the red line. The topological resonance of the network is marked by a red dot while the two other ones are marked by blue dots. The right vertical axis $g$ in Fig.~2(b) shows the imaginary part of the resonances in GHz.
The trajectory of the topological resonance obtained experimentally for the five-edge network (full circles) is shown in Fig.~4. In this case the departure from 0 of the imaginary part of the topological resonance $g_2=-0.55\pm 0.04$ MHz at $t=0$ is even more significant than in the case of the two-edge microwave network. The agreement of our numerical calculations (diamonds in Fig.~4) with the experimental results (full circles) clearly demonstrates that also in this case we deal with the effect of the internal absorption in the network. The fit of the dependence $\mathrm{Im\,}k = a t^2+b$ to the experimental points is marked by the full red line in the inset in Fig.~4.
Using 5 experimental points (the central point, two points to the left and two points to the right) we obtained the values $a_{\mathrm{exp}} = -0.46 \pm 0.03\,\mathrm{m}^{-1}$ and $b =-0.0113 \pm 0.0002\,\mathrm{m}^{-1}$. Within the experimental error the value $a_{\mathrm{exp}} = -0.46 \pm 0.03\,\mathrm{m}^{-1}$ corresponds to the theoretical one obtained for $\ell_0 = 1.0025 \pm 0.0002\,\mathrm{m}$ (this is the average of the edge lengths $\ell_3$, $\ell_4$, $\ell_5$, and $\ell_6$ for $t=0$): \begin{equation}
a_{\mathrm{th}} = \frac{1}{2}\mathrm{Im\,}\ddot k = -\frac{1}{2\cdot 1.0025}[(2^2+(-2)^2)0.1711+2(-2)0.1141] \,\mathrm{m}^{-1} = -0.46\,\mathrm{m}^{-1}\,. \end{equation} Also in this case the value $b=-0.011 \pm 0.001\,\mathrm{m}^{-1}$ ($-0.54 \pm 0.05$ MHz) is in agreement with the imaginary part of the experimental topological resonance $g_2=-0.55 \pm 0.04$ MHz.
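Again, this is a pure arithmetic check of the displayed formula, using the constants quoted earlier:

```python
l0 = 1.0025               # average optical edge length [m]
c1, c2 = 0.1711, 0.1141   # constants quoted from the cited computation
d1, d2 = 2.0, -2.0        # (a3_dot - a6_dot) and (a4_dot - a5_dot)

# a_th = (1/2) * Im(k'') = -[ (d1^2 + d2^2)*c1 + d1*d2*c2 ] / (2*l0)
a_th = -((d1**2 + d2**2) * c1 + d1 * d2 * c2) / (2 * l0)
print(f"a_th = {a_th:.2f} 1/m")   # -0.46 1/m, matching a_exp = -0.46 +/- 0.03
```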
\section{Summary}
Using microwave networks with preserved time reversal invariance we investigated experimentally a Fermi golden rule which gives rates of decay of states obtained by perturbing embedded eigenvalues of graphs and networks. We show that for the two-edge and five-edge microwave networks the embedded eigenvalues are connected with the topological resonances of the systems. Microwave networks are characterized by the intrinsic absorption which was taken into account in the numerical simulations of the networks. We found the trajectories of the topological resonances in the complex plane and showed that the experimental values of the parameter $a$ in the formula $\mathrm{Im}\,k=a t^2 + b$ are very close to the expected theoretical ones in the formula $\mathrm{Im\,}k = a_{\mathrm{th}} t^2$ for the two-edge and five-edge graphs, respectively. We show that the constant $b$ in the formula $\mathrm{Im}\,k=a t^2 + b$ accounts for the intrinsic absorption of the networks. Although we illustrated a Fermi rule only for two particular graphs, Theorem 1 from Ref.~\cite{LeeZworski16} holds true for all quantum graphs with standard coupling conditions and at least one infinite lead, provided they support an eigenvalue embedded into the continuous spectrum for some edge lengths. It should be possible to obtain similar results for the corresponding microwave networks.
\section{Appendix}
\subsection{Theoretical results for a Fermi rule. A case of a two-edge graph}
Let us consider a graph consisting of two vertices, two internal and two external edges (see Fig.~1(a)). Let the lengths of the internal edges be $\ell_3<\infty$ and $\ell_4 <\infty$, while the edges $e_1$ and $e_2$ have infinite lengths. Let us consider the dependence of the edge lengths on the parameter $t$ as $\ell_3 = \ell_0(1-t)$, $\ell_4 = \ell_0$ and the eigenvalue for $t=0$ with $k=\frac{2\pi}{\ell_0}$. This situation corresponds to the case in \cite[Fig.~2 (b)]{LeeZworski16}. Let the edges be parametrized from $x=0$ at $v_1$ to $x=\ell_3$ and $x=\ell_4$ at $v_2$. We find from the above expressions that $$
\dot a_3 = -\frac{1}{\ell_0}\frac{\partial \ell_3}{\partial t} = 1\,,\quad \dot a_4 = -\frac{1}{\ell_0}\frac{\partial \ell_4}{\partial t} = 0\,. $$ The normalized eigenfunction for $t=0$ and $k = \frac{2\pi}{\ell_0}$ has edge components $u_1(x) = 0$, $u_2(x) = 0$, $u_3(x) = \frac{1}{\sqrt{\ell_0}}\sin{(kx)}$, $u_4(x) = - \frac{1}{\sqrt{\ell_0}}\sin{(kx)}$.
Let us now compute the form of the generalized eigenfunctions $e^s(k,x)$, $s=1,2$. The form of the components of $e^1(k,x)$ follows from its definition. \begin{eqnarray*}
e^1_1(k,x) = \mathrm{e}^{-\mathrm{i}kx}+s_{11}\mathrm{e}^{\mathrm{i}kx}\,,&\quad & e^1_2(k,x) = s_{12} \mathrm{e}^{\mathrm{i}kx}\,,\\
e^1_3(k,x) = \alpha_3\sin{(kx)}+\beta_3\cos{(kx)}\,,& \quad & e^1_4(k,x) = \alpha_4\sin{(kx)}+\beta_4\cos{(kx)} \end{eqnarray*} with unknown constants $s_{11}$, $s_{12}$, $\alpha_3$, $\beta_3$, $\alpha_4$, $\beta_4$. The coupling conditions (\ref{eq:cc}) yield \begin{eqnarray*}
1+s_{11} = \beta_3 = \beta_4\,,\quad \mathrm{i}(-1+s_{11})+\alpha_3+ \alpha_4 = 0\,,\\
s_{12} = \alpha_3 \sin{(k\ell_3)} +\beta_3\cos{(k\ell_3)} = \alpha_4 \sin{(k\ell_4)} +\beta_4\cos{(k\ell_4)}\,,\\
\mathrm{i} s_{12} -\alpha_3 \cos{(k\ell_3)} +\beta_3\sin{(k\ell_3)} -\alpha_4 \cos{(k\ell_4)} +\beta_4\sin{(k\ell_4)} = 0\,. \end{eqnarray*} If one writes this set of six equations for six unknowns in matrix form, one finds that its solution is not well defined for $\ell_3 = \ell_4 = \ell_0$ (i.e., the case $t=0$), since the determinant of the corresponding matrix is 0. However, following the definition of $e^s$, one can use the holomorphic extensions of the solutions to $k = \frac{2\pi}{\ell_0}$ and obtain the unknown coefficients as the limits for $t \to 0$. We find $$
\alpha_3 = \alpha_4 = \frac{\mathrm{i}}{2}\,,\quad \beta_3 = \beta_4 = 1\,,\quad s_{11} = 0\,,\quad s_{12} = 1\,. $$ This corresponds to $$
e^1_1(k,x) = \mathrm{e}^{-\mathrm{i}kx}\,,\quad e^1_2(k,x) = \mathrm{e}^{\mathrm{i}kx}\,,\quad e^1_3 (k,x) = e^1_4 (k,x) = \cos{(kx)}+\frac{\mathrm{i}}{2}\sin{(kx)}\,. $$
We proceed similarly for $e^2(k,x)$. Using the ansatz \begin{eqnarray*}
e^2_1(k,x) = s_{21}\mathrm{e}^{\mathrm{i}kx}\,,&\quad & e^2_2(k,x) = \mathrm{e}^{-\mathrm{i}kx} + s_{22}\mathrm{e}^{\mathrm{i}kx}\,,\\
e^2_3(k,x) = \gamma_3 \sin{(kx)}+ \delta_3 \cos{(kx)}\,,& \quad & e^2_4(k,x) = \gamma_4 \sin{(kx)}+ \delta_4 \cos{(kx)} \end{eqnarray*} the coupling conditions (\ref{eq:cc}) yield the set of equations \begin{eqnarray*}
s_{21} = \delta_3 = \delta_4\,,\quad \mathrm{i}s_{21}+\gamma_3+\gamma_4 = 0\,,\\
1+s_{22} = \gamma_3 \sin{(k\ell_3)} + \delta_3 \cos{(k\ell_3)} = \gamma_4 \sin{(k\ell_4)} + \delta_4 \cos{(k\ell_4)}\,,\\
\mathrm{i}(-1+s_{22}) -\gamma_3\cos{(k\ell_3)} + \delta_3 \sin{(k\ell_3)} -\gamma_4\cos{(k\ell_4)} + \delta_4 \sin{(k\ell_4)} = 0\,. \end{eqnarray*} The solutions after the holomorphic extension to $k = \frac{2\pi}{\ell_0}$ are $$
s_{21} = \delta_3 = \delta_4 = 1\,,\quad \gamma_3 = \gamma_4 = -\frac{\mathrm{i}}{2}\,,\quad s_{22} = 0\,. $$ This corresponds to $$
e^2_1(k,x) = \mathrm{e}^{\mathrm{i}kx}\,,\quad e^2_2(k,x) = \mathrm{e}^{-\mathrm{i}kx}\,,\quad e^2_3 (k,x) = e^2_4 (k,x) = \cos{(kx)}-\frac{\mathrm{i}}{2}\sin{(kx)}\,. $$ Hence \begin{eqnarray*}
\left<\dot a u(x),e^1(k,x)\right> = \frac{1}{\sqrt{\ell_0}}\int_0^{\ell_0} \sin{(kx)}\left(\cos{(kx)}+\frac{\mathrm{i}}{2}\sin{(kx)}\right)\,\mathrm{d}x = \frac{\mathrm{i}}{4}\sqrt{\ell_0}\,,\\
\left<\dot a u(x),e^2(k,x)\right> = \frac{1}{\sqrt{\ell_0}}\int_0^{\ell_0} \sin{(kx)}\left(\cos{(kx)}-\frac{\mathrm{i}}{2}\sin{(kx)}\right)\,\mathrm{d}x = - \frac{\mathrm{i}}{4}\sqrt{\ell_0}\,. \end{eqnarray*} Moreover, using \begin{eqnarray*}
\partial_\nu u_3(v_1) = -\frac{k}{\sqrt{\ell_0}} = -\frac{2\pi}{\ell_0^{3/2}}\,,\quad \partial_\nu u_4(v_1) = \frac{k}{\sqrt{\ell_0}} = \frac{2\pi}{\ell_0^{3/2}}\,,\\
\partial_\nu u_3(v_2) = \frac{k}{\sqrt{\ell_0}}\cos{(k\ell_0)} = \frac{2\pi}{\ell_0^{3/2}}\,,\quad \partial_\nu u_4(v_2) = -\frac{k}{\sqrt{\ell_0}}\cos{(k\ell_0)} = -\frac{2\pi}{\ell_0^{3/2}} \,,\\
e^1(k,v_1) = e^1(k,v_2) =e^2(k,v_1) = e^2(k,v_2) = 1 \,,\quad u(v_1) = u(v_2) = 0 \end{eqnarray*} we can find $$
\sum_v\sum_{e_j\ni v}\frac{1}{4}\dot a_j[3\partial_\nu u_j(v) \overline{e^s(k, v)}-u(v)\partial_\nu\overline{e^s_j(k,v)}] = \frac{1}{4}\cdot 1\cdot \left[3\cdot \left(-\frac{2\pi}{\ell_0^{3/2}}\right) - 0\right]+\frac{1}{4}\cdot 1\cdot \left[3\cdot \frac{2\pi}{\ell_0^{3/2}} - 0\right] = 0\,. $$ Hence, only the first terms of the functions $F_s$ are nonzero. $$
|F_1| = \left|\frac{2\pi}{\ell_0}\frac{\mathrm{i}}{4}\sqrt{\ell_0}\right| = \frac{\pi}{2\sqrt{\ell_0}} \,,\quad |F_2| = \left|\frac{2\pi}{\ell_0}\frac{(-\mathrm{i})}{4}\sqrt{\ell_0}\right| = \frac{\pi}{2\sqrt{\ell_0}} $$ and $$
\mathrm{Im\,}\ddot k = -\frac{\pi^2}{2\ell_0}\,. $$
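The symbolic computation above can be verified, e.g., with sympy; the script below only reproduces the integral from the appendix and the resulting value of $\mathrm{Im}\,\ddot k$ (it is a check, not part of the derivation):

```python
import sympy as sp

x = sp.symbols('x', real=True)
l0 = sp.symbols('ell_0', positive=True)
k = 2 * sp.pi / l0

# Edge components on e3: normalized eigenfunction u and generalized eigenfunction e^1
u3 = sp.sin(k * x) / sp.sqrt(l0)
e13 = sp.cos(k * x) + sp.I * sp.sin(k * x) / 2

# The integral from the appendix: <a_dot u, e^1> reduces to this single integral
ip = sp.integrate(u3 * e13, (x, 0, l0))
assert sp.simplify(ip - sp.I * sp.sqrt(l0) / 4) == 0

# |F_1|^2 = |k * ip|^2 and |F_2| = |F_1|, so Im k'' = -2 |F_1|^2
im_ddk = sp.simplify(-2 * k * ip * sp.conjugate(k * ip))
assert sp.simplify(im_ddk + sp.pi**2 / (2 * l0)) == 0
print(im_ddk)   # -pi**2/(2*ell_0)
```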
One can easily prove (see, e.g., \cite{LeeZworski16,ExnerLipovsky17}) that $\mathrm{Im}\,\dot k|_{t=0} = 0$ and clearly $\mathrm{Im}\,k|_{t=0} = 0$. Therefore, we see from the Taylor expansion that the imaginary part of $k(t)$ near the eigenvalue behaves as $$
\mathrm{Im}\,k \approx -\frac{\pi^2}{4\ell_0} t^2\,. $$
\pagebreak
\begin{figure}
\caption{ Panels (a) and (b) show the schemes of a two-edge quantum graph with $\mathcal{V}=2$ vertices and a microwave network with the same topology. Panels (c) and (d) show the schemes of a five-edge quantum graph with $\mathcal{V}=4$ vertices and a microwave network with the same topology.
The microwave networks were connected to the vector network analyzer with the flexible microwave cables which are equivalent to attaching infinite leads to quantum graphs in panels (a) and (c). }
\label{Fig1}
\end{figure}
\begin{figure}
\caption{(a) The modulus $|\det\bigr(\hat S(\nu)\bigl)|$ of the determinant of the scattering matrix of the two-edge network for the parameter $t=-0.2$ in the frequency range $0.30 - 0.36$ GHz (open circles). The fit of $|f_2(\nu)|$ (see Eq. \ref{Eq:2_Lorentz}) to the modulus $|\det\bigr(\hat S(\nu)\bigl)|$ in the frequency range $\nu = 0.314-0.347$ GHz is marked by the red line. The topological resonance of the network is marked with a red dot and the other one with a blue dot. The right vertical axis $g$ shows the imaginary part of the resonances in GHz.
(b) The modulus $|\det\bigr(\hat S(\nu)\bigl)|$ of the determinant of the scattering matrix of the five-edge network for the parameter $t=-0.05$ in the frequency range $0.06 - 0.12$ GHz (open circles). The fit of $|f_3(\nu)|$ (see Eq. \ref{Eq:3_Lorentz}) to the modulus $|\det\bigr(\hat S(\nu)\bigl)|$ in the frequency range $\nu = 0.074-0.116$ GHz is denoted by the red line. The topological resonance of the network is marked by a red dot while the two other ones by blue dots. The right vertical axis $g$ shows the imaginary part of the resonances in GHz. }
\label{Fig2}
\end{figure}
\begin{figure}\label{Fig3}
\end{figure}
\begin{figure}\label{Fig4}
\end{figure}
\end{document}
\begin{document}
\title{A classification of $n$-tuples of commuting shifts of finite multiplicity} \begin{abstract}
Let $\mathbb{V}$ denote an $n$-tuple of shifts of finite multiplicity, and denote by $\Ann(\mathbb{V})$ the ideal consisting of polynomials $p$ in $n$ complex variables such that $p(\mathbb{V})=0$. If $\WW$ on $\frk{K}$ is another $n$-tuple of shifts of finite multiplicity, and there is a $\WW$-invariant subspace $\frk{K}'$ of finite codimension in $\frk{K}$ so that $\WW|\frk{K}'$ is similar to $\mathbb{V}$, then we write $\mathbb{V}\lesssim \WW$. If $\WW\lesssim \mathbb{V}$ as well, then we write $\WW\approx \mathbb{V}$.
In the case that $\Ann(\mathbb{V})$ is a prime ideal we show that the equivalence class of $\mathbb{V}$ is determined by $\Ann(\mathbb{V})$ and a positive integer $k$. More generally, the equivalence class of $\mathbb{V}$ is determined by $\Ann(\mathbb{V})$ and an $m$-tuple of positive integers, where $m$ is the number of irreducible components of the zero set of $\Ann(\mathbb{V})$. \end{abstract}
\section{Introduction}
An isometry $V$ on a (complex) Hilbert space $\mathfrak{H}$ is called a \emph{shift} of \emph{multiplicity} $k$ when $V$ is unitarily equivalent to the standard unilateral shift on $\ell^2 (\NN) \otimes \CC^k$ for some positive integer $k$. By the von Neumann-Wold theorem, $V$ is a shift of multiplicity $k$ if and only if $\bigcap_{j = 0}^{\infty} V^j\mathfrak{H}= \{ 0 \}$ and $\dim \ker V^{\ast} = k$. For the entirety of this paper, we fix an integer $n\geq 2$. Given an $n$-tuple $\mathbb{V}= ( V_1,\ldots, V_n)$ of commuting shifts of finite multiplicity, the \emph{annihilator} of $\mathbb{V}$ is defined to be the polynomial ideal \[ \Ann ( \mathbb{V}) = \{ p \in \CC [ x_1, \ldots, x_n] : p ( \mathbb{V}) = 0 \} . \] It is known from \cite[Prop. 6.3]{Timko} that $\Ann ( \mathbb{V})$ is a non-trivial ideal and that it determines a variety of pure dimension 1. We briefly review these facts in Section 2.
Suppose that $\mathbb{V}$ and $\WW$ are $n$-tuples of commuting shifts of finite multiplicity on Hilbert spaces $\mathfrak{H}$ and $\mathfrak{K}$, respectively. If there exists a $\WW$-invariant subspace $\mathfrak{K}'$ of finite codimension in $\frk{K}$ such that $\WW|\mathfrak{K}'$ is similar to $\mathbb{V}$, then we write $\mathbb{V} \lesssim \WW$. We say that $\mathbb{V}$ and $\WW$ are \emph{virtually similar} and write $\mathbb{V}\approx \WW$ when $\WW \lesssim \mathbb{V}$ and $\mathbb{V} \lesssim \WW$.
The finite multiplicity of the elements of $\mathbb{V}$ implies that $\mathbb{V}$ has a finite cyclic set. In particular, there exists a set $\{h_1, \ldots, h_k\}\subset \mathfrak{H}$ of least cardinality such that the subspace $\bigvee_{i = 1}^k \{ p (\mathbb{V}) h_i : p \in \CC [ x_1, \ldots, x_n] \}$ has finite codimension in $\mathfrak{H}$. We call $k$ the \emph{virtual cyclicity} of $\mathbb{V}$, and denote it by $\kappa(\mathbb{V})$.
For the case in which $\Ann ( \mathbb{V})$ is a prime ideal, we show in Theorem \ref{MainThm} that $\mathbb{V}$ and $\WW$ are virtually similar if and only if $\Ann ( \mathbb{V}) = \Ann ( \WW)$ and $\kappa ( \mathbb{V}) = \kappa ( \WW)$. When $\Ann ( \mathbb{V})$ is not prime, we have the following characterization. The ideal $\Ann (\mathbb{V})$ is radical and is the intersection of a unique finite set of prime ideals $\mathcal{I}_1, \ldots, \mathcal{I}_m$. For $i=1,\dots,m$, there exists a largest $\mathbb{V}$-invariant subspace $\frk{H}_i^+$ such that $\Ann(\mathbb{V}|\frk{H}^+_i)=\mc{I}_i$; the subspaces $\mathfrak{K}_1^+, \ldots, \mathfrak{K}_m^+$ are analogously defined. We show in Theorem \ref{GenThm} that $\mathbb{V} \approx \WW$ if and only if $\Ann (\mathbb{V}) = \Ann (\WW)$ and $\kappa(\mathbb{V}|\mathfrak{H}_i^+) = \kappa ( \WW|\mathfrak{K}_i^+)$ for $i = 1, \ldots, m$.
We remark that virtual similarity is stronger than necessary, and that `virtual quasi-similarity' suffices. That is, it is sufficient that there exist injective operators $X:\frk{H}\to\frk{K}$ and $Y:\frk{K}\to\frk{H}$ intertwining $\mathbb{V}$ with $\WW$ such that $(\ran X)^\bot$ and $(\ran Y)^\bot$ are finite dimensional.
The remainder of this paper is organized as follows. In section 2 we set notation and establish some preliminary results. In particular, we show that every $n$-tuple of commuting shifts of finite multiplicity has a non-trivial annihilator. In section 3 we extend some results from \cite{AnD} to provide a characterization of the $H^{\infty} ( R)$-invariant subspaces of the vector valued Hardy space $H^2(R,\frk{X})$, where $R$ is a sufficiently `nice' sub-domain of a compact Riemann surface. The main results discussed above appear in section 4. In section 5 we comment on another notion of equivalence for $n$-tuples that we call \emph{virtual unitary equivalence}, where the similarities appearing in the definition of virtual similarity are replaced by unitary maps. It is an open question, first essentially raised in \cite{AKM}, whether virtually similar $n$-tuples are also virtually unitarily equivalent. We show that this is true in a special case.
We would like to thank Hari Bercovici for his comments and support during the preparation of this document. We would also like to thank Greg Knese, Norm Levenberg, Noah Snyder, and Alberto Torchinsky for their conversations with the author.
\section{Preliminaries}
We start this section by reviewing why $n$-tuples of commuting shifts of finite multiplicity are always `algebraic', the proof of which is included for the reader's convenience. Before this, we require the following result based on work in \cite{AgMcActa}. Here and afterward, $\mathbb{D}$ denotes the open unit disc in $\CC$.
\begin{proposition}\label{AKMGetPoly}
Let $V_1, V_2$ be a pair of commuting shifts of finite multiplicity. There exist relatively prime polynomials $p ( x_1, x_2)$ and $q ( x_2)$ with the following properties.
\begin{enumerate}
\item[\rm{(i)}] $p(V_1,V_2)=0$ and the zero set of $p$ is contained in $\mathbb{D}^2\cup(\pd\mathbb{D})^2\cup(\CC\backslash\overline{\mathbb{D}})^2$.
\item[\rm{(ii)}] $q$ has no zeros in the closed disc $\overline{\mathbb{D}}$.
\item[\rm{(iii)}] The polynomial $p$ is of the form
\[ p(x_1,x_2)=q(x_2)x_1^d+a_1(x_2)x_1^{d-1}+\cdots+a_d(x_2) \]
for some single-variable polynomials $a_1,\dots,a_d$.
\end{enumerate} \end{proposition} The core of the proof can be found in \cite[Thm. 1.12]{AgMcActa} and \cite[Prop. 6.3]{Timko}, but we provide details here for completeness.
\begin{proof}[Proof of Prop. \ref{AKMGetPoly}] The pair $(V_1,V_2)$ is unitarily equivalent to the pair of Toeplitz operators $(T_{\Theta},T_{\zeta\: I})$ acting on the vector-valued Hardy space $H^2(\mathbb{D},\CC^k)$, where $k=\dim\ker V_2^*$, $\zeta$ is the coordinate function on $\mathbb{D}$, and $\Theta$ is a matrix-valued inner function. Because $\dim\ker V_1^*<\infty$, it follows from \cite[Thm. VI.3.1]{SzNagyFoias} that $\Theta$ has rational entries. There are clearly polynomials $P(x_1,x_2)$ and $Q(x_2)$ such that $\det(w\cdot I-\Theta(z))=P(w,z)/Q(z)$ and $P(V_1,V_2)=0$. It follows from \cite[Thm. 1.20]{AKM} that there is a non-zero polynomial $p$ satisfying property (i) that divides $P$. Let $q$ be such that property (iii) holds, and note that $q$ divides $Q$. It is clear that $Q$ has no roots on the closed unit disc, and therefore property (ii) holds. \end{proof}
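To make the construction in the proof concrete, consider the simplest (toy) scalar case $k=1$, with $\Theta$ a single Blaschke factor; the choice of $\Theta$ and the point $a$ below are illustrative assumptions, not taken from the cited sources:

```python
import sympy as sp

w, z = sp.symbols('w z')
a = sp.Rational(1, 2)   # a toy point of the open unit disc (real, so conj(a) = a)

# A scalar inner function: a single Blaschke factor Theta(z) = (z - a)/(1 - a*z)
theta = (z - a) / (1 - a * z)

# det(w*I - Theta(z)) = P(w, z)/Q(z) with polynomials P and Q
P = sp.expand(w * (1 - a * z) - (z - a))
Q = 1 - a * z

# Check the factorization and that Q has no roots in the closed unit disc
assert sp.simplify(P / Q - (w - theta)) == 0
roots = sp.solve(sp.Eq(Q, 0), z)
assert all(abs(complex(r)) > 1 for r in roots)
print(roots)   # [2]
```

Here the single root of $Q$ at $z = 1/\bar a$ lies outside the closed disc whenever $|a|<1$, illustrating property (ii) of the proposition.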
Given $p_1, \ldots, p_r \in \mathbb{C} [ x_1, \ldots, x_n]$, denote by $Z (p_1, \ldots, p_r)$ the set of all $z \in \mathbb{C}^n$ such that $p_1 ( z) = \ldots = p_r ( z) = 0$. More generally, if $S \subseteq \mathbb{C} [ x_1,\ldots, x_n]$, then we set $Z ( S) = \left\{ z \in \mathbb{C}^n : p ( z) = 0\text{ for all } p \in S \right\}$. Subsets of the form $Z ( S)$ are called \emph{algebraic varieties}, and we refer the reader to \cite{Kendig} for more information on this topic. We recall in particular that if $\mathcal{V}$ is an irreducible algebraic variety and $p$ is a non-trivial polynomial such that $\mathcal{V}\cap Z(p) \neq\emptyset$, then \begin{equation}
\dim ( \mathcal{V} \cap Z ( p)) \geq \dim ( \mathcal{V}) - 1.
\label{DimIneqAlgGeo} \end{equation}
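To illustrate \eqref{DimIneqAlgGeo}, take $\mathcal{V}=Z(x_1-x_2)\subseteq\CC^2$ and $p(x_1,x_2)=x_1$; then $\mathcal{V}\cap Z(p)=\{(0,0)\}$ and
\[ \dim(\mathcal{V}\cap Z(p))=0=\dim(\mathcal{V})-1, \]
so the inequality may hold with equality.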
\begin{corollary}
Let $\mathbb{V}$ be an $n$-tuple of commuting shifts of finite multiplicity. Then $\Ann ( \mathbb{V})$ is
non-trivial and $\dim Z ( \Ann ( \mathbb{V})) = 1$. \end{corollary}
\begin{proof}
We apply Proposition \ref{AKMGetPoly} to $( V_1, V_n), \ldots, ( V_{n - 1}, V_n)$ to produce polynomials $p_1 ( x_1, x_n), \ldots, p_{n - 1} ( x_{n - 1}, x_n)$, respectively, in the annihilator $\Ann ( \mathbb{V})$. From repeated use of (\ref{DimIneqAlgGeo}) we have that $\dim Z ( p_1, \ldots, p_{n-1}) = 1$ and therefore $d = \dim Z ( \Ann ( \mathbb{V})) \leq 1$. If it were the case that $d = 0$, then there would be a non-zero single-variable polynomial $q(x_1)\in\Ann( \mathbb{V})$. But operators of the form $q ( V_1)$ are injective and so $d = 1$. \end{proof}
We collect additional information about $\Ann(\mathbb{V})$ in the following proposition. \begin{proposition}
Let $\mathbb{V}$ be an $n$-tuple of commuting shifts of finite multiplicity. The following assertions hold.
\begin{enumerate}
\item[\rm{(i)}] $\Ann(\mathbb{V})$ is a radical ideal.
\item[\rm{(ii)}] Each irreducible component of $Z(\Ann(\mathbb{V}))$ has dimension $1$.
\item[\rm{(iii)}] $Z(\Ann(\mathbb{V}))\subseteq \mathbb{D}^n\cup\mathbb{T}^n\cup(\CC\backslash\cc{\mathbb{D}})^n$.
\end{enumerate} \end{proposition} \begin{proof}
Part (i) follows from the fact that $\mathbb{V}$ is an $n$-tuple of commuting subnormal operators. In particular, if $\widetilde{\mathbb{V}}$ denotes the minimal unitary extension of $\mathbb{V}$, then we have the identity $\|p(\mathbb{V})\|=\|p(\widetilde{\mathbb{V}})\|$ for each $p\in\CC[x_1,\dots,x_n]$. Assertion (ii) follows from \cite[Lemma 3.1]{Timko}, which demonstrates that no irreducible component of $Z(\Ann(\mathbb{V}))$ has dimension 0. To prove (iii), we apply Proposition \ref{AKMGetPoly}(i) repeatedly. \end{proof}
A \emph{finite Riemann surface} is a subdomain $R$ of a compact Riemann surface with the property that $\pd R$ is locally a real analytic curve. In particular, $\pd R$ is a finite disjoint union of topological circles. If the annihilator is a prime ideal, we can desingularize $Z ( \Ann ( \mathbb{V})) \cap \mathbb{D}^n$ to a finite Riemann surface, the pertinent properties of which are summarized in the following proposition. For proof, we refer the reader to \cite[Sec. 3]{AKM} for the case of $n=2$ and \cite[Sec. 7.1]{Timko} for $n>2$.
\begin{proposition}\label{GetR} Assume that $\Ann ( \mathbb{V})$ is a prime ideal. There exists a finite Riemann surface $R$ and a continuous proper map $\xi$ from $\overline{R}$ onto $Z ( \Ann ( \mathbb{V})) \cap \overline{\mathbb{D}}^n$ such that
\begin{enumerate}
\item[\rm{(i)}] $\xi ( \partial R) = Z ( \Ann ( \mathbb{V})) \cap (
\partial \mathbb{D})^n$;
\item[\rm{(ii)}] $\xi |R$ is holomorphic onto $Z ( \Ann ( \mathbb{V}))
\cap \mathbb{D}^n$; and
\item[\rm{(iii)}] there is a cofinite subset $X$ of $\cc{R}$ such that $\xi|(X\cap R)$ is a biholomorphism onto its image and $\xi|(X\cap\pd R)$ is a diffeomorphism onto its image.
\end{enumerate} \end{proposition}
We note that $\xi = ( \xi_1, \ldots, \xi_n)$ is an $n$-tuple of analytic functions on $R$ with unimodular boundary values. The associated multiplication operators on $H^2(R)$ are thus isometries. Of course, we also have that $p\circ\xi\equiv 0$ for each $p\in\Ann(\mathbb{V})$.
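For a simple illustration of this setup, suppose $n=2$ and $\Ann(\mathbb{V})$ is the (prime) ideal generated by $x_1-x_2$, as happens for the pair $(S,S)$ with $S$ the unilateral shift. Then $Z(\Ann(\mathbb{V}))\cap\overline{\mathbb{D}}^2$ is the closed diagonal, and one may take $R=\mathbb{D}$ with
\[ \xi(z)=(z,z),\qquad z\in\overline{\mathbb{D}}, \]
so that $M_{\xi_1}=M_{\xi_2}$ is the unilateral shift on $H^2(\mathbb{D})$.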
Denote by $A ( R)$ the algebra of continuous functions on $\overline{R}$ that are analytic on $R$. We equip $A ( R)$ with the topology of uniform convergence on $\overline{R}$ and let $A_{\xi} ( R)$ denote the closed unital subalgebra of $A ( R)$ generated by $\xi_1, \ldots, \xi_n$. We identify the elements of $A ( R)$ with their boundary value functions on $\partial R$ and thus view $A(R)$ as a subalgebra of $C(\partial R)$.
The following result appears in \cite{AKM} for the case of $n=2$. We give here a different argument for $n\geq 2$. In what follows, we set $\NN_0=\NN\cup \{0\}$.
\begin{lemma}\label{GetQLem}
There exists a non-zero single-variable polynomial $Q$ such that $Q ( \xi_n) A ( R) \subseteq A_{\xi} ( R)$. \end{lemma}
\begin{proof}
If $A_{\xi} ( R)$ has finite codimension in $A ( R)$, then the lemma follows from \cite[Thm. 9.8]{GamEmb}. Let $C_{\xi}(\partial R)$ denote the closed unital $\ast$-subalgebra of $C (\partial R)$ generated by $\xi_1, \ldots, \xi_n$. The map $\xi$ separates all but a finite set of points of $\partial R$, and thus $\mathrm{Re}\: C_{\xi}(\partial R)$ has finite (real) codimension in $\mathrm{Re}\: C ( \partial R)$. From the commuting square of inclusion maps
\[ \begin{array}{ccc}
\mathrm{Re}\: A_{\xi} ( R) & \rightarrow & \mathrm{Re}\: C_{\xi}(\partial R)\\
\downarrow & & \downarrow\\
\mathrm{Re}\: A ( R) & \rightarrow & \mathrm{Re}\: C ( \partial R)
\end{array}, \]
we deduce that $A_{\xi}(R)$ has finite codimension in $A(R)$ if $\mathrm{Re}\: A_{\xi} ( R)$ has finite real codimension in $\mathrm{Re}\: C_{\xi}(\partial R)$.
Let $j\in\{1,\dots,n-1\}$, and note that Proposition \ref{AKMGetPoly} provides single-variable polynomials $a_0^{(j)},\dots,a_{d_j}^{(j)}$ such that
\[ a_{d_j}^{(j)}(\xi_n)\xi_j^{d_j}+\cdots+a_0^{(j)}(\xi_n)=0 \]
and $a_{d_j}^{(j)}$ has no zeros in $\overline{\mathbb{D}}$. With $S=\prod_{i=1}^{n-1}\{j\in\ZZ:|j|<d_i\}$ and $S_+=S\cap\NN_0^{n-1}$, we find that
\[ A_\xi(R)= \bigvee\{\xi_1^{j_1}\cdots\xi_{n-1}^{j_{n-1}}\xi_n^\ell : (j_1,\dots,j_{n-1})\in S_+,\ell\in\NN_0\} \]
and
\[ C_\xi(\partial R ) = \bigvee\{\xi_1^{j_1}\cdots\xi_{n-1}^{j_{n-1}}\xi_n^\ell: (j_1,\dots,j_{n-1})\in S,\ell\in\ZZ\}. \]
In particular, $\mathrm{Re}\: C_\xi(\partial R)$ is the closed linear span of elements of the form $\mathrm{Re}\: (\xi_1^{j_1}\cdots \xi_{n-1}^{j_{n-1}}\xi_n^\ell)$ and $\mathrm{Im}\: (\xi_1^{j_1}\cdots \xi_{n-1}^{j_{n-1}}\xi_n^\ell)$ with $(j_1,\dots,j_{n-1})\in S$ and $\ell\in\NN_0$. We note that
\[ 1/\xi_j=-a_0^{(j)}(\xi_n)^{-1}(a_{d_j}^{(j)}(\xi_n)\xi_j^{d_j-1}+\cdots+a_1^{(j)}(\xi_n)). \]
Set $r=(a_0^{(1)})^{d_1-1}\cdots (a_0^{(n-1)})^{d_{n-1}-1}$ and denote by $\delta$ the degree of $r$. For each $(j_1,\dots,j_{n-1})\in S$ and $\ell\in\NN_0$, there exists a polynomial $b_{j_1,\dots,j_{n-1},\ell}$, of degree at most $\delta-1$ in the $n$-th variable and at most $d_j-1$ in the $j$-th variable such that
\[ \xi_1^{j_1}\cdots \xi_{n-1}^{j_{n-1}}\xi_n^\ell-\frac{b_{j_1,\cdots,j_{n-1},\ell}(\xi)}{r(\xi_n)}\in A_\xi(R). \]
We conclude that $\mathrm{Re}\: A_{\xi} ( R)$ has finite codimension in $\mathrm{Re}\: C_{\xi}(\partial R)$. \end{proof}
We conclude this section by setting notation. All Hilbert spaces are assumed to be complex and separable. Given an $n$-tuple $\mathbb{A}=(A_1,\dots,A_n)$ of operators on a common Hilbert space and $\beta\in\NN^n_0$, we set $\mathbb{A}^\beta=A_1^{\beta_1}\cdots A_n^{\beta_n}$. The general linear group of a Hilbert space $\mathfrak{X}$ is denoted by $\mathrm{GL} ( \mathfrak{X})$, and the unitary group of $\mathfrak{X}$ is denoted by $\mathrm{U} (\mathfrak{X})$.
Let $R$ be a finite Riemann surface and fix a point $x_0\in R$. We denote by $H^{\infty} ( R)$ the Hardy space of bounded analytic functions on $R$. For each analytic function $f:R\to \frk{X}$, denote by $u_f$ the least harmonic majorant of $x \mapsto \| f ( x)\|^2$. We denote by $H^2 ( R, \mathfrak{X})$ the space of all $\mathfrak{X}$-valued analytic functions $f$ for which $u_f ( x_0) < \infty$ and equip $H^2(R,\frk{X})$ with the Hilbert space norm $\| f \| = \sqrt{u_f ( x_0)}$. Though the norm depends on the choice of $x_0$, the associated topology does not. Let $\omega$ denote harmonic measure at $x_0$. Then $L^2(\pd R,\frk{X})$ is the $\frk{X}$-valued $L^2$-space with norm $g\mapsto \left(\int_{\pd R}\|g(x)\|^2 d\omega(x)\right)^{1/2}$. We observe that every element of $H^2 ( R, \mathfrak{X})$ has $L^2$-boundary values defined $\omega$-a.e. on $\partial R$. We identify $H^2 (R, \mathfrak{X})$ with a subspace of $L^2 ( \partial R, \mathfrak{X})$ via the boundary value map. For details about Hardy spaces of Riemann surfaces, we refer the reader to \cite{Rudin}.
Given $f \in L^{\infty} ( \partial R)$, we denote by $M_f$ the operator of multiplication by $f$ on $L^2(\pd R,\frk{X})$. More generally, let $\frk{Y}$ be another Hilbert space and let $\Theta$ be a bounded (weakly) measurable $\mathcal{B} ( \mathfrak{X},\mathfrak{Y})$-valued function. We denote by $M_{\Theta}$ the map $L^2(\pd R,\frk{X})\ni f\mapsto \Theta f$.
\section{Remarks on $H^\infty$-invariant subspaces}
This section is devoted to adapting some results of {\cite{AnD}} to the setting of finite Riemann surfaces. Most of the results we present here are straightforward modifications of analogous results in {\cite{AnD}} that can be demonstrated with essentially the same proofs. In such cases, we simply state the result and refer the reader to the appropriate point in {\cite{AnD}}.
For this section, we fix a finite Riemann surface $R$ and a point $x_0\in R$. By {\cite[Sec. IV.5]{FnK}} there is an analytic covering map $\tau : \mathbb{D} \rightarrow R$ such that $\tau ( 0) = x_0$ and a group $G$ consisting of all analytic self-maps $\gamma$ of $\mathbb{D}$ for which $\tau \circ \gamma = \tau$. We call $G$ the \emph{deck transformation group} of $R$ and recall that $R$ is homeomorphic to $\mathbb{D}/G$ when $G$ is endowed with the discrete topology. Given a collection of functions $\mathcal{F}$ on $\mathbb{D}$, we denote by $\mathcal{F}^G$ the set of all $f \in \mathcal{F}$ such that $f \circ \gamma = f$ for each $\gamma \in G$. For example, $H^{\infty} ( \mathbb{D})^G$ denotes the set of all $G$-invariant bounded analytic functions on $\mathbb{D}$. The main result of this section is a Beurling-type theorem for the `pure' $H^{\infty} ( \mathbb{D})^G$-invariant subspace of $L^2 ( \partial \mathbb{D}, \mathfrak{X})^G$, where $\mathfrak{X}$ is a Hilbert space.
We begin with a short discussion of analytic vector bundles over $R$. A \emph{family of Hilbert spaces} over $R$ is a topological space $E$ together with a continuous projection $p : E \rightarrow R$ such that for each $x \in R$ the fiber $E_x = p^{- 1} ( \{ x \})$ is a Hilbert space in the topology inherited from $E$. Given a Hilbert space $\mathfrak{X}$, a \emph{coordinate covering} for $E$ is a collection of pairs $\{ ( \varphi_i, U_i) \}_{i \in I}$ such that \begin{enumerate}
\item[\rm{(i)}] $\{ U_i \}_{i \in I}$ is an open covering of $R$;
\item[\rm{(ii)}] $\varphi_i$ is a homeomorphism of $U_i \times \mathfrak{X}$ onto $p^{-1} ( U_i)$ for each $i$; and
\item[\rm{(iii)}] for each $x \in R$ and $U_i \ni x$, the map $\varphi_i^x : \mathfrak{X} \rightarrow E_x$ given by $\varphi_i^x h = \varphi_i ( x, h)$ is a continuous linear isomorphism.
Let $E$ and $F$ be vector bundles over $R$ with fiber $\mathfrak{X}$. We say that a homeomorphism $\Lambda$ from $E$ onto $F$ is a \emph{vector bundle isomorphism} if for each $x \in R$ the restriction $\Lambda^x = \Lambda |E_x$ is a continuous linear isomorphism of $E_x$ onto $F_x$. We say that a bundle is \emph{topologically trivial} if it is isomorphic to the trivial bundle.
If each transition map $x \mapsto ( \varphi_i^x)^{- 1} \varphi_j^x$ is an analytic $\mathrm{GL} ( \mathfrak{X})$-valued function, then $E$ and the coordinate covering $\{ ( \varphi_i, U_i) \}_i$ are said to be \emph{analytic}. The trivial bundle over $R$ with fiber $\mathfrak{X}$ is analytic in the obvious way. If $F$ is an analytic vector bundle over $R$ with fiber $\mathfrak{X}$ and an analytic coordinate covering $\{ ( \psi_j, V_j) \}_{j \in J}$, then a vector bundle isomorphism $\Lambda : E \rightarrow F$ is \emph{analytic} if $\{ ( \psi_j, V_j) \}_{j \in J} \cup \{ ( \Lambda \circ \varphi_i, U_i) \}_{i \in I}$ also provides an analytic coordinate covering for $F$. We say that an analytic vector bundle is \emph{analytically trivial} if it is analytically isomorphic to the trivial bundle.
\begin{theorem}[{\cite[Thm. 8.2]{Bungart}}]\label{Bungart}
Every analytic vector bundle over a non-compact Riemann surface is analytically trivial. \end{theorem}
A coordinate covering for a vector bundle $E$ with fiber $\mathfrak{X}$ is called a \emph{flat unitary} coordinate covering if the image of each transition map is contained in $\mathrm{U} ( \mathfrak{X})$, in which case $E$ is called a \emph{flat unitary vector bundle}. We note that the transition maps of a flat unitary coordinate covering are always locally constant, and thus every flat unitary vector bundle over $R$ is also analytic. Let $F$ be another flat unitary vector bundle over $R$ with fiber $\mathfrak{X}$, and let $\{ ( \varphi_i, U_i) \}_{i \in I}$ and $\{ ( \psi_j, V_j) \}_{j \in J}$ be coordinate coverings for $E$ and $F$, respectively. A vector bundle isomorphism $\Lambda : E \rightarrow F$ is a \emph{flat unitary vector bundle isomorphism} if $\{ ( \psi_j, V_j) \}_{j \in J} \cup \{ ( \Lambda \circ \varphi_i, U_i) \}_{i \in I}$ is also a flat unitary coordinate covering for $F$.
In contrast to analytic equivalence, there exist flat unitary vector bundles which are not equivalent as flat unitary vector bundles. To be more precise, let $\pi_1 (R)$ denote the fundamental group of $R$, and write $\alpha_1 \sim \alpha_2$ for $\alpha_1, \alpha_2 \in \mathrm{Hom} ( \pi_1 ( R), \mathrm{U} (\mathfrak{X}))$ whenever there is a $U \in \mathrm{U} ( \mathfrak{X})$ such that $\alpha_1 = U \alpha_2 ( \cdot) U^{- 1}$. One readily verifies that $\sim$ determines an equivalence relation on $\mathrm{Hom} ( \pi_1 ( R),\mathrm{U} ( \mathfrak{X}))$. For the proof of the following theorem, we refer the reader to \cite[Lemma 27]{Gunning}.
\begin{theorem}\label{AnDThmB}
There is a bijection between $\mathrm{Hom} ( \pi_1 ( R), \mathrm{U} ( \mathfrak{X})) / \sim$ and the set of equivalence classes of flat unitary vector bundles over $R$ with fiber $\mathfrak{X}$. \end{theorem}
Recall that a finite Riemann surface $R$ is a non-compact subdomain of a compact Riemann surface for which $\partial R$ is locally an analytic curve. In particular, $\partial R$ consists of a finite disjoint union of simple closed analytic curves. From this we easily find another finite Riemann surface $R' \supset \overline{R}$ such that $R$ is a deformation retract of
$R'$, implying in particular that $R'$ and $R$ have isomorphic fundamental groups. Given a vector bundle $E'$ over $R'$ with fiber $\mathfrak{X}$ and projection $p'$, the space $E' |R = \{ f \in E' : p' ( f) \in R \}$ is a vector bundle over $R$ with fiber $\mathfrak{X}$ and projection $p' | ( E'
|R)$. As observed in \cite[Sec. 1.4]{AnD}, the following is a corollary to the proof of Theorem \ref{AnDThmB}.
\begin{corollary}\label{AndExtend}
If $E$ is a flat unitary vector bundle over $R$, then there is a flat
unitary vector bundle $E'$ over $R'$ such that $E' |R$ and $E$ are
equivalent as flat unitary vector bundles. \end{corollary}
Let $E$ be a flat unitary vector bundle over $R$ with fiber $\mathfrak{X}$ and a coordinate covering $\{(\varphi_i, U_i) \}_{i \in I}$, and fix a point
$x_0\in R$. Given $x\in U_i\cap U_j$ and $v\in E_x$, we note that $ \|(\varphi_i^x)^{-1}v \|=\|(\varphi_j^x)^{-1}v\|$. An \emph{analytic section} of $E$ is a continuous map $f : R \rightarrow E$ such that $p \circ f$ is the identity map on $R$ and $x \mapsto ( \varphi_i^x)^{- 1} \circ f$ is analytic on $U_i$ for each $i \in I$. We denote by $\Gamma_a ( E)$ the linear space of analytic sections of $E$. Given $f\in \Gamma_a(E)$, define $h_f:R\to\RR$ by setting $h_f(x)=\|(\varphi_i^x)^{-1}f(x)\|^2$ when $x\in U_i$. We note that $h_f(x)$ does not depend on which coordinate neighborhood of $x$ we use, and that $h_f$ has a least harmonic majorant $u_f$. The $E$-valued $H^2$ space of $R$ is the Hilbert space $H^2 ( E) = \{ f \in \Gamma_a ( E) : u_f ( x_0) < \infty \}$ with the norm given by $f\mapsto \|f\|=\sqrt{u_f(x_0)}$. We note that $H^{\infty} ( R)$ acts on $H^2 ( E)$ by sending $( g, h) \in H^{\infty} ( R) \times H^2 ( E)$ to the analytic section $g h$.
Let $E$ and $F$ be flat unitary vector bundles over $R$ with fiber $\mathfrak{X}$. If $\Lambda : E \rightarrow F$ is a (uniformly) bounded analytic vector bundle isomorphism, then the operator $M_{\Lambda} : f \mapsto\Lambda\circ f$ defines a bounded linear isomorphism from $H^2 ( E)$ onto $H^2 ( F)$. One can show, as in \cite[Thm. 1]{AnD}, that if $E$ and $F$ are equivalent flat unitary vector bundles, then there is a unitary map $U$ from $H^2 ( E)$ onto $H^2 ( F)$ such that $U ( g h) = g U h$ for all $( g, h) \in H^{\infty} ( R) \times H^2 ( E)$. We now have the following result as a corollary of Theorem \ref{Bungart} and Corollary \ref{AndExtend}.
\begin{corollary}
\label{BundTrivCor}If $E$ is a flat unitary vector bundle over $R$ with fiber $\mathfrak{X}$, then there is a bounded analytic vector bundle isomorphism $\Lambda$ from the trivial bundle $R \times \mathfrak{X}$ onto $E$. In particular, $M_{\Lambda}$ is a bounded linear isomorphism of $H^2 ( R, \mathfrak{X})$ onto $H^2 ( E)$. \end{corollary}
The group of deck transformations $G$ for $\tau:\mathbb{D}\to R$ is a Fuchsian group of the second kind that is isomorphic to $\pi_1 (R)$ \cite[\S IV.5]{FnK}. Associated with $G$ is a connected set $D_0 \subset \overline{\mathbb{D}}$, which we choose to contain $0$, with the following properties. \begin{enumerate}
\item[\rm{(i)}] The set $\partial \mathbb{D} \cap D_0$ consists of finitely many disjoint arcs in $\pd \mathbb{D}$, and $\mathbb{D} \cap \partial D_0$ consists of finitely many arcs,
each of which lies on a circle orthogonal to $\partial \mathbb{D}$.
\item[\rm{(ii)}] $\{ \gamma ( \mathbb{D} \cap D_0) : \gamma \in G \}$ partitions $\mathbb{D}$ and $L ( G) = \partial\mathbb{D}\backslash\bigcup_{\gamma\in G} \gamma ( D_0)$ is a set of arc-length measure $0$.
\item[\rm{(iii)}] The map $\tau$ extends to a local homeomorphism from $D = \bigcup_{\gamma \in G} \gamma ( D_0)$ onto $\overline{R}$. In particular, $\mathbb{D}/ G$ is analytically equivalent to $R$ and $D / G$ is homeomorphic to $\overline{R}$. \end{enumerate} We refer the reader to \cite[Ch. XI]{Tsuji} for more on this topic. From properties (ii) and (iii) it follows that $\int_{\partial R} u \,d \omega = \int_{\partial \mathbb{D}} u \circ \tau \,d m$ for all $u \in L^1 ( \partial R)$. In this way, the map $f \mapsto f \circ \tau$ determines an isometric isomorphism from $H^p( R)$ and $L^p ( \partial R)$ onto $H^p ( \mathbb{D})^G$ and $L^p ( \partial\mathbb{D})^G$, respectively, for each $p \in [ 1, \infty]$.
Let $\alpha \in \mathrm{Hom} ( G, \mathrm{U} ( \mathfrak{X}))$ and denote by $H^2_{\alpha} ( \mathbb{D}, \mathfrak{X})$ the set of all $f \in H^2 ( \mathbb{D}, \mathfrak{X})$ such that $f \circ \gamma = \alpha ( \gamma) f$ for each $\gamma \in G$. We note that $H^2_{\alpha}( \mathbb{D}, \mathfrak{X})$ is $H^{\infty} ( \mathbb{D})^G$-invariant and that $H_{e}^2 ( \mathbb{D}, \mathfrak{X}) = H^2 ( \mathbb{D},\mathfrak{X})^G$, where $e$ denotes the trivial representation of $G$.
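As a concrete example, suppose $R$ is an annulus, so that $G$ is infinite cyclic, generated by a single hyperbolic M\"obius transformation $\gamma_0$. Taking $\frk{X}=\CC$, each $\alpha\in\mathrm{Hom}(G,\mathrm{U}(\CC))$ is determined by the unimodular constant $\lambda=\alpha(\gamma_0)$, and
\[ H^2_\alpha(\mathbb{D},\CC)=\{f\in H^2(\mathbb{D}):f\circ\gamma_0=\lambda f\} \]
is the space of modulus automorphic functions of index $\lambda$ that arises in the classical theory of Hardy spaces on multiply connected domains.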
Theorem \ref{AnDThmB} asserts that $\alpha$ determines an essentially unique flat unitary vector bundle over $R$ with fiber $\mathfrak{X}$. Using a construction of such a bundle, the following theorem is deduced as in \cite[Thm. 5]{AnD}.
\begin{theorem}
\label{DRIsom}If $\alpha \in \mathrm{Hom} ( G, \mathrm{U} ( \mathfrak{X}))$ and $E$ is the associated flat unitary vector bundle, then $H^{\infty} ( R)$ acting on $H^2(E)$ is unitarily equivalent to $H^{\infty}(\mathbb{D})^G$ acting on $H^2_{\alpha} ( \mathbb{D}, \mathfrak{X})$. \end{theorem}
Applying Theorem \ref{DRIsom} to Corollary \ref{BundTrivCor} produces the following. \begin{corollary}\label{GetPhi}
There is a bounded $\mathrm{GL} ( \mathfrak{X})$-valued analytic function $\Phi$ on $\mathbb{D}$ with the property that the restriction $M_\Phi|H^2 ( \mathbb{D}, \mathfrak{X})^G$ is a continuous linear isomorphism onto $H_{\alpha}^2 ( \mathbb{D}, \mathfrak{X})$. \end{corollary}
For each Hilbert space $\mathfrak{X}$ and each $\alpha \in \mathrm{Hom} ( G,\mathrm{U} ( \mathfrak{X}))$ we fix a function $\Phi_{\alpha}$ as given by the preceding corollary. We remark that if $\Phi'_{\alpha}$ is another such function, then $h \mapsto ( \Phi_{\alpha})^{- 1} \Phi'_{\alpha} h$ is an automorphism of $H^2 ( \mathbb{D}, \mathfrak{X})^G$ and therefore $\Phi_{\alpha} H^2 ( \mathbb{D}, \mathfrak{X})^G = \Phi_{\alpha}' H^2 (\mathbb{D}, \mathfrak{X})^G$.
Let $\mc{A}$ be an algebra of operators acting on a Hilbert space $\frk{H}$, and suppose $\frk{H}'$ is an $\mc{A}$-invariant subspace of $\frk{H}$. The subspace $\frk{H}'$ is said to be \emph{pure} $\mc{A}$-invariant if there is no non-zero $\mc{A}|\frk{H}'$-reducing subspace of $\frk{H}'$ on which every element of $\mc{A}|\frk{H}'$ is a normal operator. For example, $\psi H^2(\mathbb{D})$ is pure $H^\infty(\mathbb{D})$-invariant whenever $\psi$ is an inner function. The following proposition differs from the analogous result in \cite{AnD} due to the fact that $H^\infty(R)$ is not, in general, generated by a single element.
\begin{proposition}\label{PurityProp}
Let $\frk{M}$ be an $H^\infty(\mathbb{D})^G$-invariant subspace of $L^2(\pd\mathbb{D},\frk{X})^G$ and let $\frk{M}'$ be the smallest $H^\infty(\mathbb{D})$-invariant subspace containing $\frk{M}$.
\begin{enumerate}
\item[\rm{(i)}] $\frk{M}'$ is $G$-invariant.
\item[\rm{(ii)}] $\frk{M}=\frk{M}'\cap L^2(\pd\mathbb{D},\frk{X})^G$.
\item[\rm{(iii)}] If $\frk{M}$ is pure $H^\infty(\mathbb{D})^G$-invariant, then $\frk{M}'$ is pure $H^\infty(\mathbb{D})$-invariant.
\end{enumerate} \end{proposition} \begin{proof}
The proofs for (i)-(iii) are essentially identical to those found in {\cite[Prop 3.4]{AnD}}, but we present a proof of (iii) to illustrate the role of pure invariance.
We denote by $\zeta$ the coordinate function on $\pd\mathbb{D}$. Suppose $\frk{M}'$ is not a pure $H^\infty(\mathbb{D})$-invariant subspace of $L^2(\pd\mathbb{D},\frk{X})$, meaning that there is a non-trivial $M_\zeta$-invariant subspace $\frk{N}'\subseteq \frk{M}'$ such that $M_\zeta|\frk{N}'$ is a unitary operator. Given $f\in L^2(\pd\mathbb{D},\frk{X})$ and $\gamma\in G$, we set $C_\gamma f=f\circ\gamma$. The subspace $\frk{R}'=\bigvee_{\gamma\in G}C_\gamma\frk{N}'$ of $\frk{M}'$ is obviously $G$-invariant, and
\[ M_\zeta \frk{R}'=\bigvee_{\gamma\in G} M_\zeta C_\gamma\frk{N}'=\bigvee_{\gamma\in G} C_\gamma M_{\gamma^{-1}}\frk{N}'=\frk{R}'. \]
Thus $M_\zeta|\frk{R}'$ is a unitary operator and so $\frk{R}'$ is $H^\infty(\mathbb{D})^G$-reducing.
By \cite[Thm. VI.8]{Helson}, there is a unique weakly measurable projection-valued function $x\mapsto P(x)$ on $\pd \mathbb{D}$ such that $\frk{R}'=P L^2(\pd\mathbb{D},\frk{X})$. Since $\frk{R}'$ is $G$-invariant, we have that $P=P\circ\gamma$ for each $\gamma\in G$, and thus $\frk{R}'$ contains non-zero $G$-invariant elements. By (ii) we have that $\frk{K}=\frk{R}'\cap L^2(\pd\mathbb{D},\frk{X})^G$ is a non-trivial subspace of $\frk{M}$. One now easily verifies that $\frk{K}$ is $H^\infty(\mathbb{D})^G$-reducing with the property that $M_f|\frk{K}$ is normal for any $f\in H^\infty(\mathbb{D})^G$, contradicting the assumption that $\frk{M}$ is pure. \end{proof}
We now present the main theorem of this section. The proof of this is essentially contained in the proof of \cite[Thm. 11]{AnD}, but we sketch it here for the reader's convenience.
\begin{theorem}\label{AnDInvarSbspc}
Suppose $\frk{M}$ is a pure $H^\infty(\mathbb{D})^G$-invariant subspace of $L^2(\pd\mathbb{D},\frk{X})^G$. There exist a Hilbert space $\frk{Y}$, a representation $\alpha:G\to\mathrm{U}(\frk{Y})$, and a $\mc{B}(\frk{Y},\frk{X})$-valued weakly measurable function $\Psi$ on $\pd\mathbb{D}$ with the property that $\Psi(z)$ is an isometry for a.e. $z\in\pd\mathbb{D}$, that $(\Psi\circ\gamma)\cdot \alpha(\gamma)=\Psi$ for each $\gamma\in G$, and that
\[ \frk{M}=\Psi\Phi_\alpha H^2(\mathbb{D},\frk{Y})^G. \] \end{theorem} \begin{proof}
Let $\frk{M}'$ be the smallest $H^\infty(\mathbb{D})$-invariant subspace containing $\frk{M}$. As $\frk{M}$ is pure $H^\infty(\mathbb{D})^G$-invariant, it follows from Proposition \ref{PurityProp}(iii) that $\frk{M}'$ is pure $H^\infty(\mathbb{D})$-invariant. By \cite[Thm. VI.9]{Helson} there is a Hilbert space $\frk{Y}$ and a weakly measurable $\mc{B}(\frk{Y},\frk{X})$-valued function $\Psi$ such that $\frk{M}'=\Psi H^2(\mathbb{D},\frk{Y})$ and $\Psi(z)$ is an isometry for a.e. $z\in\pd\mathbb{D}$. Given $\gamma\in G$ it follows from Proposition \ref{PurityProp}(i) that $(\Psi\circ\gamma)H^2(\mathbb{D},\frk{Y})=\Psi H^2(\mathbb{D},\frk{Y})$. A corollary of \cite[Thm. VI.9]{Helson} asserts that there is an $\alpha(\gamma)\in \mathrm{U}(\frk{Y})$ such that $(\Psi\circ\gamma)\alpha(\gamma)=\Psi$. Thus $\alpha(\gamma)=(\Psi\circ\gamma)^*\Psi$, from which we readily deduce that $\alpha$ is a unitary representation of $G$ on $\frk{Y}$. Proposition \ref{PurityProp}(ii) implies that $\frk{M}=\Psi H^2_\alpha(\mathbb{D},\frk{Y})$, and the theorem now follows from Corollary \ref{GetPhi}. \end{proof}
\section{Virtual Similarity}
Let $\mathbb{V}$ and $\mathbb{W}$ denote $n$-tuples of commuting isometries on Hilbert spaces $\mathfrak{H}$ and $\mathfrak{K}$, respectively. Recall that $\mathbb{V} \lesssim \mathbb{W}$ if there is a finite codimensional $\mathbb{W}$-invariant subspace $\mathfrak{K}' \subseteq \mathfrak{K}$ such that $\mathbb{W}|\mathfrak{K}'$ is similar to $\mathbb{V}$. If $\WW\lesssim\mathbb{V}$ as well, then we say $\mathbb{V}$ and $\WW$ are \emph{virtually similar} and write $\mathbb{V}\approx\WW$. The following is easily deduced from this definition.
\begin{corollary}\label{VS:ApproxEqAnn}
If $\mathbb{V}$ and $\mathbb{W}$ are virtually similar, then
$\mathrm{Ann} ( \mathbb{V}) = \mathrm{Ann} ( \mathbb{W})$. \end{corollary}
\begin{lemma}
\label{BlashProp}Let $V$ be an isometry on a Hilbert space $\mathfrak{H}$. If $\mathfrak{H}'$ is a finite codimensional $V$-invariant subspace and $V$ has no eigenvalues, then there is a finite Blaschke product $B$ such that $B (V)\mathfrak{H} \subseteq \mathfrak{H}' \subseteq \mathfrak{H}$. Moreover, if $V$ is a shift of finite multiplicity, then $B(V)\frk{H}$ has finite codimension in $\frk{H}'$. \end{lemma} \begin{proof}
The compression of $V$ to $(\frk{H}')^\bot$ has a minimal polynomial $Q$. Denote by $\lambda_1,\dots,\lambda_m$ the roots of $Q$, and assume only the first $\ell$ are contained in $\mathbb{D}$. For $j>\ell$, we note that $(V-\lambda_jI)\frk{H}$ is dense in $\frk{H}$. Thus we set $B(z)=\prod_{j=1}^\ell \frac{z-\lambda_j}{1-\cc{\lambda_j}z}$; since $Q(V)\frk{H}\subseteq\frk{H}'$ and $\frk{H}'$ is closed, the density of the ranges $(V-\lambda_jI)\frk{H}$ for $j>\ell$ yields $B(V)\frk{H}\subseteq\frk{H}'$.
If $V$ is a shift of finite multiplicity, then the same is true of $\frac{V-\lambda_iI}{I-\cc{\lambda_i}V}$ for $i=1,\dots,\ell$. Thus $B(V)=\prod_{j=1}^\ell\frac{V-\lambda_jI}{I-\cc{\lambda_j}V}$ is also a shift of finite multiplicity. \end{proof}
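To see the lemma in the simplest case, let $V$ be the unilateral shift on $H^2(\mathbb{D})$ and $\frk{H}'=\{f\in H^2(\mathbb{D}):f(0)=0\}$. The compression of $V$ to the orthogonal complement of $\frk{H}'$, the constant functions, is the zero operator, so $Q(z)=z$, $\ell=m=1$, and $\lambda_1=0$. Hence
\[ B(z)=z,\qquad B(V)\frk{H}=zH^2(\mathbb{D})=\frk{H}', \]
and here $B(V)\frk{H}$ has codimension $0$ in $\frk{H}'$.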
\begin{corollary}
Let $\mathbb{V}$ be an $n$-tuple of commuting shifts of finite multiplicity on a Hilbert space $\frk{H}$. If $\mathfrak{H}'$ is a $\mathbb{V}$-invariant subspace of finite codimension, then $\mathbb{V}|\mathfrak{H}' \approx \mathbb{V}$. \end{corollary}
\begin{lemma}\label{LessSUFEq}
Suppose that $\mathbb{V}$ and $\WW$ are $n$-tuples of commuting isometries. If $\mathbb{V}\lesssim \WW$ and $W_n$ has no eigenvalues, then each element of $\mathbb{V}$ is a shift of finite multiplicity if and only if each element of $\WW$ is a shift of finite multiplicity. In either case $\mathbb{V}\approx\WW$. \end{lemma} \begin{proof}
Let $\frk{K}'$ be a $\WW$-invariant subspace of finite codimension in $\frk{K}$, and $S:\frk{H}\to \frk{K}'$ a boundedly invertible operator such that $SV_i=W_iS$ for $i=1,\dots,n$. By Lemma \ref{BlashProp}, there is a Blaschke product $B$ such that $B(W_n)\frk{K}\subseteq \frk{K}'$, whence
\[ B(W_n)\bigcap_{j=1}^\infty W_i^j\frk{K}\subseteq S\bigcap_{j=1}^\infty V_i^j\frk{H} \subseteq \bigcap_{j=1}^\infty W_i^j\frk{K}. \]
Thus $V_i$ is a shift if and only if $W_i$ is a shift.
Plainly
\begin{equation}\label{IncluChain1}
W_iB(W_n)\frk{K}\subseteq W_i\frk{K}'\subseteq \frk{K}'\subseteq \frk{K}.
\end{equation}
If $W_i$ and $W_n$ are shifts of finite multiplicity, then $W_iB(W_n)\frk{K}$ has finite codimension in $\frk{K}$, and it follows from \eqref{IncluChain1} that $SV_i\frk{H}$ has finite codimension in $S\frk{H}$. That is, if each element of $\WW$ is a shift of finite multiplicity, then each element of $\mathbb{V}$ is a shift of finite multiplicity. We also note that
\begin{equation}\label{IncluChain2}
W_iB(W_n)\frk{K}'\subseteq W_iB(W_n)\frk{K}\subseteq B(W_n)\frk{K} \subseteq \frk{K}'.
\end{equation}
If $V_i$ and $V_n$ are shifts of finite multiplicity, then $W_iB(W_n)\frk{K}'$ has finite codimension in $\frk{K}'$. It then follows from \eqref{IncluChain2} that $W_iB(W_n)\frk{K}$ has finite codimension in $B(W_n)\frk{K}$. Because $f\mapsto B(W_n)f$ is isometric on $\frk{K}$, it follows that each element of $\WW$ is a shift of finite multiplicity if and only if the same holds for each element of $\mathbb{V}$.
Assume that $\WW$ is an $n$-tuple of shifts of finite multiplicity, and note that $B(W_n)\frk{K}$ has finite codimension in $\frk{K}'$. Thus the $\mathbb{V}$-invariant subspace $\frk{H}'=S^{-1}B(W_n)\frk{K}$ has finite codimension in $\frk{H}$. For $f\in\frk{K}$,
\[ B ( W_n) W_j f = W_j S S^{- 1} B ( W_n) f = S V_j S^{- 1} B ( W_n) f, \]
and so $S^{-1}B(W_n)$ intertwines $\WW$ and $\mathbb{V}$. Because $g\mapsto S^{-1}B(W_n)g$ is one-to-one from $\frk{K}$ onto $\frk{H}'$, we conclude that $\WW\lesssim\mathbb{V}$ as well. \end{proof}
Recall that the virtual cyclicity $\kappa ( \mathbb{V})$ of $\mathbb{V}$ is the smallest positive integer $k$ for which there exists a set of vectors $h_1, \ldots, h_k \in \mathfrak{H}$ such that $\bigvee_{j = 1}^k \{p(\mathbb{V})h_j:p\in\CC[x_1,\dots,x_n]\}$ has finite codimension. We note that if $\mathfrak{H}'$ is a finite codimensional $\mathbb{V}$-invariant subspace of $\mathfrak{H}$, then $\kappa ( \mathbb{V}) = \kappa (
\mathbb{V}|\mathfrak{H}')$. Thus $\mathbb{V}$ is always virtually similar to an $n$-tuple that is both $\kappa ( \mathbb{V})$-cyclic and of virtual cyclicity $\kappa ( \mathbb{V})$. We remark that a virtually cyclic $n$-tuple need not be cyclic.
\begin{example}
Let $V_1$ and $V_2$ be the isometries on $H^2(\mathbb{D},\CC^2)$ given by the equations $V_1(f,g)(z)=(zf(z),zg(z))$ and $V_2(f,g)(z)=(zg(z),zf(z))$. The $(V_1,V_2)$-invariant subspace $\frk{M}$ generated by any $h\in H^2(\mathbb{D},\CC^2)$ has codimension at least 1. Indeed, if $h(0)=(a,b)\in\CC^2$ is non-zero, then $(\cc{b},-\cc{a})$ is orthogonal to $\frk{M}$. In the case that $h(0)=0$, then $\frk{M}\subseteq \zeta H^2(\mathbb{D},\CC^2)$, where $\zeta$ is the coordinate function on the disc. Thus $(V_1,V_2)$ is not cyclic. \end{example}
\begin{lemma}\label{KappaLem}
Let $\mathbb{V}$ and $\WW$ be $n$-tuples of commuting shifts of finite multiplicity. If $\mathbb{V}\lesssim\WW$, then $\kappa(\mathbb{V})=\kappa(\WW)$. \end{lemma}
\begin{proof}
By Lemma \ref{LessSUFEq}, we have $\mathbb{V}\approx \WW$. Suppose $\mathbb{V}$ and $\WW$ act on Hilbert spaces $\frk{H}$ and $\frk{K}$, respectively. There is a finite codimensional $\WW$-invariant subspace $\frk{K}'\subseteq \frk{K}$ such that $\WW|\frk{K}'$ is similar to $\mathbb{V}$; say $S\in\mc{B}(\frk{H},\frk{K}')$ is boundedly invertible and $SV_i=W_iS$ for $i=1,\dots,n$. Let $k=\kappa(\WW)=\kappa(\WW|\frk{K}')$ and fix $f_1,\dots,f_k\in\frk{K}'$ that generate a $\WW$-invariant subspace $\frk{K}''$ of finite codimension in $\frk{K}'$. The subspace $S^{-1}\frk{K}''=\bigvee_{i=1}^k\bigvee_{\beta\in\NN_0^n}\mathbb{V}^\beta S^{-1}f_i$ has finite codimension in $\frk{H}$, and therefore $\kappa(\mathbb{V})\leq k$. Because $\mathbb{V}\approx \WW$, a similar argument proves that $\kappa(\WW)\leq\kappa(\mathbb{V})$ as well. \end{proof}
\subsection{The Special Case where $\mathrm{Ann} ( \mathbb{V})$ is Prime.}\label{Sec:SC}
Throughout this subsection we assume that $\mathrm{Ann}(\mathbb{V})$ is a prime ideal and set $\mathcal{V}= Z ( \mathrm{Ann} ( \mathbb{V}))$. Let $R$ be the finite Riemann surface and $\xi$ the map from $\overline{R}$ onto $\mc{V}\cap\overline{\mathbb{D}}^n$ given by Proposition \ref{GetR}. Writing $\xi=(\xi_1,\dots,\xi_n)$, we note that $\xi_1, \ldots,\xi_n$ are unimodular on $\partial R$ and thus determine isometric multiplication operators on $H^2( R, \mathfrak{X})$ for any Hilbert space $\mathfrak{X}$. We abbreviate the $n$-tuple of multiplication operators $(M_{\xi_1},\dots,M_{\xi_n})$ by $\mathbb{M}_\xi$. Fix $x_0 \in R$ and let $\omega$ denote harmonic measure at $x_0$. Recall that $A_{\xi} ( R)$ is the uniform closure in $A(R)$ of the unital algebra generated by $\xi_1, \ldots, \xi_n$. We require the following result, whose proof is contained in that of \cite[Lemma 3.4]{AKM} for $n=2$ and \cite[Lemma 7.18]{Timko} for $n>2$. Here we sketch the proof for the reader's convenience.
\begin{lemma}\label{AbsContLem}
Let $\nu$ be a diffuse finite positive measure on $\pd R$ and denote by $\WW$ the $n$-tuple $(\xi_1,\dots,\xi_n)$ acting by multiplication on the $L^2(\nu)$-closure of $A_\xi(R)$. If $\WW$ is an $n$-tuple of commuting shifts of finite multiplicity, then $\nu\ll\omega$. \end{lemma} \begin{proof}
Denote by $A^2_{\xi}(\nu)$ and $A^2(\nu)$ the $L^2(\nu)$-closures of $A_\xi(R)$ and $A(R)$, respectively, and denote by $\mathbb{U}$ the $n$-tuple $(\xi_1,\dots,\xi_n)$ acting on $A^2(\nu)$ by multiplication. Note that $A_\xi^2(\nu)$ has finite codimension in $A^2(\nu)$ and that $\WW=\mathbb{U}|A_\xi^2(\nu)$. Because $\nu$ has no atoms, $U_n$ has no eigenvalues. By Lemma \ref{LessSUFEq}, it follows that $U_i$ is a shift of finite multiplicity for $i=1,\dots,n$.
We decompose $\nu$ as $d\nu=h\,d\omega+d\nu_s$, where $\nu_s\bot \omega$ and $h$ is a non-negative element of $L^1(\omega)$. Because $A(R)$ is a hypo-Dirichlet algebra \cite[Lem. 1]{Wermer}, it follows from \cite[Sec. 3]{AhrnSar} that every representing measure for the character $f\mapsto f(x_0)$ on $A(R)$ is absolutely continuous with respect to $\omega$. Here we use the fact that $\omega$ is an Arens-Singer measure. By \cite[Lem. II.7.4]{Gamelin}, there exists an $F_\sigma$-set $E$ of harmonic measure $0$ such that $\nu_s(\pd R\backslash E)=0$. Applying Forelli's Lemma, we find that $\chi_E\in A^2(\nu)$, which is to say that $A^2(\nu)=A^2(hd\omega)\oplus A^2(\nu_s)$. Note that $(\nu_s\circ\xi_n^{-1})(\pd\mathbb{D}\backslash\xi_n(E))=0$. As $\xi_n$ is piecewise smooth on $\pd R$ and $\omega(E)=0$, it follows that $\xi_n(E)\subseteq \pd\mathbb{D}$ has arc-length measure $0$. That is, $\nu_s\circ\xi_n^{-1}$ is singular relative to Lebesgue measure on $\pd \mathbb{D}$. By the Kolmogorov-Krein theorem for the disc,
\[ 0=\inf_{f\in A(\mathbb{D})}\int_{\pd\mathbb{D}}|1-z f(z)|^2d(\nu_s\circ\xi_n^{-1})(z)=\inf_{f\in A(\mathbb{D})}\int_{\pd R}|1-\xi_n(f\circ\xi_n)|^2d\nu_s. \]
That is, $\chi_E\in \xi_n\mathrm{clos}_{L^2(\nu)}(\CC[\xi_n]\chi_E)$ and $f\mapsto \xi_n f$ is a unitary operator on $\mathrm{clos}_{L^2(\nu)}(\CC[\xi_n]\chi_E)$. Because $f\mapsto \xi_n f$ on $A^2(\nu)$ is a shift, it follows that $\nu_s(E)=\|\chi_E\|^2_{L^2(\nu)}=0$. \end{proof}
In the following, we denote by $A(R,\CC^k)$ the linear space of all continuous functions $f:\cc{R}\to\CC^k$ that are analytic on $R$. If each component of $f$ is an element of $A_\xi(R)$ as well, then we write $f\in A_\xi(R,\CC^k)$. We view both $A(R,\CC^k)$ and $A_\xi(R,\CC^k)$ as subspaces of $L^2(\pd R,\CC^k)$.
\begin{lemma}\label{SubNormLem}
Assume that $\mathbb{V}$ has a cyclic set of size $k$.
\begin{enumerate}
\item[\rm{(i)}] There exists a $k \times k$ matrix-valued measurable function $\Gamma$ on $\partial R$ such that $\mathbb{V}$ is unitarily equivalent to $\mathbb{M}_{\xi} |\mathfrak{N}$, where
\[ \mathfrak{N}=\mathrm{clos}_{L^2(\pd R,\CC^k)}\{ \Gamma f : f \in A_{\xi} (R,\CC^k) \}. \]
\item[\rm{(ii)}] There exists a pure $H^{\infty} (R)$-invariant subspace of finite codimension in $\mathfrak{N}$.
\end{enumerate} \end{lemma}
\begin{proof}
Let $\widetilde{\mathbb{V}}$ denote the minimal unitary extension of $\mathbb{V}$ to a Hilbert space $\widetilde{\frk{H}}$, and let $h_1, \ldots, h_k \in \mathfrak{H}$ form a cyclic set for $\mathbb{V}$. There is a projection valued measure $E$ concentrated on $\mathcal{V} \cap ( \partial \mathbb{D})^n$, coming from $\widetilde{\mathbb{V}}$, such that
\[ \langle p ( \mathbb{V}) h_i, q ( \mathbb{V}) h_j \rangle =
\int_{\mathcal{V} \cap ( \partial \mathbb{D})^n} p ( z) \overline{q (
z)} d \mu_{i j} ( z), \hspace{2em} d \mu_{i j} = \langle d E \cdot h_i,
h_j \rangle, \]
for $i,j=1,\dots,k$. From this we readily deduce that each $\mu_{i j}$ is absolutely continuous
with respect to $\mu=\sum_i \mu_{i i}$.
We claim that $\mu_{i i}$ has no atoms. Take $w \in \mathcal{V}\cap(\partial\mathbb{D})^n$ and note that $\mu_{i i} ( \{ w \}) = \| E ( \{ w \}) h_i \|^2$. Fixing $v$ in the range of $E ( \{ w \})$, we find that for any $\beta \in (\NN_0)^{n}$, $\ell \in \{ 1, 2, \dots \}$, and $g \in \mathfrak{H}$,
\[ | \langle v, \tilde{\mathbb{V}}^{\ast \beta} g \rangle | = | w^{\beta}
w_1^{\ell}|\cdot| \langle v, \widetilde{V}_1^{\ell} g \rangle | = |\langle v, V_1^{\ell} g \rangle | = | \langle P_{\mathfrak{H}}
v, V_1^{\ell} g \rangle | . \]
Sending $\ell \rightarrow \infty$ and noting that $\|V_1^{*\ell}P_{\mathfrak{H}}v\|\to 0$ (as $V_1$ is a shift), we find that $v$ is orthogonal to all vectors of the form $\tilde{\mathbb{V}}^{\ast \beta} g$. As the set of such vectors is dense in $\tilde{\mathfrak{H}}$, it follows that $v = 0$ and $\mu_{ii}$ has no atoms.
Because $\xi$ sends a cofinite subset of $\pd R$ homeomorphically onto a cofinite subset of $\mc{V}\cap(\pd\mathbb{D})^n$, the pull-back measure $\nu_{i i} = \mu_{i i} \circ \xi$ is well-defined and diffuse. The restriction of $\mathbb{V}$ to $\bigvee_{\beta\in\NN_0^n} \mathbb{V}^{\beta} h_i$ is unitarily equivalent to $\mathbb{M}_\xi$ restricted to $A^2_\xi(\nu_{ii})$ and so, by Lemma \ref{AbsContLem}, we have that $\nu_{i i} \ll \omega$. In particular, $\nu=\sum_i\nu_{ii}$ is absolutely continuous with respect to $\omega$. Let $\Gamma$ be given by $( \Gamma^2)_{i j} = \left( \frac{d \mu_{j i}}{d \mu} \circ \xi \right) \frac{d \nu}{d \omega}$ for each $i$ and $j$, and note that
\[ \langle p ( \mathbb{V}) h_i, q ( \mathbb{V}) h_j \rangle =
\int_{\mathcal{V} \cap ( \partial \mathbb{D})^n} \sum_{\ell = 1}^k (
\Gamma_{\ell i} \cdot p \circ \xi) \overline{( \Gamma_{\ell j} \cdot q
\circ \xi)} d \omega \]
for $p,q\in\CC[x_1,\dots,x_n]$. Assertion (i) is now proved.
Recall that there is a single-variable polynomial $Q$ such that $Q(\xi_n)A(R)$ is contained with finite codimension in $A_{\xi} ( R)$. Therefore $\mathfrak{M} = \mathrm{clos}_{L^2(\omega)} \{ Q(\xi_n)\Gamma f : f \in A(R,\CC^k) \}$ has finite codimension in $\mathfrak{N}$. Because $\mathrm{clos}_{L^2(\omega)} A (R) = H^2 (R)$ (see \cite[pg. 168]{GamAndLum}), we conclude that $\mathfrak{M}$ is $H^{\infty}(R)$-invariant. It remains to show that $\frk{M}$ is pure invariant. Denote by $U:\frk{N}\to\frk{H}$ the unitary equivalence given by assertion (i), and let $\frk{R}\subseteq\frk{M}$ be an $H^\infty(R)$-invariant subspace on which each element of $H^\infty(R)$ acts as a normal operator. Then $M_{\xi_1}|\frk{R}$ is a unitary operator, whence $V_1|U\frk{R}$ is unitary. But $V_1$ is a shift and therefore has no unitary summands. That is, $\frk{R}=\{0\}$ and $\frk{M}$ is pure invariant. \end{proof}
Let $\tau : \mathbb{D} \rightarrow R$ denote the universal covering map appearing in Section 3, and let $G$ denote the group of deck transformations associated with $\tau$. We set $\eta=(\eta_1,\dots,\eta_n)=\xi\circ\tau$ and denote by $A_{\eta} ( \mathbb{D})$ the closed unital subalgebra generated by $\eta_1, \ldots, \eta_n$ in the disc algebra $A(\mathbb{D})$, and by $A^2_\eta(\mathbb{D})$ the $L^2$-closure of $A_\eta(\mathbb{D})$. Note that each $\eta_i$ is a $G$-invariant inner function on $\mathbb{D}$. Given a representation $\alpha : G \rightarrow \mathrm{U} ( \mathbb{C}^k)$, we fix a $k\times k$ matrix-valued analytic function $\Phi_{\alpha}$ on $\mathbb{D}$ so that $f \mapsto \Phi_{\alpha} f$ is boundedly invertible from $H^2 ( \mathbb{D}, \mathbb{C}^k)^G$ onto $H^2_{\alpha} ( \mathbb{D}, \mathbb{C}^k)$.
\begin{lemma}
\label{UniModel} There is a finite codimensional $\mathbb{V}$-invariant subspace $\mathfrak{H}'\subseteq\frk{H}$ such
that $\mathbb{V}|\mathfrak{H}'$ is similar to $\mathbb{M}_{\eta} |A^2_{\eta} (
\mathbb{D}, \mathbb{C}^k)$ for $k = \kappa ( \mathbb{V})$. \end{lemma}
\begin{proof}
Without loss of generality, we assume that $\mathbb{V}$ is also $k$-cyclic. By Lemma \ref{SubNormLem}(ii), there is a $\mathbb{V}$-invariant subspace $\frk{R}$ of
finite codimension in $\mathfrak{H}$ so that
$\mathbb{V}|\frk{R}$ is unitarily equivalent to $\mathbb{M}_{\eta} |\mathfrak{M}$
where $\mathfrak{M}$ is a pure $H^{\infty} ( \mathbb{D})^G$-invariant subspace
of $L^2 ( \partial \mathbb{D}, \mathbb{C}^k)^G$. Let $U:\frk{M}\to\frk{R}$ denote the unitary equivalence. Thus, by Theorem
\ref{AnDInvarSbspc}, there is a Hilbert space $\frk{Y}$, a $\mc{B}(\frk{Y},\CC^k)$-valued weakly measurable function $\Psi$
on $\partial \mathbb{D}$, and a unitary representation $\alpha$ of $G$ on $\frk{Y}$ such that $\mathfrak{M}= \Psi \Phi_{\alpha} H^2 ( \mathbb{D}, \frk{Y})^G$. Moreover, $\Psi|\pd\mathbb{D}$ is a.e. isometric and thus $k\geq r=\dim\frk{Y}$.
With $e_1,\dots,e_r$ denoting an orthonormal basis for $\frk{Y}$, we set $f_i=\Psi\Phi_\alpha e_i$ for $i=1,\dots, r$. Because $A_\eta^2(\mathbb{D})$ has finite codimension in $H^2(\mathbb{D})^G$, the subspace $\bigvee_{i=1}^r A_\eta(\mathbb{D})f_i$ has finite codimension in $\frk{M}$. Thus $\frk{H}'=\bigvee_{i=1}^r\bigvee_{\beta\in\NN_0^n}\mathbb{V}^\beta Uf_i$ is a subspace of finite codimension in $\frk{H}$. By definition of $\kappa(\mathbb{V})$, we have that $k\leq r$ and thus $k=r$. \end{proof}
\begin{theorem}
\label{MainThm} Let $\mathbb{V}$ and $\WW$ be $n$-tuples of commuting shifts of finite multiplicity. If $\mathrm{Ann} (\mathbb{V})$ is a prime ideal, then $\mathbb{V} \approx \mathbb{W}$ if
and only if $\mathrm{Ann} ( \mathbb{V}) = \mathrm{Ann} ( \mathbb{W})$ and
$\kappa ( \mathbb{V}) = \kappa ( \mathbb{W})$. \end{theorem} \begin{proof}
If $\mathbb{V}\approx \WW$, then it follows from Corollary \ref{VS:ApproxEqAnn} that $\Ann(\mathbb{V})=\Ann(\WW)$, and from Lemma \ref{KappaLem} that $\kappa(\mathbb{V})=\kappa(\WW)$.
Assume that $\Ann(\mathbb{V})=\Ann(\WW)$ and $\kappa(\mathbb{V})=\kappa(\WW)$. By Lemma \ref{UniModel}, there is an $n$-tuple of commuting shifts of finite multiplicity $\mathbb{U}$, which depends up to similarity only on $\Ann(\mathbb{V})$ and $\kappa(\mathbb{V})$, such that $\mathbb{U}\lesssim \mathbb{V}$. By Lemma \ref{LessSUFEq}, it follows that $\mathbb{U}\approx \mathbb{V}$. A similar argument shows that $\mathbb{U}\approx\WW$ as well. \end{proof}
\subsection{The General Case}
If $\mathrm{Ann} ( \mathbb{V})$ is not prime, then there exists a
unique finite collection of prime ideals $\mathcal{I}_1, \ldots,
\mathcal{I}_m$ with $m > 1$ such that $\mathrm{Ann} ( \mathbb{V}) =
\bigcap_{i = 1}^m \mathcal{I}_i$ and $\mathrm{Ann}(\mathbb{V})\subsetneq \bigcap_{i\neq j}\mc{I}_i$ for $j=1,\dots,m$. We call these ideals the \emph{prime factors} of $\Ann(\mathbb{V})$. For $i = 1, \ldots, m$ we set $\widehat{\mathcal{I}}_i = \bigcap_{j \neq i} \mathcal{I}_j$, and define the subspaces
\[ \mathfrak{H}_i = \mathrm{clos} \{ p ( \mathbb{V}) f : p \in
\widehat{\mathcal{I}}_i, f \in \mathfrak{H} \}, \quad
\mathfrak{H}_i^+ = \left\{ f \in \mathfrak{H}: p ( \mathbb{V}) f = 0\text{ for all } p \in \mathcal{I}_i \right\}. \]
Note that $\frk{H}_i\subseteq \frk{H}_i^+$ for each $i$. Because the ideals of $\CC[x_1,\dots,x_n]$ are finitely generated, the $n$-tuple $\mathbb{V}|\frk{H}_i$ has a finite cyclic set. By \cite[Thm 6.2]{Timko}, each element of $\mathbb{V}|\frk{H}_i$ has finite multiplicity.
\begin{lemma}
Both $\sum_{i=1}^m \mathfrak{H}_i$ and $\sum_{i=1}^m \mathfrak{H}_i^+$ have finite codimension in $\mathfrak{H}$, and
\begin{equation}\label{GCLem1Eq1}
\Ann(\mathbb{V}|\frk{H}_j^+)=\Ann(\mathbb{V}|\frk{H}_j)=\mc{I}_j,\quad j=1,\dots,m.
\end{equation}
\end{lemma}
\begin{proof}
Plainly $\frk{H}_i\subseteq \frk{H}_i^+$ and
\begin{equation}\label{GCLem1Eq2}
Z(\Ann(\mathbb{V}|\frk{H}_i))\subseteq Z(\Ann(\mathbb{V}|\frk{H}_i^+))\subseteq Z(\mc{I}_i),\quad i=1,\dots,m.
\end{equation}
Observe that $\mathbb{V}|\frk{H}_i$ is an $n$-tuple of shifts, and therefore $Z(\Ann(\mathbb{V}|\frk{H}_i))$ has no 0-dimensional components \cite[Lemma 3.1]{Timko}. Because $Z(\mc{I}_i)$ is an irreducible variety of dimension 1, we have that $Z(\Ann(\mathbb{V}|\frk{H}_i))=Z(\mc{I}_i)$, and \eqref{GCLem1Eq1} now follows from \eqref{GCLem1Eq2}.
For the remaining assertions, it suffices to show that $\sum_{j=1}^m\frk{H}_j$ has finite codimension. Recall that $V_n$ has finite multiplicity, and thus $\mathbb{V}$ has a finite cyclic set $\{h_1,\dots,h_k\}$ in $\frk{H}$. Because $Z\big(\sum_{\ell=1}^m\widehat{\mc{I}}_\ell\big)$ is contained in the finite set $\bigcup_{i\neq j}Z(\mc{I}_i)\cap Z(\mc{I}_j)$, the ideal $\sum_{\ell=1}^m\widehat{\mc{I}}_\ell$ has finite codimension in $\CC[x_1,\dots,x_n]$; say
\[ \CC[x_1,\dots,x_n]=\CC\cdot p_1+\cdots+\CC\cdot p_r + \sum_{\ell=1}^m \widehat{\mc{I}}_\ell. \]
Because each $\widehat{\mc{I}}_i$ is an ideal, we have that
\[ \frk{H}=\sum_{\ell=1}^r\sum_{j=1}^k \CC p_\ell(\mathbb{V})h_j + \bigvee_{j=1}^m \frk{H}_j. \qedhere \]
\end{proof}
Given a polynomial $p\in\CC[x_1,\dots,x_n]$ and $z\in(\CC\backslash\{0\})^n$, we define $\iota(p)\in\CC[x_1,\dots,x_n]$ by setting
\[ \iota(p)(z)=z_1^{d_1}\cdots z_n^{d_n}\cc{p(1/\cc{z}_1,\dots,1/\cc{z}_n)} \]
where $d_1,\dots,d_n$ are the degrees of $x_1,\dots,x_n$ in $p$, respectively. We call $\delta=(d_1,\dots,d_n)$ the multi-degree of $p$. We now have
\[ \iota(p)(\mathbb{V})=p(\mathbb{V})^*\mathbb{V}^\delta. \]
Thus $p\in\Ann(\mathbb{V})$ if and only if $\iota(p)\in\Ann(\mathbb{V})$. What follows is based on \cite[Thm. 2.1]{AKM} and \cite[Thm. 3.4]{Timko}.
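For instance (an illustrative computation, not needed below), take $n=2$ and $p(x_1,x_2)=x_1^2x_2-2x_1+3$, so that $\delta=(2,1)$. Then
\[
\iota(p)(z)=z_1^2z_2\,\cc{p(1/\cc{z}_1,1/\cc{z}_2)}
=z_1^2z_2\Big(\frac{1}{z_1^2z_2}-\frac{2}{z_1}+3\Big)
=1-2z_1z_2+3z_1^2z_2,
\]
while, using that $\mathbb{V}^{*\gamma}\mathbb{V}^{\delta}=\mathbb{V}^{\delta-\gamma}$ whenever $\gamma\leq\delta$ entrywise,
\[
p(\mathbb{V})^*\mathbb{V}^\delta
=(V_1^{*2}V_2^*-2V_1^*+3I)V_1^2V_2
=I-2V_1V_2+3V_1^2V_2
=\iota(p)(\mathbb{V}).
\]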
\begin{lemma}
The subspaces $\frk{H}_1,\dots,\frk{H}_k$ are pairwise orthogonal, the subspaces $\frk{H}_1^+,\dots,\frk{H}_m^+$ are linearly independent, and $\frk{H}_i$ has finite codimension in $\frk{H}_i^+$ for $i=1,\dots,m$.
\end{lemma}
\begin{proof}
Let $p_i\in\widehat{\mc{I}}_i$ and let $p_j\in\widehat{\mc{I}}_j$ for distinct $i,j$ in $\{1,\dots,m\}$. Because $\Ann(\mathbb{V}|\frk{H}_\ell)=\mc{I}_\ell$ for each $\ell$, it follows that $\iota(p_i)\in\widehat{\mc{I}}_i$ and thus that $\iota(p_i)\cdot p_j\in\Ann(\mathbb{V})$. In other words, for $h,h'\in\frk{H}$ and $\delta$ the multi-degree of $p_i$,
\[ \inner{p_i(\mathbb{V})h}{p_j(\mathbb{V})h'}=\inner{\mathbb{V}^\delta h}{p_i(\mathbb{V})^*\mathbb{V}^\delta p_j(\mathbb{V})h'}= \inner{\mathbb{V}^\delta h}{(\iota(p_i) p_j)(\mathbb{V})h'}=0.\]
That is, $\frk{H}_i$ and $\frk{H}_j$ are orthogonal.
If $f \in \mathfrak{H}_i^+ \cap \sum_{j \neq i} \mathfrak{H}_j^+$, then $p ( \mathbb{V}) f = 0$ for each $p \in \mathcal{I}_i + \widehat{\mathcal{I}}_i$. However, $Z ( \mathcal{I}_i + \widehat{\mathcal{I}}_i) = \bigcup_{j \neq i} Z ( \mathcal{I}_i) \cap Z ( \mathcal{I}_j)$ has dimension 0, and thus $\mathcal{I}_i + \widehat{\mathcal{I}}_i$ contains a non-zero single-variable polynomial $p_0 ( x_1)$. Because $p_0 ( V_1)$ is injective, we have that $f = 0$.
Let $p\in \widehat{\mc{I}}_j$ for some $j\neq i$ and denote by $\delta$ the multi-degree of $p$. Because $\Ann(\mathbb{V}|\frk{H}_\ell)=\mc{I}_\ell$ for each $\ell$, we know that $\iota(p)\in \widehat{\mc{I}}_j$. Thus, for $f\in\frk{H}_i^+$ and $g\in\frk{H}$,
\[ \inner{f}{p(\mathbb{V})g}=\inner{\mathbb{V}^{*\delta}p(\mathbb{V})^*\mathbb{V}^\delta f}{g}=\inner{\iota(p)(\mathbb{V}) f}{\mathbb{V}^{\delta}g}=0. \]
Thus $\frk{H}_i^+\bot \frk{H}_j$ for $j\neq i$, whence $\frk{H}_i^+\ominus \frk{H}_i\subseteq \left(\sum_{\ell=1}^m\frk{H}_\ell\right)^\bot$. Because $\sum_{\ell=1}^m\frk{H}_\ell$ has finite codimension in $\frk{H}$, we have that $\frk{H}_i$ has finite codimension in $\frk{H}_i^+$.
\end{proof}
It follows from the preceding lemma that $\mathbb{V}|\mathfrak{H}_i \approx \mathbb{V}|\mathfrak{H}_i^+$ and $\kappa(\mathbb{V}|\mathfrak{H}_i) = \kappa ( \mathbb{V}|\mathfrak{H}_i^+)$ for $i=1,\dots,m$. We also note that if $\mathfrak{H}'$ is a $\mathbb{V}$-invariant finite
codimensional subspace of $\mathfrak{H}$, then the subspace $\mathfrak{H}_i' = \mathrm{clos} \{ p ( \mathbb{V}) f : p \in \widehat{\mathcal{I}}_i, f \in \mathfrak{H}' \}$ has finite codimension in $\mathfrak{H}_i$. Because of this, $\mathcal{I}_i = \mathrm{Ann} ( \mathbb{V}|\mathfrak{H}_i')$ and $\kappa (
\mathbb{V}|\mathfrak{H}_i^+) = \kappa ( \mathbb{V}|\mathfrak{H}_i')$. More generally, we have the following.
\begin{lemma}\label{IdlCodimLem}
Let $\mc{I}$ be an ideal of $\CC[x_1,\dots,x_n]$ and let $\frk{H}'$ be a finite codimensional $\mathbb{V}$-invariant subspace of $\frk{H}$. The closure $\frk{H}_\mc{I}'$ of $\{p(\mathbb{V})h:h\in\frk{H}',p\in\mc{I}\}$ has finite codimension in the closure $\frk{H}_\mc{I}$ of $\{p(\mathbb{V})h:h\in\frk{H},p\in\mc{I}\}$. Moreover, $\kappa(\mathbb{V}|\frk{H}_\mc{I}')=\kappa(\mathbb{V}|\frk{H}_\mc{I})$.
\end{lemma}
\begin{proof}
Ideals of $\CC[x_1,\dots,x_n]$ are finitely generated, and thus there are polynomials $p_1,\dots,p_\ell\in\mc{I}$ such that $\mc{I}=\sum_{j=1}^\ell \CC[x_1,\dots,x_n]\cdot p_j$. Because of this, $\frk{H}_\mc{I}'=\bigvee_{j=1}^\ell p_j(\mathbb{V})\frk{H}'$, with a similar expression for $\frk{H}_\mc{I}$. Thus, if $f_1,\dots,f_m$ form a basis for $\frk{H}\ominus\frk{H}'$, then
\[ \frk{H}_\mc{I}=\frk{H}_\mc{I}'+\sum_{j=1}^\ell\sum_{i=1}^m \CC p_j(\mathbb{V})f_i. \qedhere \]
\end{proof}
Recall from Corollary \ref{VS:ApproxEqAnn} that virtually similar $n$-tuples have the same annihilator. In particular, virtually similar tuples have annihilators with the same prime factors.
\begin{theorem}\label{GenThm}
Let $\mathbb{V}$ and $\WW$ be $n$-tuples of commuting shifts of finite multiplicity on Hilbert spaces $\frk{H}$ and $\frk{K}$, respectively. We denote by $\mc{I}_1,\dots,\mc{I}_m$ the prime factors of $\Ann(\mathbb{V})$, and set
\[ \frk{H}_i^+=\bigcap_{p\in\mc{I}_i}\ker p(\mathbb{V}), \quad \frk{K}_i^+=\bigcap_{p\in\mc{I}_i}\ker p(\WW), \quad
i=1,\dots,m.\]
The following assertions are equivalent.
\begin{enumerate}
\item[\rm{(i)}] $\mathbb{V} \approx \mathbb{W}$.
\item[\rm{(ii)}] $\mathrm{Ann} ( \mathbb{V}) = \mathrm{Ann} ( \mathbb{W})$ and $\kappa (\mathbb{V}|\mathfrak{H}_j^+) = \kappa ( \mathbb{W}|\mathfrak{K}_j^+)$ for $j=1,\dots,m$.
\end{enumerate} \end{theorem}
\begin{proof}
Assume (ii). As above, we set $\frk{H}_j=\mathrm{clos}\{p(\mathbb{V})h:p\in\widehat{\mc{I}}_j,\: h\in\frk{H}\}$ and $\frk{K}_j=\mathrm{clos}\{p(\WW)g:p\in\widehat{\mc{I}}_j,\; g\in\frk{K}\}$ for $j=1,\dots,m$. Because $\sum_{j=1}^m\frk{H}_j$ and $\sum_{j=1}^m\frk{K}_j$ have finite codimension in $\frk{H}$ and $\frk{K}$, respectively, we deduce from Lemma \ref{LessSUFEq} that
\[ \mathbb{V} \approx \bigoplus_{j=1}^m \mathbb{V}|\frk{H}_j, \quad \WW\approx \bigoplus_{j=1}^m\WW|\frk{K}_j. \]
As $\Ann(\mathbb{V}|\frk{H}_i)=\mc{I}_i=\Ann(\WW|\frk{K}_i)$ and $\kappa(\mathbb{V}|\frk{H}_i)=\kappa(\WW|\frk{K}_i)$, it follows from Theorem \ref{MainThm} that $\mathbb{V}|\frk{H}_i\approx \WW|\frk{K}_i$ for $i=1,\dots,m$. Thus we have that
\[ \mathbb{V} \gtrsim \bigoplus_{j=1}^m \mathbb{V}|\frk{H}_j \gtrsim \bigoplus_{j=1}^m\WW|\frk{K}_j\gtrsim \WW. \]
Likewise, we find that $\WW\gtrsim\mathbb{V}$.
Conversely, we assume (i) and note that $\Ann(\mathbb{V})=\Ann(\WW)$ by Corollary \ref{VS:ApproxEqAnn}. Let $\mathfrak{K}'$ be a finite codimensional
$\mathbb{W}$-invariant subspace of $\mathfrak{K}$ and let $S\in\mc{B}(\mathfrak{H},\mathfrak{K}')$ be a boundedly invertible operator such that
$SV_j=W_jS$ for $j=1,\dots,n$. It follows from $\dim(\frk{K}\ominus\frk{K}')<\infty$ and Lemma \ref{LessSUFEq} that each element of $\WW|\frk{K}'$ is a shift of finite multiplicity. We set $\mathfrak{K}_i' = \mathrm{clos} \{ p ( \mathbb{W}) f : p \in \widehat{\mathcal{I}}_i, f \in \mathfrak{K}' \}$ for $i=1,\dots,m$ and note that $\frk{K}_i'$ has finite codimension in $\frk{K}_i$ by Lemma \ref{IdlCodimLem}. Thus $\frk{K}_i'$ has finite codimension in $\frk{K}_i^+$ as well. Because $S^{- 1} \mathfrak{K}_i' =\mathfrak{H}_i$, we conclude that $\kappa ( \mathbb{W}|\mathfrak{K}_i^+) = \kappa (
\mathbb{W}|\mathfrak{K}_i') = \kappa ( \mathbb{V}|\mathfrak{H}_i) =
\kappa ( \mathbb{V}|\mathfrak{H}_i^+)$. \end{proof}
Before ending this section, we remark that assertion (i) of Theorem \ref{GenThm} can be weakened as follows. Let $\mathbb{V}$ and $\WW$ be $n$-tuples of commuting shifts of finite multiplicity on Hilbert spaces $\frk{H}$ and $\frk{K}$, respectively. Suppose there are injective linear maps $X\in\mc{B}(\frk{H},\frk{K})$ and $Y\in\mc{B}(\frk{K},\frk{H})$ such that $\dim (\ran X)^\bot<\infty$, $\dim(\ran Y)^\bot<\infty$, and \[ XV_i=W_iX, \quad YW_i=V_iY \quad\text{for }i=1,\dots,n. \] Then we say that $\mathbb{V}$ and $\WW$ are \emph{virtually quasi-similar}. It is evident that virtual similarity implies virtual quasi-similarity. The converse is also true.
\begin{proposition}
If $\mathbb{V}$ and $\WW$ are virtually quasi-similar $n$-tuples of commuting shifts of finite multiplicity, then $\mathbb{V}\approx \WW$. \end{proposition} \begin{proof}
Let $X$ and $Y$ be as above. If $p\in\Ann(\WW)$, then $0=p(\WW)X=Xp(\mathbb{V})$. As $X$ is injective, it follows that $p\in\Ann(\mathbb{V})$. In a similar fashion, we see that $\Ann(\mathbb{V})\subseteq \Ann(\WW)$ as well.
Let $\mc{I}_1,\dots,\mc{I}_m$ be the prime factors of $\Ann(\mathbb{V})$, and set
\[ \frk{H}_i=\bigvee_{p\in\widehat{\mc{I}}_i}p(\mathbb{V})\frk{H}, \quad \frk{K}_i=\bigvee_{p\in\widehat{\mc{I}}_i}p(\WW)\frk{K} \]
for $i=1,\dots,m$. Plainly $X\frk{H}_i\subseteq \frk{K}_i$ and $Y\frk{K}_i\subseteq \frk{H}_i$. By Lemma \ref{IdlCodimLem}, it follows that $\dim(\frk{K}_i\ominus X\frk{H}_i)<\infty$ and $\dim(\frk{H}_i\ominus Y\frk{K}_i)<\infty$.
With $k=\kappa(\mathbb{V}|\frk{H}_i)$, let $f_1,\dots,f_k\in\frk{H}_i$ be such that $\bigvee_{j=1}^k\bigvee_{\beta\in\NN^n_0}\mathbb{V}^\beta f_j$ has finite codimension in $\frk{H}_i$. This implies that $\bigvee_{j=1}^k\bigvee_{\beta\in\NN^n_0}\WW^\beta Xf_j$ has finite codimension in $\frk{K}_i$, and therefore $\kappa(\mathbb{V}|\frk{H}_i)=k\geq \kappa(\WW|\frk{K}_i)$. By a similar argument, we conclude that $\kappa(\mathbb{V}|\frk{H}_i)\leq \kappa(\WW|\frk{K}_i)$ as well. The proposition now follows from Theorem \ref{GenThm}. \end{proof}
\section{Virtual Unitary Equivalence}
We say that two $n$-tuples of commuting shifts are \emph{virtually unitarily equivalent} if each tuple is unitarily equivalent to a finite codimensional restriction of the other. Fix an $n$-tuple $\mathbb{V}$ of commuting shifts of finite multiplicity for which $\mathrm{Ann} ( \mathbb{V})$ is prime and set $k=\kappa(\mathbb{V})$.
\begin{quotation}\noindent
Question A : If $\mathbb{W}$ is a virtually $k$-cyclic $n$-tuple of commuting shifts of finite multiplicity and $\mathrm{Ann} ( \mathbb{V}) = \mathrm{Ann} ( \mathbb{W})$, is $\mathbb{W}$ virtually unitarily equivalent to $\mathbb{V}$? \end{quotation}
\noindent For $k=1$, the answer to Question A is `yes', as shown for $n=2$ in \cite{AKM} and $n>2$ in \cite{Timko}. The question remains open in general for $k>1$. We compare Question A with the following related question, where $G$ and $R$ are as in Section \ref{Sec:SC}. \begin{quotation}\noindent
Question B : Given $\alpha \in \Hom ( G, \mathrm{U} ( \mathbb{C}^k))$, does
there exist a $k\times k$ matrix-valued inner function $\Psi$ on
$\mathbb{D}$ such that $\Psi H_{\alpha}^2 ( \mathbb{D}, \mathbb{C}^k)$ is
a finite codimensional subspace of $H^2 ( \mathbb{D}, \mathbb{C}^k)^G$? \end{quotation}
\noindent By Theorem \ref{AnDInvarSbspc}, every $H^{\infty}(\mathbb{D})^G$-invariant subspace of $H^2 ( \mathbb{D}, \mathbb{C}^k)^G$ is of the form $\Psi H^2_{\alpha} ( \mathbb{D}, \mathbb{C}^k)$ for \emph{some} representation $\alpha$. If the answer to (B) is `yes', then such a subspace would exist for \emph{each} representation $\alpha$.
\begin{proposition}\label{EquivQProp}
The answer to Question $\mathrm{A}$ is affirmative if and only if the answer to Question $\mathrm{B}$ is affirmative. \end{proposition}
\begin{proof}
By Lemma \ref{UniModel}, there is a $\beta\in\Hom(G,\mathrm{U}(\CC^k))$ such that $\mathbb{V}$ is virtually similar to $\mathbb{M}_\eta|H^2_\beta(\mathbb{D},\CC^k)$. Assume that Question A may be answered in the affirmative and let $\alpha\in \Hom(G,\mathrm{U}(\CC^k))$. Denote by $\WW$ and $\mathbb{U}$ the $n$-tuples of commuting shifts of finite multiplicity given by $\mathbb{M}_\eta|H^2_\alpha(\mathbb{D},\CC^k)$ and $\mathbb{M}_\eta|H^2(\mathbb{D},\CC^k)^G$, respectively. These $n$-tuples are virtually similar to $\mathbb{V}$ and thus, by hypothesis, $\mathbb{V}$ is virtually unitarily equivalent to both $\WW$ and $\mathbb{U}$. In particular, $\WW$ and $\mathbb{U}$ are virtually unitarily equivalent. Thus there exists a finite codimensional $\mathbb{U}$-invariant subspace $\frk{M}$ and a unitary operator $T:H^2_\alpha(\mathbb{D},\CC^k)\to \frk{M}$ such that $U_jT=TW_j$ for $j=1,\dots,n$.
We claim that $T$ is multiplication by a matrix-valued inner function $\Psi$ with the property that $\Psi\circ\gamma=\Psi\cdot\alpha(\gamma)^{-1}$ for each $\gamma\in G$. Indeed, set $A:f\mapsto T\Phi_\alpha f$, and note that $Ae_1,\dots,Ae_k\in H^2(\mathbb{D},\CC^k)^G$, where $e_1,\dots,e_k$ form an orthonormal basis for $\CC^k$. Let $Q$ be the polynomial given by Lemma \ref{GetQLem}, and note that multiplication by $Q(\eta_n)g$ commutes with $A$ for each $g\in H^\infty(\mathbb{D})^G$. Thus
\[ Q(\eta_n)g Ae_\ell=A Q(\eta_n)ge_\ell=Q(\eta_n)Age_\ell, \quad \ell=1,\dots,k. \]
Because $Q(U_n)$ is injective, we have that $A$ commutes with $H^\infty(\mathbb{D})^G$ and thus is a multiplication operator with a $G$-invariant analytic matrix-valued symbol $\Theta$. That is, $T$ is multiplication by $\Psi=\Theta\cdot (\Phi_\alpha)^{-1}$. The claim now follows from the fact that $T$ is isometric. Because $\frk{M}=\ran T$ has finite codimension, Question B is answered in the affirmative.
It is easily seen that Lemma \ref{LessSUFEq} remains true when `similarity' is replaced by
`unitary equivalence'. To answer Question A in the affirmative, it therefore suffices to find a matrix-valued inner function $\Psi'$ with the property that $\Psi' H^2_\beta(\mathbb{D},\CC^k)$ has finite codimension in $H^2(\mathbb{D},\CC^k)^G$. That is, an affirmative answer to Question B provides an affirmative answer to Question A. \end{proof}
While the answers to Questions A and B are not known in general, we can provide an affirmative answer to Question B if we restrict to representations of $G$ with commutative image. In particular, if $G$ is a commutative group, then Question A has an affirmative answer for any $k$. To demonstrate this, we require some additional results, which are based on \cite[Sec. 4]{AKM} and \cite{Khavinson}.
Fix representative curves $K_1,\dots,K_L$ for the generators of the fundamental group of $R$ with base point $x_0$. By the Hurewicz theorem, these curves also provide a basis for the first singular homology group of $R$ with integer coefficients. Through the isomorphism of $G$ with the fundamental group, there are generators $\gamma_1,\dots,\gamma_L\in G$ such that the curve $K_j$ is covered by a curve in $\mathbb{D}$ beginning at $0$ and ending at $\gamma_j(0)$ for $j=1,\dots,L$.
Denote by $\omega_x$ the harmonic measure for evaluation at $x\in R$, and recall that $\omega=\omega_{x_0}$. By Harnack's inequality, the measures $\omega_x$ and $\omega_{x_0}$ are mutually absolutely continuous. We set \[ P(x,y)=\frac{d\omega_x(y)}{d\omega_{x_0}(y)}, \quad y\in \pd R, \] and record a few theorems from \cite{Khavinson} that we require.
\begin{lemma}[{\cite[Lemma 3.1]{Khavinson}} ]\label{Khav1}
For $j=1,\dots,L$, there exists a continuous function $Y_j$ on $\pd R$ with the following property. If $\mu$ is a finite real measure on $\pd R$ and $u(x)=\int_{\pd R}P(x,y)d\mu(y)$, then the period of the harmonic conjugate $*u$ along the curve $K_j$ is given by $\int_{\pd R}Y_j(y)d\mu(y)$. \end{lemma}
Let $\mc{D}=\{\Delta_1,\dots,\Delta_L\}$ be a collection of disjoint Borel subsets of $\pd R$, and let $d\nu$ be a finite real Borel measure on $\pd R$. Define the matrix $A(\nu,\mc{D})$ with $ij$-th entry given by \[ A_{ij}(\nu,\mc{D})=\int_{\Delta_j} Y_i(x)d\nu(x), \] with $i,j\in\{1,\dots,L\}$. This is the period along $K_i$ of the harmonic conjugate of \[ g_j^{\nu,\mc{D}}(x)=\int_{\Delta_j}P(x,y)d\nu(y). \] We call $A(\nu,\mc{D})$ the \emph{period matrix} of $\nu$ and $\mc{D}$.
\begin{lemma}[{\cite[Lemma 4.7]{Khavinson}}]\label{Khav2}
Given a disjoint family of non-empty open arcs $\Delta_1,\dots,\Delta_L$ in $\pd R$, there exist non-empty arcs $\widetilde{\Delta}_1\subseteq \Delta_1, \dots, \widetilde{\Delta}_L\subseteq \Delta_L$ such that the period matrix for $\omega$ and $\{\widetilde{\Delta}_1,\dots,\widetilde{\Delta}_L\}$ is non-singular. \end{lemma}
\begin{lemma}[{\cite[Lemma 4.12]{Khavinson}} ]\label{Khav3}
Let $\mc{D}=\{\Delta_1,\dots,\Delta_L\}$ be a disjoint family of non-empty arcs for which the period matrix for $\omega$ and $\mc{D}$ is non-singular, and let $v=(v_1,\dots,v_L)\in\RR^L$. There exists a $v'\in\RR^L$ such that the function $f=\sum_{j=1}^L v'_jg_j^{\omega,\mc{D}}$ has a harmonic conjugate with periods $v_1,\dots,v_L$ along the curves $K_1,\dots,K_L$, respectively. \end{lemma}
From here on, we fix a disjoint family $\mc{D}=\{\Delta_1,\dots,\Delta_L\}$ of arcs in $\pd R$ such that the period matrix for $\omega$ and $\mc{D}$ is non-singular. For brevity, we set $g_j=g_j^{\omega,\mc{D}}$ for $j=1,\dots,L$. With $f=\sum_{j=1}^L a_j g_j$ for some $a_1,\dots,a_L\in\RR$, we note that $f\circ\tau$ has a unique single valued harmonic conjugate $*(f\circ\tau)$ on $\mathbb{D}$ for which $*(f\circ\tau)(0)=0$. If $*f$ has periods $b_1,\dots,b_L$, then $*(f\circ\tau)\circ\gamma_j=*(f\circ\tau)+b_j$ for $j=1,\dots,L$.
Given a subset $U$ of $\pd R$, we denote by $\chi_U$ the characteristic function of $U$.
\begin{lemma}[{\cite[Lem. 3.11]{AKM}}]\label{AKMFLem}
Given $b_1,\dots,b_L\in\RR$, there exists a bounded holomorphic function $F$ on $R$ with finitely many zeros in $R$ satisfying
\[ \log | F(x) |=-\sum_{i=1}^L b_i\chi_{\Delta_i}(x), \quad x\in\pd R. \] \end{lemma}
Before we construct the matrix-valued inner function appearing in Question B for the case wherein $\alpha$ has commutative image, we first construct an analogous scalar-valued inner function.
\begin{lemma}\label{VUE:CharLem}
Let $\alpha$ be a character of $G$. There exists a scalar inner function $\psi$ on $\mathbb{D}$ such that $\psi H^2(\mathbb{D})^G$ has finite codimension in $H^2_\alpha(\mathbb{D})$. \end{lemma} \begin{proof}
Let $a=(a_1,\dots,a_L)\in \RR^L$ be such that $\alpha(\gamma_\ell)=e^{ia_\ell}$ for $\ell=1,\dots,L$. By Lemma \ref{Khav3}, there is a $b=(b_1,\dots,b_L)\in\RR^L$ such that $\sum_{j=1}^L b_j (*g_j)$ has period $a_i$ along $K_i$ for $i=1,\dots,L$. The multivalued function $h=\sum_{j=1}^Lb_j(g_j+i *g_j)$ is holomorphic on $R$ with the property that $(\mathrm{Re}\: h)(x)=\sum_{\ell=1}^L b_\ell \chi_{\Delta_\ell}(x)$ for $x\in \pd R$. Let $\widehat{h}:\mathbb{D}\to\CC$ be analytic such that $h\circ\tau=\widehat{h}$. With $\Omega_\ell=\tau^{-1}(\Delta_\ell)$ for $\ell=1,\dots,L$, we note that $(\mathrm{Re}\: \widehat{h})(z)=\sum_{i=1}^L b_i\chi_{\Omega_i}(z)$ for a.e. $z\in\pd \mathbb{D}$. The function $\phi=\exp(\widehat{h})$ and its reciprocal are both bounded and analytic on $\mathbb{D}$, and
\[ \phi\circ\gamma_\ell = e^{ia_\ell}\phi, \quad \ell=1,\dots,L. \] In particular, $H^2_\alpha(\mathbb{D})=\phi H^2(\mathbb{D})^G$.
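To spell out the equivariance just used: as noted after Lemma \ref{Khav3}, the real part of $\widehat{h}$ is invariant under each $\gamma_\ell$ (it is pulled back from $R$), while its harmonic conjugate shifts by the period $a_\ell$, so
\[ \widehat{h}\circ\gamma_\ell = \widehat{h} + ia_\ell, \qquad \phi\circ\gamma_\ell = \exp\bigl(\widehat{h}+ia_\ell\bigr) = e^{ia_\ell}\phi. \]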
Let $F$ be as in Lemma \ref{AKMFLem}, and set $\psi=(F\circ\tau)\phi$. We note that $\log|\psi(z)|=0$ for a.e. $z\in\pd\mathbb{D}$ and that $\psi\circ\gamma_\ell=e^{ia_\ell}\psi$ for $\ell=1,\dots,L$. It remains to show that $(F\circ\tau)H^2(\mathbb{D})^G$ has finite codimension in $H^2(\mathbb{D})^G$, and for this we note the following. If $f\in H^2(R)$ has the same zeros as $F$, counting multiplicity, then $f/F\in H^2(R)$. Thus $(F\circ\tau)H^2(\mathbb{D})^G$ has finite codimension in $H^2(\mathbb{D})^G$, whence $\psi H^2(\mathbb{D})^G$ has finite codimension in $H^2_\alpha(\mathbb{D})$. \end{proof}
\begin{corollary}
If $\alpha$ is a representation of $G$ with commutative image, then the inner function $\Psi$ appearing in Question $\mathrm{B}$ exists. In particular, Question $\mathrm{A}$ is answered in the affirmative when $G$ is commutative. \end{corollary}
\begin{proof}
Given complex numbers $c_1,\dots,c_k$, we denote by $\mathrm{diag}(c_1,\dots,c_k)$ the diagonal matrix with entries $c_1,\dots,c_k$. The matrices $\alpha(\gamma_1),\dots,\alpha(\gamma_L)$ commute, and thus we may assume that there are $a_j^{(1)},\dots,a_j^{(k)}\in\RR$ such that $\alpha(\gamma_j)=\exp(i\:\mathrm{diag}(a_j^{(1)},\dots,a_j^{(k)}))$ for $j=1,\dots,L$. Denote by $\psi_\ell$ the scalar inner function given by Lemma \ref{VUE:CharLem} for the $L$-tuple $(a_1^{(\ell)},\dots,a_L^{(\ell)})$ for $\ell=1,\dots,k$, and set $\Psi=\mathrm{diag}(\psi_1,\dots,\psi_k)$. \end{proof}
\end{document}
\begin{document}
\ifcase 0
\iffalse {Simplified Algorithms for Cardinality Matching: The Weighted Matching Approach} \fi
\title{The Weighted Matching Approach \\to Maximum Cardinality Matching} \author{ Harold N.~Gabow \thanks{Department of Computer Science, University of Colorado at Boulder, Boulder, Colorado 80309-0430, USA. E-mail: {\tt hal@cs.colorado.edu} } }
\date{March 10, 2017}
\maketitle \input prelude
\begin{abstract} Several papers have achieved time $O(\sqrt n m)$ for cardinality matching, starting from first principles. This results in a long derivation.
We simplify the task by employing well-known concepts for maximum weight matching. We use Edmonds' algorithm to derive the structure of shortest augmenting paths. We extend this to a complete algorithm for maximum cardinality matching in time $O(\sqrt n m)$. \end{abstract} \fi
\def0{0} \ifcase 0
\iftrue
\input newed
\fi
\or \input intro \input newed \input bnotes
\input nca \input alpha \input bmatch \input code \input strong \fi \iffalse \input intro \input ed
\fi
\iffalse \section*{Acknowledgments}
Thanks to an anonymous referee for a careful reading and many suggestions. \fi
\ifcase 1 \or
\begin{thebibliography}{99}
\input bmacros
\def\nrlq #1,{{\it Naval Res.\ Logist.\ Quart., #1,}}
\iffalse \bibitem{AHU} A.V. Aho, J.E. Hopcroft, and J.D. Ullman, {\it The Design and Analysis of Computer Algorithms}, Addison-Wesley, Reading, Mass., 1974.
\bibitem{AMO} R.K. Ahuja, T.L. Magnanti, and J.B. Orlin, {\em Network Flows: Theory, Algorithms, and Applications}, Prentice-Hall, Saddle River, New Jersey, 1993. \fi
\bibitem{CCPS} W.J. Cook, W.H. Cunningham, W.R. Pulleyblank, and A.~Schrijver, {\it Combinatorial Optimization}, Wiley and Sons, NY, 1998.
\iffalse \bibitem{CLRS} T.H.~Cormen, C.E.~Leiserson, R.L.~Rivest and C.~Stein, {\em Introduction to Algorithms}, 2nd Ed., McGraw-Hill, NY, 2001. \fi
\bibitem{E} J. Edmonds, ``Maximum matching and a polyhedron with 0,1-vertices'', {\it J.\ Res.\ Nat.\ Bur.\ Standards 69B}, 1965, \pp 125-130.
\iffalse \bibitem{FKN} M. Fujii, T. Kasami, and K. Ninomiya, ``Optimal sequencing of two equivalent processors,'' {\em SIAM J. Appl. Math.} 17, 1969, \pp 784-789. Erratum, V.20, 1971, p.141. \fi
\iffalse \bibitem{G73} H.N. Gabow, ``Implementations of algorithms for maximum matching on nonbipartite graphs'', \phd, Comp. Sci. Dept., Stanford Univ., Stanford, Calif., 1973.
\bibitem{G76} H.N. Gabow, ``An efficient implementation of Edmonds' algorithm for maximum matching on graphs'', \jacm 23, 2, 1976, \pp 221-234. \fi
\iffalse \bibitem{G83} H.N. Gabow, ``An efficient reduction technique for degree-constrained subgraph and bidirected network flow problems'', \stoc 15th, 1983, \pp 448-456.
\bibitem{G85b} H.N. Gabow, ``A scaling algorithm for weighted matching on general graphs'', \focs 26th, 1985, \pp 90-100.
\bibitem{G90} H.N. Gabow, ``Data structures for weighted matching and nearest common ancestors with linking'', \soda 1st, 1990, \pp 434-443. \fi
\iffalse \bibitem{G17} H.N. Gabow, ``A data structure for nearest common ancestors with linking'', in preparation; with applications to weighted matching and union-find
\bibitem{Ger} A.M.H. Gerards, ``Matching'', in {\em Network Models} (M.O. Ball, T.L. Magnanti, C.L. Monma, G.L. Nemhauser, eds.), Elsevier, Amsterdam, 1995, \pp 135-224.
\bibitem{GS13} H.N. Gabow and P.~Sankowski, ``Algebraic algorithms for $b$-matching, shortest undirected paths, and $f$-factors,'' \focs 54th, 2013, \pp 137-146. Revised version, 2016: ``Algorithms for weighted matching generalizations I: Bipartite graphs, $b$-matching, and unweighted $f$-factors''; ``Algorithms for weighted matching generalizations II: $f$-factors and the special case of shortest paths''. \fi
\bibitem{GT85} H.N. Gabow and R.E. Tarjan, ``A linear-time algorithm for a special case of disjoint set union'', \jcss 30, 2, 1985, \pp 209-221.
\bibitem{GT89} H.N. Gabow and R.E. Tarjan, ``Faster scaling algorithms for general graph matching problems'', {\it J.\ ACM} 38, 4, 1991, \pp 815-853.
\bibitem{GK} A.V. Goldberg and A.V. Karzanov, ``Maximum skew-symmetric flows and matchings'', {\it Math.\ Program., Series A} 100, 2004, \pp 537-568.
\bibitem{HK} J.E. Hopcroft and R.M. Karp, ``An $n^{2.5}$ algorithm for maximum matchings in bipartite graphs'', \sicomp 2, 1973, \pp 225-231.
\iffalse \bibitem{KM} T. Kameda and I. Munro,
``An $O(|V|.|E|)$ algorithm for maximum matching of graphs,'' {\em Computing} 12, 1974, \pp 91-98. \fi
\bibitem{K} A.V. Karzanov, ``On finding maximum flows in network with special structure and some applications'', in Russian, {\em Math.\ Problems for Production Control} 5, Moscow State University Press, 1973, \pp 81-94.
\bibitem{L} E.L. Lawler, {\it Combinatorial Optimization: Networks and Matroids}, Holt, Rinehart and Winston, New York, 1976.
\iffalse \bibitem{LP} L. Lov\'asz and M.D. Plummer, {\it Matching Theory}, North-Holland Mathematic Studies 121, North-Holland, New York, 1986. \fi
\bibitem{MV}
S. Micali and V.V. Vazirani, ``An $O(\sqrt {|V|}\cdot |E|)$ algorithm for finding maximum matching in general graphs'', \focs 21st, 1980, \pp 17-27.
\iffalse \bibitem{Pu12} W.R. Pulleyblank, ``Edmonds, matching and the birth of polyhedral combinatorics'', {\em Documenta Mathematica}, 2012, \pp 181-197. \fi
\bibitem{PS} C.H. Papadimitriou and K. Steiglitz, {\it Combinatorial Optimization: Algorithms and Complexity}, Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1982.
\bibitem{S} A.~Schrijver, {\it Combinatorial Optimization: Polyhedra and Efficiency}, Springer, NY, 2003.
\bibitem{V1} V.V.~Vazirani, ``A theory of alternating paths and blossoms for proving correctness of the $O(\sqrt V E)$ general graph maximum matching algorithm'', {\em Combinatorica}, 14, 1, 1994, \pp 71-109.
\bibitem{V2} V.V.~Vazirani, ``A simplification of the MV matching algorithm and its proof'',
arXiv:1210.4594v5, Aug.27, 2013, 32 pages; also ``A proof of the MV matching algorithm'', manuscript, May 13, 2014, 42 pages.
\iffalse \bibitem{T83} R.E. Tarjan, {\it Data Structures and Network Algorithms,} SIAM, Philadelphia, PA., 1983. \fi \end{thebibliography}
\fi
\setcounter{section}{0} \renewcommand{\Alph{section}}{\Alph{section}} \renewcommand{\Alph{section}.\arabic{theorem}}{\Alph{section}.\arabic{theorem}}
\setcounter{equation}{0} \renewcommand{\Alph{section}.\arabic{equation}}{\Alph{section}.\arabic{equation}}
\section{Analysis of {\em find\_ap\_set}} \label{FAPApp} We will consider \sm. to be an ordered tree, where grow steps add the children of an outer vertex in left-to-right order. We also assume \os. inherits this order. (Thus in Fig.\ref{SAPFig}(b) vertex 3 became outer after vertex 2.)
\begin{lemma} \label{RightmostPathLemma} Any outer vertex $r$ that is not completely scanned has $b(r)$ on the rightmost path of \sm.. \end{lemma}
\example {The blossom step of Fig.\ref{SAPFig}(b) makes vertex 8 outer but not completely scanned. 8 is not on the rightmost path of \sm. but $b(r)=3$ is.}
\begin{proof} A simple induction shows that when a vertex $x$ is made outer, every vertex $s\in P(x)$ has $b(s)$ on the rightmost path of \sm.. Recall invariant (I) says whenever {\em find\_ap}$(x)$ scans an edge, $P(x)$ contains every outer vertex $r$ that has not been completely scanned. So $b(r)$ is on the rightmost path of \sm.. \end{proof}
\iffalse We claim the stronger statement: When control has passed to {\em find\_ap}$(x)$, $b(x)$ is on the rightmost path of \sm., and any outer vertex $z$ that is not completely scanned has $b(z)$ an \sm.-ancestor of $b(x)$.
We will show that the next step taken by {\em find\_ap}$(x)$ preserves the claim. A grow step adds the new outer vertex $y'$ to the rightmost path. Clearly it preserves the claim. A blossom step enlarges $B_x$ so it contains every new outer vertex. So it preserves the claim. (Note vertex $y$ need not be on the rightmost path.) Finally suppose {\em find\_ap}$(x)$ returns in line \ref{RLine}.
$x$ is now completely scanned. Let $x'$ be the new $x$. Clearly the claim is preserved if $B_{x'}=B_x$. In the opposite case $x$ is the base of $B_x$, $x'$ is the grandparent of $x$ in \sm., and $B_{x'}$ is the grandparent of $B_x$ in \os.. Since $x$ is the only vertex in $B_x$ that was not completely scanned, the claim is preserved. \end{proof} \fi
\begin{lemma} \label{SRelationsLemma} At any point in the algorithm, consider an edge $rs$ where $r$ is outer and $s\in V(\S.)$. Either $s$ is inner and left of $r$ in \sm., or $b(r)$ and $b(s)$ are related in \sm.. \end{lemma}
\example {Consider edge 8,6 immediately after the blossom step forming Fig.\ref{SAPFig}(b). $b(8)=3$ is related to $b(6)=6$ in \sm.. But 8 itself is not related to 6 in \sm.. Neither is 5, the mate of 8.}
\begin{proof} We will show that every grow and blossom step preserves the lemma. We start with this preliminary observation: Once an outer vertex $r$ has been completely scanned, any adjacent vertex $s$ is in \S..
Consider a grow step. It adds new vertices $y,y'$, with $b(y)=y$, $b(y')=y'$.
\subcase {$r\ne y'$} If $r$ is not completely scanned, Lemma \ref{RightmostPathLemma} implies
$b(r)$ is related to both $b(y)$ and $b(y')$. If $r$ is completely scanned the preliminary observation shows $s$ was in \S. before the grow step.
So $s\ne y,y'$.
We conclude the lemma is preserved.
\subcase {$r=y'$} Since $r$ is the rightmost vertex of \sm., any vertex is either related to $r$ or to the left of $r$. So the lemma holds if $s$ is inner. If $s$ is outer it cannot be left of $r$, since then the preliminary observation shows $r$ was in \S. before the grow step. \iffalse the previous argument applies to show $s$ is not completely scanned. Thus $b(s)$ an ancestor of $r=y'$. \fi
Now consider a blossom step. We consider the possibilities for $r$.
\subcase {$r$ is a vertex whose $b$-value is changed to $b(x)$} (These are the vertices that enter $B_x$.) Any vertex $s$ has $b(s)$ either related to $b(x)$ or left of $b(x)$ ($b(x)$ is on the rightmost path by Lemma \ref{RightmostPathLemma}). So we can assume $b(s)$ is left of $b(x)$. This implies $s$ is also left of $b(x)$. $s$ cannot be outer (as before, $s$ is completely scanned, so $r$ would be in \S. before it becomes a descendant of $b(x)$). So $s$ is inner as desired.
\iffalse
If $b(s)$ is related to $b(x)$ obviously it is related to $b(r)=b(x)$. The lemma applied to $b(x)$ shows the other possibility is $s$ is inner and left of $b(x)$. The assumption on $r$ implies it descends from $b(x)$ in \sm.. So $s$ is inner and left of $r$ in \sm.. \fi
\subcase {$r$ is outer and $b(r)$ is not changed to $b(x)$} We can assume $s$ is a vertex that enters $B_x$. We can further assume $b(r)$ is not related to $b(s)=b(x)$. Thus $b(r)$ is to the left of $b(x)$. As before $r$ is completely scanned, making $s$ in \S. before it becomes a descendant of $b(x)$.
\end{proof}
\iffalse
We apply the lemma to $b(x)$.
If $b(s)$ is related to $b(x)$ obviously it is related to $b(r)=b(x)$. The lemma applied to $b(x)$ shows the other possibility is $s$ is inner and left of $b(x)$. The assumption on $r$ implies it descends from $b(x)$ in \sm.. So $s$ is inner and left of $r$ in \sm..
Suppose $r$ is outer and $b(r)$ is not changed to $b(x)$. If $b(r)$ is an ancestor of $b(x)$ in \sm. then the lemma continues to hold - for every $s$ whose $b$-value has changed, $b(r)$ remains an ancestor of the new $b(s)=b(x)$. If $b(r)$ is not an ancestor, it is to the left of $b(x)$ (Lemma \ref{RightmostPathLemma}). So the lemma shows $r$ is not adjacent to any descendant of $b(x)$, and the changes of the blossom step preserve the lemma. The same holds if $r$ is to the left of $y$. If $r$ is to the right of $y$ it descends from $b(x)$ (Lemma \ref{RightmostPathLemma}). So if $s$ starts as an inner vertex and changes to outer, $b(r)$ descends from the new $b(s)=b(x)$. \end{proof}
\fi \iffalse If $v$ descends from $b(x)$ then so does $b(v)$. So $b(v)$ descends from $b(u)=b(x)$. If $v$ does not descend from $b(x)$ the lemma shows either $b(v)$ is related to $b(x)$ or
\fi
\begin{lemma} \label{InnerDescendantLemma} At any point in the algorithm let $t$ be an inner vertex, whose outer mate $t'$ is completely scanned, and $s$ an inner \sm.-descendant of $t$. A blossom step that makes $s$ outer makes $b(s)=b(t)$. \end{lemma}
\begin{proof}
Let $P$ be the \sm.-path from $t$ to $s$. We prove the lemma by induction on $|P|$.
Among all the inner vertices on $P$, let $u$ be the first to become outer in a blossom step. (If there is more than one choice take $u$ as deep as possible.) Let that blossom step be triggered by edge $xy$ where $b(x)$ is an ancestor of $b(y)$. $u$ is an \sm.-ancestor of $b(y)$ and $b(x)$ is an ancestor of $u$.
Every outer vertex $r$ on $P$ is completely scanned, since $t'$ is. (This follows since $r$ became outer while {\em find\_ap}$(t')$ was executing.) So $b(x)$ is not on $P$. Thus $b(x)$ is a proper ancestor of $t$. Letting $u'$ be the mate of $u$, the blossom step sets \begin{equation} \label{b1biEqn} b(t)=b(u)=b(u'). \end{equation}
If $u=s$ we are done. Otherwise let $v$ be the inner vertex that follows $u$ on $P$. Let $v'$ be its mate. As already mentioned, $v'$ is completely scanned. So the inductive assertion holds for $v$ and $s$. Consider the blossom step that makes $s$ outer. Let $b_1$ denote the $b$ function at the end of this step. The inductive assertion shows \[b_1(v)=b_1(s).\]
Since $v$ was inner, we also have $b_1(v)=b_1(u')$. \eqref{b1biEqn} implies $b_1(t)=b_1(u')$. Combining equations gives $b_1(t)=b_1(s)$. This completes the induction. \end{proof}
\begin{lemma} \label{CompletenessLemma} At any point in the algorithm, let $rs$ be an edge that has been scanned from both its ends. Then $b(r)=b(s)$. \end{lemma}
\begin{proof} Whenever $r$ and $s$ are both outer, $b(r)$ and $b(s)$ are related in \sm. (Lemma \ref{SRelationsLemma}). Let $b(r)$ be an ancestor of $b(s)$ the first time both are outer. Although $b(r)$ and $b(s)$ may change over time, $b(r)$ will always be an ancestor of $b(s)$.
Consider the three possibilities for $s$ when $rs$ is scanned from $r$.
\case {$s$ is not in the search forest} A grow step makes $s$ an inner child of $r$. Eventually $s$ becomes outer in a blossom step. The new blossom has an outer base vertex, so the blossom includes $r$, i.e., $b(r)=b(s)$.
\case {$s$ is outer} Clearly a blossom step is executed, making $b(r)=b(s)$.
\case{$s$ is inner} When $rs$ is scanned from $r$, let $t$ be the first inner vertex on the \sm.-path from $b(r)$ to $s$. When $r$ scans $rs$, $t'=mate(t)$ is completely scanned. Now apply Lemma \ref{InnerDescendantLemma} to $t$ and $s$. The blossom step that makes $s$ outer makes $b_1(t)=b_1(s)$. Since $t$ has become outer, $b_1(t)=b_1(r)$. Thus $b_1(s)=b_1(r)$ as desired. \end{proof}
\iffalse After $r$ scans $rs$ a blossom step makes $s$ outer. The blossom step is executed when an edge $az$ is scanned
\case{$s$ is inner} After $r$ scans $rs$ a blossom step makes $s$ outer. The blossom step is executed when an edge $az$ is scanned from $a$. To make the rest of the argument precise we distinguish between two versions of the $b$ function: $b_r$ refers to values when $r$ scans $rs$, $b_a$ refers to values when $a$ scans $az$. The blossom step implies $b_a(a)$ is an \sm.-ancestor of $s$ and $z$ is an \sm.-descendant of $s$.
$b_a(a)$ is active when $rs$ is scanned. (If not, $b(a)$ and $a$ are to the right of $b_r(x)$. So they are to the right of $z$. But then edge $az$ contradicts Lemma \ref{SRelationsLemma}.) Thus $b_a(a)$ is an ancestor of $b_r(x)$. (Possibly the two vertices are equal.) This shows the blossom step for $az$ updates both $b(s)$ and $b(r)$ to $b_a(a)$. \end{proof} \fi
To implement the algorithm efficiently we change the test for a blossom step, line \ref{TLine}, to the test of the comment. We will show the two tests are equivalent, i.e.,
{$b(y)$ is an outer proper descendant of $b(x)$ in \sm. iff $b(y)$ became outer strictly after $b(x)$.}
To prove the if direction assume $b(x)$ and $b(y)$ are both outer. As blossom bases they both became outer when they were added to \S.. Edge $xy$ makes $b(x)$ and $b(y)$ related (Lemma \ref{SRelationsLemma}). So if $b(y)$ was made outer strictly after $b(x)$ it was added to \S. after $b(x)$, i.e., it descends from $b(x)$. Thus the condition of line \ref{TLine} holds.
The only if direction is obvious, since any vertex is added to \sm. after its ancestors.
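A minimal sketch (our own illustration; the names are invented and this is not the paper's data structure) of how the comment's test can be realized: record a timestamp when each blossom base becomes outer, so the test "$b(y)$ became outer strictly after $b(x)$" reduces to an integer comparison.

```python
# Hypothetical sketch: each vertex records the "time" it became outer.
# The test "b(y) became outer strictly after b(x)" is then a single
# integer comparison, avoiding any ancestor query in the search tree.

class Search:
    def __init__(self):
        self.clock = 0          # global counter of outer-making events
        self.outer_time = {}    # vertex -> time it became outer

    def make_outer(self, v):
        self.clock += 1
        self.outer_time[v] = self.clock

    def blossom_test(self, bx, by):
        # True iff by became outer strictly after bx
        return self.outer_time[by] > self.outer_time[bx]

s = Search()
s.make_outer("bx")
s.make_outer("by")
assert s.blossom_test("bx", "by")
assert not s.blossom_test("by", "bx")
```

Given the $b$-values, this makes the test cost $O(1)$ per scanned edge.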
\section{Searching from the middle} \label{DSHApp} The algorithm of Micali and Vazirani \cite{MV} is based on a ``double depth-first search'': This search begins at an edge $e=uv$. It attempts to complete an augmenting path using disjoint paths from each of $u$ and $v$ to a free vertex. This is done with two coordinated depth-first searches, one starting at $u$, the other at $v$.
The key fact for this approach is a characterization of the starting edge $e$. We will begin by describing the conditions satisfied by $e$, using our terminology. Then we prove that any augmenting path contains such an edge $e$. Then we discuss the implications of this structure, including how the DDFS of \cite{MV} can be used for our Phase 2.
\iffalse Finally we rephrase our result in the terminology of \cite{MV,V1,V2}. This gives a simple alternative proof of Theorem 26 of \cite{V2}, the key fact for the Micali-Vazirani algorithm (as discussed in Section 10, Epilogue of \cite{V2}). \fi
We start with terminology based on the state of the search immediately before the last dual adjustment. Let $T'$ be the set of edges of $G$ that are tight at that time. Let $D_1 \cup D_2$ be the set of edges that become tight in the last dual adjustment, where $D_1$ refers to a grow step and $D_2$ is for a blossom step. So $e\in D_1$ has $y'(e)=\delta$ with one end of $e$ outer and the other not in \S..
$e\in D_2$ has $y'(e)=2\delta$ with both ends of $e$ outer. (Recall $w(e)=0$.) Here $y'$ is the dual function right before the last dual adjustment, and ``outer'' and \S. also refer to that time.
\def\mycenter #1 {\hbox to \hsize{
{#1}
}}
\begin{lemma} Any maximum weight augmenting path can be written as\\
\mycenter{$P_1,Q,P_2$} \newline \noindent where
each $P_i$ is an even length alternating path from a free vertex to an outer vertex, $P_i\con T'$,
$Q$ has the form $(e)$ with $e\in D_2$, or $(g_1,e,g_2)$ with $g_1,g_2\in D_1$.
\end{lemma}
\remark {Clearly $e$ is unmatched in the first form and matched in the second. Neither end of $e$ is in \S. in the matched form.}
\begin{proof} Let $y$ be the final dual function.
The dual adjustment step shows that any free vertex $f$ has $y(f)=y'(f)-\delta$. As mentioned in the proof of Corollary \ref{EdmondsAPCor} an augmenting path $P$ has maximum weight iff $w(P)=2y(f)$. Thus \begin{equation} \label{OldYEqn} w(P)=2y'(f)-2\delta. \end{equation} Furthermore the corollary shows that for any positive blossom $B$, $P\cap \gamma(B)$ is an even length alternating path, so $z(B)$ makes no net contribution to $w(P)$. Thus $P$ contains edges that are not tight in $y'$, in fact these edges belong to $D_1\cup D_2$ and have total slack $2 \delta$.
Suppose $P$ contains an edge $e\in D_2$. Since $y'(e)=2\delta$, $P$ contains exactly 1 such edge. The properties of the lemma for both $P_i$ and $Q$ follow easily.
The other possibility is that $P$ contains exactly two edges $g_1,g_2\in D_1$. Each $g_i$ is unmatched and has an end $v_i\notin \S.$. $P$ must contain a $v_1v_2$-subpath of edges in $T'$. It must consist of just one edge $v_1v_2\in M$, since unmatched edges with no end in \S. are not tight. The properties of the lemma for both $P_i$ and $Q$ follow. \end{proof}
It may not be clear how {\em find\_ap} succeeds in ignorance of this structure. So we take a more detailed look. We start with a simple fact:
\begin{proposition} \label{InnerInnerProp} No edge $uv\in T'$ joins 2 inner vertices. \end{proposition}
\begin{proof} A grow step that makes $u$ inner has $y(u)=1$. Every subsequent dual adjustment increases $y(u)$. So the search ends with $y(u)\ge 1$. If $u$ and $v$ are inner then $uv\notin M$, and $y(u)+y(v)\ge 2> w(uv)=0$. \end{proof}
We now present a more detailed proof of the lemma. Consider the search graph \os. immediately before the last dual adjustment. \os. is a subgraph of $H$.
Define a path form similar to the lemma, as\\ \mycenter{$P,Q,P'$} \newline \noindent where
$P$ is an even length alternating path from a free vertex to an outer vertex of \os., $P\con T'$;
$Q$ has the form of the lemma;
\iffalse either (a) $(e)$ with $e\in D_2$, or (b) $(g_1,e)$ where $g_1\in D_1$ and $e\in M-\os.$; \fi
$P'$ is an odd alternating path whose last edge is matched and last vertex is inner in \os., $P'\con T'$.
Let $A$ be an alternating even-length path in $H$ that starts at a free vertex. We claim that $A$ is a prefix of the above form. Clearly the claim forces {\em find\_ap} to find a path with the structure of the lemma.
We prove the claim inductively. Suppose an even length prefix $A'$ of $A$ ends at vertex $u$, and the next two edges of $A$ are $uv,vv'$
with $uv\notin M \ni vv'$.
If $A'$ has length 0 then $u$ is free. $A'$ has the form $P$.
Suppose $A'$ has form $P$. There are three possibilities:
\subcase {$v$ is inner in \os.} Its mate $v'$ is outer, so form $P$ holds for the longer prefix.
\subcase {$v$ is outer in \os.} $uv$ joins two outer vertices of \os.. Thus $uv\in D_2$. The matched edge $vv'$ joins an outer vertex with an inner, so $v'$ is inner. So the new prefix of $A$ has form $P,Q,P'$ for $P'=(vv')$.
\subcase {$v\notin \os.$} This makes $uv\in D_1$. The new prefix has the form $P,g_1,e$ with $g_1,e$ as in $Q$.
Now suppose $A'$ has form $P,g_1,e$ with $g_1,e$ as in $Q$. Since $e\notin \os.$ and $uv$ is tight, $uv\in D_2$. So $v$ is outer. Thus $v'$ is inner. The new prefix has form $P,Q,P'$ ($P'=(vv')$).
Finally suppose $A'$ has form $P,Q,P'$.
No end of an edge of $D_1\cup D_2$ is inner. Since $u$ is inner this makes $uv\in T'$. Also $u$ inner makes $v$ outer (Proposition \ref{InnerInnerProp}). The new prefix ends with edge $vv'\in M$ and $v'$ inner. Thus it has form $P,Q,P'$. The induction is complete.
The lemma opens up the possibility of having a Phase 2 search start from an edge $e$ of type $Q$. DDFS uses this strategy.
The Micali-Vazirani algorithm uses DDFS in Phase 1 as well. This depends on the fact that blossoms have a starting edge $e$ similar to the lemma. (More precisely suppose a blossom step creates a blossom $B$ with base $b$, with $v\in B$ a new outer vertex. Then $P(v,b)$ contains a unique subpath of form $Q$ of the lemma.
This is easily proved as above, e.g., use the second argument, traversing the path $P(v,b)$ starting from $b$.)
Using DDFS in both Phases 1 and 2 makes the Micali-Vazirani algorithm elegant and avoids any overhead in transitioning to Phase 2.
The proof of \cite{V2} that DDFS is correct is involved. Possibly it could be simplified using the lemmas we have presented, as well as other structural properties that weighted matching makes clear. The following aspects of the finer structure of $H$ are not needed for our development but are used in \cite{V2}.
\cite{V2} defines {\em evenlevel}$(x)$ as the length of a shortest even alternating path from a free vertex to $x$. The proof of Lemma \ref{EdmondsAPLemma} shows any even alternating $fx$-path has length
$\ge y(x)-y(f)$. Furthermore it shows that an outer vertex $x$ has {\em evenlevel}$(x)=y(x)-y(f)=|P(x)|$.
{\em oddlevel}$(x)$, the length of a shortest odd alternating path from a free vertex to $x$, has a similar characterization, e.g., any odd alternating $fx$-path has length $\ge 1-y(x)-y(f)+\sum z(B)$, where the sum extends over blossoms $B$ with base vertex $b$ and $x\in B-b$.
Finally \cite{V2} divides the edges of $H$ into {\em bridges} and {\em props}. This is due to the fact that an edge $e$ of form $Q$ can trigger an initial blossom step, which can be followed by blossom steps triggered by unmatched edges of $T'$. $e$ is a bridge and the other triggers are props. (In the precise blossom structure stated above, $e$ is the $Q$ edge and the prop triggers are in $T'$.)
\end{document}
\begin{document}
\begin{abstract}
A smooth geometrically connected curve over the finite field $\FF_q$ with
gonality $\gamma$ has at most ${\gamma(q+1)}$ rational points. The
first author and Grantham conjectured that there exist curves of every
sufficiently large genus with gonality $\gamma$ that achieve this bound. In
this paper, we show that this bound can be achieved for an infinite sequence
of genera using abelian covers of the projective line. We also argue that
abelian covers will not suffice to prove the full conjecture. \end{abstract}
\title{On abelian covers of the projective line with fixed gonality and many rational points}
\section{Introduction} Throughout, we use the unqualified term ``curve'' to mean a smooth proper geometrically connected scheme of dimension~$1$ over a field. Given a curve $C$ over a finite field $\FF_q$, the existence of a morphism $f \colon C \to \PP^1$ provides a bound on the number of points of $C$. Indeed, every rational point of $C$ maps to one of the rational points of $\PP^1$, and each fiber contains at most $\deg(f)$ points: \begin{equation}
\label{eq:morphism_bound}
\#C(\FF_q) \le \deg(f) \; (q+1). \end{equation} The minimum degree of a morphism to $\PP^1$ is known as the \textbf{gonality} of~$C$.
Write $N_q(g,\gamma)$ for the maximum number of points on a curve $C_{/\FF_q}$ with genus $g$ and gonality $\gamma$; we set this quantity to $-\infty$ if no such curve exists. From~\eqref{eq:morphism_bound}, we know that $N_q(g,\gamma) \le \gamma (q+1)$. In \cite{Faber_Grantham_GF2,Faber_Grantham_GF34}, the first author and Grantham gave many examples of curves where this bound is achieved, which led to the following conjecture:
\begin{conjecture}
\label{conj:large_genus}
Fix a prime power $q$ and an integer $\gamma \ge 2$. For $g$ sufficiently
large, we have
\[
N_q(g,\gamma) = \gamma(q+1).
\] \end{conjecture}
The implicit challenge in the conjecture is to construct interesting families of curves. The first author and Grantham wrote down explicit hyperelliptic equations to handle the case $\gamma = 2$ \cite{Faber_Grantham_GF34}. The second author established the conjecture for $q$ odd by constructing singular curves on toric surfaces \cite{vermeulen2022curves}. The proof uses a squarefree-discriminant trick to ensure that the singular curves are smooth above non-rational points of $\PP^1$; in characteristic~$2$, discriminants are never squarefree, so one must find a different approach to control singularities. The goal of this note is to investigate how much can be proved using only abelian covers of the projective line, especially in the remaining open case where $q$ is even. Recall that an \textbf{abelian cover} of curves $X \to Y$ is a nonconstant morphism for which the corresponding extension of function fields $\kappa(X) / \kappa(Y)$ is Galois with abelian group of automorphisms. See \cite[\S5.12]{Serre_Rational_Points_Book} and \cite[\S6.3]{TVN_advanced} for summaries of geometric class field theory; see \cite[Ch.~V]{milneCFT} and \cite{Serre_Algebraic_Groups} for more robust treatments.
\begin{theorem}
\label{thm:main}
Fix a prime power $q$ and an integer $\gamma \ge 2$. There exists an infinite
sequence of distinct positive integers $(g_i)$ and abelian covers $C_i \to
\PP^1$ of degree $\gamma$ such that $C_i$ has genus $g_i$ and $\gamma(q+1)$
rational points. In particular,
\[
\limsup_{g \to \infty} N_q(g,\gamma) = \gamma(q+1).
\] \end{theorem}
Unfortunately, abelian covers are not sufficient to replace the limsup with a proper limit. For example, consider the case of covers of $\PP^1$ of prime degree $\gamma \equiv 1 \pmod{4}$, where $\gamma$ does not divide $q$. If $C \to \PP^1$ is abelian of degree~$\gamma$, then all of the ramification indices are equal to~$1$ or~$\gamma$. Let $m$ be the number of ramified points of $C$. The Hurwitz formula shows the genus of~$C$ is $\frac{\gamma-1}{2}(m-2)$. This expression is always even, so abelian covers of degree~$\gamma$ will never have anything to say about curves of odd genus.
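For completeness, here is the Riemann--Hurwitz computation behind the genus formula just quoted (the cover is tame since $\gamma$ is prime to the characteristic, and each of the $m$ ramified points is totally ramified):
\[ 2g - 2 = \gamma\,(2\cdot 0 - 2) + m(\gamma-1) = -2\gamma + m(\gamma-1), \qquad\text{so}\qquad g = \frac{\gamma-1}{2}\,(m-2). \]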
Roughly speaking, there are three main approaches in the literature to constructing curves of large genus with many rational points: \begin{itemize}
\item Modular curves, especially in towers;
\item Class field theory constructions; and
\item \textit{Ad hoc} methods for special families. \end{itemize} Modular methods typically produce curves whose genus and gonality grow together, which is not helpful for Conjecture~\ref{conj:large_genus}. As indicated in the preceding paragraph, methods of class field theory do not produce enough curves to address the full scope of the conjecture. This leaves \textit{ad hoc} constructions. One particularly powerful construction is that of curves on toric surfaces; this was first introduced for the study of rational points in \cite{KWZ} and later used to prove Conjecture~\ref{conj:large_genus} for odd $q$ in \cite{vermeulen2022curves}. See \cite[p.vii]{Serre_Rational_Points_Book} and \cite[p.1]{TVN_advanced} for more philosophical exposition on constructing curves over finite fields.
\noindent \textbf{Acknowledgments.} We thank Congling Qiu for spotting an omission in the hypotheses of Lemma~\ref{lem:branch} in the version of this paper that appeared in the \textit{International Journal of Number Theory} in 2022.
\section{Proof of Theorem~\ref{thm:main}}
In our discussion below, we will identify $\PP^1$ with $\Spec \FF_q[t] \cup \{\infty\}$. Recall that the \textbf{branch locus} of a finite morphism $f \colon X \to Y$ is the minimal closed subset $B \subset Y$ such that the induced morphism $X \smallsetminus f^{-1}(B) \to Y \smallsetminus B$ is \'etale. Equivalently, the branch locus is the support of the discriminant of the induced extension of function fields $\kappa(X) / \kappa(Y)$.
The construction of our sequence of curves involves three steps: \begin{itemize}
\item Choose a morphism $h \colon \PP^1 \to \PP^1$ that maps every point of
$\PP^1(\FF_q)$ to $\infty$.
\item Construct a sequence of morphisms $f_i \colon X_i \to \PP^1$ of degree
$\gamma$ such that $\infty$ splits completely in each $X_i$ and the genera
of the $X_i$ tend to infinity.
\item Take the fiber product of $f_i$ and $h$ to obtain a new covering of
curves $C_i := X_i \times_{\PP^1} \PP^1 \to \PP^1$ of degree~$\gamma$ with the
correct number of rational points. \end{itemize} We must take care in choosing the branch loci of the morphisms $f_i$ in order to have control over the genus of the curve $C_i$.
The construction of $h$ is explicit. Let $a \in \FF_q[t]$ be an irreducible polynomial of degree $q+1$ and define \[
h(t) = \frac{a(t)}{t^q - t}. \] Then $h$ has degree $q+1$, and each point of $\PP^1(\FF_q)$ maps to $\infty$.
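This property can be verified directly over a small field. A minimal sketch for $q = 2$, taking $a(t) = t^3 + t + 1$ (one concrete irreducible cubic over $\FF_2$; this particular choice of $a$ is ours, any irreducible polynomial of degree $q+1$ works):

```python
q = 2
a = lambda t: (t**3 + t + 1) % q     # irreducible over F_2 of degree q + 1 = 3
denom = lambda t: (t**q - t) % q     # vanishes at every element of F_q

for x in range(q):                   # the points of A^1(F_2)
    assert denom(x) == 0             # denominator vanishes at x, ...
    assert a(x) != 0                 # ... numerator does not, so h(x) = infinity
# The point at infinity also maps to infinity, since deg(a) > deg(t^q - t).
```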
\begin{lemma}
\label{lem:abelian}
Fix $\gamma \ge 1$ and a proper closed subset $S \subset \PP^1$. There exists
an abelian cover $f \colon X \to \PP^1$ of degree $\gamma$ such that the point
$\infty$ splits completely in $X$, and such that the branch locus of $f$ is
supported on a closed point outside $S$. \end{lemma}
\begin{proof}
Define a modulus $\fm = e[P]$, where $P$ is a yet-to-be-determined closed
point of $\PP^1$ and $e \ge 1$ is an integer. By \cite[Thm.~V.1.7]{milneCFT},
the order of the ray class group $\Cl_{\fm}^0 (\PP^1)$ is given by
\[
\#\Cl_{\fm}^0(\PP^1) = \frac{q^{\deg(P)} - 1}{q-1}
q^{\deg(P)(e-1)}.
\]
This quantity is divisible by $\gamma$ for infinitely many closed points $P$
and choices of $e \ge 1$. In particular, we may choose $P$ and $e$ so that
$\gamma$ divides $\#\Cl_{\fm}^0(\PP^1)$ and so that $P \not\in
S \cup \{\infty\}$.
Let $G$ be a quotient of $\Cl_{\fm}^0(\PP^1)$ of order $\gamma$. Then the map
$D \mapsto D - \deg(D) [\infty]$ followed by projection to $G$ gives a
surjective homomorphism ${\alpha \colon \Cl_{\fm}(\PP^1) \to G}$. We now
invoke the existence theorem of geometric class field theory
\cite[p.124]{Serre_Rational_Points_Book} to obtain a curve $X_{/\FF_q}$ and a
Galois covering $X \to \PP^1$ with group $G$ whose branch locus is
supported on $P$, and whose Frobenius element for $Q \ne P$ is given by
$\alpha([Q])$. By construction, we have $\alpha([\infty]) = 1$, so that
$\infty$ splits completely in~$X$. \end{proof}
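The divisibility argument in the proof above can be illustrated numerically; the following sketch evaluates the displayed class-number formula (function name ours):

```python
def ray_class_number(q, deg_P, e):
    """Order of the degree-zero ray class group of P^1 over F_q with
    modulus e*[P], where P is a closed point of degree deg_P."""
    return (q**deg_P - 1) // (q - 1) * q**(deg_P * (e - 1))

# For q = 2 and gamma = 5, the closed point of degree 4 with e = 1
# already gives an order divisible by gamma.
assert ray_class_number(2, 4, 1) == 15
assert ray_class_number(2, 4, 1) % 5 == 0
```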
For a curve $C$, write $g(C)$ for its genus.
\begin{lemma}
\label{lem:branch}
Let $f \colon Y \to X$ and $h \colon Z \to X$ be morphisms of curves over a
field $k$ whose branch loci in $X$ are disjoint. Then the fiber product $Y
\times_X Z$ is a smooth $k$-variety of dimension 1. If it is geometrically
irreducible, then it has genus
\[
d_h g(Y) + d_f g(Z) - d_f d_h g(X) + (d_f - 1)(d_h - 1),
\]
where $d_f = \deg(f)$ and $d_h = \deg(h)$. \end{lemma}
\begin{proof}
Write $W = Y \times_X Z$ for ease of notation. The following diagram describes
our situation:
\[
\begin{tikzcd}
W \ar[r,"\hat f"] \ar[d,"\hat h"] & Z \ar[d,"h"] \\
Y \ar[r,"f"] & X
\end{tikzcd}
\]
Let $B_f$ and $B_h$ be the branch loci of $f$ and $h$, respectively. The
base extension of a smooth morphism is smooth, so $\hat f$ is smooth above $Z
\smallsetminus h^{-1}(B_f)$. Similarly, $\hat h$ is smooth above $Y
\smallsetminus f^{-1}(B_h)$. Since the branch loci of $f$ and $h$ are
disjoint, we conclude that $W$ is smooth over $k$.
For the genus formula, write $R_Y$, $R_Z$, and $R_W$ for the ramification
divisors of the morphisms $f$, $h$, and $f \hat h = h \hat f$,
respectively. Since $f$ and $h$ have disjoint branch loci, we learn that
\[
R_W = \hat f^* R_Z + \hat h^* R_Y.
\]
Taking degrees gives
\[
\deg(R_W) = \deg(f) \deg(R_Z) + \deg(h) \deg(R_Y).
\]
The Hurwitz formulae for the morphisms $f \colon Y \to X$, $h \colon Z
\to X$, and $f \hat h \colon W \to X$ show that
\begin{align*}
\deg(R_Y) &= 2g(Y) - 2 - \deg(f) \left(2g(X) - 2\right) \\
\deg(R_Z) &= 2g(Z) - 2 - \deg(h) \left(2g(X) - 2\right) \\
\deg(R_W) &= 2g(W) - 2 - \deg(f) \deg(h) \left(2g(X) - 2\right).
\end{align*}
Solving for $g(W)$ in this system of four equations gives the desired result. \end{proof}
Now we complete the proof of Theorem~\ref{thm:main}. Let $B_0$ be the branch locus of $h \colon \PP^1 \to \PP^1$. Apply Lemma~\ref{lem:abelian} to construct an abelian cover $f_0 \colon X_0 \to \PP^1$ of degree $\gamma$ such that $\infty$ splits completely in $X_0$, and such that the branch locus of $f_0$ is disjoint from $B_0$. Next we iteratively apply Lemma~\ref{lem:abelian} to construct a sequence of abelian covers $f_i \colon X_i \to \PP^1$ of degree $\gamma$ and an increasing sequence of closed subsets $B_0 \subsetneq B_1 \subsetneq B_2 \subsetneq \cdots \subsetneq \PP^1$ such that \begin{itemize}
\item $\infty$ splits completely in $X_i$;
\item the branch locus of $f_i$ is a closed point disjoint from $B_i$; and
\item the set $B_{i+1}$ is the union of $B_i$ and the branch locus of $f_i$. \end{itemize} Since the branch locus of $f_i$ is disjoint from the branch locus of $h$, we may apply Lemma~\ref{lem:branch} to see that $C_i = X_i \times_{\PP^1} \PP^1$ is a smooth 1-dimensional variety over $\FF_q$.
We argue next that $C_i$ is geometrically connected. Note that $C_i$ is a closed subscheme of $X_i \times \PP^1$. The intersection theory of ruled surfaces \cite[V.2.3]{Hartshorne_Bible} shows that the group of numerical equivalence classes of $X_i \times \PP^1$ is generated by $e_1 = X_i \times \{\text{pt}\}$ and $e_2 = \{\text{pt}\} \times \PP^1$, and these two curves satisfy $e_1^2 = e_2^2 = 0$ and ${e_1.e_2 = 1}$. Since $C_i$ is a fiber product, each of its geometric components dominates $X_i$ and $\PP^1$ for the respective projection maps. Consequently, any geometric component is numerically equivalent to $ae_1 + be_2$ for some $a,b > 0$. If $C_i$ had two distinct geometric components, then those components would have nonzero intersection; any intersection point between components would be a singular point of $C_i$, contradicting smoothness. It follows that $C_i$ is geometrically connected.
Now we argue that the genus of $C_i$ tends to infinity with $i$. Lemma~\ref{lem:branch} shows that $C_i$ has genus \[
g(C_i) = (q+1) g(X_i) + q(\gamma-1). \] The branch locus of $f_i \colon X_i \to \PP^1$ is supported on a closed point that is disjoint from the branch loci of all $f_j$ with $j < i$. As there are only finitely many closed points of a given degree, the degree of this branch locus must tend to infinity with $i$. By the Hurwitz formula, $g(X_i) \to \infty$ with $i$, and hence the genus of $C_i$ must also tend to infinity with $i$.
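As a sanity check, the genus formula of Lemma~\ref{lem:branch} does specialize to the displayed expression; a quick numerical verification (function name ours):

```python
def fiber_product_genus(gY, gZ, gX, d_f, d_h):
    # Genus formula of the fiber-product lemma for W = Y x_X Z.
    return d_h * gY + d_f * gZ - d_f * d_h * gX + (d_f - 1) * (d_h - 1)

# Specialize to Y = X_i (genus g), Z = X = P^1 (genus 0),
# d_f = gamma and d_h = q + 1.
for q in (2, 3, 4):
    for gamma in (2, 3, 5):
        for g in range(6):
            assert fiber_product_genus(g, 0, 0, gamma, q + 1) \
                == (q + 1) * g + q * (gamma - 1)
```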
Finally, we show that $C_i$ has $\gamma(q+1)$ rational points and gonality $\gamma$. Consider the following fiber product diagram:
\[
\begin{tikzcd}
C_i \ar[r,"\hat f_i"] \ar[d,"\hat h"] & \PP^1 \ar[d,"h"] \\
X_i \ar[r,"f_i"] & \PP^1
\end{tikzcd}
\] Note first that $\hat f_i$ has the same degree as $f_i$, namely $\gamma$; thus $C_i$ has gonality at most $\gamma$. Since $\infty$ splits completely for the map $f_i \colon X_i \to \PP^1$, every pre-image of $\infty$ under $h$ splits completely in $C_i$ under the map $\hat f_i$ \cite[Prop.~3.9.6b]{Stichtenoth_book}. In particular, since $h$ maps all rational points of $\PP^1$ to $\infty$, we see that $\hat f_i^{-1}(x)$ consists of $\gamma$ rational points for each $x \in \PP^1(\FF_q)$. Thus $\#C_i(\FF_q) \ge \gamma(q+1)$. By \eqref{eq:morphism_bound}, $C_i$ must have gonality at least $\gamma$. We conclude that $C_i$ has gonality $\gamma$ and exactly $\gamma (q+1)$ rational points, and the proof of Theorem~\ref{thm:main} is complete.
\end{document}
\begin{document}
\title{Reducing and monitoring round-off error propagation for symplectic implicit Runge-Kutta schemes }
\author{Mikel Anto\~nana, Joseba Makazaga, Ander Murua \\ KZAA saila, Informatika Fakultatea, UPV/EHU\\ Donostia / San Sebasti\'an}
\date{ }
\maketitle
\begin{abstract} We propose an implementation of symplectic implicit Runge-Kutta schemes for highly accurate numerical integration of non-stiff Hamiltonian systems based on fixed point iteration. Provided that the computations are done in a given floating point arithmetic, the precision of the results is limited by round-off error propagation. We claim that our implementation with fixed point iteration is near-optimal with respect to round-off error propagation under the assumption that the function that evaluates the right-hand side of the differential equations is implemented with machine numbers (of the prescribed floating point arithmetic) as input and output. In addition, we present a simple procedure to estimate the round-off error propagation by means of a slightly less precise second numerical integration. Some numerical experiments are reported to illustrate the round-off error propagation properties of the proposed implementation.
\end{abstract}
\section{Introduction} \label{sec:intro}
When numerically integrating an autonomous Hamiltonian system, one typically monitors the error in the preservation of the Hamiltonian function to check the precision of the numerical solution. However, severe loss of precision can actually occur for sufficiently long integration intervals even while the numerical solution displays good preservation of the value of the Hamiltonian function. For high precision numerical integrations, where round-off errors may dominate truncation errors, it is highly desirable both to reduce and to monitor the propagation of round-off errors.
We propose an implementation of symplectic implicit Runge-Kutta schemes (such as RK collocation methods with Gaussian nodes) that takes special care in reducing the propagation of round-off errors. Our implementation is intended to be applied to non-stiff problems, which motivates us to solve the implicit equations by fixed-point iteration (see for instance~\cite{JMSanz-Serna1994}\cite{Hairer2006} for numerical tests comparing the efficiency of implementations based on fixed point iterations and simplified Newton).
We work under the assumption that the (user defined) function that evaluates the right-hand side of the differential equations is implemented in such a way that input and output arguments are machine numbers of some prescribed floating point arithmetic. Our actual implementation includes the option of computing, in addition to the numerical solution, an estimation of the propagated round-off error.
The starting point of our implementation is the work of Hairer et al.~\cite{Hairer2008}. There, the authors observe that a standard fixed-point implementation of symplectic implicit RK (applied with compensated summation~\cite{Higham2002}) exhibits an unexpected systematic error in energy due to round-off errors, not observed in explicit symplectic methods. They make the following observations that allow them to understand this unfavorable error behavior: (a) The implicit Runge-Kutta method whose coefficients $\tilde b_i,\tilde a_{i j}$ are the floating-point representations of the coefficients $b_i,a_{i j}$ of a symplectic Runge-Kutta method is not symplectic; (b) The error due to the application at each step of a fixed point iteration with a standard stopping criterion (depending on a prescribed tolerance of the iteration error) tends to have a systematic character. Motivated by these observations, they modify the standard implementation of the fixed point iteration, which allows them to reduce the effect of round-off errors. No systematic error in energy is observed in the numerical experiments reported in~\cite{Hairer2008}. However, we observe in some numerical experiments that the stopping criterion for the fixed point iteration that they propose fails to work properly in some cases. In addition, we claim that their implementation is still not optimal with respect to round-off error propagation.
In Section~3, we propose alternative modifications of the standard fixed point implementation of symplectic implicit Runge-Kutta methods, which compare very favorably with that proposed in~\cite{Hairer2008}.
We first define a reference implementation with fixed point iteration where all the arithmetic operations other than the evaluation of the right-hand side of the system of differential equations are performed in exact arithmetic, and as many iterations as needed are performed in each step. Such an implementation, that we call FPIEA (Fixed Point Iteration with Exact Arithmetic) implementation, is based on the following two modifications to the standard implementation with fixed point iterations: (i) On the one hand, we reformulate each symplectic implicit Runge-Kutta method in such a way that its coefficients can be approximated by machine numbers while still keeping its symplectic character exactly (Subsection~3.1). (ii) On the other hand, we propose a modification of the stopping criterion introduced in~\cite{Hairer2008} that is more robust and is independent of the chosen norm (Subsection~3.2).
The implementation we present here is based on the FPIEA implementation, with most multiplications and additions performed (for efficiency reasons) in the prescribed floating point arithmetic, but some of the operations performed with special care in order to reduce the effect of round-off errors. In particular, this includes a somewhat non-standard application of Kahan's "compensated summation" algorithm~\cite{Kahan1965}\cite{Higham2002}\cite{Muller2009}, described in detail in Subsection~3.3.
Finally, in Subsection~3.4, we present a simple procedure to estimate the round-off error propagation as the difference of the actual numerical solution, and a slightly less precise second numerical solution. These two numerical solutions can be computed either in parallel, or sequentially with a lower computational cost than two integrations executed in completely independent way.
In Section~4, we show some numerical experiments to assess the performance of our final implementation. Some concluding remarks are presented in Section~5.
\section{Preliminaries}
\subsection{Numerical integration of ODEs by symplectic IRK schemes}
We are mainly interested in the application of symplectic implicit Runge-Kutta (IRK) methods for the numerical integration of Hamiltonian systems of the form \begin{equation} \label{eq:Hamsyst} \frac{d}{dt}q^j = \frac{\partial H(p,q)}{\partial p^j}, \quad \frac{d}{dt}p^j = -\frac{\partial H(p,q)}{\partial q^j}, \quad j=1,\ldots,d, \end{equation}
where $H:\mathbb{R}^{2d} \to \mathbb{R}$. Recall that the Hamiltonian function $H(q,p)$ is a conserved quantity of the system.
More generally, we consider initial value problems of systems of autonomous ODEs of the form \begin{equation} \label{eq:ivp} \frac{d}{dt}y=f(y),\quad y(t_0)=y_0, \end{equation} where $f: \mathbb{R}^D\to \mathbb{R}^D$ is a sufficiently smooth map and $y_0 \in \mathbb{R}^D$. In the case of the Hamiltonian system (\ref{eq:Hamsyst}), $D=2d$, $y=(q^1,\ldots,q^d,p^1,\ldots,p^d)$.
For the system of differential equations (\ref{eq:ivp}), an $s$-stage implicit Runge-Kutta method is determined by an integer $s$ and the real coefficients $a_{ij}$ ($1 \leqslant i, j \leqslant s$), $b_{i}$ ($1\leqslant i \leqslant s$). The approximations $y_{n} \approx y(t_n)$ to the solution $y(t)$ of (\ref{eq:ivp}) at $t=t_{n}=t_0 + n h$ for $n=1,2,3,\ldots$ are computed as \begin{equation} \label{eq:yz} y_{n+1}=y_n+h\sum^s_{i=1} b_i \, f(Y_{n,i}), \end{equation}
where the {\em stage vectors} $Y_{n,i}$ are implicitly defined at each step by \begin{equation} \label{eq:Y} Y_{n,i} =y_n+ h \sum^s_{j=1}{a_{ij}\,f(Y_{n,j})}, \quad i=1 ,\ldots, s. \end{equation}
An IRK scheme is symplectic if and only if~\cite{JMSanz-Serna1994}
\begin{equation} \label{eq:sympl_cond_1} b_{i}a_{ij}+b_{j}a_{ji}-b_{i}b_{j}=0, \ \ 1 \leqslant i,j \leqslant s. \end{equation}
In that case, the IRK scheme conserves exactly all quadratic first integrals of the original system (\ref{eq:ivp}), and if the system is Hamiltonian, under certain assumptions~\cite{Hairer2006}, it approximately conserves the value of the Hamiltonian function $H(y)$ over long time intervals.
\subsection{Floating point version of an IRK integrator}
Let $\mathbb{F} \subset \mathbb{R}$ be the set of machine numbers of a prescribed floating point system. Let $\mathrm{fl}:\mathbb{R} \longrightarrow \mathbb{F}$ be a map that sends each real number $x$ to a nearest machine number $\mathrm{fl}(x) \in \mathbb{F}$.
We assume that instead of the original map $f: \mathbb{R}^D \to \mathbb{R}^D$, we have a computational substitute \begin{equation} \label{eq:tildef} \tilde f: \mathbb{F}^D\to \mathbb{F}^D. \end{equation}
Ideally, for each $x \in \mathbb{F}^D$, $\tilde f(x):= \mathrm{fl}(f(x))$. In practice, the intermediate computations to evaluate $\tilde f$ are typically made using the floating point arithmetic corresponding to $\mathbb{F}$, which will result in some error $||\tilde f(x) - \mathrm{fl}(f(x))||$ caused by the accumulated effect of several round-off errors.
We aim at efficiently implementing a given symplectic IRK scheme under the assumption that $f: \mathbb{R}^D \to \mathbb{R}^D$ is replaced by (\ref{eq:tildef}). Hence, the effect of round-off errors will be present even in the best possible ideal implementation where exact arithmetic were used for all the computations except for the evaluations of the map (\ref{eq:tildef}). Our goal is to implement the IRK scheme working at the prescribed floating point arithmetic, in such a way that the effect of round-off errors is similar in nature and relatively close in magnitude to that of such ideal implementation.
\subsection{Kahan's compensated summation algorithm}
Obtaining the numerical approximation $y_{n} \approx y(t_{n})$, ($n=1,2,\ldots$) to the solution $y(t)$ of the initial value problem (\ref{eq:ivp}) defined by (\ref{eq:yz})--(\ref{eq:Y}) requires computing the sums \begin{equation} \label{eq:sumy_n} y_{n+1} = y_{n} + x_n, \quad n=0,1,2,\ldots, \end{equation}
where \begin{equation*} x_n = h\sum^s_{i=1} b_i \, f(Y_{n,i}). \end{equation*}
For an actual implementation that only uses a floating point arithmetic with machine numbers in $\mathbb{F}$, special care must be taken with the additions (\ref{eq:sumy_n}). It is worth mentioning that for sufficiently small step-length $h$, the components of $x_n$ are smaller in size than those of $y_{n}$ (provided that the components of the solution $y(t)$ of (\ref{eq:ivp}) remain away from zero). The naive recursive algorithm $\hat y_{n+1} :=\mathrm{fl}(\hat y_{n} + \mathrm{fl}(x_n))$, ($n=0,1,2,3\ldots$), typically suffers, for large $n$, a significant loss of precision due to round-off errors. It is well known that such a round-off error accumulation can be greatly reduced with the use of Kahan's compensated summation algorithm~\cite{Kahan1965} (see also~\cite{Higham2002},~\cite{Muller2009}).
Given a sequence $\{y_0,x_0,x_1,\ldots,x_n,\ldots\} \subset \mathbb{F}$ of machine numbers, Kahan's algorithm aims to compute the sums $y_n = y_0 + \sum_{\ell=0}^{n-1} x_{\ell}$ ($n\geqslant 1$) using a prescribed floating point arithmetic, more precisely than with the naive recursive algorithm. In Kahan's algorithm, machine numbers $\tilde y_n$ representing the sums $y_n$ are computed along with additional machine numbers $e_n$ intended to capture the error $y_n-\tilde y_n$. The actual algorithm reads as follows:
\begin{algorithm}[H]
\BlankLine
$\tilde y_0= y_0; \ e_0=0$\;
\BlankLine
\For{$l\leftarrow 0$ \KwTo $n$}
{
\BlankLine
$X_l = \mathrm{fl}(x_l + e_{l})$\;
$\tilde y_{l+1} = \mathrm{fl}(\tilde y_{l} + X_{l})$\;
$\hat X_{l} = \mathrm{fl}(\tilde y_{l+1} - \tilde y_{l})$\;
$e_{l+1} = \mathrm{fl}(X_{l} - \hat X_{l})$\;
\BlankLine
}
\caption{Kahan’s compensated summation.}
\label{alg:Kahan'sCS}
\end{algorithm}
The sums $\tilde y_l + e_l$ (which in general do not belong to $\mathbb{F}$) are more precise approximations of the exact sums $y_l$ than $\tilde y_l \in \mathbb{F}$. In this sense, if $y_0 \not \in \mathbb{F}$, the algorithm (\ref{alg:Kahan'sCS}) should be initialized as $\tilde y_0:= \mathrm{fl}(y_0)$ and $e_0:=\mathrm{fl}(y_0-\tilde y_0)$ (rather than $e_0=0$).
Of course, algorithm (\ref{alg:Kahan'sCS}) also makes sense for $D$-vectors of machine numbers $\tilde y_0, e_0,x_0,x_1,\ldots,x_n \in \mathbb{F}^D$. In this setting, algorithm (\ref{alg:Kahan'sCS}) can be interpreted as a family of maps parametrized by $n$ and $D$, \begin{equation*} S_{n,D}:\mathbb{F}^{(n+3)D} \to \mathbb{F}^{2D}, \end{equation*} that given the arguments $\tilde y_0,e_0,x_0,x_1,\ldots,x_n \in \mathbb{F}^D$, returns $\tilde y_{n+1}, e_{n+1} \in \mathbb{F}^D$ such that $\tilde y_{n+1} + e_{n+1}$ is intended to represent the sum $(\tilde y_0 + e_0) + x_0 +x_1 + \cdots + x_n$ (with some small error).
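Algorithm~\ref{alg:Kahan'sCS} translates directly into Python (a sketch; Python floats are IEEE-754 doubles, and the test problem below is our own):

```python
import math

def kahan_sum(y0, xs):
    """Return (y, e) with y + e representing y0 + sum(xs) more accurately
    than y alone; follows Algorithm 1 line by line."""
    y, e = y0, 0.0
    for x in xs:
        X = x + e          # X_l = fl(x_l + e_l)
        y_new = y + X      # y_{l+1} = fl(y_l + X_l)
        X_hat = y_new - y  # part of X actually absorbed into the sum
        e = X - X_hat      # part lost to rounding, carried to the next term
        y = y_new
    return y, e

xs = [1e-8] * 10**6
naive = 1.0
for x in xs:               # naive recursive summation for comparison
    naive += x
y, e = kahan_sum(1.0, xs)
exact = math.fsum([1.0] + xs)            # correctly rounded reference sum
assert abs((y + e) - exact) <= abs(naive - exact)
```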
\section{Proposed implementation of symplectic IRK schemes} \label{sec:2}
\subsection{Symplectic schemes with machine number coefficients} \label{ss:3.1}
If the coefficients $b_i,a_{i j}$ determining a symplectic IRK are replaced by machine numbers $\tilde b_i,\tilde a_{i j} \in \mathbb{F}$ that approximate them (say, $\tilde b_i := \mathrm{fl}(b_i)$, $\tilde a_{i j} := \mathrm{fl}(a_{i j})$), then the resulting IRK scheme typically fails to satisfy the symplecticity conditions (\ref{eq:sympl_cond_1}). This results in a method that does not conserve quadratic first integrals and exhibits a linear drift in the value of the Hamiltonian function~\cite{Hairer2008}.
Motivated by that, we recast the definition of a step of the IRK method as follows: \begin{align} \label{eq:YL} Y_{n,i} &=y_n+ \sum^s_{j=1}{\mu_{ij}\,L_{n,j}, \quad L_{n,i} = h b_i f(Y_{n,i})}, \quad i=1 ,\ldots, s, \\ \label{eq:y} y_{n+1} &=y_n+\sum^s_{i=1} L_{n,i}, \end{align}
where
\begin{equation*} \mu_{ij}=a_{ij}/b_j, \quad 1 \leqslant i,j \leqslant s. \end{equation*}
Condition (\ref{eq:sympl_cond_1}) now becomes,
\begin{equation*} \mu_{ij}+\mu_{ji}-1=0, \quad 1 \leqslant i,j \leqslant s. \end{equation*}
The main advantage of the proposed formulation over the standard one is that the absence of multiplications in the alternative symplecticity condition makes it possible (see Appendix~A for the particular case of the 12th order Gauss collocation IRK method) to find machine number approximations $\tilde{\mu}_{i j}$ of $\mu_{i j}=a_{i j}/b_j$ satisfying exactly the symplecticity condition
\begin{equation} \label{eq:sympl_cond_2} \tilde{\mu}_{ij}+\tilde{\mu}_{ji}-1=0, \quad 1 \leqslant i,j \leqslant s. \end{equation}
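The following sketch shows one way (our own illustration, not the procedure of Appendix~A) to produce machine coefficients satisfying (\ref{eq:sympl_cond_2}) exactly: round the entry of each pair that is at least $1/2$, and recover its mirror by a floating point subtraction that is exact by the Sterbenz lemma, since the rounded entry lies in $[1/2,1]$:

```python
import numpy as np

def symplectic_machine_mu(mu):
    """Machine-number approximations tm of real coefficients mu
    (with mu_ij + mu_ji = 1) satisfying tm_ij + tm_ji = 1 *exactly*."""
    s = len(mu)
    tm = np.empty((s, s))
    for i in range(s):
        tm[i, i] = 0.5                            # diagonal entries are exactly 1/2
        for j in range(i + 1, s):
            big, small = (i, j) if mu[i][j] >= 0.5 else (j, i)
            tm[big, small] = float(mu[big][small])
            tm[small, big] = 1.0 - tm[big, small] # exact for values in [1/2, 1]
    return tm

mu = [[0.5, 0.7, 0.2],
      [0.3, 0.5, 0.9],
      [0.8, 0.1, 0.5]]
tm = symplectic_machine_mu(mu)
assert np.all(tm + tm.T == 1.0)                   # condition (6) holds exactly
```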
\subsection{Iterative solution of the nonlinear Runge-Kutta equations} \label{ss:3.2}
The fixed point iteration can be used to approximately compute the solution of the implicit equations (\ref{eq:YL}) as follows: For $k=1,2,\ldots$ obtain the approximations $Y_{n,i}^{[k]}$, $L_{n,i}^{[k]}$ of $Y_{n,i}$, $L_{n,i}$ ($i=1,\ldots,s$) as \begin{equation} \label{eq:fixed_point_iteration} L_{n,i}^{[k]} = h b_i\, f(Y_{n,i}^{[k-1]}), \quad Y_{n,i}^{[k]} =y_n+ \sum^s_{j=1}\, \mu_{ij}\, L_{n,j}^{[k]} \quad i=1 ,\ldots, s. \end{equation}
The iteration may be initialized simply with $Y_{n,i}^{[0]}=y_n$, or by some other procedure that uses the stage values of the previous steps~\cite{Hairer2006}. If the step-length $h$ is sufficiently small, these iterations converge to a fixed point that is the solution of the algebraic equations (\ref{eq:YL}).
The situation is different for an actual computational version of these iterations, where $f$ is replaced in (\ref{eq:fixed_point_iteration}) by its computational substitute (\ref{eq:tildef}). The $k$th iteration then reads as follows: For $ i=1 ,\ldots, s$, \begin{equation} \label{eq:fixed_point_iteration_ideal} f_{n,i}^{[k]} = \tilde f({\mathrm{fl}(Y_{n,i}^{[k-1]})}), \quad L_{n,i}^{[k]} = h b_i\, f_{n,i}^{[k]}, \quad Y_{n,i}^{[k]} = { y_n+ \sum^s_{j=1}\, \mu_{ij}\, L_{n,j}^{[k]} }. \end{equation}
In this case, either a fixed point of (\ref{eq:fixed_point_iteration_ideal}) is reached in a finite number of iterations, or the iteration fails (mathematically speaking) to converge. In the latter case, however (provided that $h$ is small enough for the original iteration (\ref{eq:fixed_point_iteration}) to converge), after a finite number of iterations, a computationally acceptable approximation to the fixed point of (\ref{eq:fixed_point_iteration}) is typically achieved, and the successive iterates remain close to it. According to our experience and the numerical experiments reported in~\cite{Hairer2008}, a computational fixed point is reached for most steps in a numerical integration with sufficiently small step-length $h$.
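As a minimal illustration of the computational iteration in double precision, consider the $s=1$ Gauss method (the implicit midpoint rule, $b_1 = 1$, $\mu_{11} = 1/2$) applied to the harmonic oscillator; the example problem, step-size, and tolerances are our own:

```python
import numpy as np

def f(y):                                  # harmonic oscillator, H = (q^2 + p^2)/2
    return np.array([y[1], -y[0]])

def irk_step(y, h, max_iter=50):
    """One step of the implicit midpoint rule, with the stage equation
    solved by fixed point iteration until a computational fixed point."""
    Y = y
    for _ in range(max_iter):
        L = h * f(Y)                       # L = h * b_1 * f(Y)
        Y_new = y + 0.5 * L                # Y = y_n + mu_11 * L
        if np.array_equal(Y_new, Y):       # computational fixed point reached
            break
        Y = Y_new
    return y + L                           # y_{n+1} = y_n + L

y = np.array([1.0, 0.0])
H0 = 0.5 * (y @ y)
for _ in range(1000):
    y = irk_step(y, 0.1)
# the quadratic first integral is conserved up to round-off
assert abs(0.5 * (y @ y) - H0) < 1e-10
```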
In standard implementations of implicit Runge-Kutta methods, one considers
\begin{equation*} \Delta^{[k]} = (Y_{n,1}^{[k]}-Y_{n,1}^{[k-1]}, \ldots,Y_{n,s}^{[k]}-Y_{n,s}^{[k-1]}) \in \mathbb{F}^{s D}, \end{equation*}
(for notational simplicity, we do not reflect the dependence of $\Delta^{[k]}$ on $n$), and stops the iteration provided that $||\Delta^{[k]}|| \leqslant \mathrm{tol}$, with a prescribed vector norm $||\cdot||$ and iteration error tolerance $\mathrm{tol}$. If the chosen value of $\mathrm{tol}$ is too small, then the iteration may never end when the computational sequence does not arrive at a fixed point in a finite number of steps. If $\mathrm{tol}$ is not small enough, the iteration will stop too early, which will result in an iteration error of larger magnitude than round-off errors. Furthermore, as observed in~\cite{Hairer2008}, such iteration errors tend to accumulate in a systematic way.
The remedy proposed in~\cite{Hairer2008} is to stop the iteration either if $\Delta^{[k]}=0$ (that is, if a fixed point is reached) or if $||\Delta^{[k]}|| \geqslant || \Delta^{[k-1]}||$.
The underlying idea is that (provided that $h$ is small enough for the original iteration (\ref{eq:fixed_point_iteration}) to converge), typically $||\Delta^{[k]}|| < || \Delta^{[k-1]}||$ whenever the iteration error is substantially larger than round-off errors, and thus $||\Delta^{[k]}|| \geqslant || \Delta^{[k-1]}||$ may indicate that round-off errors are already significant.
We have observed that Hairer's strategy works well in general, but in some cases it stops the iteration too early. Indeed, it works fine for the initial value problem on a simplified model of the outer solar system (OSS) reported in~\cite{Hairer2008} with a step-size $h$ of $500/3$ days, but it fails with $h=1000/3$ days.
Actually, we have run \href{http://www.unige.ch/~hairer/preprints/code.tar}{Hairer's fortran code} and observed that the computed numerical solution exhibits an error in energy that is considerably larger than round-off errors. The evolution of relative error in energy is displayed on the left of Figure~\ref{fig:plot0}, which shows a linear growth pattern. We have checked that, for instance, at the first step, \begin{equation*}
\|\Delta^{[1]}\| > \|\Delta^{[2]}\|> \cdots >\|\Delta^{[12]}\| = 3.91\times 10^{-14} \leqslant \|\Delta^{[13]}\| = 4.35\times 10^{-14} \end{equation*}
which causes the iteration to stop at the $13$th iteration. This happens to be too early, since subsequently $\|\Delta^{[13]}\|> \|\Delta^{[14]}\| > \|\Delta^{[15]}\|> \|\Delta^{[16]}\|=0$.
\begin{figure}
\caption{\small Evolution of relative error in energy for the outer solar system problem (OSS) with the original unperturbed initial values in~\cite{Hairer2008} and doubled step-size ($h=1000/3$ days). (a) Hairer's stopping criterion. (b) New stopping criterion.}
\label{fig:plot0}
\end{figure}
Motivated by this, we propose an alternative, more stringent stopping criterion: Denote by $\Delta_j^{[k]}$ the $j$th component ($1\leqslant j \leqslant s D$) of $\Delta^{[k]} \in \mathbb{F}^{s D}$. The fixed point iteration (\ref{eq:fixed_point_iteration_ideal}) is performed for $k=1,2,\ldots$ until either $\Delta^{[k]} =0$ or the following condition is fulfilled for two consecutive iterations: \begin{equation} \label{eq:stopping} \forall j \in \{1,\ldots,s D\}, \quad
\min \left(\{|\Delta_j^{[1]}|,\ldots,|\Delta_j^{[k-1]}|\} \smallsetminus \{0\} \right) \leqslant |\Delta_j^{[k]}|. \end{equation}
If $K$ is the first positive integer such that (\ref{eq:stopping}) does hold for both $k=K-1$ and $k=K$, then we compute the approximation $y_{n+1} \approx y(t_{n+1})$ as
\begin{equation*} y_{n+1} = y_{n} + \sum_{i=1}^{s} L_{n,i}^{[K]}. \end{equation*} The iteration typically stops with $\Delta_j^{[K]}=0$ for all $j$. However, when the iteration stops with $\Delta_j^{[K]}\neq 0$ for some $j$, that is, when no computational fixed point is achieved, we still have to decide whether this has been due to the effect of small round-off errors, or because the step-size is not small enough for the convergence of the iteration (\ref{eq:fixed_point_iteration}). This is the only point where our implementation depends on a norm-based standard criterion (with rather loose absolute and relative error tolerances) to decide if $\Delta^{[K]} \in \mathbb{F}^{s D}$ is small enough.
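A sketch of the componentwise test (\ref{eq:stopping}); we treat a component with no previous nonzero entries as vacuously satisfying the condition, an implementation choice the text leaves open:

```python
def condition_holds(history):
    """history[k-1] holds the components |Delta_j^{[k]}|.  Check condition
    (9) at the latest iterate: for every component j, the minimum previous
    *nonzero* |Delta_j| must not exceed the current |Delta_j|."""
    *prev, curr = history
    for j in range(len(curr)):
        nonzero = [abs(d[j]) for d in prev if d[j] != 0.0]
        if nonzero and min(nonzero) > abs(curr[j]):
            return False
    return True

# One component, mimicking the OSS behaviour above: a small uptick at the
# 5th iterate satisfies (9), but the next iterate decreases again, so the
# "two consecutive iterations" requirement keeps the iteration running.
h = [[8.0], [4.0], [2.0], [1.0], [1.1], [0.5], [0.0]]
assert condition_holds(h[:5]) is True     # uptick: 1.0 <= 1.1
assert condition_holds(h[:6]) is False    # resumed decrease: 1.0 > 0.5
```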
We have repeated the experiment of OSS with $h=1000/3$ by replacing Hairer's stopping criterion by our new one. The evolution of the resulting energy errors are displayed on the right of Figure~\ref{fig:plot0}.
\subsection{Machine precision implementation of the new algorithm}
Subsections~\ref{ss:3.1} and~\ref{ss:3.2} completely determine the FPIEA (Fixed Point Iteration with Exact Arithmetic) implementation referred to in the Introduction. We next describe in detail our machine precision implementation of the algorithm described (for exact arithmetic) in the previous subsection.
Consider appropriate approximations $\tilde b_i \in \mathbb{F}$ of $b_i$ ($i=1,\ldots,s$), and let
$\tilde{\mu}_{i j} \in \mathbb{F}$ ($i,j=1,\ldots,s$) be approximations of $\mu_{i j}$ satisfying exactly the symplecticity condition (\ref{eq:sympl_cond_2}).
Given $y_0 \in \mathbb{R}^D$, we set $\tilde y_0 = \mathrm{fl}(y_0)$ and $e_0=\mathrm{fl}(y_0-\tilde y_0)$. For each $n=0,1,2,\ldots$, we initialize $Y_{n,i}^{[0]}=y_n$, and successively compute for $k=1,2,\ldots$ \begin{equation} \label{eq:fixed_point_iteration_comp} \begin{split}
f_{n,i}^{[k]} &= \tilde f(Y_{n,i}^{[k-1]}), \quad L_{n,i}^{[k]} = \mathrm{fl}(h\, \tilde{b}_i\,f_{n,i}^{[k]}), \\ Z_{n,i}^{[k]} &= {e_n +} \sum^s_{j=1}\, \tilde{\mu}_{ij}\, L_{n,j}^{[k]}, \quad Y_{n,i}^{[k]} = \mathrm{fl}\Big( \tilde y_n+ Z_{n,i}^{[k]}\Big) \end{split} \end{equation}
until the iteration is stopped at $k=K$ according to the criteria described in Subsection~\ref{ss:3.2}. Hence, $K$ is the highest index $k$ such that $f_{n,i}^{[k]}$ has been computed.
In our actual implementation, one can optionally initialize $Y_{n,i}^{[0]}$ by interpolating from the stage values of the previous step, which improves the efficiency of the algorithm. Nevertheless, in all the numerical results reported in Section~\ref{s:ne} below, the simpler initialization
$Y_{n,i}^{[0]}=y_n$ is employed.
In (\ref{eq:fixed_point_iteration_comp}), we evaluate each $Z_{n,i}^{[k]} \in \mathbb{F}^D$ as \begin{equation*} Z_{n,i}^{[k]} = (\cdots ((e_n + \tilde{\mu}_{i,1} L_{n,1}^{[k]}) + \tilde{\mu}_{i,2} L_{n,2}^{[k]} )+\cdots+\tilde{\mu}_{i,s-1} L_{n,s-1}^{[k]})+\tilde{\mu}_{i,s}L_{n,s}^{[k]} \end{equation*} where each multiplication and addition is performed in the prescribed floating point arithmetic.
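As an illustration, one sweep of (\ref{eq:fixed_point_iteration_comp}) can be sketched in Python for the scalar case $D=1$. The function name and the demo problem are ours, and the demo uses the $1$-stage Gauss method (implicit midpoint, $b_1=1$, $\mu_{11}=1/2$) rather than the $6$-stage scheme considered later; this is a sketch of the structure of the iteration, not our actual C implementation.

```python
def fp_sweep(f, h, b, mu, y, e, Y):
    """One sweep of the fixed point iteration (scalar case D = 1):
    given the previous stage values Y[i] = Y_{n,i}^{[k-1]}, return the
    increments L_{n,i}^{[k]} and the new stage values Y_{n,i}^{[k]}."""
    s = len(b)
    L = [h * b[i] * f(Y[i]) for i in range(s)]   # L_{n,i}^{[k]}
    Ynew = []
    for i in range(s):
        z = e                                    # e_n first, then the
        for j in range(s):                       # mu_{ij} L_j terms,
            z += mu[i][j] * L[j]                 # added left to right
        Ynew.append(y + z)                       # Y_{n,i}^{[k]}
    return L, Ynew

# Demo: implicit midpoint on y' = -y, one step from y_0 = 1 with h = 0.1.
f = lambda x: -x
y, e = 1.0, 0.0
Y = [y]                                          # Y_{n,i}^{[0]}
for _ in range(60):
    L, Y = fp_sweep(f, 0.1, (1.0,), ((0.5,),), y, e, Y)
# The stage value converges to the fixed point of Y = 1 - 0.05*Y,
# i.e. Y = 1/1.05.
```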
We then compute $\tilde y_{n+1}, e_{n+1} \in \mathbb{F}^D$ such that $\tilde y_{n+1} + e_{n+1} \approx y(t_{n+1})$ as follows: \begin{itemize} \item compute for $i=1,\ldots,s$ the vectors \begin{equation} \label{eq:Eni} E_{n,i} = h\, \tilde b_i\,f_{n,i}^{[K]}-L_{n,i}^{[K]}, \end{equation} and set \begin{equation*} \delta_{n}=e_{n}+\sum_{i=1}^{s} E_{n,i}; \end{equation*}
\item finally, compute
\begin{equation}
\label{eq:y3}
(\tilde y_{n+1}, e_{n+1})=S_{s,D}(\tilde y_{n},\delta_{n},L_{n,1}^{[K]}, \ldots, L_{n,s}^{[K]}).
\end{equation} \end{itemize}
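The role of the pair $(\tilde y_{n+1}, e_{n+1})$ can be sketched with a compensated accumulation in the spirit of (\ref{eq:y3}). The stand-in below (our own names) is based on Knuth's error-free TwoSum transformation and is only a scalar sketch of the idea behind $S_{s,D}$, not the routine defined in the paper.

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, t) with
    s = fl(a + b) and a + b = s + t exactly."""
    s = a + b
    ap = s - b
    bp = s - ap
    return s, (a - ap) + (b - bp)

def step_update(y, delta, Ls):
    """Stand-in for (y_{n+1}, e_{n+1}) = S(y_n, delta_n, L_1, ..., L_s):
    accumulate the increments into the pair (y, e) so that the part of
    each addition lost to rounding is retained in the correction e."""
    e = delta
    for L in Ls:
        y, e = two_sum(y, L + e)
    return y, e

# 100 increments of size 2^-60 are far below the rounding level of
# y = 1.0: a plain floating point sum would return exactly 1.0 and lose
# them all, while the compensated pair keeps them in e.
y1, e1 = step_update(1.0, 0.0, [2.0**-60] * 100)
```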
If the FMA (fused-multiply-add) instruction is available, it should be used to compute (\ref{eq:Eni}) (with precomputed coefficients $h \tilde b_i$). The order in which the terms defining $Z_{n,i}^{[k]}$ and $\delta_n$ are actually computed in the floating point arithmetic is not relevant, as the corresponding round-off errors of the small corrections $Z_{n,i}^{[k]}$ and $\delta_n$ will have a very marginal effect on the computation of $\tilde y_{n+1}$ and $e_{n+1}$.
\subsection{Round-off error estimation} \label{ss:estimation}
We estimate the round-off error propagation of our numerical solution $\tilde{y}_{n}+ e_n \approx y(t_n)$ ($n=1,2,\ldots$) by computing its difference with a slightly less precise secondary numerical solution $\hat y_{n}+\hat e_n \approx y(t_n)$ obtained with a modified version of the machine precision algorithm described in the previous subsection. In this modified version of the algorithm, the components of each $L_{n,i}^{[K]}$ in (\ref{eq:y3}) are rounded to a machine number with a shorter mantissa. We next give some more details.
Let $p$ be the number of binary digits of our floating point arithmetic. Given an integer $r\geqslant 0$ and a machine number $x$, we define $\mathrm{fl}_{p-r}(x):= \mathrm{fl}(2^r x + x) - 2^r x$. This is essentially equivalent to rounding $x$ to a floating point number with $p-r$ significant binary digits.
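A minimal Python sketch of this reduced-precision rounding (the function name is ours; it assumes IEEE double precision, $p=53$, round to nearest, and that $2^r x + x$ does not overflow):

```python
import math

def fl_reduced(x, r):
    """Round x to a float with roughly p - r significant bits (p = 53
    for IEEE double) via fl(2^r * x + x) - 2^r * x.  Multiplication by
    the power of two 2^r is exact, so rounding happens only in the sum."""
    factor = math.ldexp(1.0, r)          # 2^r, exact
    return (factor * x + x) - factor * x

# With r = 3, the last three bits of the significand are rounded away:
# 1 + 2^-52 (the double right above 1.0) collapses to 1.0.
```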
We determine the algorithm for the secondary integration by fixing a positive integer $r<p$ and modifying (\ref{eq:y3}) in the implementation of the algorithm described in the previous subsection as follows: \begin{equation*}
(\hat y_{n+1}, \hat e_{n+1})=S_{s,D}(\hat y_{n},\delta_{n},\mathrm{fl}_{p-r}(L_{n,1}^{[K]}), \ldots, \mathrm{fl}_{p-r}(L_{n,s}^{[K]})). \end{equation*}
The proposed round-off error estimation can thus be obtained as the difference of the primary numerical solution and the secondary numerical solution obtained with a relatively small value of $r$ (say, $r=3$). These two numerical solutions can be computed in parallel in a completely independent way.
In addition, we have implemented a sequential version
with lower CPU requirements than two integrations executed in a completely independent way. The key observation is the following: at each step, the stage values $Y_{n,i}$ ($i=1,\ldots,s$) of the primary and secondary integrations will typically be very close to each other (as long as the estimated round-off error does not grow too much). Thus, the number of iterations in each step of the secondary integration can be reduced by using the final stage values $Y_{n,i}$ ($i=1,\ldots,s$) of the primary integration as initial values $Y_{n,i}^{[0]}$ of the secondary integration.
\section{Numerical experiments} \label{s:ne}
We next report some numerical experiments to assess our implementation of the $6$-stage Gauss collocation method of order $12$ in the 64-bit IEEE double precision floating point arithmetic.
\subsection{Test problems}
We consider two different Hamiltonian problems, corresponding to a double pendulum and to a simulation of the outer solar system (considered in~\cite{Hairer2006} and \cite{Hairer2008}) respectively. In all cases, we consider a time-step $h$ that is small enough for round-off errors to dominate over truncation errors.
\subsubsection{The double pendulum (DP) problem}
We consider the planar double pendulum problem: a double bob pendulum with masses $m_1$ and $m_2$ attached by rigid massless rods of lengths $l_1$ and $l_2$. This is a non-separable Hamiltonian system with two degrees of freedom, for which no explicit symplectic Runge-Kutta-type method is available, and hence Gauss collocation methods are a natural choice~\cite{McLachlan1992}.
The configuration of the pendulum is described by two angles $q=(\phi,\theta)$ (see Figure~\ref{fig:double-pendulum}): while $\phi$ is the angle of the first bob, the second bob's angle is defined by $\psi=\phi+\theta$. We denote the corresponding momenta as $p=(p_{\phi},p_{\theta})$.
\begin{figure}
\caption{Double Pendulum.}
\label{fig:double-pendulum}
\label{fig:two}
\end{figure}
Its Hamiltonian function $H(q,p)$ is \begin{multline} \label{eq:2} -\frac{ {l_1}^2 \ (m_1+m_2) \ {p_{\theta}}^2 +{l_2}^2 \ m_2 \ (p_{\theta} -p_{\phi})^2 + 2 \ l_1 \ l_2 \ m_2 \ p_{\theta} \ (p_{\theta} -p_{\phi}) \ \cos(\theta )} {{l_1}^2 \ {l_2}^2 \ m_2 \ (-2 \ m_1 - m_2 + m_2 \ \cos(2 \theta ))} \\ -g \ \cos (\phi) \ (l_1 \ (m_1+m_2)+l_2 \ m_2 \ \cos(\theta))+g \ l_2 \ m_2 \ \sin(\theta) \sin(\phi), \end{multline}
and we consider the following parameter values \begin{equation*} \label{eq:17} g=9.8 \ \mathrm{m}/\mathrm{s}^2\ ,\ \ l_1=1.0 \ \mathrm{m} \ , \ l_2=1.0 \ \mathrm{m}\ , \ m_1=1.0 \ \mathrm{kg}\ , \ m_2=1.0 \ \mathrm{kg}. \end{equation*}
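A direct Python transcription of the Hamiltonian (\ref{eq:2}) with the stated parameter values may help the reader reproduce the test problem; the function name is ours, and this is a plain reference implementation, not the evaluated right-hand side used by the integrator.

```python
import math

# Parameter values from the text (SI units).
g, l1, l2, m1, m2 = 9.8, 1.0, 1.0, 1.0, 1.0

def dp_hamiltonian(q, p):
    """Double pendulum Hamiltonian H(q, p) in the coordinates
    q = (phi, theta), p = (p_phi, p_theta)."""
    phi, theta = q
    p_phi, p_theta = p
    num = (l1**2 * (m1 + m2) * p_theta**2
           + l2**2 * m2 * (p_theta - p_phi)**2
           + 2 * l1 * l2 * m2 * p_theta * (p_theta - p_phi)
             * math.cos(theta))
    den = l1**2 * l2**2 * m2 * (-2 * m1 - m2 + m2 * math.cos(2 * theta))
    pot = (-g * math.cos(phi) * (l1 * (m1 + m2) + l2 * m2 * math.cos(theta))
           + g * l2 * m2 * math.sin(theta) * math.sin(phi))
    return -num / den + pot

# At rest hanging straight down, H is pure potential: -g*(2*l1 + l2).
```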
We take two initial values from \cite{Dumitru}, the first one of non-chaotic character, and the second one exhibiting chaotic behaviour: \begin{enumerate}
\item {Non-Chaotic case (NCDP): $q(0)=(1.1, -1.1)$ and $p(0)=(2.7746,2.7746)$. We have integrated over $T_{end}=2^{12}$ seconds with step-size $h = 2^{-7}$. The numerical results will be sampled once every $m=2^{10}$ steps}.
\item {Chaotic case (CDP): $q(0)=(0,0)$ and $p(0)=(3.873,3.873)$.
We have integrated over $T_{end}=2^{8}$ seconds with step-size $h = 2^{-7}$. We sample the numerical results once every $m=2^{8}$ steps}. \end{enumerate}
Both initial value problems (NCDP and CDP) will be used to test the evolution of the global errors as well as to check the performance of the round-off error estimators. For the long term evolution of the errors in energy, only the NCDP problem will be considered.
\subsubsection{Simulation of the outer solar system (OSS)}
We consider a simplified model of the outer solar system (the sun, the four outer planets, and Pluto) under mutual gravitational (non-relativistic) interactions. This is a Hamiltonian system with 18 degrees of freedom ($q_i, p_i \in \mathbb{R}^3$, $i=0,\ldots,5$) and Hamiltonian function \begin{equation} \label{eq:Ham2}
H(q,p)=\frac{1}{2}\ \sum^{5}_{i=0}{\ \frac{{\|p_i\|}^2}{m_i}}- \ G \sum_{0\le i<j\le 5}{\frac{m_im_j}{\|q_i-q_j\|}}. \end{equation}
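The Hamiltonian (\ref{eq:Ham2}) is straightforward to evaluate; a minimal Python sketch follows. The function name is ours, and the masses and positions used in the demo are toy two-body values, not the actual $Gm_i$ and initial data of \cite[page~14]{Hairer2006}.

```python
import math

def oss_hamiltonian(m, q, p, G):
    """N-body Hamiltonian: kinetic energy plus pairwise Newtonian
    potential.  m: masses; q, p: sequences of 3-vectors (tuples);
    G: gravitational constant."""
    n = len(m)
    T = 0.5 * sum(sum(c * c for c in p[i]) / m[i] for i in range(n))
    U = -G * sum(m[i] * m[j] / math.dist(q[i], q[j])
                 for i in range(n) for j in range(i + 1, n))
    return T + U

# Toy two-body check: T = 0.5, U = -2, so H = -1.5.
```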
We have taken the initial values and the values of the constant parameters ($G m_i$, $i=0,\ldots,5$) from~\cite[page~14]{Hairer2006}. We have integrated over $T_{end}=10^{7}$~days with step-size $h=500/3$, and the numerical results are sampled once every $m=120$ steps.
Observe that (\ref{eq:Ham2}) is a separable Hamiltonian, i.e., of the form $H(p,q) = T(p) + U(q)$. It is well known that the efficiency of the standard fixed point iteration can be improved for Hamiltonian systems with separable Hamiltonian by considering a partitioned version of the fixed point iteration~\cite{Hairer2006}. Nevertheless, as in~\cite{Hairer2008}, here we report the numerical results obtained with the standard non-partitioned fixed point iteration. (We have actually checked that similar results are obtained with the partitioned version, the main difference being that fewer iterations are performed at each step.)
\subsection{Comparison of different sources of error in energy}
The error of a numerical solution $\tilde y_{n} + e_{n} \approx y(t_n)$ ($n=1,2,\ldots$) computed with our double precision (DP) implementation of symplectic IRK schemes is a combined result of different kinds of errors:
\begin{enumerate} \item The truncation error: The error due to replacing $y(t_n)$, $n=1,2,3,\ldots$ (where $y(t)$ is the solution of the initial value problem (\ref{eq:ivp})) by the numerical approximations $y_{n}$ defined by (\ref{eq:y})--(\ref{eq:YL}) (with exact coefficients $b_{i}, \mu_{i j}$).
\item The iteration error: In practice a finite number $K$ of fixed point iterations (\ref{eq:fixed_point_iteration}) is applied, and the solutions $L_{n,i}, Y_{n,i}$ ($i=1,\ldots,s$) of (\ref{eq:YL}) are replaced by the approximations $L_{n,i}^{[K]}, Y_{n,i}^{[K]}$. Then, in the FPIEA implementation, the corresponding numerical solution $\overline{y}_{n+1}$ is computed at each step as \begin{equation*} \overline{y}_{n+1} = y_{n} + \sum_{i=1}^{s} L_{n,i}^{[K]}. \end{equation*}
\item The error due to replacing the original map $f:\mathbb{R}^D \to \mathbb{R}^D$ by its computational substitute $\tilde{f}:\mathbb{F}^D \to \mathbb{F}^D$. This has a double effect: on the one hand, in most steps, a computational fixed point is achieved in a finite number $K$ of iterations, which causes an unavoidable iteration error. On the other hand, replacing $f$ by $\tilde f$ adds the effect of some round-off errors.
\item The error due to the application of a different IRK scheme. In our case, we apply the scheme (\ref{eq:y})--(\ref{eq:YL}) with $b_i$ replaced by $\tilde b_i \in \mathbb{F}$ ($i=1,\ldots,s$) and each $\mu_{i j}$ replaced by a double precision approximation $\tilde \mu_{i j} \in \mathbb{F}$ satisfying condition (\ref{eq:sympl_cond_2}).
\item The error due to using inexact arithmetic for the operations (other than the evaluation of $\tilde f$) in the machine precision implementation of the algorithm. \end{enumerate}
We have simulated, for the double pendulum (the non-chaotic case, NCDP) and the outer solar system (OSS) respectively, the effect that each of the first four of these sources of error has on the values of the energy (which is of course conserved in the exact solution) as follows:
\begin{figure}
\caption{\small We plot the evolution of the energy error, in logarithmic scale, for the algorithm implementations A--D: A, estimation of the truncation error (red); B, estimation of the iteration error (green); C, estimation of the effect of replacing the exact $f$ by its double precision version $\tilde{f}$ (black); and D, estimation of the effect of using double precision coefficients (blue).}
\label{fig:plot1}
\end{figure}
\begin{enumerate} \renewcommand{\theenumi}{\Alph{enumi}} \item In order to estimate the truncation error, we have applied our algorithm fully implemented in quadruple precision. \item For the iteration error, we have applied the quadruple precision version of the algorithm, modified so that the fixed point process in the $n$th step is stopped at the $K$th iteration provided that $Y_{n,i}^{[K]}$ and $Y_{n,i}^{[K-1]}$ coincide when rounded to double precision. \item In addition, we have estimated the effect (on the evolution of the energy) of the error due to replacing $f$ by $\tilde f$, by considering the quadruple precision implementation of our algorithm with the double precision version of $\tilde f$. \item Finally, we have simulated the error due to the application of an RK scheme with double precision coefficients
by applying our quadruple precision implementation with double precision coefficients $\tilde b_i, \tilde \mu_{i j}$.
\end{enumerate}
We next plot (Fig.~\ref{fig:plot1}), for each of the considered initial value problems, the evolution of the energy errors corresponding to items A--D in the previous list. In both cases, we have chosen a step-size $h$ such that truncation errors are smaller than round-off errors. We observe that the effect of using double precision coefficients ($\tilde{b}_i, \tilde{\mu}_{ij}$) is negligible compared to the propagation of round-off errors. The iteration error is similar in size to round-off errors.
\subsection{Statistical analysis of errors}
In order to make a more robust comparison of the numerical errors due to round-off, we adopt (as in~\cite{Hairer2008}) a statistical approach. For each of the three initial value problems, we have considered $P=1000$ perturbed initial values, obtained by randomly perturbing each component of the initial values with a relative error of size $\mathcal{O}(10^{-6})$.
We will compare three different fixed point implementations of the $6$-stage Gauss collocation method. In all of them, the same computational substitute $\tilde f:\mathbb{F}^D \to \mathbb{F}^D$ is used instead of the original map $f:\mathbb{R}^D \to \mathbb{R}^D$ defining the ODE (\ref{eq:ivp}):
\begin{enumerate} \item The FPIEA (fixed point iteration with exact arithmetic) implementation, where the techniques described in Subsections~\ref{ss:3.1} and~\ref{ss:3.2} are applied to implement a fixed point iteration with all arithmetic operations (other than those used when evaluating $\tilde f$) performed in exact arithmetic. \item Our double precision version (coded in C) of the algorithm implemented in FPIEA.
We will refer to it as DP (double precision). \item The algorithm proposed in~\cite{Hairer2008}, implemented in \href{http://www.unige.ch/~hairer/preprints/code.tar}{Hairer's Fortran code}. \end{enumerate}
On the one hand, we want to check whether the propagation of round-off errors in our DP implementation is qualitatively similar and close in magnitude to that of its exact arithmetic counterpart FPIEA. On the other hand, we want to see how our DP implementation compares with Hairer's code.
In Table~\ref{tab:fp} we display the percentage of steps that reach a computational fixed point and the average number of fixed point iterations per step for each of the three implementations when applied to the three initial value problems.
\begin{table} \caption[Fixed-point percentage of steps and mean iterations.] {\small{Percentage of steps that reach a computational fixed-point and the number of fixed-point iterations per step for the computations of non-chaotic double pendulum (NCDP), chaotic double pendulum (CDP), and the outer solar system (OSS) problems. In columns, we compare three different implementations: FPIEA, DP (double precision) and Hairer's Fortran code.}} \label{tab:fp} \centering
{ \begin{tabular}{ l c c c c c c }
\hline
& \multicolumn{2}{c}{FPIEA} & \multicolumn{2}{c}{DP} & \multicolumn{2}{c}{Hairer} \\
\hline
NCDP & $98.7\%$ & $8.5$ & $98.8\%$ & $8.6$ & $98.5\%$ & $8.6$ \\
CDP & $98.9\%$ & $8.5$ & $98.9\%$ & $8.6$ & $98.4\%$ & $8.6$ \\
OSS & $97.7\%$ & $14.4$ & $97.4\%$ & $14.2$ & $87.5\%$ & $14.1$ \\
\hline
\end{tabular}} \end{table}
\subsubsection{Distribution of energy jumps}
\begin{figure}
\caption{\small Histograms of $K P$ samples of energy jumps of the DP implementation against the normal distribution $N(\mu, \sigma)$ for two problems (NCDP and OSS). The horizontal axis is multiplied by $10^{15}$, and the vertical axis indicates the frequency.}
\label{fig:plot2}
\end{figure}
The local error in energy $H(y_n)-H(y_{n-1})$ due to round-off errors is ``expected'' to behave, for a good implementation free from biased errors, like an independent random variable. Then, provided that the numerical results are sampled every $m$ steps, with a large enough sampling frequency $m$, an energy jump $H(y_{k m})-H(y_{mk-m})$ will behave as an independent random variable with an approximately Gaussian distribution with mean $\mu$ (ideally $\mu=0$) and standard deviation $\sigma$, so that the accumulated difference in energy, \begin{equation} \label{eq:AEE} H(y_{k m})-H(y_{0}) \end{equation} at the sampled times $t_{m k} = k m h$, would behave like a Gaussian random walk with standard deviation $k^{\frac12}\sigma = (t_{m k}/(m h))^{1/2} \sigma$. This is sometimes referred to as Brouwer's law in the scientific literature~\cite{Grazier2005}, after the original work of Brouwer~\cite{Brouwer1937} on the accumulation of round-off errors in the numerical integration of Kepler's problem.
In this sense, we want to check to what extent the (scaled) energy jumps, \begin{equation} \label{eq:REJ} (H(y_{k m})-H(y_{mk-m}))/H(y_0) \end{equation}
due to round-off errors after $m$ steps approximately obey a Gaussian distribution in our double precision (DP) implementation.
If $[0,T_{end}]$ is the integration interval, and $P$ perturbed initial values are considered, we have a total number of $K P$ samples of energy jumps, where $K=T_{end}/(m h)$. In Figure~\ref{fig:plot2}, we plot the histograms of the $K P$ samples of energy jumps obtained with our DP implementation against the normal distribution $N(\mu, \sigma)$ (where $\mu$ and $\sigma$ are the average and standard deviation of the samples respectively). For both initial value problems, the non-chaotic double pendulum (NCDP) and the outer solar system (OSS), the histograms fit their corresponding normal distributions $N(\mu, \sigma)$ very well.
\subsubsection{Evolution of mean and standard deviation of errors}
We next plot (Fig.~\ref{fig:plot3}) the evolution of the mean and standard deviation of the errors in energy of the FPIEA, DP, and Hairer's implementations for the NCDP and OSS problems respectively.
\begin{figure}
\caption{\small Evolution of the mean ($\mu$) and standard deviation ($\sigma$) of errors in energy (left) and detail of the evolution of the mean errors in energy (right), for the DP implementation (blue), the FPIEA implementation (orange), and Hairer's implementation (green). Non-chaotic double pendulum case (a, b) and outer solar system case (c, d).}
\label{fig:plot3}
\end{figure}
Recall that FPIEA represents the best possible fixed point implementation of the IRK scheme provided that the double precision version $\tilde f$ of the original $f$ is used. We stress that we have made the stopping criterion of the FPIEA implementation even more stringent than in the DP implementation: we stop the fixed point iteration if either $ \Delta^{[k]} =0$ or (\ref{eq:stopping}) is fulfilled during ten consecutive iterations. This way, we try to avoid the persistence of iteration errors in the case of steps where no computational fixed point is obtained. (Observe that whenever $ \Delta^{[k]} =0$, there is no point in performing more fixed point iterations, as in that case a computational fixed point has been achieved.)
The numerical tests in Figure~\ref{fig:plot3} seem to confirm that our DP implementation is near optimal (that is, close to the FPIEA implementation), both with respect to the standard deviation and the mean of the errors in energy.
We believe that some small linear drift of the mean energy error may be unavoidable for fixed point implementations of IRK schemes in some cases (such as the NCDP example). This is consistent with the observation that the simulated iteration error displayed in Figure~\ref{fig:plot0} is close in magnitude to the effect of round-off errors.
This is of course not inherent to the symplectic IRK scheme itself. In Figure~\ref{fig:plotNewton}, we display the results obtained for the NCDP example with a preliminary implementation of the same IRK scheme based on simplified Newton iterations. No linear drift of the mean energy error seems to appear.
\begin{figure}
\caption{\small Evolution of the mean ($\mu$) and standard deviation ($\sigma$) of errors in energy of an IRK implementation with simplified Newton iterations}
\label{fig:plotNewton}
\end{figure}
To conclude this subsection, we plot (Fig.~\ref{fig:plot4}) the evolution of the mean and standard deviation of the errors in position of the FPIEA, DP, and Hairer's implementations for the NCDP, CDP and OSS problems respectively. The displayed results seem to confirm our claim that the DP implementation is a close-to-optimal fixed point implementation of the symplectic IRK scheme.
\begin{figure}
\caption{\small Evolution of the mean (left) and standard deviation (right) of global errors in positions for the DP implementation (blue), the FPIEA implementation (orange), and Hairer's implementation (green): NCDP (a, b), CDP (c, d) and OSS (e, f)}
\label{fig:plot4}
\end{figure}
\subsection{Round-off error estimation}
In order to assess the quality of the error estimation technique proposed in Subsection~\ref{ss:estimation}, we represent, in Fig.~\ref{fig:plot5}, for each of the three considered initial value problems (with the original unperturbed initial values), the evolution of the global errors in position of our DP implementation, together with the evolution of the estimations produced by our technique applied with $r=3$. In addition, we present, for each of the three considered examples, the evolution of the mean error in positions of the application of our DP algorithm to $P=1000$ perturbed initial value problems, together with the evolution of the mean of the estimated errors in positions. We believe that the results indicate that the proposed round-off error estimation procedure is useful for the purpose of assessing the propagation of round-off errors.
\begin{figure}
\caption{\small Left: estimation of the round-off error with the original unperturbed initial values. We compare the evolution of our error estimation (orange) with the evolution of the global error (blue). Right: evolution of the mean error in positions (blue) of the application of our DP algorithm to $P=1000$ perturbed initial value problems, together with the evolution of the mean of the estimated errors in positions (orange).}
\label{fig:plot5}
\end{figure}
\section{Concluding remarks}
Symplectic implicit Runge-Kutta schemes (such as RK collocation methods with Gaussian nodes) are very appropriate for the accurate numerical integration of general Hamiltonian systems. For non-stiff problems, implementations based on fixed point iterations seem to be more efficient than those based on Newton's method or some of its variants.
We propose an implementation that takes special care in reducing the propagation of round-off errors, and includes the option of computing, in addition to the numerical solution, an estimation of the propagated round-off error. We claim that our implementation with fixed point iterations is near optimal, in the sense that the propagation of round-off errors is essentially no worse than the best possible implementation with fixed point iteration. Our claim seems to be confirmed by our numerical experiments.
A key point in our implementation has been the introduction of a new stopping criterion for the fixed point iteration. We believe that such a stopping criterion could be also useful in other contexts.
According to our numerical experiments, it seems that, in some cases, some small linear drift of the mean energy error may be unavoidable for fixed point implementations of IRK schemes. Whenever avoiding any drift of the energy error becomes critical, it might be preferable to use some Newton-based iteration instead.
The C code of our implicit Runge-Kutta implementation with fixed point iterations can be downloaded from the \href{https://github.com/mikelehu/IRK-FixedPoint}{IRK-FixedPoint} GitHub repository: \url{https://github.com/mikelehu/IRK-FixedPoint}.
\paragraph{Acknowledgements}
M. Anto\~nana, J. Makazaga, and A. Murua have been partially supported by projects MTM2013-46553-C3-2-P from Ministerio de Econom\'ia y Comercio, Spain, by project MTM2016-76329-R “IMAGEARTH” from Spanish Ministry of Economy and Competitiveness and as part of the Consolidated Research Group IT649-13 by the Basque Government.
\section*{Appendix A: Computation of coefficients for 12th order Gauss collocation method}
We next illustrate, by considering in detail the case of the $6$-stage Gauss collocation method in the 64-bit IEEE double precision floating point arithmetic, how to determine appropriate machine number coefficients $\mu_{i j}$, $1 \leqslant i,j \leqslant s$, that approximate the real numbers $a_{i j}/b_j$ of a given symplectic IRK scheme.
For all $i=1,\ldots,s$, $\mu_{i,i}=1/2$. For $1 \leqslant j < i \leqslant s$, we set $ \mu_{i j}:= \mathrm{fl}(a_{i j}/b_{j})$; these quotients satisfy $1/2 < | \mu_{i j} | < 2$, which implies that $\mu_{j i}:= 1 - \mu_{i j}$ is a machine number. This results in machine number coefficients $\mu_{i j}$ that satisfy the symplecticity conditions (\ref{eq:sympl_cond_2}) exactly.
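This exactness is easy to check numerically. In the Python sketch below, the coefficient values are hypothetical stand-ins for the actual Gauss collocation data $a_{ij}$, $b_j$ (which we do not reproduce here); what matters is that the quotients lie in $(1/2,2)$, so that $1-\mu_{ij}$ is computed without rounding (Sterbenz's lemma).

```python
def mu_matrix(a, b):
    """Machine coefficients mu_ij ~ a_ij / b_j with mu_ij + mu_ji = 1
    holding exactly: the diagonal is 1/2, the subdiagonal entries are
    rounded quotients, and the superdiagonal entries are 1 - mu_ij,
    exact whenever 1/2 < |mu_ij| < 2."""
    s = len(b)
    mu = [[0.5 if i == j else None for j in range(s)] for i in range(s)]
    for i in range(s):
        for j in range(i):
            mu[i][j] = a[i][j] / b[j]        # fl(a_ij / b_j)
            assert 0.5 < abs(mu[i][j]) < 2.0
            mu[j][i] = 1.0 - mu[i][j]        # exact subtraction
    return mu

# Hypothetical lower-triangular data with all quotients inside (1/2, 2).
a = [[0, 0, 0], [0.2, 0, 0], [0.25, 0.3, 0]]
b = [0.3, 0.4, 0.3]
mu = mu_matrix(a, b)
```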
Given $h$, the coefficients $hb_i = h \times b_i$ are precomputed as follows: for $i=2,\ldots,s-1$, $hb_i := \mathrm{fl}(h \times b_i)$, and, taking advantage of the symmetry $b_1=b_s$ of the Gauss weights, \begin{equation*} hb_1 := hb_s := (h - \sum_{i=2}^{s-1} hb_i)/2. \end{equation*}
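A Python sketch of this precomputation follows; the weight vector used in the demo is a hypothetical symmetric stand-in (summing to $1$), not the actual $6$-stage Gauss weights.

```python
def precompute_hb(h, b):
    """Precompute hb_i = fl(h * b_i) for the interior weights and let
    the two (equal, since b_1 = b_s) end coefficients absorb the
    residual, so that the sum of all hb_i reproduces h up to a few
    rounding errors."""
    s = len(b)
    hb = [h * b[i] for i in range(s)]        # provisional fl(h * b_i)
    interior = 0.0
    for i in range(1, s - 1):
        interior += hb[i]
    hb[0] = hb[s - 1] = (h - interior) / 2   # division by 2 is exact
    return hb

b = [0.1, 0.2, 0.2, 0.2, 0.2, 0.1]           # symmetric, sums to 1
hb = precompute_hb(0.125, b)
```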
\end{document}
"id": "1702.03354.tex",
"language_detection_score": 0.7643833756446838,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Determinantal formulas with major indices} \author{Thomas McConville, Donald Robertson, Clifford Smyth} \date{\today}
\maketitle
\begin{abstract} We give a simple proof of a major index determinant formula in the symmetric group discovered by Krattenthaler and first proved by Thibon using noncommutative symmetric functions. We do so by proving a factorization of an element in the group ring of the symmetric group. By applying similar methods to the groups of signed permutations and colored permutations, we prove determinant formulas in these groups as conjectured by Krattenthaler. \end{abstract}
\section{Introduction}
Let $\mathfrak{S}_n$ be the group of permutations of $[n]:=\{1,\ldots,n\}$. We often consider permutations in one--line notation $w=w_1\cdots w_n$ where $w_i=w(i)$. An integer $i\in[n-1]$ is a \emph{descent} of a permutation $w$ if $w_i>w_{i+1}$. The \emph{major index} $\maj(w)$ is the sum of the descents of $w$. For example, $\maj(314652)=1+4+5=10$. The \emph{major index matrix} is $(q^{\maj(uv^{-1})})_{u,v\in\mathfrak{S}_n}$. In his survey of determinantal formulas, Krattenthaler discovered and communicated a proof by Thibon of the following identity.
\begin{theorem}[Theorem 56, \cite{krattenthaler2001advanced}]\label{thm:main_maj}
For all $n\geq 1$, \[ \det\left( q^{\maj(uv^{-1})} \right)_{u,v\in\mathfrak{S}_n} = \prod_{k=2}^{n}(1-q^{k})^{n!\cdot (k-1)/k }. \] \end{theorem}
\begin{example}
The identity is trivial if $n=1$. For $n=2$, Theorem~\ref{thm:main_maj} gives
\[ \det\left(\begin{matrix}1 & q\\q & 1\end{matrix}\right) = (1-q^2). \] For $n=3$, if we index the rows and columns by $(123,\ 132,\ 213,\ 231,\ 312,\ 321)$, then \[ \det\left(\begin{matrix}
1 & q^2 & q & q & q^2 & q^3\\
q^2 & 1 & q & q & q^3 & q^2\\
q & q^2 & 1 & q^3 & q^2 & q\\
q^2 & q & q^3 & 1 & q & q^2\\
q & q^3 & q^2 & q^2 & 1 & q\\
q^3 & q & q^2 & q^2 & q & 1
\end{matrix}\right) = (1-q^2)^3(1-q^3)^4. \] \end{example}
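The $n=3$ instance of Theorem~\ref{thm:main_maj} can be verified in exact rational arithmetic. The helper functions below are ours; the determinant is evaluated by the Leibniz expansion, which is adequate for a $6\times 6$ matrix.

```python
from fractions import Fraction
from itertools import permutations

def maj(w):
    """Major index: sum of descent positions of w in one-line notation."""
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def inverse(w):
    inv = [0] * len(w)
    for i, wi in enumerate(w):
        inv[wi - 1] = i + 1
    return tuple(inv)

def compose(u, v):
    """(u v)(i) = u(v(i)); permutations as tuples with w[i-1] = w(i)."""
    return tuple(u[v[i] - 1] for i in range(len(v)))

def sign(seq):
    """Sign of a sequence of distinct entries, via inversion count."""
    n = len(seq)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def det(M):
    """Leibniz expansion of the determinant (exact for Fraction entries)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term = term * M[i][p[i]]
        total += term
    return total

q = Fraction(1, 2)
G = list(permutations(range(1, 4)))               # the group S_3
M = [[q ** maj(compose(u, inverse(v))) for v in G] for u in G]
# det(M) should equal (1 - q^2)^3 (1 - q^3)^4 exactly.
```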
To prove Theorem~\ref{thm:main_maj}, Thibon explicitly determined the eigenvalues of the major index matrix with multiplicity using the theory of noncommutative symmetric functions developed in \cite{krob1997noncommutative}. Stanley also determined the eigenvalues with multiplicity in \cite[Theorem 2.2]{stanley2001generalized} by applying a theorem of Bidigare, Hanlon, and Rockmore \cite[Theorem 1.2]{bidigare1999combinatorial}.
In this paper, we present a new, simpler proof of Theorem~\ref{thm:main_maj}. Our proof relies on a clever interpretation of the major index of a permutation given by Adin and Roichman in \cite{adin2001flag}, which we recall here. Let $t_k=(k,k-1,\dots,1)$ be a $k$-cycle for $2\leq k\leq n$. Each $w$ in $\mathfrak{S}_n$ can be uniquely expressed in the form $t_n^{c_n}t_{n-1}^{c_{n-1}}\cdots t_2^{c_2}$ where $0\leq c_k < k$, and the major index of $w$ is $c_n+c_{n-1}+\cdots+c_2$. This means that the sequence $(t_n,t_{n-1},\ldots,t_2)$ is a perfect basis of $\mathfrak{S}_n$, the definition of which we recall in Section~\ref{sec:bases}. This perfect basis determines a factorization of the major index matrix, which we use to evaluate its determinant in Section~\ref{sec:maj}.
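The Adin--Roichman factorization can be checked by brute force for small $n$. The helper names and the composition convention $(uv)(i)=u(v(i))$ below are ours; the check confirms that every permutation arises from exactly one exponent vector and that the major index equals the sum of the exponents.

```python
import math
from itertools import product

def compose(u, v):
    """(u v)(i) = u(v(i)); permutations as tuples with w[i-1] = w(i)."""
    return tuple(u[v[i] - 1] for i in range(len(v)))

def t(k, n):
    """The k-cycle t_k = (k, k-1, ..., 1) inside S_n, in one-line
    notation: 1 -> k and i -> i - 1 for 2 <= i <= k."""
    return tuple([k] + list(range(1, k)) + list(range(k + 1, n + 1)))

def maj(w):
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def check_perfect_basis(n):
    """Verify: each w in S_n equals t_n^{c_n} ... t_2^{c_2} for exactly
    one exponent vector with 0 <= c_k < k, and maj(w) = c_n + ... + c_2."""
    seen = {}
    for cs in product(*(range(k) for k in range(n, 1, -1))):
        w = tuple(range(1, n + 1))
        for k, c in zip(range(n, 1, -1), cs):
            for _ in range(c):
                w = compose(w, t(k, n))
        if w in seen:
            return False                     # factorization not unique
        seen[w] = sum(cs)
    return (len(seen) == math.factorial(n)
            and all(maj(w) == c for w, c in seen.items()))
```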
Our proof of Theorem~\ref{thm:main_maj} was motivated by Zagier's proof \cite{zagier1992realizability} of the identity \[ \det\left(q^{\inv(uv^{-1})}\right)_{u,v\in\mathfrak{S}_n} = \prod_{k=2}^{n}(1-q^{k^2-k})^{n!\cdot(n-k+1)/(k^2-k)}, \] where $\inv(w)$ is the number of inversions of a permutation $w$. Zagier considered an element of the group algebra $\mathbb{C}(q)\mathfrak{S}_n$ whose image under the regular representation is the matrix $(q^{\inv(uv^{-1})})$. By factoring this element of the group algebra, he obtained a corresponding factorization of the matrix $(q^{\inv(uv^{-1})})_{u,v \in \mathfrak{S}_n}$ for which the determinants of the factors could be readily evaluated.
A \emph{colored permutation} $(w,x)$ consists of a permutation $w\in\mathfrak{S}_n$ and $x\in(\mathbb{Z}/m\mathbb{Z})^n$. We recall the group structure on colored permutations in Section~\ref{sec:fmaj}. When $m=2$, this group is isomorphic to the group of signed permutations, the real reflection group of type $B_n$.
Based on extensive computational evidence, Krattenthaler conjectured in \cite{krattenthaler2005advanced} several analogues of Theorem~\ref{thm:main_maj} for colored permutations using variations on the major index given in \cite{adin2001descent,adin2001flag,reiner1993signed}. We prove all of his conjectured formulas and more in Sections~\ref{sec:fmaj},~\ref{sec:amaj}, and~\ref{sec:signed}. For each case, we construct a basis such that the relevant statistics can be read from the exponent vectors.
Theorem~\ref{thm:main_maj} and the various extensions we consider for colored or signed permutations are all specializations of group determinants. For a finite group $G$, its \emph{group determinant} is $\det(r_{gh^{-1}})_{g,h\in G}$ where $\{r_g\mid g\in G\}$ is a set of elements of a commutative ring. In his pioneering work on the representation theory of finite groups, Frobenius proved that if $\{r_g\mid g\in G\}$ is a set of indeterminates in a polynomial ring over $\mathbb{C}$, then the irreducible factors of the group determinant naturally correspond to irreducible representations of $G$; see \cite{hawkins1978creation}.
We consider examples of the group determinant of the form $\det(q^{\stat(gh^{-1})})_{g,h\in G}$ for some statistic on $G$. We are especially intrigued by examples for which this determinant is a product of binomials. This behavior was proved for the length statistic on finite Coxeter groups in \cite{varchenko1993bilinear}, extending the aforementioned result from \cite{zagier1992realizability}.
The rest of this paper is structured as follows. Preliminary results on group determinants and perfect bases are given in Sections~\ref{sec:group_det} and~\ref{sec:bases}. Theorem~\ref{thm:main_maj} is proved in Section~\ref{sec:maj}. In Section~\ref{sec:fmaj}, we recall the flag major index on colored permutations introduced by Adin and Roichman in~\cite{adin2001flag} and prove a formula for the corresponding group determinant, answering \cite[Problem 49]{krattenthaler2005advanced}. In Section~\ref{sec:amaj}, we consider another statistic on colored permutations that we call the absolute flag major index, and we generalize and prove \cite[Conjecture 48]{krattenthaler2005advanced}. Finally, in Section~\ref{sec:signed} we prove an identity from which we can derive proofs of Conjectures~46,~47, and~50 in \cite{krattenthaler2005advanced}.
\section{Group determinants}\label{sec:group_det}
Let $G$ be a finite group and let $R$ be a commutative ring with $1$. The \emph{group ring} $RG$ is the free $R$-module with a distinguished basis that we identify with the elements of $G$. Multiplication of basis elements is the same as in $G$ and is extended linearly to $RG$.
Given a complex vector space $V\cong\mathbb{C}^m$, a \emph{representation} is a group homomorphism $G\ra\GL(V)$, which we may extend to a ring homomorphism $RG\ra\End(V)$. For any element $\alpha=\sum_{g\in G}r_g g\in RG$ and any representation $\phi$, we set \[ \Delta_{\phi}(\alpha) := \det\phi(\alpha) = \det\left(\sum_{g\in G}r_g\phi(g)\right). \]
\begin{example} \leavevmode \begin{enumerate} \item If $\phi_{\triv}:G\ra\GL(\mathbb{C}^1)$ is the trivial representation, then $\Delta_{\phi_{\triv}}(\alpha)=\sum_{g\in G}r_g$. \item If $G$ acts on a finite set $X$, then the \emph{permutation representation} $\phi_X:G\ra\GL(\mathbb{C}^X)$ assigns to an element $g$ the linear extension of the map $x\mapsto g\cdot x$ for $x\in X$. So $\phi_X(g)$ is a permutation matrix, i.e.\ a $0,1$--matrix in which every row and every column has exactly one $1$. \item The \emph{regular representation} $\phi_{\reg}$ is the permutation representation induced by the action of $G$ on itself by left multiplication. The \emph{group determinant} of $G$ is \[ \Delta_{\phi_{\reg}}(\alpha) = \det\left(r_{uv^{-1}}\right)_{u,v\in G} \] where $\alpha=\sum r_g g$. \end{enumerate} \end{example} Two representations $\phi,\eta$ are \emph{equivalent} if there exists an invertible matrix $U$ such that $\eta(g)=U\phi(g)U^{-1}$ for all $g\in G$. It is clear that $\Delta_{\phi}(\alpha)=\Delta_{\eta}(\alpha)$ whenever $\phi$ and $\eta$ are equivalent representations. The direct sum $\phi\oplus\eta$ of two representations satisfies $\Delta_{\phi\oplus\eta}(\alpha) = \Delta_{\phi}(\alpha)\Delta_{\eta}(\alpha)$. As any representation is equivalent to a direct sum of irreducible representations, we can always factor the determinant $\Delta_{\phi}(\alpha)$ as a product over the irreducible direct summands of $\phi$.
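\begin{example} As a small illustration of Frobenius's factorization (a hand computation, not needed in the sequel), consider the cyclic group $C_2=\{1,g\}$ with indeterminates $r_1,r_g$. The group determinant is \[ \det\begin{pmatrix} r_1 & r_g\\ r_g & r_1 \end{pmatrix} = (r_1+r_g)(r_1-r_g), \] and the two linear factors correspond to the trivial and sign representations of $C_2$, respectively. \end{example}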
For an $m\times m$ matrix $M$, let $\theta_M(q)=\det(I-qM)$.
For permutation representations, we have the following result.
\begin{proposition} \label{prop:det_permrep}
Let $G$ act on a finite set $X$, and let $\phi=\phi_X$ be the corresponding permutation representation. Fix $g\in G$, and let $\Ocal_1,\ldots,\Ocal_N$ be the orbits of the cyclic subgroup $\langle g\rangle$. Then
\[ \theta_{\phi(g)}(q) = \prod_{k=1}^N(1-q^{|\Ocal_k|}). \] \end{proposition}
\begin{proof}
For $k\in[N]$, let $T_k$ be the $|\Ocal_k|\times|\Ocal_k|$ permutation matrix whose $(i,j)$-entry equals $1$ if $i\equiv j+1\pmod{|\Ocal_k|}$ and $0$ otherwise. Up to a simultaneous permutation of rows and columns, the matrix $\phi(g)$ is the direct sum $T_1\oplus\cdots\oplus T_N$. Hence,
\[ \theta_{\phi(g)}(q) = \det(I-q\phi(g)) = \prod_{k=1}^N\det(I-q T_k) = \prod_{k=1}^N(1-q^{|\Ocal_k|}).\qedhere \] \end{proof}
Let $o(g)$ denote the order of an element $g$. In the regular representation, every orbit of $\langle g\rangle$ has size equal to $o(g)$. We immediately deduce the following corollary of Proposition~\ref{prop:det_permrep}.
\begin{corollary}\label{cor:det_reg}
If $\phi=\phi_{\reg}$ is the regular representation of $G$, then
\[ \theta_{\phi(g)}(q) = (1-q^{o(g)})^{|G|/o(g)}. \] \end{corollary}
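For example, if $G=\mathfrak{S}_3$ and $g$ is a transposition, then $o(g)=2$ and left multiplication by $g$ splits the six group elements into three orbits of size two, so $\theta_{\phi_{\reg}(g)}(q)=(1-q^2)^3$.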
\section{Perfect bases}\label{sec:bases}
We follow the terminology of \cite{Shwartz2007MajorIA} for bases of groups.
Let $G$ be a finite group. A sequence $(g_1,\ldots,g_n)$ of elements of $G$ is a \emph{basis} if there exist positive integers $m_1,\ldots,m_n$ such that every element $g\in G$ may be uniquely expressed in the form $g = g_1^{c_1}\cdots g_n^{c_n}$ where $0\leq c_i< m_i$ for $i\in[n]$. This basis is \emph{perfect} if $m_i=o(g_i)$ for all $i$. A group admits a perfect basis if and only if it may be identified as a set with a Cartesian product of cyclic groups by the following map. \begin{align*}
\langle g_1\rangle\times\cdots\times \langle g_n\rangle &\stackrel{\sim}{\to} G\\
(g_1^{c_1},\ldots,g_n^{c_n}) &\mapsto g_1^{c_1}\cdots g_n^{c_n} \end{align*} This map is not necessarily a group isomorphism.
\begin{example}\label{ex:S3_basis}
Let $G=\mathfrak{S}_3$. Take $g_1=(123),\ g_2=(12)(3)$, written in cycle notation. Then $(g_1,g_2)$ is a perfect basis since every element of $\mathfrak{S}_3$ is uniquely expressible in the form $g_1^{c_1}g_2^{c_2}$ for $0\leq c_1<3,\ 0\leq c_2<2$.
\begin{center}
\begin{tabular}{l l}
\hspace{1cm} $g_1^0g_2^0 = (1)(2)(3)$ & \hspace{1cm} $g_1^0g_2^1 = (12)(3)$\\
\hspace{1cm} $g_1^1g_2^0 = (123)$ & \hspace{1cm} $g_1^1g_2^1 = (13)(2)$\\
\hspace{1cm} $g_1^2g_2^0 = (132)$ & \hspace{1cm} $g_1^2g_2^1 = (1)(23)$
\end{tabular}
\end{center}
\end{example}
\begin{example}\label{ex:Dn_basis}
We extend Example~\ref{ex:S3_basis} to any dihedral group of order $2n$ for $n\geq 3$. Consider the dihedral group $G$ of isometries of a regular $n$--gon whose vertices are labeled $1,2,\ldots,n$ in clockwise order. As a group of permutations of $[n]$, $G$ is generated by a rotation $g_1=(123\cdots n)$ and a reflection $g_2=(1,n-1)(2,n-2)\cdots(\lfloor\frac{n-1}{2}\rfloor,\lfloor\frac{n+2}{2}\rfloor)(n)$. Every rotation symmetry is of the form $g_1^k$ for some $0\leq k<n$, and every reflection symmetry is of the form $g_1^kg_2$ for some $0\leq k<n$. Hence, $(g_1,g_2)$ is a perfect basis. \end{example}
For the remainder of this section, we let $R=\Qbb[x_1,\ldots,x_n]$ and we consider the group ring $RG$.
\begin{lemma}\label{lem:factor_basis}
If $(g_1,\ldots,g_n)$ is a basis of $G$, then we have the identity
\[ \sum_{\substack{g=g_1^{c_1}\cdots g_n^{c_n}\\0\leq c_i<m_i}} x_1^{c_1}\cdots x_n^{c_n}\cdot g = \prod_{i=1}^n (1+x_ig_i+\cdots+x_i^{m_i-1}g_i^{m_i-1}). \]
If $(g_1,\ldots,g_n)$ is a perfect basis, then \begin{equation} \label{eqn:alphaBetaFactor} \prod_{i=1}^n (1+x_ig_i+\cdots+x_i^{m_i-1}g_i^{m_i-1}) (1-x_ig_i) = \prod_{i=1}^n(1-x_i^{m_i})\cdot 1. \end{equation} \end{lemma}
\begin{proof}
The first statement follows immediately by expanding the product on the right--hand side. For the second, each factor telescopes: \[ (1+x_ig_i+\cdots+x_i^{m_i-1}g_i^{m_i-1})(1-x_ig_i) = 1-x_i^{m_i}g_i^{m_i} = (1-x_i^{m_i})\cdot 1, \] since $g_i^{m_i}=1$. Each such product is a scalar multiple of the identity, hence central in $RG$, so the scalar factors collect as claimed. \end{proof}
\begin{theorem}\label{thm:general_basis}
Suppose $G$ has a perfect basis $(g_1,\ldots,g_n)$, and set
\[ \alpha = \sum_{\substack{g=g_1^{c_1}\cdots g_n^{c_n}\\0\leq c_i<m_i}} x_1^{c_1}\cdots x_n^{c_n}\cdot g. \]
If $V\cong\mathbb{C}^r$ is a vector space and $\phi:G\ra\GL(V)$ is a representation of $G$, then
\[ \Delta_{\phi}(\alpha) = \prod_{i=1}^{n}\frac{(1-x_i^{m_i})^r}{\theta_{\phi(g_i)}(x_i)}. \] \end{theorem}
\begin{proof}
By Lemma~\ref{lem:factor_basis}, we have
\[ \alpha\prod_{i=1}^n(1-x_ig_i) = \prod_{i=1}^n(1-x_i^{m_i})\cdot 1. \]
Hence,
\begin{align*}
\det(\phi(\alpha)) &= \frac{\det\phi\left(\prod_{i=1}^n(1-x_i^{m_i})\cdot 1 \right)}{\det\phi\left(\prod_{i=1}^n (1-x_ig_i) \right)}\\
&= \prod_{i=1}^n\frac{\det ((1-x_i^{m_i}) I_r)}{\det(I_r - x_i\phi(g_i))}\\
&= \prod_{i=1}^n\frac{(1-x_i^{m_i})^r}{\theta_{\phi(g_i)}(x_i)}.
\end{align*}
\end{proof}
\begin{corollary}\label{cor:reg_basis}
Suppose $G$ has a perfect basis $(g_1,\ldots,g_n)$, and set
\[ \alpha = \sum_{\substack{g=g_1^{c_1}\cdots g_n^{c_n}\\0\leq c_i<m_i}} q^{c_1+\cdots+c_n}\cdot g. \]
If $\phi_{\reg}$ is the regular representation of $G$, then
\[ \Delta_{\phi_{\reg}}(\alpha)=\prod_{i=1}^n(1-q^{m_i})^{|G|(1-1/o(g_i))}. \] \end{corollary}
\begin{proof}
Specializing $x_i=q$ for every $i$ in Theorem~\ref{thm:general_basis} gives
\[ \Delta_{\phi_{\reg}}(\alpha)=\prod_{i=1}^n\frac{(1-q^{m_i})^{|G|}}{\theta_{\phi_{\reg}(g_i)}(q)}. \]
But $\theta_{\phi_{\reg}(g_i)}(q)=(1-q^{m_i})^{|G|/o(g_i)}$ by Corollary~\ref{cor:det_reg}. \end{proof}
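\begin{example} For $G=\mathfrak{S}_3$ with the perfect basis $(g_1,g_2)$ of Example~\ref{ex:S3_basis}, we have $(m_1,m_2)=(3,2)$, so Corollary~\ref{cor:reg_basis} gives \[ \Delta_{\phi_{\reg}}(\alpha)=(1-q^3)^{6\cdot 2/3}(1-q^2)^{6\cdot 1/2}=(1-q^3)^4(1-q^2)^3, \] the same product that appears in Theorem~\ref{thm:main_maj} for $n=3$. \end{example}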
\begin{example}\label{ex:Dn_reps}
Let $G$ be the dihedral group in Example~\ref{ex:Dn_basis}. For $h\in G$, set $\rot(h)=h(n)$ if $h(n)\neq n$, and $\rot(h)=0$ if $h(n)=n$. Let $\refl(h)=0$ if $h$ is a rotation and $\refl(h)=1$ if $h$ is a reflection. Then for $0\leq c_1<n,\ 0\leq c_2<2$, we have $\rot(g_1^{c_1}g_2^{c_2})=c_1$ and $\refl(g_1^{c_1}g_2^{c_2})=c_2$.
Let $\alpha=\sum_{h\in G}x_1^{\rot(h)}x_2^{\refl(h)}\cdot h$, and let $\phi$ be a representation of $G$ of dimension $r$. By Theorem~\ref{thm:general_basis}, we have
\[ \Delta_{\phi}(\alpha) = \frac{(1-x_1^n)^r}{\theta_{\phi(g_1)}(x_1)}\frac{(1-x_2^2)^r}{\theta_{\phi(g_2)}(x_2)}. \]
If $\phi=\phi_{\reg}$ is the regular representation, then $r=2n,\ \theta_{\phi(g_1)}(q)=(1-q^n)^2$, and $\theta_{\phi(g_2)}(q)=(1-q^2)^n$. Hence,
\[ \Delta_{\phi_{\reg}}(\alpha)=(1-x_1^n)^{2n-2}(1-x_2^2)^n. \]
We end this example by evaluating $\Delta_{\phi}(\alpha)$ for all irreducible representations over $\Qbb$. Let $C_n = \langle g_1 \rangle$. \begin{itemize} \item For the trivial representation $\phi_{\triv}$ we have \[ \Delta_{\phi_{\triv}}(\alpha) = \frac{(1-x_1^n) (1-x_2^2)}{(1-x_1)(1-x_2)}. \] \item There is a one-dimensional representation $\phi_{\sign}(g) = (-1)^{\refl(g)}$ coming from the action on the cosets of $C_n$. We have \[ \Delta_{\phi_{\sign}}(\alpha) = \frac{(1-x_1^n)(1-x_2^2)}{(1-x_1)(1+x_2)}. \] \item For each divisor $d$ of $n$, there is an irreducible representation $\rho_d$ of $C_n$. Writing \[ \Phi_d(q) = a_0 + a_1 q + \cdots + q^{\ell} \] for the $d$th cyclotomic polynomial, we can realize $\rho_d$ on $V = \Qbb^{\ell}$ via the companion matrix \[ \rho_d(g_1) = \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 0 & -a_0 \\ 1 & 0 & 0 & \cdots & 0 & 0 & -a_1 \\ 0 & 1 & 0 & \cdots & 0 & 0 & -a_2 \\ 0 & 0 & 1 & \cdots & 0 & 0 & -a_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 1 & 0 & -a_{\ell-2} \\ 0 & 0 & 0 & \cdots & 0 & 1 & -a_{\ell-1} \\ \end{bmatrix}. \] Let $A$ be the anti-diagonal matrix with all anti-diagonal entries equal to $1$. Then $A^2$ is the identity and $A \rho_d(g_1) = \rho_d(g_1^{-1}) A$. Hence, we have an irreducible representation $\phi_d$ of the dihedral group via $\phi_d(h) = \rho_d(g_1)^{\rot(h)} A^{\refl(h)}$. In this representation, one has $\theta_{\phi_d(g_1)}(q)=q^{\ell}\Phi_d(1/q)$ and \[ \theta_{\phi_d(g_2)}(q) = \begin{cases} (1-q^2)^{\frac{\ell}{2}} & \ell \textup{ even} \\ (1-q^2)^{\frac{\ell-1}{2}} (1-q) & \ell \textup{ odd} \end{cases} \] from which $\Delta_{\phi_d}(\alpha)$ can be written down. \item Lastly, for any divisor $d$ of $n$ we can tensor the representation $\phi_d$ with the sign representation $\phi_{\sign}$ to obtain another irreducible representation. \end{itemize}
\end{example}
\section{Major index matrix}\label{sec:maj}
For $k\in[n]$, let $t_k=(k,k-1,\ldots,1)$ be a $k$-cycle in $\mathfrak{S}_n$. In \cite{adin2001flag}, Adin and Roichman gave an alternative interpretation of the major index of a permutation. We recall this statement and its proof.
\begin{lemma}[Claim 2.1, \cite{adin2001flag}]\label{lem:maj_basis}
The $(n-1)$--tuple $(t_n,t_{n-1},\ldots,t_2)$ is a perfect basis of $\mathfrak{S}_n$. Moreover, if $w=t_n^{c_n}t_{n-1}^{c_{n-1}}\cdots t_2^{c_2}$ for some $c_i$ with $0\le c_i< i$, then $\maj(w)=c_n+c_{n-1}+\cdots+c_2$. \end{lemma}
\begin{proof}
Let $w\in\mathfrak{S}_n$ be given. Since $t_n$ is the $n$--cycle $(n,n-1,\ldots,1)$, there exists a unique $c_n$ with $0\le c_n < n$ such that $t_n^{-c_n}w$ fixes $n$. By similar reasoning, there is a unique $c_{n-1}$ with $0\le c_{n-1}< n-1$ such that $t_{n-1}^{-c_{n-1}}t_n^{-c_n}w$ fixes $n-1$. Since $t_{n-1}$ fixes $n$, the element $t_{n-1}^{-c_{n-1}}t_n^{-c_n}w$ also fixes $n$. Continuing in this manner, we find unique $c_i$ with $0\le c_i< i$ such that $t_2^{-c_2}\cdots t_n^{-c_n}w$ is the identity permutation. Hence, the factorization $w=t_n^{c_n}\cdots t_2^{c_2}$ is unique.
As there are $n!=|\mathfrak{S}_n|$ choices for the exponents $(c_n,\ldots,c_2)$, we conclude that $(t_n,\ldots,t_2)$ is a basis. In fact, it is a perfect basis since the order of $t_i$ is $i$ and the exponent $c_i$ can be any value in the range $0\leq c_i<i$.
For each $k$, let $g_k=t_n^{c_n}\cdots t_k^{c_k}$. We prove that the major index of $g_k$ is $c_n+\cdots+c_k$. Taking $k=2$ gives $\maj(w)=c_n+\cdots+c_2$.
We observe that the values in $g_n$ are \emph{cyclically ordered}, i.e. there exists a unique $j\in\Zbb/n\Zbb$ such that $g_n(|j|+1)<\cdots<g_n(n)<g_n(1)<\cdots<g_n(|j|)$. Moreover, $|j|=c_n$, so the major index of $g_n$ is $c_n$.
Now let $k\geq 2$ and suppose the first $k+1$ values of $g_{k+1}$ are cyclically ordered. Multiplying $g_{k+1}$ on the right by $t_k$ rotates the first $k$ values of $g_{k+1}$. Hence, the first $k$ values of $g_k$ are cyclically ordered.
If the first $k$ values of $g_{k+1}$ are in increasing order, then by the same argument as in the base case, $\maj(g_k)=\maj(g_{k+1}t_k^{c_k})=\maj(g_{k+1})+c_k$.
Otherwise, there exists $j\in\Zbb/k\Zbb,\ j\neq 0$ such that $g_{k+1}(|j|+1)<\cdots<g_{k+1}(k+1)<g_{k+1}(1)<\cdots<g_{k+1}(|j|)$. Then $g_k$ has a descent at $|j|+c_k$ if $|j|+c_k\leq k$, or $g_k$ has descents at $|j|+c_k-k$ and at $k$ if $k+1\le |j|+c_k\le k+|j|-1$. All higher descents of $g_k$ are shared with $g_{k+1}$. Hence, $\maj(g_k)=\maj(g_{k+1})+c_k$, as desired. \end{proof}
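\begin{example} As a worked instance of this factorization, take $w=231\in\mathfrak{S}_3$ in one--line notation. Since $t_3^2$ sends $3$ to $1=w(3)$, we have $c_3=2$; indeed $t_3^{-2}w=t_3w$ is the identity permutation, so $w=t_3^2t_2^0$. The unique descent of $w$ is at position $2$, so $\maj(w)=2=c_3+c_2$, as Lemma~\ref{lem:maj_basis} predicts. \end{example}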
For the remainder of the section, we consider the element $\alpha\in \mathbb{C}(q)\mathfrak{S}_n$ where \[ \alpha = \sum_{w\in\mathfrak{S}_n} q^{\maj(w)}\cdot w. \]
Lemma~\ref{lem:maj_basis} together with Theorem~\ref{thm:general_basis} immediately implies the following.
\begin{corollary}\label{cor:maj_general}
Let $n\geq 1$ be given. If $V\cong\mathbb{C}^r$ and $\phi:\mathfrak{S}_n\ra\GL(V)$ is a representation, then
\[ \Delta_{\phi}(\alpha) = \prod_{k=2}^n\frac{(1-q^k)^r}{\theta_{\phi(t_k)}(q)}. \] \end{corollary}
If $\phi_{\reg}$ is the regular representation of $\mathfrak{S}_n$, then \[ \phi_{\reg}\left(\sum q^{\maj(w)}\cdot w\right) = \left(q^{\maj(uv^{-1})}\right)_{u,v\in\mathfrak{S}_n}. \] Theorem~\ref{thm:main_maj} now follows immediately from Corollary~\ref{cor:reg_basis}.
\begin{example} \label{eg:defining} The symmetric group $\mathfrak{S}_n$ naturally acts on $[n]$. The corresponding permutation representation $\phi_{\DEF}$ is called the \emph{defining representation}. Explicitly, the matrix representing $\alpha$ is \[ \phi_{\DEF}(\alpha) = \left(\sum_{w(i)=j}q^{\maj(w)}\right)_{i,j\in[n]}. \] We verify that \[ \det\left(\sum_{\substack{w\in\mathfrak{S}_n\\w(i)=j}}q^{\maj(w)}\right)_{i,j\in[n]} = (1-q)^{\binom{n}{2}}([n]!_q)^{n-1}. \] The left-hand side is $\Delta_{\phi_{\DEF}}(\alpha)$, so \[ \det\left(\sum_{\substack{w\in\mathfrak{S}_n\\w(i)=j}}q^{\maj(w)}\right)_{i,j\in[n]} = \prod_{k=2}^{n}\frac{(1-q^k)^n}{\theta_{\phi_{\DEF}(t_k)}(q)}. \] The element $t_k$ has one orbit of size $k$ and $n-k$ orbits of size $1$. Hence, \[ \theta_{\phi_{\DEF}(t_k)}(q) = (1-q^k)(1-q)^{n-k}. \] Hence, the determinant is equal to \begin{align*}
\prod_{k=2}^n\frac{(1-q^k)^n}{\theta_{\phi_{\DEF}(t_k)}(q)}
&= \prod_{k=2}^{n}\frac{(1-q^k)^n}{(1-q^k)(1-q)^{n-k}}\\
&= \prod_{k=2}^{n}([k]_q)^{n-1}(1-q)^{k-1} = (1-q)^{\binom{n}{2}}([n]!_q)^{n-1}. \end{align*} \end{example}
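\begin{example} As a quick check of Example~\ref{eg:defining} for $n=2$: the identity permutation has major index $0$ and the transposition has major index $1$, so \[ \det\begin{pmatrix} 1 & q \\ q & 1 \end{pmatrix} = 1-q^2 = (1-q)[2]_q, \] which is $(1-q)^{\binom{2}{2}}([2]!_q)^{1}$, as claimed. \end{example}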
\begin{example} One can consider the action of $\mathfrak{S}_n$ on tuples or subsets. For example, here we calculate $\Delta_{\phi}(\alpha)$ where $\phi$ is the permutation representation determined by the action of $\mathfrak{S}_n$ on the set $\binom{n}{2}$ of $2$--element subsets of $[n]$. By Proposition~\ref{prop:det_permrep} and Theorem~\ref{thm:general_basis}, it suffices to determine the orbit decomposition of the cycles $t_k$ acting on $\binom{n}{2}$.
For the $t_k$--orbit of $(i,j) \in \binom{n}{2}$ with $i < j$ there are three possibilities: \begin{itemize} \item $k < i$ in which case $t_k$ fixes $(i,j)$; \item $i \le k < j$ in which case $j$ is fixed by $t_k$ and the orbit has size $k$; \item $j \le k$ in which case the orbit is the same as the orbit of $(i,j) \in \binom{k}{2}$. \end{itemize} It therefore suffices to determine the orbit structure of the action of $t_k$ on $\binom{k}{2}$. When $k$ is odd all orbits have size $k$. When $k$ is even the possibility $2(j-i) = k$ gives the unique orbit of size $\frac{k}{2}$. We therefore have, for the three possibilities above: \begin{itemize} \item $\binom{n-k}{2}$ orbits of size 1; \item $n-k$ orbits of size $k$; \item $\frac{k-1}{2}$ orbits of size $k$ when $k$ is odd \textsc{or} $\frac{k-2}{2}$ orbits of size $k$ and one orbit of size $\frac{k}{2}$ when $k$ is even; \end{itemize} and can calculate that \[ \theta_{\phi(t_k)}(q) = \begin{cases} (1-q)^{\binom{n-k}{2}} (1-q^k)^{n-k} (1-q^k)^{\frac{k-1}{2}} & k \textup{ odd} \\ (1-q)^{\binom{n-k}{2}} (1-q^k)^{n-k} (1-q^k)^{\frac{k-2}{2}} (1-q^{\frac{k}{2}}) & k \textup{ even} \end{cases} \] with the determinant formula following from Theorem~\ref{thm:general_basis}. \end{example}
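For instance, when $n=k=3$, the cycle $t_3$ sends $\{1,2\}\mapsto\{1,3\}\mapsto\{2,3\}\mapsto\{1,2\}$, so the three pairs form a single orbit of size $3$; this matches the count above ($\binom{0}{2}=0$ fixed pairs, no pairs with $j>k$, and $\frac{k-1}{2}=1$ orbit of size $k$), giving $\theta_{\phi(t_3)}(q)=1-q^3$ in this case.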
We are interested in the extent to which $\Delta_{\phi_\lambda}(\alpha) = \det \phi_\lambda(\alpha)$ can be calculated where $\lambda$ is any partition of $n \in \mathbb{N}$ and $\phi_\lambda$ is the corresponding irreducible representation of $\mathfrak{S}_n$. The factorization \[ \prod_{i=2}^n (1+q t_i + \cdots + q^{i-1}t_i^{i-1}) (1-q t_i) = \prod_{i=2}^n(1 - q^i)\cdot 1 \]
from $x_i = q$ and $g_i = t_i$ in \eqref{eqn:alphaBetaFactor}, together with the fact that $t_{n-1},\dots,t_2$ is a perfect basis of $\mathfrak{S}_{n-1}$, suggests an inductive approach. Indeed, if $\lambda$ is a partition of $n$, the restriction $\phi_\lambda | \mathfrak{S}_{n-1}$ of $\phi_\lambda$ to $\mathfrak{S}_{n-1}$ is known to be a direct sum \[
\phi_\lambda | \mathfrak{S}_{n-1} = \bigoplus_{\eta \prec \lambda} \phi_\eta \] of those $\phi_\eta$ where $\eta$ immediately precedes $\lambda$ in the Young lattice. Thus, for $2 \le i \le n-1$ we have \begin{equation} \label{eqn:irredInduction} \det \phi_\lambda(1-qt_i) = \prod_{\eta \prec \lambda} \det \phi_\eta(1-qt_i) \end{equation} and it remains to calculate $\theta_{\phi_\lambda(t_n)}(q) = \det \phi_\lambda(1 - qt_n)$. For this calculation it suffices to determine the eigenvalues of $\phi_\lambda(t_n)$. These can be found using work of Stembridge~\cite[Theorem~3.3]{stembridge1989eigenvalues} which we recall here.
Fix a partition $\lambda$ of $n$ and $g \in \mathfrak{S}_n$ of order $m$. The eigenvalues of $\phi_\lambda(g)$ are of the form $\omega^{e_1},\dots,\omega^{e_r}$ where $\omega = e^{2 \pi i / m}$. The exponents $e_1,\dots,e_r$ are called the cyclic exponents of $g$ and are defined modulo $m$.
A standard tableau over $\lambda$ is a filling of the Young diagram of $\lambda$ by $1,\dots,n$, each used exactly once, with rows and columns strictly increasing. One calls $1 \le k < n$ a descent of a standard tableau if $k+1$ appears in a row strictly below that of $k$.
Let $\mu = (\mu_1,\mu_2,\dots,\mu_\ell)$ be the cycle type of our element $g \in \mathfrak{S}_n$. Form \[ b_\mu = \left( \frac{m}{\mu_1}, \frac{2m}{\mu_1}, \dots, m, \frac{m}{\mu_2},\frac{2m}{\mu_2},\dots, m,\dots \right) \] which is a tuple of length $\mu_1 + \cdots + \mu_\ell$. For example \[ b_{(4,4,3,2)} = (3,6,9,12,3,6,9,12,4,8,12,6,12) \] and if $g$ is an $n$-cycle we have $b_{(n)} = (1,2,\dots,n)$. For any standard tableau $T$ over $\lambda$ its $\mu$ index is \[ \ind_\mu(T) = \sum_{k \in D(T)} b_\mu(k) \bmod m \] where $D(T)$ is the set of descents of $T$. The content of \cite[Theorem~3.3]{stembridge1989eigenvalues} is that \[
q^{e_1} + \cdots + q^{e_r} \equiv \sum_{T} q^{\ind_\mu(T)} \pmod{1-q^m} \] where the sum ranges over all standard tableaux $T$ over $\lambda$.
\begin{example}[The standard representation] The standard representation corresponds to the partition $\lambda = [n-1,1]$. We will calculate the eigenvalues of $\phi_\lambda(t_n)$. The standard tableaux over $\lambda$ are indexed by the entry $2,\dots,n$ in the second row, and each has a single descent, at $1,\dots,n-1$ respectively. We conclude that $\phi_\lambda(t_n)$ has eigenvalues $\omega,\omega^2,\dots,\omega^{n-1}$ and that its characteristic polynomial is $[n]_q$. Then \[ \theta_{\phi_\lambda(t_n)}(q) = q^{n-1} \det \phi_\lambda(\tfrac{1}{q} - t_n) = [n]_q \] as well. For all $2 \le i \le n-1$ we have \begin{equation} \label{eqn:standardInduction} \theta_{\phi_\lambda(t_i)}(q) = (1-q)^{n-i} \theta_{\phi_{[i-1,1]}(t_i)}(q) = (1-q)^{n-i} [i]_q \end{equation} from repeated application of \eqref{eqn:irredInduction}. We conclude that \[ \Delta_\lambda(\alpha) = \dfrac{\displaystyle\prod_{i=2}^n (1-q^i)^{n-1}}{\displaystyle\prod_{i=2}^n (1-q)^{n-i} [i]_q} = \frac{1}{[n]_q!} \dfrac{\displaystyle \prod_{i=2}^n (1-q^i)^{n-1}}{\displaystyle \prod_{i=2}^n(1-q)^{n-1}} \prod_{i=2}^n (1-q)^{i-1} = ([n]_q!)^{n-2} (1-q)^{\binom{n}{2}}, \] which can also be obtained by dividing the result of Example~\ref{eg:defining} by $[n]_q!$. \end{example}
\begin{example}[The {$[2,2]$} representation] Fix $\lambda = [2,2]$. The two standard tableaux over $\lambda$ are \[ T = \young(12,34) \qquad S = \young(13,24) \] with descent sets $\{2\}$ and $\{1,3\}$ respectively. The element $\phi_\lambda(t_4)$ has eigenvalues $-1$ and $1$, so its characteristic polynomial is $q^2 - 1$. Thus \[ \theta_{\phi_\lambda(t_4)}(q) = q^2 \det \phi_\lambda(\tfrac{1}{q} - t_4) = 1-q^2 \] and \begin{align*} \theta_{\phi_\lambda(t_3)}(q) & = \theta_{\phi_{[2,1]}(t_3)}(q) = [3]_q \\ \theta_{\phi_\lambda(t_2)}(q) & = \theta_{\phi_{[2,1]}(t_2)}(q) = (1-q) [2]_q \end{align*} from the previous example. Finally, \[ \Delta_\lambda(\alpha) = \dfrac{(1-q^2)^2 (1-q^3)^2 (1-q^4)^2}{(1-q^2) \cdot (1+q+q^2) \cdot (1-q) (1 + q)} = (1-q)(1-q^3)(1-q^4)^2. \] \end{example}
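As a consistency check via Stembridge's formula: here $\mu=(4)$, $m=4$, and $b_{(4)}=(1,2,3,4)$, so $\ind_{(4)}(T)=2$ and $\ind_{(4)}(S)=1+3\equiv 0\pmod 4$, recovering the eigenvalues $\omega^2=-1$ and $\omega^0=1$ of $\phi_{[2,2]}(t_4)$.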
\section{Flag major index matrix}\label{sec:fmaj}
Let $H,N$ be groups such that $H$ acts on $N$ on the right. The \emph{semidirect product} $H\ltimes N$ is the group whose elements are $(g,x)$ for $g\in H,\ x\in N$, where \[ (g,x)(h,y) = (gh,(x\cdot h)y). \]
The symmetric group $\mathfrak{S}_n$ acts on $(\Zbb/m\Zbb)^n$ by permuting coordinates. That is, if $w\in\mathfrak{S}_n$ and $x\in(\Zbb/m\Zbb)^n$, then $x\cdot w\in(\Zbb/m\Zbb)^n$ where $(x\cdot w)_i = x_{w(i)}$. The group of colored permutations is the semidirect product $\mathfrak{S}_n^m=\mathfrak{S}_n\ltimes (\Zbb/m\Zbb)^n$. We express a colored permutation $(w,x)$ by writing $w$ in one--line notation with $x_k$ bars above $w(k)$. For example, the colored permutation $(1342,\ (1,0,2,1))$ is written $\bar{1}3\bar{\bar{4}}\bar{2}$.
Let $b=(1,0,0,\ldots,0)\in (\Zbb/m\Zbb)^n$. As in Section~\ref{sec:maj}, we set $t_k=(k,k-1,\ldots,1)\in\mathfrak{S}_n$. In particular, we let $t_1$ be the identity permutation.
For $k\in[n]$, let $\tilde{t}_k=(t_k,b)\in\mathfrak{S}_n^m$. Then the order of $\tilde{t}_k$ is $mk$. For example, if $m=n=k=3$, then \[ \langle \tilde{t}_k \rangle = \{ 123,\ \bar{3}12,\ \bar{2}\bar{3}1,\ \bar{1}\bar{2}\bar{3},\ \bar{\bar{3}}\bar{1}\bar{2},\ \bar{\bar{2}}\bar{\bar{3}}\bar{1},\ \bar{\bar{1}}\bar{\bar{2}}\bar{\bar{3}},\ 3\bar{\bar{1}}\bar{\bar{2}},\ 23\bar{\bar{1}} \}. \]
Colored letters are totally ordered as \[ n>(n-1)>\cdots>1>\bar{n}>\cdots>\bar{1}>\bar{\bar{n}}>\cdots \] giving rise to a major index for colored permutations. For example, the colored permutation $\bar{1}3\bar{\bar{4}}\bar{2}$ only has a descent at $2$ since $3>\bar{\bar{4}}$ but $\bar{1}<3$ and $\bar{\bar{4}}<\bar{2}$. So, the major index is $\maj(\bar{1}3\bar{\bar{4}}\bar{2})=2$.
The \emph{flag major index} of a colored permutation $g\in\mathfrak{S}_n^m$ is $\fmaj(g)=m\maj(g)+\col(g)$. For example, if $m=3$, then $\fmaj(\bar{1}3\bar{\bar{4}}\bar{2})=3\cdot 2 + 4 = 10$. This statistic was introduced by Adin and Roichman in \cite{adin2001flag} to give a combinatorial formula for the Hilbert series of a certain ring of invariants.
The proof of the following lemma is similar to the symmetric group case, and will be omitted. It can be obtained from \cite[Proposition 2.1]{Shwartz2007MajorIA} and \cite[Theorem 3.1]{adin2001flag}.
\begin{lemma}\label{lem:fmaj_basis}
The $n$--tuple $(\tilde{t}_n,\ldots,\tilde{t}_1)$ is a perfect basis of $\mathfrak{S}_n^m$. Moreover, if $g=\tilde{t}_n^{c_n}\cdots\tilde{t}_1^{c_1}$ for some $0\le c_k < mk$, then $\fmaj(g)=c_n+\cdots+c_1$. \end{lemma}
\begin{example}
Consider the colored permutation $g=\bar{1}3\bar{\bar{4}}\bar{2}$. The only power of $\tilde{t}_4$ with $\bar{2}$ in the last position is $(\tilde{t}_4)^6=\bar{\bar{3}}\bar{\bar{4}}\bar{1}\bar{2}$. This permutation has no descents, so its flag major index is
\[ \fmaj(\tilde{t}_4^6)=3\maj(\tilde{t}_4^6)+\col(\tilde{t}_4^6)=0+6=6. \]
Next, we rotate the first three entries once to put $\bar{\bar{4}}$ into the third position, i.e. $\tilde{t}_4^6\tilde{t}_3=\bar{\bar{1}}\bar{\bar{3}}\bar{\bar{4}}\bar{2}$. There are still no descents, and its flag major index is $7$.
Rotating the first two entries twice will put $3$ into the second position, i.e. $\tilde{t}_4^6\tilde{t}_3\tilde{t}_2^2=13\bar{\bar{4}}\bar{2}$. The colors are removed from the first two values, but a descent at $2$ is created, so
\[ \fmaj(\tilde{t}_4^6\tilde{t}_3\tilde{t}_2^2)=3\cdot 2+3=9. \]
Finally, we change the color of the first entry to find $g=\tilde{t}_4^6\tilde{t}_3\tilde{t}_2^2\tilde{t}_1$ and $\fmaj(g)=10$ is the sum of the exponents of this factorization. \end{example}
\begin{theorem}\label{thm:fmaj}
Let $n,m\geq 1$.
\[ \det\left(q^{\fmaj(gh^{-1})}\right)_{g,h\in\mathfrak{S}_n^m} = \prod_{k=1}^n(1-q^{mk})^{n!m^n(1-1/(mk))} \] \end{theorem}
\begin{proof}
Set $\alpha=\sum q^{\fmaj(g)}\cdot g$. If $\phi_{\reg}$ is the regular representation of $\mathfrak{S}_n^m$, then \[ \phi_{\reg}(\alpha) = \left(q^{\fmaj(gh^{-1})}\right)_{g,h\in\mathfrak{S}_n^m}. \]
By Lemma~\ref{lem:fmaj_basis} and Corollary~\ref{cor:reg_basis}, \begin{align*}
\Delta_{\phi_{\reg}}(\alpha) &= \prod_{k=1}^n(1-q^{o(\tilde{t}_k)})^{|\mathfrak{S}_n^m|(1-1/o(\tilde{t}_k))}\\
&= \prod_{k=1}^n(1-q^{mk})^{n!m^n(1-1/(mk))}\qedhere \end{align*}
\end{proof}
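Note that when $m=1$ we have $\fmaj=\maj$, the $k=1$ factor in Theorem~\ref{thm:fmaj} has exponent $n!\,(1-1/1)=0$, and the remaining exponents become $n!(k-1)/k$, so the formula recovers Theorem~\ref{thm:main_maj}.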
We identify $\mathfrak{S}_n$ with the subgroup of colored permutations $\{(w,\mathbf{0})\in\mathfrak{S}_n^m \mid w\in\mathfrak{S}_n\}$. Observe that for $g\in\mathfrak{S}_n^m$, we have $g\in\mathfrak{S}_n$ if and only if $\col(g)=0$. If $h$ is any colored permutation, there is a unique way to rearrange the colored values of $h$ so that no descents remain; that is, there is a unique $w\in\mathfrak{S}_n$ such that $\maj(hw)=0$. Hence, the set $T=\{h\in\mathfrak{S}_n^m \mid \maj(h)=0\}$ is a left transversal to $\mathfrak{S}_n$ in $\mathfrak{S}_n^m$.
\begin{lemma}\label{lem:fmaj_split}
Let $T$ be the transversal to $\mathfrak{S}_n$ in $\mathfrak{S}_n^m$ defined above. Then
\[ \sum p^{\maj(g)}q^{\col(g)}\cdot g = \left(\sum_{h\in T} q^{\col(h)}\cdot h\right)\left(\sum_{w\in\mathfrak{S}_n} p^{\maj(w)}\cdot w\right). \] \end{lemma}
\begin{proof}
We first expand the right--hand side of the equation. Then
\begin{align*}
\left(\sum_{h\in T} q^{\col(h)}\cdot h\right)\left(\sum_{w\in\mathfrak{S}_n} p^{\maj(w)}\cdot w\right) &= \sum_{g\in\mathfrak{S}_n^m} p^{\maj(w)}q^{\col(h)}\cdot g,
\end{align*}
where in the latter sum, $g=hw,\ h\in T$, and $w\in\mathfrak{S}_n$. Fix $g\in\mathfrak{S}_n^m$, and decompose $g=hw$ accordingly. Since multiplication by $w$ on the right rearranges colors without changing their values, it is clear that $\col(g)=\col(h)$. On the other hand, since the colored values of $h$ are in increasing order, it follows that $w$ and $g$ have the same descents. Hence, $\maj(g)=\maj(w)$. We conclude that $p^{\maj(w)}q^{\col(h)}=p^{\maj(g)}q^{\col(g)}$, as desired. \end{proof}
Let $H$ be a subgroup of $G$. For $g\in G$, the subgroup $H$ acts on the right as the regular representation on the vector space $\Qbb[gH]$. Hence, the restriction of the regular representation of $G$ is isomorphic to a direct sum of $[G:H]$ copies of the regular representation of $H$.
\begin{theorem}
For $m,n\geq 1$,
\[ \det\left(p^{\maj(gh^{-1})}q^{\col(gh^{-1})}\right)_{g,h\in\mathfrak{S}_n^m} = \prod_{k=2}^n(1-p^k)^{n!m^n(k-1)/k}\prod_{k=1}^n(1-q^{mk})^{n!m^{n-1}(m-1)/k}. \] \end{theorem}
\begin{proof}
Let $\alpha=\sum q^{\fmaj(g)}\cdot g$ as in the proof of Theorem~\ref{thm:fmaj}. Let $\beta=\sum p^{\maj(g)}q^{\col(g)}\cdot g$ in $\Qbb(p,q)\mathfrak{S}_n^m$. Then
\[ \phi_{\reg}(\beta) = \left(p^{\maj(gh^{-1})}q^{\col(gh^{-1})}\right)_{g,h\in\mathfrak{S}_n^m}, \]
so we seek to prove that the right--hand side of the theorem statement is equal to $\Delta_{\phi_{\reg}}(\beta)$. By Lemma~\ref{lem:fmaj_split}, we have $\beta = \left(\sum_{h\in T} q^{\col(h)}\cdot h\right)\left(\sum_{w\in\mathfrak{S}_n} p^{\maj(w)}\cdot w\right)$. Since the determinant is multiplicative, there exist polynomials $A(p),B(q)$ such that $\Delta_{\phi_{\reg}}(\beta)=A(p)B(q)$. Since $\alpha=\beta\mid_{p=q^m}$, we have $\Delta_{\phi_{\reg}}(\alpha)=A(q^m)B(q)$, so
\[ A(q^m)B(q) = \prod_{k=1}^n(1-q^{mk})^{n!m^n(1-1/(mk))}. \]
Since $\mathfrak{S}_n$ is a subgroup of $\mathfrak{S}_n^m$ of index $m^n$, the restriction of the regular representation of $\mathfrak{S}_n^m$ to $\mathfrak{S}_n$ is isomorphic to a direct sum of $m^n$ copies of the regular representation of $\mathfrak{S}_n$. Combined with Theorem~\ref{thm:main_maj}, we have
\[ A(p) = \Delta_{\phi_{\reg}}\left(\sum_{w\in\mathfrak{S}_n} p^{\maj(w)}\cdot w\right) = \prod_{k=2}^n (1-p^k)^{n!m^n (k-1)/k}. \]
Therefore,
\begin{align*}
B(q) &= \frac{\Delta_{\phi_{\reg}}(\beta)}{A(q^m)}\\
&= \frac{\prod_{k=1}^n(1-q^{mk})^{n!m^n(1-1/(mk))}}{\prod_{k=2}^n (1-q^{mk})^{n!m^n(k-1)/k}}\\
&= \prod_{k=1}^n(1-q^{mk})^{n!m^{n-1}(m-1)/k}.
\end{align*}
The theorem now follows by multiplying the formulas for $A(p)$ and $B(q)$. \end{proof}
\section{Absolute flag major index matrix}\label{sec:amaj}
In contrast with Section~\ref{sec:fmaj}, here we consider a simpler statistic that takes the descents of $(w,x)$ to be those of $w$. The \emph{absolute major index} of $(w,x)$ is $\amaj(w,x)=\maj(w)$. The \emph{absolute flag major index} of $(w,x)$ is $\amaj(w,x)+\col(w,x)$. We prove the following identity. Krattenthaler conjectured the $m=2$ case in \cite[Conjecture 48]{krattenthaler2005advanced}.
\begin{theorem}\label{thm:amaj}
For all $m,n\geq 1$, we have
\[ \det\left(p^{\amaj(gh^{-1})}q^{\col(gh^{-1})}\right)_{g,h\in\mathfrak{S}_n^m} = (1-q^m)^{n!m^{n-1}(m-1)n}\prod_{k=2}^n(1-p^k)^{n!m^n(k-1)/k}. \] \end{theorem}
To prove Theorem~\ref{thm:amaj}, we produce a different perfect basis of $\mathfrak{S}_n^m$ than the one considered in Section~\ref{sec:fmaj}. The construction of this perfect basis can be formulated more generally as follows.
Let $H,N$ be groups such that $H$ acts on $N$ on the right. Consider the semidirect product $G=H\ltimes N$. We identify $H$ and $N$ with the subgroups $\{(h,1)\in G\mid h\in H\}$ and $\{(1,x)\in G\mid x\in N\}$, respectively. If $(h_1,\ldots,h_k)$ is a perfect basis of $H$ and $(x_1,\ldots,x_{\ell})$ is a perfect basis of $N$, then $(h_1,\ldots,h_k,x_1,\ldots,x_{\ell})$ is a perfect basis of $G$.
For $\mathfrak{S}_n^m$ we combine our perfect basis for $\mathfrak{S}_n$ with one for $(\Zbb/m\Zbb)^n$. For each $i\in[n]$, let $y^{(i)}\in(\Zbb/m\Zbb)^n$ where \[ (y^{(i)})_j=\begin{cases}1\ &\mbox{if }i=j\\0\ &\mbox{else}\end{cases}. \]
It is clear that $(y^{(1)},\ldots,y^{(n)})$ is a perfect basis of $(\Zbb/m\Zbb)^n$, since every $x\in(\Zbb/m\Zbb)^n$ is uniquely expressible as $x=\sum_i d_i\,y^{(i)}$ with $0\le d_i<m$. Hence, $(t_n,\ldots,t_2,y^{(1)},\ldots,y^{(n)})$ is a perfect basis of $\mathfrak{S}_n^m$. Moreover, if $g=t_n^{c_n}\cdots t_2^{c_2}(y^{(1)})^{d_1}\cdots(y^{(n)})^{d_n}$ is the factorization of $g$, then $\amaj(g)=c_n+\cdots+c_2$ and $\col(g)=d_1+\cdots+d_n$.
\begin{proof}[Proof of Theorem~\ref{thm:amaj}] Let $\phi_{\reg}$ be the regular representation of $\mathfrak{S}_n^m$. The restriction of $\phi_{\reg}$ to $\mathfrak{S}_n$ is isomorphic to a direct sum of $[\mathfrak{S}_n^m:\mathfrak{S}_n]=m^n$ copies of the regular representation of $\mathfrak{S}_n$. The restriction to $(\Zbb/m\Zbb)^n$ is isomorphic to a direct sum of $[\mathfrak{S}_n^m:(\Zbb/m\Zbb)^n]=n!$ copies of the regular representation of $(\Zbb/m\Zbb)^n$. We deduce the following sequence of identities.
\begin{align*}
\det\left(p^{\amaj(gh^{-1})}q^{\col(gh^{-1})}\right)_{g,h\in\mathfrak{S}_n^m} &= \Delta_{\phi_{\reg}}\left(\sum_{g\in\mathfrak{S}_n^m} p^{\amaj(g)}q^{\col(g)}\cdot g\right)\\
&= \Delta_{\phi_{\reg}}\left(\sum_{w\in\mathfrak{S}_n}p^{\maj(w)}\cdot w\right)\Delta_{\phi_{\reg}}\left(\sum_{x\in(\Zbb/m\Zbb)^n} q^{\col(x)}\cdot x\right)\\
&= \prod_{k=2}^n(1-p^k)^{n!m^n (k-1)/k}\prod_{i=1}^n(1-q^m)^{n!m^{n-1}(m-1)}\qedhere \end{align*} \end{proof}
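Theorem~\ref{thm:amaj} can be spot-checked by brute force for small parameters. The following sketch (a verification aid, not part of the paper; it assumes the wreath-product law $(u,x)(v,y)=(uv,\,x+u\cdot y)$ with $(u\cdot y)_i=y_{u^{-1}(i)}$) builds the $8\times 8$ group matrix for $n=m=2$ with sympy and compares its determinant against the predicted value $(1-q^2)^8(1-p^2)^4$.

```python
# Brute-force check of Theorem thm:amaj for n = m = 2.  For these values the
# theorem predicts det = (1 - q^2)^8 (1 - p^2)^4.
from itertools import product
import sympy as sp

p, q = sp.symbols('p q')
n, m = 2, 2
perms = [(0, 1), (1, 0)]            # S_2 in one-line notation, 0-indexed

def pinv(w):
    iw = [0] * n
    for i, wi in enumerate(w):
        iw[wi] = i
    return tuple(iw)

def mult(g, h):
    # Assumed semidirect-product law: (u, x)(v, y) = (uv, x + u.y) mod m,
    # where (u.y)_i = y_{u^{-1}(i)}.
    (u, x), (v, y) = g, h
    ui = pinv(u)
    return (tuple(u[v[i]] for i in range(n)),
            tuple((x[i] + y[ui[i]]) % m for i in range(n)))

group = [(w, x) for w in perms for x in product(range(m), repeat=n)]
ident = (tuple(range(n)), (0,) * n)
inverse = {g: next(h for h in group if mult(g, h) == ident) for g in group}

def weight(g):
    # amaj(w, x) = maj(w); col(w, x) = sum of the representatives of x.
    w, x = g
    maj = sum(i + 1 for i in range(n - 1) if w[i] > w[i + 1])
    return p**maj * q**sum(x)

M = sp.Matrix([[weight(mult(g, inverse[h])) for h in group] for g in group])
lhs = sp.expand(M.det())
rhs = sp.expand((1 - q**2)**8 * (1 - p**2)**4)
```

The same script adapts to other small $(m,n)$ by changing `perms` and `m`, at the cost of a rapidly growing symbolic determinant.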
\section{Signed permutations}\label{sec:signed}
A \emph{signed permutation} is a pair $(\varepsilon,w)$ where $w\in\mathfrak{S}_n$ and $\varepsilon\in\{-1,1\}^n$. We refer to $\varepsilon$ as the sign vector of the signed permutation $(\varepsilon,w)$. The symmetric group acts on the set of sign vectors on the left such that for $w\in\mathfrak{S}_n,\ \varepsilon\in\{-1,1\}^n$, $(w\cdot\varepsilon)_i = \varepsilon_{w^{-1}(i)}$ for all $i$. Let $B_n$ be the group of signed permutations, i.e. the semidirect product $\{1,-1\}^n\rtimes\mathfrak{S}_n$ where $(\varepsilon,u)(\varepsilon^{\pr},v)=(\varepsilon(u\cdot\varepsilon^{\pr}),uv)$. This is also known in the literature as the hyperoctahedral group since it is isomorphic to the group of symmetries of a hyperoctahedron.
We may write a signed permutation in one--line notation with a bar above a value $i$ if $\varepsilon_i=-1$. For example, the signed permutation \[ \left((1, -1, -1, 1),\ \binom{1\ 2\ 3\ 4}{2\ 1\ 4\ 3}\right) \] would be written as $\bar{2}14\bar{3}$. We refer to elements of $\{1,2,3,\ldots,\bar{1},\bar{2},\bar{3},\ldots\}$ as signed letters.
We consider two total orderings on signed letters. The first ordering is the natural ordering on integers, \[ \cdots <_A \bar{n} <_A \cdots <_A \bar{1} <_A 0 <_A 1 <_A \cdots <_A n <_A \cdots. \] The second ordering is \[ 0 <_B 1 <_B \cdots <_B n <_B \cdots <_B \bar{n} <_B \cdots <_B \bar{2} <_B \bar{1}. \]
For $i\in[n-1]$, we say $i$ is an \emph{A--descent} of a signed permutation $(\varepsilon,w)=w_1\cdots w_n$ if $w_i >_A w_{i+1}$. Furthermore, $0$ is an A--descent if $w_1$ is negative. Similarly, $i$ is a \emph{B--descent} of $(\varepsilon,w)=w_1\cdots w_n$ if $w_i >_B w_{i+1}$. Furthermore, $n$ is a B--descent if $w_n$ is negative. Let $\maj_A(\varepsilon,w)$ (respectively, $\maj_B(\varepsilon,w)$) be the sum of the A--descents (respectively, B--descents) of $(\varepsilon,w)$.
The \emph{negative set} is $\Neg(\varepsilon,w)=\{i \mid \varepsilon_i=-1\}$. Let $\nneg(\varepsilon,w)=|\Neg(\varepsilon,w)|$, and let $\sneg(\varepsilon,w)$ be the sum of elements in $\Neg(\varepsilon,w)$.
For example, $\{0,3\}$ is the set of A--descents of $\bar{2}14\bar{3}$, so $\maj_A(\bar{2}14\bar{3})=3$. The set of B--descents of $\bar{2}14\bar{3}$ is $\{1,4\}$, so $\maj_B(\bar{2}14\bar{3})=5$. The negative set is $\Neg(\bar{2}14\bar{3})=\{2,3\}$, so $\nneg(\bar{2}14\bar{3})=2$ and $\sneg(\bar{2}14\bar{3})=5$.
The statistic $\maj_B$ was introduced by Reiner in \cite{reiner1993signed}. The statistics $\maj_A$ and $\sneg$ were used by Adin, Brenti, and Roichman in \cite{adin2001descent} to prove a Carlitz-type formula for a joint Euler--Mahonian distribution in Type $B_n$.
The statistics $\maj_A,\ \maj_B$, and $\nneg$ are related as follows.
\begin{lemma}\label{lem:majB}
For any signed permutation $(\varepsilon,w)$,
\[ \maj_B(\varepsilon,w) = \maj_A(\varepsilon,w) + \nneg(\varepsilon,w). \] \end{lemma}
\begin{proof}
Let $(\varepsilon,w)=w_1\cdots w_n$ in one--line notation. Set $w_0=0$ and $w_{n+1}=n+1$. Then for $i\in\{0,1,\ldots,n\}$, $i$ is an A--descent if $w_i >_A w_{i+1}$ and $i$ is a B--descent if $w_i >_B w_{i+1}$. Let $X$ be the set of A--descents and $Y$ be the set of B--descents of $(\varepsilon,w)$. There is a bijection $\phi:X\ra Y$ where $\phi(i)=i$ if $w_i$ and $w_{i+1}$ have the same sign, and $\phi(i)=\min\{j\mid j>i,\ w_{j+1}>0\}$ if $w_i$ and $w_{i+1}$ have different signs. We observe the identity
\[ \sum_{i\in X}\phi(i)-i = \nneg(\varepsilon,w), \]
from which the lemma follows. \end{proof}
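Lemma~\ref{lem:majB} is also easy to confirm exhaustively for small $n$. The following sketch (illustrative, not part of the paper) encodes a signed letter $\bar{k}$ as the integer $-k$ and checks the identity for all signed permutations with $n\leq 4$.

```python
# Exhaustive check of Lemma majB: maj_B = maj_A + nneg for all signed
# permutations with n <= 4.  key_B realizes the order
# 1 <_B ... <_B n <_B -n <_B ... <_B -1.
from itertools import permutations, product

def stats(letters):
    n = len(letters)
    key_B = lambda v: v if v > 0 else 2 * n + 1 + v
    # The A-descent at position 0 (negative first letter) contributes 0.
    maj_A = sum(i + 1 for i in range(n - 1) if letters[i] > letters[i + 1])
    maj_B = sum(i + 1 for i in range(n - 1)
                if key_B(letters[i]) > key_B(letters[i + 1]))
    if letters[-1] < 0:              # n is a B-descent when w_n is negative
        maj_B += n
    nneg = sum(1 for v in letters if v < 0)
    return maj_A, maj_B, nneg

ok = all(mB == mA + k
         for n in range(1, 5)
         for w in permutations(range(1, n + 1))
         for signs in product((1, -1), repeat=n)
         for mA, mB, k in [stats([s * v for s, v in zip(signs, w)])])
```

Running it also reproduces the worked example above: `stats([-2, 1, 4, -3])` returns $(3, 5, 2)$.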
For $X\subseteq[n]$, let $q_X=\prod_{i\in X}q_i$. We prove the following identity.
\begin{theorem}\label{thm:signed_general} For all $n\geq 1$, \[ \det\left(p^{\maj_A(gh^{-1})}q_{\Neg(gh^{-1})}\right)_{g,h\in B_n} = \prod_{k=1}^n(1-q_k^{2k})^{n!2^{n-1}/k}\prod_{k=2}^{n}(1-p^k)^{n!2^n (k-1)/k}. \] \end{theorem}
Specializing Theorem~\ref{thm:signed_general} gives the following identities, which are Conjectures~46,~47, and~50 in \cite{krattenthaler2005advanced}.
\begin{corollary}\label{cor:signed_spec}
If $n\geq 1$, then
\begin{align*}
\det\left(p^{\maj_A(gh^{-1})}q^{\nneg(gh^{-1})}\right)_{g,h\in B_n} &= \prod_{k=1}^n(1-q^{2k})^{n!2^{n-1}/k}\prod_{k=2}^{n}(1-p^k)^{n!2^n (k-1)/k},\\
\det\left(q^{\maj_B(gh^{-1})}\right)_{g,h\in B_n} &= \prod_{k=1}^n(1-q^{2k})^{n!2^{n-1}/k}\prod_{k=2}^{n}(1-q^k)^{n!2^n (k-1)/k},\ \mbox{and}\\
\det\left(p^{\maj_A(gh^{-1})}q^{\sneg(gh^{-1})}\right)_{g,h\in B_n} &= \prod_{k=1}^n(1-q^{2k^2})^{n!2^{n-1}/k}\prod_{k=2}^{n}(1-p^k)^{n!2^n (k-1)/k}.
\end{align*} \end{corollary}
\begin{proof}
For $g\in B_n$, $q_{\Neg(g)}$ specializes to $q^{\nneg(g)}$ by setting $q_i=q$ for all $i$. This gives the first identity. The second follows from the first by setting $p=q$. For the third identity, we observe that $q_{\Neg(g)}$ specializes to $q^{\sneg(g)}$ by setting $q_i=q^i$ for all $i$. \end{proof}
To prove Theorem~\ref{thm:signed_general}, we construct a certain basis for $B_n$. This basis is not perfect for $n\geq 2$, but it is ``close enough'' for our purposes.
Let $\varepsilon^{(k)}\in\{1,-1\}^n$ where $(\varepsilon^{(k)})_j=-1$ if $k=j$ and $(\varepsilon^{(k)})_j=1$ if $k\neq j$. We again let $t_k=(k,k-1,\ldots,1)$ be a $k$-cycle. Let $s_k=(\varepsilon^{(k)},t_k)$ and $u_k=(\mathbf{1},t_k)$ be signed and unsigned versions of $t_k$, respectively.
\begin{lemma}\label{lem:sneg_basis}
Let $n\geq 1$ be given. The sequence $(s_1,\ldots,s_n,u_n,\ldots,u_2)$ is a basis of $B_n$. In particular, every signed permutation $g$ may be uniquely expressed in the form $g=s_1^{d_1}\cdots s_n^{d_n}u_n^{c_n}\cdots u_2^{c_2}$ where $0\leq d_k < 2$ and $0\leq c_k < k$ for all $k$. Moreover, $\maj_A(g)=\maj(t_n^{c_n}\cdots t_2^{c_2})$ and $\Neg(g)=\{i\mid d_i=1\}$. \end{lemma}
\begin{proof}
Let $g=(\varepsilon,w)\in B_n$. For $v\in\mathfrak{S}_n$, we have $(\varepsilon,w)(\mathbf{1},v)=(\varepsilon,wv)$. That is, right multiplication by an element $(\mathbf{1},v)$ rearranges the positions of the signed integers in $g$ without changing the set of signed integers present.
Let $h=(\varepsilon,u)$ be the rearrangement of signed integers in $g$ in increasing order relative to $<_A$. Then $h$ is the unique element in the left coset $g\mathfrak{S}_n$ such that $\maj_A(h)=0$. Furthermore, for $v\in\mathfrak{S}_n$, we have $\maj_A(\varepsilon,uv)=\maj(v)$. In particular, $\maj_A(\varepsilon,w) = \maj(u^{-1}w)$. By Lemma~\ref{lem:maj_basis}, there is a unique factorization $u^{-1}w=t_n^{c_n}\cdots t_2^{c_2}$ where $0\leq c_k<k$ for all $k$, and $\maj(u^{-1}w)=c_n+\cdots+c_2$.
We have seen that the set $T=\{h\in B_n\mid \maj_A(h)=0\}$ is a left transversal to $\mathfrak{S}_n$ in $B_n$. Since $[B_n:\mathfrak{S}_n]=2^n$, we have $|T|=2^n$. To complete the proof, we show that each element $h\in T$ is uniquely expressible in the form $h=s_1^{d_1}\cdots s_n^{d_n}$ with $0\leq d_i<2$ for all $i$, and $\Neg(h)=\{i\mid d_i=1\}$.
Let $0\leq d_i<2$ for all $i$, and let $(\varepsilon,u)=s_1^{d_1}\cdots s_{n-1}^{d_{n-1}}$. Then $u$ fixes $n$, and by induction, we may assume $\maj_A(\varepsilon,u)=0$ and $\Neg(\varepsilon,u)=\{i\mid d_i=1, i<n\}$. If $d_n=0$ there is nothing more to prove, so assume $d_n=1$. Since $u$ fixes $n$, we have $u\cdot\varepsilon^{(n)}=\varepsilon^{(n)}$, so $s_1^{d_1}\cdots s_n^{d_n}=(\varepsilon,u)(\varepsilon^{(n)},t_n)=(\varepsilon\varepsilon^{(n)},ut_n)$. Multiplying $u$ on the right by $t_n$ rotates the values of $u$ and puts $n$ at the beginning. Since $n\in\Neg(s_1^{d_1}\cdots s_n^{d_n})$, the signed values of $s_1^{d_1}\cdots s_n^{d_n}$ are still in increasing order, i.e. $\maj_A(s_1^{d_1}\cdots s_n^{d_n})=0$.
Hence, $\{s_1^{d_1}\cdots s_n^{d_n}\mid \forall i,\ 0\leq d_i<2\}\subseteq T$. Since both sets contain $2^n$ elements, they must be equal. \end{proof}
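The factorization claims of Lemma~\ref{lem:sneg_basis} can be verified by brute force for small $n$. The sketch below (illustrative only) multiplies out all $48$ products for $n=3$ under the group law of Section~\ref{sec:signed} and checks distinctness together with the $\maj_A$ and $\Neg$ statistics.

```python
# Brute-force check of the basis factorization for n = 3: the 48 products
# s_1^{d_1} s_2^{d_2} s_3^{d_3} u_3^{c_3} u_2^{c_2} are pairwise distinct,
# with maj_A = c_3 + c_2 and Neg = {i : d_i = 1}.  Group law:
# (eps, u)(eps', v) = (eps (u.eps'), uv), where (u.eps')_i = eps'_{u^{-1}(i)}.
from itertools import product

n = 3

def pinv(w):
    iw = [0] * n
    for i, wi in enumerate(w):
        iw[wi] = i
    return tuple(iw)

def mult(g, h):
    (e1, u), (e2, v) = g, h
    ui = pinv(u)
    return (tuple(e1[i] * e2[ui[i]] for i in range(n)),
            tuple(u[v[i]] for i in range(n)))

def t(k):      # the k-cycle t_k = (k, k-1, ..., 1), 0-indexed one-line form
    w = list(range(n))
    w[0] = k - 1
    for j in range(1, k):
        w[j] = j - 1
    return tuple(w)

ones = (1,) * n
s = {k: (tuple(-1 if i == k - 1 else 1 for i in range(n)), t(k))
     for k in range(1, n + 1)}
u = {k: (ones, t(k)) for k in range(2, n + 1)}

def maj_A(g):  # sum of A-descents of the one-line signed word of g
    eps, w = g
    letters = [eps[w[i]] * (w[i] + 1) for i in range(n)]
    return sum(i + 1 for i in range(n - 1) if letters[i] > letters[i + 1])

elements, checks = [], []
for d in product((0, 1), repeat=n):
    for c3 in range(3):
        for c2 in range(2):
            g = (ones, tuple(range(n)))
            for k in range(1, n + 1):
                for _ in range(d[k - 1]):
                    g = mult(g, s[k])
            for _ in range(c3):
                g = mult(g, u[3])
            for _ in range(c2):
                g = mult(g, u[2])
            elements.append(g)
            neg = {i + 1 for i in range(n) if g[0][i] == -1}
            checks.append(maj_A(g) == c3 + c2 and
                          neg == {i + 1 for i in range(n) if d[i] == 1})
all_distinct = len(set(elements)) == 48
stats_match = all(checks)
```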
\begin{proof}[Proof of Theorem~\ref{thm:signed_general}]
Let $\alpha=\sum_{g\in B_n} p^{\maj_A(g)}q_{\Neg(g)}\cdot g$. If $\phi_{\reg}$ is the regular representation of $B_n$, then
\[ \Delta_{\phi_{\reg}}(\alpha)= \det\left(p^{\maj_A(gh^{-1})}q_{\Neg(gh^{-1})}\right)_{g,h\in B_n}. \]
By Lemma~\ref{lem:sneg_basis}, we obtain a factorization
\[ \alpha = (1+q_1s_1)\cdots(1+q_ns_n)(1+pu_n+\cdots+p^{n-1}u_n^{n-1})\cdots(1+pu_2). \]
Therefore,
\begin{align*}
\Delta_{\phi_{\reg}}(\alpha) &= \prod_{k=1}^n\Delta_{\phi_{\reg}}(1+q_ks_k) \prod_{k=2}^n\Delta_{\phi_{\reg}}(1+pu_k+\cdots+p^{k-1}u_k^{k-1})\\
&= \prod_{k=1}^n\theta_{\phi_{\reg}(s_k)}(-q_k) \prod_{k=2}^n\frac{(1-p^k)^{|B_n|}}{\theta_{\phi_{\reg}(u_k)}(p)}
\end{align*}
The order of $u_k=(\mathbf{1},t_k)$ is $k$ and the order of $s_k=(\varepsilon^{(k)},t_k)$ is $2k$. Hence,
\[ \Delta_{\phi_{\reg}}(\alpha) = \prod_{k=1}^n(1-(-q_k)^{2k})^{n!2^n/(2k)} \prod_{k=2}^n(1-p^k)^{n!2^n(1-1/k)}. \] Since $(-q_k)^{2k}=q_k^{2k}$, this agrees with the claimed formula. \end{proof}
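As an independent check of Theorem~\ref{thm:signed_general}, one can compute the $8\times 8$ determinant for $B_2$ directly with sympy (a verification sketch, not part of the paper) and compare it against the predicted $(1-q_1^2)^4(1-q_2^4)^2(1-p^2)^4$.

```python
# Brute-force check of Theorem thm:signed_general for n = 2, using the group
# law (eps, u)(eps', v) = (eps (u.eps'), uv) with (u.eps')_i = eps'_{u^{-1}(i)}.
from itertools import product
import sympy as sp

p, q1, q2 = sp.symbols('p q1 q2')
n = 2
perms = [(0, 1), (1, 0)]            # S_2 in one-line notation, 0-indexed

def pinv(w):
    iw = [0] * n
    for i, wi in enumerate(w):
        iw[wi] = i
    return tuple(iw)

def mult(g, h):
    (e1, u), (e2, v) = g, h
    ui = pinv(u)
    return (tuple(e1[i] * e2[ui[i]] for i in range(n)),
            tuple(u[v[i]] for i in range(n)))

group = [(eps, w) for w in perms for eps in product((1, -1), repeat=n)]
ident = ((1,) * n, tuple(range(n)))
inverse = {g: next(h for h in group if mult(g, h) == ident) for g in group}

def weight(g):
    # p^{maj_A(g)} q_{Neg(g)}, reading off the one-line signed word of g.
    eps, w = g
    letters = [eps[w[i]] * (w[i] + 1) for i in range(n)]
    majA = sum(i + 1 for i in range(n - 1) if letters[i] > letters[i + 1])
    mono = sp.Integer(1)
    for qi, e in zip((q1, q2), eps):
        if e == -1:
            mono *= qi
    return p**majA * mono

M = sp.Matrix([[weight(mult(g, inverse[h])) for h in group] for g in group])
lhs = sp.expand(M.det())
rhs = sp.expand((1 - q1**2)**4 * (1 - q2**4)**2 * (1 - p**2)**4)
```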
\end{document}
\begin{document}
\title{\vskip-40pt Wave Function Collapse, Correlating Interactions, and Conservation Laws} \author{Edward J. Gillis\footnote{email: gillise@provide.net}}
\maketitle
\begin{abstract}
\noindent The assumption that wave function collapse is induced by correlating interactions of the kind that constitute measurements leads to a stochastic collapse equation that does not require the introduction of any new physical constants and that is consistent with conservation laws. The collapse operator is based on the interaction (potential) energy, with a variable timing parameter related to the rate at which individual interactions generate the correlations. The approximate localization of physical systems follows from the distance-dependent nature of the interaction potentials. The equation is consistent with strict conservation of momentum and orbital angular momentum, and it is also consistent with energy conservation within the accuracy allowed by the limited forms of energy that can be described within nonrelativistic theory. The possibility of extending the proposal to a fully relativistic version is discussed.
\end{abstract}
\section{Introduction} \label{intro}
Stochastic, nonlinear modifications of the Schr\"{o}dinger equation have been proposed in order to explain measurement outcomes in terms of fundamental physical processes, and to describe how the apparently linear evolution of systems consisting of a few elementary particles is altered when large numbers of interacting particles become involved. In these accounts wave function collapse and the Born probability rule are consequences of the basic mathematical structure of the theory, rather than ad hoc postulates, seemingly at odds with that structure. This approach has been developed over a number of years in works by various authors\cite{Pearle_1976,Pearle_1979,Gisin_1984,GRW,Diosi_1,Diosi_2,Diosi_3,Gisin_c,GPR,Adler_Brun,Ghirardi_Bassi,Pearle_1,Brody_finite}.
The need to use an equation that is nonlinear in order to describe wave function collapse follows from the fact that projection of a state vector to one of its components is a nonlinear operation.\footnote{A recent work by Mertens, et al.\cite{Mertens_1} has also shown that nonlinearity is required for the derivation of the Born rule.} The need for a nonlinear equation to be stochastic was demonstrated by Gisin who showed that any deterministic nonlinear modification would lead to superluminal signaling\cite{Gisin_c}. Most of the proposals seek to collapse the measured system either to an approximate position state or to an energy eigenstate. Typically, they require the introduction of new physical constants. For example, position-based approaches usually include distance and timing parameters to govern the range and rate of the collapse effects.
The approach described here originated as an attempt to reconcile relativity with the nonlocal aspects of quantum theory by taking the prohibition of superluminal information transmission as the fundamental principle that these theories have in common. Consideration of the way in which information is physically instantiated and transmitted led to the hypothesis that the interactions that establish correlations between systems, such as those involved in measurements, are responsible for inducing wave function collapse\cite{Gillis_1}. This hypothesis was formalized in a stochastic collapse equation in \cite{Gillis_2} that used two-particle interaction potentials as the basis for the stochastic operator. In that equation the strength of the collapse effects was determined by the ratio of the interaction energy to the total relativistic energy of the two particles. The timing parameter was assumed to be a new constant that could be chosen to insure that collapse occurred on a time scale consistent with our macroscopic experience.
The need to introduce such a new constant can be eliminated by tying the collapse rate to the speed with which individual interactions separate the wave function into orthogonal branches in the process of generating correlations. The rate can be precisely quantified in terms of the rate at which the potential energy changes during the course of the interaction relative to the total energy of the interacting systems. It is defined in such a way that it integrates to a value of order $1$ over the course of the interaction, and goes to zero when the branching is complete or when the two particles settle into a stationary state.\footnote{This condition holds for those interactions that contribute significantly to the collapse process. For other interactions the integrated value can be much less.} This insures that the collapse equation reduces to the ordinary nonrelativistic Schr\"{o}dinger equation when applied to stationary states or to noninteracting systems.
Besides eliminating the need to introduce new physical constants there are some other key differences between the equation presented here
and most previous proposals. Most approaches have assumed that the correlations between the measured system and the measuring apparatus are established \textit{before} the collapse, and that the collapse occurs on a time scale that is much shorter than that on which ordinary Hamiltonian evolution is effective. In contrast, the process proposed here is assumed to work in an incremental fashion with small transfers of amplitude between distinct branches of the wave function associated with each elementary interaction. Because these incremental stochastic effects are several orders of magnitude smaller than the changes induced by standard Schr\"{o}dinger evolution it works \textit{in parallel with the Hamiltonian} as the correlations are being established. In addition, because the collapse operator is based on the inter-particle potential energy its outcome cannot be readily characterized in terms of an eigenfunction of any single type of operator (such as position or energy). This is due to the fact that the potential energy varies continuously over the course of an entangling interaction, and the integrated value typically varies from one interaction to the next. The collapse basis in each instance is defined by the correlating interactions.\footnote{These branches tend to coincide with the collapse basis described in the decoherence literature\cite{Zeh,Zurek_ptr,Zurek_Darwin_1,Zurek_Darwin_2}, but the collapse is brought about, primarily, by the interactions between the elementary and other microscopic systems that constitute the measurement apparatus, and, in general, is \textit{not} dependent on the coupling of the apparatus to the environment.}
These distinguishing factors require a different type of proof that the proposed equation is effective in bringing about collapse. As described above, the process works by shifting amplitude between entangled branches depending on the level of interaction potential energy in each branch of the wave function at each stage of the process. So the proof presented here consists, essentially, of an explicit description of the collapse process as a random walk between these developing branches.
The next section provides a heuristic description of how the collapse process works as illustrated by a simple example. It then lays out the key assumptions that are made and discusses some relevant issues related to entanglement. In Section 3 the collapse operator and equation are defined, and the way in which the relevant parameters are determined is described. These parameters determine both the rate at which collapse occurs and the size of the systems necessary to bring about collapse. Section 4 describes how the wave function evolves in configuration space in measurement situations. It focuses on the way in which the tensor product structure of the Hilbert space and the basis in which to view entanglement relations are defined by the interactions that generate the correlations. Section 5 gives a detailed description of the collapse process in configuration space and a proof that it conforms with the Born probability rule. Section 6 discusses the fact that virtually all elementary systems are entangled to some extent with other systems with which they have interacted, and the implications that this entanglement has for the definition of conserved quantities and our understanding of conservation laws. Based on this discussion Section 7 goes on to demonstrate that the proposed collapse equation is consistent with strict conservation of momentum and angular momentum in individual experiments. Section 8 shows that, within the limits permitted by the somewhat restricted characterization of energy in nonrelativistic theory, consistency with energy conservation is also maintained. Section 9 discusses the possibility of extending the proposal to cover relativistic situations. Section 10 is a summary.
\section{Motivation, Approach, and Assumptions}
\label{sec:2}
The original motivation for assuming that wave function collapse is induced directly by correlating interactions stems from the role that these interactions play in the physical instantiation and transmission of information. Although wave function collapse is a nonlocal effect that \textit{appears} to be at odds with relativity, the fact that it is a probabilistic phenomenon prevents such effects from carrying any information. Thus, it makes sense to focus on the fundamental physical processes involved in enforcing this limit in order to reconcile the ``two fundamental pillars of contemporary theory"\cite{Bell_fund}.
The emphasis on entangling interactions has important additional benefits. First, the interactions pick out the basis into which the wave function collapses, and they also help define the tensor product structure of the Hilbert space\cite{Dugic_1,Dugic_2,Bertlmann_2,Bertlmann_1} which is crucial in characterizing the structure of entanglement relations. Second, it allows one to use the strength of the interactions and the duration of the entangling process to eliminate the need to introduce any new physical constants. Finally, it allows the construction of a collapse equation that respects conservation laws \textit{in individual instances}.
Several objections have been raised to the proposed equation based on misunderstandings of how an interaction-induced collapse process works. For this reason it is helpful to begin with a heuristic description of such a process in a very simple measurement situation. Suppose that the wave function of an elementary particle is split into two spatially separated branches, and that a detector is placed in the path of one of these branches. When one of the branches encounters the detector it initiates a sequence or cascade of interactions that correlate the states of the particle to those of the detector. Most collapse proposals assume that these correlations are generated \textit{prior to} the actual collapse.\footnote{In many accounts the \textit{sequence of individual interactions} between the measured system and the apparatus is idealized by introducing a single effective interaction Hamiltonian. The viewpoint offered here is that this sort of idealization misses the most crucial steps in the collapse process, and creates the impression that the collapse operator must compete with the Hamiltonian, rather than work in parallel with it.} In contrast, the assumption made here is that collapse proceeds incrementally as the entangling interactions are taking place. Specifically, it is assumed that with each such interaction a small amount of amplitude is transferred either into or out of the interacting branch based on some random process. With a sufficient number of interactions eventually all of the amplitude is transferred into either the interacting or the noninteracting branch of the wave function.
The question that naturally arises is what happens if detectors are placed in the paths of both branches. In these situations the process can work just as it does in the case of a single detector, as long as one assumes that the interactions are not perfectly synchronized in both time and strength. Such an assumption is extremely plausible for interaction sequences of sufficient length.
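The heuristic just sketched can be illustrated with a toy computation (an illustration only, not part of the proposal; the step size, starting value, and trial count are arbitrary choices): if the squared amplitude of the interacting branch performs an unbiased random walk that is absorbed at $0$ and $1$, then the probability of ending entirely in that branch equals its initial squared amplitude, which is exactly the Born rule.

```python
# Toy model of incremental amplitude transfer: the squared amplitude of the
# "detector 1" branch does an unbiased random walk in steps of 0.05,
# absorbing at 0 and 1.  Starting from x_0 = 0.3 (6 steps of 20), the
# fraction of trials absorbed at 1 should approach 0.3.
import numpy as np

rng = np.random.default_rng(0)
trials, top, start = 20_000, 20, 6      # x measured in units of 0.05
k = np.full(trials, start)
active = np.ones(trials, dtype=bool)
while active.any():
    k[active] += rng.choice((-1, 1), size=int(active.sum()))
    active = (k > 0) & (k < top)
frequency = (k == top).mean()           # fraction collapsing to branch 1
```

For an unbiased walk the hitting probability of the upper barrier is exactly $x_0$, independent of the step size, so the Born statistics do not depend on the (unknown) size of the individual amplitude transfers.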
This intuitive idea must now be formalized in a stochastic collapse equation that covers the wide range of possible measurement situations, and also accounts for collapse in naturally occurring settings. The remainder of this section will set the stage for this by describing the assumptions, framework, and notation that will be used in carrying out this task.
Although the motivation for this proposal is a desire to reconcile relativity and quantum theory the formulation presented here is nonrelativistic. This simplifies the handling of the nonlocal aspects of wave function collapse and makes it possible to analyze the collapse process in configuration space. The configuration space picture is also helpful in explaining how the process respects conservation laws. The nonrelativistic formulation also entails that the kinetic and interaction energies are much less than the total relativistic energy.
The total system is assumed to consist of a very large number of subsystems. Much of the discussion will focus on elementary particles or other very small subsystems, but we must also allow for the possibility of elementary systems organizing themselves into much larger, even macroscopic, subsystems that can interact with other subsystems as a unit. Individual subsystems will be labeled with subscripts, $j,k,l,...$ Space coordinates will be labeled as $x_j, y_j, z_j$, and the corresponding position vectors will be written as $\mathbf{w}_j$.
Interactions will be modeled by conservative (hence, distance-dependent), two-particle potentials, $\mathbf{\hat{ V}_{jk}}(\mathbf{w}_j,\mathbf{w}_k)
\, = \, \mathbf{\hat{ V}_{jk}}(|\mathbf{w}_j \, - \,\mathbf{w}_k|) $, that fall off with increasing distance. The term, 'particle', is to be interpreted as applying to any subsystem interacting as a unit. Since I will be assessing the effect of the collapse operator on conservation laws it is also necessary to rule out external potentials. All systems, even macroscopic ones, are considered to be quantum systems. There is no classical boundary.
By basing the collapse operators on the potentials, which are smooth functions of distance that decrease in magnitude as distance increases, we can insure that the collapse results in the approximate localization of subsystems. The special status of the position basis follows from the assumption that collapse is tied to distance-dependent interactions. The close connection between the interaction basis and the position basis will also facilitate the description of the collapse process and the maintenance of conservation laws in configuration space.
It is also assumed that it is possible to measure any (reasonable) observable. Since measurements consist of sets of interactions, and since interactions are distance dependent, this implies that measurements typically begin with the spatial separation of the wave function of the target system into branches characterized by distinct values of the quantity associated with the observable being measured. This leads to the splitting of the wave function of the total system into well separated regions of configuration space. The random walk described in Section 5 takes place between these regions.
\section{Stochastic Collapse Equation}
\label{sec:3}
Like a number of previous proposals, the collapse equation described here is based on the Wiener process, also known as Brownian motion. The derivative of this process is white noise, which is usually indicated in the literature as $dW$, $dB$, or $d\xi$. A fairly standard form for these equations is:\footnote{Some recent proposals employ colored noise rather than white noise, and incorporate the stochastic processes into a classical noise field. See \cite{Adler_Bassi,Adler_Vinante,Bassi_Ferialdi}.} \begin{equation}\label{3x1} d\psi\, \, = \, (-i/\hbar)\mathbf{\hat{H}} \, \, \psi\, dt \, +\, \Big{ [} \sum_k \sqrt{\gamma}(\mathbf{\hat{ L}_k} - \langle \, \mathbf{\hat{ L}_k} \, \rangle) d\xi_k - \, \frac{1}{2}\sum_k\gamma(\mathbf{\hat{ L}_k} - \langle \, \mathbf{\hat{ L}_k} \, \rangle)^2 dt \Big{ ]}\psi . \end{equation}
The first term on the right represents the ordinary Schr\"{o}dinger evolution, governed by the Hamiltonian, $\mathbf{\hat{H}}$. The $\mathbf{\hat{ L}_k}$ are Lindblad operators which are almost always taken to be self-adjoint; hence, they correspond to observables. This schema allows for multiple independent Wiener processes, which can be real or complex. $\langle \, \mathbf{\hat{ L}_k} \, \rangle$ is the expectation value of $\mathbf{\hat{ L}_k} $ in the state, $\psi$, $\langle \psi | \mathbf{\hat{ L}_k}| \psi \rangle$, and its presence makes clear the nonlinearity of the equation. The parameter, $\gamma$, determines the effectiveness of the process in bringing about collapse, and, in most proposals, incorporates new physical constants. The stochastic processes, $\xi_k$, are assumed to have zero mean, and the differentials, $d\xi_k$, obey the It\^{o} stochastic calculus rules: \begin{equation}\label{3x2} d\xi^{*}_{j} d\xi_k = dt \delta_{jk}, \;\;\; dt d\xi_k = 0. \end{equation}
These conditions on the differentials determine the units, $ d\xi_k \,\sim \, (dt)^{\frac{1}{2}}$, and they are responsible for the factor, $dt$, in the final term on the right. The middle term, $\sum_k\sqrt{\gamma}(\mathbf{\hat{ L}_k} - \langle \, \mathbf{\hat{ L}_k} \, \rangle) d\xi_k $, is primarily responsible for bringing about the collapse of the state vector. In most other proposed equations it tends to concentrate the state on eigenstates of the observables, $\mathbf{\hat{ L}_k}$\cite{Adler_Brun}. (The approach taken here differs in that the set of possible outcomes is determined by which interactions are most relevant in a particular situation.) The nonunitary nature of the operator described by the middle term necessitates an adjustment to insure that the resulting vector has unit norm. This is provided by the third term, $- \, \frac{1}{2}\sum_k\gamma(\mathbf{\hat{ L}_k} - \langle \, \mathbf{\hat{ L}_k} \, \rangle)^2 dt $.
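The collapse behavior encoded in an equation of this form can be illustrated numerically. The sketch below (an illustration only; it takes $\mathbf{\hat{H}}=0$, a single self-adjoint operator $\mathbf{\hat{ L}}=\sigma_z$, real noise, and arbitrary parameter values) integrates the equation by the Euler--Maruyama method for a two-level system, renormalizing the state at each step. Trajectories concentrate on the eigenstates of $\mathbf{\hat{ L}}$, and the mean of the final $+1$-eigenstate population reproduces its initial value, as the Born rule requires.

```python
# Euler-Maruyama simulation of the stochastic collapse equation for a
# two-level system with L = sigma_z, H = 0, and a single real Wiener
# process.  All parameter values (gamma, dt, etc.) are illustrative.
import numpy as np

rng = np.random.default_rng(1)
gamma, dt, steps, trials = 2.0, 1e-3, 10_000, 1_000
p0 = 0.25
a = np.full(trials, np.sqrt(p0))        # amplitude on the +1 eigenstate
b = np.full(trials, np.sqrt(1 - p0))    # amplitude on the -1 eigenstate
for _ in range(steps):
    p = a**2                             # state is kept normalized
    mean_L = 2 * p - 1                   # <L> = p - (1 - p)
    dxi = rng.normal(0.0, np.sqrt(dt), size=trials)
    la, lb = 1 - mean_L, -1 - mean_L     # action of (L - <L>) on a and b
    a = a + np.sqrt(gamma) * la * a * dxi - 0.5 * gamma * la**2 * a * dt
    b = b + np.sqrt(gamma) * lb * b * dxi - 0.5 * gamma * lb**2 * b * dt
    norm = np.sqrt(a**2 + b**2)          # remove O(dt) discretization drift
    a, b = a / norm, b / norm
p_final = a**2
mean_p = p_final.mean()                          # should stay close to p0
collapsed = (np.minimum(p_final, 1 - p_final) < 0.05).mean()
```

Because $p_t=|a|^2$ is a martingale under this dynamics, its ensemble mean is conserved while individual trajectories are driven to $0$ or $1$, which is the numerical signature of Born-rule collapse.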
The collapse equation proposed here will use just a single collapse operator, $ \hat{\mathcal{V}}, $ and a single Wiener process, $\xi$. In order to implement the hypothesis that wave function collapse is induced by those interactions that generate entanglement the stochastic operator will be based on conservative, two-particle interaction potentials,
$\mathbf{\hat{ V}_{jk}}(\textbf{w}_j,\textbf{w}_k)$, where $\textbf{w}_j$ and $\textbf{w}_k$ indicate the coordinates of systems $j$ and $k$ in configuration space. The requirement that the potential functions are conservative implies that they are functions only of the separation between the systems: $\mathbf{\hat{ V}_{jk}}(\mathbf{w}_j,\mathbf{w}_k) \, = \, \mathbf{\hat{ V}_{jk}}(|\mathbf{w}_j \, - \,\mathbf{w}_k|) $.
The emphasis here is on \textit{correlating} interactions. So we want to multiply the potential by some measure of the effectiveness of the interaction in establishing a correlation between the systems involved. Since this depends on the extent to which the interaction changes the states of the systems involved, the expression, $\mathbf{\hat{ V}_{jk}} \, - \, \langle \, \mathbf{\hat{ V}_{jk}} \, \rangle$, will be divided by the sum of the \textit{effective masses}, $ (m_j+m_k) $, of the interacting systems. For the present purpose the most straightforward way to determine an effective mass is to aggregate particles in bound states such as atoms, molecules, and other complex structures into single systems with a net charge and a total mass.
To maintain overall consistency in the dimension of the stochastic operator it is necessary to transform the effective mass in the denominator into an energy by multiplying by the square of some velocity. The only nonarbitrary choice is $c$, the speed of light. As will be shown, multiplying $\mathbf{\hat{ V}_{jk}} \, - \, \langle \, \mathbf{\hat{ V}_{jk}} \, \rangle$ by $ 1 \, / \, (m_j+m_k)c^2 $ sets the size of the amplitude shifts associated with individual interactions to a very reasonable value, and insures that the stochastic effect is a minor perturbation relative to the changes induced by standard Schr\"{o}dinger evolution. It also anticipates the eventual extension of this proposal to a relativistic version.
The other critical piece of the stochastic operator is the rate parameter, $\gamma$. As stated earlier the single parameter, $\gamma$, will be replaced by multiple variable rate parameters, $\gamma_{jk}$, each of which is associated with the interaction between system, $j$, and system, $k$. These are tied to the speed with which individual interactions separate the wave function into orthogonal branches in the process of generating correlations. They are effective \textit{only during the brief period in which the correlation is being established}, and they tend to zero afterward. They are defined below.
The individual terms, $ [\sqrt{\gamma_{jk}}(\mathbf{\hat{ V}_{jk}} - \langle \mathbf{\hat{ V}_{jk}} \rangle)] \; / \; [(m_j+m_k)c^2] $, will be abbreviated as $\hat{\mathcal{V}}_{jk}.$ The single collapse operator, $ \hat{\mathcal{V}}, $ is then defined as: $\sum_{j<k} \hat{\mathcal{V}}_{jk}. $ The proposed collapse equation can be represented as either: \begin{equation}\label{3x3} d\psi \, = \, (-i/\hbar)\mathbf{\hat{H}} \, \psi \, dt \, +\, \sum_{j < k} \hat{\mathcal{V}}_{jk} \, \, \psi\, d\xi(t) - \, \frac{1}{2} (\sum_{j < k} \hat{\mathcal{V}}_{jk}) \, (\sum_{m < n} \hat{\mathcal{V}}_{mn} ) \, \psi \, dt ; \end{equation} or, more compactly, as: \begin{equation}\label{3x4} d\psi \, = \, (-i/\hbar)\mathbf{\hat{H}} \, \psi \, dt \, +\,
\hat{\mathcal{V}} \, \psi\, d\xi(t) - \, \frac{1}{2} \hat{\mathcal{V}}^2 \, \psi \, dt. \end{equation}
It is important to emphasize that the amplitude transfer brought about by each $\hat{\mathcal{V}}_{jk}$ applies across the entire configuration space and not only to the systems, $j$ and $k$. Other systems are affected by the transfer to the extent that they are entangled with either $j$ or $k$. Because the potentials, $\mathbf{\hat{ V}_{jk}}$, are distance-dependent the amplitude transfers occur between localized regions of configuration space.
To complete the collapse proposal it is necessary to specify the rate parameters, $\gamma_{jk}$. These will be defined in terms of the rate at which the potential energy is changing during the interaction between system, $j$, and system, $k$. They vanish for stationary states since the interaction potential is constant in these situations. They also tend to zero when the two systems become widely separated.
The rate of change of potential energy is: \begin{equation}\label{3x5}
\frac{d}{dt} \, \langle \, \psi | \, \mathbf{\hat{ V}_{jk}} \, | \psi \rangle \;
= \; \frac{-i}{\hbar} \Big{(} \, \langle \, \psi | \,
\Big{[} \mathbf{\hat{ H}_{jk}}, \mathbf{\hat{ V}_{jk}} \Big{]} \, | \psi \rangle \, \Big{)}, \end{equation} where $ \Big{[} \mathbf{\hat{ H}_{jk}}, \mathbf{\hat{ V}_{jk}} \Big{]} $ is the commutator of $ \mathbf{\hat{ H}_{jk}} $ and $ \mathbf{\hat{ V}_{jk}}, $ and the nonrelativistic Hamiltonian is: \begin{equation}\label{3x6} \mathbf{\hat{ H}_{jk}} \;\; = \;\; -\frac{\hbar^2}{2m} (\mathbf{\nabla_j}^2 \, + \, \mathbf{\nabla_k}^2) \; + \; \mathbf{\hat{ V}_{jk}}. \end{equation} This yields: \begin{equation}\label{3x7}
(-\frac{i \hbar}{2m}) \, \langle \, \psi | \, (\mathbf{\nabla_j}^2 \mathbf{\hat{ V}_{jk}} \, + \, \mathbf{\nabla_k}^2 \mathbf{\hat{ V}_{jk}} \; + \; 2 ( \mathbf{\nabla_j} \mathbf{\hat{ V}_{jk}} \mathbf{\nabla_j}
\, + \, \mathbf{\nabla_k} \mathbf{\hat{ V}_{jk}} \mathbf{\nabla_k}) \, | \psi \rangle \, . \end{equation}
The nonrelativistic limit on energy entails a minimum value for the separation, $|\mathbf{w}_j \, - \,\mathbf{w}_k| $. This, in turn, implies that the integration over $\psi$ eliminates the terms, $\mathbf{\nabla_j}^2 \mathbf{\hat{ V}_{jk}}$ and $\mathbf{\nabla_k}^2 \mathbf{\hat{ V}_{jk}}.$ The result is: \begin{equation}\label{3x8}
|| \, \int \, \psi^* \, \Big{[} (\mathbf{\hat{\nabla}_j}\mathbf{\hat{ V}_{jk}} \, \cdot \, ( \frac{-i \hbar}{m} ) \, \mathbf{\nabla_j} \psi) \, + \, (\mathbf{\hat{\nabla}_k}\mathbf{\hat{ V}_{jk}} \, \cdot \, ( \frac{-i \hbar}{m} ) \, \mathbf{\nabla_k} \psi)
\Big{]} \, ||. \end{equation}
This expression includes the norm ($||...||$) in order to insure a positive value. (All integrals are over the entire configuration space unless otherwise noted.)
Some further refinement of this formula is required. As described earlier, typical measurement situations begin with the branching of the wave function of an elementary or other very small system, $j$, into spatially separated regions. Only one of these branches encounters system, $k$, and interacts with it. It is essential for the derivation of the Born probability rule that the rate parameter, $\gamma_{jk}$, be independent of the amplitude of the interacting branch. For this reason we want to pick out just the interacting portion of the wave function. This also allows one to define $\gamma_{jk}$ in such a way that it integrates to a value of order, $1$, over the duration of the interaction.
The interacting portion of the wave function can be picked out by using a weighting factor, $ \frac{ \mathbf{\hat{ V}_{jk}} \,} { \langle \, \mathbf{\hat{ V}_{jk}} \, \rangle } $, reflecting the extent to which each segment of the wave function is involved in the interaction. To insure that $\gamma_{jk}$ has the dimensions of inverse time and that it integrates to a value of order, $1$, the rate of change of potential energy will be divided by the total nonrelativistic energy of the interacting systems. Since the kinetic energy of the center of mass of the interacting systems adds an arbitrary offset to the total energy it is necessary to eliminate it by specifying the energy in the center-of-mass frame. The center-of-mass velocity can be identified by integrating the relevant momentum operators over the interacting portion of the wave function and dividing by the combined mass: \begin{equation}\label{3x9} \vec{\mathbf{u }} \; = \; \int \, \Big{(}\frac{ \mathbf{\hat{ V}_{jk}} \,} { \langle \, \mathbf{\hat{ V}_{jk}} \, \rangle } \,\Big{)} \, \psi^* \, \, ( \frac{-i \hbar}{m_j+m_k} ) \, [ \, \mathbf{\nabla_j} \psi \, + \, \mathbf{\nabla_k} \psi \, ]. \end{equation}
The wave function in the center-of-mass rest frame can then be represented as: \begin{equation}\label{3xa} \psi_{cm} \; = \; \Big{(}\frac{ \mathbf{\hat{ V}_{jk}} \,} { \langle \, \mathbf{\hat{ V}_{jk}} \, \rangle } \,\Big{)} \, \psi \, \, e^{ i [ - \, \vec{\mathbf{u}} \cdot \vec{\mathbf{w}}_j \frac{m_j}{\hbar} \, - \, m_j \mathbf{u}^2 \frac{t}{2 \hbar} \, - \, \vec{\mathbf{u}} \cdot \vec{\mathbf{w}}_k \frac{m_k}{\hbar} \, - \, m_k \mathbf{u}^2 \frac{t}{2 \hbar}]}. \end{equation} With this expression the relevant measure for the total energy is:
\begin{equation}\label{3xb}
\langle \, \psi_{cm} \, | \, \Big{(} \, \hat{\mathbf{H}}_{jk} \, - \, E_0 \, \Big{)} \, | \, \psi_{cm} \, \rangle,
\end{equation} where $E_0$ is the ground state energy. (It should be noted that the terms, $ \frac{ \mathbf{\hat{ V}_{jk}} \,} { \langle \, \mathbf{\hat{ V}_{jk}} \, \rangle } $, $ \vec{\mathbf{u }} $, and, hence, $\psi_{cm}$ can vary over the duration of the interaction.) Putting together \ref{3x8} through \ref{3xb} we arrive at the definition for
$\gamma_{jk}$ : \begin{equation}\label{3xc} \gamma_{jk} \; \equiv \; \Big{[} \,
|| \, \int \, \psi^* \, \Big{[} (\mathbf{\hat{\nabla}_j}\mathbf{\hat{ V}_{jk}} \, \cdot \, ( \frac{-i \hbar}{m} ) \, \mathbf{\nabla_j} \psi) \, + \, (\mathbf{\hat{\nabla}_k}\mathbf{\hat{ V}_{jk}} \, \cdot \, ( \frac{-i \hbar}{m} ) \, \mathbf{\nabla_k} \psi)
\Big{]} \, ||\, \Big{]} \; \Big{/} \; \Big{[} \,
\langle \, \psi_{cm} \, | \, \Big{(} \, \hat{\mathbf{H}}_{jk} \, - \, E_0 \, \Big{)} \, | \, \psi_{cm} \, \rangle \, \Big{]}. \end{equation}
Our primary interest is in the integrated value of $\gamma_{jk}$ for those pairs of systems that interact most strongly since these interactions are chiefly responsible for inducing the collapse. It should be clear that in these cases the integrated value has a maximum of order, $1$, since, in the center-of-mass frame one expects an approximately complete changeover from potential to kinetic energy (or vice versa). In a transition from a free state to a free state the value would be about $2$. Transitions to bound states would be about $1$. (Radiative effects are neglected since this is a nonrelativistic formulation.) Since $ \int \, \gamma_{jk} \, dt \; \sim \; 1$ it is also clear that $ \parallel \, \int \, \sqrt{\gamma_{jk}} \, d\xi(t) \parallel \; \sim \; 1. $ In more weakly interacting cases the change in potential energy is much smaller than the average kinetic energy over the course of the interaction, and so the integrated value of $\gamma_{jk}$ and the effect on the amplitude are much less.
For the cases of primary interest, the fact that $\gamma_{jk}$ integrates to about $1$ allows us to estimate the size of the amplitude shift associated with each interaction. The shift will be roughly equal to the ratio of interaction energy to total relativistic energy. The upper limit for this value can be estimated by considering electrostatic potentials and electron-electron interactions since these have the lowest mass of any particles in a nonrelativistic theory. The upper limit on the interaction energy must be large enough to accommodate typical measurement situations, while respecting the nonrelativistic constraint, $\mathbf{\hat{V}}_{jk} \, \ll \, mc^2$. The ratio of potential energy to relativistic energy for an electron in the ground state of a hydrogen atom is equal to the square of the fine structure constant, $ \alpha^2 \, \approx \, 5.33*10^{-5}$. This number provides a convenient reference, but it is not quite large enough to cover all measurements of interest.\footnote{In photomultipliers the electrons involved in the measurement process are accelerated up to several hundred electron volts. Since one would not expect all of this energy to be concentrated in a single electron-electron interaction this limit should be sufficient.} Therefore, it is reasonable to consider interaction energy ratios of roughly one order of magnitude above this reference, $\mathbf{\hat{V}}_{jk} \; / \; [(m_j+m_k)c^2] \sim \, 10 \alpha^2 \, \sim \, 5*10^{-4}$. Energy ratios much above this value can no longer be described with simple distance-dependent potentials, and relativistic corrections would need to be taken into account.
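The energy ratios quoted above can be checked with a short numerical sketch (the CODATA value of the fine structure constant is assumed; the factor of $10$ is the text's proposed margin, not a derived quantity):

```python
# Numerical check of the quoted interaction-energy ratios.
alpha = 7.2973525693e-3            # fine structure constant (CODATA)
ratio_hydrogen = alpha ** 2        # |V| / (m_e c^2), hydrogen ground state
upper_limit = 10 * ratio_hydrogen  # proposed nonrelativistic upper limit
print(ratio_hydrogen, upper_limit)
```

The first printed value reproduces $\alpha^2 \approx 5.33*10^{-5}$ and the second the working limit $\sim 5*10^{-4}$.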
It is also of interest to estimate the ratio of the change in the wave function due to the stochastic operator relative to that induced by the Hamiltonian over the course of the interaction. This ratio can be expressed as: \begin{equation}\label{3xd} \Big{[} \int \, \hat{\mathcal{V}}_{jk} \, \, \psi\, d\xi(t) \Big{]} \; / \; \Big{[} \int \, \frac{\mathbf{\hat{V}}_{jk}}{\hbar} \, \psi \, \delta t \Big{]}. \end{equation} Since $\hat{\mathcal{V}}_{jk}$ is defined as $[\sqrt{\gamma_{jk}}(\mathbf{\hat{ V}_{jk}} - \langle \mathbf{\hat{ V}_{jk}} \rangle)] \; / \; [(m_j+m_k)c^2] $, and $ \parallel \, \int \, \sqrt{\gamma_{jk}} \, d\xi(t) \parallel \; \sim \; 1, $ this ratio is approximately $ [ 1 \, / \, (m_e+m_e)c^2] *[\hbar \, / \, \delta t]. $ The time interval, $\delta t$, is inversely proportional to the $\frac{3}{2}$ power of the interaction energy, and the mass of the electron, $m_e$, has again been used to determine an upper limit.
The interaction energy limit described above can be used to estimate the relevant time interval, $\delta t$, by considering the time it takes for the potential energy to change from $0.1$ of its maximum value to its maximum value (or vice versa). The maximum speed of an electron at the energy limit is about $7*10^6 m/s$ and the distance over which the change in $ \mathbf{\hat{ V}_{jk}} $ takes place is about equal to the Bohr radius, $\approx \, 5* 10^{-11}m$. So $\delta t$ is about $10^{-17}$ seconds. The resulting ratio is: \begin{equation}\label{3xe} [ 1 \, / \, (m_j+m_k)c^2] *[\hbar \, / \, \delta t] \; \approx \; [1 \, / \, 10^{-13}] *[ 10^{-34} \, / \,10^{-17} ] \; = \; 10^{-4}. \end{equation}
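The arithmetic of this estimate can be verified directly (a sketch; the speed, length scale, and rounding are the values used in the text, not independently derived):

```python
# Check of the delta-t and stochastic/Hamiltonian ratio estimate, eq. (3xe).
hbar = 1.0546e-34        # J s
m_e_c2 = 8.187e-14       # electron rest energy, J
v = 7e6                  # m/s, max electron speed at the energy limit (text)
a0 = 5e-11               # m, Bohr radius: scale over which V_jk changes

dt_int = a0 / v                        # ~1e-17 s
ratio = hbar / (2 * m_e_c2 * dt_int)   # [1/(m_j+m_k)c^2] * [hbar/delta-t]
print(dt_int, ratio)                   # ratio of order 1e-4
```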
The smallness of this ratio shows that the largest stochastic effect from a single interaction is a fairly small nonlinear perturbation on the Hamiltonian. Both of the ratios derived here play key roles in the discussion in Section 5 which shows how \ref{3x3} brings about the collapse of the wave function. In preparation for that demonstration the next section deals with some issues involved in the evolution of the wave function and the generation of entanglement relations in measurement situations.
\section{Evolution and Entanglement in Configuration Space} \label{sec:4}
As just shown the effects of the stochastic operator from a single interaction are minor perturbations on the standard Schr\"{o}dinger evolution. So, to explain how \ref{3x3} induces complete collapse it is helpful to first review the way in which the wave function evolves in typical measurement situations under the influence of the Hamiltonian. This background is assumed, either explicitly or implicitly, by every serious attempt to understand what happens during quantum measurements.
The interactions that constitute measurements are distance dependent. For this reason measurements intended to determine a particular quantity typically begin by splitting the subject wave function into branches characterized by different values of that quantity. For example, to measure momentum one might send the subject particles through a magnetic field that bends the trajectories in various directions. To measure spin along a particular axis one can also use a magnetic field to separate up and down states. In some cases, of course, the separation occurs simply as a result of the evolution of the wave function.
Some or all of the branches then proceed into devices in which the subject particle can initiate a set of interactions that result in a final state of the device that is macroscopically distinct from its initial state. The arrangement of interactions that carry out this task must be designed in such a way that an elementary or other microscopic system is capable of completely altering the state of a macroscopic system.
In many accounts the detailed interactions involved in correlating the state of an elementary system with that of the measurement apparatus are ignored. Many stochastic collapse proposals reduce this process to a single step described by a very simple Hamiltonian. In a similar fashion, most of the literature on decoherence assumes that it is the interactions between the apparatus and the ``environment'' that are most relevant in explaining the appearance of particular outcomes.
In contrast, in this proposal the individual interactions that take place between the measured system and the measurement apparatus, along with those \textit{within the apparatus}, play the crucial role in bringing about the collapse of the wave function.\footnote{This is not meant to rule out the possibility that subsequent interactions between the apparatus and environment might help to finish the process.}
The critical role that individual, localized interactions play in the development of distinct branches of the wave function is evident in Mott's analysis of $\alpha$-ray tracks\cite{Mott}. By examining the evolution of the wave function in configuration space he showed how these interactions lead to correlations between the positions of the atoms that are ionized by the passage of the $\alpha$-particle. These correlations are responsible for the well defined tracks that are seen in these experiments. Since Mott's paper numerous studies have shown how the establishment of correlations between elementary systems leads to the suppression of interference between various segments of the wave function. The reason for this suppression can be easily understood by considering the description of the evolving system in configuration space. The interactions lead to the separation of the wave function into essentially disjoint regions. This point has been elaborated in several discussions of the branching process in configuration space in connection with the de Broglie-Bohm theory.\cite{Bell_dBR_Bhm,Bell_imp_p_w,Bohm_Hiley,Norsen_PW,Romano}
Because the representation of the wave function in configuration space exhibits the clear separation of the branches, it provides a convenient way to understand how the measurement basis is selected. It also enables a detailed description of the collapse process according to \ref{3x3}. It is, therefore, worth examining this representation in more detail.
Each point in configuration space represents a classical configuration of the total system. The regions in which the wave function differs significantly from zero contain the configurations that are consistent with the prior evolution of the system, in particular, with the interaction history. This history determines how conserved quantities have been exchanged during interactions, and the extent to which the states of various subsystems are correlated.
The correlations play a key role in defining the entanglement structure of the wave function, but entanglement also depends on how the total system is decomposed into subsystems. In other words, it depends on the choice of a coordinate system in configuration space. For the purpose of analyzing the evolution of the wave function in measurement situations the natural choice for a coordinate system is the one used to describe distance-dependent interaction potentials, and is often referred to simply as the position basis. This corresponds to a decomposition of the total system into \textit{interacting} subsystems. That is the decomposition that will be used here. (Alternate decompositions that decouple evolution equations by eliminating any reference to interactions between systems will be reviewed briefly in the next section. These decompositions eliminate entanglement by changing the tensor product structure of the Hilbert space.)
The separate localized segments of the wave function reflect the entanglement structure defined by the position basis. The branches of systems that have interacted and exchanged conserved quantities are represented in the same region of configuration space and their complementary branches are in a different region.
The entanglement relations can be examined in a little more detail by considering the effect of an interaction between two elementary systems. Suppose that system, $j$, is the subject of the measurement, and that a branch of this system interacts with system, $k$, in a region around $(\mathbf{w}_j,\mathbf{w}_k)$. The systems exchange momentum and energy during the interaction, and, as a result, their trajectories are altered in a coordinated manner reflecting the entanglement that now exists between them. The components of the systems that interacted around $(\mathbf{w}_j,\mathbf{w}_k)$ are now linked and will be represented in a region of configuration space around $(\mathbf{w}'_j,\mathbf{w}'_k)$. When either of these components interacts with another system, $l$, it extends the chain of entanglement relations. In this manner further measurement-like interactions build a large structure of correlated components that define a macroscopically distinct state of the total system, represented in a disjoint, localized region in configuration space.
Consider the density, $\psi^* \psi$, corresponding to the system wave function in just this entanglement region. Under ordinary Schr\"{o}dinger evolution, governed only by the Hamiltonian, it will have some integrated value, say, $\mu^*\mu$. To demonstrate that \ref{3x3} is effective in bringing about the collapse of the wave function it must be shown that, as a result of the stochastic action associated with each of the elementary correlating interactions that constitute the measurement, the integrated value, $\mu^* \mu$, is altered to either $1$ or to $0$. This is done in the next section.
\section{Collapse of the Wave Function}
\label{sec:5}
To begin the demonstration it is necessary to calculate the change in the quantity, $\psi^* \psi$, induced by each correlating interaction according to \ref{3x3}. We first restrict \ref{3x3} to a single interaction: \begin{equation}\label{5x1} d\psi \, = \, (-i/\hbar)\mathbf{\hat{H}} \, \psi \, dt \, +\, \hat{\mathcal{V}}_{jk} \, \, \psi\, d\xi(t) - \, \frac{1}{2} \hat{\mathcal{V}}_{jk}^2 \, \psi \, dt. \end{equation} The adjoint equation is: \begin{equation}\label{5x2} d\psi^* \, = \, (+i/\hbar)\mathbf{\hat{H}} \, \psi^* \, dt \, +\, \hat{\mathcal{V}}_{jk} \, \, \psi^* \, d\xi^*(t) - \, \frac{1}{2} \hat{\mathcal{V}}_{jk}^2 \, \psi^* \, dt. \end{equation} The change in the product, $\psi^*\psi$, is: \begin{equation}\label{5x3} d(\psi^*\psi) \; = \; (\psi^* + d\psi^*)(\psi + d\psi) \, - \, (\psi^*\psi) \; = \; d\psi^* \psi \, + \psi^* d\psi \, + d\psi^* d\psi. \end{equation} To calculate the change we must take into account the It\^{o} calculus rules regarding the way in which differentials are treated. In ordinary calculus a term like $d\psi^* d\psi $ that is quadratic in differentials would be discarded. But the Wiener process, $ \xi(t) $, varies as $ \sqrt{t}$, since it is analogous to a random walk. It is this relationship that is responsible for the It\^{o} rule stated in \ref{3x2}, ($d\xi^{*} d\xi = dt $), and it also implies that the final term in \ref{5x3}, $d\psi^{*} d\psi $, must be retained.
Plugging \ref{5x1} and \ref{5x2} into \ref{5x3} yields: \begin{equation}\label{5x4} \begin{array}{ll} d(\psi^*\psi) \; = \;
\frac{i}{\hbar} \Big{[} \, (\mathbf{\hat{H}} \, \psi^*) \psi
\, - \, \psi^* (\mathbf{\hat{H}} \psi) \, \Big{]} \, dt & \\ - \psi^* (\frac{1}{2}\hat{\mathcal{V}}_{jk} \hat{\mathcal{V}}_{jk}) \, \psi \, dt \, - \psi^* (\frac{1}{2}\hat{\mathcal{V}}_{jk} \hat{\mathcal{V}}_{jk} ) \, \psi \, dt \, + \, \psi^* \, \hat{\mathcal{V}}_{jk} \hat{\mathcal{V}}_{jk} \psi \, d\xi^*d\xi \, & \\ \; \;\; \; \; \;\; \; \; \; \; \;\; \; \;\; \;\; + \; \; \psi^* \, \hat{\mathcal{V}}_{jk} \psi \, d\xi^* \, +\, \psi^* \, \hat{\mathcal{V}}_{jk} \psi \, d\xi. \end{array} \end{equation} Since $d\xi^{*} d\xi = dt $ we get: \begin{equation}\label{5x5} \;\;\; d (\psi^* \psi) \; = \; - \, \mathbf{\nabla} \cdot \Big{[} ( \frac{i \hbar}{2 m} ) \, ( \, \psi \mathbf{\nabla}\psi^* \, - \, \psi^* \mathbf{\nabla}\psi) \Big{]} \, dt \; + \; (\psi^* \, \hat{\mathcal{V}}_{jk} \, \psi) \, (d\xi^* \, + \, d\xi). \end{equation} The first term on the right is just the negative of the divergence of the probability current, which represents the change in $\psi^* \psi$ under ordinary Schr\"{o}dinger evolution.
The Schr\"{o}dinger term and the stochastic term, $(\psi^* \, \hat{\mathcal{V}}_{jk} \, \psi) \, (d\xi^* \, + \, d\xi),$ operate in essentially complementary and independent ways in measurement-like situations. The change induced in $\psi^* \psi$ by the Hamiltonian is relatively large and local, while the change resulting from the stochastic term is relatively small and nonlocal. As described in the previous section, an effective measurement process is one in which the individual interactions are arranged so that they clearly separate the wave function into disjoint regions of configuration space. The separation between the various branches is several orders of magnitude greater than the effective range of the measurement interactions.
The change induced by the Hamiltonian on the interacting portion of the wave function is, by definition, confined to the interaction region. Thus, the action of the Hamiltonian on each branch is effectively independent of its action on other branches, and since it is unitary it is also independent of the amplitude of each branch.
The effect of the stochastic operator \textit{is to transfer amplitude between the branch(es) undergoing interaction and the disjoint, noninteracting branches}. The change induced by the individual transfers in the value of $\psi^*\psi$ integrated across each branch is given by \ref{5x5}: $(\psi^* \, \hat{\mathcal{V}}_{jk} \, \psi) \, (d\xi^* \, + \, d\xi)$. The direction of the transfer depends on the sign of the product, $\hat{\mathcal{V}}_{jk} \, (d\xi^* \, + \, d\xi)$, and its magnitude was discussed in Section 3. Because the term, $\sqrt{\gamma_{jk}} \, d\xi$, integrates to approximately $1$, the size of each transfer depends on the ratio of the interaction energy to the total relativistic energy of the systems involved, along with the value of $\psi^* \psi$. Thus, with each individual correlating interaction there is a transfer of amplitude either into or out of the interacting portion of the wave function. Again, it is important to note that the amplitude transfer applies to the \textit{total system} - not only to the interacting subsystems. Thus, the entire branch of the wave function in which the interaction takes place has its amplitude altered by the stochastic action.\footnote{Although this point seems obvious it has been missed by several critics.} Since the wave function is assumed to be normalized and equations of the form, \ref{3x1}, are norm-preserving, the integrated value of $\psi^* \psi$ is limited by $0$ and $1$. Therefore, with a sufficient number of correlating interactions this integrated value must approach one of these limits. Measurement processes typically involve enough individual interactions of adequate strength to insure complete collapse. More detailed quantitative estimates will be provided below.
Despite the straightforward manner in which \ref{3x3} brings about collapse there have been several claims that it is incapable of doing so. These are dealt with here.
It has been objected that the collapse process could be frustrated if measurement-like interactions are occurring in several branches simultaneously. However, in situations in which there is a sufficient number of interactions the probability of this is vanishingly small, as can be seen by considering the numerical estimates from Section 3. The estimate for the maximum ratio of interaction energy to total relativistic energy in nonrelativistic situations was about $5*10^{-4}$. The amplitude transfer process has the character of a random walk, with the step size determined by this ratio. The endpoints of the random walk are $0$ and $1$ for the integrated values of $\psi^* \psi$. The typical number of steps required to reach an endpoint is the square of the inverse of the step size. Given the maximum value for the step size just quoted, this means that the smallest number of individual interactions to complete the process would be in excess of $10^6$. The white noise that helps to determine the direction of the steps varies continuously. This means that to frustrate the collapse the interactions in disjoint branches of the wave function would have to be almost perfectly synchronized in both time and interaction strength over a walk of this length. An individual interaction with this strength has a duration of about $10^{-17}$ to $10^{-16}$ seconds and a range of less than $10^{-10}$ meters. The branches are separated by at least a few orders of magnitude greater than the effective range of the interactions. Variations in propagation delays and the complexity of propagation patterns make it nearly impossible to achieve the synchronization that would be needed. Interactions of lower strength would have longer durations and expanded ranges, but the required number of steps increases with the square of the inverse interaction strength. So the necessary synchronization remains a virtual impossibility.
Another objection that has been raised is that in the cases in which the collapse process is effective it would necessarily reduce the wave function to a single point, implying an infinite increase in energy. This objection is based on a misunderstanding of the way in which \ref{3x3} differs from most collapse equations. Most proposals are designed to collapse the wave function to an eigenfunction of one particular observable, for example, either position or total energy. Since the stochastic operator in \ref{3x3} is based on distance-dependent interaction potentials it has been alleged that it must reduce the wave function to an eigenfunction of potential energy, that is, a single point. As discussed previously, the effect of the operator relative to the Hamiltonian is very small, \textit{and the duration of its action is extremely short}. Moreover, this effect is smeared over the interacting portion of the wave function by the ordinary Hamiltonian evolution. There is simply no way that collapse to a single point can occur.\footnote{The extent to which the wave function is reshaped by the collapse operator is examined in section 8.}
One additional criticism of the current proposal is that because it conserves momentum (as will be shown in Section 7) it cannot be effective in bringing about collapse. The argument is that when momentum is conserved the center of mass evolves unitarily. This feature is alleged to prevent collapse since collapse equations are necessarily nonunitary.
This objection fails because, as long as the dynamics governing the system insures conservation of momentum, the motion of the center of mass \textit{is completely decoupled from} the evolution of the individual components that make up the system. Equations that redefine the system into independently evolving subsystems (such as in simple two-body problems) alter the tensor-product structure of the Hilbert space by eliminating interactions between the components, and changing the entanglement structure. This change in entanglement structure has been discussed by Jekn\'{i}c-Dug\'{i}c, Arsenijev\'{i}c, and Dug\'{i}c, and by Thirring, Bertlmann, K\"{o}hler, and Narnhofer\cite{Dugic_1,Dugic_2,Bertlmann_2,Bertlmann_1}. The authors of \cite{Bertlmann_2} note that most discussions of entanglement implicitly assume a decomposition of the total system into \textit{interacting} subsystems: \begin{quote}
``Thus, it's only the interaction, which we consider to determine the density
matrix, or the measurement set-up, which fixes the factorization.'' \end{quote} The center of mass, regarded as an individual subsystem, is completely disentangled from the remainder of the system. It is not subject to any interactions or external potentials, and so it evolves freely and unitarily. But this implies \textit{nothing} about whether the rest of the system also evolves in a unitary fashion.
This concludes the proof that \ref{3x3} is effective in inducing wave function collapse in typical measurement processes. Note that the reference to the way in which the wave function evolves in these processes has been used in order to facilitate this demonstration. However, it should be clear that generally similar branching processes also occur in natural settings, outside the laboratory. In fact, they are ubiquitous. Thus, one would expect wave function collapse, induced in this manner, to occur all the time.
What remains is to show that the probability of collapse to a particular branch of the wave function complies with the Born rule. Consistency with the Born Rule follows essentially from the general form of \ref{3x1}. This can be shown in detail for \ref{3x3} as follows.
Assuming an adequate number of sufficiently strong correlating interactions the process described by \ref{3x3} can be characterized as a one-dimensional random walk with variable step size that ends with an integrated value of $\psi^* \psi$ over the interacting component of the wave function equal to either $0$ or $1$. If one assumes that this integrated value at some stage of the process is equal to $\mu^* \mu$ then to establish the Born rule we need to show that the probability that the end value will be $1$ is $\mu^* \mu$.
We begin with a lemma that applies to simple one-dimensional random walks. Label one end point as $0$ and the other as $1$, and let the distance between them be divided into $n$ equal intervals. An elementary theorem states that from a point, $m$, the probability of the walk terminating at $1$ is $\frac{m}{n}$. An analogous result holds if steps of variable size are allowed. From a point, $p$, with $ 0 \, \leq \, p \, \leq \, 1$ the probability of reaching $1$ is $p$. To see this, let $\delta$ be a variable increment that is less than or equal to the distance to the nearest end point: $ 0 \, \leq \, p - \delta \, \leq \, p \, \leq \, p + \delta \, \leq \, 1$. Designate the probability of reaching $1$ as $Pr(p)$. We have $Pr(0) \, = \, 0$, $Pr(1) \, = \, 1$, and $ Pr(p) \, = \, \frac{1}{2}[ Pr(p-\delta)] \, + \, \frac{1}{2}[ Pr(p+\delta)]$. Since the last condition holds for all values of $p$ and $\delta$, $Pr(p)$ must be linear. Given the boundary conditions it follows that $ Pr(p) \, = \, p.$ Since a Wiener process is the scaling limit of a one-dimensional random walk, this result holds whether the walk is regarded as a discrete or continuous process.
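The lemma can also be checked by simulation. In this minimal sketch the halving rule for the step size and the absorption threshold are arbitrary illustrative choices; the lemma only requires that each step not exceed the distance to the nearest endpoint:

```python
import random

def walk(p, eps=1e-3, rng=random):
    """Symmetric walk on [0, 1] with variable steps that never
    exceed the distance to the nearest endpoint; absorb near 0 or 1."""
    while eps < p < 1 - eps:
        delta = 0.5 * min(p, 1 - p)   # any rule with delta <= distance works
        p += delta if rng.random() < 0.5 else -delta
    return p > 0.5                     # True if absorbed near 1

random.seed(1)
trials = 20000
frac = sum(walk(0.3) for _ in range(trials)) / trials
print(frac)   # close to 0.3, consistent with Pr(p) = p
```

The martingale property $Pr(p) = p$ holds regardless of how the step sizes vary from one step to the next, which is the point needed for the collapse argument.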
In the collapse process $p$ is replaced by the value of $\psi^* \psi$ integrated over the interacting component, $ \mu^* \mu$. The corresponding value for the complementary (noninteracting) component is $\nu^* \nu \, = 1 \, - \, \mu^* \mu$. What must be shown is that the step size at each stage of the collapse process is less than or equal to both $\nu^* \nu $ and $ \mu^* \mu$.
The stochastic change in $\psi^* \psi$ from a single correlating interaction is given by \ref{5x5}: $(\psi^* \, \hat{\mathcal{V}}_{jk} \, \psi) \, (d\xi^* \, + \, d\xi)$. The individual terms, $\hat{\mathcal{V}}_{jk}$, were defined in Section 3 as $ [\sqrt{\gamma_{jk}}(\mathbf{\hat{ V}_{jk}} - \langle \mathbf{\hat{ V}_{jk}} \rangle)] \; / \; [(m_j+m_k)c^2] $. We first calculate the value of $(\mathbf{\hat{ V}_{jk}} - \langle \mathbf{\hat{ V}_{jk}} \rangle) $ across the interacting and the noninteracting component of $\psi$. The value of $\mathbf{\hat{ V}_{jk}}$ across the noninteracting component is essentially zero. Although the value varies somewhat across the interacting component, it is substantially different from that of the noninteracting portion. Designate the interacting component as $I$, and the orthogonal component as $O$. The average value of $\mathbf{\hat{ V}_{jk}}$ over $I$ is
$ \overline{\mathcal{ V}^I_{jk}} \; \equiv \; [ \, \langle \, I \, | \mathbf{\hat{ V}_{jk}} | \, I \, \rangle \,] \; / \; [ \, \langle \, I \, | \, I \, \rangle \, ]$. The average of $\mathbf{\hat{ V}_{jk}}$ over the entire wave function is then: $ \langle \mathbf{\hat{ V}_{jk}} \rangle \; = \, \mu^*\mu \, \overline{\mathcal{ V}^I_{jk}} $. The average value of $(\mathbf{\hat{ V}_{jk}} - \langle \mathbf{\hat{ V}_{jk}} \rangle) $ across the interacting component is $ \overline{\mathcal{ V}^I_{jk}} \, (1 \, - \, \mu^*\mu \, )
\; = \; \nu^* \nu \, \overline{\mathcal{ V}^I_{jk}} \, $. The corresponding value averaged over the orthogonal component is $ - \, \mu^* \mu \, \overline{\mathcal{ V}^I_{jk}} \, $. Using these averaged values, the expression, $(\psi^* \, \hat{\mathcal{V}}_{jk} \, \psi) $, can be written as: \newline
$ [ \, \langle \, I \, | \, \nu^* \nu \, \overline{\mathcal{ V}^I_{jk}} \, | I \, \rangle \; - \; \langle \, O \, |
\mu^* \mu \, \overline{\mathcal{ V}^I_{jk}} \, | O \, \rangle \, ] \; / \; [(m_j+m_k)c^2]. $ Since $\overline{\mathcal{ V}^I_{jk}} $ is constant over the wave function this can be represented as: \newline
$ \overline{\mathcal{ V}^I_{jk}} \, [ \nu^* \nu \, \langle \, I \, | \, I \, \rangle \; - \; \mu^* \mu \, \langle \, O \,
| O \, \rangle ] \; / \; [(m_j+m_k)c^2]. $
We have $ \langle \, I \, | \, I \, \rangle \; = \, \mu^*\mu $ and $ \langle \, O \, | \, O \, \rangle \; = \, \nu^*\nu $. Therefore, it is clear that the stochastic operator associated with each individual interaction multiplies each of the two orthogonal components by equal and opposite amounts: \newline $ \pm \; \mu^* \mu \, \nu^* \nu \, \overline{\mathcal{ V}^I_{jk}} \; / \; [(m_j+m_k)c^2]. $ Since this relationship holds for all pairs of systems, $j,k$, if multiple interactions are taking place at the same time we simply sum over them. The full stochastic effect on the various branches at each time is given by: \begin{equation}\label{5x8}
\pm \; (\mu^* \mu \, \nu^* \nu) \, \mathbf{ \sum_{jk}} \, (\psi^* \; \overline{\mathcal{ V}^I_{jk}} \; / \; [(m_j+m_k)c^2] \, \sqrt{\gamma_{jk}} \; \psi) \, (d\xi^*(t) \, + \, d\xi(t)). \end{equation}
The estimate for the maximum ratio of $\overline{\mathcal{ V}^I_{jk}} \; / \; [(m_j+m_k)c^2] $ from Section 3 was $5*10^{-4}$. The terms, $ \sqrt{\gamma_{jk}} $, are tied to the rate at which the interaction proceeds and, when conjoined with the terms, $d\xi^*(t) \, + \, d\xi(t)$, are designed to integrate to a value of about $1$ over the course of the interaction. The stochastic process, $ \xi(t)$, has a variance of $t_2 - t_1$ over the period from $t_1$ to $t_2$. Therefore, to ensure that the norm of the sum, $ \mathbf{ \sum_{jk}} \, (\psi^* \; \overline{\mathcal{ V}^I_{jk}} \; / \; [(m_j+m_k)c^2] \, \sqrt{\gamma_{jk}} \; \psi) \, (d\xi^*(t) \, + \, d\xi(t))$, is less than or equal to $1$, we need only take time increments, $dt$, that are sufficiently small. Since $\mu^* \mu$ and $\nu^* \nu$ are both less than or equal to $1$, this guarantees that the total changes in the branches of $\psi^* \psi$ given by \ref{5x8} are less than or equal to $1$. Thus, \ref{3x3} reproduces the Born rule.
It should be clear that the number of correlating interactions in a typical measurement instrument during a measurement process is sufficient to bring about nearly complete collapse. Instruments consist of well in excess of $10^{23}$ particles and a substantial fraction of these are involved in the chain or cascade of interactions that result in macroscopic differences in the state of the device. With an average step size of $s$ the number of steps required to complete a random walk between the point $0$ and $1$ is on the order of $ 1 / s^2$. The estimates of Section 3 showed that the maximum step size is about $5*10^{-4}$. Measurement interactions could be a few orders of magnitude less than this and still be effective. Measurements are often initiated with individual interaction energies comparable to those of visible photons, and these would give interaction energy ratios in the range of $10^{-6}$ to $10^{-5}$. However, subsequent interaction energies in the process are frequently enhanced by the instrument. It seems reasonable to take the range of $10^{-7}$ to $10^{-6}$ as a lower limit for interaction energy ratios in most measurement situations.
The other key factor in determining the step size is the product, $\mu^* \mu \nu^* \nu$. As $\mu^* \mu$ approaches either $0.99$ or $0.01$ this term shrinks to a magnitude of about $0.01$, reducing the step size by a couple of orders of magnitude. With the estimates from the previous paragraph this would set an upper limit on the number of steps required at about $10^{16}$ to $10^{17}$. It is important to note that, as the measurement proceeds, the great majority of these interactions would be occurring in parallel - not sequentially. Weaker interactions would take about $10^{-11}$ seconds to complete, but since so many would be taking place at the same time the overall completion time for the process would be some small fraction of a second.
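The step-count estimate above can be reproduced with a back-of-envelope helper (hypothetical, not from the text; it simply combines the average step size, $s \sim (\text{energy ratio}) \cdot \mu^*\mu\,\nu^*\nu$, with the random-walk count of $\sim 1/s^2$ steps):

```python
def required_steps(energy_ratio, mu_sq):
    # Rough count of random-walk steps needed to complete collapse:
    # the average step size is ~ energy_ratio * mu_sq * (1 - mu_sq),
    # and a walk with step size s needs on the order of 1 / s**2 steps.
    s = energy_ratio * mu_sq * (1.0 - mu_sq)
    return 1.0 / (s * s)
```

With an energy ratio of $10^{-6}$ and $\mu^*\mu = 0.99$ this gives roughly $10^{16}$ steps, matching the upper limit quoted above.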
The way in which \ref{3x3} ensures consistency with conservation laws in individual situations will be reviewed in the next few sections.
\section{Conserved Quantities Are Shared Across Entangled Branches}
\label{sec:6}
A proper understanding of conserved quantities in quantum theory must start with a recognition of the fact that \textit{all} systems are quantum systems. The idealized simplifications that are used to analyze particular types of situations and the classical boundaries that are placed around these situations must be seen as just pragmatic approximations. The use of such approximations is dictated by the impossibility of tracking all the interactions that couple a ``system" to its environment. But the fact that we cannot do a detailed calculation of the effect of all of these interactions does not prevent us from drawing general conclusions about their effect by applying some fundamental results of quantum theory.
One such key result is that entanglement is a \textit{generic} effect of the interaction between systems. This has been demonstrated by Gemmer and Mahler\cite{Gemmer_Mahler} and by Durt\cite{Durt_a,Durt_1}. An important implication of this result is that there is almost always some small amount of entanglement between an elementary system that has been ``prepared" in an (approximate) eigenstate of some observable and the preparation apparatus.
A photon interacting with a beam-splitter provides a simple example of such entanglement between a system and apparatus. (This example has been discussed in unpublished works by this author\cite{Gillis_3} and by Marletto and Vedral\cite{Marletto_Vedral}.) During the interaction the reflected branch of the photon exchanges momentum with the beam-splitter, and alters its state by a tiny amount. The extremely narrow wave function representing the beam-splitter in position space implies an extremely large spread in the momentum representation of the beam-splitter's state. Thus the exchange of momentum with the photon branch alters that state by a very tiny amount. The magnitude of the inner product between the slightly altered mirror state and the pre-interaction state of the apparatus can be represented as $ |1-\delta| $, where $ \delta \ll 1 $, but $ \delta > 0$. This nonzero $\delta$ implies some (very small) amount of entanglement between the photon and the beam-splitter. If, for the sake of simplicity, one assumes that the photon branches have equal amplitude, the entanglement, in terms of $\delta$, can be approximated as $ (\delta/2) | 1 - \log(\delta/2) | $\cite{Gillis_3}. It is important to note that momentum is strictly conserved within each branch of the entangled wave function.
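For equal-amplitude branches with a real overlap $\langle M | M' \rangle = 1 - \delta$, the reduced density matrix of the photon has eigenvalues $\delta/2$ and $1 - \delta/2$, so the quoted approximation can be checked directly. The sketch below is illustrative (the function names are hypothetical, and natural logarithms are used throughout):

```python
import math

def entanglement_entropy(delta):
    # Exact von Neumann entropy of the photon's reduced density matrix
    # for the state (|t>|M> + |r>|M'>)/sqrt(2) with <M|M'> = 1 - delta.
    # The eigenvalues of the reduced matrix are delta/2 and 1 - delta/2.
    lam = delta / 2.0
    return -lam * math.log(lam) - (1.0 - lam) * math.log(1.0 - lam)

def entanglement_approx(delta):
    # Small-delta approximation quoted in the text:
    # (delta/2) * |1 - log(delta/2)|.
    lam = delta / 2.0
    return lam * abs(1.0 - math.log(lam))
```

For $\delta = 10^{-4}$ the exact entropy and the approximation agree to better than one part in a thousand, and the entanglement shrinks as $\delta$ shrinks, as expected.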
If the photon is subsequently detected in one of the branches then, because it is entangled with the beam-splitter, the resulting collapse of the wave function encompasses not only the photon but also the correlated state of the beam-splitter. No matter which branch is detected the total momentum of the photon, beam-splitter and measurement apparatus is the same after the detection as it was before the initial photon-mirror interaction.
Because of the generic nature of the relationship between interaction and entanglement this conservation of the relevant quantities among ``system", preparation apparatus, and measurement instrument holds in virtually all situations. The fact that these quantities are exchanged and shared across branches of entangled systems through interactions implies that definite values of conserved quantities can be assigned to these entangled branches - not just to elementary subsystems that are assumed to be in factorizable eigenstates of the relevant observable.
We have no trouble in assigning definite values of conserved quantities to branches of entangled systems when those systems consist of a small number of elementary particles such as singlet states, correlated photons generated during parametric down conversion, or the kinds of states typically considered in discussions of quantum computation. But, we lose sight of the fact that well-defined quantities can be distributed across branches when those branches include somewhat larger subsystems.
The aim of this and the next two sections is to show that the proposed collapse equation (\ref{3x3}) is consistent with the conservation laws of standard, unitary quantum theory in individual measurement situations in regard to momentum, angular momentum, and energy. Given this, the foregoing considerations about entanglement require a careful examination of exactly what those laws imply. A quantity, $\mathbf{q}$, is conserved if the observable, $\mathbf{\hat{Q}}$, with which it is associated commutes with the evolution operator, $\mathbf{\hat{H}}$, for arbitrary states, $| \psi \rangle$:
$\mathbf{\hat{H}} \mathbf{\hat{Q}} | \psi \rangle
\; = \; \mathbf{\hat{Q}} \mathbf{\hat{H}} | \psi \rangle \, $. When this condition holds the time derivative of $\mathbf{q}$, integrated over the state, $| \psi \rangle$, is zero:
$ (d/dt) \langle \, \psi \, | \mathbf{\hat{Q}} | \psi \rangle \; = \; \frac{i}{\hbar} \, \langle \, \psi \, | \mathbf{\hat{H}} \mathbf{\hat{Q}} \, - \, \mathbf{\hat{Q}} \mathbf{\hat{H}} | \psi \rangle \; = \; 0 \, $.
Note that these laws are \textit{not} restricted to situations in which the state, $| \psi \rangle$, is (assumed to be) in an eigenstate of the operator, $\mathbf{\hat{Q}}$. The argument presented earlier in this section implies that the systems under consideration are almost never in exact eigenstates. To impose such a restriction would render the conservation laws essentially vacuous.
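The commutator condition can be verified on a toy example. The following sketch (a minimal, hypothetical illustration with $2 \times 2$ matrices and $\hbar = 1$; not part of the proposal) evaluates $(i/\hbar)\langle \psi | \mathbf{\hat{H}}\mathbf{\hat{Q}} - \mathbf{\hat{Q}}\mathbf{\hat{H}} | \psi \rangle$ for commuting and noncommuting pairs:

```python
def matmul(A, B):
    # Product of two 2x2 complex matrices given as nested lists.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expectation_rate(H, Q, psi):
    # (d/dt) <psi|Q|psi> = (i/hbar) <psi| HQ - QH |psi>, with hbar = 1.
    HQ, QH = matmul(H, Q), matmul(Q, H)
    comm = [[HQ[i][j] - QH[i][j] for j in range(2)] for i in range(2)]
    v = [sum(comm[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return 1j * sum(psi[i].conjugate() * v[i] for i in range(2))
```

For a commuting pair the rate vanishes for every state; for a noncommuting pair (e.g. the Pauli matrices $X$ and $Z$) it is generally nonzero, so the quantity is not conserved.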
The question of whether the conservation laws hold in a particular situation depends not only on the structure of equation \ref{3x3}, but also on the initial conditions that are assumed. The claim being made here is only that \ref{3x3} \textit{is consistent with} these laws. What will be shown in the next two sections is that the proposed equation guarantees that, within each entangled branch,\footnote{The tensor product structure and the basis in which entangled branches are defined is assumed to be that which is picked out by the interactions. This basis coincides with the position basis in configuration space as described in Section 4.} the relevant quantities, $\mathbf{q}$, are conserved when the branch is normalized. In order to establish overall consistency it must be assumed that the normalized quantities in different (affected) branches are equal. The remainder of this section will explain why this assumption is reasonable for a wide range of initial conditions.
A key background assumption of the proposed collapse equation is that the ``system" under consideration consists of a very large number of subsystems that have been interacting over an extended period. For such a system the properties of any elementary subsystem are determined essentially by its interactions with other subsystems, with the most recent interactions having the greatest effect. Branching processes are initiated by these \textit{conservative} interactions. Consider again the example of the photon and beam-splitter. It is clear that the interaction between them does not introduce any difference in momentum between the reflected and transmitted branch. The momentum change in the reflected branch of the photon is offset by a momentum change in one of the (slightly different) beam-splitter states. The total momentum in each (normalized) branch is the same after the interaction as it was prior to it. The same kind of analysis can be applied to any conservative interaction, involving any conserved quantity. Therefore, after an extended period of interaction one would expect that the total value of these quantities would be essentially the same in all of the branches.
This argument can be extended to cover varying configurations within each branch. As the wave function evolves under the influence of the Hamiltonian it continually recombines the configurations. Thus, what distinguishes the configurations at different points is the distribution of various quantities among the subsystems - not the total value of those quantities computed across all subsystems. In other words, after a sufficient number of interactions along with ordinary wave propagation, the total value of a conserved quantity in a particular configuration is very close to the average value of that quantity calculated across the branch. Therefore, in what follows it will be assumed that the total value of conserved quantities is the same at each point in configuration space where the wave function has nonzero amplitude.
With this assumption in the next two sections it will be shown that \ref{3x3} is consistent with strict conservation of momentum, angular momentum, and energy in individual measurements. The demonstrations for momentum and angular momentum are fairly straightforward, and they hold exactly. These cases are dealt with in Section 7. Because of nonrelativistic limitations the situation with energy is somewhat more complicated. It is dealt with separately in Section 8.
\section{Momentum and Angular Momentum} \label{sec:7}
In this section I will show that the stochastic operator defined in section 3 maintains the \textit{normalized value} of momentum and angular momentum at each point in configuration space. In other words the change in these quantities induced by the collapse equation is directly proportional to the change that is induced in the squared amplitude at that point: \begin{equation}\label{7x1} d(\psi^*\mathbf{\hat{Q}}\psi) \, / \, (\psi^*\mathbf{\hat{Q}}\psi) \; = \; d(\psi^* \, \psi) \, / \, (\psi^* \, \psi). \end{equation} Together with the assumption described at the end of Section 6 this implies that these quantities are strictly conserved under evolution governed by \ref{3x3}.
The change in $d(\psi^* \, \psi) $ was displayed in equation \ref{5x5}: \newline $ d (\psi^* \psi) \; = \; - \, \mathbf{\nabla} \cdot \Big{[} ( \frac{i \hbar}{2 m} ) \, ( \, \psi \mathbf{\nabla}\psi^* \, - \, \psi^* \mathbf{\nabla}\psi) \Big{]} \, dt \; + \; (\psi^* \, \hat{\mathcal{V}}_{jk} \, \psi) \, (d\xi^* \, + \, d\xi). $ The steps leading up to \ref{5x5} can be repeated to compute $d(\psi^*\mathbf{\hat{Q}}\psi) $: \begin{equation}\label{7x2} \begin{array}{ll} d (\psi^* \, \mathbf{\hat{Q}} \,\psi) \; = \; \frac{i}{\hbar} \Big{[} \, (\mathbf{\hat{H}} \, \mathbf{\hat{Q}} \,\psi^*) \psi \, - \, \psi^* (\mathbf{\hat{Q}} \,\mathbf{\hat{H}} \psi) \, \Big{]} \, dt \; & \\ \; \;\; \; \; \;\; \; \; \; \; \;\; \; \; \;\; \; \;\; \;\; \; \;
-\psi^* (\frac{1}{2} \hat{\mathcal{V}} \hat{\mathcal{V}}) \, \,\mathbf{\hat{Q}} \, \psi \, dt \; - \; \psi^* \, \hat{\mathbf{Q}} \, (\frac{1}{2}\hat{\mathcal{V}} \hat{\mathcal{V}}) \, \psi \, dt \,
+ \; \psi^* \, \hat{\mathcal{V}} \hat{\mathbf{Q}} \hat{\mathcal{V}} \, \psi \, d\xi^*d\xi \; & \\ \; \;\; \; \; \;\; \; \; \; \; \;\; \; \; \;\; \; \;\; \;\; + \; \; \psi^* \, \hat{\mathcal{V}} \hat{\mathbf{Q}} \, \psi \, d\xi^* \; + \;\psi^* \, \hat{\mathbf{Q}} \hat{\mathcal{V}} \, \psi \, d\xi \; & \\ \; \;\; \; \; \;\; \; \; \; \; \;\; \; \; \;\; \; = \; \; -\frac{1}{2}\psi^* [(\hat{\mathcal{V}})^2 \mathbf{\hat{Q}} \, + \, \hat{\mathbf{Q}} (\hat{\mathcal{V}})^2 ]\psi \, dt \; + \; \psi^*( \hat{\mathcal{V}} \hat{\mathbf{Q}} \hat{\mathcal{V}} \, )\psi \, dt \; \; & \\ \; \;\; \; \; \;\; \; \; \; \; \;\; \; \; \;\; \; \;\; \; \; + \; \; \psi^* \, \hat{\mathcal{V}} \hat{\mathbf{Q}} \, \psi \, d\xi^* \; + \;\psi^* \, \hat{\mathbf{Q}} \hat{\mathcal{V}} \, \psi \, d\xi. \end{array} \end{equation}
The first term in \ref{7x2} represents the change due to ordinary Schr\"{o}dinger evolution. By assumption $\mathbf{\hat{H}}$ commutes with $\mathbf{\hat{Q}}$, but commutativity of the operators only implies that $\mathbf{\hat{H}} \mathbf{\hat{Q}} \, = \, \mathbf{\hat{Q}}\mathbf{\hat{H}} $, when they are integrated over the entire wave function. We need a somewhat stronger condition. We need to show that the change in $(\psi^*\mathbf{\hat{Q}}\psi)$ matches the change in $(\psi^* \, \psi)$ at each point in configuration space: $ d(\psi^*\mathbf{\hat{Q}}\psi) \; \sim \; d(\psi^* \, \psi).$ For the quantities under consideration here this can be seen by observing that the probability current, $ ( \frac{i \hbar}{2 m} ) \, ( \, \psi \mathbf{\nabla}\psi^* \, - \, \psi^* \mathbf{\nabla}\psi) \, $, is simply (except for the factor, $m$, in the denominator) the expression for momentum in a slightly disguised form. In a manner of speaking the quantities, momentum and angular momentum, ``ride along" with the changes in $(\psi^* \, \psi)$ on the probability current. Thus, the changes due to Schr\"{o}dinger evolution in both $(\psi^*\mathbf{\hat{Q}}\psi)$ and $(\psi^* \, \psi)$, decouple from those induced by the collapse operator.
Therefore, we are left with the task of showing that the change in $(\psi^*\mathbf{\hat{Q}}\psi)$ resulting from the collapse operator, \newline $ -\frac{1}{2}\psi^* [(\hat{\mathcal{V}})^2 \mathbf{\hat{Q}} \, + \, \hat{\mathbf{Q}} (\hat{\mathcal{V}})^2 ]\psi \, dt \; + \; \psi^*( \hat{\mathcal{V}} \hat{\mathbf{Q}} \hat{\mathcal{V}} \, )\psi \, dt \; + \; \; \psi^* \, \hat{\mathcal{V}} \hat{\mathbf{Q}} \, \psi \, d\xi^* \; + \;\psi^* \, \hat{\mathbf{Q}} \hat{\mathcal{V}} \,\psi \, d\xi$, is proportional to $(\psi^* \, \hat{\mathcal{V}} \, \psi) \, (d\xi^* \, + \, d\xi)$. This can be done by showing that \begin{equation}\label{7x3} \hat{\mathbf{Q}} \hat{\mathcal{V}} \, \psi \; = \; \hat{\mathcal{V}} \, \hat{\mathbf{Q}} \psi \end{equation} at each point in configuration space.
The only spatial dependence in the collapse operator, $\hat{\mathcal{V}} $, is in the terms representing the conservative interaction potentials, $\mathbf{\hat{ V}_{jk}}(\mathbf{w}_j \, - \, \mathbf{w}_k) $. Since this implies that $ \mathbf{\hat{\nabla}_j} \, \mathbf{\hat{ V}_{jk}} \; = \; -\mathbf{\hat{\nabla}_k} \, \mathbf{\hat{ V}_{jk}} $ it is quite simple to show that the relation, \ref{7x3}, holds for the two operators in which we are interested here, momentum and orbital angular momentum.
The momentum operator can be expanded as: \begin{equation}\label{7x4} \mathbf{\hat{P}} \, \equiv \, -i \hbar \sum_i \,\mathbf{\hat{\nabla}_i}. \end{equation} Because the action of $\mathbf{\hat{\nabla}_j} $ cancels that of $\mathbf{\hat{\nabla}_k} $ when acting on $ \mathbf{\hat{ V}_{jk}}, $ and given the fact that $ \mathbf{\hat{\nabla}_i} \, \mathbf{\hat{ V}_{jk}} \; = \; 0 $ for $i \, \neq \, j, \; \; i \, \neq \, k $ it is clear that the operator, $\mathbf{\hat{P}} $, simply passes through $\hat{\mathcal{V}} $, and acts only on the wave function, $\psi$. So the normalized value of momentum is conserved at every point in configuration space.
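The key identity used here, $ \mathbf{\hat{\nabla}_j} \, \mathbf{\hat{ V}_{jk}} \; = \; -\mathbf{\hat{\nabla}_k} \, \mathbf{\hat{ V}_{jk}} $, can be checked with finite differences on a stand-in potential. The sketch below (illustrative only; a one-dimensional Gaussian well substitutes for the actual interaction potential) confirms that the gradients with respect to the two coordinates cancel:

```python
import math

def pair_potential(xj, xk):
    # Stand-in 1-D interaction potential depending only on the
    # relative coordinate, V_jk(w_j - w_k); a Gaussian well here.
    r = xj - xk
    return -math.exp(-r * r)

def grad_sum(xj, xk, h=1e-6):
    # Central-difference estimate of dV/dx_j + dV/dx_k, which should
    # vanish for any potential of the relative coordinate.
    dj = (pair_potential(xj + h, xk) - pair_potential(xj - h, xk)) / (2 * h)
    dk = (pair_potential(xj, xk + h) - pair_potential(xj, xk - h)) / (2 * h)
    return dj + dk
```

The cancellation is exact for any function of $x_j - x_k$, which is why the total momentum operator passes through $\hat{\mathcal{V}}$.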
The orbital angular momentum operator can be represented as: \begin{equation}\label{7x5}
\mathbf{\hat{L}} = -i \hbar \sum_i \, \mathbf{w_i} \mathbf{\times} \mathbf{\hat{\nabla}_i}. \end{equation} Explicit expansions of the x-component for systems, $j$ and $k$, are as follows: \begin{equation}\label{7x6} \mathbf{\hat{L}_x}(\mathbf{w_j}) \; = \; -i\hbar(y_j\partial/\partial{z_j} \, - \, z_j\partial/\partial{y_j}). \end{equation} \begin{equation}\label{7x7} \mathbf{\hat{L}_x}(\mathbf{w_k}) \; = \; -i\hbar (y_k\partial/\partial{z_k} \, - \, z_k\partial/\partial{y_k}). \end{equation}
The action of $ \mathbf{\hat{L}_x} $ on each $\mathbf{\hat{V}_{jk}} $ is given by:
\begin{equation}\label{7x8}
\begin{array}{ll} [ \mathbf{\hat{L}_x}(\mathbf{w_j}) \, + \, \mathbf{\hat{L}_x}(\mathbf{w_k}) ] \mathbf{\hat{V}_{jk}}
\; = \;
-i\hbar[ \, y_j \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{z_j}
\, + \, y_k \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{z_k}
& \\ \; \;\; \; \;\; \; \;\; \; \;\; \; \; \;
\; - \;
z_j \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{y_j}
\, - \, z_k \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{y_k} \, ].
\end{array}
\end{equation} Since $\mathbf{\hat{V}_{jk}}$ depends only on the relative coordinate, $\mathbf{w_j} \, - \, \mathbf{w_k}$, we have both:
\begin{equation}\label{7x9}
\partial{\mathbf{\hat{V}_{jk}}}/\partial{z_k} \; = \; - \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{z_j};
\; \;\; \; \;\;
\partial{\mathbf{\hat{V}_{jk}}}/\partial{y_k} \; = \; - \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{y_j},
\end{equation} so the right-hand side of \ref{7x8} reduces to $ -i\hbar \, [ \, (y_j \, - \, y_k) \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{z_j} \; - \; (z_j \, - \, z_k) \, \partial{\mathbf{\hat{V}_{jk}}}/\partial{y_j} \, ]$. For the central potentials considered here the gradient, $\mathbf{\hat{\nabla}_j} \, \mathbf{\hat{V}_{jk}}$, is parallel to $\mathbf{w_j} \, - \, \mathbf{w_k}$, so this expression vanishes. It is thus clear that $ \mathbf{\hat{L}} $, like $ \mathbf{\hat{P}} $, effectively operates only on the wave function, $\psi$, and not on $\hat{\mathcal{V}} $. So, \ref{3x3} also strictly conserves orbital angular momentum at a point.
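The vanishing of the total $\mathbf{\hat{L}_x}$ acting on a central pair potential can likewise be confirmed numerically. The sketch below is illustrative (a Gaussian well stands in for the actual potential, and the overall $-i\hbar$ factor is dropped):

```python
import math

def central_potential(wj, wk):
    # Stand-in central potential V_jk(|w_j - w_k|): a Gaussian well.
    r2 = sum((a - b) ** 2 for a, b in zip(wj, wk))
    return -math.exp(-r2)

def partial(f, wj, wk, which, axis, h=1e-6):
    # Central difference with respect to one coordinate of w_j or w_k.
    pts = []
    for sign in (+h, -h):
        a, b = list(wj), list(wk)
        (a if which == 'j' else b)[axis] += sign
        pts.append(f(a, b))
    return (pts[0] - pts[1]) / (2 * h)

def lx_on_potential(wj, wk):
    # [L_x(w_j) + L_x(w_k)] V_jk, dropping the overall -i*hbar factor:
    # y_j dV/dz_j - z_j dV/dy_j + y_k dV/dz_k - z_k dV/dy_k.
    return (wj[1] * partial(central_potential, wj, wk, 'j', 2)
            - wj[2] * partial(central_potential, wj, wk, 'j', 1)
            + wk[1] * partial(central_potential, wj, wk, 'k', 2)
            - wk[2] * partial(central_potential, wj, wk, 'k', 1))
```

The result is zero up to finite-difference error at any pair of positions, as the cross product of the relative coordinate with the (parallel) gradient vanishes.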
This completes the demonstration that \ref{3x3} is consistent with strict conservation of momentum and angular momentum in individual measurement situations. The issue of energy conservation is dealt with in the next section.
\section{Energy and Nonrelativistic Limitations} \label{sec:8}
Conventional nonrelativistic quantum theory accurately characterizes the status of energy conservation for stationary states and in situations with freely evolving noninteracting particles. In these states there is no radiation, and even though the theory does not represent relativistic corrections to kinetic energy associated with mass increase, there is no breach of conservation since the correction terms do not change.
The proposed collapse equation agrees with conventional theory in these cases since it reduces to the Schr\"{o}dinger equation when there is no exchange of energy between systems. This is obvious when there is no interaction. The collapse operator also goes to zero in stationary states, since it includes the terms, $\gamma_{jk}$, which are based on the rate of change of potential energy, and that rate is zero in these situations.
In all other situations when there is an exchange of energy between systems there is some inaccuracy in the conventional theory's prediction of perfect conservation of kinetic-plus-potential energy. This is due to the fact that it lacks the means to represent some forms of energy. These include radiation, relativistic corrections to the kinetic energy formula, antiparticles, and interactions that cannot be fully characterized with scalar potentials.
The collapse operator in \ref{3x3} does induce a change in energy at each point in configuration space that is proportional to the change in $\psi^* \psi$ just as in the cases of momentum and angular momentum analyzed in the previous section. This accounts for the primary change in energy that occurs in measurement situations,\footnote{There are also energy changes induced by the measurement apparatus, but these do not pose any problem for energy conservation.} and this change is consistent with strict energy conservation.
However, in addition to correctly accounting for the change in energy in the measured subsystem the proposed collapse equation does predict some small deviations from strict conservation of energy. This is due to the fact that it is based on the conventional nonrelativistic theory and that it is designed to act in exactly the situations in which the conventional theory fails to account for all of the energy changes. What will be shown here is that these deviations originate from exactly the same types of interactions that lead to small errors in the predictions of the conventional theory. The fact that the deviations predicted by the collapse proposal are qualitatively and quantitatively similar to the inaccuracies of the conventional theory strongly suggests that they should be regarded as a consequence of the nonrelativistic formulation of both theories, rather than as an indication of a real problem with energy conservation.
The change induced in a particular quantity by the collapse process was given in equation \ref{7x2}. In parallel with the calculations of the previous section the effects of the collapse operator on energy can be evaluated by considering the relationship between $ \mathbf{\hat{H}} \, \hat{\mathcal{V}}_{jk} $ and $ \hat{\mathcal{V}}_{jk} \,\mathbf{\hat{H}}.$ Since it is clear that, at each point in configuration space, $\mathbf{\hat{V}} \hat{\mathcal{V}} \; = \; \hat{\mathcal{V}} \mathbf{\hat{V}}, $ the only deviations from perfect conservation that can arise are those involving the kinetic energy terms. Because the kinetic energy operator involves the second derivative, the expansion of $-\frac{\hbar^2}{2m} \mathbf{\nabla}^2 \, \hat{\mathcal{V}}_{jk} $ is more complicated than the earlier calculations: \begin{equation}\label{8x1} \begin{array}{ll} (-\frac{\hbar^2}{2m} \mathbf{\nabla}^2) \, \hat{\mathcal{V}}_{jk} \; \; = \; \; \hat{\mathcal{V}}_{jk} (-\frac{\hbar^2}{2m} (\mathbf{\nabla_j}^2 \, + \, \mathbf{\nabla_k}^2) ) \; & \\ \; \; \; \; \; \; \; \; \; \; \; \; + \; \; (-\frac{\hbar^2}{2m}) \, \{ \, \mathbf{\nabla_j}^2 \hat{\mathcal{V}}_{jk} \, + \, \mathbf{\nabla_k}^2 \hat{\mathcal{V}}_{jk} \; + \; 2 ( \mathbf{\nabla_j} \hat{\mathcal{V}}_{jk} \, \cdot \, \mathbf{\nabla_j} \, + \, \mathbf{\nabla_k} \hat{\mathcal{V}}_{jk} \, \cdot \, \mathbf{\nabla_k}) \, \} . \end{array} \end{equation} As in the cases of momentum and angular momentum the changes in energy induced by the Hamiltonian and the collapse operator decouple.
The collapse-induced changes can be computed by substituting $\mathbf{\hat{H}}$ for $\mathbf{\hat{Q}}$ in the relevant part of \ref{7x2}, using \ref{8x1}, and expanding: \begin{equation}\label{8x2} \begin{array}{ll} d (\psi^* \, \mathbf{\hat{Q}} \,\psi)_C \; = \; \; \psi^* \, \hat{\mathcal{V}} \mathbf{\hat{H}} \, \psi \, \, (d\xi^* \, + \, d\xi) \; & \\ & \\ - \; \; (\frac{\hbar^2}{2m}) \, \psi^* \, \{ \sum_{j<k} \, [ (\mathbf{\nabla_j}^2 \hat{\mathcal{V}}_{jk} \, + \, \mathbf{\nabla_k}^2 \hat{\mathcal{V}}_{jk}) \psi \; + \; 2 (\mathbf{\nabla_j}\hat{\mathcal{V}}_{jk} \, \cdot \, \mathbf{\nabla_j}\psi \, + \, \mathbf{\nabla_k}\hat{\mathcal{V}}_{jk} \, \cdot \, \mathbf{\nabla_k}\psi)]\} \, d\xi & \\ & \\ + \; \; (\frac{\hbar^2}{2m}) \psi^* \psi \, (\mathbf{\nabla} \hat{\mathcal{V}} \cdot \mathbf{\nabla} \hat{\mathcal{V}} ) \, dt. \end{array} \end{equation}
The first line expresses the desired proportionality between $ d (\psi^* \, \mathbf{\hat{Q}} \,\psi)_C$ and $d (\psi^* \, \psi)_C$, and mirrors the relationships for momentum and angular momentum derived in the previous section. If it were not for the deviations shown on the second and third line this would imply that strict energy conservation is maintained within the nonrelativistic theory. If it can be shown that the problematic terms displayed on the second and third lines are artifacts of the nonrelativistic formulation, and that they do not represent real deviations then the case will have been made that wave function collapse is consistent with strict conservation of energy.\footnote{As argued in Section 6, the common presumption that the apparent narrowing of the wave function associated with collapse \textit{must} lead to violations of energy conservation stems from the failure to recognize the entanglement of the measured system with systems with which it has previously interacted and the fact that these systems are also involved in the collapse.}
Unless a very high level of precision is required relativistic corrections to the nonrelativistic expressions become relevant only at fairly high energies. The magnitudes of the inaccuracies in the predictions of the conventional theory are well below the level of precision that is usually required. It will be shown here that the apparent deviations implied by the collapse proposal are of similar magnitudes. There are substantial differences between the deviations represented by the terms on the second and third line of \ref{8x2}, and so they will be dealt with separately.
Consider an interaction between system $j$ and system $k$, and focus on just the effect on system $j$. Using the definition of $\hat{\mathcal{V}}_{jk}$ from Section 3 the relevant part of the second line can be expanded as: \begin{equation}\label{8x3}
(-\frac{\hbar^2}{2m}) \mathbf{\big{[}} \psi^* \, \psi (\mathbf{\nabla_j}^2 \, \mathbf{\hat{ V}_{jk}} ) \; + \; 2 \psi^*(\mathbf{\nabla_j} \mathbf{\hat{ V}_{jk}} \, \cdot \, \mathbf{\nabla_j}\psi) \mathbf{\big{]}} (\frac{1}{(m_j+m_k) c^2} ) \, (\sqrt{\gamma_{jk} }\, d\xi) . \end{equation} When there is a change in kinetic energy some of that change is unaccounted for in the conventional theory due to relativistic change in mass. (There is also some radiation, although it is several orders of magnitude less than the kinetic energy discrepancies.) So let us compare the deviations expressed in \ref{8x3} to the rate at which potential energy is converted to kinetic energy according to the Schr\"{o}dinger equation: \begin{equation}\label{8x4}
(\frac{i \hbar}{2m}) \mathbf{\big{[}} \psi^* \, \psi (\mathbf{\nabla_j}^2 \, \mathbf{\hat{ V}_{jk}} ) \; + \; 2 \psi^*(\mathbf{\nabla_j} \mathbf{\hat{ V}_{jk}} \, \cdot \, \mathbf{\nabla_j}\psi) \mathbf{\big{]}} . \end{equation} The expressions in the square brackets in \ref{8x3} and \ref{8x4} are identical. This illustrates the fact that the deviations implied by \ref{3x3} arise in exactly the same situations in which the conventional theory fails to account for all of the changes in kinetic energy associated with the lowest order relativistic corrections.
In \ref{8x3} the term $(\sqrt{\gamma_{jk} }\, d\xi)$ integrates to a magnitude of order, $1$, as described in Section 3. One can see that the size of the deviation is determined by the ratio of the interaction energy to the total relativistic energy. This deviation can be compared to the discrepancy associated with the conventional theory. The full relativistic kinetic energy can be represented as: $KE_{rel} \, = \, \sqrt{m_0^2c^4 +p^2c^2} \, - m_0c^2. $ Expanding this expression about $\mathbf{p}^2 \, = \, 0$, and retaining the two lowest order terms we get: \begin{equation}\label{8x5} KE_{rel} \, = \, m_0c^2\{ [1 + \frac{1}{2}p^2 / m_0^2c^2 - \frac{1}{8}p^4/m_0^4c^4 + \, ... ] \, - 1 \} \, = \, \frac{1}{2}p^2 / m_0 - \frac{1}{8}p^4/m_0^3c^2 \, + \, ... \end{equation} Since the nonrelativistic formula for kinetic energy is $KE_{nr} \, = \, p^2 / (2m),$ the first-order correction can be rewritten as $- KE_{nr} * (\frac{1}{2})(KE_{nr} /mc^2).$ The change in kinetic energy is (approximately) equal to the change in potential energy. Thus, like the deviation implied by \ref{3x3}, the unaccounted-for change in kinetic energy, $ \sim (\Delta \mathbf{\hat{ V}} / mc^2) * KE_{nr}$, in the conventional nonrelativistic theory is also proportional to the ratio of the interaction energy to the total relativistic energy.
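The size of the first-order relativistic correction can be confirmed directly. The following sketch (illustrative, not from the text) works in units with $m_0 = c = 1$ and compares the exact relativistic kinetic energy with the nonrelativistic value plus the correction $-KE_{nr} \cdot (\frac{1}{2}) KE_{nr}$:

```python
import math

def kinetic_energies(p):
    # Units with m0 = c = 1.  Returns the exact relativistic kinetic
    # energy and the nonrelativistic value plus the first-order
    # correction -KE_nr * (1/2) * KE_nr (since m0*c^2 = 1).
    ke_rel = math.sqrt(1.0 + p * p) - 1.0
    ke_nr = 0.5 * p * p
    return ke_rel, ke_nr - 0.5 * ke_nr * ke_nr
```

At $p = 0.1\, m_0 c$ the corrected value agrees with the exact one to the next order, $\sim p^6/16$, while the plain nonrelativistic formula is already off by $\sim p^4/8$.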
It is also worth noting that for interactions involved in transitions from free states to free states the energy deviations that occur when the two systems are receding from one another tend to cancel those that occurred when they were approaching.
For interactions that lead to transitions to or from bound states the conventional theory fails completely to account for energy changes. These situations involve the emission or absorption of photons with energies approximately equal to the change in kinetic energy. The standard theory is useful for calculating the energy discrepancies, but cannot explain them.
The deviations listed on the third line of \ref{8x2} involve two factors of the ratio of interaction energy to total relativistic energy. Thus, for typical nonrelativistic situations they are several orders of magnitude smaller than those listed on the second line. There are a couple of compelling reasons for regarding these apparent deviations as artifacts of the nonrelativistic formulation rather than as real effects. First, they are smaller than radiative effects by a ratio of $(c/v)$. Second, in cases with much higher ratios of interaction energy to total relativistic energy, creation and annihilation processes would become relevant, and the antiparticle content of the wave function would need to be considered. Since there is always some small amount of antiparticle content in any localized wave function\footnote{For a calculation of the antiparticle content of localized wave packets see the discussion by Bjorken and Drell in \cite{Bjorken_Drell} (p.39). The ratio of the apparent deviation to radiated energy is based on the classical Larmor formula which yields a ratio of radiated power to energy change of $ (\frac{2}{3})\frac{c}{v} \mathbf{\big{ [}}\frac{(e^2/r)} {(mc^2)}\mathbf{\big{ ]}}^2. $ } it is reasonable to view these small discrepancies as resulting from the limitations of the nonrelativistic formulation.
To summarize, wave function collapse is not restricted to just the measured subsystem and the measurement apparatus. It also involves the systems with which the measured subsystem has previously interacted, even if the relevant entanglement is very small. The change in energy of the measured subsystem that is not attributable to interactions with the measurement apparatus is compensated for by correlated changes in these entangled systems. The small deviations that remain unaccounted for are attributable to the fact that the nonrelativistic theory is not able to describe certain forms of energy.
The next section examines the possibility of extending the current collapse proposal to account for relativistic effects.
\section{Possible Relativistic Extensions } \label{sec:9}
Although the formulation of the collapse equation presented here is essentially nonrelativistic, as mentioned earlier, the idea that wave function collapse is induced by the interactions that establish correlations between systems was originally motivated by the desire to reconcile relativity with the nonlocal aspects of quantum theory. The focus on these interactions stemmed from the central role that they play in the transmission of information. It is, therefore, reasonable to ask what the prospects are for extending this proposal to achieve full consistency with relativity.
As argued in previous works\cite{Gillis_1,Gillis_2} full consistency requires a reconsideration of the foundations of \textit{both} quantum theory \textit{and} relativity. Recalling Einstein's repeated insistence that any inferences about the metric properties of space and time must be based on the observation of physical objects and processes\cite{Einstein_1,Einstein_2}, one must ask whether those properties are underdetermined by the probabilistic character of some physical processes. To what extent could such underdetermination hide some features of spacetime structure?
Consider that in quantum field theory Lorentz invariance is not guaranteed simply by the limiting speed of light. It is necessary to add the assumption that spacelike-separated interactions commute.\cite{Weinberg} Given the demonstration by Bell\cite{Bell_EPR} that quantum correlations imply some sort of nonlocal effects, what is it that prevents the assignment of an unambiguous sequencing to spacelike-separated interactions involving entangled systems? It is precisely the probabilistic character of nonlocal quantum effects: because these effects are inherently probabilistic, they do not disrupt the relativistic description of spacetime.
These considerations raise the possibility that there are spacetime features such as an evolving, spacelike hypersurface, along which the nonlocal quantum effects propagate. This structure remains hidden because of the nondeterministic nature of these effects. From this perspective the preferred reference frame used here to construct a low energy collapse theory can be viewed as just a simplified special case of such an evolving surface.
In order to extend the current proposal to a relativistic version the most straightforward approach would be to simply replace the preferred frame by a randomly evolving spacelike surface. Although any such surface does imply a sequencing of spacelike-separated interactions, since it evolves in a purely random fashion it remains impossible, in principle, to associate the specific sequencing with any observable physical effects. This approach also has the advantage that it maintains the equivalence of all reference frames, which is one of the essential features of relativity. Since the evolving surface would be unobservable in principle, the usual relativistic description of spacetime should emerge from the proposed extension of \ref{3x3}.
Such an extension would need to characterize interactions in a fully relativistic manner, so a stochastic operator based on interactions would involve more than scalar potentials. The account would need to incorporate all of the features of quantum field theory: massless particles, antiparticles, particle creation and annihilation, and other high energy effects. Such a development is, obviously, not trivial. But I believe that the challenges it would face are of a technical, rather than a fundamental conceptual, nature.
\section{Summary} \label{sec:10}
The hypothesis that wave function collapse is induced by the entangling interactions that generate decoherent branches leads to a stochastic collapse equation with several attractive properties. It reduces to the Schr\"{o}dinger equation for stationary states and freely evolving systems, and deviates from it by a very small amount in situations that involve a few interacting elementary particles. It ensures collapse to a definite outcome for systems of mesoscopic size with the correct probability and on a time scale that is consistent with our macroscopic experience. Because the strength and duration of the collapse effects are determined by the ratio of interaction energy to total relativistic energy, it does not require the introduction of any new physical constants.
Furthermore, it is consistent with exact conservation of momentum and orbital angular momentum in individual experiments, and it is consistent with energy conservation in those circumstances in which conventional nonrelativistic quantum theory correctly predicts such conservation. This consistency is based on the recognition that all physical systems are quantum systems, and that they are almost always entangled to some extent. Since conserved quantities are shared across entangled branches the apparent nonconservation of a quantity in an elementary target system is compensated for by corresponding changes in (usually larger) systems with which it has interacted in the past.
Finally, compatibility with relativity can be achieved by replacing the fixed rest frame with an evolving spacelike surface. Nonlocal quantum effects propagate along the surface, and their nondeterministic nature prevents superluminal information transmission. This surface could be taken to evolve in a purely random fashion. This possibility respects the equivalence of all inertial frames, one of the central principles of relativity. With this approach one might reasonably hope to develop an account that explains measurement outcomes in terms of fundamental processes, while preserving the essential features of contemporary theory.
\end{document}
\begin{document}
\begin{abstract}We introduce a notion of discrete topological complexity in the setting of simplicial complexes, using only the combinatorial structure of the complex by means of the concept of contiguous simplicial maps. We study how this new invariant relates to both simplicial and topological LS-category. \end{abstract}
\keywords{Topological complexity; Simplicial complex; Contiguous maps; LS-category}
\subjclass[2010]{55R80, 55U10, 55M30}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}
Topological complexity, introduced by Farber \cite{FARBER1}, is a topological invariant defined to solve problems in robotics such as motion planning. For this purpose one needs an algorithm that, for each pair of points of the so-called configuration space of a mechanical or physical device, computes a path connecting them, in a continuous way. The key idea was to interpret that algorithm in terms of a section of the so-called path-fibration, which is a well-known map in algebraic topology.
The aim of the present paper is to establish a discrete version of this approach. This is interesting because many motion planning methods transform a continuous problem into a discrete one. Finite simplicial complexes are the proper setting to develop a discrete version of topology. The main technical point is to avoid the construction of a path-space $PK$ associated to the simplicial complex $K$. To do so, we use a different but equivalent characterization of topological complexity, as explained in Section \ref{UNO}.
In Section \ref{INVARIANT} we prove that the new invariant $\mathop{\mathrm{TC}}(K)$ only depends on the strong homotopy type of $K$, as defined by Barmak and Minian \cite{BM}. In Section \ref{CATEGORY} we compare this new invariant with the simplicial LS-category of $K$, defined by us in two previous papers \cite{FMV1,FMMV2}, thus giving a simplicial version of Farber's well known results \cite{FARBER1}. Finally, in Section \ref{GEOM}, $\mathop{\mathrm{TC}}(K)$ is compared with the topological complexity $\mathop{\mathrm{TC}}(\vert K \vert)$ of the geometric realization of the complex $K$.
\paragraph{\em Acknowledgements} We thank Nick Scoville for useful conversations, and Jes\'us Gonz\'alez for pointing out us the reference \cite{JESUSGONZ}. Corollary \ref{SAMEHOM} was pointed out to the second author by John Oprea and inspired our definition of the discrete topological complexity.
The first and the fourth authors were partially supported by MINECO Spain Research Project MTM2015-65397-P and Junta de Andaluc\'{\i}a Research Groups FQM-326 and FQM-189. The second author was partially supported by MINECO Spain Research Project MTM2016-78647-P and FEDER and by Xunta de Galicia GPC2015/006. The third author was partially supported by DFF-Research Project Grants from the Danish Council for Independent Research.
\section{Preliminaries} \label{UNO}
\subsection{Topological complexity}\label{TOPBACK} We include here some motivational remarks.
Farber's topological complexity \cite{FARBER1,FARBER2} is a particular case of the \v{S}varc genus or sectional category of a map \cite{CLOT,SVARC}. \begin{definition} The {\em \v{S}varc genus} $\mathop{\mathrm{secat}}(f)$ of a map $f\colon X \to Y$ is the minimum integer $n\geq 0$ such that the codomain $Y$ can be covered by open sets $V_0,\dots,V_n$ with the property that over each $V_j$ there exists a local section $s_j$ of $f$ (that is, a continuous map $s_j\colon V_j \to X$ such that $f\circ s_j=\iota_j$, where $\iota_j\colon V_j\subset Y$ is the inclusion). \end{definition}
\begin{definition}The {\em topological complexity} of a topological space $X$ is $\mathop{\mathrm{TC}}(X)=\mathop{\mathrm{secat}}(\pi)$, where $\pi\colon PX \to X\times X$ is the so-called path fibration, that is, the map sending an arbitrary path $\gamma\colon [0,1] \to X$ into the pair $(\gamma(0),\gamma(1))$ formed by the initial and the final points of the path. \end{definition}
\begin{remark} In algebraic topology it is common to consider normalized versions of concepts such as the \v{S}varc genus, topological complexity and LS-category. For instance, $\mathop{\mathrm{cat}} X$ is often defined, as in \cite{CLOT}, in such a way that contractible spaces have category zero. This is the convention we followed in our papers \cite{FMV1,FMMV2} and we will maintain it here. However, a non-normalized definition (which equals $\mathop{\mathrm{cat}} X+1$) is used in some papers, as Farber did in \cite{FARBER1}. \end{remark}
An important result is that for some topological spaces (including the geometric realization of any finite simplicial complex) the topological complexity can be computed by taking {\em closed} subspaces instead of open subspaces. This is discussed in \cite[Chapter 4]{FARBER2}.
Now we proceed to modify the definition of sectional category. \begin{definition} The {\em homotopic \v{S}varc genus} of the map $f\colon X \to Y$, denoted by $\mathop{\mathrm{hsecat}}(f)$, is the minimum integer $n\geq 0$ such that there exists an open covering of the codomain $Y=V_0\cup\dots\cup V_n$, with the property that for each $V_j$ there exists a local {\em homotopic} section $s_j$, that is, a continuous map $s_j\colon V_j \to X$ such that there is a homotopy $f\circ s_j\simeq \iota_j$, where $\iota_j\colon V_j\subset Y$ is the inclusion. \end{definition} Clearly $\mathop{\mathrm{hsecat}}(f)\leq \mathop{\mathrm{secat}}(f)$. For a particular class of maps both invariants coincide.
\begin{proposition}If $\pi\colon X\to Y$ is a fibration (that is, a map with the homotopy lifting property) then $\mathop{\mathrm{hsecat}}(\pi)=\mathop{\mathrm{secat}}(\pi)$. In particular this is true for the path fibration $\pi \colon PX \to X\times X$. \end{proposition}
Now, it is well known that any map factors, up to homotopy equivalence, through a fibration. We will apply it to the particular case of the diagonal map $\Delta_X \colon X \to X\times X$.
\begin{proposition} There is a homotopy equivalence $X\simeq PX$ such that the diagram in Figure \ref{DIAGPROJ} commutes up to homotopy (the maps are $c(x)=x$, the constant path, and $\alpha(\gamma)=\gamma(0)$, the initial point). \begin{figure}\label{DIAGPROJ}
\end{figure} \end{proposition}
\begin{corollary}\label{SAMEHOM}The maps $\pi$ and $\Delta_X$ have the same homotopic \v{S}varc genus, and both coincide with the topological complexity of $X$, $$\mathop{\mathrm{hsecat}}(\Delta_X)=\mathop{\mathrm{hsecat}}(\pi)=\mathop{\mathrm{secat}}(\pi)=\mathop{\mathrm{TC}}(X).$$ \end{corollary}
\begin{proposition}\label{EQUIV}Let $U\subset X\times X$ be an open subset. The following conditions are equivalent. \begin{enumerate} \item There is a section $s_U\colon U \to PX$ of the path fibration $\pi$; \item the restrictions to $U$ of the projections $p_1,p_2\colon X\times X \to X$ are homotopic maps\label{DOS}; \item either ${p_1}_{\mid U}$ or ${p_2}_{\mid U}$ is a section (up to homotopy) of the diagonal map $\Delta_X\colon X\to X\times X$. \end{enumerate} \end{proposition}
\subsection{Simplicial complexes}\ We refer the reader to Kozlov's book \cite{KOZLOV} for a modern survey of simplicial complexes and to Spanier's book \cite{SPANIER}, as well as to our paper \cite{FMV1}, for the classical notions of simplicial maps, simplicial approximation and contiguity.
Let $K$ be a finite abstract simplicial complex. Let $K^2=\catpro{K}$ be the categorical product as defined in \cite[Definition 4.25]{KOZLOV}. The set of vertices $V(K^2)$ is $V(K)\times V(K)$, and the simplices of $K^2$ are defined by the rule $\sigma\in K^2$ if and only if $\pi_1(\sigma)$ and $\pi_2(\sigma)$ belong to $K$, where $\pi_1,\pi_2$ are the projections from $K^2$ into $K$.
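For computations it is useful to note that the maximal simplices of $K^2$ are exactly the products $\tau\times\rho$ of maximal simplices $\tau,\rho$ of $K$. The following is a minimal Python sketch of this construction; the function names are ours, not from any library:

```python
from itertools import combinations, product

def maximal_simplices_of_product(max_simplices):
    """Maximal simplices of the categorical product K^2 = K ∏ K.

    A set sigma of vertex pairs is a simplex of K^2 iff both coordinate
    projections pi_1(sigma), pi_2(sigma) are simplices of K; hence the
    maximal simplices of K^2 are the boxes tau x rho with tau, rho
    maximal in K."""
    return {frozenset(product(tau, rho))
            for tau in max_simplices for rho in max_simplices}

def is_simplex_of_product(sigma, simplices):
    """Test sigma ⊆ V(K) x V(K) directly against the defining rule."""
    pi1 = frozenset(v for v, _ in sigma)
    pi2 = frozenset(w for _, w in sigma)
    return pi1 in simplices and pi2 in simplices

# Example: K = the boundary of the triangle, whose maximal simplices
# are its three edges.
edges = [frozenset('ab'), frozenset('bc'), frozenset('ac')]
simplices = {frozenset(c) for e in edges for r in (1, 2)
             for c in combinations(e, r)}
boxes = maximal_simplices_of_product(edges)
```

For $K=\partial\Delta^2$ this yields nine maximal simplices with four vertices each, which is the count used in the final example of the paper.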
Let $\varphi\colon K \to L$ be a simplicial map, and define $\varphi^2=\varphi\,\Pi\,\varphi\colon K^2 \to L^2$ by $$\varphi^2(v,w)=(\varphi(v),\varphi(w)).$$ A very important property for our purposes is:
\begin{proposition}\label{PRODSIM} If $\varphi,\psi \colon K \to L$ are simplicial maps in the same contiguity class (denoted by $\varphi\sim\psi$), then $\varphi^2\sim \psi^2$. \end{proposition} \begin{proof}Being in the same contiguity class, $\varphi \sim \psi$, means that there is a sequence of simplicial maps $h_i\colon K \to L$, $i=0,\dots,m$, such that $h_0=\varphi$, $h_m=\psi$, and the maps $h_i$ and $h_{i+1}$ are contiguous (denoted $h_i\sim_c h_{i+1}$), so we can assume without loss of generality that $\varphi \sim_c \psi$. By definition it means that for each simplex $\sigma\in K$ the union of vertices $\varphi(\sigma)\cup\psi(\sigma)$ is a simplex of $L$.
Let $\sigma=\{(v_1,w_1),\dots, (v_n,w_n)\}$ be a simplex in $K^2$. By definition, that means that $\pi_1(\sigma)=\{v_1,\dots,v_n\}$ and $\pi_2(\sigma)=\{w_1,\dots,w_n\}$ are simplices of $K$. Then $$\varphi(\pi_1(\sigma))\cup \psi (\pi_1(\sigma))=\{\varphi(v_1),\dots,\varphi(v_n),\psi(v_1),\dots,\psi(v_n)\}$$ belongs to $L$. Analogously $\varphi(\pi_2(\sigma))\cup \psi (\pi_2(\sigma))\in L$. This is enough to prove that $\varphi^2(\sigma)\cup \psi^2(\sigma)\in L^2$. \end{proof}
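The contiguity condition used in this proof is easy to test mechanically. A small sketch, under the assumption that simplicial maps are stored as vertex dictionaries and complexes as sets of frozensets (all names are illustrative):

```python
from itertools import combinations, product

def contiguous(phi, psi, K, L):
    """phi ~_c psi iff, for every simplex sigma of K, the union
    phi(sigma) ∪ psi(sigma) is a simplex of L."""
    return all(
        frozenset(phi[v] for v in sigma) | frozenset(psi[v] for v in sigma) in L
        for sigma in K)

def square(phi):
    """The product map phi^2 = phi ∏ phi acting on vertex pairs."""
    return {(v, w): (phi[v], phi[w]) for v in phi for w in phi}

# Example: on the full 2-simplex the identity and a constant map are
# contiguous, and by Proposition PRODSIM so are their squares.
V = 'abc'
full = {frozenset(c) for r in (1, 2, 3) for c in combinations(V, r)}
full2 = {frozenset(c) for r in range(1, 10)
         for c in combinations(list(product(V, V)), r)}
ident = {v: v for v in V}
const = {v: 'a' for v in V}
```

On $\partial\Delta^2$, by contrast, the identity and a constant map are not contiguous, since the image of an edge together with the constant vertex would have to span the whole (missing) triangle.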
\begin{remark}There is another notion of simplicial product, the so-called {\em direct product} $K\times K$ where it is necessary to fix an order on $V(K)$. The difference with $\catpro{K}$ is that the geometric realization $\vert K \times K\vert$ is homeomorphic to $\vert K \vert \times \vert K \vert$, while $\vert \catpro{K} \vert$ has only the homotopy type of the latter. However, Proposition \ref{PRODSIM} would only be true for the direct product if the maps $\varphi,\psi$ preserve the order. \end{remark}
\begin{remark} Recently, Gonz\'alez \cite{JESUSGONZ} introduced a combinatorial version $SC(K)$ of the topological complexity which is based on a simplicial analog of part \eqref{DOS} of Proposition \ref{EQUIV}. However, his notion is based on the direct product $K\times K$ and it seems not easy to compare it with our notion of simplicial complexity. \end{remark}
\section{Discrete topological complexity}\label{INVARIANT} In Section \ref{TOPBACK} we have explained the reason of the following definitions, which avoid the need of a simplicial version $PK$ of the path space.
\subsection{Farber subcomplexes} Let $\Omega\subset K^2$ be a simplicial subcomplex of the product $K^2=\catpro{K}$ and let $\iota_\Omega\colon \Omega \subset K^2$ be the inclusion map.
Let $\Delta \colon K \to K^2$ be the diagonal map $\Delta(v)=(v,v)$.
\begin{definition}We say that $\Omega\subset K^2$ is a {\em Farber subcomplex} if there exists a simplicial map $\sigma\colon \Omega\subset K^2 \to K$ such that $\Delta\circ \sigma \sim \iota_\Omega$. \end{definition} The map $\sigma$ will be called a {\em local homotopic section} of the diagonal, where ``homotopic'' must be understood in the sense of belonging to the same contiguity class.
\begin{definition} The {\em discrete topological complexity} $\mathop{\mathrm{TC}}(K)$ of the simplicial complex $K$ is the least integer $n\geq 0$ such that $K^2$ can be covered by $n+1$ Farber subcomplexes.
In other words, $\mathop{\mathrm{TC}}(K)\leq n$ if and only if $K^2=\Omega_0\cup\cdots\cup \Omega_n$, and there exist simplicial maps $\sigma_j\colon \Omega_j \to K$ such that $\Delta\circ \sigma_j\sim \iota_j$, where $\iota_j\colon \Omega_j\subset K^2$, for $j=0,\dots,n$, are inclusions. \end{definition}
Sometimes we shall call $\mathop{\mathrm{TC}}(K)$ the {\em simplicial complexity} of $K$ (not to be confused with the notion $SC(K)$ defined by Gonz\'alez in \cite{JESUSGONZ}). Notice that $\mathop{\mathrm{TC}}(K)$ is defined in purely combinatorial terms, involving neither the geometric realization $\geo{K}$ of the complex, nor the notion of topological homotopy, nor that of simplicial approximation.
\subsection{Motion planning}
Farber's complexity is a topological invariant introduced to solve problems in robotics such as motion planning \cite{FARBER2}. In this section we explain how our notion of discrete topological complexity is related to the motion planning problem on a simplicial complex.
Let $\Omega\subset K^2$ be a Farber simplicial subcomplex and let $\sigma\colon \Omega \to K$ be the associated section (up to contiguity) of the diagonal, that is, such that $\Delta\circ \sigma \sim \iota_\Omega$. Then for each pair of points $x,y\in K$ such that $(x,y)\in \Omega$, the point $\sigma(x,y)$ is an {\em intermediate point} between $x$ and $y$ in the following sense: consider the sequence of contiguous maps $h_0\sim_c\cdots\sim_c h_j\sim_c\cdots\sim_c h_m$ connecting $\Delta\circ \sigma$ and $\iota_\Omega$. Denote $h_j(x,y)=(x_j,y_j)$. Then $x_m=x$, $y_m=y$ and $x_0=\sigma(x,y)=y_0$. That means that we have a sequence of points \begin{equation}\label{PATH} x=x_m,\dots, x_0=\sigma(x,y)=y_0,\dots,y_m=y. \end{equation} Moreover, contiguity implies that two consecutive points in the above sequence belong to the same simplex: in fact, since $h_j\sim_c h_{j+1}$, the points $h_j(x,y)=(x_j,y_j)$ and $h_{j+1}(x,y)=(x_{j+1},y_{j+1})$ generate a simplex of $K^2$ (that is, they are either equal or the vertices of an edge). By definition of the product $K^2$, this means that the points $x_j$ and $x_{j+1}$ (resp. $y_j$ and $y_{j+1}$) generate a simplex of $K$. Hence the sequence (\ref{PATH}) gives an edge-path on $K$ connecting the points $x$ and $y$.
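The edge-path in (\ref{PATH}) can be read off mechanically from the contiguity chain. A minimal sketch, assuming each map $h_j$ is stored as a dictionary on vertex pairs (a toy chain, with illustrative names):

```python
def edge_path(chain, x, y):
    """Read off the edge-path (PATH) from a contiguity chain
    h_0 = Δ∘σ, ..., h_m = ι_Ω evaluated at the pair (x, y).
    Returns the vertex sequence x = x_m, ..., σ(x, y), ..., y_m = y."""
    xs = [h[(x, y)][0] for h in chain]   # first coordinates x_0, ..., x_m
    ys = [h[(x, y)][1] for h in chain]   # second coordinates y_0, ..., y_m
    assert xs[0] == ys[0]                # h_0 = Δ∘σ collapses both ends
    return list(reversed(xs)) + ys[1:]

# Toy example on the full 2-simplex, with σ = π1 and the two-step
# chain Δ∘σ ~c ι.
V = 'abc'
pairs = [(v, w) for v in V for w in V]
h0 = {p: (p[0], p[0]) for p in pairs}    # Δ∘σ
h1 = dict(zip(pairs, pairs))             # the inclusion ι
```

By the contiguity argument above, consecutive vertices of the returned sequence always span a simplex of $K$, so the output really is an edge-path.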
\subsection{Invariance}
Recall from \cite{BM} that two simplicial complexes $K,L$ have the same ``strong homotopy type'', $K\sim L$, if there is a sequence of elementary strong collapses and expansions connecting them. This is equivalent to the existence of simplicial maps $\varphi\colon K \to L$ and $\psi\colon L \to K$ such that $\varphi\circ\psi\sim 1_L$ and $\psi\circ\varphi\sim1_K$ (we recall that $\sim$ means ``being in the same contiguity class'').
\begin{theorem} The discrete topological complexity is an invariant of the strong homotopy type. That is, $K\sim L$ implies $\mathop{\mathrm{TC}}(K)=\mathop{\mathrm{TC}}(L)$.
\end{theorem}
\begin{proof}
From Prop. \ref{PRODSIM} we have
$$\varphi^2\circ\psi^2=(\varphi\circ \psi)^2\sim (1_L)^2=1_{L^2}$$ and analogously $\psi^2\circ\varphi^2\sim 1_{K^2}$, so we have $K^2\sim L^2$. Moreover the diagram
in Figure \ref{DIAG}
\begin{figure}\label{DIAG}
\end{figure}
verifies $\Delta_L\circ \varphi=\varphi^2\circ \Delta_K$ and $\Delta_K\circ\psi=\psi^2\circ\Delta_L$.
Now let $\Omega\subset K^2$ be a Farber subcomplex of $K^2$, that is, there exists a simplicial map $\sigma\colon \Omega\to K$ such that $\Delta_K\circ \sigma\sim\iota_\Omega$.
Then the inverse image $\Lambda=(\psi^2)^{-1}(\Omega)\subset L^2$ is a Farber subcomplex of $L^2$, because (see Figure \ref{DIAG}) the map
$$\lambda=\varphi\circ\sigma\circ{\psi^2}_{\vert \Lambda}\colon \Lambda\subset L^2\to L$$ verifies
\begin{align*}
\Delta_L\circ \lambda=&\Delta_L\circ \varphi\circ\sigma\circ\psi^2\circ \iota_\Lambda\\
=&\varphi^2\circ \Delta_K\circ\sigma\circ\psi^2\circ\iota_\Lambda\sim \varphi^2\circ \iota_\Omega\circ \psi^2\circ\iota_\Lambda\\
=&(\varphi^2\circ\psi^2)_{\vert \Lambda} \sim 1_{L^2}\circ\iota_\Lambda\\
=&\iota_\Lambda.
\end{align*}
Let $\mathop{\mathrm{TC}}(K)\leq n$, that is, there exists a covering $K^2=\Omega_0\cup\cdots\cup\Omega_n$ where $\Omega_j$, $j=0,\dots,n$, are Farber subcomplexes. Then the corresponding $\Lambda_j=(\psi^2)^{-1}(\Omega_j)$, $j=0,\dots,n$, form a Farber covering of $L^2$, hence $\mathop{\mathrm{TC}}(L)\leq n$. The other inequality is proved in the same way.
\end{proof}
We have the following characterization of Farber subcomplexes, which is the simplicial version of Proposition \ref{EQUIV}.
\begin{theorem}\label{PROPIEDADES}Let $\Omega\subset K^2$ be a subcomplex of the categorical product. The following conditions are equivalent: \begin{enumerate} \item $\Omega$ is a Farber subcomplex. \item the restrictions to $\Omega$ of the projections are in the same contiguity class, that is, $(\pi_1)_{\vert\Omega}\sim (\pi_2)_{\vert\Omega}$. \item Either $(\pi_1)_{\vert\Omega}$ or $(\pi_2)_{\vert\Omega}$ is a section (up to contiguity) of the diagonal $\Delta\colon K\to K^2$. \end{enumerate} \end{theorem}
\begin{proof} $(1\Rightarrow 2)$ If $\Omega\subset K^2$ is a Farber subcomplex, then there exists $\sigma\colon \Omega\to K$ such that $\Delta\circ \sigma\sim \iota_\Omega$. But $\Delta\circ \sigma$ is the map $(\sigma,\sigma)$ defined by $\omega\in\Omega\mapsto (\sigma(\omega),\sigma(\omega))$. On the other hand $\iota_\Omega=(\pi_1\circ\iota_\Omega,\pi_2\circ\iota_\Omega)$. Then $$(\sigma,\sigma)\sim (\pi_1\circ\iota_\Omega,\pi_2\circ\iota_\Omega)$$ which implies, by composing with the projections, that $$(\pi_1)_{\vert\Omega}=\pi_1\circ\iota_\Omega\sim \sigma \sim \pi_2\circ\iota_\Omega=(\pi_2)_{\vert\Omega}.$$ $(2\Rightarrow 3)$ If $(\pi_1)_{\vert\Omega}\sim(\pi_2)_{\vert\Omega}$, define $\sigma\colon\Omega\to K$ by $\sigma=(\pi_1)_{\vert\Omega}$. Then $\iota_\Omega(x,y)=(x,y)$, for $(x,y)\in\Omega$, while $(\Delta\circ\sigma)(x,y)=(x,x)$. We have by hypothesis $$\iota_\Omega=((\pi_1)_{\vert\Omega},(\pi_2)_{\vert\Omega})\sim ((\pi_1)_{\vert\Omega},(\pi_1)_{\vert\Omega})=\Delta\circ\sigma.$$ $(3\Rightarrow 1)$ If $\sigma=(\pi_i)_{\vert\Omega}$ verifies $\Delta\circ\sigma\sim \iota_\Omega$, then $\Omega$ is a Farber subcomplex, by definition. \end{proof}
\section{Relationship with simplicial LS-category}\label{CATEGORY} One of Farber's main results for topological complexity relates it to a well known classical invariant, the Lusternik-Schnirelmann category \cite{CLOT}. In this section we get analogous results for the discrete setting, by using the simplicial LS-category of a simplicial complex introduced by the authors in \cite{FMV1, FMMV2}.
\subsection{Comparison with the category of $K$}
\begin{definition}Let $K$ be an abstract simplicial complex. A subcomplex $L\subset K$ is {\em categorical} if the inclusion $\iota_L\colon L\subset K$ belongs to the contiguity class of some constant map $L\to K$, that is, $\iota_L\sim \ast$. The (normalized) simplicial {\em LS-category} $\mathop{\mathrm{scat}} K$ of the simplicial complex $K$ is the minimum number $m\geq 0$ such that there are categorical subcomplexes $L_0,\dots,L_m$ which cover $K$, that is, $K=L_0\cup\cdots \cup L_m$. \end{definition}
\begin{remark} As explained in \cite{FMV1}, a categorical subcomplex may not be strongly collapsible in itself, but it must be in the ambient complex. Equivalently, it is the inclusion $\iota_L$, and not the identity $1_L$, which belongs to the contiguity class of a constant map. \end{remark}
The first inequality proved by Farber directly compares the topological complexity $\mathop{\mathrm{TC}}(X)$ of a space with the LS-category $\mathop{\mathrm{cat}} X$. We shall prove that this result also holds in the discrete setting.
\begin{theorem}For any abstract simplicial complex we have $$\mathop{\mathrm{scat}} K\leq \mathop{\mathrm{TC}}(K).$$ \end{theorem}
\begin{proof}If $\mathop{\mathrm{TC}}(K)\leq n$, let $K^2=\Omega_0\cup\cdots\cup\Omega_n$ be a covering by Farber subcomplexes. Fix a base point $v_0\in K$ and let $i_0\colon K \to K^2$ be the simplicial map $i_0(w)=(v_0,w)$. Then, let us take the inverse images $$\Sigma_j =(i_0)^{-1}(\Omega_j)\subset K, \quad j=0,\dots,n.$$ Since $K=\Sigma_0\cup\cdots\cup\Sigma_n$, if we prove that each $\Sigma_j$ is a categorical subcomplex then we can conclude that $\mathop{\mathrm{scat}} K\leq n$, and the result follows.
Let $\Omega\subset K^2$ be a Farber subcomplex, with a local section $\sigma\colon \Omega \to K$ such that $\Delta_K\circ \sigma\sim \iota_\Omega$, and let $\Sigma=(i_0)^{-1}(\Omega)\subset K$. We shall prove that the inclusion $\iota_\Sigma\colon \Sigma\subset K$ belongs to the contiguity class of the constant map $v_0\colon \Sigma\to K$, so we shall obtain that $\Sigma$ is a categorical subcomplex of $K$.
Since $\Delta_K\circ \sigma \sim \iota_\Omega$, there is a sequence of contiguous maps $\psi_i\colon \Omega \to K^2$, $i=1,\dots,m$, such that \begin{equation}\label{CHAIN1} \Delta_K\circ\sigma=\psi_1\sim_c\cdots\sim_c \psi_m=\iota_\Omega. \end{equation} Then, by composition, $$\pi_1\circ\psi_1\circ i_0\circ\iota_\Sigma\sim_c\dots\sim_c\pi_1\circ\psi_m\circ i_0\circ \iota_\Sigma,$$ where, for every $w\in \Sigma$, $$\pi_1\circ\psi_1\circ i_0\circ \iota_\Sigma(w)=\pi_1\circ\Delta_K\circ\sigma\circ i_0(w)=\sigma(v_0,w),$$ and $$\pi_1\circ\psi_m\circ i_0\circ \iota_\Sigma(w)=\pi_1\circ\iota_\Omega(v_0,w)=v_0.$$ On the other hand \begin{equation}\label{CHAIN2} \pi_2\circ\psi_1\circ i_0\circ\iota_\Sigma\sim_c\dots\sim_c\pi_2\circ\psi_m\circ i_0\circ \iota_\Sigma, \end{equation} where, for every $w\in \Sigma$, $$\pi_2\circ\psi_m\circ i_0\circ\iota_\Sigma(w)=\pi_2\circ \iota_\Omega(v_0,w)=w,$$ and $$\pi_2\circ\psi_1\circ i_0\circ\iota_\Sigma(w)=\pi_2\circ\Delta_K\circ\sigma\circ i_0(w)=\sigma(v_0,w).$$ From (\ref{CHAIN1}) and (\ref{CHAIN2}) it follows $$v_0\sim \sigma(v_0,w) \sim w, \quad \forall w\in \Sigma,$$ or equivalently, $v_0\sim \iota_\Sigma$, hence $\Sigma$ is a categorical subcomplex. \end{proof}
\subsection{Comparison with the category of $K^2$} The second comparison result by Farber in \cite{FARBER1} is between $\mathop{\mathrm{TC}}(X)$ and $\mathop{\mathrm{cat}} (X\times X)$. We shall prove that it is also true in the discrete setting.
\begin{lemma}\label{CONNECTED}The abstract simplicial complex $K$ is edge-path connected if and only if two arbitrary constant maps $L\to K$ are in the same contiguity class. \end{lemma}
The following theorem uses the normalized versions of LS-category and topological complexity.
\begin{theorem} If $K$ is an edge-path connected complex, then $$\mathop{\mathrm{TC}}(K) \leq \mathop{\mathrm{scat}} (K^2).$$ \end{theorem} \begin{proof}Let $\mathop{\mathrm{scat}}(K\,\Pi\, K)=n$ and let $K^2=\Omega_0\cup\cdots\cup\Omega_n$ be a categorical covering of $K^2$. If we are able to prove that each $\Omega=\Omega_j$, $j=0,\dots,n$, is a Farber subcomplex then we will have $\mathop{\mathrm{TC}}(K)\leq n$, thus proving the theorem.
By definition the inclusion $\iota_\Omega\colon \Omega\subset K^2$ verifies $\iota_\Omega\sim \ast$, where $\ast\colon \Omega \to K^2$ is some constant map $(v_0,w_0)$. Since the complex is edge-path connected, by Lemma \ref{CONNECTED} we can choose the constant map $\ast$ with $w_0=v_0$.
By definition of contiguity class, since $\iota_\Omega\sim \ast$, there is a sequence of simplicial maps, each one contiguous to the next one, $$\iota_\Omega=\varphi_1\sim_c\cdots\sim_c \varphi_m=(v_0,v_0),$$ with $\varphi_j\colon \Omega \to K^2$. Let $\pi_1\colon K^2 \to K$ be the projection onto the first factor; then each $\pi_1\circ \varphi_j\colon \Omega \to K$ is contiguous to $\pi_1\circ \varphi_{j+1}$. Hence \begin{equation}\label{PROV1} \pi_1\circ\iota_\Omega \sim \pi_1\circ \varphi_m=v_0. \end{equation}
Analogously, let $\pi_2\colon K^2 \to K$ be the projection onto the second factor; then \begin{equation}\label{PROV2} \pi_2\circ\iota_\Omega \sim \pi_2\circ \varphi_m=v_0. \end{equation} by means of the sequence $\pi_2\circ\varphi_j$.
Now, we shall verify that the map $\sigma=(\pi_1)_{\vert\Omega}\colon \Omega \to K$ verifies $\Delta_K\circ \sigma\sim \iota_\Omega$, so we conclude the proof.
Define the maps $\xi_j\colon \Omega \to K^2$, $j=1,\dots,m$, as $$\xi_j(v,w)=(v,\pi_1\circ \varphi_j(v,w)).$$ These are simplicial maps. Moreover, it is clear that $\xi_1\sim \cdots \sim \xi_m$.
Analogously define $\chi_j\colon \Omega \to K^2$, $j=1,\dots,m$, as $$\chi_j(v,w)=(v,\pi_2\circ \varphi_j(v,w)).$$ They verify $\chi_1\sim\cdots\sim\chi_m$.
Then it is immediate to check that: \begin{enumerate} \item[i)]
$\xi_1(v,w)=(v,v)$, that is, $\xi_1=\Delta_K\circ \sigma$; \item[ii)]
$\xi_m(v,w)=(v,v_0)$; \item[iii)]
$\chi_1(v,w)=(v,w)$, that is $\chi_1=\iota_\Omega$. \item[iv)]
$\chi_m(v,w)=(v,v_0)$. \end{enumerate}
Then, finally we get:
$$\Delta_K\circ \sigma=\xi_1\sim \xi_m=\chi_m\sim \chi_1=\iota_\Omega. \qedhere$$ \end{proof}
\begin{corollary}The abstract simplicial complex $K$ is strongly collapsible if and only if $\mathop{\mathrm{TC}}(K)=0$. \end{corollary}
\begin{proof} By definition, $K$ being strongly collapsible is equivalent to $\mathop{\mathrm{scat}} K=0$. Moreover, in \cite{FMMV2} we proved that $\mathop{\mathrm{scat}} K^2+1\leq (\mathop{\mathrm{scat}} K+1)^2$ (in fact, the categorical product of strongly collapsible complexes is strongly collapsible). Hence $\mathop{\mathrm{scat}} K^2=0$, and therefore $\mathop{\mathrm{TC}}(K)\leq \mathop{\mathrm{scat}} K^2=0$. The converse is immediate from the inequality $\mathop{\mathrm{TC}}(K)\geq \mathop{\mathrm{scat}} K$. \end{proof}
\begin{corollary} The diagonal $\Delta\colon K \to K^2$ admits a {\em global} homotopic section (in the sense of contiguity, that is, there exists $\sigma\colon K^2 \to K$ such that $\Delta_K\circ \sigma \sim 1_K$) if and only if the complex $K$ is strongly collapsible. \end{corollary}
\begin{example}Consider the complex $K=\partial \Delta^2$ given by the simplices $$K=\{\emptyset, \{a\},\{b\},\{c\}, \{b,c\}, \{a,c\}, \{a,b\}\},$$ whose geometric realization is represented in Figure \ref{TRIANGLE}. \begin{figure}\label{TRIANGLE}
\end{figure}
Since $K$ is not strongly collapsible, but can be covered by two strongly collapsible subcomplexes, it follows that $\mathop{\mathrm{scat}} K=1$. Moreover $\mathop{\mathrm{scat}} K^2+1\leq (\mathop{\mathrm{scat}} K+1)^2=4$ \cite{FMMV2}, hence $1\leq \mathop{\mathrm{TC}}(K)\leq 3$. Then a section $\sigma$ defined in the whole complex $K^2$ is not possible.
It is easy to find three Farber subcomplexes covering $K^2$, and we shall now prove that two are not enough; then $\mathop{\mathrm{TC}}(K)=2$. In fact, suppose that $K^2=\Omega_1\cup\Omega_2$ is a covering by two subcomplexes. Since $K^2$ has nine maximal simplices (see Figure \ref{BIGPROD}), one of the subcomplexes, say $\Omega_1$, contains at least five of them. Now there are nine horizontal edges, so two of the maximal simplices in $\Omega_1$, say $\tau_1$ and $\tau_2$, must share a horizontal edge. Finally, for each vertex $v_0\in K$, let $i_0\colon K \to K^2$ be the map $i_0(v)=(v_0,v)$. By Proposition \ref{EQUIV}, since $\Omega_1$ is a Farber subcomplex, the subcomplex $$(i_0)^{-1}(\Omega_1) = (\{v_0\}\times K) \cap \Omega_1 \subset K$$ is categorical in $K$; in particular it is not all of $K$ (because $K$ is not strongly collapsible). That means that $\Omega_1$ cannot contain three consecutive vertical edges. Then none of the maximal simplices $P,Q,R$ in Figure \ref{BIGPROD} can be contained in $\Omega_1$. But $\Omega_2$ is also a Farber subcomplex, so it cannot contain them either, because by using the map $i_1(v)=(v,v_0)$ one proves that $\Omega_2$ cannot contain three consecutive horizontal edges.
\begin{figure}\label{BIGPROD}
\end{figure} \end{example}
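The count of nine maximal simplices used in the example above can be checked mechanically. The following sketch is our own illustration (the paper contains no code); it enumerates the simplices of the categorical product $K\,\Pi\,K$ for $K=\partial \Delta^2$, using the characterization that a nonempty set of vertex pairs is a simplex of the product iff both of its coordinate projections are simplices of $K$.

```python
from itertools import combinations

# K = boundary of the triangle, as in the example: three vertices and
# three edges.  Simplices are stored as frozensets of vertices.
K = {frozenset(s) for s in [{'a'}, {'b'}, {'c'},
                            {'a', 'b'}, {'b', 'c'}, {'a', 'c'}]}

# Vertices of the categorical product K ∏ K are pairs of vertices of K.
vertices = [(v, w) for v in 'abc' for w in 'abc']

def in_product(S):
    """A nonempty set S of vertex pairs is a simplex of K ∏ K iff both
    coordinate projections of S are simplices of K."""
    return (frozenset(v for v, _ in S) in K
            and frozenset(w for _, w in S) in K)

simplices = [frozenset(S)
             for r in range(1, len(vertices) + 1)
             for S in combinations(vertices, r)
             if in_product(S)]

# Maximal simplices: those not properly contained in any other simplex.
maximal = [s for s in simplices if not any(s < t for t in simplices)]
print(len(maximal))  # the nine maximal simplices used in the example
```

Each maximal simplex is of the form $e_i\times e_j$ for a pair of edges of $K$, a set of four vertex pairs, which recovers the count of nine used in the covering argument.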
\section{Geometric realization}\label{GEOM} Let $\geo{K}$ be the geometric realization of the simplicial complex $K$. We can compute the usual topological complexity $\mathop{\mathrm{TC}}(\geo{K})$ of the topological space $\geo{K}$ and compare it with the discrete (simplicial) complexity $\mathop{\mathrm{TC}}(K)$ of the simplicial complex $K$.
We first need an auxiliary result. It is known that $\geo{K^2}$ is not homeomorphic to the topological product $\geo{K}\times \geo{K}$, but that they have the same homotopy type, as proved in Kozlov \cite[Prop.~15.23]{KOZLOV}. The proof there is based on the so-called ``nerve theorem''. However, we need an explicit formula in order to guarantee the following lemma.
\begin{lemma}There exists a homotopy equivalence $u\colon \geo{K}\times\geo{K}\to \geo{K^2}$ such that the projections $p_1,p_2\colon \vert K \vert \times \vert K \vert \to \vert K \vert$ and $\pi_1,\pi_2 \colon \catpro{K} \to K$ satisfy, up to homotopy, $\vert \pi_i\vert \circ u =p_i$ for $i=1,2$ (see Figure \ref{KOZLOVTRUE}).\end{lemma} \begin{proof}There is a homeomorphism $\vert K \times K \vert =\vert K \vert \times \vert K \vert$ which is induced by the projections \cite[p.~538]{HATCHER}. On the other hand, the homotopy equivalence $\vert K\times K \vert \simeq \vert \catpro{K}\vert$ is the geometric realization of the simplicial map $K\times K \to \catpro{K}$ induced by the natural inclusion map $\sigma_1\times \sigma_2 \to \sigma_1\,\Pi\, \sigma_2$ for each pair of simplices $\sigma_1,\sigma_2\in K$ (see \cite[Prop.~15.23]{KOZLOV} and \cite[Prop.~4G.2]{HATCHER}). \begin{figure}\label{KOZLOVTRUE}
\end{figure} \end{proof}
\begin{theorem}\label{GEOMREALIZ}$\mathop{\mathrm{TC}}(\geo{K})\leq \mathop{\mathrm{TC}}(K)$. \end{theorem} \begin{proof} Let $\mathop{\mathrm{TC}}(K)\leq n$ and let $K^2=\Omega_0\cup\cdots\cup\Omega_n$ be a Farber covering.
Let $\Omega$ be one of the Farber subcomplexes $\Omega_j$ of the covering of $K^2$, and let $i_\Omega\colon \Omega \subset K^2$ be the inclusion. By construction of the geometric realization we have that $\geo{i_\Omega}$ is the inclusion $i_{\geo{\Omega}}\colon \geo{\Omega}\subset \geo{K^2}$. By hypothesis, the maps $\pi_1\circ i_\Omega$ and $\pi_2\circ i_\Omega$ are in the same contiguity class (Proposition \ref{PROPIEDADES}). By applying the functor $\geo{\cdot}$ of geometric realization, and taking into account that contiguous maps induce homotopic continuous maps (see \cite{SPANIER}), we have that $\vert \pi_1\vert \circ i_{\vert \Omega\vert}=\vert \pi_1\circ i_\Omega\vert$ is homotopic to $\vert \pi_2\vert \circ i_{\vert \Omega\vert}$.
Consider the closed subspace $F=u^{-1}(\geo{\Omega})\subset \geo{K}\times \geo{K}$. Then the map $$p_1\circ i_F =\vert \pi_1 \vert\circ u\circ i_F= \vert\pi_1\vert\circ i_{\vert \Omega \vert}$$ is homotopic to $p_2\circ i_F$. Consider the closed covering $F_0\cup\cdots\cup F_n$ of $\vert K \vert \times \vert K \vert$, where $F_j=u^{-1}(\geo{\Omega_j})$. This implies $\mathop{\mathrm{TC}}(\geo{K})\leq n$. \end{proof}
\begin{remark} Notice that the inequality in the latter Theorem is still true for all subdivisions of $K$, because the geometric realizations are homeomorphic, $\geo{\mathop{\mathrm{sd}} K}\cong\geo{K}$. It may happen that $\mathop{\mathrm{TC}}(K)$ differs from $\mathop{\mathrm{TC}}(\mathop{\mathrm{sd}} K)$, which reflects some particular property of the combinatorial structure. \end{remark}
\small{
\address{ \noindent {\sc D.~Fern\'andez-Ternero}. \\Dpto. de Geometr\'{\i}a y Topolog\'{\i}a, Universidad de Sevilla, Spain.\\} \email{desamfer@us.es}
\address{ \noindent {\sc E.~Mac\'ias-Virg\'os}. \\{Dpto. de Matem\'aticas,} Universidade de San\-tia\-go de Compostela, Spain.\\} \email{quique.macias@usc.es}
\address{ \noindent {\sc E.~Minuz}. \\Department of Mathematics, Aarhus University, Denmark\\} \email{minuz@math.au.dk}
\address{ \noindent {\sc J.A.~Vilches}. \\Dpto. de Geometr\'{\i}a y Topolog\'{\i}a, Universidad de Sevilla, Spain.\\} \email{vilches@us.es}
}
\end{document}
\begin{document}
\title{A Logic of Injectivity} \author{J. Ad\'amek, M. H\'ebert and L. Sousa\footnote{The third author acknowledges financial
support by the Center of Mathematics
of the University of Coimbra and the School of Technology of Viseu} }
\date{}
\maketitle
\begin{abstract} Injectivity of objects with respect to a set $\ch$ of morphisms is an important concept of algebra, model theory and homotopy theory. Here we study the logic of injectivity consequences of $\ch$, by which we understand morphisms $h$ such that injectivity with respect to $\ch$ implies injectivity with respect to $h$. We formulate three simple deduction rules for the injectivity logic, and for its finitary version where only morphisms between finitely ranked objects are considered, and prove that they are sound in all categories, and complete in all ``reasonable" categories. \end{abstract}
\section{Introduction}
Recall that an object $A$ is injective w.r.t. a morphism $h:P\rightarrow P^\m$ provided that every morphism from $P$ to $A$ factors through $h$. We address the following problem: given a set $\mathcal{H}$ of morphisms, which morphisms $h$ are \textit{injectivity consequences} of $\mathcal{H}$ in the sense that every object injective w.r.t. all members of $\mathcal{H}$ is also injective w.r.t. $h$? We denote the injectivity consequence relationship by $\mathcal{H}\models h$.
This is a classical topic in general algebra: the \textit{equational logic} of Garrett Birkhoff \cite{B} is a special case. In fact, an equation $s=t$ is a pair of elements of a free algebra $F$, and that pair generates a congruence $\sim$ on $F$. An algebra $A$ satisfies $s=t$ iff it is injective w.r.t. the canonical epimorphism $$h: F\rightarrow F/\sim.$$ Thus, if we restrict our sets $\mathcal{H}$ to regular epimorphisms with free domains, then the logic of injectivity becomes precisely the equational logic. However, there are other important cases in algebra: recall for example the concept of injective module, where $\mathcal{H}$ is the set of all monomorphisms (in the category of modules).
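To fix ideas, the Birkhoff case can be written out for a single equation. The following display is our own illustration, specializing the discussion above to commutativity of one binary operation $*$: take the free algebra $F=F\{x,y\}$ on two generators and let $\sim$ be the congruence generated by the single pair $(x*y,\,y*x)$. Since a homomorphism $f\colon F\to A$ is determined by $f(x)$ and $f(y)$, and it factors through the quotient precisely when $f(x)*f(y)=f(y)*f(x)$, we get:

```latex
% A satisfies commutativity  iff  A is injective w.r.t. the quotient map h.
A \models (\forall x,y)\; x*y = y*x
\iff \text{every homomorphism } f\colon F\{x,y\}\to A
     \text{ factors through } h\colon F\{x,y\}\to F\{x,y\}/{\sim}
\iff A \text{ is injective w.r.t. } h.
```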
To mention an example from homotopy theory, recall that a \textit{Kan complex} \cite{K} is a simplicial set injective w.r.t. all the monomorphisms $\Delta^k_n\hookrightarrow \Delta_n$ (for $n,\, k \in \mathbb{N}, \, k\leq n$) where $\Delta_n$ is the complex generated by a single $n$-simplex and $\Delta^k_n$ is the subcomplex obtained by deleting the $k$-th 1-simplex and all adjacent faces. We can ask for example whether Kan complexes can be specified by a simpler collection of monomorphisms, as a special case of our injectivity logic.
Injectivity establishes a Galois correspondence between objects and morphisms of a category. The closed families on the side of objects are called \textit{injectivity classes}: for every set $\ch$ of morphisms we obtain the injectivity class Inj$\ch$, i.e., the class of all objects injective w.r.t. $\ch$. In \cite{AR2} small-injectivity classes in locally presentable categories were characterized as precisely the full accessible subcategories closed under products, and in \cite{RAB} this was sharpened in the following sense. Let us call a morphism \textit{$\lambda$-ary} if its domain and codomain are $\lambda$-presentable objects. Injectivity classes with respect to $\lambda$-ary morphisms are precisely the full subcategories closed under products, $\lambda$-filtered colimits, and $\lambda$-pure subobjects. For injectivity w.r.t. cones or trees of morphisms, similar results appear in \cite{AN77} and \cite{NS77}.
In the present paper we study closed sets on the side of morphisms, i.e., we develop a deduction system for the above injectivity consequence relationship $\models$. It has altogether three deduction rules, which are quite intuitive. Firstly, observe that every object injective w.r.t. a composite $h=h_2\cdot h_1$ is injective w.r.t. the first morphism $h_1$. This gives us the first deduction rule
\vspace*{4.5mm} \hspace*{1cm}\begin{tabular}{p{2.7cm}l}{\sc cancellation}\\ \\ \end{tabular}\hspace*{6mm}\begin{tabular}{c}$h_2 \cdot h_1$ \\ \hline $ h_1$ \\ \\ \end{tabular}
\hspace*{-\parindent}It is also easy to see that injectivity w.r.t. $h$ implies injectivity w.r.t. any morphism $h^\m$ opposite to $h$ in a pushout (along an arbitrary morphism), which yields the rule
\hspace*{1cm}\begin{tabular}{p{2.7cm}l}{\sc pushout}\\ \end{tabular}\hspace*{3mm}\begin{tabular}{l}$ h$\\ \hline
$ h'$ \\ \end{tabular} \begin{tabular}{l}\\ {\hspace*{10mm}for every pushout }\\ \\ \end{tabular} \begin{tabular}{l}$ \xy (0,0)*{\xymatrix{\ar[r]^h\ar[d]& \ar[d]\\ \ar[r]^{h'}& }}="D"; (7,-10.5)*{}="A"; (7,-7)*{}="B"; (10.5,-7)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; \endxy $\end{tabular}
\vspace*{3mm}
\hspace*{-\parindent}Finally, an object injective w.r.t. two composable morphisms is also injective w.r.t. their composite. The same holds for three, four, $\dots$ morphisms -- but also for a \textit{transfinite composite} as used in homotopy theory. For example, given an $\omega$-chain of morphisms $$\xymatrix{A_0\ar[r]^{h_0}&A_1\ar[r]^{h_1}&A_2\ar[r]^{h_2}&\dots}$$ then their $\omega$-composite is the first morphism $c_0:A_0\rightarrow C$ of (any) colimit cocone $c_n:A_n\rightarrow C\, (n\in \mathbb{N})$ of the chain. Observe that $c_0$ is indeed an injectivity consequence of $\{h_i;\, i<\omega \}$. For every ordinal $\lambda$ we have the concept of a $\lambda$-composite of morphisms (see \ref{a2.9} below) and the following deduction rule, expressing the fact that an object injective w.r.t. each $h_i$ is injective w.r.t. the transfinite composite:
\hspace*{0.6cm}\begin{tabular}{p{5.5cm}l}{\sc transfinite composition}\\ \end{tabular}\hspace*{1.2mm}\begin{tabular}{c}$ h_i\, (i<\lambda)$\\ \hline
$ h$ \\ \end{tabular} \begin{tabular}{l}\\ {\hspace*{1.0mm}for every $\lambda$-composite $h$ of $(h_i)_{i<\lambda}$ }\\ \\ \end{tabular}
\hspace*{-\parindent}We are going to prove that the Injectivity Logic based on the above three rules is sound and complete. That is, given a set $\mathcal{H}$ of morphisms, then $\mathcal{H}\models h$ holds for precisely those morphisms $h$ which can be proved from assumptions in $\mathcal{H}$ using the three deduction rules above. This holds in a number of categories, e.g., in
\begin{enumerate} \item[(a)] every variety of algebras, \item[(b)] the category of topological spaces and many nice subcategories (e.g. Hausdorff spaces), and \item[(c)] every locally presentable category of Gabriel and Ulmer. \end{enumerate} We introduce the concept of a strongly locally ranked category encompassing (a)-(c) above, and prove the soundness and completeness of our Injectivity Logic in all such categories.
Observe that the above logic is infinitary, in fact, it has a proper class of deduction rules: one for every ordinal $\lambda$ in the instance of {\sc transfinite composition}. We also study, following the footsteps of Grigore Ro\c su, the completeness of the corresponding Finitary Injectivity Logic: it is the restriction of the above logic to $\lambda$ finite. Well, all we need to consider {\color{black} are the cases} $\lambda =2$, called {\sc composition}, and $\lambda=0$, called {\color{black} {\sc identity}}:
\vspace*{4.5mm} \hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc composition}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$h_1\; \; h_0$ \\ \hline $ h$ \\ \\ \end{tabular} \begin{tabular}{l}\\ {\hspace*{10mm}for $h= h_1\cdot h_0$ }\\ \\ \end{tabular}
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\color{black} {\sc identity}}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{l}\hline $\id_A$\\ \end{tabular}
\hspace*{-\parindent} The resulting finitary deductive system (introduced in \cite{ASS} as a slight modification of the deduction system of Grigore Ro\c su \cite{R}) has four deduction rules; it is clearly sound, and the main result of our paper (Theorem \ref{a5.2}) says that it is also complete with respect to finitary morphisms, i.e., morphisms with domain and codomain of finite rank. This implies the expected compactness theorem: every finitary injectivity consequence of a set $\ch$ of finitary morphisms is an injectivity consequence of some finite subset of $\ch$.
The completeness theorem for Finitary Injectivity Logic will then be extended to the $k$-ary Injectivity Logic, defined in the expected way. Then the full completeness theorem easily follows.
The fact that the full Injectivity Logic above is complete in strongly locally ranked categories can also be derived from Quillen's Small Object Argument \cite{Q}, see Remark \ref{b3.9} below. However our sharpening to the $k$-ary logic for every cardinal $k$ cannot be derived from that paper, and we consider this to be a major {\color{black} step.}
\vspace*{2mm} \hspace*{-\parindent} {\textbf{Related work}} Bernhard Banaschewski and Horst Herrlich showed thirty years ago that implications in general algebra can be expressed categorically via injectivity w.r.t. regular epimorphisms, see \cite{BH}. A generalization to injectivity w.r.t. cones or even trees of morphisms was studied by Hajnal Andr\' eka, Istv\'an N\'emeti and Ildik\'o Sain, see e.g. \cite{AN77, AN79, NS77}.
To see more precisely how that work relates to ours and to classical logic, consider injectivity in the category of all $\Sigma$-structures (and $\Sigma$-homomorphisms), where $\Sigma$ is any signature. Then recall from \cite{AR}, 5.33 that there is a natural way to associate to a (finitary) morphism $f:A\rightarrow B$ a (finitary) sentence $$f^\m:=\forall X(\wedge A^\m(X)\rightarrow \exists Y (\wedge B^\m(X,Y)))$$ (where $A^\m(X)$ and $B^\m(X,Y)$ are sets of atomic formulas) such that an object $C$ satisfies $f^\m$ if and only if it is injective with respect to $f$ (see 2.22 below for more on this).
Such sentences are called \textit{regular sentences}. In this paper we concentrate on the proof theory for the (finite and infinite) regular logics. As mentioned above, the restriction to epimorphisms correspond to considering only the quasi-equations (i.e., no existential quantifiers), and just equations if we impose they have projective domains.
Recently, Grigore Ro\c su introduced a deduction system for injectivity, see \cite{R}, and he proved that the resulting logic is sound and complete for epimorphisms which are finitely presentable, see \ref{b3.4}, and have projective domains. A slight modification of Ro\c su's system was introduced in \cite{ASS}: this is the deduction system 2.4 below. It differs from \cite{R} by formulating {\sc pushout} more generally and using {\sc composition} in place of Ro\c su's {\sc union}. In \cite{ASS} completeness is proved for sets of epimorphisms with finitely presentable domains and codomains. (This is slightly stronger than requiring the epimorphisms to be finitely presentable, however, without the too restrictive assumption of projectivity of the domains the logic fails to be complete for finitely presentable epimorphisms in general, see \cite{ASS}.)
In the present paper completeness of the finitary logic is proved for arbitrary morphisms (not necessarily epimorphisms) with finitely presentable domains and codomains. The fact that the assumption of epimorphism is dropped makes the proof substantially more difficult. We present a short proof in locally presentable categories first, and then a proof of a more general result for strongly locally ranked categories. We also formulate the appropriate infinitary logic dealing with arbitrary morphisms.
There are other generalizations of Birkhoff's equational logic which are, except for the common motivation, not related to our approach. For example the categorical approach to logic of (ordered) many-sorted algebras of Razvan Diaconescu \cite{Dia}, and the logic of implications in general algebra of Robert Quackenbush \cite{Qua}.
In our joint paper \cite{AHS2} we are taking another route to generalize the equational logic: we consider orthogonality of objects to a morphism instead of injectivity. The deduction system is similar: the rule {\sc cancellation} has to be weakened, and an additional rule concerning coequalizers is added. We prove the completeness of the resulting logic of orthogonality in locally presentable categories. The corresponding sentences are the so called limit sentences, $\forall X(\wedge A^\m(X)\rightarrow \exists!Y(\wedge B^\m(X,Y)))$, where $\exists!Y$ means ``there exists exactly one $Y$ such that".
\section{Logic of injectivity}
\setcounter{thm}{-1}
\begin{sub}\label{2.0}{\em {\textbf{Assumption}} Throughout the paper we assume that we are working in a cocomplete category.} \end{sub}
\begin{sub}\label{2.1}{\em {\defn} A morphism $h$ is called an {\color{black} \textit{injectivity consequence}} of a set of morphisms $\ch$, notation $$\ch \models h$$
provided that every object injective w.r.t. all morphisms in $\ch$ is
also injective w.r.t. $h$. } \end{sub}
\begin{sub}\label{2.2}{\em {\exas} (1) A composite $h=h_2\cdot h_1$ is an injectivity consequence of $\{h_1,\, h_2\}$.
(2) Conversely, in every composite $h=h_2\cdot h_1$
the morphism $h_1$ is an injectivity consequence of $h$: $$\xymatrix{A\ar[r]^{h_1}\ar[dr]&A^\m \ar[r]^{h_2} \ar@{.>}[d] &A^{\m\m}\ar@{-->}[dl]\\&X&}$$
(3) In every pushout
$$\xymatrix{A\ar[r]^h\ar[d]_u&A^\m\ar[d]^v\\
B\ar[r]_{h^\m}&B^\m}$$
$h^\m$ is an injectivity consequence of $h$:
$$\xymatrix{A\ar[r]^h\ar[d]_u&A^\m\ar[d]^v\ar@{-->}[ddr]&\\
B\ar[r]^{h^\m}\ar[drr]&B^\m\ar@{-->}[dr]&\\&&X}$$
} \end{sub}
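In the category of finite sets these examples can be verified by brute force. The sketch below is our own illustration (all names are ours, not part of the paper): it checks Definition \ref{2.1} directly, testing whether every map $f\colon A\to X$ factors as $g\cdot h$, and then confirms Example (2) on a small instance where the composite is taken along a non-injective $h_1$.

```python
from itertools import product

def maps(dom, cod):
    """All functions dom -> cod, represented as dicts."""
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def injective_wrt(X, h, A, B):
    """X is injective w.r.t. h: A -> B iff every f: A -> X factors as g . h."""
    return all(any(all(g[h[a]] == f[a] for a in A) for g in maps(B, X))
               for f in maps(A, X))

# A small instance of Example (2): h = h2 . h1, with h1 collapsing A.
A, B, C = [0, 1], [0], [0, 1]
h1 = {0: 0, 1: 0}               # h1: A -> B (not injective)
h2 = {0: 0}                     # h2: B -> C
h = {a: h2[h1[a]] for a in A}   # the composite h2 . h1

# Every X injective w.r.t. the composite h is injective w.r.t. h1.
for X in ([0], [0, 1], [0, 1, 2]):
    if injective_wrt(X, h, A, C):
        assert injective_wrt(X, h1, A, B)
print('Example (2) verified on these instances')
```

Here only the singleton $X$ is injective w.r.t. the collapsing composite, and it is indeed injective w.r.t. $h_1$, as Example (2) predicts.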
\begin{sub}\label{2.3}{\em {\rem} The above examples are exhaustive. More precisely, the following deduction system, introduced in \cite{ASS} (see also \cite{R}, where, however, it was only applied to epimorphisms), will be proved complete below:}\end{sub}
\begin{sub}\label{2.4}{\em {\defn} The \textit{Finitary Injectivity Deduction System} consists of one axiom
\vspace*{6mm} \hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc identity}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{l}\hline $\id_A$\\ \end{tabular}
\vspace*{2mm} and three deduction rules
\vspace*{4.5mm}
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc composition} \\ \end{tabular}\hspace*{3mm}\begin{tabular}{ll} $h\; \; h'$& \\ \hline
$h' \cdot h$\\ \end{tabular}
\begin{tabular}{l}{
if $h' \cdot h$ is defined}\end{tabular}
\vspace*{4.5mm} \hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc cancellation}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$h' \cdot h$ \\ \hline $ h$ \\ \\ \end{tabular}
and
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc pushout}\\ \end{tabular}\hspace*{3mm}\begin{tabular}{l}$ h$\\ \hline
$ h'$ \\ \end{tabular} \begin{tabular}{l}\\ {\hspace*{10mm}if }\\ \\ \end{tabular} \begin{tabular}{l}$ \xy (0,0)*{\xymatrix{\ar[r]^h\ar[d]& \ar[d]\\ \ar[r]_{h'}& }}="D"; (7,-10.5)*{}="A"; (7,-7)*{}="B"; (10.5,-7)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; \endxy $\end{tabular}
\vspace*{6mm}
We say that a morphism $h$ is a \textit{formal consequence} of a set $\ch$ of morphisms (notation $\ch \vdash h$) in the Finitary Injectivity Logic if there exists a proof of $h$ from $\ch$ (which means
a finite sequence $h_1, \, ..., \, h_n = h$ of morphisms such that for every $i = 1, ..., n$ the morphism $h_i$ lies in $\ch$ or is a conclusion of one of the deduction rules whose premises lie in $\{h_1,...,h_{i-1}\}$).}\end{sub}
\begin{sub} \label{2.61/2} {\em {\lem}} The Finitary Injectivity Logic is sound, i.e., if a {\color{black} morphism} $h$ is a formal consequence of a set of {\color{black} morphisms} $\ch$, then $h$ is an injectivity consequence of $\ch$. Briefly: $\ch \vdash h$ implies $\ch \models h$.
\vspace*{3mm}
{\em The proof follows from \ref{2.2}.}\end{sub}
\begin{sub} \label{novo2.7}{\em {\rem} Later we define finitary morphisms (as morphisms whose domains and codomains are finitely presentable (Section 3) or of finite rank (Section 5)), and in Section 6 we prove that the resulting Finitary Injectivity Logic is complete, i.e., that $$\ch \models h\; \mbox{ implies }\; \ch \vdash h$$ for every set $\ch$ of finitary morphisms and every finitary $h$. }\end{sub}
\begin{sub}\label{2.6}{\em {\exa} The following rule
\vspace*{4.5mm} \hspace*{2cm}\begin{tabular}{p{3.7cm}l}{\sc finite coproduct}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{l}$h_1 \; \; \; h_2 $ \\ \hline $ h_1+h_2$ \\ \\ \end{tabular}\newline (where for $h_i:A_i\rightarrow B_i$ the morphism $h_1+h_2:A_1+A_2 \rightarrow B_1+B_2$ is the canonical coproduct morphism) is obviously sound. Here is a proof in the Finitary Injectivity Logic:
\hspace*{-\parindent} Using the pushouts
$$ \xy (0,0)*{\xymatrix{A_1\ar[rr]^{h_1}\ar[d]_{}&& B_1\ar[d]^{}\\ A_1+A_2\ar[rr]_{h_1+\id_{A_2}}&& B_1+A_2}}="D"; (25,-14)*{}="A"; (25,-8)*{}="B"; (35.5,-8)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; (30,0)*{\xymatrix{A_2\ar[rr]^{h_2}\ar[d]_{}&& B_2 \ar[d]^{}\\ B_1+A_2\ar[rr]_{\id_{B_1}+h_2}&&B_1+B_2 }}="D"; (85,-14)*{}="A"; (85,-8)*{}="B"; (95.5,-8)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; \endxy $$
\vspace*{5mm} \hspace*{-\parindent}we can write
{ \begin{center}\begin{tabular}{p{4.6cm}} \begin{tabular}{c}$\quad \; \; h_1 \;\; \; \qquad \qquad h_2$\end{tabular}
\begin{tabular}{p{4cm}}\hline \\ \end{tabular}
\vspace*{-4mm}
\begin{tabular}{c}$h_1+\id_{A_2} \qquad \id_{B_1}+h_2$\end{tabular}
\begin{tabular}{p{4cm}}\hline \\ \end{tabular}
\vspace*{-4mm}
\begin{tabular}{c}$\qquad \; \; \; \, h_1+h_2$ \end{tabular} \end{tabular} \begin{tabular}{p{4cm}} \vspace*{-5mm} via {\sc pushout}\\ via {\sc composition} \end{tabular}
\end{center}
\hspace*{-\parindent} since $h_1+h_2=(\id_{B_1}+h_2)\cdot(h_1+\id_{A_2})$. }}\end{sub}
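Soundness of {\sc finite coproduct} can likewise be checked by brute force in the category of finite sets, where the coproduct is the disjoint union. The sketch below is our own illustration (all names are ours) and only tests a few small objects $X$: whenever $X$ is injective w.r.t. $h_1$ and $h_2$, it is injective w.r.t. $h_1+h_2$.

```python
from itertools import product

def maps(dom, cod):
    """All functions dom -> cod, represented as dicts."""
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def inj(X, h, A, B):
    """X is injective w.r.t. h: A -> B iff every f: A -> X factors through h."""
    return all(any(all(g[h[a]] == f[a] for a in A) for g in maps(B, X))
               for f in maps(A, X))

# Two morphisms h1: A1 -> B1 and h2: A2 -> B2 of finite sets.
A1, B1, h1 = ['a'], ['a', 'b'], {'a': 'a'}
A2, B2, h2 = ['u', 'v'], ['u'], {'u': 'u', 'v': 'u'}

# Their coproduct h1 + h2 on the disjoint (tagged) unions.
Ac = [(1, x) for x in A1] + [(2, x) for x in A2]
Bc = [(1, x) for x in B1] + [(2, x) for x in B2]
hc = {(1, x): (1, h1[x]) for x in A1}
hc.update({(2, x): (2, h2[x]) for x in A2})

# Any X injective w.r.t. h1 and h2 is injective w.r.t. h1 + h2.
for X in (['p'], ['p', 'q']):
    if inj(X, h1, A1, B1) and inj(X, h2, A2, B2):
        assert inj(X, hc, Ac, Bc)
print('finite coproduct rule holds on these instances')
```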
\begin{sub}\label{2.7}{\em {\exa} The following rule
\hspace*{1cm}\begin{tabular}{p{4.2cm}l}{\sc finite wide pushout} \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c} $h_1\; \dots \; h_n$ \\ \hline
$h$\\ \end{tabular}
\hspace*{-\parindent} for every wide pushout
\vspace*{-10mm} $$\xymatrix{&\ar[dl]_{h_1}\ar[d]_{h_2}\ar[dr]_{\dots}^{h_n}&\\ \ar[dr]_{k_1}&\ar[d]_{k_2}&\ar[dl]_{\dots}^{k_n}\\ &C&} \; \; \qquad \mbox{\begin{tabular}{l}\\ \\ \\ \\ where $h=k_i\cdot h_i$\end{tabular}}$$ is sound. Here is a proof in the Finitary Injectivity Logic:
If $n=2$ we have
{ \begin{center} \begin{tabular}{p{2.5cm}}\begin{tabular}{c}$\; \, h_1\qquad h_2$ \end{tabular}
\begin{tabular}{p{2cm}}\hline \\ \end{tabular}
\vspace*{-4mm}
\begin{tabular}{c}$\qquad k_2$ \end{tabular}
\begin{tabular}{p{2cm}}\hline \\ \end{tabular}
\vspace*{-4mm}\begin{tabular}{c} $h=k_2\cdot h_2$ \end{tabular} \end{tabular}\begin{tabular}{p{3.8cm}} \vspace*{-5mm}via {\sc pushout}\\ via {\sc composition} \end{tabular}
\end{center} } If $n=3$, denote by $r$ a pushout of $h_1$, $h_2$; then a pushout, $h_3^\m$,
$$\xymatrix{&\ar[dl]_{h_1}\ar[dd]_r\ar[dr]^{h_2}\ar[rr]^{h_3}&&\ar[dd]^{k_3}&\\ \ar[dr]_{k_1}&&\ar[dl]^{k_2}&&\\ &\ar[rr]_{h^\m_3}&&&\\ &&&&}$$
\hspace*{-\parindent} of $h_3$ along $r$ forms a wide pushout of $h_1$, $h_2$ and $h_3$:
{ \begin{center} \begin{tabular}{p{2.5cm}} \begin{tabular}{c}$\, h_1\; \; \; h_2\; \; \; h_3$\end{tabular}
\begin{tabular}{p{2cm}}\hline \\ \end{tabular}
\vspace*{-4mm} \begin{tabular}{c}$\qquad k_2$ \end{tabular}
\begin{tabular}{p{2cm}}\hline \\ \end{tabular}
\vspace*{-4mm} \begin{tabular}{c}$\qquad r$\end{tabular}
\begin{tabular}{p{2cm}}\hline \\ \end{tabular}
\vspace*{-4mm} \begin{tabular}{c}$\qquad k_3$ \end{tabular}
\begin{tabular}{p{2cm}}\hline \\ \end{tabular}
\vspace*{-4mm} \begin{tabular}{c} $\; h=k_3 \cdot h_3$ \end{tabular} \end{tabular} \begin{tabular}{p{4cm}} \vspace*{-5mm}\begin{tabular}{l} via {\sc pushout}\end{tabular}
\vspace*{2.1mm}
\begin{tabular}{l} via {\sc composition}\end{tabular}
\vspace*{2.1mm}
\begin{tabular}{l} via {\sc
pushout}\end{tabular}
\vspace*{2.1mm}
\begin{tabular}{l}via {\sc composition} \end{tabular} \end{tabular} \end{center} }
\hspace*{-\parindent} Etc.
}\end{sub}
\begin{sub}\label{a2.8}{\em {\rem} We want to define a composition of a chain of $\lambda$ morphisms for every ordinal
$\lambda$ (see the case $\lambda=\omega$ in the Introduction). Recall that a \textit{$\lambda$-chain} is a functor $A$ defined on $\lambda$, the well-ordered category of all ordinals $i<\lambda$.
Recall further that $\lambda^+$ denotes the successor ordinal, i.e., the set of all $i\leq \lambda$.}\end{sub}
\begin{sub}\label{a2.9}{\em {\defn} (i) We call a $\lambda$-chain $A$ \textit{smooth} if for every limit ordinal $i<\lambda$ we have $$A_i=\mbox{co}\hspace*{-0.8mm}\lim_{j<i} A_j$$ with the colimit cocone of all $a_{ji}=A(j\rightarrow i)$.
(ii) A morphism $h$ is called a \textit{$\lambda$-composite} of morphisms $(h_i)_{i<\lambda}$, where $\lambda$ is an ordinal, if there exists a smooth $\lambda^+$-chain $A$ with connecting morphisms $a_{ij}:A_i\rightarrow A_j$ for $i\leq j\leq \lambda$ such that $$h_i=a_{i,i+1}\; \; \; \mbox{for all $i<\lambda$}$$ and $$h=a_{0,\lambda}.$$ }\end{sub}
\begin{sub}\label{a2.10}{\em {\exas} $\lambda=0$: No morphism $h_i$ is given, just an object $A_0$; and $h=a_{0,0}$ is the identity morphism of $A_0$.
$\lambda=1$: A morphism $h_0$ is given, and we have $h=a_{0,1}=h_0$. Thus, a 1-composite of $h_0$ is $h_0$.
$\lambda=2$: This is the usual concept of composition: given morphisms $h_0$, $h_1$, their 2-composite exists iff they are composable. Then $h_1\cdot h_0$ is the 2-composite.
$\lambda=\omega$: This is the case mentioned in the Introduction. Observe that, unlike the previous cases, an $\omega$-composite is only unique up to isomorphism. }\end{sub}
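As a concrete instance of an $\omega$-composite (our own illustration, in the category of sets), consider the smooth $\omega^+$-chain of inclusions of initial segments of $\mathbb{N}$:

```latex
A_0=\{0\}\ \hookrightarrow\ A_1=\{0,1\}\ \hookrightarrow\ A_2=\{0,1,2\}\
\hookrightarrow\ \cdots,
\qquad
C \;=\; \operatorname*{colim}_{n<\omega} A_n \;=\; \bigcup_{n<\omega} A_n
\;=\; \mathbb{N}.
```

Here the $\omega$-composite of the inclusions $h_n\colon A_n\hookrightarrow A_{n+1}$ is the first colimit injection $c_0\colon A_0\hookrightarrow \mathbb{N}$, determined, as noted above, only up to isomorphism of the colimit.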
\begin{sub}\label{aa2.10} {\em \lem} A $\lambda$-composite of morphisms $(h_i)_{i<\lambda}$ is an injectivity consequence of these morphisms.
{\em {\pf} This is a trivial transfinite induction on $\lambda$. In case $\lambda=0$ this states that $\id_A$ is an injectivity consequence of $\emptyset$, etc.} \end{sub}
\begin{sub}\label{aa2.11}{\em {\defn} The \textit{Injectivity Deduction System} consists of the deduction rules
\vspace*{3mm}
\hspace*{0.6cm}\begin{tabular}{p{2.7cm}l}{\sc cancellation}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$h' \cdot h$ \\ \hline $ h$ \\ \\ \end{tabular}
\hspace*{0.6cm}\begin{tabular}{p{2.7cm}l}{\sc pushout}\\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$ h$\\ \hline
$ h'$ \\ \end{tabular} \begin{tabular}{l}\\ {\hspace*{10mm}for every pushout }\\ \\ \end{tabular} \begin{tabular}{l}$ \xy (0,0)*{\xymatrix{\ar[r]^h\ar[d]& \ar[d]\\ \ar[r]^{h'}& }}="D"; (7,-10.5)*{}="A"; (7,-7)*{}="B"; (10.5,-7)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; \endxy $\end{tabular}
\vspace*{3mm}
\hspace*{-\parindent} and the rule scheme (one rule for every ordinal $\lambda$)
\hspace*{0.6cm}\begin{tabular}{p{5.2cm}l}{\sc transfinite composition}\\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$ h_i \; (i<\lambda)$\\ \hline
$ h$ \\ \end{tabular} \begin{tabular}{l}\\ {for every $\lambda$-composite $h$ of $(h_i)_{i<\lambda}$ }\\ \\ \end{tabular}
We say that a morphism $h$ is a \textit{formal consequence} of a set $\ch$ of morphisms (notation $\ch \vdash h$) in the Injectivity Logic if there exists a proof of $h$ from $\ch$ (which means a chain $(h_i)_{i \leq n}$ of morphisms, where $n$ is an ordinal, such that $h = h_n$, and each $h_i$ either lies in $\ch$, or is a conclusion of one of the deduction rules whose premises lie in $\{h_j\}_{j < i}$). }\end{sub}
\begin{sub}\label{aa2.13}{\em {\lem} } The Injectivity Logic is sound, i.e., if a {morphism} $h$ is a formal consequence of a set $\ch$ of morphisms, then $h$ is an injectivity consequence of $\ch$. Briefly: $\ch \vdash h \; \mbox{ implies }\; \ch \models h.$
\vspace*{2.5mm}
{\em The proof (using \ref{aa2.10}) is elementary.}\end{sub}
\begin{sub}\label{aa2.14}{\em {\rem} In \ref{aa2.11} we can replace
{\sc transfinite composition} by the deduction rule {\sc wide pushout}, see below, which makes use of the (obvious) fact that an object $A$ injective w.r.t. a set $\{h_i\}_{i<\lambda}$ of morphisms having a common domain is also injective w.r.t. their wide pushout. Let us note here that this rule does not replace {\sc pushout} of \ref{aa2.11} (because in the latter a pushout of $h$ along an {\it arbitrary} {morphism} is considered). }\end{sub}
\begin{sub}\label{aa2.15}{\em {\defn} The deduction rule
\vspace*{3mm}
\hspace*{0.6cm}\begin{tabular}{p{3.7cm}l}{\sc wide pushout}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$h_i\; (i<\lambda) $ \\ \hline $ h$ \\ \end{tabular}\begin{tabular}{l}\hspace*{3mm}for $h$ a wide pushout of $\{h_i\}_{i<\lambda}$ \end{tabular}
\hspace*{-\parindent} applies, for every cardinal $\lambda$, to an arbitrary object $P$ and an arbitrary set $\{h_i\}$ of $\lambda$ morphisms with the common domain $P$ and the following wide pushout
\vspace*{-1cm} $$\xymatrix{&P\ar[ld]_{h_i}\ar[d]\ar[rd]&\\ P_i\ar[rd]_{k_i}&\ar[d]&\mbox{\hspace*{8mm}$\dots$}\ar[ld]\\&Q&&}\begin{array}{l}
\\
\\
\\
\\
\\
h=k_i\cdot h_i \mbox{ (for any $i$)} \\ \end{array}
$$
{\rem} Again, this is a scheme of deduction rules: for every {\color{black} cardinal} $\lambda$ we have one rule $\lambda$-{\sc wide pushout}. Observe that $\lambda=0$ yields the rule {\sc identity}.}\end{sub}
\begin{sub}\label{aa2.16} {\em {\lem} } The Injectivity Deduction System \ref{aa2.11} is equivalent to the deduction system
\centerline{{\sc composition, cancellation, pushout} and {\sc wide pushout}.}
{\em {\pf} (1) We can derive {\sc wide pushout} from \ref{aa2.11}. For every ordinal number $\lambda$ we derive the rule
\hspace*{3.5cm}\begin{tabular}{c}\\$h_i\; (i<\lambda) $ \\ \hline $ h$ \\ \\ \end{tabular}\hspace*{6mm}\begin{tabular}{l}for $h$ a wide pushout of $\{h_i\}_{i<\lambda}$ \end{tabular}
\hspace*{-\parindent} by transfinite induction on the ordinal $\lambda$. We are given an object $P$ and morphisms $h_i:P\rightarrow P_i\, (i<\lambda)$. The case $\lambda=0$ is trivial; from $\lambda$ we derive $\lambda+1$ by using {\sc pushout}; and for limit ordinals $\lambda$ we form the restricted wide pushouts $Q_j$ of the morphisms $h_i$ for $i<j$, and observe that they form a smooth chain whose composite is a wide pushout of all the $h_i$'s.
(2) From the system in \ref{aa2.16} we can derive the rule $\lambda$-{\sc composition}, where $\lambda$ is an arbitrary ordinal: the case $\lambda=0$ follows from 0-{\sc wide pushout}. The isolated step uses {\sc composition}: the $(\lambda+1)$-composite of $(h_i)_{i\leq \lambda}$ is simply $h_\lambda \cdot k$ where $k$ is the $\lambda$-composite of $(h_i)_{i < \lambda}$. In the limit case, use the fact that a composite $h$ of $(h_i)_{i < \lambda}$ is a wide pushout of $\{k_i\}_ {i<\lambda}$, where $k_i$ is a composite of $(h_j)_{j < i}$.}\end{sub}
\begin{sub}\label{ver} {\em {\rem } For every infinite cardinal $k$ the \textit{$k$-ary Injectivity Deduction System} is the system \ref{aa2.11} where $\lambda$ ranges through ordinals smaller than $k$. A proof of a morphism $h$ from a set $\ch$ in the $k$-ary Injectivity Logic is, then, a proof of length $n < k$ using only the deduction rules with $\lambda$ restricted as above. The last lemma can, obviously, be formulated under this restriction in case we use the scheme
$\lambda$-{\sc wide pushout}
for all cardinals $\lambda<k$.}\end{sub}
\begin{sub}\label{aa2.17}{\em {\defn} The deduction rule
\vspace*{5mm}
\hspace*{1cm}\begin{tabular}{p{4.2cm}l}{\sc coproduct} \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c} $h_i\; (i<\lambda)$ \\ $\xy (0,0)*{}="A"; (20,0)*{}="B"; "A"; "B" **\dir{-} \endxy$\\
$
{\coprod_{i<\lambda} h_i}$\\ \end{tabular}
\vspace*{3mm}
\hspace*{-\parindent} applies, for every cardinal $\lambda$, to an arbitrary collection of $\lambda$ morphisms $h_i:A_i\rightarrow B_i$.
}\end{sub}
\begin{sub}\label{aa2.18}{\em {\lem}} The Injectivity Deduction System \ref{aa2.11} is equivalent to the deduction system of \ref{aa2.16}
with {\sc wide pushout} replaced by
\centerline{\sc {\color{black} identity} \hspace*{1mm} $+$ \hspace*{1mm} coproduct} {\em {\pf} (1) {\sc coproduct} follows from \ref{aa2.16}. In fact, ${\coprod_{i<\lambda}h_i:\coprod_{i<\lambda}A_i\rightarrow \coprod_{i<\lambda}B_i}\,$ is a wide pushout of the morphisms $ {k_j:\coprod_{i<\lambda}A_i\rightarrow \coprod_{i< j }A_i +B_j+ \coprod_{j< i<\lambda}A_i}$, where $j$ ranges through $\lambda$, with components $\, \id_{A_i}\, (i\not= j)$ and $h_j$, and $k_j$ is a pushout of $h_j$ along the $j$-th coproduct injection of $ {\coprod_{i<\lambda}A_i}$.
(2) Conversely, {\sc wide pushout} follows from {\sc {\color{black} identity}}+{\sc coproduct}. We obviously need to consider only $\lambda >1$ and then we use the fact that given morphisms $h_i:A\rightarrow B_i\, (i<\lambda)$, their wide pushout $h:A\rightarrow C$ can be obtained from $ {\coprod_{i<\lambda}h_i}$ by pushing out along the codiagonal $\nabla: {\coprod_{\lambda}A\rightarrow A}$:
$$ \xy (0,0)*{\xymatrix{\coprod A\ar[rr]^{\coprod h_i}\ar[d]_{\nabla}&& \coprod B_i\ar[d]^{}\\ A \ar[rr]_{h}&& C}}="D"; (24,-14)*{}="A"; (24,-9)*{}="B"; (29.5,-9)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; \endxy $$
}\end{sub}
\vspace*{3mm} \begin{sub}\label{aa2.19}{\em {\rem} The deduction system of the last lemma has five rules, but its advantage
over the system \ref{aa2.11} is that they are particularly simple to formulate:
\vspace*{4.5mm}
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\color{black} {\sc identity}}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}\hline $\id_A$\\ \end{tabular}
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc cancellation}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$h_2 \cdot h_1$ \\ \hline $ h_1$ \\ \\ \end{tabular}
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc composition}\\ \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$h_2\; \; h_1$ \\ \hline $ h_2\cdot h_1$ \\ \\ \end{tabular} \begin{tabular}{l}\\ {\hspace*{10mm}if $h_2\cdot h_1$ is defined}\\ \\ \end{tabular}
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc pushout}\\ \end{tabular}\hspace*{3mm}\begin{tabular}{c}$ h$\\ \hline
$ h'$ \\ \end{tabular} \begin{tabular}{l}\\ {\hspace*{10mm}given }\\ \\ \end{tabular} \begin{tabular}{l}$ \xy (0,0)*{\xymatrix{\ar[r]^h\ar[d]& \ar[d]\\ \ar[r]^{h'}& }}="D"; (7,-10.5)*{}="A"; (7,-7)*{}="B"; (10.5,-7)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; \endxy $\end{tabular} \vspace*{3mm}
\hspace*{2cm}\begin{tabular}{p{2.7cm}l}{\sc coproduct} \\ \end{tabular}\hspace*{3mm}\begin{tabular}{c} $h_i\; (i\in I)$ \\ \hline
$ {\coprod_{i\in I} h_i}$\\ \end{tabular}
\vspace*{6mm}
We prove below that \ref{aa2.11}, and therefore the above equivalent deduction system, is not only sound but (in a number of categories) also complete.}\end{sub}
\begin{sub} {\em {\rem} To relate our deduction rules to the usual ones (of classical logic), let us consider, as in the Introduction, the category of all $\Sigma$-structures. Then any object $A$ can be presented by a set $A^\m(X)$ of atomic formulas with parameters $X$ in $A$: for the familiar algebraic structures, this is just the usual concept of generators and relations. Given a morphism $f:A\rightarrow B$, and such presentations $A^\m(X)$ and $B_o^\m(Y)$ of $A$ and $B$, we can also present $B$ by $B^\m(X,Y)$, which is the union of $B_o^\m(Y)$ and the set of all the equations $x = t(Y)$ for which $f(x) = t(Y)$ ($t$ a $\Sigma$-term). Then for the sentence $$f^\m:=\forall X(\wedge A^\m(X)\rightarrow \exists Y(\wedge B^\m(X,Y)))$$ we have that an object $C$ is $f$-injective iff $C\models f^\m$. Note that if $f$ is finitary (see the Introduction or 3.4 below), the presentations, and hence $f^\m$, can be chosen to be finitary (more details in \cite{AR}, 5.33). Now, we can associate Gentzen-style rules to sets of atomic formulas, generalizing the idea of what was done (with more accuracy) in \cite{ASS} for sets of equations: associating $$A^\m(X)\Rightarrow B^\m(X,Y)$$ to $\forall X(\wedge A^\m(X)\rightarrow \exists Y(\wedge B^\m(X,Y)))$, the {\sc identity} axiom is of course \vspace*{-4mm} $$\begin{array}{c}\\ \hline A^\m(X)\Rightarrow A^\m(X)\end{array}\; \; ;$$ {\sc cancellation} is a categorical version of the ``restriction'' rule $$\begin{array}{c}A^\m(X)\Rightarrow (B^\m(X,Y) \cup C^\m(X,Y,Z))\\\hline A^\m(X)\Rightarrow B^\m(X,Y)\end{array}\; \; ;$$ {\sc pushout} is essentially the ``weakening'' rule $$\begin{array}{c} A^\m(X)\Rightarrow B^\m(X,Y)\\ \hline (A^\m(X)\cup C^\m(X,Z))\Rightarrow B^\m(X,Y)\end{array}\; \; ;$$ and {\sc composition} is a ``cut'' rule $$\begin{array}{c}A^\m(X)\Rightarrow B^\m(X,Y),\; B^\m(X,Y)\Rightarrow C^\m(X,Y,Z)\\\hline A^\m(X)\Rightarrow C^\m(X,Y,Z)\end{array}\; \; .$$ The usual stronger ``cut'' rule
$$\begin{array}{c}A^\m(X)\Rightarrow B^\m(X,Y),\; (B^\m(X,Y) \cup C^\m(X,Y,Z))\Rightarrow D^\m(X,Y,Z,U) \\\hline (A^\m(X)\cup C^\m(X,Y,Z))\Rightarrow D^\m(X,Y,Z,U)\end{array}$$ corresponds to $$\begin{array}{c}\xymatrix{A\ar[r]^f&B},\; \xymatrix{B+C\ar[r]^<<<<<g &D} \\\hline \xymatrix{A+C\ar[rr]^>>>>>>>>>>{g\cdot (f+1_C)}&& D}\end{array}\; \; ,$$ which is proved via
{ \begin{center}\begin{tabular}{p{4cm}} \begin{tabular}{c}$\quad \; f \qquad \qquad g$\end{tabular}
\begin{tabular}{p{3.2cm}}\hline \\ \end{tabular}
\vspace*{-4mm}
\begin{tabular}{c}$\; \;\;f+\id_{C} \qquad g$\end{tabular}
\begin{tabular}{p{3.2cm}}\hline \\ \end{tabular}
\vspace*{-4mm}
\begin{tabular}{c}$\quad \;\, g\cdot(f+1_C)$ \end{tabular} \end{tabular} \begin{tabular}{p{3.2cm}} \vspace*{-5mm} {\sc pushout}\\
{\sc composition} \end{tabular}
\end{center}}}\end{sub}
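To see a concrete instance of this translation, consider the signature $\Sigma$ with one binary operation symbol $*$, let $A$ be the $\Sigma$-structure freely generated by $X=\{x\}$, and let $f:A\rightarrow B$ be the canonical morphism into the structure $B$ presented by the generators $x,\, y$ and the single relation $x=y*y$. Then $$f^\m=\forall x\, \exists y\, (x=y*y),$$ and a structure $C$ is $f$-injective iff every element of $C$ has the form $y*y$ for some $y\in C$.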
\section{Completeness in locally presentable categories}
\begin{sub} \label{b3.1}{\em \textbf{Assumption} In the present section we study injectivity in a \textit{locally presentable} category $\mathcal{A}$ in the sense of Gabriel and Ulmer, see \cite{GU}
or \cite{AR}.
This means that:
\begin{enumerate}
\item[(a)] $\mathcal{A}$ is cocomplete,
\end{enumerate}
and
\begin{enumerate}
\item[(b)]there exists {\color{black} a regular} cardinal $\lambda$ such that
$\mathcal{A}$ has a set of $\lambda$-presentable objects whose closure
under $\lambda$-filtered colimits is all of $\mathcal{A}$.
\end{enumerate}
Recall that an object $A$ is \textit{$\lambda$-presentable} if its
hom-functor hom($A,-):\mathcal{A}\rightarrow \Set$ preserves $\lambda$-filtered colimits. That is, given a $\lambda$-filtered diagram $D$ with
a colimit $c_i:D_i\rightarrow C$ $(i\in I)$ in $\mathcal{A}$, then for every
{morphism}
$f:A\rightarrow C$
\begin{enumerate}
\item[(i)] a factorization of $f$ through $c_i$ exists for some $i\in
I$,
\end{enumerate}
and
\begin{enumerate}
\item[(ii)] factorizations are essentially unique, i.e., given
$i\in I$ and
$c_i\cdot g^\m=c_i\cdot g^{\m\m}$
for some
$g^\m, g^{\m\m}:A\rightarrow D_i$,
there exists a connecting {morphism}
$d_{ij}:D_i\rightarrow D_j$
of the diagram with
$d_{ij}\cdot g^\m=d_{ij} \cdot g^{\m\m}$.
\end{enumerate}
}
\end{sub}
\begin{sub} \label{b3.2}{\em {\exas} (see \cite{AR}) Sets, presheaves, varieties of
algebras and simplicial sets are examples of locally presentable
categories. Categories such as $\mathbf{Top}$ (topological spaces) or
$\mathbf{Haus}$ (Hausdorff spaces) are not locally
presentable.}\end{sub}
\begin{sub} \label{b3.3}{\em {\rem}
(a) In the present section we prove that the Injectivity Logic is complete in every locally presentable category. The
reader may decide to skip this section since
we prove a more general result in Section 6. Both of our proofs are based on
the fact
that for every set $\ch$ of morphisms the full subcategory Inj$\ch$ (of all objects injective w.r.t. morphisms of $\ch$) is
\textit{weakly reflective}.
That is: every object $A\in \mathcal{A}$ has a morphism $r:A\rightarrow \overline{A}$, called a weak reflection, such that \begin{enumerate}
\item[(i)] $\overline{A}$ lies in Inj$\ch$ \end{enumerate} and \begin{enumerate} \item[(ii)] every morphism from $A$ to an object of Inj$\ch$
factors through $r$ (not necessarily uniquely). \end{enumerate} In the present section we will utilize the classical \textit{Small Object Argument} of D. Quillen \cite{Q}: this tells us that every object $A$ has a weak reflection $r:A\rightarrow \overline{A}$ in Inj$\ch$ such that $r$ is a transfinite composite of morphisms of the class $$\widehat{\mathcal{H}}=\{ k; \, k \mbox{ is a pushout of a member of $\ch$ along some morphism}\}.$$
(b) The reason for proving the completeness based on the Small Object Argument in the present section is that the proof is short and elegant. However, by using a more refined construction of weak reflection in Inj$\ch$, which we present in Section 5, we will be able to prove the completeness in the so-called strongly locally ranked categories, which include $\mathbf{Top}$ and $\mathbf{Haus}$.
The spirits of the two proofs are quite different. Given an injectivity consequence $h$ of a set of morphisms, in this section we will show how to derive a formal proof of $h$ from Quillen's construction of the weak reflection; this construction is ``linear", forming a transfinite composite. In the next section, a weak reflection will be constructed as a colimit of a filtered diagram which somehow presents simultaneously all the possible formal proofs.}\end{sub}
\begin{sub} \label{b3.4}{\em {\defn} A morphism is called \textit{$\lambda$-ary} provided that its domain and codomain are $\lambda$-presentable objects. For $\lambda=\aleph_0$ we say \textit{finitary}. }\end{sub}
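For example, in the category of sets the $\aleph_0$-presentable objects are precisely the finite sets, so the finitary morphisms are precisely the maps between finite sets; in a variety of algebras, the finitary morphisms are precisely the homomorphisms between finitely presentable algebras.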
\begin{sub} \label{b3.5}{\em {\rem} (a) The $\lambda$-ary morphisms are precisely the $\lambda$-presentable objects of the arrow category $\mathcal{A}^\rightarrow$. In contrast, M. H\'ebert introduced in \cite{H} $\lambda$-presentable morphisms; these are the morphisms $f:A\rightarrow B$ which are $\lambda$-presentable objects of the slice category $A\downarrow \mathcal{A}$. In the present paper we will not use the latter concept.
(b) We work now with the \textit{Finitary Injectivity Logic}, i.e., the deduction system \ref{2.4} applied to finitary morphisms. We generalize this to the $k$-ary logic below. }\end{sub}
\begin{sub} \label{b3.6}{\em {\theo}} The Finitary Injectivity Logic is complete in every locally presentable category $\ca$. That is, given a set $\ch$ of finitary morphisms in $\ca$, then every finitary morphism $h$ which is an injectivity consequence of $\ch$ is a formal consequence in the deduction system \ref{2.4}. Briefly: $$\ch \models h \mbox{ implies } \ch\vdash h.$$ \end{sub}
{\pf}
Given a finitary morphism $h:A\rightarrow B$ which is an injectivity consequence of $\ch$,
we prove that $$\ch \vdash h.$$
(a) {\color{black} The} above
object $A$ has a weak reflection
$$r:A\rightarrow \overline{A}$$ such that $r$ is a transfinite
composition of morphisms in
$\widehat{\mathcal{H}}$,
see \ref{b3.3}(a). Since $\ch\models h$, it follows that
$\overline{A}$ is injective w.r.t. $h$, which yields a {morphism} $u$
forming a commutative triangle
$$\xymatrix{A\ar[rr]^r\ar[dr]_h&&\ova\\&B\ar[ru]_u&}$$
(b) Consider all commutative triangles as above where
$r:A\rightarrow \overline{A}$ is any
$\alpha$-composite of morphisms in $\widehat{\mathcal{H}}$ for some ordinal $\alpha$ and $u$ is arbitrary. We
prove that the least possible $\alpha$ is finite. This finishes the
proof of $\ch \vdash h$: In case $\alpha=0$, we have that $\id_A=u\cdot h$, and we derive $h$ via {\sc identity} and {\sc cancellation}. In case $\alpha$ is a finite ordinal greater than 0, we have that $r$ is provable from $\mathcal{H}$ using {\sc pushout} and {\sc composition}. Consequently, via {\sc cancellation}, we get $h$.
Let $\mathcal{C}$ be the class of all ordinals $\alpha$ such that there exist an $\alpha$-composite $r$ of morphisms of $\widehat{\mathcal{H}}$ and a morphism $u$ with $r=u\cdot h$. To show that the least member $\gamma$ of $\mathcal{C}$ is finite, we prove that for each ordinal $\gamma\geq \omega$ in $\mathcal{C}$ we can find another ordinal in $\mathcal{C}$ which is smaller than $\gamma$.
\vspace*{2mm}
\hspace*{-\parindent} A. Case $\gamma=\beta+m$, with $\beta$ a limit ordinal and $m>0$ finite. Let $a_{i,i+1}\; (i< \beta+m)$ be the corresponding chain with $r=a_{0, \beta+m}$.
Since $a_{\beta, \beta+1}$ lies in $\widehat{\mathcal{H}}$, we can express it as a pushout of some {morphism} $k:D\rightarrow D^\m$ in $\ch$:
$\hspace*{3.5cm}\xy (-70,0)*{\xymatrix{&&&&&&&D\ar[lllld]_q\ar[r]^k\ar[d]_p&D^\m\ar[d]^{p^\m}\\ A_0\ar[r]_{a_{01}}&A_1\ar[r]_{a_{12}} &&A_i\ar[r]_{a_{i,i+1}} \ar[d]_{v_i}&A_{i+1}\ar[r]_>>>>{a_{i+1,i+2}} \ar[d]_{v_{i+1}}&A_{i+2}\ar[r]_>>>>{a_{i+2,i+3}} \ar[d]_{v_{i+2}}&&A_\beta\ar[r]_{a_{\beta,\beta+1}} \ar[d]_{v_\beta}&A_{\beta+1}\ar@{=}[ld]\\&&&P_i\ar[r]_{p_{i,i+1}}&P_{i+1}\ar[r]_>>>>{p_{i+1,i+2}} &P_{i+2}\ar[r]_>>>>{p_{i+2,i+3}}&&P_\beta&}}="D"; (-26,-14.2)*{}="A"; (-26,-9)*{}="B"; (-18.5,-9)*{}="C"; "A"; "B" **\dir{-}; "B"; "C" **\dir{-}; (-43.5,-14)*{\dots}; (-43.5,-29)*{\dots}; (-106.5,-14)*{\dots}\endxy$
\vspace*{0.7cm}
We have a colimit $A_\beta=\mbox{co}\hspace*{-0.7mm}\lim_{i<\beta}A_i$ of a chain of morphism s. Hence, because $D$ is finitely presentable, $p$ factorizes as $p=a_{i\beta}\cdot q$ for some $i<\beta$ and some morphism $q:D\rightarrow A_i$. Let $v_i$ be a pushout of $k$ along $q$, and form a sequence $v_j$ of pushouts of $k$ along $a_{ij}\cdot q\, (j<\beta)$ as illustrated in the diagram above (taking colimits at the limit ordinals). Then it is easily seen, due to $p=a_{i\beta}\cdot q$, that $v_\beta
= \mbox{co}\hspace*{-0.7mm}\lim_{j<\beta}v_j$ is a pushout of $k$ along $p$. Thus, without loss of generality, $$P_\beta= A_{\beta+1} \; \; \mbox{and}\; \; v_\beta=a_{\beta,\beta+1}.$$ Observe that, since $a_{j, j+1}$ lies in $\widehat{\mathcal{H}}$, {\sc pushout} implies that $$p_{j,j+1}\in \widehat{\mathcal{H}} \; \; \mbox{for all} \, \, i\leq j < \beta.$$ Also $v_i\in \widehat{\mathcal{H}}$ since it is a pushout of $k$ along $q$. Consequently, $a_{0,\beta+1}$ is a $\beta$-composite of morphisms $b_{j,j+1}\; (j< \beta)$ of $\widehat{\mathcal{H}}$ as follows (where $l$ is the first limit ordinal after $i$): $$\begin{array}{lll}&b_{j,j+1}=a_{j,j+1}&\; \mbox{ for all $j<i$},\\ &b_{i,i+1}=v_i,\\ & b_{j,j+1}=p_{j-1,j}&\; \mbox{ for all $i<j<l$},\\ \mbox{and}&&\\ & b_{j,j+1}=p_{j,j+1}&\; \mbox{ for all $l\leq j<\beta$}. \end{array}$$
Thus $r=a_{0,\beta+m}$ is a $(\beta+(m-1))$-composite of morphisms of $\widehat{\mathcal{H}}$.
\vspace*{2mm}
\hspace*{-\parindent} B. Case $\gamma$ is a limit ordinal. The {morphism} $$u:B\rightarrow \ova=\mbox{co}\hspace*{-0.7mm}\lim_{i<\gamma}A_i$$ factors, since $B$ {\color{black} is finitely presentable}, through some $a_{i\gamma}, \; i<\gamma$: $$u=a_{i\gamma} \cdot \overline{u}\; \; \mbox{for some $\overline{u}:B\rightarrow A_i$.}$$ The parallel pair $$\xymatrix{A=A_0 \ar@<.8ex>[r]^{\; \; \; \; \; \; \overline{u}\cdot h}\ar@<-.8ex>[r]_{\; \; \; \; \; \; a_{0i}}&A_i}$$
is clearly merged by the colimit {morphism} $a_{i\gamma}$ of $A_\gamma=\mbox{co}\hspace*{-0.7mm}\lim_{i<\gamma}A_i$. Since $A$ is
finitely presentable, hom($A,-$) preserves that colimit, consequently (see (ii) in \ref{b3.1}.b), the parallel pair is also merged by a connecting {morphism} $a_{ij}:A_i\rightarrow A_j$ for some $i<j<\gamma$: $$a_{ij}\cdot \overline{u}\cdot h=a_{0j}.$$ This gives us a commutative triangle
$$\xy (0,0)*{ \xymatrix{A_0\ar[r]^{a_{01}}\ar[dr]_h&A_1\ar[r]^{a_{12}}& & A_j\\ &B\ar[rru]_{a_{ij}\cdot \overline{u}}&& }}="A"; (32.5,0)*{{\dots}} ="B";\endxy $$
\vspace*{1.3cm} \hspace*{-\parindent}thus $a_{0j}$ is a $j$-composite of morphisms of $\widehat{\ch}$ with $j<\gamma$.
\begin{sub} \label{b3.7}{\em {\rem} The above theorem immediately generalizes to the $k$-ary Injectivity Logic, i.e., to the deduction system of \ref{ver} applied to $k$-ary morphisms. Recall that for every set of objects in a locally presentable category there exists a cardinal $k$ such that all these objects are $k$-presentable. Consequently, for every set $\ch\cup \{h\}$ of morphisms there exists $k$ such that all members are $k$-ary. The proof that $\ch \models h$ implies $\ch\vdash h$ is completely analogous to that of \ref{b3.6}: we show that the least possible $\alpha$ is smaller than $k$, thus in Cases A. and B. we work with $\gamma\geq k$. } \end{sub}
\begin{sub} \label{b3.8}{\em {\cor}} The Injectivity Logic is sound and complete in every locally presentable category.
\end{sub}
In fact, given $$\ch \models h$$ find a cardinal $k$ such that all members of $\ch\cup \{h\}$ are $k$-ary morphisms. Then $h$ is a formal consequence of $\ch$ by \ref{b3.7}.
\begin{sub} \label{b3.9}{\em {\rem} The above corollary also follows from the Small Object Argument (see \ref{b3.3}(a)): if $h:A\rightarrow B$ is an injectivity consequence of $\ch$ and if $r:A\rightarrow \ova$ is the corresponding weak reflection, then $r$ is clearly a formal consequence of $\ch$. Since $\ova$ is injective w.r.t. $h$, it follows that $r$ factors through $h$, thus, $h$ is a formal consequence of $r$ (via {\sc cancellation}).}
\end{sub}
\section{Strongly locally ranked categories}
\begin{sub} \label{a3.1}{\em {\rem} Recall that a \textit{factorization system} in a category is a pair $(\ce,\, \cm)$ of classes of morphisms containing all isomorphisms and closed under composition such that \begin{enumerate} \item[(a)] every morphism $f:A\rightarrow B$ has a factorization $f=m\cdot e$ with $e:A\rightarrow C$ in $\ce$ and $m:C\rightarrow B$ in $\cm$ \end{enumerate} and \begin{enumerate} \item[(b)] given another such factorization $f=m^\m\cdot e^\m$ there exists a unique ``diagonal fill-in'' morphism $d$ making the diagram $$\xymatrix{A\ar[r]^{e}\ar[d]_{e^\m}&C\ar[ld]_{d}\ar[d]^{m}\\C^\m\ar[r]_{m^\m}&B}$$ commutative.
\end{enumerate}
The factorization system is called \textit{left-proper} if every morphism of $\ce$ is an epimorphism. In that case the $\ce$-quotients of an object $A$ are the quotient objects of $A$ represented by morphisms of $\ce$ with domain $A$.
}\end{sub}
\begin{sub} \label{a3.2}{\em {\defn} Let $(\ce,\cm)$ be a factorization system. We say that an object $A$ has \textit{$\cm$-rank $\lambda$}, where $\lambda$ is a regular cardinal, provided that
\begin{enumerate} \item[(a)] hom($A,-$) preserves $\lambda$-filtered colimits of diagrams of $\cm$-morphisms (i.e., given a $\lambda$-filtered diagram $D$ whose connecting morphisms lie in $\cm$, then every morphism $f:A\rightarrow \mbox{colim}D$ factors, essentially uniquely, through a colimit map of $D$) \end{enumerate} and \begin{enumerate} \item[(b)] $A$ has less than $\lambda$ $\ce$-quotients.
\end{enumerate}
If $\lambda=\aleph_0$ we say that the object $A$ has \textit{finite $\cm$-rank}.} \end{sub}
\begin{sub} \label{a3.3} {\em {\exas} (1) For the factorization system (Iso, All), rank $\lambda$ is equivalent to $\lambda$-presentability.
(2) In the category $\mathbf{Top}$ of topological spaces, choose $(\ce, \, \cm)$ = (Epi, Strong Mono). Here the $\cm$-subobjects are precisely the embeddings of subspaces. Every topological space $A$ of cardinality $\alpha$ has $\cm$-rank $\lambda$ whenever $\lambda>2^{2^\alpha}$. In fact, hom($A,-$) preserves $\lambda$-directed unions of subspaces since $\alpha<\lambda$. And the number of quotient objects of $A$ (carried by epimorphisms) is at most $\sum_{\beta\leq \alpha}E_\beta T_\beta$ where $E_\beta$ is the number of equivalence relations on $A$ with $\beta$ equivalence classes and $T_\beta$ is the number of topologies on a set of cardinality $\beta$. Since $E_\beta$ and $T_\beta$ are both $\leq 2^{2^\beta}$, we have {\color{black} $\sum_{\beta\leq \alpha}E_\beta T_\beta \leq \alpha \cdot 2^{2^\alpha}\cdot 2^{2^\alpha}<\lambda$, thus we conclude that $A$ has less than $\lambda$ quotients.}} \end{sub}
\begin{sub} \label{new4.4}{\em {\rem} Every $\ce$-quotient of an object of $\cm$-rank $\lambda$ also has $\cm$-rank $\lambda$. In fact (a) in \ref{a3.2} follows easily by diagonal fill-in, and (b) is obvious.}\end{sub}
\begin{sub} \label{a3.4} {\em {\defn} A category $\mathcal{A}$ is called \textit{strongly locally ranked} provided that it has a left-proper factorization system $(\ce, \, \cm)$ such that \begin{enumerate}
\item[(i)] $\mathcal{A}$ is cocomplete;
\item[(ii)] every object has an $\cm$-rank, and all objects of
the same $\cm$-rank form a set up to isomorphism;
\item[(iii)] for every cardinal $\mu$ the collection of all objects of $\cm$-rank $\mu$
is closed under $\ce$-quotients and
under $\mu$-small colimits,
i.e., colimits of diagrams with
less than $\mu$ morphisms;
\end{enumerate} \hspace*{-\parindent} and
\begin{enumerate} \item[(iv)] the subcategory of all objects of $\mathcal{A}$ and all morphisms
of $\cm$ is closed under filtered colimits in $\mathcal{A}$. \end{enumerate}
{\rem} The statement (iv) means that, given a filtered colimit with connecting morphisms in $\cm$, then \begin{enumerate}
\item[(a)] the colimit cocone is formed by morphisms of $\cm$
\end{enumerate}
\hspace*{-\parindent} and
\begin{enumerate}
\item[(b)] for every other cocone of $\cm$-morphisms the unique
factorizing morphism lies in $\cm$.
\end{enumerate}
} \end{sub}
\begin{sub} \label{a3.5} {\em {\exas} (1) Every locally presentable category is strongly locally ranked: choose $$\ce\equiv \mbox{isomorphisms}, \, \cm\equiv \mbox{all morphisms.}$$
In fact, see \cite{AR}, 1.9 for the proof of (ii), whereas (iii) and (iv) hold trivially.
(2) Choose $$\ce\equiv \mbox{epimorphisms}, \, \cm\equiv \mbox{strong monomorphisms.}$$ Here categories such as $\mathbf{Top}$ (which are not locally presentable) are included. In fact, for a space $A$ of cardinality $\alpha$ we have that hom($A,-$) preserves $\lambda$-filtered colimits (=unions) of subspaces whenever $\alpha<\lambda$. Thus, by choosing a cardinal $\lambda>\alpha$ bigger than the number of quotients of $A$ we get an $\cm$-rank of $A$. It is easy to verify (iii) and (iv) in $\mathbf{Top}$.
(3) Let $\cb$ be a full, isomorphism-closed, $\ce$-reflective subcategory of a strongly locally ranked category $\mathcal{A}$. If $\cb$ is closed under filtered colimits of $\cm$-morphisms in $\mathcal{A}$, then $\cb$ is strongly locally ranked. In fact, $\cb$ is closed under $\cm$ in the sense that given $m:A\rightarrow B$ in $\cm$ with $B\in \cb$, then $A\in \cb$. (Indeed, we have a reflection $r_A:A\rightarrow A^\m$ in $\ce$ and $m=m^\m \cdot r_A$ for a unique $m^\m$; this implies that $r_A\in \ce$ is an isomorphism, thus, $A\in \cb$.) Therefore the restriction of $(\ce,\, \cm)$ to $\cb$ yields a factorization system. It fulfils (ii)-(iv) of \ref{a3.4} because $\cb$ is closed under filtered colimits of $\cm$-morphisms.
(4) The category $\mathbf{Haus}$ of Hausdorff spaces is strongly locally ranked: it is an epireflective subcategory of $\mathbf{Top}$ closed under filtered unions of subspaces.
}\end{sub}
\begin{sub} \label{a3.6} {\em \textbf{Observation} In a strongly locally ranked category the class $\cm$ is closed under transfinite composition. This follows from (iv).} \end{sub}
\begin{sub} \label{a3.7}{\em {\defn} A morphism is called \textit{$k$-ary} if its domain and codomain have $\cm$-rank $k$. In case $k ={\aleph_0}$ we speak of \textit{finitary morphisms.}} \end{sub}
\begin{sub} \label{a3.8}{\em {\rem} The name ``strongly locally ranked" was chosen since our requirements are somewhat stronger than those of \cite{AHRT}: there a category is called locally ranked in case it is cocomplete, has an $(\ce,\cm)$-factorization, is $\ce$-cowellpowered and for every object $A$ there exists an infinite cardinal $\lambda$ such that hom$(A,-)$ preserves colimits of $\lambda$-chains of $\cm$-monomorphisms. Our definition of rank and the condition \ref{a3.4}(ii) imply that the given category is $\ce$-cowellpowered. Thus, every strongly locally ranked category is locally ranked.
An example of a locally ranked category that is not strongly locally ranked is the category of $\sigma$-semilattices (posets with countable joins and functions preserving them): condition \ref{a3.4}(iv) fails here. Consider e.g. the $\omega$-chain of the posets exp$(n)$ (where $n=\{0,1,\dots,n-1\}$), $n\in \omega$, with inclusion as order. The colimit of this chain is exp$(\mathbb{N})$ ordered by inclusion. If $M$ is the poset of all finite subsets of $\mathbb{N}$ with an added top element, then the embeddings exp$(n)\hookrightarrow M$ form a cocone of the chain, but the factorization morphism exp$(\mathbb{N}) \rightarrow M$ is not a monomorphism.} \end{sub}
\section{A construction of weak reflections}
\begin{sub} \label{a4.1} {\em {\textbf{Assumption}} In the present section $\mathcal{A}$ denotes a strongly locally ranked category. For every infinite cardinal $k$, $\mathcal{A}_k$ denotes a chosen set of objects of $\cm$-rank $k$ closed under $\ce$-quotients and $k$-small colimits. In particular, one may of course choose $\mathcal{A}_k$ to be a set of representatives of all the objects of $\cm$-rank $k$ up to isomorphism.}\end{sub}
Given a set $\ch \subseteq \cm$ of $k$-ary morphisms of $\mathcal{A}_k$ (considered as a full subcategory of $\mathcal{A}$), \cite{AHRT} provides a construction of a weak reflection in Inj$\,\ch$, which generalizes the Small Object Argument (see \ref{b3.3}). However, this does not appear to be sufficient to prove our Completeness Theorem for the finitary case. The aim of this section is to present a different, more appropriate construction.
We begin with the case $k=\omega$ and come back to the general case at the end of this section.
\begin{sub} \label{a4.2} {\em \textbf{Convention} (a) Morphisms with domain and codomain in $\mathcal{A}_\omega$ are called \textit{petty}.
(b) Given a set $\ch$ of petty morphisms, let $$\overline{\ch}$$ denote the closure of $\ch$ under finite composition and pushout in $\mathcal{A}_\omega$. (That is, $\overline{\mathcal{H}}$ is the closure of $\ch \cup \{\id_A;\, A \in \mathcal{A}_\omega\}$ under binary composition and pushout along petty morphisms.)
(c) Since $\overline{\ch} \subseteq \mbox{mor}\mathcal{A}_\omega$ is a set, we can, for every object $B$ of $\mathcal{A}_\omega$, index all morphisms of $\overline{\ch}$ with domain $B$ by a set -- and that indexing set can be chosen to be independent of $B$. That is, we assume that a set $T$ is given and that for every object $B\in \mathcal{A}_\omega$, \begin{equation}\label{(t)}\{h_B(t):B\rightarrow B(t)\; ; \; t\in T \}\end{equation} is the set of all morphisms of $\overline{\ch}$ with domain $B$. }\end{sub}
\begin{sub} \label{a4.3} {\em \textbf{Diagram $\mathbf{D_A}$} For every object $A\in \mathcal{A}_\omega$ we define a diagram $D_A$ in $\mathcal{A}$ and later prove that a weak reflection of $A$ in Inj$\,\ch$ is obtained as a colimit of $D_A$. The domain $\cd$ of $D_A$, independent of $A$, is the poset of all finite {\color{black} words} $$\varepsilon,\, M_1,\, M_1M_2,\, \dots,\, M_1\dots M_k\; \; (k<\omega)$$ {\color{black} where $\varepsilon$ denotes the empty word and} each $M_i$ is a finite subset of $T$. The ordering is as follows:
$$M_1\dots M_k\leq N_1\dots N_l\; \; \mbox{ iff } \; \; k\leq l \; \; \mbox{ and }\; \; M_1\subseteq N_1, \, \dots,\, M_k\subseteq N_k.$$ Observe that $\varepsilon$ is the least element.
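For instance, if $s,\, t,\, u$ are distinct elements of $T$ then $$\{t\}\{s\}\leq \{s,t\}\{s,u\}\{t\},$$ whereas the words $\{s\}$ and $\{t\}$ are incomparable. Observe also that $\cd$ is directed: an upper bound of two words $M_1\dots M_k$ and $N_1\dots N_l$ (say with $k\leq l$) is given by $$(M_1\cup N_1)\dots (M_k\cup N_k)N_{k+1}\dots N_l.$$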
We denote the objects $D_A(M_1\dots M_k)$ of the diagram ${D_A}$ by $$A_M \mbox{ where } M=M_1\dots M_k,$$ and if $M_1 \dots M_k\leq N_1 \dots N_l \,= N$, we denote by $$a_{M,N}:A_M\rightarrow A_N$$ the corresponding connecting morphism of $D_A$. We define these objects and connecting morphisms by induction on the length $k$ of the word $M=M_1 \dots M_k$ considered.
\textit{Case $k=0$}: $A_{\varepsilon}=A$.
\textit{Induction step}: Assume that all objects $A_M$ with $M$ of length less than or equal to $k$ and all connecting morphisms between them are defined. For every word $M$ of length $k+1$ denote by $$M^\star\leq M$$ the prefix of $M$ of length $k$, and define the object $A_M$ as a colimit of the following finite diagram $$\xymatrix{A_K\ar[rr]^{h_{A_K}(t)}\ar[ddd]_{a_{K,M^\star}}&&A_K(t)&&\\&\bullet\ar[rr]\ar[ldd]&&&\\ &&\bullet\ar[rr]\ar[lld]&&\\ A_{M^\star}&&\dots&&}$$ where $K$ ranges over all words $K\in \mathcal{D}$ with $K\leq M^\star$ and $t$ ranges over the set $M_{k+1}$. Thus, $A_M$ is equipped with (the universal cone of) morphisms $$a_{M^\star,M}:A_{M^\star}\rightarrow A_M \; \; \mbox{(connecting morphism of $D_A$)}$$ and $$d^K_M(t):A_K(t)\rightarrow A_M \; \; \mbox{for all $K\leq M^\star,\, t\in M_{k+1}$,}$$ forming commutative squares \begin{equation}\label{5.1a}\xymatrix{& A_K\ar[ld]_{a_{K,M^\star}} \ar[rd]^{h_{A_K}(t)}&\\ A_{M^\star}\ar[rd]_{a_{M^\star,M}}&&A_K(t)\ar[ld]^{d_M^K(t)}\\&A_M&}\end{equation} This defines the objects $A_M$ for all words of length $k+1$. Next we define connecting morphisms $$a_{N,M}:A_N\rightarrow A_M$$ for all words $N\leq M$. If the length of $N$ is at most $k$, then $N\leq M^\star$ and we define $a_{N,M}$ through the (already defined) connecting morphism $a_{N,M^\star}$ by composing it with the above $a_{M^\star,M}$. If $N$ has length $k+1$, we define $a_{N,M}$ as the unique morphism for which the diagrams \begin{equation}\label{5.1b}\xymatrix{&&&& A_K\ar[ld]_{a_{K,N^\star}} \ar[rd]^{h_{A_K}(t)}&&\\&&& A_{N^\star}\ar[rd]^{a_{N^\star,N}} \ar[dddr]_{a_{N^\star, M}}&&A_K(t)\ar[ld]_{d_N^K(t)} \ar[lddd]^{d^K_M(t)}&\\&&&&A_N\ar[dd]^<<<<<{a_{N,M}}&&(K\leq N^\star,\, t\in N_{k+1})\\&&&&&\\&&&&A_M&&}\end{equation} commute.
It is easy to verify that the morphisms $a_{N,M}$ are well-defined and that $D_A:\mathcal{D}\rightarrow \mathcal{A}$ preserves composition and identity morphisms.
}\end{sub}
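To make the ordering on the poset $\mathcal{D}$ concrete, here is a minimal Python sketch (an illustration only, not part of the proof), assuming the ordering of \ref{a4.10} specialized to finite words of finite subsets of $T$: $M\leq N$ iff $M$ is no longer than $N$ and $M_i\subseteq N_i$ componentwise. It also checks the directedness of $\mathcal{D}$ that is used below when $\hat{A}=\mbox{colim}\,D_A$ is treated as a directed colimit.

```python
from itertools import product

def leq(M, N):
    """M <= N iff M is no longer than N and M_i is a subset of N_i, componentwise."""
    return len(M) <= len(N) and all(Mi <= Ni for Mi, Ni in zip(M, N))

def upper_bound(M, N):
    """Componentwise union, padding the shorter word with empty sets."""
    l = max(len(M), len(N))
    pad = lambda W: list(W) + [frozenset()] * (l - len(W))
    return [Mi | Ni for Mi, Ni in zip(pad(M), pad(N))]

# all words of length <= 2 over subsets of a two-element set T
T = [frozenset(), frozenset({"s"}), frozenset({"t"}), frozenset({"s", "t"})]
words = [[]] + [[a] for a in T] + [[a, b] for a, b in product(T, repeat=2)]

# the poset D is directed: any two words have a common upper bound
assert all(leq(M, upper_bound(M, N)) and leq(N, upper_bound(M, N))
           for M in words for N in words)
```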
\begin{sub}\label{3.3}\label{a4.6}{\em {\lem} }All connecting morphisms of the diagram $D_A$ lie in $\overline{\mathcal{H}}$.
{\em {\pf} We first observe that, given a finite diagram $$\xymatrix{A_i\ar[r]^{h_i}\ar[d]_{f_i}&B_i\\C&}\; \; \,\; \begin{array}{l}\\ \\ \\(i\in I) \,\end{array}$$ with all
$h_i$ in $\overline{\mathcal{H}}$, a colimit \begin{equation}\label{d1}\xymatrix{A_i\ar[r]^{h_i}\ar[d]_{f_i}&B_i\ar[d]^{d_i}\\C\ar[r]_{h}&D}\; \; \,\; \begin{array} {l}\\ \\ \\(i\in I) \,\end{array}\end{equation} is obtained by first considering pushouts $h_i^\m$ of $h_i$ along $f_i$ and then forming a wide pushout $h$ of all $h_i^\m\, (i\in I)$. Consequently, the connecting morphisms of $D_A$ are formed by repeating one of the following steps: a finite wide pushout of morphisms in $\overline{\mathcal{H}}$, a composition of morphisms in $\overline{\mathcal{H}}$, and a {pushout } of a {morphism} in $\overline{\mathcal{H}}$ along a petty {morphism}. Since $\overline{\mathcal{H}}$ is closed, by \ref{a4.2}, under the latter two steps, it is also closed under the first one; see the construction of a finite wide pushout described in Example \ref{2.7}.
}\end{sub}
\begin{sub}\label{3.4}\label{a4.7}{\em {\lem} }For every object $A_M$ of the diagram $D_A$ and every morphism $h:A_M\rightarrow B$ of
$\overline{\mathcal{H}}$ there exists a connecting morphism $a_{M,\, N}:A_M\rightarrow A_N$ of $D_A$ which factors through $h$.
{\em {\pf} We have $M=M_1\dots M_k$ and $h=h_{A_M}(t)$ for some $t\in T$. Put $$N=M_1\dots M_k\{t\}.$$ Then for $K=M$ the definition of $d_{N}^K(t)$ (see \eqref{5.1a}) gives the following commutative diagram: $$\xymatrix{A_{M}\ar[rr]^{h_{A_{M}}(t)}\ar[d]_{id}&&A_{M}(t)\ar[d]^{d_N^K(t)}\\ A_{M}\ar[rr]_{a_{M,N}}&&A_{N}}$$ Consequently, $$a_{M,N}=d_N^K(t)\cdot h_{A_M}(t)$$ as required. {
}}\end{sub}
\begin{sub}\label{3.5}\label{a4.8}{\em {\prop} } Let $\ch$ be a set of petty morphisms with $\overline{\mathcal{H}} \subseteq \cm$. Then for every object $A \in \mathcal{A}_\omega$ a colimit $\gamma_M:A_M\rightarrow \hat{A}\; (M\in\mathcal{D})$ of the diagram $D_A$ yields a weak reflection
of $A$ in Inj$\, \mathcal{H}$ via
$$r_A=\gamma_{\varepsilon}:A\rightarrow \hat{A}.$$
{\em {\pf} (1) $\hat{A}$ is injective w.r.t. $\ch$: We want to prove that, given $h \in\mathcal{H}$ and $f$ as follows, $$\xymatrix{B\ar[r]^h\ar[d]_f&C\\\hat{A}}$$ $f$ factors through $h$. Firstly, since $\hat{A}=\mbox{colim}D_A$ is a directed colimit of $\overline{\mathcal{H}}$-morphisms (see \ref{a4.6}) with $\overline{\mathcal{H}} \subseteq \cm$, and $B$ has finite $\cm$-rank (because $B\in \mathcal{A}_\omega$),
it follows that hom($B,-$) preserves the colimit of $D_A$. Thus, there exists a colimit morphism $\gamma_N:A_N\rightarrow \hat{A}$ through which $f$ factors, $f=\gamma_N\cdot f^\m$. $$\xymatrix{B\ar[r]^h\ar[d]_f\ar[dr]_{f^\m}&C\ar[dr]^{f^{\m\m}}&\\ \hat{A}&A_N\ar[l]^{\gamma_N}\ar[d]^{a_{N,\, M}}\ar[r]^{h^\m}&A_N(t)\ar[dl]^{h^{\m\m}}\\ & A_M\ar[ul]^{\gamma_M}&}$$ By pushing $h\in \mathcal{H}$ out along $f^\m$ we obtain a morphism $h^\m \in \overline{\mathcal{H}}$. Then by \ref{3.4} there exists $M\geq N$ such that $a_{N,M}=h^{\m\m}\cdot h^\m$ for some $h^{\m\m}:A_N(t)\rightarrow A_M$. The above commutative diagram proves that $f$ factors through $h$.
(2) Let $B$ be injective w.r.t. $\mathcal{H}$. For every morphism $f:A\rightarrow B$ we define a compatible cocone $f_M:A_M\rightarrow B$ of the diagram $D_A$ by induction on $$k=\mbox{the length of the word }M$$ such that $f_{\varepsilon}=f$. Then the desired factorization of $f$ is obtained via the (unique) factorization $g:\hat{A}\rightarrow B$ with $g\cdot \gamma_M=f_M$: in fact, $g\cdot r_A=f$.
For $k\mapsto k+1$, choose for every word $N$ of length $k$ and every $t\in T$ a morphism $f_N(t)$ forming a commutative triangle $$\xymatrix{A_N\ar[r]^{h_{A_N}(t)}\ar[d]_{f_N}&A_N(t)\ar[dl]^{f_N(t)}\\B&}$$ (recalling that $B$ is $\overline{\mathcal{H}}$-injective because it is $\mathcal{H}$-injective). Then for every word $M$ of length $k+1$ we have a unique factorization $f_M:A_M\rightarrow B$ making the following diagrams \begin{equation}\label{(3.5)}\xymatrix{A_{K}\ar[rr]^{h_{A_{K}}(t)}\ar[d]_{a_{K,\, M^{\star}}}&&A_{K}(t)\ar[d]_{d_{M}^K(t)} \ar[ddr]^{f_K(t)}&\\ A_{M^\star}\ar[drrr]_{f_{M^\star}}\ar[rr]^{a_{M^\star,M}}&& A_M\ar[dr]_<<<{f_M}&\\&&&B}\end{equation} commutative for all $K\leq M^\star$ and $t\in M_{k+1}$.
{\color{black} Let us verify the compatibility \begin{equation}\label{(3.7)}f_M=f_N\cdot a_{M,\, N}\qquad \mbox{ for all $M\leq N$ in $\mathcal{D}$.}\end{equation} The last diagram yields $f_{M^\star}=f_M\cdot a_{M^\star,\, M}$. Therefore, it is sufficient to prove \eqref{(3.7)} for words $M$ and $N$ of the same length $k+1$. In order to do that, we will show that \begin{equation}\label{*}f_M\cdot d^K_{M}(t)=f_N\cdot a_{M,N}\cdot d^K_{M}(t),\, \mbox{ for all $K\leq M^\star$ and $t\in M_{k+1}$,} \end{equation} and \begin{equation}\label{**}f_M\cdot a_{M^\star,M}=f_N\cdot a_{M,N}\cdot a_{M^\star,M}. \end{equation} Concerning \eqref{*}, we have $$\begin{array}{ll}f_M\cdot d^K_{M}(t)&=f_K(t)\\ \\&=f_N\cdot d^K_N(t), \; \mbox{by replacing $M$ by $N$ in \eqref{(3.5)}}\\ \\&=f_N\cdot a_{M,N}\cdot d^K_M(t),\; \mbox{by \eqref{5.1b}.}\end{array}$$ As for \eqref{**}, we have $$\begin{array}{ll}f_M\cdot a_{M^\star,M}&=f_{M^\star}\\ \\&=f_{N^\star}\cdot a_{M^\star,N^\star}\\ \\ &=f_N\cdot a_{N^\star,N}\cdot a_{M^\star, N^\star}, \; \mbox{by replacing $M$ by $N$ in \eqref{(3.5)}}\\ \\ &=f_N\cdot a_{M,N}\cdot a_{M^\star, M}. \end{array}$$
}}\end{sub}
\begin{sub} \label{a4.9} {\em \textbf{Convention} Generalizing the above construction from $\omega$ to any infinite cardinal $k$, we call the morphisms of $\mathcal{A}_k$ \textit{$k$-petty}. Let us now denote by $$\overline{\mathcal{H}}_k$$the closure of $\ch$ under $k$-composition (\ref{a2.9}) and pushout in $\mathcal{A}_k$. Following \ref{ver}, $\overline{\mathcal{H}}_k$ is closed under $k$-wide pushout. We again assume that a set $T$ is given such that, for every object $B\in \mathcal{A}_k$ we have an indexing $h_B(t):B\rightarrow B(t)$, $t\in T$ of all morphisms of $\overline{\mathcal{H}}_k$ with domain $B$.}\end{sub}
\begin{sub} \label{a4.10} {\em \textbf{Diagram $\mathbf{D_A}$} The poset $\cd$ of \ref{a4.3} is generalized to a poset $\cd_k$: Let $\mathcal{P}_kT$ be the poset of all subsets of $T$ of cardinality $<k$. The elements of $\mathcal{D}_k$ are all functions $$M:\lambda \rightarrow \mathcal{P}_k T $$ where $\lambda<k$ is an ordinal, including the case $\varepsilon:0\rightarrow \mathcal{P}_k T$. The ordering is as follows: for $N: \lambda^{\m}\rightarrow \mathcal{P}_k T$ put $$M\leq N \; \; \mbox{ iff }\; \; \lambda\leq \lambda^{\m} \; \; \mbox{ and }\; \; M_i\subseteq N_i \, \mbox{ for all $i<\lambda$}.$$ We define, for every $A\in \mathcal{A}_k$, the diagram $D_A:\mathcal{D}_k \rightarrow \mathcal{A}$. The objects $D_A(M)=A_M$ and the connecting morphisms $a_{M,N}:A_M\rightarrow A_N$ ($M\leq N$) are defined by transfinite induction on $\lambda<k$. For $\lambda =0$ we have $A_{\varepsilon}=A$.
The isolated step is precisely as in \ref{a4.3}, where for $M:\lambda+1\rightarrow \mathcal{P}_k T$ we denote by $M^\star:\lambda\rightarrow \mathcal{P}_k T$ the domain-restriction. The limit steps are defined via colimits of smooth chains, see \ref{a2.9}: if $\lambda<k$ is a limit ordinal and $M:\lambda\rightarrow \mathcal{P}_k T$ is given, then $A_M$ is a colimit of the chain $A_{M/i}\; (i<\lambda)$, where $M/i$ is the domain restriction of $M$ to $i$, with the connecting morphisms $a_{M/i,\, M/j}:A_{M/i}\rightarrow A_{M/j}$ for all $i\leq j<\lambda$. The proof that these chains are smooth is an easy transfinite induction.
It is also easy to see that all the above results hold: $\hat{A}=\mbox{colim} D_A$ is an $\mathcal{H}$-injective weak reflection of $A$, and all connecting morphisms of $D_A$ are members of $\overline{\mathcal{H}}_k$. Consequently, the proof of the following proposition is analogous to that of \ref{a4.8}:}\end{sub}
\begin{sub} \label{5.9} {\em {\prop}} Let $\ch$ be a set of $k$-petty morphisms with $\overline{\ch_k}\subseteq \cm$. Then for every object $A\in \mathcal{A}_k$ a colimit $\gamma_M:A_M\rightarrow \hat{A}$ of $D_A$ yields a weak reflection of $A$ in Inj$\ch$ via $r_A=\gamma_\varepsilon:A\rightarrow\hat{A}$.\end{sub}
\section{Completeness in strongly locally ranked categories}
\begin{sub} \label{a5.1} {\em \textbf{Assumption} Throughout this section $\ca$ denotes a strongly locally ranked category. We first prove the completeness of the finitary logic. Recall that the finitary morphisms are those whose domain and codomain are of finite $\cm$-rank. Let us remark that whenever the class $\cm$ is closed under pushout, then the method of proof of Theorem \ref{b3.6} applies again. However, this excludes examples such as $\mathbf{Haus}$ (where strong monomorphisms are not closed under pushout). }\end{sub}
\begin{sub} \label{a5.2} {\em {\theo}} The Finitary Injectivity Logic is complete in every strongly locally ranked category. That is, given a set $\ch$ of finitary morphisms, every finitary morphism $h$ which is an injectivity consequence of $\ch$ is a formal consequence (in the deduction system of \ref{2.4}). Shortly: $\ch \models h$ implies $\ch \vdash h$. \end{sub}
\begin{sub} \label{6.2a} {\em {\rem} We do not need the full strength of weak local presentation for this result. We are going to prove the completeness under the following milder assumptions on $\mathcal{A}$: \begin{enumerate} \item[(i)] $\mathcal{A}$ is cocomplete and has a left-proper factorization system $(\ce,\, \cm)$; \item[(ii)] $\mathcal{A}_\omega$ is a set of objects of finite $\cm$-rank, closed under finite colimits and $\ce$-quotients; \item[(iii)] $\cm$ is closed under filtered colimits in $\mathcal{A}$ (see \ref{a3.4} (iv)). \end{enumerate} The statement we prove is, then, concerned with petty morphisms (see \ref{a4.2}). We show that for every set $\ch$ of petty morphisms we have $$\ch \models h \, \; \mbox{implies} \, \; \ch \vdash h \; \; \mbox{(for all $h$ petty).}$$ The choice of $\mathcal{A}_\omega$ as a set of representatives of all objects of finite $\cm$-rank yields the statement of the theorem.}\end{sub}
\hspace*{-\parindent} {\textbf {Proof of \ref{a5.2} and \ref{6.2a}}} Let $\ch$ be a set of petty morphisms, and let $$\overline{\mathcal{H}}$$
denote the closure of $\ch$ as in \ref{a4.2}.
(1) We first prove that the theorem holds whenever $\overline{\mathcal{H}} \subseteq \cm$. Moreover, we will show that
for every petty injectivity consequence $\ch \models h$ we have a
formal proof of
$h$ from assumptions in $\ch$ such that the use of {\sc pushout } is always restricted to pushing out along petty morphisms.
To prove
this,
consider, for the given petty injectivity consequence $h:A\rightarrow B$ of $\ch$, the weak reflection $r_A:A\rightarrow \hat{A}$ in Inj$\,\ch$ of
\ref{a4.8}. The object $\hat{A}$ is injective w.r.t. $h$, thus $r_A$ factors through $h$ via some $f:B\rightarrow \hat{A}$: $$\xymatrix{A\ar[rrr]^h\ar[ddd]_{r_A}&&&B\ar[ddd]^g\ar[dddlll]_{f}\\ \\ &&A_M\ar@{.>}[dll]^{\gamma_M}&\\ \hat{A}&&&A_N\ar[lll]^{\gamma_N}\ar@{.>}[ul]^{a_{N,\, M}}}$$ Since $B\in \mathcal{A}_\omega$, it has finite $\cm$-rank, and \ref{a4.6} implies that hom($B,-$) preserves the colimit $\hat{A}=\mbox{colim} D_A$. Then $f$ factors through one of the colimit morphisms $\gamma_N:A_N\rightarrow \hat{A}$: $$f=\gamma_N\cdot g\, \; \mbox{ for some $g:B\rightarrow A_N$}.$$ We know that $r_A=\gamma_{\varepsilon}$ is the composite of the connecting morphism $a_{\varepsilon,\, N}:A\rightarrow A_N$ of $D_A$ and $\gamma_N$, therefore, $$\gamma_N\cdot a_{\varepsilon, \, N}=r_A=\gamma_N\cdot g\cdot h.$$ That is, the colimit morphism $\gamma_N$ merges the parallel pair $a_{\varepsilon,\, N},\, g\cdot h:A\rightarrow A_N$. Now the domain $A$ has finite $\cm$-rank, thus hom($A,-$) also preserves $\hat{A}=\mbox{colim}D_A$. Consequently, by (ii) in \ref{b3.1}(b) the parallel pair is also merged by some connecting morphism $a_{N,\, M}:A_N\rightarrow A_M$ of $D_A$: $$a_{N,\, M}\cdot a_{\varepsilon,\, N}=a_{N,\, M}\cdot g\cdot h:A\rightarrow A_M.$$ The left-hand side is simply $a_{\varepsilon,\, M}$, and this is a morphism of $\overline{\mathcal{H}}$, see Lemma \ref{a4.6}. Recall that the definition of $\overline{\mathcal{H}}$ implies that every morphism in $\overline{\mathcal{H}}$ can be proved from $\mathcal{H}$ using Finitary Injectivity Logic in which {\sc pushout } is only applied to pushing out along petty morphisms. Thus, we have a proof of the right-hand side $a_{N,\, M}\cdot g\cdot h$. The last step is deriving $h$ from this by {\sc cancellation}.
(2) Assuming $\ch \subseteq \ce$, we prove that Inj$\,\ch$ is a reflective subcategory of $\mathcal{A}$, and that for every object $A\in \mathcal{A}_\omega$ the reflection map $r_A:A\rightarrow \hat{A}$ is a formal consequence of $\ch$ lying in $\ce$: $$\ch \vdash r_A \; \; \mbox{and}\; \; r_A \in \ce.$$ In fact, from $\ch\subseteq \ce$ it follows that $\overline{\ch}\subseteq {\ce}$ (since $\ce$ is closed under composition and pushout). Since $A$ has only finitely many $\ce$-quotients, see \ref{a3.2}, we can form a finite wide pushout, $r_A:A\rightarrow \hat{A}$, of all $\ce$-quotients of $A$ lying in $\overline{\ch}$. Clearly, $\ch \vdash r_A$, in fact, $r_A \in \overline{\ch}$.
\hspace*{-\parindent} The object $\hat{A}$ is injective w.r.t. $\ch$: given $h:P\rightarrow P^\m$ in $\ch$ and $f:P\rightarrow \hat{A}$, form a pushout $h^\m$ of $h$ along $f$. This is an $\ce$-quotient in $\overline{\ch}$, then the same is true for $h^\m\cdot r_A$. Consequently, $r_A$ factors through $h^\m \cdot r_A$, and the factorization, $i:B\rightarrow \hat{A}$, is an epimorphism split by $h^\m$, thus, $f=i\cdot g\cdot h$:
$$\xymatrix{&P\ar[r]^{h}\ar[d]_{f}&P^\m\ar[d]^g\\ A\ar[r]^{r_A}&\hat{A} \ar@<.8ex>[r]^{h^\m}&B\ar@<.8ex>[l]^{i}}$$
\hspace*{-\parindent} {\color{black} The morphism} $r_A$ is a weak reflection: given a {morphism} $u$ from $A$ to an object $C$ of Inj$\,\ch$, then $u$ factors through $r_A$ because $C$ is injective w.r.t. $\overline{\mathcal{H}}$ and $r_A\in \overline{\ch}$.
(3) Let $\ch$ be arbitrary. We begin our proof by defining an increasing sequence of sets $\ce_i \subseteq \ce$ of petty morphisms ($i\in Ord$). For every member $f:A\rightarrow B$ of $\overline{\ch}$ we denote by $f_i$ a reflection of $f$ in Inj$\, \ce_i$: $$\xymatrix{A\ar[r]^f\ar[d]_{r_A}&B\ar[d]^{r_B}\\ \hat{A}\ar[r]_{f_i}&\hat{B}}$$
\textit{First step}: $\ce_0=\{\id_A; \, A\in \mathcal{A}_\omega\}$. Here Inj$\, \ce_0=\mathcal{A}$, thus $f_0=f$.
\textit{Isolated step}: For each $f\in \overline{\ch}$, let $f_i=f_i^{\m\m}\cdot f_i^\m$ be the $(\ce,\, \cm)$-factorization of the reflection $f_i$ of $f$ in Inj$\, \ce_i$, and put $$\ce_{i+1}=\ce_i\cup\{ f_i^\m;\, f\in \overline{\ch}\}.$$
\textit{Limit step}: $\ce_j=\cup_{i<j} \ce_i$ for limit ordinals $j$.
We prove that for every ordinal $i$ we have \begin{equation}\label{fi'}\ch \vdash f^\m_i\; \; \; \mbox{for every } f\in \overline{\ch}\end{equation} and \begin{equation}\label{(*)}\mbox{Inj}\, \ch=\mbox{Inj}\, \ce_i\cap\mbox{Inj}\{f_i\}_{f\in \overline{\ch}}.\end{equation}
{\color{black} For $i=0$, \eqref{fi'} and \eqref{(*)} are trivial (use {\sc cancellation} for \eqref{fi'} and {\sc identity} for \eqref{(*)}). Given $i>0$, assuming that $\ch\vdash f^\m_j$ for all $j<i$ and all $f:A\rightarrow B$ in $\overline{\mathcal{H}}$, that is, $\ch \vdash \ce_i$, we have, by (2), that \begin{equation}\label{rB}\ch \vdash r_B\end{equation} where $r_B$ is the reflection of $B$ in Inj$\, \ce_i$.} Thus, $\ch \vdash f_i\cdot r_A$. Moreover, $r_A$ is an epimorphism, therefore the following square $$\xymatrix{A\ar[r]^{r_A}\ar[d]_{r_A}&\hat{A}\ar[r]^{f_i}&\hat{B}\ar[d]^{\id}\\ \hat{A}\ar[rr]^{f_i}&&\hat{B}}$$ is a pushout, which proves $\ch \vdash f_i$ (via {\sc pushout}). $\ch \vdash f^\m_i$ then follows by {\sc cancellation}.
To prove \eqref{(*)}, observe that {\color{black} \eqref{fi'} implies
Inj$\,\ch\subseteq \mbox{Inj}\, \ce_i$}, and our previous argument yields
Inj$\,\ch \subseteq \mbox{Inj}\,\{f_i\}_{f\in \overline{\ch}}$. Thus, it remains to
prove the reverse inclusion: every object $X$ injective w.r.t.
$\ce_i\cup \{f_i\}_{f\in \overline{\ch}} $ is injective w.r.t. $\ch$. In fact, given $f:A\rightarrow B$ in $\ch$ and a {morphism} $u:A\rightarrow X$, then since $X\in \mbox{Inj}\, \ce_i$ we have a factorization $u=v\cdot r_A$, and then the injectivity of $X$ w.r.t. $f_i$ yields the desired factorization of $u$ through $f$. $$\xymatrix{A\ar[rr]^f \ar[dd]_{r_A}\ar[dr]^u&&B\ar[dd]^{r_B}\\ &X&&\\ \hat{A}\ar[ur]^v\ar[rr]_{f_i}&& \hat{B}\ar@{-->}[ul]\\}$$
(4) Since $\mathcal{A}_\omega$ is a small category, there exists an ordinal $j$ with $$\ce_j=\ce_{j+1}.$$ We want to apply (1) to the category $$\mathcal{A}^\m=\mbox{Inj}\,\ce_j,$$ and the set $$\mathcal{A}^\m_\omega=\mathcal{A}_\omega\cap \mbox{obj}\mathcal{A}^\m.$$ Let us verify that $\mathcal{A}^\m$ satisfies the assumptions (i) -- (iii) of Remark \ref{6.2a} w.r.t. $$\ce^\m=\ce\cap \mbox{mor}\mathcal{A}^\m\; \; \mbox{ and } \; \; \cm^\m =\cm \cap \mbox{mor}\mathcal{A}^\m.$$
Ad(i): $\mathcal{A}^\m$ is cocomplete because it is reflective in $\mathcal{A}$. Moreover, since the reflection maps lie in $\ce$, it follows that $(\ce^\m,\, \cm^\m)$ is a factorization system: in fact, $\mathcal{A}^\m$ is closed under factorization in $\mathcal{A}$. Since $\ce \subseteq \mbox{Epi}(\mathcal{A})$, we have $\ce^\m \subseteq \mbox{Epi} (\mathcal{A}^\m)$.
Ad(iii): It is sufficient to prove that $\mathcal{A}^\m$ is closed under filtered colimits of $\cm^\m$-morphisms in $\mathcal{A}$. In fact, let $D$ be a filtered diagram in $\mathcal{A}^\m$ with connecting morphisms in $\cm$, and let $c_t:C_t\rightarrow C \; (t\in T)$ be a colimit of $D$ in $\mathcal{A}$. Then $C\in \mathcal{A}^\m$, i.e., $C$ is injective w.r.t. $f_j:\hat{A}\rightarrow E$ for every $f\in \overline{\ch}$. This follows from $\hat{A}$ having finite $\cm$-rank (because $A\in \mathcal{A}_\omega$ implies $\hat{A}\in \mathcal{A}_\omega$ due to the fact that $r_A:A\rightarrow \hat{A}$ is an $\ce$-quotient): since hom($\hat{A},-$) preserves the colimit of $D$, every morphism $u:\hat{A}\rightarrow C$ factors through some of the colimit morphisms: $$\xymatrix{&\hat{A}\ar@{-->}[ld]_{v}\ar[r]^{f_j}\ar[d]^u&E\\C_t\ar[r]_{c_t}&C& }$$ Since $C_t\in \mathcal{A}^\m$ is injective w.r.t. $f_j$, we have a factorization of $v$ through $f_j$, and therefore, $u$ also factors through $f_j$. This proves $C\in \mathcal{A}^\m$.
Ad(ii): Due to the above, every object of $\mathcal{A}^\m$ having a finite $\cm$-rank in $\mathcal{A}$ has a finite $\cm^\m$-rank in $\mathcal{A}^\m$. Also, a finite colimit of objects of $\mathcal{A}^\m$ in $\mathcal{A}^\m$ is a reflection (thus, an $\ce$-quotient) of the corresponding finite colimit in $\mathcal{A}$. Thus, it lies in $\mathcal{A}^\m_\omega$.
Next we claim that the set $\ch^\m=\{ f_j;\, f\in \overline{\mathcal{H}} \}$ fulfils $$\ch^\m \subseteq \cm^\m$$
and $\ch^\m$ is closed under petty identities, composition, and pushouts along petty morphisms. In fact, in the above $(\ce,\, \cm)$-factorization of $f_j$: $$\xymatrix{A\ar[d]_{r_A}\ar[rr]^f&&B\ar[d]^{r_B}\\ \hat{A}\ar[rr]_{f_j}\ar[rd]_{f^\m_j}&&\hat{B}\\&D\ar[ur]_{f^{\m\m}_j} }$$ we know that $f_j^\m$ lies in $\ce_{j+1}=\ce_j$ and $\hat{A}$ is injective w.r.t. $\ce_j$, thus, $f^\m_j$ is a split mono{morphism} (as well as an epimorphism, since $\ce \subseteq \mbox{Epi}(\mathcal{A})$). Thus, $f^\m_j$ is an isomorphism, which implies $f_j\in \cm$. $\ch^\m$ contains $\id_A$ for every $A\in \mathcal{A}^\m_\omega$ because $\overline{\mathcal{H}}$ contains it; $\ch^\m$ is closed under composition because $\overline{\mathcal{H}}$ is (and $f\mapsto f_j$ is the action of the reflector functor from $\mathcal{A}$ to Inj$\, \ce_j$). Finally, $\ch^\m$ is closed under pushout along petty morphisms. In fact, to form a pushout of $f_j:\hat{A}\rightarrow \hat{B}$ along $u:\hat{A}\rightarrow C$ in $\mathcal{A}^\m=\mbox{Inj}\, \ce_j$, we form a pushout, $g$, of $f$ along $u\cdot r_A$ in $\mathcal{A}$, and compose it with the reflection map $r_D$ of the codomain $D$: $$\xymatrix{&&A\ar[r]^f\ar[d]_{r_A}&B\ar[d]^{r_B}\ar[dddrr]^v&&\\ &&\hat{A}\ar[r]^{f_j}\ar[ldld]_u&\hat{B}\ar[dr]_{\hat{v}}&&\\ &&&&\hat{D}&\\ C\ar[rrrrr]_g\ar[urrrr]^{\hat{g}}&&&&&D\ar[lu]^{r_D}}$$ Since $C$ lies in $\mathcal{A}^\m$, we can assume $r_C=\id_C$, and the reflection $\hat{g}=r_D\cdot g$ of $g$ in $\mathcal{A}^\m$ is then a pushout of $f_j$ along $u$. Now $f\in \overline{\mathcal{H}}$ implies $g\in \overline{\mathcal{H}}$, and we have $\hat{g}=g_j \in \ch^\m$.
(5) We are ready to prove that if a petty morphism $h:A\rightarrow B$ is an injectivity consequence of $\ch$, then $\ch\vdash h$ in $\mathcal{A}$. We write $\ch\vdash_\mathcal{A} h$ for the latter since we work within two categories: when we apply (1) to $\mathcal{A}^\m$ we use $\vdash_{\mathcal{A}^\m}$ for formal consequence in $\mathcal{A}^\m$. Analogously with $\models_{\mathcal{A}}$ and $\models_{\mathcal{A}^\m}$. Let $\hat{h}:\hat{A} \rightarrow \hat{B}$ be a reflection of $h$ in $\mathcal{A}^\m$, then $$\ch^\m\models_{\mathcal{A}^\m} \hat{h}$$ because every object $C\in \mathcal{A}^\m=\mbox{Inj}\,\ce_j$ which is injective w.r.t. $\ch^\m=\{f_j\}_{f\in \overline{\mathcal{H}}}$ is, due to \eqref{(*)}, injective w.r.t. $\ch$ in $\mathcal{A}$. Then $C$ is injective w.r.t. $h$, and from $C\in \mathcal{A}^\m$ it follows easily that $C$ is injective w.r.t. $\hat{h}$. Due to (4) we can apply (1). Therefore, $$\ch^\m\vdash_{\mathcal{A}^\m} \hat{h}.$$ We thus have a proof of $\hat{h}$ from $\ch^\m$ in $\mathcal{A}^\m$. We modify it to obtain a proof of $h$ from $\ch$ in $\mathcal{A}$. We have no problems with a line of the given proof that uses one of the assumptions $f_j\in \ch^\m$: we know from \eqref{fi'} that $\ch\vdash_\mathcal{A} f_j$, and we substitute that line with a formal proof of $f_j$ in $\mathcal{A}$. No problem is, of course, caused by the lines using {\sc composition} or {\sc cancellation}. But we need to modify the lines using {\sc pushout} because $\mathcal{A}^\m$ is not closed under pushout in $\mathcal{A}$. However, a pushout, $g^{\m\m}$, of a {morphism} $g$ along a petty {morphism} $u$ in $\mathcal{A}^\m$ $$\xymatrix{P\ar[r]^g\ar[d]_u&Q\ar[d]&\\ P^\m\ar[r]^{g^{\m\m}}\ar[drr]_{g^\m}&\\&&Q^\m\ar[ul]_{r_{Q^\m}}}$$ is obtained from a pushout, $g^\m$, of $g$ along $u$ in $\mathcal{A}$ by composing it with a reflection map $r_{Q^\m}$ of the pushout codomain. Recall that $P,\, P^\m,\, Q \in \mathcal{A}_\omega$ imply $Q^\m\in \mathcal{A}_\omega$. 
Thus, we can replace the line $g^{\m\m}$ of the given proof by using {\sc pushout} in $\mathcal{A}$ (deriving $g^\m$), followed by a proof of $r_{Q^\m}$ (recall from \eqref{rB} that $\ch\vdash_\mathcal{A} r_{Q^\m}$) and an application of {\sc composition}. We thus proved that $$\ch \vdash_\mathcal{A} \hat{h}.$$ Since $r_B\cdot h=\hat{h}\cdot r_A$ and $\ch\vdash_\mathcal{A} r_A$ (see \eqref{rB}), we conclude $\ch \vdash_\mathcal{A} \hat{h}\cdot r_A$; by {\sc cancellation} then $\ch \vdash_\mathcal{A} h$.
\begin{sub} \label{compactness}{\em {\cor} (Compactness Theorem)} Let $\ch$ be a set of finitary morphisms in a strongly locally ranked category. Every finitary morphism which is an injectivity consequence of $\ch$ is an injectivity consequence of a finite subset of $\ch$. \end{sub}
\begin{sub} {\em {\rem} We proceed by generalizing the completeness result
from finitary to $k$-ary, where $k$ is an arbitrary infinite cardinal. The \textit{$k$-ary logic}, then, deals with $k$-ary morphisms (i.e., those having both domain and codomain of $\cm$-rank $k$) and the $k$-ary Injectivity Deduction System of \ref{ver}. }\end{sub}
\begin{sub} \label{a6.2} {\em \theo} The $k$-ary Injectivity Logic is complete in every strongly locally ranked category. That is, given a set $\ch$ of $k$-ary morphisms, every $k$-ary morphism which is an injectivity consequence of $\ch$ is a formal consequence (in the $k$-ary Injectivity Deduction System).
{\em {\pf} The whole proof is completely analogous to that of Theorem \ref{a5.2}. As described in Remark \ref{6.2a} we work under the following milder assumptions on the category $\mathcal{A}$: \begin{enumerate}
\item[(i)] $\mathcal{A}$ is cocomplete and has a left-proper factorization
system $(\ce,\, \cm)$;
\item[(ii)] $\mathcal{A}_k$ is a set of objects of $\cm$-rank $k$,
closed under colimits of less than $k$ morphisms and under $\ce$-quotients;
\item[(iii)] $\cm$ is closed under $k$-filtered colimits in
$\mathcal{A}$. \end{enumerate} The statement we prove is concerned with $k$-petty morphisms (see \ref{a4.9}). We denote by $\overline{\mathcal{H}}_k$ the closure of {\color{black} $\ch$ as in \ref{a4.9}.} We write $\ch \vdash h$ for the $k$-ary Injectivity Logic.
(1) The theorem holds whenever $\overline{\mathcal{H}}_k\subseteq \cm$. The proof, based on the construction of a weak reflection $\hat{A}=\mbox{colim}D_A$ of \ref{a4.10}, is completely analogous to that of (1) in \ref{a5.2}.
(2) Assuming $\ch\subseteq \ce$, Inj$\,\ch$ is a reflective subcategory, and the reflection maps $r_A$ fulfil $\ch\vdash r_A$ and $r_A \in \ce$. This is analogous to the proof of (2) of \ref{a5.2}.
(3) The definition of $\ce_i$ is precisely as in the proof of \ref{a5.2}.
(4) For the first ordinal $j$ with $\ce_j=\ce_{j+1}$ the category $\mathcal{A}^\m=\mbox{Inj}\, \ce_j$ fulfils the assumptions (i)-(iii) above, and the set $\ch^\m=\{ f_j;\, f\in \overline{\mathcal{H}}\}$ fulfils $\ch^\m=\overline{\ch^\m} \subseteq \cm$.
(5) The theorem is then proved by applying (1) to $\mathcal{A}^\m$ and $\ch^\m$: we get $\ch^\m \vdash \hat{h}$ in ${\mathcal{A}^\m}$ and we derive $\ch\vdash h$ in $\mathcal{A}$ precisely as in the proof of \ref{a5.2}.}\end{sub}
\begin{sub} \label{a6.3}{\em {\cor} } The Injectivity Logic is sound and complete. That is, given a set $\ch$ of morphisms of a strongly locally ranked category, the consequences of $\ch$ are precisely the formal consequences of $\ch$ (in the Injectivity Deduction System). Shortly: $$\ch \models h \; \; \mbox{ iff } \; \; \ch \vdash h\; \; \; \mbox{(for all morphisms $h$)}$$\end{sub}
In fact, soundness was proved in Section 2. Completeness follows from Theorem \ref{a6.2}: since $\ch$ is a set, and since every object of $\mathcal{A}$ has an $\cm$-rank, see \ref{a3.4}(ii), there exists $k$ such that all domains and codomains of morphisms of $\ch \cup \{ h\}$ have $\cm$-rank $k$.
\section{Counterexamples}
\begin{sub} \label{a7.1} {\em {\exa} In ``nice'' categories which are not strongly locally ranked the completeness theorem can fail. Here we refer to $\vdash$ of the Deduction System \ref{aa2.11} (and the logic concerning arbitrary morphisms). We denote by $$\mathbf{CPO(1)}$$ the category of unary algebras defined on $CPO$'s. Recall that a $CPO$ is a poset with directed joins, and the corresponding category, $\mathbf{CPO}$, has as morphisms the \textit{continuous functions} (i.e., those preserving directed joins). The category $\mathbf{CPO(1)}$ has as objects the triples $(A,\sqsubseteq,\alpha)$ where $(A, \sqsubseteq)$ is a $CPO$ and $\alpha:A\rightarrow A$ is a unary operation. Morphisms are the continuous algebra homomorphisms.
First let us observe that the assumption of cocompleteness is fulfilled.} \end{sub}
\hspace*{-\parindent} {\lem} \textit{$\mathbf{CPO(1)}$ is cocomplete.}
{ {\pf} The category $\mathbf{CPO}$ is easily seen to be cocomplete. The category $\mathbf{CPO(1)^*}$ of partial unary algebras on $CPO$'s (defined as above except that we allow $\alpha:A^\m\rightarrow A$ for any $A^\m\subseteq A$) is monotopological over $\mathbf{CPO}$, see \cite{AHS}, since for every monosource \newline{$f_i:(A,\sqsubseteq)\rightarrow$} $(A_i,\sqsubseteq_i,\alpha_i)\; (i\in I)$ we define a partial operation $\alpha$ on $A$ at an element $x \in A$ iff $\alpha_i$ is defined at $f_i(x)$ for every $i$, and then $$\alpha x=y\; \mbox{ iff } \; f_i(y)=\alpha_i(f_i(x))\; \mbox{ for all $i\in I$.}$$ Consequently, $\mathbf{CPO(1)^*}$ is cocomplete by \cite{AHS}, 21.42 and 21.15. Further, $\mathbf{CPO(1)}$ is a full reflective subcategory of $\mathbf{CPO(1)^*}$: form a free unary algebra on the given partial unary algebra, ignoring the ordering, and then extend the ordering trivially (i.e., the new\ elements are pairwise incomparable, and incomparable with any of the original elements). Thus, $\mathbf{CPO(1)}$ is cocomplete.
\vspace*{2mm}
We will find morphisms $h_1,\, h_2$ and $k$ of $\mathbf{CPO(1)}$ with $$\{h_1,\, h_2\}\models k\; \; \mbox{ but }\; \; \{h_1,\, h_2\} \not\vdash k.$$
(i) We define a {morphism} {$h_1$} that expresses, by injectivity, the condition
\hspace*{-\parindent} \begin{tabular}{lp{15cm}}(h1)\centerline{$x\sqsubseteq\alpha x \; \; \mbox{ for all $x\in A.$}$}\end{tabular} \newline Let $=$ denote the discrete order on the set $\mathbf{N}$ of natural numbers, and $\sqsubseteq$ that order enlarged by $0\sqsubseteq 1$. Let $s:\mathbf{N}\rightarrow \mathbf{N}$ be the successor operation. Then $$h_1=\id:(\mathbf{N},=,s)\rightarrow (\mathbf{N},\sqsubseteq, s)$$ is a {morphism} such that an algebra is injective w.r.t. $h_1$ iff it fulfils (h1) above.
(ii) The condition
\hspace*{-\parindent} \begin{tabular}{lp{15cm}}(h2)\centerline{$A\not=\emptyset$}\end{tabular} is expressed by the injectivity w.r.t. $$h_2:\emptyset\rightarrow (\mathbf{N},=,s)$$ where $\emptyset$ is the empty (initial) algebra. The following {morphism} $k$
expresses the existence of a fixed point of $\alpha$: $$k:\emptyset\rightarrow 1$$ where 1 is a one-element (terminal) algebra. }
\vspace*{2mm}
\hspace*{-\parindent} {{\prop}} {\it $\{h_1,\, h_2\}\models k$ but $\{h_1,\, h_2\}\not\vdash k$.}
{\pf} To prove $\{h_1,\, h_2\}\models k$, let $(A, \sqsubseteq, \alpha)$ be injective w.r.t. $h_1$ and $h_2$, i.e., fulfil $x\sqsubseteq \alpha(x)$ and be nonempty. Define a smooth (see \ref{a2.9}) chain $(a_i)_{i\in Ord}$ in $(A,\sqsubseteq)$ by transfinite induction: $a_0\in A$ is any chosen element. Given $a_i$ put $a_{i+1}=\alpha(a_i)$; we know that $a_i\sqsubseteq a_{i+1}$. Limit steps are given by (directed) joins, $a_j=\bigsqcup_{i<j}a_i$. Since $A$ is small, there exists $i$ with $a_i=a_{i+1}$, that is, $a_i$ is a fixed point of $\alpha$. Thus, $A$ is injective w.r.t. $k$.
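For a finite algebra the transfinite chain above needs no limit steps: iterating $\alpha$ from any element already stabilizes. Here is a small Python sketch of this finite case (illustrative only; the names \texttt{alpha}, \texttt{a0} and \texttt{leq} are ours, not the paper's):

```python
def fixed_point(alpha, a0, leq):
    """Iterate a0, alpha(a0), alpha(alpha(a0)), ...  Under the injectivity
    condition (h1), x <= alpha(x), this chain is increasing; in a finite
    algebra it must therefore stabilize at a fixed point of alpha."""
    a = a0
    while alpha(a) != a:
        assert leq(a, alpha(a))  # condition (h1)
        a = alpha(a)
    return a

# three-element chain 0 <= 1 <= 2 with alpha(x) = min(x + 1, 2):
assert fixed_point(lambda x: min(x + 1, 2), 0, lambda x, y: x <= y) == 2
```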
To prove $\{h_1,\, h_2\}\not\vdash k$, it is sufficient to find an extension $\mathcal{K}$ of the category $\mathbf{CPO(1)}$ in which $\mathbf{CPO(1)}$ is closed under colimits (therefore $\vdash$ has the same meaning in $\mathbf{CPO(1)}$ and in $\mathcal{K}$) and in which there exists an object which is injective w.r.t. $h_1$ and $h_2$ but not w.r.t. $k$. Thus $k$ cannot be proved in $\mathcal{K}$ from $h_1,\, h_2$; consequently it cannot be proved in $\mathbf{CPO(1)}$ either.
We define $\mathcal{K}$ by adding a single new object $K$ to $\mathbf{CPO(1)}$. The only {morphism} with domain $K$ is $\id_K$. For every algebra $(A,\sqsubseteq,\alpha)$ of $\mathbf{CPO(1)}$ we call a function $f:A\rightarrow Ord$ a \textit{coloring} of $A$ provided that it is continuous and fulfils $f(\alpha(x))=f(x)+1$ for all $x\in A$.
\hspace*{-\parindent} The hom-object of $A$ and $K$ in $\mathcal{K}$ is defined to be the class of all colorings of $A$. The composition in $\mathcal{K}$ is defined ``naturally'': given a continuous homo{morphism}\newline $h:(A,\sqsubseteq,\alpha)\rightarrow (B,\leq, \beta)$, then for every coloring $f:B\rightarrow Ord$ of $B$ we have a coloring $f\cdot h:A\rightarrow Ord$ of $A$. The category $\mathbf{CPO(1)}$ is a full subcategory of $\mathcal{K}$ closed under (small) colimits. In fact, given a colimit cocone $a_i:A_i\rightarrow A$ $(i\in I)$ in $\mathbf{CPO(1)}$, then for every compatible cocone of colorings $f_i:A_i\rightarrow Ord$ $(i\in I)$ there exists an ordinal $j$ such that all ordinals in $\cup_{i\in I}f_i[A_i]$ are smaller than $j$. Let $B=(j^+,\leq,\overline{s})$ be the object of $\mathbf{CPO(1)}$ where $\leq$ is the usual linear ordering of $j^+$ (the poset of all ordinals smaller than or equal to $j$), and $\overline{s}$ is the successor map except $\overline{s}(j)=j$. Then the codomain restriction $f^\m_i$ of each $f_i$ defines a continuous homo{morphism} $f^\m_i:A_i\rightarrow B$, and we obtain a compatible cocone $(f_i^\m)_{i\in I}$ for our diagram. The unique continuous homo{morphism} $g:A\rightarrow B$ with $g\cdot a_i=f^\m_i$ yields, by composing it with the inclusion $j^+\hookrightarrow Ord$, a coloring $f:A\rightarrow Ord$ with $f\cdot a_i=f_i$ $(i\in I)$.
It is obvious that $K$ is injective w.r.t. $h_1$: every coloring of $(\mathbf{N},=,s)$ is also a coloring of $(\mathbf{N},\sqsubseteq, s)$. And $K$ is injective w.r.t. $h_2$ (because the inclusion $\mathbf{N}\hookrightarrow Ord$ is a coloring of $(\mathbf{N},=,s)$). But $K$ is not injective w.r.t. $k$, since $1$ has no coloring.
\begin{sub} \label{a7.3} {\em {\exa} None of the deduction rules of the Finitary Injectivity Deduction System can be left out. For each of them we present an example of a finite complete lattice $\mathcal{A}$ in which the reduced deduction system is not complete (for finitary morphisms).
(1) {\color{black} {\sc identity}} The deduction system consisting only of {\sc cancellation}, {\sc composition} and {\sc pushout } is not complete, because nothing can be derived from the empty set of assumptions, although $\emptyset \models \id_A$.
(2) {\sc cancellation} In the poset \begin{center}\begin{picture}(30,30)(0,0) \put(-40,20){$\mathcal{A}\, $:} \put(30,0){\line(0,1){20}} \put(27.1,-2){$\bullet\, 0$} \put(27.1,18){$\bullet\, 1$}
\put(30,20){\line(0,1){20}}
\put(27.1,36){$\bullet\, 2$}
\end{picture} \end{center} the only object injective w.r.t. $\{0\rightarrow 2\}$ is $2$; thus, we see that $\{0\rightarrow 2\}\models 0\rightarrow 1$. However, $0\rightarrow 1$ cannot be derived from $0\rightarrow 2$ by means of {\color{black} {\sc identity}}, {\sc composition} and {\sc pushout } because the set of all morphisms of $\mathcal{A}$ except $0\rightarrow 1$ is closed under composition and pushout.
(3) {\sc composition} In $\mathcal{A}$ above we clearly have $\{ 0\rightarrow 1,\, 1\rightarrow 2\} \models 0\rightarrow 2$. However, the set of all morphisms except $0\rightarrow 2$ is closed under left cancellation and pushout.
(4) {\sc pushout } In the poset \begin{center}\begin{picture}(45,45)(0,-5) \put(-40,20){$\; \; \; \;$} \put(12,20){\line(1,1){20}} \put(29,-2){$\bullet$} \put(30,-12){0}
\put(12,20){\line(1,-1){20}}
\put(29,36){$\bullet$}
\put(29,45){1}
\put(52,20){\line(-1,1){20}}
\put(52,20){\line(-1,-1){20}}
\put(2,17){$a \, \bullet$}
\put(48,17){$\bullet \, b$}
\end{picture} \end{center} we have $\{0\rightarrow a\} \, \models \, b\rightarrow 1$, but we cannot derive $b\rightarrow 1$ from $0\rightarrow a$ using {\color{black} {\sc identity}}, {\sc composition} and {\sc cancellation} because the set of all morphisms except $b\rightarrow 1$ is closed under composition and cancellation.}\end{sub}
\begin{sub} \label{a7.4} {\em {\exa} Here we demonstrate that in the Finitary Injectivity Logic we cannot restrict the statement of the completeness theorem from the given strongly locally ranked category $\mathcal{A}$ to its full subcategory $\mathcal{A}_\omega$ on all objects of finite rank: although the relation $\vdash$ works entirely in $\mathcal{A}_\omega$, the relation $\models$ does not.
More precisely, let $\ch \models_\omega h$ mean that every $\ch$-injective object of finite $\cm$-rank is also $h$-injective. And let $\vdash_\omega$ be the formal consequence w.r.t. Deduction System \ref{2.4}. Then the implication
$$\ch \models_\omega h\;\mbox{ implies }\; \ch \vdash_\omega h$$ does NOT hold in general for sets of finitary morphisms.
Indeed, let $\mathcal{A}=\mathcal{G}ra$ be the category of graphs, i.e., binary relational structures $(A,R)$, $R\subseteq A\times A$, and the usual graph homomorphisms. Recall that $\mathcal{G}ra$ is locally finitely presentable, and the finitely presentable objects are precisely the finite graphs. Let us call a graph a \textit{clique} if $R=A\times A-\Delta_A$. Denote by $C_n$ a clique of cardinality $n$, and let $\mathbf{0}$ be the initial object (empty graph).
For the set $$\mathcal{H}=\{\mathbf{0}\rightarrow C_n\}_{n\in \mathbb{N}}$$
we have the following property:
\centerline{every finite $\mathcal{H}$-injective graph $G$ has a loop (i.e., a morphism from $1$ to $G$).}
\hspace*{-\parindent}In fact, if $G$ has cardinality
less than $n$ and is injective w.r.t. $\mathbf{0}\rightarrow C_n$, then we have a homomorphism $f:C_n \rightarrow G$. Since $f$ cannot be one-to-one, there
exist $x\not=y$ in $C_n$ with $f(x)=f(y)$ -- and this common image defines a loop of $G$ because $(x,y)$ is an edge of $C_n$.
Hence
$$\mathcal{H}\models_\omega (\mathbf{0}\rightarrow \mathbf{1}).$$
However, $\mathbf{0}\rightarrow \mathbf{1}$ cannot be proved in the Finitary Injectivity Logic. In fact, the graph
$$G=\coprod_{n\in\mathbb{N}}C_n$$ demonstrates that $\mathcal{H}\not\models (\mathbf{0}\rightarrow \mathbf{1})$: it is $\mathcal{H}$-injective (each $C_n$ embeds into $G$) but has no loop. By the soundness of the logic, $\mathbf{0}\rightarrow \mathbf{1}$ is therefore not provable from $\mathcal{H}$.
}\end{sub}
\end{document}
\begin{document}
\title{Quantum unary approach to option pricing}
\author{Sergi Ramos-Calderer} \email{sergi.ramos@tii.ae} \affiliation{Departament de F\'isica Qu\`antica i Astrof\'isica and Institut de Ci\`encies del Cosmos (ICCUB), Universitat de Barcelona, Mart\'i i Franqu\`es 1, 08028 Barcelona, Spain.} \affiliation{Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE.}
\author{Adrián P\'{e}rez-Salinas} \affiliation{Departament de F\'isica Qu\`antica i Astrof\'isica and Institut de Ci\`encies del Cosmos (ICCUB), Universitat de Barcelona, Mart\'i i Franqu\`es 1, 08028 Barcelona, Spain.} \affiliation{Barcelona Supercomputing Center (BSC), Spain.}
\author{Diego García-Martín} \affiliation{Departament de F\'isica Qu\`antica i Astrof\'isica and Institut de Ci\`encies del Cosmos (ICCUB), Universitat de Barcelona, Mart\'i i Franqu\`es 1, 08028 Barcelona, Spain.} \affiliation{Barcelona Supercomputing Center (BSC), Spain.} \affiliation{Instituto de Física Teórica, UAM-CSIC, Madrid, Spain.}
\author{Carlos Bravo-Prieto} \affiliation{Departament de F\'isica Qu\`antica i Astrof\'isica and Institut de Ci\`encies del Cosmos (ICCUB), Universitat de Barcelona, Mart\'i i Franqu\`es 1, 08028 Barcelona, Spain.} \affiliation{Barcelona Supercomputing Center (BSC), Spain.}
\author{\\Jorge Cortada} \affiliation{Caixabank, Barcelona, Spain.}
\author{Jordi Planagum\`a} \affiliation{Caixabank, Barcelona, Spain.}
\author{Jos\'{e} I. Latorre} \affiliation{Departament de F\'isica Qu\`antica i Astrof\'isica and Institut de Ci\`encies del Cosmos (ICCUB), Universitat de Barcelona, Mart\'i i Franqu\`es 1, 08028 Barcelona, Spain.} \affiliation{Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE.} \affiliation{Center for Quantum Technologies, National University of Singapore, Singapore.}
\begin{abstract} We present a quantum algorithm for European option pricing in finance, where the key idea is to work in the unary representation of the asset value. The algorithm requires novel circuitry and is divided into three parts: first, the amplitude distribution corresponding to the asset value at maturity is generated using a low-depth circuit; second, the expected return is computed with simple controlled gates; and third, standard Amplitude Estimation is used to gain quantum advantage. On the positive side, the unary representation remarkably simplifies the structure and depth of the quantum circuit, and the amplitude distribution uses quantum superposition to bypass the role of classical Monte Carlo simulation. The unary representation also provides a post-selection consistency check that allows for a substantial mitigation of the error of the computation. On the negative side, the unary representation requires linearly many qubits to represent a target probability distribution, as compared to the logarithmic scaling of binary algorithms. We compare the performance of the unary {\sl vs.} the binary option pricing algorithm using error maps, and find that the unary representation may bring a relevant advantage in practice for near-term devices. \end{abstract}
\maketitle \section{Introduction}
Quantum computing provides new strategies to address problems that nowadays are considered difficult to solve by classical means. The first quantum algorithms showing a theoretical advantage over their classical counterparts are known since the 1990s, such as integer factorization to prime numbers \cite{factorization-shor1999} or a more efficient unstructured database search \cite{search-grover1997}. Nevertheless, current quantum devices are not powerful enough to run quantum algorithms that are able to compete against state-of-the-art classical algorithms. Indeed, available quantum computers are in their Noisy Intermediate-Scale Quantum (NISQ) stage \cite{nisq-preskill2018}, as errors due to decoherence, noisy gate application or error read-out limit the performance of these new machines. These NISQ devices may nonetheless be useful tools for a variety of applications due to the introduction of hybrid variational methods. Some of the proposed applications include quantum chemistry \cite{vqe-peruzzo2014, vqe-higgott2019, vqe-jones2019}, simulation of physical systems \cite{vqs-li2017, vqs-kokail2019, vqs-cirstoiu2019}, combinatorial optimization \cite{qaoa-fahri2014}, solving large systems of linear equations \cite{vls-bravo2019, vls-xu2019, vls-huang2019}, state diagonalization \cite{qsd-larose2019, qsd-bravo2019} or quantum machine learning \cite{qml-mitarai2018, qml-zhu2019, qml-perez2019}. Some exact, non-variational, quantum algorithms are also well suited for NISQ devices \cite{ising-cervera2018, entanglement-subasi2018, quantum-bravyi2018, quantum-bravyi2019}.
A field that is expected to be transformed by the improvement of quantum devices is quantitative finance \cite{qfinance-orus2019, portfolio-kerenidis2019, crashes-orus2019, derivatives-martin2019, credit-egger2019}. In recent years, there has been a surge of new methods and algorithms dealing with financial problems using quantum resources, such as optimization problems \cite{optimization-rosenberg2016, optimization-rebentrost2018, optimization-moll2018, optimization-lopez2015} which are in general hard.
Notably, the pricing of financial derivatives is a prominent problem, many of whose computational obstacles are well suited to be overcome via quantum computation. In this paper we will deal with options, which are a particular type of financial derivative. Options are contracts that allow the holder to buy (\textit{call}) or sell (\textit{put}) some asset at a pre-established price (\textit{strike}) at a future point in time (\textit{maturity date}). The payoff of an option depends on the evolution of the asset's price, which follows a stochastic process. A simple, yet successful model for pricing options is the Black-Scholes model \cite{blackscholes-black1973}. This analytically solvable model predicts that the asset's price at a future time $t$ follows a log-normal probability distribution. Then, a specified payoff function, which depends on the particular option considered, has to be integrated over this distribution to obtain the expected return of the option. Current classical algorithms rely on computationally costly Monte Carlo simulations to estimate the expected return of options.
A few quantum algorithms have been proposed to improve on classical option pricing \cite{qfinance-stamatopoulos2019, qfinance-rebentrost2018, qfinance-woerner2019}. It has been shown that quantum computers can provide a quadratic speedup in the number of quantum circuit runs as compared to the number of classical Monte Carlo runs needed to reach a certain precision in the estimation. The basic idea is to exploit quantum Amplitude Estimation \cite{amplitude_estimation-brassard2002, counting-aaronson2019, montecarlo-montanaro2015quantum}. Nonetheless, this can only be achieved when an efficient way of loading the probability distribution of the asset price is available. The idea of using quantum Generative Adversarial Networks (qGANs) \cite{qGAN-lloyd2018, qGAN-dallaire2018} to address this issue has been analyzed \cite{qGAN-zoufal2019}.
In the following, we propose a quantum algorithm for option pricing. The key new idea is to construct a quantum circuit that works in the unary basis of the asset's value, \textit{i.e.}, in a subspace of the full Hilbert space of $n$ qubits. Then, the evolution of the asset's price is computed using an amplitude distributor module. Furthermore, the computation of the payoff greatly simplifies. A third part of the algorithm is common to previous approaches, namely it uses Amplitude Estimation. The unary scheme brings further advantage since it allows for a post-selection strategy that results in error mitigation. Let us recall that error mitigation techniques are likely to be crucial for the success of quantum algorithms in the NISQ era. On the negative side, the number of qubits in the unary algorithm scales linearly with the number of bins, while in the binary algorithm it is logarithmic with the target precision. This results in a worse asymptotic scaling for the unary algorithm. Yet, our estimates for the number of gates indicate that the crossing point between these two is located at a number of qubits that renders a good precision ($< 1\%$) for real-world applications. Moreover, the performance of the unary algorithm is more robust to noise, as we show in simulations. Hence, our proposal seems to be better suited to be run on NISQ devices. Unary representations have also been considered in previous works \cite{spectral-poulin2018, babbush2018,steudtner2019}.
We will illustrate our new algorithm focusing on a simple European option, whose payoff is a function only of the asset's price at the maturity date, the only date on which the contract can be executed. This straightforward example has been chosen as a proof of concept for this new approach. We will compare the performance of our unary quantum circuit with the previous binary quantum circuit proposal, for a fixed precision or binning of the probability distribution.
The paper is organized as follows. We first introduce the basic ideas on option pricing, both classical and quantum, in Sec. \ref{sec:background}. The unary quantum algorithm is presented and analyzed in Sec. \ref{sec:unary}. We devote Sec. \ref{sec:un-vs-bin} to outline the circuit specifications and compare them for the unary and binary quantum algorithms. Sec. \ref{sec:simulations} is dedicated to describe the results obtained by means of classical simulations for both algorithms. Lastly, conclusions are drawn in Sec. \ref{sec:conclusions}. Further details on several topics are described in the Appendices.
\section{Background}\label{sec:background}
There are three main pieces that lay the groundwork for our algorithm. They are a) the economic model employed in European-option pricing, known as the Black-Scholes model; b) the Amplitude Estimation technique, which provides a quadratic quantum advantage over classical Monte Carlo methods; and c) a quantum algorithm for option pricing in the binary basis, as proposed in \cite{qfinance-stamatopoulos2019}.
\subsection{Black-Scholes model}\label{sec:econ_model}
The evolution of asset prices in financial markets is usually computed using the model established by F. Black and M. Scholes in Ref. \cite{blackscholes-black1973}. This evolution is governed by two properties of the market, the interest rate and the volatility, which are incorporated into a stochastic differential equation.
The Black-Scholes model for the evolution of an asset's price at time $T$, $S_T$, is based on the following stochastic differential equation, \begin{equation}\label{eq:BSM}
{\rm d}S_T = S_T\, r\, {\rm d}T + S_T\, \sigma\, {\rm d}W_T, \end{equation} where $r$ is the interest rate, $\sigma$ is the volatility and $W_T$ describes a Brownian process. Let us recall that a Brownian process $W_T$ is a continuous stochastic evolution starting at $W_0=0$ and consisting of independent Gaussian increments. To be specific, let $\mathcal{N}(\mu, \sigma_s)$ be a normal distribution with mean $\mu$ and standard deviation $\sigma_s$. Then, the increment between two times of the Brownian process is $W_T - W_S \sim \mathcal{N}(0, \sqrt{T - S})$, for $T > S$.
The stochastic differential equation \eqref{eq:BSM} can be approximately solved analytically to first order, yielding the solution \begin{equation}\label{eq:log_normal}
S_T = S_0 e^{(r - \frac{\sigma^2}{2}) T} e^{\sigma W_T}\;\sim\; S_0\, e^{\mathcal{N}\left(\left(r - \frac{\sigma^2}{2}\right) T, \sigma \sqrt{T}\right)}, \end{equation} which corresponds to a log-normal distribution. The details of this procedure are outlined in App. \ref{sec:ap_econ_model}.
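As a sanity check of Eq. \eqref{eq:log_normal}, the log-normal law can be sampled directly. The following sketch (with illustrative parameter values, not taken from the text) verifies that the sample mean approaches the model expectation $S_0 e^{rT}$:

```python
import math
import random

def sample_S_T(S0, r, sigma, T, n_paths, seed=0):
    """Sample the asset price at maturity from the closed-form solution
    S_T = S0 * exp((r - sigma^2/2) T + sigma W_T), with W_T ~ N(0, sqrt(T))."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    return [S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
            for _ in range(n_paths)]

# Illustrative parameters (an assumption, not values from the paper)
paths = sample_S_T(S0=2.0, r=0.05, sigma=0.4, T=0.1, n_paths=100_000)
mean_S_T = sum(paths) / len(paths)
# Under the model, E[S_T] = S0 * exp(r T) = 2 * exp(0.005)
print(mean_S_T)
```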
To obtain the expected return of an option, a payoff function has to be integrated over the resulting probability distribution. This is usually solved using classical Monte Carlo simulation.
In the case of European options, the payoff function is \begin{equation}
f(S_T, K) = \max(0, S_T - K), \end{equation} yielding an expected payoff given by
\begin{equation}\label{eq:avg_payoff}
C(K) = \int_K^\infty \left( S_T - K \right) p(S_T)\,dS_T, \end{equation} where $K$ is the strike and $p(S_T)$ is the log-normal probability density of $S_T$ given by Eq. \eqref{eq:log_normal}. European options can only be executed at a fixed pre-specified time, called the {\sl maturity date}. This is the reason why the payoff is computed using only the probability distribution of $S_T$ at time $T$.
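A classical Monte Carlo baseline for Eq. \eqref{eq:avg_payoff} can be sketched as follows; the comparison value $S_0 e^{rT}\Phi(d_1) - K\Phi(d_2)$ is the standard (undiscounted) closed-form Black-Scholes result, and all numerical parameters are illustrative assumptions:

```python
import math
import random

def european_call_mc(S0, K, r, sigma, T, n_paths, seed=1):
    """Monte Carlo estimate of the expected payoff E[max(S_T - K, 0)]
    under the log-normal Black-Scholes model."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n_paths):
        S_T = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(S_T - K, 0.0)
    return total / n_paths

def european_call_exact(S0, K, r, sigma, T):
    """Closed-form (undiscounted) expected payoff, for comparison."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * math.exp(r * T) * Phi(d1) - K * Phi(d2)

mc = european_call_mc(2.0, 1.9, 0.05, 0.4, 0.1, n_paths=200_000)
exact = european_call_exact(2.0, 1.9, 0.05, 0.4, 0.1)
print(mc, exact)  # the two values agree up to the O(N^{-1/2}) sampling error
```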
Our algorithm employs a quantum circuit that generates a probability distribution following Eq. \eqref{eq:log_normal}, and then encodes the expected payoff of a European option, Eq. \eqref{eq:avg_payoff}, into the amplitudes of an ancilla qubit.
\subsection{Amplitude Estimation\label{sec:AE}}
Amplitude Estimation (AE) is a quantum technique that makes it possible to estimate the probability of obtaining a certain outcome from a quantum state, with a given precision, with up to a quadratic speedup in the number of function calls as compared to direct sampling \cite{amplitude_estimation-brassard2002, amplitude_estimation-suzuki2020}.
\subsubsection*{AE with Quantum Phase Estimation} Let us take an algorithm $\mathcal{A}$ such that \begin{equation}
\mathcal{A} \ket 0_n \ket 0 = \sqrt{1 - a} \ket{\psi_0}_n\ket{0} + \sqrt{a} \ket{\psi_1}_n\ket{1}, \end{equation} where the last qubit serves as an ancilla and the states $\ket{\psi_{0,1}}_n$ can be non-orthogonal. The ancilla qubit is a flag which makes it possible to identify the states as {\sl good} ($\ket{1}$) or {\sl bad} ($\ket{0}$). The state $\mathcal{A}\ket 0_n \ket 0$ can be directly sampled $N$ times, and the estimate for the probability of finding a good outcome will be $\bar a$, with
|a - \bar a| \sim \mathcal{O}(N^{-1/2}), \end{equation} as dictated by the sampling error of a multinomial distribution.
However, AE can improve this result. Let us first define the central operator for AE \cite{amplitude_estimation-brassard2002} \begin{equation}
\mathcal{Q} = - \mathcal{A} \mathcal{S}_0 \mathcal{A}^\dagger \mathcal{S}_{\psi_0}, \end{equation} where the operators $\mathcal{S}_0$ and $\mathcal{S}_{\psi_0}$ are inherited from Grover's search algorithm \cite{search-grover1997}, being \begin{eqnarray}
\mathcal{S}_0 & = & \mathbf{I} - 2 \ket 0_n \bra 0_n \otimes \ket 0 \bra 0 , \\
\mathcal{S}_{\psi_0} & = & \mathbf{I} - 2 \ket{\psi_0}_n\bra{\psi_0}_n \otimes \ket 0 \bra 0. \end{eqnarray} The $\mathcal{S}_0$ operator changes the sign of the $\ket 0_n \ket 0$ state, while $\mathcal{S}_{\psi_0}$ takes the role of an oracle and changes the sign of all bad outcomes. The operator $\mathcal{Q}$ has eigenvalues $e^{\pm i 2 \theta_a}$, with $a = \sin^2(\theta_a)$. The procedure of Quantum Phase Estimation (QPE) is then applied to extract an integer number $y \in \{0, 1, \ldots, 2^m-1\}$ such that $\bar{\theta}_a = \pi y / 2^m$ is an estimate of $\theta_a$, with $m$ the number of ancilla qubits. Recall that a Quantum Fourier Transform (QFT) is required to perform QPE.
The value of $\bar{\theta}_a$ leads to an estimate of $\bar a$, such that \begin{equation}
|a - \bar a| < \frac{2\pi \sqrt{a (1 - a)}}{2^m} + \frac{\pi^2}{2^{2m}} \sim \mathcal{O}\left(\frac{\pi}{2^m}\right) \end{equation} with probability at least $8/\pi^2\approx 81\%$.
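The discretization behind this bound can be checked numerically: the best $m$-bit readout is the integer $y$ closest to $2^m\theta_a/\pi$, giving $\bar a = \sin^2(\pi y/2^m)$. A small sketch of this idealized readout (ignoring the failure probability):

```python
import math

def qpe_ae_estimate(a, m):
    """Idealized m-ancilla QPE readout for theta_a = arcsin(sqrt(a)):
    returns a_bar = sin^2(pi * y / 2^m) for the closest integer y."""
    theta = math.asin(math.sqrt(a))
    y = round((2 ** m) * theta / math.pi)
    return math.sin(math.pi * y / 2 ** m) ** 2

a = 0.3
for m in (4, 6, 8):
    a_bar = qpe_ae_estimate(a, m)
    bound = 2 * math.pi * math.sqrt(a * (1 - a)) / 2 ** m + math.pi ** 2 / 4 ** m
    assert abs(a - a_bar) <= bound  # the stated error bound holds
```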
The original Amplitude Estimation procedure requires the implementation of QPE, which is highly resource demanding. Hence, the complexity of the circuit precludes its feasibility in the NISQ era.
\subsubsection*{AE without Quantum Phase Estimation} Recently, there has been a new proposal for Amplitude Estimation that does not require QPE and is therefore less resource-demanding. This approach is based on iterative procedures \cite{amplitude_estimation-suzuki2020}. The key fact that allows one to circumvent the use of QPE is that \begin{equation}\label{eq:q_j} \begin{split}
\mathcal{Q}^{m} \mathcal{A} \ket 0_n \ket 0 = \cos\left((2 m + 1)\theta_a\right)\ket{\psi_0}_n \ket{0} + \\ + \sin\left((2 m + 1)\theta_a\right)\ket{\psi_1}_n \ket{1}. \end{split} \end{equation} An integer $m$ is chosen to prepare the state in Eq. \eqref{eq:q_j} and its outcome is measured $N_{\rm shots}$ times, so that the value of $\sin^2\left((2 m + 1)\theta_a\right)$ is estimated with a precision of $\sim N_{\rm shots}^{-1/2}$. This process is repeated several times with different values of $m$ extracted from a set $\{m_j\}$. At the end of the procedure, the precision achieved is bounded by $\sim N_{\rm shots}^{-1/2}M^{-1}$, with $M=\sum_{j=0}^J m_j$, where $J$ is the last index. The exact scaling of the precision depends on the choice of the $m_j$'s. In App. \ref{sec:ap_iae} the full method is explained in further detail.
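The iterative scheme can be emulated classically: for each Grover power $m$ the good-outcome probability is $\sin^2((2m+1)\theta_a)$, and a maximum-likelihood search over $\theta$ recovers $a$. The sketch below uses a plain grid search and an ad hoc schedule $\{m_j\}$ as a simplified stand-in for the procedure of \cite{amplitude_estimation-suzuki2020}:

```python
import math
import random

def simulate_iqae(a_true, m_schedule, n_shots, seed=2):
    """Emulate AE without QPE: sample sin^2((2m+1)*theta) for each m,
    then maximize the joint likelihood over theta in (0, pi/2)."""
    rng = random.Random(seed)
    theta_true = math.asin(math.sqrt(a_true))
    hits = []
    for m in m_schedule:
        p = math.sin((2 * m + 1) * theta_true) ** 2
        hits.append(sum(rng.random() < p for _ in range(n_shots)))

    def log_lik(theta):
        ll = 0.0
        for m, h in zip(m_schedule, hits):
            p = min(max(math.sin((2 * m + 1) * theta) ** 2, 1e-12), 1 - 1e-12)
            ll += h * math.log(p) + (n_shots - h) * math.log(1 - p)
        return ll

    grid = [k * (math.pi / 2) / 20000 for k in range(1, 20000)]
    theta_hat = max(grid, key=log_lik)
    return math.sin(theta_hat) ** 2

a_hat = simulate_iqae(a_true=0.3, m_schedule=[0, 1, 2, 4, 8], n_shots=500)
print(a_hat)  # maximum-likelihood estimate of a_true
```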
\subsection{Binary algorithm}\label{sec:binary}
We now present a binary algorithm for option pricing, as introduced in Ref. \cite{qfinance-stamatopoulos2019}. This algorithm is divided into three parts. \begin{itemize}
\item[(a)] {\sl Amplitude distributor}: it encodes the underlying probability distribution of an asset's price at maturity date into a quantum register. The operator representing this piece will be denoted by $\mathcal{D}$. This algorithm uses a quantum Generative Adversarial Network (qGAN) \cite{qGAN-lloyd2018, qGAN-dallaire2018, qGAN-zoufal2019} in order to fulfill this part. Classical knowledge of the probability distribution is required at this stage.
\item[(b)] {\sl Payoff calculation}: it computes the expected payoff of the option, which is encoded into the amplitude of an ancillary qubit. The operators that perform this step will be a comparator $\mathcal{C}$, that separates the state as above or below the strike, and a set of controlled rotations $\mathcal{R}$, that encode the expected payoff into the probability of measuring an ancilla.
\item[(c)] {\sl Amplitude Estimation}: it extracts the expected payoff calculation encoded in the amplitude of the ancilla, reducing the number of circuit calls needed to reach a desired precision. It is based on the operator $\mathcal{Q}$, which may be applied several times. \end{itemize}
A sketch of a quantum circuit implementing the full algorithm is shown in Fig. \ref{fig:binary_circuit}. For a detailed description of each part, refer to App. \ref{sec:ap_binary}.
\begin{figure}
\caption{Full circuit of the binary algorithm for option pricing, including all steps: the amplitude distributor $\mathcal{D}$; the payoff calculation, comprising the comparator $\mathcal{C}$ and the controlled rotations $\mathcal{R}$; and the Amplitude Estimation operator $\mathcal{Q}$. The operator $\mathcal{Q}$ is repeated $m$ times, where $m$ depends on the AE algorithm. The payoff is indirectly measured on the last qubit.}
\label{fig:binary_circuit}
\end{figure}
\begin{figure*}
\caption{Scheme for the quantum representation of a given asset price at maturity date. For a given number of Monte Carlo paths, a binning scheme must be applied in such a way that the prices of the asset are separated according to their value. Different Monte Carlo paths that end up in the same bin are color-coded accordingly. Each bin is then mapped to an element of the unary basis, whose coefficient is the number of Monte Carlo paths in this bin.
The quantum representation of the asset price at maturity contains all possible Monte Carlo paths simultaneously. The precision is then bounded by the number of bins we can store on a quantum state, \textit{i.e.}, by how many qubits are available.}
\label{fig:MC}
\end{figure*}
\section{Unary algorithm}\label{sec:unary}
We present now a quantum algorithm that prices European options according to the Black-Scholes model, as outlined in Sec. \ref{sec:econ_model}. The key new idea is to construct a quantum circuit that works in the unary basis of the asset's value. The structure of the algorithm is inherited from the one explained in Sec. \ref{sec:binary}, namely: amplitude distributor module, computation of the payoff and Amplitude Estimation. Furthermore, the implementation of all the different pieces is greatly simplified with respect to the binary case. The unary scheme brings further advantage in practice, since it allows for a post-selection strategy that results in error mitigation. Although our unary algorithm requires more qubits than a binary one, its performance is more robust to noise, which probably makes it better suited to be run on NISQ devices.
\subsection{Unary representation} The main feature of the algorithm is that it works in the {\sl unary representation} of the asset value encoded on the quantum register. That means that for every element of the basis only one qubit will be in the $\ket 1$ state, whereas all others will remain in $\ket 0$. A quantum register $\ket \psi$ made of $n$ qubits in the unary representation can be written as \begin{equation}
\begin{split}
\ket\psi &= \sum_{i=0}^{n-1} \psi_i \ket{i}_n = \sum_{i=0}^{n-1} \psi_i \left(\bigotimes_{j=0}^{n-1} \ket{\delta_{i j}}\right) = \\
&= \psi_0 \ket{00\ldots 01}_n+\psi_1 \ket{00\ldots 10}_n+\ldots \\
& \qquad+ \psi_{n-2} \ket{01\ldots 00}_n+\psi_{n-1} \ket{10\ldots 00}_n ,
\end{split} \end{equation}
where $\ket i_n$ corresponds to the $i$-th element of the unary basis, $\delta_{i j}$ is the Kronecker delta and $\sum_{i=0}^{n-1} |\psi_i|^2=1$. A well-known example of a state in the unary representation is the $W$ state, which defines a class of three-qubit multipartite entanglement \cite{entanglement-dur2000}. Depicted in Fig. \ref{fig:MC} is a visual representation of how the unary algorithm maps the outcomes of a Monte Carlo simulation of the asset's price to a quantum register. The ratio of Monte Carlo paths leading to each of the bins translates into the amplitudes of the corresponding unary basis states.
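The embedding of the unary basis into the full $2^n$-dimensional Hilbert space can be made concrete with a few lines (an illustrative sketch; normalization is handled explicitly):

```python
def unary_state(amplitudes):
    """Embed an n-entry amplitude vector into the 2^n-dimensional state
    vector: the i-th unary basis element is the computational basis state
    with a single 1 at position i, i.e. the integer 2^i."""
    n = len(amplitudes)
    norm = sum(abs(a) ** 2 for a in amplitudes) ** 0.5
    state = [0.0] * (2 ** n)
    for i, a in enumerate(amplitudes):
        state[2 ** i] = a / norm
    return state

# The uniform 3-qubit unary state is the W state:
w = unary_state([1.0, 1.0, 1.0])
print([i for i, a in enumerate(w) if a != 0.0])  # → [1, 2, 4], i.e. |001>, |010>, |100>
```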
Given a fixed number of qubits, the unary scheme allows for a lower precision than the binary one. Indeed, only $n$ out of the $2^n$ basis elements of the Hilbert space are used. However, due to the natural mapping between the unary representation and the asset's price evolution, we will find that the probability distribution loading and the expected payoff calculation can be carried out with much simpler quantum circuits. On real devices, the potential gain of the unary representation translates into a shallower circuit depth and simpler connectivity requirements. Furthermore, the unary scheme provides means to post-select results so as to increase the faithfulness of the computation. This is due to the fact that the unary representation resides within a restricted part of the Hilbert space, and that extra space can be used as an indicator of the appearance of errors. As a matter of fact, given a realistic precision goal (<1\%), it may well turn out to be advantageous to move to the unary representation on NISQ devices, as it simplifies the complexity of the circuit and mitigates errors.
In most cases, a quantum computation does not start in a quantum state that belongs to the unary representation, but in the $\ket 0_n$ state instead. To solve this issue, we act with a single Pauli $X$ gate on one qubit. In our algorithm, this qubit is chosen to be the central one in order to reduce the overall circuit depth. At this point, the register displays a single qubit in $\ket{1}$ while all others are in $\ket 0$. This register is an element of the unary basis.
\subsection{Implementation of the algorithm \label{sec:implementation_unary}}
The basic structure of the unary algorithm is directly inherited from the structure of the binary one. All three independent parts are {\sl Amplitude distributor}, {\sl Payoff calculator} and {\sl Amplitude Estimation}. We discuss them now in further detail.
\subsubsection*{Amplitude distributor}
The probability distribution predicted by the Black-Scholes model is based on the one in Eq. \eqref{eq:log_normal}. For a given number of qubits, that is, of precision, the asset price at any time can be mapped to a fixed-depth circuit that distributes probabilities according to the final desired result. The unary representation maps directly onto the value of the asset: for every element in the superposition describing the quantum register, the qubit which is flipped into $\ket{1}$ determines the value of the asset. The classical Monte Carlo spread of asset values will be mapped into the probability of measuring each unary basis element.
The quantum circuit generating the final register operates as a distributor of probability amplitudes. The initial state of the algorithm is given by $\ket{0\ldots 010\ldots 0}_n$, \textit{i.e.}, the element of the unary basis with $\ket 1$ in the middle qubit. Then, the coefficients of the register in the next step of the circuit are generated using partial-SWAP gates (also called parametrized-SWAP or SWAP power gate) between the middle qubit and its neighbors. The partial-SWAP gate is defined as \begin{equation}\label{eq:SWAPRy}
\includegraphics[width=0.3\linewidth, valign=c]{SWAPRy2.png} =
\left(\begin{array}{cccc}
1&0&0&0 \\ 0&\cos{\theta/2}&\sin{\theta/2}&0 \\0&-\sin{\theta/2}&\cos{\theta/2}&0 \\0&0&0&1
\end{array}\right). \end{equation} Moreover, the partial-SWAP gate can be substituted with a partial-iSWAP gate, which serves the same purpose of amplitude sharing. This partial-iSWAP gate, \begin{equation}\label{eq:p-iSWAP}
\includegraphics[width=0.3\linewidth, valign=c]{iSWAPRy2.png} =\left(\begin{array}{cccc}
1&0&0&0 \\ 0&\cos{\theta/2}&-i\sin{\theta/2}&0 \\0&-i\sin{\theta/2}&\cos{\theta/2}&0 \\0&0&0&1
\end{array}\right), \end{equation} is a universal entangling gate that arises naturally from the capacitive coupling of superconducting qubits \cite{partialiSWAP-bialczak2010, iSWAP-schuch2003}. As a matter of fact, Google's Sycamore chip, on which the quantum supremacy experiment was performed \cite{supremacy2019}, allows for this type of gates, which are also of great importance for quantum chemistry applications \cite{barkoutsos2018quantum, gard2020efficient} and combinatorial optimization \cite{hadfield2019qaoa, wang2019xy, cook2019xy}.
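Both two-qubit gates act only on the $\{\ket{01},\ket{10}\}$ subspace, which is what preserves the unary subspace. A quick numerical check of the two matrices given above (a sketch; matrices are written as nested lists):

```python
import math

def partial_swap(theta):
    """The partial-SWAP matrix given in the text: a real rotation
    on the {|01>, |10>} subspace."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[1, 0, 0, 0],
            [0, c, s, 0],
            [0, -s, c, 0],
            [0, 0, 0, 1]]

def partial_iswap(theta):
    """The partial-iSWAP matrix: same amplitude sharing, with
    imaginary off-diagonal entries."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[1, 0, 0, 0],
            [0, c, -1j * s, 0],
            [0, -1j * s, c, 0],
            [0, 0, 0, 1]]

def is_unitary(M, tol=1e-12):
    """Check M^dagger M = I entrywise."""
    n = len(M)
    return all(abs(sum(M[k][i].conjugate() * M[k][j] for k in range(n))
                   - (1 if i == j else 0)) <= tol
               for i in range(n) for j in range(n))

assert is_unitary(partial_swap(0.7)) and is_unitary(partial_iswap(1.3))
```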
This provides the first step to distribute the probability amplitude from the middle qubit to the rest. The procedure is repeated until the edges of the system are reached, as illustrated in Fig. \ref{fig:ProbUploading}. Specific angles can be fed into each partial-SWAP gate to obtain the target probability distribution in the unary representation. The detailed derivation of these angles is described in App. \ref{sec:ap_amplitude_distribution}.
Let us note that any final probability distribution at time $t$ can be obtained with this circuit whose depth is independent of time, since all the necessary information is carried in the angles of the partial-SWAP gates. To be precise, given $n$ qubits, the circuit will always be of depth $\lfloor{n/2}\rfloor+1$. The time dependency of the solution is encoded in the angles determining the partial-SWAP gates. This idea is reminiscent of the quantum circuits that describe the exact solution of the Ising model \cite{verstraete2009,hebenstreit2017,ising-cervera2018}.
\begin{figure}
\caption{Quantum circuit for loading any probability distribution in the unary representation $\mathcal{D}$. The circuit works as a distributor of amplitude probabilities from its middle qubit to the ones in the edges, using partial-SWAP gates that act only on nearest neighbors. Time dependence is encoded in the angles determining the gates.}
\label{fig:ProbUploading}
\end{figure}
The mapping of a known probability distribution function to the unary system depends on $(n-1)$ angles that need to be introduced in the partial-SWAP gates. There are two distinct situations, depending on whether the final probability distribution is known exactly or not. The first case can be addressed by solving an exact set of $n$ equations with $n-1$ parameters after computing the probability distribution classically. In case only the differential equation is known, but not its solution, other methods should be employed \cite{diffeq-iblisdir2007}.
\subsubsection*{Payoff calculator}
The expected payoff calculation circuit builds upon the action of the amplitude distributor to encode the expected return on an ancillary qubit. The unary representation allows for a simple algorithm to accomplish this task. The procedure will prepare an entangled state in the form \begin{equation}\label{eq:AEstate} \ket{\Psi}=\sqrt{1-a}\ket{\psi_0}_n\ket{0} +
\sqrt{a}\ket{\psi_1}_n\ket{1}, \end{equation}
where $\ket{\psi_{0,1}}$ are states in a superposition of the basis elements below and above the strike, respectively. The payoff is encoded within the amplitude $\sqrt{a}$, with $|a| \leq 1$, ready for Amplitude Estimation \cite{montecarlo-montanaro2015quantum}.
The key to encoding the payoff of a European option in an ancillary qubit is to distinguish in the quantum register whether the option price $S_i$ is above or below the strike $K$. This task turns out to be very simple when working in the unary representation, as opposed to the binary one, where a comparator $\mathcal{C}$ needs to be introduced. To be explicit, the computation of the payoff can be achieved by applying controlled Y rotations (c$R_y$ gates), whose control qubits are those encoding a price higher than the accorded strike $K$, namely the operator $\mathcal{R}$. These c$R_y$ gates will only span over those qubits that represent asset values larger than the strike. Note that the depth of the circuit will be $n-k$, where $k$ is the unary label of the strike $K$, see Fig. \ref{fig:PayoffCircuit}. \begin{figure}
\caption{Quantum circuit that encodes the expected payoff in an ancillary qubit in the unary representation $\mathcal{C} + \mathcal{R}$. Each qubit with a mapped option price higher than the designated strike controls a c$R_y$ gate on the ancilla, where the rotation angle is a function of its contribution to the expected payoff. The comparator $\mathcal{C}$ is constructed through the control wires, while the $\mathcal{R}$ piece is performed by rotations in the last qubit.}
\label{fig:PayoffCircuit}
\end{figure}
The rotation angle for each c$R_y$ depends on the contribution of the qubit to the expected payoff. This can be achieved using \begin{equation}
\phi_i=2\arcsin\sqrt{\frac{S_i-K}{S_{max}-K}}, \end{equation} where the denominator inside the $\arcsin$ argument is introduced for normalization.
Applying the payoff calculator to a quantum state representing the probability distribution, as depicted in Fig. \ref{fig:MC}, results in \begin{equation} \begin{split}
\ket{\Psi}=\sum_{S_i\leq K}^{n-1}\sqrt{p_i}\ket{i}_n\ket{0}+\sum_{S_i>K}^{n-1}\sqrt{p_i}\cos(\phi_i/2)\ket{i}_n\ket{0}+\\
+\sum_{S_i>K}^{n-1}\sqrt{p_i}\sqrt{\frac{S_i-K}{S_{max}-K}}\ket{i}_n\ket{1}. \end{split} \end{equation} The state is now in the form of Eq. \eqref{eq:AEstate}. It is straightforward to see that the probability of measuring $\ket 1$ in the ancillary qubit is \begin{equation}
P(\ket 1) = \sum_{S_i > K} p_i \frac{S_i - K}{S_{max} - K}. \end{equation} In order to recover the encoded expected payoff, we need to measure the probability of obtaining $\ket{1}$ for the ancilla and then multiply it by the normalization factor ${S_{max}-K}$. Note that the form of the state is such that further Amplitude Estimation can be performed.
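This encoding can be checked with a small classical calculation: applying the rotation angles $\phi_i$ and reading off $P(\ket 1)$, then undoing the normalization, must reproduce the discretized expected payoff exactly. A minimal sketch (function name and toy numbers are ours):

```python
import math

def payoff_probability(prices, probs, strike):
    """Classical value of P(|1>) after the unary payoff circuit: each
    in-the-money qubit i rotates the ancilla by
    phi_i = 2*arcsin(sqrt((S_i - K)/(S_max - K)))."""
    s_max = max(prices)
    p1 = 0.0
    for s, p in zip(prices, probs):
        if s > strike:
            phi = 2.0 * math.asin(math.sqrt((s - strike) / (s_max - strike)))
            p1 += p * math.sin(phi / 2.0) ** 2
    return p1

# toy 4-bin example
prices = [1.5, 1.8, 2.1, 2.4]
probs = [0.2, 0.3, 0.3, 0.2]
K = 1.9
p1 = payoff_probability(prices, probs, K)
payoff = p1 * (max(prices) - K)  # undo the normalization factor S_max - K
exact = sum(p * max(s - K, 0.0) for s, p in zip(prices, probs))
```

Here `payoff` and `exact` agree up to floating-point error, mirroring the identity $P(\ket 1)\,(S_{max}-K)=\sum_{S_i>K} p_i (S_i-K)$.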
\subsubsection*{Amplitude Estimation}
\begin{figure}
\caption{Quantum circuit representation of $\mathcal{S}_{\psi_0}$ (Left) and $\mathcal{S}_0$ (Right) required to perform Amplitude Estimation in the unary basis. Notice that operator $\mathcal{S}_0$ is much simpler in the unary representation as it does not require multi-controlled CNOT gates.}
\label{fig:ae_circuits}
\end{figure}
Let us now move to the application of Amplitude Estimation to our unary option pricing algorithm. As described in Sec. \ref{sec:AE}, Amplitude Estimation is performed by concatenating the operator $\mathcal{A}$ and its inverse $\mathcal{A}^\dagger$ with the operators $\mathcal{S}_0$ and $\mathcal{S}_{\psi_0}$. In the following, we will describe how to implement these operators in the unary algorithm. The detailed implementation can also be seen in Fig. \ref{fig:ae_circuits}.
\begin{figure*}
\caption{Full circuit for the option pricing algorithm in the unary representation. The gate $\mathcal{D}$ is the probability distributor, and $\mathcal{C} + \mathcal{R}$ represent the computation of the payoff. After applying the algorithm, the oracle $\mathcal{S}_{\psi_0}$, the reverse algorithm and $\mathcal{S}_0$ follow. The last step is applying the algorithm again. This block $\mathcal{Q}$ is to be repeated for Amplitude Estimation. Measurements on all qubits are required for post-selection. The qubit labelled as $q_3$ is the one starting the unary representation. }
\label{fig:full_unary}
\end{figure*}
The oracle operator $\mathcal{S}_{\psi_0}$ acts by identifying those coefficients corresponding to accepted outcomes and inverting their sign. In this problem, the task of identifying the element of the basis has been already done by the algorithm $\mathcal{A}$. Accepted outcomes are labelled with $\ket 1$ in the ancilla qubit. Therefore, the function of this oracle can be achieved by local operations in the ancilla qubit. Explicitly, such operation is \begin{equation}\label{eq:oracle}
\mathcal{S}_{\psi_0} = (I^{\otimes n} \otimes (XZX)). \end{equation} Notice that the $X$-gates can be deleted since they only provide a global sign.
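The remark about the $X$ gates can be verified directly: $XZX = -Z$, so the oracle differs from $I^{\otimes n}\otimes Z$ only by a global sign. A one-line check (using NumPy, purely as an illustration):

```python
import numpy as np

# Pauli matrices
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# XZX equals -Z, so dropping the X gates only contributes a global sign
assert np.array_equal(X @ Z @ X, -Z)
```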
For the operator $\mathcal{S}_0$, a detail that greatly simplifies the computation is worth remarking on. The operator $\mathcal{S}_0$ is normally defined with respect to $\ket{0}$, since most quantum algorithms start on that state, as depicted in Eq. \eqref{eq:AEstate}. However, a more apt definition takes as reference the initial state onto which the algorithm $\mathcal{A}$ is first applied. For the unary case, if we isolate the first extra $X$ gate, we can consider the algorithm as starting in that state of the unary basis, heavily simplifying the overall construction. That being the case, $\mathcal{S}_0$ can be constructed out of 2 single-qubit gates and one entangling gate.
With the operator $\mathcal{Q}$ constructed, Amplitude Estimation schemes can be performed. Since the unary algorithm is aimed towards NISQ devices, we use an Amplitude Estimation scheme without Quantum Phase Estimation, explained in detail in App. \ref{sec:ap_iae}. The main idea consists in applying the operator $\mathcal{Q}$ a varying number of times $m$, and processing the data in order to gain an advantage over ordinary sampling.
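Schematically, in the noiseless case $m$ applications of $\mathcal{Q}$ turn the hit probability into $\sin^2\!\big((2m+1)\theta\big)$ with $a=\sin^2\theta$, and the classical post-processing recovers $a$ from the measured frequencies. The toy estimator below, a brute-force least-squares fit over a grid, merely stands in for the weighted-average scheme of the appendix; all names are ours.

```python
import math

def grover_probabilities(a, ms):
    """Ideal hit probabilities after m applications of Q:
    sin^2((2m+1)*theta), with a = sin^2(theta) the amplitude to estimate."""
    theta = math.asin(math.sqrt(a))
    return [math.sin((2 * m + 1) * theta) ** 2 for m in ms]

def estimate_a(ms, phats):
    """Toy estimator: least-squares fit of a over a fine grid,
    standing in for the likelihood-based post-processing."""
    best_a, best_cost = 0.0, float("inf")
    for k in range(1, 2000):
        a = k / 2000.0
        theta = math.asin(math.sqrt(a))
        cost = sum((math.sin((2 * m + 1) * theta) ** 2 - p) ** 2
                   for m, p in zip(ms, phats))
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a
```

Feeding the estimator noiseless frequencies for several values of $m$ recovers the encoded amplitude, illustrating why deeper circuits (larger $m$) sharpen the estimate.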
\subsection{Error mitigation\label{subsec:error_unary}}
NISQ era algorithms need to be resilient against gate errors and decoherence, since fault-tolerant logical qubits are still far from being a reality. Error mitigation techniques have been studied in past literature, see Refs. \cite{temme-mitigation2017, endo-mitigation2018}, and some of them might find valid applications in the unary algorithms as well. However, the unary representation we are proposing here turns out to offer an additional, native, post-selection strategy that manages to mitigate different types of errors. This feature is not present in its binary counterpart.
The key idea behind this error mitigation is that unary algorithms should ideally operate within the unary subspace of the Hilbert space. As a consequence, the read-out of any measurement should reflect this fact, and it is possible to reject any outcome that does not fulfil this requirement. In practice, a number of failed repetitions of the experiment are discarded, which results in a trade-off between reduction of errors and loss of accepted samples.
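A minimal sketch of this post-selection step (assuming read-out bitstrings with the register first and the payoff ancilla last; names and toy counts are ours): any outcome whose register does not have exactly one qubit in $\ket 1$ is discarded.

```python
def postselect_unary(counts):
    """Discard read-outs outside the unary subspace: the register part of
    the bitstring (all bits except the last one, taken here to be the
    payoff ancilla) must contain exactly one '1'."""
    return {bits: c for bits, c in counts.items()
            if bits[:-1].count("1") == 1}

# toy read-out: two valid unary outcomes and one corrupted bitstring
counts = {"00101": 480, "01000": 470, "01101": 50}
clean = postselect_unary(counts)
accepted = sum(clean.values())  # 950 of the 1000 shots survive
```

The 50 discarded shots illustrate the trade-off: errors are filtered out at the cost of a reduced effective sample size.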
A scheme for the full circuit is depicted in Fig. \ref{fig:full_unary}. In summary, the circuit is composed of one initial $X$ gate that initializes the unary basis, one set of amplitude distributor $(\mathcal{D})$ and payoff calculator $(\mathcal{C} + \mathcal{R})$, and $m$ rounds of Amplitude Estimation $\mathcal{Q} = \mathcal{A} \mathcal{S}_0 \mathcal{A}^\dagger \mathcal{S}_{\psi_0}$. Read-out on all qubits is required for post-selection to reduce errors.
We will investigate in detail the performance of unary {\sl vs.} binary circuits for option pricing in Sec. \ref{sec:simulations}. There we will find that the unary representation is advantageous over the binary one when targeting the same realistic precision and errors are taken into account.
\section{Unary and binary comparison \label{sec:un-vs-bin}}
We compare here the unary algorithm for option pricing described in Sec. \ref{sec:unary} to the binary one stated in Sec. \ref{sec:binary}, in terms of the necessary circuit design as well as the number of gates required to apply the algorithm and successfully perform Amplitude Estimation.
\subsection{Ideal chip architecture}
The structure of the unary algorithm allows for a simple chip design. In order to upload the desired probability distribution to the quantum register, only local interactions between first-neighbor qubits are required. Therefore, qubits can be arranged on a single 1D line with two-local interactions. Such connectivity is perfectly suited to carry out the algorithm. In order to compute the expected payoff, the ancillary qubit needs to interact with the rest of the quantum register. This structure is outlined in Fig. \ref{fig:ChipArchitecture} for an arbitrary number of qubits.
The simplicity of the architecture needed to implement the unary algorithm might yield an advantage over alternative algorithms in NISQ computers. Note also that superconducting qubits allow for a natural implementation of the partial-iSWAP gate \cite{partialiSWAP-bialczak2010}. This realization of the quantum circuit would result in a decrease in the number of needed gates by a factor of 6 in the amplitude distributor module.
On the other hand, the binary algorithm for payoff calculation needs non-local chip connectivity. For the sake of comparison with the simplest chip architecture presented for the unary algorithm, the most basic connectivity needed to perform the steps described for the binary scheme is displayed in Fig. \ref{fig:ChipArchitectureBinary}. The number of necessary qubits for the binary algorithm scales better asymptotically than in the unary approach, despite the increasing number of ancillary qubits required. Nevertheless, the need for Toffoli gates and almost full connectivity may eliminate this advantage in practical problems for NISQ devices.
\begin{figure}
\caption{Ideal chip architecture to implement the unary algorithm for option pricing. Only a single ancilla qubit, labelled as \emph{a} in the image, has to be non-locally controlled by the rest of the qubits. All other interactions are first-nearest-neighbor gates.}
\caption{Ideal chip architecture to implement the binary algorithm for option pricing with 4 qubits of precision, \emph{q}$_0$,\emph{q}$_1$,\emph{q}$_2$,\emph{q}$_3$, where \emph{a} and \emph{c} stand for ancillary and carrier qubit, respectively, and \emph{b} is another ancilla. The algorithm requires a number of ancillary and carrier qubits equal to the number of precision qubits plus two, 4+2 in this example. Full connectivity is needed between the precision qubits and two ancillas.}
\label{fig:ChipArchitecture}
\label{fig:ChipArchitectureBinary}
\end{figure}
\subsection{Gate count\label{subsec:counting}}
\begin{table*}[t]
\centering
\resizebox{0.38\textwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}\hline
\multirow{2}{*}{\bf Unary} & \multicolumn{4}{c|}{CNOT} & \multicolumn{4}{c|}{partial-iSWAP} \\ \cline{2-9}
& $\mathcal{D}$ & $\mathcal{C} + \mathcal{R}$ & $\mathcal{S}_{\psi_0}$ & $\mathcal{S}_0$ &$\mathcal{D}$ & $\mathcal{C} + \mathcal{R}$ & $\mathcal{S}_{\psi_0}$ & $\mathcal{S}_0$\\ \hline
1-qubit gates & 2n & 2$\kappa$n & 1 & 4 & 1 & 10$\kappa$n & 1 & 9\\
2-qubit gates & 4n & 2$\kappa$n & 0 & 1 & n & 5$\kappa$n & 0 & 2\\\hline
Circuit depth & 3n & 4$\kappa$n & 1 & 5 & n/2 & 15$\kappa$n & 1 & 10 \\\hline
\end{tabular}}
\resizebox{0.55\textwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}\hline
\multirow{2}{*}{\bf Binary} & \multicolumn{4}{c|}{CNOT} & \multicolumn{4}{c|}{partial-iSWAP} \\ \cline{2-9}
& $\mathcal{D}$ & $\mathcal{C} + \mathcal{R}$ & $\mathcal{S}_{\psi_0}$ & $\mathcal{S}_0$ & $\mathcal{D}$ & $\mathcal{C} + \mathcal{R}$ & $\mathcal{S}_{\psi_0}$ & $\mathcal{S}_0$ \\\hline
1-qubit gates & 3nl & (16+5$\kappa$)n & 1 & 20n - 23 & 8nl & (86+5$\kappa$)n & 1 & 80n - 113 \\
2-qubit gates & nl & 14n & 0 & 12n - 18 & 2nl & 28n & 0 & 24n - 36\\\hline
Circuit depth & nl+l & (27+2$\kappa$)n & 1 & 24n - 30 & 6nl+l & (97+2$\kappa$)n & 1 & 90n - 129\\\hline
\end{tabular}}
\caption{Scaling of the number of 1- and 2-qubit gates and circuit depth as a function of the number of qubits $n$ representing the asset value in unary and binary representations, for the amplitude distributor $\mathcal{D}$, payoff estimator $\mathcal{C} + \mathcal{R}$ and Amplitude Estimation operators $\mathcal{S}_{\psi_0}$ and $\mathcal{S}_0$. Ideal chip architectures are assumed. We compare this scaling for the cases where CNOT or partial-iSWAP gates are implemented. If the experimental device can implement both CNOT and partial-iSWAP basic gates, the total amount of gates and total depth would be reduced. For the unary circuit, the parameter $0\le \kappa\le 1$ depends on the position of the strike in the qubit register. In the binary case, note the large overheads due to the use of Toffoli gates. The parameter $0\le \kappa\le 1$ characterizes the number of $1$s in the binary representation of the strike price. For the amplitude distributor, $l$ is the number of layers of the qGAN.
}
\label{tab:gates} \end{table*}
\begin{figure*}
\caption{Scaling of the number of gates required for the full algorithm, including a step, $m=1$, of Amplitude Estimation, with the number of bins, for different native gates: CNOT gates (Left), partial-iSWAP gates (Center) and the best possible combination (Right), in which one is allowed both CNOT and iSWAP gates as native to the device. The scaling is calculated assuming ideal connectivity, which would largely hinder the binary implementation were that not the case.}
\label{fig:gate_count}
\end{figure*}
The unary algorithm needs $\order{n}$ partial-SWAP gates in order to distribute the amplitude and $\order{\kappa n}$ controlled-$R_y$ gates to encode the payoff in an ancillary qubit, where $0\le \kappa\le 1$ depends on the strike price $K$. However, actual quantum devices operate using a native set of gates that are used to construct any other unitary. We present in Table \ref{tab:gates}, left, the gate count of the full circuit as a function of the number of qubits, using either CNOT or partial-iSWAP as the native entangling gate. The gate count assumes the simple ideal chip structure, see Fig. \ref{fig:ChipArchitecture}, that requires first-neighbor interactions and an ancilla connected to the rest of the qubits.
The partial-iSWAP gate introduces a substantial gain for the amplitude distributor but requires more gates in order to implement the payoff calculation. If both partial-iSWAP interaction between nearest neighbors and CNOT-based connection with the single ancilla are implemented, the best scaling of the full algorithm would be achieved. To be precise, the total number of gates would be $(4 \kappa+1) n+1$, and the depth of the circuit would become $(4 \kappa +\frac{1}{2})n$.
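For reference, these closed-form totals can be evaluated directly; e.g. for $n=8$ bins and a strike in the middle of the register ($\kappa=1/2$), the mixed gate set gives 25 gates and depth 20. A trivial helper (names are ours):

```python
def unary_totals(n, kappa):
    """Best-case totals quoted in the text for the mixed native gate set
    (partial-iSWAP between neighbors + CNOT to the single ancilla)."""
    gates = (4 * kappa + 1) * n + 1
    depth = (4 * kappa + 0.5) * n
    return gates, depth

gates, depth = unary_totals(8, 0.5)  # 25 gates, depth 20
```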
The scaling of the gate count for the binary algorithm is displayed in Table \ref{tab:gates}, right. We compare the scaling when using CNOT or partial-iSWAP as native gates. The CNOT gate turns out to be more convenient for the binary algorithm. These results include the part of the algorithm that produces the uploading of the probability distribution into the quantum register (hence the dependence on the number of layers of the variational circuit), but they do not take into account the training required by the qGAN. In both unary and binary cases, the counting for single-qubit gates was made by compiling several successive single-qubit gates into a single one.
This gate count is performed assuming full connectivity, or at least the connectivity presented in Fig. \ref{fig:ChipArchitectureBinary}. Existing quantum devices need to implement extra SWAP gates to compensate for missing connections, which are not taken into account in these calculations. Therefore, the gate count on a computer with less than this ideal connectivity will result in a worse scaling.
Let us emphasize that the gate overhead for the unary representation is much lower than the one for the binary case. This is due to the fact that the unary circuit does not require any three-qubit gate. This simplification is eclipsed by the gain in precision for large $n$, provided an efficient uploading of probability distributions is found for the binary case. The detailed gate count comparing unary {\it vs.} binary circuits is shown in Fig. \ref{fig:gate_count}, where we have taken $\kappa=\frac{1}{2}$ and $l=\frac{\log_2 n}{2}$, where $l$ is the number of layers of the qGAN. In order to compare like with like, the comparison of scaling is made as follows: for a given number of bins $n$, which directly relates to precision, we take $n$ qubits in the unary representation and only $\log_2 n$ in the binary one. Note that the overhead in the binary representation makes the unary one more convenient for a number of bins less than $\sim 100$. This scaling behaviour confirms that the unary representation would be outperformed by the binary one for a large number of bins, provided the devices performed gates with no error. However, if quantum resources are limited, as in NISQ devices, circumstances are favorable for the unary representation. Moreover, in practice, the connectivity requirements further benefit the unary representation over the binary one.
\section{Simulations\label{sec:simulations}}
The circuits we present in this paper can be simulated using the tools provided by the Python package \textit{Qiskit} \cite{Qiskit}. We first consider the unary and binary algorithms in ideal conditions, that is, we verify the performance of the quantum circuits in the absence of any noise. Then, we test them both under increasing amounts of different sources of noise in order to assess which of the two procedures may be more advantageous for NISQ devices.
The simulations in this work were carried out using a simple yet descriptive model. In the case of single-qubit and two-qubit gate errors, we consider depolarizing noise. That is equivalent to transforming the state after each gate application by $\rho \rightarrow (1 - \epsilon)\, \rho + \epsilon \,\mathrm{Tr}(\rho)\, \frac{I}{d}$, with $d$ the dimension of the Hilbert space. Measurement errors are ten times more likely to happen $(10\epsilon)$, and they are symmetric, \textit{i.e.}, the probability of measuring an incorrect $\ket 0$ or $\ket 1$ is identical. Let us remark here that we have not included thermal relaxation or thermal dephasing. The reason is that, given the shallow depth of the simulated circuits, the execution times are far below current coherence times of qubits (the latter being $\sim 1000$ times the duration of a single-qubit gate), and thermal errors are therefore negligible. This description was adjusted to be comparable to state-of-the-art quantum devices \cite{supremacy2019}.
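The depolarizing map used in this model is straightforward to apply to a density matrix; the NumPy sketch below (names are ours) performs one step and illustrates that the trace is preserved while the purity decreases.

```python
import numpy as np

def depolarize(rho, eps):
    """One depolarizing step: rho -> (1-eps)*rho + eps*Tr(rho)*I/d."""
    d = rho.shape[0]
    return (1 - eps) * rho + eps * np.trace(rho) * np.eye(d) / d

# pure single-qubit state |+><+|
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
noisy = depolarize(plus, 0.1)
# trace stays 1, while the purity Tr(rho^2) drops below 1
```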
The accuracy of the expected payoff estimation is used to benchmark both algorithms against the aforementioned errors, as a function of the interpolation parameter $\epsilon$. The simulations were performed with 8 and 3 qubits in the unary and binary basis, respectively, using their ideal chip structures, see Sec. \ref{sec:un-vs-bin}. Notice that both cases correspond to 8 bins. Recall as well that the unary approach includes post-selection, which results in a clear improvement of the algorithm's performance.
The results presented in this section consider depolarizing and measurement errors together. A separate analysis of these errors can be found in App. \ref{sec:ap_more_results}. The code is publicly available in Ref. \cite{github}. It allows the user to perform simulations with different combinations of several errors, namely {\sl bitflip}, {\sl phaseflip}, {\sl bitphaseflip}, thermal and measurement errors, isolated or as part of custom error models.
\subsection{Amplitude distribution loading}
The log-normal probability distribution used for the simulations is generated in accordance with the Black-Scholes model discussed in Sec. \ref{sec:econ_model}. We work with a particular example, chosen such that the asset price at $T=0$ is $S_0=2$, the volatility of the asset is $\sigma=40\%$, the risk-free market rate is $r=5\%$, the maturity time is $T=0.1$ years, and the accorded strike price for the asset is $K=1.9$. The simulation of the asset price ranges up to three standard deviations from the mean value of the distribution.
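A classical discretization of this log-normal density onto $n$ bins, truncated around the mean, can be sketched as follows. Under Black--Scholes, $\ln S_T$ is normal with mean $\ln S_0 + (r-\sigma^2/2)T$ and standard deviation $\sigma\sqrt{T}$; the function name and the exact truncation convention are ours.

```python
import math

def lognormal_bins(s0, r, sigma, t, n_bins):
    """Discretize the Black-Scholes log-normal density of S_T onto n_bins
    equally spaced prices spanning +/- 3 standard deviations of S_T."""
    mu = math.log(s0) + (r - 0.5 * sigma ** 2) * t   # mean of ln(S_T)
    sd = sigma * math.sqrt(t)                        # std of ln(S_T)
    mean = math.exp(mu + 0.5 * sd ** 2)              # E[S_T] = S0 * e^{rT}
    std = mean * math.sqrt(math.exp(sd ** 2) - 1.0)
    lo, hi = max(mean - 3 * std, 1e-9), mean + 3 * std
    prices = [lo + (hi - lo) * i / (n_bins - 1) for i in range(n_bins)]
    pdf = [math.exp(-(math.log(s) - mu) ** 2 / (2 * sd ** 2))
           / (s * sd * math.sqrt(2 * math.pi)) for s in prices]
    norm = sum(pdf)
    return prices, [p / norm for p in pdf]
```

With the parameters of the example ($S_0=2$, $r=5\%$, $\sigma=40\%$, $T=0.1$), the discrete mean reproduces the risk-neutral forward price $S_0 e^{rT}$ up to truncation and binning error.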
The capability of quantum computers to approximate a given probability distribution in the presence of noise can be quantified by the Kullback-Leibler (KL) divergence \cite{KL-kullback1951}. This quantity measures the distance between two probability distributions, vanishing when they are indistinguishable. Fig. \ref{fig:KL} plots the KL divergence for the unary and binary approximations to the log-normal distribution. For the maximum allowed error, the KL divergence of the binary algorithm is one order of magnitude larger than that of the unary one.
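For completeness, the KL divergence between a target distribution $p$ and a measured one $q$ is $\sum_i p_i \ln(p_i/q_i)$. A minimal implementation (the floor on $q_i$, used to handle empirical zeros, is a choice of ours):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q), with a small floor on q to avoid log(0) on
    empirical bins that received no counts."""
    return sum(pi * math.log(pi / max(qi, eps))
               for pi, qi in zip(p, q) if pi > 0)
```

The divergence vanishes only when the two distributions coincide, which is the property used above to benchmark the noisy quantum approximations.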
\begin{figure}
\caption{Kullback-Leibler divergence between the target probability distribution and those achieved by the quantum algorithms, for equivalent 8 unary qubits and 3 binary qubits, and different levels of depolarizing error. Crosses stand for average results, and the shaded regions encompass the central $70\%$ of the instances. Each probability distribution is estimated using $100$ experiments with $10^4$ samples each. For noiseless computers, the KL divergence almost vanishes, but gets larger as noise is added. For the maximum allowed error, the KL divergence of the binary algorithm is one order of magnitude larger than that of the unary one.}
\label{fig:KL}
\end{figure}
\subsection{Expected payoff calculation}
In terms of payoff calculation, the two algorithms differ slightly. Classically, with a precision of $10^4$ bins, the estimated payoff for this financial option is $0.1595$, which we take as the exact value for comparison with the quantum strategies. Recall that in order to compare like with like, on the quantum side we have 8 unary qubits and 3 binary qubits, which both correspond to 8 bins.
In Fig. \ref{fig:bin_error}, we show the error of the expected payoff as a function of the number of bins in the probability distribution, for the classical computation. This precision depends on the binning and the position of the strike. Therefore, at a large enough number of bins, the results fall within a reasonable percentage of the actual value. At 100 bins, errors for the option price go well below 1$\%$. This shows that the unary algorithm can be implemented in the range where it uses fewer quantum gates than the binary algorithm, and still has low discretization errors coming from the binning. \begin{figure}
\caption{Percentage error from the exact value of the expected payoff, for the classical computation, as a function of the number of bins in the probability distribution. With only $\sim 50$ bins, errors for the option price below 0.5\% are already reached.}
\label{fig:bin_error_classical}
\label{fig:bin_error}
\end{figure}
\subsubsection*{Robustness against noise}
\begin{figure}
\caption{For equivalent 8-qubit unary and 3-qubit binary algorithms, percentage error in the payoff calculation for depolarizing and measurement errors, up to 0.5\% for single-qubit gates, 1\% for two-qubit gates and 5\% for read-out errors, consistent with state-of-the-art devices. The calculations were averaged over $100$ repetitions with $10^4$ runs each. The shaded regions encompass the central 70\% of the instances in each case. The unary algorithm is more robust against these errors.
}
\label{fig:depolarizing_m_error}
\end{figure}
We show in Fig. \ref{fig:depolarizing_m_error} the average of the relative error of the expected payoff computation when compared to the classical value. The $x$ axis of Fig. \ref{fig:depolarizing_m_error} depicts the single-qubit gate error percentage $\epsilon$, but two-qubit and read-out errors are also included following the model explained previously. The shaded region includes $70\%$ of the total instances used for the average. It can be seen that the unary algorithm, in general, has less deviation from the mean value than the binary algorithm.
\begin{figure*}
\caption{Left: Mean and uncertainty of the outcomes of the expected payoff, obtained with Eq. \eqref{eq:ap_exp_errors} and proper transformations. The dashed lines indicate the exact values. Unary and binary approaches are depicted, and convergence to the optimal values is obtained for both. Notice that these values are not the same since the outcomes of both algorithms are not equally related to the payoff. The shaded regions correspond to the statistical uncertainty. Right: Statistical uncertainties in the expected payoff. The dotted lines indicate the uncertainty given by classical sampling, while the dot-dashed lines represent the optimal uncertainty provided by Amplitude Estimation. Results of the simulations lie in between. In this figure we compare procedures with the same number of applications of the $\mathcal{A}$ or $\mathcal{A}^\dagger$ operators, for noiseless circuits.}
\label{fig:convergence_results}
\end{figure*}
\subsection{Amplitude Estimation}
This section comprises results obtained for the Amplitude Estimation algorithm, which can be divided into three parts. First, results are shown for the case of noiseless devices, converging to the expected value within errors due to binning and the Taylor approximation, the latter only in the binary case. All results were obtained for 8 bins, unless stated otherwise. Second, an analysis of the effect that quantum errors induce on the estimated value of $a$ has been performed, both for unary and binary approaches. Third, an analysis of the statistical uncertainty incurred in the estimation has also been included.
Only Amplitude Estimation without phase estimation can be performed on NISQ devices. In these simulations, we have used a procedure based on weighted averages that considers both mean values and uncertainties, for a given series of AE steps; see App. \ref{sec:ap_iae} for further details. In our results, every instance has been repeated $100$ times. The choice of $m_j$ is linear, $m_j = j$, with $j=\{0, 1, 2, \ldots\}$, in order to control how the performance evolves. The confidence level was adjusted to $1 - \alpha = 0.95$.
\begin{figure*}
\caption{Results for the errors in the expected payoff with respect to the optimal value, for the unary (Left) and binary (Right) representations, with $M$ iterations of Amplitude Estimation for both approaches. Depolarizing and read-out errors have been considered. Scatter points stand for average values, while the shaded region corresponds to the statistical uncertainties of the results. The behavior of the two approaches is very different. In the unary case, the expected payoff is resilient to errors. On the other hand, the binary approach returns acceptable results for $M=0$, while for $M\geq 1$ it rapidly saturates to the outcome of a random circuit.}
\label{fig:data}
\end{figure*}
Fig. \ref{fig:convergence_results} shows how Amplitude Estimation increases the precision of the measured outcome, converging to the actual value as more iterations of AE are used. The results of this simulation show that Amplitude Estimation reduces the uncertainty in the value of the expected payoff with every iteration.
The next step of the analysis is to assess the robustness of both the unary and binary representation against noisy circuits. The results for the deviations in the outcomes of $a$ obtained for noisy circuits are depicted in Fig. \ref{fig:data}, taking into account depolarizing and read-out errors together. The number of iterations has been limited to $M=4$. Two very different behaviors can be observed. In the case of the unary approach, the outcomes endure the noise of the device for $M=0,1,2,3,4$ and for low error rates, while entering into an erratic regime for large ones. For instance, at $M=2$, a result that is very close to the optimal value and with low uncertainty is obtained up to error parameter $\epsilon \sim 0.3\%$. In contradistinction, the binary approach loses its robustness for very small noise levels and $M \geq 1$. This can be attributed to the post-selection scheme and to the lower number of applied gates, that benefit the unary algorithm significantly. The decrease of the uncertainties is detailed in Fig. \ref{fig:errors} in the Appendix.
From these results we can infer that the Amplitude Estimation procedure, when performed on NISQ devices, provides a quantum advantage only for the unary representation and for limited noise levels in the device; specifically, similar to those present in available state-of-the-art machines \cite{supremacy2019}.
Simulations have been extended to several different numbers of bins in the unary representation. We show in Fig. \ref{fig:several_bins} how the deviation in the payoff from the exact value evolves when larger quantum systems are taken into account. In this example, the error parameter was adjusted to $\epsilon = 0.3\%$. We have considered depolarizing and measurement errors. Each experiment was repeated only 10 times to reduce computational costs. In Fig. \ref{fig:several_bins} it is possible to see that the deviation in the payoff increases as more bins are added. This corresponds to the expected trend since systems with more qubits require a larger number of gates, and thus errors are more likely to happen. Larger errors are observed for numbers of bins between 13 and 18. This behavior is expected to reach a saturation regime for large enough error rates. In the binary case, since the circuit is prone to large errors, the output becomes indistinguishable from one of a random circuit. However, this regime has not been reached yet in the unary representation. Concerning the sampling uncertainty, it grows as the number of bins increases. This reflects the fact that the more bins there are, the more errors may occur, and thus more instances are to be discarded via the post-selection mechanism, which translates into a slower convergence rate. The decrease of the uncertainties is detailed in Fig. \ref{fig:several_bins_2} in the Appendix.
\begin{figure}
\caption{Results for the error of the expected payoff for an increasing number of bins and up to $M$ iterations of Amplitude Estimation in the unary approach, considering depolarizing and read-out errors together. Scatter points represent the mean values obtained in the experiment, while shaded areas include 70\% of the instances. }
\label{fig:several_bins}
\end{figure}
\section{Conclusions} \label{sec:conclusions}
Finance stands as one of the fields where quantum computation may be of relevance. We have presented here a quantum algorithm that allows for the pricing of European options, whose defining trait is that it works in the unary representation of the asset value.
We have illustrated our algorithm in the particular case of a single European option, whose maturity price for the underlying asset is obtained as the solution of a Black-Scholes equation and whose expected return depends on a prefixed strike value. The global structure of our algorithm is divided into three steps: a) generation of the amplitude distribution of the asset value at maturity, b) evaluation of the expected return given the strike value, and c) Amplitude Estimation. Our algorithm relies on several new ideas to make this strategy concrete.
The very first step is to define the level of precision the algorithm should aim at. This precision is related to $n$, the number of qubits in the circuit. The more qubits, the more resolution we can get.
The next step corresponds to building the amplitude distribution of the asset at maturity. We have proposed to handle this problem using a circuit of depth $n/2$ that operates as a distributor of probability amplitude. Given a classical description of the probability distribution, this step of the algorithm substitutes the classical Monte Carlo generation of probabilities.
The computation of the expected return is particularly simple in the unary representation. It only needs a series of $n$ conditional two-body gates from the original qubit register to an ancilla. Iterative Amplitude Estimation then follows.
The use of the unary representation may seem at odds with performing precise computations. This is not so, as the precision of the expected return is an average over a sampled probability distribution, which does not need too high a resolution. We verify this statement in detail and find that fewer than 100 qubits are enough for competitive computations.
The unary algorithm is intended for the intermediate stage between current quantum computers and fault-tolerant devices. The algorithm is designed to find applicability with a relatively low, while still useful, number of bins. Thus, we have designed a circuit which is simple in terms of logic operations and requires much less sophisticated connectivity than its binary counterpart.
The unary representation definitely offers relevant advantages over the binary one. First, it allows for a simple distribution of probability amplitudes. Second, it provides a trivial computation of expected returns. Third, a valid run in the unary representation triggers exactly one qubit of the register, while the expected return is read from the ancilla. This offers a consistency check: if no qubit, or more than one, is triggered, the run is rejected. The ability to post-select faithful runs mitigates errors and increases the performance of the quantum algorithm. In addition, Amplitude Estimation may be performed successfully only in the unary basis, considering the error levels of NISQ devices, since the procedure is more resilient to errors than its binary counterpart.
There are a number of further improvements that may be included in the algorithm. It is possible, for instance, to increase precision by taking the qubits to represent non equispaced elements in the probability distribution. It is enough to populate more densely the subtle regions of the sample distribution to gain some precision. Ideas to include multi-asset computations are also available \cite{underconstruction}.
Finally, let us mention that our unary option pricing algorithm could be tested experimentally on recently presented quantum computers.
\section*{Code availability} The code is available in \href{https://github.com/UB-Quantic/quantum-finance}{Github} \cite{github}.
\onecolumngrid
\appendix \section{Classical Option Pricing\label{sec:ap_econ_model}}
The evolution of asset prices in financial markets is usually computed using a model established by F. Black and M. Scholes in Ref. \cite{blackscholes-black1973}. This evolution is governed by two properties of the market, the interest rate and the volatility, which are incorporated into a stochastic differential equation. The equations controlling a set of assets are usually solved using Monte Carlo methods.
\subsection{The Black-Scholes model}\label{subsec:Black-scholes}
The Black-Scholes model for the evolution of an asset is based on the following stochastic differential equation \cite{blackscholes-black1973} \begin{equation}\label{eq:ap_BSM}
{\rm d}S_T = S_T\, r\, {\rm d}T + S_T\, \sigma\, {\rm d}W_T, \end{equation} where $r$ is the interest rate, $\sigma$ is the volatility and $W_T$ describes a Brownian process. Let us recall that a Brownian process $W_T$ is a continuous stochastic evolution starting at $W_0=0$ and made of independent gaussian increments. To be specific, let $\mathcal{N}(\mu, \sigma_s)$ be a normal distribution with mean $\mu$ and standard deviation $\sigma_s$. Then, the increment between two times of the Brownian process is $W_T - W_S \sim \mathcal{N}(0, \sqrt{T - S})$, for $T > S$.
The above differential equation can be solved analytically up to first order using Ito's lemma \cite{ItoLemma-1944}, whereby $W_T$ is treated as an independent variable with the property that $({\rm d}W_T)^2$ is of the order of ${\rm d}T$. Thus, the approximated derivative ${\rm d}S_T$ can be written as \begin{equation}
{\rm d}S_T = \left( \frac{\partial S_T}{\partial T} + \frac{1}{2}\frac{\partial^2 S_T}{\partial W_T^2}\right) {\rm d}T + \frac{\partial S_T}{\partial W_T} {\rm d}W_T. \end{equation} By direct comparison to Eq. \eqref{eq:ap_BSM}, it is straightforward to see that \begin{eqnarray}
\frac{\partial S_T}{\partial W_T} = S_T\, \sigma, \\
\frac{\partial S_T}{\partial T} + \frac{1}{2}\frac{\partial^2 S_T}{\partial W_T^2} = S_T\, r . \end{eqnarray} Using the initial condition $S_0$ at $T=0$, and the Ansatz \begin{equation}
S_T = S_0 \exp{(f(T) + g(W_T))}, \end{equation} the solution for the asset price turns out to be \begin{equation}
S_T = S_0 e^{(r - \frac{\sigma^2}{2}) T} e^{\sigma W_T}\;\sim\; S_0 e^{\mathcal{N}\left(\left(r - \frac{\sigma^2}{2}\right) T, \sigma \sqrt{T}\right)}. \end{equation} This final result corresponds to a log-normal distribution.
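As a sanity check of the closed-form solution, one can sample terminal prices directly from this log-normal law. The following Python sketch (parameters are illustrative, not taken from the paper) verifies the risk-neutral drift $\mathbb{E}[S_T] = S_0 e^{rT}$ by Monte Carlo.

```python
import math
import random

def sample_terminal_price(s0, r, sigma, t, rng):
    """Draw S_T = S_0 exp((r - sigma^2/2) T + sigma W_T), with W_T ~ N(0, sqrt(T))."""
    w_t = rng.gauss(0.0, math.sqrt(t))
    return s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * w_t)

rng = random.Random(42)
s0, r, sigma, t = 100.0, 0.05, 0.2, 1.0
samples = [sample_terminal_price(s0, r, sigma, t, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
# Under the risk-neutral measure, E[S_T] = S_0 * e^{rT}
print(mean, s0 * math.exp(r * t))
```

This classical sampler is exactly what the amplitude distributor of the main text replaces on the quantum side.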
\subsection{European Option}
An option is a contract where in its call/put form, the option holder can buy/sell an asset before a specific date or decline such a right. As a particular case, European options can be exercised only on the specified future date, and only depend on the price of the asset at that time. The price that will be paid for the asset is called \emph{exercise} price or \emph{strike}. The day on which the option can be exercised is called \emph{maturity date}.
A European option payoff is defined as \begin{equation}\label{eq:Payoff}
f(S_T, K) = \max(0, S_T - K), \end{equation} where $K$ is the strike price and $T$ is the maturity date. An analytical solution exists for the payoff of this kind of options.
The discounted expected payoff is given by \begin{equation}
C(S_T, K) = e^{-r T}\, \mathbb{E}\left[\max(0, S_T - K)\right] = \int_{-d_2}^{\infty} e^{-r T}\left(S_0\, e^{(r - \frac{\sigma^2}{2}) T + \sigma \sqrt{T}\, x} - K\right) \frac{1}{\sqrt{2\pi}} e^{\frac{-x^2}{2}}\, dx, \end{equation} where $x$ is a standard normal variable and the region $S_T \geq K$ corresponds to $x \geq -d_2$, yielding the analytical solution \begin{equation}\label{eq:exp_payoff_analytical}
C(S_T, K) = S_0 {\rm CDF}_{\mathcal N}(d_1) - K e^{-r T}{\rm CDF}_{\mathcal N}(d_2), \end{equation} with \begin{center} \begin{eqnarray}
d_1 = \frac{1}{\sigma \sqrt{T}}\left( \log \frac{S_0}{K} + \left( r + \frac{\sigma^2}{2}\right) T \right) \\
d_2 = d_1 - \sigma \sqrt{T} \\
{\rm CDF}_{\mathcal N}(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{\frac{-u^2}{2}}du . \end{eqnarray} \end{center}
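The closed-form price of Eq. \eqref{eq:exp_payoff_analytical} is straightforward to evaluate classically; a minimal Python implementation (using the error function for ${\rm CDF}_{\mathcal N}$) reads:

```python
import math

def norm_cdf(x):
    """Standard normal CDF, CDF_N(x), via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def european_call_price(s0, k, r, sigma, t):
    """Analytical expected payoff C(S_T, K) of the formula above."""
    d1 = (math.log(s0 / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s0 * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Textbook parameters: S0 = K = 100, r = 5%, sigma = 20%, T = 1 year
price = european_call_price(100.0, 100.0, 0.05, 0.2, 1.0)
print(round(price, 4))  # approximately 10.45
```

This exact value is the benchmark against which the quantum estimates of the main text are compared.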
\section{Details for the binary algorithm \label{sec:ap_binary}}
For the sake of completeness, we now present a binary algorithm for option pricing, as introduced in Ref. \cite{qfinance-stamatopoulos2019}. The binary algorithm is also divided into three parts, namely (a) amplitude distribution loading, (b) expected payoff computation and (c) Amplitude Estimation. The main difference is that all computational-basis states are used to encode the discretized probability distribution of the asset price at maturity. This implies that steps (a) and (b) require completely different quantum circuitry. We now proceed to describe these steps for the binary case.
\subsection{Amplitude distribution loading}
Uploading probability distributions onto quantum states is a very general problem that was considered in \cite{prob_distributions-grover2002}. In this work, it was claimed that any probability distribution that is efficiently integrable on a classical computer, \textit{e.g.} log-concave distributions, can be loaded efficiently onto a quantum state. However, several authors \cite{finite-montanaro2016, qinspired-garcia2019} have pointed out that the method proposed there requires pre-calculating a number of integrals that grows exponentially with the number of qubits. In the case of option pricing, a reasonable precision requires a moderate number of qubits in the unary representation and far fewer in the binary representation. But, as a matter of fact, the reduction to a logarithmic number of qubits in the binary representation is, at least partially, compensated by the effort needed to prepare the probability distribution. That is, in practice, both unary and binary representations require similar effort to pre-process the probability distribution before encoding it in the quantum register.
An alternative method to encode a probability distribution in a quantum state is the use of so-called quantum Generative Adversarial Networks (qGANs) \cite{qGAN-lloyd2018, qGAN-dallaire2018, qGAN-zoufal2019}. In this scheme, two agents, a generator and a discriminator, compete against each other. The generator learns to produce data that mimics the underlying probability distribution, trying to deceive the discriminator into believing that the new data is faithful. On the other hand, the discriminator has to learn how to tell apart the real data from the data produced by the generator. This quantum adversarial game has a unique endpoint: Nash equilibrium is reached when the generator learns to produce states that deliver probability outcomes that are indistinguishable from the desired probability distribution, and the discriminator cannot tell them apart. In order to upload probability distributions onto quantum states using qGANs, a parametrized quantum circuit may play the role of the generator, whereas the discriminator may be a classical neural network.
At present, there is still a lack of precise understanding on how to efficiently upload probability distributions on a quantum computer in binary representation, which makes rigorous complexity analysis in terms of the number of gates difficult.
\subsection{Payoff computation}\label{subsec:ap_payoff_binary}
A useful feature of the unary algorithm is that, given a strike $K$, one can directly know which qubits will not contribute to the expected return of the option, and therefore adjust the quantum circuit accordingly. This is only possible since the unary representation maps directly to the asset price. In a binary-encoded setting one needs to compute explicitly which basis elements make a non-zero contribution to the expected payoff. Hence the need for a quantum comparator, $\mathcal{C}$, that singles out the values of $S_T$ that are smaller than the strike price $K$. This comparator requires the use of $n+1$ ancillary qubits, one of which is retained after the computation. Its action is given by \begin{equation}
\label{eq:ap_C} |\psi{\rangle}|0{\rangle}\xrightarrow{\quad \mathcal{C} \quad} \sum_{S_i<K} \sqrt{p_i}\,|e_i{\rangle}|0{\rangle} + \sum_{S_i\geq K} \sqrt{p_i}\,|e_i{\rangle}|1{\rangle}, \end{equation}
where $\{|e_i{\rangle}\}$ is the computational basis and $\{S_i\}$ are the asset values at maturity associated to computational-basis vectors. The quantum circuit implementing Eq. \eqref{eq:ap_C} can be constructed using cNOTs, Toffoli gates and OR gates, see Fig. \ref{fig:comparator}.
In order to understand the way this circuit works, let us consider the case where the discretization of the interval $[S_{min}, S_{max}]$ is uniform. In this case, the relation between $\{S_i\}$ and $\{|e_i{\rangle}\}$ is \begin{equation} S_i = S_{min}+ \frac{e_i\,(S_{max}-S_{min})}{2^n} .\end{equation} This implies that
\begin{equation} \label{eq:k'} S_i > K \quad \Leftrightarrow \quad e_i > \frac{2^n\,(K-S_{min})}{S_{max}-S_{min}} \equiv K'.\end{equation} The idea goes as follows. First, we classically compute the two's complement of $K'$, \textit{i.e.}, $2^n-K'$, and store it in binary format in a classical array of $n$ bits, $t[j]$ with $j\in[0,1,\dots,n-1]$. Then, using $n$ ancillas, $|a_0\cdots a_{n-1}{\rangle}$, initialized to $|0\cdots0{\rangle}$, we compute the carry bits of the bitwise addition between $t$ and $\{e_i\}$, and store them in superposition into $|a_0\cdots a_{n-1}{\rangle}$. If $e_i>K'$, then necessarily $a_{n-1}=1$.
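The carry chain described above can be checked classically. The following Python sketch (function and variable names are ours) reproduces the carry logic bit by bit and verifies that the last carry flags exactly the condition $e_i \geq K'$ for integer $K'$.

```python
def carry_compare(e, k_prime, n):
    """Classically reproduce the comparator's carry chain.

    t holds the two's complement of K'; the final carry a[n-1]
    is 1 exactly when e >= K' (bit 0 is the least significant).
    """
    t = [((2 ** n - k_prime) >> j) & 1 for j in range(n)]
    e_bits = [(e >> j) & 1 for j in range(n)]
    a = [0] * n
    a[0] = t[0] & e_bits[0]                # CNOT-like start: carry iff t[0] and e[0]
    for j in range(1, n):
        if t[j] == 0:
            a[j] = a[j - 1] & e_bits[j]    # Toffoli: previous carry AND j-th bit
        else:
            a[j] = a[j - 1] | e_bits[j]    # OR gate: previous carry OR j-th bit
    return a[n - 1]

n = 4
for k_prime in range(1, 2 ** n):
    for e in range(2 ** n):
        assert carry_compare(e, k_prime, n) == (1 if e >= k_prime else 0)
print("comparator logic verified for n =", n)
```

The quantum comparator performs the same carry propagation in superposition, with the ancillas $|a_0\cdots a_{n-1}{\rangle}$ holding the intermediate carries.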
\begin{figure}
\caption{Quantum comparator, $\mathcal{C}$. The OR gates appearing are three-qubit gates acting on non-adjacent qubits.}
\label{fig:comparator}
\end{figure}
The exact circuit needed for a given strike will depend upon the values of the bits in $t$. If $t[j]=0$, then there will be a carry bit at position $j$ if and only if there is a carry bit at position $j-1$ {\sl and} the $j$-th bit of $e_i$ is 1. This is computed with a Toffoli gate. On the other hand, if $t[j]=1$, there will be a carry bit at position $j$ if and only if there is a carry bit at position $j-1$ {\sl or} the $j$-th bit of $e_i$ is 1. This is computed with an OR gate, shown in Fig. \ref{fig:qOR}. Finally, there will be a carry bit at $a_0$ if and only if $t[0]=1$ and the first bit of $e_i$ is 1. This is achieved with a simple CNOT gate. As explained above, if $e_i\geq K'$, then $a_{n-1}$ must be equal to 1. Hence, applying a CNOT gate controlled by the qubit $\ket{a_{n-1}}$ and targeted at the ancilla, the desired state in Eq. \eqref{eq:ap_C} is obtained.
\begin{figure}
\caption{Decomposition of the OR gate in terms of single-qubit and Toffoli gates.}
\label{fig:qOR}
\end{figure}
Once $\mathcal{C}$ has been applied, the next step is to encode the expected payoff of the option into the amplitudes of a new ancilla. The final state to be created should be
\begin{equation} \label{AE} \sum_{S_i<K} \sqrt{p_i}\,|e_i{\rangle}|0{\rangle}\left[\cos(g_0)|0{\rangle}+\sin(g_0)|1{\rangle}\right] + \sum_{S_i\geq K} \sqrt{p_i}\,|e_i{\rangle}|1{\rangle} \left[\cos(g_0+g(i))|0{\rangle}+\sin(g_0+g(i))|1{\rangle}\right],\end{equation} where
\begin{equation} g_0=\frac{\pi}{4}-c \qquad,\qquad g(i)=\frac{2c\,(e_i-K')}{e_{max}-K'}\,, \end{equation} with $c$ a constant such that $c\in [0,1]$. Thus, the probability of measuring the second ancilla in the $|1{\rangle}$ state in \eqref{AE} is \begin{equation} {\rm{Prob}}(1)=\sum_{S_i<K} p_i \sin^2(g_0)+\sum_{S_i\geq K} p_i \sin^2(g_0+g(i)) \,.\end{equation} Using the approximation \begin{equation} \label{approx} \sin^2\left(cf(i)+\frac{\pi}{4}\right)=\frac{1}{2}+cf(i)+O(c^3f^3(i))\,,\end{equation} to first order, which follows from Taylor-expanding $\sin^2(f(x)+\frac{\pi}{4})$ around $f(x)=0$, the probability becomes \begin{equation} \label{P1} {\rm{Prob}}(1) \simeq \sum_{S_i<K} p_i \left(\frac{1}{2}-c\right)+\sum_{S_i\geq K} p_i \left(\frac{1}{2} +c\left[\frac{2\,(e_i-K')}{e_{max}-K'} -1\right]\right) = \frac{1}{2}-c+\frac{2c}{e_{max}-K'} \sum_{S_i\geq K}\, p_i \,(e_i-K') \,.\end{equation} It is important to note that the approximation made in Eq. \eqref{P1} is valid since $cf(i)= c\left[\frac{2\,(e_i-K')}{e_{max}-K'} -1\right] \in[-c,c]$. Reversing the change from Eq. \eqref{eq:k'}, namely \begin{equation}
\sum_{S_i\geq K} p_i \,(S_i-K) = \frac{S_{max}-S_{min}}{2^n}\sum_{S_i\geq K}\, p_i \,(e_i-K'), \end{equation} the expected payoff function, \textit{i.e.}, $\sum_{S_i\geq K} p_i \,(S_i-K)$, can be recovered from the probability of measuring 1 in the ancilla, Eq. \eqref{P1}, since $c,K,S_{max}$ are all known. \begin{figure}
\caption{Encoding of the expected return of a European option into the amplitudes of an ancilla qubit in binary representation.}
\label{fig:binary_encoding}
\end{figure} \begin{figure}
\caption{\small{Decomposition of a cc$R_y$ gate in terms of cNOTs and c$R_y$ gates.}}
\label{fig:ccRy}
\end{figure}
The quantum circuit that produces the final state \eqref{AE} from Eq. \eqref{eq:ap_C} is composed of $n$ cc$R_y$ gates, plus one $R_y$ and one c$R_y$ gate, shown in Fig. \ref{fig:binary_encoding}. The decomposition of a cc$R_y$ gate in terms of CNOTs and c$R_y$ gates is shown in Fig. \ref{fig:ccRy}.
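A classical numeric check of this construction (with an illustrative uniform distribution; the values of $n$, $K'$ and $c$ are ours) confirms that inverting the linearized relation recovers the expected payoff up to the $\mathcal{O}(c^2)$ corrections discussed above:

```python
import math

# Illustrative discrete distribution over basis labels e_i = 0..7 (n = 3 qubits)
n = 3
e_vals = list(range(2 ** n))
p = [1.0 / len(e_vals)] * len(e_vals)          # uniform, for simplicity
k_prime, e_max, c = 3.0, float(2 ** n - 1), 0.01

g0 = math.pi / 4 - c
def g(e):
    return 2 * c * (e - k_prime) / (e_max - k_prime)

# Exact probability of measuring |1> on the payoff ancilla
prob1 = sum(pi * math.sin(g0 + (g(e) if e >= k_prime else 0.0)) ** 2
            for pi, e in zip(p, e_vals))

# Invert the linearized relation Prob(1) = 1/2 - c + 2c * payoff / (e_max - K')
recovered = (prob1 - 0.5 + c) * (e_max - k_prime) / (2 * c)
exact = sum(pi * (e - k_prime) for pi, e in zip(p, e_vals) if e >= k_prime)
print(recovered, exact)
```

For small $c$ the recovered and exact payoffs agree to high accuracy, at the cost of a signal in ${\rm Prob}(1)$ that shrinks linearly with $c$.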
\subsection{Amplitude Estimation}
The oracle operator $\mathcal{S}_{\psi_0}$ acts on the binary algorithm in the same manner as its unary analogue, as defined in Eq. \eqref{eq:oracle}.
The case of the operator $\mathcal{S}_0$ is slightly different. The target state to flip is the all-zeros state $\ket{0\cdots 0}$. Thus, a multi-controlled Toffoli gate is required. This multi-controlled gate constitutes the most computationally costly piece of the circuit. It can be decomposed into simpler gates \cite{gates-barenco1995}.
In order to reduce the complexity of the circuit as much as possible, it is necessary to choose an optimal representation of this gate. The most efficient decomposition consists of a chain of standard Toffoli gates that store their outcomes in ancilla qubits. The number of ancillas required is $c - 2$, where $c$ is the number of control qubits. Depending on whether these ancillas are initialized to $0$ or not, the number of Toffoli gates differs. In the case of the binary algorithm, a sufficient number of ancillas is available from the comparator. In addition, the short version of the Toffoli gate can be used. The reason is as follows. The operator $\mathcal{S}_0$ is applied only after the sequence $\mathcal{A}^\dagger \mathcal{S}_{\psi_0} \mathcal{A}$, which leaves the ancillas unchanged: the oracle only imprints a phase and does not add any entanglement, making all operations involving the ancillas effectively classical. In other words, there is no need to add any circuit piece that undoes the auxiliary computations stored in the ancillary qubits, because the structure of the circuit itself accomplishes this goal.
The full circuit for the binary algorithm, including Amplitude Estimation, is depicted in Fig. \ref{fig:full_binary}.
\begin{figure*}
\caption{Full circuit for the option pricing algorithm in the binary representation. The gate $\mathcal{D}$ is the probability distributor, $\mathcal{C}$ is the comparator, and $\mathcal{R}$ the rotation step; $\mathcal{C}$ and $\mathcal{R}$ together represent the computation of the payoff. After applying the algorithm, the oracle $\mathcal{S}_{\psi_0}$, the reverse algorithm and $\mathcal{S}_0$ follow, and the last step is applying the algorithm again. This block $\mathcal{Q}$ is repeated to apply Amplitude Estimation. }
\label{fig:full_binary}
\end{figure*}
\section{Details for the Amplitude Distributor in the unary basis \label{sec:ap_amplitude_distribution}}
Let us consider Fig. \ref{fig:ProbUploading}. In the unary basis, every qubit represents the basis element in which that qubit is $\ket{1}$. Thus, the coefficient of every basis element depends on as many angles as partial-SWAP gates are needed to reach its corresponding qubit. The central qubits of the circuit depend only on 2 angles, and the number of dependencies increases one by one as we move towards the outer part of the circuit. The very last two qubits depend on the same angles. As we move away from the middle, each qubit inherits the same angle dependency as the previous one, plus an additional rotation. Starting from the two edges, the coefficients verify the following ratios \begin{eqnarray}
\left\vert\frac{\psi_{0}}{\psi_{1}}\right\vert ^2 & = & \tan^2(\theta_{1} / 2) \\
\left\vert\frac{\psi_{n-1}}{\psi_{n-2}}\right\vert ^2 & = & \tan^2(\theta_{n - 1} / 2). \end{eqnarray}
Then we identify $|\psi_i|^2 = p_i$, where $\lbrace p_i\rbrace$ is the target probability distribution of the asset prices at maturity. The next step corresponds to considering qubits $1$ and $2$, as well as $n-3$ and $n-2$. The relations for their coefficients must obey \begin{eqnarray}
\left\vert\frac{\psi_{i}}{\psi_{i+1}}\right\vert ^2 & = & \cos^2(\theta_i/2)\tan^2(\theta_{i+1} / 2) \\
\left\vert\frac{\psi_{n-1-i}}{\psi_{n-2-i}}\right\vert ^2 & = & \cos^2(\theta_{n-i}/2)\tan^2(\theta_{n - 1 - i} / 2). \end{eqnarray} Then, it is straightforward to back-substitute parameters step by step until we arrive to the central qubits. This procedure fixes all the angles for the partial-SWAP gates used in the amplitude distributor.
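To illustrate the back-substitution idea, the following Python sketch solves the angles for a simplified one-directional chain of partial-SWAPs (an assumption on our part: the actual distributor of Fig. \ref{fig:ProbUploading} spreads from the middle with depth $n/2$, so the chain layout and function names here are only illustrative) and checks, within the unary subspace, that the squared amplitudes reproduce the target distribution.

```python
import math

def chain_angles(p):
    """Angles for a one-directional chain of partial-SWAPs that leaves
    amplitude sqrt(p_k) on site k and passes the remainder onward."""
    thetas, carry = [], 1.0
    for pk in p[:-1]:
        ratio = min(1.0, math.sqrt(pk) / carry)   # guard against float round-off
        theta = 2.0 * math.acos(ratio)
        thetas.append(theta)
        carry *= math.sin(theta / 2.0)
    return thetas

def simulate(p):
    """Apply the 2x2 partial-SWAP blocks within the unary subspace."""
    n = len(p)
    amp = [0.0] * n
    amp[0] = 1.0                                  # all amplitude starts on site 0
    for i, th in enumerate(chain_angles(p)):
        c, s = math.cos(th / 2.0), math.sin(th / 2.0)
        amp[i], amp[i + 1] = c * amp[i] + s * amp[i + 1], -s * amp[i] + c * amp[i + 1]
    return amp

p = [0.1, 0.2, 0.3, 0.4]
amp = simulate(p)
print([round(a * a, 6) for a in amp])
```

The middle-out distributor of the main text follows the same logic, but halves the circuit depth by propagating amplitude in both directions simultaneously.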
The exact algorithm to be followed can also be found in the provided code \cite{github}.
Once the exact solution for the angles is inserted into the circuit depicted in Fig. \ref{fig:ProbUploading}, the amplitude distributor algorithm is completed. The quantum register then reads \begin{equation}
\ket{\Psi}=\sum^{n-1}_{i=0}\sqrt{p_i}\ket{i}. \end{equation} Note that describing a probability distribution with squared amplitudes of a quantum state allows for a free phase in every coefficient of the quantum circuit. For simplicity, we will set to zero all these relative phases by only operating with real valued partial-SWAP gates.
Let us turn our attention to the gates needed in the above circuit. Sharing probability amplitude between neighboring qubits can be achieved by introducing a two-qubit gate based on the SWAP and $R_y$ operations. This variant of the SWAP gate performs a partial SWAP operation, where only a piece of the amplitude is transferred from one qubit to another. This operation preserves unarity, that is, the state remains a superposition of elements of the unary basis. This partial-SWAP can be decomposed using CNOT as the basic entangling gate as \begin{equation}\label{eq:ap_SWAPRy}
\includegraphics[width=0.175\linewidth, valign=c]{SWAPRy2.png} \quad=
\left(\begin{array}{cccc}
1&0&0&0 \\ 0&\cos{\theta/2}&\sin{\theta/2}&0 \\0&-\sin{\theta/2}&\cos{\theta/2}&0 \\0&0&0&1
\end{array}\right)=\quad
\quad\includegraphics[width=0.2\linewidth, valign=c]{SWAPRy.png}, \end{equation} where the usual CNOT gate in the center of the conventional SWAP gate has been substituted by a controlled $y$-rotation, henceforth referred to as c$R_y$ gate. In turn, the c$R_y$ operation can be reworked as a combination of single-qubit gates and CNOT gates \cite{gates-barenco1995}: \begin{equation}\label{CRy}
\includegraphics[width=0.15\linewidth, valign=c]{CRy.png} \quad=
\quad\includegraphics[width=0.325\linewidth, valign=c]{CRyDef.png}. \end{equation} This decomposition will come into play for the expected payoff calculation algorithm as well, albeit with angle $\phi$ in the payoff circuit.
For the purposes of this algorithm, both the CNOT and partial-iSWAP basis gates are equivalent, but compiling directly to partial-iSWAPs can reduce the total number of gates required for the amplitude distributor. Partial-iSWAP gates can also be used to decompose CNOT gates; more explicitly, a CNOT gate can be reproduced with two iSWAP gates and five single-qubit gates.
\begin{algorithm}[t!] \caption{\label{alg:gaussian} Algorithm for Amplitude Estimation based on gaussian distribution of the measurements. } \DontPrintSemicolon \SetKwFunction{FMain}{GaussianAmplitudeEstimation} \SetKwFunction{FArcSin}{MultipleValuesArcsin} \SetKwProg{Fn}{Function}{:} \Fn{\FMain{$N_{\rm shots}$, $J$, $m_j$, $\alpha$}} {\;
$z \leftarrow {\rm CDF}_\mathcal{N}^{-1}(1 - \alpha / 2)$ \;
Ensure $m_0 = 0 $ \;
$a \leftarrow |\bra{1}\mathcal{A}\ket 0 |^2$ with $N_{\rm shots}$ samples\;
$\theta_a^{(0)} \leftarrow \arcsin{\sqrt{a}}$\;
$\Delta \theta_a^{(0)} \leftarrow \frac{z}{2 \sqrt{N_{\rm shots}}}$\;
\For{$j \leftarrow 1 \KwTo \; J$}
{
$a \leftarrow |\bra{1}Q^{m_j}A\ket 0 |^2$ with $N_{\rm shots}$ samples\;
$\theta_{\rm array} \leftarrow $ \FArcSin{$a, m_{j-1}$} \;
$\theta_a \leftarrow$ the element of $\theta_{\rm array}$ closest to $\theta_a^{(j - 1)}$\;
$\Delta \theta_a \leftarrow \frac{z}{2 (2 m_j + 1)\sqrt{N_{\rm shots}}}$\;
$\theta_a^{(j)} \leftarrow \frac{\frac{\theta_a }{\Delta \theta_a^2} + \frac{\theta_a^{(j-1)} }{(\Delta \theta_a^{(j-1)})^2}}{\frac{1}{\Delta \theta_a^2} + \frac{1}{(\Delta \theta_a^{(j-1)})^2}}$\;
$\Delta \theta_a^{(j)} \leftarrow \left(\frac{1}{\Delta \theta_a^2} + \frac{1}{(\Delta \theta_a^{(j-1)})^2}\right)^{-1/2}$\;
$[a_j, \Delta a_j] \leftarrow [\sin^2\theta_a^{(j)}, \sin(2 \theta_a^{(j)}) \Delta \theta_a^{(j)}]$
}
\KwRet{$[a_j, \Delta a_j]$} } \end{algorithm}
\begin{algorithm}[t!] \caption{\label{alg:gaussian2} Extracting multiple values for the $\arcsin$, auxiliary function needed in Alg. \ref{alg:gaussian}.} \DontPrintSemicolon \SetKwFunction{FArcSin}{MultipleValuesArcsin} \SetKwProg{Fn}{Function}{:} \Fn{\FArcSin{$a, m$}} {\;
$\theta_0 \leftarrow \arcsin{\sqrt{a}}$ \tcp{The value of $\theta_0$ is bounded between $0$ and $\pi / 2$}
\tcp{The $\arcsin$ function has several solutions}
$\theta_{\rm array} \leftarrow [0] * (2 m + 1)$\;
$\theta_{\rm array}[0] \leftarrow \theta_0$\;
\For{$k \leftarrow 1 \KwTo\, m$}
{
$\theta_{\rm array}[2k - 1]\leftarrow k \pi - \theta_0$\;
$\theta_{\rm array}[2k]\leftarrow k \pi + \theta_0$
}
$\theta_{\rm array} \leftarrow \theta_{\rm array} / (2 m + 1)$\;
\KwRet $\theta_{\rm array}$} \end{algorithm}
\section{Selection of results for the Iterative Amplitude Estimation\label{sec:ap_iae}}
We present here a method for obtaining the most probable value of $a$ in an iterative fashion, following methods similar to other Amplitude Estimation algorithms that avoid Quantum Phase Estimation. We base this procedure on the theory of confidence intervals for a binomial distribution, assuming normal distributions \cite{binomial-wallis2013}.
Let us consider a binomial distribution with probability $a$, \textit{i.e.}, for every sample the chance of obtaining $1$ is $a$, while the chance of obtaining $0$ is $1 - a$. Then, if an estimate $\hat a$ of $a$ is obtained using $N$ samples, the true value of $a$ lies in the interval \begin{equation}
a = \hat a \pm \frac{{\rm CDF}_\mathcal{N}^{-1}(1 - \alpha/2) \sqrt{\hat a(1-\hat a)}}{\sqrt{N}}, \end{equation} with confidence $(1 - \alpha)$.
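A minimal Python sketch of this interval (using the standard library's NormalDist for ${\rm CDF}_\mathcal{N}^{-1}$; the parameter values are illustrative), together with an empirical coverage check:

```python
import math
import random
from statistics import NormalDist

def binomial_ci(a_hat, n_samples, alpha):
    """Normal-approximation confidence interval for a binomial rate."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half = z * math.sqrt(a_hat * (1 - a_hat)) / math.sqrt(n_samples)
    return a_hat - half, a_hat + half

# Empirical coverage: the true a should fall inside the (1 - alpha)
# interval in roughly that fraction of independent repetitions.
rng = random.Random(7)
a_true, n_samples, alpha = 0.3, 2000, 0.05
reps, hits = 1000, 0
for _ in range(reps):
    a_hat = sum(rng.random() < a_true for _ in range(n_samples)) / n_samples
    lo, hi = binomial_ci(a_hat, n_samples, alpha)
    hits += lo <= a_true <= hi
print(hits / reps)  # close to 0.95
```

The same interval, propagated to $\theta$ space, yields the per-iteration uncertainties used below.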
From this result we can construct an iterative algorithm returning the optimal value of $a$ using quantum Amplitude Estimation. Let us take a set of $m_j$ for $j=0,1,2,3,\ldots$ For every $m_j$ the probability of obtaining $\ket 1$ is $\sin^2((2 m_j + 1) \theta_a)$, where $a = \sin^2(\theta_a)$. Let us move to the $\theta$ space. For a given $m$, the estimated value and uncertainty of $\theta$ are \begin{equation}\label{eq:weighted}
\theta_a = \frac{1}{2m+1}\arcsin(\sqrt{a}) \qquad \Delta \theta_a = \frac{1}{2m + 1}\frac{{\rm CDF}_\mathcal{N}^{-1}(1 - \alpha/2)}{2 \sqrt{N}}. \end{equation} It is important to understand two main properties of Eq. \eqref{eq:weighted}. First, there are $2m + 1$ possible values for $\theta_a$ within the interval $\theta_a \in [0, \pi / 2]$, as the $\sin^2(\cdot)$ function is $\pi$-periodic. At every new iteration it is necessary to choose one of them. It is crucial to set $m_0 = 0$ in the first iteration, because this is the only case for which $\theta_a$ determines the expected value of $a$ unambiguously. Otherwise, several possible values of $a$ arise and it is not possible to tell which one is correct. Combining results for several values of $m_j$, it is possible to bound the uncertainty to be as small as desired.
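The branch bookkeeping corresponds to the auxiliary function of Alg. \ref{alg:gaussian2}; a Python sketch and its consistency check (every returned candidate must reproduce the measured probability):

```python
import math

def multiple_values_arcsin(a, m):
    """All angles theta in [0, pi/2] such that sin^2((2m+1) theta) = a."""
    theta0 = math.asin(math.sqrt(a))      # principal branch, in [0, pi/2]
    branches = [theta0]
    for k in range(1, m + 1):
        branches.append(k * math.pi - theta0)
        branches.append(k * math.pi + theta0)
    return [t / (2 * m + 1) for t in branches]

a, m = 0.3, 3
for theta in multiple_values_arcsin(a, m):
    # every branch reproduces the measured probability
    assert abs(math.sin((2 * m + 1) * theta) ** 2 - a) < 1e-12
print(len(multiple_values_arcsin(a, m)))  # 2m + 1 candidate angles
```

The iteration then selects the candidate closest to the previous estimate, which is why the unambiguous $m_0 = 0$ step must come first.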
The algorithm is based on the following statements. For a given collection of measurements and uncertainties $\{\theta_i, \Delta \theta_i\}$, the weighted average and uncertainty from the first $j$ terms are \begin{equation}\label{eq:ap_exp_errors}
\Tilde{\theta}_j = \frac{\sum_{i=0}^j \theta_i / \Delta \theta_i^2}{\sum_{i=0}^j 1 / \Delta \theta_i^2} \qquad \Delta \Tilde{\theta}_j = \left(\sum_{i=0}^j 1 / \Delta \theta_i^2\right)^{-1/2}. \end{equation} Notice that this relation is recursive, as $\Tilde{\theta}_{j+1}$ can be obtained by combining $\Tilde{\theta}_{j}$ and $\theta_{j+1}$; the same holds for the uncertainties. Thus, the interpretation of this algorithm is that at every new step $j$ a new term is added to the series $\{\theta, \Delta \theta\}$. The individual uncertainties decrease as $\sim (2 m + 1)^{-1}$, and the final global uncertainty is obtained as \begin{equation}\label{eq:uncertainty}
\Delta \theta = \frac{{\rm CDF}_\mathcal{N}^{-1}(1 - \alpha / 2)}{\sqrt{N}}\left( \sum_{j=0}^J (2m_j + 1)^2 \right)^{-1/2}, \end{equation} where $J$ denotes the last iteration performed. The full recipe for the algorithm is described in Algs. \ref{alg:gaussian} and \ref{alg:gaussian2}.
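The recursive property can be verified numerically; the following Python sketch (with made-up measurement values) folds the terms in one at a time and compares against the batch formula of Eq. \eqref{eq:ap_exp_errors}:

```python
def combine(est1, est2):
    """Inverse-variance weighted combination of two (theta, dtheta) pairs."""
    (t1, d1), (t2, d2) = est1, est2
    w1, w2 = 1.0 / d1 ** 2, 1.0 / d2 ** 2
    return ((w1 * t1 + w2 * t2) / (w1 + w2), (w1 + w2) ** -0.5)

# Illustrative (theta, dtheta) measurements with shrinking uncertainties
measurements = [(0.50, 0.10), (0.48, 0.05), (0.52, 0.02), (0.49, 0.01)]

# Batch evaluation: weighted average over all terms at once
wsum = sum(1.0 / d ** 2 for _, d in measurements)
batch_theta = sum(t / d ** 2 for t, d in measurements) / wsum
batch_err = wsum ** -0.5

# Recursive evaluation: fold the terms in one at a time
running = measurements[0]
for m in measurements[1:]:
    running = combine(running, m)

print(running, (batch_theta, batch_err))
```

Since the inverse-variance weights simply add, the two evaluations agree exactly, which is what allows Alg. \ref{alg:gaussian} to update its estimate iteration by iteration.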
In the case of a linear selection of $m_j$, \textit{i.e.}, $m_j = j$ for $j=0, 1, 2, \ldots, J$, the asymptotic behavior of this uncertainty is $\Delta \theta = \mathcal{O}(N^{-1/2} M^{-3/4})$, with $M$ the sum of all $m_j$. To see this, we compute
\begin{equation}
\sum_{j=0}^J (2j + 1)^2 = 4 \sum_{j=0}^J j^2 + 4 \sum_{j=0}^J j + \sum_{j=0}^J 1. \end{equation} We now take the identities $\sum_{j=0}^J j = J(J+1)/2=M$ and $\sum_{j=0}^J j^2 = J(J+1)(2J+1)/6$. Then, it is direct to check that \begin{equation}\label{eq:prec_lineal}
\Delta \theta = \mathcal{O}(N^{-1/2} J^{-3/2}) = \mathcal{O}(N^{-1/2} M^{-3/4}). \end{equation}
This scaling already surpasses classical sampling, but does not reach that of optimal Amplitude Estimation.
In the case of an exponential selection of $m_j$, \textit{i.e.}, $m_j \in \{0\} \cup \{2^j\}$ for $j=0, 1, 2, \ldots, J$, we can take the identities $\sum_{j=0}^J 2^j = 2^{J+1} - 1 = M$ and $\sum_{j=0}^J 2^{2j} = (4^{J+1} - 1) / 3$. Then it is direct to check that \begin{equation}\label{eq:prec_exp}
\Delta \theta = \mathcal{O}(N^{-1/2} 2^{-J}) = \mathcal{O}(N^{-1/2} M^{-1}). \end{equation}
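Both scalings can be checked numerically. The Python sketch below estimates the log-log slope of $\Delta\theta$ versus $M$ for the two schedules (the schedule sizes are illustrative):

```python
import math

def dtheta(schedule):
    """Uncertainty factor (sum over (2 m_j + 1)^2)^(-1/2), N dropped."""
    return sum((2 * m + 1) ** 2 for m in schedule) ** -0.5

def slope(schedule_fn, j_small, j_large):
    """Log-log slope of dtheta versus M = sum of all m_j."""
    pts = []
    for j_max in (j_small, j_large):
        sched = schedule_fn(j_max)
        pts.append((math.log(sum(sched)), math.log(dtheta(sched))))
    (x1, y1), (x2, y2) = pts
    return (y2 - y1) / (x2 - x1)

linear = lambda j_max: list(range(j_max + 1))                 # m_j = j
expo = lambda j_max: [0] + [2 ** j for j in range(j_max + 1)]  # {0} U {2^j}

print(slope(linear, 100, 1000), slope(expo, 10, 25))
```

The measured slopes approach $-3/4$ and $-1$ respectively, matching Eqs. \eqref{eq:prec_lineal} and \eqref{eq:prec_exp}.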
\subsection{Extension to error-mitigation techniques}
The error-mitigation procedure proposed for the unary algorithm discards some of the algorithm instances in order to retain only outcomes within the unary basis. This reduces the precision of the algorithm with respect to the predictions of Eqs. \eqref{eq:prec_lineal} and \eqref{eq:prec_exp}, in exchange for maintaining accuracy. This section provides bounds on the noise levels for which quantum advantage can still be reached.
We now work in the scheme where $m_j = j$. Let us assume that, in every iteration of Amplitude Estimation, only a fraction $\tilde p_j$ of the shots is retained. The equivalent version of Eq. \eqref{eq:uncertainty} is now \begin{equation}
\Delta \theta = {\rm CDF}_\mathcal{N}^{-1}(1 - \alpha / 2)\left( \sum_{j=0}^J (2m_j + 1)^2 N\tilde p_j \right)^{-1/2}. \end{equation}
Since errors become more likely as $m_j$ increases, $\tilde p_j$ decreases with $j$, and we can bound the accuracy as \begin{equation}
\Delta \theta \leq \frac{{\rm CDF}_\mathcal{N}^{-1}(1 - \alpha / 2)}{\sqrt{N \tilde p_J}}\left( \sum_{j=0}^J (2m_j + 1)^2 \right)^{-1/2}, \end{equation} since the precision is at least as good as the one obtained for the worst-case scenario. Comparing the trends, both in the linear and the exponential case, with the classical scaling, it is possible to see that quantum advantage is still achieved provided \begin{equation}\label{eq:p_J} \tilde p_J \geq M^{1 - 2\alpha}, \end{equation} with $\alpha = 3/4$ in the linear case and $\alpha = 1$ in the exponential case.
The probability of retaining a shot is at least the probability of having no errors in the circuit, considering that some double errors may lead to erroneous instances that nevertheless remain in the unary basis. This zero-error probability in the worst-case scenario, that is, at the last iteration of AE, is written as \begin{equation}
p_0 = \left((1-p_e)^{a n + b}\right)^{m_J}, \end{equation} where $p_e$ is the error of an individual gate, and $a$ and $b$ are related to the gate scaling, see Tab. \ref{tab:gates} for the details. In principle, one can refine the calculation of $p_0$ by considering different kinds of errors for different gates, but for the sake of simplicity we restrict ourselves to this analysis. Combining the results of Eqs. \eqref{eq:prec_lineal}, \eqref{eq:prec_exp} and \eqref{eq:p_J}, it is possible to see that quantum advantage is obtained if the individual gate error is bounded by \begin{equation}
p_e < 1 - m_J^{\frac{2 - 4\alpha}{(a n + b)m_J}}. \end{equation}
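To get a feel for this bound, the following sketch evaluates it for some made-up circuit parameters (the values of $a$, $b$, $n$ and $m_J$ below are illustrative assumptions, not taken from Tab.~\ref{tab:gates}; the function name is ours):

```python
import math

def max_gate_error(m_J, n, a, b, alpha):
    """Largest single-gate error p_e still compatible with quantum
    advantage: p_e < 1 - m_J ** ((2 - 4*alpha) / ((a*n + b) * m_J))."""
    exponent = (2.0 - 4.0 * alpha) / ((a * n + b) * m_J)
    return 1.0 - m_J ** exponent

# Illustrative (assumed) parameters: n qubits, a*n + b gates per
# oracle call, m_J oracle applications at the last AE iteration.
n, a, b, m_J = 8, 10, 5, 4
for alpha in (3.0 / 4.0, 1.0):   # linear and exponential schemes
    print(f"alpha = {alpha}: p_e < {max_gate_error(m_J, n, a, b, alpha):.2e}")
```

Deeper circuits (larger $a n + b$ or larger $m_J$) tighten the bound, i.e. they require smaller per-gate errors.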
\section{Independent analysis of errors for Amplitude Estimation \label{sec:ap_more_results}}
In this section we present results that extend the ones depicted in Sec. \ref{sec:simulations}.
\begin{figure}
\caption{Results for the sampling uncertainties of the expected payoff, in the unary (left) and binary (right) representation, with up to $M$ iterations of Amplitude Estimation, considering depolarizing and read-out errors together. Scatter points represent the uncertainties obtained in the experiment, while dash-dotted lines represent theoretical bounds, each line accompanied by the corresponding marker. For every color and symbol, the lower bound is for optimal quantum advantage, and the upper bound is for classical sampling. For $M=0$, both bounds are equivalent. In every case, a new iteration of Amplitude Estimation reduces the uncertainty. In the unary case, the scatter points show a tendency to return larger uncertainties as the errors increase, while in the binary case the uncertainties do not explicitly depend on the single-qubit gate error. This difference is a direct consequence of post-selection.}
\label{fig:errors}
\end{figure}
It is also interesting to see how the uncertainties of the measurements decrease as more iterations of Amplitude Estimation are allowed. We must remark that these errors are exclusively due to the uncertainty in the sampling. This can be observed in Fig. \ref{fig:errors}, where the obtained uncertainties are bounded between the classical sampling and the optimal Amplitude Estimation.
Furthermore, a very remarkable behavior of the uncertainties in the unary approach must be noticed. The obtained uncertainties show a tendency to increase as errors get larger. In contradistinction, the binary algorithm does not present this feature. The reason lies in the post-selection that is only applicable in the unary representation. As errors become more likely to happen, the post-selection filter rejects more instances. The direct consequence is that the number of accepted shots drops for large errors, causing less certain outcomes. The joint effect of these processes is that the uncertainty decreases more slowly for the unary algorithm than for the binary one. This behavior contrasts with the error obtained in Fig. \ref{fig:data}, where the binary results reflect a very poor performance.
The apparently contradictory result shown in Fig. \ref{fig:several_bins_2} is related to the distinction between {\sl accuracy} and {\sl precision}. {\sl Accuracy} stands for how close a measurement is to the exact value of a quantity, while {\sl precision} encodes the dispersion of different measurements. Amplitude Estimation is an algorithm that increases the precision of a measurement with respect to the number of samples, but it does not provide any further information regarding the accuracy. Amplitude Estimation for the binary algorithm reflects the expected tendency for the increase in precision, but comes with very poor results in accuracy. The unary algorithm grows more slowly in terms of precision, but maintains more accurate results. This decrease in precision might lead to losing the quantum advantage provided by AE in the presence of significant error. We study further in App. \ref{sec:ap_iae} the limit on the number of AE iterations that can be performed, given the error rates of the quantum device, while still maintaining quantum advantage for the unary representation.
\begin{figure}
\caption{Results for the sampling uncertainties of the expected payoff for an increasing number of bins, for up to $M$ iterations of Amplitude Estimation in the unary approach, considering depolarizing and read-out errors together. Scatter points represent the uncertainties obtained, while dash-dotted lines represent theoretical bounds, each line accompanied by the corresponding marker. For every color and symbol, the lower bound is for optimal quantum advantage, and the upper bound is for classical sampling.}
\label{fig:several_bins_2}
\end{figure}
\end{document}
"id": "1912.01618.tex",
"language_detection_score": 0.8249850869178772,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{How to play two-players restricted quantum games with 10 cards}
\author{Diederik Aerts$^1$, Haroun Amira$^1$, Bart D'Hooghe$^1$, Andrzej Posiewnik$^2$ and Jaroslaw Pykacz$^3$} \affiliation{$^1$Center Leo Apostel for Interdisciplinary Studies and Department of Mathematics, Vrije Universiteit Brussels, 1160 Brussels, Belgium\\ $^2$Institute of Theoretical Physics and Astrophysics, University of Gda\'{n}sk, 80-952 Gda\'{n}sk, Poland\\ $^3$Institute of Mathematics, University of Gda\'{n}sk, 80-952 Gda\'{n}sk, Poland}
\email{diraerts@vub.ac.be, Haroun.Amira@vub.ac.be, bdhooghe@vub.ac.be, fizap@julia.univ.gda.pl, pykacz@math.univ.gda.pl}
\date{\today}
\begin{abstract} We show that it is perfectly possible to play the `restricted' two-players, two-strategies quantum games proposed originally by Marinatto and Weber \cite{MW00} with, as the only equipment, a pack of 10 cards. The `quantum board' of such a model of these quantum games is an extreme simplification of the `macroscopic quantum machines' proposed by Aerts et al. in numerous papers \cite{Aerts86,Aer91,Aerts93,Aeretal00}, which allow one to simulate by macroscopic means various experiments performed on two entangled quantum objects. \end{abstract} \pacs{03.65.--w, 03.67.Lx, 02.50.Le, 87.10.+e} \keywords{quantum games, macroscopic simulation}
\maketitle
\section{Introduction}
Although the theory of quantum games, originated in 1999 by Meyer \cite{Mey99} and Eisert, Wilkens, and Lewenstein \cite{EWL99}, is only six years old, the numerous results obtained during these years \cite{PS03} have shown that extending the classical theory of games to the quantum domain opens new interesting possibilities. Although Eisert and Wilkens \cite{EW00} noticed that \textquotedblleft Any quantum system which can be manipulated by two parties or more and where the utility of the moves can be reasonably quantified, may be conceived as a quantum game\textquotedblright , the extreme fragility of quantum systems may make playing quantum games difficult. In this respect it is interesting to ask whether quantum games, with all their `genuine quantum' features, could be played with the use of suitably designed macroscopic devices. The aim of this letter is to show that this is possible, at least in the case of a `restricted' version of a two-players, two-strategies quantum game proposed by Marinatto and Weber \cite{MW00}, in which only identity and spin-flip operators are used. Moreover, we show that this can be done at once by anyone equipped with a pack of 10 cards bearing numbers $ 0,1,...,9$.
Our idea of playing quantum games with macroscopic devices stems from the invention of devices proposed by one of us \cite{Aer91} that perfectly simulate the behavior of, and measurements performed on, two maximally entangled spin-1/2 particles. For example, they allow one to violate the Bell inequality with $2\sqrt{2}$, exactly `in the same way' as it is violated in the EPR experiments. A more recent and further elaborated model consists of two coupled spin-1/2 particles for which measurements are defined using `randomly breaking measurement elastics' \cite{Aerts93,Aeretal00}. In this paper we use the older model for a single spin-1/2 particle for which measurements are defined using `randomly selected measurement charges' \cite{Aerts86,Aer91}. In order to play Marinatto and Weber's `restricted' version of the two-players, two-strategies quantum game we shall not use the `full power' of this machine, but we give its complete description so that the principle of what we try to do is clear.
\section{Macroscopic simulations of Marinatto and Weber's quantum games}
\subsection{The quantum machine}
The quantum machine is a model for a spin-1/2 particle consisting of a point particle with negative charge $q$ on the surface $S^{2}$ of a
3-dimensional unit sphere \cite{Aerts86,Aerts93}. The spin-state ${|\psi \rangle }=\left( \cos {\frac{\theta }{2}}e^{\frac{-i\phi }{2}},\sin {\frac{\theta }{2}}e^{ \frac{i\phi }{2}}\right) $ is represented by the point $v(1,\theta ,\phi )$ on $S^{2}$. All points of the sphere represent states of the spin: points on the surface $S^{2}$ correspond to pure states, interior points $v(r,\theta
,\phi )$ represent mixed states ${|\psi \rangle }{\langle \psi
|=\frac{1}{2}} \left( \begin{array}{cc} 1+r\cos \theta & r\sin \theta e^{-i\phi } \\ r\sin \theta e^{i\phi } & 1-r\cos \theta \end{array} \right) $, such that the point $v(0,\theta ,\phi )$ in the center of the sphere represents the density matrix $\left( \begin{array}{cc} \frac{1}{2} & 0 \\ 0 & \frac{1}{2} \end{array} \right) $. Hence states are represented exactly as in the Bloch model for spin 1/2.
A measurement $\alpha _{u(\theta ,\phi )}$ along the direction $u$ consists in placing a positive charge $q_{1}$ in $u$ and a positive charge $ q_{2}$ in $-u$. The charges $q_{1}$ and $q_{2}$ are taken at random from the interval $[0,Q]$ and their distribution within this interval is assumed to be uniform, but they have to satisfy the constraint $q_{1}+q_{2}=Q.$ So in fact we can think that only $q_{1}$ is taken at random from the interval $ [0,Q]$ and that $q_{2}=Q-q_{1}$. If the initial state of the machine is as depicted on Fig.~\ref{fig:mqg01}, the forces $F_{1}$ and $F_{2}$ between the negative charge $q$ and, respectively, positive charges $q_{1}$ and $q_{2}$ are \begin{equation}
F_{1}=C\frac{qq_{1}}{|r_{1}|^{2}}\text{ \qquad and\qquad }F_{2}=C\frac{qq_{2}
}{|r_{2}|^{2}} \end{equation} \begin{figure}
\caption{The macroscopic quantum machine}
\label{fig:mqg01}
\end{figure} If $F_{1}>F_{2}$ the electromagnetic forces pull the particle to the point $u $ where it stays and the measurement is said to yield outcome `spin up', and if $F_{1}<F_{2}$ the particle is pulled to $-u$ yielding outcome `spin down'. Denoting the angle between directions $v$ and $u$ by $\theta $, one obtains $r_{1}=2\sin \frac{\theta }{2}$ and $r_{2}=2\cos \frac{\theta }{2}.$ Hence the probability that $F_{1}>F_{2}$ is: \begin{eqnarray}
P\left( C\frac{qq_{1}}{|r_{1}|^{2}}>C\frac{qq_{2}}{|r_{2}|^{2}}\right) &=&P\left( q_{1}>Q\sin ^{2}\frac{\theta }{2}\right) \end{eqnarray} which, since $q_{1}$ is assumed to be uniformly distributed in the interval $ [0,Q]$, yields \begin{eqnarray} P(\text{spin up}) &=&\frac{Q-Q\sin ^{2}\frac{\theta }{2}}{Q}=\cos ^{2} \frac{\theta }{2} \end{eqnarray} and similarly \begin{equation} P(\text{spin down})=P(F_{1}<F_{2})=\sin ^{2}\frac{\theta }{2} \end{equation} which coincides with the quantum mechanical probability distribution over the set of outcomes for a spin-1/2 experiment.
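The measurement rule just described is straightforward to simulate classically. The following sketch (our own illustration, not part of the original construction) draws $q_1$ uniformly from $[0,Q]$ and recovers the quantum probability $\cos^2(\theta/2)$ by Monte Carlo:

```python
import math
import random

def quantum_machine_shot(theta, Q=1.0, rng=random):
    """One run of the sphere model: draw q1 uniformly from [0, Q]
    (so q2 = Q - q1); the outcome is 'spin up' iff F1 > F2,
    which reduces to q1 > Q * sin^2(theta / 2)."""
    q1 = rng.uniform(0.0, Q)
    return q1 > Q * math.sin(theta / 2.0) ** 2

theta = 2.0 * math.pi / 3.0      # angle between state v and direction u
shots = 200_000
p_up = sum(quantum_machine_shot(theta) for _ in range(shots)) / shots
print(f"estimated P(up) = {p_up:.3f}")
print(f"quantum value cos^2(theta/2) = {math.cos(theta / 2.0) ** 2:.3f}")
```

For $\theta = 2\pi/3$ the estimate converges to $\cos^2(\pi/3) = 1/4$, in agreement with the formula above.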
A macroscopic model for a quantum system of two entangled spin-1/2 particles in the singlet state \cite{Aer91} can be constructed by `coupling' two such sphere models by adding a rigid but extendable rod with a fixed center that connects negative charges representing `single' particles (Fig.~\ref {fig:mqg02}). Because of this rod the two negative charges are `entangled' since a measurement performed on one of them necessarily influences the state of the other one.
\subsection{Quantum games proposed by Marinatto and Weber}
The `restricted' version of two-players, two-strategies quantum games proposed by Marinatto and Weber is as follows: The `quantum board' of the game consists of two qubits that are in a definite initial state (entangled or not). Each of two players obtains one qubit and his/her strategy consists in applying to it either the identity or the spin-flip operator, or a probabilistic mixture of both. Then the state of both qubits is measured and the players get their payoff calculated according to the specific bimatrix of the played game and the results of measurements. Marinatto and Weber in their paper \cite{MW00} considered a game with a payoff bimatrix:
\begin{equation} \begin{array}{ccc} & \text{Bob: \emph{O}} & \text{Bob: \emph{T}} \\ \text{Alice: \emph{O}} & (\alpha ,\beta ) & (\gamma ,\gamma ) \\ \text{Alice: \emph{T}} & (\gamma ,\gamma ) & (\beta ,\alpha ) \end{array} \label{BoS} \end{equation} which, if $\alpha >\beta >\gamma $, is the payoff bimatrix of the Battle of the Sexes game (Alice wants to go to the Opera while Bob prefers to watch Television, so if they both choose \emph{O} Alice's payoff $\$_A(O,O)=\alpha $ is bigger than Bob's payoff $\$_B(O,O)=\beta $, and if they both choose \emph{T} their payoffs are the opposite. Since they both prefer to stay together, if their strategies mismatch they are both unhappy and get the lowest payoff $\gamma $). Marinatto and Weber showed that if the initial state of the pair of qubits is not entangled, the quantum version of the game reproduces exactly the classical Battle of the Sexes game played with mixed strategies, but if the game begins with an entangled state of the
`quantum board': $\mid \psi _{in}\rangle =a|OO\rangle +b|TT\rangle $, $
|a|^2+|b|^2=1$, then the expected payoff functions for both players crucially depend on the values of squared moduli of `entanglement coefficients' $|a|^2$ and $|b|^2$, and allow for new `solutions' of the game not attainable in the classical or factorizable quantum case.
\subsection{Marinatto and Weber's `restricted' quantum game realized by the macroscopic quantum machine}
Let us now see how simply Marinatto and Weber's `restricted' quantum game can be macroscopically realized with the use of the macroscopic quantum machine. We first describe the macroscopic realization of the game that begins with a general entangled state \begin{equation}
\mid \psi _{in}\rangle =a|OO\rangle +b|TT\rangle ,\text{ \qquad }
|a|^{2}+|b|^{2}=1. \label{in} \end{equation} The game that begins with a non-entangled state can be obtained from it
as a limit in which either $|a|^{2}=0$ or $|b|^{2}=0$. The initial configuration of the macroscopic machine that realizes the state (\ref{in}) is depicted on Fig.~\ref{fig:mqg02}. \begin{figure}
\caption{The initial general `entangled' state of Aerts' quantum machine.}
\label{fig:mqg02}
\end{figure}
Applying the spin-flip operator by either player is realized as exchanging the labels \emph{O} and \emph{T} on his/her sphere. Let us note that this is a local operation since it does not influence in any way the sphere of the other player. Applying the identity operator obviously means doing nothing. When both players have made (or not) their moves, the measurement is performed which, similarly to the original Aerts' proposal in \cite{Aer91}, consists in placing a positive charge $q_{1}$ on the North pole and a positive charge $q_{2}$ on the South pole of Alice's sphere, and the same charges, respectively, on the South and North poles of Bob's sphere (i.e., on Bob's sphere $q_{1}$ is placed on the South pole and $q_{2}$ on the North pole). Again, charges $q_{1}$ and $q_{2}$ are taken at random from the interval $[0,Q]$ with uniform distribution satisfying the constraint $q_{1}+q_{2}=Q.$ Assuming for simplicity that forces between `left' positive and `right' negative, resp. `right' positive and `left' negative charges are negligible (which can be achieved by using a rod that is long enough or by suitable screening) we can make calculations analogous to those for the single sphere model. The forces $F_{1}$ and $F_{2}$ between the negative charges $q$ placed at both ends of the rod and, respectively, positive charges $q_{1}$ and $q_{2}$ are \begin{equation}
F_{1}=C\frac{qq_{1}}{|b|^{2}}\text{ \qquad and\qquad }F_{2}=C\frac{qq_{2}}{
|a|^{2}} \end{equation} The final state of the machine (the result of measurement) depends on which force, $F_{1}$ or $F_{2}$, is bigger. If the labels $O$ and $T$ are placed as on Fig.~\ref{fig:mqg02}, the result of the measurement is $(O,O)$ iff $ F_{1}>F_{2}$, and $(T,T)$ iff $F_{1}<F_{2}$. The probability that $ F_{1}>F_{2}$ is as follows: \begin{equation}
P\left( F_{1}>F_{2}\right) =P\left( q_{1}|a|^{2}>q_{2}|b|^{2}\right)
=P\left( q_{1}>Q|b|^{2}\right) \label{prob1} \end{equation} which, since $q_{1}$ is assumed to be uniformly distributed in the interval $ [0,Q]$, yields \begin{equation}
P(O,O)=P(F_{1}>F_{2})=\frac{Q-Q|b|^{2}}{Q}=1-|b|^{2}=|a|^{2}. \label{prob2} \end{equation} Of course in this case \begin{equation}
P(T,T)=P(F_{1}<F_{2})=1-|a|^{2}=|b|^{2}. \label{prob3} \end{equation}
Let us assume, following Marinatto and Weber, that Alice applies the identity operator (in our model: undertakes no action) with probability $p$ and applies the spin-flip operator (in our model: exchanges the labels $O$ and $T$ on her sphere) with probability $1-p$, and Bob does the same on his side with respective probabilities $q$ and $1-q$. Consequently, when both players have made (or not) their moves, the configuration depicted on Fig.~
\ref{fig:mqg02} occurs with probability $pq$, and the result of the measurement is $(O,O) $ with probability $pq|a|^{2}$ and $(T,T)$ with probability $pq|b|^{2}$. Taking into account three other possibilities (Alice undertaking no action and Bob exchanging the labels, Alice exchanging the labels and Bob undertaking no action, and both of them exchanging their labels) which occur with respective probabilities $p(1-q)$, $(1-p)q$, and $ (1-p)(1-q)$, and the payoff bimatrix (\ref{BoS}), we obtain the following formulas for the expected payoff of Alice:
\begin{equation} \begin{array}{ll}
\overline{\$}_{A}(p,q) & =pq(|a|^{2}\alpha +|b|^{2}\beta )+p(1-q)\gamma \\
& \text{ }+(1-p)q\gamma +(1-p)(1-q)(|a|^{2}\beta +|b|^{2}\alpha ) \\
& =p[q(\alpha +\beta -2\gamma )-\alpha |b|^{2}-\beta |a|^{2}+\gamma ] \\
& \text{ }+q(-\alpha |b|^{2}-\beta |a|^{2}+\gamma )+\alpha
|b|^{2}+\beta
|a|^{2}, \end{array} \label{payoffAlice} \end{equation} and the expected payoff of Bob: \begin{equation} \begin{array}{ll}
\overline{\$}_{B}(p,q) & =pq(|b|^{2}\alpha +|a|^{2}\beta )+p(1-q)\gamma \\
& \text{ }+(1-p)q\gamma +(1-p)(1-q)(|a|^{2}\alpha +|b|^{2}\beta ) \\
& =q[p(\alpha +\beta -2\gamma )-\alpha |a|^{2}-\beta |b|^{2}+\gamma ] \\
& \text{ }+p(-\alpha |a|^{2}-\beta |b|^{2}+\gamma )+\alpha
|a|^{2}+\beta
|b|^{2}. \end{array} \label{payoffBob} \end{equation} Let us note that these formulas, although obtained from the `mechanistic' model through `classical' calculations are \emph{exactly }the same as formulas (7.3) of Marinatto and Weber \cite{MW00} for the payoff functions of Alice and Bob in their `reduced' version of the quantum Battle of the Sexes game that begins with a general entangled state (\ref{in}).
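As a sanity check, the payoff formulas above can be verified against a brute-force enumeration over the four flip configurations and the two measurement outcomes. A minimal Python sketch (our own illustration; `a2` stands for $|a|^2$ and the function names are ours):

```python
import itertools

def payoff_enumeration(p, q, a2, alpha, beta, gamma):
    """Expected payoffs (Alice, Bob) summed over the four flip
    configurations and the two measurement outcomes; a2 = |a|^2."""
    b2 = 1.0 - a2
    pay = {('O', 'O'): (alpha, beta), ('T', 'T'): (beta, alpha),
           ('O', 'T'): (gamma, gamma), ('T', 'O'): (gamma, gamma)}
    flip = {'O': 'T', 'T': 'O'}
    exp_A = exp_B = 0.0
    for fA, fB in itertools.product((0, 1), repeat=2):
        # p, q are the probabilities of playing the identity (no flip)
        w = (p if fA == 0 else 1 - p) * (q if fB == 0 else 1 - q)
        for prob, base in ((a2, 'O'), (b2, 'T')):
            lA = flip[base] if fA else base
            lB = flip[base] if fB else base
            pa, pb = pay[(lA, lB)]
            exp_A += w * prob * pa
            exp_B += w * prob * pb
    return exp_A, exp_B

def payoff_closed_form(p, q, a2, alpha, beta, gamma):
    """Closed forms (payoffAlice) / (payoffBob) from the text."""
    b2 = 1.0 - a2
    A = (p * (q * (alpha + beta - 2 * gamma) - alpha * b2 - beta * a2 + gamma)
         + q * (-alpha * b2 - beta * a2 + gamma) + alpha * b2 + beta * a2)
    B = (q * (p * (alpha + beta - 2 * gamma) - alpha * a2 - beta * b2 + gamma)
         + p * (-alpha * a2 - beta * b2 + gamma) + alpha * a2 + beta * b2)
    return A, B

print(payoff_enumeration(0.3, 0.7, 0.6, 5, 3, 1))
print(payoff_closed_form(0.3, 0.7, 0.6, 5, 3, 1))
```

The two functions agree for every choice of $p$, $q$ and $|a|^2$, reproducing the agreement with Marinatto and Weber's formulas by direct computation.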
The macroscopic model of the quantum game that begins with a
non-entangled state $\mid \psi _{in}\rangle =|OO\rangle $ can be obtained by putting in (\ref{in}) $a=1$ and $b=0$, which means that in this case the rod on Fig.~\ref{fig:mqg02} leads from the North pole of Alice's sphere to the South pole of Bob's sphere. In this case we obtain \begin{equation} \begin{array}{ll} \overline{\$}_{A}(p,q) & =p[q(\alpha +\beta -2\gamma )+\gamma -\beta ] \\ & \text{ }+q(\gamma -\beta )+\beta , \\ \overline{\$}_{B}(p,q) & =q[p(\alpha +\beta -2\gamma )+\gamma -\alpha ] \\ & \text{ }+p(\gamma -\alpha )+\alpha , \end{array} \label{payoffNonent} \end{equation} again in perfect agreement with Marinatto and Weber's \cite{MW00} formulas (3.3).
This result might be surprising since the rod connecting the two particles represents entanglement in the macroscopic quantum machine, so one could expect that when the initial state of the game is not entangled, this connection should be broken. However, it should be noticed that in the device depicted on Fig.~\ref{fig:mqg02} the rod connecting the two particles is, in fact, redundant. The reason for which we left it on Fig.~\ref{fig:mqg02} is twofold: firstly, we wanted to stress that our idea of a macroscopic device that allows one to play quantum games stems from the ideas published in \cite{Aer91,Aeretal00}, and secondly, this rod will be essential for macroscopic simulations of other quantum games, more general than Marinatto and Weber's `restricted' ones.
Thus, we see that what vanishes in the `non-entanglement' limit of the considered quantum game is the `randomness in measurement', since now (except for the zero-probability case when $q_{1}=0$, $q_{2}=Q$) the initial state of the machine does not change in the course of the measurement, whatever the value of $q_{1}$.
\subsection{Marinatto and Weber's `restricted' quantum game realized with a pack of 10 cards}
The lack of any importance of the connecting rod and the fact that all distances, charges, and forces in the device depicted on Fig.~\ref{fig:mqg02} are symmetric with respect to the middle of the rod allow us to produce an even simpler model of the considered game, in fact so simple that it can be played with a piece of paper and a pack of 10 cards bearing numbers $0,1,...,9$. The game is played in three steps. In the first step the initial `quantum' state of the game (\ref{in}) is fixed. Since only
the squared moduli of entanglement coefficients $|a|^{2}$ and $|b|^{2}$ are important and $|a|^{2}+|b|^{2}=1$, it is enough to fix a point representing $
|a|^{2}$ in the interval $[0,1]$ (Fig.~\ref{fig:mqg03}). \begin{figure}
\caption{The board to play `restricted' quantum games with 10 cards.}
\label{fig:mqg03}
\end{figure}
In the next step the players exchange, or not, the labels $O$ and $T$ on their sides, thus modelling the application of the spin-flip, resp. identity, operators. In the third step a measurement is made, which is executed by choosing at random a number in the interval $[0,1]$. If the chosen number is smaller than $|a|^{2}$, which, if the probability distribution is uniform in $[0,1]$, happens with probability $|a|^{2}$, the result of the measurement is given by the labels placed by both players close to $1$, otherwise by the labels placed close to $0$. Although the random choice of a number may be implemented in many ways, we propose to use a pack of 10 cards bearing numbers $0,1,...,9$, which allows us to draw one by one, with uniform probability, consecutive decimal digits of a number until we are sure that the emerging number is either definitely bigger or definitely smaller than $|a|^{2}$ (we put aside the problem of drawing in this way a number that \emph{exactly} equals $|a|^{2}$, since its probability is $0$, as well as the fact that in a series of $n$ drawings we actually choose one of $10^{n}$ numbers represented by separate points uniformly distributed in the interval $[0,1-10^{-n}]$). Of course the calculations of the payoff functions that we made while describing the device depicted on Fig.~\ref{fig:mqg02} are still valid in this case, so we again obtain a perfect macroscopic simulation of Marinatto and Weber's `restricted' two-players, two-strategies quantum games.
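The card-drawing procedure can be sketched in code as follows (an illustrative simulation in which the pack of 10 cards is replaced by a uniform digit generator; the function name is ours):

```python
import random

def draw_is_below(a2, rng=random):
    """Draw decimal digits one at a time (each uniform over 0..9, as
    with the 10 cards).  After k digits the number is known to lie in
    an interval of width 10**-k; stop as soon as this interval falls
    entirely below or entirely above a2.  Returns True iff the drawn
    number is definitely smaller than a2."""
    low, width = 0.0, 1.0
    while True:
        width /= 10.0
        low += rng.randrange(10) * width   # append one decimal digit
        if low + width <= a2:
            return True                    # definitely smaller than a2
        if low >= a2:
            return False                   # definitely bigger than a2

rng = random.Random(0)
trials = 100_000
freq = sum(draw_is_below(0.37, rng) for _ in range(trials)) / trials
print(f"empirical P(number < 0.37) = {freq:.3f}")
```

The loop terminates quickly in practice: each further digit is needed only when the current digit's subinterval happens to contain $|a|^{2}$, which occurs with probability $1/10$ per step.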
Thus, one does not have to be equipped with sophisticated and costly devices and perform subtle manipulations on highly fragile single quantum objects in order to play quantum games, at least in the `restricted' Marinatto and Weber's version: all that suffices is a piece of paper and a pack of 10 cards!
\noindent \textbf{Acknowledgments} This work was carried out within the projects G.0335.02 and G.0452.04 of the Flemish Fund for Scientific research.
\end{document}
"id": "0505120.tex",
"language_detection_score": 0.851130485534668,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title[Cohomology and the controlling algebra of crossed homomorphisms on $3$-Lie algebras]{Cohomology and the controlling algebra of crossed homomorphisms on $3$-Lie algebras}
\author{Shuai Hou} \address{Department of Mathematics, Jilin University, Changchun 130012, Jilin, China} \email{houshuai19@mails.jlu.edu.cn}
\author{Meiyan Hu} \address{Department of Mathematics, Jilin University, Changchun 130012, Jilin, China} \email{hmy21@mails.jlu.edu.cn}
\author{Lina Song} \address{Department of Mathematics, Jilin University, Changchun 130012, Jilin, China} \email{songln@jlu.edu.cn}
\author{Yanqiu Zhou} \address{School of Science, Guangxi University of Science and Technology, Liuzhou 545006, China} \email{zhouyanqiunihao@163.com}
\begin{abstract} In this paper, first we give the notion of a crossed homomorphism on a $3$-Lie algebra with respect to an action on another $3$-Lie algebra, and characterize it using a homomorphism from a Lie algebra to the semidirect product Lie algebra. We also establish the relationship between crossed homomorphisms and relative Rota-Baxter operators of weight $1$ on 3-Lie algebras. Next we construct a cohomology theory for a crossed homomorphism on $3$-Lie algebras and classify infinitesimal deformations of crossed homomorphisms using the second cohomology group. Finally, using the higher derived brackets, we construct an $L_\infty$-algebra whose Maurer-Cartan elements are crossed homomorphisms. Consequently, we obtain the twisted $L_\infty$-algebra that controls deformations of a given crossed homomorphism on $3$-Lie algebras. \end{abstract}
\renewcommand{\thefootnote}{} \footnotetext{2020 Mathematics Subject Classification. 17A42, 17B56, 17B38} \keywords{$3$-Lie algebra, crossed homomorphism, $L_\infty$-algebra, cohomology, deformation }
\maketitle
\tableofcontents
\allowdisplaybreaks
\section{Introduction} The notion of $3$-Lie algebras and more generally, $n$-Lie algebras (also called Filippov algebras) was introduced in \cite{Filippov}. See the review article \cite{review,Makhlouf} for more details. The $n$-Lie algebra is the algebraic structure corresponding to Nambu mechanics \cite{Nambu}. In recent years, $3$-Lie algebras have been widely studied and applied in the fields of mathematics and physics, especially in string theory and M2-branes \cite{Bagger,HHM,Gustavsson}. For example, metric $3$-Lie algebras play a significant role in the basic model of Bagger-Lambert-Gustavsson theory \cite{dMFM,Medeiros}, the supersymmetric Yang-Mills theory can be studied by a special structure of $3$-Lie algebras \cite{Gomis}, and in \cite{Basu}, Basu and Harvey suggested to replace the Lie algebra appearing in the Nahm equation by a $3$-Lie algebra for the lifted Nahm equations.
The notion of a crossed homomorphism on Lie algebras was introduced by Lue in \cite{Lue}. A crossed homomorphism is also called a relative difference operator or a differential operator of weight $1$ with respect to the adjoint representation \cite{GUOK,GSZ,Liu-Guo}. Crossed homomorphisms are related to post-Lie algebras and can be used to study the integration of post-Lie algebras \cite{Mencattini}. In \cite{PSTZ}, using crossed homomorphisms on Lie algebras, the authors studied the relationship between the category of weak representations of Lie-Rinehart algebras and the monoidal category of representations of Lie algebras of Cartan type. They also introduced the cohomology theory of crossed homomorphisms on Lie algebras and studied linear deformations of crossed homomorphisms. In \cite{JSH}, the authors studied the controlling algebra of relative difference Lie algebras and defined the cohomology of difference Lie algebras with coefficients in arbitrary representations. Crossed homomorphisms on Hopf algebras and the Cartier-Kostant-Milnor-Moore theorem for difference Hopf algebras were studied in \cite{GLT}.
The research on the deformation theory of algebraic structures began with the seminal work of Gerstenhaber for associative algebras \cite{Ge}. Next, Nijenhuis and Richardson extended the study of deformation theory to Lie algebras \cite{NR}. In \cite{FO,Makhlouf}, the deformation problem of $n$-Lie algebras and of $3$-Lie algebras, respectively, was studied. See the review \cite{GlST} for more details. Recently, the deformations of certain operators, e.g. morphisms and relative Rota-Baxter operators on $3$-Lie algebras, have been deeply studied \cite{Arfa,THS}. Actually, an invertible linear map is a differential operator if and only if its inverse is a (relative) Rota-Baxter operator on $3$-Lie algebras \cite{BaiRGuo,BGS-3-Bialgebras}.
The purpose of this paper is to study crossed homomorphisms on 3-Lie algebras, with particular interest in the cohomology and deformation theories. The crossed homomorphisms introduced in this paper are closely related to the relative Rota-Baxter operators of weight $1$ on 3-Lie algebras introduced in \cite{HouSZ}. More precisely, the inverse of an invertible crossed homomorphism is a relative Rota-Baxter operator of weight $1$, which generalizes the classical relation between crossed homomorphisms and relative Rota-Baxter operators of weight 1 on Lie algebras, and thus justifies the definition. A crossed homomorphism gives rise to a new representation, and the corresponding cohomology of 3-Lie algebras is taken to be the cohomology of the crossed homomorphism. As expected, the second cohomology group classifies infinitesimal deformations of the crossed homomorphism. Furthermore, we use Voronov's higher derived brackets to construct an $L_\infty$-algebra whose Maurer-Cartan elements are crossed homomorphisms. Consequently, we obtain the $L_\infty$-algebra governing deformations of a crossed homomorphism. Note that in the Lie algebra context, it is a differential graded Lie algebra that governs deformations of a crossed homomorphism on Lie algebras, while for 3-Lie algebras it is an $L_\infty$-algebra with nontrivial $l_3$ that governs deformations of a crossed homomorphism, which is totally different from the case of Lie algebras.
The paper is organized as follows. In Section \ref{sec:two}, we introduce the notion of crossed homomorphisms on $3$-Lie algebras and illustrate the relationship between crossed homomorphisms and relative Rota-Baxter operators of weight $1$. In Section \ref{sec:cohomology}, we establish the cohomology theory of crossed homomorphisms on $3$-Lie algebras, and use it to classify infinitesimal deformations of crossed homomorphisms. In Section \ref{sec:four}, we construct an $L_{\infty}$-algebra whose Maurer-Cartan elements are precisely crossed homomorphisms on $3$-Lie algebras. We also use Getzler's twisted $L_{\infty}$-algebra theory to characterize deformations of crossed homomorphisms on $3$-Lie algebras.
In this paper, we work over an algebraically closed field $\mathbb K$ of characteristic $0$.
\noindent {\bf Acknowledgements.} This research is supported by NSFC (12001226).
\section{Crossed homomorphisms on $3$-Lie algebras}\label{sec:two} In this section, we introduce the notion of crossed homomorphisms on $3$-Lie algebras, and find that there is a close relationship between crossed homomorphisms and relative Rota-Baxter operators of weight $1$ on $3$-Lie algebras.
\begin{defi}{\rm (\cite{Filippov})}\label{defi:3Lie} A {\bf 3-Lie algebra} is a vector space $\mathfrak g$ together with a skew-symmetric linear map $[\cdot,\cdot,\cdot]_{\mathfrak g}:\wedge^{3}\mathfrak g\rightarrow \mathfrak g$, such that for $ x_{i}\in \mathfrak g, 1\leq i\leq 5$, the following {\bf Fundamental Identity} holds: \begin{eqnarray} \nonumber\qquad &&[x_1,x_2,[x_3,x_4, x_5]_{\mathfrak g}]_{\mathfrak g}\\ &=&[[x_1,x_2, x_3]_{\mathfrak g},x_4,x_5]_{\mathfrak g}+[x_3,[x_1,x_2, x_4]_{\mathfrak g},x_5]_{\mathfrak g}+[x_3,x_4,[x_1,x_2, x_5]_{\mathfrak g}]_{\mathfrak g}.
\label{eq:jacobi1} \end{eqnarray} \end{defi} For $x_{1},x_{2}\in \mathfrak g$, define $\mathrm{ad}_{x_1,x_2}\in \mathfrak {gl}(\mathfrak g)$ by \begin{eqnarray*}\label{eq2} \mathrm{ad}_{x_{1},x_{2}}x:=[x_{1},x_{2},x]_{\mathfrak g},\quad \forall x\in \mathfrak g. \end{eqnarray*} Then $\mathrm{ad}_{x_{1},x_{2}}$ is a derivation, i.e. $$\mathrm{ad}_{x_{1},x_{2}}[x_{3},x_{4},x_{5}]_{\mathfrak g}=[\mathrm{ad}_{x_{1},x_{2}}x_{3},x_{4},x_{5}]_{\mathfrak g}+ [x_{3},\mathrm{ad}_{x_{1},x_{2}}x_{4},x_{5}]_{\mathfrak g}+[x_{3},x_{4},\mathrm{ad}_{x_{1},x_{2}}x_{5}]_{\mathfrak g}.$$ \begin{defi}{\rm (\cite{KA})} A {\bf representation} of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ on a vector space $V$ is a linear map: $\rho:\wedge^{2}\mathfrak g\rightarrow \mathfrak {gl}(V)$, such that for all $x_{1}, x_{2}, x_{3}, x_{4}\in \mathfrak g,$ the following equalities hold: \begin{eqnarray} ~\label{representation-1}\rho(x_{1},x_{2})\rho(x_{3},x_{4})&=&\rho([x_{1},x_{2},x_{3}]_{\mathfrak g},x_{4})+ \rho(x_{3},[x_{1},x_{2},x_{4}]_{\mathfrak g})+\rho(x_{3},x_{4})\rho(x_{1},x_{2}),\\ ~\label{representation-2}\rho(x_{1},[x_{2},x_{3},x_{4}]_{\mathfrak g})&=&\rho(x_{3},x_{4})\rho(x_{1},x_{2})-\rho(x_{2},x_{4})\rho(x_{1},x_{3}) +\rho(x_{2},x_{3})\rho(x_{1},x_{4}). \end{eqnarray} \end{defi}
Let $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ be a $3$-Lie algebra. The linear map $\mathrm{ad}:\wedge^2\mathfrak g\rightarrow\mathfrak {gl}(\mathfrak g)$ defines a representation of the $3$-Lie algebra $\mathfrak g$ on itself, which is called the {\bf adjoint representation} of $\mathfrak g.$
\begin{defi}{\rm (\cite{Filippov})} Let $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ be a $3$-Lie algebra. Then the subalgebra $[\mathfrak g,\mathfrak g,\mathfrak g]_{\mathfrak g}$ is called the {\bf derived algebra} of $\mathfrak g$, and denoted by ${\mathfrak g}^1$.
The subspace $${\mathcal{C}}(\mathfrak g)=\{x\in \mathfrak g~|~[x,y,z]_{\mathfrak g}=0,~ \forall y,z\in\mathfrak g\}$$ is called the {\bf center} of $\mathfrak g$. \end{defi}
\begin{defi}\cite{HouSZ} Let $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ and $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$ be two $3$-Lie algebras. Let $\rho: \wedge^2\mathfrak g\rightarrow \mathfrak {gl}(\mathfrak h)$ be a representation of the $3$-Lie algebra $\mathfrak g$ on the vector space $\mathfrak h$. If for all $x,y\in \mathfrak g, u,v,w\in \mathfrak h,$ \begin{eqnarray} \label{eq:action-1}{}\rho(x,y)u\in {\mathcal{C}}(\mathfrak h),\\ \label{eq:action-2}{}\rho(x,y)[u,v,w]_{\mathfrak h}=0, \end{eqnarray} then $\rho$ is called an {\bf action} of $\mathfrak g$ on $\mathfrak h.$ We denote an action by $(\mathfrak h;\rho).$ \end{defi}
\begin{pro}\cite{HouSZ}\label{lem:semi}
Let $\rho: \wedge^2\mathfrak g\rightarrow \mathfrak {gl}(\mathfrak h)$ be an action of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ on a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h}).$ There is a $3$-Lie algebra structure on $\mathfrak g\oplus \mathfrak h$, defined by
\begin{eqnarray} {}[x+u,y+v,z+w]_{\rho}=[x,y,z]_{\mathfrak g}+\rho(x,y)w+\rho(y,z)u+\rho(z,x)v+[u,v,w]_\mathfrak h, \end{eqnarray}
for all $x,y,z\in\mathfrak g,~u,v,w\in\mathfrak h.$ This $3$-Lie algebra is called the semidirect product of the $3$-Lie algebra $\mathfrak g$ and the $3$-Lie algebra $\mathfrak h$ with respect to the action $\rho$, and denoted by $\mathfrak g\ltimes _\rho\mathfrak h.$ \end{pro}
Next we give the notion of crossed homomorphisms on $3$-Lie algebras. \begin{defi}\label{crossed-homo}
Let $\rho:\wedge^2\mathfrak g\to\mathfrak {gl}(\mathfrak h)$ be an action of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ on a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$. A linear map $H:\mathfrak g\to \mathfrak h$ is called a {\bf crossed homomorphism with respect to the action $\rho$} if \begin{eqnarray}\label{eq:crossed-homo} \qquad H[x,y,z]_\mathfrak g=\rho(x,y)(Hz)+\rho(y,z)(Hx)+\rho(z,x)(Hy)+[Hx,Hy,Hz]_\mathfrak h,\quad \forall x, y, z\in \mathfrak g. \end{eqnarray} \end{defi}
\begin{rmk}
If the action $\rho$ of $\mathfrak g$ on $\mathfrak h$ is zero, then any crossed homomorphism from $\mathfrak g$ to $\mathfrak h$ is nothing but a $3$-Lie algebra homomorphism. If~$\mathfrak h$ is commutative, then any crossed homomorphism from $\mathfrak g$ to $\mathfrak h$ is simply a derivation from $\mathfrak g$ to $\mathfrak h$ with respect to the representation $(\mathfrak h; \rho)$. \end{rmk}
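To see the two degenerate cases explicitly, specialize \eqref{eq:crossed-homo}. If $\rho=0$, it reduces to the homomorphism condition
\begin{eqnarray*}
H[x,y,z]_\mathfrak g=[Hx,Hy,Hz]_\mathfrak h,
\end{eqnarray*}
while if the bracket $[\cdot,\cdot,\cdot]_\mathfrak h$ vanishes, it reduces to the derivation condition
\begin{eqnarray*}
H[x,y,z]_\mathfrak g=\rho(x,y)(Hz)+\rho(y,z)(Hx)+\rho(z,x)(Hy).
\end{eqnarray*}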
In the Lie algebra context, crossed homomorphisms play important roles in the study of representations of Lie algebras of Cartan type. An essential ingredient in the whole theory is that a crossed homomorphism $H:\mathfrak g\to\mathfrak h$ induces a homomorphism from the Lie algebra $\mathfrak g$ to the semidirect product Lie algebra $\mathfrak g\ltimes \mathfrak h$ (\cite[Theorem 2.7]{PSTZ}). Now for crossed homomorphisms on 3-Lie algebras, we still have this characterization.
\begin{thm}
Let $\rho: \wedge^2\mathfrak g\rightarrow \mathfrak {gl}(\mathfrak h)$ be an action of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ on a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h}).$ Then a linear map $H:\mathfrak g\rightarrow\mathfrak h$ is a crossed homomorphism from $\mathfrak g$ to $\mathfrak h$ if and only if the map
$\phi_{H}:\mathfrak g\rightarrow\mathfrak g\ltimes _\rho\mathfrak h$ defined by
\begin{eqnarray}
\phi_{H}(x):=(x,Hx),\quad \forall x\in\mathfrak g,
\end{eqnarray}
is a $3$-Lie algebra homomorphism. \end{thm} \begin{proof} For all $x,y,z\in\mathfrak g,$ we have
\begin{eqnarray*}
\phi_{H}[x,y,z]_{\mathfrak g}&=&([x,y,z]_{\mathfrak g},H[x,y,z]_{\mathfrak g});\\
{}[\phi_{H}(x),\phi_{H}(y),\phi_{H}(z)]_{\rho}
&=&([x,y,z]_{\mathfrak g},\rho(x,y)(Hz)+\rho(y,z)(Hx)+\rho(z,x)(Hy)+[Hx,Hy,Hz]_{\mathfrak h}).
\end{eqnarray*}
Thus, $\phi_{H}[x,y,z]_{\mathfrak g}=[\phi_{H}(x),\phi_{H}(y),\phi_{H}(z)]_{\rho}$ if and only if $H$ is a crossed homomorphism from $\mathfrak g$ to $\mathfrak h$ with respect to the action $\rho.$ \end{proof}
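Equivalently, since $\phi_H$ is injective, $H$ is a crossed homomorphism if and only if its graph
\begin{eqnarray*}
\mathrm{Gr}(H)=\{(x,Hx)~|~x\in\mathfrak g\}
\end{eqnarray*}
is a subalgebra of the semidirect product $3$-Lie algebra $\mathfrak g\ltimes_\rho\mathfrak h$.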
\begin{ex} Let $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ be a $4$-dimensional $3$-Lie algebra with a basis $\{e_1,e_2,e_3,e_4\}$, whose only nonzero bracket is given by $$[e_2,e_3,e_4]_{\mathfrak g}=e_1.$$ The center of $\mathfrak g$ is the subspace spanned by $e_1.$ Since the image of $\mathrm{ad}$ lies in $\langle e_1\rangle={\mathcal{C}}(\mathfrak g)$ and $\mathrm{ad}_{x,y}$ vanishes on the derived algebra ${\mathfrak g}^1=\langle e_1\rangle$,
the adjoint representation $\mathrm{ad}:\wedge^2\mathfrak g\rightarrow\mathfrak {gl}(\mathfrak g)$ is an action of $\mathfrak g$ on itself. For a matrix $\left(\begin{array}{cccc} a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34}\\ a_{41}&a_{42}&a_{43}&a_{44} \end{array}\right),$ define \begin{align*}
He_1=&a_{11}e_1+a_{21}e_2+a_{31}e_3+a_{41}e_4,\quad He_2=a_{12}e_1+a_{22}e_2+a_{32}e_3+a_{42}e_4,\\
He_3=&a_{13}e_1+a_{23}e_2+a_{33}e_3+a_{43}e_4,\quad He_4=a_{14}e_1+a_{24}e_2+a_{34}e_3+a_{44}e_4. \end{align*} $H$ is a crossed homomorphism from $\mathfrak g$ to $\mathfrak g$ with respect to the action $\mathrm{ad}$ if and only if \begin{eqnarray*} H[e_i,e_j,e_k]_{\mathfrak g}=[He_i,e_j,e_k]_{\mathfrak g}+[e_i,He_j,e_k]_{\mathfrak g}+[e_i,e_j,He_k]_{\mathfrak g}+[He_i,He_j,He_k]_{\mathfrak g},\quad i,j,k=1,2,3,4. \end{eqnarray*} By straightforward computations, we deduce that $H$ is a crossed homomorphism if and only if \begin{eqnarray*} \left\{\begin{array}{rcl} {}a_{11}&=&a_{22}+a_{33}+a_{44}+a_{23}a_{34}a_{42}+a_{24}a_{32}a_{43}+a_{22}a_{33}a_{44}-a_{24}a_{33}a_{42}-a_{22}a_{34}a_{43}-a_{23}a_{32}a_{44},\\ {}a_{21}&=&a_{31}=a_{41}=0. \end{array}\right. \end{eqnarray*} In particular, $\begin{cases} H(e_1)=0,\\ H(e_2)=e_2,\\ H(e_3)=e_3,\\ H(e_4)=-e_4, \end{cases}$is a crossed homomorphism from $\mathfrak g$ to $\mathfrak g$ with respect to the adjoint action $\mathrm{ad}$. \end{ex}
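For the particular crossed homomorphism in the example above, the defining identity \eqref{eq:crossed-homo} can also be checked directly on the only nontrivial triple $(e_2,e_3,e_4)$:
\begin{eqnarray*}
H[e_2,e_3,e_4]_{\mathfrak g}&=&He_1=0,\\
{}[He_2,e_3,e_4]_{\mathfrak g}+[e_2,He_3,e_4]_{\mathfrak g}+[e_2,e_3,He_4]_{\mathfrak g}+[He_2,He_3,He_4]_{\mathfrak g}&=&e_1+e_1-e_1-e_1=0.
\end{eqnarray*}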
\begin{defi} Let $H$ and $H'$ be two crossed homomorphisms from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to an action $\rho$. A {\bf homomorphism} from $H$ to $H'$ consists of $3$-Lie algebra homomorphisms $\psi_\mathfrak g: \mathfrak g\lon\mathfrak g$ and $\psi_\mathfrak h: \mathfrak h\lon\mathfrak h$ such that \begin{eqnarray}
\label{condition-1}\psi_\mathfrak h\circ H&=&H'\circ\psi_\mathfrak g,\\
\label{condition-2}\psi_\mathfrak h(\rho(x,y)u)&=&\rho(\psi_\mathfrak g(x),\psi_\mathfrak g(y))(\psi_\mathfrak h(u)),\quad \forall x,y\in \mathfrak g, u\in \mathfrak h. \end{eqnarray} \emptycomment{ i.e. we have the following commutative diagram \[\xymatrix{
\mathfrak g \ar[d]_{H} \ar[r]^{\psi_{\mathfrak g}}
& \mathfrak g \ar[d]^{H'} \\
\mathfrak h \ar[r]^{\psi_{\mathfrak h}}
& \mathfrak h }\]} In particular, if both $\psi_\mathfrak g$ and $\psi_\mathfrak h$ are invertible, $(\psi_\mathfrak g, \psi_\mathfrak h)$ is called an {\bf isomorphism} from $H$ to $H'$. \end{defi} \begin{lem} Let $H:\mathfrak g\rightarrow\mathfrak h$ be a crossed homomorphism from $\mathfrak g$ to $\mathfrak h$ with respect to an action $\rho$. Let $\psi_\mathfrak g: \mathfrak g\lon\mathfrak g$ and $\psi_\mathfrak h: \mathfrak h\lon\mathfrak h$ be $3$-Lie algebra isomorphisms such that \eqref{condition-2} holds. Then $\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}$ is a crossed homomorphism from $\mathfrak g$ to $\mathfrak h$ with respect to the action $\rho$. \end{lem} \begin{proof} For all $x,y,z\in\mathfrak g$, we have \begin{eqnarray*}
&&(\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g})[x,y,z]_{\mathfrak g}\\
&=&\psi^{-1}_{\mathfrak h}\Big(\rho(\psi_{\mathfrak g}(x),\psi_{\mathfrak g}(y))H(\psi_{\mathfrak g}(z))+\rho(\psi_{\mathfrak g}(y),\psi_{\mathfrak g}(z))H(\psi_{\mathfrak g}(x))
+\rho(\psi_{\mathfrak g}(z),\psi_{\mathfrak g}(x))H(\psi_{\mathfrak g}(y))\\
&&+[H(\psi_{\mathfrak g}(x)),H(\psi_{\mathfrak g}(y)),H(\psi_{\mathfrak g}(z))]_{\mathfrak h}\Big)\\
&=&\rho(x,y)(\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}(z))+\rho(y,z)(\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}(x))+\rho(z,x)(\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}(y))\\
&&+[\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}(x),\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}(y),\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}(z)]_{\mathfrak h}, \end{eqnarray*} which implies that $\psi^{-1}_{\mathfrak h}\circ H\circ\psi_{\mathfrak g}$ is a crossed homomorphism. \end{proof}
At the end of this section, we establish the relationship between crossed homomorphisms and relative Rota-Baxter operators of weight $1$ on $3$-Lie algebras.
Recall from \cite{HouSZ} that a linear map $T: \mathfrak h\rightarrow\mathfrak g$ is called a {\bf
relative Rota-Baxter operator} of weight $\lambda\in \mathbb K$ from a $3$-Lie algebra $\mathfrak h$ to a $3$-Lie algebra $\mathfrak g$ with respect to an action $\rho$ if \begin{eqnarray} \label{eq:rRB}
[Tu,Tv,Tw]_\mathfrak g =T\Big(\rho(Tu,Tv)w+\rho(Tv,Tw)u+\rho(Tw,Tu)v+\lambda[u,v,w]_\mathfrak h\Big), \end{eqnarray} for all $u, v, w\in\mathfrak h.$
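In particular, taking $\lambda=0$ in \eqref{eq:rRB} gives
\begin{eqnarray*}
[Tu,Tv,Tw]_\mathfrak g =T\Big(\rho(Tu,Tv)w+\rho(Tv,Tw)u+\rho(Tw,Tu)v\Big),\quad \forall u,v,w\in\mathfrak h,
\end{eqnarray*}
which is the defining identity of a relative Rota-Baxter operator of weight $0$ on $3$-Lie algebras.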
\begin{pro}
Let $\rho:\wedge^2\mathfrak g\to\mathfrak {gl}(\mathfrak h)$ be an action of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ on a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$. An invertible linear map $H:\mathfrak g\rightarrow\mathfrak h$ is a crossed homomorphism from the $3$-Lie algebra $\mathfrak g$ to the $3$-Lie algebra $\mathfrak h$ with respect to the action $\rho$ if and only if $H^{-1}$ is a relative Rota-Baxter operator of weight $1$ from the $3$-Lie algebra $\mathfrak h$ to the $3$-Lie algebra $\mathfrak g$ with respect to the action $\rho$. \end{pro} \begin{proof}
If an invertible linear map $H:\mathfrak g\rightarrow\mathfrak h$ is a crossed homomorphism, then for $u_1, u_2, u_3\in
\mathfrak h,$ by \eqref{eq:crossed-homo}, we have
\begin{align*}
&[H^{-1}(u_1),H^{-1}(u_2),H^{-1}(u_3)]_{\mathfrak g}\\
=&H^{-1}(H[H^{-1}(u_1),H^{-1}(u_2),H^{-1}(u_3)]_{\mathfrak g})\\
=&H^{-1}\Big(\rho(H^{-1}(u_1),H^{-1}(u_2))(u_3)+\rho(H^{-1}(u_2),H^{-1}(u_3))(u_1)+\rho(H^{-1}(u_3),H^{-1}(u_1))(u_2)+[u_1,u_2,u_3]_{\mathfrak h}\Big).
\end{align*} Therefore, $H^{-1}$ is a relative Rota-Baxter operator of weight $1$.
Conversely, suppose that $H^{-1}$ is a relative Rota-Baxter operator of weight $1$. For all $x_1, x_2, x_3\in\mathfrak g,$ write $x_i=H^{-1}(u_i),1\leq i\leq3,$ with $u_i\in \mathfrak h.$ By \eqref{eq:rRB}, we have \begin{align*}
&H[x_1,x_2,x_3]_{\mathfrak g}\\
=&H[H^{-1}(u_1),H^{-1}(u_2),H^{-1}(u_3)]_{\mathfrak g}\\
=&H(H^{-1}(\rho(H^{-1}(u_1),H^{-1}(u_2))(u_3)+\rho(H^{-1}(u_2),H^{-1}(u_3))(u_1)+\rho(H^{-1}(u_3),H^{-1}(u_1))(u_2)+[u_1,u_2,u_3]_{\mathfrak h}))\\
=&\rho(x_1,x_2)H(x_3)+\rho(x_2,x_3)H(x_1)+\rho(x_3,x_1)H(x_2)+[Hx_1,Hx_2,Hx_3]_{\mathfrak h}. \end{align*} So $H$ is a crossed homomorphism. \end{proof}
\section{Cohomologies of crossed homomorphisms on $3$-Lie algebras}\label{sec:cohomology} In this section, we define the cohomology of crossed homomorphisms on $3$-Lie algebras, and use the second cohomology group to study infinitesimal deformations of crossed homomorphisms. \subsection{Cohomologies of crossed homomorphisms} First, we recall the cohomology theory of $3$-Lie algebras.
Let $(V;\rho)$ be a representation of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$. Denote the space of $n$-cochains by $$\mathfrak C_{\mathsf{3Lie}}^{n}(\mathfrak g;V):=\mathrm{Hom} (\underbrace{\wedge^{2} \mathfrak g\otimes \cdots\otimes \wedge^{2}\mathfrak g}_{(n-1)}\wedge \mathfrak g,V),\quad(n\geq 1).$$ The coboundary operator ${\rm d}:\mathfrak C_{\mathsf{3Lie}}^{n}(\mathfrak g;V)\rightarrow \mathfrak C_{\mathsf{3Lie}}^{n+1}(\mathfrak g;V)$ is defined by \begin{eqnarray*}&& ({\rm d}f)(\mathfrak{X}_1,\cdots,\mathfrak{X}_n,x_{n+1})\\ &=&\sum_{1\leq j<k\leq n}(-1)^{j} f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{j}},\cdots,\mathfrak{X}_{k-1}, [x_j,y_j,x_k]_{\mathfrak g}\wedge y_k\\&&+x_k\wedge[x_j,y_j,y_k]_{\mathfrak g}, \mathfrak{X}_{k+1},\cdots,\mathfrak{X}_{n},x_{n+1})\\&& +\sum_{j=1}^{n}(-1)^{j}f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{j}},\cdots,\mathfrak{X}_{n}, [x_j,y_j,x_{n+1}]_{\mathfrak g})\\&& +\sum_{j=1}^{n}(-1)^{j+1}\rho(x_j,y_j)f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{j}}, \cdots,\mathfrak{X}_{n},x_{n+1})\\&& +(-1)^{n+1}\Big(\rho(y_n,x_{n+1})f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_n)+\rho(x_{n+1},x_n)f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},y_n)\Big), \end{eqnarray*} for all $\mathfrak{X}_{i}=x_{i}\wedge y_{i}\in \wedge^{2}\mathfrak g$, $i=1,2,\cdots,n$, and $x_{n+1}\in \mathfrak g.$ It was proved in \cite{Casas,Takhtajan1} that ${\rm d}\circ{\rm d}=0.$ Thus, $(\oplus_{n=1}^{+\infty}\mathfrak C_{\mathsf{3Lie}}^{n}(\mathfrak g;V),{\rm d})$ is a cochain complex.
\begin{defi} The {\bf cohomology} of the $3$-Lie algebra $\mathfrak g$ with coefficients in $V$ is the cohomology of the cochain complex $(\oplus_{n=1}^{+\infty} \mathfrak C_{\mathsf{3Lie}}^{n}(\mathfrak g;V),{\rm d})$. Denote by $\mathcal{Z}_{\mathsf{3Lie}}^{n}(\mathfrak g;V)$ and $\mathcal{B}_{\mathsf{3Lie}}^{n}(\mathfrak g;V)$ the set of $n$-cocycles and the set of $n$-coboundaries, respectively. The $n$-th cohomology group is defined by \begin{eqnarray*} \mathcal{H}_{\mathsf{3Lie}}^{n}(\mathfrak g;V)=\mathcal{Z}_{\mathsf{3Lie}}^{n}(\mathfrak g;V)/\mathcal{B}_{\mathsf{3Lie}}^{n}(\mathfrak g;V). \end{eqnarray*} \end{defi}
\begin{lem}
Let $H$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$ with respect to an action $\rho.$ Define $\rho_{H}:\wedge^2\mathfrak g\rightarrow\mathfrak {gl}(\mathfrak h)$ by
\begin{eqnarray}\label{crossed-representation}
\rho_{H}(x,y)u:=\rho(x,y)u+[Hx,Hy,u]_{\mathfrak h}, \quad \forall x,y\in\mathfrak g,u\in \mathfrak h.
\end{eqnarray}
Then $\rho_{H}$ is a representation of $\mathfrak g$ on $\mathfrak h$. \end{lem} \begin{proof} By a direct calculation using \eqref{eq:jacobi1}-\eqref{eq:crossed-homo}, for all $x_i\in \mathfrak g,1\leq i\leq 4, u\in \mathfrak h,$ we have \begin{eqnarray*} &&\Big(\rho_{H}(x_1,x_2)\rho_{H}(x_3,x_4)-\rho_{H}(x_3,x_4)\rho_{H}(x_1,x_2)\\ &&\quad -\rho_{H}([x_1,x_2,x_3]_{\mathfrak g},x_4)+\rho_{H}([x_1,x_2,x_4]_{\mathfrak g},x_3)\Big)(u)\\ &=&\rho(x_1,x_2)\rho(x_3,x_4)u+[Hx_1,Hx_2,\rho(x_3,x_4)u]_{\mathfrak h}+\rho(x_1,x_2)[Hx_3,Hx_4,u]_{\mathfrak h}\\ &&+[Hx_1,Hx_2,[Hx_3,Hx_4,u]_{\mathfrak h}]_{\mathfrak h}-\rho(x_3,x_4)\rho(x_1,x_2)u-[Hx_3,Hx_4,\rho(x_1,x_2)u]_{\mathfrak h}\\ &&-\rho(x_3,x_4)[Hx_1,Hx_2,u]_{\mathfrak h}-[Hx_3,Hx_4,[Hx_1,Hx_2,u]_{\mathfrak h}]_{\mathfrak h}-\rho([x_1,x_2,x_3]_{\mathfrak g},x_4)u\\ &&-[H[x_1,x_2,x_3]_{\mathfrak g},Hx_4,u]_{\mathfrak h}+\rho([x_1,x_2,x_4]_{\mathfrak g},x_3)u+[H[x_1,x_2,x_4]_{\mathfrak g},Hx_3,u]_{\mathfrak h}\\ &=&0, \end{eqnarray*} and \begin{eqnarray*} &&\Big(\rho_{H}([x_1,x_2,x_3]_{\mathfrak g},x_4)-\rho_{H}(x_1,x_2)\rho_{H}(x_3,x_4)\\ &&\quad -\rho_{H}(x_2,x_3)\rho_{H}(x_1,x_4)-\rho_{H}(x_3,x_1)\rho_{H}(x_2,x_4)\Big)(u)\\ &=&\rho([x_1,x_2,x_3]_{\mathfrak g},x_4)u+[H[x_1,x_2,x_3]_{\mathfrak g},Hx_4,u]_{\mathfrak h}-\rho(x_1,x_2)\rho(x_3,x_4)u\\ &&-[Hx_1,Hx_2,\rho(x_3,x_4)u]_{\mathfrak h}-\rho(x_1,x_2)[Hx_3,Hx_4,u]_{\mathfrak h}-[Hx_1,Hx_2,[Hx_3,Hx_4,u]_{\mathfrak h}]_{\mathfrak h}\\ &&-\rho(x_2,x_3)\rho(x_1,x_4)u-[Hx_2,Hx_3,\rho(x_1,x_4)u]_{\mathfrak h}-\rho(x_2,x_3)[Hx_1,Hx_4,u]_{\mathfrak h}\\ &&-[Hx_2,Hx_3,[Hx_1,Hx_4,u]_{\mathfrak h}]_{\mathfrak h}-\rho(x_3,x_1)\rho(x_2,x_4)u-[Hx_3,Hx_1,\rho(x_2,x_4)u]_{\mathfrak h}\\ &&-\rho(x_3,x_1)[Hx_2,Hx_4,u]_{\mathfrak h}-[Hx_3,Hx_1,[Hx_2,Hx_4,u]_{\mathfrak h}]_{\mathfrak h}\\ &=&0. \end{eqnarray*} Therefore, we deduce that $(\mathfrak h;\rho_{H})$ is a representation of the $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$. \end{proof}
Let ${\rm d}_{\rho_{H}}:\mathfrak C_{\mathsf{3Lie}}^{n}(\mathfrak g;\mathfrak h)\rightarrow \mathfrak C_{\mathsf{3Lie}}^{n+1}(\mathfrak g;\mathfrak h),(n\geq1)$ be the corresponding coboundary operator of the $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ with coefficients in the representation $(\mathfrak h;\rho_{H})$. More precisely, for all $f\in \mathrm{Hom} (\underbrace{\wedge^{2} \mathfrak g\otimes \cdots\otimes \wedge^{2}\mathfrak g}_{(n-1)}\wedge \mathfrak g,\mathfrak h)$, $\mathfrak{X}_i=x_i\wedge y_i\in \wedge^2\mathfrak g,~ i=1,2,\cdots,n$ and $x_{n+1}\in \mathfrak g,$ we have \begin{eqnarray*} &&({\rm d}_{\rho_{H}}f)(\mathfrak{X}_1,\cdots,\mathfrak{X}_n,x_{n+1})\\ &=&\sum_{1\leq i<k\leq n}(-1)^{i} f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}},\cdots,\mathfrak{X}_{k-1}, [x_i,y_i,x_k]_{\mathfrak g}\wedge y_k\\&&+x_k\wedge[x_i,y_i,y_k]_{\mathfrak g}, \mathfrak{X}_{k+1},\cdots,\mathfrak{X}_{n},x_{n+1})\\&& +\sum_{i=1}^{n}(-1)^{i}f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}},\cdots,\mathfrak{X}_{n}, [x_i,y_i,x_{n+1}]_{\mathfrak g})\\&& +\sum_{i=1}^{n}(-1)^{i+1}\rho(x_i,y_i)f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}}, \cdots,\mathfrak{X}_{n},x_{n+1})\\&& +\sum_{i=1}^{n}(-1)^{i+1}[Hx_i,Hy_i,f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}}, \cdots,\mathfrak{X}_{n},x_{n+1})]_{\mathfrak h}\\&& +(-1)^{n+1}\Big(\rho(y_n,x_{n+1})f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_n)+[Hy_n,Hx_{n+1},f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_n)]_{\mathfrak h} \Big)\\&& +(-1)^{n+1}\Big(\rho(x_{n+1},x_n)f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},y_n)+[Hx_{n+1},Hx_n,f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},y_n)]_{\mathfrak h}\Big). \end{eqnarray*}
It is obvious that $f\in \mathrm{Hom}(\mathfrak g,\mathfrak h)$ is closed if and only if \begin{eqnarray*} f([x_1,x_2,x_3]_{\mathfrak g})&=&\rho(x_1,x_2)f(x_3)+[Hx_1,Hx_2,f(x_3)]_{\mathfrak h}+\rho(x_2,x_3)f(x_1)\\ &&+[Hx_2,Hx_3,f(x_1)]_{\mathfrak h}+\rho(x_3,x_1)f(x_2)+[Hx_3,Hx_1,f(x_2)]_{\mathfrak h},\quad \forall x_1,x_2,x_3\in \mathfrak g. \end{eqnarray*}
Define $\delta:\wedge^2\mathfrak g\rightarrow\mathrm{Hom}(\mathfrak g,\mathfrak h)$ by \begin{eqnarray*} \delta(\mathfrak{X})z=\rho(y,z)H(x)+\rho(z,x)H(y)+[Hx,Hy,Hz]_{\mathfrak h}, \quad \forall\mathfrak{X}=x\wedge y\in\wedge^2\mathfrak g, z\in \mathfrak g. \end{eqnarray*} \begin{pro}
Let $H$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$ with respect to an action $\rho.$ Then $\delta(\mathfrak{X})$ is a $1$-cocycle of the $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ with coefficients in $(\mathfrak h;\rho_{H}).$ \end{pro} \begin{proof} For all $x_1,x_2,x_3\in \mathfrak g,$ by \eqref{eq:jacobi1}-\eqref{eq:crossed-homo}, we have \begin{eqnarray*} &&({\rm d}_{\rho_{H}}\delta(\mathfrak{X}))(x_1,x_2,x_3)\\ &=&\rho(x_1,x_2)\delta(\mathfrak{X})(x_3)+\rho(x_2,x_3)\delta(\mathfrak{X})(x_1)+\rho(x_3,x_1)\delta(\mathfrak{X})(x_2)-\delta(\mathfrak{X})([x_1,x_2,x_3]_{\mathfrak g})\\ &&+[Hx_1,Hx_2,\delta(\mathfrak{X})x_3]_{\mathfrak h}+[\delta(\mathfrak{X})x_1,Hx_2,Hx_3]_{\mathfrak h}+[Hx_1,\delta(\mathfrak{X})x_2,Hx_3]_{\mathfrak h}\\ &=&\rho(x_1,x_2)\Big([Hx,Hy,Hx_3]_{\mathfrak h}+\rho(y,x_3)(Hx)+\rho(x_3,x)(Hy)\Big)\\ &&+\rho(x_2,x_3)\Big([Hx,Hy,Hx_1]_{\mathfrak h}+\rho(y,x_1)(Hx)+\rho(x_1,x)(Hy)\Big)\\ &&+\rho(x_3,x_1)\Big([Hx,Hy,Hx_2]_{\mathfrak h}+\rho(y,x_2)(Hx)+\rho(x_2,x)(Hy)\Big)\\ &&+[Hx_1,Hx_2,[Hx,Hy,Hx_3]_{\mathfrak h}]_{\mathfrak h}+[Hx_1,Hx_2,\rho(y,x_3)(Hx)]_{\mathfrak h}+[Hx_1,Hx_2,\rho(x_3,x)(Hy)]_{\mathfrak h}\\ &&+[Hx_2,Hx_3,[Hx,Hy,Hx_1]_{\mathfrak h}]_{\mathfrak h}+[Hx_2,Hx_3,\rho(y,x_1)(Hx)]_{\mathfrak h}+[Hx_2,Hx_3,\rho(x_1,x)(Hy)]_{\mathfrak h}\\ &&+[Hx_3,Hx_1,[Hx,Hy,Hx_2]_{\mathfrak h}]_{\mathfrak h}+[Hx_3,Hx_1,\rho(y,x_2)(Hx)]_{\mathfrak h}+[Hx_3,Hx_1,\rho(x_2,x)(Hy)]_{\mathfrak h}\\ &&-[Hx,Hy,H[x_1,x_2,x_3]_{\mathfrak g}]_{\mathfrak h}-\rho(y,[x_1,x_2,x_3]_{\mathfrak g})(Hx)-\rho([x_1,x_2,x_3]_{\mathfrak g},x)(Hy)\\ &=&0. \end{eqnarray*} Thus, we deduce that ${\rm d}_{\rho_{H}}\delta(\mathfrak{X})=0.$ The proof is finished. \end{proof} We now give the cohomology of crossed homomorphisms on $3$-Lie algebras.
Let $H$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$ with respect to an action $\rho.$ Define the set of $n$-cochains by \begin{eqnarray}\label{crossed-cochain} \mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h)= \left\{\begin{array}{rcl} {}\mathfrak C_{\mathsf{3Lie}}^{n-1}(\mathfrak g;\mathfrak h),\quad n\geq 2,\\ {}\mathfrak g\wedge\mathfrak g,\quad n=1. \end{array}\right. \end{eqnarray}
Define ${\partial}:\mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h)\rightarrow \mathfrak C_{H}^{n+1}(\mathfrak g;\mathfrak h)$ by \begin{eqnarray}\label{crossed-cohomology} {\partial}= \left\{\begin{array}{rcl} {}{\rm d}_{\rho_{H}},\quad n\geq 2,\\ {}\delta,\quad n=1. \end{array}\right. \end{eqnarray} Then $(\mathop{\oplus}\limits_{n=1}^{\infty} \mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h),\partial)$ is a cochain complex. Denote the set of $n$-cocycles by $\mathcal{Z}^n_H(\mathfrak g;\mathfrak h),$ the set of $n$-coboundaries by $\mathcal{B}^n_H(\mathfrak g;\mathfrak h)$ and $n$-th cohomology group by \begin{eqnarray}\label{crossed-cohomology-group} \mathcal{H}^n_H(\mathfrak g;\mathfrak h)=\mathcal{Z}_H^n(\mathfrak g;\mathfrak h)/\mathcal{B}_H^n(\mathfrak g;\mathfrak h),\quad n\geq1. \end{eqnarray}
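In low degrees, the cochain complex reads
\begin{eqnarray*}
\mathfrak g\wedge\mathfrak g\stackrel{\delta}{\longrightarrow}\mathrm{Hom}(\mathfrak g,\mathfrak h)\stackrel{{\rm d}_{\rho_{H}}}{\longrightarrow}\mathfrak C_{\mathsf{3Lie}}^{2}(\mathfrak g;\mathfrak h)\stackrel{{\rm d}_{\rho_{H}}}{\longrightarrow}\cdots,
\end{eqnarray*}
and the identity $\partial\circ\partial=0$ in the lowest degree is precisely the statement that $\delta(\mathfrak{X})$ is a $1$-cocycle with coefficients in $(\mathfrak h;\rho_{H})$.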
\begin{defi} The cohomology of the cochain complex $(\mathop{\oplus}\limits_{n=1}^{\infty} \mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h),\partial)$ is taken to be the {\bf cohomology for the crossed homomorphism $H$}. \end{defi}
At the end of this subsection, we show that certain homomorphisms between crossed homomorphisms induce homomorphisms between the corresponding cohomology groups. Let $H$ and $H'$ be two crossed homomorphisms from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$ with respect to an action $\rho.$ Let $(\psi_\mathfrak g, \psi_\mathfrak h)$ be a homomorphism from $H$ to $H'$ in which $\psi_\mathfrak g$ is invertible. Define a map $p:\mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h)\rightarrow \mathfrak C_{H'}^{n}(\mathfrak g;\mathfrak h)$ by \begin{eqnarray*} p(\omega)(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-2},x_{n-1})=\psi_{\mathfrak h}\Bigg(\omega\Big(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(x_{n-1})\Big)\Bigg), \end{eqnarray*} for all $\omega\in \mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h),\mathfrak{X}_i=x_i\wedge y_i\in \wedge^2\mathfrak g,~ i=1,2,\cdots,n-2$ and $x_{n-1}\in \mathfrak g.$
\begin{thm} With the above notations, $p$ is a cochain map from the cochain complex $(\mathop{\oplus}\limits_{n=2}^{\infty} \mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h),{\rm d}_{\rho_{H}})$ to the cochain complex $(\mathop{\oplus}\limits_{n=2}^{\infty} \mathfrak C_{H'}^{n}(\mathfrak g;\mathfrak h),{\rm d}_{\rho_{H'}})$. Consequently, it induces a homomorphism $p_*$ from the cohomology group $\mathcal{H}^{n}_H(\mathfrak g;\mathfrak h)$ to $\mathcal{H}^{n}_{H'}(\mathfrak g;\mathfrak h)$. \end{thm}
\begin{proof} For all $\omega\in \mathfrak C_{H}^{n}(\mathfrak g;\mathfrak h)$, by \eqref{condition-1}-\eqref{condition-2} and \eqref{crossed-cochain}-\eqref{crossed-cohomology-group}, we have \begin{align*} &{\rm d}_{\rho_{H'}}(p(\omega))(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_{n})\\ =&\sum_{1\leq i<k\leq n-1}(-1)^{i} p(\omega)(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}},\cdots,\mathfrak{X}_{k-1}, [x_i,y_i,x_k]_{\mathfrak g}\wedge y_k\\ &+x_k\wedge[x_i,y_i,y_k]_{\mathfrak g}, \mathfrak{X}_{k+1},\cdots,\mathfrak{X}_{n-1},x_{n})\\ &+\sum_{i=1}^{n-1}(-1)^{i}p(\omega)(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}},\cdots,\mathfrak{X}_{n}, [x_i,y_i,x_{n}]_{\mathfrak g})\\& +\sum_{i=1}^{n-1}(-1)^{i+1}\rho(x_i,y_i)p(\omega)(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}}, \cdots,\mathfrak{X}_{n-1},x_{n})\\& +\sum_{i=1}^{n-1}(-1)^{i+1}[H'x_i,H'y_i,p(\omega)(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}}, \cdots,\mathfrak{X}_{n-1},x_{n})]_{\mathfrak h}\\& +(-1)^{n}\Big(\rho(y_{n-1},x_{n})p(\omega)(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-2},x_{n-1})+[H'y_{n-1},H'x_{n},p(\omega)(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-2},x_{n-1})]_{\mathfrak h} \Big)\\& +(-1)^{n}\Big(\rho(x_{n},x_{n-1})p(\omega)(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-2},y_{n-1})+[H'x_{n},H'x_{n-1},p(\omega)(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-2},y_{n-1})]_{\mathfrak h}\Big)\\ =&\sum_{1\leq i<k\leq n-1}(-1)^i\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\psi^{-1}_{\mathfrak g}(x_{k-1})\wedge\psi^{-1}_{\mathfrak g}(y_{k-1}),\\ &\psi^{-1}_{\mathfrak g}([x_i,y_i,x_k]_{\mathfrak g})\wedge \psi^{-1}_{\mathfrak g}(y_k)+\psi^{-1}_{\mathfrak g}(x_k)\wedge\psi^{-1}_{\mathfrak g}([x_i,y_i,y_k]_{\mathfrak g}),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n}))\Big)\\ &+\sum_{i=1}^{n-1}(-1)^{i}\psi_{\mathfrak 
h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots, \hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\\& \psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}([x_i,y_i,x_{n}]_{\mathfrak g}))\Big)+\sum_{i=1}^{n-1}(-1)^{i+1}\rho(x_i,y_i)\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\\ &\hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n}))\Big)\\& +\sum_{i=1}^{n-1}(-1)^{i+1}[H'x_i,H'y_i,\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\\ &\hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n}))\Big)]_{\mathfrak h}\\& +(-1)^{n}\Big(\rho(y_{n-1},x_{n})\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(x_{n-1}))\Big)\\ &+[H'y_{n-1},H'x_{n},\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(x_{n-1}))\Big)]_{\mathfrak h} \Big)\\& +(-1)^{n}\Big(\rho(x_{n},x_{n-1})\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(y_{n-1}))\Big)\\ &+[H'x_{n},H'x_{n-1},\psi_{\mathfrak h}\Big((\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(y_{n-1}))\Big)]_{\mathfrak h}\Big)\\ 
=&\sum_{1\leq i<k\leq n-1}(-1)^i\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\psi^{-1}_{\mathfrak g}(x_{k-1})\wedge\psi^{-1}_{\mathfrak g}(y_{k-1}),\\ &([\psi^{-1}_{\mathfrak g}(x_i),\psi^{-1}_{\mathfrak g}(y_i),\psi^{-1}_{\mathfrak g}(x_k)]_{\mathfrak g})\wedge \psi^{-1}_{\mathfrak g}(y_k)+\psi^{-1}_{\mathfrak g}(x_k)\wedge([\psi^{-1}_{\mathfrak g}(x_i),\psi^{-1}_{\mathfrak g}(y_i),\psi^{-1}_{\mathfrak g}(y_k)]_{\mathfrak g}),\cdots,\\& \psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n}))\Big) +\sum_{i=1}^{n-1}(-1)^{i}\psi_{\mathfrak h}\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots, \hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\\& \psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),([\psi^{-1}_{\mathfrak g}(x_i),\psi^{-1}_{\mathfrak g}(y_i),\psi^{-1}_{\mathfrak g}(x_{n})]_{\mathfrak g}))\Big)+\sum_{i=1}^{n-1}(-1)^{i+1}\psi_{\mathfrak h}\rho(\psi^{-1}_{\mathfrak g}(x_i),\psi^{-1}_{\mathfrak g}(y_i))\\ &\Big(\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n}))\Big)\\& +\sum_{i=1}^{n-1}(-1)^{i+1}\psi_{\mathfrak h}[H(\psi^{-1}_{\mathfrak g}(x_i)),H(\psi^{-1}_{\mathfrak g}(y_i)),\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\\ &\hat{\psi^{-1}_{\mathfrak g}(x_i)}\wedge\hat{\psi^{-1}_{\mathfrak g}(y_i)},\cdots,\psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n}))]_{\mathfrak h}\\& +(-1)^{n}\psi_{\mathfrak h}\Big(\rho(\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n}))\omega(\psi^{-1}_{\mathfrak 
g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(x_{n-1}))\\ &+[H\psi^{-1}_{\mathfrak g}(y_{n-1}),H\psi^{-1}_{\mathfrak g}(x_{n}),\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(x_{n-1})) ]_{\mathfrak h} \Big)\\& +(-1)^{n}\psi_{\mathfrak h}\Big(\rho(\psi^{-1}_{\mathfrak g}(x_{n}),\psi^{-1}_{\mathfrak g}(x_{n-1}))\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(y_{n-1}))\\ &+[H\psi^{-1}_{\mathfrak g}(x_{n}),H\psi^{-1}_{\mathfrak g}(x_{n-1}),\omega(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-2})\wedge\psi^{-1}_{\mathfrak g}(y_{n-2}),\psi^{-1}_{\mathfrak g}(y_{n-1}))]_{\mathfrak h}\Big)\\ =&\psi_{\mathfrak h}({\rm d}_{\rho_{H}}\omega)\Big(\psi^{-1}_{\mathfrak g}(x_1)\wedge\psi^{-1}_{\mathfrak g}(y_1),\cdots,\psi^{-1}_{\mathfrak g}(x_{n-1})\wedge\psi^{-1}_{\mathfrak g}(y_{n-1}),\psi^{-1}_{\mathfrak g}(x_{n})\Big)\\ =&p({\rm d}_{\rho_{H}}\omega)(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_{n}), \end{align*} where $\mathfrak{X}_i=x_i\wedge y_i\in \wedge^2\mathfrak g,~ i=1,2,\cdots,n-1$ and $x_{n}\in \mathfrak g.$ Thus $p$ is a cochain map, and induces a homomorphism $p_*$ from the cohomology group $\mathcal{H}^{n}_H(\mathfrak g;\mathfrak h)$ to $\mathcal{H}^{n}_{H'}(\mathfrak g;\mathfrak h)$. \end{proof} \subsection{Infinitesimal deformations of crossed homomorphisms}\label{sec:defor}
In this section, we use the established cohomology theory to characterize infinitesimal deformations of crossed homomorphisms on 3-Lie algebras.
Let $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ be a $3$-Lie algebra over $\mathbb K$ and $\mathbb K[t]$ be the polynomial ring in one variable $t.$ Then $\mathbb K[t]/(t^2)\otimes_{\mathbb K}\mathfrak g$ is an $\mathbb K[t]/(t^2)$-module. Moreover, $\mathbb K[t]/(t^2)\otimes_{\mathbb K}\mathfrak g$ is a $3$-Lie algebra over $\mathbb K[t]/(t^2)$, where the $3$-Lie algebra structure is defined by \begin{eqnarray*} [f_1(t)\otimes_{\mathbb K} x_1,f_2(t)\otimes_{\mathbb K} x_2,f_3(t)\otimes_{\mathbb K} x_3]= f_1(t)f_2(t) f_3(t)\otimes_{\mathbb K}[x_1,x_2,x_3]_{\mathfrak g}, \end{eqnarray*} for $f_{i}(t)\in \mathbb K[t]/(t^2),1\leq i\leq 3,x_1,x_2,x_3\in \mathfrak g.$
In the sequel, all the vector spaces are finite dimensional vector spaces over $\mathbb K$ and we denote $f(t)\otimes_{\mathbb K} x$ by $f(t)x,$ where $f(t)\in \mathbb K[t]/(t^2).$
\begin{defi} Let $H:\mathfrak g\rightarrow \mathfrak h$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to an action $\rho$. Let $\mathfrak H:\mathfrak g\rightarrow\mathfrak h$ be a linear map. If $H_t=H+t\mathfrak H$ is still a crossed homomorphism
modulo $t^2$, then we say that $\mathfrak H$ generates an {\bf infinitesimal deformation} of the crossed homomorphism $H$. \end{defi}
Since $H_t=H+t\mathfrak H$ is a crossed homomorphism modulo $t^2$, for any $x,y,z\in \mathfrak g,$ we have \begin{eqnarray} \label{equivalent-1}\qquad\mathfrak H[x,y,z]_{\mathfrak g}&=&\rho(x,y)\mathfrak H(z)+\rho(y,z)\mathfrak H(x)+\rho(z,x)\mathfrak H(y)\\ \nonumber &&+[\mathfrak H(x),Hy,Hz]_{\mathfrak h}+[Hx,\mathfrak H(y),Hz]_{\mathfrak h}+[Hx,Hy,\mathfrak H(z)]_{\mathfrak h}.
\end{eqnarray}
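To see where \eqref{equivalent-1} comes from, substitute $H_t=H+t\mathfrak H$ into the crossed homomorphism identity and work modulo $t^2$ (this expansion is implicit in the text):
\begin{align*}
(H+t\mathfrak H)[x,y,z]_{\mathfrak g}=&\rho(x,y)(H+t\mathfrak H)(z)+\rho(y,z)(H+t\mathfrak H)(x)+\rho(z,x)(H+t\mathfrak H)(y)\\
&+[(H+t\mathfrak H)(x),(H+t\mathfrak H)(y),(H+t\mathfrak H)(z)]_{\mathfrak h} \pmod{t^2}.
\end{align*}
The terms of order $t^0$ recover the statement that $H$ itself is a crossed homomorphism, and comparing the terms of order $t^1$ gives exactly \eqref{equivalent-1}.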
Note that \eqref{equivalent-1} means that $\mathfrak H$ is a $2$-cocycle of the crossed homomorphism $H$. Hence, $\mathfrak H$ defines a cohomology class in $\mathcal{H}^2_{H}(\mathfrak g;\mathfrak h)$.
\begin{defi} Let $H$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to an action $\rho$. Two one-parameter infinitesimal deformations $H^1_{t}=H+t\mathfrak H_{1}$ and $H^2_{t}=H+t\mathfrak H_{2}$ are said to be {\bf equivalent} if there exists $\mathfrak{X}\in\mathfrak g\wedge\mathfrak g$ such that $({\rm{Id}}_{\mathfrak g}+t\mathrm{ad}_{\mathfrak{X}},{\rm{Id}}_{\mathfrak h}+t\rho(\mathfrak{X}))$ is a homomorphism modulo $t^2$ from $H^1_{t}$ to $H^2_{t}$. In particular, an infinitesimal deformation $H_{t}=H+t\mathfrak H_{1}$ of a crossed homomorphism $H$ is said to be {\bf trivial} if there exists $\mathfrak{X}\in \mathfrak g\wedge\mathfrak g$ such that $({\rm{Id}}_{\mathfrak g}+t\mathrm{ad}_{\mathfrak{X}},{\rm{Id}}_{\mathfrak h}+t\rho(\mathfrak{X}))$ is a homomorphism modulo $t^2$ from $H_{t}$ to $H.$ \end{defi}
Let $({\rm{Id}}_{\mathfrak g}+t\mathrm{ad}_{\mathfrak{X}},{\rm{Id}}_{\mathfrak h}+t\rho(\mathfrak{X}))$ be a homomorphism modulo $t^2$ from $H^1_{t}$ to $H^2_{t}.$ By \eqref{condition-1}, we get \begin{equation*} ({\rm{Id}}_\mathfrak h+t\rho(\mathfrak{X}))(H+t\mathfrak H_1)(z)=(H+t\mathfrak H_2)({\rm{Id}}_{\mathfrak g}+t\mathrm{ad}_{\mathfrak{X}})(z),\quad \forall\mathfrak{X}=x\wedge y\in\wedge^2\mathfrak g, z\in \mathfrak g, \end{equation*} which implies \begin{eqnarray}\label{Nijenhuis-element-4} \mathfrak H_1(z)-\mathfrak H_2(z)&=&\rho(y,z)(Hx)+\rho(z,x)(Hy)+[Hx,Hy,Hz]_{\mathfrak h}. \end{eqnarray}
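In more detail, comparing the coefficients of $t$ on both sides of the last displayed equation (a step the text leaves implicit, using $\rho(\mathfrak{X})=\rho(x,y)$ and $\mathrm{ad}_{\mathfrak{X}}z=[x,y,z]_{\mathfrak g}$) gives
\begin{eqnarray*}
\rho(x,y)(Hz)+\mathfrak H_1(z)&=&H[x,y,z]_{\mathfrak g}+\mathfrak H_2(z)\\
&=&\rho(x,y)(Hz)+\rho(y,z)(Hx)+\rho(z,x)(Hy)+[Hx,Hy,Hz]_{\mathfrak h}+\mathfrak H_2(z),
\end{eqnarray*}
where the second equality uses that $H$ is a crossed homomorphism; cancelling $\rho(x,y)(Hz)$ yields \eqref{Nijenhuis-element-4}.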
Now we are ready to give the main result in this section. \begin{thm} Let $H$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to an action $\rho$. If two one-parameter infinitesimal deformations $H^1_{t}=H+t\mathfrak H_{1}$ and $H^2_{t}=H+t\mathfrak H_{2}$ are equivalent, then $\mathfrak H_{1}$ and $\mathfrak H_{2}$
are in the same cohomology class in $\mathcal{H}^2_{H}(\mathfrak g;\mathfrak h)$. \end{thm} \begin{proof}
It is easy to see from the condition \eqref{Nijenhuis-element-4} that \begin{eqnarray*}
\mathfrak H_1(z)&=& \mathfrak H_2(z)+(\partial\mathfrak{X})(z),\quad \forall z\in \mathfrak g, \end{eqnarray*} which implies that $\mathfrak H_1$ and $\mathfrak H_2$ are in the same cohomology class. \end{proof}
\section{Maurer-Cartan characterization of crossed homomorphisms on $3$-Lie algebras}\label{sec:four} In this section, we construct a suitable $L_{\infty}$-algebra which characterizes crossed homomorphisms on $3$-Lie algebras as Maurer-Cartan elements. Then we construct a twisted $L_{\infty}$-algebra that controls deformations of crossed homomorphisms. \begin{defi} An {\em $L_\infty$-algebra} is a $\mathbb Z$-graded vector space $\mathfrak g=\oplus_{k\in\mathbb Z}\mathfrak g^k$ equipped with a collection $(k\ge 1)$ of linear maps $l_k:\otimes^k\mathfrak g\lon\mathfrak g$ of degree $1$ with the property that, for any homogeneous elements $x_1,\cdots,x_n\in \mathfrak g$, we have \begin{itemize}\item[\rm(i)] {\em (graded symmetry)} for every $\sigma\in\mathbb S_{n}$, \begin{eqnarray*} l_n(x_{\sigma(1)},\cdots,x_{\sigma(n-1)},x_{\sigma(n)})=\varepsilon(\sigma)l_n(x_1,\cdots,x_{n-1},x_n), \end{eqnarray*} \item[\rm(ii)] {\em (generalized Jacobi Identity)} for all $n\ge 1$, \begin{eqnarray*}\label{sh-Lie} \sum_{i=1}^{n}\sum_{\sigma\in \mathbb S_{(i,n-i)} }\varepsilon(\sigma)l_{n-i+1}(l_i(x_{\sigma(1)},\cdots,x_{\sigma(i)}),x_{\sigma(i+1)},\cdots,x_{\sigma(n)})=0. \end{eqnarray*} \end{itemize} \end{defi}
\begin{defi}
A {\bf Maurer-Cartan element} of an $L_\infty$-algebra $(\mathfrak g=\oplus_{k\in\mathbb Z}\mathfrak g^k,\{l_i\}_{i=1}^{+\infty})$ is an element $\alpha\in \mathfrak g^0$ satisfying the Maurer-Cartan equation \begin{eqnarray}\label{MC-equationL} \sum_{n=1}^{+\infty} \frac{1}{n!}l_n(\alpha,\cdots,\alpha)=0. \end{eqnarray} \end{defi} Let $\alpha$ be a Maurer-Cartan element of an $L_\infty$-algebra $(\mathfrak g,\{l_i\}_{i=1}^{+\infty})$. For all $k\geq1$ and $x_1,\cdots,x_k\in \mathfrak g,$ define a series of linear maps $l_k^\alpha:\otimes^k\mathfrak g\lon\mathfrak g$ of degree $1$ by \begin{eqnarray}
l^{\alpha}_{k}(x_1,\cdots,x_k)=\sum^{+\infty}_{n=0}\frac{1}{n!}l_{n+k}(\underbrace{\alpha,\cdots,\alpha}_n,x_1,\cdots,x_k). \end{eqnarray}
\begin{thm}{\rm (\cite{Getzler})}\label{thm:twist} With the above notations, $(\mathfrak g,\{l^{\alpha}_i\}_{i=1}^{+\infty})$ is an $L_{\infty}$-algebra, obtained from the $L_\infty$-algebra $(\mathfrak g,\{l_i\}_{i=1}^{+\infty})$ by twisting with the Maurer-Cartan element $\alpha$. Moreover, $\alpha+\alpha'$ is a Maurer-Cartan element of $(\mathfrak g,\{l_i\}_{i=1}^{+\infty})$ if and only if $\alpha'$ is a Maurer-Cartan element of the twisted $L_{\infty}$-algebra $(\mathfrak g,\{l^{\alpha}_i\}_{i=1}^{+\infty})$. \end{thm}
In \cite{Vo}, Th. Voronov developed the theory of higher derived brackets, which is a useful tool to construct explicit $L_\infty$-algebras. \begin{defi}{\rm (\cite{Vo})} A {\bf $V$-data} consists of a quadruple $(L,F,\mathcal{P},\Delta)$, where \begin{itemize} \item[$\bullet$] $(L,[\cdot,\cdot])$ is a graded Lie algebra, \item[$\bullet$] $F$ is an abelian graded Lie subalgebra of $(L,[\cdot,\cdot])$, \item[$\bullet$] $\mathcal{P}:L\lon L$ is a projection, that is $\mathcal{P}\circ \mathcal{P}=\mathcal{P}$, whose image is $F$ and kernel is a graded Lie subalgebra of $(L,[\cdot,\cdot])$, \item[$\bullet$] $\Delta$ is an element in $ \ker(\mathcal{P})^1$ such that $[\Delta,\Delta]=0$. \end{itemize} \end{defi}
\begin{thm}{\rm (\cite{Vo})}\label{thm:db} Let $(L,F,\mathcal{P},\Delta)$ be a $V$-data. Then $(F,\{{l_k}\}_{k=1}^{+\infty})$ is an $L_\infty$-algebra, where \begin{eqnarray}\label{V-shla} l_k(a_1,\cdots,a_k)=\mathcal{P}\underbrace{[\cdots[[}_k\Delta,a_1],a_2],\cdots,a_k],\quad\mbox{for homogeneous}~ a_1,\cdots,a_k\in F. \end{eqnarray} We call $\{{l_k}\}_{k=1}^{+\infty}$ the {\bf higher derived brackets} of the $V$-data $(L,F,\mathcal{P},\Delta)$. \end{thm}
Let $\mathfrak g$ be a vector space. We consider the graded vector space $$C^*(\mathfrak g,\mathfrak g)=\oplus_{n\ge 0}C^n(\mathfrak g,\mathfrak g)=\oplus_{n\ge 0}\mathrm{Hom} (\underbrace{\wedge^{2} \mathfrak g\otimes \cdots\otimes \wedge^{2}\mathfrak g}_{n}\wedge \mathfrak g, \mathfrak g).$$
\begin{thm}{\rm (\cite{NR bracket of n-Lie})}\label{thm:MCL} The graded vector space $C^*(\mathfrak g,\mathfrak g)$ equipped with the graded commutator bracket \begin{eqnarray}\label{3-Lie-bracket} [P,Q]_{\mathsf{3Lie}}=P{\circ}Q-(-1)^{pq}Q{\circ}P,\quad \forall~ P\in C^{p}(\mathfrak g,\mathfrak g),Q\in C^{q}(\mathfrak g,\mathfrak g), \end{eqnarray} is a graded Lie algebra, where $P{\circ}Q\in C^{p+q}(\mathfrak g,\mathfrak g)$ is defined by
\begin{small} \begin{equation*} \begin{aligned} &(P{\circ}Q)(\mathfrak{X}_1,\cdots,\mathfrak{X}_{p+q},x)\\ =&\sum_{k=1}^{p}(-1)^{(k-1)q}\sum_{\sigma\in \mathbb S(k-1,q)}(-1)^\sigma P\Big(\mathfrak{X}_{\sigma(1)},\cdots,\mathfrak{X}_{\sigma(k-1)}, Q\big(\mathfrak{X}_{\sigma(k)},\cdots,\mathfrak{X}_{\sigma(k+q-1)},x_{k+q}\big)\wedge y_{k+q},\mathfrak{X}_{k+q+1},\cdots,\mathfrak{X}_{p+q},x\Big)\\ &+\sum_{k=1}^{p}(-1)^{(k-1)q}\sum_{\sigma\in \mathbb S(k-1,q)}(-1)^\sigma P\Big(\mathfrak{X}_{\sigma(1)},\cdots,\mathfrak{X}_{\sigma(k-1)},x_{k+q}\wedge Q\big(\mathfrak{X}_{\sigma(k)},\cdots,\mathfrak{X}_{\sigma(k+q-1)},y_{k+q}\big),\mathfrak{X}_{k+q+1},\cdots,\mathfrak{X}_{p+q},x\Big)\\ &+\sum_{\sigma\in \mathbb S(p,q)}(-1)^{pq}(-1)^\sigma P\Big(\mathfrak{X}_{\sigma(1)},\cdots,\mathfrak{X}_{\sigma(p)}, Q\big(\mathfrak{X}_{\sigma(p+1)},\cdots,\mathfrak{X}_{\sigma(p+q-1)},\mathfrak{X}_{\sigma(p+q)},x\big)\Big),\\ \end{aligned} \end{equation*} \end{small}
for all $\mathfrak{X}_{i}=x_i\wedge y_i\in \wedge^2 \mathfrak g$, $~i=1,2,\cdots,p+q$ and $x\in\mathfrak g.$
Moreover, $\mu:\wedge^3\mathfrak g\longrightarrow\mathfrak g$ is a $3$-Lie bracket if and only if $[\mu,\mu]_{\mathsf{3Lie}}=0$, i.e.~$\mu$ is a Maurer-Cartan element of the graded Lie algebra $(C^*(\mathfrak g,\mathfrak g),[\cdot,\cdot]_{\mathsf{3Lie}})$.
\end{thm}
Let $\rho: \wedge^2\mathfrak g\rightarrow \mathfrak {gl}(\mathfrak h)$ be an action of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ on a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$. For convenience, we use $\pi:\wedge^3\mathfrak g\rightarrow\mathfrak g$ to indicate the $3$-Lie bracket $[\cdot,\cdot,\cdot]_\mathfrak g$ and $\mu:\wedge^3\mathfrak h\rightarrow\mathfrak h$ to indicate the $3$-Lie bracket $[\cdot,\cdot,\cdot]_\mathfrak h$. In the sequel, we use $\pi+\rho+\mu$ to denote the element in $\mathrm{Hom}(\wedge^3(\mathfrak g\oplus \mathfrak h),\mathfrak g\oplus\mathfrak h)$ given by
\begin{equation} (\pi+\rho+\mu)(x+u,y+v,z+w)=[x,y,z]_{\mathfrak g}+\rho(x,y)w+\rho(y,z)u+\rho(z,x)v+[u,v,w]_\mathfrak h, \end{equation}
for all $x,y,z\in\mathfrak g,~u,v,w\in \mathfrak h.$ Note that the right hand side is exactly the semidirect product 3-Lie algebra structure given in Proposition \ref{lem:semi}.
Therefore by Theorem \ref{thm:MCL}, we have $$[\pi+\rho+\mu,\pi+\rho+\mu]_{\mathsf{3Lie}}=0.$$
\begin{lem}\label{lem-equation-1} Let $H:\mathfrak g\rightarrow\mathfrak h$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to an action $\rho$. For all $x,y,z\in \mathfrak g, u,v,w\in \mathfrak h,$ we have \begin{eqnarray*} &&[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}(x+u,y+v,z+w)\\ &=&2\Big([Hx,Hy,w]_{\mathfrak h}+[Hx,v,Hz]_{\mathfrak h}+[u,Hy,Hz]_{\mathfrak h}\Big) \end{eqnarray*} \end{lem} \begin{proof}
It follows from straightforward computations. \end{proof}
\begin{pro} Let $\rho: \wedge^2\mathfrak g\rightarrow \mathfrak {gl}(\mathfrak h)$ be an action of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ on a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$. Then we have a $V$-data $(L,F,\mathcal{P},\Delta)$ as follows: \begin{itemize} \item[$\bullet$] the graded Lie algebra $(L,[\cdot,\cdot])$ is given by $(C^*(\mathfrak g\oplus \mathfrak h,\mathfrak g\oplus \mathfrak h),[\cdot,\cdot]_{\mathsf{3Lie}})$; \item[$\bullet$] the abelian graded Lie subalgebra $F$ is given by $$ F=C^*(\mathfrak g,\mathfrak h)=\oplus_{n\geq 0}C^{n}(\mathfrak g,\mathfrak h)=\oplus_{n\geq 0}\mathrm{Hom}(\underbrace{\wedge^{2} \mathfrak g\otimes \cdots\otimes \wedge^{2}\mathfrak g}_{n}\wedge \mathfrak g, \mathfrak h); $$ \item[$\bullet$] $\mathcal{P}:L\lon L$ is the projection onto the subspace $F$; \item[$\bullet$] $\Delta=\pi+\rho+\mu.$ \end{itemize} Consequently, we obtain an $L_\infty$-algebra $(C^*(\mathfrak g,\mathfrak h),l_1,l_3)$, where \begin{eqnarray*} l_1(P)&=&[\pi+\rho+\mu,P]_{\mathsf{3Lie}},\\ l_3(P,Q,R)&=&[[[\pi+\rho+\mu,P]_{\mathsf{3Lie}},Q]_{\mathsf{3Lie}},R]_{\mathsf{3Lie}}, \end{eqnarray*} for all $P\in C^m(\mathfrak g,\mathfrak h),Q\in C^n(\mathfrak g,\mathfrak h)$ and $R\in C^k(\mathfrak g,\mathfrak h).$ \end{pro} \begin{proof} By Theorem \ref{thm:db}, $(F,\{l_k\}^{\infty}_{k=1})$ is an $L_{\infty}$-algebra, where $l_k$ is given by \eqref{V-shla}. It is obvious that $\Delta=\pi+\rho+\mu \in \ker(\mathcal{P})^{1}$. For all $P\in C^m(\mathfrak g,\mathfrak h),Q\in C^n(\mathfrak g,\mathfrak h)$ and $R\in C^k(\mathfrak g,\mathfrak h)$, by Lemma \ref{lem-equation-1}, we have \begin{eqnarray*} &&[[\pi+\rho+\mu,P]_{\mathsf{3Lie}},Q]_{\mathsf{3Lie}}\in\ker(\mathcal{P}), \end{eqnarray*} which implies that $l_2=0$. Similarly, we have $l_k=0,$ when $k\geq 4$. 
Therefore, the graded vector space $C^*(\mathfrak g,\mathfrak h)$ carries an $L_{\infty}$-algebra structure in which only $l_1$ and $l_3$ are possibly nontrivial, all the other maps being trivial. \end{proof}
\begin{thm}\label{thm:infty-algebra} Let $\rho: \wedge^2\mathfrak g\rightarrow \mathfrak {gl}(\mathfrak h)$ be an action of a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_{\mathfrak g})$ on a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_{\mathfrak h})$. Then Maurer-Cartan elements of the $L_{\infty}$-algebra $(C^*(\mathfrak g,\mathfrak h),l_1,l_3)$ are precisely crossed homomorphisms from the $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to the $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to the action $\rho$. \end{thm}
\begin{proof} It is straightforward to deduce that \begin{eqnarray*} &&[\pi+\rho+\mu,H]_{\mathsf{3Lie}}(x,y,z)=\rho(y,z)(Hx)+\rho(z,x)(Hy)+\rho(x,y)(Hz)-H\pi(x,y,z);\\ && [[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}(x,y,z)\\ &=&6\mu(Hx,Hy,Hz). \end{eqnarray*} Let $H$ be a Maurer-Cartan element of the $L_{\infty}$-algebra $(C^*(\mathfrak g,\mathfrak h),l_1,l_3)$. We have \begin{eqnarray*} &&\sum_{n=1}^{+\infty} \frac{1}{n!}l_n(H,\cdots,H)(x,y,z)\\ &=&[\pi+\rho+\mu,H]_{\mathsf{3Lie}}(x,y,z)+\frac{1}{3!}[[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}(x,y,z)\\ &=&\mu(Hx,Hy,Hz)+\rho(y,z)(Hx)+\rho(z,x)(Hy)+\rho(x,y)(Hz)-H\pi(x,y,z)\\ &=&0, \end{eqnarray*} which implies that $H$ is a crossed homomorphism from the $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to the $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to the action $\rho$. \end{proof}
\begin{pro} Let $H$ be a crossed homomorphism from a $3$-Lie algebra $\mathfrak g$ to a $3$-Lie algebra $\mathfrak h$ with respect to an action $\rho$.
Then $C^*(\mathfrak g,\mathfrak h)$ carries a twisted $L_{\infty}$-algebra structure as follows: \begin{eqnarray} \label{twist-rota-baxter-1}l_1^{H}(P)&=&l_1(P)+\frac{1}{2}l_3(H,H,P),\\ \label{twist-rota-baxter-2}l_2^{H}(P,Q)&=&l_3(H,P,Q),\\ \label{twist-rota-baxter-3}l_3^{H}(P,Q,R)&=&l_3(P,Q,R),\\ l^H_k&=&0,\,\,\,\,k\ge4, \end{eqnarray} where $P\in C^m(\mathfrak g,\mathfrak h),Q\in C^n(\mathfrak g,\mathfrak h)$ and $R\in C^k(\mathfrak g,\mathfrak h)$. \end{pro} \begin{proof}
Since $H$ is a Maurer-Cartan element of the $L_{\infty}$-algebra $(C^*(\mathfrak g,\mathfrak h),l_1,l_3)$, by Theorem~\ref{thm:twist}, we have the conclusions. \end{proof}
The above $L_\infty$-algebra controls deformations of crossed homomorphisms on $3$-Lie algebras.
\begin{thm}\label{thm:deformation} Let $H:\mathfrak g\rightarrow\mathfrak h$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to an action $\rho$. Then for a linear map $H':\mathfrak g\rightarrow \mathfrak h$, $H+H'$ is a crossed homomorphism if and only if $H'$ is a Maurer-Cartan element of the twisted $L_\infty$-algebra $(C^*(\mathfrak g,\mathfrak h),l_1^{H},l_2^{H},l_3^{H})$, that is $H'$ satisfies the Maurer-Cartan equation: $$ l_1^{H}(H')+\frac{1}{2}l_2^{H}(H',H')+\frac{1}{3!}l_3^{H}(H',H',H')=0. $$ \end{thm} \begin{proof}
By Theorem \ref{thm:infty-algebra}, $H+H'$ is a crossed homomorphism if and only if
$$l_1(H+H')+\frac{1}{3!}l_3(H+H',H+H',H+H')=0.$$ Applying $l_1(H)+\frac{1}{3!}l_3(H,H,H)=0,$ the above condition is equivalent to
$$l_1(H')+\frac{1}{2}l_3(H,H,H')+\frac{1}{2}l_3(H,H',H')+\frac{1}{6}l_3(H',H',H')=0.$$ That is, $l_1^{H}(H')+\frac{1}{2}l_2^{H}(H',H')+\frac{1}{3!}l_3^{H}(H',H',H')=0,$
which implies that $H'$ is a Maurer-Cartan element of the twisted $L_\infty$-algebra $(C^*(\mathfrak g,\mathfrak h),l_1^{H},l_2^{H},l_3^{H})$. \end{proof}
Next we give the relationship between the coboundary operator ${\rm d}_{\rho_{H}}$
and the differential $l_1^{H}$ defined by \eqref{twist-rota-baxter-1} using the Maurer-Cartan element $H$ of the $L_{\infty}$-algebra $(C^*(\mathfrak g,\mathfrak h),l_1,l_3)$.
\begin{thm}\label{partial-to-derivation}
Let $H$ be a crossed homomorphism from a $3$-Lie algebra $(\mathfrak g,[\cdot,\cdot,\cdot]_\mathfrak g)$ to a $3$-Lie algebra $(\mathfrak h,[\cdot,\cdot,\cdot]_\mathfrak h)$ with respect to an action $\rho$. Then we have
$$ {\rm d}_{\rho_{H}} f=(-1)^{n-1}l_1^{H} f,\quad \forall f\in \mathrm{Hom}(\underbrace{\wedge^{2} \mathfrak g\otimes \cdots\otimes \wedge^{2}\mathfrak g}_{n-1}\wedge\mathfrak g, \mathfrak h),~n=1,2,\cdots.
$$ \end{thm}
\begin{proof} For all $\mathfrak{X}_i=x_i\wedge y_i\in \wedge^2 \mathfrak g,~ i=1,2,\cdots,n$ and $x_{n+1}\in \mathfrak g$, we have \begin{eqnarray*} &&l_1(f)(\mathfrak{X}_1,\cdots,\mathfrak{X}_n,x_{n+1})\\ &=&[\pi+\rho+\mu,f]_{\mathsf{3Lie}}(\mathfrak{X}_1,\cdots,\mathfrak{X}_n,x_{n+1})\\ &=&\Big((\pi+\rho+\mu)\circ f-(-1)^{n-1}f\circ(\pi+\rho+\mu)\Big)(\mathfrak{X}_1,\cdots,\mathfrak{X}_n,x_{n+1})\\ &=&(\pi+\rho+\mu)(f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_n)\wedge y_n,x_{n+1})\\ &&+(\pi+\rho+\mu)(x_n\wedge f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},y_n), x_{n+1})\\ &&+\sum_{i=1}^{n}(-1)^{n-1}(-1)^{i-1}(\pi+\rho+\mu)(\mathfrak{X}_{i},f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_i},\cdots,\mathfrak{X}_n,x_{n+1}))\\ &&-(-1)^{n-1}\sum_{k=1}^{n-1}\sum_{i=1}^{k}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{k}, (\pi+\rho+\mu)(\mathfrak{X}_{i},x_{k+1})\wedge y_{k+1},\mathfrak{X}_{k+2},\cdots,\mathfrak{X}_{n},x_{n+1}\Big)\\ &&-(-1)^{n-1}\sum_{k=1}^{n-1}\sum_{i=1}^{k}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{k},x_{k+1}\wedge (\pi+\rho+\mu)(\mathfrak{X}_{i},y_{k+1}),\mathfrak{X}_{k+2},\cdots,\mathfrak{X}_{n},x_{n+1}\Big)\\ &&-(-1)^{n-1}\sum_{i=1}^{n}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{n},(\pi+\rho+\mu)(\mathfrak{X}_{i},x_{n+1})\Big)\\ &=&\rho(y_n,x_{n+1})f(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n-1},x_n)+\rho(x_{n+1},x_n)f(\mathfrak{X}_{1},\cdots,\mathfrak{X}_{n-1},y_n)\\ &&+\sum_{i=1}^{n}(-1)^{n-1}(-1)^{i-1}\rho(x_i,y_i)f(\mathfrak{X}_{1},\cdots,\hat{\mathfrak{X}_{i}},\cdots,\mathfrak{X}_{n},x_{n+1})\\ &&-(-1)^{n-1}\sum_{k=1}^{n-1}\sum_{i=1}^{k}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{k}, \pi(x_i,y_i,x_{k+1})\wedge y_{k+1},\mathfrak{X}_{k+2},\cdots,\mathfrak{X}_{n},x_{n+1}\Big)\\ 
&&-(-1)^{n-1}\sum_{k=1}^{n-1}\sum_{i=1}^{k}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{k},x_{k+1}\wedge \pi(x_i,y_i,y_{k+1}),\mathfrak{X}_{k+2},\cdots,\mathfrak{X}_{n},x_{n+1}\Big)\\ &&-(-1)^{n-1}\sum_{i=1}^{n}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{n},\pi(x_i,y_i,x_{n+1})\Big). \end{eqnarray*} By Lemma \ref{lem-equation-1}, we have \begin{eqnarray*} &&l_3(H,H,f)(\mathfrak{X}_1,\cdots,\mathfrak{X}_n,x_{n+1})\\ &=&[[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}},f]_{\mathsf{3Lie}}(\mathfrak{X}_1,\cdots,\mathfrak{X}_n,x_{n+1})\\ &=&[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}\Big(f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_n)\wedge y_n,x_{n+1}\Big)\\ &&+[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}\Big(x_n\wedge f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},y_n),x_{n+1}\Big)\\ &&+\sum_{i=1}^{n}(-1)^{n-1}(-1)^{i-1}[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}\Big(\mathfrak{X}_i, f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{n},x_{n+1})\Big)\\ &&-(-1)^{n-1}\sum_{k=1}^{n-1}\sum_{i=1}^{k}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{k},[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}(\mathfrak{X}_{i},x_{k+1})\wedge y_{k+1}\\ &&+x_{k+1}\wedge [[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}(\mathfrak{X}_{i},y_{k+1}),\mathfrak{X}_{k+2},\cdots,\mathfrak{X}_{n},x_{n+1}\Big)\\ &&-(-1)^{n-1}\sum_{i=1}^{n}(-1)^{i+1}f\Big(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}}_{i},\cdots,\mathfrak{X}_{n},[[\pi+\rho+\mu,H]_{\mathsf{3Lie}},H]_{\mathsf{3Lie}}(\mathfrak{X}_{i},x_{n+1})\Big)\\ &=&2\Big([f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},x_n),Hy_n,Hx_{n+1}]_{\mathfrak h}+[Hx_n,f(\mathfrak{X}_1,\cdots,\mathfrak{X}_{n-1},y_n),Hx_{n+1}]_{\mathfrak h}\\ &&+\sum_{i=1}^{n}(-1)^{n-1}(-1)^{i-1}[Hx_i,Hy_i,f(\mathfrak{X}_1,\cdots,\hat{\mathfrak{X}_{i}},\cdots,\mathfrak{X}_{n},x_{n+1})]_{\mathfrak h}\Big). \end{eqnarray*} Thus, we deduce 
that ${\rm d}_{\rho_{H}} f=(-1)^{n-1}\Big(l_1(f)+\frac{1}{2}l_3(H,H,f)\Big)$, that is, ${\rm d}_{\rho_{H}} f=(-1)^{n-1}l_1^{H} f$. \end{proof}
\end{document}
\begin{document}
\title{Report on the trimestre ``Heat Kernels, Random Walks, and Analysis on Manifolds and Graphs'' at the Centre \'Emile Borel (Institut Henri Poincar\'e, Spring, 2002)}
\author{Stephen Semmes}
\date{}
\maketitle
\renewcommand{\thefootnote}{}
\footnotetext{Some historical notes, mentioned by a colleague: \'Emile Borel spoke at the opening of the author's home institution, Rice University (originally the Rice Institute) in Houston, Texas, in 1912. Borel published ``Molecular theories and mathematics'' in connection with his lectures in the Rice Institute Pamphlet, Volume I (1915), 163--193. Henri Poincar\'e was also invited by President Edgar Odell Lovett and accepted, conditioned on the state of his health, but eventually declined the invitation and subsequently passed away. Borel's paper begins with a tribute to Poincar\'e, and relates a discussion they had about the trip. Borel indicates that he would have changed his subject to an appreciation of Poincar\'e's work, except that Vito Volterra was doing exactly that. Volterra's paper appears in the same issue of the Rice Institute Pamphlet, ``Henri Poincar\'e'', pp. 133--162. Jacques Hadamard contributed ``The early scientific work of Henri Poincar\'e'' and ``The later scientific work of Henri Poincar\'e'' to the Rice Institute Pamphlet, Volume IX (1922), 111-183 and Volume XX (1933), 1--86. Hadamard makes the point in the introduction to the first paper that uses for Poincar\'e's work seemed to take 25 years to be found.}
\subsubsection*{``If it's Tuesday, this must be Belgium.''}
This is the name of a film about a group of tourists who were going from city to city a little too fast. Fortunately in this trimestre there was more time, and while the activities were numerous and extensive, one had the opportunity to delve into various topics in some detail.
To give a part of the mathematical setting, let us review a few classical matters related to calculus and partial differential equations. Fix a positive integer $n$, and let ${\bf R}^n$ be the usual $n$-dimensional Euclidean space, consisting of $n$-tuples of real numbers. If $f(x)$ is a real-valued function on ${\bf R}^n$ which is twice-continuously differentiable, say, then the \emph{Laplacian} of $f$ is denoted $\Delta f$ and defined by \begin{equation}
\Delta f = \sum_{j=1}^n \frac{\partial^2}{\partial x_j^2} f. \end{equation}
Let $f_1(x)$, $f_2(x)$ be two real-valued functions on ${\bf R}^n$ which are continuous and have compact support, so that they are both equal to $0$ outside of a bounded set. More generally, one can assume that $f_1$, $f_2$ satisfy suitable decay conditions, etc. The standard inner product of such functions is defined by \begin{equation} \label{inner product of functions}
\langle f_1, f_2 \rangle = \int_{{\bf R}^n} f_1(x) \, f_2(x) \, dx. \end{equation} There is another symmetric bilinear form which is closely related to the Laplacian, given by \begin{equation} \label{def of mathcal{E}(f_1,f_2)}
\mathcal{E}(f_1,f_2) =
\frac{1}{2} \int_{{\bf R}^n} \nabla f_1(x) \cdot \nabla f_2(x) \, dx \end{equation} when $f_1$, $f_2$ are continuously differentiable, or satisfy other appropriate regularity conditions. Here $\nabla f(x)$ denotes the gradient of $f$ at $x$, i.e., the vector with components $(\partial / \partial x_j) f(x)$, and $v \cdot w$ is the usual inner product on ${\bf R}^n$, so that $v \cdot w = \sum_{j=1}^n v_j \, w_j$. If in addition $f_1$ is twice continuously-differentiable, then \begin{equation}
\mathcal{E}(f_1,f_2) =
- \frac{1}{2} \int_{{\bf R}^n} \Delta f_1(x) \, f_2(x) \, dx. \end{equation} This follows from integration by parts.
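As a concrete sanity check, the integration-by-parts identity can be tested numerically in dimension $n=1$. The sketch below is not from the text; the grid, the truncation of the line to $[-10,10]$, and the Gaussian test functions are arbitrary choices, and the derivatives are written out by hand so that only quadrature error enters.

```python
# Numerical check of E(f1, f2) = (1/2) int f1' f2' dx
#                              = -(1/2) int f1'' f2 dx   in dimension n = 1,
# with Gaussian test functions whose derivatives are written out by hand.
import numpy as np

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

f1 = np.exp(-x**2)                   # f1(x) = e^{-x^2}
f1p = -2.0 * x * f1                  # f1'(x)
f1pp = (4.0 * x**2 - 2.0) * f1       # f1''(x), the Laplacian in one dimension

f2 = np.exp(-(x - 1.0)**2 / 2.0)     # a second, shifted Gaussian
f2p = -(x - 1.0) * f2                # f2'(x)

# The two expressions for E(f1, f2); both are Riemann sums on a fine grid,
# and the integrands decay so fast that truncating at |x| = 10 is harmless.
energy_form = 0.5 * np.sum(f1p * f2p) * dx
laplacian_form = -0.5 * np.sum(f1pp * f2) * dx
```

The two sums agree to roughly machine precision, since the boundary term in the integration by parts vanishes for rapidly decaying functions.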
The \emph{energy} $\mathcal{E}(f)$ of a function $f$ is defined by \begin{equation} \label{def of mathcal{E}(f)}
\mathcal{E}(f) = \mathcal{E}(f,f) =
\frac{1}{2} \int_{{\bf R}^n} |\nabla f(x)|^2 \, dx, \end{equation}
where $|v|$ denotes the standard Euclidean length of $v$, which is the same as saying that $|v|^2 = v \cdot v$. If $\eta(x)$ is another function on ${\bf R}^n$, then \begin{equation}
\frac{d}{ds} \mathcal{E}(f + s \, \eta) \Big\vert_{s=0} =
- \int_{{\bf R}^n} \Delta f(x) \, \eta(x) \, dx, \end{equation} under suitable conditions on $f$ and $\eta$. This is commonly rephrased as saying that the gradient of the energy functional $\mathcal{E}(f)$ is given by $-\Delta f$, where this statement implicitly uses the inner product (\ref{inner product of functions}) on functions on ${\bf R}^n$.
A function $u(x,t)$ on ${\bf R}^n \times (0,\infty)$ which is twice-continuously differentiable in $x$ and continuously differentiable in $t$ is said to satisfy the \emph{heat equation} if \begin{equation}
\frac{\partial}{\partial t} u = \Delta u. \end{equation} Under modest growth conditions on a function $f(x)$ on ${\bf R}^n$, there is a unique continuous function $u(x,t)$ on ${\bf R}^n \times [0,\infty)$ such that $u(x,0) = f(x)$, $u(x,t)$ is infinitely differentiable in $x$ and $t$ when $t > 0$, $u(x,t)$ satisfies the heat equation on ${\bf R}^n \times (0,\infty)$, and $u(x,t)$ also satisfies modest growth conditions (which can be related to those of $f$).
One way to look at the heat equation is as an ordinary differential equation in $t$, acting in vector spaces of functions of $x$. To find $u(x,t)$ given $f(x)$ as in the preceding paragraph, one might write \begin{equation} \label{u(x,t) = (exp (t Delta)f)(x)}
u(x,t) = (\exp (t\Delta)f)(x). \end{equation} In fact the Fourier transform gives a useful way to make sense of this.
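To make the exponential concrete, here is a minimal numerical sketch, not from the text: the one-dimensional periodic grid, its size, and the Gaussian initial data are arbitrary choices. It realizes (\ref{u(x,t) = (exp (t Delta)f)(x)}) by multiplying the discrete Fourier transform of $f$ by $e^{-\xi^2 t}$ and transforming back.

```python
# Spectral realization of u(., t) = exp(t Delta) f in one dimension:
# on a large periodic grid, multiply the discrete Fourier transform of f
# by e^{-xi^2 t} and transform back.
import numpy as np

n, period = 2048, 40.0
x = (np.arange(n) - n // 2) * (period / n)
xi = 2.0 * np.pi * np.fft.fftfreq(n, d=period / n)   # angular frequencies

def heat_semigroup(f, t):
    """Approximate (exp(t Delta) f)(x) by a Fourier multiplier."""
    return np.fft.ifft(np.exp(-xi**2 * t) * np.fft.fft(f)).real

s2, t = 0.5, 0.5
f = np.exp(-x**2 / (2.0 * s2))       # Gaussian initial data with variance s2
u = heat_semigroup(f, t)

# For Gaussian data the solution is exactly computable: a Gaussian again,
# with variance s2 + 2t, rescaled so that the total mass is preserved.
u_exact = np.sqrt(s2 / (s2 + 2.0 * t)) * np.exp(-x**2 / (2.0 * (s2 + 2.0 * t)))
```

Since the data and the multiplier are smooth and rapidly decaying, the periodic approximation agrees with the exact solution essentially to machine precision, and the zero-frequency mode shows the conservation of total mass.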
\subsubsection*{Aspects of symmetry}
Versions of these notions come up in a variety of situations, and a number of these were discussed in the trimestre. In the spirit of the book ``Introduction to Fourier Analysis on Euclidean Spaces'' by E.~Stein and G.~Weiss, which also provides a lot of helpful background information for these topics, one might start by considering the symmetries of the objects just described. They are all invariant under translations, and under rotations on ${\bf R}^n$. They also behave nicely with respect to dilations on ${\bf R}^n$, which is to say under transformations of the form $x \mapsto a x$, where $a$ is a positive real number. In the case of the heat equation, one should use the dilations $(x,t) \mapsto (a x, a^2 t)$, to adjust for the fact that there is one derivative in $t$ and derivatives of order $2$ in $x$.
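The parabolic scaling mentioned above can be checked directly, a small computation left implicit in the text: if $u$ solves the heat equation and $a > 0$, set $v(x,t) = u(ax, a^2 t)$; then
\begin{equation*}
\frac{\partial v}{\partial t}(x,t) = a^2 \, \frac{\partial u}{\partial t}(ax, a^2 t),
\qquad
\Delta v(x,t) = a^2 \, (\Delta u)(ax, a^2 t),
\end{equation*}
so that $v$ solves the heat equation as well, while the scaling $(x,t) \mapsto (ax, at)$ would produce mismatched powers of $a$ on the two sides.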
Instead of Euclidean spaces a basic setting is that of irreducible symmetric spaces of noncompact type, which was discussed in the course of J.-P.~Anker. For these one again has translation invariance and forms of rotation invariance, but no dilation invariance. There are counterparts of Fourier analysis here too, for analyzing solutions to the heat equation, but they have certain weaknesses which do not arise in the Euclidean case.
In the Euclidean case the solution $u(x,t)$ to the heat equation with initial data $f(x)$ can be expressed in the form \begin{equation} \label{u(x,t) = int_{{bf R}^n} k_t(x-y) f(y) dy}
u(x,t) = \int_{{\bf R}^n} k_t(x-y) \, f(y) \, dy \end{equation} for a function $k_t(x)$ called the \emph{heat kernel}. The fact that the solution can be written in this manner, instead of \begin{equation} \label{u(x,t) = int_{{bf R}^n} k_t(x,y) f(y) dy}
u(x,t) = \int_{{\bf R}^n} k_t(x,y) \, f(y) \, dy, \end{equation}
reflects the translation-invariance of the problem in $x$. The rotation-invariance of the problem implies in turn that $k_t(x)$ is a radial function of $x$, so that $k_t(x)$ can be written as $h_t(|x|)$ for a function $h_t(r)$ with $t \in (0,\infty)$ and $r \in
[0,\infty)$. One can go further and use dilation-invariance to obtain that $k_t(x)$ is of the form $t^{-n/2} h(|x|/\sqrt{t})$ for a function $h(r)$, $r \in [0,\infty)$. It is a classical result, which is a good exercise to derive, that $k_t(x)$ is in fact a Gaussian function of $x$. This can be viewed in terms of the Fourier transform, or by working out an ordinary differential equation for the function $h(r)$.
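For the record, substituting the self-similar form $k_t(x) = t^{-n/2} h(|x|/\sqrt{t})$ into the heat equation reduces it to the ordinary differential equation below; the reader working the exercise should arrive at
\begin{equation*}
h''(r) + \Big(\frac{n-1}{r} + \frac{r}{2}\Big) h'(r) + \frac{n}{2}\, h(r) = 0,
\qquad
h(r) = (4\pi)^{-n/2} \, e^{-r^2/4},
\end{equation*}
where the normalization of $h$ comes from requiring $\int_{{\bf R}^n} k_t(x)\,dx = 1$, so that
\begin{equation*}
k_t(x) = (4\pi t)^{-n/2} \exp\Big(-\frac{|x|^2}{4t}\Big).
\end{equation*}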
In the context of symmetric spaces one can start with a general form for $u(x,t)$ as in (\ref{u(x,t) = int_{{bf R}^n} k_t(x,y) f(y) dy}), and use translation-invariance to reduce to something more like (\ref{u(x,t) = int_{{bf R}^n} k_t(x-y) f(y) dy}). The counterpart of rotation-invariance permits one to reduce the number of variables further, but not in general to $2$ variables. Fourier analysis leads to interesting representations for the heat kernel, but fundamental features concerning size and localization are not always so clear from this representation.
Now let us go in a different direction and suppose that we are working on ${\bf R}^n$ again, but with a differential operator $L$ with variable coefficients in place of the Laplacian. Specifically, we assume that $L$ is of the form \begin{equation}
L = \sum_{j,m = 1}^n \frac{\partial}{\partial x_j} \, a_{j,m}(x)
\, \frac{\partial}{\partial x_m}, \end{equation} where $a_{j,m}(x)$ are bounded real-valued functions which satisfy \begin{equation}
a_{j,m}(x) = a_{m,j}(x) \end{equation} and \begin{equation}
\label{|v|^2 le sum_{j,m = 1}^n a_{j,m}(x) v_j v_m}
|v|^2 \le \sum_{j,m = 1}^n a_{j,m}(x) \, v_j \, v_m \end{equation} for all $v \in {\bf R}^n$. In other words, $(a_{j,m}(x))_{j,m}$ are positive-definite real symmetric matrices which are uniformly bounded in $x$ and bounded from below in the sense of matrices by the identity matrix. Because the coefficients are allowed to depend on $x$, we lose in general the invariance under translations, rotations, or dilations, and the heat kernel should be written as $k_t(x,y)$, with $x, y \in {\bf R}^n$ and $t > 0$, as in (\ref{u(x,t) = int_{{bf R}^n} k_t(x,y) f(y) dy}). However, there are vestiges of these invariances, in that translations and rotations of $L$ lead to operators of the same type, and similarly for dilations if one includes suitable scale-factors. While the precise form of the heat kernel may not be easy to describe, one can try to show that it has many properties in common with the Gaussian kernels in the case of the standard Laplacian.
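While the precise kernel is no longer explicit, a classical theorem of Aronson makes the analogy quantitative when the coefficients $a_{j,m}(x)$ are real and symmetric as above: the heat kernel of such a uniformly elliptic divergence-form operator satisfies two-sided Gaussian bounds
\begin{equation}
\frac{c_1}{t^{n/2}} \exp \Big( -\frac{C_1 \, |x-y|^2}{t} \Big) \le k_t(x,y) \le \frac{c_2}{t^{n/2}} \exp \Big( -\frac{|x-y|^2}{C_2 \, t} \Big),
\end{equation}
with constants depending only on $n$ and the ellipticity bounds.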
One can go further and consider coefficients $a_{j,m}(x)$
which are not symmetric in $j$ and $m$, and perhaps not even real-valued. For the latter one can adjust (\ref{|v|^2 le sum_{j,m = 1}^n a_{j,m}(x) v_j v_m}) by taking the real part of the right side, so that one still has ``uniform ellipticity''. More generally one can allow operators of order larger than $2$, and vector-valued functions and systems of differential equations. Questions related to these situations were discussed in the courses of P.~Auscher and P.~Tchamitchian, and of S.~Hofmann and A.~McIntosh.
Note that it still makes sense to talk about \begin{equation} \label{exp (t L)}
\exp (t L) \end{equation} in this type of situation, using spectral theory. This works more nicely when the coefficients $a_{j,m}(x)$ are real and symmetric, so that the operator $L$ is self-adjoint (with a suitable choice of domain). Even without these conditions, one can define (\ref{exp (t L)}), using resolvent integrals. For that matter, one can define more general functions of $L$, and part of the interest of the heat kernels is that the exponentials (\ref{exp (t L)}) and related operators can make good building blocks for studying other functions of $L$.
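For instance, when the resolvent of $L$ is controlled outside a suitable sector containing the spectrum, one can take a contour $\Gamma$ around the spectrum and set
\begin{equation}
\exp(tL) = \frac{1}{2\pi i} \int_\Gamma e^{t\lambda} \, (\lambda I - L)^{-1} \, d\lambda,
\end{equation}
so that questions about $\exp(tL)$ reduce to estimates on the resolvent $(\lambda I - L)^{-1}$.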
On a connected Lie group $H$ one can again look at second-order elliptic differential operators $L$ which are invariant under translations, but in general $H$ can be noncommutative and one should be careful to specify whether $L$ is invariant under left translations, right translations, or both. In the case of Lie groups which are nilpotent, such as the Heisenberg groups, dilations can be used in much the same manner as on Euclidean spaces to have an extra degree of symmetry. In the course of W.~Hebisch, solvable Lie groups and operators on them were treated, for which there is a delicate interplay between exponential growth on the one hand and having a fair amount of commutativity around on the other hand.
S.~Lang gave a series of lectures concerning deep questions of expansions for heat kernels on the locally symmetric spaces (of finite volume) \begin{equation}
SL(n,{\bf R}) / SL(n,{\bf Z}), \quad
SL(n,{\bf C}) / SL(n,{\bf Z}[i]), \end{equation} where ${\bf Z}$ denotes the set of integers, and ${\bf Z}[i]$ is the set of complex numbers whose real and imaginary parts are integers.
\subsubsection*{Discrete settings}
Let us consider ${\bf Z}^n$ now instead of ${\bf R}^n$. If $x$, $y$ are elements of ${\bf Z}^n$, let us say that $x$ and $y$ are
\emph{adjacent} if $|x-y| = 1$. Thus $x$ and $y$ are adjacent if they agree in all but one component, where they differ by $\pm 1$. If $f(x)$ is a function on ${\bf Z}^n$, define $A(f)$ on ${\bf Z}^n$ by \begin{equation}
A(f)(x) = \frac{1}{2n} \sum_{y \in {\bf Z}^n \atop |x-y| = 1} f(y), \end{equation} so that $A(f)(x)$ is the average of $f$ over the $2n$ elements of ${\bf Z}^n$ adjacent to $x$.
The linear operator $A - I$ on functions on ${\bf Z}^n$, where $I$ denotes the identity operator, is a discrete version of the Laplacian. This makes more sense if one writes the classical Laplacian of a twice continuously-differentiable function $h$ at a point $x$ as \begin{equation}
\Delta (h)(x) = \lim_{r \to 0} \frac{2n}{r^2} ({\rm Av}(h)(x,r) - h(x)), \end{equation} with ${\rm Av}(h)(x,r)$ equal to the average of $h$ over the sphere with center $x$ and radius $r$; indeed, the Taylor expansion of $h$ at $x$ gives ${\rm Av}(h)(x,r) - h(x) = \frac{r^2}{2n} \Delta(h)(x) + o(r^2)$.
The analogue of the heat equation for a function $u(x,t)$ with $x$ in ${\bf Z}^n$ and $t$ ranging through nonnegative integers can be written as \begin{equation}
u(x,t+1) = \frac{1}{2n} \sum_{y \in {\bf Z}^n \atop |x-y| = 1} u(y,t), \end{equation} which is the same as saying that $u(x,t+1)$ is given by applying the operator $A$ to $u(x,t)$ as a function of $x$. To make this look more like the classical heat equation, one can reexpress this as saying that $u(x,t+1) - u(x,t)$, which is like the ``derivative'' of $u$ in $t$, is equal to $A - I$ applied to $u(x,t)$ as a function of $x$. Clearly, for any function $f(x)$ on ${\bf Z}^n$, there is a unique function $u(x,t)$ defined for $x$ in ${\bf Z}^n$ and $t$ a nonnegative integer such that $u(x,0) = f(x)$ for all $x$ in ${\bf Z}^n$ and $u(x,t)$ satisfies the heat equation above for all $x$ and $t$. In fact, $u(x,t)$ can be written as \begin{equation}
u(x,t) = (A^t)(f)(x), \end{equation} in analogy with (\ref{u(x,t) = (exp (t Delta)f)(x)}).
In analogy with (\ref{u(x,t) = int_{{bf R}^n} k_t(x-y) f(y) dy}), we can write \begin{equation} \label{u(x,t) = sum_{y in {bf Z}^n} p_t(x-y) f(y)}
u(x,t) = \sum_{y \in {\bf Z}^n} p_t(x-y) \, f(y), \end{equation} where the ``heat kernel'' $p_t(w)$ is defined for $t$ a nonnegative integer and $w$ in ${\bf Z}^n$. Specifically, $p_0(w)$ is equal to $0$ when $w \ne 0$ and to $1$ when $w = 0$, $p_1(w)$ is equal to $0$ when $w$ is not adjacent to $0$ and to $1/(2n)$ when $w$ is adjacent to $0$, and $p_t(w)$ can easily be determined explicitly.
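As a concrete illustration (a small sketch, not taken from the text), one can compute $p_t$ on ${\bf Z}^1$ by iterating the averaging operator, and compare with the closed form $p_t(w) = 2^{-t} {t \choose (t+w)/2}$ when $t + w$ is even (and $p_t(w) = 0$ otherwise):

```python
from fractions import Fraction
from math import comb

def heat_kernel_Z1(t):
    """Iterate the averaging operator A on Z (n = 1, so 2n = 2 neighbours),
    starting from the delta mass at the origin; returns {w: p_t(w)}."""
    p = {0: Fraction(1)}
    for _ in range(t):
        q = {}
        for w, mass in p.items():
            # each step spreads half of the mass at w to each neighbour
            for nb in (w - 1, w + 1):
                q[nb] = q.get(nb, Fraction(0)) + mass / 2
        p = q
    return p

# Compare with the closed form p_t(w) = 2^{-t} C(t, (t+w)/2), t + w even.
for t in range(6):
    p = heat_kernel_Z1(t)
    for w in range(-t, t + 1):
        expected = Fraction(comb(t, (t + w) // 2), 2 ** t) if (t + w) % 2 == 0 else 0
        assert p.get(w, 0) == expected
```

Each iteration is exactly one application of the operator $A$ for $n = 1$; total mass is preserved, reflecting that $p_t(\cdot)$ is a probability distribution.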
In fact, $p_t(x-y)$ is the probability that the standard random walk on ${\bf Z}^n$ goes from $x$ to $y$ in exactly $t$ steps. In the continuous setting there are similar statements for Brownian motion and other processes associated to second-order differential operators.
That the heat kernel in (\ref{u(x,t) = sum_{y in {bf Z}^n} p_t(x-y) f(y)}) is of the form $p_t(x-y)$, rather than $p_t(x,y)$, reflects the translation-invariance here, just as in the classical case on ${\bf R}^n$. Of course one can consider other graphs instead of ${\bf Z}^n$, with similar objects as defined above, and with a formula of the type \begin{equation}
u(x,t) = \sum p_t(x,y) \, f(y) \end{equation} in place of (\ref{u(x,t) = sum_{y in {bf Z}^n} p_t(x-y) f(y)}).
The course of T.~Sunada dealt with \emph{crystal lattices}, which are characterized in terms of a large abelian group of symmetries. The graphs ${\bf Z}^n$ are a very special case of this, and numerous other configurations are possible. In W.~Woess' course, techniques of \emph{generating functions} were discussed, which can lead to remarkable formulas and information about random walks. Part of M.~Barlow's course was concerned with random walks on graphs with self-similarity, and the effect of self-similarity on the heat kernel.
In analogy with second-order differential operators on ${\bf R}^n$ with variable coefficients, one can consider random walks and discrete Laplacians on ${\bf Z}^n$ in which the weighting factors vary from point to point. One does not need to stick to ${\bf R}^n$ or ${\bf Z}^n$ here; one can work on manifolds or graphs, or more generally metric spaces equipped with a measure. Several of the courses dealt with different facets of this, including Sobolev spaces and Sobolev or Poincar\'e inequalities.
R.~Brooks discussed in his course Riemann surfaces, graphs, correspondences between them, and lower bounds for positive eigenvalues for the Laplacian for both.
\subsubsection*{Additional topics}
Let $p$ be a real number, $p > 1$. For suitable functions $f(x)$ on ${\bf R}^n$, consider the \emph{$p$-energy functional} \begin{equation}
\mathcal{E}_p(f) = \frac{1}{p}\int_{{\bf R}^n} |\nabla f(x)|^p. \end{equation} This is the same as $\mathcal{E}(f)$ in (\ref{def of mathcal{E}(f)}) when $p = 2$, but there is not a bilinear version as in (\ref{def of mathcal{E}(f_1,f_2)}) when $p \ne 2$. However, one can again consider the derivative of $\mathcal{E}_p(f)$ in $f$ for all $p$, and this leads to a nonlinear (when $p \ne 2$) second-order differential operator known as the \emph{$p$-Laplacian}.
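Carrying out this differentiation against a smooth compactly supported variation $\varphi$ identifies the $p$-Laplacian explicitly:
\begin{equation}
\frac{d}{d\varepsilon} \Big|_{\varepsilon = 0} \mathcal{E}_p(f + \varepsilon \varphi) = \int_{{\bf R}^n} |\nabla f(x)|^{p-2} \, \nabla f(x) \cdot \nabla \varphi(x) = - \int_{{\bf R}^n} {\rm div} \big( |\nabla f(x)|^{p-2} \, \nabla f(x) \big) \, \varphi(x),
\end{equation}
so that the $p$-Laplacian is $f \mapsto {\rm div}(|\nabla f|^{p-2} \nabla f)$, which reduces to the usual Laplacian when $p = 2$.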
The $p$-energy is invariant under translations and rotations, and scales under dilations in a simple way, just as when $p = 2$. For $p = n$ there is additional symmetry, known as \emph{conformal invariance}.
One can consider more complicated functionals which behave in roughly the same manner in terms of size, but which incorporate ``variable coefficients'' into the picture. When $p = n$ there is a ``quasi-invariance'' of the energy under \emph{quasiregular} mappings, which are defined in terms of a pointwise quasiconformality property (where the $n$th power of the norm of the differential of the mapping is bounded by a constant times the Jacobian, i.e., the determinant of the differential of the mapping). Quasiregular mappings, unlike quasiconformal mappings, are allowed to have branching, analogous to holomorphic mappings in the complex plane which are not one-to-one. The \emph{quasi-invariance} of the $p$-energy when $p = n$ states that the energy functional is transformed by a quasiregular change of variables into an energy functional of roughly the same type, but with variable coefficients which satisfy bounds in terms of the quasiregularity constant. As a result, a solution of the $n$-Laplace equation is transformed, after composition with a quasiregular mapping, into a solution of an analogous equation with variable coefficients, still with suitable boundedness and ellipticity conditions. This is an important tool in the study of quasiregular mappings, as discussed in the course of I.~Holopainen.
Even with the extra nonlinearity, there are similar issues concerning the relationship between the geometry of a space and the behavior of solutions of differential equations or inequalities as before.
A different kind of nonlinearity was treated in the course of K.-T.~Sturm, with averages, heat flows, and random processes taking values in a metric space, under general conditions of nonpositive curvature. It can be clear how to take a weighted average of two points in a metric space, using a point along a geodesic arc that joins them, but for more than two points not lying on the same geodesic the situation becomes more complicated. A fascinating feature of the probabilistic point of view is that in a sequence of independent samples one can use the ordering of the sequence to apply the two-point case step-by-step; it turns out that there are results to the effect that the limit of this exists and is the same almost surely, and that the common answer is the same as one produced from another procedure which deals with all points in the average at the same time.
The courses of B.~Driver and L.~Saloff-Coste were concerned with analysis on infinite-dimensional spaces. Specifically, Driver's course dealt with Wiener space, spaces of paths in manifolds, and loop groups, while Saloff-Coste's course addressed locally-compact and connected topological groups, such as infinite products of finite-dimensional compact connected Lie groups.
Of course the brief overview given here is not at all intended to be exhaustive. Fortunately, a volume is in preparation containing surveys and other material from the trimestre, in which much more information can be found.
\end{document}
\begin{document}
\title{A METHOD FOR THE
RESOLUTION OF THE JACOBI EQUATION $Y'' + RY = 0$ ON THE MANIFOLD
$Sp(2)/SU(2)$.}
\begin{abstract} In this paper a method for the resolution of the differential equation of the Jacobi vector fields on the manifold $V_1 = Sp(2)/SU(2)$ is presented. These results are applied to determine areas and volumes of geodesic spheres and balls. \end{abstract}
{\bf{Mathematics Subject Classification (2000):}} 53C50, 53C25
{\bf{Keywords and phrases:}} Normal homogeneous space, naturally reductive homogeneous space, Jacobi equation, Jacobi operator, geodesic ball, geodesic sphere.
\section*{Introduction} The resolution of the Jacobi equation on a Riemannian manifold can be quite a difficult task. In Euclidean space the solution is trivial. For symmetric spaces, the problem reduces to a system of differential equations with constant coefficients; explicit solutions of these systems, together with their application to the determination of areas and volumes, can be found in the literature, particularly in \cite{[Gr]}. In \cite{[Ch1],[Ch2]} a partial solution of this problem for the manifolds $V_1 = Sp(2)/SU(2)$ and $V_2 = SU(5)/{Sp(2)\times S^1}$ is obtained by I. Chavel. It is well known that these manifolds are nonsymmetric normal homogeneous spaces of rank 1 \cite[p.237]{[B]}. The manifold $V_1$ appears in \cite{[B]} and in the book of A. L. Besse \cite[p.203]{[Bs]} as an exceptional naturally reductive homogeneous space. For compact naturally reductive homogeneous spaces, Ziller \cite{[Z]} solves the Jacobi equation working with the canonical connection, which is natural for nonsymmetric naturally reductive homogeneous spaces; but his solution is of a qualitative type, in the sense that it does not easily yield the explicit Jacobi fields for a particular example or for an arbitrary direction of the geodesic. The method used by Chavel, which allows him to solve the Jacobi equation in some particular directions, is also based on the canonical connection. Nevertheless, his method does not seem to apply in a simple way to the resolution of the Jacobi equation along a unit geodesic of arbitrary direction. In \cite{[Ch1],[Ch2]} the same author shows the existence of anisotropic Jacobi fields, that is, Jacobi fields which do not come from geodesic variations in the isotropy subgroup. Also, the Jacobi equation on a Riemannian manifold appears in a natural way in the theory of fanning curves \cite{[AP-D]}.
In this paper, always working with the Levi-Civita connection and using an interesting geometric result of Tsukada \cite{[T]}, the Jacobi equation along a unit geodesic of arbitrary direction is solved. Also, the solutions are applied to obtain the area of the geodesic sphere and the volume of the geodesic ball of radius $t$ in the manifold $V_1= Sp(2)/SU(2)$. In \S 1, for an arbitrary Riemannian manifold, using the induction method, a recursion formula for the $i$-th covariant derivative of the Jacobi operator $R_t = R(\cdot ,\gamma')\gamma' (t)$ along the geodesic $\gamma$ is given. In \S 2, using the result of the previous section, the expression of the covariant derivative of the curvature tensor at the point $\gamma(0)$ is obtained for an arbitrary naturally reductive homogeneous space $M = G/H$, in terms of the brackets of the Lie algebra of $G$. In order to obtain this result the induction method is used again. In the following sections, the previous results are applied to the normal homogeneous space $V_1$. So, in \S 3, always working with a unit geodesic $\gamma$ of arbitrary direction, the values at $\gamma(0)$ of the Jacobi tensor $R_0$ and its covariant derivatives $R\sp{1)}_0$ and $R\sp{2)}_0$ are determined. In Lemma 3.1 it is proved that, for a unit geodesic $\gamma$, $$R\sp{3)}_0= -{\vert \vert \gamma'\vert \vert }^2 R\sp{1)}_0 = - R\sp{1)}_0$$ and
$$ R\sp{4)}_0= -{\vert \vert \gamma'\vert \vert}^2 R\sp{2)}_0 = - R\sp{2)}_0.$$ This section ends by proving that $$R\sp{2n)}_0 = (-1)\sp{n-1} R\sp{2)}_0$$
and $$R\sp{2n+1)}_0 = (-1)\sp{n} R\sp{1)}_0.$$ Using the Taylor development, at the point $\gamma(0)$, of the Jacobi operator, it is possible to obtain quite a simple expression of the Jacobi operator $R_t$ along the geodesic, as well as that of its derivatives. In fact, the explicit expression for $R_t$ is $$R_t= R_0 + R\sp{2)}_0 + R\sp{1)}_0\sin t - R\sp{2)}_0 \cos t.$$ It seems interesting to remark that while
D'Atri and Nickerson \cite{[DN1], [DN2]} impose conditions on the derivatives of the Jacobi operator, known as Ledger's conditions of odd order, in our case, conditions are imposed by the geometric properties of the manifold.
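For the record, the closed form for $R_t$ quoted above is precisely the resummation of the Taylor series of the Jacobi operator along the (real analytic) geodesic: using $R\sp{2n)}_0 = (-1)\sp{n-1} R\sp{2)}_0$ and $R\sp{2n+1)}_0 = (-1)\sp{n} R\sp{1)}_0$, one gets $$R_t = \sum\limits_{k \geq 0} \frac{t^k}{k!} R\sp{k)}_0 = R_0 + R\sp{1)}_0 \sum\limits_{n \geq 0} \frac{(-1)^n t^{2n+1}}{(2n+1)!} + R\sp{2)}_0 \sum\limits_{n \geq 1} \frac{(-1)^{n-1} t^{2n}}{(2n)!} = R_0 + R\sp{1)}_0 \sin t + R\sp{2)}_0 (1 - \cos t).$$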
In \S 4 the Jacobi equation with predetermined initial values is solved and the formal expressions of the area and the volume of the geodesic sphere and the ball of radius $t$ are obtained. In a forthcoming paper the problem of determining the areas of tubular hypersurfaces and the volumes of tubes around compatible submanifolds will be approached. Given its generality, we hope that this method could also be applied to solve the Jacobi equation in several other examples of naturally reductive homogeneous spaces.
\section{A formula for the covariant derivative of the Jacobi operator in a Riemannian manifold.} Let $M$ be an $n$-dimensional, connected, real analytic Riemannian manifold, $g = < , >$ its Riemannian metric, $m\in M$, $v\in T_m M$ a unit tangent vector and $\gamma : J \to M$ a geodesic in $M$ defined on some open interval $J$ of ${\mathbb {R}}$ with $0 \in J$, $m=\gamma(0)$. For a geodesic $\gamma(t)$ in $M$ the associated Jacobi operator $R_t$ is the self-adjoint tensor field along $\gamma$ defined by $$R_t := R(\cdot ,\gamma')\gamma' (t),$$ where for the curvature tensor we follow the notations of \cite{[KN]}. The covariant derivative $R\sp{i)}_t$ of the Jacobi operator $R_t$ along $\gamma$ is the self-adjoint tensor field defined by $$R\sp{i)}_{t}:= (\nabla\sb{\gamma'} \stackrel{i)} \cdots \nabla\sb{\gamma'}R)(\cdot,\gamma')\gamma' (t),$$ where $\nabla$ is the Levi-Civita connection associated to the metric. Its value at $\gamma (0)$ will be denoted by $$R\sp{i)}_{0}:= (\nabla\sb{\gamma'} \stackrel{i)}\cdots \nabla\sb{\gamma'}R)(\cdot,\gamma')\gamma'(0)$$
and we denote $R\sp{_{0)}}_t= R_t.$
First, we prove two combinatorial lemmas for later use.
\begin{lema} For $i \leq 2k$ we have: \begin{enumerate} \item[a)] $${2k+2 \choose i} = {2k \choose i} + 2{2k \choose i-1} + {2k \choose i-2};$$ \item[b)] $${2k+2 \choose 2k+1} = {2k \choose 2k-1} + 2;$$ \item[c)] $${2k+2 \choose 2k+2} = {2k \choose 2k} = 1.$$ \end{enumerate} \end{lema} The proof is an immediate consequence of Pascal's rule for binomial coefficients. \begin{lema} \[\sum\limits_{j = 1}^i ( - 1)^{j-1} {k+1 \choose j} {k-j+1 \choose i-j} = {k+1 \choose i}.\] \end{lema} The proof follows at once by using the formula \[{-x \choose n} = ( - 1)^n {x+n-1 \choose n},\] where $x\in {\mathbb {Z}}$, together with Vandermonde's identity \[{x+y \choose n} = \sum\limits_{j = 0}^n {x \choose j} {y \choose n-j},\] with $x, y \in {\mathbb {Z}}$. \begin{teo} For $n \geq 1$ we have $$\nabla_{\gamma'} \stackrel{ n)}\cdots\nabla_{\gamma'}R(X, \gamma') \gamma' = \sum\limits_{i = 0}^n {n \choose i} R\sp{n-i)}\sb{t}(\nabla_{\gamma'} \stackrel{ i)}\cdots\nabla_{\gamma'}X).$$ \end{teo} {\bf Proof.\ \ } We prove this by induction.
For $n = 1$, we have \begin {equation}\label {(1.3)} \nabla_{\gamma'}R(X, \gamma') \gamma' = (\nabla_{\gamma'}R)(X,\gamma')\gamma'+ R(\nabla_{\gamma'}X, \gamma')\gamma' \end {equation} that is $$\nabla_{\gamma'}R(X, \gamma') \gamma' = R \sp{1)}\sb{t}(X)+ R \sp{_0)}\sb{t}(\nabla_{\gamma'}X)$$ and so the result is true for $n = 1$. Next, suppose that Theorem 1.3 holds for $n= k$. Then we have $$\nabla_{\gamma'} \stackrel{ k)}\cdots\nabla_{\gamma'}R(X, \gamma') \gamma' = \sum\limits_{i = 0}^k {\left( {\begin{array}{*{20}c} k \\ i \\ \end{array} } \right)} R\sp{k-i)}_t(\nabla_{\gamma'} \stackrel{ i)}\cdots \nabla_{\gamma'}X). $$ Taking the covariant derivative, we obtain \begin{eqnarray*}\nabla_{\gamma'}(\nabla_{\gamma'} \stackrel{ k)}\cdots \nabla_{\gamma'}R(X,\gamma')\gamma')&=&\nabla_{\gamma'} \stackrel{k+1)}\cdots\nabla_{\gamma'}R(X, \gamma') \gamma' \\&=&\nabla_{\gamma'}( \sum\limits_{i = 0}^k \left( \begin{array}{*{20}c} k
\\ i \\\end{array} \right) R\sp{k-i)}_t(\nabla_{\gamma'} \stackrel{ i)}\dots \nabla_{\gamma'}X)).\\\end{eqnarray*} By applying (\ref {(1.3)}) to each term, it is possible to write \begin{eqnarray*}&&\nabla_{\gamma'} \stackrel{k+1)}\cdots\nabla_{\gamma'}R(X, \gamma') \gamma' \\&=& \sum\limits_{i = 0}^{k } {\left( {\begin{array}{*{20}c} k \\ i \\ \end{array} } \right)} [R\sp{k+1-i)}_t(\nabla_{\gamma'} \stackrel{i)}\cdots\nabla_{\gamma'}X) + R\sp{k-i)}_t(\nabla_{\gamma'} \stackrel{i+1)}\cdots\nabla_{\gamma'}X)] \\&=&\left( \begin{array}{*{20}c} {k + 1} \\ 0 \\\end{array} \right)R\sp{k+1)}_t(X) + \sum\limits_{i = 0}^{k - 1}[\left( \begin{array}{*{20}c}k \\i \end{array} \right) + \left( \begin{array}{*{20}c}k \\i+1 \\\end{array} \right)] R\sp{k-i)}_t(\nabla_{\gamma'} \stackrel{i+1)}\dots\nabla_{\gamma'}X) \\&&+ \left( \begin{array}{*{20}c}k \\k \\\end{array} \right)R\sp{0)}\sb{t}(\nabla_{\gamma'} \stackrel{ k+1)}\dots\nabla_{\gamma'}X).\\\end{eqnarray*} Now, by applying basic properties of combinatorial numbers we have $$\nabla_{\gamma'} \stackrel{ k+1)}\cdots\nabla_{\gamma'}R(X, \gamma') \gamma' = \sum\limits_{i = 0}^{k+1 }{\left( {\begin{array}{*{20}c} {k+1} \\ i \\ \end{array} } \right)} R\sp{k+1-i)}_t(\nabla_{\gamma'} \stackrel{ i)}\cdots \nabla_{\gamma'}X) $$ and the result follows.
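The combinatorial identities invoked in Lemmas 1.1 and 1.2 are easy to spot-check numerically; the following sketch (not part of the original text) verifies Lemma 1.1 together with the negative-upper-index formula and Vandermonde's identity used in the proof of Lemma 1.2:

```python
from math import comb, factorial

def C(n, r):
    """Binomial coefficient, extended by 0 outside 0 <= r <= n."""
    return comb(n, r) if 0 <= r <= n else 0

def gbinom(x, n):
    """Generalized binomial coefficient x(x-1)...(x-n+1)/n! for integer x."""
    num = 1
    for j in range(n):
        num *= x - j
    return num // factorial(n)

for k in range(1, 8):
    # Lemma 1.1 a): Pascal's rule applied twice.
    for i in range(2 * k + 1):
        assert C(2*k + 2, i) == C(2*k, i) + 2 * C(2*k, i - 1) + C(2*k, i - 2)
    # Lemma 1.1 b) and c).
    assert C(2*k + 2, 2*k + 1) == C(2*k, 2*k - 1) + 2
    assert C(2*k + 2, 2*k + 2) == C(2*k, 2*k) == 1

# The two identities quoted in the proof of Lemma 1.2.
for x in range(1, 8):
    for n in range(8):
        # negative upper index: C(-x, n) = (-1)^n C(x+n-1, n)
        assert gbinom(-x, n) == (-1)**n * C(x + n - 1, n)
    for y in range(1, 8):
        for n in range(x + y + 1):
            # Vandermonde's identity
            assert C(x + y, n) == sum(C(x, j) * C(y, n - j) for j in range(n + 1))
```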
\begin{cor} We have $$R\sp{\sp{n)}}\sb{t}(X)=\nabla_{\gamma'} \stackrel{n)}\cdots\nabla_{\gamma'}R(X, \gamma') \gamma' - \sum\limits_{i = 1}^{n} {\left( \begin{array}{*{20}c} n \\ i \\ \end{array} \right)} R\sp{n- i)}_t (\nabla_{\gamma'} \stackrel{ i)}\cdots \nabla_{\gamma'}X).$$ \end{cor}
\section{An algebraic expression for the covariant derivative of the Jacobi operator on a naturally reductive homogeneous space} Let $G$ be a Lie group, $H$ a closed subgroup, $G/H$ the space of left cosets of $H$, $\pi : G\to G/H$ the natural projection. For $r \in G$ we denote by $\tau$ the induced action of $ G$ on $G/H $ given by $\tau(r)(sH) = rsH$, $r, s \in G$. The Lie algebras of $G$ and $H$ will be denoted by ${{\bf g}}$ and ${\bf h}$, respectively and ${\bf m} ={{\bf g}}/{\bf h}$ is a vector space which we identify with the tangent space to $G/H$ at $o = \pi(H)$. An affine connection on $G/H$ is said to be invariant if it is invariant under $\tau(r)$ for all $r \in G$. It is well known that it is possible to define in a natural way on ${{\bf g}}$ an $\mathop{\rm Ad}$-invariant metric by $<u, v> = \mathop{\rm Tr}(uv^{t})$, $u, v \in {{\bf g}}$. Let $\nabla$ be the associated Levi-Civita connection. It is well-known \cite [Ch.X, p.186]{[KN]} that there exists an invariant affine connection $D$ on $G/H$ (the {\it canonical connection}) whose torsion $T$ and curvature $B $ tensors are also invariant. In the following we always work with $\nabla $. \begin{defi} {\em \cite [p.202] {[Ch2], [KN]}} $M = G/H$ is said to be a \begin{enumerate} \item[(a)] { {\em Reductive homogeneous space}} if the Lie algebra ${{\bf g}}$ admits a vector space decomposition ${{\bf g}} = {\bf h} + {\bf m}$ such that $[{\bf m}, {\bf h}] \subset {\bf m}$. In this case ${{\bf m}}$ is identified with the tangent space at the origin $o = \pi(H)$. \item[(b)] {{\em Riemannian homogeneous space}} if $G/H $ is a Riemannian manifold such that the metric is preserved by $\tau(r)$ for all $r \in G$.
\item[(c)] {{\em Naturally reductive Riemannian homogeneous space}} if $G/H$, with a H-invariant Riemannian metric, admits an $\mathop{\rm Ad}(H)$-invariant decomposition
${{\bf g}}= {\bf h}+ {\bf m}$ satisfying the condition $$<[u, v]_{\bf m} , w> + <v, [u, w]_{\bf m}> = 0$$ for $ u, v, w \in {\bf m}$.
\item[(d)] {{\em Normal Riemannian homogeneous space}} if the metric on $G/H$ is obtained as follows: there exists a positive definite inner product $< , >$ on ${{\bf g}}$ satisfying $$<[u, v], w> = <u, [v, w]>$$ for all $u, v, w \in {{\bf g}}$. Let ${\bf m} = {{\bf g}}/{\bf h}$ be the orthogonal complement of {\bf h}. Then the decomposition $({{\bf g}}, {\bf h})$ is reductive, and the restriction of the inner product to ${\bf m}$
induces a Riemannian metric on $G/H$, referred to as {{\em normal}}, by the action of $G$ on $G/H$. \end{enumerate} \end{defi} From now on we will assume that $G/H$ is a naturally reductive space. If we define $\Lambda \colon {\bf m} \times {\bf m}\to {\bf m}$ by $$\Lambda(u)v = (1/2)[u, v]_{\bf m}$$ for $u, v\in {\bf m}$, we can identify $\nabla$ and $\Lambda$. Evidently, $\Lambda(u)$ is a skew-symmetric linear endomorphism of $({\bf m}, < , >)$. Therefore $e^{\Lambda(u)}$ is a linear isometry of $({\bf m}, < , >)$. Since the Riemannian connection is a natural torsion free connection on $G/H$, we have \cite [Vol. II, Ch.X]{[Tj],[KN]}: \begin{prop} The following properties hold: \begin{enumerate} \item[(i)] For each $v \in{\bf{ m}}$, the curve $\gamma(t) = \tau(\exp tv)(o)$ is a geodesic with $\gamma(0) = o$, $\gamma'(0) = v$. \item[(ii)] The parallel translation along $\gamma$ is given as follows: $$\tau(\exp tv)_{*} e\sp{- t\Lambda(v)}\colon T_oM \to T_{\gamma(t)}{M}.$$ \item[(iii)]
The (1,3)-tensor $R_t$ on ${\bf m}$ obtained by the parallel translation of the Jacobi operator along $\gamma$ is given as follows: $$R_t = e \sp{t\Lambda(v)} R_0.$$ \end{enumerate} \end{prop} Above, $R_0$ denotes the Jacobi operator at the origin $o$ and $e\sp{t\Lambda(v)}$ denotes the action of $e\sp{t\Lambda(v)}$ on the space $R({\bf m})$ of curvature tensors on ${\bf m}$.
\begin{prop} {\em \cite [Vol. II, p. 202]{[Ch2],[KN]}} Let $\gamma(t)$ be a geodesic with $\gamma(0) = o$, for $v = \gamma'(0) \in {\bf m}$. If $X$ is a differentiable vector field along $\gamma$, then $$R_0(X)= -[[X, v]_{\bf h},v] - (1/4)[[X,v]_{\bf m},v]_{\bf m}.$$ \end{prop} \begin{prop}Under the same hypotheses as in Proposition 2.3, we have, for $n>0$, \begin {equation}\label{(2.1)} (-1)\sp{n-1} 2\sp{n}R\sp{n)}_0(X)=\sum\limits_{i = 0}^{n} ( - 1)^i {n \choose i} [[[X,v]_{\bf m}, \dots , v ]^{i+1)}_{\bf h} , \dots , v]_{\bf m} \end {equation}
where for each term of the sum we have $n+2$ brackets and the exponent $i+1)$ means the position of the bracket valued in ${\bf h}$.\end{prop} {\bf Proof.\ \ } Using Proposition 2.3, Corollary 1.4 and the fact that \begin{equation} \label{(2.2)} \nabla_{X}Y = (1/2)[X,Y]_{\bf m}, \quad X,Y \in {\bf m}, \end{equation} we have immediately that (\ref{(2.1)}) is verified for $n = 1$. Next, suppose that this formula holds for $n = k$; then \begin{equation} \label{(2.3)} (-1)^{k-1}2^{k} R\sp{k)}_0 (X)=\sum\limits_{i = 0}^{k} ( - 1)^i {k \choose i} [[[X,v]_{\bf m}, \dots , v]^{i+1)}_{\bf h}, \dots , v]_{\bf m}. \end{equation} Using now Corollary 1.4, we have $$R\sp{k+1)}_0 (X)= \nabla_v \stackrel{ k+1)}\cdots\nabla_vR(X,v)v -\sum\limits_{i = 1}^{k+1} {k+1 \choose i} R\sp{k+1-i)}_0(\nabla_v \stackrel{ i)}\cdots \nabla_vX).$$ Taking into account Proposition 2.3 and formulae (\ref{(2.2)}) and (\ref{(2.3)}) in each term, we obtain \begin{eqnarray*} R\sp{k+1)}_0(X)&=& (-1)\sp{k} \frac{1}{2^{k+1}}[[X,v]_{\bf h}, \dots ,v]_{\bf m} \\ &&- \sum\limits_{i = 1}^{k+1} {k+1 \choose i} (-1)\sp{k-1}\frac{1}{2^{k+1-i}} \left( (-1)^i \frac{1}{2^i} \sum\limits_{j = 0}^{k+1-i} {k+1-i \choose j} (-1)^j [[[X, v]_{\bf m}, \dots , v]_{\bf h}^{i+j+1)}, \dots , v]_{\bf m} \right). \end{eqnarray*} Note that the sum of the terms in which all brackets are valued in ${\bf m}$ is $0$. On the other hand, the terms having the bracket valued in ${\bf h}$ in the $(i+1)$-th position are \begin{eqnarray*} && -{k+1 \choose 1} \frac{1}{2^{k}}(-1)^{k-1} (-1)^{i-1}\frac{1}{2}(-1) {k \choose i-1} \\ && -{k+1 \choose 2} \frac{1}{2^{k-1}}\frac{1}{2^{2}} (-1)^{k-2} (-1)^{i-2} {k-1 \choose i-2} \\ && - \cdots - {k+1 \choose i} \frac{1}{2^{k+1-i}} (-1)^{k-i} (-1)^{i} \frac{1}{2^{i}} {k+1-i \choose 0} \\ &=& (-1)^{k} \frac{1}{2^{k+1}} (-1)^{i} \left( {k+1 \choose 1}{k \choose i-1} - {k+1 \choose 2}{k-1 \choose i-2} + \cdots + (-1)^{i-1} {k+1 \choose i}{k+1-i \choose 0} \right). \end{eqnarray*} Using Lemma 1.2, the last expression equals $$ (-1)^{k} \frac{1}{2^{k+1}} (-1)^{i} {k+1 \choose i}, $$ so formula (\ref{(2.1)}) is true for $n=k+1$ and this finishes the proof.
\section{ An explicit form for the Jacobi operator on the manifold $V_1 = Sp(2)/SU(2)$.} We consider the Lie group $Sp(2)$ and the subgroup $SU(2)$. It is well known that $V_1 = Sp(2)/SU(2)$ is a normal naturally reductive Riemannian homogeneous space \cite{[B],[Ch1],[Ch2]}. We denote by $sp(2)$ and $su(2)$ the Lie algebras of $Sp(2)$ and $SU(2)$ respectively. Using the notations of \cite{[Ch2]} it is known that an element of the Lie algebra $sp(2)$ is a skew-Hermitian matrix of the form \[\left( {\begin{array}{*{20}c} {a_{11} } & {a_{12} } & {a_{13} } & {a_{14} } \\ { - \overline a _{12} } & { - a_{11} } & {\overline a _{14} } & { - \overline a _{13} } \\ { - \overline a _{13} } & { - a_{14} } & {a_{33} } & {a_{34} } \\ { - \overline a _{14} } & {a_{13} } & { - \overline a _{34} } & { - a _{33} } \\ \end{array} } \right)\] where $a\sb{11}$, $a\sb{33}$ are pure imaginary numbers and the other $a _{ij}$ are arbitrary complex numbers. Let $S_{i}$, $i = 1,\dots, 10$ be the matrices of ${sp(2)}$ such that\begin{eqnarray*} S_1&: &a\sb{11}= - a\sb{22} = i;\\ S_2&: &a\sb{33}= - a\sb{44} = i; \\ S_3 &: &a\sb{12}= - a\sb{21} = 1; \\ S_4 &: &a\sb{12}= a\sb{21} = i; \\ S_5 &:& a\sb{34}= - a\sb{43} = 1;
\\ S_6 &:& a\sb{34}= a\sb{43} = i ; \\ S_7 &: &a\sb{13} = - a\sb{31} = - a\sb{24} = a\sb{42} = 1;
\\ S_8 &: &a\sb{13} = a\sb{31} = a\sb{24} = a\sb{42} = i; \\ S_{9}& :& a\sb{14} = - a\sb{41} = a\sb{23} = - a\sb{32} = 1; \\ S_{10} &: & a\sb{14} = a\sb{41}= - a\sb{23} = - a\sb{32} = i; \\ \end{eqnarray*}
the other $a_{ij}$ being zero in all cases. Evidently $\{S_i\}$ is an adapted basis of $sp(2)$. We construct another basis $\{Q_i\}$ as follows: \small $$\left( {\begin{array}{*{20}c} {Q_1 } \\ {Q_2 } \\ {Q_3 } \\ {Q_4 } \\
{Q_5 } \\ {Q_6 } \\ {Q_7 } \\ {Q_8 } \\ {Q_9 } \\ {Q_{10} } \\\end{array} } \right) = \left( {\begin{array}{*{20}c} {1/2} & { - 3/2} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\\ 0 & 0 & {\sqrt {5/2} } & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & {\sqrt {5/2} } & 0 & 0 & 0 & 0 & 0 & 0
\\ 0 & 0 & 0 & 0 & {\sqrt 6 /2} & 0 & { - \sqrt 2 /2} & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & {\sqrt 6 /2} & 0 &{ - \sqrt 2 /2} & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\sqrt {5}/2 } & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & {\sqrt {5}/2 } \\ {3/2} & {1/2} & 0 & 0 & 0 & 0 & 0 &0& 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & {\sqrt {3}/2 } & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & {\sqrt {3}/2 } & 0 & 0 \\ \end{array} } \right)\left( {\begin{array}{*{20}c} {S_1 } \\ {S_2 } \\ {S_3 } \\
{S_4 } \\ {S_5 } \\ {S_6 } \\ {S_7 } \\ {S_8 } \\ {S_9 } \\ {S_{10} } \\ \end{array} } \right).$$ \normalsize We have \cite{[Ch2]}: \begin{enumerate} \item[i)]
If for an inner product on $sp(2)$ we take $<A, B> = -(1/5) \mathop{\rm Tr}(AB)$, then $\{Q_{1}, \dots, Q_{10}\}$ is an orthonormal basis of $sp(2)$.
\item[ii)]The inner product is invariant under $\mathop{\rm Ad}(Sp(2))$.
\item[iii)] Finally, one can show that ${\bf h}$, the linear span of $\{Q_{8},Q_{9},Q_{10}\}$, is isomorphic as a Lie algebra to $su(2)$, and therefore the subgroup generated by ${\bf h}$ is analytically isomorphic to $SU(2)$. \end{enumerate}
The previous decomposition is taken from \cite [p.234]{[B]}. If we denote by ${\bf m}$ the orthogonal complement of ${\bf h}$ in $sp(2)$, so that $sp(2) = {\bf h} \oplus {\bf m}$, then $\{Q_{1},\dots, Q_{7}\}$ is an adapted basis for ${\bf m}$. A direct computation shows that the brackets are given by the following relations \begin{equation}
\begin{array} {lcl}
\left[Q_{1}, Q_{2}\right] = Q_{3},
&& \left[Q_{1}, Q_{3}\right] = - Q_{2},\\ \left[Q_{1}, Q_{4}\right] = - Q_{5} - \sqrt{6}Q_{10}, && \left[Q_{1}, Q_{5}\right] = Q_{4} + \sqrt{6}Q_{9},\\ \left[Q_{1}, Q_{6}\right] = -Q_{7}, && \left[Q_{1}, Q_{7}\right] = Q_{6},\\ \left[Q_{1}, Q_{8}\right] = 0, && \left[Q_{1}, Q_{9}\right] = - \sqrt{6}Q_{5},\\ \left[Q_{1}, Q_{10}\right] = \sqrt{6}Q_{4}, && \left[Q_{2}, Q_{3}\right] = Q_{1} + 3 Q_{8},\\ \left[Q_{2},Q_{4}\right] = Q_{6}, && \left[Q_{2}, Q_{5}\right] = - Q_{7},\\ \left[Q_{2}, Q_{6}\right] = - Q_{4}+\sqrt{3/2}Q_{9},&& \left[Q_{2}, Q_{7}\right] = Q_{5} - \sqrt{ 3/2} Q_{10},\\ \left[Q_{2}, Q_{8}\right] = - 3Q_{3}, && \left[Q_{2}, Q_{9}\right] = - \sqrt{ 3/2} Q_{6},\\ \label{CORCHETE} \left[Q_{2}, Q_{10}\right] = \sqrt{ 3/2}Q_{7}, && \left[Q_{3}, Q_{4}\right] = Q_{7}, \\ \left[Q_{3}, Q_{5}\right] =Q_{6}, && \left[Q_{3},Q_{6}\right] = -(\sqrt{ 2} /2)( \sqrt{ 2} Q_{5} -\sqrt{ 3} Q_{10}),\\ \left[Q_{3},Q_{7}\right]=-(\sqrt{ 2} /2)( \sqrt{ 2} Q_{4}-\sqrt{ 3} Q_{9}),&& \left[Q_{3}, Q_{8}\right] = 3 Q_{2},\\ \left[Q_{3}, Q_{9}\right] = -\sqrt{ 3/2} Q_{7}, && \left[Q_{3}, Q_{10}\right] = -\sqrt{ 3/2} Q_{6},\\ \left[Q_{4}, Q_{5}\right] = - Q_{1} + Q_{8},&& \left[Q_{4}, Q_{6}\right] = Q_{2} + \sqrt{ 5/2} Q_{9},\\ \left[Q_{4}, Q_{7}\right] =Q_{3} +\sqrt{ 5/2} Q_{10},&& \left[Q_{4}, Q_{8}\right] = - Q_{5},\\ \left[Q_{4}, Q_{9}\right] = - \sqrt{ 5/2} Q_{6}, &&\left[Q_{4}, Q_{10}\right] = - 2\sqrt{ 3/2} Q_{1} - \sqrt{ 5/2} Q_{7},\\ \left[Q_{5}, Q_{6}\right] =Q_{3} -\sqrt{ 5/2} Q_{10}, && \left[Q_{5}, Q_{7}\right] = -Q_{2} + \sqrt{ 5/2} Q_{9},\\ \left[Q_{5}, Q_{8}\right] = Q_{4}, && \left[Q_{5}, Q_{9}\right] = 2\sqrt{ 3/2} Q_{1} - \sqrt{ 5/2} Q_{7},\\ \left[Q_{5}, Q_{10}\right] = \sqrt{ 5/2} Q_{6}, && \left[Q_{6}, Q_{7}\right] = - Q_{1} + 2 Q_{8},\\ \left[Q_{6}, Q_{8}\right] = - 2 Q_{7}, && \left[Q_{6}, Q_{9}\right] = \sqrt{ 3/2} Q_{2} + \sqrt{ 5/2} Q_{4},\\ \left[Q_{6},Q_{10}\right]=\sqrt{ 3/2} Q_{3} - \sqrt{ 5/2} Q_{5}, && \left[Q_{7}, Q_{8}\right] = 2 Q_{6}\\ 
\left[Q_{7},Q_{9}\right]=\sqrt{ 3/2} Q_{3} +\sqrt{ 5/2} Q_{5}, &&\left[Q_{7}, Q_{10}\right] = -\sqrt{ 3/2} Q_{2} + \sqrt{ 5/2} Q_{4},\\ \left[Q_{8}, Q_{9}\right] =Q_{10}, && \left[Q_{8}, Q_{10}\right] = - Q_{9},\\ \left[Q_{9}, Q_{10}\right] = Q_{8}.\\ \end{array} \end{equation}
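As a numerical sanity check on the change of basis and on the bracket table, one can realize each $S_i$ by substituting its parameter into the displayed matrix form of $sp(2)$, assemble the $Q_i$ with the coefficient matrix above, and verify orthonormality with respect to $<A,B> = -(1/5)\mathop{\rm Tr}(AB)$ together with a few sampled brackets. A sketch (the helper `sp2` and the particular brackets spot-checked are our choices):

```python
import numpy as np

def sp2(a11=0, a12=0, a13=0, a14=0, a33=0, a34=0):
    """Element of sp(2) in the 4x4 matrix form displayed in the text."""
    c = np.conj
    return np.array([
        [a11,      a12,   a13,      a14],
        [-c(a12), -a11,   c(a14),  -c(a13)],
        [-c(a13), -a14,   a33,      a34],
        [-c(a14),  a13,  -c(a34),  -a33]], dtype=complex)

i = 1j
S = [sp2(a11=i), sp2(a33=i), sp2(a12=1), sp2(a12=i), sp2(a34=1),
     sp2(a34=i), sp2(a13=1), sp2(a13=i), sp2(a14=1), sp2(a14=i)]

r52, r5, r6, r2, r3 = np.sqrt(5/2), np.sqrt(5)/2, np.sqrt(6)/2, np.sqrt(2)/2, np.sqrt(3)/2
M = np.zeros((10, 10))
M[0, 0], M[0, 1] = 1/2, -3/2      # Q1
M[1, 2] = r52                     # Q2
M[2, 3] = r52                     # Q3
M[3, 4], M[3, 6] = r6, -r2        # Q4
M[4, 5], M[4, 7] = r6, -r2        # Q5
M[5, 8] = r5                      # Q6
M[6, 9] = r5                      # Q7
M[7, 0], M[7, 1] = 3/2, 1/2       # Q8
M[8, 4], M[8, 6] = 1, r3          # Q9
M[9, 5], M[9, 7] = 1, r3          # Q10
Q = [sum(M[k, j] * S[j] for j in range(10)) for k in range(10)]

ip = lambda A, B: (-np.trace(A @ B) / 5).real       # <A,B> = -(1/5) Tr(AB)
G = np.array([[ip(A, B) for B in Q] for A in Q])
assert np.allclose(G, np.eye(10))                   # {Q_i} is orthonormal

br = lambda A, B: A @ B - B @ A
assert np.allclose(br(Q[0], Q[1]), Q[2])                        # [Q1,Q2] = Q3
assert np.allclose(br(Q[1], Q[2]), Q[0] + 3 * Q[7])             # [Q2,Q3] = Q1 + 3 Q8
assert np.allclose(br(Q[0], Q[3]), -Q[4] - np.sqrt(6) * Q[9])   # [Q1,Q4] = -Q5 - sqrt(6) Q10
assert np.allclose(br(Q[0], Q[6]), Q[5])                        # [Q1,Q7] = Q6
assert np.allclose(br(Q[8], Q[9]), Q[7])                        # [Q9,Q10] = Q8
print("orthonormality and sampled brackets verified")
```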
In order to determine the explicit form of the Jacobi operator along an arbitrary geodesic $\gamma$ with initial vector $v$ at the origin $o$, it is useful to determine the values of $ R\sp{i)}_0$, $i= 0, 1, 2, 3, 4$.
In the following we always suppose that $v \in {\bf m}$ is given by
$$v= \sum\nolimits_1^7 {x_i Q_i },\quad \sum\nolimits_1^7 {(x_i )^2 } = 1.$$ We denote by $\{E_i, i = 1, \dots, 7\}$ the orthonormal frame field along $\gamma$ obtained by parallel translation of the basis $\{Q_i \}$ along $\gamma$.
For the manifold $V_1$ the operators $R\sp{i)}_0$, $i = 0, 1, 2, 3, 4$, written in matrix form are given by $$R\sp{i)}_0 = \left( {\begin{array}{*{20}c} {R^{i)} _{11} } & \cdots & {R^{i)} _{17} } \\ \vdots & \ddots & \vdots \\ {R^{i)} _{71} } & \cdots & {R^{i)} _{77} } \\ \end{array} } \right)(0)$$ where ${R^{i)}}_{jk}(0)= <{R^{i)}}(E_k), E_j>(0)$.
In \cite{[T]} Tsukada defines curves of constant osculating rank in Euclidean space and applies this concept to naturally reductive homogeneous spaces; see also \cite [Vol.IV, Ch.7, Add. 4]{[Sp]}. For a unit vector $v \in {\bf m}$ determining the geodesic $\gamma$, $R_{t} = e^{t\Lambda(v)} R_0$ is a curve in $R({\bf m})$. Since $e^{t\Lambda(v)}$ is a $1$-parameter subgroup of the group of linear isometries of $R({\bf m})$, the curve $R_t$ has constant osculating rank $r$ \cite{[T]}. Therefore, for the Jacobi operator we have $$R_t = R_0 + a_1(t) R\sp{1)}_0 + \cdots + a_r(t)R^{r)}_0.$$
With the help of
Propositions 2.3 and 2.4 we obtain:
\begin{lema} At $\gamma(0)$ we have: \begin{enumerate} \item[i)] $R\sp{3)}_0= -{\vert \vert \gamma'\vert \vert }^2 R\sp{1)}_0 = - R\sp{1)}_0$; \item[ii)] $ R\sp{4)}_0= -{\vert \vert \gamma'\vert \vert}^2 R\sp{2)}_0 = - R\sp{2)}_0$. \end{enumerate} \end{lema}
{\bf{ Proof}}. Due to Tsukada's result about the constant osculating rank of the curvature operator on naturally reductive spaces, we know that there exists $r {\in} {\Bbb N}$ such that $R^{1)}, \dots , R^{r+1)}$ are linearly dependent. Now we are going to prove that $r = 2$ in $V_1$.
For that we study the relationship between $R^{1)}$ and $R^{3)}$ (later we shall find another one between $R^{2)}$ and $R^{4)}$ and so on). In particular, we have to compare $R^{1)}_{(i,j)}$ and $R^{3)}_{(i,j)}$ for $i,j=1,\dots,7$. Let us show how to proceed, for instance, to make the comparison between $R^{1)}_{(1,1)}$ and $R^{3)}_{(1,1)}$. The computation on the other $48$ elements of $R^{1)}$ and $R^{3)} $ will be analogous.
From Proposition 2.4 we have \begin {eqnarray}\label{(8.1)} R\sp{1)}_0(X)&=& (1/2) ( [[[X,v]_{\bf{h}},v]_{\bf{m}},v]_{\bf{m}} - [[[X,v]_{\bf{m}},v]_{\bf{h}},v]_{\bf{m}})\\ \nonumber &=& (1/2) \sum\limits_{1 \leq i,j,k \leq 7} x_i x_j x_k ( [[[X, Q_i]_{\bf{h}}, Q_j]_{\bf{m}}, Q_k]_{\bf{m}} - [[[X,Q_i]_{\bf{m}},Q_j]_{\bf{h}}, Q_k]_{\bf{m}}).
\end {eqnarray}
Therefore if we denote $$T_1 [1,i,j,k,1] = <( \nabla_{Q_k}R)(Q_1, Q_i)Q_j, Q_1>,$$
putting $X= Q_1$ and using (\ref {(8.1)}) it follows
\begin {eqnarray} \label{nose} R^{1)}_{(1,1)} &=&
<R\sp{1)}_0(Q_1), Q_1> = (1/2) < [[[Q_1,v]_{\bf{h}},v]_{\bf{m}},v]_{\bf{m}}
- [[[Q_1,v]_{\bf{m}},v]_{\bf{h}},v]_{\bf{m}}, Q_1> \\ \nonumber
&=&
(1/2) \sum\limits_{1 \leq i,j,k \leq 7} x_i x_j x_k < [[[Q_1, Q_i]_{\bf{h}}, Q_j]_{\bf{m}}, Q_k]_{\bf{m}} - [[[Q_1,Q_i]_{\bf{m}},Q_j]_{\bf{h}}, Q_k]_{\bf{m}}, Q_1> \\ \nonumber
&=& (1/2) \sum\limits_{1 \leq i,j,k \leq 7} x_i x_j x_k T_1 [1,i,j,k,1].
\end {eqnarray}
Now, using the values of the brackets of the vectors $Q_i$ in (\ref{CORCHETE}) we obtain that the non-vanishing components of $T_1$ are
\begin{equation} {\label{T1}} \begin{array}{lclcl}
T_1[1,2,6,4,1] = -3/2, && T_1[1,2,7,5,1] = 3/2, && T_1[1,3,6,5,1] = -3/2 \\
T_1[1,3,7,4,1] = -3/2,&& T_1[1,4,2,6,1]= 3/2,&&T_1[1,4,3,7,1] = -3/2\\ T_1[1,4,4,6,1]= -\sqrt{15}/2,&& T_1[1,4,5,7,1]= -\sqrt{15}/2, && T_1[1,4,6,2,1] = -3/2\\
T_1[1,4,6,4,1] = -\sqrt{15},&&T_1[1,4,7,3,1] = -3/2,&& T_1[1,4,7,5,1] = -\sqrt{15}\\
T_1[1,5,2,7,1]= -3/2, &&T_1[1,5,3,6,1] = -3/2, &&T_1[1,5,4,7,1] = -\sqrt{15}/2\\
T_1[1,5,5,6,1] = \sqrt{15}/2,&& T_1[1,5,6,3,1] = -3/2,&& T_1[1,5,6,5,1] = \sqrt{15}\\
T_1[1,5,7,2,1] = 3/2,&& T_1[1,5,7,4,1] = -\sqrt{15}, &&T_1[1,6,2,4,1] = 3/2\\
T_1[1,6,3,5,1] = 3/2, && T_1[1,6,4,4,1] = -\sqrt{15}/2, && T_1[1,6,5,5,1] = -\sqrt{15}/2\\
T_1[1,7,2,5,1] = -3/2, && T_1[1,7,3,4,1] = 3/2,&& T_1[1,7,4,5,1]= -\sqrt{15}/2\\
T_1[1,7,5,4,1] = -\sqrt{15}/2 \\
\end{array}
\end{equation}
Finally using (\ref{nose}) and (\ref{T1}) it is a straightforward computation to obtain that
\begin{equation}\label{R11}
R^{1)}_{(1,1)} = \sum\limits_{1\leq i,j,k \leq7} x_i x_j x_k T_1 [1,i,j,k,1] = -2\sqrt{15}({x_4}^2 x_6 - {x_5}^2 x_6 + 2{x_4}x_5 x_7).
\end{equation}
For $R^{3)}$, in an analogous way, we have \begin{eqnarray} \label{(9.1)} R\sp{3)}_0(X)&=& (1/8) ([[[[[X,v]_{\bf{h}},v]_{\bf{m}},v]_{\bf{m}}, v]_{\bf{m}},v]_{\bf{m}} - 3[[[[[X,v]_{\bf{m}},v]_{\bf{h}},v]_{\bf{m}}, v]_{\bf{m}},v]_{\bf{m}} +\\ \nonumber &&+ 3 [[[[[X,v]_{{\bf{m}}},v]_{\bf{m}},v]_{\bf{h}}, v]_{\bf{m}},v]_{\bf{m}} - [[[[[X,v]_{{\bf{m}}},v]_{\bf{m}},v]_{\bf{m}},v]_{\bf{h}},v]_{\bf{m}}). \end{eqnarray}
Let $R^{3)}_{(1,1)} $ be the element $<R\sp{3)}_0(Q_1), Q_1>$ of the matrix of $R^{3)}$. We denote
$$T_3 [1,i,j,k,l,m,1] = \,\, <( \nabla\sb{ Q_m}\nabla\sb{Q_l} \nabla\sb{Q_k}R)(Q_1,Q_i)Q_j, Q_1>, \quad i,j ,k,l,m = 1,\dots,7.$$
First, we compute the values $T_3[1,i,j,k,l,m,1]$ and compare them with the values obtained above for $T_1$. So, for example, if we study the values of $T_3$ when $i,j,k,l,m$ are respectively $1,1,4,4,6$, or a permutation $\sigma$ of these values, we obtain
$$ \sum\limits_{(i,j,k,l,m) \in S(1,1,4,4,6)} x_i x_j x_k x_l x_m T_3 [1,i,j,k,l,m,1]= (2\sqrt{15}) {x_1}^2 {x_4}^2 x_6.$$
Analogously, if we consider $i=1$, $j=1$, $k=5$, $l=5$, $m=6$, or a permutation of these values, we obtain
$$ \sum\limits_{(i,j,k,l,m) \in S(1,1,5,5,6)} x_i x_j x_k x_l x_m T_3 [1,i,j,k,l,m,1]= (-2\sqrt{15}) {x_1}^2 {x_5}^2 x_6$$
and for $i=1$, $j=1$, $k=4$, $l=5$, $m=7$, or a permutation of these values, we obtain
$$ \sum\limits_{(i,j,k,l,m) \in S(1,1,4,5,7)} x_i x_j x_k x_l x_m T_3 [1,i,j,k,l,m,1]= (4\sqrt{15}) {x_1}^2 x_4 {x_5} x_7.$$
On the other hand, for any set of indices $I$ different from $A$, $B$ and $C$, where
$A=\{h,h,4,4,6\}$, $B=\{h,h,5,5,6\}$, $C=\{h,h,4,5,7\}$, $h= 1,\dots,7$, the following sum vanishes
$$ \sum\limits_{(i,j,k,l,m) \in S(I)} x_i x_j x_k x_l x_m T_3 [1,i,j,k,l,m,1]= 0.$$
In consequence \begin{equation}\label{R3}
\sum\limits_{1 \leq i,j, k,l,m \leq 7} x_i x_j x_k x_l x_m T_3 [1,i,j,k,l,m,1]= \sum\limits_{1 \leq h \leq 7} ({x_h}^2) \, 2\sqrt{15}({x_4}^2 x_6 - {x_5}^2 x_6 + 2{x_4}x_5 x_7) .
\end{equation}
Then we can conclude from (\ref{R11}) and (\ref{R3}) that
$$R^{3)}_{(1,1)} = - ({x_1}^2 + \cdots + {x_7}^2) R^{1)}_{(1,1)} = -{\vert \vert \gamma'\vert \vert }^2 R^{1)}_{(1,1)}.$$
The proof of ii) is analogous to i).
\begin{rem}{\rm Using {\it Mathematica} it is possible to verify that the non-vanishing components calculated in this proof are correct.}\end{rem}
\begin{prop} At $\gamma(0)$ we have: \begin{enumerate} \item[i)] $R\sp{2n)}_0 = (-1)\sp{n-1} R\sp{2)}_0$; \item[ii)] $R\sp{2n+1)}_0 = (-1)\sp{n} R\sp{1)}_0$. \end{enumerate}
\end{prop} {\bf Proof.\ \ } We are going to prove i) by induction, ii) may be obtained in a similar way. First, Lemma 3.1 (ii) gives the result for $n = 2$. Next, suppose that for $n = k$ the result is true, that is, \begin {equation}\label{(3.1)}
R\sp{2k)}_0 = (-1)^{k-1} R^{2)}_{0}. \end {equation}
Using Proposition 2.4 we have $$(-1)\sp{2k+1} 2\sp{2k+2 }R\sp{2k+2)}_0(X) = \sum\limits_{i = 0}^{2k + 2} {( - 1)^i } \left( {\begin{array}{*{20}c} {2k + 2} \\ i \\ \end{array} } \right)[[[X,v]_{\bf m},\dots, v]_ {\bf h}^{i+1)}, \dots , v]_{{\bf m}}^{2k+4)}.$$
There are $2k+3$ terms and each one has $2k+4$ brackets. If we take into account Lemma 1.1 in the previous expression, we obtain \begin{eqnarray*} && (-1)^{2k+1} (2)^{2k+2} R\sp{2k+2)}_0(X) \\ &=&\sum\limits_{i = 0}^{2k} {( - 1)^i } \left( {\begin{array}{*{20}c} {2k} \\ i \\ \end{array} } \right) [[[[[X,v]_{\bf m}, \dots, v]_{\bf h} ^{ i+1)}, \dots ,v]_{\bf m}^{ 2k+2)},v]_{\bf m},v]_{\bf m}\\ && -2 (\sum\limits_{i = 0}^{2k} {( - 1)^i } \left({\begin{array}{*{20}c} {2k} \\ i \\ \end{array} } \right)
[[[[X,v]_{\bf m},\dots , v]_{\bf h}^{ i+2)},\dots , v]_{{\bf m}} ^{ 2k+3)},v]_{\bf m})\\ && +\sum\limits_{i = 0}^{2k} {( - 1)^i } \left( {\begin{array}{*{20}c} {2k} \\ i \\\end{array} } \right) [[[X,v]_{\bf m},\dots , v]_{\bf h}^{ i+3)},\dots , v]_{\bf m}^{ 2k+4)}.\end{eqnarray*}
If we call $X' =[X,v]_{\bf m}$ and $X'' = [[X,v]_{\bf m},v]_{\bf m}$ and using also Proposition 2.4, it follows \begin{eqnarray*} &&(-1)^{2k+1} 2^{2k+2} R\sp{2k+2)}_0(X)\\
&=& (-1)^{2k+1} 2^{2k+2} ([[R\sp{2k)}_0(X),v]_{\bf m},v]_{\bf m} - 2[R\sp{2k)}_0(X'),v]_{\bf m} +R\sp{2k)}_0(X'')). \end{eqnarray*} Taking into account Lemma 3.1, formula (\ref{(3.1)}) and the values of $X'$ and $X''$ in the previous expression, Proposition 3.2 follows.
The next result follows immediately from Proposition 3.2. \begin{prop} The normal naturally reductive homogeneous space $V_1 = Sp(2)/SU(2)$ is of constant osculating rank $2$. \end{prop} \begin{cor}
Along the geodesic $\gamma$ the Jacobi operator can be written as $$R_t= R_0 + R\sp{2)}_0 + R\sp{1)}_0\sin t - R\sp{2)}_0 \cos t.$$ \end{cor} The proof follows from the Taylor development of $R_t$ at $t=0$ and by using Proposition 3.2.
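The resummation behind the corollary can be illustrated with scalar stand-ins $r_0$, $r_1$, $r_2$ for $R_0$, $R\sp{1)}_0$, $R\sp{2)}_0$: with the derivatives prescribed by Proposition 3.2, the Taylor series sums to $r_0 + r_2 + r_1\sin t - r_2\cos t$. A sketch (the test values are arbitrary):

```python
import math
import random

random.seed(0)
r0, r1, r2 = [random.uniform(-1, 1) for _ in range(3)]

def deriv(n):
    # n-th derivative of R_t at t = 0, as prescribed by Proposition 3.2
    if n == 0:
        return r0
    if n % 2:                              # n = 2m + 1
        return (-1) ** ((n - 1) // 2) * r1
    return (-1) ** (n // 2 - 1) * r2       # n = 2m, m >= 1

for t in (0.3, 1.0, 2.5):
    series = sum(deriv(n) * t ** n / math.factorial(n) for n in range(60))
    closed = r0 + r2 + r1 * math.sin(t) - r2 * math.cos(t)
    assert abs(series - closed) < 1e-9, (t, series, closed)
print("Taylor resummation matches the closed form")
```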
\begin{cor} Along the geodesic $\gamma$ the derivatives of the Jacobi operator satisfy: \begin{enumerate} \item[i)] $R\sp{2n)}_t = (-1)\sp{n-1} R\sp{2)}\sb{t}$; \item[ii)] $R\sp{2n+1)}\sb{t} = (-1)\sp{n} R\sp{1)}\sb{t}$; \item[iii)]
$ R_t \cdot R\sp{1)}_t= R\sp{1)}_t \cdot R_t ,\\ R_t \cdot R\sp{2)}_t = R\sp{2)}_t \cdot R_t, \\R\sp{1)}_t \cdot R\sp{2)}_t= R\sp{2)}_t \cdot R\sp{1)}\sb{t}$. \end{enumerate} \end{cor} The result is a consequence of Corollary 3.4 and the fact that iii) is true for $t=0$. \begin {rem} {\rm In \cite{[BV]} the authors analyze a class of Riemannian homogeneous spaces
which have the property that the eigenspaces of $R_{\gamma}$ are parallel along $\gamma$. Evidently, this property is not verified in our case. In fact, although, for each $t$, the operators $R_t$ and $R\sp{i)}_t $ commute and therefore are simultaneously diagonalizable, according to Corollary 3.4 and (ii) of Proposition 2.2 the eigenvectors of these operators are not independent of $t$.} \end {rem}
\section{The solution of the Jacobi equation on the manifold $V_1$. Application to the determination of volumes of geodesic balls.} For a naturally reductive homogeneous Riemannian manifold it is possible to write the Jacobi equation as a differential equation with constant coefficients. In order to do that, the canonical connection is frequently used. Since this connection and the Levi-Civita connection have the same geodesics, in an equivalent form, it is possible to write the same equation based on the Levi-Civita connection \cite{[Ch2]}. In this case, the coefficients are functions of the arc-length along the geodesic. In order to work with this equation on the manifold $V_1$ it will be useful to use the simple expression of the Jacobi operator $R_t$.
We shall now introduce some notation and provide some basic formulae which will be needed in this section. For more information see \cite{[BPV],[Ch2],[Z]}. Let $A$ be the Jacobi tensor field along the geodesic $\gamma$ (that is, the solution of the endomorphism valued Jacobi equation $Y'' + R_t Y= 0$ along $\gamma$) with initial values \begin {equation}\label{(4.1)} A_0 = 0,\quad A\sp{1)}_0 =I, \end {equation}
where we consider the covariant differentiation with respect to $\gamma'$ and $I$ is the identity transformation of $T_{\gamma(0)}M$. Then the Jacobi equation reads $A^{2)}_t= - R_t A\sb{t}$.
In order to obtain the expression of the Jacobi fields with initial conditions (\ref{(4.1)}) at $\gamma(0)$, it is enough to know the Taylor series development of $A\sb{t}$ and to apply the initial conditions. Thus, using Lemma 3.1, only $R_0$, $R\sp{1)}_0$ and $R\sp{2)}_0$ appear in the power series of $A\sb{t}$. If $\{E_{i}, i=1,\dots,7 \}$ is the orthonormal frame field along $\gamma$ obtained by parallel translation of the basis $\{Q_i\}$ along $\gamma$, one has $Y\sb{t} = A\sb{t} E\sb{t}$ or $Y\sb{i,t} = A^{j}\sb{i,t}E_{j,t}$, $1 \le i, j \le 7$, and this is the expression of the Jacobi vector fields along the geodesic $\gamma$ with the indicated initial conditions. \begin{prop} For the manifold $V_1$, one has
$$A\sb{t}= \sum\limits_{k =0}^\infty \frac{{{1}}}{{{{k!}} }} {\beta _k t^k } $$ where $\beta_k = \alpha_{k-1}+\beta_{k-1}'$, \quad $\alpha_k = \alpha_{k-1}'-R\beta_{k-1}$, $k \geq 2$. Moreover, $\alpha_0 = \beta_0 = 0$, $\alpha_1 = 0$,
$\beta_1 = I$ and the coefficients $\beta_k$ are only functions of $R_0$, $R^{1)}_0$ and $R^{2)}_0$. \end{prop} {\bf Proof.\ \ } If we successively derive $A_t^{2) }= -R_tA_t$, we obtain $$A_t^{i) }= \left(\alpha'_{i-1}(t)-R_t\beta_{i-1}(t)\right)A_t + \left(\alpha_{i- 1}(t)+\beta'_{i- 1}(t) \right)A_t^{1)},$$ which we can write as
$$A_t^{i)} = \alpha_i(t) A_t + \beta_i(t) A_t^{1)},$$ where $$\alpha_i(t) = \alpha_{i-1}'(t)-R_t\beta_{i-1}(t),$$ and $$ \beta_i(t) = \alpha_{i- 1}(t)+\beta_{i-1}'(t),
\quad i \geq 2;$$ if $t = 0$ one has $A_0^{0)}= \beta_0(0)= 0$, $A^{1)}_0 = \beta_1(0) =I$, $A^{2)}_0= \beta _2 (0)= 0$, $A^{3)}_0= \beta_3 (0)= -R_0^{0)}=-R_0$, and, in general, $$A^{i)}_0 = \alpha_{i-1}(0)+\beta_{i-1}'(0) = \beta_i(0).$$
When there is no risk of confusion we identify $ \alpha_{i} = \alpha_{i}(0)$ and $ \beta_{i} = \beta_{i}(0)$. Now the result follows using the Taylor series development of $A_t$.
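As a toy illustration of the recursion, consider the scalar case with constant $R_t \equiv 1$: the Jacobi solution with $A_0 = 0$, $A\sp{1)}_0 = 1$ is $\sin t$, and the $\beta_k$ of Proposition 4.1 reproduce its Taylor coefficients. A sketch (the constant scalar model is our simplification, not the $V_1$ operator):

```python
import sympy as sp

t = sp.symbols('t')
R = sp.Integer(1)            # toy: constant scalar "curvature" R_t = 1

# alpha_i(t), beta_i(t) built from the recursion in Proposition 4.1
alpha = [sp.Integer(0), sp.Integer(0)]
beta = [sp.Integer(0), sp.Integer(1)]
N = 12
for i in range(2, N):
    alpha.append(sp.diff(alpha[i - 1], t) - R * beta[i - 1])
    beta.append(alpha[i - 1] + sp.diff(beta[i - 1], t))

A_series = sum(beta[k].subs(t, 0) * t**k / sp.factorial(k) for k in range(N))
# for R = 1 the Jacobi solution with A(0) = 0, A'(0) = 1 is sin(t)
target = sp.series(sp.sin(t), t, 0, N).removeO()
assert sp.expand(A_series - target) == 0
print("beta_k recursion reproduces sin(t) for constant R = 1")
```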
Let $m$ be a point of the manifold $M$ and $V$ and $U$ open neighbourhoods of $0$ in $T_mM$ and of $m$ in $M$ respectively such that
$\exp_m$ is a diffeomorphism of $V$ onto $U$. For all $v \in V$, the function $\theta(v)$ \cite [p. 54]{[BGM]} is well defined; it is given by the absolute value of a determinant: $$\theta(v)= \vert \det T_v \exp_m \vert.$$
\begin{defi}Let $U_{\epsilon}(m)$ be a normal neighbourhood of radius ${\epsilon}>0$ of the point $m$ in $M$. For each $t$ such that $0<t<\epsilon$ and for each $v$ in $T_mM$ the function $t \mapsto \theta(tv)$ is the volume density function at $m$ in the direction $v$. \end{defi} \begin{lema}{\em \cite [p. 90]{[BGM]}} Let $u \in T_o M$ and $t>0$, then for all $v \in T_o M$, $T\sb{tu}\exp_o(v)$ is the value in $t$ of the Jacobi field $Y$ along the geodesic $\gamma$ ($\gamma(0)=o, \gamma'(0)=u$) with initial conditions $Y(o) = 0$, $Y'(o) = v/t$.\end{lema} \begin{prop} In the manifold $V_1$, the volume density function at $o$ is given by \begin{equation} \theta(tu) =
\frac{{{1}}}{{{{t}}^{{7}} }}\left| {\det A} \right|. \end{equation} \end{prop} The proof follows in a natural way from the standard methods of \cite{[BGM],[Gr], [GV]}. \begin{cor} The coefficient of $t\sp{n}$ in the development of $\det A$ is given by \[ a_n={{ }} \sum\limits_{ \matrix{ r_1 + \cdots + r_7 = n \cr 0\leq r_1,\cdots,r_7 \leq n\\ }} \frac{{{1}}}{{{{r_{1}!}} }} \cdots \frac{{{1}}}{{{{r_{7}!}} }} {\sum\limits_\sigma {\mathop{\rm sig}(\sigma )} \beta _{r_1, \sigma (1)}^{1} \cdots\beta _{r_7, \sigma (7)}^{7} }. \]\end{cor} {\bf Proof.\ \ } The seven columns $C_j, j=1,\dots,7$ of $A$ can be written as \[C_j = \beta _0 ^j +\cdots + \frac{{{1}}}{{{{n!}} }} \beta _n ^j t^n + \cdots = \sum\limits_{k = 0}^\infty \frac{{{1}}}{{{{k!}} }} {\beta _k^j } t^k \] where the upper index $j$ shows the $j^{th}$-column of the matrix $\beta_ k$ (Proposition 4.1). Taking into account that the determinant is a multilinear function, the coefficient $a_n$ of $t\sp{n}$ in the development of $\det A$ is \[ {{a}}_{{n}} {{ = }} \sum\limits_{\matrix{ r_1 + \cdots + r_7 = n \cr 0\leq r_1,\dots,r_7 \leq n\\ }} \frac{{{1}}}{{{{r_{1}!}} }} \cdots \frac{{{1}}}{{{{r_{7}!}} }} {\det (\beta _{r_1}^1 ,\dots,\beta _{r_7}^7 ).} \] If we represent the matrix $ \beta_{r_{k}} $ by $\beta_{r_{k}} = (\beta_{r_{k},i}^{j} ), i, j = 1,\dots , 7$, using the algebraic definition of the determinant it follows that $$ a_n={{ }} \sum\limits_{\matrix{ r_1 + \cdots + r_7 = n \cr 0\leq r_1,\dots,r_7 \leq n\\ }} \frac{{{1}}}{{{{r_{1}!}} }} \cdots \frac{{{1}}}{{{{r_{7}!}} }} {\sum\limits_\sigma {\mathop{\rm sig}(\sigma )} \beta _{r_1, \sigma (1)}^{1} \cdots\beta _{r_7, \sigma (7)}^{7} }$$
where $\sigma$ are the permutations of seven elements and $\mathop{\rm sig}(\sigma)$ represents the signature of the corresponding permutation. \begin{lema} {\em \cite{[Gr]}} For the manifold $V_1$, \begin{enumerate} \item[i)] The area of the geodesic sphere with center $o \in V_1$ and radius $t$ is given by $$S_o(t)= t^6 \int\nolimits_{\Omega ^6 (1)} {\theta (tu)du} $$
where $\Omega^6(1)$ denotes the 6-dimensional Euclidean unit sphere.
\item[ii)] The volume of the geodesic ball with center $o\in V_1$ and radius $r$ is given by $$V_o(r)=\int\nolimits_0^r {S_o(t)dt}.$$ \end{enumerate} \end{lema} Now, using the standard notation for moments \cite [p. 255--258]{[Gr]}, we have: \begin{prop} \begin{enumerate} \item[i)] The area of the geodesic sphere with center $o$ and radius $t$ is given by $$S_o(t) = (16\pi^{3}/105 ) \sum\limits_{n = 3}^\infty < a_{2n+1} > t^{2n};$$ \item[ii)] The volume of the geodesic ball with center $o$ and radius $t$ is given by $$ V_o(t)=(16\pi^{3}/105) \sum\limits_{n = 3}^\infty \frac{1}{2n+1} < a_{2n+1} > t^{2n+1}.$$ \end{enumerate} \end{prop} {\bf Proof.\ \ } i) If we integrate i) of Lemma 4.6 over the sphere, the odd powers vanish and the result follows immediately. For ii) we use that $V_o(r)=\int\nolimits_0^r {S_o}(t)dt$.\par
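The multilinear determinant expansion of Corollary 4.5, which feeds the series for $S_o$ and $V_o$, can be checked symbolically on a $2\times 2$ analogue (a sketch; the integer matrices $\beta_k$ are arbitrary choices):

```python
import sympy as sp
from itertools import product

t = sp.symbols('t')
N = 4
# toy 2x2 matrices beta_k with simple integer entries
beta = [sp.Matrix(2, 2, lambda r, c, k=k: (k + 1) * (r + 2 * c + 1) % 5)
        for k in range(N)]
A = sum((beta[k] * t**k / sp.factorial(k) for k in range(N)), sp.zeros(2, 2))

detA = sp.expand(A.det())
for n in range(N):
    direct = detA.coeff(t, n)
    # sum over r1 + r2 = n of det(beta_{r1}^1, beta_{r2}^2) / (r1! r2!)
    formula = sum(
        sp.Matrix.hstack(beta[r1][:, 0], beta[r2][:, 1]).det()
        / (sp.factorial(r1) * sp.factorial(r2))
        for r1, r2 in product(range(n + 1), repeat=2) if r1 + r2 == n)
    assert sp.simplify(direct - formula) == 0, n
print("determinant coefficients match the multilinear formula")
```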
{\bf Acknowledgements}: The authors gladly acknowledge helpful conversations with J. \'Alvarez Paiva, T. Arias--Marco, J. C. Gonz\'alez - D\'avila, O. Kowalski, E. Mac\'{\i}as and L. Vanhecke.
Authors' addresses:
A. M. Naveira\\ Departamento de Geometr\'{\i}a y Topolog\'{\i}a. Facultad de Matem\'aticas.\\ Avda. Andr\'es Estell\'es, No.~1\\ 46100 - Burjassot\\ Valencia, SPAIN\par Phone +34-963544363\\ Fax: +34-963544571\\ e-mail: {\tt naveira@uv.es}
A. D. Tarr\'{\i}o Tobar\\ E. U. Arquitectura T\'ecnica\\ Campus A Zapateira. Universidad de A Coru\~na\\ 15192 - A Coru\~na, SPAIN\par Phone +34-981167000 Ext. 2721, 2713\\ Fax: +34-981167060\\ e-mail: {\tt madorana@udc.es}
\end{document}
\begin{document}
\title{\textbf{Decoherence of tripartite states - a trapped ion coupled to an optical cavity}}
\author{S. Shelly Sharma} \email[shelly@uel.br]{} \affiliation{Depto. de F\'{i}sica, Universidade Estadual de Londrina, Londrina 86051-990, PR Brazil }
\author{N. K. Sharma} \email[nsharma@uel.br]{} \thanks{} \affiliation{Depto. de Matem\'{a}tica, Universidade Estadual de Londrina, Londrina 86051-990 PR, Brazil }
\author{E. de Almeida} \email[eduardo@uel.br]{} \thanks{} \affiliation{Depto. de F\'{i}sica, Universidade Estadual de Londrina, Londrina 86051-990, PR Brazil }
\author{\normalsize Corresponding author: S. Shelly Sharma (shelly@uel.br)}
\begin{abstract} We investigate the decoherence process of a three-qubit system obtained by manipulating the state of a trapped two-level ion coupled to an optical cavity. Interaction of the ion with a resonant laser and the cavity field tuned to the red sideband of ionic vibrational motion generates tripartite entanglement of the internal state of the ion, the vibrational state of the ionic center of mass motion, and the cavity field state. Non-dissipative decoherence occurs due to entanglement of the system with the environment, modeled as a set of noninteracting harmonic oscillators. Analytic expressions for the state operator of the tripartite composite system, the probability of generating the maximally entangled GHZ state, and the population inversion have been obtained. Coupling to the environment results in exponential decay of the off-diagonal matrix elements of the state operator with time, as well as a phase decoherence of the component states.
Numerical calculations are performed to examine the time evolution of the GHZ state generation probability and the population inversion for different values of the system-environment coupling strength. Using negativity as an entanglement measure and linear entropy as a measure of mixedness, the entanglement dynamics of the tripartite system in the presence of decoherence sources is analyzed. The maximum tripartite entanglement is found to decrease as the strength of the system-environment coupling increases. The negativity and the linear entropy give qualitatively similar results, uniquely identifying maximally entangled and separable states of the system. For large values of the system-environment coupling strength, mixed states of the composite system lying at the boundary of the entangled-separable region are reached. For these states the negativity and the linear entropy show distinctly different behaviour. This can be understood by noting that whereas the negativity measures the entanglement-generating quantum correlations, the linear entropy measures all correlations that reduce the purity of the state.
{\textbf {Keywords}}: Trapped ions, Cavity QED, Tripartite entanglement, GHZ state generation, Non-Dissipative Decoherence, Negativity, Linear entropy, Mixed states
\end{abstract} \maketitle
\section{Introduction}
Controlled manipulation of quantum states and implementation of quantum logic gates are essential elements of a quantum computer. Quantum computation relies on nonclassical properties of qubits such as entanglement. Entanglement dynamics is of utmost importance to quantum communication \cite{eker91}, dense coding \cite{benn92} and quantum teleportation \cite{benn93} as well. Cold ions in a linear trap \cite {wine98} offer a promising physical system to implement quantum computation, as each ion provides a qubit whose state can be manipulated. One of the experimental efforts to realize multipartite entanglement involves trapping a two-level ion in an optical cavity. Coupling of a trapped ion to the quantized field inside an optical cavity has been successfully achieved \cite{mund03}. More than one trapped ion can be used to construct a more complex multiple-function system in which quantum states of several trapped ions entangled with cavity photons are manipulated in a controlled manner. A string of trapped ions in a cavity may be used for implementing quantum gates \cite{pell95}. In quantum networks involving cavity QED setups \cite{cira97}, quantum information is stored and processed by ions trapped in a cavity. Decoherence of quantum systems is one of the major hurdles to implementing quantum computers. Suppression of quantum coherence occurs due to interactions of the quantum system with the environment, resulting in random and unknown perturbations of the system Hamiltonian. In a previous article \cite {shar103}, we proposed a scheme to generate the three-qubit maximally entangled GHZ state, using a trapped ion interacting with a resonant external laser and a sideband-tuned single mode of the cavity field. In this article, we investigate the decoherence process of the three-qubit system obtained by manipulating the state of a two-level trapped ion coupled to an optical cavity.
In the context of a qubit based on a single Ca$^+$ ion, Schmidt-Kaler et al. \cite{schm03} have found magnetic field fluctuations and laser frequency fluctuations to be the major decoherence sources. Decoherence of motional quantum states of a trapped atom coupled to engineered phase and amplitude reservoirs has been measured \cite{myat00,turc00}. The trap frequency was changed by applying potentials to the trap electrodes to simulate a phase reservoir. In general the environment or heat bath is modeled as a system of noninteracting boson modes \cite{cald83}. It is known that for harmonic systems a heat bath is effectively equivalent to an external uncorrelated random force acting on the quantum system. As such, the decoherence effects due to magnetic field fluctuations, laser frequency fluctuations, and potentials applied to electrodes can be modeled by a system of noninteracting boson modes. Other important decoherence sources are cavity losses and spontaneous decay. The case where spontaneous decay effects and cavity losses become important has been examined by Fidio et al. \cite{fidi02}. In the present work, these two effects are assumed to be negligible.
As the laser-ion interaction times involved in experiments with ions trapped in a cavity are in the microsecond range, the Markovian approximation is likely to yield unreliable results. We consider here only adiabatic decoherence of the system, with no energy exchange between the quantum system and the environment. A detailed discussion of adiabatic decoherence has been given by Mozyrsky and Privman in ref. \cite{priv98}. Starting from an initial state in which the system and the environment are in a separable state, the state operator of the ion-cavity system is obtained by tracing over the environment degrees of freedom. Coherent states are used to evaluate the trace \cite{priv98}. The decoherence effects result in a state operator whose off-diagonal matrix elements, in the eigenbasis of the interaction Hamiltonian, decay with time. The state operator is used to examine the decoherence effects on the probability of generating the maximally entangled tripartite GHZ state and on the population inversion of the two-level ion. As the coherences decrease, the system tends to a purely mixed state of energy eigenvectors. Decoherence of tripartite entanglement is analyzed by applying the Peres-Horodecki separability condition \cite {pere96} to bipartite decompositions of the composite system. A state is separable if the partial transpose of its density matrix with respect to a subsystem is positive semidefinite. An entangled state violates the positive partial transpose (PPT) separability criterion. Negativity \cite{vida02} is an entanglement measure based on the PPT separability criterion. We use the negativity to characterize the entanglement and the linear entropy to measure the mixedness of the three subsystems.
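The PPT-based negativity used in the sequel can be sketched numerically. For the three-qubit GHZ state, the negativity across the $1|23$ bipartition equals $1/2$ in the convention $N(\rho)=(\Vert \rho^{T_A}\Vert_1 - 1)/2$ of \cite{vida02}, while the maximally mixed state gives $0$ (a sketch; the sampled states are our choices):

```python
import numpy as np

def partial_transpose_first(rho, dA, dB):
    """Partial transpose over the first (dA-dimensional) factor of C^dA x C^dB."""
    r = rho.reshape(dA, dB, dA, dB)          # indices: a, b, a', b'
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

def negativity(rho, dA, dB):
    """N(rho) = (||rho^{T_A}||_1 - 1) / 2."""
    evals = np.linalg.eigvalsh(partial_transpose_first(rho, dA, dB))
    return (np.sum(np.abs(evals)) - 1) / 2

# GHZ state of three qubits, viewed as qubit 1 versus qubits (2,3)
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)             # (|000> + |111>)/sqrt(2)
rho = np.outer(ghz, ghz)
N = negativity(rho, 2, 4)
assert abs(N - 0.5) < 1e-12                  # maximal for this cut

# the maximally mixed (separable) state has zero negativity
assert abs(negativity(np.eye(8) / 8, 2, 4)) < 1e-12
print("GHZ negativity across the 1|23 cut:", round(N, 3))
```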
\section{Unitary evolution of isolated system}
Consider a single two-level ion trapped in a high finesse optical cavity. The frequency of the radio frequency (Paul) trap along the $x$ axis is $\nu $ and the excitation energy of the ion is $\hbar \omega _{0}$. The ion is irradiated by an external resonant laser of frequency $\omega _{L}$ and interacts with the single mode cavity field of frequency $\omega _{c}$ tuned to the red sideband of ionic vibrational motion. Interaction with the external laser field as well as the cavity field generates entanglement of the internal states of the ion, the vibrational states of the ionic center of mass, and the cavity field number state. The free Hamiltonian of the system is \begin{equation} \hat{H}_{0S}=\hbar \nu \left( \hat{a}^{\dagger }\hat{a}+\frac{1}{2}\right) +\hbar \omega _{c}\hat{b}^{\dagger }\hat{b}+\frac{\hbar \omega _{0}}{2} \hat{\sigma} _{z}\quad , \label{01} \end{equation} $\hat{a}^{\dagger }(\hat{a})$ and $\hat{b}^{\dagger }(\hat{b})$ being the creation(destruction) operators for vibrational phonon and cavity field photon respectively. With the ion placed close to the node of the cavity field standing wave, the interaction Hamiltonian is given by \begin{eqnarray} \hat{H}_{I} &=&\hbar \Omega \lbrack \hat{\sigma} _{+}\exp \left[ i\eta _{L}(\hat{a}+\hat{ a}^{\dagger })-i\omega _{L}t\right] +\hat{\sigma} _{-}\exp \left[ -i\eta _{L}(\hat{a}+ \hat{a}^{\dagger })+i\omega _{L}t\right]\rbrack \nonumber \\ &&+\hbar g\left( \hat{\sigma} _{+}\hat{b}+\hat{\sigma} _{-}\hat{b}^{\dagger }\right) {\sin }\left[ \eta _{c}(\hat{a}+\hat{a}^{\dagger })\right] ,\label{02} \end{eqnarray} where $\Omega $, $g$, $\eta _{L}$, $\eta _{c}$ are the Rabi frequency, the ion-cavity coupling strength, the ion-laser Lamb-Dicke parameter, and the ion-cavity field Lamb-Dicke parameter, respectively. We work in the Lamb-Dicke regime, that is, $\eta _{L}\ll 1$, $\eta _{c}\ll 1$.
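The Lamb-Dicke expansion behind the treatment of Eq. (\ref{02}), replacing $\sin [\eta _{c}(\hat{a}+\hat{a}^{\dagger })]$ by its leading term $\eta _{c}(\hat{a}+\hat{a}^{\dagger })$, can be checked numerically on a truncated Fock space (a sketch; the dimension, $\eta$ and tolerance are illustrative choices):

```python
import numpy as np

dim, eta = 15, 0.05                         # truncated Fock space; Lamb-Dicke parameter
a = np.diag(np.sqrt(np.arange(1, dim)), 1)  # annihilation operator in the number basis
X = a + a.T                                 # a + a^dagger (real symmetric here)

# exact matrix sine via the spectral decomposition of X
w, V = np.linalg.eigh(X)
sinX = V @ np.diag(np.sin(eta * w)) @ V.T

# leading Lamb-Dicke approximation: sin[eta (a + a^dagger)] ~ eta (a + a^dagger)
err = np.abs(sinX - eta * X)[:5, :5].max()  # compare on low Fock states (truncation-safe)
assert err < 50 * eta**3                    # residual is O(eta^3)
print("max deviation on low Fock states: %.2e" % err)
```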
In the interaction picture determined by the transformation $\widehat{U}=\exp (i\hat{H}_{0S}t/\hbar )$ and rotating wave approximation, for $\omega _{L}=\omega _{0}$ and $\omega _{c}=\omega _{0}-\nu $, the interaction Hamiltonian reduces to \begin{equation} \hat{H}_{II}=\hbar \Omega \lbrack \hat{\sigma} _{+}+\hat{\sigma} _{-}]+\hbar g{\eta _{c}} \left[ \hat{\sigma} _{+}\hat{b}\hat{a}+\hat{\sigma} _{-}\hat{b}^{\dagger }\hat{a} ^{\dagger }\right] . \label{03} \end{equation} The unitary evolution of the isolated system in interaction picture is given by \[ i\hbar \frac{\partial \Psi _{I}(t)}{\partial t}=\hat{H}_{II}\Psi _{I}(t), \] where $\Psi _{I}(t)=\exp (i\hat{H}_{0S}t/\hbar )\Psi _{S}(t).$
\subsection{Basis truncation}
We expand $\hat{H}_{II}$ in the space of basis vectors $ \left\vert i,k,l\right\rangle $ as \begin{equation} \hat{H}_{II}=
\mathop{\displaystyle\sum}
\limits_{i,k,l,i^{\prime },k^{\prime },l^{\prime }}\left\langle i^{\prime },k^{\prime },l^{\prime }\right\vert \hat{H}_{II}\left\vert i,k,l\right\rangle \left\vert i^{\prime },k^{\prime },l^{\prime }\right\rangle \;\left\langle i,k,l\right\vert \text{ ,} \label{05} \end{equation} where $i\in \left\{ g,e\right\} $, and $k,l\in \left\{ 0,1,\ldots \right\} $ denote the state of ionic vibrational motion and the cavity field number state, respectively. We isolate a four-dimensional vector space containing the computational basis vectors $\left\vert g,m-1,n-1\right\rangle ,\left\vert e,m-1,n-1\right\rangle ,\ \left\vert g,m,n\right\rangle $ and $\left\vert e,m,n\right\rangle $ for a given choice of $m,n$ values. The matrix representing the interaction Hamiltonian acting in this four-dimensional space is \begin{equation} \hat{H}_{IS}\rightarrow \left[ \begin{array}{cccc} 0 & \hbar \Omega & 0 & 0 \\ \hbar\Omega & 0 & \hbar g{\eta _{c}}\sqrt{mn} & 0 \\ 0 & \hbar g{\eta _{c}}\sqrt{mn} & 0 & \hbar \Omega \\ 0 & 0 & \hbar \Omega & 0 \end{array} \right]. \label{06} \end{equation} The unitary transformation that diagonalizes $\hat{H}_{IS}$ is easily obtained and yields the eigenvectors satisfying $\hat{H}_{IS}\Phi _{p}=E_{p}\Phi _{p},$ ($p=1,\ldots,4$). 
The computational basis vectors are related to the eigenvectors $\Phi _{p}$ through \begin{equation} \left[ \begin{array}{c} \left\vert g,m-1,n-1\right\rangle \\ \left\vert e,m-1,n-1\right\rangle \\ \left\vert g,m,n\right\rangle \\ \left\vert e,m,n\right\rangle \end{array} \right] =\left[ \begin{array}{cccc} \frac{A+B}{\sqrt{2}} & \frac{A-B}{\sqrt{2}} & \frac{A-B}{\sqrt{2}} & \frac{ -A-B}{\sqrt{2}} \\ \frac{A-B}{\sqrt{2}} & \frac{-A-B}{\sqrt{2}} & \frac{A+B}{\sqrt{2}} & \frac{ A-B}{\sqrt{2}} \\ \frac{B-A}{\sqrt{2}} & \frac{A+B}{\sqrt{2}} & \frac{A+B}{\sqrt{2}} & \frac{ A-B}{\sqrt{2}} \\ \frac{-A-B}{\sqrt{2}} & \frac{B-A}{\sqrt{2}} & \frac{A-B}{\sqrt{2}} & \frac{ -A-B}{\sqrt{2}} \end{array} \right] \left[ \begin{array}{c} \Phi _{1} \\ \Phi _{2} \\ \Phi _{3} \\ \Phi _{4} \end{array} \right] , \label{07} \end{equation} where $a_{mn}=\frac{1}{2}g\eta _{c}\sqrt{mn}$, $\mu _{mn}=\sqrt{ a_{mn}^{2}+\Omega ^{2}}$, $A^{2}=(\mu _{mn}+\Omega )/4\mu _{mn}$, and $ B^{2}=(\mu _{mn}-\Omega )/4\mu _{mn}$. The corresponding eigenvalues are $ E_{1}=\hbar(\mu _{mn}-a_{mn})$, $E_{2}=-\hbar\left( \mu _{mn}+a_{mn}\right)$, $E_{3}=\hbar(\mu_{mn}+a_{mn})$, and $E_{4}=\hbar(a_{mn}-\mu _{mn})$.
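As a quick numerical sketch of this diagonalization (illustrative parameter values, units with $\hbar =1$; not part of the original derivation), one can verify that the truncated matrix of Eq. (\ref{06}) indeed has the eigenvalues $\pm (\mu _{mn}\mp a_{mn})$ quoted above:

```python
import numpy as np

# Sketch: eigenvalues of the truncated interaction Hamiltonian H_IS (hbar = 1).
# Omega, g, eta_c, m, n below are illustrative numbers, not experimental values.
Omega, g, eta_c, m, n = 1.0, 0.8, 0.1, 2, 3

G = g * eta_c * np.sqrt(m * n)            # off-diagonal coupling g*eta_c*sqrt(mn)
a_mn = G / 2.0                            # a_mn = g*eta_c*sqrt(mn)/2
mu_mn = np.sqrt(a_mn**2 + Omega**2)       # mu_mn = sqrt(a_mn^2 + Omega^2)

H_IS = np.array([[0.0,   Omega, 0.0,   0.0],
                 [Omega, 0.0,   G,     0.0],
                 [0.0,   G,     0.0,   Omega],
                 [0.0,   0.0,   Omega, 0.0]])

numeric = np.sort(np.linalg.eigvalsh(H_IS))
analytic = np.sort([mu_mn - a_mn, -(mu_mn + a_mn),
                    mu_mn + a_mn, a_mn - mu_mn])   # E_1, E_2, E_3, E_4
assert np.allclose(numeric, analytic)
```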
\subsection{Unitary evolution in truncated basis space}
The unitary time evolution of the system due to the interaction operator $\hat{H}_{IS}$ was obtained analytically in ref. \cite{shar103} for the initial states $\left\vert g,m-1,n-1\right\rangle $ and $\left\vert e,m-1,n-1\right\rangle $. For an interaction time $t_{p}$ such that $\mu _{mn}t_{p}=p\pi ,\ p=1,2,\ldots,$ the initial state $\left\vert g,m-1,n-1\right\rangle $ is found to evolve into \begin{equation} \Psi_I (t_{p})=(-1)^{p}\left[ \cos (a_{mn}t_{p})\left\vert g,m-1,n-1\right\rangle -i\sin (a_{mn}t_{p})\left\vert e,m,n\right\rangle \right] . \label{eq231} \end{equation} Now consider a special initial state with the ion in its ground state occupying the lowest energy trap state, while the cavity is prepared in the vacuum state ($m=n=1)$. In the absence of decoherence effects, the ratio $\alpha =\left( \mu _{11}/a_{11}\right) $ determines the interaction time $t_{p}$ needed to generate a maximally entangled tripartite GHZ state. Leaving out an overall phase factor, for an interaction time $t_{p}$ such that $a_{11}t_{p}=\frac{\pi }{4},\frac{5\pi }{4},\ldots,$ ($\alpha =4)$, the system is found to be in the state \begin{equation} \Psi _{GHZ,I}^{-}(t_{p})=\frac{1}{\sqrt{2}}\left( \left\vert g,0,0\right\rangle -i\left\vert e,1,1\right\rangle \right) , \label{eq232} \end{equation} and for $a_{11}t_{p}=\frac{3\pi }{4},\frac{7\pi }{4},\ldots,$ the state of the system is \begin{equation} \Psi _{GHZ,I}^{+}(t_{p})=\frac{1}{\sqrt{2}}\left( \left\vert g,0,0\right\rangle +i\left\vert e,1,1\right\rangle \right) . \label{eq233} \end{equation} The density operator representation for these states is \begin{equation} \widehat{\rho } _{GHZ,I}(t_{p})=\left\vert \Psi _{GHZ,I}(t_{p})\right\rangle \left\langle \Psi _{GHZ,I}(t_{p})\right\vert . 
\label{eq234} \end{equation} Neglecting an overall phase factor, the states can be written in the Schr\"{o}dinger picture as \begin{eqnarray*} \Psi _{GHZ}^{-}(t_{p}) &=&\frac{1}{\sqrt{2}}\left( \left\vert g,0,0 \right\rangle -i\exp \left( -i 2\omega _{0}t \right) \left\vert e,1,1\right\rangle \right) \label{eq235}\\ \Psi _{GHZ}^{+} (t_{p})&=&\frac{1}{\sqrt{2}}\left( \left\vert g,0,0\right\rangle +i\exp \left( -i 2\omega _{0}t \right) \left\vert e,1,1\right\rangle \right). \label{eq236} \end{eqnarray*}
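The stroboscopic evolution of Eq. (\ref{eq231}) and the GHZ state of Eq. (\ref{eq232}) can be cross-checked by direct numerical propagation in the truncated basis. The sketch below assumes $\hbar =1$ and an arbitrary time unit fixed by $a_{11}=1$ (illustrative, not experimental, values):

```python
import numpy as np

# Sketch: propagate |g,0,0> under the truncated H_IS for alpha = mu_11/a_11 = 4
# and interaction time a_11*t = pi/4 (so mu_11*t = pi), and check that the result
# is the GHZ state (|g,0,0> - i|e,1,1>)/sqrt(2) up to a global phase.
# Basis ordering: |g,0,0>, |e,0,0>, |g,1,1>, |e,1,1>.
a11 = 1.0                              # sets the time scale
mu11 = 4.0 * a11                       # alpha = 4
Omega = np.sqrt(mu11**2 - a11**2)      # from mu^2 = a^2 + Omega^2
G = 2.0 * a11                          # g*eta_c*sqrt(mn) for m = n = 1

H = np.array([[0.0,   Omega, 0.0, 0.0],
              [Omega, 0.0,   G,   0.0],
              [0.0,   G,     0.0, Omega],
              [0.0,   0.0,   Omega, 0.0]])

E, V = np.linalg.eigh(H)
t = np.pi / (4.0 * a11)                # a_11*t = pi/4
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi = U @ np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)
ghz = np.array([1.0, 0.0, 0.0, -1.0j]) / np.sqrt(2.0)
fidelity = abs(np.vdot(ghz, psi))**2   # insensitive to the global phase
assert abs(fidelity - 1.0) < 1e-10
```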
\section{Decoherence of unitary evolution in truncated space}
We notice that the interaction Hamiltonian $\hat{H}_{II}$ (Eq. (\ref{05})) connects the model space states to states outside the model space. For the specific case of the initial state $\left\vert g,0,0\right\rangle,$ the leading term in the probability of finding a state outside the model space varies as $t^{8}.$ The states outside the model space may be considered as part of the environment states. The resulting decoherence effects can be accounted for by coupling the model space interaction Hamiltonian to the environment, modeled as a set of oscillators. We recall that scattering outside the model space results in internal states linked to motional states with phonon number $>1$, resulting in heating of the ions. In a realistic experimental set-up, the heating of ions trapped in a radio-frequency trap occurs due to electric field noise from the trap electrodes \cite{turc200}. The two effects cannot be separated. Errors in initial phonon state preparation can also add to decoherence through additional scattering to states outside the model space. Cavity states with more than one photon are created and annihilated in the vector space outside the model space. In a lossy cavity, photons leaving the cavity act as a state reduction, which can contribute to creating states in the model space. In this work we consider the average decoherence effects arising from decoherence sources such as magnetic field fluctuations, laser frequency fluctuations, the potentials applied to the electrodes, and coupling to states outside the model space. The random perturbations of the Hamiltonian are accounted for by coupling the model space interaction Hamiltonian to an environment modeled as a set of non-interacting harmonic oscillators \cite{cald83}. The phase decoherence results in decay of the entanglement needed for various computation related tasks. 
In interaction picture the Hamiltonian for the system and environment, with $\hat{H}_{IS}$ coupled to environment is given by \begin{equation} \hat{H}=\hat{H}_{IS}+\sum_{k}\hbar \omega _{k}\hat{B}_{k}^{\dagger }\hat{B} _{k}+\hat{H}_{IS}\sum_{k}\left( g_{k}^{\ast }\hat{B}_{k}+g_{k}\hat{B} _{k}^{\dagger }\right) , \label{eq2} \end{equation} where $\hat{B}_{k}^{\dagger }$ and $\hat{B}_{k}$ are bosonic creation and destruction operators for the environment mode of frequency $\omega _k$. The coefficients $g_{k}$, $g_{k}^{\ast }$ are the system environment couplings.
The initial state of the system is assumed to be $\widehat{\rho }_{S}(0)$, while the environment is in an uncorrelated state, $\prod_{k}\widehat{\theta }_{k}$, where $\widehat{\theta }_{k}=Z_{k}^{-1}e^{-\beta \omega _{k}\hat{B} _{k}^{\dagger }\hat{B}_{k}}$ and $Z_{k}=(1-e^{-\beta \omega _{k}})^{-1}.$ The assumption that the environment is in an uncorrelated state allows exact solvability, though no fundamental physical reason can be given for it. However, as the initial state of the system is specially prepared, the assumption that the system-environment state is separable at $t=0$ holds. The state operator for the system is obtained by solving \begin{equation} \widehat{\rho }_{IS}(t)=Tr_{E}\left( e^{\frac{-{\rm i}\hat{H}t}{\hbar }} \widehat{\rho }_{S}(0)\prod_{k} \hat{\theta} _{k}e^{\frac{{\rm i}\hat{H}t}{\hbar } }\right) , \label{eq3} \end{equation} where $Tr_{E}$ refers to the operation of tracing over the bosonic degrees of freedom used to model the environment. Working in the eigenbasis of $\hat{H} _{IS}$, we may write \begin{eqnarray} \widehat{\rho }_{IS}(t) &=&\sum_{i,j}Tr_{E}\left( e^{-{\rm i}t\sum_{k}\left( \omega _{k}\hat{B}_{k}^{\dagger }\hat{B}_{k}+\frac{E_{i}}{\hbar }\left( g_{k}^{\ast }\hat{B}_{k}+g_{k}\hat{B}_{k}^{\dagger }\right) \right) }\prod_{k} \hat{\theta} _{k}e^{{\rm i}t\sum_{k}\left( \omega _{k}\hat{B} _{k}^{\dagger }\hat{B}_{k}+\frac{E_{j}}{\hbar }\left( g_{k}^{\ast }\hat{B} _{k}+g_{k}\hat{B}_{k}^{\dagger }\right) \right) }\right) \nonumber \\ &&\exp \left[ \frac{-{\rm i}(E_{i}-E_{j})t}{\hbar }\right] \left\langle \Phi _{i}\right\vert \widehat{\rho }_{S}(0)\left\vert \Phi _{j}\right\rangle \left\vert \Phi _{i}\right\rangle \left\langle \Phi _{j}\right\vert . 
\label{eq4} \end{eqnarray} The trace over bosonic bath states is evaluated by using coherent states \cite{priv98} and the resulting state operator for the system is \begin{eqnarray} \widehat{\rho }_{IS}(t) &=&\sum_{i,j}\left\langle \Phi _{i}\right\vert \widehat{\rho }_{S}(0)\left\vert \Phi _{j}\right\rangle \exp \left( \frac{ -(E_{i}-E_{j})^{2}\Gamma (t)}{4.0\hbar ^{2}}\right) \nonumber \\ &&\exp \left[ -{\rm i}\left( \frac{(E_{i}-E_{j})t}{\hbar }+\frac{ (E_{i}^{2}-E_{j}^{2})C(t)}{\hbar ^{2}}\right) \right] \left\vert \Phi _{i}\right\rangle \left\langle \Phi _{j}\right\vert , \label{eq5} \end{eqnarray} where, using the notation of ref. \cite{priv98}, \begin{equation} \Gamma (t)=8\sum_{k}\frac{\left\vert g_{k}\right\vert ^{2}}{\omega _{k}^{2}} \sin ^{2}\left( \frac{\omega _{k}t}{2}\right) \coth \left( \frac{\beta \omega _{k}}{2}\right) , \label{eq6} \end{equation} and \begin{equation} C(t)=\sum_{k}\frac{\left\vert g_{k}\right\vert ^{2}}{\omega _{k}^{2}}\left( \sin (\omega _{k}t)-\omega _{k}t\right) . \label{eq7} \end{equation} The summation over bath modes in Eqs. (\ref{eq6}) and (\ref{eq7}) can be replaced by integration over frequency by introducing a frequency dependent density function. For an ohmic dissipation characterized by density function $D(\omega )\left\vert g\right\vert ^{2}=\kappa \omega \exp (-\omega /\omega _{c}),$ we obtain \begin{equation} \Gamma (t)=8\kappa \int d\omega \frac{\exp (-\omega /\omega _{c})}{\omega } \coth \left( \frac{\beta \omega }{2}\right) \sin ^{2}\left( \frac{\omega t}{2 }\right) , \label{eq8a} \end{equation} and \begin{equation} C(t)=\kappa \int d\omega \frac{\exp (-\omega /\omega _{c})}{\omega }\left( \sin (\omega t)-\omega t\right) , \label{eq8b} \end{equation} where $\omega _{c}$ is a cut off frequency and constant $\kappa $ a measure of system-bath coupling strength.
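The integrals of Eqs. (\ref{eq8a}) and (\ref{eq8b}) are straightforward to evaluate numerically. The sketch below uses illustrative values of $\kappa $, $\omega _{c}$ and $\beta $ (arbitrary units) and a simple rectangle rule on a frequency grid cut off at $3\omega _{c}$:

```python
import numpy as np

# Sketch (illustrative units): Gamma(t) and C(t) for an ohmic bath,
# D(w)|g|^2 = kappa * w * exp(-w/w_c).  kappa, w_c and beta (inverse
# temperature) are assumed numbers, not fitted parameters.
kappa, w_c, beta = 0.01, 10.0, 1.0
w = np.linspace(1e-6, 3.0 * w_c, 30000)   # frequency grid up to 3*w_c
dw = w[1] - w[0]

def Gamma(t):
    # integrand is finite at w -> 0: coth(beta*w/2)*sin^2(w*t/2)/w -> t^2/(2*beta)
    f = np.exp(-w / w_c) / w / np.tanh(beta * w / 2.0) * np.sin(w * t / 2.0)**2
    return 8.0 * kappa * np.sum(f) * dw   # rectangle rule

def C(t):
    f = np.exp(-w / w_c) / w * (np.sin(w * t) - w * t)
    return kappa * np.sum(f) * dw

assert Gamma(0.0) == 0.0 and C(0.0) == 0.0
assert Gamma(1.0) > 0.0 and C(1.0) < 0.0  # positive decay exponent, negative phase shift
```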
\subsection{Initial State $\widehat{\protect\rho }_{S}(0)=\left|
g,m-1,n-1\right\rangle \left\langle g,m-1,n-1\right| $}
We consider the case when the system is prepared, at $t=0,$ in the state $ \widehat{\rho }_{S}(0)=\left\vert g,m-1,n-1\right\rangle \left\langle g,m-1,n-1\right\vert $. The state operator $\widehat{\rho }_{IS}(t)$ obtained by putting in the energy spectrum of $\hat{H}_{IS}$ in Eq. (\ref {eq5}) is \begin{eqnarray} \widehat{\rho }_{IS}(t) &=&\frac{(A+B)^{2}}{2}\left[ \left\vert \Phi _{1}\right\rangle \left\langle \Phi _{1}\right\vert +\left\vert \Phi _{4}\right\rangle \left\langle \Phi _{4}\right\vert \right] \nonumber \\ &&+\frac{(A-B)^{2}}{2}\left( \left\vert \Phi _{2}\right\rangle \left\langle \Phi _{2}\right\vert +\left\vert \Phi _{3}\right\rangle \left\langle \Phi _{3}\right\vert \right) \nonumber \\ &&-\frac{(A+B)^{2}}{2}\exp \left( -\left( \mu _{mn}-a_{mn}\right) ^{2}\Gamma (t)\right) \nonumber \\ &&\times \left[ \exp \left( -i2(\mu _{mn}-a_{mn})t\right) \left\vert \Phi _{1}\right\rangle \left\langle \Phi _{4}\right\vert +\exp \left( i2(\mu _{mn}-a_{mn})t\right) \left\vert \Phi _{4}\right\rangle \left\langle \Phi _{1}\right\vert \right] \nonumber \\ &&+\frac{(A-B)^{2}}{2}\exp \left( -\left( \mu _{mn}+a_{mn}\right) ^{2}\Gamma (t)\right) \nonumber \\ &&\times \left[ \exp \left( i2(\mu _{mn}+a_{mn})t\right) \left\vert \Phi _{2}\right\rangle \left\langle \Phi _{3}\right\vert +\exp \left( -i2(\mu _{mn}+a_{mn})t\right) \left\vert \Phi _{3}\right\rangle \left\langle \Phi _{2}\right\vert \right] \nonumber \\ &&+\frac{(A^{2}-B^{2})}{2}\exp \left( -\mu _{mn}^{2}\Gamma (t)\right) \nonumber \\ &&\times \left[ \exp \left[ -i\left( 2\mu _{mn}t-\varphi (t)\right) \right] \left\vert \Phi _{1}\right\rangle \left\langle \Phi _{2}\right\vert +\exp \left[ i\left( 2\mu _{mn}t-\varphi (t)\right) \right] \left\vert \Phi _{2}\right\rangle \left\langle \Phi _{1}\right\vert \right. \nonumber \\ &&\left. 
-\exp \left[ -i\left( 2\mu _{mn}t+\varphi (t)\right) \right] \left\vert \Phi _{3}\right\rangle \left\langle \Phi _{4}\right\vert -\exp \left[ i\left( 2\mu _{mn}t+\varphi (t)\right) \right] \left\vert \Phi _{4}\right\rangle \left\langle \Phi _{3}\right\vert \right] \nonumber \\ &&+\frac{(A^{2}-B^{2})}{2}\exp \left( -a_{mn}^{2}\Gamma (t)\right) \nonumber \\ &&\times \left[ \exp \left[ i\left( 2a_{mn}t+\varphi (t)\right) \right] \left\vert \Phi _{1}\right\rangle \left\langle \Phi _{3}\right\vert +\exp \left[ -i\left( 2a_{mn}t+\varphi (t)\right) \right] \left\vert \Phi _{3}\right\rangle \left\langle \Phi _{1}\right\vert \right. \nonumber \\ &&\left. -\exp \left[ i\left( 2a_{mn}t-\varphi (t)\right) \right] \left\vert \Phi _{2}\right\rangle \left\langle \Phi _{4}\right\vert -\exp \left[ -i\left( 2a_{mn}t-\varphi (t)\right) \right] \left\vert \Phi _{4}\right\rangle \left\langle \Phi _{2}\right\vert \right] , \label{eq9} \end{eqnarray} where $\varphi (t)=4\mu _{mn}a_{mn}C(t)$. The state operator $\widehat{\rho }_{IS}(t)$ contains information about the decay of coherences due to coupling of the system with the environment.
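A compact way to reproduce $\widehat{\rho }_{IS}(t)$ numerically is to work directly in the eigenbasis, as in Eq. (\ref{eq5}). The sketch below (with $\hbar =1$, $\alpha =4$, and illustrative numbers standing in for $\Gamma (t)$ and $C(t)$) also checks that pure dephasing preserves the trace, hermiticity, and the populations:

```python
import numpy as np

# Sketch: rho_IS(t) of Eq. (eq5) in the eigenbasis of H_IS for the initial
# state |g,0,0>, m = n = 1, mu/a = 4 (hbar = 1).  Gamma_t and C_t stand for
# Gamma(t) and C(t); their numeric values below are illustrative.
a, mu = 1.0, 4.0
Omega = np.sqrt(mu**2 - a**2)
E = np.array([mu - a, -(mu + a), mu + a, a - mu])   # E_1..E_4

A = np.sqrt((mu + Omega) / (4.0 * mu))
B = np.sqrt((mu - Omega) / (4.0 * mu))
# coefficients of |g,0,0> in the eigenbasis (first row of Eq. (eq07))
c = np.array([A + B, A - B, A - B, -(A + B)]) / np.sqrt(2.0)

def rho_IS(t, Gamma_t, C_t):
    Ei, Ej = np.meshgrid(E, E, indexing="ij")
    decay = np.exp(-(Ei - Ej)**2 * Gamma_t / 4.0)
    phase = np.exp(-1j * ((Ei - Ej) * t + (Ei**2 - Ej**2) * C_t))
    return np.outer(c, c) * decay * phase

rho = rho_IS(t=0.7, Gamma_t=0.3, C_t=-0.05)
assert abs(np.trace(rho) - 1.0) < 1e-12        # trace preserved
assert np.allclose(rho, rho.conj().T)          # hermitian
assert np.allclose(np.diag(rho), c**2)         # populations untouched by dephasing
```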
\section{Decoherence and GHZ state generation probability}
\begin{figure}
\caption{ $P_{GHZ}(t)$ as a function of scaled time variable $T(=a_{11}t)$, for the choice $\mu _{11}/a_{11}=4,$ and $\kappa =0,0.001,0.01$ and $0.1$.}
\label{fig1}
\end{figure}
Undergoing decoherence-free time evolution, for $\widehat{\rho }_S(0)=\left\vert g,0,0\right\rangle \left\langle g,0,0\right\vert $ and an interaction time $t_{p}$ such that $ a_{11}t_{p}=\frac{\pi }{4},\mu _{11}t_{p}=\pi $ ($\alpha =4)$, the system evolves into the maximally entangled tripartite two-mode GHZ state given by Eq. (\ref{eq232})$.$ We define the GHZ state generation probability as $ P_{GHZ}(t)=tr\left( \widehat{\rho}_{IS} (t)\widehat{\rho }_{GHZ,I}^{(-)}\right) $, where $ \widehat{\rho }_{GHZ,I}^{(-)}=\left\vert \Psi _{GHZ,I}^{-}\right\rangle \left\langle \Psi _{GHZ,I}^{-}\right\vert $. Using Eq. (\ref{eq9}) we write down the density operator $\widehat{\rho }_{IS}(t)$ for the choice $ m=1,n=1$ and evaluate $P_{GHZ}(t)$. The resulting expression for the GHZ state generation probability is \begin{eqnarray} P_{GHZ}(t) &=&\frac{1}{2}+\frac{\Omega ^{2}}{4\mu _{11}^{2}}\left( \exp \left( -\mu _{11}^{2}\Gamma (t)\right) \cos \left( 2\mu _{11}t\right) \cos (\varphi (t))\right) \nonumber \\ &&+\frac{\Omega ^{2}}{4\mu _{11}^{2}}\left( \exp \left( -a_{11}^{2}\Gamma (t)\right) \sin \left( 2a_{11}t\right) \cos (\varphi (t))-1\right) \nonumber \\ &&-\frac{1}{2}\left( \frac{\mu _{11}+a_{11}}{2\mu _{11}}\right) ^{2}\exp \left( -\left( \mu _{11}-a_{11}\right) ^{2}\Gamma (t)\right) \sin \left[ 2(\mu _{11}-a_{11})t\right] \nonumber \\ &&+\frac{1}{2}\left( \frac{\mu _{11}-a_{11}}{2\mu _{11}}\right) ^{2}\exp \left( -\left( \mu _{11}+a_{11}\right) ^{2}\Gamma (t)\right) \sin \left[ 2(\mu _{11}+a_{11})t\right] , \label{eq11} \end{eqnarray} where the decay exponents follow directly from the coherences in Eq. (\ref{eq9}); in the limit $\Gamma ,\varphi \rightarrow 0$, at $a_{11}t=\frac{\pi }{4}$, $\mu _{11}t=\pi $ this expression gives $P_{GHZ}=1$, as required by Eq. (\ref{eq232}). We also calculate the population inversion defined as $I=P_{g}-P_{e},$ where $P_{g}(P_{e})$ is the probability of finding the ion in the ground (excited) state. For the choice $m=1,n=1,$ in the state $\widehat{\rho }_{IS}(t)$ of Eq. 
(\ref{eq9}), we get \begin{eqnarray} I &=&\frac{\mu _{11}+a_{11}}{2\mu _{11}}\exp \left( -\left( \mu _{11}-a_{11}\right) ^{2}\Gamma (t)\right) \cos \left[ 2(\mu _{11}-a_{11})t \right] \nonumber \\ &&+\frac{\mu _{11}-a_{11}}{2\mu _{11}}\exp \left( -\left( \mu _{11}+a_{11}\right) ^{2}\Gamma (t)\right) \cos \left[ 2(\mu _{11}+a_{11})t \right] . \label{eq12} \end{eqnarray}
In the limit $\kappa \rightarrow 0$, Eqs. (\ref{eq9}), (\ref{eq11}) and (\ref{eq12}) reduce to the decoherence-free evolution of the state operator, the GHZ state generation probability, and the population inversion, respectively. As the cavity is initially in the vacuum state and the ionic center of mass in the lowest energy trap state, decoherence effects due to cavity decay and heating are expected to be small and have not been considered.
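The $\kappa \rightarrow 0$ limit can be cross-checked against direct state-vector evolution of the truncated $4\times 4$ model. The sketch below (with $\hbar =1$, $a_{11}=1$, $\alpha =4$, all illustrative units) verifies that $P_{GHZ}$ peaks at unity and the population inversion vanishes at $a_{11}t=\pi /4$:

```python
import numpy as np

# Sketch: decoherence-free cross-check of P_GHZ(t) and the inversion I(t)
# from exact evolution in the truncated basis |g00>, |e00>, |g11>, |e11>.
a11, mu11 = 1.0, 4.0
Omega = np.sqrt(mu11**2 - a11**2)
H = np.array([[0.0,   Omega, 0.0,      0.0],
              [Omega, 0.0,   2 * a11,  0.0],
              [0.0,   2 * a11, 0.0,    Omega],
              [0.0,   0.0,   Omega,    0.0]])
E, V = np.linalg.eigh(H)
ghz = np.array([1.0, 0.0, 0.0, -1.0j]) / np.sqrt(2.0)

def evolve(t):
    U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T
    return U @ np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)

t = np.pi / (4.0 * a11)                           # a_11*t = pi/4
psi = evolve(t)
p_ghz = abs(np.vdot(ghz, psi))**2
inversion = (abs(psi[0])**2 + abs(psi[2])**2      # ground-state components
             - abs(psi[1])**2 - abs(psi[3])**2)   # excited-state components
assert abs(p_ghz - 1.0) < 1e-10                   # P_GHZ peaks at 1
assert abs(inversion) < 1e-10                     # I = cos(2*a_11*t) = 0 here
```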
\begin{figure}
\caption{ Population inversion $I(t),$ as a function of scaled time variable $ T(=a_{11}t)$, for the choice $\mu _{11}/a_{11}=4,$ and $\kappa =0, 0.001, 0.01$ and $0.1$.}
\label{fig2}
\end{figure}
\section{Negativity and Linear entropy}
The state of the three-qubit composite system is represented by $\widehat{ \rho }_{IS}(t)=\widehat{\rho }_{ABC}(t),$ where $A$, $B$, and $C$ refer to the internal state of the two-level ion, the state of vibrational motion, and the cavity state, respectively. A pure state of a tripartite system can be separable or entangled. An entangled state may possess genuine tripartite entanglement, or it may be merely biseparable. Bipartite entanglement is relatively well understood. The entanglement content of a tripartite system may be measured through the entanglement of its bipartite decompositions, that is, A+(BC), B+(AC), and C+(AB). The decoherence effects result in a composite state $ \widehat{\rho }_{ABC}(t)$ which is usually a mixed state. Mixed state entanglement is less well understood than pure state entanglement. On examining Eq. (\ref{eq9}), in which the state operator is expressed in the eigenbasis of $\hat{H}_{IS}$, we notice that the diagonal matrix elements of the density matrix are not affected by coupling to the environment, whereas the moduli of the off-diagonal matrix elements decay with time. Consequently, when only non-dissipative decoherence is present, the trace of $\widehat{\rho }_{ABC}(t)$ remains constant in time and the eigenvalues remain nonnegative. Since $\widehat{\rho }_{ABC}(t)$ remains a positive semidefinite operator, we use the negativity proposed by Vidal and Werner \cite{vida02} as an entanglement measure for pure as well as mixed states. Negativity is based on the partial transpose of the density matrix of the composite system with respect to a subsystem.
The state of a bipartite system, composed of subsystems $A$ and $B$ with finite-dimensional Hilbert spaces of dimensions $d_{A}$ and $d_{B}$, is written as \begin{equation} \widehat{\rho }=\sum_{i,j=1}^{d_{A}}\sum_{m,r=1}^{d_{B}}\left\langle i,m\right\vert \widehat{\rho }\left\vert j,r \right\rangle \left\vert i,m\right\rangle \left\langle j,r \right\vert . \label{3.0} \end{equation} The partial transpose of the density operator with respect to subsystem $A$ is defined as \begin{equation} \widehat{\rho }^{T_{A}}=\sum_{i,j=1}^{d_{A}}\sum_{m,r=1}^{d_{B}}\left\langle i,m\right\vert \widehat{\rho }\left\vert j,r\right\rangle \left\vert j,m\right\rangle \left\langle i,r\right\vert . \label{3.1} \end{equation}
The partial transpose of the density matrix of an entangled state is not positive semidefinite. The negativity, defined as {\sl N}$^{A}=\sum_{i}\left| \lambda _{i}\right| ,$ where $\lambda _{i}$ are the negative eigenvalues of $ \widehat{\rho }^{T_{A}}$, is a measure of the entanglement of quantum system $A$ with $B$. For a separable state ${N}^{A}=0$, and the value of ${N}^{A}$ at maximal entanglement depends on the dimension $d_{A}.$ For the tripartite system at hand, we construct the partial transpose with respect to subsystem $A$, $B$ or $C$, while keeping the composite state of the remaining two subsystems unaltered. For the density operator $\widehat{\rho } _{ABC}(t)$ acting on the composite space, the transpose with respect to subsystem $A$ reads \begin{equation} \widehat{\rho }^{T_{A}}=\sum_{i,j=1}^{d_{A}}\sum_{m,r=1}^{d_{B}}\sum_{n,s=1}^{d_{C}}\left\langle i,m,n\right\vert \widehat{\rho }\left\vert j,r,s\right\rangle \left\vert j,m,n\right\rangle \left\langle i,r,s\right\vert , \label{3.3} \end{equation} where $d_{X}$ $(X=A,B,C)$ refers to the dimension of the Hilbert space of subsystem $X$. We use $\widehat{\rho }^{T_{A}}$, $\widehat{\rho }^{T_{B}}$, and $\widehat{\rho }^{T_{C}}$ to calculate the negativity measures for subsystems $A$, $B$, and $C$, respectively.
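For a concrete illustration, the partial transpose and the negativity are easily computed numerically. The sketch below (each subsystem truncated to a qubit, $d_{A}=d_{B}=d_{C}=2$) recovers {\sl N}$^{A}=1/2$ for the pure GHZ state of Eq. (\ref{eq232}):

```python
import numpy as np

# Sketch: negativity of subsystem A for the pure state
# (|g,0,0> - i|e,1,1>)/sqrt(2), with qubit dimensions d_A = d_B = d_C = 2.
psi = np.zeros(8, dtype=complex)
psi[0] = 1.0 / np.sqrt(2.0)    # |g,0,0> -> index (0,0,0)
psi[7] = -1.0j / np.sqrt(2.0)  # |e,1,1> -> index (1,1,1)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2, 2, 2)  # (i,m,n,j,r,s)

# partial transpose with respect to A: swap the two "A" indices (axes 0 and 3)
rho_TA = rho.transpose(3, 1, 2, 0, 4, 5).reshape(8, 8)
eig = np.linalg.eigvalsh(rho_TA)
negativity = -eig[eig < 0].sum()       # sum of |negative eigenvalues|
assert abs(negativity - 0.5) < 1e-12   # maximal for a two-qubit-like split
```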
\begin{figure}\label{fig3}
\end{figure}
\begin{figure}\label{fig4}
\end{figure}
For a quantum state $\widehat{\rho }$ in a $d$ dimensional Hilbert space, the linear entropy $S_{l}$ is defined as \begin{equation} S_{l}=\frac{d}{d-1}\left(1-Tr ( \widehat{\rho }^{2})\right ). \label{3.4} \end{equation} Linear entropy is used to measure the mixedness of subsystems $A,B,$ and $C$. For pure states $S_{l}=0,$ whereas for maximally mixed states $S_{l}=1.$ The reduced state operators $ \widehat{\rho }_{red}^{A}=tr_{BC}(\widehat{\rho }_{ABC})$, $\widehat{\rho } _{red}^{B}=tr_{AC}(\widehat{\rho }_{ABC})$, and $\widehat{\rho }_{red}^{C}$ $ =tr_{AB}(\widehat{\rho }_{ABC})$ are used to calculate the linear entropy $S_{l}$ for the subsystems $A$, $B$, and $C$, respectively. Negativities and linear entropies are used to characterize the state of the composite system at an instant $t$, for a given value of the coupling to the environment. \begin{figure}
\caption{ Linear entropy \textsl{S}$_{l}^{A}$, calculated from $\widehat{\rho }_{red}^{A}$, as a function of scaled time variable $T(=a_{11}t)$, for the choice $\mu _{11}/a_{11}=4,$ and $\kappa =0, 0.001, 0.01, 0.02, 0.05$
and $0.1$.}
\label{fig5}
\end{figure}
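As an elementary check of Eq. (\ref{3.4}), the reduced state of the two-level ion obtained from the pure GHZ state is maximally mixed, so its linear entropy equals one. A minimal sketch (qubit subsystems, $d=2$):

```python
import numpy as np

# Sketch: linear entropy S_l = d/(d-1)*(1 - Tr(rho^2)) of the reduced qubit A,
# obtained from the GHZ state (|g,0,0> - i|e,1,1>)/sqrt(2) by tracing out B, C.
psi = np.zeros((2, 2, 2), dtype=complex)
psi[0, 0, 0] = 1.0 / np.sqrt(2.0)
psi[1, 1, 1] = -1.0j / np.sqrt(2.0)

# rho_A = Tr_BC |psi><psi|
rho_A = np.einsum('imn,jmn->ij', psi, psi.conj())
d = 2
S_l = d / (d - 1) * (1.0 - np.trace(rho_A @ rho_A).real)
assert np.allclose(rho_A, np.eye(2) / 2.0)   # maximally mixed reduced state
assert abs(S_l - 1.0) < 1e-12
```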
\section{Numerical results and conclusions}
We consider the system prepared in the state $\widehat{\rho }_{S}(0)=\left| g,0,0\right\rangle \left\langle g,0,0\right| $ at $t=0.$ The laser-ion coupling constant is taken to be $\Omega =8.95$~MHz \cite{mund03}, whereas the ratio $\mu _{11}/a_{11}=4.0$. Numerical values of $\Gamma (t)$ and $C(t)$ are obtained by solving the integrals of Eqs. (\ref{eq8a},\ref{eq8b}) for the choice $\omega _{c}=1200$~MHz, temperature $T=0.03$~K and different values of the coupling strength $\kappa .$ The variable $\omega $ in Eqs. (\ref{eq8a},\ref{eq8b}) is varied from zero to $3\omega _{c}.$ The value $\kappa =0.001$ corresponds to a weak coupling, and successively larger values tend to produce decoherence on a scale comparable to the GHZ state generation time, which is of the order of $0.34~\mu$s. The state of the system at an instant $t$ can be detected experimentally by cavity-photon measurement combined with atomic population inversion measurement.
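As a consistency check on these numbers (treating the quoted $\Omega $ as an angular frequency, an assumption of this sketch), the first GHZ generation time follows from $a_{11}=\Omega /\sqrt{15}$ (since $\mu _{11}=4a_{11}$ and $\mu _{11}^{2}=a_{11}^{2}+\Omega ^{2}$) together with $a_{11}t_{p}=\pi /4$:

```python
import math

# Sketch: first GHZ generation time for Omega = 8.95 MHz (assumed angular
# frequency) and mu_11/a_11 = 4, i.e. a_11 = Omega/sqrt(15).
Omega = 8.95e6                     # rad/s, value quoted from the experiment
a11 = Omega / math.sqrt(15.0)      # from mu^2 = a^2 + Omega^2 with mu = 4a
t_p = math.pi / (4.0 * a11)        # a_11 * t_p = pi/4
assert 0.3e-6 < t_p < 0.4e-6       # of the order of 0.34 microseconds
```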
Figs. 1 and 2 display $P_{GHZ}(t)$ and the population inversion $I(t),$ respectively, as a function of the scaled time variable $T(=a_{11}t)$ for decoherence parameter values $\kappa =0,0.001,0.01,$ and $0.1$. Evidently the evolution dynamics of $P_{GHZ}(t)$ and $I(t)$ is sensitive to changes in the decoherence parameter $\kappa $. The presence of decay factors in Eq. (\ref{eq11}) causes the peak value of $P_{GHZ}$ to decrease with time. Phase decoherence causes a shift in the interaction time needed to get the system into a maximally entangled state. The population inversion, however, does not show any phase decoherence effects. We may point out that at $T=135^{\circ }$ the tripartite system is in the maximally entangled state of Eq. (\ref{eq233}), which is orthogonal to the state of Eq. (\ref{eq232}) for which $P_{GHZ}(t)$ has been calculated. The population inversion is zero at $T=45^{\circ }$ as well as at $135^{\circ },$ signalling maximally entangled state generation. For weak coupling ($\kappa =0.001$), the behavior of $I(t)$ does not differ much from the no-decoherence curve. However, for larger values of $\kappa $, $I(t)$ for the separable system at $T=45^{\circ }$ can indicate the amount of decoherence present.
The state operator of Eq. (\ref{eq9}) for $m=1,n=1$ is used to calculate the density matrices transposed with respect to quantum systems $A$, $B$, and $C$. The negativities are calculated numerically from $\widehat{\rho }^{T_{A}}$, $\widehat{\rho }^{T_{B}}$, and $\widehat{\rho }^{T_{C}}$ and plotted as a function of the scaled time variable $T(=a_{11}t)$. The negativity plot displaying the entanglement dynamics of the internal state of the two-level ion ($A$) is shown in Fig. 3. For subsystem $B$, the negativity calculated from $\widehat{\rho }^{T_{B}}$ (equal to that from $\widehat{\rho }^{T_{C}}$) is displayed as a function of $T(=a_{11}t)$ in Fig. 4. The values of the decoherence parameter are $\kappa =0,0.001,0.01,0.02,0.05$ and $0.1$. The time evolutions of the vibrational motion of the ion's center of mass ($B$) and the cavity number state ($C$) are identical due to the inherent symmetry of these subsystems. We notice that as the value of the decoherence parameter increases, there is a decrease in the maximum entanglement of subsystems $A$, $B$ and $C$ with their complements. In addition, phase decoherence changes the time at which maximal entanglement of the three parties is observed. These two factors are consistent with the behavior of $P_{GHZ}(t)$ and $I(t)$ in Figs. 1 and 2.
Linear entropies $S_{l}$ for subsystems $A$, $B$ and $C$ have been calculated using the reduced density operators obtained from $\widehat{\rho }_{IS}(t)$ of Eq. (\ref{eq9}) for $m=1,n=1$. Figs. 5 and 6 display $S_{l}$ as a function of the scaled time variable $T(=a_{11}t)$ for subsystems $A$ and $B$ (or $C$), respectively. The decoherence parameter values are $\kappa =0,0.001,0.01,0.02,0.05$ and $0.1$. For $\kappa =0.001,0.01,$ the trend of the time evolution of the linear entropy follows the variation of entanglement seen in the negativity plots of Figs. 3 and 4. However, for the stronger coupling value $\kappa =0.02$, while {\sl N} for subsystem $A$ tends to zero as $T$ approaches $90^{\circ},$ $S_{l}$ shows fluctuations for $T>90^{\circ}$. Similarly, for $\kappa =0.05$ the negativity indicates a decoupling of system $A$ from system $BC$ beyond $T=67^{\circ}$, while $S_{l}$ points to the presence of some correlations. For $\kappa =0.1,$ the system is highly damped and both {\sl N} and $S_{l}$ show the ion becoming separable around $T=45^{\circ}$. We may conclude that for states lying at the boundary of the entangled-separable region the negativity and the linear entropy show distinctly different variation with time, the reason being that the negativity measures only entanglement generating quantum correlations, whereas the linear entropy measures all correlations that reduce the purity of the state.
\begin{figure}
\caption{ Linear entropy \textsl{S}$_{l}$, calculated from $\widehat{\rho }_{red}^{B}$, as a function of scaled time variable $T(=a_{11}t)$, for the choice $\mu _{11}/a_{11}=4,$ and $\kappa =0, 0.001, 0.01, 0.02, 0.05$
and $0.1$.}
\label{fig6}
\end{figure}
To conclude, we have investigated the decoherence process of a three-qubit system obtained by manipulating the state of a cold trapped two-level ion coupled to an optical cavity. Interaction of the ion with a resonant laser field and the cavity field tuned to the red sideband of the ionic vibrational motion generates entanglement of the internal states of the ion, the vibrational states of the ionic center of mass, and the cavity field state. Non-dissipative decoherence of the state of the system occurs due to interaction of the system with the environment. We have considered the effect of decoherence sources such as magnetic field fluctuations, laser frequency fluctuations, the potentials applied to the electrodes, and coupling to states outside the model space. The random perturbations of the Hamiltonian are accounted for by coupling the model space interaction Hamiltonian to the environment, modeled as a set of non-interacting harmonic oscillators. The pointer observable is the energy of the isolated quantum system. An analytic expression for the state operator of the tripartite composite system, including non-dissipative decoherence effects, has been obtained. The initial state of the tripartite system is a separable state and the state of the environment has no initial correlations. Coupling to the environment is found to introduce an exponential decay in time of the off-diagonal matrix elements of the density operator $\widehat{\rho} _{IS}(t)$. In addition, a dephasing of the various states in the superposition also occurs. The extent to which the process is decohered depends on the system-environment coupling strength, which can be partially controlled by adjusting the electrode potentials. Besides the strength of the coupling to the environment, the energy spectrum of the system Hamiltonian plays an important role in determining the decoherence rate of a given initial state of the composite system. For a specific choice of interaction parameters, the isolated system evolves to a tripartite GHZ state \cite{shar103}. 
We have evaluated analytically the probability of generating a maximally entangled GHZ state, and the population inversion, in the presence of non-dissipative decoherence. Numerical calculations for different values of the system-environment coupling strength, using interaction parameters from a recent experiment, show the peak value of the GHZ state generation probability to decrease with increasing $\kappa .$ $P_{GHZ}(t)$ is also sensitive to phase decoherence. The population inversion, however, does not show any phase decoherence effects. The results can be used to study the effects of an engineered environment on the process of tripartite maximally entangled state generation. Bipartite entanglement of the cavity mode and the atomic internal states can be produced without coupling to the motional states. The tripartite entanglement generated through coupling to motional states can be transferred to other ions that might be added to the trap. In comparison with an earlier proposal for GHZ state generation \cite{fidi02}, in the current scheme the initial state of the system is $\left\vert g,0,0\right\rangle$. As such, the cavity losses and spontaneous decay effects are minimized.
Using negativity as an entanglement measure, the entanglement dynamics of the tripartite system in the presence of decoherence has been analyzed. As the value of the decoherence parameter increases, the maximum entanglement of subsystems $A$ and $B$ with their complements decreases. In the context of a trapped ion irradiated by the single-mode cavity field and an external resonant laser, the time for which the two-level ion remains coupled to the state of vibrational motion and the cavity state decreases with increasing $\kappa .$ For large values of $\kappa $ the states of the composite system lying at the boundary of the entangled-separable region are reached. For these states the negativity and the linear entropy show distinctly different variation with time. This can be understood by noting that whereas the negativity measures entanglement generating quantum correlations, the linear entropy measures all correlations that reduce the purity of the state.
{\Large{Acknowledgments}}
S. S. Sharma and N. K. Sharma acknowledge financial support from CNPq, Brazil. E. de Almeida thanks Capes, Brazil for financial support.
\end{document}
\begin{document}
\date{} \title{Algorithm for Finding $k$-Vertex Out-trees and its Application to $k$-Internal Out-branching Problem}
\begin{abstract} An out-tree $T$ is an oriented tree with only one vertex of in-degree zero. A vertex $x$ of $T$ is internal if its out-degree is positive. We design randomized and deterministic algorithms for deciding whether an input digraph contains a given out-tree with $k$ vertices. The algorithms are of runtime $O^*(5.704^k)$ and $O^*(5.704^{k(1+o(1))})$, respectively. We apply the deterministic algorithm to obtain a deterministic algorithm of runtime $O^*(c^k)$, where $c$ is a constant, for deciding whether an input digraph contains a spanning out-tree with at least $k$ internal vertices. This answers in the affirmative a question of Gutin, Razgon and Kim (Proc. AAIM'08). \end{abstract}
\section{Introduction}
An {\em out-tree} is an oriented tree with only one vertex of in-degree zero called the {\em root}. The $k$-{\sc Out-Tree} problem is the problem of deciding, for a given parameter $k$, whether an input digraph contains a given out-tree with $k\ge 2$ vertices. In their seminal work on Color Coding, Alon, Yuster, and Zwick \cite{AloYusZwi95} provided fixed-parameter tractable (FPT) randomized and deterministic algorithms for $k$-{\sc Out-Tree}. While Alon, Yuster, and Zwick \cite{AloYusZwi95} only stated that their algorithms are of runtime $O(2^{O(k)}n)$, it is easy to see (see Appendix) that their randomized and deterministic algorithms are of complexity\footnote{In this paper we often use the notation $O^*(f(k))$ instead of $f(k)(kn)^{O(1)}$, i.e., $O^*$ hides not only constants, but also polynomial coefficients.} $O^*((4e)^k)$ and $O^*(c^k)$, respectively, where $c\ge 4e$.
The main results of \cite{AloYusZwi95}, however, were a new algorithmic approach called Color Coding and a randomized $O^*((2e)^{k})$ algorithm for deciding whether a digraph contains a path with $k$ vertices (the $k$-{\sc Path} problem). Chen et al. \cite{CHeLuSzeZha07} and Kneis et al. \cite{KneMolRicRos06} developed a modification of Color Coding, Divide-and-Color, that allowed them to design a randomized $O^*(4^{k})$-time algorithm for $k$-{\sc Path}. Divide-and-Color in Kneis et al. \cite{KneMolRicRos06} (and essentially in Chen et al. \cite{CHeLuSzeZha07}) is `symmetric', i.e., both colors play similar roles and the probability of coloring each vertex in one of the colors is 0.5. In this paper, we further develop Divide-and-Color by making it asymmetric, i.e., the two colors play different roles and the probability of coloring each vertex in one of the colors depends on the color. As a result, we refine the result of Alon, Yuster, and Zwick by obtaining randomized and deterministic algorithms for $k$-{\sc Out-Tree} of runtime $O^*(5.704^k)$ and $O^*(5.704^{k(1+o(1))})$, respectively.
It is worth mentioning here two recent related results on $k$-{\sc Path} due to Koutis \cite{Kou08} and Williams \cite{Wil08} based on an algebraic approach.
Koutis \cite{Kou08} obtained a randomized $O^*(2^{3k/2})$-time algorithm for $k$-{\sc Path} and
Williams \cite{Wil08} extended his ideas resulting in
a randomized $O^*(2^{k})$-time algorithm for $k$-{\sc Path}. While the randomized algorithms based on Color Coding and Divide-and-Color are not difficult to derandomize, this is not the case for the algorithms of Koutis \cite{Kou08} and Williams \cite{Wil08}. Thus, it is unknown whether there are deterministic algorithms for $k$-{\sc Path} of runtime $O^*(2^{3k/2})$. Moreover, it is not clear whether the randomized algorithms of Koutis \cite{Kou08} and Williams \cite{Wil08} can be extended to solve $k$-{\sc Out-Tree}.
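For concreteness, the original randomized Color Coding scheme of Alon, Yuster, and Zwick for $k$-{\sc Path} can be sketched as follows (an illustrative Python sketch under our own interface, not the faster Divide-and-Color or algebraic methods discussed above):

```python
import math
import random

def has_k_path(n, arcs, k, trials=None, seed=0):
    """Randomized k-Path via Color Coding: color the vertices with k colors
    uniformly at random; a path on k distinctly colored vertices is then
    found by dynamic programming over color subsets (bitmasks).  Each trial
    succeeds with probability at least k!/k^k >= e^-k, so ceil(e^k) trials
    give constant success probability."""
    rng = random.Random(seed)
    if trials is None:
        trials = math.ceil(math.e ** k)
    for _ in range(trials):
        color = [rng.randrange(k) for _ in range(n)]
        # masks[v] = set of color sets (bitmasks) of colorful paths ending at v
        masks = [{1 << color[v]} for v in range(n)]
        for _ in range(k - 1):          # extend paths one arc at a time
            nxt = [set() for _ in range(n)]
            for u, v in arcs:
                bit = 1 << color[v]
                for m in masks[u]:
                    if not m & bit:
                        nxt[v].add(m | bit)
            masks = nxt
        if any(masks):                  # some colorful k-vertex path exists
            return True
    return False
```

For instance, `has_k_path(2, [(0, 1)], 2)` succeeds once the endpoints of the single arc receive distinct colors, while a digraph with no arcs can never contain a $2$-path.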
While we believe that the study of fast algorithms for $k$-{\sc Out-Tree} is a problem interesting on its own, we provide an application of our deterministic algorithm. The vertices of an out-tree $T$ of out-degree zero (nonzero) are {\em leaves} ({\em internal vertices}) of $T$. An {\em out-branching} of a digraph $D$ is a spanning subgraph of $D$ which is an out-tree. The {\sc Minimum Leaf} problem is to find an out-branching with the minimum number of leaves in a given digraph $D.$ This problem is of interest in database systems \cite{DemDow00} and the Hamilton path problem is its special case. Thus, in particular, {\sc Minimum Leaf} is NP-hard. In this paper we study the following parameterized version of {\sc Minimum Leaf}: given a digraph $D$ and a parameter $k$, decide whether $D$ has an out-branching with at least $k$ internal vertices. This problem, denoted {\sc $k$-Int-Out-Branching}, was studied for symmetric digraphs (i.e., undirected graphs) by Prieto and Sloper \cite{PriSlo03,PriSlo05} and for all digraphs by Gutin et al. \cite{GutRazKim08}. Gutin et al. \cite{GutRazKim08} obtained an algorithm of runtime $O^*(2^{O(k\log k)})$ for {\sc $k$-Int-Out-Branching} and asked whether the problem admits an algorithm of runtime $O^*(2^{O(k)}).$ Note that no such algorithm has been known even for the case of symmetric digraphs \cite{PriSlo03,PriSlo05}. In this paper, we obtain an $O^*(2^{O(k)})$-time algorithm for {\sc $k$-Int-Out-Branching} using our deterministic algorithm for $k$-{\sc Out-Tree} and an out-tree generation algorithm.
For a set $X$ of vertices of a subgraph $H$ of a digraph $D$, $N_H^+(X)$ and $N_H^-(X)$ denote the sets of out-neighbors and in-neighbors of vertices of $X$ in $H$, respectively. Sometimes, when a set has a single element, we will not distinguish between the set and its element. In particular, when $H$ is an out-tree and $x$ is a vertex of $H$ which is not its root, the unique in-neighbor of $x$ is denoted by $N_H^-(x)$. For an out-tree $T$, $\LT{T}$ denotes the set of leaves in $T$ and $\IN{T}=V(T)-\LT{T}$ stands for the set of internal vertices of $T$.
\section{New Algorithms for $k$-{\sc Out-Tree}}\label{TSAsec}
In this section, we introduce and analyze a new randomized algorithm for $k$-{\sc Out-Tree} that uses Divide-and-Color and several other ideas. We provide an analysis of its complexity and a short discussion of its derandomization. We omit proofs of several lemmas of this section. The proofs can be found in Appendix.
The following lemma is well known, see \cite{chung}.
\begin{lemma}\label{balanced} Let $T$ be an undirected tree and let $w:V\rightarrow \mathbb{R}^+\cup \{0\}$ be a weight function on its vertices. There exists a vertex $v\in V(T)$ such that the weight of every subtree $T'$ of $T-v$ is at most $w(T)/2$, where $w(T)=\sum_{v\in V(T)}w(v)$. \end{lemma}
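A brute-force illustration of Lemma \ref{balanced} in Python (quadratic time for simplicity; such a vertex can also be found in linear time by a standard centroid-style traversal; the interface is ours):

```python
from collections import defaultdict

def balanced_vertex(edges, weight):
    """Return a vertex v such that every subtree of T - v has weight at
    most w(T)/2; such a vertex exists by the lemma.  `edges` lists the
    undirected tree edges; `weight` maps vertices to nonnegative numbers."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = sum(weight.values())
    for v in weight:
        ok = True
        for start in adj[v]:            # weigh each component of T - v
            seen = {v, start}
            stack = [start]
            comp = weight[start]
            while stack:
                x = stack.pop()
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
                        comp += weight[y]
            if comp > total / 2:
                ok = False
                break
        if ok:
            return v
    raise AssertionError("no balanced vertex found; input was not a tree?")
```

On a unit-weight path $1$--$2$--$3$--$4$--$5$ this returns the middle vertex $3$: the two remaining subtrees each weigh $2\le 5/2$.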
Consider a partition $n=n_1+\cdots +n_q,$ where $n$ and all $n_i$ are nonnegative integers and a bipartition $(A,B)$ of the set $\{1,\ldots ,q\}.$ Let
$d(A,B):=\Big|\sum_{i\in A}n_i-\sum_{i\in B}n_i\Big|.$ Given a set $Q=\{1,\ldots ,q\}$ with a nonnegative integer weight $n_i$ for each element $i\in Q$, we say that a bipartition $(A, B)$ of $Q$ is {\em greedily optimal} if $d(A,B)$ does not decrease by moving an element of one partite set into another. The following procedure describes how to obtain a greedily optimal bipartition in time $O(q \log{q})$. For simplicity we write $\sum_{i\in A}n_i$ as $n(A)$.
\begin{algorithm}[h!]
\caption{\Mx{Bipartition}$(Q,\{n_i:i\in Q\})$}
\begin{algorithmic}[1]
\STATE Let $A:=\emptyset$, $B:=Q$.
\WHILE{$n(A)<n(B)$ and there is an element $i\in B$ with $0<n_i < d(A,B)$}
\STATE Choose such an element $i\in B$ with a largest $n_i$.
\STATE $A:=A\cup\{i\}$ and $B:=B-\{i\}.$
\ENDWHILE
\STATE Return $(A,B)$.
\end{algorithmic} \end{algorithm}
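A direct Python transcription of \Mx{Bipartition} (illustrative; the naive candidate scan below is $O(q^2)$ rather than the $O(q\log q)$ claimed in the lemma, which requires sorting the weights once):

```python
def bipartition(weights):
    """Greedy bipartition: `weights` maps each element of Q to a
    nonnegative integer n_i.  Returns a greedily optimal (A, B)
    with A and B partitioning Q."""
    A, B = set(), set(weights)
    total = lambda S: sum(weights[i] for i in S)
    while total(A) < total(B):
        d = total(B) - total(A)                     # current d(A, B)
        movable = [i for i in B if 0 < weights[i] < d]
        if not movable:
            break
        i = max(movable, key=lambda j: weights[j])  # largest such n_i
        A.add(i)
        B.remove(i)
    return A, B
```

For example, with weights $5,3,2,1$ the procedure moves only the element of weight $5$ and stops at difference $1$; moving any further element cannot decrease the difference.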
\begin{lemma}\label{largecomp} Let $Q$ be a set of size $q$ with a nonnegative integer weight $n_i$ for each $i\in Q$. The algorithm \Mx{Bipartition}$(Q,\{n_i:i\in Q\})$ finds a greedily optimal bipartition $A\cup B=Q$ in time $O(q\log{q})$. \end{lemma}
This lemma is proved in Appendix. Now we describe a new randomized algorithm for $k$-{\sc Out-Tree}.
Let $D$ be a digraph and let $T$ be an out-tree on $k$ vertices. Let us specify a vertex $t\in V(T)$ and a vertex $w\in V(D)$. We call a copy of $T$ in $D$ a {\em $T$-isomorphic} tree. We say that a $T$-isomorphic tree $T_{D}$ in $D$ is a $(t,w)$-tree if $w\in V(T_D)$ plays the role of $t$.
\begin{figure}
\caption{An example: The given out-tree $T$ is divided into two parts $T[U_w]$ and $T[U_b\cup \{v^*\}]$ by the splitting vertex $v^*$. The digraph $D$ contains a copy of $T$ meeting the restrictions on $L$.}
\label{fig1}
\end{figure}
In the following algorithm \Mx{find-tree}, we have several arguments other than the natural arguments $T$ and $D$. Two arguments are vertices $t$ and $v$ of $T$, and the last argument is a pair consisting of $L\subseteq V(T)$ and $\{X_u:\ u\in L\}$, where $X_u\subset V(D)$ and the sets $X_u$ are pairwise disjoint. The argument $t$ indicates that we want to return, at the end of the current procedure, the set of vertices $X_t$ such that there is a $(t,w)$-tree for every $w\in X_t$. The fact that $X_t\neq \emptyset$ means two things: we have a $T$-isomorphic tree in $D$, and the information $X_t$ can be used to construct a larger tree which uses the current $T$-isomorphic tree as a building block. Here, $X_t$ is a kind of `joint'.
The arguments $L\subseteq V(T)$ and $\{X_u:\ u\in L\}$ form a set of information on the location in $D$ of the vertices playing the role of $u \in L$, obtained in the way we obtained $X_t$ by a recursive call of the algorithm. Let $T_D$ be a $T$-isomorphic tree; if for every $u\in L$, $T_D$ is a $(u,w)$-tree for some $w\in X_{u}$ and $V(T_D)\cap X_{u}=\{w\}$, we say that {\em $T_D$ meets the restrictions on $L$}. The algorithm \Mx{find-tree} intends to find the set $X_t$ of vertices such that for every $w\in X_t$, there is a $(t,w)$-tree which meets the restrictions on $L$; for illustration, see Figure \ref{fig1}.
The basic strategy is as follows. We choose a pair $T_A$ and $T_B$ of subtrees of $T$ such that $V(T_A)\cup V(T_B)=V(T)$ and $T_A$ and $T_B$ share only one vertex, namely $v^*$. We call such $v^*$ a {\em splitting vertex}. We call recursively two `\Mx{find-tree}' procedures on subsets of $V(D)$ to ensure that the subtrees playing the role of $T_A$ and $T_B$ do not overlap. The first call (line 15) tries to find $X_{v^*}$ and the second one (line 18), using the information $X_{v^*}$ delivered by the first call, tries to find $X_t$. Here $t$ is a vertex specified as an input for the algorithm \Mx{find-tree}. In the end, the current procedure will return $X_t$.
A splitting vertex can produce several subtrees, but there are many ways to divide them into two groups ($T_A$ and $T_B$). To make the algorithm more efficient, we try to obtain as `balanced' a partition ($T_A$ and $T_B$) as possible. The algorithm \Mx{tree-Bipartition} is used to produce a pretty `balanced' bipartition of the subtrees. Moreover we introduce another argument to have a better complexity behavior. The argument $v$ is a vertex which indicates whether there is a predetermined splitting vertex. If $v=\emptyset$, we do not have a predetermined splitting vertex so we find one in the current procedure. Otherwise, we use the vertex $v$ as a splitting vertex.
\begin{algorithm}[h!]
\caption{\Mx{find-tree}($T,D,v,t,L,\{X_u:u\in L\}$), see Figure \ref{fig1}}
\begin{algorithmic}[1]
\IF{$|V(T)\setminus L|\geq 2$}
\STATE {\bf for all $u\in V(T)$:} Set $w(u):=0$ if $u\in L$, $w(u):=1$ otherwise.
\STATE {\bf if $v=\emptyset$ then} Find $v^*\in V(T)$ such that the weight of every subtree $T'$ of $T-v^*$ is at most $w(T)/2$ (see Lemma \ref{balanced}) {\bf else} $v^*:=v$
\STATE $(WH,BL)$:=tree-Bipartition$(T,t,v^*,L)$.
\STATE $U_w:=\bigcup_{i\in WH} V(T_i) \cup \{v^*\}$, $U_b:=\bigcup_{i\in BL} V(T_i)$.
\STATE {\bf for all $u \in L\cap U_w$:} color all vertices of $X_u$ in white.
\STATE {\bf for all $u \in L\cap (U_b\setminus \{v^*\})$:} color all vertices of $X_u$ in black.
\STATE $\alpha:=\min\{w(U_w)/w(T),w(U_b)/w(T)\}$.
\STATE {\bf if $\alpha^2-3\alpha+1\leq 0$ } (i.e., $\alpha \geq (3-\sqrt{5})/2$, see (\ref{par2}) and the definition of $\alpha^*$ afterwards) {\bf then} $v_w:=v_b:=\emptyset$
\STATE {\bf else if $w(U_w)<w(U_b)$ then} $v_w:=\emptyset$, $v_b:=v^*$ {\bf else} $v_w:=v^*$, $v_b:=\emptyset$.
\STATE $X_t:=\emptyset$.
\FOR{$i=1$ to $\left\lceil \frac{2.51}{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \right\rceil$}
\STATE Color the vertices of $V(D)-\bigcup_{u\in L}X_u$ in white or black such that for each vertex the probability to be colored in white is $\alpha$ if $w(U_w)\leq w(U_b)$, and $1-\alpha$ otherwise.
\STATE Let $V_w$ ($V_b$) be the set of vertices of $D$ colored in white (black).
\STATE $S:=$find-tree$(T[U_w],D[V_w],v_w,v^*,L\cap U_w,\{X_u:u\in L\cap U_w\})$
\IF{$S \neq \emptyset$}
\STATE $X_{v^*}:=S$, $L:=L\cup \{v^*\}$.
\STATE $S':=$find-tree$(T[U_b\cup \{v^*\}],D[V_b\cup S],v_b,t,(L\cap U_b),\{X_u:u\in (L\cap U_b)\})$.
\STATE $X_t:=X_t\cup S'$.
\ENDIF
\ENDFOR
\STATE Return $X_t$.
\ELSE [$|V(T)\setminus L| \leq 1$]
\STATE {\bf if $\{z\}=V(T)\setminus L$ then} $X_z:=V(D)-\bigcup_{u\in L}X_u$, $L:=L\cup \{z\}$.
\STATE $L^o:=\{$all leaf vertices of $T$\}.
\WHILE{$L^o\neq L$}
\STATE Choose a vertex $z\in L\setminus L^o$ s.t. $N^+_T(z)\subseteq L^o$.
\STATE $X_z:=X_z\cap \bigcap_{u\in N^+_T(z)}N^-(X_u)$; $L^o:=L^o\cup \{z\}$.
\ENDWHILE
\RETURN $X_t$ \ENDIF
\end{algorithmic} \end{algorithm}
Let $r$ be the root of $T$. To decide whether $D$ contains a copy of $T$, it suffices to run \Mx{find-tree}$(T,D,\emptyset,r,\emptyset,\emptyset)$.
\begin{lemma}\label{argument} During the performance of find-tree($T,D,\emptyset,r,\emptyset,\emptyset$), the sets $X_u$, $u\in L$ are pairwise disjoint. \end{lemma} \begin{proof} We prove the claim inductively. For the initial call, trivially the sets $X_u$, $u\in L$ are pairwise disjoint since $L=\emptyset$. Suppose that for a call find-tree($T,D,v,t,L,\{X_u:\ u\in L\}$) the sets $X_u$, $u\in L$ are pairwise disjoint. For the first subsequent call in line 15, the sets are obviously pairwise disjoint. Consider the second subsequent call in line 18. If $v^*\in L$ before line 17, the claim is true since $S$ returned by the first subsequent call is contained in $X_{v^*}$. Otherwise, observe that $X_u\subseteq V_b$ for all $u\in L\cap U_b$ and they are pairwise disjoint. Since $X_{v^*}\cap V_b=\emptyset$, the sets $X_u$ for all $u\in L\cap U_b$ together with $X_{v^*}$ are pairwise disjoint. \end{proof}
\begin{algorithm}[h!]
\caption{\Mx{tree-Bipartition}$(T,t,v^*,L)$}
\begin{algorithmic}[1]
\STATE $T_1,\ldots ,T_q$ are the subtrees of $T-v^*$. $Q:=\{1,\ldots ,q\}$. $w(T_i):=|V(T_i)\setminus L|$, $\forall i\in Q$.
\IF {$v^*=t$}
\STATE $(A,B)$:=\Mx{Bipartition}$(Q,\{n_i:=w(T_i):i\in Q\})$
\STATE {\bf if $w(A)\leq w(B)$ then} $WH:=A$, $BL:=B$. {\bf else} $WH:=B$, $BL:=A$.
\ELSIF {$t\in V(T_l)$ and $w(T_l)-w(v^*)\geq 0$}
\STATE $(A,B)$:=\Mx{Bipartition}$(Q,\{n_i:=w(T_i):i\in Q\setminus \{l\}\}\cup \{n_l:=w(T_l)-w(v^*)\})$.
\STATE {\bf if $l\in B$ then} $WH:=A$, $BL:=B$. {\bf else} $WH:=B$, $BL:=A$.
\ELSE [$t\in V(T_l)$ and $w(T_l)-w(v^*)< 0$]
\STATE $(A,B)$:=\Mx{Bipartition}$((Q\setminus \{l\})\cup \{v^*\},\{n_i:=w(T_i):i\in Q\setminus \{l\}\}\cup \{n_{v^*}:=w(v^*)\})$.
\STATE {\bf if $v^*\in A$ then} $WH:=A-\{v^*\}$, $BL:=B\cup \{l\}$. {\bf else} $WH:=B-\{v^*\}$, $BL:=A\cup \{l\}$.
\ENDIF
\RETURN $(WH,BL)$.
\end{algorithmic} \end{algorithm}
\begin{lemma}\label{treebip} Consider the algorithm \Mx{tree-Bipartition} and let $(WH,BL)$ be a bipartition of $\{1,\ldots ,q\}$ obtained at the end of the algorithm. Then the partition $U_w:=\bigcup_{i\in WH} V(T_i) \cup \{v^*\}$ and $U_b:=\bigcup_{i\in BL} V(T_i)$ of $V(T)$ has the following property.
\noindent 1) If $v^*=t$, moving a component $T_i$ from one partite set to the other does not decrease the difference $d(w(U_w),w(U_b))$.
\noindent 2) If $v^*\neq t$, either exchanging $v^*$ and the component $T_l$ or moving a component $T_i$, $i\neq v^*,l$ from one partite set to the other does not decrease the difference $d(w(U_w),w(U_b))$. \end{lemma} \begin{proof} Let us consider the property 1). The bipartition $(WH,BL)$ is determined in the first `if' statement in line 3 of \Mx{tree-Bipartition}. Then by Lemma \ref{largecomp} the bipartition $(WH,BL)$ is greedily optimal, which is equivalent to the statement of 1).
Let us consider the property 2). First suppose that the bipartition $(WH,BL)$ is determined in the second `if' statement in line 5 of \Mx{tree-Bipartition}. The exchange of $v^*$ and the component $T_l$ amounts to moving the element $l$ in the algorithm \Mx{Bipartition}. Since $(WH,BL)$ is returned by \Mx{Bipartition} and thus is a greedily optimal bipartition of $Q$, any move of an element in one partite set would not decrease the difference $d(WH,BL)$ and the statement of 2) holds in this case.
Secondly suppose that the bipartition $(WH,BL)$ is determined in the third `if' statement in line 8 of \Mx{tree-Bipartition}. In this case we have $w(T_l)=0$ and thus exchanging $T_l$ and $v^*$ amounts to moving the element $v^*$ in the algorithm \Mx{Bipartition}. By the same argument as above, any move of an element in one partite set would not decrease the difference $d(WH,BL)$ and again the statement of 2) holds. \end{proof}
Consider the following equation: \begin{equation}\label{par2} \alpha^2-3\alpha +1=0\end{equation} Let $\alpha^*:=(3-\sqrt{5})/2$ be one of its roots. In line 10 of the algorithm \Mx{find-tree}, if $\alpha < \alpha^*$ we decide to pass the present splitting vertex $v^*$ as a splitting vertex to the next recursive call, which gets, as an argument, a subtree with greater weight. Lemma \ref{autobalance} justifies this step. It claims that if $\alpha < \alpha^*$, then in the next recursive call with a subtree of weight $(1-\alpha)w(T)$, we have a more balanced bipartition with $v^*$ as a splitting vertex. In fact, the bipartition in the next step is good enough to compensate for the increase in the running time incurred by the biased (`$\alpha < \alpha^*$') bipartition in the present step. We will show this later.
\begin{lemma}\label{autobalance} Suppose that $v^*$ has been chosen to split $T$ for the present call to \Mx{find-tree} such that the weight of every subtree of $T-v^*$ is at most $w(T)/2$ and that $w(T) \geq 5$. Let $\alpha$ be defined as in line $8$ and assume that $\alpha <\alpha^*$. Let $\{U_1,U_2\}=\{U_w,U_b\}$ such that $w(U_2) \geq w(U_1)$ and let $\{T_1,T_2\}=\{T[U_w],T[U_b \cup \{v^*\}]\}$ such that $U_1 \subseteq V(T_1)$ and
$U_2 \subseteq V(T_2)$. Let $\alpha'$ play the role of $\alpha$ in the recursive call using the tree $T_2$. In this case the following holds: $\alpha' \geq (1-2\alpha)/(1-\alpha) > \alpha^*.$ \end{lemma} \begin{proof} Let $T_1,T_2,U_1,U_2,\alpha,\alpha'$ be defined as in the statement. Note that $\alpha=w(U_1)/w(T)$. Let $d=w(U_2)-w(U_1)$ and note that $w(U_1)=(w(T)-d)/2$ and that the following holds \[ \frac{1-2\alpha}{1-\alpha} = \frac{w(T)-2w(U_1)}{w(T)-w(U_1)} = \frac{2d}{w(T)+d}.\]
We now consider the following cases.
{\em Case 1. $d=0$:} In this case $\alpha=1/2 > \alpha^*$, a contradiction.
{\em Case 2. $d=1$:} In this case $\alpha^* > \alpha=w(U_1)/(2w(U_1)+1)$, which implies that $w(U_1) \leq 1$. Therefore $w(U_2) \leq 2$ and $w(T) \leq 3$, a contradiction.
{\em Case 3. $d \geq 2$:} Let $C_1,C_2,\ldots,C_l$ denote the components in $T-v^*$ and without loss of generality assume that $V(C_1) \cup V(C_2) \cup \cdots \cup V(C_a) = U_2$ and $V(C_{a+1}) \cup V(C_{a+2}) \cup \cdots \cup V(C_l) = U_1$. Note that by Lemma \ref{treebip} we must have $w(C_i) \geq d$ or $w(C_i)=0$ for all $i=1,2,\ldots,l$ except possibly for one set $C_j$ (containing $t$), which may have $w(C_j)=1$ (if $w(v^*)=1$).
Let $C_r$ be chosen such that $w(C_r) \geq d$, $1 \leq r \leq a$ and $w(C_r)$ is minimum possible with these constraints. We first consider the case when $w(C_r) > w(U_2)-w(C_r)$. By the above (and the minimality of $w(C_r)$) we note that $w(U_2) \leq w(C_r)+1$ (as either $C_j$, which is defined above, or $v^*$ may belong to $V(T_2)$, but not both). As $w(U_2)=(w(T)+d)/2 \geq w(T)/2 + 1$ we note that $w(C_r) \geq w(T)/2+d/2 -1$. As $w(C_r) \leq w(T)/2$ (by the hypothesis of the lemma), this implies that $d=2$ and $w(C_r)=w(T)/2$ and $w(U_2)=w(C_r)+1$. If $U_1$ contains at least two distinct components with weight at least $d$, then $w(U_1)>w(U_2)$, a contradiction. If $U_1$ contains no component of weight at least $d$, then $w(U_1) \leq 1$ and $w(T) \leq 4$, a contradiction. So $U_1$ contains exactly one component of weight at least $d$. By the minimality of $w(C_r)$ we note that $w(U_1) \geq w(C_r) = w(U_2)-1$, contradicting $d \geq 2$.
Therefore we can assume that $w(C_r) \leq w(U_2)-w(C_r)$, which implies the following
(the last equality is proved above) \[ \alpha' \geq \frac{w(C_r)}{w(U_2)} \geq \frac{d}{(w(T)+d)/2} = \frac{1-2\alpha}{1-\alpha}. \]
As $\alpha<\alpha^*$, we note that $\alpha' \geq (1-2\alpha)/(1-\alpha) > (1-2\alpha^*)/(1-\alpha^*) =\alpha^*$. \end{proof}
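The role of $\alpha^*$ as the fixed point of the map $\alpha\mapsto(1-2\alpha)/(1-\alpha)$ used in this proof can be verified directly (a numerical sanity check, not part of the proof):

```python
import math

alpha_star = (3 - math.sqrt(5)) / 2           # the root of alpha^2 - 3*alpha + 1 = 0

# alpha* solves the quadratic ...
assert abs(alpha_star ** 2 - 3 * alpha_star + 1) < 1e-12
# ... equivalently, it is the fixed point of alpha -> (1 - 2*alpha)/(1 - alpha)
assert abs((1 - 2 * alpha_star) / (1 - alpha_star) - alpha_star) < 1e-12
# and since the map is decreasing, any alpha < alpha* is sent strictly
# above alpha*, exactly as the lemma concludes
for a in [0.05, 0.15, 0.25, 0.35]:
    assert (1 - 2 * a) / (1 - a) > alpha_star
```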
For the selection of the splitting vertex $v^*$ we have two criteria in the algorithm \Mx{find-tree}: (i) {\em `found'} criterion: the vertex is found so that the weight of every subtree $T'$ of $T-v^*$ is at most $w(T)/2$. (ii) {\em `taken-over'} criterion: the vertex is passed on to the present step as the argument $v$ by the previous step of the algorithm. The following statement is an easy consequence of Lemma \ref{autobalance}.
\begin{corollary}\label{twosteppass} Suppose that $w(T) \geq 5$. If $v^*$ is selected with the `taken-over' criterion, then $\alpha > \alpha^*$. \end{corollary} \begin{proof} For the initial call find-tree($T,D,\emptyset,r,\emptyset,\emptyset$) we have $v=\emptyset$ and thus the splitting vertex $v^*$ is selected with the `found' criterion. We will prove the claim by induction. Consider the first vertex $v^*$ selected with the `taken-over' criterion during the performance of the algorithm. Then in the previous step, the splitting vertex was selected with the `found' criterion and thus in the present step we have $\alpha > \alpha^*$ by Lemma \ref{autobalance}.
Now consider a vertex $v^*$ selected with the `taken-over' criterion. Then in the previous step, the splitting vertex was selected with the `found' criterion since otherwise, by the induction hypothesis we have $\alpha > \alpha^*$ in the previous step, and $\emptyset$ has been passed on as the argument $v$ for the present step. This is a contradiction. \end{proof}
Due to Corollary \ref{twosteppass} the vertex $v^*$ selected in line 3 of the algorithm \Mx{find-tree} functions properly as a splitting vertex. In other words, we have more than one subtree of $T-v^*$ in line 4 with positive weights.
\begin{lemma}\label{split} If $w(T)\geq 2$, then for each of $U_w$ and $U_b$ found in line 5 by \Mx{find-tree} we have $w(U_w)>0$ and $w(U_b)>0$. \end{lemma} \begin{proof} For the sake of contradiction suppose that one of $w(U_w)$ and $w(U_b)$ is zero. Let us assume $w(U_w)=0$ and $w(U_b)=w(T)$. If $v^*$ is selected with the `found' criterion, each component in $T[U_b]$ has a weight at most $w(T)/2$ and $T[U_b]$ contains at least two components of positive weight. Then we can move one component with a positive weight from $U_b$ to $U_w$, which will reduce the difference $d(U_w,U_b)$, a contradiction. The same argument applies when $w(U_w)=w(T)$ and $w(U_b)=0$.
Consider the case when $v^*$ is selected with the `taken-over' criterion. There are three possibilities.
{\em Case 1. $w(T)\geq 5$:} In this case we obtain a contradiction with Corollary~\ref{twosteppass}.
{\em Case 2. $w(T)=4$:} In the previous step using $T^0$, where $T\subseteq T^0$, the splitting vertex $v^*$ was selected with the `found' criterion. Then by the argument in the first paragraph, we have $w(T^0)\geq 5$. A contradiction follows from Lemma~\ref{autobalance}.
{\em Case 3. $2\leq w(T)\leq 3$:} First suppose that $w(v^*)=0$. Note that $T[U_w]-v^*$ or $T[U_b]$ contains a component of weight $w(T)$, since otherwise we can move a component with a positive weight from one partite set to the other and reduce $d(U_w,U_b)$. Considering the previous step using $T^0$, where $T\subseteq T^0$, the out-tree $T$ is the larger of $T^0_w$ and $T^0_b$. We pass the splitting vertex $v^*$ to the larger of the two only when $\alpha>\alpha^*$. So when $w(T)=3$, we have $3>(1-\alpha^*)w(T^0)$ and thus $w(T^0)\leq 4$, and when $w(T)=2$ we have $2>(1-\alpha^*)w(T^0)$ and thus $w(T^0)\leq 3$. In either case, however, $T^0-v^*$ contains a component with a weight greater than $w(T^0)/2$, contradicting the choice of $v^*$ in the previous step (recall that $v^*$ is selected with the `found' criterion in the previous step using $T^0$).
Secondly, suppose that $w(v^*)=1$. Then $w(U_w)=w(T)$ and $w(U_b)=0$. We can reduce the difference $d(U_w,U_b)$ by moving the component with a positive weight from $U_w$ to $U_b$, a contradiction.
Therefore, for each of $U_w$ and $U_b$ found in line 5 by \Mx{find-tree} we have $w(U_w)>0$ and $w(U_b)>0$. \end{proof}
\begin{lemma}\label{correct} Given a digraph $D$, an out-tree $T$ and a specified vertex $t\in V(T)$, consider the set $X_t$ (in line 22) returned by the algorithm find-tree($T,D,v,t,L,\{X_u:\ u\in L\}$). If $w\in X_t$ then $D$ contains a $(t,w)$-tree that meets the restrictions on $L$. Conversely, if $D$ contains a $(t,w)$-tree for a vertex $w\in V(D)$ that meets the restrictions on $L$, then $X_t$ contains $w$ with probability larger than $1-1/e>0.6321$. \end{lemma} \begin{proof} Lemma \ref{split} guarantees that the splitting vertex $v^*$ selected at any recursive call of \Mx{find-tree} really `splits' the input out-tree $T$ into two nontrivial parts, unless $w(T)\leq 1$.
First we show that if $w\in X_t$ then $D$ contains a $(t,w)$-tree for a vertex $w\in V(D)$ that meets the restrictions on $L$. When $|V(T)\setminus L|\le 1$, using Lemma \ref{argument} it is straightforward to check from the algorithm that the claim holds. Assume that the claim is true for all subsequent calls to \Mx{find-tree}. Since $w\in S'$ for some $S'$ returned by a call in line 18, the subgraph $D[V_b\cup X_{v^*}]$ contains a $T[U_b\cup \{v^*\}]$-isomorphic $(t,w)$-tree $T_D^b$ meeting the restrictions on $(L\cap U_b)\cup \{v^*\}$ by induction hypothesis. Moreover, $X_{v^*}\neq \emptyset$ when $S'\ni w$ is returned and this implies that there is a vertex $u\in X_{v^*}$ such that $T_D^b$ is a $(v^*,u)$-tree. Since $u\in X_{v^*}$, induction hypothesis implies that the subgraph $D[V_w]$ contains a $T[U_w]$-isomorphic $(v^*,u)$-tree, say $T_D^w$.
Consider the subgraph $T_D:=T_D^w \cup T_D^b$. To show that $T_D$ is a $T$-isomorphic $(t,w)$-tree in D, it suffices to show that $V(T_D^w)\cap V(T_D^b)=\{u\}$. Indeed, $V(T_D^w)\subseteq V_w$, $V(T_D^b)\subseteq V_b\cup X_{v^*}$ and $V_w \cap V_b=\emptyset$. Thus if two trees $T_D^w$ and $T_D^b$ share vertices other than $u$, these common vertices should belong to $X_{v^*}$. Since $T_D^b$ meets the restrictions on $(L\cap U_b)\cup \{v^*\}$, we have $X_{v^*}\cap V(T_D^b)=\{u\}$. Hence $u$ is the only vertex that two trees $T_D^w$ and $T_D^b$ have in common. We know that $u$ plays the role of $v^*$ in both trees. Therefore we conclude that $T_D$ is $T$-isomorphic, and since $w$ plays the role of $t$, it is a $(t,w)$-tree. Obviously $T_D$ meets the restrictions on $L$.
Secondly, we shall show that if $D$ contains a $(t,w)$-tree for a vertex $w\in V(D)$ that meets the restrictions on $L$, then $X_t$ contains $w$ with probability larger than $1-1/e>0.6321$. When $|V(T)\setminus L|\le 1$, the algorithm \Mx{find-tree} is deterministic and returns $X_t$ which is exactly the set of all vertices $w$ for which there exists a $(t,w)$-tree meeting the restrictions on $L$. Hence the claim holds for the base case, and we may assume that the claim is true for all subsequent calls to \Mx{find-tree}.
Suppose that there is a $(t,w)$-tree $T_D$ meeting the restrictions on $L$ and that this is a $(v^*,w')$-tree, that is, the vertex $w'$ plays the role of $v^*$. Then the vertices of $T_D$ corresponding to $U_w$, say $T_D^w$, are colored white and those of $T_D$ corresponding to $U_b$, say $T_D^b$, are colored black as intended with probability $\geq (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^k$. When we hit the right coloring for $T$, the digraph $D[V_w]$ contains the subtree $T_D^w$ of $T_D$ which is $T[U_w]$-isomorphic and which is a $(v^*,w')$-tree. By induction hypothesis, the set $S$ obtained in line 15 contains $w'$ with probability larger than $1-1/e$. Note that $T_D^w$ meets the restrictions on $L\cap U_w$.
If $w'\in S$, the restrictions delivered onto the subsequent call for \Mx{find-tree} in line 17 contains $w'$. Since $T_D$ meets the restrictions on $L$ confined to $U_b-v^*$ and it is a $(v^*,w')$-tree with $w'\in S=X_{v^*}$, the subtree $T_D^b$ of $T_D$ which is $T[U_b\cup \{v^*\}]$-isomorphic meets all the restrictions on $L$. Hence by induction hypothesis, the set $S'$ returned in line 18 contains $w$ with probability larger than $1-1/e$.
The probability $\rho$ that $S'$, returned by \Mx{find-tree} in line 18 at an iteration of the loop, contains $w$ is, thus, $$\rho > (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^k \times (1-1/e)^2 > 0.3995 (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^k.$$ After looping $\lceil(0.3995 (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^k)^{-1}\rceil$ times in line
12, the probability that $X_t$ contains $w$ is at least $$1-(1-\rho)^{\frac{1}{0.3995 (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^k}}>1-(1-0.3995 (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^k)^{\frac{1}{0.3995 (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^k}}>1-\frac{1}{e}.$$ Observe that the resulting bound $1-1/e$ does not depend on $\alpha$ or on the probability of coloring a vertex white or black. \end{proof}
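The amplification step above (repeating a trial with success probability $\rho$ about $1/\rho$ times) can be checked numerically, assuming independent trials:

```python
import math

def amplified(rho):
    """Failure probability after ceil(1/rho) independent trials is
    (1 - rho)^ceil(1/rho) <= (1 - rho)^(1/rho) < 1/e, so the success
    probability always exceeds 1 - 1/e, regardless of rho."""
    return 1 - (1 - rho) ** math.ceil(1 / rho)

# the bound 1 - 1/e ~ 0.6321 holds uniformly over the trial probability
for rho in [0.9, 0.5, 0.1, 0.01, 1e-4]:
    assert amplified(rho) > 1 - 1 / math.e
```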
The complexity of Algorithm \Mx{find-tree} is analyzed in the following theorem. Its proof, given in Appendix, is based on Lemmas \ref{split} and \ref{autobalance}.
\begin{theorem}\label{th1} Algorithm \Mx{find-tree} has running time $O(n^2 k^{\rho}C^k)$, where $w(T)=k$ and $|V(D)|=n$, and $C$ and $\rho$ are defined and bounded as follows: $$C = \left( \frac{1}{(\alpha^*)^{\alpha^*} (1-\alpha^*)^{1-\alpha^*}} \right)^{\frac{1}{\alpha^*}},\ \rho = \frac{\ln(1/6)}{\ln(1-\alpha^*)},\ \rho \leq 3.724, \mbox{ and } C\le 5.704.$$ \end{theorem}
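The numerical bounds in the theorem can be reproduced directly from the definitions of $C$ and $\rho$ (a quick check):

```python
import math

a = (3 - math.sqrt(5)) / 2                           # alpha*
C = (1 / (a ** a * (1 - a) ** (1 - a))) ** (1 / a)   # base of the exponential term
rho = math.log(1 / 6) / math.log(1 - a)              # exponent of the polynomial term

assert 5.70 < C <= 5.704                             # C is approximately 5.703
assert 3.72 < rho <= 3.724                           # rho is approximately 3.723
```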
Derandomization of the algorithm \Mx{find-tree} can be carried out using the general method presented by Chen et al. \cite{CHeLuSzeZha07} and based on the construction of $(n,k)$-universal sets studied in \cite{naor1995san} (for details, see Appendix). As a result, we obtain the following:
\begin{theorem} There is an $O(n^2C^{k+o(k)})$-time deterministic algorithm that solves the {\sc $k$-Out-Tree} problem, where $C\le 5.704.$ \end{theorem}
\section{Algorithm for {\sc $k$-Int-Out-Branching}}
A {\em $k$-internal out-tree} is an out-tree with at least $k$ internal vertices. We call a $k$-internal out-tree {\em minimal} (a {\em minimal $k$-tree}, for short) if none of its proper subtrees is a $k$-internal out-tree. The {\sc Rooted Minimal $k$-Tree} problem is as follows: given a digraph $D$, a vertex $u$ of $D$ and a minimal $k$-tree $T$, where $k$ is a parameter, decide whether $D$ contains an out-tree rooted at $u$ and isomorphic to $T.$ Recall that {\sc $k$-Int-Out-Branching} is the following problem: given a digraph $D$ and a parameter $k$, decide whether $D$ contains an out-branching with at least $k$ internal vertices. Finally, the {\sc $k$-Int-Out-Tree} problem is stated as follows: given a digraph $D$ and a parameter $k$, decide whether $D$ contains an out-tree with at least $k$ internal vertices.
\begin{lemma}\label{minimaltree} Let $T$ be a
$k$-internal out-tree. Then $T$ is minimal if and only if
$|\IN{T}|=k$ and every leaf $u\in \LT{T}$ is the only child of its parent $N^-(u)$.
\end{lemma}
\begin{proof} Assume that $T$ is minimal. It cannot have more than $k$ internal vertices, because otherwise by removing any of its leaves, we obtain
a subtree of $T$ with at least $k$ internal vertices. Thus $|\IN{T}|=k$. If there are sibling leaves $u$ and $w$, then removing one of them provides a subtree of $T$ with
$|\IN{T}|$ internal vertices, contradicting the minimality of $T$.
Now, assume that
$|\IN{T}|=k$ and every leaf $u\in \LT{T}$ is the only child of its parent $N^-(u)$. Observe that every subtree of $T$ can be obtained from $T$ by deleting a leaf of $T$, a leaf in the resulting out-tree, etc. However, removing any leaf $v$ from $T$ decreases the number of internal vertices, and thus creates subtrees with at most $k-1$ internal vertices. Thus, $T$ is minimal.
\end{proof}
In fact, Lemma \ref{minimaltree} can be used to generate all non-isomorphic minimal $k$-trees. First, build an (arbitrary) out-tree $T^0$ with $k$ vertices. Then extend $T^0$ by adding a vertex $x'$ for each leaf $x\in \LT{T^0}$ with an arc $(x,x')$. The resulting out-tree $T'$ satisfies the properties of Lemma \ref{minimaltree}. Conversely, by Lemma \ref{minimaltree}, any minimal $k$-tree can be constructed in this way.
{\bf Generating Minimal $k$-Tree (GMT) Procedure}
a. Generate a $k$-vertex out-tree $T^0$ and a set $T':=T^0.$
b. For each leaf $x\in \LT{T'}$, add a new vertex $x'$ and an arc $(x,x')$ to $T'$.
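The GMT procedure is short enough to sketch directly; here an out-tree is represented by a hypothetical dict mapping each vertex to its list of children:

```python
def gmt(children):
    """GMT procedure: given a k-vertex out-tree T^0 (dict vertex -> children),
    attach one new vertex below every leaf.  In the result every original
    vertex is internal and every leaf is the only child of its parent."""
    T = {v: list(ch) for v, ch in children.items()}
    for v in list(T):
        if not T[v]:                 # v is a leaf of T^0
            new = ('leaf-of', v)     # fresh vertex name (illustrative)
            T[v] = [new]
            T[new] = []
    return T

T0 = {0: [1, 2], 1: [], 2: []}       # an out-tree on k = 3 vertices
T = gmt(T0)
assert len([v for v in T if T[v]]) == 3         # exactly k internal vertices
assert all(len(T[p]) == 1                       # each leaf is an only child
           for p in T for l in T[p] if not T[l])
```

The two assertions are exactly the characterization of minimal $k$-trees given in Lemma \ref{minimaltree}.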
Due to the following simple observation, to solve {\sc $k$-Int-Out-Tree} for a digraph $D$ it suffices to solve {\sc Rooted Minimal $k$-Tree} for each vertex $u\in V(D)$ and each minimal $k$-tree $T$ rooted at $u.$
\begin{lemma}\label{containmintree} Any $k$-internal out-tree rooted at $r$ contains a minimal $k$-tree rooted at $r$ as a subdigraph. \end{lemma}
Similarly, the next two lemmas show that to solve {\sc $k$-Int-Out-Branching} for a digraph $D$ it suffices to solve {\sc Rooted Minimal $k$-Tree} for each vertex $u\in S$ and each minimal $k$-tree $T$ rooted at $u,$ where $S$ is the unique strong connectivity component of $D$ without incoming arcs.
\begin{lemma}\cite{BanGut00} A digraph $D$ has an out-branching rooted at vertex $r\in V(D)$ if and only if $D$ has a unique strong connectivity component $S$ of $D$ without incoming arcs and $r\in S.$ One can check whether $D$ has a unique strong connectivity component and find one, if it exists, in time $O(m+n)$, where $n$ and $m$ are the number of vertices and arcs in $D$, respectively. \end{lemma}
\begin{lemma}\label{extension} Suppose a given digraph $D$ with $n$ vertices and $m$ arcs has an out-branching rooted at vertex $r$. Then any minimal $k$-tree rooted at $r$ can be extended to a $k$-internal out-branching rooted at $r$ in time $O(m+n)$. \end{lemma}
\begin{proof} Let $T$ be a $k$-internal out-tree rooted at $r$. If $T$ is spanning, there is nothing to prove. Otherwise, choose $u\in V(D)\setminus V(T)$. Since there is an out-branching rooted at $r$, there is a directed path $P$ from $r$ to $u$. This implies that whenever $V(D)\setminus V(T)\neq \emptyset$, there is an arc $(v,w)$ with $v\in V(T)$ and $w\in V(D)\setminus V(T)$. By adding the vertex $w$ and the arc $(v,w)$ to $T$, we obtain a $k$-internal out-tree and the number of vertices $T$ spans is strictly increased by this operation. Using breadth-first search starting at some vertex of $V(T)$, we can extend $T$ into a $k$-internal out-branching in $O(n+m)$ time. \end{proof}
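The extension step in the proof can be sketched as follows (an illustrative sketch, not the paper's code; $D$ is a hypothetical dict of out-neighbour lists and the tree is given by its vertex and arc sets):

```python
from collections import deque

def extend_to_branching(D, tree_vertices, tree_arcs):
    """Extend an out-tree of D to a spanning out-branching by BFS outward
    from the current tree, adding one arc per newly reached vertex; the
    set of internal vertices never shrinks."""
    V, arcs = set(tree_vertices), set(tree_arcs)
    q = deque(V)
    while q:
        v = q.popleft()
        for w in D.get(v, []):
            if w not in V:           # arc (v, w) leaves the current tree
                V.add(w)
                arcs.add((v, w))
                q.append(w)
    return V, arcs

# toy digraph with an out-branching rooted at 1; start from the tree 1 -> 2
V, arcs = extend_to_branching({1: [2, 3], 2: [4], 3: [], 4: [1]},
                              {1, 2}, {(1, 2)})
assert V == {1, 2, 3, 4} and len(arcs) == 3   # spanning, one in-arc per non-root
```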
Since {\sc $k$-Int-Out-Tree} and {\sc $k$-Int-Out-Branching} can be solved similarly, we will only deal with the {\sc $k$-Int-Out-Branching} problem. We will assume that our input digraph contains a unique strong connectivity component $S$. Our algorithm called {\em IOBA} for solving {\sc $k$-Int-Out-Branching} for a digraph $D$ runs in two stages. In the first stage, we generate {\em all} minimal $k$-trees. We use the GMT procedure described above to achieve this. At the second stage, for each $u\in S$ and each minimal $k$-tree $T$, we check whether $D$ contains an out-tree rooted at $u$ and isomorphic to $T$ using our algorithm from the previous section. We return TRUE if and only if we succeed in finding an out-tree $H$ of $D$ rooted at $u\in S$ which is isomorphic to a minimal $k$-tree.
In the literature, mainly rooted (undirected) trees and not out-trees are studied. However, every rooted tree can be made an out-tree by orienting every edge away from the root and every out-tree can be made a rooted tree by disregarding all orientations. Thus, rooted trees and out-trees are equivalent and we can use results obtained for rooted trees for out-trees.
Otter \cite{Ott48} showed that the number of non-isomorphic out-trees on $k$ vertices is $t_k=O^*(2.95^k)$. We can generate all non-isomorphic rooted trees on $k$ vertices using the algorithm of Beyer and Hedetniemi \cite{BeyHed80} of runtime $O(t_k)$. Using the GMT procedure we generate all minimal $k$-trees. We see that the first stage of IOBA can be completed in time $O^*(2.95^k)$.
In the second stage of IOBA, we try to find a copy of a minimal $k$-tree $T$ in $D$ using our algorithm from the previous section. The running time of our algorithm is $O^*(5.704^k)$. Since the number of vertices of $T$ is bounded from above by $2k-1$, the overall running time for the second stage of the algorithm is $O^*(2.95^k\cdot 5.704^{2k-1})$. Thus, the overall time complexity of the algorithm is $O^*(2.95^k\cdot 5.704^{2k-1})=O^*(96^k)$.
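The arithmetic behind the simplification is a one-line check: $2.95^k\cdot 5.704^{2k-1}=\frac{1}{5.704}\,(2.95\cdot 5.704^2)^k$, and the base is just below $96$:

```python
base = 2.95 * 5.704 ** 2   # 2.95^k * 5.704^(2k-1) = (1/5.704) * base^k
assert 95.9 < base < 96    # hence O*(2.95^k * 5.704^(2k-1)) = O*(96^k)
```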
We can reduce the complexity with a more refined analysis of the algorithm. The major contribution to the large constant 96 in the above simple analysis comes from the running time of our algorithm from the previous section. There we use the upper bound on the number of vertices in a minimal $k$-tree. Most of the minimal $k$-trees have less than $k-1$ leaves, which implies that the upper bound $2k-1$ on the order of a minimal $k$-tree is too big for the majority of the minimal $k$-trees. Let $T(k)$ be the running time of IOBA. Then we have
\begin{equation}\label{eq1}T(k)=O^*\left(\sum_{k+1\leq k'\leq 2k-1} \mbox{(\# of minimal }k-\mbox{trees on }k'\mbox{ vertices) }\times (5.704^{k'})\right)\end{equation}
A minimal $k$-tree $T'$ on $k'$ vertices has $k'-k$ leaves, and thus the out-tree $T^0$ from which $T'$ is constructed has $k$ vertices of which $k'-k$ are leaves. Hence the number of minimal $k$-trees on $k'$ vertices is the same as the number of non-isomorphic out-trees on $k$ vertices with $k'-k$ leaves. Here an interesting counting problem arises. Let $g(k,l)$ be the number of non-isomorphic out-trees on $k$ vertices with $l$ leaves. To our knowledge, this function has not been studied yet. Leaving its determination as a challenging open question, we give here an upper bound on $g(k,l)$ and use it for a better analysis of $T(k)$. In particular, we are interested in the case $l\geq k/2$.
Consider an out-tree $T^0$ on $k\ge 3$ vertices which has $\alpha k$ internal vertices and $(1- \alpha )k$ leaves. We want to obtain an upper bound on the number of such non-isomorphic out-trees $T^0$. Let $T^c$ be the subtree of $T^0$ obtained after deleting all its leaves and suppose that $T^c$ has $\beta k$ leaves. Assume that $\alpha \leq 1/2$ and notice that $\alpha k$ and $\beta k$ are integers. Clearly $\beta < \alpha$.
Each out-tree $T^0$ with $(1- \alpha )k$ leaves can be obtained by appending $(1-\alpha )k$ leaves to $T^c$ so that each of the vertices in $\LT{T^c}$ has at least one leaf appended to it. Imagine that we have $\beta k =|\LT{T^c}|$ and $\alpha k - \beta k=|\IN{T^c}|$ distinct boxes. Then what we are
looking for is the number of ways to put $(1-\alpha)k$ balls into the boxes so that each of the first $\beta k$ boxes is nonempty. Again this is equivalent to putting $(1-\alpha -\beta)k$ balls into $\alpha k$ distinct boxes. It is an easy exercise to see that this number equals $ \binom{k-\beta k-1}{\alpha k-1}. $
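The stars-and-bars count can be verified by brute force on a small instance (the concrete numbers $k=12$, $\alpha k=4$, $\beta k=2$ are hypothetical, chosen only for illustration):

```python
from math import comb
from itertools import product

def weak_compositions(n, m):
    """Brute-force count of (x_1,...,x_m) with x_i >= 0 and sum n."""
    return sum(1 for x in product(range(n + 1), repeat=m) if sum(x) == n)

# small instance: k = 12, alpha*k = 4 boxes, beta*k = 2 boxes that must stay
# nonempty, and (1 - alpha)k = 8 leaves (balls) to distribute
k, ak, bk = 12, 4, 2
free_balls = (k - ak) - bk   # (1 - alpha - beta)k, after one ball per forced box
assert weak_compositions(free_balls, ak) == comb(k - bk - 1, ak - 1)  # both 84
```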
Note that the above number does not give the exact value for the non-isomorphic out-trees on $k$ vertices with $(1-\alpha )k$ leaves. This is because we treat an out-tree $T^c$ as a labeled one, which may lead us to distinguish two assignments of balls even though the corresponding out-trees $T^0$ are isomorphic to each other.
A minimal $k$-tree obtained from $T^0$ has $(1-\alpha)k$ leaves and thus $(2-\alpha)k$ vertices. With the upper bound $O^*(2.95^{\alpha k})$ on the number of $T^c$'s by \cite{Ott48}, by (\ref{eq1}) we have the following: \begin{align*} T(k) &= O^*\left(\sum_{\alpha \leq 1/2} \sum_{\beta < \alpha} 2.95^{\alpha k} \binom{k-\beta k-1}{\alpha k-1}(5.704)^{(2-\alpha)k}\right) + O^*\left(\sum_{\alpha >1/2}2.95^{\alpha k}(5.704)^{(2-\alpha)k}\right)\\
&= O^*\left(\sum_{\alpha \leq 1/2} \sum_{\beta < \alpha} 2.95^{\alpha k} \binom{k}{\alpha k}(5.704)^{(2-\alpha)k}\right)+O^*\left(2.95^k(5.704)^{3k/2}\right)\\
&= O^*\left(\sum_{\alpha \leq 1/2} \left(2.95^{\alpha} \frac{1}{\alpha ^{\alpha}(1-\alpha)^{1-\alpha}} (5.704)^{(2-\alpha)}\right)^k\right)+O^*(40.2^k) \end{align*} The term in the sum over $\alpha \leq 1/2$ above is maximized when $\alpha = \frac{2.95}{2.95+5.704}$, which yields $T(k)=O^*(49.4^k).$ Thus, we conclude with the following theorem. \begin{theorem} {\sc $k$-Int-Out-Branching} is solvable in time $O^*(49.4^k)$. \end{theorem}
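Both numeric claims in the display are easy to confirm: the second term has base $2.95\cdot 5.704^{3/2}\approx 40.19$, and a grid search over $\alpha$ agrees with the stated maximizer and the base $49.4$ (a sketch of the check, using the bound $\binom{k}{\alpha k}\le (\alpha^{\alpha}(1-\alpha)^{1-\alpha})^{-k}$ from the derivation):

```python
from math import exp, log

def base(a):
    """Per-k base of the summand 2.95^{ak} binom(k, ak) 5.704^{(2-a)k}."""
    return exp(a * log(2.95) - a * log(a) - (1 - a) * log(1 - a)
               + (2 - a) * log(5.704))

a_star = 2.95 / (2.95 + 5.704)            # stationary point of log(base)
best = max((i / 10000 for i in range(1, 5001)), key=base)
assert abs(best - a_star) < 1e-3          # grid search agrees with a_star
assert 49.3 < base(a_star) < 49.4         # first term: O*(49.4^k)
assert 40.1 < 2.95 * 5.704 ** 1.5 < 40.2  # second term: O*(40.2^k)
```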
\section{Conclusion} In this paper we refine the approach of Chen et al. \cite{CHeLuSzeZha07} and Rossmanith \cite{KneMolRicRos06} based on Divide-and-Color technique. Our technique is based on a more complicated coloring and within this technique we refined the result of Alon et al. \cite{AloYusZwi95} for the $k$-{\sc Out-Tree} problem. It is interesting to see if this technique can be used to obtain faster algorithms for other parameterized problems.
As a byproduct of our work, we obtained the first $O^*(2^{O(k)})$-time algorithm for {\sc $k$-Int-Out-Branching}. We used the classical result of Otter \cite{Ott48} that the number of non-isomorphic trees on $k$ vertices is $O^*(2.95^k)$. An interesting combinatorial problem is to refine this bound for trees having $\lfloor \alpha k \rfloor$ leaves for some $\alpha < 1$.
\section{Appendix} \subsection{Algorithm of Alon, Yuster and Zwick}\label{Alonsec}
Let $c: V(D)\rightarrow \{1,\ldots ,k\}$ be a vertex $k$-coloring of a digraph $D$ and let $T$ be a $k$-vertex out-tree contained in $D$ (as a subgraph). Then $V(T)$ and $T$ are {\em colorful} if no pair of vertices of $T$ are of the same color.
The following algorithm of \cite{AloYusZwi95} verifies whether $D$ contains a colorful out-tree $H$ such that $H$ is isomorphic to $T$, when a coloring $c: V(D)\rightarrow \{1,\ldots ,k\}$ is given. Note that a $k$-vertex subgraph $H$ will be colorful with a probability of at least $k!/k^k > e^{-k}$. Thus, we can find a copy of $T$ in $D$ in $e^k$ expected iterations of the following algorithm.
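The success probability per random coloring is easy to confirm numerically; $k!/k^k > e^{-k}$ follows from $k! > (k/e)^k$:

```python
from math import exp, factorial

# a fixed k-vertex subgraph is colorful iff its vertices receive k distinct
# colors, which happens with probability k!/k^k under a uniform coloring
for k in range(1, 20):
    assert factorial(k) / k ** k > exp(-k)
```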
\begin{algorithm}[h!]
\caption{$\mathcal{L}(T, r)$}
\begin{algorithmic}[1]
\REQUIRE An out-tree $T$ on $k$ vertices, a specified vertex $r$ of $T$
\ENSURE $\mathcal{C}_T(u)$ for each vertex $u$ of $D$, which is a family of all color sets that appear on colorful copies of $T$ in $D$, where $u$ plays the role of $r$
\IF{$|V(T)|=1$}
\FORALL{$u\in V(D)$}
\STATE Insert $\{c(u)\}$ into $\mathcal{C}_T(u)$.
\ENDFOR
\STATE Return $\mathcal{C}_T(u)$ for each vertex $u$ of $D$.
\ELSE
\STATE Choose an arc $(r',r'')\in A(T)$.
\STATE Let $T'$ and $T''$ be the subtrees of $T$ obtained by deleting $(r',r'')$, where $T'$ and $T''$ contains $r'$ and $r''$, respectively.
\STATE Call $\mathcal{L}(T',r')$.
\STATE Call $\mathcal{L}(T'',r'')$.
\FORALL{$u\in V(D)$}
\STATE Compose the family of color sets $\mathcal{C}_T(u)$ as follows:
\FORALL{$(u,v)\in A(D)$}
\FORALL{$ C'\in \mathcal{C}_{T'}(u)$ and $C''\in \mathcal{C}_{T''}(v)$}
\STATE $C:= C' \cup C''$ if $C'\cap C''=\emptyset$
\ENDFOR
\ENDFOR
\ENDFOR
\STATE Return $\mathcal{C}_T(u)$ for each vertex $u$ of $D$.
\ENDIF
\end{algorithmic} \end{algorithm}
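Algorithm $\mathcal{L}(T,r)$ admits a compact sketch (an illustrative reimplementation, not the authors' code; for simplicity the arc chosen in line 7 is always one leaving the root of $T$, so the first recursive family stays rooted at $r$ as the combination step expects):

```python
import random

def subtree_vertices(children, r):
    """Vertices of the out-tree rooted at r (children: dict node -> list)."""
    out, stack = [], [r]
    while stack:
        v = stack.pop()
        out.append(v)
        stack.extend(children.get(v, []))
    return out

def colorful_sets(children, r, D, c):
    """For each u in D, the family of color sets appearing on colorful
    copies of the out-tree rooted at r, with u playing the role of r."""
    kids = children.get(r, [])
    if not kids:                                   # |V(T)| = 1
        return {u: {frozenset([c[u]])} for u in D}
    r2 = kids[0]                                   # split on the arc (r, r2)
    rest = dict(children)
    rest[r] = kids[1:]                             # T' keeps root r
    sub = {v: children.get(v, []) for v in subtree_vertices(children, r2)}
    C1, C2 = colorful_sets(rest, r, D, c), colorful_sets(sub, r2, D, c)
    CT = {u: set() for u in D}
    for u in D:                                    # combination step
        for v in D[u]:
            for A in C1[u]:
                for B in C2[v]:
                    if not (A & B):
                        CT[u].add(A | B)
    return CT

def find_copy(D, children, r, k, tries=200):
    """Repeat random k-colorings; each trial succeeds with probability at
    least k!/k^k > e^{-k} whenever D contains a copy of T."""
    for _ in range(tries):
        c = {u: random.randrange(k) for u in D}
        if any(colorful_sets(children, r, D, c).values()):
            return True
    return False
```

For example, on the path $1\to 2\to 3$ the directed path on three vertices is found, while a root with three children is (correctly) never found.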
\begin{theorem}\label{ccrefined}
Let $T$ be an out-tree on $k$ vertices and let $D=(V,A)$ be a digraph. A subgraph of $D$ isomorphic to $T$, if one exists, can be found in $O(k(4e)^k\cdot |A|)$ expected time. \end{theorem} \begin{proof}
Let $|V(T')|=k'$ and $|V(T'')|=k''$, where $k'+k''=k$. Then
$|\mathcal{C}_{T'}(u)|\le\binom{k-1}{k'-1}$ and
$|\mathcal{C}_{T''}(u)|\le\binom{k-1}{k''-1}$. Checking $C' \cap C''=\emptyset$ takes $O(k)$ time. Hence, lines 11--18 require at most
$\binom{k}{k/2}^2\cdot k|A| \leq k2^{2k}|A|$ operations.
Let $T(k)$ be the number of operations for $\mathcal{L}(T, r)$. We have the following recursion.
\begin{equation}
T(k)\leq T(k') + T(k'') + k2^{2k-2}|A| \end{equation}
By induction, it is not difficult to check that $T(k)\leq k4^k|A|$. \end{proof}
Let $\mathcal{C}$ be a family of vertex $k$-colorings of a digraph
$D$. We call $\mathcal{C}$ an {\em $(n,k)$-family of perfect hashing functions} if for each $X\subseteq V(D)$, $|X|=k$, there is a coloring $c\in \mathcal{C}$ such that $X$ is colorful with respect to $c.$ One can derandomize the above algorithm of Alon et al. by using any $(n,k)$-family of perfect hashing functions in the obvious way. The time complexity of the derandomized algorithm depends on the size of the $(n,k)$-family of perfect hashing functions. Let $\tau(n,k)$ denote the minimum size of an $(n,k)$-family of perfect hashing functions. Nilli \cite{nilliCPC3} proved that $\tau(n,k)\ge \Omega(e^k\log n/\sqrt{k}).$ It is unclear whether there is an $(n,k)$-family of perfect hashing functions of size $O^*(e^k)$ \cite{CHeLuSzeZha07}, but even if one exists, the running time of the derandomized algorithm would be $O^*((4e)^k).$
\subsection{Proof of Lemma \ref{largecomp}}
\begin{proof} First we show that the values $n_i$ chosen in line 3 of the algorithm do not increase during its execution. This is because the difference $d(A,B)$ does not increase during the execution; in fact, $d(A,B)$ strictly decreases. To see this, suppose that the element $i$ is selected in the current step. If $n(A\cup\{i\}) < n(B-\{i\})$, then the difference $d(A,B)$ obviously strictly decreases. Otherwise, if $n(A\cup\{i\}) > n(B-\{i\})$, we have $d(A\cup\{i\},B-\{i\})<n_i<d(A,B)$.
To see that the algorithm returns a greedily optimal bipartition $(A,B)$, it is enough to observe that for the final bipartition $(A,B)$, moving any element of $A$ or $B$ does not decrease $d(A,B)$. Suppose that the last movement, of the element $i_0$, makes $n(A) > n(B)$. Then a simple computation implies that $d(A,B)<n_{i_0}.$ Since the values of $n_i$ in line 3 do not increase during the execution of the algorithm, we have $n_j\geq n_{i_0} >d(A,B)$ for every $j\in A$, so moving any element of $A$ would not decrease $d(A,B)$. On the other hand, suppose that $n(A)<n(B)$. By the definition of the algorithm, for every $j \in B$ with positive weight we have $n_j\geq d(A,B)$, and thus moving any element of $B$ would not decrease $d(A,B)$. Hence the current bipartition $(A,B)$ is greedily optimal.
Now let us consider the running time of the algorithm. Sorting the elements in nondecreasing order of their weights will take $O(q\log{q})$ time. Moreover, once an element is moved from one partite set to another, it will not be moved again and we move at most $q$ elements without duplication during the algorithm. This gives us the running time of $O(q\log{q})$. \end{proof}
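The procedure analysed above is defined earlier in the paper; the following local-search sketch is illustrative only. It mirrors the invariant used in the proof (an element is moved only when the move strictly decreases $d(A,B)=|n(A)-n(B)|$), not the paper's exact $O(q\log q)$ procedure:

```python
def greedy_bipartition(weights):
    """Local search toward a greedily optimal bipartition (A, B): repeatedly
    move the heaviest element of the heavier side whose move strictly
    decreases d(A, B) = |n(A) - n(B)|.  Terminates because d(A, B) strictly
    decreases and there are finitely many bipartitions."""
    n = lambda S: sum(weights[i] for i in S)
    A, B = set(range(len(weights))), set()
    while True:
        d = abs(n(A) - n(B))
        heavy, light = (A, B) if n(A) >= n(B) else (B, A)
        for i in sorted(heavy, key=lambda i: -weights[i]):
            if abs((n(heavy) - weights[i]) - (n(light) + weights[i])) < d:
                heavy.remove(i)
                light.add(i)
                break
        else:
            return A, B   # no single move improves d(A, B): greedily optimal

A, B = greedy_bipartition([4, 3, 2, 1])
assert sum(4 - i for i in A) == sum(4 - i for i in B) == 5   # perfect split here
```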
\subsection{Proof of Theorem \ref{th1}}
\begin{proof} Let $L(T,D)$ denote the number of times the `if'-statement in line $1$ of Algorithm \Mx{find-tree} is false (in all recursive calls to \Mx{find-tree}). We will prove that $L(T,D)\le R(k)=Bk^{\rho{}}C{}^k+1$, where $B\ge 1$ is a constant whose value will be determined later in the proof. This implies that the number of calls to \Mx{find-tree} where the `if'-statement in line $1$ is true is also bounded by $R(k)$, since if line $1$ is true then we have two calls to \Mx{find-tree} (in lines $15$ and $18$). We can therefore think of the search tree of Algorithm 3 as an out-tree in which every internal node has out-degree two, and hence the number of leaves is greater than the number of internal nodes.
Observe that each iteration of the for-loop in line 12 of Algorithm \Mx{find-tree} makes two recursive calls to \Mx{find-tree} and the time spent in each iteration of the for-loop is at most $O(n^2)$. As the time spent in each call of \Mx{find-tree} outside the for-loop is also bounded by $O(n^2)$ we obtain the desired complexity bound $O(n^2 k^{\rho{}}C{}^k)$.
Thus, it remains to show that $L(T,D)\le R(k)=Bk^{\rho{}}C{}^k+1$. First note that if $k=0$ or $k=1$ then line $1$ is false exactly once (as there are no recursive calls) and $\min\{R(1),R(0)\}\ge 1=L(T,D)$. If $k\in \{3, 4\}$, then line 1 is false a constant number of times by Lemma \ref{split} and let $B$ be the minimal integer such that $L(T,D)\le R(k)=Bk^{\rho{}}C{}^k+1$ for both $k=3$ and 4. Thus, we may now assume that $k \geq 5$ and proceed by induction on $k$.
Let $R'(\alpha{},k)=\frac{6 ((1-\alpha{})k)^{\rho{}} C{}^{(1-\alpha{})k} }{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}}.$ Let $\alpha{}$ be defined as in line $8$ of Algorithm \Mx{find-tree}. We will consider the following two cases separately.
{\em Case 1, $\alpha{} \geq \alpha^*{}$:} In this case we note that the following holds as $k \geq 2$ and $(1-\alpha{}) \geq \alpha{}$.
\begin{center} $\begin{array}{rcl}
{}
L(T,D) & \leq & \left\lceil \frac{2.51}{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \right\rceil
\times \left( R(\alpha{}k) + R((1-\alpha)k) \right) \\
{}
& \leq & \frac{3}{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \times (2 \cdot R((1-\alpha)k)) \\
& = & R'(\alpha{},k). \\ \end{array}$ \end{center}
By the definition of $\rho{}$ we observe that $(1-\alpha^*{})^{\rho{}} = 1/6$, which implies that the following holds by the definition of $C{}$:
$$ R'(\alpha^*{},k) = 6 ((1-\alpha^*{})k)^{\rho{}} C{}^{(1-\alpha^*{})k} \times C{}^{\alpha^*{} k}
= k^{\rho{}} C{}^k
= R(k). $$
Observe that
$$\ln(R'(\alpha{},k)) = \ln(6) + \rho{} \left[ \ln(k) + \ln(1-\alpha{}) \right] +
k \left[ (1-\alpha{})\ln(C{}) -\alpha{} \ln(\alpha{}) - (1-\alpha{})\ln(1-\alpha{}) \right] $$
We now differentiate $\ln(R'(\alpha{},k))$ which gives us the following:
\begin{center} $\begin{array}{rcl} \frac{\partial(\ln(R'(\alpha{},k)))}{\partial (\alpha{})} & = & \rho{} \frac{-1}{1-\alpha{}} +
k \left( -\ln(C{}) - ( 1+ \ln(\alpha{})) + (1 + \ln(1-\alpha{})) \right) \\
& = & \frac{- \rho{}}{1-\alpha{}} + k \left( \ln \left( \frac{1-\alpha{}}{\alpha{} C{}} \right) \right). \\ \end{array}$ \end{center}
Since $k \geq 0$ we note that the above equality implies that $R'(\alpha{},k)$ is a decreasing function in $\alpha{}$ in the interval $\alpha^*{} \leq \alpha{} \leq 1/2$. Therefore $L(T,D) \leq R'(\alpha{},k) \leq R'(\alpha^*{},k) = R(k)$, which proves Case 1.
{\em Case 2, $\alpha{} < \alpha^*{}$:} In this case we will specify the splitting vertex when we make recursive calls using the larger of $U_w$ and $U_b$ (defined in line $5$ of Algorithm \Mx{find-tree}). Let $\alpha{}'$ denote the $\alpha{}$-value in such a recursive call.
By Lemma \ref{autobalance} we note that the following holds :
$$ \frac{1}{2} \geq \alpha{}' \geq \frac{1-2\alpha{}}{1-\alpha{}} > \alpha^*{}. $$
Analogously to Case 1 (as $R'(\alpha{}',(1-\alpha{})k)$ is a decreasing function in $\alpha{}'$ when $1/2 \geq \alpha{}' \geq \alpha^*{}$)
we note that the $L$-values for these recursive calls are bounded by the following, where $\beta{} = \frac{1-2\alpha{}}{1-\alpha{}}$ (which implies that $(1-\alpha{})(1-\beta{})= \alpha{}$):
\begin{center} $\begin{array}{rcl}
{} R'(\alpha{}',(1-\alpha{})k) & \leq & R'\left( \beta{}, (1-\alpha{})k \right) \\
{}
& = & \frac{3}{ \left( \beta{}^{\beta{}} (1-\beta{})^{(1-\beta{})} \right)^{(1-\alpha{})k}} \times 2 \times R((1-\beta)(1-\alpha{})k) \\
& = & \frac{6 R(\alpha{}k)}{ \left( \beta{}^{\beta{}} (1-\beta{})^{(1-\beta{})} \right)^{(1-\alpha{})k} }. \\ \end{array}$ \end{center}
Thus, in the worst case we may assume that $\alpha{}'=\beta{}=(1-2\alpha{})/(1-\alpha{})$ in all the recursive calls using the larger of $U_w$ and $U_b$. The following now holds (as $k \geq 2$).
\begin{center} $\begin{array}{rcl}
{}
L(T,D) & \leq & \left\lceil \frac{2.51}{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \right\rceil
\times \left( R(\alpha{}k) + R'(\alpha{}',(1-\alpha{})k) \right) \\
{}
& \leq & \frac{3}{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \times R(\alpha{}k) \times
\left( 1+ \frac{6}{ \left( \beta{}^{\beta{}} (1-\beta{})^{(1-\beta{})} \right)^{(1-\alpha{})k} } \right) \\
& \leq & \frac{3R(\alpha{}k) }{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \times
\frac{7}{ \left( \beta{}^{\beta{}} (1-\beta{})^{(1-\beta{})} \right)^{(1-\alpha{})k} } \\ \end{array}$ \end{center}
Let $R^*(\alpha{},k)$ denote the last line of the above derivation (for any value of $\alpha{}$). By the definition of $\rho{}$, and using that $(1-\alpha^*{})^2=\alpha^*{}$ (which follows from the defining equation of $\alpha^*{}$), we note that $ \rho{} = \frac{2 \ln(1/6)}{ 2 \ln(1-\alpha^*{})} = \frac{\ln(1/36)}{\ln(\alpha^*)}$, which implies that $(\alpha^*{})^{\rho{}}= 1/36$. By the definition of $C{}$ and the fact that if $\alpha{}=\alpha^*{}$ then $\beta{}=(1-2\alpha^*{})/(1-\alpha^*{})=\alpha^*{}$, we obtain the following:
\begin{center} $\begin{array}{rcl}
{}
R^*(\alpha^*{},k) & = & \frac{3 R(\alpha^*{}k) }{\alpha^*{}^{\alpha^*{}k} (1-\alpha^*{})^{(1-\alpha^*{})k}} \times
\frac{7}{ \left( \alpha^*{}^{\alpha^*{}} (1-\alpha^*{})^{(1-\alpha^*{})} \right)^{(1-\alpha^*{})k} } \\
{}
& = & 21 \cdot R(\alpha^*{}k) \cdot C{}^{\alpha^*{}k} \cdot C{}^{\alpha^*{} (1-\alpha^*{}) k} \\
{}
& = & 21 \alpha^*{}^{\rho{}} k^{\rho{}} C{}^{\alpha^*{}k} \times C{}^{(2\alpha^*{}-\alpha^*{}^2)k} \\
{}
& = & 21 \alpha^*{}^{\rho{}} R(k) \\
& < & R(k). \\ \end{array}$ \end{center}
We will now simplify $R^*(\alpha{},k)$ further, before we differentiate $\ln(R^*(\alpha{},k))$. Note that $\beta{} = \frac{1-2\alpha{}}{1-\alpha{}}$ implies that $(1-\alpha{})(1-\beta{})= \alpha{}$ and $\beta(1-\alpha{})=1-2\alpha{}$.
\begin{center} $\begin{array}{rcl}
{}
R^*(\alpha{},k) & = & \frac{21 R(\alpha{}k) }{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \times
\frac{1}{ \left( \beta{}^{\beta{}} (1-\beta{})^{(1-\beta{})} \right)^{(1-\alpha{})k} } \\
{}
& = & \frac{21 (\alpha{}k)^{\rho{}} C^{\alpha{}k} }{\alpha{}^{\alpha{}k} (1-\alpha{})^{(1-\alpha{})k}} \times
\frac{1}{ \left( \frac{1-2\alpha{}}{1-\alpha{}} \right)^{(1-2\alpha{})k}
\left( \frac{\alpha{}}{1-\alpha{}} \right)^{\alpha{}k} } \\
& = & 21 (\alpha{}k)^{\rho{}} \left( \frac{ C^{\alpha{}} }{\alpha{}^{2\alpha{}} (1-2\alpha{})^{(1-2\alpha{})}} \right)^k. \\ \end{array}$ \end{center}
Thus, we have the following:
$$\ln(R^*(\alpha{},k)) = \ln(21) + \rho{} \left( \ln(k) + \ln(\alpha{}) \right) +
k \left( \alpha{}\ln(C{}) -2\alpha{} \ln(\alpha{}) - (1-2\alpha{})\ln(1-2\alpha{}) \right). $$
We now differentiate $\ln(R^*(\alpha{},k))$ which gives us the following:
\begin{center} $\begin{array}{rcl}
{} \frac{\partial(\ln(R^*(\alpha{},k)))}{\partial(\alpha{})} & = & \frac{\rho{}}{\alpha{}} + k \left( \ln(C{}) -2(1+\ln(\alpha{})) + 2(1+\ln(1-2\alpha{})) \right) \\
& = & \frac{\rho{}}{\alpha{}} + k \left( \ln\left( \frac{C{} (1-2\alpha{})^2}{\alpha{}^2} \right) \right) \\ \end{array}$ \end{center}
Since $k \geq 0$ we note that the above equality implies that $R^*(\alpha{},k)$ is an increasing function in $\alpha{}$ in the interval $1/3 \leq \alpha{} \leq \alpha^*{}$. Therefore $L(T,D) \leq R^*(\alpha{},k) \leq R^*(\alpha^*{},k) < R(k)$, which proves Case 2. \end{proof}
\subsection{Derandomization of Our Randomized Algorithm for $k$-{\sc Out-Tree}}\label{Derandomdsec} In this subsection we discuss the derandomization of the algorithm \Mx{find-tree} using the general method presented by Chen et al. \cite{CHeLuSzeZha07} and based on the construction of $(n,k)$-universal sets studied in \cite{naor1995san}. \begin{definition}
An $(n,k)$-universal set $\mathcal F$ is a set of functions from $[n]$ to $\{0,1\}$ such that for every subset $S\subseteq [n]$ with $|S|=k$, the set $\mathcal F|_{S}=\{f|_S : f\in \mathcal F\}$ is equal to the set $2^S$ of all functions from $S$ to $\{0,1\}$. \end{definition} \begin{proposition}[\cite{naor1995san}] \label{propuniversalsets}
There is a deterministic algorithm of running time $O(2^kk^{O(\log k)} n\log n)$ that constructs an $(n,k)$-universal set $\mathcal F$ such that $|\mathcal F|=2^kk^{O(\log k)}\log n$. \end{proposition}
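The defining property can be checked by brute force on tiny instances (an illustrative checker, exponential in both $n$ and $k$, for intuition only):

```python
from itertools import combinations, product

def is_universal(F, n, k):
    """Check that F (tuples in {0,1}^n) is an (n,k)-universal set: every
    k-subset S of positions must realize all 2^k binary patterns."""
    return all(
        len({tuple(f[i] for i in S) for f in F}) == 2 ** k
        for S in combinations(range(n), k)
    )

cube = list(product([0, 1], repeat=3))            # all 8 functions on [3]
assert is_universal(cube, 3, 2)
assert not is_universal([(0, 0, 0), (1, 1, 1)], 3, 2)   # misses mixed patterns
```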
We explain how Proposition \ref{propuniversalsets} is used to obtain a deterministic algorithm for the {\sc $k$-Out-Tree} problem. Let $V(D)=\{v_1,\dots,v_n\}$. First, we construct an $(n,k)$-universal set $\mathcal F$ of size $2^kk^{O(\log k)}\log n$ (this can be done in time $O(2^kk^{O(\log k)}n\log n)$). Then we call the algorithm \Mx{find-tree} but replace steps 13 and 14 by the following steps: \begin{enumerate} \item[13] {\bf for} each function $f\in \mathcal F$ {\bf do} \item[14] $\forall i$ such that $x_i\in V(D)-\bigcup_{u\in L}X_u$, let $v_i$ be colored white if $f(i)=0$ and black otherwise \end{enumerate} This replacement makes the algorithm \Mx{find-tree} deterministic. Since $\mathcal F$ is an $(n,k)$-universal set, if $D$ contains a subgraph isomorphic to $T$, then there is a function in $\mathcal F$ under which the vertices corresponding to $U_w$ in $D$ are colored white while the vertices corresponding to $U_b$ are colored black. Using induction on $k$, one can prove that this deterministic algorithm correctly returns the required tree, provided that such a tree exists in the graph. An analysis similar to that of Theorem \ref{th1} shows that the running time of the deterministic algorithm is $O(n^2C^{k+o(k)})$.
\begin{theorem} There is a $O(n^2C^{k+o(k)})$ time deterministic algorithm that solves the {\sc $k$-Out-Tree} problem. \end{theorem}
\end{document}
"id": "0903.0938.tex",
"language_detection_score": 0.7566224336624146,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\large
\begin{abstract} Let $\mathbb{T}$ be a torus. We prove that all subsets of $\mathbb{T}$ with finitely many boundary components (none of them being points) embed properly into $\mathbb{C}^2$. We also show that the algebras of analytic functions on certain countably connected subsets of closed Riemann surfaces are doubly generated. \end{abstract}
\maketitle
\section{Introduction, main results and notation}
Our main concern is the problem of embedding bordered Riemann surfaces properly into $\mathbb{C}^2$. A (finite) bordered Riemann surface is obtained by removing a finite set of closed disjoint connected components $D_1,...,D_k$ from a compact surface $\mathcal{R}$, i.e. the bordered surface is $\tilde\mathcal{R}:=\mathcal{R}\setminus\cup_{i=1}^k D_i$. \
For a positive integer $d\geq 2$ it is known that there is a lowest possible integer $N_d=[\frac{3d}{2}]+1$ such that all Stein manifolds of dimension $d$ embed properly into $\mathbb{C}^{N_d}$ \cite{eg}\cite{fs2}\cite{sc} (for more details, see for instance the survey \cite{fc2}). It is also known that all open Riemann surfaces embed properly into $\mathbb{C}^3$, but it remains an open question whether the dimension of the target domain in this case always can be pushed down to 2. \
For (positive) results when the genus of $\mathcal{R}$ is $0$ we refer to the articles \cite{kn}\cite{al}\cite{la}\cite{gs}\cite{wd2}, and in the case of genus $\geq 1$ to \cite{cf}\cite{wd3}. \
We prove the following theorem:
\begin{theorem}\label{main} Let $\mathbb{T}$ be a torus, and let $U\subset\mathbb{T}$ be a domain such that $\mathbb{T}\setminus U$ consists of a finite number of connected components, none of them being points. Then $U$ embeds properly into $\mathbb{C}^2$. \end{theorem}
In \cite{wd3} we proved that under the assumption that $U$ can be embedded onto a Runge surface in $\mathbb{C}^2$, one can embed arbitrarily small perturbations of $U$ properly into $\mathbb{C}^2$. Our task then is to
\
(i) Embed $U$ onto a Runge surface, \
(ii) Pass from small perturbations to $U$ itself.
\
(We say that a surface $U$ is Runge if holomorphic functions on $U$ may be approximated uniformly on compacts in $U$ by polynomials).\
To achieve (i) we recall from \cite{wd3} that for any one complementary component $D_1$, we have that $\mathbb{T}\setminus D_1$ embeds into $\mathbb{C}^2$ by some map $\phi$, and that the image is Runge. To embed the smaller domain $U$ onto a Runge surface, we will perturb the image of $U$ by constructing a map that could be described as a local (near some neighborhood of $\phi(U)$) singular shear acting transversally to $\phi(U)$ - the singularities being placed inside each component of $\phi(\mathbb{T}\setminus U)$. This construction is the content of Section 3. \
To achieve (ii) we will apply a technique from \cite{gs} used by Globevnik and Stens\o nes to embed planar domains into $\mathbb{C}^2$. He and Schramm have shown that any subset of $\mathbb{T}$ is biholomorphic to a circular subset $U'$ of another torus $\mathbb{T}'$ \cite{hs}. This allows us to identify $U$ with a point in $\mathbb{R}^N$ in such a way that the point corresponds to the complex structure on $\mathbb{T}$ and the centers and the radii of the boundary components of $U$. Now small perturbations of $U'$ embed properly into $\mathbb{C}^2$, and the perturbation corresponds to some circled subset of some torus, i.e. some (other) point in $\mathbb{R}^N$. So if we identify all subsets of tori close to $U$ with points in a ball $B$ in $\mathbb{R}^N$, we may in this manner construct a map $\psi:B\rightarrow\mathbb{R}^N$, such that all circled domains corresponding to points in the image $\psi(B)$ embed properly into $\mathbb{C}^2$. Our goal is to construct the map $\psi$ in such a way that it is continuous and close to the identity. In that case, by Brouwer's fixed point theorem, the point corresponding to $U$ will be contained in the image $\psi(B)$, and the result follows. \
Continuity in the setting of uniformization of subsets of tori is treated in Section 2, while continuity regarding the identification of circled subsets with properly embeddable subsets is dealt with in Section 4. \
As was pointed out in \cite{cf}, the question about the embeddability of an open Riemann surface ${\Omega}$ is related to a question about the function algebra $\mathcal{O}({\Omega})$ of all analytic functions on ${\Omega}$. For an integer $m\in\mathbb{N}$ we say that the algebra $\mathcal{O}({\Omega})$ is $m$-generated if there exist functions $f_i\in\mathcal{O}({\Omega})$, $i=1,...,m$ such that $\mathbb{C}[f_1,...,f_m]$ is dense in $\mathcal{O}({\Omega})$. Since any ${\Omega}$ embeds properly into $\mathbb{C}^3$ we have that $\mathcal{O}({\Omega})$ is 3-generated, but it is unknown whether or not 2 generators might be sufficient. By the perturbation results in Section 3 we get the following: \begin{theorem}\label{alg} Let $\mathbb{T}$ be a torus, and let $U\subset\mathbb{T}$ be a domain such that each connected component of $\mathbb{T}\setminus U$ has non-empty interior. Then the function algebra $\mathcal{O}(U)$ is $2$-generated. \end{theorem} Theorem \ref{alg} is a special case of the following theorem: \begin{theorem}\label{algmain} Let $\mathcal{R}$ be a closed Riemann surface, let $U\subset\mathcal{R}$ be a domain such that $\partial U$ is a collection of smooth Jordan curves, and let $\phi:U\rightarrow\mathbb{C}^2$ be an embedding that extends across $\partial U$. Assume that $\phi(\overline U)$ is polynomially convex. If $V\subset U$ is a connected open set obtained from $U$ by removing at most countably many disks, then $\mathcal{O}(V)$ is $2$-generated. \end{theorem} The proof of the last two theorems will be given in Section 3. \
As usual we will denote an ${\epsilon}$-ball centered at a point $p$ in $\mathbb{R}^n$ or $\mathbb{C}^n$ by $B_{\epsilon}(p)$ (or simply $B_{\epsilon}$ if the center is the origin), and the corresponding ${\epsilon}$-disk in $\mathbb{C}$ will be denoted $\triangle_{\epsilon}(p)$. By a disk in a Riemann surface $\mathcal{R}$ we will mean a subset homeomorphic to $\overline\triangle$. \
\textbf{Acknowledgement.} The author would like to thank Franc Forstneri\v{c} for several comments and suggestions for improvements of the present article. In particular the present proof of Proposition \ref{perturb} was shown to us by Forstneri\v{c}.
\section{Circled subsets of tori and uniformization}
Let ${\tau}\in\mathbb{C}$ be contained in the upper half plane $H^+$. If we define the lattice $$ L_{\tau}:=\{m\cdot{\tau} + n\in\mathbb{C};m,n\in\mathbb{Z}\}, $$ we obtain a torus by considering the quotient $\mathbb{C}/\sim_{\tau}$, where $z\sim_{\tau} w\Leftrightarrow z-w\in L_{\tau}$. It is known that all tori are obtained in this way. For a given ${\tau}$ we let $\mathcal{R}(\Omega({\tau}))$ denote the quotient, i.e. the torus, and we let $\Omega({\tau})$ denote $\mathbb{C}$ regarded as its universal cover. We may choose ${\tau}$ with $0<\mathrm{Re}({\tau})\leq 1$. \
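For example, ${\tau}=i$ yields the square lattice $L_i=\{m\cdot i+n\in\mathbb{C};m,n\in\mathbb{Z}\}$, and two points $z,w\in\mathbb{C}$ represent the same point of the torus $\mathcal{R}({\Omega}(i))$ precisely when $z-w=mi+n$ for some $m,n\in\mathbb{Z}$. \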
We are concerned with subsets of tori with finitely many boundary components. Let $\mathbb{T}$ be a torus, let $\tilde K_1,...,\tilde K_m$ be compact connected disjoint subsets of $\mathbb{T}$, such that $\tilde{\mathbb{T}}:=\mathbb{T}\setminus(\cup_{i=1}^m\tilde K_i)$ is connected. Then $\mathbb{T}$ may be identified with its cover $\Omega({\tau})$ for some ${\tau}$, and $\tilde{\mathbb{T}}$ with some subset $U$ of $\Omega({\tau})$. It is clear that $U$ is completely determined by ${\tau}$ and some choice of complementary components $K_1,\dots,K_m$ of $U$ that intersect the parallelogram with vertices $0,1,{\tau},{\tau}+1$,
and we let $\Omega({\tau},K_1,\dots,K_m)$ denote such a $U$. We call such a set an m-domain. We let $\mathcal{R}({\Omega}({\tau},\dots))$ denote the corresponding subset of $\mathcal{R}({\Omega}({\tau}))$. \
Fix an m-domain ${\Omega}({\lambda},K_1,...,K_m)$, and assume that ${\lambda}\notin K_i$ for $i=1,...,m$. We want to consider a space of m-domains ``close'' to ${\Omega}({\lambda},K_1,...,K_m)$. For this purpose we recall the definition of the Hausdorff metric: Let $X$ be a metric space with distance function $\rho:X\times X\rightarrow\mathbb{R}^+$ (we avoid the letter $m$, which is reserved for the number of boundary components). For two closed subsets $S_1,S_2$ of $X$ one first defines $$ d(S_1,S_2)=\mathrm{sup}_{x\in S_1}\mathrm{inf}\{\rho(x,y);y\in S_2\}. $$ Then the Hausdorff distance between the sets $S_1$ and $S_2$ is defined by $$ d_H(S_1,S_2)=d(S_1,S_2)+d(S_2,S_1). $$
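To illustrate the definition, take $X=\mathbb{R}$ with the usual distance, $S_1=[0,1]$ and $S_2=[0,2]$: every point of $S_1$ lies in $S_2$, so $d(S_1,S_2)=0$, whereas the point $2\in S_2$ has distance $1$ to $S_1$, so $d(S_2,S_1)=1$, and hence $d_H(S_1,S_2)=1$. Note that $d$ itself is not symmetric; only the symmetrized quantity $d_H$ is a metric. (We use the sum of the two one-sided distances rather than their maximum; the two conventions give equivalent metrics.) \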
Let ${\delta}>0$, let $U_0$ denote the ${\delta}$-disk centered at ${\lambda}$, and for $i=1,...,m$ let $U_i$ denote the ${\delta}$-ball centered at the closed connected set $K_i$ with respect to the Hausdorff metric: $$ U_i=\{S\subset\mathbb{C};S\ \mathrm{is\ closed},\ d_H(S,K_i)<{\delta}\}. $$ If ${\delta}$ is small enough, then for any ${\lambda}'\in U_0$ and any connected sets $C_i\in U_i$ with $\mathbb{C}\setminus C_i$ connected for $i=1,...,m$, the set ${\Omega}({\lambda}',C_1,...,C_m)$ is an m-domain. (We will also choose ${\delta}$ small enough such that $C_i\in U_i, C_j\in U_j, i\neq j\Rightarrow C_i\cap C_j=\emptyset$, and such that no element $C_i\in U_i$ can intersect the disk $U_0$.) We call the set of these m-domains $X^m_{{\delta}}({\Omega}({\lambda},K_1,...,K_m))$. Let $\Omega_1=\Omega({\tau},K_1,\dots,K_m),\Omega_2= \Omega({\lambda},C_1,\dots,C_m)\in X^m_{{\delta}}({\Omega}({\lambda},K_1,...,K_m))$, and let $S_1=\{{\tau}\}\cup K_1\cup\dots\cup K_m$, $S_2=\{{\lambda}\}\cup C_1\cup\dots\cup C_m$ be the corresponding subsets of $\mathbb{C}$. We then define $$ d_1(\Omega_1,\Omega_2):=d_H(S_1,S_2). $$ Among all m-domains we single out those whose boundary components are all circles. We will denote these m-domains $\Omega({\tau},z_1,r_1,\dots,z_m,r_m)$, where $(z_i,r_i)$ corresponds to the center and the radius of the $i$th boundary component (for some choice of ordering of these components). We will use boldface letters, such as $\bf x\rm$, to denote a $2m$-tuple $\bf x\rm=(z_1,r_1,\dots,z_m,r_m)$, simplifying the notation to $\Omega({\tau},\bf x\rm)$. We call such domains circled m-domains, and we denote the set of all such domains $T^m$. \
Let $\Omega({\tau},\bf{x}\rm)$ be a circled m-domain, and let $X^m_{{\delta}}({\Omega}({\tau},\bf{x}\rm))$ be a space as defined above. For all circled m-domains contained in $X^m_{{\delta}}({\Omega}({\tau},\bf{x}\rm))$ we have a natural ordering of all the boundary components, and we may identify all such domains $\Omega({\lambda},\bf{y}\rm)$ with points $({\lambda},\bf{y}\rm)\in\mathbb{R}^{2+3m}$. So if ${\epsilon}$ is small enough, the points in the ball $B_{\epsilon}({\tau},\bf{x}\rm)\subset\mathbb{R}^{2+3m}$ are in unique correspondence with circled m-domains in $X^m_{{\delta}}({\Omega}({\tau},\bf{x}\rm))$. We may thus give another metric to this (local) space of circled m-domains, henceforth denoted $T^m_{\epsilon}({\tau},\bf{x}\rm)$, by defining $$
d_2(\Omega({\tau},\bf{x}\rm),\Omega({\lambda},\bf{y}\rm)):=\|({\tau},\bf{x}\rm)-({\lambda},\bf{y}\rm)\|, $$
where $\|\cdot\|$ is the Euclidean distance on $\mathbb{R}^{2+3m}$. \
We will now give a lemma regarding conformal mappings of arbitrary m-domains onto circled m-domains. The contents of the lemma are in essence results proved by He and Schramm \cite{hs}. Stated for the special case of tori, their results are as follows: Let $\mathbb{T}\setminus\cup_{i=1}^m K_i$ be an m-connected subdomain of some torus $\mathbb{T}$. Then there exists some torus $\mathbb{T}'$ and a domain ${\Omega}\subset\mathbb{T}'$ such that the following holds:
\
(1) ${\Omega}$ is circled, meaning that if we lift ${\Omega}$ to the universal cover of $\mathbb{T}'$ then the complement consists of round disks (these disks may also be points), \
(2) ${\Omega}$ is conformally equivalent to $\mathbb{T}\setminus\cup_{i=1}^m K_i$.
\
Furthermore they proved that
\
(3) A circled domain in the Riemann sphere is unique up to M\"{o}bius transformations, i.e. if $f:{\Omega}_1\rightarrow{\Omega}_2$ is a biholomorphic map between circled domains, then $f$ is the restriction to ${\Omega}_1$ of a M\"{o}bius transformation.
\
Formulating (1) and (2) for m-domains as defined above we have the following:
\
$(a)$ \ For any ${\Omega}={\Omega}({\lambda},K_1,\dots,K_m)$ there exists a conformal mapping $f$ that maps ${\Omega}$ onto some ${\Omega}({\lambda}',\bf{x}\rm)\in T^m$, \
$(b)$ \ The map $f$ respects the relation $\sim_{\lambda}$, meaning that $f(z+m+n{\lambda})=f(z)+m+nf({\lambda})$ for all $m,n\in\mathbb{Z}$.
\
In (b) we have normalized $f$ so that it fixes the points $0$ and $1$. By (3) it then follows that $f$ is unique.
Now fix a domain ${\Omega}({\lambda},K_1,\dots,K_m)$, and consider a space $X^m_{{\delta}}({\Omega}({\lambda},K_1,\dots,K_m))$ of nearby m-domains as defined above. For each domain ${\Omega}'={\Omega}({\lambda}',C_1,...,C_m)\in X^m_{{\delta}}({\Omega}({\lambda},K_1,\dots,K_m))$ there is a unique map $f$ that maps ${\Omega}'$ onto a circled m-domain as above, fixing the points $0$ and $1$, and we may define a map $\varphi:X^m_{{\delta}}({\Omega}({\lambda},K_1,\dots,K_m))\rightarrow T^m$ by $$ \varphi({\Omega}')=(f({\lambda}'),z_1,r_1,...,z_m,r_m), $$ where $z_i$ and $r_i$ are the center and radius of the boundary component corresponding to $C_i$. Note that by uniqueness, if ${\Omega}'={\Omega}({\lambda}',C_1,...,C_m)$ is a circled m-domain, so that ${\Omega}'$ has the representation ${\Omega}({\lambda}',z_1,r_1,...,z_m,r_m)$ where $(z_i,r_i)$ is the center and the radius of $C_i$, then $\varphi({\Omega}')=({\lambda}',z_1,r_1,...,z_m,r_m)$. In this respect we may say that $\varphi\mid_{T^m\cap X^m_{{\delta}}({\Omega}({\lambda},K_1,\dots,K_m))} = \mathrm{id}$. \
We sum these things up in a lemma, in which we also establish that the map $\varphi$ is continuous. To prove continuity we will need the following definitions and theorem from \cite{gz}: \
Let $\{B_n\}$, for $n=1,2,\dots$, denote a sequence of domains in the Riemann sphere that include the point $z=\infty$. We define the kernel of this sequence as the largest domain $B$ including $z=\infty$, every closed subset of which is contained in each $B_n$ from some $n$ on. We shall say that the sequence $\{B_n\}$ converges to its kernel $B$ if an arbitrary subsequence has the same kernel $B$. \
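To illustrate the definition, consider $B_n=\{z\in\hat{\mathbb{C}};|z|>\frac{1}{n}\}$. The kernel is $B=\hat{\mathbb{C}}\setminus\{0\}$: any closed subset of $B$ stays at positive distance from $0$ and is therefore contained in each $B_n$ from some $n$ on, while no domain containing $0$ has this property. Since every subsequence has the same kernel, the sequence $\{B_n\}$ converges to $B$. \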
\begin{theorem}\label{gz}(\cite{gz}, page 228.) Let $\{A_n\}$ denote a sequence of domains $A_n$, $n=1,2,...$, in the Riemann sphere that include the point $z=\infty$. Suppose that this sequence converges to a kernel $A$. Let $\{f_n(z)\}$ denote a sequence of functions $\zeta=f_n(z)$ such that for each $n=1,2,...$, the function $f_n(z)$ maps the domain $A_n$ onto a domain $B_n$ including the point $\zeta=\infty$ in such a way that $f_n(\infty)=\infty$ and $f_n'(\infty)=1$. Then for the sequence $\{f_n(z)\}$ to converge uniformly in the interior of the domain $A$ to a univalent function $f(z)$ it is necessary and sufficient that the sequence $\{B_n\}$ have a kernel and converge to it, in which case the function $\zeta=f(z)$ maps $A$ univalently onto $B$. \end{theorem} We want to apply this theorem to sequences of m-domains. Let $A_n$ be a sequence of m-domains including the origin and converging to an m-domain $A$. Let $A_n'$ and $A'$ denote the domains in $\mathbb{C}$ including $\infty$ given by the correspondence $z\mapsto\frac{1}{z}$. Then $A_n'$ is a sequence as above, and $A'$ is its kernel. Let $\{f_n\}$ be a sequence of univalent functions mapping $A_n$ onto a domain $B_n$ including the origin with $f_n(0)=0$, $f_n'(0)=1$. For each $n$ define the function $F_n(z)=\frac{1}{f_n(\frac{1}{z})}$ mapping the domain $A_n'$ onto $B_n'$, where $B_n'$ is related to $B_n$ by the correspondence $z\mapsto\frac{1}{z}$. Then the sequences $A_n'$ and $F_n$ satisfy the conditions in the above theorem. If the sequence $f_n(z)$ converges to a univalent function $f$ on $A$, the sequence $F_n$ converges to a univalent function $F$ on $A'$. By the theorem the sequence $B_n'$ has a kernel $B'$ and converges to it, and $F$ maps $A'$ onto $B'$. This implies that the sequence $B_n$ has a kernel $B$ and converges to it, and $f$ maps $A$ onto $B$.
On the other hand, if the sequence $B_n$ has a kernel $B$ and converges to it, then the sequence $B_n'$ has a kernel $B'$ and converges to it, and by the theorem $F_n$ converges to a univalent function $F$ on $A'$, mapping $A'$ onto the kernel $B'$. So the sequence $f_n$ converges to a univalent function $f$ on $A$ mapping $A$ onto the kernel $B$.
\begin{lemma}\label{mmap} Let $X^m_{{\delta}}({\Omega}({\tau},K_1,\dots,K_m))$ be a space of m-domains as defined above. There is a map $\varphi:X^m_{{\delta}}({\Omega}({\tau},K_1,\dots,K_m))\rightarrow T^m$ such that the following holds:
\
(i) $\mathcal{R}(\varphi({\Omega}'))$ is conformally equivalent to $\mathcal{R}({\Omega}')$ for all ${\Omega}'\in X^m_{{\delta}}({\Omega}({\tau},K_1,\dots,K_m))$, \
(ii) $\varphi\mid_{T^m\cap X^m_{{\delta}}({\Omega}({\tau},K_1,\dots,K_m))} = \mathrm{id}$,\
(iii) $\varphi$ is continuous with respect to $d_1$ and $d_2$. \ \end{lemma}
\begin{proof} We have already defined $\varphi$ and established $(i)$ and $(ii)$. To prove continuity we first choose a different normalization of the uniformizing maps. For each map $f:{\Omega}'\rightarrow\mathbb{C}$ as above, we compose with a linear map and assume that $f(0)=0, f'(0)=1$. \
Let ${\Omega}({\lambda},Y_1,...,Y_m)\in X^m_{{\delta}}({\Omega}({\tau},K_1,\dots,K_m))$ and let $f:{\Omega}({\lambda},Y_1,...,Y_m)\rightarrow\mathbb{C}$ be the corresponding map. Let $\{{\Omega}({\lambda}_j,Y_1^j,...,Y_m^j)\}\subset X^m_{{\delta}}({\Omega}({\tau},K_1,\dots,K_m))$ be a sequence such that ${\Omega}({\lambda}_j,Y_1^j,...,Y_m^j)\rightarrow {\Omega}({\lambda},Y_1,...,Y_m)$, and let $f_j:{\Omega}({\lambda}_j,Y_1^j,...,Y_m^j)\rightarrow\mathbb{C}$ be the corresponding maps for those domains. By abuse of notation we will let $f(Y_i)$ and $f_j(Y^j_i)$ denote complementary components of the images. Note that the sequence of domains ${\Omega}({\lambda}_j,Y_1^j,...,Y_m^j)$ has the domain ${\Omega}({\lambda},Y_1,...,Y_m)$ as its kernel and converges to it. We claim that $f_j\rightarrow f$ uniformly on compacts in ${\Omega}({\lambda},Y_1,...,Y_m)$, and that $f_j(Y_i^j)\rightarrow f(Y_i)$. This will prove the continuity of the map $\varphi$ defined above. That we chose a different normalization does not matter, since we will then also have that $\frac{f_j}{f_j(1)}\rightarrow\frac{f}{f(1)}$. \
To show that $f_j\rightarrow f$ it suffices to show that every subsequence of $f_j$ admits a subsequence converging to $f$. By assumption on the family $X^m_{{\delta}}({\Omega}({\tau},K_1,\dots,K_m))$ there exists a $t>0$
such that $\overline\triangle_t=\{{\zeta}\in\mathbb{C};|{\zeta}|\leq t\}\subset {\Omega}({\lambda}_j,Y_1^j,...,Y_m^j)$ for all $j$. Now let $t_0<t$ and consider the functions $h_j(z)=\frac{1}{f_j(z)}$ on $W^j_{t_0}={\Omega}({\lambda}_j,Y_1^j,...,Y_m^j)\setminus\overline{\triangle}_{t_0}$. By Koebe's $\frac{1}{4}$-Theorem we have that $f_j(\triangle_{t_0})\supset\triangle_{t_0/4}$, so by injectivity $|f_j|\geq\frac{t_0}{4}$ outside $\triangle_{t_0}$, and thus $h_j(W^j_{t_0})\subset\overline\triangle_{\frac{4}{t_0}}$ for all $j$; in particular the sequence $h_j$ is a normal family on $W_{t_0}={\Omega}({\lambda},Y_1,...,Y_m)\setminus\overline{\triangle}_{t_0}$. Passing to a subsequence we assume that $h_j\rightarrow h$. Now $h$ cannot be constantly zero, for this would mean that $f_j=\frac{1}{h_j}\rightarrow\infty$ uniformly on compacts. This would contradict the fact that $f_j'(0)=1$ for all $j$. But this means that the sequence $f_j$ converges to some function $g$ on $W_{t_0}$, hence we may assume that $f_j$ converges to $g$ on ${\Omega}({\lambda},Y_1,...,Y_m)$. Since $g'(0)=1$ we have that $g$ cannot be constant, and we conclude that $g$ maps ${\Omega}({\lambda},Y_1,...,Y_m)$ univalently onto some subset of $\mathbb{C}$. \
Since $f_j$ converges to $g$ we now have that for each $i$, the set $f_j(Y^j_i)$ is a bounded sequence of disks $\triangle_{r^j_i}(z^j_i)$ (some of these disks could be points). So by passing to a subsequence we may assume that each of the sequences of pairs $(z^j_i,r^j_i)$ converges to some pair $(z_i,r_i)$. We have that
\
$(B) \ f_j(z+m+n{\lambda}_j)=f_j(z)+m f_j(1)+n f_j({\lambda}_j)$
\
for all $j$ and for all $m,n\in\mathbb{Z}$. So if we let $Q_j$ be the set of disks in $\mathbb{C}$ generated by the set of disks $\triangle_{r^j_i}(z^j_i)$ and the lattice determined by $f_j(1)$ and $f_j({\lambda}_j)$, we get that $f_j({\Omega}({\lambda}_j,Y^j_1,...,Y^j_m))=\mathbb{C}\setminus Q_j$. \
From $(B)$ we now get that
\
$(C) \ g(z+m+n{\lambda})=g(z)+mg(1)+ng({\lambda})$
\
for all $m,n\in\mathbb{Z}$. \
We must have that $g(1)$ and $g({\lambda})$ are linearly independent over $\mathbb{R}$. To see this let $V$ be some open set in ${\Omega}({\lambda},K_1,...,K_m)$ containing the point ${\lambda}$. Then $g(V)$ contains an open set around $g({\lambda})$. Now for each $m,n\in\mathbb{Z}$ let $V_{m,n}$ denote the translated sets $V+m+n{\lambda}$. Then $g(V_{m,n})=g(V)+mg(1)+ng({\lambda})$, and if $g(1)$ and $g({\lambda})$ are linearly dependent over $\mathbb{R}$ then $g(V_{m,n})$ would intersect the straight line segment between $0$ and $g({\lambda})$ for infinitely many choices of $m,n\in\mathbb{Z}$. This would contradict the fact that $g$ is univalent. \
Let $Q$ now denote the circled subset of $\mathbb{C}$ generated by the disks $\triangle_{r_i}(z_i)$ and the lattice determined by $g(1)$ and $g({\lambda})$. Now $\mathbb{C}\setminus Q$ is the kernel for the sequence $\mathbb{C}\setminus Q_j$, and it follows from Theorem \ref{gz} that $g({\Omega}({\lambda},Y_1,...,Y_m))=\mathbb{C}\setminus Q$. But by (3) there is a unique function satisfying $g(0)=0, g'(0)=1$ that maps ${\Omega}({\lambda},Y_1,...,Y_m)$ onto a circled subset of $\mathbb{C}$ having a cluster point at infinity, so $g=f$. We conclude then that $f_j\rightarrow f$. \
Now from Theorem \ref{gz} we have that $f({\Omega}({\lambda},Y_1,...,Y_m))$ is the kernel for the sequence $f_j({\Omega}({\lambda},Y^j_1,...,Y^j_m))$ to which it converges. Since an arbitrary subsequence has the same kernel we have that each sequence of disks $f_j(Y^j_i)$ must converge to $f(Y_i)$, and this completes the proof. \end{proof}
Now let ${\Omega}({\tau},\bf{x}\rm)\in T^m$ be such that no boundary component intersects the point ${\tau}$, let $X_{\delta}^m({\Omega}({\tau},\bf{x}\rm))$ be a space as defined above, and choose ${\epsilon}>0$ such that $T^m_{\epsilon}({\tau},\bf{x}\rm)\subset X_{\delta}^m({\Omega}({\tau},\bf{x}\rm))$. Let $\varphi:X_{\delta}^m({\Omega}({\tau},\bf{x}\rm))\rightarrow T^m$ be the map from Lemma \ref{mmap}. We then have the following:
\begin{lemma}\label{lim} For every $\mu>0$ there exists a $\widehat{\delta}>0$ such that, if $$ \psi:T^m_{\epsilon}({\tau},\bf{x}\rm)\rightarrow X_{\delta}^m({\Omega}({\tau},\bf{x}\rm)) $$ is a map with $d_1(\psi({\Omega}({\lambda},\bf{y}\rm)),{\Omega}({\lambda},\bf{y}\rm))<\widehat{\delta}$ for all ${\Omega}({\lambda},\bf{y}\rm)\in T^m_{\epsilon}({\tau},\bf{x}\rm)$, then $$ d_2(\varphi\circ\psi({\Omega}({\lambda},\bf{y}\rm)),{\Omega}({\lambda},\bf{y}\rm))<\mu $$ for all ${\Omega}({\lambda},\bf{y}\rm)\in T^m_{\epsilon}({\tau},\bf{x}\rm)$. \end{lemma}
\begin{proof}
This follows from the facts that $\varphi|_{T^m\cap X_{\delta}^m({\Omega}({\tau},\bf{x}\rm))}=\mathrm{id}$, $\varphi$ is continuous, and $\overline{T^m_{\epsilon}({\tau},\bf{x}\rm)}$ is compact. \end{proof}
Theorem \ref{main} will follow from the previous lemmas and the following proposition. The proof of the proposition will be given in Sections 3 and 4.
\begin{proposition}\label{mainmap} Let ${\Omega}({\tau},\bf{x}\rm)\in T^m$ be such that no complementary component of ${\Omega}({\tau},\bf{x}\rm)$ intersects the point ${\tau}$, and such that no boundary component is a single point. Let $X^m_{\delta}({\Omega}({\tau},\bf{x}\rm))$ be a space as above. If ${\epsilon}>0$ is small enough, then for all $\widehat{\delta}>0$ there exists a map $\psi:T^m_{\epsilon}({\Omega}({\tau},\bf{x}\rm))\rightarrow X^m_{\delta}({\Omega}({\tau},\bf{x}\rm))$ such that the following holds:
\
(i) $\psi$ is continuous with respect to $d_1$ and $d_2$, \
(ii) $d_1({\Omega}({\lambda},\bf{y}\rm),\psi({\Omega}({\lambda},\bf{y}\rm)))<\widehat{\delta}$ for all ${\Omega}({\lambda},\bf{y}\rm)\in T^m_{\epsilon}({\Omega}({\tau},\bf{x}\rm))$, \
(iii) All $\mathcal{R}(\psi({\Omega}({\lambda},\bf{y}\rm)))$ embed properly into $\mathbb{C}^2$. \end{proposition}
\emph{Proof of Theorem} \ref{main}: \ Lift $U$ to the universal cover of $\mathbb{T}$ and write this lifting as an m-domain ${\Omega}({\lambda},K_1,...,K_m)$. By Lemma \ref{mmap}, ${\Omega}({\lambda},K_1,...,K_m)$ is biholomorphic to some circled m-domain ${\Omega}({\tau},\bf{x}\rm)\in T^m$ (see (1), (2), (a) and (b) in Section 2), so it is enough to prove the result for $\mathcal{R}({\Omega}({\tau},\bf{x}\rm))$. By a linear translation we may assume that no boundary component of ${\Omega}({\tau},\bf{x}\rm)$ intersects the point ${\tau}$, and no boundary component of ${\Omega}({\tau},\bf{x}\rm)$ can be a point, since no $K_i$ is a point. Let ${\epsilon}>0$ be in accordance with Proposition \ref{mainmap}. By Brouwer's fixed point theorem there exists a $\mu>0$ such that if $F:B_{\epsilon}({\tau},\bf{x}\rm)\rightarrow\mathbb{R}^{2+3m}$ is a continuous map satisfying $$
(*) \ \|F-id\|_{B_{\epsilon}({\tau},\bf{x}\rm)}<\mu, $$ then $$ (**) \ ({\tau},\bf{x}\rm)\in F(B_{\epsilon}({\tau},\bf{x}\rm)). $$ Choose $\widehat{\delta}>0$ depending on $\mu$ as in Lemma \ref{lim}, choose $\psi$ as in Proposition \ref{mainmap} depending on $\widehat{\delta}$, and consider the composition $$ F=\varphi\circ\psi $$ (regarded as a map from $B_{\epsilon}({\tau},\bf{x}\rm)$ into $\mathbb{R}^{2+3m}$). Then $F$ is a map satisfying $(*)$, so we have $(**)$. All circled m-domains corresponding to points in $F(B_{\epsilon}({\tau},\bf{x}\rm))$ embed properly into $\mathbb{C}^2$, so $\mathcal{R}({\Omega}({\tau},\bf{x}\rm))$ embeds properly into $\mathbb{C}^2$. $\square$ \
It is clear that we have proved the following formulation of Theorem 1, which we state for ease of reference in applications to embeddings with interpolation: \
\textbf{Theorem 1'.} \ Let $\mathbb{T}$ be a torus, and let $U\subset\mathbb{T}$ be a domain such that $\mathbb{T}\setminus U$ consists of a finite number of connected components, none of them being points. Then $U$ embeds onto a surface in $\mathbb{C}^2$ satisfying the conditions in Theorem 1 in \cite{wd3}.
\section{Perturbing surfaces in $\mathbb{C}^2$ and consequences for function algebras.}
Let $\mathcal{R}$ be an open Riemann surface, and let $U$ be an open subset of $\mathcal{R}$. We say that $U$ is Runge in $\mathcal{R}$ if every holomorphic function $f\in\mathcal{O}(U)$ can be approximated uniformly on compacts in $U$ by functions that are holomorphic on $\mathcal{R}$. If $\phi(\mathcal{R})$ is an embedded surface in $\mathbb{C}^2$ we will say that $\phi(\mathcal{R})$ is Runge (in $\mathbb{C}^2$) if all functions $f\in\mathcal{O}(\phi(\mathcal{R}))$ can be approximated uniformly on compacts in $\phi(\mathcal{R})$ by polynomials. \ Now let $M$ be a complex manifold and let $K\subset M$ be a compact subset of $M$. Recall the definition of the holomorphically convex hull of $K$ with respect to $M$: $$
\widehat{K}_M=\{x\in M;|f(x)|\leq\|f\|_K, \forall f\in\mathcal{O}(M)\}. $$ If $M=\mathbb{C}^n$ we simplify to $\widehat K=\widehat K_{\mathbb{C}^n}$, and we call $\widehat K$ the polynomially convex hull of $K$. If $K=\widehat K$ we say that $K$ is polynomially convex. \
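For instance, the polynomially convex hull of the unit circle $K=\{z\in\mathbb{C};|z|=1\}$ is the closed unit disk: by the maximum principle $|p(z)|\leq\|p\|_K$ for every polynomial $p$ and every $|z|\leq 1$, while for $|z_0|>1$ the polynomial $p(z)=z$ satisfies $|p(z_0)|>\|p\|_K$. In particular the circle itself is not polynomially convex. \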
For an open Riemann surface $\mathcal{R}$, and a compact set $K\subset\mathcal{R}$, we have that:
\
(1) $\widehat K_{\mathcal{R}}$ is the union of $K$ and all the relatively compact components of $\mathcal{R}\setminus K$, \
(2) An open subset $U$ of $\mathcal{R}$ is Runge if and only if $\widehat K_{\mathcal{R}}\subset U$ for all compact $K\subset U$.
\
These results can be found in \cite{bs}, \cite{ma}. \
We will need the following standard result: \begin{lemma}\label{runge} Let $U\subset\mathbb{C}^k$ be Runge and Stein, and let $X\subset U$ be an analytic set. For $M\subset\subset X$ we have that $$ \widehat M=\widehat M_{\mathcal{O}(U)}=\widehat M_{\mathcal{O}(X)}. $$
\end{lemma} \begin{proposition}\label{perturb} Let $\mathcal{R}$ be a closed Riemann surface, let $V\subset\mathcal{R}$ be a domain such that $\partial V$ is a collection of smooth Jordan curves, and let $$ \phi:V\rightarrow\mathbb{C}^2 $$ be an embedding, holomorphic across the boundary. Assume that $\phi(\overline V)$ is polynomially convex. Then for any finite set of distinct points $\{p_i\}_{i=1}^m\subset V$, there exist arbitrarily small open disks $D_i\subset V$ with $p_i\in D_i$, and a neighborhood ${\Omega}$ of $\phi(\overline{V}\setminus\cup_{i=1}^m D_i)$, such that for all ${\epsilon}>0$ there exists an injective holomorphic map $$ \xi:{\Omega}\rightarrow\mathbb{C}^2 $$ such that the following holds:
\
(i) $\|\xi-id\|_{\phi(\overline{V}\setminus\cup_{i=1}^m D_i)}<{\epsilon}$ \
(ii) $\xi\circ\phi(\overline{V}\setminus\cup_{i=1}^m D_i)$ is polynomially convex. \end{proposition} \begin{proof}
Let $W$ be a domain with $V\subset\subset W$ such that $\phi|_W$ is an embedding. Since $\phi(\overline V)$ is polynomially convex there is a Runge and Stein neighborhood basis $U_j$ of $\phi(\overline V)$ in $\mathbb{C}^2$. We may assume that $W_j:=\phi(W)\cap U_j$ is a closed submanifold of $U_j$ for all $j\in\mathbb{N}$, and that $\phi(V)$ is Runge in $W_j$. Let $x_i$ denote $\phi(p_i)$
for $i=1,...,m$, and let $Q=\{x_1,...,x_m\}$. \
Now let $\mathcal{N}$ denote the normal bundle of $W_1$. Since $\mathcal{N}$ is a line bundle and $W_1$ is a Riemann surface, we have that $\mathcal{N}\cong W_1\times\mathbb{C}$ (see for instance \cite{fs}, p.229). For some large enough $j\in\mathbb{N}$ we have that $U_j$ embeds into $\mathcal{N}$ with $W_j$ as the zero section, i.e. there is an injective holomorphic map $$ (*) \ F:U_j\rightarrow W_j\times\mathbb{C} $$ such that $F(x)=(x,0)$ for all $x\in W_j$. We might as well assume that this is true for $j=1$ (for a reference to these claims see \cite{gr} pages 255-258 and Remark \ref{outline} below).\
Let $f\in\mathcal{O}(\phi(W))$ with $f(x)=0$ for $x\in Q$, and $f(x)\neq 0$ for $x\notin Q$ (see for instance \cite{fs}). For any ${\delta}>0$ we let $$ \psi_{\delta}: (W_1\setminus Q)\times\mathbb{C}\rightarrow (W_1\setminus Q)\times\mathbb{C} $$ be the biholomorphic map defined by $\psi_{\delta}(x,{\lambda})=(x,{\lambda}+\frac{{\delta}}{f(x)})$. Then $\psi_{\delta}(F(W_1\setminus Q))$ is a closed submanifold of $W_1\times\mathbb{C}$ for all choices of ${\delta}$, and we get that $W_1^{\delta}:=F^{-1}(\psi_{\delta}(F(W_1\setminus Q)))$ is a closed submanifold of $U_1$. \
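We note why $\psi_{\delta}(F(W_1\setminus Q))$ is closed in $W_1\times\mathbb{C}$: as $x$ approaches a point of $Q$ we have $f(x)\rightarrow 0$, hence $|\frac{{\delta}}{f(x)}|\rightarrow\infty$, so the shifted zero section $\{(x,\frac{{\delta}}{f(x)});x\in W_1\setminus Q\}$ leaves every compact subset of $W_1\times\mathbb{C}$ near $Q$. \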
Let ${\Omega}_j$ be a neighborhood basis of $\phi(\overline{V}\setminus(\cup_{i=1}^m D_i))$ in $\mathbb{C}^2$. If $j$ is large enough and ${\delta}$ is small enough we have that $$ G_{\delta}:=F^{-1}\circ\psi_{\delta}\circ F:{\Omega}_j\rightarrow U_1 $$ is an injective holomorphic map. Moreover we have that $G_{\delta}(\phi(\overline{V}\setminus(\cup_{i=1}^m D_i)))$ is holomorphically convex in $W_1^{\delta}$. Put ${\Omega}:={\Omega}_j$, $\xi:=G_{\delta}$, and the result follows by Lemma \ref{runge}. \end{proof} \begin{remark}\label{outline} We outline a simple proof of the existence of the map $(*)$ in our setting: Let $g\in\mathcal{O}(U_1)$ be a defining function for $W_1$, and let $\bigtriangledown g(x)$ denote the gradient of $g$. Such a function exists since Cousin's second problem has a solution in this setting. Define a map $$ H:W_1\times\mathbb{C}\rightarrow\mathbb{C}^2 $$ by $H(x,{\lambda})=x +{\lambda}\cdot\bigtriangledown g(x)$. It is seen that $H$ is injective near $W_1\times\{0\}$, and we may let $F=H^{-1}$ on $U_j$ if $j$ is large enough. \end{remark} \emph{Proof of Theorem \ref{algmain}:} Let $\{K_j\}$ be a holomorphically convex exhaustion of $V$ such that $U\setminus K_j$ has finitely many complementary components for each $j\in\mathbb{N}$. We will repeatedly use Proposition \ref{perturb} to construct an embedding $\phi$ of $V$ into $\mathbb{C}^2$ such that each $\phi(K_i)$ is polynomially convex; this will prove the theorem, since polynomials are then dense in $\mathcal{O}(\phi(V))$, so that the two coordinate functions generate $\mathcal{O}(V)$.\
Assume that we are in the following situation which we call $S_i$:\
We have found a domain $V_i\subset\mathcal{R}$ such that $V\subset\subset V_i$, with $K_i$ holomorphically convex in $V_i$, and an embedding $\phi_i:V_i\rightarrow\mathbb{C}^2$ such that the conditions in Proposition \ref{perturb} are satisfied for the pair $(V_i,\phi_i)$. In particular we have that $\phi_i(K_i)$ is polynomially convex. \
We will show that we can use Proposition \ref{perturb} to pass to situation $S_{i+1}$. \
Let $T_1,...,T_k$ denote the connected components of $V_i\setminus K_{i+1}$. If no $T_j$ is relatively compact in $V_i$, then $K_{i+1}$ is holomorphically convex in $V_i$ and we define $V_{i+1}:=V_i, \phi_{i+1}:=\phi_i$. Assume on the other hand that $T_{i_1},...,T_{i_s}$ are relatively compact in $V_i$. By assumption, and since $K_{i+1}$ is holomorphically convex in $V$, we may find points $p_j\in T_{i_j}$ such that $p_j\in (U\setminus V)^\circ$. Hence there are disks $D_j\subset V_i\setminus V$ such that $p_j\in D_j$. Define $V_{i+1}=V_i\setminus\cup_{j=1}^s D_j$; Proposition \ref{perturb} then furnishes the map $\phi_{i+1}$, and we are in $S_{i+1}$. \
We may now use this procedure to construct an appropriate embedding of $V$ into $\mathbb{C}^2$. Let $V_1$ be a smoothly bounded domain in $\mathcal{R}$, homeomorphic to $U$ with $U\subset\subset V_1$, and such that $\phi$ is defined on $V_1$. Assume that $K_1$ is a point and define $\phi_1:=\phi$. Notice that for each step, when passing from $S_i$ to $S_{i+1}$, we may choose any ${\delta}_i>0$ and make sure that $\|\phi_{i+1}-\phi_i\|_{K_{i+1}}<{\delta}_i$. Therefore we may choose a sequence $\{\phi_i\}$ such that $$ \phi:=\lim_{i\rightarrow\infty}\phi_i $$ exists on $V$ and is an embedding. Moreover, since $\phi_i(K_i)$ is polynomially convex for each $i\in\mathbb{N}$, and since $\phi(K_{i+1})$ can be made an arbitrarily small perturbation of $\phi_i(K_{i+1})$, we may assume that each $\phi(K_i)$ is polynomially convex. The result follows. $
\square$ \
\emph{Proof of Theorem \ref{alg}:} Write $V$ for the domain $U$ in the statement of the theorem, to match the notation of Theorem \ref{algmain}. Let $T$ be a connected component of $\mathbb{T}\setminus V$, and let $p\in T$ be an interior point. Then $\mathbb{T}\setminus\{p\}$ embeds as a closed submanifold of $\mathbb{C}^2$ by some map $\phi$. Let $D$ be a smoothly bounded disk such that $D\subset\subset T$, and define $U=\mathbb{T}\setminus D$. The collection $(U,\phi,V)$ satisfies the conditions in Theorem \ref{algmain}.$
\square$
\section{Continuous perturbation of families of Riemann surfaces - proof of Proposition \ref{mainmap}}
Briefly, the idea behind the proof of Proposition \ref{mainmap} is the following: Start with the space $T^m_{\epsilon}({\Omega}({\tau},\bf{x}\rm))$ and consider Theorem \ref{old} below. In effect we showed in \cite{wd3} that for each fixed ${\Omega}({\lambda},\bf{y}\rm)\in T^m_{\epsilon}({\Omega}({\tau},\bf{x}\rm))$ there exists an arbitrarily small perturbation $U_{({\lambda},\bf{y}\rm)}$ of ${\Omega}({\lambda},\bf{y}\rm)$ such that $U_{({\lambda},\bf{y}\rm)}$ embeds onto a surface in $\mathbb{C}^2$ satisfying the conditions in Theorem \ref{old}; that is, $U_{({\lambda},\bf{y}\rm)}$ embeds properly into $\mathbb{C}^2$. Suppose that we could make sure that the perturbed $m$-domains vary continuously with the parameter $({\lambda},\bf{y}\rm)$ (with respect to the metric defined in Section 2). Then the correspondence ${\Omega}({\lambda},\bf{y}\rm)\mapsto U_{({\lambda},\bf{y}\rm)}$ defines a continuous map $\psi:T^m_{\epsilon}({\Omega}({\tau},\bf{x}\rm))\rightarrow X^m_{\delta}({\Omega}({\tau},\bf{x}\rm))$, and all the image domains embed properly into $\mathbb{C}^2$. If $\psi$ could be made arbitrarily close to the identity then Proposition \ref{mainmap} would follow from Lemma \ref{lim}. This is indeed what we will prove. \
The following theorem is approximately the same as Theorem 1 in \cite{wd3}. The difference is that Theorem 1 was formulated for surfaces with smooth boundaries, whereas the following is formulated for surfaces with piecewise smooth boundaries. The difference in the proof however is not significant. \begin{theorem}\label{old} Let $M\subset\mathbb{C}^2$ be a Riemann surface whose boundary components are piecewise smooth Jordan curves $\partial_1,...,\partial_m$. Assume that there are points $p_i\in\partial_i$ such that $$ \pi_1^{-1}(\pi_1(p_i))\cap\overline M=\{p_i\}. $$ Assume that each boundary component $\partial_i$ is smooth near $p_i$, and that all points $p_{i}$ are regular points of the projection $\pi_{1}$. Then $M$ can be properly holomorphically embedded into $\mathbb{C}^2$. \end{theorem} As outlined above we want to embed families of $m$-domains onto surfaces satisfying the conditions in this theorem. It seems worthwhile, however, to formulate a result for closed Riemann surfaces in general: Fix an integer $g\geq 0$. Let $B_{\epsilon}$ denote a ball of radius $r={\epsilon}$ in some $\mathbb{R}^N$ and let $X$ be a smooth manifold with a projection $\pi:X\rightarrow B_{\epsilon}$ such that $X_y:=\pi^{-1}(y)$ is a closed Riemann surface of genus $g$ for each $y\in B_{\epsilon}$ $-$ the complex structure on each fibre $X_y$ being specified by the parameter $y$. Let $d:X\times X\rightarrow\mathbb{R}^+$ be a smooth metric on $X$ that induces the topology. \
For $i=1,...,m$ let $f_i:B_{\epsilon}\times\overline\triangle\rightarrow X$ be a smooth embedding such that $f_i(\{y\}\times\overline\triangle)\subset X_y$ for each $y\in B_{\epsilon}$, and such that the images $f_i(B_{\epsilon}\times\overline\triangle)$ are pairwise disjoint. Let $Y:=X\setminus\cup_{j=1}^m f_j(B_{\epsilon}\times\overline\triangle)$. Then $Y$ is a submanifold of $X$ and each fiber $Y_y\subset X_y$ is an open Riemann surface (specifically a closed Riemann surface of genus $g$ with $m$ disks removed). For $0<{\delta}<1$ let $Y^{\delta}$ denote $X\setminus\cup_{j=1}^m f_j(B_{\epsilon}\times\overline\triangle_{1-{\delta}})$. \begin{proposition}\label{conper} Let $F:Y^{\delta}\rightarrow B_{\epsilon}\times\mathbb{C}^2$ be a smooth map such that $F(y,\cdot):Y^{\delta}_y\rightarrow\{y\}\times\mathbb{C}^2$ is a holomorphic embedding for each $y\in B_{\epsilon}$. Assume that $F(y,\overline{Y_y})$ is polynomially convex in each fiber $\{y\}\times\mathbb{C}^2$. \
Then, after possibly decreasing ${\epsilon}$, for every $\widehat{\delta}>0$ there exist a family of domains $U_y\subset X_y$, $y\in B_{\epsilon}$, and a smooth map $G:\cup_{y\in B_{\epsilon}}\{y\}\times\overline U_y\rightarrow B_{\epsilon}\times\mathbb{C}^2$ such that the following hold for all $y\in B_{\epsilon}$:
\
(i) \ $U_y$ is homeomorphic to $Y_y$, \
(ii) \ $Y_y\subset U_y\subset Y^{\widehat{\delta}}_y$, \
(iii) \ $d_H(U_{y_j},U_y)\rightarrow 0$ for all $y_j\rightarrow y, \ y_j\in B_{\epsilon}$, \
(iv) \ $G(y,\cdot)$ is a holomorphic embedding of $U_y$ into $\{y\}\times\mathbb{C}^2$, \
(v) \ $G(y,\overline U_y)$ satisfies the conditions in Theorem \ref{old} when regarded as an embedded Riemann surface in the fiber $\{y\}\times\mathbb{C}^2$.
\end{proposition} \begin{proof} We will prove the result in the case that each fiber $Y_y$ is a closed Riemann surface with a single disk removed. We will make some comments along the way as regards the general case, which is essentially the same. \
We may assume that $F(\overline{Y^{\delta}})\subset B_{\epsilon}\times\triangle\times\mathbb{C}$. For any $0<r<\widehat{\delta}$ let $s_r\subset\overline\triangle$ denote the curve $s_r:=\{z\in\mathbb{C};\mathrm{Im}(z)=0,-1\leq\mathrm{Re}(z)\leq -1+r\}$, and let $S_r\subset B_{\epsilon}\times\overline\triangle$ denote the manifold $S_r:=\cup_{y\in B_{\epsilon}} \{y\}\times s_r$. Then $f_1(S_r)\subset X$ is a smooth manifold attached to the boundary of $Y$ with $f_1(S_r)\subset Y^{\widehat{\delta}}\setminus Y$. In each fiber $Y^{\delta}_y$ we have that $c_y:=f_1(S_r)\cap Y^{\delta}_y$ is a smooth curve attached to the Riemann surface $Y_y$. \
Let $H$ denote the composition $F\circ f_1$, and let $E_r$
denote $H(S_r)$. Then $E_r$ is a submanifold of $B_{\epsilon}\times\mathbb{C}^2$, and each fiber slice $\gamma_y:=E_r\cap(\{y\}\times\mathbb{C}^2)$ is a smooth curve attached to the embedded Riemann surface $F(Y_y)$. \
Let us first concentrate on some fiber over $y\in B_{\epsilon}$ and explain how we can modify $F|_{Y_y^{\delta}}$ to get all claims in the proposition, except of course $(iii)$, for that particular fiber. The idea is the following: We find a neighborhood $W_y$ of $F(Y_y)\cup\gamma_y$ in $\{y\}\times\mathbb{C}^2$ and an injective holomorphic map $\psi_y:W_y\rightarrow\{y\}\times\mathbb{C}^2$ such that $\psi_y$ is close to the identity on $F(Y_y)$ and such that $\psi_y$ stretches the curve $\gamma_y$ so that $\psi_y(\gamma_y)$ intersects the cylinder $\{y\}\times\partial\triangle\times\mathbb{C}$ transversally and at a single point. For a small $\mu>0$ let $V_y^\mu$ denote the $\mu$-neighborhood $$ (*) \ V_y^\mu:=\{x\in Y_y^{\delta};d(x,Y_y\cup c_y)<\mu\} $$ of $Y_y\cup c_y$ in $Y_y^{\delta}$. We find a pair $(G_y,U_y)$ as in the proposition by defining $G_y:=\psi_y\circ F$ and then $$ (**) \ U_y:=G_y^{-1}(G_y(V_y^\mu)\cap(\{y\}\times\triangle\times\mathbb{C})). $$ (Meaning that $U_y$ is the connected component of the pullback that contains $Y_y$). In the general case we attach disjoint curves in a similar manner, one for each boundary component, and stretch each curve. \
In more detail, we carry out the construction (still focusing on a particular fiber) as follows: Let $m_y$ be a smoothly embedded curve $m_y:[0,1]\rightarrow\{y\}\times\mathbb{C}^2$ such that
\
(i) \ $m_y\cap F(Y^{{\delta}}_y\setminus Y_y)\supset\gamma_y$, \
(ii) \ $(m_y\setminus\gamma_y)\cap F(\overline Y_y)=\emptyset$, \
(iii) \ The intersection $\gamma_y\cap(\{y\}\times\partial\triangle\times\mathbb{C})$ consists of a single point (which is not the end point), and the intersection is transversal.
\
Let $x_0\in (0,1)$ and let $g:[0,\infty)\times [0,1]\rightarrow [0,1]$ be an isotopy of diffeomorphisms such that
\
(a) \ $g(t,x)=x$ for all $x\in [0,x_0], t\in [0,\infty)$, \
(b) \ $\mathrm{lim}_{t\rightarrow\infty} g(t,x)=1$ for all $x>x_0$.
\
Define an isotopy $\phi_y:[0,\infty)\times m_y\rightarrow m_y$ by $\phi_y(t,x):=m_y\circ g(t,m_y^{-1}(x))$. If $N_y$ is a small neighborhood of $F(Y_y)$ in $\{y\}\times\mathbb{C}^2$ we may define an isotopy of diffeomorphisms $\xi_y:[0,\infty)\times (N_y\cup m_y)\rightarrow N_y\cup m_y$ by $$
\xi_y|_{N_y}:=\mathrm{Id}, \ \xi_y(t,x):=\phi_y(t,x) \ \mathrm{for} \ x\in m_y. $$ We will argue in a moment that for arbitrarily small $x_0$ and arbitrarily large $t_0$ there is a neighborhood $W_y$ of $F(Y_y)\cup m_y$ in $\{y\}\times\mathbb{C}^2$ such that we can approximate the map $\xi_y(t_0,\cdot)$ well in the $\mathcal{C}^1$-norm on $F(Y_y)\cup m_y$ by an injective holomorphic map $$ \psi_y:W_y\rightarrow\{y\}\times\mathbb{C}^2. $$ Granted the existence of this approximation, this proves, by the constructions $(*)$ and $(**)$ above, the result (except $(iii)$) for any particular fiber $Y_y$. \
To get $(iii)$ we carry out this construction simultaneously for all fibers. After possibly decreasing ${\epsilon}$ we see that we can find a smooth submanifold $M$ of $B_{\epsilon}\times\mathbb{C}^2$ such that in each fiber we have that $m_y:=M\cap(\{y\}\times\mathbb{C}^2)$ is a smooth curve satisfying $(i)-(iii)$ above. Let $D:B_{\epsilon}\times [0,1]\rightarrow M$ be a diffeomorphism. In the general case we attach several disjoint smooth manifolds, one for each boundary component. For dimension reasons this does not cause a problem. \
Let $\varphi:[0,\infty)\times B_{\epsilon}\times [0,1]\rightarrow B_{\epsilon}\times [0,1]$ be the isotopy $\varphi(t,y,x)=(y,g(t,x))$, and let $\phi:[0,\infty)\times M\rightarrow M$ be the isotopy $\phi=D\circ\varphi\circ D^{-1}$. \
Now regard $B_{\epsilon}({\tau},\bf{x}\rm)$ as the real ${\epsilon}$-ball contained in $\mathbb{C}^N$, and let $\mathcal{N}$ be a small neighborhood of $F(Y)$ in $\mathbb{C}^N\times\mathbb{C}^2$. Define $\xi:[0,\infty)\times(\mathcal{N}\cup M)\rightarrow \mathcal{N}\cup M$ by $$ \xi(t,x):=x \ \mathrm{for} \ x\in\mathcal{N}, \xi(t,x):=\phi(t,x) \ \mathrm{for} \ x\in M. $$ Since each $F(Y_y)$ is polynomially convex in the fiber over $\{y\}$ it follows by \cite{Sb} that each $F(Y_y)\cup m_y$ is polynomially convex in the fiber. And so since $B_{\epsilon}({\tau},\bf{x}\rm)\subset\mathbb{C}^N$ is totally real it follows that $F(Y)\cup M$ is polynomially convex in $\mathbb{C}^N\times\mathbb{C}^2$. \
By \cite{fl} we then have that for any fixed $t_0$ and $x_0$ there is a neighborhood $W$ of $F(Y)\cup M$ such that $\xi(t_0,\cdot)$ can be approximated arbitrarily well, in the $\mathcal{C}^1$-norm, by an injective holomorphic map $\psi:W \rightarrow B_{\epsilon}\times\mathbb{C}^2$ preserving fibers. \
Define $G:=\psi\circ F$, choose a small $\mu>0$ and define domains $U_y$ as in $(*)$ and $(**)$ above. If $\mu$ is small enough then $(iii)$ follows by transversality.
\end{proof}
To prove Proposition \ref{mainmap} then, we have to construct manifolds $X,Y$ and $Y^{\delta}$ as above with subsets of tori as fibers, construct a suitable map $F$, and then apply Proposition \ref{conper}. \
Recall the Weierstrass $\wp$-function (depending on ${\lambda}$, and denoted here by $\varrho_{\lambda}$): $$ \varrho_{\lambda}(z)=\frac{1}{z^2}+\sum_{(m,n)\in\mathbb{Z}^2\setminus (0,0)}\Big[\frac{1}{(z-(m+n\cdot{\lambda}))^2}- \frac{1}{(m+n\cdot{\lambda})^2}\Big]. $$ This is a meromorphic function in $z$ respecting the relation $\sim_{\lambda}$. Fix a 1-domain ${\Omega}({\tau},0,r)$, an ${\epsilon}>0$, and let $W_{\epsilon}:=\cup_{{\lambda}\in\triangle_{\epsilon}({\tau})}\{{\lambda}\}\times{\Omega}({\lambda},0,r)$. If ${\epsilon}>0$ is small enough and $p$ is close to the origin we may define a map $$ \widehat\phi_p({\lambda},z)=(\varrho_{\lambda}(z-p),\varrho_{\lambda}(z)), $$ from $W_{\epsilon}$ into $\mathbb{C}^2$.
\begin{lemma}\label{jac} For sufficiently small ${\epsilon}$ and $p$ we have that $\widehat\phi_p$ is holomorphic in the variables $({\lambda},z)$. For each fixed ${\lambda}$ we have that $\widehat\phi_p({\lambda},\cdot)$ embeds $\mathcal{R}({\lambda},{\Omega}({\lambda},0,r))$ into $\mathbb{C}^2$. \end{lemma} \begin{proof} If ${\epsilon}$ and $p$ are chosen small enough we have that $\widehat\phi_{p}({\lambda},z)$ is holomorphic in the $z$-variable for all fixed ${\lambda}\in\triangle_{{\epsilon}}({\tau})$. To prove that $\widehat\phi_p$ is holomorphic in both variables we inspect the standard proof of the fact that $\varrho_{{\lambda}}(z)$ converges as a function in the $z$-variable. \
Following Ahlfors \cite{ah} we have, for $2|z|\leq |m+n{\tau}|$, that $$
\Big|\frac{1}{(z-(m+n{\tau}))^2}-\frac{1}{(m+n{\tau})^2}\Big| \leq\frac{10|z|}{|m+n{\tau}|^3}. $$ So to prove that $\varrho_{{\tau}}(z)$ converges it is enough to prove that $$
\sum_{(m,n)\in\mathbb{Z}^2\setminus (0,0)}\frac{1}{|m+n{\tau}|^3} $$ converges. This in turn is proved by observing that there exists a positive constant $K$ such that $$
|m + n{\tau}|\geq K(|m|+ |n|) $$ for all $(m,n)\in\mathbb{Z}^2\setminus (0,0)$, and then obtaining the estimate $$
(*) \ \sum_{(m,n)\in\mathbb{Z}^2\setminus (0,0)}\frac{1}{|m+n{\tau}|^3}\leq 4K^{-3}\sum_{n=1}^{\infty}\frac{1}{n^2}<\infty. $$ But $K$ may be chosen such that $$
|m+n{\lambda}|\geq K(|m|+|n|) $$ for all ${\lambda}$ close to ${\tau}$, so the inequality $(*)$ holds uniformly as ${\lambda}$ varies near ${\tau}$. This shows that the sum defining $\varrho_{{\lambda}}(z)$ converges uniformly on compacts in $W_{{\epsilon}}$ in the variables $({\lambda},z)$. And if the shift determined by $p$ is small enough we have that $\widehat\phi_{p}$ is holomorphic on $W_{{\epsilon}}$. \
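The comparison $(*)$ can be illustrated numerically. The sketch below is a small illustration added here, not part of the paper; it uses the illustrative modulus ${\tau}=i$, for which $K=1/\sqrt{2}$ works in the inequality above, and compares a partial lattice sum with the majorant $4K^{-3}\sum 1/n^2 = 4K^{-3}\pi^2/6$:

```python
import math

tau = 1j                 # illustrative choice of modulus (assumption: tau = i)
K = 1 / math.sqrt(2)     # |m + n*i| = sqrt(m^2+n^2) >= (|m|+|n|)/sqrt(2)

# pointwise inequality |m + n*tau| >= K*(|m|+|n|) on a window of the lattice
ok = all(abs(m + n * tau) >= K * (abs(m) + abs(n)) - 1e-12
         for m in range(-40, 41) for n in range(-40, 41) if (m, n) != (0, 0))

# partial lattice sum versus the majorant 4*K^{-3}*sum_{n>=1} 1/n^2
partial = sum(1 / abs(m + n * tau) ** 3
              for m in range(-200, 201) for n in range(-200, 201)
              if (m, n) != (0, 0))
majorant = 4 * K ** -3 * math.pi ** 2 / 6
```

The majorant arises because there are $4s$ lattice points with $|m|+|n|=s$, so the bounded sum telescopes to $4K^{-3}\sum_s s^{-2}$.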
In \cite{wd3} we demonstrated that the map $z\mapsto (\varrho_{\lambda}(z-p),\varrho_{\lambda}(z))$ is an embedding provided that $2p$ is not contained in the lattice determined by ${\lambda}$. So all $\widehat\phi_p({\lambda},\cdot)$ are fiberwise embeddings as long as ${\epsilon}$ is small and $p$ is chosen close to the origin. \ \end{proof}
Let us now construct manifolds $X,Y$ and $Y^{\delta}$ as above. Fix an $m$-domain ${\Omega}({\tau},\bf{x}\rm)$ and let ${\epsilon}>0$. We define $X:=\cup_{({\lambda},\bf{y}\rm)\in B_{\epsilon}({\tau},\bf{x}\rm)}\{({\lambda},\bf{y}\rm)\}\times\mathcal{R}({\Omega}({\lambda}))$ and we let $\pi:X\rightarrow B_{\epsilon}({\tau},\bf{x}\rm)$ be the obvious projection. Let $q:B_{\epsilon}({\tau},\bf{x}\rm)\times\mathbb{C}\rightarrow X$ be the map defined by the standard quotient map on each fiber - $q({\lambda},\bf{y}\rm,{\zeta})=({\lambda},\bf{y}\rm,[{\zeta}])$ where $[{\zeta}]$ denotes the equivalence class of ${\zeta}$ in $\mathbb{C}/\sim_{\lambda}$. Then $q$ induces a differentiable structure on $X$ such that each fiber $X_{({\lambda},\bf{y}\rm)}$ is a closed Riemann surface which we equip with the complex structure corresponding to ${\lambda}$. Let $m:X\times X\rightarrow\mathbb{R}^+$ be a smooth metric that induces the topology. \
Next let $V_{\epsilon}=\cup_{({\lambda},\bf{y}\rm)\in B_{\epsilon}({\tau},\bf{x}\rm)}\{({\lambda},\bf{y}\rm)\}\times{\Omega}({\lambda},\bf{y}\rm)$. Then $Y:=q(V_{\epsilon})\subset X$ is a submanifold of $X$ as above. This is seen by defining $g_i:B_{\epsilon}({\tau},\bf{x}\rm)\times\overline\triangle\rightarrow B_{\epsilon}({\tau},\bf{x}\rm)\times\mathbb{C}$ by $g_i({\lambda},\bf{y}\rm,t)=({\lambda},\bf{y}\rm,z_i+t\cdot r_i)$ and $f_i=q\circ g_i$. \
To construct the map $F:Y^{\delta}\rightarrow B_{\epsilon}\times\mathbb{C}^2$ we first let $V^{\delta}_{\epsilon}$ denote the set $q^{-1}(Y^{\delta})$, and define a map $$ \phi_p:V^{\delta}_{\epsilon}\rightarrow B_{\epsilon}({\tau},\bf{x}\rm)\times\mathbb{C}^2 $$ by $\phi_p({\lambda},\bf{y}\rm,{\zeta})=({\lambda},\bf{y}\rm,\widehat\phi_p({\lambda},{\zeta}-z_1))$ (here $z_1$ is a component of the fixed point $({\tau},\bf{x}\rm)$ and not a variable). This is a well defined mapping if ${\epsilon}$ and $p$ are small enough. Now define a map $$ \Phi:Y^{\delta}\rightarrow B_{\epsilon}({\tau},\bf{x}\rm)\times\mathbb{C}^2 $$
by $\Phi(x)=\phi_p(q^{-1}(x))$ for $x\in Y^{\delta}$. This is well defined because $\phi_p$ respects the relation $\sim_{\lambda}$ on fibers, and it follows from Lemma \ref{jac} that $\Phi$ is a smooth mapping such that $\Phi|_{X_y}$ is an embedding for each fiber $X_y$. In the following proof of Proposition \ref{mainmap} we use $\Phi$ to construct $F$: \
\emph{Proof of Proposition \ref{mainmap}:} Let $X,Y,Y^{\delta}$ and $\Phi$ be as just defined. By Proposition \ref{perturb} there is an open set $U\subset\mathbb{C}^2$ and an injective holomorphic map $\xi:U\rightarrow\mathbb{C}^2$ such that $\Phi(Y^{\delta}_{({\tau},\bf{x}\rm)})\subset\{({\tau},\bf{x}\rm)\}\times U$, and such that $\xi\circ\Phi(Y_{({\tau},\bf{x}\rm)})$ is polynomially convex in the fiber $({\tau},\bf{x}\rm)\times\mathbb{C}^2$. Define $$ \Psi: B_{\epsilon}({\tau},\bf{x}\rm)\times U\rightarrow B_{\epsilon}({\tau},\bf{x}\rm)\times\mathbb{C}^2 $$ by $\Psi({\lambda},\bf{y}\rm,w_1,w_2)=({\lambda},\bf{y}\rm,\xi(w_1,w_2))$.\
If ${\epsilon}$ is small enough we have that $\Psi\circ\Phi(Y_{({\lambda},\bf{y}\rm)})$ is polynomially convex in the fiber $({\lambda},\bf{y}\rm)\times\mathbb{C}^2$ for all $({\lambda},\bf{y}\rm)$. To see this choose a Runge and Stein domain $N\subset\mathbb{C}^2$ such that $\Psi\circ\Phi(Y_{({\tau},\bf{x}\rm)})\subset\{({\tau},\bf{x}\rm)\}\times N$ and $\Psi\circ\Phi(Y_{({\tau},\bf{x}\rm)}^{\delta})\cap\{({\tau},\bf{x}\rm)\}\times N\subset\subset\Psi\circ\Phi(Y_{({\tau},\bf{x}\rm)}^{\delta})$. If ${\epsilon}$ is small then $\Psi\circ\Phi(Y_{({\lambda},\bf{y}\rm)}^{\delta})\cap\{({\lambda},\bf{y}\rm)\}\times N\subset\subset\Psi\circ\Phi(Y_{({\lambda},\bf{y}\rm)}^{\delta})$ for all $({\lambda},\bf{y}\rm)\in B_{\epsilon}({\tau},\bf{x}\rm)$, i.e. $\Psi\circ\Phi(Y_{({\lambda},\bf{y}\rm)}^{\delta})\cap\{({\lambda},\bf{y}\rm)\}\times N$ is a closed submanifold of $\{({\lambda},\bf{y}\rm)\}\times N$. So if ${\epsilon}$ is small the claim follows from Lemma 3.\
Define $F=\Psi\circ\Phi$; then the pair $(Y^{\delta},F)$ satisfies the conditions in Proposition \ref{conper}. Let $G$ be as in Proposition \ref{conper} and define $\psi({\Omega}({\lambda},\bf{y}\rm))$ to be the $m$-domain corresponding to $U_{({\lambda},\bf{y}\rm)}$. Now $(i)-(v)$ guarantees that the conclusions of Proposition \ref{mainmap} are satisfied. $\square$
\end{document}
\begin{document}
\title{\null\vspace*{-1truecm}
\mbox{\small IISc-CHEP-8/04}\\
\vspace*{-0.2truecm}
\mbox{\tt\small quant-ph/0405128}\\ \vspace*{0.5truecm}Quantum Random Walks do not need a Coin Toss} \author{Apoorva Patel} \email{adpatel@cts.iisc.ernet.in} \altaffiliation[Also at ]{Supercomputer Education and Research Centre,
Indian Institute of Science, Bangalore-560012, India.} \author{K.S. Raghunathan} \email{ksraghu@yahoo.com} \author{Pranaw Rungta} \email{pranaw@cts.iisc.ernet.in} \affiliation{Centre for High Energy Physics, Indian Institute of Science,
Bangalore-560012, India} \date{\today}
\begin{abstract} \noindent Classical randomized algorithms use a coin toss instruction to explore different evolutionary branches of a problem. Quantum algorithms, on the other hand, can explore multiple evolutionary branches by mere superposition of states. Discrete quantum random walks, studied in the literature, have nonetheless used both superposition and a quantum coin toss instruction. This is not necessary, and a discrete quantum random walk without a quantum coin toss instruction is defined and analyzed here. Our construction eliminates quantum entanglement from the algorithm, and the results match those obtained with a quantum coin toss instruction. \end{abstract} \pacs{PACS: 03.67.Lx} \maketitle
\section{Motivation}
Random walks are a fundamental ingredient of non-deterministic algorithms \cite{motwani}, and are used to tackle a wide variety of problems---from graph structures to Monte Carlo samplings. Such algorithms have many branches, which are explored probabilistically, to estimate the correct result. A classical computer can explore only one branch at a time, so typically the algorithm is executed several times, and the estimate of the final result is extracted from the ensemble of individual executions by methods of probability theory. To ensure that different branches are explored in different executions, one needs non-deterministic instructions, and they are provided in the form of random numbers. A coin toss is the simplest example of a random number generator, and it can be included as an instruction for a probabilistic Turing machine.
A quantum computer can explore multiple branches in a different manner, i.e. by using a superposition of states. The probabilistic result can then be arrived at by interference of amplitudes corresponding to different branches. Thus as long as the means to construct a variety of superposed states exist, there is no a priori reason to include a coin toss as an instruction for a (probabilistic) quantum Turing machine. This is obvious enough, and indeed continuous time quantum random walks have been studied without recourse to a coin toss instruction \cite{farhi}. Nevertheless, a coin toss instruction has been considered necessary in construction of discrete time quantum random walks (see for instance, Refs.\cite{kempe,ambainis}). In this article, we demonstrate that this is a misconception arising out of unnecessarily restrictive assumptions. We explicitly construct a quantum random walk on a line without using a coin toss instruction, and analyze its properties.
There also exists confusion in the literature about different scaling behavior of discrete and continuous time quantum random walk algorithms (see again, Refs.\cite{kempe,ambainis}), because the former have been constructed using a coin toss instruction while the latter do not contain a coin toss instruction. Our work eliminates this confusion in the sense that scaling behavior of discrete and continuous time quantum random walk algorithms, both constructed without a coin toss instruction, would coincide. Thereafter, a quantum coin would be an additional resource; if its inclusion can improve scaling behavior of some quantum algorithms, that should not be a surprise.
\section{Quantum Random Walk on a Line}
A random walk is a diffusion process, which is generated by the Laplacian operator in the continuum. To construct a discrete quantum walk, we must discretize this process using evolution operators that are both unitary and ultra-local (an ultra-local operator vanishes outside a finite range \cite{ultra}). To begin with, consider the walk on a line. The allowed positions are labeled by integers, and the simplest translation invariant ultra-local discretization of the Laplacian operator is \begin{equation}
H |n\rangle ~\propto~ \big[ -|n-1\rangle + 2|n\rangle - |n+1\rangle \big] ~. \end{equation} The corresponding evolution operator is \begin{equation} U(\Delta t) ~=~ \exp(iH\Delta t) ~=~ 1 + iH\Delta t + O((\Delta t)^2) ~. \end{equation} With a finite $\Delta t$, $U$ has an exponential tail and so it is not ultra-local. The evolution operator can be made ultra-local by truncation, say by dropping the $O((\Delta t)^2)$ part, but then it is not unitary. One may search for ultra-local translationally invariant unitary evolution operators using the ansatz \begin{equation}
U |n\rangle ~=~ a|n-1\rangle + b|n\rangle + c|n+1\rangle ~, \end{equation} but then the orthogonality constraints between different rows of the unitary matrix make two of $\{a,b,c\}$ vanish, and one obtains a directed walk instead of a random walk.
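For completeness, the orthogonality constraints behind this claim can be written out explicitly (a short check added here; $e_n$ denotes the position basis state $|n\rangle$, and $\langle\cdot,\cdot\rangle$ is conjugate-linear in the first argument):

```latex
% Orthonormality of the columns of the translation-invariant ansatz:
\begin{eqnarray}
\langle U e_n, U e_n\rangle &=& |a|^2+|b|^2+|c|^2 ~=~ 1 ~, \nonumber\\
\langle U e_n, U e_{n+1}\rangle &=& \bar{b}\,a+\bar{c}\,b ~=~ 0 ~, \nonumber\\
\langle U e_n, U e_{n+2}\rangle &=& \bar{c}\,a ~=~ 0 ~. \nonumber
\end{eqnarray}
```

The last relation forces $a=0$ or $c=0$, and the middle one then eliminates a second coefficient, so only one of $\{a,b,c\}$ survives (with unit modulus): the walk is directed.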
One way to bypass this problem and construct an ultra-local unitary random walk is to enlarge the Hilbert space and add a quantum coin toss instruction, e.g. \begin{equation} U = \sum_n \Big[
|\!\uparrow\rangle\langle\uparrow\!| \otimes |n+1 \rangle\langle n|
+ |\!\downarrow\rangle\langle\downarrow\!| \otimes |n-1 \rangle\langle n|
\Big] . \label{evolcoin} \end{equation}
This modification however brings its own set of caveats. If the coin state is measured at every time step (in other words, if the coin is classical), one gets no improvement over the classical random walk. With a unitary coin evolution operator that entangles the coin state with the position state, the quantum walk performs better than the classical walk in certain algorithms. But in this case, the final results depend on the initial state of the coin, because quantum evolution is reversible and not Markovian. For example, the final state distribution of the quantum walk depends on whether the initial coin state was $|\!\uparrow\rangle$, or $|\!\downarrow\rangle$, or some linear combination thereof. To get around this initial coin state sensitivity, further algorithmic modifications such as averaging over initial coin states, or intermittent coin measurements, or use of multiple coins, have been suggested, but they still leave a feeling of something to be desired.
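This initial-state sensitivity is easy to see numerically. The sketch below is an illustration added here, not from the paper; it uses the standard Hadamard coin as an illustrative choice of the unitary coin operator, together with the shift of Eq.(\ref{evolcoin}), and compares the position distributions obtained from the initial coin states $|\!\uparrow\rangle$ and $|\!\downarrow\rangle$:

```python
from collections import defaultdict

H = 1 / 2 ** 0.5  # Hadamard coin amplitude

def coined_walk(t, coin0):
    """Hadamard walk on the line; the state maps (pos, spin) -> amplitude.

    spin 0 = up (shift +1), spin 1 = down (shift -1)."""
    psi = {(0, 0): complex(coin0[0]), (0, 1): complex(coin0[1])}
    for _ in range(t):
        new = defaultdict(complex)
        for (n, s), a in psi.items():
            # Hadamard: |up> -> (|up>+|down>)/sqrt2, |down> -> (|up>-|down>)/sqrt2
            up, dn = (a * H, a * H) if s == 0 else (a * H, -a * H)
            new[(n + 1, 0)] += up   # up component shifts right
            new[(n - 1, 1)] += dn   # down component shifts left
        psi = new
    prob = defaultdict(float)
    for (n, s), a in psi.items():
        prob[n] += abs(a) ** 2
    return dict(prob)

p_up = coined_walk(10, (1, 0))   # initial coin |up>
p_dn = coined_walk(10, (0, 1))   # initial coin |down>
```

The two runs give mirror-image (and hence unequal) distributions, illustrating the dependence on the initial coin state.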
\subsection{Getting Rid of the Coin}
The way out of the above conundrum is familiar to lattice field theorists \cite{staggered}. It has also been used to simulate quantum scattering with ultra-local operators \cite{richardson}, and to construct quantum cellular automata \cite{meyer}. In its simplest version, the Laplacian operator is decomposed into its even/odd parts, $H = H_e + H_o$,
\begin{equation}
H \propto \left(\matrix{
\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \cr
\cdots& -1& 2& -1& 0& 0& 0&\cdots \cr
\cdots& 0& -1& 2& -1& 0& 0&\cdots \cr
\cdots& 0& 0& -1& 2& -1& 0&\cdots \cr
\cdots& 0& 0& 0& -1& 2& -1&\cdots \cr
\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \cr
}\right) ,
\end{equation}
\begin{equation}
H_e \propto \left(\matrix{
\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \cr
\cdots& -1& 1& 0& 0& 0& 0&\cdots \cr
\cdots& 0& 0& 1& -1& 0& 0&\cdots \cr
\cdots& 0& 0& -1& 1& 0& 0&\cdots \cr
\cdots& 0& 0& 0& 0& 1& -1&\cdots \cr
\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \cr
}\right) ,
\end{equation}
\begin{equation}
H_o \propto \left(\matrix{
\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \cr
\cdots& 0& 1& -1& 0& 0& 0&\cdots \cr
\cdots& 0& -1& 1& 0& 0& 0&\cdots \cr
\cdots& 0& 0& 0& 1& -1& 0&\cdots \cr
\cdots& 0& 0& 0& -1& 1& 0&\cdots \cr
\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\cdots \cr
}\right) .
\end{equation}
The two parts, $H_e$ and $H_o$, are individually Hermitian. They are block-diagonal with a constant $2\times2$ matrix, and so they can be exponentiated while maintaining ultra-locality. The total evolution operator can therefore be easily truncated, without giving up either unitarity or ultra-locality,
\begin{eqnarray}
U(\Delta t) = e^{i(H_e+H_o)\Delta t} &=& e^{iH_e\Delta t} e^{iH_o\Delta t} + O((\Delta t)^2) \\
&=& U_e(\Delta t) U_o(\Delta t) + O((\Delta t)^2) ~. \nonumber
\end{eqnarray}
The quantum random walk can now be generated using $U_e U_o$ as the evolution operator for the amplitude distribution $\psi(n,t)$,
\begin{equation}
\psi(n,t) = [U_e U_o]^t \psi(n,0) ~. \label{walkt}
\end{equation}
The fact that $U_e$ and $U_o$ do not commute with each other is enough for the quantum random walk to explore all possible states. The price paid for the above manipulation is that the evolution operator is translationally invariant along the line in steps of 2, instead of 1.
The $2\times2$ matrix appearing in $H_e$ and $H_o$ is proportional to $(1-\sigma_1)$, and so its exponential will be of the form $(c\,1+is\sigma_1)$, with
$|c|^2+|s|^2=1$. A random walk should have at least two non-zero entries in each row of the evolution operator. Even though our random walk treats even and odd sites differently by construction, we can obtain an unbiased random walk by choosing the $2\times2$ blocks of $U_e$ and $U_o$ as $\frac{1}{\sqrt2}{1\,i \choose i\,1}$. The discrete quantum random walk then evolves the amplitude distribution according to \begin{eqnarray}
U_o |n\rangle &=& {1 \over \sqrt{2}}\Big[ |n\rangle + i|n+(-1)^n\rangle \Big] , \\
U_e |n\rangle &=& {1 \over \sqrt{2}}\Big[ |n\rangle + i|n-(-1)^n\rangle \Big] , \end{eqnarray} \begin{equation}
U_e U_o |n\rangle = {1 \over 2}\Big[
i|n-1\rangle + |n\rangle + i|n+1\rangle - |n+2(-1)^n\rangle \Big] . \label{walkstep} \end{equation}
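As a concrete check, one full step $U_eU_o$ applied to a basis state reproduces the four amplitudes of Eq.(\ref{walkstep}). The sketch below is a minimal numerical illustration added here, not part of the paper; the ring size $L$ and the site $n_0$ are arbitrary illustrative choices:

```python
import math

L = 64  # even number of sites on a ring (periodic boundary conditions)

def half_step(psi, even_part):
    """Apply one ultra-local factor: U_e if even_part else U_o.

    U_o|n> = (|n> + i|n+(-1)^n>)/sqrt(2),
    U_e|n> = (|n> + i|n-(-1)^n>)/sqrt(2).
    """
    new = [0j] * L
    r = 1 / math.sqrt(2)
    for n, a in enumerate(psi):
        s = 1 if n % 2 == 0 else -1              # (-1)^n
        m = (n - s) % L if even_part else (n + s) % L
        new[n] += r * a
        new[m] += 1j * r * a
    return new

# one full step U_e U_o applied to the basis state |n0>
n0 = 10                                          # an even site
psi = [0j] * L
psi[n0] = 1
psi = half_step(half_step(psi, False), True)     # U_o first, then U_e
```

For even $n_0$ the resulting amplitudes are $i/2$ at $n_0\pm1$, $1/2$ at $n_0$, and $-1/2$ at $n_0+2$, exactly as in Eq.(\ref{walkstep}).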
\subsection{Relation to the Walk with a Coin}
Our construction of discrete quantum random walk has exchanged the up/down coin states for the even/odd site label. In the language of lattice field theory, this strategy resembles staggered fermions \cite{staggered}, while that with a coin (or spin) is akin to Wilson fermions \cite{wilson}. Indeed, an explicit relation between our construction and that with a coin can be established. Let \begin{equation} \Psi(n,t) \equiv \left(\matrix{ \psi(2n,t) \cr \psi(2n+1,t) \cr}\right) \end{equation} describe the amplitude distribution in a two-component notation. Then Eqs.(\ref{walkt},\ref{walkstep}) are equivalent to the evolution \begin{equation} \Psi(N,t) = [U C]^t \Psi(N,0) ~,~~ C = {1\over\sqrt{2}} \left(\matrix{ 1 & i \cr i & 1 \cr}\right) ~, \end{equation} \begin{equation}
U|N\rangle = {1\over\sqrt{2}}|N\rangle + {i\sigma_1 \over \sqrt{2}}\sum_\pm
{1\pm\sigma_3 \over 2}|N \mp 1\rangle ~. \label{evolnocoin} \end{equation}
Here, for clarity, we have denoted the basis states for $\Psi$ by $|N\rangle$. The symmetric coin operator $C$ mixes the up/down components of $\Psi$. The walk operator $U$ distributes the amplitude equally between remaining at the same site and moving to the neighboring sites. The projection operators $(1\pm\sigma_3)/2$ pick out the amplitude components that move forward and backward. Finally, the operator $\sigma_1$ interchanges the up/down components of $\Psi$, producing what Ref.\cite{gridsrch1} has called the flip-flop walk.
It is also instructive to note that while the diffusion operator $H$ has the structure of a second derivative, its two parts, $H_e$ and $H_o$, have the structure of a first derivative. This split is reminiscent of the ``square-root'' one takes to go from the Klein-Gordon operator to the Dirac operator. For a quantum random walk with a coin, this feature has been used to construct an efficient search algorithm on a spatial lattice of dimension greater than one \cite{gridsrch1,gridsrch2}. Reanalysis of that problem is in progress, without using a coin, in view of our results \cite{inprogress}.
\section{Analysis of the Walk}
It is straightforward to analyze the properties of the walk in Eq.(\ref{walkstep}) using the Fourier transform: \begin{equation} \widetilde\psi(k,t) = \sum_n e^{ikn} \psi(n,t) ~,~ \end{equation} \begin{equation} \psi(n,t) = \int_{-\pi}^{\pi} {dk \over 2\pi}~e^{-ikn} \widetilde\psi(k,t) ~. \end{equation} The evolution of the amplitude distribution in Fourier space is easily obtained by splitting it into its even/odd parts: \begin{equation} \psi \equiv \left(\matrix{ \psi_e \cr \psi_o \cr}\right) ~,~~ \psi(k,t) = [M(k)]^t \psi(k,0) ~,~ \end{equation} \begin{equation} M(k) = \left(\matrix{ -ie^{ik}\sin k & i\cos k \cr
i\cos k & ie^{-ik}\sin k \cr} \right) ~. \end{equation} The unitary matrix $M$ has the eigenvalues, $\lambda_\pm \equiv e^{\pm i\omega_k}$ (this $\pm$ sign label continues in all the results below), \begin{equation} \lambda_\pm = \sin^2 k \pm i\cos k \sqrt{1+\sin^2 k} ~,~~ \omega_k = \cos^{-1}(\sin^2 k) ~, \end{equation} with the (unnormalized) eigenvectors, \begin{eqnarray} e_\pm &\propto& \left(\matrix{ -\sin k \pm \sqrt{1+\sin^2 k} \cr 1 \cr} \right) ~, \nonumber\\
&\propto& \left(\matrix{ 1 \cr \sin k \pm \sqrt{1+\sin^2 k} \cr} \right) ~. \end{eqnarray} The evolution of the amplitude distribution then follows \begin{equation} \widetilde\psi(k,t) = e^{i\omega_k t} \widetilde\psi_+(k,0)
+ e^{-i\omega_k t} \widetilde\psi_-(k,0) ~, \end{equation} where $\widetilde\psi_\pm(k,0)$ are the projections of the initial amplitude distribution along $e_\pm$. The amplitude distribution in the position space is given by the inverse Fourier transform of $\widetilde\psi(k,t)$. While we are unable to evaluate it exactly, many properties of the quantum random walk can be extracted numerically as well as by suitable approximations.
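The eigenvalue formula is easy to check numerically. The sketch below is an illustrative verification added here, not from the paper; it compares the closed form $\lambda_\pm=\sin^2k\pm i\cos k\sqrt{1+\sin^2k}$ with the eigenvalues of $M(k)$ computed from its trace and determinant, and confirms that $M(k)$ is unitary:

```python
import cmath
import math

def M(k):
    """Fourier-space step matrix of the coinless walk."""
    return [[-1j * cmath.exp(1j * k) * math.sin(k), 1j * math.cos(k)],
            [1j * math.cos(k), 1j * cmath.exp(-1j * k) * math.sin(k)]]

def eig_from_matrix(m):
    """Eigenvalues of a 2x2 matrix via trace and determinant."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    d = cmath.sqrt(tr * tr / 4 - det)
    return sorted([tr / 2 + d, tr / 2 - d], key=lambda z: z.imag)

def eig_closed_form(k):
    """lambda_pm = sin^2 k +- i cos k sqrt(1 + sin^2 k)."""
    s2 = math.sin(k) ** 2
    w = math.cos(k) * math.sqrt(1 + s2)
    return sorted([s2 + 1j * w, s2 - 1j * w], key=lambda z: z.imag)

def is_unitary(m, tol=1e-12):
    """Check M M^dagger = 1 for a 2x2 matrix."""
    for i in range(2):
        for l in range(2):
            v = sum(m[i][j] * m[l][j].conjugate() for j in range(2))
            if abs(v - (1 if i == l else 0)) > tol:
                return False
    return True
```

Here ${\rm tr}\,M = 2\sin^2k$ and $\det M = 1$, so the quadratic formula reproduces $\lambda_\pm$ directly.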
Consider a walk starting at the origin, $\psi_{\rm o}(n,0)=\delta_{n,0}$. Its amplitude distribution at later times is specified by \begin{equation} \widetilde\psi_{{\rm o},\pm}(k,0) = {\pm 1 \over 2\sqrt{1+\sin^2 k}}
\left(\matrix{-\sin k \pm\sqrt{1+\sin^2 k} \cr 1 \cr}\right) ~, \end{equation} \begin{eqnarray} \psi_{\rm o}(n,t) &=& {1 \over 2\pi}\int_{-\pi}^\pi
{dk~e^{-ikn} \over \sqrt{1+\sin^2 k}} \\
&\times& \left(\matrix{ -i\sin\omega_k t \sin k + \cos\omega_k t
\sqrt{1+\sin^2 k} \cr i\sin\omega_k t \cr}\right) ~. \nonumber \end{eqnarray} This walk is asymmetric because our definitions treat even and odd sites differently. We can get rid of the asymmetry by initializing the walk as $\psi_{\rm s}(n,0)=(\delta_{n,0}+\delta_{n,1})/\sqrt{2}$. The walk is then symmetric under $n\leftrightarrow(1-n)$, and the amplitude distribution evolves according to \begin{eqnarray} \widetilde\psi_{{\rm s},\pm}(k,0) &=& {\pm 1 \over 2\sqrt{2(1+\sin^2 k)}} \\
&\times& \left(\matrix{e^{ik} - \sin k \pm \sqrt{1+\sin^2 k} \cr
1 + e^{ik}\sin k \pm e^{ik}\sqrt{1+\sin^2 k} \cr}\right) ~, \nonumber \end{eqnarray} \begin{eqnarray} \label{symwalk} &&\psi_{\rm s}(n,t) = {1 \over 2\pi}\int_{-\pi}^\pi
{dk~e^{-ikn} \over \sqrt{2(1+\sin^2 k)}} \\ &\times& \left(\matrix{ i\sin\omega_k t (e^{ik}-\sin k)
+ \cos\omega_k t \sqrt{1+\sin^2 k} \cr
i\sin\omega_k t (1+e^{ik}\sin k)
+ \cos\omega_k t~e^{ik}\sqrt{1+\sin^2 k} \cr}
\right) ~. \nonumber \end{eqnarray} Figs.1-2 show the numerically evaluated probability distributions, after $32$ time steps, for asymmetric and symmetric quantum random walks respectively. Note that, by construction, the distributions after $t$ time steps remain within the interval $[-2t+1,2t]$.
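These properties of the symmetric walk can be reproduced directly from Eq.(\ref{walkstep}). The sketch below is a numerical illustration added here, not part of the paper; the storage size and offset are implementation choices large enough that the walk never reaches the array edges:

```python
import math

T = 32     # number of time steps, as in the figures
OFF = 70   # position n is stored at index n + OFF (OFF even, so parities match)
L = 160    # storage size; the walk stays well inside [0, L)

def step(psi):
    """One coinless step U_e U_o of Eq. (walkstep), applied factor by factor."""
    r = 1 / math.sqrt(2)
    for even_part in (False, True):          # U_o first, then U_e
        new = [0j] * L
        for i, a in enumerate(psi):
            if a == 0:
                continue
            s = 1 if (i - OFF) % 2 == 0 else -1   # (-1)^n
            j = i - s if even_part else i + s
            new[i] += r * a
            new[j] += 1j * r * a
        psi = new
    return psi

# symmetric initial condition psi_s(n,0) = (delta_{n,0} + delta_{n,1})/sqrt(2)
psi = [0j] * L
psi[OFF] = psi[OFF + 1] = 1 / math.sqrt(2)
for _ in range(T):
    psi = step(psi)

prob = [abs(a) ** 2 for a in psi]
support = [i - OFF for i, p in enumerate(prob) if p > 1e-14]
```

The run confirms normalization, the $n\leftrightarrow(1-n)$ symmetry, and that the support after $t$ steps lies within $[-2t+1,2t]$.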
\begin{figure}
\caption{Probability distribution after $32$ time steps for the asymmetric quantum random walk $\psi_{\rm o}$.}
\end{figure}
\begin{figure}
\caption{Probability distribution after $32$ time steps for the symmetric quantum random walk $\psi_{\rm s}$. The dark curve denotes the smoothed distribution of Eq.(\ref{smooth}).}
\end{figure}
\subsection{Asymptotic Behavior of the Walk}
For large $t$, a good approximation to the distribution, Eq.(\ref{symwalk}), can be obtained by the stationary phase method, as in Ref.\cite{nayak}. The integrand is periodic, and the integral is a sum of terms of the form \begin{equation} I(n,t) = \int_{-\pi}^\pi {dk \over 2\pi}~g(k)~e^{i\phi(k,n,t)} ~. \end{equation} The highly oscillatory part of the integrand is determined by $\phi(k,n,t) = -kn \pm \omega_k t$, while the remaining part $g(k)$ is bounded. Simple algebra yields \begin{eqnarray} {d \omega_k \over dk } &=& -{2\sin k \over \sqrt{1+\sin^2 k}} ~,\\ {d^2\omega_k \over dk^2} &=& -{2\cos k \over (1+\sin^2 k)^{3/2}} ~,\\ {d^3\omega_k \over dk^3} &=& {4\sin k (1+\cos^2 k) \over (1+\sin^2 k)^{5/2}} ~. \end{eqnarray} The stationary point of the integral, $k=k_0$, has to satisfy \begin{equation} \alpha \equiv {n \over t} = \mp{2\sin k_0 \over \sqrt{1+\sin^2 k_0}} ~, \end{equation} which has a solution only for $n\in[-\sqrt{2}t,\sqrt{2}t]$.
We now separately consider the three cases:\\
(1) $|n| > \sqrt{2}t$: There is no stationary point in this case. For $|n|=(\sqrt{2}+\epsilon)t$, $|d\phi(k)/dk|>\epsilon t$, and repeated integration by parts shows that the integral falls off faster than any inverse power of $\epsilon t$.\\
(2) $|n| = \sqrt{2}t$: In this case, there is a stationary point of order 2 at $k_0=\mp{\rm sgn}(n){\pi/2}$. The integral is therefore proportional to $t^{-1/3}$. Explicitly, \begin{eqnarray} \psi_{\rm s}( \sqrt{2}t,t) &=& c~t^{-1/3}
\left(\matrix{ (1+{1-i\over\sqrt{2}})\cos({\pi t \over \sqrt{2}}) \cr
(1-{1-i\over\sqrt{2}})\sin({\pi t \over \sqrt{2}}) \cr}
\right) ,\\ \psi_{\rm s}(-\sqrt{2}t,t) &=& c~t^{-1/3}
\left(\matrix{ (1-{1-i\over\sqrt{2}})\cos({\pi t \over \sqrt{2}}) \cr
(-1-{1-i\over\sqrt{2}})\sin({\pi t \over \sqrt{2}}) \cr}
\right) ,\\ c &=& {1 \over 2\pi 3^{1/6}}~\Gamma\left({1\over3}\right)
\approx 0.355 ~. \end{eqnarray}
(3) $|n| < \sqrt{2}t$: There are two stationary points in this case, $k_{01}\in(-\pi/2,\pi/2)$ and $k_{02}=\pi-k_{01}$, with \begin{eqnarray} \sin k_0 &=& \mp {n \over \sqrt{4t^2 - n^2}} ~,~ \\
\left|{d^2\omega_k \over dk^2}\right|_{k=k_0}
&=& {\sqrt{4t^2-2n^2}~(4t^2-n^2) \over 4t^3} ~. \end{eqnarray} The integral is therefore proportional to $t^{-1/2}$. In terms of the phase, \begin{equation} \phi_0 = -k_{01} n + \omega_{k_0} t - (\pi/4) ~, \end{equation} the distribution amplitude is \begin{eqnarray} \psi_{\rm s} &=& {1 \over \sqrt{t}(4t^2-2n^2)^{1/4}} \nonumber\\
&\times& \left[ \cos\phi_0 \left(\matrix{
((1-i)n + 2t)/\sqrt{4t^2-n^2} \cr
\sqrt{4t^2-2n^2}/(2t+n) \cr}\right) \right. \\
& & \left. + i\sin\phi_0 \left(\matrix{
\sqrt{4t^2-2n^2}/\sqrt{4t^2-n^2} \cr
((1-i)n + 2t)/(2t+n) \cr}\right) \right] ~. \nonumber \end{eqnarray} The smoothed probability distribution, obtained by replacing the highly oscillatory terms by their mean values, is \begin{equation} \label{smooth}
|\psi_{\rm s}|_{\rm smooth}^2 = {4t^2 \over \pi\sqrt{4t^2-2n^2}~(4t^2-n^2)} ~. \end{equation} (Here, the $n\leftrightarrow(1-n)$ symmetry can be restored by replacing $n$ by $(n-{1\over2})$.) As shown in Fig.2, it represents the average behavior of the distribution very well. Its low order moments are easily calculated to be, \begin{eqnarray} \label{moment0}
\int_{n=-\sqrt{2}t}^{\sqrt{2}t} |\psi_{\rm s}|_{\rm smooth}^2 dn &=& 1 ~, \\
\int_{n=-\sqrt{2}t}^{\sqrt{2}t} |n| \cdot |\psi_{\rm s}|_{\rm smooth}^2 dn &=& t ~, \\
\int_{n=-\sqrt{2}t}^{\sqrt{2}t} n^2 |\psi_{\rm s}|_{\rm smooth}^2 dn
&=& 2(2-\sqrt{2})t^2 ~. \label{moment2} \end{eqnarray}
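The moment identities above can be verified numerically. The Python sketch below (with an illustrative value of $t$, chosen only for the check) substitutes $n=\sqrt{2}\,t\sin\theta$, which removes the inverse-square-root endpoint singularity of the smoothed density, and integrates with a midpoint rule:

```python
import math

t = 10.0  # illustrative value; the moments scale as t^m

# under n = sqrt(2) t sin(theta), |psi_s|^2_smooth dn becomes
# sqrt(2)/(pi (2 - sin^2 theta)) dtheta on (-pi/2, pi/2)
N = 200000
h = math.pi / N
m0 = m1 = m2 = 0.0
for i in range(N):
    th = -math.pi / 2 + (i + 0.5) * h
    w = math.sqrt(2) / (math.pi * (2 - math.sin(th) ** 2)) * h
    n = math.sqrt(2) * t * math.sin(th)
    m0 += w               # normalization
    m1 += abs(n) * w      # first absolute moment
    m2 += n * n * w       # second moment

assert abs(m0 - 1) < 1e-6
assert abs(m1 - t) < 1e-4
assert abs(m2 - 2 * (2 - math.sqrt(2)) * t * t) < 1e-2
```

The three assertions reproduce Eqs.(\ref{moment0})-(\ref{moment2}): total probability $1$, mean displacement $t$, and variance $2(2-\sqrt{2})t^2$.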
The following properties of the quantum random walk are easily deduced from all the above results:\\ $\bullet$ The probability distribution is double-peaked with maxima approximately at $\pm\sqrt{2}t$. The distribution falls off steeply beyond the peaks, while it is rather flat in the region between the peaks. With increasing $t$, the peaks become more pronounced, because the height of the peaks decreases more slowly than that of the flat region.\\ $\bullet$ The size of the tail of the amplitude distribution is limited by $(\epsilon t)^{-1} \sim t^{-1/3}$, which gives $\Delta n_> = \Delta(\epsilon t) = O(t^{1/3})$. On the inner side, the width of the peaks is governed by
$|\omega_k^{''} t|^{-1/2} \sim t^{-1/3}$. For $|n|=(\sqrt{2}-\delta)t$, this gives $\Delta n_< = \Delta(\delta t) = O(t^{1/3})$. The peaks therefore make a negligible contribution to the probability distribution, $O(t^{-1/3})$.\\ $\bullet$ Rapid oscillations contribute to the probability distribution (and hence to its moments) only at subleading order. They can be safely ignored in an asymptotic analysis, retaining only the smooth part of the probability distribution.\\ $\bullet$ The quantum random walk spreads linearly in time, with a speed smaller by a factor of $\sqrt{2}$ compared to a directed walk. This speed is a measure of its mixing behavior and hitting probability. The probability distribution is qualitatively similar to a uniform distribution over the interval $[-\sqrt{2}t,\sqrt{2}t]$. In particular, $m^{th}$ moment of the probability distribution is proportional to $t^m$.
These properties agree with those obtained in Ref.\cite{nayak} for a quantum random walk with a coin-toss instruction, demonstrating that the coin offers no advantage in this particular setup. (Extra factors of $2$ appear in our results, because a single step of our walk is a product of two nearest neighbor operators, $U_e$ and $U_o$.) It is important to note that the properties of the quantum random walk are in sharp contrast to those of the classical random walk. The classical random walk produces a binomial probability distribution, which in the symmetric case has a single peak centered at the origin and variance equal to $t$.
\subsection{The Walk in Presence of an Absorbing Wall}
The escape probability of the quantum random walk can be calculated by introducing an absorbing wall, say between $n=0$ and $n=-1$. Mathematically, the absorbing wall can be represented by a projection operator for $n\ge0$. The unabsorbed part of the walk is given by \begin{eqnarray} \psi(n,t+1) &=& P_{n\ge0}U_e U_o~\psi(n,t) ~,\\
&=& U_e U_o~\psi(n,t) - {1\over2}\delta_{n,-1}(i\psi(0,t)-\psi(1,t)) ~, \nonumber \end{eqnarray} with the absorption probability, \begin{equation}
P_{\rm abs}(t) = 1 - \sum_{n\ge0} |\psi(n,t)|^2 ~. \end{equation} Fig.3 shows the numerically evaluated probability distribution, in the presence of this absorbing wall, after $32$ time steps and with the symmetric initial state. Comparison with Fig.2 shows that the absorbing wall disturbs the evolution of the walk only marginally. The probability distribution in the region close to $n=0$ is depleted as anticipated, while it is a bit of a surprise that the peak height near $n=\sqrt{2}t$ increases slightly. As a result, the escape speed from the wall is a little higher than the spreading speed without the wall. As shown in Fig.4, we find that the first two time steps dominate absorption, $P_{\rm s,abs}(t=1)=0.25$ and $P_{\rm s,abs}(t=2) =0.375$, with very little absorption later on. Asymptotically, the net absorption probability approaches $P_{\rm s,abs}(\infty) \approx 0.4098$ for the symmetric walk. (We also find, for the asymmetric walk starting at the origin, $P_{\rm o,abs}(\infty)\approx0.2732$.) This value is smaller than the corresponding result $P_{\rm abs}(\infty)=2/\pi$ for the symmetric quantum random walk with a coin-toss instruction \cite{watrous}.
Thus the part of quantum random walk going away from the absorbing wall just takes off at a constant speed, hardly ever returning to the starting point. Again, this behavior is in a sharp contrast to that of the classical random walk. A classical random walk always returns to the starting point, sooner or later, and so its absorption probability approaches unity as $t\rightarrow\infty$.
\begin{figure}
\caption{Probability distribution after $32$ time steps for the symmetric quantum random walk, with an absorbing wall on the left side of $n=0$.}
\end{figure}
\begin{figure}
\caption{Time dependence of the probability for the symmetric quantum random walk to remain unabsorbed, in presence of an absorbing wall to the left of $n=0$. The walk gets within a few per cent of the asymptotic escape probability in just two time steps.}
\end{figure}
\subsection{Comparison to the Walk with a Coin}
The above results bring out the differences of our quantum random walk construction compared to that of Refs.\cite{nayak,watrous}:\\ (1) We have absorbed the two states of the coin into the even/odd site label at no extra cost. This is possible because, due to its discrete symmetry, the walk with a coin effectively uses only half the sites (e.g. for a walk starting at origin, the amplitude distribution is restricted to only odd sites at odd $t$ and only even sites at even $t$). Our walk makes use of all the sites at every instant.\\ (2) It can be seen from Eqs.(\ref{evolnocoin},\ref{evolcoin}) that, at every time step, $\Psi$ has 50\% probability to stay put at the same location, while the walk with a coin has no probability to remain at the same location. Yet, both achieve the same spread of amplitude distribution, as exemplified by the moments in Eqs.(\ref{moment0}-\ref{moment2}). This means that our walk is smoother---more directed and less zigzag.\\ (3) When the coin is considered a separate degree of freedom, quantum evolution entangles the coin and the walk position. On the other hand, when the coin states are made part of the position space, as we have done, entanglement disappears completely---only superposition representing the amplitude distribution survives \cite{entangle}. This elimination of quantum entanglement would be a tremendous advantage in any practical implementation of the quantum random walk, because quantum entanglement is highly fragile against environmental disturbances while mere superposition is much more stable. The cost for gaining this advantage is the loss of short distance homogeneity---translational invariance holds in steps of 2 instead of 1.
\section{Extensions and Outlook}
The quantum random walk on a line is easily converted to that on a circle by imposing periodic boundary conditions. When the circle has $N$ points, the only change in the analysis is to replace the integral over $k$ in the inverse Fourier transform by a discrete sum over $k$-values separated by $2\pi/N$. Since the quantum random walk spreads essentially uniformly, there is not much change in its behavior. All that one has to bear in mind is that, on a long time scale, unitary evolution makes the walk cycle through phases of spreading out and contracting towards the initial state.
Going beyond one dimension, the strategy of constructing discrete ultra-local unitary evolution operators by splitting the Hamiltonian into block-diagonal parts is applicable to random walks on any finite-color graph \cite{richardson}. One just constructs $2\times2$ block unitary matrices for each color of the graph, and the single time step evolution operator becomes the product of all the block unitary matrices. In particular, the $d-$dimensional hypercubic walk can be constructed as a tensor product of $d$ one-dimensional walks, using $2d$ block unitary matrices.
Our results clearly demonstrate that discrete quantum random walks with useful properties can be constructed without a coin toss instruction. The addition of a coin toss instruction may still be beneficial in specific quantum problems. A coin is an extra resource, and there are known instances where the addition of a coin toss instruction makes classical randomized algorithms have a better scaling behavior compared to their deterministic counterparts \cite{motwani}. One may hope for a similar situation in the quantum case too, keeping in mind that a careful initialization of the quantum coin state would be a must in such cases.
A clear advantage of quantum random walks is their linear spread in time, compared to square-root spread in time for classical random walks. So they are expected to be useful in problems requiring fast hitting times. Some examples of this nature have been explored in graph theoretical and sampling problems (see Refs.\cite{kempe,ambainis} for reviews), and more applications need to be investigated.
\end{document}
\begin{document}
\title{A general comparison theorem for $1$-dimensional anticipated BSDEs}
\begin{abstract}
Anticipated backward stochastic differential equations (ABSDEs), first studied in 2007, are a new type of stochastic differential equations. In this paper, we establish a general comparison theorem for $1$-dimensional ABSDEs with the generators depending on the anticipated term of $Z$. \\ \par $\textit{Keywords:}$ Anticipated backward stochastic differential equation, Backward stochastic differential equation, Comparison theorem \end{abstract}
\section{Introduction}\label{sec:intro}
Backward Stochastic Differential Equations (BSDEs) of the following general form were first considered by Pardoux-Peng \cite{PP1} in 1990: \begin{center} $Y_t=\xi+\int_t^T g(s, Y_s, Z_s)ds-\int_t^T Z_sdB_s.$ \end{center} Since then, the theory of BSDEs has been studied with great interest. One of the achievements of this theory is the comparison theorem, which is due to Peng \cite{P} and was then generalized by Pardoux-Peng \cite{PP2}, El Karoui-Peng-Quenez \cite{KPQ} and Hu-Peng \cite{HP}. It allows one to compare the solutions of two BSDEs whenever one can compare the terminal conditions and the generators.
Recently, a new type of BSDE, called anticipated BSDE (ABSDE in short), was introduced by Peng-Yang \cite{PY} (see also Yang \cite{Y}). The ABSDE is of the following form: $$ \left\{ \begin{tabular}{rll} $-dY_t=$ & $f(t, Y_t, Z_t, Y_{t+\delta(t)}, Z_{t+\zeta(t)})dt-Z_tdB_t, $ & $ t\in[0, T];$\\ $Y_t=$ & $\xi_t, $ & $t\in[T, T+K];$\\ $Z_t=$ & $\eta_t, $ & $t\in[T, T+K],$ \end{tabular}\right.\eqno(1.1)$$ where $\delta(\cdot): [0, T]\rightarrow \mathbb{R}^+ \setminus \{0\}$ and $\zeta(\cdot): [0, T]\rightarrow \mathbb{R}^+\setminus \{0\}$ are continuous functions satisfying \par $\mathbf{(a1)}$ there exists a constant $K \geq 0$ such that for each $s\in[0, T],$ $s+\delta(s) \leq T+K$, $s+\zeta(s) \leq T+K;$ \par $\mathbf{(a2)}$ there exists a constant $M \geq 0$ such that for each $t\in[0, T]$ and each nonnegative integrable function $g(\cdot),$ $\int_t^T g(s+\delta(s))ds\leq M\int_t^{T+K} g(s)ds$, $\int_t^T g(s+\zeta(s))ds\leq M\int_t^{T+K} g(s)ds.$
Peng and Yang proved in \cite{PY} that (1.1) has a unique adapted solution under proper assumptions; furthermore, they established a comparison theorem, which requires that the generators of the ABSDEs do not depend on the anticipated term of $Z$ and that one of them is increasing in the anticipated term of $Y$.
The aim of this paper is to give a more general comparison theorem in which the generators of the ABSDEs break through the above restrictions. The main approach we adopt is to consider an ABSDE as a series of BSDEs and then apply the well-known comparison theorem for $1$-dimensional BSDEs (see \cite{KPQ}).
The paper is organized as follows: in Section 2, we list some notations and some existing results. In Section 3, we mainly study the comparison theorem for ABSDEs.
\section{Preliminaries}\label{sec:pre}
Let $\{B_t; t\geq 0\}$ be a $d$-dimensional standard Brownian motion on a probability space $(\Omega, \mathcal{F}, P)$ and $\{\mathcal{F}_t; t\geq 0\}$ be its natural filtration. Denote by
$|\cdot|$ the norm in $\mathbb{R}^m.$ Given $T >0,$ we make the following notations:
$L^2(\mathcal{F}_T; \mathbb{R}^m)$ = $\{\xi\in \mathbb{R}^m$ $|$ $\xi$ is an $\mathcal{F}_T$-measurable random variable such that
$E|\xi|^2< \infty\};$
$L_{\mathcal{F}}^2(0, T; \mathbb{R}^m)$ = $\{ \varphi: \Omega\times
[0, T]\rightarrow \mathbb{R}^m$ $|$ $\varphi$ is progressively measurable; $E\int_0^T |\varphi_t|^2dt< \infty\};$
$S_{\mathcal{F}}^2(0, T; \mathbb{R}^m)$ = $\{\psi: \Omega\times [0, T]\rightarrow \mathbb{R}^m$ $|$ $\psi$ is continuous and progressively measurable; $E[\sup_{0 \leq t \leq T} |\psi_t|^2]< \infty\}.$
Now consider the ABSDE (1.1). First for the generator $f(\omega, s, y, z, \theta, \phi): \Omega \times [0, T]\times \mathbb{R}^m\times \mathbb{R}^{m\times d}\times S_\mathcal{F}^2(s, T+K; \mathbb{R}^m)\times L_\mathcal{F}^2(s, T+K; \mathbb{R}^{m\times d})\rightarrow L^2 (\mathcal{F}_s; \mathbb{R}^m),$ we use two hypotheses:
${\bf{(H1)}}$ there exists a constant $L > 0$ such that for each $s\in [0, T],$ $y, y^\prime\in \mathbb{R}^m,$ $z, z^\prime \in
\mathbb{R}^{m\times d},$ $\theta, \theta^\prime \in L_\mathcal{F}^2(s, T+K; \mathbb{R}^m),$ $\phi, \phi^\prime \in L_\mathcal{F}^2(s, T+K; \mathbb{R}^{m\times d}),$ $r, \bar{r}\in [s, T+K],$ the following holds: $$|f(s, y, z, \theta_r, \phi_{\bar{r}})-f(s, y^\prime, z^\prime, \theta_r^\prime,
\phi_{\bar{r}}^\prime)|\leq L(|y-
y^\prime|+|z-z^\prime|+E^{\mathcal{F}_s}[|\theta_r-\theta_r^\prime|+|\phi_{\bar{r}}-\phi_{\bar{r}}^\prime|]); $$
${\bf{(H2)}}$ $E[\int_0^T |f(s, 0, 0, 0, 0)|^2ds]< \infty.$
Let us review the existence and uniqueness theorem for ABSDEs from \cite{PY}:
\begin{theorem}\label{Theorem 2.1} Assume that $f$ satisfies (H1) and (H2) and that $\delta,$ $\zeta$ satisfy (a1) and (a2). Then for any given terminal conditions $(\xi, \eta) \in S_\mathcal{F}^2(T, T+K; \mathbb{R}^m)\times L_\mathcal{F}^2(T, T+K; \mathbb{R}^{m\times d}),$ the ABSDE (1.1) has a unique solution, i.e., there exists a unique pair of $\mathcal{F}_t$-adapted processes $(Y, Z)\in S_\mathcal{F}^2(0, T+K; \mathbb{R}^m)\times L_\mathcal{F}^2(0, T+K; \mathbb{R}^{m\times d})$ satisfying (1.1). \end{theorem}
Next we will recall the comparison theorem from \cite{PY}. Let $(Y^{(j)}, Z^{(j)})$ $(j=1, 2)$ be solutions of the following $1$-dimensional ABSDEs respectively: $$ \left\{ \begin{tabular}{rll} $-dY_t^{(j)}=$ & $f_j(t, Y_t^{(j)}, Z_t^{(j)}, Y_{t+\delta(t)}^{(j)})dt-Z_t^{(j)}dB_t, $ & $ t\in[0, T];$\\ $Y_t^{(j)}=$ & $\xi_t^{(j)}, $ & $t\in[T, T+K].$ \end{tabular}\right.\eqno(2.1) $$
\begin{theorem}\label{Theorem 2.2} Assume that $f_1,$ $f_2$ satisfy (H1) and (H2), $\xi^{(1)}, \xi^{(2)} \in S_\mathcal{F}^2(T, T+K; \mathbb{R}),$ $\delta$ satisfies (a1), (a2), and for each $t\in [0, T],$ $y\in \mathbb{R},$ $z\in \mathbb{R}^d,$ $f_2(t, y, z, \cdot)$ is increasing, i.e., $f_2(t, y, z, \theta_r)\geq f_2(t, y, z, \theta_r^\prime)$, if $\theta_r\geq \theta_r^\prime,$ $\theta, \theta^\prime\in L_\mathcal{F}^2(t, T+K; \mathbb{R}), r\in [t, T+K].$ If $\xi_s^{(1)}\geq \xi_s^{(2)}, s\in [T, T+K]$ and $f_1(t, y, z, \theta_r)\geq f_2(t, y, z, \theta_r), t\in[0, T], y\in \mathbb{R}, z\in \mathbb{R}^d, \theta\in L_\mathcal{F}^2(t, T+K; \mathbb{R}), r\in [t, T+K],$ then $Y_t^{(1)}\geq Y_t^{(2)},\ a.e.,a.s..$ \end{theorem}
\section{Comparison Theorem for Anticipated BSDEs}\label{secsec:bsde}
Consider the following $1$-dimensional ABSDEs: $$ \left\{ \begin{tabular}{rll} $-dY_t^{(j)}=$ & $f_j(t, Y_t^{(j)}, Z_t^{(j)}, Y_{t+\delta(t)}^{(j)}, Z_{t+\zeta(t)}^{(j)})dt-Z_t^{(j)}dB_t, $ & $ t\in[0, T];$\\ $Y_t^{(j)}=$ & $\xi_t^{(j)}, $ & $t\in [T, T+K];$\\ $Z_t^{(j)}=$ & $\eta_t^{(j)}, $ & $t\in [T, T+K],$ \end{tabular}\right.\eqno(3.1) $$ where $j=1, 2,$ $f_j$ satisfies (H1), (H2), $(\xi^{(j)}, \eta^{(j)})\in S_\mathcal{F}^2(T, T+K; \mathbb{R})\times L_\mathcal{F}^2(T, T+K; \mathbb{R}^{d}),$ $\delta, \zeta$ satisfy (a1) and (a2). By Theorem \ref{Theorem 2.1}, either of the above ABSDEs has a unique adapted solution.
\begin{proposition}\label{Proposition 3.1} Putting $t_0=T,$ we define by iteration \begin{center} $ t_i:=\min\{t \in [0, T]: \min\{s+ \delta(s),\ s+\zeta(s)\}\geq t_{i-1},\ for\ all\ s\in [t, T]\},\ \ i\geq 1. $ \end{center} Set $N:=\max\{i: t_{i-1} > 0\}$. Then $N$ is finite, $t_N=0$ and \begin{center} $ [0, T]=[0, t_{N-1}] \cup [t_{N-1}, t_{N-2}] \cup \cdots \cup [t_2, t_1] \cup [t_1, T]. $ \end{center} \end{proposition}
\begin{proof} Let us first prove that $N$ is finite. We argue by contradiction: suppose $N$ is infinite. From the definition of $\{t_i\}_{i=1}^{+\infty}$, we know $$ \min\{t_i+ \delta(t_i),\ t_i+ \zeta(t_i)\}=t_{i-1},\ i=1, 2, \cdots.\eqno(3.2) $$ Since $\delta(\cdot)$ and $\zeta(\cdot)$ are continuous and positive, we have $t_i<t_{i-1}$ $(i=1, 2, \cdots).$ Therefore $\{t_i\}_{i=1}^{+\infty}$, being strictly decreasing and bounded, converges. Denote its limit by $\bar{t}.$ Letting $i\rightarrow +\infty$ on both sides of (3.2), we get \begin{center} $\min\{\bar{t}+\delta(\bar{t}),\ \bar{t}+\zeta(\bar{t})\}=\bar{t}.$ \end{center} Hence $\delta(\bar{t})=0$ or $\zeta(\bar{t})=0$, a contradiction since both $\delta$ and $\zeta$ are positive.
Next we will show that $t_N=0.$ Indeed, we have $$\min\{t_N+\delta(t_N),\ t_N+\zeta(t_N)\}> t_N,$$ which implies $t_N = 0$: otherwise, by the continuity of $\delta(\cdot)$ and $\zeta(\cdot)$, we could find a $\tilde{t}\in [0, t_N)$ such that $$\min\{s+\delta(s),\ s+\zeta(s)\}\geq t_N,\ for\ all\ s\in [\tilde{t}, T],$$ from which we know that the sequence can be continued past $t_N$, contradicting the maximality of $N$. \end{proof}
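The construction of the partition $\{t_i\}$ can be illustrated numerically. The Python sketch below is a grid-based approximation (the delay functions and the grid step are user-supplied assumptions, not part of the paper); for the constant delays $\delta=\zeta=0.3$ and $T=1$ it produces the partition points $1.0,\ 0.7,\ 0.4,\ 0.1,\ 0$:

```python
def partition(T, delta, zeta, step=1e-3):
    """Approximate the points t_0 = T > t_1 > ... > t_N = 0 of
    Proposition 3.1: t_i is the smallest grid point t such that
    s + min(delta(s), zeta(s)) >= t_{i-1} for all s in [t, T].
    Assumes delta, zeta are continuous and positive."""
    grid = [j * step for j in range(int(round(T / step)) + 1)]
    reach = [s + min(delta(s), zeta(s)) for s in grid]
    # suffix[j] = min of reach over [grid[j], T]
    suffix = reach[:]
    for j in range(len(grid) - 2, -1, -1):
        suffix[j] = min(suffix[j], suffix[j + 1])
    ts = [T]
    while ts[-1] > 0:
        prev = ts[-1]
        j = next(i for i, m in enumerate(suffix) if m >= prev - 1e-9)
        t_next = grid[j]
        if t_next >= prev:
            raise ValueError("grid step too coarse for these delays")
        ts.append(t_next)
    return ts  # [t_0 = T, t_1, ..., t_N = 0]

# constant delays delta = zeta = 0.3 give t_i = max(T - 0.3 i, 0)
ts = partition(1.0, lambda s: 0.3, lambda s: 0.3)
```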
\begin{proposition}\label{Proposition 3.2} Suppose $(Y^{(j)}, Z^{(j)})$ $(j=1, 2)$ are the solutions of ABSDEs (3.1) respectively. Then for fixed $i\in {\{1, 2, \cdots, N\}},$ over time interval $[t_i, t_{i-1}]$, ABSDEs (3.1) are equivalent to the following ABSDEs: $$ \left\{ \begin{tabular}{rll} $-d\bar{Y}_t^{(j)}=$ & $f_j(t, \bar{Y}_t^{(j)}, \bar{Z}_t^{(j)}, \bar{Y}_{t+\delta(t)}^{(j)}, \bar{Z}_{t+\zeta(t)}^{(j)})dt-\bar{Z}_t^{(j)}dB_t, $ & $ t\in[t_i, t_{i-1}];$\\ $\bar{Y}_t^{(j)}=$ & $Y_t^{(j)}, $ & $t\in [t_{i-1}, T+K];$\\ $\bar{Z}_t^{(j)}=$ & $Z_t^{(j)}, $ & $t\in [t_{i-1}, T+K],$ \end{tabular}\right.\eqno(3.3) $$ which are also equivalent to the following BSDEs with terminal conditions $Y_{t_{i-1}}^{(j)}$ respectively: $$ \tilde{Y}_t^{(j)}=Y_{t_{i-1}}^{(j)}+\int_t^{t_{i-1}} f_j(s, \tilde{Y}_s^{(j)}, \tilde{Z}_s^{(j)}, Y_{s+\delta(s)}^{(j)}, Z_{s+\zeta(s)}^{(j)})ds-\int_t^{t_{i-1}} \tilde{Z}_s^{(j)}dB_s.\eqno(3.4)$$ That is to say, $$Y_t^{(j)}=\bar{Y}_t^{(j)}=\tilde{Y}_t^{(j)},\ Z_t^{(j)}=\bar{Z}_t^{(j)}=\tilde{Z}_t^{(j)}=\frac{d\langle \tilde{Y}^{(j)}, B\rangle_t}{d t},\ t\in [t_i, t_{i-1}],\ j=1,2,$$ where $\langle \tilde{Y}^{(j)}, B\rangle$ is the quadratic covariation process of $\tilde{Y}^{(j)}$ and the Brownian motion $B$. \end{proposition}
\begin{proof} We only need to prove the equivalence between ABSDE (3.3) and BSDE (3.4). It is obvious that for each $s\in [t_i, t_{i-1}]$, $s+\delta(s)\geq t_{i-1}$, $s+\zeta(s)\geq t_{i-1}$, thus $(\bar{Y}_{t+\delta(t)}^{(j)}, \bar{Z}_{t+\zeta(t)}^{(j)})=(Y_{t+\delta(t)}^{(j)}, Z_{t+\zeta(t)}^{(j)})$ in the generator of ABSDE (3.3). Clearly $f_j(\cdot, \cdot, \cdot, Y_{s+\delta(s)}^{(j)}, Z_{s+\zeta(s)}^{(j)})$ satisfies the Lipschitz condition as well as the integrability condition since $f_j$ satisfies (H1), (H2). Thus BSDE (3.4) has a unique adapted solution.
Moreover, it is obvious that $(Y_t^{(j)}, Z_t^{(j)})_{t\in [t_i, t_{i-1}]}$ satisfies both ABSDE (3.3) and BSDE (3.4). Then from the uniqueness of ABSDE's solution and that of BSDE's, we can easily obtain the desired equalities. \end{proof}
\begin{theorem}\label{Theorem 3.1} Let $(Y^{(j)}, Z^{(j)})\in S_\mathcal{F}^2(0, T+K; \mathbb{R})\times L_\mathcal{F}^2(0, T+K; \mathbb{R}^d)$ $(j=1, 2)$ be the unique solutions to ABSDEs (3.1) respectively. If \item{(i)} $\xi_s^{(1)}\geq \xi_s^{(2)}, s\in [T, T+K], a.e., a.s.;$ \item{(ii)} for all $t\in [0, T]$, $(y, z)\in \mathbb{R}\times \mathbb{R}^d,$ $\theta^{(j)}\in S_\mathcal{F}^2(t, T+K; \mathbb{R})$ $(j=1, 2)$ such that $\theta^{(1)} \geq \theta^{(2)},$ $\{\theta_{r}^{(j)}\}_{r\in[t, T]}$ is a continuous semimartingale and $(\theta_{r}^{(j)})_{r\in [T, T+K]}=(\xi_r^{(j)})_{r\in [T, T+K]}$, $$ f_1(t, y, z, \theta_{t+\delta(t)}^{(1)}, \eta_{t+\zeta(t)}^{(1)}) \geq f_2(t, y, z, \theta_{t+\delta(t)}^{(2)}, \eta_{t+\zeta(t)}^{(2)}),\ \ a.e., a.s.,\eqno(3.5) $$ $$ f_1(t, y, z, \theta_{t+\delta(t)}^{(1)}, \frac{d\langle
\theta^{(1)}, B\rangle_r}{d r}|_{r=t+\zeta(t)}) \geq f_2(t, y, z,
\theta_{t+\delta(t)}^{(2)}, \frac{d\langle \theta^{(2)}, B\rangle_r}{d r}|_{r=t+\zeta(t)}),\ \ a.e., a.s.,\eqno(3.6) $$ $$
f_1(t, y, z, \xi_{t+\delta(t)}^{(1)}, \frac{d\langle \theta^{(1)}, B\rangle_r}{d r}|_{r=t+\zeta(t)}) \geq f_2(t, y, z,
\xi_{t+\delta(t)}^{(2)}, \frac{d\langle \theta^{(2)}, B\rangle_r}{d r}|_{r=t+\zeta(t)}),\ \ a.e., a.s.,\eqno(3.7) $$ then $Y_t^{(1)} \geq Y_t^{(2)}, a.e., a.s..$
Moreover, the following holds: \[ Y_0^{(1)}=Y_0^{(2)}\Leftrightarrow \left\{ \begin{tabular}{ll} $\xi_T^{(1)}=\xi_T^{(2)};$ & $$\\ $f_1(t, Y_t^{(2)}, Z_t^{(2)}, Y_{t+\delta(t)}^{(1)}, Z_{t+\zeta(t)}^{(1)})=f_2(t, Y_t^{(2)}, Z_t^{(2)}, Y_{t+\delta(t)}^{(2)}, Z_{t+\zeta(t)}^{(2)}),\ \ t\in [0, T].$ \end{tabular} \right. \] \end{theorem}
\begin{proof} Consider the ABSDE (3.1) one time interval by one time interval. For the first step, we consider the case when $t\in [t_1, T]$. According to Proposition \ref{Proposition 3.2}, we can equivalently consider the following BSDE: \begin{center} $ \tilde{Y}_t^{(j)}=\xi_T^{(j)}+\int_t^T f_j(s, \tilde{Y}_s^{(j)}, \tilde{Z}_s^{(j)}, \xi_{s+\delta(s)}^{(j)}, \eta_{s+\zeta(s)}^{(j)})ds-\int_t^T \tilde{Z}_s^{(j)}dB_s, $ \end{center} from which we have $$ \tilde{Z}_t^{(j)}=\frac{d\langle \tilde{Y}^{(j)}, B\rangle_t}{d t},\ \ t\in [t_1, T].\eqno(3.8) $$ Noticing that $\xi^{(j)}\in S_\mathcal{F}^2(T, T+K; \mathbb{R})$ $(j=1, 2)$ and $\xi^{(1)}\geq \xi^{(2)},$ from (3.5) in (ii), we can get, for $s\in [t_1, T],$ $y\in \mathbb{R}$, $z\in \mathbb{R}^d,$ \begin{center} $f_1(s, y, z, \xi_{s+\delta(s)}^{(1)}, \eta_{s+\zeta(s)}^{(1)}) \geq f_2(s, y, z, \xi_{s+\delta(s)}^{(2)}, \eta_{s+\zeta(s)}^{(2)}). $ \end{center} According to the comparison theorem for $1$-dimensional BSDEs, we can get \begin{center} $ \tilde{Y}_t^{(1)}\geq \tilde{Y}_t^{(2)},\ \ t\in[t_1, T],\ \ a.e.,a.s. $ \end{center}
as well as \[ Y_{t_1}^{(1)}=Y_{t_1}^{(2)}\Leftrightarrow \left\{ \begin{tabular}{ll} $\xi_T^{(1)}=\xi_T^{(2)};$ & $$\\ $f_1(t, \tilde{Y}_t^{(2)}, \tilde{Z}_t^{(2)}, \xi_{t+\delta(t)}^{(1)}, \eta_{t+\zeta(t)}^{(1)})=f_2(t, \tilde{Y}_t^{(2)}, \tilde{Z}_t^{(2)}, \xi_{t+\delta(t)}^{(2)}, \eta_{t+\zeta(t)}^{(2)}),\ \ t\in [t_1, T].$ \end{tabular} \right. \] Consequently, $$ Y_t^{(1)}\geq Y_t^{(2)},\ \ t\in[t_1, T+K],\ \ a.e.,a.s..\eqno(3.9) $$
For the second step, we consider the case when $t\in [t_2, t_1]$. Similarly, according to Proposition \ref{Proposition 3.2}, we can consider the following BSDE equivalently: \begin{center} $ \tilde{Y}_t^{(j)}=Y_{t_1}^{(j)}+\int_t^{t_1} f_j(s, \tilde{Y}_s^{(j)}, \tilde{Z}_s^{(j)}, Y_{s+\delta(s)}^{(j)}, Z_{s+\zeta(s)}^{(j)})ds-\int_t^{t_1} \tilde{Z}_s^{(j)}dB_s, $ \end{center} from which we have $\tilde{Z}_t^{(j)}=\frac{d\langle \tilde{Y}^{(j)}, B\rangle_t}{d t}$ for $t\in [t_2, t_1]$. Noticing (3.8) and (3.9), according to (ii), we have, for $s\in [t_2, t_1]$, $y\in \mathbb{R}$, $z\in \mathbb{R}^d,$ \begin{center} $f_1(s, y, z, Y_{s+\delta(s)}^{(1)}, Z_{s+\zeta(s)}^{(1)}) \geq f_2(s, y, z, Y_{s+\delta(s)}^{(2)}, Z_{s+\zeta(s)}^{(2)}). $ \end{center} Applying the comparison theorem for BSDEs again, we can finally get \begin{center} $ Y_t^{(1)}\geq Y_t^{(2)},\ \ t\in[t_2, t_1],\ \ a.e.,a.s. $ \end{center} as well as \[ Y_{t_2}^{(1)}=Y_{t_2}^{(2)}\Leftrightarrow \left\{ \begin{tabular}{ll} $Y_{t_1}^{(1)}=Y_{t_1}^{(2)};$ & $$\\ $f_1(t, \tilde{Y}_t^{(2)}, \tilde{Z}_t^{(2)}, Y_{t+\delta(t)}^{(1)}, Z_{t+\zeta(t)}^{(1)})=f_2(t, \tilde{Y}_t^{(2)}, \tilde{Z}_t^{(2)}, Y_{t+\delta(t)}^{(2)}, Z_{t+\zeta(t)}^{(2)}),\ \ t\in [t_2, t_1].$ \end{tabular} \right. \]
Similarly to the above steps, we can give the proofs for the other cases when $t\in [t_3, t_2],$ $[t_4, t_3],$ $\cdots,$ $[t_N, t_{N-1}].$ \end{proof}
\begin{example}\label{Example 3.1} Now suppose that we are faced with the following two ABSDEs: $$\left\{ \begin{tabular}{rll} $-dY_t^{(1)}=$ & $E^{\mathcal{F}_t}[Y_{t+\delta(t)}^{(1)}+\sin
(2Y_{t+\delta(t)}^{(1)})+|Z_{t+\zeta(t)}^{(1)}|+2]dt-Z_t^{(1)}dB_t, $ & $ t\in[0, T];$\\ $Y_t^{(1)}=$ & $\xi_t^{(1)}, $ & $t\in[T, T+K];$\\ $Z_t^{(1)}=$ & $\eta_t^{(1)}, $ & $t\in[T, T+K],$ \end{tabular}\right.$$ $$ \left\{ \begin{tabular}{rll}
$-dY_t^{(2)}=$ & $E^{\mathcal{F}_t}[Y_{t+\delta(t)}^{(2)}+2 |\cos Y_{t+\delta(t)}^{(2)}|+\sin Z_{t+\zeta(t)}^{(2)}-2]dt-Z_t^{(2)}dB_t, $ & $ t\in[0, T];$\\ $Y_t^{(2)}=$ & $\xi_t^{(2)}, $ & $t\in[T, T+K];$\\ $Z_t^{(2)}=$ & $\eta_t^{(2)}, $ & $t\in[T, T+K],$ \end{tabular}\right. $$ where $\xi_t^{(1)}\geq \xi_t^{(2)}, t\in [T, T+K].$
As both generators depend on the anticipated term of $Z$ and neither of them is increasing in the anticipated term of $Y$, we cannot apply Peng and Yang's comparison theorem to compare $Y^{(1)}$ and $Y^{(2)}.$ However, the following holds: \begin{center}
$x+\sin (2x)+|u|+2 \geq y+2 |\cos y| + \sin v-2,\ for\ all\ x\geq y, x, y\in \mathbb{R}, u, v \in \mathbb{R}^d,$ \end{center} which implies (3.5)-(3.7), then according to Theorem \ref{Theorem 3.1}, we get $Y_t^{(1)}\geq Y_t^{(2)},\ a.e.,\ a.s..$ \end{example}
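The pointwise inequality between the two generators can also be checked numerically. The Python sketch below (with scalar $u, v$ standing in for the $\mathbb{R}^d$-valued arguments) samples random $x\geq y$; the inequality follows from the elementary bounds $x+\sin(2x)+|u|+2 \geq x+1$ and $y+2|\cos y|+\sin v-2 \leq y+1$:

```python
import math
import random

random.seed(0)

def f1(x, u):
    # x + sin(2x) + |u| + 2  >=  x + 1, since sin(2x) >= -1 and |u| >= 0
    return x + math.sin(2 * x) + abs(u) + 2

def f2(y, v):
    # y + 2|cos y| + sin v - 2  <=  y + 2 + 1 - 2 = y + 1
    return y + 2 * abs(math.cos(y)) + math.sin(v) - 2

# hence x >= y forces f1(x, u) >= f2(y, v) for all u, v
for _ in range(10000):
    y = random.uniform(-50, 50)
    x = y + random.uniform(0, 50)
    u = random.uniform(-10, 10)
    v = random.uniform(-10, 10)
    assert f1(x, u) >= f2(y, v)
```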
\begin{remark}\label{Remark 3.1} By the same way, for the case when $\delta=\zeta$, (3.5)-(3.7) can be replaced by (3.6) together with \begin{center} $ f_1(t, y, z, \xi_{t+\delta(t)}^{(1)}, \eta_{t+\zeta(t)}^{(1)}) \geq f_2(t, y, z, \xi_{t+\delta(t)}^{(2)}, \eta_{t+\zeta(t)}^{(2)}),\ \ a.e., a.s.. $ \end{center} \end{remark}
\begin{remark}\label{Remark 3.2} If $f_1$ and $f_2$ are independent of the anticipated term of $Z$, then (3.5)-(3.7) reduces to $$ f_1(t, y, z, \theta_{t+\delta(t)}^{(1)}) \geq f_2(t, y, z, \theta_{t+\delta(t)}^{(2)}).\eqno(3.10) $$ Note that this conclusion is just with respect to the ABSDEs (2.1). \end{remark}
\begin{remark}\label{Remark 3.3} The generators $f_1$ and $f_2$ will satisfy (3.10), if for all $t\in [0, T],$ $y\in \mathbb{R},$ $z\in \mathbb{R}^d,$ $\theta\in L_\mathcal{F}^2(t, T+K; \mathbb{R}),$ $r\in [t, T+K],$ $f_1(t, y, z, \theta_r)\geq f_2(t, y, z, \theta_r),$ together with one of the following: \item{(i)} for all $t\in [0, T],$ $y\in \mathbb{R},$ $z\in \mathbb{R}^d,$ $f_1(t, y, z, \cdot)$ is increasing, i.e., $f_1(t, y, z, \theta_r)\geq f_1(t, y, z, \theta_r^\prime),$ if $\theta \geq \theta^\prime,$ $\theta, \theta^\prime\in L_\mathcal{F}^2(t, T+K; \mathbb{R}),$ $r\in [t, T+K];$ \item{(ii)} for all $t\in [0, T],$ $y\in \mathbb{R},$ $z\in \mathbb{R}^d,$ $f_2(t, y, z, \cdot)$ is increasing, i.e., $f_2(t, y, z, \theta_r)\geq f_2(t, y, z, \theta_r^\prime),$ if $\theta \geq \theta^\prime,$ $\theta, \theta^\prime\in L_\mathcal{F}^2(t, T+K; \mathbb{R}),$ $r\in [t, T+K].$
Note that the latter is just the case that Peng-Yang \cite{PY} discussed (see Theorem \ref{Theorem 2.2}). \end{remark}
\begin{remark}\label{Remark 3.4} The generators $f_1$ and $f_2$ will satisfy (3.10), if \begin{center} $ f_1(t, y, z, \theta_r)\geq \tilde{f}(t, y, z, \theta_r)\geq f_2(t, y, z, \theta_r), $ \end{center} for all $t\in [0, T],$ $y\in \mathbb{R},$ $z\in \mathbb{R}^d,$ $\theta \in L_\mathcal{F}^2(t, T+K; \mathbb{R}), r\in [t, T+K].$ Here the function $\tilde{f}(t, y, z, \cdot)$ is increasing, for all $t\in [0, T],$ $y\in \mathbb{R},$ $z\in \mathbb{R}^d,$
i.e., $\tilde{f}(t, y, z, \theta_r)\geq \tilde{f}(t, y, z, \theta_r^\prime),$ if $\theta_r\geq \theta_r^\prime,$ $\theta, \theta^\prime\in L_\mathcal{F}^2(t, T+K; \mathbb{R}), r\in [t, T+K].$ \end{remark}
\begin{example}\label{Example 3.2} The following three functions satisfy the conditions in Remark \ref{Remark 3.4}: $f_1(t, y, z, \theta_r)=E^{\mathcal{F}_t}[\theta_r+2 \cos \theta_r+1]$, $\tilde{f}(t, y, z, \theta_r)=E^{\mathcal{F}_t}[\theta_r+\cos \theta_r]$, $f_2(t, y, z, \theta_r)=E^{\mathcal{F}_t}[\theta_r+\sin (2\theta_r)-2].$ \end{example}
\end{document}
\begin{document}
\title[Hybrid subconvexity and uniform sup norm bounds of Eisenstein series]{Hybrid subconvexity for class group $L$-functions and uniform sup norm bounds of Eisenstein series} \subjclass[2010]{11F03(primary), and 11L07(secondary)} \author{Asbjørn Christian Nordentoft} \address{Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø, Denmark} \email{\href{mailto:acnordentoft@outlook.com}{acnordentoft@outlook.com}} \date{\today} \begin{abstract} In this paper we study hybrid subconvexity bounds for class group $L$-functions associated to quadratic extensions $K/\ensuremath\mathbb{Q}$ (real or imaginary). Our proof relies on relating the class group $L$-functions to Eisenstein series evaluated at Heegner points using formulas due to Hecke. The main technical contribution is the following uniform sup norm bound for Eisenstein series;
$$E(z,1/2+it)\ll_\ensuremath\varepsilon y^{1/2} (|t|+1)^{1/3+\ensuremath\varepsilon},\quad y\gg 1,$$ extending work of Blomer and Titchmarsh. Finally we propose a uniform version of the sup norm conjecture for Eisenstein series. \end{abstract} \maketitle \section{Introduction} This paper is concerned with the family of $L$-functions $L_K(s,\chi)$ associated to a character $\chi$ of the (wide) class group $\text{\rm Cl }(K)$ of a quadratic field extension $K/\ensuremath\mathbb{Q}$ (real or imaginary) of discriminant $D$. One of our results is a hybrid subconvexity bound in terms of the discriminant $D$ and the archimedean parameter $t$ where $s=1/2+it$ (both for individual class group $L$-functions and for the second moment of the entire family). We will do this by relating the subconvexity bound for class group $L$-functions to sup norm bounds of Eisenstein series via formulas due to Hecke. Our second main result is what we will call a {\it uniform sup norm bound} of Eisenstein series. \subsection{Class group $L$-functions}The study of analytic properties of the family of class group $L$-functions was initiated by Duke, Friedlander and Iwaniec in \cite{DuFrIw95} where they computed the second moment of class group $L$-functions in the limit $D\rightarrow -\infty$. Other notable works on the family of class group $L$-functions include \cite{Bl04}, \cite{DuFrIw02}, \cite{BlHaMi07}, \cite{Te11}.\\ Our approach in the imaginary quadratic case is to use a classical formula of Hecke, which relates class group $L$-functions to Eisenstein series evaluated at Heegner points; \begin{align}
\label{heegnersum} L_K(s, \chi)= \frac{2^{s+1} \zeta(2s) |D|^{-s/2}}{\omega_K } \sum_{\mathfrak{a}}\chi(\mathfrak{a}) E(z_{\mathfrak{a}},s),
\end{align} where the sum runs over a complete set of representatives for the class group of the imaginary quadratic field $K$ of discriminant $D$, $z_{\mathfrak{a}}\in \ensuremath\mathbb{H}$ is the associated Heegner point and $\omega_K \in \{2,4,6\}$. There is a real quadratic analogue also due to Hecke (see (\ref{heegnerintegral}) below). These formulas give a connection between subconvexity bounds and the so-called {\it sup norm problem} for Eisenstein series, which we will introduce shortly. \begin{remark}The connection between the sup norm problem and subconvexity estimates can be traced back to Sarnak \cite[(4.19)]{Sa95}. However, this paper, together with the recent work of Hu and Saha \cite{HuSaha19}, seems to be the first time sup norm results have been used to obtain new subconvexity results. Hu and Saha apply sup norm bounds of automorphic forms on quaternion algebras (in the depth aspect) to obtain subconvexity estimates in the depth aspect for $L(1/2, f \otimes \theta_\chi)$, where $f$ is a quaternionic automorphic form and $\theta_\chi$ is an essentially fixed theta series. \end{remark} \begin{remark}The formula (\ref{heegnersum}) was also the starting point for Templier in \cite{Te11}, where it was combined with equidistribution of Heegner points to give an alternative computation (compared with \cite{DuFrIw95}) of the second moment of the family of class group $L$-functions as $D\rightarrow -\infty$. Similarly Michel and Venkatesh \cite{MichelVenk07} used an analogue of (\ref{heegnersum}) in the case of cusp forms due to Zhang \cite{Zh01}, \cite{Zh04} to deduce non-vanishing results for the central values of the corresponding Rankin-Selberg $L$-functions. 
The approach of Michel and Venkatesh was then applied by Dittmer, Proulx and Seybert in \cite{DiPrSe15} to deduce non-vanishing for class group $L$-functions as well (their method only shows non-vanishing for one class group character for each $K$, whereas Blomer in \cite{Bl04} achieved a much stronger result using mollification). \end{remark}
\subsection{The sup norm problem}Now let $\Gamma_0(1)=\hbox{\rm PSL}_2(\ensuremath\mathbb{Z})$ and denote by $X_0(1):=\Gamma_0(1)\backslash \ensuremath\mathbb{H}$ the modular curve. The sup norm problem for $X_0(1)$ is concerned with bounds of the following form for some fixed $\theta>0$;
$$ \sup_{z\in C} |u_j(z)| \ll_{C} t_j^{\theta}, $$ where $u_j$ is a Maass form of level 1, $t_j$ is the spectral parameter and $C\subset \ensuremath\mathbb{H}$ is compact. The case $\theta=1/4+\ensuremath\varepsilon$ is known as the {\it convexity bound} and is elementary to prove, but it is conjectured \cite[Conjecture 3.10]{Sa95} that any $\theta>0$ is admissible. Iwaniec and Sarnak in their seminal paper \cite{IwSa} were the first to go beyond the convexity bound by proving the bound $\ll_\ensuremath\varepsilon t_j^{5/24+\ensuremath\varepsilon}$. \\ In this paper we will focus on the analogue for the continuous spectrum which is constituted by Eisenstein series. This means that we are concerned with bounds of the type
\begin{align} \label{eissup} \sup_{z\in C} |E(z,1/2+it)| \ll_{C} (|t|+1)^{\theta}, \end{align}
where $\theta>0$ is fixed and $C$ is compact. In this case the convexity bound is $\theta=1/2+\ensuremath\varepsilon$, and again the sup norm conjecture predicts that any $\theta>0$ is admissible. Iwaniec and Sarnak's method also applies in this case and yields similarly the bound $\ll_\ensuremath\varepsilon (|t|+1)^{5/12+\ensuremath\varepsilon}$. In \cite{Yo18} Young used a slight modification of the Iwaniec--Sarnak method to prove the bound $\ll_\ensuremath\varepsilon (|t|+1)^{3/8+\ensuremath\varepsilon}$. In \cite{Bl18} Blomer improved this using exponential sum methods, building on earlier work of Titchmarsh \cite{Ti}, and proved the Weyl type bound $\ll_\ensuremath\varepsilon (|t|+1)^{1/3+\ensuremath\varepsilon}$. Finally the sup norm problem for Eisenstein series over general number fields has been dealt with in the work of Assing \cite{Assing19}.\\
Plugging Blomer's result into (\ref{heegnersum}) immediately yields a subconvexity bound for $L_K(s, \chi)$ in the $t$-aspect, which recovers a result of S\"{o}hne \cite{Soehne97} (the conductor of $L_K(s, \chi)$ is $|D|(|t|+1)^2 $, which means that the convexity bound is $\ll_\ensuremath\varepsilon |D|^{1/4+\ensuremath\varepsilon}(|t|+1)^{1/2+\ensuremath\varepsilon} $). If, however, one wants a hybrid subconvexity estimate, one needs to control the $D$-dependence in (\ref{heegnersum}). This leads to what we will call the {\it uniform sup norm problem}, that is, sup norm bounds with an explicit dependence on $z$. In a similar vein Huang and Xu \cite{HuangXu17} studied sup norm bounds of Eisenstein series with level and obtained bounds uniform in both the spectral parameter and the level.
\subsection{Statement of results}Our first result is the following translation between uniform sup norm bounds of the Eisenstein series $E(z,s)$ and hybrid subconvexity bounds for $L_K(s, \chi)$. Let
\begin{align}\label{F}\mathcal{F}:=\{ z\in \ensuremath\mathbb{H}\mid -1/2\leq \operatorname{Re} z\leq 0, |z|\geq 1\text{ \rm or } 0< \operatorname{Re} z< 1/2, |z|>1\},\end{align}
denote the standard fundamental domain for $\Gamma_0(1)$.
\begin{thm} \label{translationthm} Assume the following bound holds uniformly for all $z=x+iy\in \mathcal{F}$;
\begin{align} \label{bound1} E(z,1/2+it) \ll y^\delta (|t|+1)^\theta, \end{align} with $1/2\leq \delta\leq 1$ and $\theta>0$. Then it follows that
\begin{align}\label{singleL} L_K(1/2+it, \chi )\ll_{\ensuremath\varepsilon} |D|^{1/4+\ensuremath\varepsilon}\, (|t|+1)^{\theta+\ensuremath\varepsilon}, \end{align} for any $\ensuremath\varepsilon>0$ and $\chi\in \widehat{\text{\rm Cl }(K)}$, a (wide) class group character of a quadratic extension $K/\ensuremath\mathbb{Q}$ (real or imaginary) of discriminant $D$. \\ Furthermore it also follows from (\ref{bound1}) that
\begin{align}\label{squareL}\sum_{\chi \in \widehat{\text{\rm Cl }(K)}} |L_K(1/2+it, \chi )|^2\ll_{\ensuremath\varepsilon} |D|^{\delta+\ensuremath\varepsilon}\, (|t|+1)^{2\theta+\ensuremath\varepsilon}, \end{align}
for any $\ensuremath\varepsilon>0$. \end{thm} The second part of this paper is concerned with proving a result of the type (\ref{bound1}). As we will see in Section \ref{extyoung} below, the results of Young \cite{Yo18} imply the following.
\begin{thm}[M. Young]\label{young} For $z\in \mathcal{F}$, the standard fundamental domain (\ref{F}) for $\Gamma_0(1)$, we have
\begin{equation} \label{youngbound} E(z, 1/2+it) \ll_\ensuremath\varepsilon y^{1/2} (|t|+1)^{3/8+\ensuremath\varepsilon}, \end{equation} for any $\ensuremath\varepsilon>0$. \end{thm} \begin{remark}
Huang and Xu \cite[Theorem 1.1]{HuangXu17} obtained the slightly stronger bound $E(z,s)\ll_\ensuremath\varepsilon y^{1/2}+|t|^{3/8+\ensuremath\varepsilon}$. \end{remark}
It turns out however to be a much more delicate task to upgrade Blomer's Weyl type estimate to a uniform one, which is the main technical contribution of this paper. Our result is the following. \begin{thm} \label{mainthm} For $z\in \mathcal{F}$, the standard fundamental domain (\ref{F}) for $\Gamma_0(1)$, we have
\begin{equation} \label{mainbound} E(z, 1/2+it) \ll_\ensuremath\varepsilon y^{1/2} (|t|+1)^{1/3+\ensuremath\varepsilon}, \end{equation} for any $\ensuremath\varepsilon>0$. \end{thm} Combining this bound with Theorem \ref{translationthm}, we arrive at the following. \begin{cor} Let $K/\ensuremath\mathbb{Q}$ be a quadratic extension (real or imaginary) of discriminant $D$ and $\chi$ a (wide) class group character of $K$. Then
\begin{equation}\label{L-bound} L_K(1/2+it, \chi )\ll_{\ensuremath\varepsilon} |D|^{1/4+\ensuremath\varepsilon}\, (|t|+1)^{1/3+\ensuremath\varepsilon}, \end{equation} and
\begin{align}\label{sqsum2}\sum_{\chi \in \widehat{\text{\rm Cl }(K)}} |L_K(1/2+it, \chi )|^2\ll_{\ensuremath\varepsilon} |D|^{1/2+\ensuremath\varepsilon}\, (|t|+1)^{2/3+\ensuremath\varepsilon}, \end{align} for any $\ensuremath\varepsilon>0$. \end{cor}
\begin{remark} Observe that for imaginary quadratic fields, (\ref{sqsum2}) corresponds to Lindelöf on average in the $D$-aspect, since $h(K)\gg |D|^{1/2-\ensuremath\varepsilon}$. On the other hand, if $K/\ensuremath\mathbb{Q}$ is a real quadratic field with class number 1, (\ref{sqsum2}) just recovers (\ref{L-bound}). \end{remark} \begin{remark} As mentioned above it has been conjectured \cite[Conjecture 3.10]{Sa95} that the following should hold for all $\ensuremath\varepsilon >0$;
\begin{align} \label{conjecture4} \sup_{z\in C} |E(z,1/2+it)| \ll_{\ensuremath\varepsilon,C} (|t|+1)^\ensuremath\varepsilon, \end{align}
where $C\subset \ensuremath\mathbb{H}$ is a compact set. This implies the Lindelöf hypothesis in the $t$-aspect for the class group $L$-function. In the last section we will speculate what the uniform analogue of (\ref{conjecture4}) should be. \end{remark}
\subsubsection{Hybrid subconvexity bounds for class group $L$-functions}The first to obtain subconvexity for class group $L$-functions seem to have been S\"{o}hne \cite{Soehne97} in the $t$-aspect and Duke, Friedlander and Iwaniec \cite{DuFrIw02} in the $D$-aspect (which was then improved numerically by Blomer, Harcos and Michel \cite{BlHaMi07}). The first to achieve subconvexity in both aspects simultaneously (with an unspecified exponent) were Michel and Venkatesh \cite{MichelVenk10} as a consequence of their solution of the subconvexity problem for $\hbox{\rm GL}_2$ automorphic $L$-functions (for general number fields). The results of Michel and Venkatesh were then later made explicit by Wu \cite{HuAndersen18}. More precisely \cite[Corollary 1.4]{HuAndersen18} states that if $\pi$ is an automorphic representation of $\hbox{\rm GL}_2(\ensuremath\mathbb{A}_\ensuremath\mathbb{Q})$ with (unitary) central character $\omega$, then we have \begin{align}\label{Huesub}L(\pi,1/2)\ll \ensuremath {\bf C}(\pi)^{1/4}\left( \frac{\ensuremath {\bf C}(\pi)}{\ensuremath {\bf C}(\omega)}\right)^{-\frac{1-2\theta}{40}}\ensuremath {\bf C}(\omega)^{-1/160},\end{align} where ${\bf C}(\pi),{\bf C}(\omega)$ denote the analytic conductors of respectively $\pi,\omega$ and $\theta$ is any approximation towards the Ramanujan--Petersson conjecture. Let us briefly explain how to extract a subconvexity bound for class group $L$-functions from (\ref{Huesub}). \\
Let $\chi$ be a (wide) class group character of the quadratic extension $K/\ensuremath\mathbb{Q}$ of conductor $D$, $\theta_\chi\in \mathcal{M}_1(\Gamma_0(|D|), \chi_D)$ the theta series associated to $\chi$ (see \cite[Section 14.3]{IwKo}) and $\pi_\chi$ the corresponding automorphic representation of $\hbox{\rm GL}_2(\ensuremath\mathbb{A}_\ensuremath\mathbb{Q})$. The analytic conductor of the automorphic representation $\pi_\chi \otimes |\cdot|_{\ensuremath\mathbb{A}_\ensuremath\mathbb{Q}}^{it}$ is given by $D(|t|+1)^2$ and the same is true for the analytic conductor of its central character. By plugging this into (\ref{Huesub}) above, we thus get
$$L(\pi_\chi \otimes |\cdot|_{\ensuremath\mathbb{A}_\ensuremath\mathbb{Q}}^{it},1/2)=L_K(1/2+it, \chi )\ll \left(|D|^{1/4}\, (|t|+1)^{1/2}\right)^{1-1/40}, $$ which is the state of the art for hybrid subconvexity. We observe that the bound (\ref{L-bound}) improves on this in certain regimes of $t$ and $D$. Combining the result of Wu with ours, we arrive at the following improvement. \begin{cor}\label{hybridsub} Let $K/\ensuremath\mathbb{Q}$ be a quadratic extension of discriminant $D$ and $\chi$ a (wide) class group character of $K$. Then we have
\begin{equation}\label{Lbound} L_K(1/2+it, \chi )\ll_\ensuremath\varepsilon \begin{cases}|D|^{1/4+\ensuremath\varepsilon}\, (|t|+1)^{1/3+\ensuremath\varepsilon},& \text{for } t> |D|^{3/74}\\
\left( |D|^{1/4}\, (|t|+1)^{1/2}\right)^{1-1/40},& \text{for } t\leq |D|^{3/74} \end{cases}, \end{equation} for any $\ensuremath\varepsilon>0$. \end{cor} \begin{remark} The state of the art hybrid subconvexity bound for $\hbox{\rm GL}_1$ automorphic $L$-functions \cite[Corollary 1.2]{Wu19} is very similar to the above; the best hybrid subconvexity bound is obtained by combining the results of Wu \cite{Wu19} and those of S\"{o}hne \cite{Soehne97}. Notice that the bounds obtained in these two papers depend on the number field and are thus not relevant in our hybrid setting. \end{remark}
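The crossover exponent $3/74$ in Corollary \ref{hybridsub} comes from equating the two bounds in (\ref{Lbound}). As a sanity check (purely illustrative, not part of the argument), exact rational arithmetic recovers it:

```python
from fractions import Fraction as F

# Exponents of |D| and (|t|+1) in the two bounds of Corollary 1.6:
#   bound A: |D|^{1/4} (|t|+1)^{1/3}
#   bound B: (|D|^{1/4} (|t|+1)^{1/2})^{1 - 1/40}
dA, tA = F(1, 4), F(1, 3)
dB, tB = F(1, 4) * F(39, 40), F(1, 2) * F(39, 40)

# Writing t = |D|^alpha, bound A is smaller exactly when
# dA + tA*alpha < dB + tB*alpha, i.e. alpha > (dA - dB)/(tB - tA).
crossover = (dA - dB) / (tB - tA)
print(crossover)  # 3/74
```

So, writing $t=|D|^\alpha$, the bound (\ref{L-bound}) wins precisely for $\alpha>3/74$, as stated.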
\begin{remark} In the special case where $\chi$ is a genus character, we have the following factorization in terms of quadratic Dirichlet $L$-functions; $$ L_K(s, \chi)= L(s, \left(\tfrac{d_1}{\cdot}\right))L(s, \left(\tfrac{d_2}{\cdot}\right)),$$ where $\chi$ corresponds to the factorization $d_1 d_2=D$. In this case it follows from \cite[(1.8)]{Young17} that we have the following improvement on the above;
$$L_K(1/2+it, \chi)\ll_\ensuremath\varepsilon |D|^{1/6+\ensuremath\varepsilon}(|t|+1)^{1/3+\ensuremath\varepsilon}.$$ \end{remark}
\section{From sup norm bounds to subconvexity}
In this section we will prove Theorem \ref{translationthm}. First of all we will introduce some background on quadratic fields and the formulas due to Hecke mentioned above.
\subsection{Quadratic fields} We will now recall a few standard facts about quadratic fields and refer to \cite[Chapter 22]{IwKo}, \cite[Section 1]{Sarnak85} and \cite[Section 2]{DuImTo} for more background.\\ Let $K/\ensuremath\mathbb{Q}$ be a quadratic extension of number fields; then we can write $K=\ensuremath\mathbb{Q}[\sqrt{D}]$ where $D$ is the discriminant of $K$. We denote by $\text{\rm Cl }(K)$ the class group of $K$ consisting of classes of fractional ideals modulo principal ideals. According to Gauss, each fractional ideal class $\mathfrak{a}$ corresponds to an equivalence class of integral binary quadratic forms of discriminant $D$ modulo integral linear transformations. When $D<0$ we can to each $\mathfrak{a}\in\text{\rm Cl }(K)$ associate a Heegner point on the modular curve given by;
$$ z_\mathfrak{a}:=\frac{-b+i\sqrt{|D|}}{2a}\in X_0(1),$$ where $Q=aX^2+bXY+cY^2$ is any representative of $\mathfrak{a}$. We denote by $h(K)$ the size of the class group and we have the following (ineffective) bound due to Siegel;
\begin{align}\label{Siegel} |D|^{1/2-\ensuremath\varepsilon}\ll_\ensuremath\varepsilon h(K)\ll_\ensuremath\varepsilon |D|^{1/2+\ensuremath\varepsilon}.\end{align}
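For concreteness, the class number of an imaginary quadratic field can be computed by enumerating reduced forms. The following brute-force sketch (illustrative only, using the standard reduction convention; it is not an algorithm used in this paper, and for non-fundamental $D$ it counts classes of the corresponding order) makes the objects above testable for small $|D|$:

```python
def class_number(D):
    """Count reduced integral binary quadratic forms a x^2 + b x y + c y^2
    with b^2 - 4ac = D < 0: |b| <= a <= c, and b >= 0 whenever |b| = a
    or a = c.  For fundamental D this count equals h(Q(sqrt(D)))."""
    assert D < 0 and D % 4 in (0, 1)
    h = 0
    a = 1
    while 3 * a * a <= -D:              # reduced forms satisfy a <= sqrt(|D|/3)
        for b in range(-a, a + 1):
            if (b * b - D) % (4 * a) == 0:
                c = (b * b - D) // (4 * a)
                if c >= a and not (b < 0 and (-b == a or a == c)):
                    h += 1
        a += 1
    return h

print([class_number(D) for D in (-3, -4, -23, -47)])  # [1, 1, 3, 5]
```

For instance $h(\ensuremath\mathbb{Q}[\sqrt{-23}])=3$, realized by the reduced forms $(1,1,6)$ and $(2,\pm 1,3)$.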
When $D> 0$, we can analogously to any ideal class $\mathfrak{a}$ in the (wide) class group of $K$ associate a certain primitive, closed geodesic $C_\mathfrak{a}$ on $X_0(1)$. If $\mathfrak{a}$ corresponds to some integral binary quadratic form $Q=aX^2+bXY+cY^2$, then $C_\mathfrak{a}$ is defined as the projection onto $X_0(1)$ of a certain arc on the semi-circle $S_Q\subset \ensuremath\mathbb{H}$ defined by the end-points $\frac{-b\pm \sqrt{D}}{2a}$ (see the references above for the precise definition). The hyperbolic line element on $X_0(1)$ is given by $|ds|= |dz|/y$ and $C_\mathfrak{a}$ has hyperbolic length $2\log \epsilon_K$, where $\epsilon_K$ is the fundamental unit of $K$. Similar to the imaginary quadratic case we have the (ineffective) bound;
\begin{align}\label{Siegel2} |D|^{1/2-\ensuremath\varepsilon}\ll_\ensuremath\varepsilon h(K)\log \epsilon_K \ll_\ensuremath\varepsilon |D|^{1/2+\ensuremath\varepsilon}, \end{align} also due to Siegel.
\subsection{Hecke's formula for class group $L$-functions} For a real or imaginary quadratic extension $K/\ensuremath\mathbb{Q}$ and a character $\chi$ of $\text{\rm Cl }(K)$, we associate the class group $L$-function absolutely convergent for $\operatorname{Re} s>1$; \begin{align} \label{classgroupL} L_K(s,\chi):= \sum_{\mathfrak{a}} \chi(\mathfrak{a}) N_K(\mathfrak{a})^{-s}=\prod_{\mathfrak{p}}(1-\chi(\mathfrak{p})N_K(\mathfrak{p})^{-s})^{-1}, \end{align} where $N_K$ is the norm and the sum runs over all integral ideals of $K$ and the product is taken over the prime ideals of $K$. The class group $L$-functions admit analytic continuation and functional equations, which, as we will see shortly, follow from the same properties for the non-holomorphic Eisenstein series.\\
The connection between class group $L$-functions and Eisenstein series is given by a beautiful formula due to Hecke. In the introduction we already mentioned that for imaginary quadratic extensions $K/\ensuremath\mathbb{Q}$, the formula reads \cite[(22.58)]{IwKo}; \begin{align*}
L_K(s, \chi)= \frac{2^{s+1} \zeta(2s) |D|^{-s/2}}{\omega_K } \sum_{\mathfrak{a}}\chi(\mathfrak{a}) E(z_{\mathfrak{a}},s),
\end{align*} where the sum runs over a complete set of representatives for the class group of $K$, $z_{\mathfrak{a}}$ is the associated Heegner point and $\omega_K\in\{2,4,6\}$ denotes the number of roots of unity in $K$.\\ For real quadratic fields, we have similarly the following formula \cite[(7.7)]{DuImTo}; \begin{align}
\label{heegnerintegral} L_K(s, \chi)= \frac{\zeta(2s) D^{-s/2} \Gamma(s)}{\Gamma(s/2)^2} \sum_{\mathfrak{a}} \chi(\mathfrak{a}) \int_{C_\mathfrak{a}} E(z,s)y^{-1}|dz|.
\end{align} We observe that analytic continuation and functional equation for $L_K(s, \chi)$ now follows from the corresponding properties of the Eisenstein series \cite[Theorem 6.5]{Iw}.
\subsection{Proof of Theorem \ref{translationthm}} In this section we will prove Theorem \ref{translationthm}. To do this we will need a lemma that bounds averages over Heegner points (resp. cycles) of the function $y:X_0(1)\rightarrow \ensuremath\mathbb{R}_+$ defined by $y(z):=\operatorname{Im} (z_\mathcal{F})$, where $z_\mathcal{F}\in \ensuremath\mathbb{H}$ is the representative of $z\in X_0(1)$ which lies in $\mathcal{F}$, the standard fundamental domain (\ref{F}) for $\Gamma_0(1)$. Observe that this function is continuous. \begin{lemma}\label{boundcongruence} Let $K/ \ensuremath\mathbb{Q}$ be a quadratic field of discriminant $D$. Then we have for any $\delta>0$ and $\ensuremath\varepsilon>0$;
$$ \sum_{\mathfrak{a}\in \Cl(K)} \begin{cases} y(z_\mathfrak{a})^\delta & \text{\rm if } D<0,\\ \int_{C_\mathfrak{a}} y(z)^\delta \, |ds| &\text{\rm if } D>0,\end{cases} \ll_\ensuremath\varepsilon |D|^{\max(\delta, 1)/2+\ensuremath\varepsilon}. $$ \end{lemma} \begin{proof} Assume $D<0$. The representative of $z_\mathfrak{a}\in X_0(1)$ which lies in $\mathcal{F}$, is exactly given by
$$ (z_\mathfrak{a})_\mathcal{F}=\frac{-b+i\sqrt{|D|}}{2a}, $$ where the integral binary quadratic form $aX^2+bXY+cY^2$ of discriminant $D$ corresponds to $\mathfrak{a}$ and $(a,b,c)$ is reduced \cite[(22.12)]{IwKo}, meaning that; $$ -a<b\leq a\leq c\quad \text{or} \quad -a\leq b\leq a=c. $$
Since $\mathcal{F}\subset \{z\in \ensuremath\mathbb{H}\mid \operatorname{Im} z\geq \sqrt{3}/2\}$, we conclude that $a\ll \sqrt{|D|}$ and thus we get; \begin{align*}
\sum_{\mathfrak{a}\in \Cl(K)} y(z_\mathfrak{a})^\delta =& |D|^{\delta/2}\sum_{a>0} \frac{\# \{a,b,c\mid b^2-4ac=D, (a,b,c)\text{ reduced} \}}{(2a)^\delta}\\
\ll& |D|^{\delta/2}\sum_{0<a\ll |D|^{1/2}} \frac{\rho_D(a)}{a^\delta}, \end{align*}
where $\rho_D(a)=\# \{ 0<b\leq 2a \mid b^2\equiv D\text{ \rm mod } 4a\}$. It is well-known \cite[p. 521]{IwKo} that $\rho_D$ is multiplicative with $\rho_D(p^\alpha)=1+\chi_D(p)$ if $p\nmid D$, $\rho_D(p)=1$ if $p|D$ and $\rho_D(p^\alpha)=0$ if $p|D$, $\alpha>1$, which implies the bound $\rho_D(a)\ll \sum_{d|a}1 \ll_\ensuremath\varepsilon a^{\ensuremath\varepsilon} $. Thus we conclude that
$$ \sum_{\mathfrak{a}\in \Cl(K)} y(z_\mathfrak{a})^\delta \ll_\ensuremath\varepsilon |D|^{\delta/2} \sum_{0<a\ll \sqrt{|D|}} \frac{a^\ensuremath\varepsilon}{a^\delta}\ll |D|^{\max(\delta,1)/2+\ensuremath\varepsilon}, $$ as wanted.\\
Now we turn to the case $D>0$. We denote by $\Omega_D$ all integral binary quadratic forms of discriminant $D$ and for $Q=aX^2+bXY+cY^2\in \Omega_D$, we denote by $S_Q$ the semi-circle in $\ensuremath\mathbb{H}$ with end-points $\frac{-b\pm\sqrt{D}}{2a}$. Then it follows from an easy lemma \cite[Lemma 6]{DuImTo11} (observe that they use a different looking but equivalent measure) that; \begin{align}
\label{furtherequal}\sum_{\mathfrak{a}\in \Cl(K)} \int_{C_\mathfrak{a}} y(z)^\delta \, |ds|=\sum_{Q\in \Omega_D} \int_{S_Q \cap \mathcal{F}} y(z)^\delta \, |ds|, \end{align} where $\mathcal{F}$ is the standard fundamental domain (\ref{F}) for $\Gamma_0(1)$.\\ Now we take the quotient from the left by $\Gamma_\infty=\langle T \rangle$ where $T=\begin{psmallmatrix} 1&1\\0&1\end{psmallmatrix}$, which rewrites (\ref{furtherequal}) as the following;
\begin{align}\label{finfty}\sum_{[Q]\in \Gamma_\infty\backslash\Omega_D} \int_{S_Q \cap \mathcal{F}^{(\infty)}} y(z)^\delta\, |ds|,\end{align}
where $\mathcal{F}^{(\infty)}:=\cup_{n\in \ensuremath\mathbb{Z}} T^{(n)}\mathcal{F}$ is the union of all horizontal translates of $\mathcal{F}$ (notice that the integral above does not depend on the choice of $Q$). Since $\mathcal{F}^{(\infty)}\subset \{z\in \ensuremath\mathbb{H}\mid \operatorname{Im} z\geq \sqrt{3}/2\}$, we only get contributions in (\ref{finfty}) from quadratic forms $Q=aX^2+bXY+cY^2$ with $a\ll \sqrt{D}$ and furthermore we can pick representatives of $\Gamma_\infty\backslash\Omega_D$ satisfying $|b|\leq 2a$. Now we recall that $|ds|=y^{-1}|dz|$ and use the trivial fact that the Euclidean circumference of $S_Q$ is $\ll \frac{D^{1/2}}{a}$, which implies; \begin{align*}
\sum_{[Q]\in \Gamma_\infty\backslash\Omega_D} \int_{S_Q \cap \mathcal{F}^{(\infty)}} y(z)^\delta\, |ds|&= \sum_{0<a\ll D^{1/2}} \sum_{\substack{[Q]\in \Gamma_\infty\backslash\Omega_D,\\ Q(1,0)=a}}\int_{S_{Q}\cap \mathcal{F}^{(\infty)}} y(z)^{\delta-1} |dz|\\ &\ll \sum_{0<a\ll D^{1/2}} \sum_{\substack{[Q]\in \Gamma_\infty\backslash\Omega_D,\\ Q(1,0)=a}} \frac{D^{1/2}}{a}\left( \max_{z\in S_{Q}\cap \mathcal{F}^{(\infty)}} y(z)^{\delta-1}\right)\\
&\ll D^{1/2+\max(\delta-1,0)/2}\sum_{0<a\ll D^{1/2}} \frac{\rho_D(a)}{a}. \end{align*} Now the conclusion follows exactly as in the case of negative $D$ using the bound $\rho_D(a)\ll_\ensuremath\varepsilon a^\ensuremath\varepsilon$ (which also holds for $D>0$ by the above). \end{proof}
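The function $\rho_D$ and the multiplicativity quoted in the proof above are easy to check numerically. The following sketch (illustrative only; brute force, with the sample choice $D=-23$) verifies the quoted properties:

```python
from math import gcd

def rho(D, a):
    # rho_D(a) = #{ 0 < b <= 2a : b^2 = D (mod 4a) }, as in the proof above
    return sum((b * b - D) % (4 * a) == 0 for b in range(1, 2 * a + 1))

D = -23
# multiplicativity on coprime arguments (Iwaniec-Kowalski, p. 521)
assert all(rho(D, m * n) == rho(D, m) * rho(D, n)
           for m in range(1, 20) for n in range(1, 20) if gcd(m, n) == 1)
# rho_D is at most 2 on prime powers, whence rho_D(a) << a^eps
print([rho(D, a) for a in (1, 2, 3, 6, 23)])  # [1, 2, 2, 4, 1]
```

Note in particular $\rho_{-23}(23)=1$, matching $\rho_D(p)=1$ for $p\mid D$.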
Now we are ready to prove Theorem \ref{translationthm}. \begin{proof}[Proof of Theorem \ref{translationthm}] Consider the case $D<0$. By feeding (\ref{bound1}) into (\ref{heegnersum}), we see that \begin{align}
L_K(1/2+it,\chi ) \ll_\ensuremath\varepsilon \frac{(|t|+1)^\ensuremath\varepsilon}{|D|^{1/4}} \sum_{\mathfrak{a}} y(z_\mathfrak{a})^{\delta} (|t|+1)^{\theta}, \end{align} where we used some standard estimates for $\zeta$ on $\operatorname{Re} s=1$. \\ Now since we assumed $\delta\leq 1$, it follows from Lemma \ref{boundcongruence} that
$$L_K(1/2+it,\chi) \ll_{\ensuremath\varepsilon} |D|^{1/4+\ensuremath\varepsilon} (|t|+1)^{\theta+\ensuremath\varepsilon},$$ as wanted. \\
To prove (\ref{squareL}), we observe that by orthogonality, the formula (\ref{heegnersum}) implies that
$$ \sum_\chi |L_K(1/2+it, \chi)|^2 = \frac{8 h(K) |\zeta(1+2it)|^2}{\omega_K^2 |D|^{1/2}}\sum_\mathfrak{a} |E(z_\mathfrak{a},1/2+it)|^2. $$ Thus by the assumption (\ref{bound1}), Siegel's bound (\ref{Siegel}) and standard estimates for the zeta function, we get
$$ \sum_\chi |L_K(1/2+it, \chi)|^2 \ll_\ensuremath\varepsilon (|t|+1)^{2\theta+\ensuremath\varepsilon} |D|^\ensuremath\varepsilon \sum_\mathfrak{a}y(z_\mathfrak{a})^{2\delta},$$ and the result follows directly from Lemma \ref{boundcongruence}.\\
The proof of (\ref{singleL}) for $D$ positive is exactly the same using Lemma \ref{boundcongruence} and Hecke's formula (\ref{heegnerintegral}) in the case $D>0$. \\ In order to prove (\ref{squareL}), we use orthogonality as above to get \begin{align*}
\sum_\chi |L_K(1/2+it, \chi)|^2 \ll_\ensuremath\varepsilon (|t|+1)^{2\theta+\ensuremath\varepsilon} \frac{h(K)}{D^{1/2}} \sum_\mathfrak{a}\left |\int_{C_\mathfrak{a}} y(z)^\delta |ds|\right|^2 . \end{align*} Now we apply Cauchy-Schwarz to bound the above by
$$ (|t|+1)^{2\theta+\ensuremath\varepsilon} \frac{h(K)\log \epsilon_K}{D^{1/2}} \sum_\mathfrak{a} \int_{C_\mathfrak{a}} y(z)^{2\delta}|ds|, $$ and the result follows from Lemma \ref{boundcongruence} and Siegel's bound (\ref{Siegel2}).
\end{proof}
\begin{remark} If one believes the sup norm conjecture (\ref{conjecture4}), Theorem \ref{translationthm} tells us in particular that the cancellations in individual Eisenstein series are strong enough to give the Lindelöf hypothesis for class group $L$-functions in the $t$-aspect. It is however conjectured that (\ref{eissup}) holds for eigenfunctions on any hyperbolic surface \cite[Conjecture 3.10]{Sa95}. So in some sense the $t$-aspect is not essentially arithmetic. This method is however not able to give subconvexity estimates in the $D$-aspect for individual $L$-functions. This is due to the fact that the sup norm bounds do not \lq \lq see{\rq \rq} the arithmetic of the Heegner points (they are uniform for $z$ in a fixed compact set) and the cancellation between Eisenstein series evaluated at the different Heegner points is exactly what gives rise to subconvexity behavior in the $D$-aspect. In the last section (see (\ref{conjecture})), we will state a uniform analogue of the conjecture (\ref{eissup}), which, using (\ref{squareL}), does give Lindelöf on average in the $D$-aspect for imaginary quadratic fields.\end{remark}
\section{Uniform sup norm bounds of Eisenstein series}
In this section we will prove the hybrid bound (\ref{youngbound}) and (\ref{mainbound}) for the classical Eisenstein series. The proof of (\ref{youngbound}) follows directly from \cite{Yo18}. The proof of (\ref{mainbound}) requires much more work and is an adaptation (and elaboration) of the argument in \cite{Bl18} building on \cite{Ti34}, which in turn is an extension of the van der Corput method \cite[Section 8.3]{IwKo}.
\subsection{Uniform bounds for Eisenstein series following Young}\label{extyoung} In \cite{Yo18} Young extends the method used by Iwaniec and Sarnak in \cite{IwSa} to give the first non-trivial result towards the sup norm conjecture for the modular curve. The main insight of Young was that one can choose a more efficient mollifier, which improves the bound for the continuous spectrum. The method of Iwaniec and Sarnak embeds respectively the cusp form and Eisenstein series into the entire spectrum of the modular curve. Then an application of the Selberg trace formula (with a carefully chosen test function) reduces the sup norm bound to a bound of the geometric side, which can be done with elementary means. The action of the Hecke operators plays a crucial role in the argument.\\ In \cite{Yo18} the sup norm bound is stated as a bound in the $t$-aspect with $z$ in a fixed compact set, but as Young also mentions the method yields something slightly stronger (this was also observed by Huang and Xu \cite[p. 2]{HuangXu17}).\\ The main inequality in Young's paper is \cite[(6.3)]{Yo18}, which gives \begin{align}
|E(z, 1/2+it)|^2 \ll_\ensuremath\varepsilon (N|t|)^\ensuremath\varepsilon \left( \frac{|t|}{N}+|t|^{1/2}(N+N^{1/2}y) \right), \end{align}
where $N$ is some parameter to be chosen appropriately. By inspecting \cite[Lemma 4.1, Lemma 5.1]{Yo18} one sees that the restrictions on the variables are $\log N\gg (\log t)^{2/3+\delta}$ for some fixed $\delta>0$ and $y\ll |t|^{100}$. In particular in the range $y\ll |t|^{1/4}$, we can put $N=|t|^{1/4}$ and get
$$ |E(z, 1/2+it)|^2 \ll_\ensuremath\varepsilon |t|^{3/4+\ensuremath\varepsilon}+ |t|^{3/4+\ensuremath\varepsilon}+ |t|^{5/8+\ensuremath\varepsilon}y. $$ From this we conclude
$$ |E(z, 1/2+it)| \ll_\ensuremath\varepsilon y^{1/2} |t|^{3/8+\ensuremath\varepsilon} , \quad 1\ll y\ll |t|^{1/4}.$$
In the range $y\gg |t|^{1/4}$, we have the trivial bound \cite[(3.2)]{Yo18}, which yields
$$ |E(z, 1/2+it)|\ll_\ensuremath\varepsilon y^{1/2}+ |t|^{3/8+\ensuremath\varepsilon}. $$ Combining the two concludes the proof of Theorem \ref{young}.
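The exponent bookkeeping behind the choice $N=|t|^{1/4}$ can be reproduced in exact rational arithmetic (an illustrative script, not part of the proof):

```python
from fractions import Fraction as F

N = F(1, 4)                      # exponent of N = |t|^{1/4}
exps = [1 - N,                   # |t| / N            -> |t|^{3/4}
        F(1, 2) + N,             # |t|^{1/2} N        -> |t|^{3/4}
        F(1, 2) + N / 2]         # |t|^{1/2} N^{1/2}  -> |t|^{5/8} (times y)
print(exps)                      # [Fraction(3, 4), Fraction(3, 4), Fraction(5, 8)]

# |E|^2 << t^{3/4} + t^{5/8} y << y t^{3/4} for y >> 1, hence |E| << y^{1/2} t^{3/8}
assert max(exps) / 2 == F(3, 8)
```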
\subsection{Titchmarsh's method for bounding Epstein zeta functions}
Now we turn to the proof of Theorem \ref{mainthm}. The following serves first of all as an extension of Blomer and Titchmarsh's work and secondly as an elaboration of some of the details, which are left out in \cite{Bl18}. The approach expresses the non-holomorphic Eisenstein series in terms of an {\it Epstein zeta function}, which is then bounded using the van der Corput method from the theory of exponential sums. Originally Titchmarsh considered only Epstein zeta functions associated to diagonal matrices and there are some technical difficulties in dealing with general Epstein zeta functions. Furthermore in order to get a bound uniform in the entries of the matrix defining the Epstein zeta function, one has to modify parts of the argument.\\
Given any positive definite matrix $Z\in \hbox{\rm GL}_2(\ensuremath\mathbb{R})$, we can consider the quadratic form $Q(\ensuremath \boldsymbol{x})=\ensuremath \boldsymbol{x} \, Z\, \ensuremath \boldsymbol{x} ^T$, $\ensuremath \boldsymbol{x}=(x_1,x_2)\in \ensuremath\mathbb{R}^2$ and the associated Epstein zeta function $$ E_{\text{\rm Epstein}}(Z, s):= \sum_{ \ensuremath \boldsymbol{x} \in \ensuremath\mathbb{Z}^2 \backslash (0,0)} Q(\ensuremath \boldsymbol{x})^{-s}, $$ which satisfies the functional equation $$ \Gamma_\ensuremath\mathbb{R}(2s)E_{\text{\rm Epstein}}(Z, s)= (\det Z)^{-1/2} \Gamma_\ensuremath\mathbb{R}(2(1-s))E_{\text{\rm Epstein}}(Z^{-1},1-s),$$ where $\Gamma_\ensuremath\mathbb{R} (s):=\pi^{-s/2} \Gamma(s/2)$.\\ Recall that this is related to the non-holomorphic Eisenstein series as follows \begin{equation} \zeta(2s)E(z,s)= y^s E_{\text{\rm Epstein}}(Z,s),\qquad Z=\begin{pmatrix}x^2 +y^2 & x \\ x & 1 \end{pmatrix}, \end{equation} which reduces the sup norm problem for Eisenstein series to bounding the Epstein zeta function. We may restrict to the case where $z\in \mathcal{F}$, the standard fundamental domain (\ref{F}) for $X_0(1)$, which corresponds to considering only matrices of the form $$ Z=\begin{pmatrix} a & b \\ b & 1 \end{pmatrix},$$
where $a\geq 1$ and $|b|\leq 1/2$.\\ The trivial estimate \cite[(3.2)]{Yo18}; $$E(z,1/2+it)\ll y^{1/2}+(t/y)^{1/2}$$
yields (\ref{mainbound}) in the range $|t|^{1/6}\ll y$ and thus in the sequel we may assume $a\ll |t|^{1/3}$ and thus also $|t|\gg 1$.
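Underlying the matrix $Z$ above is the identity $Q(x_1,x_2)=|x_1 z+x_2|^2$ for $z=x+iy$, which is what makes the Epstein zeta function the right object here. A quick numerical check of this dictionary (illustrative only):

```python
import random

def Q(m, n, x, y):
    # (m, n) Z (m, n)^T with Z = [[x^2 + y^2, x], [x, 1]]
    return m * m * (x * x + y * y) + 2 * m * n * x + n * n

random.seed(0)
ok = True
for _ in range(100):
    x, y = random.uniform(-0.5, 0.5), random.uniform(1.0, 5.0)   # z in F
    m, n = random.randint(-9, 9), random.randint(-9, 9)
    ok = ok and abs(Q(m, n, x, y) - abs(m * complex(x, y) + n) ** 2) < 1e-8
print(ok)  # True
```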
\subsection{Reduction to an exponential sum} As in \cite{Bl18} we start by applying an approximate functional equation \cite[Theorem 5.3]{IwKo} with $G(u)=e^{u^2}$, but deviate slightly by using a balanced version (corresponding to putting $X=a^{1/2}$ in \cite[Theorem 5.3]{IwKo}). By estimating the contribution coming from the pole of $E_{\text{\rm Epstein}}(Z,s)$ at $s=1$ trivially, the approximate functional equation yields \begin{align}\nonumber E_{\text{\rm Epstein}}(Z, 1/2+it)&= \sum_{{\bf x}\neq 0} \frac{W^+_t(Q_+({\bf x})a^{-1/2})}{Q_+({\bf x})^{1/2+it}}\\ \label{approx}& + \frac{\Gamma_\ensuremath\mathbb{R} (1-2it)}{\Gamma_\ensuremath\mathbb{R}(1+2it) (\det Z)^{1/2}} \sum_{{\bf x}\neq 0} \frac{W^{-}_t(Q_-({\bf x})a^{1/2})}{Q_-({\bf x})^{1/2-it}} +O(1)\end{align} where $Q_\pm({\bf x})={\bf x}\, Z^{\pm 1} \, {\bf x}^T$ and $$ W^{\pm}_t(y)=\frac{1}{2\pi i}\int_{(1)} e^{u^2} \frac{\Gamma_\ensuremath\mathbb{R}(2(u+1/2\pm it))}{\Gamma_\ensuremath\mathbb{R}(2(1/2\pm it))} y^{-u}\frac{du}{u}.$$
The weight $W^\pm_t$ can be bounded as follows: we move the contour to the line $(A)$ with $A>0$ and estimate the integrand using Stirling's approximation:
\begin{align*} e^{u^2/2}\frac{\Gamma_\ensuremath\mathbb{R}(2(u+1/2\pm it))}{\Gamma_\ensuremath\mathbb{R}(2(1/2\pm it))} u^{-1}\ll \frac{e^{A^2/2}e^{-b^2/2} \pi^{-A/2}e^{-A} (|t|^A + (b+A)^A)}{A+|b|} \ll_A |t|^A , \end{align*} with $u=A+ib$ using that $e^{-b^2/2} (b+A)^A \rightarrow 0$ as $b\rightarrow \infty$. Thus we get the bound
$$ W^\pm_t(y)\ll_A |t|^A/y^A \int_{-\infty}^\infty e^{-x^2/2} dx \ll |t|^A/y^A, $$
and more generally one deduces $\frac{\partial^n }{\partial y^n}W^\pm_t(y)\ll_A |t|^A/y^{A+n}$ as in \cite[Proposition 5.4]{IwKo}. From this we see that the contributions in (\ref{approx}) from $\ensuremath \boldsymbol{x}$ such that $Q_\pm(\ensuremath \boldsymbol{x})\gg a^{\pm 1/2} |t|^{1+\ensuremath\varepsilon}$ are negligible. \\
To deal with the remaining sums in (\ref{approx}), we divide the range of summation into dyadic rectangles of the form $(X_1,2X_1)\times (X_2,2X_2)$. Observe that we get $O(\log^2 |t|)$ such rectangles, so it suffices to bound each of these dyadic sums individually. \\ For each such rectangle we get by two-dimensional partial summation: \begin{align} \label{dyadic}\sum_{\substack{X_1\leq x_1\leq 2X_1\\ X_2\leq x_2\leq 2X_2}} \frac{W^+_t(Q_+({\bf x})a^{-1/2})}{Q_+({\bf x})^{1/2+it}} = F_+(2\ensuremath \boldsymbol{X}) \sum_{\substack{X_1\leq x_1\leq 2X_1\\ X_2\leq x_2\leq 2X_2}} e^{i t \log Q_+(\ensuremath \boldsymbol{x})} \\ \nonumber -\int_{X_1}^{2X_1} \left( \sum_{\substack{X_1\leq x_1\leq x \\ X_2\leq x_2\leq 2X_2}}e^{it\log Q_+(\ensuremath \boldsymbol{x})}\right) F_+^{(1,0)} (x, 2X_2) dx\\ \nonumber -\int_{X_2}^{2X_2} \left( \sum_{\substack{X_1\leq x_1\leq 2X_1 \\ X_2\leq x_2\leq y}}e^{it\log Q_+(\ensuremath \boldsymbol{x})}\right) F_+^{(0,1)} (2X_1, y) dy\\ \nonumber + \int_{X_1}^{2X_1}\int_{X_2}^{2X_2} \left( \sum_{\substack{X_1\leq x_1\leq x \\ X_2\leq x_2\leq y}}e^{it\log Q_+(\ensuremath \boldsymbol{x})}\right)F_+^{(1,1)}(x,y)\, dx\, dy, \end{align} where $\ensuremath \boldsymbol{X}=(X_1,X_2)$, $F_+({\bf x})=W^+_t(Q_+({\bf x})a^{-1/2})/Q_+( {\bf x} )^{1/2}$ and $F_+^{(i,j)}:=\frac{\partial^{i+j}F_+}{\partial x_1^i \partial x_2^j }$.\\ Similarly we get \begin{align}\label{dyadic-}\sum_{\substack{X_1\leq x_1\leq 2X_1\\ X_2\leq x_2\leq 2X_2}} \frac{W^-_t(Q_-({\bf x})a^{1/2})}{(\det Z)^{1/2}Q_-({\bf x})^{1/2-it}}=F_-(2\ensuremath \boldsymbol{X})\sum_{\substack{X_1\leq x_1 \leq 2X_1 \\ X_2\leq x_2\leq 2X_2}}e^{it\log Q_-(\ensuremath \boldsymbol{x})} +\ldots, \end{align} where $F_-({\bf x})=W^-_t(Q_-({\bf x} )a^{1/2})/\left((\det Z)Q_-( {\bf x} )\right)^{1/2}$.\\ Now we have reduced the desired bound on the Epstein zeta function to proving a certain estimate on exponential sums. The result we need is the following.
\begin{prop} \label{mainprop} For $\ensuremath \boldsymbol{X}=(X_1,X_2)$ satisfying $Q_+(\ensuremath \boldsymbol{X})\ll a^{1/2}|t|^{1+\ensuremath\varepsilon}$, we have the following bound:
\begin{equation}\label{mainpropeq} \frac{1}{Q_+(\ensuremath \boldsymbol{X})^{1/2}}\sum_{\substack{X_1 \leq x_1 \leq X_1' \\ X_2 \leq x_2 \leq X_2'}} e^{it\log Q_+(\ensuremath \boldsymbol{x})}\ll_\ensuremath\varepsilon |t|^{1/3+\ensuremath\varepsilon}, \end{equation}
uniformly in $a\geq 1$, where $X_i\leq X_i'\leq 2X_i$. Similarly for $\ensuremath \boldsymbol{X}=(X_1,X_2)$ satisfying $Q_-(\ensuremath \boldsymbol{X})\ll a^{-1/2} |t|^{1+\ensuremath\varepsilon}$, we have
\begin{equation}\label{mainpropeq2}\frac{1}{((\det Z)Q_-(\ensuremath \boldsymbol{X}))^{1/2}}\sum_{\substack{X_1 \leq x_1 \leq X_1' \\ X_2 \leq x_2 \leq X_2'}} e^{it\log Q_-(\ensuremath \boldsymbol{x})}\ll_\ensuremath\varepsilon |t|^{1/3+\ensuremath\varepsilon}, \end{equation} where $X_i\leq X_i'\leq 2X_i$. \end{prop} \begin{remark}Observe that when proving (\ref{mainpropeq}), we may assume
\begin{equation}\label{lowerbound} X_1\gg |t|^{1/3}\quad \text{ and }\quad X_2\gg |t|^{1/3} a^{1/2}, \end{equation} and similarly, when proving (\ref{mainpropeq2}), we may assume
\begin{equation}\label{lowerbound2} X_1\gg |t|^{1/3} a^{1/2}\quad \text{ and }\quad X_2\gg |t|^{1/3}, \end{equation} since otherwise the bounds follow from the trivial estimate $|e^{it\log Q_\pm(\ensuremath \boldsymbol{x})}|\leq 1$ on the exponentials. \end{remark}
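Indeed, by the trivial estimate and $Q_+(\ensuremath \boldsymbol{X})^{1/2}\gg a^{1/2}X_1+X_2$ (which follows from $Q_+(\ensuremath \boldsymbol{X})\geq \tfrac{1}{2}(aX_1^2+X_2^2)$ for $|b|\leq 1/2$, $a\geq 1$), the left-hand side of (\ref{mainpropeq}) is
$$ \ll \frac{X_1X_2}{a^{1/2}X_1+X_2}\leq \min\big(a^{-1/2}X_2,\, X_1\big),$$
which is $\ll |t|^{1/3}$ whenever (\ref{lowerbound}) fails; the case of (\ref{lowerbound2}) is analogous, using $\big((\det Z)Q_-(\ensuremath \boldsymbol{X})\big)^{1/2}\gg X_1+a^{1/2}X_2$.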
Now let us see how Theorem \ref{mainthm} follows from the above proposition.
\begin{proof}[Proof of Theorem \ref{mainthm} assuming Proposition \ref{mainprop}] We will begin by deducing from Proposition \ref{mainprop} that $E_{\text{\rm Epstein}}(Z,s)\ll_\ensuremath\varepsilon (|t|+1)^{1/3+\ensuremath\varepsilon}$ for all $Z$ as above; by the above reductions, it suffices to prove the same bound for each of the dyadic sums (\ref{dyadic}) and (\ref{dyadic-}) with $X_1,X_2$ satisfying respectively $Q_\pm(\ensuremath \boldsymbol{X})\ll a^{\pm 1/2} |t|^{1+\ensuremath\varepsilon}$. We do this by bounding separately each of the four terms obtained after applying partial summation (observe that we may assume $|t|\gg 1$).\\
The above estimates for $W^+_t$ imply $ W^+ _t(Q_+(\ensuremath \boldsymbol{x})a^{- 1/2})\ll |t|^\ensuremath\varepsilon $, which together with (\ref{mainpropeq}) allows us to bound the first sum on the right-hand side of (\ref{dyadic}) as follows:
$$F_+(2\ensuremath \boldsymbol{X}) \sum_{\substack{X_1\leq x_1\leq 2X_1\\ X_2\leq x_2\leq 2X_2}} e^{i t\log Q_+(\ensuremath \boldsymbol{x})}\ll |t|^{1/3+\ensuremath\varepsilon}.$$
Similarly using $\frac{\partial^n }{\partial y^n}W^+_t(y)\ll |t|^{A}/y^{A+n}$ and the chain rule, we get
$$ F_+^{(1,0)}(\ensuremath \boldsymbol{x}) \ll \frac{|t|^{\ensuremath\varepsilon}a^{1/2}}{Q_+(\ensuremath \boldsymbol{X})}, \quad F_+^{(0,1)}(\ensuremath \boldsymbol{x}) \ll \frac{|t|^{\ensuremath\varepsilon} }{Q_+(\ensuremath \boldsymbol{X})}, \quad F_+^{(1,1)}(\ensuremath \boldsymbol{x}) \ll \frac{|t|^{\ensuremath\varepsilon}a^{1/2}}{Q_+(\ensuremath \boldsymbol{X})^{3/2}},$$ which together with (\ref{mainpropeq}) implies \begin{align*} \int_{X_1}^{2X_1} \left( \sum_{\substack{X_1\leq x_1\leq x \\ X_2\leq x_2\leq 2X_2}}e^{it\log Q_+(\ensuremath \boldsymbol{x})}\right) F_+^{(1,0)} (x, 2X_2) dx\\
\ll X_1 |t|^{1/3+\ensuremath\varepsilon}Q_+(\ensuremath \boldsymbol{X})^{1/2} \frac{a^{1/2}}{Q_+(\ensuremath \boldsymbol{X})} \ll |t|^{1/3+\ensuremath\varepsilon},\end{align*} using $X_1a^{1/2}\ll Q_+(\ensuremath \boldsymbol{X})^{1/2}$, and similarly for the other one-dimensional integral. Finally, a similar calculation gives \begin{align*} \int_{X_1}^{2X_1}\int_{X_2}^{2X_2} \left( \sum_{\substack{X_1\leq x_1\leq x \\ X_2\leq x_2\leq y}}e^{it\log Q_+(\ensuremath \boldsymbol{x})}\right)F_+^{(1,1)} (x, y)\, dx\, dy&\\
\ll \frac{X_1X_2 a^{1/2}Q_+(\ensuremath \boldsymbol{X})^{1/2}|t|^{1/3+\ensuremath\varepsilon}}{Q_+(\ensuremath \boldsymbol{X})^{3/2}} \ll |t|^{1/3+\ensuremath\varepsilon}&, \end{align*} using $a^{1/2}X_1X_2\leq \tfrac{1}{2}(aX_1^2+X_2^2)\ll Q_+(\ensuremath \boldsymbol{X})$, which yields the desired bound for the $Q_+$-sum.\\ The sum involving $Q_-$ can be bounded similarly using
\begin{align*} F_-^{(1,0)}(\ensuremath \boldsymbol{x}) \ll \frac{|t|^{\ensuremath\varepsilon}}{(\det Z)Q_-(\ensuremath \boldsymbol{X})}, \quad F_-^{(0,1)}(\ensuremath \boldsymbol{x}) \ll \frac{ |t|^{\ensuremath\varepsilon}a^{1/2}}{(\det Z)Q_-(\ensuremath \boldsymbol{X})}, \\
F_-^{(1,1)}(\ensuremath \boldsymbol{x}) \ll \frac{|t|^{\ensuremath\varepsilon} a^{1/2}}{\left( (\det Z)Q_-(\ensuremath \boldsymbol{X})\right)^{3/2}},\end{align*} which yields the desired bound for the Epstein zeta function.\\ Thus we conclude that
$$ E(z,1/2+it)=\frac{y^{1/2+it}}{\zeta(1+2it)}E_{\text{\rm Epstein}}(Z,1/2+it)\ll_\ensuremath\varepsilon y^{1/2}(|t|+1)^{1/3+\ensuremath\varepsilon},$$
using $\zeta(1+2it)\gg_\ensuremath\varepsilon (|t|+1)^{-\ensuremath\varepsilon}$. This finishes the proof.
\end{proof}
\section{A uniform bound for an exponential sum in two variables} In this section we will prove Proposition \ref{mainprop} using an extension of the ideas of Titchmarsh and Blomer, building on the work of van der Corput. \\ First we make a simplification: if we multiply by the phase $(\det Z)^{it}$ in (\ref{mainpropeq2}), the summands become $$e^{it \log (\det Z)}e^{it\log Q_-(\ensuremath \boldsymbol{x})} = e^{it\log \left( (\det Z)Q_-(\ensuremath \boldsymbol{x})\right)},$$
where $(\det Z)Q_-(\ensuremath \boldsymbol{x})=x_1^2-2bx_1x_2+ax_2^2$. Since $\det Z\asymp a$, the ranges $Q_+(\ensuremath \boldsymbol{X})\ll a^{1/2} |t|^{1+\ensuremath\varepsilon}$ and $(\det Z)Q_-(\ensuremath \boldsymbol{X})\ll (\det Z)a^{-1/2} |t|^{1+\ensuremath\varepsilon}$ are the same, just with the roles of $X_1$ and $X_2$ interchanged. Thus, by symmetry, the two bounds (\ref{mainpropeq}) and (\ref{mainpropeq2}) are equivalent, which is exactly why we used a balanced approximate functional equation in the first place.\\
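As a quick sanity check of the expansion $(\det Z)Q_-(\ensuremath \boldsymbol{x})=x_1^2-2bx_1x_2+ax_2^2$ used above (a verification aside, not part of the argument), one can evaluate both sides in exact rational arithmetic:

```python
from fractions import Fraction as F

def qform(M, x):
    """x M x^T for a 2x2 matrix M and a vector x = (x1, x2)."""
    return (x[0] * (M[0][0] * x[0] + M[0][1] * x[1])
            + x[1] * (M[1][0] * x[0] + M[1][1] * x[1]))

def inv2(M):
    """Inverse of a 2x2 matrix via the cofactor formula; also returns det M."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]], det

# Check (det Z) * Q_-(x) == x1^2 - 2*b*x1*x2 + a*x2^2 at exact sample points.
samples = [(F(3), F(1, 2), F(5), F(-2)),
           (F(7, 2), F(-1, 3), F(-1), F(4)),
           (F(10), F(0), F(2), F(3))]
for a, b, x1, x2 in samples:
    Zinv, det = inv2([[a, b], [b, F(1)]])
    assert det * qform(Zinv, (x1, x2)) == x1**2 - 2*b*x1*x2 + a*x2**2
print("identity verified")
```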
Thus we see that it suffices to prove (\ref{mainpropeq}) under the assumption $Q_+(\ensuremath \boldsymbol{X})\ll a^{1/2}|t|^{1+\ensuremath\varepsilon}$. To lighten notation, we put $Q:=Q_+$. \subsection{Some lemmas of Titchmarsh} Titchmarsh \cite{Ti34} extended the van der Corput method for bounding exponential sums \cite[Section 8.3]{IwKo} to two-dimensional sums. In this section we will quote some lemmas due to Titchmarsh, which we will employ later.\\ Throughout this section we assume that $$f: (X_1, X_1')\times (X_2, X_2')\rightarrow \ensuremath\mathbb{R}$$ has algebraic partial derivatives of order one to three. We will as above use the notation $f^{(i,j)}:= \frac{\partial^{i+j}f}{\partial x_1^i\partial x_2^j}$. \\
The first lemma is a version of Weyl differencing in the two-dimensional setting. \begin{lemma}[Lemma $\beta$, \cite{Ti34}] \label{Wdiff} Let $\rho \leq \min (X_1'-X_1, X_2'-X_2)$ be a positive integer. Then we have \begin{align}\nonumber \sum_{\substack{X_1 \leq x_1\leq X_1'\\ X_2\leq x_2 \leq X_2'}} e^{if(\ensuremath \boldsymbol{x})} \ll &\frac{(X_1'-X_1)(X_2'-X_2)}{\rho}\\
\nonumber&+ \frac{(X_1'-X_1)^{1/2}(X_2'-X_2)^{1/2}}{\rho} \left(\sum_{\substack{1\leq \mu_1\leq \rho-1\\0\leq \mu_2\leq \rho-1}} |S_1(\ensuremath \boldsymbol{\mu})| \right)^{1/2} \\
\label{beta}&+ \frac{(X_1'-X_1)^{1/2}(X_2'-X_2)^{1/2}}{\rho} \left(\sum_{\substack{0\leq \mu_1\leq \rho-1\\1 \leq \mu_2\leq \rho-1}} |S_2(\ensuremath \boldsymbol{\mu})| \right)^{1/2}, \end{align} where $\ensuremath \boldsymbol{x}=(x_1,x_2)$, $\ensuremath \boldsymbol{\mu}=(\mu_1,\mu_2)$ and $$S_1(\ensuremath \boldsymbol{\mu})=\sum_{\substack{X_1\leq x_1\leq X_1'-\mu_1\\ X_2\leq x_2 \leq X_2'-\mu_2}}e^{i[f(\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})-f(\ensuremath \boldsymbol{x})]}, \quad S_2(\ensuremath \boldsymbol{\mu})=\sum_{\substack{X_1\leq x_1\leq X_1'-\mu_1\\ X_2+\mu_2\leq x_2 \leq X_2'}}e^{i[f(\ensuremath \boldsymbol{x}+(\mu_1,-\mu_2))-f(\ensuremath \boldsymbol{x})]}.$$ \end{lemma}
The above lemma reduces the task to bounding the sums $S_1(\ensuremath \boldsymbol{\mu})$ and $S_2(\ensuremath \boldsymbol{\mu})$ with $\mu_1,\mu_2$ in the appropriate ranges. The idea of the van der Corput method is to replace such sums by a certain integral, which is then estimated directly. We will use the following extension of van der Corput's result due to Titchmarsh.
\begin{lemma}[Lemma $\gamma$, \cite{Ti34}] \label{sumint} Let $l=\max(X_1'-X_1, X_2'-X_2)$ and assume that $f$ satisfies
$$ |f^{(1,0)}(\ensuremath \boldsymbol{x})|\leq \frac{3\pi}{2},\qquad |f^{(0,1)}(\ensuremath \boldsymbol{x})|\leq \frac{3\pi}{2}. $$ Then \begin{align} \sum_{\substack{X_1\leq x_1\leq X_1'\\ X_2\leq x_2\leq X_2'}} e^{if(\ensuremath \boldsymbol{x})}= \int_{(X_1,X_1')\times(X_2, X_2')} e^{if(\ensuremath \boldsymbol{x})}d\ensuremath \boldsymbol{x}+ O(l). \end{align} \end{lemma}
Finally, we bound this integral by a second derivative test.
\begin{lemma}[Lemma $\epsilon$, \cite{Ti34}] \label{vanderC} Let $\Omega\subset \ensuremath\mathbb{R}^2$ be a rectangle and $l$ its maximal side length. If $f:\Omega\rightarrow \ensuremath\mathbb{R}$ is a function satisfying the conditions mentioned at the beginning of the section and
\begin{align} \label{normboundC} r\ll |f^{(2,0)}(\ensuremath \boldsymbol{x} )| \ll r ,\quad r\ll |f^{(0,2)}(\ensuremath \boldsymbol{x})| \ll r, \quad |f^{(1,1)}(\ensuremath \boldsymbol{x})|\ll r \\
\label{detboundC} |f^{(2,0)}(\ensuremath \boldsymbol{x})f^{(0,2)}(\ensuremath \boldsymbol{x})-(f^{(1,1)}(\ensuremath \boldsymbol{x}))^2| \gg r^2,\qquad \ensuremath \boldsymbol{x} \in \Omega. \end{align} Then $$ \int_{\Omega} e^{if(\ensuremath \boldsymbol{x})}d\ensuremath \boldsymbol{x} \ll \frac{1+\log l+\log r}{r}, $$ where the implied constant depends only on the angle of the rectangle to the coordinate axes. \end{lemma} \begin{remark}Note that as stated, \cite[Lemma $\epsilon$]{Ti34} (or more precisely Lemma $\delta$) assumes that
$$ |f^{(2,0)}(\ensuremath \boldsymbol{x} )|, |f^{(0,2)}(\ensuremath \boldsymbol{x} )|\geq r,\quad |f^{(2,0)}(\ensuremath \boldsymbol{x})f^{(0,2)}(\ensuremath \boldsymbol{x})-(f^{(1,1)}(\ensuremath \boldsymbol{x}))^2| \geq r^2,$$ that is, without an implicit constant in the lower bounds. By inspecting the proof of \cite[Lemma $\epsilon$]{Ti34}, one sees, however, that Lemma \ref{vanderC} as stated above follows with the exact same proof (this observation is also implicit in \cite{Bl18}).\end{remark} \subsection{Applying the lemmas} With these results of Titchmarsh at our disposal, we are now ready to make some reductions in the direction of proving (\ref{mainpropeq}).\\ By applying Lemma \ref{Wdiff} with $f(\ensuremath \boldsymbol{x})=t\log Q(\ensuremath \boldsymbol{x})$ and $Q(\ensuremath \boldsymbol{x})=ax_1^2+2bx_1x_2+x_2^2$ to the left-hand side of (\ref{mainpropeq}), we reduce the task to bounding sums of the following kind: \begin{align}\label{S1} S'(\ensuremath \boldsymbol{\mu})=\sum_{\substack{X_1 \leq x_1 \leq X_1' \\ X_2 \leq x_2 \leq X_2'}} e^{ig_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x} )}, \end{align} where \begin{align}\label{gmu}g_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x}):=t (\log Q(\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})-\log Q(\ensuremath \boldsymbol{x} )),\end{align} $X_i'\leq 2X_i$ and $\ensuremath \boldsymbol{\mu}=(\mu_1,\mu_2)\in [0,\rho]\times [0,\rho ]$ with $\rho= o(\min(X_1,X_2))$ to be chosen appropriately later. \\
The first step is to divide the rectangle of summation in $S'(\ensuremath \boldsymbol{\mu})$ into rectangles $\Delta_{p,q}$ (where $p,q$ run through an appropriate indexing set), each with side lengths $l_1\times l_2$, where
\begin{align}\label{l} l_1 \asymp \frac{Q(\ensuremath \boldsymbol{X})^{3/2}} {a |t|^{1+2\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{\mu})^{1/2}}, \quad l_2 \asymp \frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{a^{1/2} |t|^{1+2\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{\mu})^{1/2}}.\end{align} We denote the sub-sum associated to $\Delta_{p,q}$ by $S_{p,q}(\ensuremath \boldsymbol{\mu})$ and observe that the number of such sub-sums is bounded by
$$\frac{X_1X_2}{l_1 l_2}\ll \frac{X_1X_2}{a^{-3/2}Q(\ensuremath \boldsymbol{X})^3 |t|^{-2-4\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{\mu})^{-1}}.$$ We will bound the sub-sums $S_{p,q}(\ensuremath \boldsymbol{\mu})$ individually. \begin{remark} There is some balancing in choosing the values $l_1,l_2$; on the one hand $l_1,l_2$ have to be small enough so that $g_{\ensuremath \boldsymbol{\mu}}$ and its derivatives are close to being constant in $\Delta_{p,q}$ (i.e.\ the variation is small), and on the other hand the number of rectangles $\Delta_{p,q}$ grows reciprocally with $l_1,l_2$. The reason for choosing these specific values will become clear later. \end{remark} \subsection{Bounds on derivatives of $g_{\ensuremath \boldsymbol{\mu}}$}
In this subsection we will prove upper bounds on the partial derivatives of $g_{\ensuremath \boldsymbol{\mu}}$ and a lower bound on the determinant of the Hessian matrix of $g_{\ensuremath \boldsymbol{\mu}}$. Titchmarsh \cite{Ti34} only considers diagonal matrices, and the fact that $b\neq 0$ creates some minor technical difficulties, which were also addressed by Blomer in \cite{Bl18}. We need to be a bit more careful since we have to keep track of the $a$-dependence as well, so our computations differ a bit from those in \cite{Bl18}: to handle the upper bounds on the derivatives we integrate $f$ along the segment from $\ensuremath \boldsymbol{x}$ to $\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu}$, and to lower bound the Hessian determinant we use an explicit calculation. \\ First of all we will need the following lemma.
\begin{lemma}\label{boundlemma} Let $f(\ensuremath \boldsymbol{x})=t\log(Q(\ensuremath \boldsymbol{x}))$ with $Q(x_1,x_2)=ax_1^2+2bx_1x_2+x_2^2$ where $|b|\leq 1/2$ and $a\geq 1$. Then we have
\begin{align}\label{derivativef} f^{(i,j)}(\ensuremath \boldsymbol{x}) \ll_{i,j} \frac{a^{i/2}|t|}{Q(\ensuremath \boldsymbol{x})^{(i+j)/2}}, \end{align} where the implied constant depends on $i,j$ but is independent of $a,b$. \end{lemma} \begin{proof} Observe that $f(\ensuremath \boldsymbol{x})$ is the composition of the function $h(\ensuremath \boldsymbol{x}):= t\log(x_1^2+x_2^2)$ with the linear map $$\ensuremath \boldsymbol{x}\mapsto \begin{pmatrix} (a-b^2)^{1/2} & 0 \\b & 1\end{pmatrix}\ensuremath \boldsymbol{x}^T, $$ where $a-b^2> 0$ by the assumptions. Now one sees by a direct computation that \begin{align*} h^{(i,j)}(\ensuremath \boldsymbol{x})= t\sum_{\substack{0\leq k\leq i, k\equiv i \, (2)\\ 0\leq l\leq j, l\equiv j \, (2)}} c_{k,l} \frac{x_1^k x_2^l}{(x_1^2+x_2^2)^{(i+j+k+l)/2}},\end{align*} for some constants $c_{k,l}$. Thus we get the bound
\begin{align}\label{hbound} h^{(i,j)}(\ensuremath \boldsymbol{x})\ll_{i,j} \frac{|t|}{(x_1^2+x_2^2)^{(i+j)/2}}, \end{align} using the elementary inequality $xy\ll_\alpha x^{1/\alpha}+y^{1/(1-\alpha)} $ for $0<\alpha<1$.\\ By the chain rule we have $$f^{(i,j)}(\ensuremath \boldsymbol{x})= \sum_{l=0}^i \binom{i}{l}(a-b^2)^{(i-l)/2}b^{l} h^{(i-l,j+l)}((a-b^2)^{1/2} x_1,bx_1+x_2), $$ and thus the result follows from (\ref{hbound}) since $b$ is bounded. \end{proof} From this we can now prove the following bounds. \begin{lemma} \label{hessbound} Let $\ensuremath \boldsymbol{\mu}$, $\ensuremath \boldsymbol{x}$ and $\ensuremath \boldsymbol{X}$ satisfy the constraints coming from Lemma \ref{Wdiff}. Then we have \begin{align}
\label{normbound}\left| g_{\ensuremath \boldsymbol{\mu}}^{(i,j)}(\ensuremath \boldsymbol{x})\right| &\ll a^{i/2}\frac{|t| Q(\ensuremath \boldsymbol{\mu} )^{1/2}}{Q(\ensuremath \boldsymbol{X})^{(i+j+1)/2}},\\
\label{detbound} \left|\det \text{\rm Hess}(g_{\ensuremath \boldsymbol{\mu}})(\ensuremath \boldsymbol{x})\right| &\gg \left(a^{1/2} \frac{|t| \, Q(\ensuremath \boldsymbol{\mu})^{1/2}}{Q(\ensuremath \boldsymbol{X})^{3/2}} \right)^2. \end{align} \end{lemma} \begin{proof} It follows from the fundamental theorem of calculus (a first-order Taylor expansion with integral remainder) that \begin{align} \label{taylor} g_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x}) = \sum_{\alpha\in \{(1,0),(0,1)\}} \frac{1}{\alpha!}\, \ensuremath \boldsymbol{\mu}^\alpha \int_0^1 f^{\alpha}(\ensuremath \boldsymbol{x}+s\ensuremath \boldsymbol{\mu})\, ds, \end{align} where we use the multi-index notation $(x_1,x_2)^{(i,j)}:= x_1^ix_2^j$.\\ Using Lemma \ref{boundlemma}, we see that for $\alpha=(\alpha_1,\alpha_2)\in \{(1,0),(0,1)\}$, we have
\begin{align*} \ensuremath \boldsymbol{\mu}^\alpha f^{\alpha+(i,j)}(\ensuremath \boldsymbol{x}) &\ll_{i,j} \ensuremath \boldsymbol{\mu}^{\alpha} \frac{|t|a^{(\alpha_1 +i)/2}}{Q(\ensuremath \boldsymbol{X})^{(i+j+1)/2}}\\
&\ll a^{i/2}\frac{|t| Q(\ensuremath \boldsymbol{\mu})^{1/2}}{Q(\ensuremath \boldsymbol{X})^{(i+j+1)/2}}, \end{align*} using that $\mu_1 a^{1/2}\ll Q(\ensuremath \boldsymbol{\mu})^{1/2}$ and $\mu_2\ll Q(\ensuremath \boldsymbol{\mu})^{1/2}$, respectively.\\ Thus, by differentiating under the integral sign in (\ref{taylor}) and applying the bound above, we conclude (\ref{normbound}).\\
To prove the last inequality, we use the following direct computation:
$$ \left|\det \text{\rm Hess}(g_{\ensuremath \boldsymbol{\mu}})(\ensuremath \boldsymbol{x})\right| = \frac{4t^2(\det Z)\, Q(2\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})Q(\ensuremath \boldsymbol{\mu})}{Q(\ensuremath \boldsymbol{x})^2Q(\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})^2}\gg a \frac{|t|^2 Q(\ensuremath \boldsymbol{\mu})}{Q(\ensuremath \boldsymbol{X})^3}, $$
where we used that $\|\ensuremath \boldsymbol{\mu}\|=o(\min (X_1,X_2))$ implies $Q(\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})\asymp Q(\ensuremath \boldsymbol{x})\asymp Q(\ensuremath \boldsymbol{X})$ and $Q(2\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})\gg Q(\ensuremath \boldsymbol{X})$, together with $\det Z=a-b^2\gg a$.
\end{proof}
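The displayed determinant computation can be confirmed numerically. The sketch below (an aside, not part of the proof) checks the identity with its precise constant, namely $\det \text{\rm Hess}(g_{\ensuremath \boldsymbol{\mu}})(\ensuremath \boldsymbol{x})=-4t^2(\det Z)Q(2\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})Q(\ensuremath \boldsymbol{\mu})/\big(Q(\ensuremath \boldsymbol{x})^2Q(\ensuremath \boldsymbol{x}+\ensuremath \boldsymbol{\mu})^2\big)$, at random parameters; the factor $4$ and the sign are of course immaterial for the $\gg$ bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def hess_f(Z, t, x):
    """Hessian of f(x) = t*log(x Z x^T), namely t*(2Z/Q - 4 (Zx)(Zx)^T / Q^2)."""
    Q = x @ Z @ x
    v = Z @ x
    return t * (2 * Z / Q - 4 * np.outer(v, v) / Q**2)

for _ in range(20):
    a = 1 + 10 * rng.random()
    b = rng.random() - 0.5          # |b| <= 1/2
    t = 100 * rng.random()
    Z = np.array([[a, b], [b, 1.0]])
    x = 1 + 10 * rng.random(2)
    mu = rng.random(2)
    Q = lambda y: y @ Z @ y
    # Hess(g_mu)(x) = Hess(f)(x + mu) - Hess(f)(x)
    det_hess_g = np.linalg.det(hess_f(Z, t, x + mu) - hess_f(Z, t, x))
    rhs = 4 * t**2 * np.linalg.det(Z) * Q(2*x + mu) * Q(mu) / (Q(x)**2 * Q(x + mu)**2)
    assert np.isclose(det_hess_g, -rhs, rtol=1e-6)
print("Hessian determinant identity verified")
```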
\subsection{Proof of Proposition \ref{mainprop}} Now we would like to apply Lemma \ref{sumint}, but first we need to alter $g_{\ensuremath \boldsymbol{\mu}}$ a bit so that the conditions on the first-order derivatives are satisfied. We observe that the maximum variation in $\Delta_{p,q}$ of $ g^{(1,0)}_{\ensuremath \boldsymbol{\mu}} $ is bounded by
$$ l_1 \cdot \max_{\ensuremath \boldsymbol{x} \in \Delta_{p,q}} \left| g^{(2,0)}_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x}) \right| + l_2 \cdot \max_{\ensuremath \boldsymbol{x} \in \Delta_{p,q}} \left| g^{(1,1)}_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x}) \right| \ll |t|^{-\ensuremath\varepsilon}, $$ where we used (\ref{normbound}), and similarly for $ g^{(0,1)}_{\ensuremath \boldsymbol{\mu}}$, in which case the variation is even smaller.\\ Thus for sufficiently large $|t|$ the variation of the first-order derivatives in each sub-sum $S_{p,q}$ is less than $\pi$, which is exactly why we chose $l_1,l_2$ as in (\ref{l}). Thus (following Titchmarsh) we can, for each $\Delta_{p,q}$, find integers $M,N$ such that $$ G_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x}):= g_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x})-2\pi M x_1-2\pi N x_2$$ satisfies
$$\left| G^{(1,0)}_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x})\right|\leq 3\pi/2 \quad \text{and}\quad \left| G^{(0,1)}_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x}) \right| \leq 3\pi/2,$$ for all $\ensuremath \boldsymbol{x}\in\Delta_{p,q}$. Thus we get by Lemma \ref{sumint} \begin{equation}\label{sumtoint} \sum_{\ensuremath \boldsymbol{x} \in \Delta_{p,q}} e^{i g_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x})}=\sum_{\ensuremath \boldsymbol{x} \in \Delta_{p,q}} e^{i G_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x})} = \int_{\Delta_{p,q}} e^{iG_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x})}d\ensuremath \boldsymbol{x} +O(l). \end{equation} Observe that all partial derivates of order at least two of $G_{\ensuremath \boldsymbol{\mu}}$ and $g_{\ensuremath \boldsymbol{\mu}}$ coincide. \\
We would like to apply Lemma \ref{vanderC}, but we cannot do this directly since the required lower bounds on the order two derivatives do not hold in general. By considering different cases and doing an appropriate change of variable, we can, however, put ourselves in a situation where Lemma \ref{vanderC} applies. Titchmarsh makes similar considerations in the proof of \cite[Lemma $\zeta$]{Ti34} and on \cite[p. 497]{Ti34}, but his argument is simplified by the fact that $G_{\ensuremath \boldsymbol{\mu}}^{(2,0)}=-a G_{\ensuremath \boldsymbol{\mu}}^{(0,2)}$ when $b=0$ (which is {\it not} true for $b\neq 0$).\\ The idea to deal with the non-diagonal case is quite simply to consider two cases: if the partial derivative $G_{\ensuremath \boldsymbol{\mu}}^{(1,1)}$ is small, then the lower bound on the Hessian determinant forces the two other partial derivatives to be large. If, on the other hand, $G_{\ensuremath \boldsymbol{\mu}}^{(1,1)}$ is large, then after a change of variable we can force the new $(2,0)$ and $(0,2)$ partial derivatives to be large. This will allow us to prove the following key lemma.
\begin{lemma} \label{lemmatomainprop} With notation as above we have
$$ \int_{\Delta_{p,q}} e^{ig_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x})}d\ensuremath \boldsymbol{x}=\int_{\Delta_{p,q}} e^{iG_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{x})}d\ensuremath \boldsymbol{x} \ll |t|^{-1+\ensuremath\varepsilon} \frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{ a^{1/2} Q(\ensuremath \boldsymbol{\mu})^{1/2}}. $$ \end{lemma}
\begin{proof} First we make a change of variables to the new variables $\ensuremath \boldsymbol{y}=(y_1,y_2)=(a^{1/4} x_1,a^{-1/4}x_2)$, under which the integral becomes \begin{align}\label{integraldelta} \int_{\tilde{\Delta}_{p,q}} e^{i\tilde{G}_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{y})}d\ensuremath \boldsymbol{y}, \end{align} where $\tilde{G}_{\ensuremath \boldsymbol{\mu}}(\ensuremath \boldsymbol{y})=G_{\ensuremath \boldsymbol{\mu}} (a^{-1/4}y_1,a^{1/4}y_2)$ and the new rectangle $\tilde{\Delta}_{p,q}$ has side lengths $(a^{1/4}l_1)\times (a^{-1/4}l_2)$.\\ The reason for doing this change of variable is that by the bounds in Lemma \ref{hessbound} and the chain rule, it now follows that all order two partial derivatives of $\tilde{G}_{\ensuremath \boldsymbol{\mu}}$ are bounded by
\begin{align}\label{r}\ll |t| a^{1/2} Q(\ensuremath \boldsymbol{\mu})^{1/2} Q(\ensuremath \boldsymbol{X})^{-3/2}=:r.\end{align} Let $\lambda_1,\lambda_2>0$ be constants, independent of $a,b$ and of $t$ (large enough), such that \begin{align}
\label{normbound1}|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{\alpha}(\ensuremath \boldsymbol{y})| &\leq \lambda_1 r,\\
\label{hessbound1}|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}(\ensuremath \boldsymbol{y})-(\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)}(\ensuremath \boldsymbol{y}))^2| &\geq \lambda_2 r^2 , \end{align} for $\alpha\in \{(2,0),(1,1),(0,2)\}$ and $\ensuremath \boldsymbol{y} \in\tilde{\Delta}_{p,q} $. We now split into different cases depending on the sizes of the order two partial derivatives.\\
{\bf Case 1:} Assume that $(\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)}(\ensuremath \boldsymbol{y}))^2 < \lambda_2 r^2/2$ for all $\ensuremath \boldsymbol{y} \in \tilde{\Delta}_{p,q}$. \\ Then it follows from (\ref{hessbound1}) that
$$|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}(\ensuremath \boldsymbol{y})|> \lambda_2 r^2/2.$$ Thus we conclude using the bound (\ref{normbound1}) above
$$ \lambda_2 r^2/2 < |\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}(\ensuremath \boldsymbol{y})| < \lambda_1 r |\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})|, $$
and thus $|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})| \gg r$ and similarly for $\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}(\ensuremath \boldsymbol{y})$. The result now follows from Lemma \ref{vanderC}. \\
{\bf Case 2:} Assume that $|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)}(\ensuremath \boldsymbol{y})|^2 \geq \lambda_2 r^2/2$ for some $\ensuremath \boldsymbol{y}\in \tilde{\Delta}_{p,q}$. \\ We will show that this implies that, for any $\delta>0$, we have
$$|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)}(\ensuremath \boldsymbol{y})| \geq (2^{-1/2}-\delta)\lambda_2^{1/2} r$$ for {\it all} $\ensuremath \boldsymbol{y} \in\tilde{\Delta}_{p,q} $ when $t$ is sufficiently large. To see this we bound the variation of $\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)}$ in $\tilde{\Delta}_{p,q}$; we observe that by the chain rule $$\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(i,j)}(\ensuremath \boldsymbol{y})= a^{(j-i)/4} G_{\ensuremath \boldsymbol{\mu}}^{(i,j)}(a^{-1/4}y_1,a^{1/4}y_2),$$ and thus by applying (\ref{normbound}), we can bound the variation of $\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)}$ in $\tilde{\Delta}_{p,q}$ by;
\begin{align} \nonumber &a^{1/4}l_1 \cdot \max_{\ensuremath \boldsymbol{y}\in \tilde{\Delta}_{p,q}} |\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,1)}(\ensuremath \boldsymbol{y})|+a^{-1/4}l_2\cdot \max_{\ensuremath \boldsymbol{y}\in \tilde{\Delta}_{p,q}} |\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,2)}(\ensuremath \boldsymbol{y})| \\
\nonumber &=l_1 \cdot \max_{\ensuremath \boldsymbol{x}\in \Delta_{p,q}} |G_{\ensuremath \boldsymbol{\mu}}^{(2,1)}(\ensuremath \boldsymbol{x})|+l_2\cdot \max_{\ensuremath \boldsymbol{x}\in \Delta_{p,q}} |G_{\ensuremath \boldsymbol{\mu}}^{(1,2)}(\ensuremath \boldsymbol{x})| \\
\nonumber &\ll \frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{a Q(\ensuremath \boldsymbol{\mu})^{1/2} |t|^{1+2\ensuremath\varepsilon}}\cdot \frac{a Q(\ensuremath \boldsymbol{\mu})^{1/2} |t|}{Q(\ensuremath \boldsymbol{X})^2}+\frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{a^{1/2} Q(\ensuremath \boldsymbol{\mu})^{1/2} |t|^{1+2\ensuremath\varepsilon}}\cdot \frac{a^{1/2} Q(\ensuremath \boldsymbol{\mu})^{1/2} |t|}{Q(\ensuremath \boldsymbol{X})^2}\\
\nonumber&\ll |t|^{-2\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{X})^{-1/2}, \end{align}
which is $o(r)$ as $t\rightarrow \infty$ since $Q(\ensuremath \boldsymbol{X}) \ll a^{1/2} |t|^{1+\ensuremath\varepsilon}$ (recall the definition (\ref{r}) of $r$). Now we have two further sub-cases.\\
{\bf Case 2.1:} If
$$|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})|,|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}(\ensuremath \boldsymbol{y})|> 2^{-2} \lambda_1^{-1} \lambda_2 r, $$ for {\it all} $\ensuremath \boldsymbol{y} \in \tilde{\Delta}_{p,q}$, then we can apply Lemma \ref{vanderC} directly.\\
{\bf Case 2.2:} So we may assume that, say, $|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})|\leq 2^{-2}\lambda_1^{-1} \lambda_2 r $ for some $\ensuremath \boldsymbol{y} \in \tilde{\Delta}_{p,q}$. As above, we see using (\ref{normbound}) that the variation of $\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}$ in $\tilde{\Delta}_{p,q}$ is bounded by
\begin{align} \nonumber &a^{1/4}l_1 \cdot \max_{\ensuremath \boldsymbol{y}\in \tilde{\Delta}_{p,q}} |\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(3,0)}(\ensuremath \boldsymbol{y})|+a^{-1/4}l_2\cdot \max_{\ensuremath \boldsymbol{y}\in \tilde{\Delta}_{p,q}} |\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,1)}(\ensuremath \boldsymbol{y})| \\
\nonumber &= a^{-1/2} l_1 \cdot \max_{\ensuremath \boldsymbol{x}\in \Delta_{p,q}} |G_{\ensuremath \boldsymbol{\mu}}^{(3,0)}(\ensuremath \boldsymbol{x})|+a^{-1/2}l_2\cdot \max_{\ensuremath \boldsymbol{x}\in \Delta_{p,q}} |G_{\ensuremath \boldsymbol{\mu}}^{(2,1)}(\ensuremath \boldsymbol{x})| \\
\nonumber &\ll \frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{a^{3/2} Q(\ensuremath \boldsymbol{\mu})^{1/2} |t|^{1+2\ensuremath\varepsilon}}\cdot \frac{a^{3/2} Q(\ensuremath \boldsymbol{\mu})^{1/2} |t|}{Q(\ensuremath \boldsymbol{X})^2}+\frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{aQ(\ensuremath \boldsymbol{\mu})^{1/2} |t|^{1+2\ensuremath\varepsilon}}\cdot \frac{a Q(\ensuremath \boldsymbol{\mu})^{1/2} |t|}{Q(\ensuremath \boldsymbol{X})^2}\\
\nonumber&\ll |t|^{-2\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{X})^{-1/2}, \end{align} which as above is $o(r)$ as $t\rightarrow \infty$. Thus we conclude that for any $\delta'>0$;
$$|\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}(\ensuremath \boldsymbol{y})|\leq (2^{-2}+\delta')\lambda_1^{-1} \lambda_2 r $$ holds for {\it all} $\ensuremath \boldsymbol{y}\in\tilde{\Delta}_{p,q}$ when $t$ is sufficiently large. \\ If we write \begin{align}\label{cov} \ensuremath \boldsymbol{z}=(z_1,z_2)=( dy_1-cy_2, dy_1+cy_2), \end{align} with $cd=1/2$, then after a change of variable the integral (\ref{integraldelta}) becomes $$ \int_{\Omega_{p,q}}e^{ih(\ensuremath \boldsymbol{z})}d\ensuremath \boldsymbol{z}, $$ where $h(\ensuremath \boldsymbol{z})=\tilde{G}_{\ensuremath \boldsymbol{\mu}}(cz_1+cz_2, -dz_1+dz_2)$ and $\Omega_{p,q}$ is a new rectangle with angle $\pi/4$ to the coordinate axes and maximum side length $\ll a^{1/4}l_1 \max(c,d)$. We observe that \begin{align*} h^{(2,0)}&= c^2\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}+d^2\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}-\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)},\\ h^{(0,2)}&= c^2\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}+d^2\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}+\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(1,1)},\\ h^{(1,1)} &= c^2\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(2,0)}-d^2\tilde{G}_{\ensuremath \boldsymbol{\mu}}^{(0,2)}. \end{align*} Thus by choosing $c=\lambda_1^{1/2}\lambda_2^{-1/4}, d=\lambda_1^{-1/2}\lambda_2^{1/4}/2$ and $\delta,\delta'$ sufficiently small, we get for {\it all} $\ensuremath \boldsymbol{z}\in \Omega_{p,q}$ the following bounds:
$$ r \ll (2^{-1/2}-1/2-\delta-\delta')\lambda_2^{1/2} r\leq |h^{(2,0)}(\ensuremath \boldsymbol{z}) |, |h^{(0,2)}(\ensuremath \boldsymbol{z})| \ll r,\quad |h^{(1,1)}(\ensuremath \boldsymbol{z})|\ll r. $$ Since the determinant of the Hessian matrix is unchanged under the change of variable corresponding to (\ref{cov}), the result follows from Lemma \ref{vanderC}. Observe that the implied constant we get from Lemma \ref{vanderC} is indeed uniform in $a$, $b$ and $t$, since the angles of the rectangles $\Omega_{p,q}$ to the coordinate axes are fixed. \end{proof}
We are now ready to finish the proof of our main theorem. \begin{proof}[Proof of Proposition \ref{mainprop} and Theorem \ref{mainthm}] Combining (\ref{sumtoint}) and Lemma \ref{lemmatomainprop}, we get the following bound for all $\ensuremath \boldsymbol{\mu}$ as above; \begin{align*} S'(\ensuremath \boldsymbol{\mu})&= \sum_{p,q}S_{p,q}(\ensuremath \boldsymbol{\mu})\\
&\ll \sum_{p,q} \left(|t|^{-1+\ensuremath\varepsilon} \frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{a^{1/2}Q(\ensuremath \boldsymbol{\mu})^{1/2}}+l_2\right) \\
&\ll \frac{X_1X_2}{a^{-3/2}Q(\ensuremath \boldsymbol{X})^3 |t|^{-2-4\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{\mu})^{-1}} \cdot \left( |t|^{-1+\ensuremath\varepsilon} \frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{a^{1/2}Q(\ensuremath \boldsymbol{\mu})^{1/2}}+\frac{Q(\ensuremath \boldsymbol{X})^{3/2}}{a^{1/2} |t|^{1+2\ensuremath\varepsilon}Q(\ensuremath \boldsymbol{\mu})^{1/2}}\right)\\
&\ll a^{1/2} \frac{|t|^{1+5\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{\mu})^{1/2}}{Q(\ensuremath \boldsymbol{X})^{1/2}}, \end{align*} where we used $a^{1/2}X_1X_2\ll Q(\ensuremath \boldsymbol{X})$. Plugging this into Lemma \ref{Wdiff} yields; \begin{align*} \frac{1}{Q(\ensuremath \boldsymbol{X})^{1/2}}\sum_{\substack{X_1 \leq x_1\leq X_1'\\ X_2\leq x_2\leq X_2'}} & e^{if(x_1,x_2)}\\
&\ll \frac{X_1X_2}{Q(\ensuremath \boldsymbol{X})^{1/2}\rho} +\frac{(X_1X_2)^{1/2}}{Q(\ensuremath \boldsymbol{X})^{1/2}\rho}\left( \sum_{0\leq \mu_1, \mu_2\leq \rho} \frac{a^{1/2} |t|^{1+5\ensuremath\varepsilon} Q(\ensuremath \boldsymbol{\mu})^{1/2}}{Q(\ensuremath \boldsymbol{X})^{1/2}} \right)^{1/2}\\
&\ll \frac{Q(\ensuremath \boldsymbol{X})^{1/2}}{a^{1/2}\rho}+ \frac{|t|^{1/2+3\ensuremath\varepsilon}}{ Q(\ensuremath \boldsymbol{X})^{1/4}\rho}\left( \sum_{0\leq \mu_1, \mu_2\leq \rho} Q(\ensuremath \boldsymbol{\mu})^{1/2} \right)^{1/2}\\
&\ll \frac{Q(\ensuremath \boldsymbol{X})^{1/2}}{a^{1/2}\rho}+ \frac{|t|^{1/2+3\ensuremath\varepsilon}a^{1/4}}{ Q(\ensuremath \boldsymbol{X})^{1/4}\rho}\left( \sum_{||\ensuremath \boldsymbol{\mu}||\leq \rho} ||\ensuremath \boldsymbol{\mu}|| \right)^{1/2}\\
&\ll \frac{Q(\ensuremath \boldsymbol{X})^{1/2}}{a^{1/2}\rho}+ \frac{|t|^{1/2+3\ensuremath\varepsilon}a^{1/4} \rho^{1/2} }{ Q(\ensuremath \boldsymbol{X})^{1/4}}.
\end{align*}
Finally we choose an integer $ \rho \asymp Q(\ensuremath \boldsymbol{X})^{1/2}|t|^{-1/3} a^{-1/2} $ to balance the terms, which yields the desired bound $\ll_\ensuremath\varepsilon |t|^{1/3+3\ensuremath\varepsilon}$. This choice of $\rho$ is admissible with respect to the conditions in Lemma \ref{Wdiff} since first of all
$$ \rho \ll a^{1/4} |t|^{1/2+\ensuremath\varepsilon} |t|^{-1/3}a^{-1/2} =|t|^{1/6+\ensuremath\varepsilon}a^{-1/4}, $$ which is less than $X_1$ and $X_2$ by (\ref{lowerbound}) and secondly we have $\rho \gg 1$, which again follows from (\ref{lowerbound}). \\ This proves Proposition \ref{mainprop} and consequently we conclude the proof of Theorem \ref{mainthm}. \end{proof}
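For the reader's convenience we record the routine verification that this choice of $\rho$ indeed balances the two terms of the last bound: substituting $\rho \asymp Q(\ensuremath \boldsymbol{X})^{1/2}|t|^{-1/3} a^{-1/2}$ gives
\begin{align*} \frac{Q(\ensuremath \boldsymbol{X})^{1/2}}{a^{1/2}\rho}&\asymp \frac{Q(\ensuremath \boldsymbol{X})^{1/2}}{a^{1/2}\, Q(\ensuremath \boldsymbol{X})^{1/2}|t|^{-1/3}a^{-1/2}}=|t|^{1/3},\\ \frac{|t|^{1/2+3\ensuremath\varepsilon}a^{1/4} \rho^{1/2}}{Q(\ensuremath \boldsymbol{X})^{1/4}}&\asymp \frac{|t|^{1/2+3\ensuremath\varepsilon}a^{1/4}\, Q(\ensuremath \boldsymbol{X})^{1/4}|t|^{-1/6}a^{-1/4}}{Q(\ensuremath \boldsymbol{X})^{1/4}}=|t|^{1/3+3\ensuremath\varepsilon}, \end{align*} so both terms are indeed $\ll_\ensuremath\varepsilon |t|^{1/3+3\ensuremath\varepsilon}$.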
\section{Lower bounds for the sup norm and a conjecture} As a concluding remark, we make some observations on the best possible bound of the type (\ref{bound1}). First of all, the appearance of $y^\delta$ in (\ref{bound1}) is necessary in the sense that, for fixed $t$, the Eisenstein series is unbounded because of the constant Fourier coefficient. We will now show that the lower bound $\delta\geq 1/2$ holds for any bound of the form (\ref{bound1}), and state a uniform version of the sup norm conjecture for Eisenstein series. \\
We have for $t$ fixed the following crude bound for the $K$-Bessel function \cite[p. 60]{Iw}; $$ K_{it}(y)\ll_t y^{-1/2}e^{-y},$$ as $y\rightarrow \infty$. Thus from the Fourier expansion of the Eisenstein series \cite[Theorem 3.4]{Iw}; \begin{align*} E(z,s)&=y^s+\varphi(s)y^{1-s}+ 4\sqrt{y} \sum_{n\geq 1} \frac{K_{s-1/2}(2\pi y n)\tau_{s-1/2}(n)}{\Gamma(s) \zeta(2s) \pi^{-s}}\, \cos(2\pi x n), \end{align*} we see that \begin{align}\label{trivialbound}E(z,1/2+it)=y^{1/2+it}+\varphi(1/2+it)y^{1/2-it}+O_t(e^{-\pi y}). \end{align} Now observe that for fixed $t\geq 1$, we can choose arbitrarily large $y$ such that $$1+\varphi(1/2+it)y^{-2it}=2,$$
using that $|\varphi(1/2+it)|=1$.\\ For such $y$, we thus have $$E(z,1/2+it)=y^{1/2}(2y^{it}+o_t(1))\gg y^{1/2},$$ when $t$ is sufficiently large. Since we can let $y\rightarrow \infty$, we conclude that any bound of the form (\ref{bound1}) has to satisfy $\delta\geq 1/2$.\\
One might speculate that the following holds for any $\ensuremath\varepsilon>0$;
\begin{equation}\label{conjecture} \text{\bf Conjecture: }\quad E(z, 1/2+it)\ll_\ensuremath\varepsilon y^{1/2} (|t|+1)^\ensuremath\varepsilon , \end{equation} uniformly for $z\in \mathcal{F}$, the standard fundamental domain (\ref{F}) for $\Gamma_0(1)$. Note that this conjecture together with (\ref{squareL}) implies simultaneous Lindelöf in the $t$-aspect and on average in the $D$-aspect for the family of class group $L$-functions of imaginary quadratic fields. \\
\end{document}
\begin{document}
\title{Activation of entanglement from quantum coherence and superposition}
\author{Lu-Feng Qiao}
\thanks{These authors contributed equally to this work.} \affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Jun Gao}
\thanks{These authors contributed equally to this work.} \affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Alexander Streltsov}
\thanks{These authors contributed equally to this work.} \affiliation{Faculty of Applied Physics and Mathematics, Gda\'{n}sk University of Technology, 80-233 Gda\'{n}sk, Poland}
\affiliation{National Quantum Information Centre in Gda\'{n}sk, 81-824 Sopot, Poland}
\author{Swapan Rana}
\affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, ES-08860 Castelldefels, Spain}
\author{Ruo-Jing~Ren}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Zhi-Qiang Jiao}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Cheng-Qiu Hu}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Xiao-Yun Xu}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Ci-Yu Wang}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Hao Tang}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Ai-Lin Yang}
\affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\author{Zhi-Hao Ma}
\affiliation{Department of Mathematics, Shanghai Jiaotong University, Shanghai 200240, China}
\author{Maciej Lewenstein}
\affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, ES-08860 Castelldefels, Spain}
\affiliation{ICREA, Pg.~Lluis Companys 23, ES-08010 Barcelona, Spain}
\author{Xian-Min Jin}
\thanks{xianmin.jin@sjtu.edu.cn} \affiliation{State Key Laboratory of Advanced Optical Communication Systems and Networks, Institute of Natural Sciences $\&$ Department of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China}
\affiliation{Synergetic Innovation Center of Quantum Information and Quantum Physics, University of Science and Technology of China, Hefei, Anhui 230026, China}
\maketitle \textbf{Quantum entanglement and coherence are two fundamental features of nature, arising from the superposition principle of quantum mechanics \cite{Schrodinger1935a}. While considered as puzzling phenomena in the early days of quantum theory \cite{EinsteinPhysRev.47.777}, it is only very recently that entanglement and coherence have been recognized as resources for the emerging quantum technologies, including quantum metrology, quantum communication, and quantum computing \cite{Horodecki2009,Streltsov2016}. In this work we study the limitations for the interconversion between coherence and entanglement. We prove a fundamental no-go theorem, stating that a general resource theory of superposition does not allow for entanglement activation. By constructing a CNOT gate as a free operation, we experimentally show that such activation is possible within the more constrained framework of quantum coherence. Our results provide new insights into the interplay between coherence and entanglement, representing a substantial step forward for solving longstanding open questions in quantum information science.}
Quantum resource theories provide a fundamental framework for studying general notions of nonclassicality, including quantum entanglement \cite{Vedral1997,Horodecki2009} and coherence \cite{Baumgratz2014,Streltsov2016}. Any such resource theory is based on the notion of free states and free operations. Free operations are physical transformations which do not consume any resources. They strongly depend on the problem under study, and are usually motivated by physical or technological constraints. In entanglement theory, these constraints are naturally given by the \emph{distance lab paradigm}: two spatially separated parties can perform quantum measurements in their local labs, but can only exchange classical information between each other.
Free states of a resource theory are quantum states which can be produced without consuming any resources. In entanglement theory, these free states are called separable \cite{WernerPhysRevA.40.4277}. Various quantum protocols require the presence of entanglement. These include quantum teleportation \cite{BennettPhysRevLett.70.1895,Jin2010}, quantum cryptography \cite{EkertPhysRevLett.67.661}, and quantum state merging \cite{Horodecki2005}. As has been demonstrated very recently, it is indeed possible to establish and maintain a high degree of entanglement over large distances \cite{Yin2017}.
The resource theory of quantum coherence studies technological limitations for establishing quantum superpositions \cite{Baumgratz2014,Streltsov2016}. This theory requires the existence of a distinguished basis, which can be interpreted as \emph{classical}, and is usually present due to the unavoidable decoherence \cite{Zurek2003}. Quantum states belonging to this basis are then called incoherent, and considered as the free states of coherence theory. Superpositions of these free states are said to possess coherence. Incoherent operations are free operations of coherence theory: they correspond to quantum measurements which do not create coherence for individual measurement outcomes \cite{Baumgratz2014}. Recent results show that coherence plays a crucial role for quantum metrology \cite{Giovannetti2011NaPho,MarvianPhysRevA.94.052324}, and that coherence might be more suitable than entanglement to capture the performance of quantum algorithms \cite{HilleryPhysRevA.93.012111,Matera2016}. Recent investigations also show that coherence and entanglement play an important role in biological systems \cite{Huelga2013}.
Due to the aforementioned significance of coherence and entanglement for quantum technologies, it is crucial to understand how these fundamental resources can be converted into each other. In this work we address this question, and confirm our theoretical results by an experiment with photons. We present a fundamental no-go theorem, showing that a general resource theory of superposition does not allow for entanglement activation, while this is possible within the more constrained theory of coherence. This result shares the same spirit with the celebrated no-cloning theorem \cite{Wootters1982}: a general quantum state cannot be copied, while cloning is in fact possible for a restricted set of mutually orthogonal states. We experimentally demonstrate entanglement activation from coherence by preparing photon states with different degrees of coherence and activating them into entanglement by applying an optical CNOT gate. Our results lead to a fundamental insight about entanglement quantifiers, proving that trace norm entanglement violates strong monotonicity. This shows how recent results on the resource theory of quantum coherence can be used for solving important open questions in quantum information science.
\section*{No-go Theorem of Entanglement Activation}
Entanglement activation from coherence has been first studied in \cite{Streltsov2015}. There, it was shown that any nonzero amount of coherence in a quantum state $\rho$ can be activated into entanglement by coupling the state to an incoherent ancilla $\sigma_{i}$ and performing a bipartite incoherent operation on the total state $\rho\otimes\sigma_{i}$. On a quantitative level, the amount of coherence in a state $\rho$ bounds the amount of activated entanglement as~\cite{Streltsov2015} \begin{equation} E(\Lambda_{i}[\rho\otimes\sigma_{i}])\leq C(\rho),\label{eq:EntCoh-1} \end{equation} where $\Lambda_{i}$ is an incoherent operation, and $E$ and $C$ are general distance-based entanglement and coherence monotones, see Methods section for rigorous definitions and more details. In many relevant cases, the optimal incoherent operation saturating the inequality (\ref{eq:EntCoh-1}) is the CNOT gate (see Fig.~\ref{fig:conceptual}).
We will now study this relation from a very general perspective, by resorting to the resource theory of superposition \cite{Killoran2016,Theurer2017}. In this theory, the free states $\{\ket{c_{i}}\}$ are not necessarily mutually orthogonal. Thus, the theory of superposition is more general than the resource theory of coherence, and is indeed powerful enough to cover also the resource theory of entanglement, which is obtained by allowing for continuous sets of free states. Any convex combination of the free states $\{\ket{c_{i}}\}$ is also a free state, which is a very natural assumption in any quantum resource theory. Free operations and further properties of the resource theory of superposition have been discussed in \cite{Killoran2016,Theurer2017}.
\begin{figure}\label{fig:conceptual}
\end{figure}
In the following we will study the resource theory of superposition for a two-qubit system. We assume that each of the qubits has two pure free states which we denote by $\ket{c_{0}}$ and $\ket{c_{1}}$, assuming that $0<|\!\braket{c_{0}|c_{1}}\!|<1$. Pure free states of both qubits have the form $\ket{c_{i}}\otimes\ket{c_{j}}$, and convex combinations of such states are also free. We will now consider unitary operations which do not create superpositions of the free states on both qubits. Following the notion of Ref. \cite{Theurer2017}, we will call these unitaries \emph{superposition-free}. In general, these unitaries induce the transformation \begin{equation} U\ket{c_{k}}\ket{c_{l}}=e^{i\phi_{kl}}\ket{c_{m}}\ket{c_{n}}\label{eq:U} \end{equation} with some phases $e^{i\phi_{kl}}$. Our main question in this context is the following: \emph{can a bipartite superposition-free unitary create entanglement?} The answer to this question is affirmative in the traditional framework of quantum coherence, i.e., for orthogonal free states $\ket{c_{0}}$ and $\ket{c_{1}}$ \cite{Streltsov2015}. In this case, the CNOT gate is a superposition-free unitary which can create entanglement. It is reasonable to believe that these ideas transfer to the more general concept of superpositions, and that superposition-free unitaries also allow to create entanglement.
Quite surprisingly, we will see in the following that this is not the case for the framework considered here. This is the statement of the following theorem. \begin{thm} \label{thm:1}It is not possible to create entanglement via superposition-free unitaries on two qubits. \end{thm} \noindent We note that the theorem applies for the case where each of the qubits has two superposition-free states $\ket{c_{0}}$ and
$\ket{c_{1}}$ with $0<|\!\braket{c_{0}|c_{1}}\!|<1$. The proof of this theorem will be a combination of several results, which we will present below.
Before we prove the above theorem, we will first show that every superposition-free unitary on two qubits can be decomposed into two elementary operations, which we will denote by $V$ and $W$. The first elementary operation is the swap gate $V=\sum_{i,j}\ket{ij}\!\bra{ji}$, which corresponds to an exchange of the two qubits: \begin{equation} V\ket{c_{k}}\ket{c_{l}}\rightarrow\ket{c_{l}}\ket{c_{k}}.\label{eq:V} \end{equation} The second elementary operation transforms an initial superposition-free state $\ket{c_{k}}\ket{c_{l}}$ as follows: \begin{equation} W\ket{c_{k}}\ket{c_{l}}=e^{i\varphi_{k}}\ket{c_{\mathrm{mod}(k+1,2)}}\ket{c_{l}},\label{eq:W} \end{equation} where the phases $e^{i\varphi_{k}}$ are defined as \begin{equation}
e^{i\varphi_{0}}=1,\,\,\,\,\,\,e^{i\varphi_{1}}=\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}.\label{eq:phases} \end{equation} The existence of such a unitary is guaranteed by Lemma~3 in \cite{Marvian2013} (see also \cite{Chefles2004,Killoran2016}). Note that Eq.~(\ref{eq:W}) defines the action of $W$ onto any pure two-qubit state $\ket{\psi}$, since any such state can be written as $\ket{\psi}=\sum_{k,l}a_{kl}\ket{c_{k}}\ket{c_{l}}$ with complex numbers $a_{kl}$. Moreover, $W$ can be chosen to be a local unitary, acting on the first qubit only. With these tools, we are now in position to prove the following theorem. \begin{thm} \label{thm:2}There exist only eight superposition-free unitaries for two qubits, which can all be expressed as combinations of $V$ and $W$. \end{thm}
\noindent This theorem applies to the same framework of superposition as Theorem~\ref{thm:1}, i.e., it holds if each qubit has two superposition-free states $\ket{c_{0}}$ and $\ket{c_{1}}$ with $0<|\!\braket{c_{0}|c_{1}}\!|<1$. The proof of the theorem is given in Appendix~\ref{sec:Superposition-free-unitaries}. We list all eight possible transformations in Table~\ref{tab:1}. \begin{table} \begin{centering} \begin{tabular}{ccccccccc} \hline Unitary & $V^{2}$ & $V$ & $WVW$ & $(VW)^{2}$ & $W$ & $WV$ & $VWV$ & $VW$\tabularnewline \hline $e^{i\phi_{00}}$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$ & $1$\tabularnewline \hline
$e^{i\phi_{11}}$ & $1$ & $1$ & $\frac{\braket{c_{0}|c_{1}}^{2}}{\braket{c_{1}|c_{0}}^{2}}$ & $\frac{\braket{c_{0}|c_{1}}^{2}}{\braket{c_{1}|c_{0}}^{2}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$\tabularnewline \hline
$e^{i\phi_{01}}$ & $1$ & $1$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $1$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $1$\tabularnewline \hline
$e^{i\phi_{10}}$ & $1$ & $1$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$ & $1$ & $1$ & $\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}$\tabularnewline \hline \end{tabular} \par\end{centering}
\caption{\label{tab:1}\textbf{All superposition-free unitaries on two qubits.} Any superposition-free unitary on two qubits can be expressed as a product of elementary unitaries $V$ and $W$ given in the main text. The phases $e^{i\phi_{kl}}$ in the table correspond to the phases in Eq.~(\ref{eq:U}).} \end{table}
The tools provided so far give important insight on the structure of superposition-free unitaries for two qubits and allow us to complete the proof of Theorem~\ref{thm:1}. For this, it is enough to show that both elementary operations $V$ and $W$ cannot create entanglement. Clearly, entanglement cannot be created with the swap unitary $V$. The second elementary operation $W$ also cannot create entanglement, as it can be implemented as a local unitary acting on the first qubit only.
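Since Theorem~\ref{thm:1} may appear counterintuitive, it is instructive to check it in a small concrete example. The following numerical sketch is our own illustration, not part of the proof; for simplicity the free states are taken real, so the phases in Eq.~(\ref{eq:phases}) are trivial and $W$ reduces to the reflection about the bisector of $\ket{c_{0}}$ and $\ket{c_{1}}$. It verifies that both elementary unitaries map free product states to unentangled states:

```python
import numpy as np

# Two real, non-orthogonal free states with 0 < <c0|c1> < 1
alpha = 0.6
c0 = np.array([1.0, 0.0])
c1 = np.array([np.cos(alpha), np.sin(alpha)])

# For real free states, W exchanges |c0> <-> |c1> on one qubit;
# it is the reflection about their bisector, hence orthogonal (unitary).
b = (c0 + c1) / np.linalg.norm(c0 + c1)
W = 2 * np.outer(b, b) - np.eye(2)
assert np.allclose(W @ c0, c1) and np.allclose(W @ c1, c0)
assert np.allclose(W @ W.T, np.eye(2))

# V is the swap gate; neither V nor W (local on the first qubit) entangles:
V = np.eye(4)[[0, 2, 1, 3]]
for k in (0, 1):
    for l in (0, 1):
        psi = np.kron((c0, c1)[k], (c0, c1)[l])   # free product state
        for U in (V, np.kron(W, np.eye(2))):
            out = (U @ psi).reshape(2, 2)
            s = np.linalg.svd(out, compute_uv=False)
            assert s[1] < 1e-12                   # Schmidt rank 1: unentangled
print("V and W map free product states to unentangled states")
```

For complex overlaps the nontrivial phases of Eq.~(\ref{eq:phases}) enter the construction of $W$, but the conclusion is the same, as guaranteed by the theorem.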
At this point it is interesting to compare our results to results reported in \cite{Killoran2016,Regula1704.04153}. Applied to the setting considered here, the results of \cite{Killoran2016} imply that superposition can be converted into entanglement in a \emph{universal} way: there exists a (not necessarily superposition-free) quantum operation $\Lambda$ which universally converts any state of the form $\ket{\psi}=(\alpha_{0}\ket{c_{0}}+\alpha_{1}\ket{c_{1}})\otimes\ket{c_{0}}$ into an entangled state whenever both coefficients $\alpha_{0}$ and $\alpha_{1}$ are nonzero. Note that this is not a contradiction to our results presented above, as the quantum operation $\Lambda$ in this conversion is not necessarily superposition-free.
We will now show how recent advances in coherence theory can be used to solve important open questions in the theory of entanglement. For this, we recall that Eq.~(\ref{eq:EntCoh-1}) also applies to entanglement and coherence quantifiers based on the trace norm: \begin{align}
C_{\mathrm{t}}(\rho) & =\min_{\sigma\in\mathcal{I}}||\rho-\sigma||_{1},\label{eq:Ct}\\
E_{\mathrm{t}}(\rho) & =\min_{\sigma\in\mathcal{S}}||\rho-\sigma||_{1},\label{eq:Et} \end{align}
where $\mathcal{I}$ and $\mathcal{S}$ are the sets of incoherent and separable states, respectively. The trace norm $||M||_{1}=\mathrm{Tr}\sqrt{M^{\dagger}M}$
is one of the most important quantities in quantum information theory. Its significance comes from its operational interpretation, as $p=1/2+||\rho-\sigma||_{1}/4$ is the optimal probability for distinguishing two quantum states $\rho$ and $\sigma$ via quantum measurements. The coherence and entanglement quantifiers (\ref{eq:Ct}) and (\ref{eq:Et}) thus have the operational interpretation via the probability to distinguish a state $\rho$ from the set of incoherent and separable states, respectively.
Despite its clear operational significance, it is only very recently that the trace norm has been investigated within the resource theory of quantum coherence \cite{Rana2016,Chen2016,Yu2016}, and surprisingly little is known about the trace norm entanglement $E_{\mathrm{t}}$ \cite{Eisert2006}. Remarkably, it was shown in \cite{Yu2016} that $C_{\mathrm{t}}$ violates strong monotonicity: the trace norm coherence of a state can increase on average under a suitable incoherent operation. We refer to the Methods section for a rigorous definition of strong monotonicity. As we show in the following theorem, these results also extend to the trace norm entanglement, thus settling an important question in entanglement theory which was open for decades. \begin{thm} \label{thm:trace-norm}Trace norm entanglement is not a strong entanglement monotone. \end{thm} \noindent The proof of the theorem can be found in Appendix~\ref{sec:trace-norm}, where we in fact show that the trace norm entanglement can increase on average under a local measurement. This finishes the theoretical part of this work, and we will now focus on experimental entanglement activation from coherence.
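For concreteness, the trace norm quantifiers are easy to evaluate numerically in small dimensions. The following sketch is our own illustration (the grid search is a naive substitute for the exact optimization in Eq.~(\ref{eq:Ct})): it computes the trace norm coherence of a pure qubit state by brute force over incoherent, i.e.\ diagonal, states, and recovers the known fact that for qubits it coincides with the sum of the absolute values of the off-diagonal elements \cite{Shao2015}:

```python
import numpy as np

def trace_norm(M):
    """Trace norm ||M||_1 = Tr sqrt(M^dagger M), via singular values."""
    return np.sum(np.linalg.svd(M, compute_uv=False))

# Pure qubit state cos(theta)|0> + sin(theta)|1>
theta = 0.7
psi = np.array([np.cos(theta), np.sin(theta)])
rho = np.outer(psi, psi)

# Brute-force minimisation over incoherent (diagonal) qubit states
grid = np.linspace(0.0, 1.0, 2001)
C_t = min(trace_norm(rho - np.diag([p, 1.0 - p])) for p in grid)

# Optimal probability of distinguishing rho from its closest incoherent state
p_dist = 0.5 + C_t / 4

C_l1 = 2 * abs(np.cos(theta) * np.sin(theta))   # sum of |off-diagonal| entries
print(C_t, C_l1)                                # agree up to grid resolution
```

The optimal diagonal state here is simply the diagonal part of $\rho$; in higher dimensions the minimisation is no longer this straightforward \cite{Rana2016,Chen2016,Yu2016}.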
\section*{Experimental Entanglement Activation \protect \\ from Coherence }
\begin{figure*}\label{fig:setup}
\end{figure*}
The results presented above impose strong constraints on the possible activation of superpositions into entanglement. On the other hand, it is known that activation of entanglement from \emph{coherence} is possible \cite{Streltsov2015}, i.e., the aforementioned constraints can be circumvented if the free states $\ket{c_{0}}$ and $\ket{c_{1}}$ are orthogonal. In this case, as is shown in Fig.~\ref{fig:conceptual}, any nonzero amount of coherence in a state $\rho$ can be converted into entanglement by adding an incoherent ancilla $\sigma_{i}$ and performing a bipartite incoherent unitary on the total state $\rho\otimes\sigma_{i}$. As we will see in the following, such an activation can indeed be performed with current experimental techniques.
Following our previous discussion, the individual systems will be qubits. As a quantifier of coherence we will use the $\ell_{1}$-norm of coherence, which is a strong coherence monotone, and corresponds to the sum of the absolute values of the off-diagonal elements \cite{Baumgratz2014}: \begin{equation}
C(\rho)=\sum\limits _{i\neq j}\left|\rho_{ij}\right|.\label{eq:coherence} \end{equation} After performing a bipartite incoherent operation on the total state $\rho\otimes\sigma_{i}$, the amount of entanglement in the total state will be quantified via concurrence $E$. Concurrence is a natural entanglement quantifier for two-qubit states, as it admits the following closed expression \cite{Wootters1998}: \begin{equation} E(\rho)=\max\left\{ 0,\lambda_{1}-\lambda_{2}-\lambda_{3}-\lambda_{4}\right\} ,\label{eq:concurrence} \end{equation} where $\lambda_{i}$ are the square roots of the eigenvalues of $\rho\tilde{\rho}$ in decreasing order, and $\tilde{\rho}$ is defined as $\tilde{\rho}=(\sigma_{y}\otimes\sigma_{y})\rho^{*}(\sigma_{y}\otimes\sigma_{y})$ with Pauli $y$-matrix $\sigma_{y}$, and complex conjugation is taken in the computational basis.
\begin{table}[hb!]
\centering
\begin{tabular}{p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}}
\hline\hline\noalign{\smallskip}
ZZ & $\langle{00}\arrowvert$ & $\langle{01}\arrowvert$ & $\langle{10}\arrowvert$ & $\langle{11}\arrowvert$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\arrowvert{00}\rangle$ &0.929 &0.034 &0.033 &0.004 \\
$\arrowvert{01}\rangle$ &0.053 &0.914 &0.002 &0.031 \\
$\arrowvert{10}\rangle$ &0.004 &0.002 &0.159 &0.835 \\
$\arrowvert{11}\rangle$ &0.001 &0.005 &0.816 &0.178 \\
\noalign{\smallskip}\hline
\end{tabular}
\begin{tabular}{p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}|p{1.2cm}}
\hline\noalign{\smallskip}
XX & $\langle{00}\arrowvert$ & $\langle{01}\arrowvert$ & $\langle{10}\arrowvert$ & $\langle{11}\arrowvert$\\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\arrowvert{00}\rangle$ &0.896 &0.004 &0.099 &0.001 \\
$\arrowvert{01}\rangle$ &0.002 &0.173 &0.001 &0.824 \\
$\arrowvert{10}\rangle$ &0.103 &0.002 &0.892 &0.003 \\
$\arrowvert{11}\rangle$ &0.001 &0.827 &0.001 &0.171 \\
\noalign{\smallskip}\hline\hline
\end{tabular}
\caption{\label{tab:truthtable}Truth table of the CNOT gate in the $ZZ$ (top) and $XX$ (bottom) bases.}
\end{table}
\begin{figure*}\label{fig:tomography}
\end{figure*}
As we show in Appendix~\ref{sec:concurrence}, Eq.~(\ref{eq:EntCoh-1}) also applies in this situation, i.e., the amount of coherence in the state $\rho$ bounds the amount of concurrence that can be activated from the state via incoherent operations. Moreover, the optimal incoherent operation in the above setting is the CNOT gate, as it allows to saturate the inequality~(\ref{eq:EntCoh-1}). We also note that for the systems considered here the $\ell_{1}$-norm coherence coincides with the trace norm coherence~\cite{Shao2015}. Thus, the results discussed in this section also hold if $C$ is the trace norm coherence defined in Eq.~(\ref{eq:Ct}).
Here, we experimentally verify this relation between coherence and entanglement by means of quantum optics, exploiting the fact that polarization is easy to manipulate with high precision. Utilizing the phase flip introduced by second-order interference, we construct the incoherent operation as a combination of a controlled-phase gate and two Hadamard gates. We prepare a set of system states with different amounts of coherence, and observe that coherence and entanglement are highly correlated, within errors acceptable for the state of the art of optical CNOT operations \cite{Kiesel2005,Okamoto2005,Crespi2011}.
A sketch of our experimental setup is shown in Fig.~\ref{fig:setup}. It can be divided into three parts: the preparation of identical photons, the incoherent operation, and the state analysis module. We use a mode-locked Ti:sapphire oscillator emitting $130$~fs pulses centered at $780$~nm with a repetition rate of $77$~MHz. The near-infrared light is frequency doubled to ultraviolet light at $390$~nm in a $1.3$~mm thick $LiB_{3}O_{5}$ (LBO) crystal. Two identical photons are created by pumping a $2$~mm thick $\beta$-$BaB_{2}O_{4}$ (BBO) crystal via a type-II spontaneous parametric down-conversion process in a beam-like scheme \cite{Kim2003,Kwiat1995}. Two $3$~nm band-pass filters are used to improve the interference visibility, as they ensure the spectral indistinguishability of the photon pairs. The photons are coupled into single-mode fibers, one serving as the system photon and the other as the ancilla photon. A quarter-wave plate and a half-wave plate are used in both arms to compensate the polarization rotation induced by the single-mode fibers.
The two indistinguishable photons are then injected into the CNOT gate module based on second-order interference \cite{Hong1987}. The key element in this optical CNOT gate scheme is a partial polarizing beam splitter (PPBS), which perfectly reflects vertical polarization and reflects (transmits) $1/3$ ($2/3$) of horizontal polarization. We mount the coupler for the ancilla photon on a one-dimensional translation stage to ensure the temporal overlap between the photon pairs. The ideal HOM interference visibility on this PPBS is $V_{th}=80\%$, and we experimentally achieve $V_{exp}=(67.9\pm1.0)\%$. The relative visibility is $V_{re}=V_{exp}/V_{th}=84.9\%$. The mismatch can be attributed to the imperfection of the PPBS, whose reflectivity for horizontal polarization of $29\%$ deviates from the ideal value of $33.3\%$. In order to evaluate the performance of the CNOT gate, we measure the truth tables and estimate the process fidelity \cite{Hofmann2005}. In the $ZZ$ basis, we define the computational basis as $\arrowvert{0}\rangle_{z}=\arrowvert{H}\rangle$ and $\arrowvert{1}\rangle_{z}=\arrowvert{V}\rangle$ for the control qubit and $\arrowvert{0}\rangle_{z}=\arrowvert{D}\rangle$ and $\arrowvert{1}\rangle_{z}=\arrowvert{A}\rangle$ for the target qubit. The CNOT gate flips the target qubit when the control qubit is in $\arrowvert{1}\rangle_{z}$. In the $XX$ basis, this is equivalent to transforming the bases with a Hadamard gate, so that the control qubit is encoded in the $\arrowvert{D}\rangle$-$\arrowvert{A}\rangle$ basis and the target qubit in the $\arrowvert{H}\rangle$-$\arrowvert{V}\rangle$ basis. Table~\ref{tab:truthtable} gives the normalized probabilities for all combinations of the four input and output states in both the $ZZ$ and $XX$ bases. We can see that the roles of the control and target qubits swap in the $XX$ basis, where the control qubit remains unchanged when the target qubit is $\arrowvert{0}\rangle_{x}$ and flips when the target qubit is $\arrowvert{1}\rangle_{x}$.
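The swap of control and target in the $XX$ basis reflects the textbook identity $(H\otimes H)\,\mathrm{CNOT}\,(H\otimes H)$ being a CNOT with the roles of the two qubits exchanged. A quick numerical check of this identity (illustrative only, not part of the measurement):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

# CNOT with the first qubit as control, second as target.
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

# Conjugating by Hadamards on both qubits swaps control and target.
HH = np.kron(H, H)
CNOT_x = HH @ CNOT @ HH

# CNOT with the second qubit as control, first as target.
CNOT_rev = np.array([[1, 0, 0, 0], [0, 0, 0, 1],
                     [0, 0, 1, 0], [0, 1, 0, 0]], dtype=float)
```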
\begin{figure}\label{fig:equivalence}
\end{figure}
The fidelity can be defined as the probability to obtain the correct output, averaged over all inputs. From this definition we calculate $F_{zz}=0.87$ and $F_{xx}=0.86$. These two complementary fidelities bound the quantum process fidelity according to \cite{Hofmann2005} \begin{equation} F_{zz}+F_{xx}-1\le F_{process}\le \min\{F_{zz},F_{xx}\}. \end{equation} Thus, we can estimate $0.73\le F_{process}\le0.86$. The process fidelity also benchmarks the minimal entanglement capability $C\ge2F_{process}-1$, which in our case is larger than~$0.47$.
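The Hofmann bounds can be evaluated directly from the two measured truth-table fidelities; a minimal sketch (function name is ours):

```python
def process_fidelity_bounds(f_zz, f_xx):
    # Hofmann bounds: F_zz + F_xx - 1 <= F_process <= min(F_zz, F_xx).
    return max(0.0, f_zz + f_xx - 1.0), min(f_zz, f_xx)

lower, upper = process_fidelity_bounds(0.87, 0.86)
# Lower bound on the entangling capability: C >= 2 * F_process - 1.
c_lower = 2 * lower - 1
```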
After experimentally characterizing the incoherent operation, we generate a series of quantum states \begin{equation} \begin{aligned}\rho=\cos^{2}(\vartheta)\arrowvert{H}\rangle\langle{H}\arrowvert+\cos(\vartheta)\sin(\vartheta)\arrowvert{H}\rangle\langle{V}\arrowvert\\ +\sin(\vartheta)\cos(\vartheta)\arrowvert{V}\rangle\langle{H}\arrowvert+\sin^{2}(\vartheta)\arrowvert{V}\rangle\langle{V}\arrowvert. \end{aligned} \end{equation} By choosing different values of the polarization parameter $\vartheta$, we can tune the amount of coherence of the system qubit in the $\{ \arrowvert{H}\rangle,\arrowvert{V}\rangle \}$ basis. We split the system qubit on a beam splitter and prepare two copies with the same polarization to test the relationship between coherence and entanglement. The ancilla qubit is fixed to the incoherent state $\sigma_{i}=\arrowvert{H}\rangle\langle{H}\arrowvert$ during the whole experiment. We first perform one-qubit tomography with a combination of a quarter-wave plate and a polarizer to reconstruct the $2\times2$ density matrix of the system qubit \cite{James2001} and estimate the amount of coherence defined in Eq.~(\ref{eq:coherence}). The other copy of the system qubit is guided to the CNOT gate and interferes with the ancilla qubit on the PPBS. After the incoherent operation, two-qubit tomography is used to evaluate the entanglement, quantified via the concurrence in Eq.~(\ref{eq:concurrence}).
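For this pure-state family, the $\ell_{1}$-norm coherence in the $\{|H\rangle,|V\rangle\}$ basis reduces to $C(\vartheta)=|\sin 2\vartheta|$, maximal at $\vartheta=\pi/4$. A short sketch of this relation, assuming the $\ell_{1}$-norm quantifier (helper names are ours):

```python
import numpy as np

def prepared_state(theta):
    # Pure system state cos(theta)|H> + sin(theta)|V> as a density matrix.
    psi = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(psi, psi)

def coherence(rho):
    # l1-norm coherence: sum of absolute off-diagonal elements.
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

thetas = np.linspace(0, np.pi / 4, 5)
cs = np.array([coherence(prepared_state(t)) for t in thetas])
# For this family C(theta) = |sin(2*theta)|, maximal at theta = pi/4.
```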
In our experiment, we prepare seven different system states to test the relation between coherence and entanglement in Eq.~(\ref{eq:EntCoh-1}). As we vary the coherence parameter, the density matrices of the entangled states generated by the incoherent operation change accordingly, as demonstrated in Fig.~\ref{fig:tomography}, ranging from separable states to a maximally entangled state. To further evaluate the relation between coherence and entanglement, we compare their exact values in Fig.~\ref{fig:equivalence}. The blue bars represent the amount of coherence and the red bars the amount of entanglement; the outer frames are the theoretical predictions for the ideal case.
With high-extinction polarization devices, we are able to prepare the maximally coherent state $\arrowvert{D}\rangle=(\arrowvert{H}\rangle+\arrowvert{V}\rangle)/\sqrt{2}$; the measured coherence is $C=0.999$, very close to the ideal value. The measured entanglement of the generated entangled state is $E=0.864$. We then decrease the coherence of the system qubit, and the corresponding entanglement follows the same tendency. The system state with the minimal coherence in our experiment has $C=0.09$, and the corresponding activated entanglement between the two qubits is measured to be $E=0.07$. Given the imperfection of the incoherent operation, a certain mismatch between the measured entanglement and coherence remains. A considerably higher conversion efficiency can be expected after further optimization of the devices.
\section*{Conclusions}
In this work we explored the possibilities and limitations of activating entanglement from quantum coherence and superposition. While coherence can be activated into entanglement via free unitaries of the theory \cite{Streltsov2015}, we have shown that such an activation is not possible within the more general theory of quantum superposition. We have rigorously proven this statement for a general two-qubit system, where each of the qubits has two superposition-free states $\ket{c_{0}}$
and $\ket{c_{1}}$ with $0<|\braket{c_{0}|c_{1}}|<1$. We have further shown that only eight superposition-free unitaries exist in this setting, and that all of them can be represented in terms of two elementary operations.
An important consequence of our discussion is the finding that in the general framework of superposition considered here there is no unitary which corresponds to the action of a CNOT gate, i.e., which flips the state of the second qubit between $\ket{c_{0}}$ and $\ket{c_{1}}$ conditioned on the first qubit being in the state $\ket{c_{0}}$ or $\ket{c_{1}}$. Such a CNOT gate exists only in the more restricted resource theory of coherence, which arises in our framework in the limit of orthogonal states $\ket{c_{0}}$ and $\ket{c_{1}}$. These results are analogous to the no-cloning theorem \cite{Wootters1982}, i.e., while it is not possible to clone a general quantum state, cloning is possible in a more restricted theory, where the considered states are mutually orthogonal.
We have experimentally demonstrated that entanglement activation from coherence is indeed possible. We have prepared single-qubit states with different values of coherence by using polarized photons and experimentally activated coherence into entanglement via an optical CNOT gate which is the optimal incoherent operation in the considered setting. We have then compared the amount of final entanglement to the amount of initial coherence, finding a good agreement between theory and experiment. Both quantities clearly show the same tendency: a large amount of initial coherence leads to a large amount of activated entanglement.
We also note that related results have been presented very recently in \cite{Wu1710.01738}, where cyclic interconversion between coherence and entanglement has been demonstrated experimentally, based on the framework of assisted coherence distillation \cite{Chitambar2016,Streltsov2017} and coherence activation from entanglement \cite{Streltsov2015} and quantum discord~\cite{Modi2012,Ma2016}.
Our work also leads to a surprising result in entanglement theory, showing that the trace norm entanglement violates strong monotonicity. This solves an important question in quantum information theory that had been open for decades, and clearly demonstrates how recent developments in the resource theory of quantum coherence \cite{Streltsov2016} can be applied to advance other research areas of quantum information and technology.
\section*{Methods}
An important question in any quantum resource theory is to quantify the amount of the resource in a given quantum state. A general resource quantifier $\mathcal{R}$ should at least have the following property: \begin{equation} \mathcal{R}(\Lambda_{f}[\rho])\leq\mathcal{R}(\rho),\label{eq:monotone} \end{equation} where $\Lambda_{f}$ is a free operation of the resource theory. In entanglement theory, $\Lambda_{f}$ are usually chosen to be \emph{local operations and classical communication} \cite{Horodecki2009}. In the resource theory of coherence, a possible choice for $\Lambda_{f}$ are \emph{incoherent operations} introduced in \cite{Baumgratz2014}, and alternative frameworks have also been discussed recently \cite{Winter2016,Yadin2016}, see also the review \cite{Streltsov2016} and references therein.
Any nonnegative function $\mathcal{R}$ which fulfills Eq.~(\ref{eq:monotone}) is called a \emph{monotone} of the corresponding resource theory. A very general family is that of distance-based monotones \begin{equation} \mathcal{R}_{D}(\rho)=\inf_{\sigma\in\mathcal{F}}D(\rho,\sigma), \end{equation} where $\mathcal{F}$ is the set of free states and $D$ is a suitable distance. The quantity $\mathcal{R}_{D}$ fulfills monotonicity~(\ref{eq:monotone}) for any distance $D$ which is contractive under quantum operations:
$D(\Lambda[\rho],\Lambda[\sigma])\leq D(\rho,\sigma)$. Important examples for such distances are the quantum relative entropy $S(\rho||\sigma)=\mathrm{Tr}[\rho\log_{2}\rho]-\mathrm{Tr}[\rho\log_{2}\sigma]$
and the trace distance $D_{\mathrm{t}}(\rho,\sigma)=\frac{1}{2}||\rho-\sigma||_{1}$
with the trace norm $||M||_{1}=\mathrm{Tr}\sqrt{M^{\dagger}M}$.
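Both distances can be evaluated numerically by diagonalization. The sketch below (helper names are ours) uses the fact that for a Hermitian difference $\rho-\sigma$ the trace norm is the sum of the absolute eigenvalues; the relative entropy assumes $\mathrm{supp}(\rho)\subseteq\mathrm{supp}(\sigma)$.

```python
import numpy as np

def trace_distance(rho, sigma):
    # D_t(rho, sigma) = (1/2) ||rho - sigma||_1; for a Hermitian
    # difference the trace norm is the sum of |eigenvalues|.
    ev = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(ev))

def relative_entropy(rho, sigma):
    # S(rho||sigma) = Tr[rho log2 rho] - Tr[rho log2 sigma].
    def log2m(m):
        w, v = np.linalg.eigh(m)
        # Zero eigenvalues contribute 0 to Tr[rho log2 rho].
        lw = np.where(w > 1e-12, np.log2(np.clip(w, 1e-300, None)), 0.0)
        return v @ np.diag(lw) @ v.conj().T
    return float(np.real(np.trace(rho @ (log2m(rho) - log2m(sigma)))))

# Example: pure state |+><+| versus the maximally mixed qubit state.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
mixed = np.eye(2) / 2
dt = trace_distance(plus, mixed)      # 0.5
s = relative_entropy(plus, mixed)     # 1 bit
```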
In many resource theories it is also important to consider \emph{selective free operations}. Here, an initial quantum state $\rho$ is transformed into an ensemble \begin{equation} \rho\rightarrow\{q_{i},\sigma_{i}\}\label{eq:selective} \end{equation} with probabilities $q_{i}$ and quantum states $\sigma_{i}$. In entanglement theory, this is motivated by the fact that the parties can -- in principle -- record the outcome of their local measurements. Each state $\sigma_{i}$ then corresponds to the state of the system for a particular sequence of local measurement outcomes, with a corresponding overall probability $q_{i}$. A similar approach has been taken recently in the resource theory of coherence \cite{Baumgratz2014,Winter2016,Streltsov2016,Yadin2016}.
For a resource theory with selective free operations as given in Eq.~(\ref{eq:selective}), it is reasonable to demand that the corresponding resource quantifier $\mathcal{R}$ admits \emph{strong monotonicity}: \begin{equation} \sum_{i}q_{i}\mathcal{R}(\sigma_{i})\leq\mathcal{R}(\rho)\label{eq:strong-monotone} \end{equation} for any ensemble $\{q_{i},\sigma_{i}\}$ which can be obtained from the state $\rho$ by the means of selective free operations. The motivation for this requirement is similar to the standard monotonicity (\ref{eq:monotone}): the resource should not increase on average even if the outcomes of free measurements are recorded. Entanglement and coherence monotones based on the relative entropy fulfill strong monotonicity \cite{Vedral1997,Baumgratz2014}. As was shown in \cite{Yu2016}, the trace norm coherence violates strong monotonicity. As we prove in Appendix~\ref{sec:trace-norm}, strong monotonicity is also violated by the trace norm entanglement. Note that strong monotonicity~(\ref{eq:strong-monotone}) implies monotonicity~(\ref{eq:monotone}) if $\mathcal{R}$ is convex.
\begin{thebibliography}{46} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Schr\"odinger}(1935)}]{Schrodinger1935a}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Schr\"odinger}},\ }\href {\doibase 10.1007/BF01491891} {\bibfield {journal}
{\bibinfo {journal} {Naturwissenschaften}\ }\textbf {\bibinfo {volume}
{23}},\ \bibinfo {pages} {807} (\bibinfo {year} {1935})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Einstein}\ \emph {et~al.}(1935)\citenamefont
{Einstein}, \citenamefont {Podolsky},\ and\ \citenamefont
{Rosen}}]{EinsteinPhysRev.47.777}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Einstein}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Podolsky}},
\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Rosen}},\ }\href
{\doibase 10.1103/PhysRev.47.777} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev.}\ }\textbf {\bibinfo {volume} {47}},\ \bibinfo {pages} {777}
(\bibinfo {year} {1935})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Horodecki}\ \emph {et~al.}(2009)\citenamefont
{Horodecki}, \citenamefont {Horodecki}, \citenamefont {Horodecki},\ and\
\citenamefont {Horodecki}}]{Horodecki2009}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Horodecki}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Horodecki}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Horodecki}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Horodecki}},\ }\href {\doibase 10.1103/RevModPhys.81.865} {\bibfield
{journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume}
{81}},\ \bibinfo {pages} {865} (\bibinfo {year} {2009})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {{Streltsov}}\ \emph {et~al.}(2016)\citenamefont
{{Streltsov}}, \citenamefont {{Adesso}},\ and\ \citenamefont
{{Plenio}}}]{Streltsov2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Streltsov}}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{{Adesso}}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{{Plenio}}},\ }\href {https://arxiv.org/abs/1609.02439} {\bibfield {journal}
{\bibinfo {journal} {arXiv:1609.02439}\ } (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vedral}\ \emph {et~al.}(1997)\citenamefont {Vedral},
\citenamefont {Plenio}, \citenamefont {Rippin},\ and\ \citenamefont
{Knight}}]{Vedral1997}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Vedral}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},
\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Rippin}}, \ and\
\bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont {Knight}},\ }\href
{\doibase 10.1103/PhysRevLett.78.2275} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {78}},\ \bibinfo
{pages} {2275} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Baumgratz}\ \emph {et~al.}(2014)\citenamefont
{Baumgratz}, \citenamefont {Cramer},\ and\ \citenamefont
{Plenio}}]{Baumgratz2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Baumgratz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cramer}}, \
and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\
}\href {\doibase 10.1103/PhysRevLett.113.140401} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\
\bibinfo {pages} {140401} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Werner}(1989)}]{WernerPhysRevA.40.4277}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~F.}\ \bibnamefont
{Werner}},\ }\href {\doibase 10.1103/PhysRevA.40.4277} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {40}},\
\bibinfo {pages} {4277} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bennett}\ \emph {et~al.}(1993)\citenamefont
{Bennett}, \citenamefont {Brassard}, \citenamefont {Cr\'epeau}, \citenamefont
{Jozsa}, \citenamefont {Peres},\ and\ \citenamefont
{Wootters}}]{BennettPhysRevLett.70.1895}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont
{Bennett}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Brassard}},
\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Cr\'epeau}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Jozsa}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Peres}}, \ and\ \bibinfo {author}
{\bibfnamefont {W.~K.}\ \bibnamefont {Wootters}},\ }\href {\doibase
10.1103/PhysRevLett.70.1895} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages}
{1895} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jin}\ \emph {et~al.}(2010)\citenamefont {Jin},
\citenamefont {Ren}, \citenamefont {Yang}, \citenamefont {Yi}, \citenamefont
{Zhou}, \citenamefont {Xu}, \citenamefont {Wang}, \citenamefont {Yang},
\citenamefont {Hu}, \citenamefont {Jiang}, \citenamefont {Yang},
\citenamefont {Yin}, \citenamefont {Chen}, \citenamefont {Peng},\ and\
\citenamefont {Pan}}]{Jin2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-M.}\ \bibnamefont
{Jin}}, \bibinfo {author} {\bibfnamefont {J.-G.}\ \bibnamefont {Ren}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yang}}, \bibinfo {author}
{\bibfnamefont {Z.-H.}\ \bibnamefont {Yi}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {X.-F.}\
\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {S.-K.}\ \bibnamefont
{Wang}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Yang}}, \bibinfo
{author} {\bibfnamefont {Y.-F.}\ \bibnamefont {Hu}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Yang}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Chen}}, \bibinfo
{author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase
10.1038/nphoton.2010.87} {\bibfield {journal} {\bibinfo {journal} {Nat.
Photon.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {376} (\bibinfo
{year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ekert}(1991)}]{EkertPhysRevLett.67.661}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~K.}\ \bibnamefont
{Ekert}},\ }\href {\doibase 10.1103/PhysRevLett.67.661} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {67}},\
\bibinfo {pages} {661} (\bibinfo {year} {1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Horodecki}}\ \emph {et~al.}(2005)\citenamefont
{{Horodecki}}, \citenamefont {{Oppenheim}},\ and\ \citenamefont
{{Winter}}}]{Horodecki2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{{Horodecki}}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{{Oppenheim}}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Winter}}},\ }\href {\doibase 10.1038/nature03909} {\bibfield {journal}
{\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {436}},\ \bibinfo
{pages} {673} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yin}\ \emph {et~al.}(2017)\citenamefont {Yin},
\citenamefont {Cao}, \citenamefont {Li}, \citenamefont {Liao}, \citenamefont
{Zhang}, \citenamefont {Ren}, \citenamefont {Cai}, \citenamefont {Liu},
\citenamefont {Li}, \citenamefont {Dai}, \citenamefont {Li}, \citenamefont
{Lu}, \citenamefont {Gong}, \citenamefont {Xu}, \citenamefont {Li},
\citenamefont {Li}, \citenamefont {Yin}, \citenamefont {Jiang}, \citenamefont
{Li}, \citenamefont {Jia}, \citenamefont {Ren}, \citenamefont {He},
\citenamefont {Zhou}, \citenamefont {Zhang}, \citenamefont {Wang},
\citenamefont {Chang}, \citenamefont {Zhu}, \citenamefont {Liu},
\citenamefont {Chen}, \citenamefont {Lu}, \citenamefont {Shu}, \citenamefont
{Peng}, \citenamefont {Wang},\ and\ \citenamefont {Pan}}]{Yin2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo
{author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {S.-K.}\ \bibnamefont {Liao}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont
{J.-G.}\ \bibnamefont {Ren}}, \bibinfo {author} {\bibfnamefont {W.-Q.}\
\bibnamefont {Cai}}, \bibinfo {author} {\bibfnamefont {W.-Y.}\ \bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Dai}}, \bibinfo {author}
{\bibfnamefont {G.-B.}\ \bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont
{Q.-M.}\ \bibnamefont {Lu}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\
\bibnamefont {Gong}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Xu}}, \bibinfo {author} {\bibfnamefont {S.-L.}\ \bibnamefont {Li}}, \bibinfo
{author} {\bibfnamefont {F.-Z.}\ \bibnamefont {Li}}, \bibinfo {author}
{\bibfnamefont {Y.-Y.}\ \bibnamefont {Yin}}, \bibinfo {author} {\bibfnamefont
{Z.-Q.}\ \bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.-J.}\
\bibnamefont {Jia}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Ren}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {He}}, \bibinfo
{author} {\bibfnamefont {Y.-L.}\ \bibnamefont {Zhou}}, \bibinfo {author}
{\bibfnamefont {X.-X.}\ \bibnamefont {Zhang}}, \bibinfo {author}
{\bibfnamefont {N.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Chang}}, \bibinfo {author} {\bibfnamefont {Z.-C.}\
\bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont {N.-L.}\ \bibnamefont
{Liu}}, \bibinfo {author} {\bibfnamefont {Y.-A.}\ \bibnamefont {Chen}},
\bibinfo {author} {\bibfnamefont {C.-Y.}\ \bibnamefont {Lu}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Shu}}, \bibinfo {author}
{\bibfnamefont {C.-Z.}\ \bibnamefont {Peng}}, \bibinfo {author}
{\bibfnamefont {J.-Y.}\ \bibnamefont {Wang}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.-W.}\ \bibnamefont {Pan}},\ }\href {\doibase
10.1126/science.aan3211} {\bibfield {journal} {\bibinfo {journal}
{Science}\ }\textbf {\bibinfo {volume} {35}},\ \bibinfo {pages} {1140}
(\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zurek}(2003)}]{Zurek2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~H.}\ \bibnamefont
{Zurek}},\ }\href {\doibase 10.1103/RevModPhys.75.715} {\bibfield {journal}
{\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {75}},\
\bibinfo {pages} {715} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Giovannetti}}\ \emph {et~al.}(2011)\citenamefont
{{Giovannetti}}, \citenamefont {{Lloyd}},\ and\ \citenamefont
{{Maccone}}}]{Giovannetti2011NaPho}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{{Giovannetti}}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{{Lloyd}}}, \ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{{Maccone}}},\ }\href {\doibase 10.1038/nphoton.2011.35} {\bibfield
{journal} {\bibinfo {journal} {Nat. Photon.}\ }\textbf {\bibinfo {volume}
{5}},\ \bibinfo {pages} {222} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Marvian}\ and\ \citenamefont
{Spekkens}(2016)}]{MarvianPhysRevA.94.052324}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Marvian}}\ and\ \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont
{Spekkens}},\ }\href {\doibase 10.1103/PhysRevA.94.052324} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{94}},\ \bibinfo {pages} {052324} (\bibinfo {year} {2016})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Hillery}(2016)}]{HilleryPhysRevA.93.012111}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Hillery}},\ }\href {\doibase 10.1103/PhysRevA.93.012111} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{93}},\ \bibinfo {pages} {012111} (\bibinfo {year} {2016})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Matera}\ \emph {et~al.}(2016)\citenamefont {Matera},
\citenamefont {Egloff}, \citenamefont {Killoran},\ and\ \citenamefont
{Plenio}}]{Matera2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Matera}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Egloff}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Killoran}}, \ and\
\bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Plenio}},\ }\href
{\doibase 10.1088/2058-9565/1/1/01LT01} {\bibfield {journal} {\bibinfo
{journal} {Quantum Sci. Technol.}\ }\textbf {\bibinfo {volume} {1}},\
\bibinfo {pages} {01LT01} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huelga}\ and\ \citenamefont
{Plenio}(2013)}]{Huelga2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Huelga}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Plenio}},\ }\href {\doibase 10.1080/00405000.2013.829687} {\bibfield
{journal} {\bibinfo {journal} {Contemporary Physics}\ }\textbf {\bibinfo
{volume} {54}},\ \bibinfo {pages} {181} (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Wootters}\ and\ \citenamefont
{Zurek}(1982)}]{Wootters1982}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~K.}\ \bibnamefont
{Wootters}}\ and\ \bibinfo {author} {\bibfnamefont {W.~H.}\ \bibnamefont
{Zurek}},\ }\href {\doibase 10.1038/299802a0} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {299}},\ \bibinfo {pages}
{802} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Streltsov}\ \emph {et~al.}(2015)\citenamefont
{Streltsov}, \citenamefont {Singh}, \citenamefont {Dhar}, \citenamefont
{Bera},\ and\ \citenamefont {Adesso}}]{Streltsov2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Streltsov}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Singh}},
\bibinfo {author} {\bibfnamefont {H.~S.}\ \bibnamefont {Dhar}}, \bibinfo
{author} {\bibfnamefont {M.~N.}\ \bibnamefont {Bera}}, \ and\ \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Adesso}},\ }\href {\doibase
10.1103/PhysRevLett.115.020403} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {115}},\ \bibinfo {pages}
{020403} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Killoran}\ \emph {et~al.}(2016)\citenamefont
{Killoran}, \citenamefont {Steinhoff},\ and\ \citenamefont
{Plenio}}]{Killoran2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Killoran}}, \bibinfo {author} {\bibfnamefont {F.~E.~S.}\ \bibnamefont
{Steinhoff}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Plenio}},\ }\href {\doibase 10.1103/PhysRevLett.116.080402} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {080402} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Theurer}}\ \emph {et~al.}(2017)\citenamefont
{{Theurer}}, \citenamefont {{Killoran}}, \citenamefont {{Egloff}},\ and\
\citenamefont {{Plenio}}}]{Theurer2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{{Theurer}}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{{Killoran}}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{{Egloff}}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{{Plenio}}},\ }\href {https://arxiv.org/abs/1703.10943} {\bibfield {journal}
{\bibinfo {journal} {arXiv:1703.10943}\ } (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Marvian}\ and\ \citenamefont
{Spekkens}(2013)}]{Marvian2013}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont
{Marvian}}\ and\ \bibinfo {author} {\bibfnamefont {R.~W.}\ \bibnamefont
{Spekkens}},\ }\href {\doibase 10.1088/1367-2630/15/3/033001} {\bibfield
{journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume}
{15}},\ \bibinfo {pages} {033001} (\bibinfo {year} {2013})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {{Chefles}}\ \emph {et~al.}(2004)\citenamefont
{{Chefles}}, \citenamefont {{Jozsa}},\ and\ \citenamefont
{{Winter}}}]{Chefles2004}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Chefles}}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Jozsa}}},
\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {{Winter}}},\
}\href {\doibase 10.1142/S0219749904000031} {\bibfield {journal} {\bibinfo
{journal} {Int. J. Quant. Inf.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo
{pages} {11} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Regula}}\ \emph {et~al.}(2017)\citenamefont
{{Regula}}, \citenamefont {{Piani}}, \citenamefont {{Cianciaruso}},
\citenamefont {{Bromley}}, \citenamefont {{Streltsov}},\ and\ \citenamefont
{{Adesso}}}]{Regula1704.04153}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{{Regula}}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{Piani}}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{Cianciaruso}}}, \bibinfo
{author} {\bibfnamefont {T.~R.}\ \bibnamefont {{Bromley}}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {{Streltsov}}}, \ and\ \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {{Adesso}}},\ }\href
{https://arxiv.org/abs/1704.04153} {\bibfield {journal} {\bibinfo {journal}
{arXiv:1704.04153}\ } (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rana}\ \emph {et~al.}(2016)\citenamefont {Rana},
\citenamefont {Parashar},\ and\ \citenamefont {Lewenstein}}]{Rana2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Rana}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Parashar}}, \
and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lewenstein}},\
}\href {\doibase 10.1103/PhysRevA.93.012110} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo
{pages} {012110} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2016)\citenamefont {Chen},
\citenamefont {Grogan}, \citenamefont {Johnston}, \citenamefont {Li},\ and\
\citenamefont {Plosker}}]{Chen2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Grogan}},
\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Johnston}}, \bibinfo
{author} {\bibfnamefont {C.-K.}\ \bibnamefont {Li}}, \ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Plosker}},\ }\href {\doibase
10.1103/PhysRevA.94.042313} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {042313}
(\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yu}\ \emph {et~al.}(2016)\citenamefont {Yu},
\citenamefont {Zhang}, \citenamefont {Xu},\ and\ \citenamefont
{Tong}}]{Yu2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {X.-D.}\ \bibnamefont
{Yu}}, \bibinfo {author} {\bibfnamefont {D.-J.}\ \bibnamefont {Zhang}},
\bibinfo {author} {\bibfnamefont {G.~F.}\ \bibnamefont {Xu}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.~M.}\ \bibnamefont {Tong}},\ }\href {\doibase
10.1103/PhysRevA.94.060302} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {060302}
(\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Eisert}}(2001)}]{Eisert2006}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{{Eisert}}},\ }\emph {\bibinfo {title} {{Entanglement in quantum information
theory}}},\ \href@noop {} {Ph.D. thesis},\ \bibinfo {school} {University of
Potsdam} (\bibinfo {year} {2001}),\ \Eprint
{http://arxiv.org/abs/arXiv:quant-ph/0610253} {arXiv:quant-ph/0610253}
\BibitemShut {NoStop} \bibitem [{\citenamefont {Wootters}(1998)}]{Wootters1998}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~K.}\ \bibnamefont
{Wootters}},\ }\href {\doibase 10.1103/PhysRevLett.80.2245} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {80}},\ \bibinfo {pages} {2245} (\bibinfo {year}
{1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shao}\ \emph {et~al.}(2015)\citenamefont {Shao},
\citenamefont {Xi}, \citenamefont {Fan},\ and\ \citenamefont
{Li}}]{Shao2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.-H.}\ \bibnamefont
{Shao}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Xi}}, \bibinfo
{author} {\bibfnamefont {H.}~\bibnamefont {Fan}}, \ and\ \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Li}},\ }\href {\doibase
10.1103/PhysRevA.91.042120} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. A}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {042120}
(\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kiesel}\ \emph {et~al.}(2005)\citenamefont {Kiesel},
\citenamefont {Schmid}, \citenamefont {Weber}, \citenamefont
{Ursin},\ and\ \citenamefont
{Weinfurter}}]{Kiesel2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}\ \bibnamefont
{Kiesel}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Schmid}},
\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Weber}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Ursin}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Weinfurter}},\ }\href {\doibase
10.1103/PhysRevLett.95.210505} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages}
{210505} (\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Okamoto}\ \emph {et~al.}(2005)\citenamefont
{Okamoto}, \citenamefont {Hofmann}, \citenamefont {Takeuchi},\ and\ \citenamefont
{Sasaki}}]{Okamoto2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Okamoto}}, \bibinfo {author} {\bibfnamefont {H.~F.}~\bibnamefont
{Hofmann}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Takeuchi}},
\ and\ \bibinfo {author}
{\bibfnamefont {K.}~\bibnamefont {Sasaki}},\ }\href {\doibase
10.1103/PhysRevLett.95.210506} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages}
{210506} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Crespi}}\ \emph {et~al.}(2011)\citenamefont
{{Crespi}}, \citenamefont {{Ramponi}}, \citenamefont {{Osellame}},
\citenamefont {{Sansoni}}, \citenamefont {{Bongioanni}}, \citenamefont
{{Sciarrino}}, \citenamefont {{Vallone}},\ and\ \citenamefont
{{Mataloni}}}]{Crespi2011}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{{Crespi}}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Ramponi}}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Osellame}}}, \bibinfo
{author} {\bibfnamefont {L.}~\bibnamefont {{Sansoni}}}, \bibinfo {author}
{\bibfnamefont {I.}~\bibnamefont {{Bongioanni}}}, \bibinfo {author}
{\bibfnamefont {F.}~\bibnamefont {{Sciarrino}}}, \bibinfo {author}
{\bibfnamefont {G.}~\bibnamefont {{Vallone}}}, \ and\ \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {{Mataloni}}},\ }\href {\doibase
10.1038/ncomms1570} {\bibfield {journal} {\bibinfo {journal} {Nature
Commun.}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {eid} {566} (\bibinfo
{year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kim}(2003)}]{Kim2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont
{Kim}},\ }\href {\doibase 10.1103/PhysRevA.68.013804} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {68}},\
\bibinfo {pages} {013804} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kwiat}\ \emph {et~al.}(1995)\citenamefont {Kwiat},
\citenamefont {Mattle}, \citenamefont {Weinfurter}, \citenamefont
{Zeilinger}, \citenamefont {Sergienko},\ and\ \citenamefont
{Shih}}]{Kwiat1995}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~G.}\ \bibnamefont
{Kwiat}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mattle}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Zeilinger}}, \bibinfo {author}
{\bibfnamefont {A.~V.}\ \bibnamefont {Sergienko}}, \ and\ \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Shih}},\ }\href {\doibase
10.1103/PhysRevLett.75.4337} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {75}},\ \bibinfo {pages}
{4337} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hong}\ \emph {et~al.}(1987)\citenamefont {Hong},
\citenamefont {Ou},\ and\ \citenamefont {Mandel}}]{Hong1987}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~K.}\ \bibnamefont
{Hong}}, \bibinfo {author} {\bibfnamefont {Z.~Y.}\ \bibnamefont {Ou}}, \ and\
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Mandel}},\ }\href
{\doibase 10.1103/PhysRevLett.59.2044} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo
{pages} {2044} (\bibinfo {year} {1987})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hofmann}(2005)}]{Hofmann2005}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~F.}\ \bibnamefont
{Hofmann}},\ }\href {\doibase 10.1103/PhysRevLett.94.160504} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {94}},\ \bibinfo {pages} {160504} (\bibinfo {year}
{2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {James}\ \emph {et~al.}(2001)\citenamefont {James},
\citenamefont {Kwiat}, \citenamefont {Munro},\ and\ \citenamefont
{White}}]{James2001}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~F.~V.}\
\bibnamefont {James}}, \bibinfo {author} {\bibfnamefont {P.~G.}\ \bibnamefont
{Kwiat}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Munro}}, \
and\ \bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont {White}},\ }\href
{\doibase 10.1103/PhysRevA.64.052312} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {64}},\ \bibinfo
{pages} {052312} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{Wu}}\ \emph {et~al.}(2017)\citenamefont {{Wu}},
\citenamefont {{Hou}}, \citenamefont {{Zhao}}, \citenamefont {{Xiang}},
\citenamefont {{Li}}, \citenamefont {{Guo}}, \citenamefont {{Ma}},
\citenamefont {{He}}, \citenamefont {{Thompson}},\ and\ \citenamefont
{{Gu}}}]{Wu1710.01738}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.-D.}\ \bibnamefont
{{Wu}}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {{Hou}}},
\bibinfo {author} {\bibfnamefont {Y.-Y.}\ \bibnamefont {{Zhao}}}, \bibinfo
{author} {\bibfnamefont {G.-Y.}\ \bibnamefont {{Xiang}}}, \bibinfo {author}
{\bibfnamefont {C.-F.}\ \bibnamefont {{Li}}}, \bibinfo {author}
{\bibfnamefont {G.-C.}\ \bibnamefont {{Guo}}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {{Ma}}}, \bibinfo {author} {\bibfnamefont
{Q.-Y.}\ \bibnamefont {{He}}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {{Thompson}}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {{Gu}}},\ }\href {https://arxiv.org/abs/1710.01738}
{\bibfield {journal} {\bibinfo {journal} {arXiv:1710.01738}\ } (\bibinfo
{year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chitambar}\ \emph {et~al.}(2016)\citenamefont
{Chitambar}, \citenamefont {Streltsov}, \citenamefont {Rana}, \citenamefont
{Bera}, \citenamefont {Adesso},\ and\ \citenamefont
{Lewenstein}}]{Chitambar2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Chitambar}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Streltsov}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rana}},
\bibinfo {author} {\bibfnamefont {M.~N.}\ \bibnamefont {Bera}}, \bibinfo
{author} {\bibfnamefont {G.}~\bibnamefont {Adesso}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Lewenstein}},\ }\href {\doibase
10.1103/PhysRevLett.116.070402} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages}
{070402} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Streltsov}\ \emph {et~al.}(2017)\citenamefont
{Streltsov}, \citenamefont {Rana}, \citenamefont {Bera},\ and\ \citenamefont
{Lewenstein}}]{Streltsov2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Streltsov}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Rana}},
\bibinfo {author} {\bibfnamefont {M.~N.}\ \bibnamefont {Bera}}, \ and\
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lewenstein}},\ }\href
{\doibase 10.1103/PhysRevX.7.011024} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages}
{011024} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Modi}\ \emph {et~al.}(2012)\citenamefont {Modi},
\citenamefont {Brodutch}, \citenamefont {Cable}, \citenamefont {Paterek},\
and\ \citenamefont {Vedral}}]{Modi2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Modi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Brodutch}},
\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Cable}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Paterek}}, \ and\ \bibinfo
{author} {\bibfnamefont {V.}~\bibnamefont {Vedral}},\ }\href {\doibase
10.1103/RevModPhys.84.1655} {\bibfield {journal} {\bibinfo {journal} {Rev.
Mod. Phys.}\ }\textbf {\bibinfo {volume} {84}},\ \bibinfo {pages} {1655}
(\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ma}\ \emph {et~al.}(2016)\citenamefont {Ma},
\citenamefont {Yadin}, \citenamefont {Girolami}, \citenamefont {Vedral},\
and\ \citenamefont {Gu}}]{Ma2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Ma}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Yadin}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Girolami}}, \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Vedral}}, \ and\ \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Gu}},\ }\href {\doibase
10.1103/PhysRevLett.116.160407} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages}
{160407} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Winter}\ and\ \citenamefont
{Yang}(2016)}]{Winter2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Winter}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Yang}},\
}\href {\doibase 10.1103/PhysRevLett.116.120404} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {116}},\
\bibinfo {pages} {120404} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yadin}\ \emph {et~al.}(2016)\citenamefont {Yadin},
\citenamefont {Ma}, \citenamefont {Girolami}, \citenamefont {Gu},\ and\
\citenamefont {Vedral}}]{Yadin2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Yadin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ma}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Girolami}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Gu}}, \ and\ \bibinfo {author}
{\bibfnamefont {V.}~\bibnamefont {Vedral}},\ }\href {\doibase
10.1103/PhysRevX.6.041028} {\bibfield {journal} {\bibinfo {journal} {Phys.
Rev. X}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {041028}
(\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wei}\ and\ \citenamefont {Goldbart}(2003)}]{Wei2003}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.-C.}\ \bibnamefont
{Wei}}\ and\ \bibinfo {author} {\bibfnamefont {P.~M.}\ \bibnamefont
{Goldbart}},\ }\href {\doibase 10.1103/PhysRevA.68.042307} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume}
{68}},\ \bibinfo {pages} {042307} (\bibinfo {year} {2003})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Streltsov}\ \emph {et~al.}(2010)\citenamefont
{Streltsov}, \citenamefont {Kampermann},\ and\ \citenamefont
{Bru\ss}}]{Streltsov2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Streltsov}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Kampermann}}, \ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont
{Bru\ss}},\ }\href {\doibase 10.1088/1367-2630/12/12/123004} {\bibfield
{journal} {\bibinfo {journal} {New J. Phys.}\ }\textbf {\bibinfo {volume}
{12}},\ \bibinfo {pages} {123004} (\bibinfo {year} {2010})}\BibitemShut
{NoStop} \end{thebibliography}
\appendix
\section{\label{sec:Superposition-free-unitaries} Proof of Theorem~\ref{thm:2}}
In the following, we will characterize all superposition-free unitaries acting on two qubits. In particular, we will show that any superposition-free unitary in this framework can be decomposed into a sequence of elementary unitaries $V$ and $W$, given in Eqs.~(\ref{eq:V}) and (\ref{eq:W}) of the main text. An important ingredient for our proof is the following lemma \cite{Chefles2004,Marvian2013,Killoran2016}. \begin{lem} \label{lem:1} For two sets of states $\left\{ \ket{\psi_{i}}\right\} _{i=1}^{N}$ and $\left\{ \ket{\phi_{i}}\right\} _{i=1}^{N}$ there exists a unitary operation $U$ such that $U\ket{\psi_{i}}=\ket{\phi_{i}}$ for all $i$
if and only if $\braket{\psi_{i}|\psi_{j}}=\braket{\phi_{i}|\phi_{j}}$ holds true for all $i$ and $j$. \end{lem} In general, a superposition-free unitary $U$ acts on a superposition-free state $\ket{c_{k}}\ket{c_{l}}$ as follows: \begin{equation} U\ket{c_{k}}\ket{c_{l}}=e^{i\phi_{kl}}\ket{c_{m}}\ket{c_{n}}, \end{equation} where the possible final states $e^{i\phi_{kl}}\ket{c_{m}}\ket{c_{n}}$ are constrained by Lemma~\ref{lem:1}. As we will see in the following, there exist 8 classes of superposition-free unitaries. For each of those classes we will find a decomposition into the elementary operations $V$ and $W$.
\textbf{Class 1.} We start with the simplest transformation, corresponding to the situation where an initial superposition-free state remains unchanged (up to a possible phase): \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{0}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{1}}\ket{c_{1}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{0}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{1}}\ket{c_{0}}. \end{align} \end{subequations} Note that by Lemma~\ref{lem:1}, all phases $e^{i\phi_{kl}}$ must be equal. It is straightforward to see that this transformation corresponds to $V^{2}$.
\textbf{Class 2.} We now consider the transformation \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{0}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{1}}\ket{c_{1}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{1}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{0}}\ket{c_{1}}. \end{align} \end{subequations} By applying Lemma~\ref{lem:1}, we see that -- as in the previous case -- all phases $e^{i\phi_{kl}}$ must be equal. This transformation corresponds to the swap unitary $V$.
\textbf{Class 3.} The next transformation that we will consider has the following form: \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{1}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{0}}\ket{c_{0}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{0}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{1}}\ket{c_{0}}. \end{align} \end{subequations} Up to an overall phase, the phases $e^{i\phi_{kl}}$ are fixed by Lemma~\ref{lem:1} as follows: \begin{subequations} \label{eq:phases-3} \begin{align} e^{i\phi_{00}} & =1,\\
e^{i\phi_{01}} & =e^{i\phi_{10}}=e^{i\frac{\phi_{11}}{2}}=\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}. \end{align} \end{subequations} This transformation corresponds to the unitary $WVW$.
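The phase constraints of Eq.~(\ref{eq:phases-3}) can be checked numerically. The sketch below (Python; the explicit nonorthogonal states $\ket{c_0}$, $\ket{c_1}$ are an illustrative assumption, not taken from the main text) builds the unique linear map sending the four superposition-free product states to their Class-3 targets, and verifies that it is unitary precisely when the phases of Eq.~(\ref{eq:phases-3}) are used:

```python
import numpy as np

# Hypothetical nonorthogonal qubit states with a complex overlap (an assumption):
theta, chi = 0.7, 0.4
c0 = np.array([1.0, 0.0], dtype=complex)
c1 = np.array([np.exp(1j * chi) * np.cos(theta), np.sin(theta)], dtype=complex)
s = np.vdot(c0, c1)            # <c0|c1>, with 0 < |s| < 1
r = s / np.conj(s)             # the phase ratio <c0|c1>/<c1|c0>

kron = np.kron
# Columns: the four superposition-free product states (they span C^4,
# since {c0, c1} is a linearly independent pair).
Psi = np.column_stack([kron(c0, c0), kron(c1, c1), kron(c0, c1), kron(c1, c0)])
# Class-3 targets with the phases fixed by Eq. (phases-3).
Phi = np.column_stack([kron(c1, c1), r**2 * kron(c0, c0),
                       r * kron(c0, c1), r * kron(c1, c0)])

U = Phi @ np.linalg.inv(Psi)   # unique linear map realizing the action
assert np.allclose(U.conj().T @ U, np.eye(4))    # unitary: Lemma 1 is satisfied

# With one relative phase set incorrectly, the Gram matrix of the outputs
# no longer matches that of the inputs, and the map fails to be unitary.
Phi_bad = np.column_stack([kron(c1, c1), r**2 * kron(c0, c0),
                           kron(c0, c1), r * kron(c1, c0)])
U_bad = Phi_bad @ np.linalg.inv(Psi)
assert not np.allclose(U_bad.conj().T @ U_bad, np.eye(4))
```

The same construction, applied to the target states of the other classes, verifies their phase constraints as well.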
\textbf{Class 4.} In the next step we consider the following transformation: \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{1}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{0}}\ket{c_{0}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{1}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{0}}\ket{c_{1}}. \end{align} \end{subequations} It can be verified by inspection that (up to an overall phase), Lemma~\ref{lem:1} fixes the phases $e^{i\phi_{kl}}$ in the same way as in Eq.~(\ref{eq:phases-3}). Note that this transformation corresponds to the transformation of Class 3, followed by a swap. Thus, it corresponds to the unitary $(VW)^{2}$.
\textbf{Class 5.} We now consider the transformation \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{1}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{0}}\ket{c_{1}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{1}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{0}}\ket{c_{0}}. \end{align} \end{subequations} Up to an overall phase, Lemma~\ref{lem:1} fixes the phases $e^{i\phi_{kl}}$ as follows: \begin{subequations} \label{eq:phases-5} \begin{align} e^{i\phi_{00}} & =e^{i\phi_{01}}=1,\\
e^{i\phi_{11}} & =e^{i\phi_{10}}=\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}. \end{align} \end{subequations} This transformation corresponds to the unitary $W$.
\textbf{Class 6.} In the next step we consider the transformation \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{1}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{0}}\ket{c_{1}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{0}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{1}}\ket{c_{1}}. \end{align} \end{subequations} By applying Lemma~\ref{lem:1}, we see that the phases $e^{i\phi_{kl}}$ are fixed as follows: \begin{subequations} \label{eq:phases-6} \begin{align} e^{i\phi_{00}} & =e^{i\phi_{10}}=1,\\
e^{i\phi_{11}} & =e^{i\phi_{01}}=\frac{\braket{c_{0}|c_{1}}}{\braket{c_{1}|c_{0}}}. \end{align} \end{subequations} As can be verified by inspection, this transformation corresponds to the unitary $WV$.
\textbf{Class 7.} The next transformation that we will consider has the following form: \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{0}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{1}}\ket{c_{0}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{0}}\ket{c_{0}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{1}}\ket{c_{1}}. \end{align} \end{subequations} Up to an overall phase, Lemma~\ref{lem:1} fixes the phases $e^{i\phi_{kl}}$ as in Eqs.~(\ref{eq:phases-6}). This transformation corresponds to the transformation of Class 6 followed by a swap, and the corresponding unitary is $VWV$.
\textbf{Class 8.} Our final transformation has the following form: \begin{subequations} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{0}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{1}}\ket{c_{0}},\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{1}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{0}}\ket{c_{0}}. \end{align} \end{subequations} Up to an overall phase, Lemma~\ref{lem:1} fixes the phases $e^{i\phi_{kl}}$ as in Eq.~(\ref{eq:phases-5}). This transformation corresponds to the transformation of Class 5 followed by a swap, and the corresponding unitary is $VW$.
As we will discuss in the following, these eight classes indeed characterize all superposition-free unitaries on two qubits. This can be seen by inspection, applying Lemma~\ref{lem:1} to all the remaining permutations of the superposition-free states. As an example, consider the following transition: \begin{subequations} \label{eq:CNOT} \begin{align} \ket{c_{0}}\ket{c_{0}} & \rightarrow e^{i\phi_{00}}\ket{c_{0}}\ket{c_{0}},\label{eq:violation-1}\\ \ket{c_{1}}\ket{c_{1}} & \rightarrow e^{i\phi_{11}}\ket{c_{1}}\ket{c_{0}},\label{eq:violation-2}\\ \ket{c_{0}}\ket{c_{1}} & \rightarrow e^{i\phi_{01}}\ket{c_{0}}\ket{c_{1}},\\ \ket{c_{1}}\ket{c_{0}} & \rightarrow e^{i\phi_{10}}\ket{c_{1}}\ket{c_{1}}. \end{align} \end{subequations} A transition of this form can be regarded as a CNOT operation in the resource theory of superposition, as (up to a phase) the state of the second qubit is flipped between $\ket{c_{0}}$ and $\ket{c_{1}}$, conditioned on the first qubit being in one of these states.
The transition in Eqs.~(\ref{eq:CNOT}) is not covered by the above classes, and it is indeed impossible via unitary operations. If such a transition were possible via unitaries, this would lead to a violation of Lemma~\ref{lem:1}. In particular, Lemma~\ref{lem:1} together with Eqs.~(\ref{eq:violation-1}) and (\ref{eq:violation-2}) implies that \begin{equation}
\braket{c_{0}|c_{1}}^{2}=e^{i(\phi_{11}-\phi_{00})}\braket{c_{0}|c_{1}}, \end{equation} which cannot be true for any choice of the phases $e^{i\phi_{00}}$
and $e^{i\phi_{11}}$ in the considered range $0<|\!\braket{c_{0}|c_{1}}\!|<1$. By similar arguments, all transitions which are not covered by the above classes can be ruled out, and the proof is complete.
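The overlap mismatch that rules out the CNOT-like transition can be made concrete with a short numerical check (Python; the specific states are an illustrative assumption):

```python
import numpy as np

# Hypothetical nonorthogonal states with 0 < |<c0|c1>| < 1 (an assumption):
theta = 0.7
c0 = np.array([1.0, 0.0], dtype=complex)
c1 = np.array([np.cos(theta), np.sin(theta)], dtype=complex)
s = np.vdot(c0, c1)

# Lemma 1 demands equal overlap magnitudes between input and output sets.
# For Eqs. (violation-1)-(violation-2):
#   input overlap  <c0 c0 | c1 c1>  has magnitude |s|^2,
#   output overlap <c0 c0 | c1 c0>  has magnitude |s|,
# and these agree only for |s| = 0 or |s| = 1.
assert not np.isclose(abs(s) ** 2, abs(s))
```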
\section{\label{sec:trace-norm}Proof of Theorem \ref{thm:trace-norm}}
In the following, we will use results from \cite{Chen2016}, where the authors provided an important link between $E_{\mathrm{t}}$ and $C_{\mathrm{t}}$. In particular, Theorems 2 and 3 in \cite{Chen2016} imply the following equality: \begin{equation} E_{\mathrm{t}}\left(\frac{1}{d}\sum_{i,j=0}^{d-1}\ket{ii}\!\bra{jj}\right)=C_{\mathrm{t}}\left(\frac{1}{d}\sum_{i,j=0}^{d-1}\ket{i}\!\bra{j}\right)=2-\frac{2}{d}.\label{eq:EtCt} \end{equation} Equipped with these tools we are now in a position to prove Theorem~\ref{thm:trace-norm} of the main text.
We will consider the bipartite state \begin{equation} \rho=\frac{p}{2}\sum_{i,j=0}^{1}\ket{ii}\!\bra{jj}+\frac{1-p}{3}\sum_{k,l=2}^{4}\ket{kk}\!\bra{ll}\label{eq:violation-state} \end{equation} with probability parameter $0\leq p\leq1$. Consider now a local measurement on the first party with Kraus operators \begin{equation} K_{1}=\sum_{i=0}^{1}\ket{i}\!\bra{i}\otimes\openone,\,\,\,\,\,\,\,K_{2}=\sum_{j=2}^{4}\ket{j}\!\bra{j}\otimes\openone. \end{equation} It is straightforward to check that the corresponding measurement probabilities take the form \begin{align} q_{1} & =\mathrm{Tr}\left[K_{1}\rho K_{1}^{\dagger}\right]=p,\\ q_{2} & =\mathrm{Tr}\left[K_{2}\rho K_{2}^{\dagger}\right]=1-p. \end{align} Moreover, the post-measurement states are given as \begin{align} \sigma_{1} & =\frac{K_{1}\rho K_{1}^{\dagger}}{q_{1}}=\frac{1}{2}\sum_{i,j=0}^{1}\ket{ii}\!\bra{jj},\\ \sigma_{2} & =\frac{K_{2}\rho K_{2}^{\dagger}}{q_{2}}=\frac{1}{3}\sum_{k,l=2}^{4}\ket{kk}\!\bra{ll}. \end{align}
\begin{figure}\label{fig:violation}
% Graphic not included in this source. The figure plots the two sides of
% Eq.~(\ref{eq:violation}) versus $p$, showing the violation for $0.4<p<1$.
\end{figure}
We will now complete the proof of the theorem by showing that for a suitable choice of the probability $p$ it holds that \begin{equation} q_{1}E_{\mathrm{t}}\left(\sigma_{1}\right)+q_{2}E_{\mathrm{t}}\left(\sigma_{2}\right)>E_{\mathrm{t}}\left(\rho\right).\label{eq:violation} \end{equation}
For this, we define the separable state $\delta=\frac{1}{2}\sum_{i=0}^{1}\ket{ii}\!\bra{ii}$, and note that it provides an upper bound on the trace norm entanglement, i.e., $E_{\mathrm{t}}(\rho)\leq||\rho-\delta||_{1}$. Moreover, it is straightforward to verify that \begin{equation}
||\rho-\delta||_{1}=\begin{cases} 2-2p & \mathrm{for\,}p<\frac{1}{2},\\ 1 & \mathrm{for\,}p\geq\frac{1}{2}. \end{cases} \end{equation} On the other hand, using Eq.~(\ref{eq:EtCt}) we obtain \begin{align} E_{\mathrm{t}}(\sigma_{1}) & =1,\,\,\,\,\,\,\,E_{\mathrm{t}}(\sigma_{2})=\frac{4}{3}. \end{align} Using these results, we immediately see that Eq.~(\ref{eq:violation}) is fulfilled for $0.4<p<1$, see also Fig.~\ref{fig:violation}.
\section{\label{sec:concurrence}Activation of $\ell_{1}$-norm coherence \protect \\ into concurrence}
We will now show that the inequality \begin{equation} E(\Lambda_{i}[\rho\otimes\sigma_{i}])\leq C(\rho)\label{eq:EntCoh-2} \end{equation} holds for $\ell_{1}$-norm coherence $C$ and concurrence $E$, where $\rho$ and $\sigma_{i}$ are single-qubit states, and $\Lambda_{i}$ is a bipartite incoherent operation. Moreover, we will also see that equality in Eq.~(\ref{eq:EntCoh-2}) is achieved if $\Lambda_{i}$ is a CNOT gate.
For proving the statement, we first recall the definition of geometric entanglement \cite{Wei2003,Streltsov2010} and geometric coherence \cite{Streltsov2015} \begin{align} E_{\mathrm{g}}(\rho) & =1-\max_{\sigma\in\mathcal{S}}F(\rho,\sigma),\\ C_{\mathrm{g}}(\rho) & =1-\max_{\sigma\in\mathcal{I}}F(\rho,\sigma) \end{align}
with fidelity $F(\rho,\sigma)=||\sqrt{\rho}\sqrt{\sigma}||_{1}^{2}$. Note that these quantities fulfill Eq.~(\ref{eq:EntCoh-2}), and equality is attained if $\Lambda_{i}$ is a CNOT gate \cite{Streltsov2015}.
For a single-qubit state $\rho$, the geometric coherence $C_{\mathrm{g}}$ is related to the $\ell_{1}$-norm coherence $C$ as follows \cite{Streltsov2015}: \begin{equation} C_{\mathrm{g}}(\rho)=\frac{1}{2}[1-\sqrt{1-C(\rho)^{2}}].\label{eq:Cg} \end{equation} It is now crucial to note that the same functional relation holds between the geometric entanglement $E_{\mathrm{g}}$ and the concurrence $E$ for any two-qubit state $\mu$ \cite{Wei2003,Streltsov2010}: \begin{equation} E_{\mathrm{g}}(\mu)=\frac{1}{2}[1-\sqrt{1-E(\mu)^{2}}].\label{eq:Eg} \end{equation} Recalling that Eq.~(\ref{eq:EntCoh-2}) is fulfilled for the geometric entanglement $E_{\mathrm{g}}$ and geometric coherence $C_{\mathrm{g}}$, these results imply that Eq.~(\ref{eq:EntCoh-2}) also holds for the $\ell_{1}$-norm coherence $C$ and the concurrence $E$. Moreover, for these quantifiers the CNOT gate must also be the optimal incoherent operation, attaining equality in Eq.~(\ref{eq:EntCoh-2}). Our results also hold if $C$ is chosen to be the trace norm coherence, as for single-qubit states the trace norm coherence coincides with the $\ell_{1}$-norm coherence \cite{Shao2015}.
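For pure single-qubit states $a\ket{0}+b\ket{1}$ the relation of Eq.~(\ref{eq:Cg}) can be verified directly, since then $C=2|ab|$ and $C_{\mathrm{g}}=1-\max_{i}|\langle i|\psi\rangle|^{2}=\min(|a|^{2},|b|^{2})$; the general mixed-state case is the statement of \cite{Streltsov2015}. A minimal numerical check (Python):

```python
import numpy as np

# For random pure qubit states, verify C_g = (1 - sqrt(1 - C^2)) / 2,
# using C = 2|ab| and C_g = min(|a|^2, |b|^2) for a pure state a|0> + b|1>.
rng = np.random.default_rng(0)
for _ in range(100):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    a, b = v / np.linalg.norm(v)          # normalized amplitudes
    C = 2 * abs(a * b)                     # l1-norm coherence
    Cg = min(abs(a) ** 2, abs(b) ** 2)     # geometric coherence (pure state)
    # max(0, .) guards against tiny negative arguments from rounding
    assert np.isclose(Cg, 0.5 * (1 - np.sqrt(max(0.0, 1 - C ** 2))))
```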
\end{document}
\begin{document}
\title{High-fidelity teleportation beyond the no-cloning limit and entanglement swapping\\ for continuous variables}
\author{Nobuyuki Takei$^{1,2}$, Hidehiro Yonezawa$^{1,2}$, Takao Aoki$^{1,2}$, and Akira Furusawa$^{1,2}$}
\affiliation{ $^{1}$Department of Applied Physics, School of Engineering, The University of Tokyo,\\ 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan\\ $^{2}$CREST, Japan Science and Technology (JST) Agency, 1-9-9 Yaesu, Chuo-ku, Tokyo 103-0028, Japan }
\date{\today}
\begin{abstract}
We experimentally demonstrate continuous-variable quantum teleportation beyond the no-cloning limit. We teleport a coherent state and achieve the fidelity of 0.70$\pm$0.02 that surpasses the no-cloning limit of 2/3. Surpassing the limit is necessary to transfer the nonclassicality of an input quantum state. By using our high-fidelity teleporter, we demonstrate entanglement swapping, namely teleportation of quantum entanglement, as an example of transfer of nonclassicality.
\end{abstract}
\pacs{03.67.Hk, 42.50.Dv, 03.67.Mn}
\maketitle
Quantum teleportation \cite{Bennett93,Bouwmeester97,Braunstein98,vanLoock00} is an essential protocol in quantum communication and quantum information processing \cite{Braunstein03,Nielsen00}. This protocol enables reliable transfer of an arbitrary, unknown quantum state from one location to another. This transfer is achieved by utilizing shared quantum entanglement and classical communication between two locations. Experiments of quantum teleportation have been successfully demonstrated with photonic qubits \cite{Bouwmeester97} and atomic qubits \cite{Riebe04,Barrett04} and also realized in optical field modes \cite{Furusawa98,Bowen03,Zhang03}. In particular, the teleportation experiments with atomic qubits and optical field modes are considered to be deterministic or unconditional.
Quantum teleportation can also be combined with other operations to construct advanced quantum circuits in quantum information processing \cite{Braunstein03,Nielsen00}. The teleported state will be manipulated in subsequent operations, some of which may rely on the nonclassicality contained in the state. Therefore it is desirable to realize a high-quality teleporter which preserves the nonclassicality throughout the process.
In a continuous-variable (CV) system \cite{Braunstein98,vanLoock00}, a required quality to accomplish the transfer of nonclassicality is as follows: the fidelity $F_c$ of a coherent state input exceeds 2/3 at unity gains of classical channels \cite{Ban04}.
Here the fidelity is a measure that quantifies the overlap between the input and the output states: $F=\langle \psi_{in} |\hat{\rho}_{out} |\psi_{in} \rangle$ \cite{Braunstein00}. Quantum teleportation succeeds when the fidelity exceeds the classical limit ($F_c=1/2$ for a coherent state input) which is the best achievable value without the use of entanglement. The value of 2/3 is referred to as the no-cloning limit, because surpassing this limit warrants that the teleported state is the best remaining copy of the input state \cite{Grosshans01}. As mentioned at the beginning, the essence of teleportation is the transfer of an arbitrary quantum state. To achieve it, the gains of classical channels must be set to unity. Otherwise the displacement of the teleported state does not match that of the input state, and the fidelity drops to zero when it is averaged over the whole phase space \cite{vanLoock00}. Note that the concept of gain is peculiar to a CV system and there is no counterpart in a qubit system.
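For orientation, in the ideal Braunstein--Kimble protocol with a two-mode squeezed resource of squeezing parameter $r$ and unity gain, the coherent-state fidelity takes the standard form $F=1/(1+e^{-2r})$ (this background formula is not derived in this Letter). It reproduces the classical limit at $r=0$ and shows that the no-cloning limit of 2/3 corresponds to exactly 3 dB of squeezing:

```python
import numpy as np

# Standard ideal-case fidelity of unity-gain CV teleportation of a coherent
# state with a two-mode squeezed resource of squeezing parameter r.
def fidelity(r):
    return 1.0 / (1.0 + np.exp(-2.0 * r))

assert np.isclose(fidelity(0.0), 0.5)            # no entanglement: classical limit
r_3dB = 0.5 * np.log(2.0)                        # 3 dB of squeezing
assert np.isclose(fidelity(r_3dB), 2.0 / 3.0)    # exactly the no-cloning limit
assert fidelity(1.0) > 2.0 / 3.0                 # stronger squeezing surpasses it
```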
A teleporter surpassing the no-cloning limit enables the transfer of the following kinds of nonclassicality in an input quantum state.
It is possible to transfer a negative part of the Wigner function of a quantum state like the Schr\"odinger-cat state $|\psi_{cat} \rangle \propto |\alpha \rangle \pm |-\alpha \rangle$ and a single photon state \cite{Ban04}. The negative part is the signature of the nonclassicality \cite{Leonhardt97}. Moreover, two resources of quantum entanglement, combined with teleporters surpassing the no-cloning limit, allow one to perform entanglement swapping \cite{Pan98,Tan99}: one resource of entanglement can be teleported by the use of the other. The teleported entanglement remains usable for bipartite quantum protocols (e.g., quantum teleportation).
Although quantum teleportation of coherent states has been successfully performed \cite{Furusawa98,Zhang03,Bowen03} and the fidelity $F_c$ beyond the classical limit of 1/2 \cite{Braunstein00} has been obtained, $F_c > 2/3$ has never been achieved. In terms of the transfer of nonclassicality, entanglement swapping has been demonstrated recently \cite{Jia04}. However, the gains of classical channels were tuned to optimal values (non-unity) for the transfer of the particular entanglement. At such non-unity gains, one would fail in teleportation of other input states such as a coherent state.
In this Letter we demonstrate teleportation of a coherent state at unity gains, and we achieve the fidelity of $0.70\pm0.02$ surpassing $F_c =2/3$ for the first time to the best of our knowledge. By using our teleporter we demonstrate entanglement swapping as an example of teleportation of nonclassicality. The gains of our teleporter are always set to unity to teleport an arbitrary state.
The quantum state to be teleported in our experiment is that of an electromagnetic field mode as in the previous works \cite{Furusawa98,Zhang03,Bowen03,Jia04}. An electromagnetic field mode is represented by an annihilation operator $\hat{a}$ whose real and imaginary parts ($\hat{a}=\hat{x}+i\hat{p}$) correspond to quadrature-phase amplitude operators with the canonical commutation relation $[\hat{x}, \hat{p}]=i/2$ (units-free, with $\hbar=1/2$).
The fidelity $F_c$ is mainly limited by the degree of correlation of shared quantum entanglement between sender Alice and receiver Bob. For CVs such as quadrature-phase amplitudes, the ideal EPR (Einstein-Podolsky-Rosen) entangled state shows entanglement of $\hat{x}_{i}-\hat{x}_{j} \to 0$ and $\hat{p}_{i}+\hat{p}_{j} \to 0$, where subscripts $i$ and $j$ denote two relevant modes of the state. The existence of entanglement between the relevant modes can be checked by the inseparability criterion \cite{Duan00,Simon00}: $\Delta_{i,j} \equiv \langle [\Delta (\hat{x}_{i}-\hat{x}_{j} )]^2\rangle + \langle [\Delta (\hat{p}_{i} +\hat{p}_{j})]^2\rangle <1$, where the variances of a vacuum state are $\langle (\Delta \hat{x}^{(0)})^2 \rangle=\langle (\Delta \hat{p}^{(0)})^2 \rangle=1/4$ and a superscript (0) denotes the vacuum state. If this inequality holds, the relevant modes are entangled. In the case in which Alice (mode A) and Bob (mode B) share entanglement of $\langle [\Delta (\hat{x}_{\mathrm{A}}-\hat{x}_{\mathrm{B}} )]^2\rangle \simeq \langle [\Delta (\hat{p}_{\mathrm{A}} +\hat{p}_{\mathrm{B}})]^2\rangle$, the inseparability criterion $\Delta_{\mathrm{A},\mathrm{B}}<1$ corresponds to the fidelity $F_c >1/2$ for a teleporter without losses \cite{Braunstein01}. Furthermore $\Delta_{\mathrm{A},\mathrm{B}}<1/2$ corresponds to the fidelity $F_c >2/3$. Therefore, in order to achieve $F_c >2/3$, we need quantum entanglement with at least $\Delta_{\mathrm{A},\mathrm{B}}<1/2$.
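For the symmetric, lossless setting of \cite{Braunstein01}, the two correspondences above condense into the single relation $F_c = 1/(1+\Delta_{\mathrm{A},\mathrm{B}})$ (our shorthand, consistent with both thresholds), which a one-line check confirms:

```python
def fidelity_from_delta(delta):
    """Coherent-state fidelity of a lossless, unity-gain teleporter with
    symmetric entanglement Delta_{A,B} = delta (F_c = 1 / (1 + delta))."""
    return 1.0 / (1.0 + delta)

print(fidelity_from_delta(1.0))   # 0.5: the classical limit
print(fidelity_from_delta(0.5))   # 0.666...: the no-cloning limit
```

Any entanglement with $\Delta_{\mathrm{A},\mathrm{B}} < 1/2$ therefore pushes the fidelity above $2/3$ in this idealized picture.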
When $F_c >2/3$ is achieved, it is possible to perform entanglement swapping with the teleporter and an entanglement resource with $\Delta_{ref,in}<1/2$, where we assume that the entangled state consists of two sub-systems: `reference' and `input'. While the reference is kept during a teleportation process, the input is teleported to an output station. After the process, the success of this protocol is verified by examining quantum entanglement between the reference and the output: $\Delta_{ref,out}<1$. Note that to accomplish this protocol, we need two pairs of entangled states with $\Delta_{i,j}<1/2$.
\begin{figure}
\caption{ The experimental set-up for teleportation of quantum entanglement. OPOs are optical parametric oscillators. All beam splitters except 99/1 BSs are 50/50 beam splitters. LOs are local oscillators for homodyne detection. SA is a spectrum analyzer. The ellipses illustrate the squeezed quadrature of each beam. Symbols and abbreviations are defined in the text. }
\end{figure} The scheme for entanglement swapping is illustrated in Fig. 1. Two pairs of entangled beams denoted by EPR1 and EPR2 are generated by combining squeezed vacuum states at half beam splitters. One of the EPR1 beams is used as a reference. The other is used as an input and teleported to the output mode. The EPR2 beams consist of mode A and B, and they are utilized as a resource of teleportation. In the case of a coherent state input, a modulated beam is put into the input mode instead of the EPR1 beam.
Each squeezed vacuum state is generated from a subthreshold optical parametric oscillator (OPO) with a potassium niobate crystal (length 10 mm). The crystal is temperature-tuned for type-I noncritical phase matching. Each OPO cavity is a bow-tie-type ring cavity which consists of two spherical mirrors (radius of curvature 50 mm) and two flat mirrors. The round trip length is 500 mm and the waist size in the crystal is 20 $\mu$m. The output of a continuous wave Ti:Sapphire laser at 860 nm is frequency doubled in an external cavity with the same configuration as the OPOs. The output beam at 430 nm is divided into four beams to pump four OPOs. The pump power is about 80 mW for each OPO.
We describe here a teleportation process in the Heisenberg picture. First Alice and Bob share entangled EPR2 beams of mode A and B. Alice performs ``Bell measurement" on her entangled mode ($\hat{x}_{{\rm A}} ,\hat{p}_{{\rm A}}$) and an unknown input mode ($\hat{x}_{in} ,\hat{p}_{in}$). She combines these modes at a half beam splitter and measures $\hat{x}_u=(\hat{x}_{in}-\hat{x}_{\rm{A}})/\sqrt{2}$ and $\hat{p}_v=(\hat{p}_{in}+ \hat{p}_{\rm{A}})/\sqrt{2}$ with two optical homodyne detectors. These measured values $x_u$ and $p_v$ for $\hat{x}_u$ and $\hat{p}_v$ are sent to Bob through classical channels with gains $g_x$ and $g_p$, respectively.
The gains are adjusted in the manner of Ref.~\cite{Zhang03}. The normalized gains are defined as $g_x =\langle \hat{x}_{out} \rangle /\langle \hat{x}_{in} \rangle$ and $g_p =\langle \hat{p}_{out} \rangle /\langle \hat{p}_{in} \rangle$. We obtain the measured gains of $g_x =1.00 \pm0.02$ and $g_p =0.99 \pm0.02$, respectively. For simplicity, these gains are fixed throughout the experiment and treated as unity.
Let us write Bob's initial mode before Alice's measurement as: $\hat{x}_{{\rm B}} = \hat{x}_{in} -( \hat{x}_{\rm A}-\hat{x}_{\rm B})-\sqrt{2} \hat{x}_{u}$ and $\hat{p}_{{\rm B}} = \hat{p}_{in} +(\hat{p}_{\rm A}+\hat{p}_{\rm B})-\sqrt{2} \hat{p}_{v}$. Note that in this step Bob's mode remains unchanged. After Alice measures $\hat{x}_{u}$ and $\hat{p}_{v}$, these operators collapse and reduce to certain values. Receiving her measurement results, Bob displaces his mode as $\hat{x}_{{\rm B} } \to \hat{x}_{out}=\hat{x}_{{\rm B} } + \sqrt{2} g_x x_{u}$,\ $\hat{p}_{{\rm B} } \to \hat{p}_{out}=\hat{p}_{{\rm B} } + \sqrt{2} g_p p_{v}$ and accomplishes the teleportation. Here we write the gains $g_x$ and $g_p$ explicitly to show their meaning, but they are treated as unity as mentioned before. In our experiment, the displacement operation is performed by using electro-optical modulators (EOMs) and highly reflecting mirrors (99/1 beam splitters). Bob modulates two beams by using amplitude and phase modulators (AM and PM in Fig. 1). We use two beams to avoid the mixing of amplitude and phase modulations. The amplitude and phase modulations correspond to the displacement of $p$ and $x$ quadratures, respectively. The modulated beams are combined with Bob's mode ($\hat{x}_{\rm B} ,\hat{p}_{\rm B}$) at 99/1 beam splitters.
The teleported mode becomes \begin{eqnarray} \hat{x}_{out} &=& \hat{x}_{in} - (\hat{x}_{\rm A}-\hat{x}_{\rm B}), \nonumber \\ \hat{p}_{out} &=& \hat{p}_{in} + (\hat{p}_{\rm A}+\hat{p}_{\rm B}). \label{eq:output} \end{eqnarray} In the ideal case, the EPR2 state is the state for which $\hat{x}_{\rm A}-\hat{x}_{\rm B} \to 0$ and $\hat{p}_{\rm A}+\hat{p}_{\rm B} \to 0$. Then the teleported state is identical to the input state. In real experiments, however, the teleported state has additional fluctuations. Without entanglement, at least two units of vacuum noise are added \cite{Braunstein98}. In other words, the noise $\langle [\Delta (\hat{x}_{\mathrm{A}}-\hat{x}_{\mathrm{B}} )]^2\rangle \ge 2\times\frac{1}{4}$ is added in $x$ quadrature (similarly in $p$ quadrature). These variances correspond to $\Delta_{\mathrm{A},\mathrm{B}}\ge 1$, resulting in the fidelity $F_c \le 1/2$. On the other hand, with entanglement, added noise is less than two units of vacuum noise. In the case with entanglement of $\Delta_{\mathrm{A},\mathrm{B}}<1/2$ which is necessary to accomplish $F_c >2/3$, the added noise is less than a unit of vacuum noise.
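The Heisenberg-picture relations above can be checked with a small Monte Carlo sketch in which the quadratures of Gaussian states are modeled as classical Gaussian variables (a standard shortcut for Gaussian states); the squeezing parameter $r = 0.43$ is our assumption, chosen so that $e^{-2r} \approx 0.42$ matches the entanglement reported later:

```python
import numpy as np

rng = np.random.default_rng(0)
N, r = 200_000, 0.43            # samples; squeezing parameter (assumed)
v = 0.25                        # vacuum variance, units with hbar = 1/2

# two squeezed vacua: mode 1 squeezed in p, mode 2 squeezed in x
x1 = rng.normal(0, np.sqrt(v * np.exp(+2 * r)), N)
p1 = rng.normal(0, np.sqrt(v * np.exp(-2 * r)), N)
x2 = rng.normal(0, np.sqrt(v * np.exp(-2 * r)), N)
p2 = rng.normal(0, np.sqrt(v * np.exp(+2 * r)), N)

# EPR beams A and B from a 50/50 beam splitter
xA, pA = (x1 + x2) / np.sqrt(2), (p1 + p2) / np.sqrt(2)
xB, pB = (x1 - x2) / np.sqrt(2), (p1 - p2) / np.sqrt(2)

# coherent-state input centered at (1.0, 0.5)
x_in = 1.0 + rng.normal(0, np.sqrt(v), N)
p_in = 0.5 + rng.normal(0, np.sqrt(v), N)

# Alice's Bell measurement, classical channels at unity gain, Bob's displacement
x_u = (x_in - xA) / np.sqrt(2)
p_v = (p_in + pA) / np.sqrt(2)
x_out = xB + np.sqrt(2) * x_u   # = x_in - (xA - xB)
p_out = pB + np.sqrt(2) * p_v   # = p_in + (pA + pB)

sx, sp = np.var(x_out), np.var(p_out)
F = 2 / np.sqrt((1 + 4 * sx) * (1 + 4 * sp))
print(F)                        # about 1/(1 + e^{-2r}) ~ 0.70
```

The simulated output mean reproduces the input displacement (unity gain), and the output variance is $1/4 + e^{-2r}/2$ per quadrature, as the equations predict.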
\begin{figure}
\caption{ The measurement results of the teleported state for a coherent state input in $x$ quadrature. Each trace is normalized to the corresponding vacuum noise level. Trace i shows the corresponding vacuum noise level $\langle (\Delta \hat{x}_{out}^{(0)} )^2\rangle =1/4$. Trace ii shows the teleported state for a vacuum input. Note that the variance of the teleported state for a vacuum input corresponds to that for a coherent state input. Trace iii shows the teleported state for a coherent state input with the phase scanned. At the top (bottom) of the trace, the relative phase between the input and the LO is $0$ or $\pi$ ($\pi/2$ or $3\pi/2$). The measurement frequency is centered at 1 MHz, and the resolution and video bandwidths are 30 kHz and 300 Hz, respectively. Traces i and ii are averaged 20 times. }
\end{figure} We first perform teleportation of a coherent state to quantify the quality of our teleporter with the fidelity $F_c$. In our experiment, we use frequency sidebands at $\pm$1 MHz of an optical carrier beam as a quantum state. Thus a coherent state can be generated by applying phase modulation with an EOM to the carrier beam. This modulated beam is put into the input mode instead of the EPR1 beam. Figure 2 shows measurement results of the teleported mode. The measured amplitude of the coherent state is $20.7\pm 0.2$ dB compared to the corresponding vacuum noise level. The measured values of the variances are $\langle (\Delta \hat{x}_{out} )^2 \rangle =2.82 \pm 0.09$ dB and $\langle (\Delta \hat{p}_{out} )^2 \rangle =2.64\pm 0.08$ dB (not shown). The fidelity for a coherent state input can be written as $F_c =2/\sqrt{(1+4\sigma_x)(1+4\sigma_p)}$, where $\sigma_x =\left \langle (\Delta \hat{x}_{out} )^2\right \rangle$ and $\sigma_p =\left \langle (\Delta \hat{p}_{out} )^2\right \rangle$ \cite{Furusawa98,Braunstein01}. The fidelity obtained from the measured variances is $F_c =0.70 \pm 0.02$. This result clearly shows the success of teleportation of a coherent state beyond the no-cloning limit. Moreover we examine the correlation of the EPR2 beams and obtain the entanglement of $\Delta_{{\rm A},{\rm B}}=0.42 \pm 0.01$, from which the expected fidelity of $F_c =0.70\pm0.01$ is calculated. The experimental result is in good agreement with the calculation. Such good agreement indicates that our phase-locking system is very stable and that the fidelity is mainly limited by the degree of entanglement of the resource. As discussed in Ref.~\cite{Zhang03}, residual phase fluctuation in a locking system affects the achievable fidelity, and has probably prevented previous works from surpassing the no-cloning limit. Our highly stabilized phase-locking system (both mechanical and electronic) allows us to achieve the fidelity of 0.70.
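The arithmetic from the measured decibel values to the quoted fidelity can be reproduced directly (function and variable names are ours):

```python
import math

def db_to_variance(db, vacuum=0.25):
    """Convert a noise level quoted in dB relative to vacuum into a variance
    (vacuum variance 1/4 in units with hbar = 1/2)."""
    return vacuum * 10 ** (db / 10)

sigma_x = db_to_variance(2.82)   # measured <(Delta x_out)^2>
sigma_p = db_to_variance(2.64)   # measured <(Delta p_out)^2>
F = 2 / math.sqrt((1 + 4 * sigma_x) * (1 + 4 * sigma_p))
print(round(F, 2))               # 0.7
```

The result matches the quoted $F_c = 0.70 \pm 0.02$ and lies above the no-cloning limit of $2/3$.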
\begin{figure}
\caption{ Correlation measurement for EPR1 beams. (a) The measurement result of the reference mode alone. Trace i shows the corresponding vacuum noise level $\langle (\Delta \hat{x}_{ref}^{(0)} )^2\rangle =\langle (\Delta \hat{p}_{ref}^{(0)} )^2\rangle =1/4$. Traces ii and iii are the measurement results of $\langle (\Delta \hat{x}_{ref} )^2\rangle$ and $\langle (\Delta \hat{p}_{ref} )^2\rangle$, respectively. (b) The measurement result of the correlation between the input mode and the reference mode. Trace i shows the corresponding vacuum noise level $\langle [\Delta (\hat{x}_{ref}^{(0)}-\hat{x}_{in}^{(0)} )]^2\rangle =\langle [\Delta (\hat{p}_{ref}^{(0)} +\hat{p}_{in}^{(0)})]^2\rangle =1/2$. Traces ii and iii are the measurement results of $\langle [\Delta (\hat{x}_{ref}-\hat{x}_{in} )]^2\rangle$ and $\langle [\Delta (\hat{p}_{ref} +\hat{p}_{in})]^2\rangle$, respectively. The measurement condition is the same as that of Fig. 2. }
\end{figure} Next we demonstrate entanglement swapping. Before performing the experiment, we measure the noise power of each mode for EPR1 beams and the initial correlation between the modes with homodyne detection. For the reference mode, we obtain the noise levels of $5.23 \pm 0.14$dB and $4.44 \pm 0.14$dB for $x$ and $p$ quadratures, respectively (Fig. 3a). Similarly, the noise levels of $5.19 \pm 0.13$dB and $4.37 \pm 0.14$dB are obtained for $x$ and $p$ quadratures for the input mode (not shown). By making electrical subtraction or summation of the homodyne detection outputs, we observe the noise levels of $-3.19 \pm 0.13$dB for $x$ quadrature and $-4.19 \pm 0.14$dB for $p$ quadrature (Fig. 3b). From these values, we obtain the measured variances of $\Delta_{ref,in}=0.43 \pm 0.01<1$. This result shows the existence of the quantum entanglement between the input and the reference, and also indicates that we can transfer this entanglement with our teleporter.
We then proceed to the experiment of entanglement swapping and measure the correlation between the output and the reference in a similar way. The state in the reference mode does not change in the process. For the output mode, the noise levels of $6.06 \pm 0.12$dB and $5.47 \pm 0.14$dB are obtained for $x$ and $p$ quadratures, respectively, as shown in Fig. 4a. Because of the imperfect teleportation, some noise is added to the teleported state, resulting in larger variances than those of the reference. Figure 4b shows the results of the correlation measurement. We observe the noise levels of $-0.25 \pm 0.13$dB and $-0.60 \pm 0.13$dB for $x$ and $p$ quadratures, respectively, yielding $\Delta_{ref,out}=0.91 \pm 0.02<1$. This result clearly shows the existence of quantum entanglement between the output and the reference. Therefore we can declare the success of entanglement swapping with unity gains.
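Both quoted inseparability values follow from the measured decibel levels; here the vacuum level of each sum/difference quadrature is $1/2$ (two units of vacuum noise). A small sketch (function name ours):

```python
def delta_from_db(db_x, db_p, vacuum=0.5):
    """Inseparability Delta from dB levels (relative to the two-unit vacuum
    level 1/2) of the x-difference and p-sum quadratures."""
    return vacuum * 10 ** (db_x / 10) + vacuum * 10 ** (db_p / 10)

print(round(delta_from_db(-3.19, -4.19), 2))  # 0.43: input-reference pair
print(round(delta_from_db(-0.25, -0.60), 2))  # 0.91: output-reference pair
```

The first value satisfies $\Delta_{ref,in} < 1/2$, as required for swapping; the second satisfies $\Delta_{ref,out} < 1$, certifying entanglement after teleportation.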
In summary, we have demonstrated teleportation of a coherent state with the fidelity of 0.70$\pm$0.02. By using this high-fidelity teleporter, we have demonstrated entanglement swapping, or teleportation of quantum entanglement as a nonclassical input. Moreover, this high-quality teleporter will allow us to apply the teleported state to subsequent manipulations and to the construction of advanced quantum circuits. For example, a bipartite quantum protocol like quantum teleportation can be performed by using the swapped entanglement. In addition, our teleporter has the capability of transferring a negative part of the Wigner function of a quantum state like a single photon state.
This work was partly supported by the MEXT and the MPHPT of Japan, and Research Foundation for Opto-Science and Technology.
\end{document}
\begin{document}
\title{A short note on the operator norm upper bound for sub-Gaussian tailed random matrices}
\author{Eric Benhamou \thanks{A.I. Square Connect} \thanks{Lamsade, Paris Dauphine} \thanks{Email:
\texttt{eric.benhamou@aisquareconnect.com, eric.benhamou@dauphine.eu}} \and Jamal Atif \footnotemark[2] \thanks{Email: \texttt{jamal.atif@dauphine.fr}} \and Rida Laraki \footnotemark[2] \thanks{Email: \texttt{rida.laraki@dauphine.fr}}} \maketitle
\begin{abstract} This paper investigates an upper bound on the operator norm of sub-Gaussian tailed random matrices. A lot of attention has been devoted to uniformly bounded sub-Gaussian tailed random matrices with independent coefficients. However, little has been done for sub-Gaussian tailed random matrices whose coefficient variances are not equal, or whose coefficients are not independent. This is precisely the subject of this paper. After proving that random matrices with uniformly sub-Gaussian tailed independent coefficients satisfy the Tracy-Widom bound, that is, their operator norm remains bounded by $O(\sqrt n )$ with overwhelming probability, we prove that a less stringent condition is that the matrix rows are independent and uniformly sub-Gaussian. This does not require that all matrix coefficients be independent, but only the rows, which is a weaker condition. \end{abstract}
\section{Introduction} Random matrices and their spectra have been under intensive study in many fields. This is the case in Statistics since the work of \cite{Wishart_1928} on sample covariance matrices, in Numerical Analysis since their introduction by \cite{VonNeumann_1947} in the 1940s, in Physics as a consequence of the work of \cite{Wigner_1955, Wigner_1958} since the 1950s, and in Banach Space Theory and Differential Geometric Analysis with the work of \cite{Grothendieck_1956} in a similar period. More recently, in machine learning, the Netflix prize (see \cite{wiki:Netflix_prize}) has attracted a lot of attention, with a large part of the community investigating recommender systems (see \cite{wiki:recommender_system}) and collaborative filtering methods, which ultimately also rely on random matrices and their eigenvalue and singular value spectra.
In particular, an interesting and important problem in matrix completion has been to investigate where the operator norm concentrates, in order to make reasonable assumptions about missing entries. Other important contributions include the Tracy-Widom law, which says that for a Wigner matrix the operator norm is concentrated in the range $\left[ 2 \sqrt{n} - O(n^{-1/6}), \right.$ $ \left. 2 \sqrt{n} + O(n^{-1/6}) \right]$ (see \cite{Tracy_1994}), and the Marchenko-Pastur distribution, which describes the asymptotic behavior of the singular values of large rectangular random matrices (see \cite{Marcenko_1967}).
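The concentration of the operator norm near $2\sqrt n$ is easy to observe numerically; the following sketch (ours, using Gaussian entries of unit variance) prints the ratio $\| M \|_2 / \sqrt n$ for growing $n$:

```python
import numpy as np

rng = np.random.default_rng(1)
ratios = []
for n in (200, 400, 800):
    M = rng.normal(size=(n, n))       # iid N(0, 1) entries
    s_max = np.linalg.norm(M, 2)      # operator norm = largest singular value
    ratios.append(s_max / np.sqrt(n))
    print(n, ratios[-1])              # hovers slightly below 2
```

The fluctuations around $2\sqrt n$ shrink as $n$ grows, at the $n^{-1/6}$ scale predicted by the Tracy-Widom law.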
However, most of these results have been derived under the assumption of independent and identically distributed coefficients. It is natural to ask similar questions about general random matrices whose entry distributions may differ. To make the question concrete, we are interested in finding an upper bound on the operator norm of a random matrix whose coefficients are sub-Gaussian, and in the implied consequences for the matrix coefficients. The paper is organized as follows. In section \ref{intro}, we recall various definitions. In section \ref{main}, we first prove that square random matrices with independent, uniformly sub-Gaussian tailed coefficients satisfy the Tracy-Widom bound, that is, their operator norm for the $L_a, L_b$ norms remains bounded by $O(\sqrt n )$. We then see that a less stringent sufficient condition is that the rows are independent and their $L_{a}$ norms are uniformly sub-Gaussian. This implies in particular that a matrix whose coefficients are not necessarily independent can still admit an operator norm bounded by $O(\sqrt n )$ with overwhelming probability.
The condition of independence of rows has already been mentioned in \cite{vershynin_2018}, with a similar setting and proof, and appeared as early as 2017. Additionally, \cite{Benaych_2018} provided a similar proof in the Hermitian case and kindly pointed the authors to the last two references, of which they were not aware at the time of writing. The present article has at least the merit of being self-contained and of focusing only on sub-Gaussian random matrices, which keeps the presentation short and self-consistent. For more details, we refer the reader to the last two references, which cover a much wider scope and are respectively 300 and 80 pages long.
\section{Some definitions}\label{intro}
Suppose $\| \cdot \|_a$ and $\| \cdot \|_b$ are norms on $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively. We can of course generalize easily the concept to norms operating on $\mathbb{C}^m$ and $\mathbb{C}^n$ if we look at matrices with complex number coefficients.
\begin{definition}\label{operatornorm_def}
We define the operator norm of $ \mathbf{X} \in \mathbb{R}^{m \times n}$, induced by the norms $\| \cdot \|_a$ and $\| \cdot \|_b$, as \begin{equation}\label{operatornorm}
\| \mathbf{X} \|_{a,b} = \operatorname{sup} \{ \| \mathbf{X}u \|_{a} \,\, | \,\, \| u \|_{b} \leq 1\} . \end{equation}
We will denote this norm as $\| \cdot \|_{op}$ and we will drop the $a,b$ indices to make things simpler whenever there is no risk of confusion and have the following definition \begin{equation}\label{operatornorm2}
\| \mathbf{X} \|_{op} = \operatorname{sup} \{ \| \mathbf{X}u \| \,\, | \,\, \| u \| \leq 1\} . \end{equation}
\end{definition}
When $\| \cdot \|_a$ and $\| \cdot \|_b$ are both Euclidean norms, the operator norm of $\mathbf{X}$ is its
\textit{maximum singular value}, and is denoted $\| \cdot \|_2$: \begin{equation}
\| \mathbf{X} \|_{2} = \sigma_{\text{max}}( \mathbf{X} ) = ( \lambda_{\text{max}} (\mathbf{X}^T \mathbf{X} ))^{1/2}. \end{equation}
where $\sigma_{\text{max}}( \mathbf{X} )$ is the maximum singular value of the matrix $\mathbf{X}$ and $\lambda_{\text{max}} (\mathbf{X}^T \mathbf{X} )$ is the maximum eigenvalue of the matrix $\mathbf{X}^T \mathbf{X}$, also defined as $\sup \{ u^T \mathbf{X}^T \mathbf{X} u \,\, | \,\, \| u \|_2 = 1\}$.
In the rest of the paper, to simplify notation, we will assume that $a = b = 2$; all results remain valid for any $a, b \geq 1$.
\begin{remark} For the $n \times n$ matrix consisting entirely of ones, the operator norm is exactly $n$.
This can be seen easily by taking the vector $u = ( 1/ \sqrt n, \ldots, 1/ \sqrt n)^T$, which gives $\| \mathbf{X}u \|_{2} = n$ and proves that the operator norm is at least $n$, while the Cauchy-Schwarz inequality proves that it cannot exceed $n$. This vector is the right choice for the $L_2$ norm; using the fact that all norms are equivalent in finite dimension (and that the matrix space has finite dimension $n^2$), the same order of magnitude holds, up to constants, for any norm.
Furthermore, the same application of the Cauchy-Schwarz inequality proves that any matrix whose coefficients are uniformly bounded by a constant $K$ has an operator norm bounded by $Kn$. In other words, in Landau notation, any matrix whose entries are all uniformly $O(1)$ has an operator norm of $O(n)$. However, this upper bound does not take into account possible cancellations in the matrix $M$. Indeed, intuitively, using the concentration inequalities of Hoeffding and Markov, we should expect with overwhelming probability (a notion that we will define shortly) that the operator norm is of order $\sqrt{n}$ rather than $n$ whenever the matrix coefficients are symmetrically distributed and have tails that decrease fast enough, a concept that we will also make precise shortly with the notion of sub-Gaussian tails.
For Euclidean norms, the operator norm boils down to the maximum singular value and, for symmetric matrices, to the maximum absolute eigenvalue, so bounding it gives fruitful information about these two quantities. \end{remark}
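Both halves of this remark can be checked numerically; the sketch below (ours) verifies that the all-ones matrix has operator norm exactly $n$, while a matrix of iid random signs, thanks to cancellations, stays at the $\sqrt n$ scale:

```python
import numpy as np

n = 300
ones = np.ones((n, n))
print(np.linalg.norm(ones, 2))                 # exactly n = 300

rng = np.random.default_rng(0)
signs = rng.choice([-1.0, 1.0], size=(n, n))   # uniformly bounded entries
print(np.linalg.norm(signs, 2) / np.sqrt(n))   # ~2: O(sqrt(n)), not O(n)
```

The random-sign matrix saturates the uniform bound $K = 1$ entrywise, yet its operator norm is smaller than $n$ by a factor of order $\sqrt n$.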
\begin{definition} A random variable $\xi$ is called sub-Gaussian if there are positive constants $B, b > 0$ such that for every $t > 0$, \begin{equation}\label{sub-Gaussian}
\mathbb{P}(| \xi |>t) \leq B \exp(- b t^2), \end{equation} where $\mathbb{P}$ is the probability measure of the underlying probability space $( \Omega, \mathcal{B}, \mathbb{P})$, with $\Omega$ the ambient sample space and $\mathcal{B}$ a $\sigma$-algebra of subsets of $\Omega$. \end{definition}
\begin{remark} Sub-Gaussian can be defined in multiple ways. We have used the traditional definition that states that the tails of the variable $\xi$ are dominated by, meaning they decay at least as fast as, the tails of a Gaussian. A more probabilistic way of defining the sub-Gaussian is to state that a random variable $\xi$ is called sub-Gaussian with variance proxy $\sigma$ if \begin{equation}\label{sub-Gaussian2}
\mathbb{P}(| \xi - \mathbb{E}[X] |>t) \leq 2 \exp(- \frac{ t^2}{2 \sigma^2}). \end{equation}
The Chernoff bound allows one to translate a bound on the moment generating function into a tail bound and vice versa, so we should expect equivalent definitions in terms of the moment generating function, the Laplace transform and other criteria. Indeed, there are many equivalent definitions (which can be found for instance in \cite{Buldygin_1980} or \cite{Ledoux_1991}): \begin{itemize} \item A random variable $\xi$ is sub-Gaussian.
\item A random variable $\xi$ satisfies the $\psi_2$ -condition, that is, there exist two non negative real constants $B, b>0$ such that $ \mathbb{E}[e^{b \xi^2}] \leq B$.
\item A random variable $\xi$ satisfies the Laplace transform condition, that is there exist two non negative real constants $B, b>0$ such that $\forall \lambda \in \mathbb{R}$, $\ \ \mathbb{E}[e^{\lambda (\xi-\operatorname{E}[\xi])} ] \leq Be^{\lambda^2 b / 2}$. This condition is also referred to as the moment generating-condition, that is there exist two non negative real constants $B, b>0$ such that $ \mathbb{E}[ e^{t \xi} ] \leq B e^{t^2 b^2 /2 }$. The parameter $b$ is directly related to the variance proxy $\sigma$.
\item A random variable $\xi$ satisfies the moment condition, that is, there exists a positive real constant $K>0$ such that $\forall p \geq 1$, $\left( \mathbb{E}[ |\xi|^p ]\right)^{1/p} \leq K \sqrt{p}$. It is easy to see, for instance for Gaussian variables, that $K$ can be expressed in terms of the variance proxy $\sigma$: $K= \sigma e^{1 / e}$ for $p \geq 2$ and $K = \sigma \sqrt{ 2 \pi}$.
\item A random variable $\xi$ satisfies the Union bound condition, that is there exists a non negative real constant $c>0$ such that $ \forall n \ge c \ \mathbb{E}[\max\{|\xi_1 - \operatorname{E}[\xi]|,\ldots,|\xi_n - \operatorname{E}[\xi]|\}] \leq c \sqrt{\log n}$ where $\xi_1, \ldots, \xi_n$ are independent and identically distributed random variables, copies of $\xi$.
\item The tail of $\xi$ is dominated by that of a Gaussian with variance proxy $\sigma$: there exist $b > 0$ and $Z \sim \mathcal{N}(0, \sigma^2)$ such that
$\mathbb{P}(| \xi |>t) \leq b\, \mathbb{P}(| Z | \geq t )$. The latter definition explains the term `sub-Gaussian' quite well. \end{itemize} Obviously, the positive constants $B, b>0$ appearing in these conditions are not necessarily the same. \end{remark}
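The moment condition can be verified in closed form for a Gaussian, using the exact absolute moments $\mathbb{E}|Z|^p = \sigma^p\, 2^{p/2}\, \Gamma((p+1)/2)/\sqrt{\pi}$ for $Z \sim \mathcal{N}(0,\sigma^2)$; the ratios $(\mathbb{E}|Z|^p)^{1/p}/\sqrt p$ stay bounded by a constant, as the condition requires:

```python
import math

sigma = 1.0
ratios = []
for p in range(1, 13):
    # exact absolute moment E|Z|^p of Z ~ N(0, sigma^2)
    m = sigma**p * 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)
    ratios.append(m ** (1 / p) / math.sqrt(p))
print(max(ratios))   # a valid constant K: the ratios are uniformly bounded
```

For the standard Gaussian the largest ratio occurs at $p = 1$ and the ratios decrease toward $1/\sqrt e$ as $p$ grows.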
\begin{definition} Referring to \cite{Tao_2013}, we say that an event $E$ holds with overwhelming probability if, for every fixed real constant $k > 0$, we have \begin{equation}\label{overwhelming} \mathbb{P}(E) \geq 1 - C_k / n^k \end{equation} \noindent for some constant $C_k$ independent of $n$, or equivalently $\mathbb{P}(E^{c}) \leq C_k e^{ -k \ln n }$, where $E^{c}$ denotes the complement of $E$. \end{definition}
\begin{remark} Of course, the concept of overwhelming probability can be extended to a family of events $E_{\alpha}$ depending on some parameter $\alpha$ with the condition that each event in the family holds with overwhelming probability uniformly in $\alpha$ if the constant $C_k$ in the definition of overwhelming probability is independent of $\alpha$. \end{remark}
\begin{remark} Using Boole's inequality (also referred to as the union bound in the English mathematical literature) that states that the probability measure is $\sigma$-sub additive, we trivially see that if a family of events $E_{\alpha}$ of polynomial cardinality holds with overwhelming probability, then the intersection over $\alpha$ of this family $\bigcap \limits_{\alpha} E_{\alpha}$ still holds with overwhelming probability. \end{remark}
\begin{remark} The previous Boole's inequality remark emphasizes that although the concept of overwhelming probability is not the same as the one of almost surely, it is still something with very high probability. In the rest of the paper, we will even get tighter bound and prove \begin{equation}\label{overwhelming2} \mathbb{P}(E^{c}) \leq C_k e^{ -k n } \end{equation} which implies that the event $E$ holds with overwhelming probability. \end{remark}
\section{Upper bound for the operator norm of sub-Gaussian tailed matrices}\label{main} Equipped with these definitions, we shall prove the following statement.
\begin{proposition}\label{prop1} Let $M$ be an $n \times n$ matrix with independent, zero-mean coefficients $\xi_{i,j}$ that are uniformly sub-Gaussian. Then there exist positive constants $C,c > 0$ such that \begin{equation}
\mathbb{P} ( \| M \|_{op} > A \sqrt n )\leq C \exp( -c A n) \label{eq1} \end{equation}
\noindent for all $ A \geq C$. In particular, we have $\| M \|_{op} = O(\sqrt n )$ with overwhelming probability. \end{proposition}
\begin{proof} See \ref{proof1}. \end{proof}
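As a quick numerical illustration of this proposition (our sketch), take bounded uniform entries, which are zero-mean, unit-variance, and hence independent and uniformly sub-Gaussian; the operator norm stays at the $\sqrt n$ scale:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
# uniform on [-sqrt(3), sqrt(3)]: zero mean, unit variance, bounded,
# hence independent and uniformly sub-Gaussian coefficients
M = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(n, n))
print(np.linalg.norm(M, 2) / np.sqrt(n))   # O(1) ratio: norm is O(sqrt(n))
```

The printed ratio is close to 2, as for Gaussian entries of the same variance.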
\begin{remark} This result is quite natural, as the matrix coefficients $\xi_{i,j}$ are uniformly sub-Gaussian. Indeed, the proof uses the fact that the $L_{\infty}$ norm of each matrix row is sub-Gaussian, hence each row is sub-Gaussian for the $L_a$ norm. But can we go further and find a less stringent sufficient condition for inequality \ref{eq1} to hold? The answer is yes and is provided by the condition stated in Proposition \ref{prop2}. \end{remark}
\begin{proposition}\label{prop2} Let $M$ be an $n \times n$ matrix whose rows are independent and uniformly sub-Gaussian for the $L_a$ norm. Then there exist positive constants $C,c > 0$ such that \begin{equation}
\mathbb{P} ( \| M \|_{a,b} > A \sqrt n )\leq C \exp( -c A n) \label{eq2} \end{equation}
\noindent for all $ A \geq C$. In particular, we have $\| M \|_{a,b} = O(\sqrt n )$ with overwhelming probability. \end{proposition}
\begin{proof} See \ref{proof2}. \end{proof}
\begin{remark} If a random matrix has independent, uniformly sub-Gaussian rows, then each of its coefficients is also uniformly sub-Gaussian. This is trivially seen as follows: for a given coefficient $\xi_{i,j}$, the corresponding row $R_i$ is sub-Gaussian, hence there are positive constants $B, b$ that do not depend on $i$ such that for every $t > 0$, \begin{equation}
\mathbb{P} (\| R_i \|_a > t) \leq B \exp( -b t^{2} ). \end{equation}
Hence, since $\| R_i \|_a \geq | \xi_{i,j} |$, we have as well \begin{equation}
\mathbb{P} ( | \xi_{i,j} | > t )\leq B \exp( -b t^{2} ), \end{equation} which proves the uniform sub-Gaussian character of every matrix coefficient. The independence of the matrix rows, however, does not imply that the matrix coefficients are independent, which makes the condition of Proposition \ref{prop2} less stringent than that of Proposition \ref{prop1}. \end{remark}
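To illustrate the weaker hypothesis of Proposition \ref{prop2}, the following sketch (our construction) builds a matrix whose rows are independent and uniformly sub-Gaussian while the entries inside each row are correlated; the operator norm still scales as $\sqrt n$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
Z = rng.normal(size=(n, n))
# each row is a cyclic moving average of iid Gaussians: entries within a
# row are correlated, but distinct rows stay independent and sub-Gaussian
M = (Z + 0.8 * np.roll(Z, 1, axis=1)) / np.sqrt(1 + 0.8**2)
print(np.linalg.norm(M, 2) / np.sqrt(n))   # bounded ratio: norm is O(sqrt(n))
```

Entry-wise independence fails here, yet the bound of Proposition \ref{prop2} still applies because each row is a linear image of a Gaussian vector with bounded distortion.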
\section{Conclusion}
This paper investigated an upper bound on the operator norm of random matrices with sub-Gaussian tails. We proved that random matrices with independent, uniformly sub-Gaussian rows satisfy the Tracy--Widom type bound, that is, the matrix operator norm remains bounded by $O(\sqrt n )$ with overwhelming probability. An interesting extension would be to see how our result generalizes to the $(\ell_p,\ell_r)$-Grothendieck problem, which seeks to maximize the bilinear form $y^T A x$ for an input matrix $A \in {\mathbb R}^{m \times n}$ over vectors $x,y$ with $\|x\|_p=\|y\|_r=1$. This problem is equivalent to computing the $p \to r^\ast$ operator norm of $A$, where $\ell_{r^*}$ is the dual norm to $\ell_r$.
\appendix \section{Proofs} \subsection{Proof of proposition \ref{prop1}}\label{proof1}
We prove the proposition via the three simple lemmas below, which take advantage of the uniform sub-Gaussian tail bounds and of the Lipschitz character of the map $x \rightarrow \| M x \|$, combined with the compactness of the unit sphere.
Let us define the unit sphere $\mathcal{S} := \{ u \in \mathbb{R}^n \mid \| u \| = 1 \}$ of the vector space $\mathbb{R}^n$. The result is similar for matrices with complex coefficients, in which case the unit sphere becomes $\mathcal{S} := \{ u \in \mathbb{C}^n \mid \| u \| = 1 \}$ of $\mathbb{C}^n$. We first prove the following lemma.
\begin{lemma}\label{lemma1} If the coefficients $\xi_{i,j}$ of $M$ are independent and have uniformly sub-Gaussian tails, then there exist absolute constants $C, c > 0$ such that for any $u \in \mathcal{S}$, we have \begin{equation}\label{lemma1_eq}
\mathbb{P} ( \| M u \| > A \sqrt n )\leq C \exp( -c A n) \end{equation} for all $ A \geq C$. \end{lemma}
\begin{proof} Let $R_1, \ldots, R_n$ be the $n$ rows of the matrix $M$; then the column vector $M u$ has coefficients $R_i u $ for $i = 1, \ldots, n$.
The matrix coefficients $\xi_{i,j}$ are all uniformly sub-Gaussian, hence there are positive constants $B, b> 0$ independent of $i,j$ such that for every $t > 0$, \begin{equation}
\mathbb{P}( | \xi_{i,j} | \geq t ) \leq B \exp(-b t^2). \end{equation}
This implies in particular that $\| R_i \|$ also has a sub-Gaussian tail, with different constants. Indeed, since $\| R_i \| \leq \sqrt n \max_{j}| \xi_{i,j} |$, a union bound over the $n$ coordinates gives \begin{equation}
\mathbb{P}( \| R_i \| \geq t ) \leq \mathbb{P}( \sqrt n \max_{j}| \xi_{i,j} | \geq t ) \leq n B \exp(-\frac{b}{n} t^2). \end{equation} Hence, taking $B' = nB$ and $b'= \frac{b}{n}$, we have \begin{equation}
\mathbb{P}( \| R_i \| \geq t ) \leq B' \exp(-b' t^2). \end{equation}
The Cauchy--Schwarz inequality gives that for $u \in \mathcal{S}$, $| R_i u | \leq \| R_i \| \| u \| = \| R_i \| $ since $\| u \| =1$; hence, \begin{equation}
\mathbb{P}( | R_i u | \geq t ) \leq \mathbb{P}( \| R_i \| \geq t ) \leq B' \exp(-b' t^2), \end{equation} which states that $R_i u$ is uniformly sub-Gaussian or, equivalently, that it satisfies the $\psi_2$ condition: there exist two positive constants $b, B >0$ (different from the previous constants) such that \begin{equation}
\mathbb{E}[ e^{b | R_i u | ^2} ] \leq B. \end{equation} Because the matrix coefficients are independent, the variables $R_i u$, $i = 1, \ldots, n$, are independent, and the vector $M u$ also satisfies the $\psi_2$ condition, since \begin{equation}
\mathbb{E}[ e^{b \| M u \| ^2} ] = \mathbb{E}[ \prod_{i=1}^n e^{b | R_i u | ^2} ] = \prod_{i=1}^n \mathbb{E}[ e^{b | R_i u | ^2} ] \leq B^n. \end{equation} Let us take $C= B^n$, $A \geq C$ and $n \geq 1$. Markov's inequality gives \begin{eqnarray}
\mathbb{P}( \| M u \| \geq A \sqrt{n} ) = \mathbb{P}( e^{ b \| M u \|^2 } \geq e^{ b \, A^2 n } ) \leq \frac{ \mathbb{E}[ e^{b \| M u \| ^2} ] } { e^{ b \, A^2 n } } \leq C e^{ -b \, A^2 n } \leq C e^{ -b \, C A n } \end{eqnarray} since $A^2 \geq CA$ for $A \geq C$. Taking $c = b \,C$, we get the required inequality: $$
\mathbb{P} ( \| M u \| > A \sqrt n )\leq C \exp( -c A n) $$ which concludes the proof. \end{proof}
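A small Monte Carlo illustration of the lemma (an illustrative sketch, not part of the proof): for a fixed unit vector $u$ and standard Gaussian rows $R_i$, each $R_i u$ is exactly $N(0,1)$, so $\| M u \|^2$ is a $\chi^2_n$ variable and the event $\| M u \| > A\sqrt n$ is already exponentially rare at $A = 2$. The sample sizes below are arbitrary choices.

```python
import math
import random

random.seed(1)
n, trials, A = 50, 200, 2.0

# For a fixed unit vector u and standard Gaussian rows R_i, each R_i u is
# exactly N(0, 1), so we may sample the vector M u directly coordinate-wise.
exceed = 0
for _ in range(trials):
    Mu = [random.gauss(0.0, 1.0) for _ in range(n)]
    if math.sqrt(sum(x * x for x in Mu)) > A * math.sqrt(n):
        exceed += 1
print(exceed)  # 0: the event ||Mu|| > 2 sqrt(n) is exponentially rare
```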
\begin{remark}
Expressing Lemma \ref{lemma1} in probabilistic terms, we have proved that for any individual unit vector $u$, the norm $\| M u \|$ of the product of $M$ with $u$ grows at most like $\sqrt n$, or equivalently $\| M u \| = O(\sqrt n )$ with overwhelming probability. \end{remark}
\begin{remark}
At this stage, one might imagine that, equipped with Lemma \ref{lemma1}, we could finalize the proof of Proposition \ref{prop1}. The slight difference between Lemma \ref{lemma1} and Proposition \ref{prop1} is the set over which the bound holds. Lemma \ref{lemma1} states that for any individual unit vector $u$, the norm $\| M u \|$ grows at most like $\sqrt n$. Proposition \ref{prop1} states that the supremum of $\| M u \|$ over all unit vectors $u$ grows at most like $\sqrt n$. One could imagine going from Lemma \ref{lemma1} to Proposition \ref{prop1} using a simple union bound over all points of the unit sphere: \begin{equation}
\mathbb{P}( \| M \|_{op} > \lambda ) \leq \mathbb{P}( \bigcup \limits_{u \in \mathcal{S} } \| M u \| > \lambda ) \end{equation} However, we would be stuck, as the unit sphere $\mathcal{S}$ is an uncountable set.
To solve this issue, we change the set in the union bound and use the usual trick of a maximal $\varepsilon$-net of the unit sphere $\mathcal{S}$, denoted by $\Sigma(\varepsilon)$. This leads to Lemma \ref{lemma2}. As we will see shortly, a maximal $\varepsilon$-net of the sphere $\mathcal{S}$ is finite, by standard packing arguments. On this particular set, we can exploit the fact that the map $x \rightarrow \| M x \|$ is Lipschitz with Lipschitz constant $\| M \|_{op}$; the induced continuity allows us to control the norm $\| M v \|$ for $v \in \Sigma(\varepsilon)$. \end{remark}
\begin{lemma}\label{lemma2} Let $0 < \varepsilon < 1$ and $\Sigma(\varepsilon)$ be the maximal $\varepsilon$-net of the sphere $\mathcal{S}$, that is the set of points in $\mathcal{S}$ separated from each other by a distance of at least $\varepsilon$ and which is maximal with respect to set inclusion. Then for any $n \times n$ matrix $M$ and any $\lambda > 0$, we have \begin{equation}\label{lemma2_eq}
\mathbb{P}( \| M \|_{op} > \lambda) \leq \mathbb{P}( \bigcup \limits_{v \in \Sigma(\varepsilon) } \| \mathbf{M} v \| > \lambda (1-\varepsilon) ) \end{equation} \end{lemma}
\begin{proof}
From the definition of the operator norm (see \ref{operatornorm_def}) as a supremum, using the fact that the map $x \rightarrow \| M x \|$ is Lipschitz, hence continuous, and that the unit sphere $\mathcal{S}$ is compact since we are in finite dimension, we can find $x \in \mathcal{S}$ attaining the supremum (recall that a continuous function attains its supremum on a compact set):
\begin{equation}
\| M x \| = \| M \|_{op}. \end{equation} We can eliminate the trivial case of $x$ belonging to $\Sigma(\varepsilon)$, as inequality \ref{lemma2_eq} is easily verified in this scenario. In the other case, where $x$ does not belong to $\Sigma(\varepsilon)$, there must exist a point $y$ in $\Sigma(\varepsilon)$ whose distance to $x$ is less than $\varepsilon$ (otherwise we could add $x$ to $\Sigma(\varepsilon)$, contradicting its maximality).
We now use the Lipschitz property of the map $x \rightarrow \| M x \|$, whose Lipschitz constant is given by $\| M \|_{op}$.
Since $\| x- y \| \leq \varepsilon$, the Lipschitz property gives \begin{equation}
\| M (x-y) \| \leq \| M \|_{op} \| x- y \| \leq \| M \|_{op} \varepsilon \end{equation} The triangle inequality gives \begin{equation}
\| M \|_{op} = \| M x \| \leq \| M (x-y) \| + \| M y \| \leq \| M \|_{op} \varepsilon + \| M y \| \end{equation} Hence, \begin{equation}
\| M y \| \geq \| M \|_{op} (1-\varepsilon) \end{equation}
In particular, if $\| M \|_{op} > \lambda$, then $\| M y \| > \lambda (1-\varepsilon)$, which concludes the proof. \end{proof}
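The inequality of Lemma \ref{lemma2} can be checked numerically on a small example. The sketch below uses an arbitrary fixed $2\times 2$ matrix, computes $\| M \|_{op}$ exactly from the largest eigenvalue of $M^\top M$, builds an $\varepsilon$-covering of the unit circle, and verifies that the maximum of $\| M v \|$ over the net is at least $(1-\varepsilon)\| M \|_{op}$.

```python
import math

M = [[3.0, 1.0], [0.0, 2.0]]   # an arbitrary fixed 2x2 matrix
eps = 0.25

def Mnorm(theta):
    """||M u|| for the unit vector u = (cos theta, sin theta)."""
    x, y = math.cos(theta), math.sin(theta)
    return math.hypot(M[0][0] * x + M[0][1] * y, M[1][0] * x + M[1][1] * y)

# Exact operator norm from the largest eigenvalue of M^T M (2x2 closed form)
g11 = M[0][0] ** 2 + M[1][0] ** 2
g22 = M[0][1] ** 2 + M[1][1] ** 2
g12 = M[0][0] * M[0][1] + M[1][0] * M[1][1]
op = math.sqrt((g11 + g22 + math.sqrt((g11 - g22) ** 2 + 4 * g12 ** 2)) / 2)

# An eps-covering of the unit circle: angular step 2*asin(eps/2) gives chord eps
step = 2 * math.asin(eps / 2)
net = [k * step for k in range(int(2 * math.pi / step) + 1)]
best = max(Mnorm(t) for t in net)
print(best >= (1 - eps) * op)  # True
```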
\begin{remark}
Lemma \ref{lemma2} is very intuitive. The continuity of the map $x \rightarrow \| M x \|$ implies that it attains its maximum on the compact unit sphere. By a packing argument, there is necessarily a point of the maximal net $\Sigma(\varepsilon)$ within distance $\varepsilon$ of this optimum. As the map $x \rightarrow \| M x \|$ is Lipschitz with Lipschitz constant $ \| M \|_{op} $, the decrease of $\| M x \|$ between the optimum and this point is at most $ \| M \|_{op} \varepsilon $. \end{remark}
Last but not least, we recall that the cardinality of a maximal $\varepsilon$-net $\Sigma(\varepsilon)$ of the sphere $\mathcal{S}$ is of order at most $\varepsilon^{-(n-1)}$, where $n-1$ is the dimension of the sphere, by the following lemma.
\begin{lemma}\label{lemma3} Let $0 < \varepsilon < 1$, and let $\Sigma(\varepsilon)$ be a maximal $\varepsilon$-net of the unit sphere $\mathcal{S}$. Then $\Sigma(\varepsilon)$ has cardinality at most $\frac{C }{\varepsilon^{n-1}} $ for some non negative constant $C > 0$ and at least $\frac{c}{\varepsilon^{n-1}}$ for some constant $c >0$. \end{lemma}
\begin{proof} The proof is quite intuitive and simple; it relies on a volume packing argument. The balls of radius $\varepsilon / 2$ centered at the points of $\Sigma(\varepsilon)$ are disjoint, and their number equals the cardinality of $\Sigma(\varepsilon)$. By the triangle inequality, and using the fact that $\varepsilon / 2 <1$, all these balls are contained in the annulus between the ball of radius $(1+\varepsilon / 2)$ centered at the origin and the ball of radius $(1-\varepsilon / 2)$ with the same center. Hence, using the fact that the volume of a ball is a constant times the radius raised to the dimension of the space, we can pack at most $$ \frac{ (1+\varepsilon / 2 )^{n} - (1-\varepsilon /2)^{n} }{ (\varepsilon /2)^n} $$ such balls, which proves that the cardinality is at most $\frac{C}{\varepsilon^{n-1}}$ for some constant $C > 0$, since for small $\varepsilon$ the above ratio is of order $\frac{n \, 2^{n}} {\varepsilon ^{n-1}}$.
Conversely, for $\varepsilon < 2$, the set $\Sigma(\varepsilon)$ is not empty. We sequentially pack the annulus between the same large and small balls, of radii $(1+\varepsilon / 2)$ and $(1-\varepsilon / 2)$ respectively, both centered at the origin, with disjoint balls of radius $\varepsilon / 2$ whose centers lie on the unit sphere, and we take the set of the centers of these balls. As the balls of radius $\varepsilon / 2$ are disjoint, by the triangle inequality their centers are at distance at least $\varepsilon$ from each other. Because $\Sigma(\varepsilon)$ is a maximal set, its cardinality is at least the number of centers so created. The same volume computation shows that we can pack of order $$ \frac{ (1+\varepsilon / 2 )^{n} - (1-\varepsilon /2)^{n} }{ (\varepsilon /2)^n} $$ such centers, which proves that the cardinality is at least $\frac{c}{\varepsilon^{n-1}}$ for some constant $c > 0$. \end{proof}
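The cardinality estimate of Lemma \ref{lemma3} can be illustrated in the simplest case $n = 2$, where the unit sphere is the circle and a greedy construction of an $\varepsilon$-separated set yields about $2\pi/\varepsilon$ points, i.e. of order $\varepsilon^{-(n-1)}$. The sketch below is illustrative only; the grid resolution and the value of $\varepsilon$ are arbitrary choices.

```python
import math

def eps_net_circle(eps, grid=10000):
    """Greedily build an eps-separated subset of the unit circle in R^2."""
    net = []
    for k in range(grid):
        t = 2 * math.pi * k / grid
        p = (math.cos(t), math.sin(t))
        if all(math.dist(p, q) >= eps for q in net):
            net.append(p)
    return net

eps = 0.1
net = eps_net_circle(eps)
print(len(net))  # about 2*pi/eps points, i.e. of order 1/eps for the circle
```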
\begin{proof} We can now prove Proposition \ref{prop1} as follows. Using Lemma \ref{lemma2} and the union bound, we have \begin{equation}
\mathbb{P}( \| M \|_{op} > A \sqrt{n}) \leq \mathbb{P}( \bigcup \limits_{v \in \Sigma(\varepsilon) } \| M v \| > A \sqrt{n} (1-\varepsilon) ) \leq \sum_{v \in \Sigma(\varepsilon)} \mathbb{P}( \| M v \| > A \sqrt{n} (1-\varepsilon) ) \end{equation}
Lemma \ref{lemma1} states that for $v \in \mathcal{S}$, there exist absolute constants $C,c > 0$ such that \begin{equation}
\mathbb{P}( \| \mathbf{M} v \| > A \sqrt{n} (1-\varepsilon) ) \leq C \exp( -c A (1-\varepsilon) n ) \end{equation}
Since the cardinality of $\Sigma(\varepsilon) $ is bounded by $\frac{K }{\varepsilon^{n-1}}$, we can upper bound $\mathbb{P}( \| M \|_{op} > A \sqrt{n})$ by \begin{equation}
\mathbb{P}( \| M \|_{op} > A \sqrt{n}) \leq \frac{K }{\varepsilon^{n-1}} C \exp( -c A (1-\varepsilon) n ) \end{equation}
Fixing $\varepsilon = 1 / 2$, denoting $C ' = K C $ and taking $c'$ such that $c'A = cA / 2 - \ln 2$, we have \begin{equation}
\mathbb{P}( \| M \|_{op} > A \sqrt{n}) \leq C' \exp( -c' A n ) \end{equation} which concludes the proof. \end{proof}
\subsection{Proof of proposition \ref{prop2}}\label{proof2} \begin{proof} This is exactly the same reasoning as for Proposition \ref{prop1}, but starting from the fact that each row of the matrix $M$ is uniformly sub-Gaussian for the $L_a$ norm. This means that there exist absolute constants $B,b>0$ such that for any $i=1, \ldots,n$, \begin{equation}
\mathbb{P}( \| R_i \|_a \geq t ) \leq B \exp(-b t^2). \end{equation} The independence of the rows allows us to prove the following analogue of Lemma \ref{lemma1}: under the condition of Proposition \ref{prop2}, there exist absolute constants $C, c > 0$ such that for any $u \in \mathcal{S}$, we have \begin{equation}\label{lemma1_eq2}
\mathbb{P} ( \| M u \| > A \sqrt n )\leq C \exp( -c A n) \end{equation} for all $ A \geq C$. Lemmas \ref{lemma2} and \ref{lemma3} remain unchanged, allowing us to conclude. \end{proof}
\end{document}
"id": "1812.09618.tex",
"language_detection_score": 0.7853076457977295,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Surpassing the repeaterless bound with a photon-number encoded measurement-device-independent quantum key distribution protocol}
\author{\"{O}zlem Erk{\i}l{\i}\c{c}} \email{ozlemerkilic1995@gmail.com} \affiliation{Centre of Excellence for Quantum Computation and Communication Technology, The Department of Quantum Science and Technology, Research School of Physics and Engineering, The Australian National University, Canberra, Australian Capital Territory, Australia} \author{Lorc\'{a}n Conlon} \affiliation{Centre of Excellence for Quantum Computation and Communication Technology, The Department of Quantum Science and Technology, Research School of Physics and Engineering, The Australian National University, Canberra, Australian Capital Territory, Australia} \author{Biveen Shajilal} \affiliation{Centre of Excellence for Quantum Computation and Communication Technology, The Department of Quantum Science and Technology, Research School of Physics and Engineering, The Australian National University, Canberra, Australian Capital Territory, Australia} \author{Sebastian Kish} \affiliation{Centre of Excellence for Quantum Computation and Communication Technology, The Department of Quantum Science and Technology, Research School of Physics and Engineering, The Australian National University, Canberra, Australian Capital Territory, Australia} \author{Spyros Tserkis} \affiliation{Centre of Excellence for Quantum Computation and Communication Technology, The Department of Quantum Science and Technology, Research School of Physics and Engineering, The Australian National University, Canberra, Australian Capital Territory, Australia} \author{Yong-Su Kim} \affiliation{Center for Quantum Information, Korea Institute of Science and Technology (KIST), Seoul 02792, Republic of Korea} \affiliation{Division of Nano \& Information Technology, KIST School, Korea University of Science and Technology, Seoul 02792, Republic of Korea} \author{Ping Koy Lam} \affiliation{Centre of Excellence for Quantum Computation and Communication Technology, The Department of Quantum Science and Technology, Research School of Physics 
and Engineering, The Australian National University, Canberra, Australian Capital Territory, Australia} \affiliation{Institute of Materials Research and Engineering, Agency for Science, Technology and Research (A*STAR), Singapore 138634} \author{Syed M. Assad} \email{cqtsma@gmail.com} \affiliation{Centre of Excellence for Quantum Computation and Communication Technology, The Department of Quantum Science and Technology, Research School of Physics and Engineering, The Australian National University, Canberra, Australian Capital Territory, Australia}
\date{\today}
\maketitle
\section*{Abstract} Decoherence is detrimental to quantum key distribution~(QKD) over large distances. One of the proposed solutions is to use quantum repeaters, which divide the total distance between the users into smaller segments to minimise the effects of the losses in the channel. However, the secret key rates that repeater protocols can achieve are fundamentally bounded by the separation between each neighbouring node. Here we introduce a measurement-device-independent protocol which uses high-dimensional states prepared by two distant trusted parties and a coherent total photon number detection for the entanglement swapping measurement at the repeater station. We present an experimentally feasible protocol that can be implemented with current technology as the required states reduce down to the single-photon level over large distances. This protocol outperforms the existing measurement-device-independent and twin-field QKD protocols by surpassing the fundamental limit of the repeaterless bound for the pure-loss channel at a shorter distance and achieves a higher transmission distance in total when experimental imperfections are considered.
\section{\label{sec:level1}Introduction} Quantum key distribution is a method used to securely establish a secret key between two distant trusted parties, namely Alice and Bob~\cite{ekert2014,gisin2002,pirandola2020advances}. Depending on the degrees of freedom of the underlying quantum system involved, QKD protocols are classified into two types, discrete-variable (DV) protocols where the key information is encoded on discrete degrees of freedom of photonic states such as polarisation~\cite{ch1984quantum,ekert1991quantum} and continuous-variable (CV) based protocols which encode the keys on continuous degrees of freedom such as amplitude and phase quadratures of the optical field~\cite{ralph1999,hillery2000}. In QKD, the main obstacle in establishing a secure key over large distances is the decoherence induced by photon losses.
Quantum repeaters are devices that can be used to improve the transmission distance of QKD protocols by dividing the total distance into smaller portions between the sender and receiver, making the losses in the channel more manageable~\cite{briegel1998quantum,dur1999quantum,duan2001long,Sangouard2011,munro2015inside}. Quantum repeaters~\cite{munro2015inside} use entanglement swapping~\cite{goebel2008multistage,kaltenbaek2009high,li2019experimental} to distribute entanglement, which is enhanced by entanglement distillation protocols~\cite{zhao2003experimental,vollbrecht2011entanglement,bratzik2013quantum}. One issue is that a majority of these repeater protocols require the use of quantum memories~\cite{simon2007quantum,Sangouard2011,dias2020}. However, quantum memories are limited by their operational wavelengths and memory efficiencies. Even though solid-state quantum memories \cite{bussieres2014quantum,stuart2021initialization} can operate at telecommunication wavelengths, their memory efficiency limits their efficacy. In contrast, cold-atom quantum memories currently hold the record for the efficiency, but operate outside of telecommunication wavelengths requiring frequency conversion to leverage communication infrastructure~\cite{cho2016highly,hsiao2018highly}. The frequency conversion results in low efficiencies limiting the performance of the current quantum repeaters~\cite{maring2014storage}.
The PLOB bound~\cite{pirandola2017fundamental} sets the fundamental limit for the maximum amount of private states that can be transferred in QKD for a given quantum channel without the use of a repeater (See Ref.~\cite{wilde2017converse} for the strong converse property of the bound and Ref.~\cite{pirandola2019end} for the bounds generalised to repeater-assisted communication). No point-to-point QKD protocol can surpass this bound unless there is a quantum repeater splitting the channel. Therefore, the PLOB bound can also be used as a benchmark to test the quality of quantum repeaters~\cite{pirandola2020advances}. It is known that the PLOB bound can be saturated with the squeezed-state protocol without the need for several copies of the states or a collective measurement for the pure-loss channel~\cite{pirandola2020advances}. When there is a repeater-chain, the end-to-end quantum capacity scales with the number of repeaters~\cite{pirandola2019end} and it is still an open question whether the corresponding repeater bounds can be saturated with a simple protocol without multiple copies of the quantum states.
Measurement-device-independent QKD (MDI-QKD) protocols are a type of repeater protocols in which the secret keys are established via the measurement of an untrusted third party~\cite{braunstein2012side,lo2012measurement,pirandola2013cvmdi,pirandola2015high}. These protocols are called \lq measurement-device-independent\rq\hspace{0.05cm} as Alice and Bob do not perform a measurement in their stations, but the measurement is performed by an untrusted party, called Charlie. Twin-field QKD (TF-QKD)~\cite{lucamarini2018} is a DV based MDI protocol which utilises weak identical coherent states sent by both Alice and Bob to Charlie, who performs entanglement swapping via a probabilistic photon detection measurement. TF-QKD protocol is the first repeater protocol without a quantum memory that is able to surpass the PLOB bound~\cite{lucamarini2018,chen511km,chen658km} as it scales proportionally to the single-repeater bound~\cite{pirandola2019end}. CV based MDI (CV-MDI) QKD protocols work in a similar fashion where Alice and Bob both send a distribution of either coherent or squeezed states to Charlie, where he performs a heterodyne measurement~\cite{pirandola2013cvmdi,pirandola2015high,wang2019cvmdi,ma2019cvmdi}. In order to achieve a positive key rate in these CV-MDI protocols, the relay is positioned very close to Alice resulting in a very asymmetric set-up. As the relay is not placed right in the middle between Alice and Bob, the protocols scale like the repeaterless bound instead of the single-repeater bound. Hence, these protocols always sit below the PLOB bound.
In this work, we present a photon-number encoded MDI repeater protocol that surpasses the PLOB bound without the use of quantum memories through an entanglement swapping measurement.
Unlike the TF-QKD protocol, the entanglement swapping is obtained by a coherent total photon number measurement performed by Charlie who measures the total number of photons coming from Alice and Bob without knowing the individual contributions. Even though the photon-number encoded states are vulnerable to losses, we show that in the short distance regime, the secret key rates are much higher than the ones of the single-photon encoded states. We also propose an experimentally feasible protocol using single-photons as these high dimensional states reduce down to the single-photon level over large distances. This protocol performs better than the existing MDI and TF-QKD protocols as it attains higher key rates for the same transmission distances.
\section{\label{sec:results}Results} \subsection{\label{sec:protocol}The Measurement-Device-Independent Protocol} \subsubsection{\label{sec:statessection}Alice and Bob's States for Generating a Key} Let us assume that both Alice and Bob generate two-mode entangled states in their stations where they keep one arm of the entangled states to themselves and send the other to Charlie. Charlie then performs a joint entanglement swapping measurement on the states that Alice and Bob send.
QKD protocols can be expressed in either entanglement-based or prepare-and-measure schemes. Both of these models are mathematically equivalent~\cite{weedbrook2012,grosshans2003}, however the entanglement-based representation is more convenient for the security analysis of a QKD protocol. In the conventional entanglement-based CV-QKD protocols, Alice sends one arm of a two-mode squeezed vacuum state (TMSV) to Bob while performing a heterodyne measurement on the other arm of the TMSV state she kept. This procedure is equivalent to Alice sending a coherent state in the prepare-and-measure scheme~\cite{grosshans2003}. This entangled two-mode state in the Fock basis is expressed as \begin{equation} \label{eq:eprstate} \ket{\Psi}_{\mathrm{A_1 A_2}}=\frac{1}{\sqrt{N}}\sqrt{1-\gamma^2}\sum_{n=0}^{n_\text{max}}\gamma^n\ket{nn}_{\mathrm{A_{1}A_{2}}}, \end{equation} where \text{$\gamma\in[0,1)$} is the squeezing parameter and $N$ is the normalisation coefficient given by $\sum_{n=0}^{n_\text{max}}(1-\gamma^2)\gamma^{2n}$. \text{$\ket{n}$} denotes the $n$-photon Fock state. Note that a TMSV state is retrieved when \text{$n_\text{max}\rightarrow\infty$}~\cite{weedbrook2012}.
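As a small numerical check of the normalisation in Eq.~(\ref{eq:eprstate}), the sketch below computes the truncated photon-number distribution for the illustrative values $\gamma = 0.6$ and $n_\text{max} = 10$ (arbitrary choices) and verifies that $N = \sum_{n=0}^{n_\text{max}}(1-\gamma^2)\gamma^{2n} < 1$, approaching $1$ as $n_\text{max} \rightarrow \infty$.

```python
gamma, n_max = 0.6, 10   # illustrative squeezing parameter and truncation

# Unnormalised weights (1 - gamma^2) * gamma^(2n) of the truncated TMSV state
w = [(1 - gamma ** 2) * gamma ** (2 * n) for n in range(n_max + 1)]
N = sum(w)                     # normalisation coefficient of the truncated state
p = [x / N for x in w]         # photon-number distribution after truncation
mean_photons = sum(n * pn for n, pn in enumerate(p))
print(abs(sum(p) - 1.0) < 1e-12, N < 1.0)  # normalised, and N -> 1 as n_max grows
```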
In this paper, we use the entanglement-based version, shown in Fig.~\ref{fig:repeater_diagram}(a), for the security analysis of the prepare-and-measure method, shown in Fig.~\ref{fig:repeater_diagram}(b). We express Alice's and Bob's states as follows: \begin{subequations} \label{eq:statealice} \begin{equation} \label{eq:statealicesub} \ket{\Psi}_{\mathrm{A_{1}A_{2}}}=\sum_{n=0}^{n_{\text{max}}}\sqrt{a_n}\ket{nn}_{\mathrm{A_{1}A_{2}}}, \end{equation} \begin{equation} \label{eq:bobsstatesub} \ket{\Psi}_{\mathrm{B_{1}B_{2}}}=\sum_{n=0}^{n_{\text{max}}}\sqrt{b_n}\ket{nn}_{\mathrm{B_{1}B_{2}}}, \end{equation} \end{subequations} where $\sum_{n=0}^{n_{\text{max}}}a_n=1$ and $\sum_{n=0}^{n_{\text{max}}}b_n=1$. \text{$a_{n}$} and \text{$b_{n}$} represent real coefficients of each Fock-number state \text{$\ket{nn}$}. These coefficients are the same for both Alice and Bob and are optimised to achieve an optimal key rate, as explained in more detail in Sec.~\ref{sec:calkeyrate}. \text{$n_{\text{max}}$} is the maximum number of photons that Alice and Bob send individually, and each party encodes the key information on the Fock states \text{$\ket{n}$}. \begin{figure*}\label{fig:repeater_diagram}
\end{figure*}
In the entanglement-based scheme, Alice and Bob keep one arm of the entangled states to measure the number of photons using a photon-number resolving detector (PNRD) to establish a key while sending the other arm to Charlie. Charlie performs a coherent total photon number measurement on the incoming modes from Alice and Bob, and announces the outcome of his measurement (described in detail in Sec.~\ref{sec:charliemeasurement}). Alice and Bob's measurement in their own stations is represented as \begin{equation} \Pi_{n}=\ketbra{n}{n}, \label{eq:pnrdpovm} \end{equation} where \text{$n$} denotes the number of photons being measured. In the prepare-and-measure scheme, this corresponds to preparing the Fock state $\ket{n}$ with probability $a_n$.
The states in the prepare-and-measure scheme can be engineered experimentally with several different methods such as conditional teleportation~\cite{asavanant2021wave}, coherent displacements and photon subtraction~\cite{fiuravsek2005conditional}, and repeated parametric-down conversion~\cite{clausen2001conditional}. Alternatively, these states can be created by extending the work presented in Ref.~\cite{bimbard2010quantum} to higher photon levels by using spontaneous parametric down-conversion on the signal channel and conditional measurements on the idler channel.
\subsubsection{\label{sec:charliemeasurement}Charlie's Measurement} The states are sent to Charlie via a channel with a total transmissivity of \text{$\tau\in[0,1]$}, which is split into smaller channels between Alice and Charlie and Charlie and Bob represented as \text{$\tau_{\mathrm{A}}$} and \text{$\tau_{\mathrm{B}}$} respectively. Single-repeater protocols can be benchmarked based on the PLOB bound which is given by \text{$-\mathrm{log}_{2}(1-\tau)$}~\cite{pirandola2017fundamental,pirandola2019end}. In order to surpass this bound, the protocol needs to scale like the single-repeater bound~\cite{pirandola2019end} which is expressed as \text{$-\mathrm{log}_{2}(1-\sqrt{\tau})$}. This requires Charlie to be positioned in the middle of Alice and Bob such that the key-rate scales with the square root of the transmission probability, $O(\sqrt{\tau})$. In this protocol, Charlie performs a collective photon number measurement on the incoming modes from Alice, \text{${\mathrm{A_2}}$}, and Bob, \text{$\mathrm{{B_2}}$}.
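The two benchmark rates quoted above can be compared directly; the short sketch below evaluates the repeaterless PLOB bound $-\mathrm{log}_{2}(1-\tau)$ and the single-repeater bound $-\mathrm{log}_{2}(1-\sqrt{\tau})$ at an illustrative transmissivity (the value $\tau = 0.01$ is an arbitrary choice).

```python
import math

def plob(tau):
    """Repeaterless (PLOB) bound: -log2(1 - tau)."""
    return -math.log2(1 - tau)

def single_repeater(tau):
    """Single-repeater bound with a middle node: -log2(1 - sqrt(tau))."""
    return -math.log2(1 - math.sqrt(tau))

tau = 0.01   # illustrative end-to-end transmissivity
print(plob(tau), single_repeater(tau))
# single_repeater(tau) > plob(tau) for every 0 < tau < 1
```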
If Alice and Bob send a maximum of \text{$n$} photons each, denoted as \text{$n_\text{max}$}, Charlie can measure from $0$ to \text{$2n_\text{max}$} photons. Charlie's measurement can be realised by projecting the modes \text{${\mathrm{A_2}}$} and \text{$\mathrm{{B_2}}$} onto the following states
\begin{equation} \ket{\phi_{c}^j}=\sum_{n=0}^{c}\frac{\omega^{nj}\ket{n}\!\ket{c-n}}{\sqrt{c+1}}, \label{eq:clickpovm} \end{equation} where \text{$c\in\{0,1,\cdots,2n_{\text{max}}\}$} represents the total number of photons Charlie receives from the two modes, and \text{$j\in\{0,1,\cdots,c \}$} denotes the different states in the $c$-photon subspace states while \text{$\omega$} is given by \text{$\omega=e^\frac{2\pi i}{c+1}$}.
For example, when $c=2$, Charlie's three possible outcomes are
\begin{subequations} \label{eq:alltheoutcomes} \begin{equation} \ket{\phi_{2}^0}=\frac{1}{\sqrt{3}}(\ket{02}+\ket{11}+\ket{20}), \label{eq:phi0} \end{equation} \begin{equation} \ket{\phi_{2}^1}=\frac{1}{\sqrt{3}}(\ket{02}+e^{\frac{2\pi i}{3}}\ket{11}+e^{-\frac{2\pi i}{3}}\ket{20}), \label{eq:phi1} \end{equation} \begin{equation} \ket{\phi_{2}^2}=\frac{1}{\sqrt{3}}(\ket{02}+e^{-\frac{2\pi i}{3}}\ket{11}+e^{\frac{2\pi i}{3}}\ket{20}). \label{eq:phi2} \end{equation} \end{subequations} These measurements are designed such that even though Charlie knows the total number of photons between Alice and Bob, he does not know the number of photons in each mode separately.
The outcomes of Charlie's measurement form a valid positive operator value measurement (POVM) for a given outcome \begin{equation} \Pi_{c}^j=\ketbra{\phi_{c}^j}{\phi_{c}^j}, \label{eq:povm} \end{equation} with all the possible outcomes satisfying the identity resolution with \text{$c\in\{0,1,\cdots,2n_{\text{max}}\}$} and \text{$j\in\{0,1,\cdots,c \}$}, i.e., \begin{equation} \sum_{c=0}^{2n_\text{max}}\sum_{j=0}^c\Pi_{c}^j=\mathbb{I}. \label{eq:povmidentity} \end{equation}
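The identity resolution of Eq.~(\ref{eq:povmidentity}) can be verified numerically on the two-mode Fock space actually reachable when Alice and Bob each send at most $n_\text{max}$ photons. The sketch below (with the illustrative choice $n_\text{max} = 2$) builds each $\ket{\phi_c^j}$ restricted to that space and checks that the projectors sum to the identity; the helper names are arbitrary.

```python
import cmath

n_max = 2
dim = (n_max + 1) ** 2                    # two-mode Fock space, each mode <= n_max

def idx(n, m):
    return n * (n_max + 1) + m            # basis ordering |n>|m>

def phi(c, j):
    """|phi_c^j>, restricted to the space with at most n_max photons per mode."""
    omega = cmath.exp(2j * cmath.pi / (c + 1))
    v = [0j] * dim
    for n in range(c + 1):
        if n <= n_max and c - n <= n_max:
            v[idx(n, c - n)] = omega ** (n * j) / cmath.sqrt(c + 1)
    return v

# Accumulate the sum of projectors |phi_c^j><phi_c^j| over all outcomes (c, j)
S = [[0j] * dim for _ in range(dim)]
for c in range(2 * n_max + 1):
    for j in range(c + 1):
        v = phi(c, j)
        for a in range(dim):
            for b in range(dim):
                S[a][b] += v[a] * v[b].conjugate()

ok = all(abs(S[a][b] - (1 if a == b else 0)) < 1e-9
         for a in range(dim) for b in range(dim))
print(ok)  # True: the outcomes resolve the identity on this space
```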
The measurement performed by Charlie establishes correlations between Alice and Bob. In the lossless channel, when Charlie detects two photons with his POVM element \text{$\ket{\phi_{2}^0}$}, Alice and Bob's state becomes \text{$\ket{\psi}_{\mathrm{A_{1}B_{1}}|^{j=0}_{c=2}}=\sqrt{a_{0}a_{2}}\ket{02}+a_{1}\ket{11}+\sqrt{a_{2}a_{0}}\ket{20}$}. Therefore, Charlie swaps the entanglement between Alice and Bob via the measurement he performs similar to many MDI protocols~\cite{lo2012measurement,lucamarini2018}.
\subsubsection{\label{sec:checkstatessection}Alice and Bob's Check States for Security} A possible security issue is that Charlie can potentially lie to Alice and Bob about his measurement outcome, as he can perform separable measurements on Alice and Bob's modes individually or announce a different photon number from the one he actually measured. When the latter occurs, Alice and Bob can tell that Charlie is not telling the truth as the probabilities of measuring different number of photons are not equal. However, when the former happens, Alice and Bob cannot distinguish whether Charlie is performing a total photon number measurement or a separable measurement on the two modes. Even though the separable measurement does not yield an entangled state between Alice and Bob, it still establishes classical correlations between the parties. The probability of Charlie measuring a given number of photons when he performs a separable measurement ends up being the same as his joint measurement described in Sec.~\ref{sec:charliemeasurement}.
We address this security issue by Alice and Bob randomly switching from their key states and sending some check states to Charlie to detect any abnormalities in the system. One of the possible check states they send is a superposition of the photon number states, analogous to the original DV diagonal states, and has the following form \begin{equation} \ket{+}=\frac{1}{\sqrt{n_{\text{max}}+1}}\sum_{n=0}^{n_{\text{max}}}\ket{n}. \label{eq:diagonalstates} \end{equation}
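As a minimal numerical sketch (our own check, not the paper's code), the check state of Eq.~\eqref{eq:diagonalstates} can be written as a vector in a Fock basis truncated at $n_\text{max}=2$ photons and verified to be normalised with uniform Fock-state probabilities:

```python
import numpy as np

# Check state |+> = (1/sqrt(n_max+1)) * sum_n |n> in a truncated Fock basis.
n_max = 2
plus = np.ones(n_max + 1) / np.sqrt(n_max + 1)

# |+> is normalised and assigns equal probability 1/(n_max+1) to each |n>.
print(np.vdot(plus, plus))   # -> 1.0
print(np.abs(plus) ** 2)     # -> [1/3, 1/3, 1/3]
```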
The untrusted party, Charlie, is required to announce the total number of photons he measured as well as the outcome index $(c,j)$. Table~\ref{tab:probtable} shows Charlie's probability of measuring $c=2$ photons as Alice and Bob send a mixture of key states and check states. Whenever both parties send $\ket{++}_{\mathrm{AB}}$, the probability of Charlie measuring $c=2$ photons is different for the non-separable and separable measurements. This is due to the nature of Charlie's POVM. For $c=2$, Charlie has three different outcomes in this set labelled as $\ket{\phi_{2}^0}$, $\ket{\phi_{2}^1}$, and $\ket{\phi_{2}^2}$. If Alice and Bob send $\ket{++}_{\mathrm{AB}}$, the probability of measuring $\ket{\phi_{2}^0}$ is $1/3$ whereas the other two outcomes return $0$. In the case of separable measurements, each two-photon outcome occurs with equal probability, allowing Alice and Bob to determine whether Charlie is being unfaithful.
\begin{table}[t!] \renewcommand{\arraystretch}{1.2} \caption{\label{tab:probtable} Charlie's measurement probability for both non-separable and separable measurements for \text{$c=2$} when Alice and Bob send a combination of their check states, \text{$(+)$}, \text{$\ket{+}=\frac{1}{\sqrt{3}}(\ket{0}+\ket{1}+\ket{2})$} and key states, (\text{$K$}), \text{$\rho_{A_2}=\frac{1}{3}(\ketbra{0}+\ketbra{1}+\ketbra{2})$} in the prepare-and-measure representation with \text{$n_{\text{max}}=2$} photons.} \begin{ruledtabular}
\begin{tabular}{cP{0.5cm}P{0.5cm}P{0.5cm}|P{0.5cm}P{0.5cm}P{0.5cm}}
& \multicolumn{3}{c|}{Non-separable} &\multicolumn{3}{c}{Separable}\\ \hline AB & \text{$\Pi_{2}^0$} & \text{$\Pi_{2}^1$} & \text{$\Pi_{2}^2$} & 02 & 11 & 20 \\ \colrule $KK$ & 1/9 & 1/9 & 1/9 & 1/9 & 1/9 & 1/9\\ $K+$ & 1/9 & 1/9 & 1/9 & 1/9 & 1/9 & 1/9\\ $+K$ & 1/9 & 1/9 & 1/9 & 1/9 & 1/9 & 1/9\\ $++$ & 1/3 & 0 & 0 & 1/9 & 1/9 & 1/9\\ \end{tabular} \end{ruledtabular} \end{table}
The separable measurement is not the only possible measurement that Charlie can make. Ideally, Alice and Bob should not rely on Charlie's announcement of his measurement basis to determine whether Charlie was reliable or to estimate how much information is leaked to another malicious party, called Eve. For security purposes, it is essential to utilise two or more non-orthogonal bases in QKD. For example, in BB84~\cite{ch1984quantum} and the six-state protocol~\cite{bruss1998optimal}, Alice sends states in two and three different orthogonal bases to Bob, respectively. By calculating the bit-error rates in these bases, Alice and Bob can estimate Eve's information. However, these protocols use only the probabilities of the matched measurement outcomes, which overestimates Eve's information, resulting in a lower key rate~\cite{liang2015tomographic}. Refs.~\cite{watanabe2008tomography,liang2015tomographic} showed that full tomography of the quantum state shared by Alice and Bob can enhance the secret key rate by bounding Eve's information more accurately. Instead of using the statistics of the matched bases only, Alice and Bob can estimate their joint state from both the matched and unmatched bases. This joint state can then be used to calculate the Holevo bound on Eve's information. The Holevo bound~\cite{holevo1998capacity} describes the maximum amount of classical information that can be extracted from a quantum channel; in QKD, it can be used to upper-bound the information leaked to Eve.
Our protocol requires a similar approach to the protocols discussed above~\cite{watanabe2008tomography,liang2015tomographic}, where Alice and Bob measure their joint state in mutually unbiased bases to perform full tomography in the entanglement-based scheme. Two bases $\{ \ket{e_i} \}_{i=0}^{m-1}$ and $\{ \ket{h_i} \}_{i=0}^{m-1}$ are called mutually unbiased when $|\braket{e_i}{h_j}|^2=1/m$ for any $i$ and $j$~\cite{schwinger1960unitary}, where \text{$m$} is the dimension of the Hilbert space. If the dimension of the Hilbert space, $m$, is a power of a prime number, there exist \text{$m+1$} mutually unbiased bases which form a complete set~\cite{wootters1989optimal}. In Methods~\ref{sec:appDVtomogprahy}, we show how to estimate Eve's information by reconstructing Alice and Bob's joint state through full tomography when Alice and Bob send single-photon states, i.e., \text{$n_\text{max}=1$}. In the entanglement-based scheme, Alice and Bob measure the modes they keep in their stations using the eigenvectors of the $X$, $Y$ and $Z$ bases, which are expressed as \begin{subequations} \label{eq:allbases} \begin{equation} \ket{\pm x}=\frac{\ket{0}\pm\ket{1}}{\sqrt{2}}, \label{eq:xbasis} \end{equation} \begin{equation} \ket{\pm y}=\frac{\ket{0}\pm i\ket{1}}{\sqrt{2}}, \label{eq:ybasis} \end{equation} \begin{equation} \ket{+z}=\ket{0},\ \ket{- z}=\ket{1}. \label{eq:zbasis} \end{equation} \end{subequations} These bases form a complete set of mutually unbiased bases for \text{$m=2$}.
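The unbiasedness condition can be checked numerically; the following sketch (our own verification, not the paper's code) confirms that the $X$, $Y$ and $Z$ eigenbases above pairwise satisfy $|\braket{e_i}{h_j}|^2=1/m=1/2$:

```python
import numpy as np

# Eigenbases of the X, Y and Z Pauli operators for m = 2.
x_basis = [np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)]
y_basis = [np.array([1, 1j]) / np.sqrt(2), np.array([1, -1j]) / np.sqrt(2)]
z_basis = [np.array([1, 0]), np.array([0, 1])]

# Every vector of one basis overlaps each vector of a different basis
# with probability exactly 1/2, the mutually-unbiased condition.
bases = [x_basis, y_basis, z_basis]
overlaps = [abs(np.vdot(e, h)) ** 2
            for a in range(3) for b in range(3) if a != b
            for e in bases[a] for h in bases[b]]
print(np.allclose(overlaps, 0.5))  # -> True
```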
In the equivalent prepare-and-measure scheme, Alice and Bob's measurement on the two mode entangled states, $\ket{\Psi}_{\mathrm{A_{1}A_{2}}}\!=\!\sqrt{a_0}\ket{00}+\sqrt{a_1}\ket{11}$ and $\ket{\Psi}_{\mathrm{B_{1}B_{2}}}\!=\!\sqrt{b_0}\ket{00}+\sqrt{b_1}\ket{11}$ in the $Z$ basis corresponds to them preparing the following states \begin{subequations} \label{eq:prepz} \begin{equation} \label{eq:st0} \ket{\psi_{+z}}=\ket{0}, \end{equation} \begin{equation} \label{eq:st1} \ket{\psi_{- z}}=\ket{1}, \end{equation} \end{subequations}
with probability $a_0$ and $b_0$, and $a_1$ and $b_1$ respectively. Their measurement in the $X$ basis is equivalent to them preparing the following states with equal probability \begin{equation} \label{eq:pmx}
\ket{\psi_{\pm x}}=\sqrt{\epsilon_0}\ket{0}\pm\sqrt{\epsilon_1}\ket{1}, \end{equation} i.e., they prepare $\ket{\psi_{+x}}$ and $\ket{\psi_{-x}}$ with a probability of $0.5$, where $\epsilon_0$ and $\epsilon_1$ represent the coefficients $a_0$ and $b_0$, and $a_1$ and $b_1$, respectively. Similarly, their measurement in the $Y$ bases corresponds to them preparing the following states with equal probability \begin{equation} \label{eq:pmy}
\ket{\psi_{\pm y}}=\sqrt{\epsilon_0}\ket{0}\mp i\sqrt{\epsilon_1}\ket{1}. \end{equation} We present the detailed results of this protocol in Sec.~\ref{sec:realimplementation}.
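The equivalence of the two pictures can be illustrated numerically. In the sketch below (illustrative coefficients $a_0=0.7$, $a_1=0.3$ of our own choosing), measuring mode $\mathrm{A_1}$ of $\sqrt{a_0}\ket{00}+\sqrt{a_1}\ket{11}$ in the $X$ basis steers mode $\mathrm{A_2}$ into $\ket{\psi_{\pm x}}$ with probability $1/2$ each, and the equal mixture of the two prepared states reproduces the reduced state $\mathrm{diag}(a_0,a_1)$ of the entangled source, so the channel cannot tell the two schemes apart:

```python
import numpy as np

# Prepared states |psi_{+-x}> = sqrt(a0)|0> +- sqrt(a1)|1>.
a0, a1 = 0.7, 0.3
psi_plus = np.array([np.sqrt(a0), np.sqrt(a1)])
psi_minus = np.array([np.sqrt(a0), -np.sqrt(a1)])

# Equal mixture of the two preparations equals the reduced state diag(a0, a1).
mixture = 0.5 * np.outer(psi_plus, psi_plus) \
        + 0.5 * np.outer(psi_minus, psi_minus)
print(np.allclose(mixture, np.diag([a0, a1])))  # -> True
```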
When Alice and Bob wish to encode the key onto the higher dimensional states, i.e., \text{$n_\text{max}>1$}, the number of check states they need to send increases. However, determining the existence of a complete set of mutually unbiased bases in an arbitrary dimensional Hilbert space is still an open problem in quantum information~\cite{horodecki2022five}. In this protocol, if Alice and Bob send states with \text{$n_\text{max}$} photons with a dimension of \text{$m=n_\text{max}+1$}, they need to send check states in \text{$m+1$} different bases to estimate Eve's Holevo bound provided that $m$ is a power of a prime number. These check states can be determined by following the method discussed in Ref.~\cite{wootters1989optimal}. We show the key rates of these higher dimensional states later in detail in Sec.~\ref{sec:highdim} with \text{$n_\text{max}=7$} photons.
\subsubsection{\label{sec:calkeyrate}Calculation of the Secret Key Rate} In the entanglement-based protocol, the global state before Charlie's measurement is a four-mode state, so the dimension required to simulate this protocol scales as \text{$m^4$}. Therefore, the coefficients of Alice and Bob's states in Eq.~\eqref{eq:statealice} are optimised by considering a classical protocol in which Eve and Charlie perform a photon-number measurement on their modes, and we optimise the difference between the classical mutual information of Alice and Bob and that of Eve and Alice. We call this the `classical protocol'; an explicit method for its implementation is shown in Methods~\ref{sec:appkeyclassical}. This avoids having to optimise a high-dimensional four-mode joint state with a total dimension of $m^4$. However, when computing the secret key rates, we do not assume any particular attack by Eve; instead, we calculate Eve's Holevo bound, and Charlie performs his collective photon-number measurement. It is also important to note that the optimisation problem is not convex for the high-dimensional states, and the solution provided for the coefficients $a_n$ and $b_n$ in this paper is one possible solution.
The states that Alice and Bob prepare are previously shown in Eq.~\eqref{eq:statealice}. They send these states through a pure-loss channel with a transmissivity \text{$\tau_{\mathrm{A}}$} and \text{$\tau_{\mathrm{B}}$} for the channel between Alice and Charlie and between Charlie and Bob, respectively. The pure-loss channel is modelled with a beamsplitter with a transmissivity \text{$\tau$}, where the beamsplitter mixes the input mode with the vacuum. The beamsplitter transformation can be defined as \begin{equation} B(\tau)=\mathrm{exp}[\mathrm{cos}^{-1}({\sqrt{\tau}})(\hat{a}^\dagger\hat{b}-\hat{a}\hat{b}^\dagger)], \label{eq:beamsplitter} \end{equation} where \text{$\tau$} can be written as a function of the fibre distance, \text{$d$}, with a loss of 0.2dB per km as \text{$\tau=10^{-0.02d}$}. \text{$\hat{a}$} and \text{$\hat{b}$} are the annihilation operators, while \text{$\hat{a}^\dagger$} and \text{$\hat{b}^\dagger$} are the creation operators of the two modes, respectively.
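The beamsplitter unitary of Eq.~\eqref{eq:beamsplitter} can be built explicitly in a truncated Fock space. The sketch below (our own illustration; the distance $d=50$ km is an arbitrary choice, and the truncation at one photon per mode is exact here because only one photon is present in total) checks that a single photon entering mode $\hat{a}$ survives with probability $|\!\bra{1,0}B(\tau)\ket{1,0}\!|^2=\tau$, as expected for a pure-loss channel:

```python
import numpy as np
from scipy.linalg import expm

def destroy(dim):
    """Truncated annihilation operator in the Fock basis."""
    return np.diag(np.sqrt(np.arange(1.0, dim)), k=1)

dim = 2                        # at most one photon per mode
d = 50.0                       # illustrative fibre distance in km
tau = 10 ** (-0.02 * d)        # 0.2 dB/km loss

# Two-mode operators a (x) I and I (x) b; the generator is real, so
# a.T plays the role of a^dagger.
a = np.kron(destroy(dim), np.eye(dim))
b = np.kron(np.eye(dim), destroy(dim))
B = expm(np.arccos(np.sqrt(tau)) * (a.T @ b - a @ b.T))

one_zero = np.zeros(dim * dim)
one_zero[1 * dim + 0] = 1.0    # |n_a=1, n_b=0>
print(np.isclose(abs(one_zero @ B @ one_zero) ** 2, tau))  # -> True
```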
In this protocol, we assume that Eve has full access to the channel between Alice and Charlie, and between Charlie and Bob, including Charlie's measurements. Eve mixes vacuum with the incoming modes, causing Alice and Bob to lose photons. Thus, we can express the state between Alice and Charlie and between Charlie and Bob after Eve's attack as \begin{subequations} \begin{equation} \label{eq:alicesstate} \rho_{\mathrm{A_{1}C_{A}}}\!=\!\mathrm{Tr_{3}}\!\big[\big\{\mathbb{I}_{m}{\otimes}B(\tau_\mathrm{{A}})\big\}\big\{\rho_{\mathrm{A_{1}A_{2}}}{\otimes}\!\ketbra{0}{0}\!\big\}\big\{\mathbb{I}_{m}{\otimes}B(\tau_{\mathrm{A}})\big\}^\dagger\big], \end{equation} \begin{equation} \label{eq:bobsstate} \rho_{\mathrm{C_{B}B_{1}}}\!=\!\mathrm{Tr_{1}}\!\big[\big\{B(\tau_{\mathrm{B}}){\otimes}\mathbb{I}_{m}\big\}\big\{\!\ketbra{0}{0}\!{\otimes}\rho_{\mathrm{B_{2}B_{1}}}\big\}\big\{B(\tau_{\mathrm{B}}){\otimes}\mathbb{I}_{m}\big\}^\dagger\big], \end{equation} \end{subequations} where \text{$\text{Tr}_i[\rho]$} stands for tracing out the $i$-th mode of the state \text{$\rho$}.
After Charlie's measurement and tracing out his modes, the subnormalised state between Alice and Bob becomes \begin{equation} \label{eq:rhoAB}
\Tilde{\rho}_{\mathrm{AB}|^{j}_c}\!=\!\mathrm{Tr_{23}}\!\big[\big(\mathbb{I}_{m}{\otimes}\Pi_{c}^j{\otimes} \mathbb{I}_{m}\big)\!\big(\rho_{\mathrm{A_{1}C_{A}}}\!{\otimes}\rho_{\mathrm{C_{B}B_{1}}}\big)\!\big(\mathbb{I}_{m}{\otimes}\Pi_{c}^j{\otimes}\mathbb{I}_{m}\big)^\dagger\big]. \end{equation}
We can calculate Charlie's probability of obtaining outcomes \text{$(c,j)$} from the following expression \begin{equation}
P_{^{j}_c}=\mathrm{Tr}\big[\Tilde{\rho}_{\mathrm{AB}|^{j}_c}\big]. \end{equation}
Normalising Alice and Bob's joint state by Charlie's probability of measuring \text{$c$} photons for his measurement \text{$j$} gives us the final conditional state between them as \begin{equation}
\rho_{\mathrm{AB}|^{j}_c}=\frac{\Tilde{\rho}_{\mathrm{AB}|^{j}_c}}{P^{j}_{c}}. \label{eq:rhoABnorm} \end{equation}
However, for the key states that Alice and Bob send, the probability of Charlie measuring \text{$c$} photons, Alice and Bob's conditional mutual information and Eve's conditional information do not change for each \text{$j$} ranging from \text{$0$} to \text{$c$}. As such, there is no need to calculate Alice and Bob's conditional joint state for each value of \text{$j$}. Therefore, we omit \text{$j$} from the following equations and set it to zero.
We then calculate Charlie's total probability of measuring \text{$c$} photons from \begin{equation}
P_{c}=\sum_{j=0}^c\mathrm{Tr}\big[\Tilde{\rho}_{\mathrm{AB}|^{j}_c}\big]=(c+1)\mathrm{Tr}\big[\Tilde{\rho}_{\mathrm{AB}|^{j=0}_c}\big], \end{equation} since there are \text{$c+1$} POVM outcomes with a total photon number \text{$c$}.
In order to calculate Alice and Bob's mutual information, we first generate Alice and Bob's probability table as follows \begin{equation}
P(n_{a},n_{b}|c)=\bra{n_{a},n_{b}}\!\rho_{\mathrm{AB}|_c}\!\ket{n_{a},n_{b}}, \label{eq:probABtable} \end{equation} where each term in Alice and Bob's mutual information is given by the conditional Shannon's entropy as expressed below \begin{subequations} \label{eq:alicebobiab} \begin{equation}
H(A|c)=-\sum_{n_{a}\!=0}^{n_{\text{max}}}P(n_{a}|c)\log_{2}P(n_{a}|c), \label{eq:subeqsalice} \end{equation} \begin{equation}
H(B|c)=-\sum_{n_{b}\!=0}^{n_{\text{max}}}P(n_{b}|c)\log_{2}P(n_{b}|c), \label{eq:subeqsbob} \end{equation} \begin{equation}
H(AB|c)=-\sum_{n_{a}\!=0}^{n_{\text{max}}}\sum_{n_{b}\!=0}^{n_{\text{max}}}P(n_{a},n_{b}|c)\log_{2}P(n_{a},n_{b}|c). \label{eq:subeeqsabtogether} \end{equation} \end{subequations}
Using the equations above, we evaluate Alice and Bob's mutual information conditioned on Charlie's measurement outcome from \text{$I_{AB|c}=H(A|c)+H(B|c)-H(AB|c)$}.
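The entropy bookkeeping of Eqs.~\eqref{eq:alicebobiab} can be sketched directly from a joint probability table $P(n_a,n_b|c)$. The table below is an illustrative, perfectly anti-correlated $c=2$ example of our own construction (outcomes $(0,2)$, $(1,1)$, $(2,0)$), not the protocol's actual statistics:

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits, skipping zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Illustrative joint table P(n_a, n_b | c = 2): n_b is fixed by n_a.
P = np.array([[0.00, 0.00, 0.25],
              [0.00, 0.50, 0.00],
              [0.25, 0.00, 0.00]])

H_A = shannon(P.sum(axis=1))   # marginal over n_b
H_B = shannon(P.sum(axis=0))   # marginal over n_a
H_AB = shannon(P.ravel())
print(H_A + H_B - H_AB)        # -> 1.5 bits: I_{AB|c} = H(A|c) for full correlation
```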
Eve's information is calculated from Alice and Bob's conditional state after Bob's measurement outcome on this joint state using \begin{equation}
I_{E|c}=S(\rho_{\mathrm{AB}|c})-\sum_{b=0}^{n_{\text{max}}}P_{b}S(\rho_{\mathrm{A}|cb}), \label{eq:eveinfo} \end{equation}
where Bob's POVM is shown in Eq.~\eqref{eq:pnrdpovm} in Sec.~\ref{sec:protocol}. \text{$b$} represents the number of photons that Bob measures while \text{$P_{b}$} corresponds to Bob's probability of measuring \text{$b$} photons. The subnormalised state \text{$\Tilde{\rho}_{\mathrm{A}|cb}$} is obtained from \begin{equation}
\Tilde{\rho}_{\text{A}|cb}=\mathrm{Tr}_{2}[(\mathbb{I}_{m}\otimes\Pi_{b})\rho_{\text{AB}|_c}(\mathbb{I}_{m}\otimes\Pi_{b})^\dagger], \label{eq:evestate} \end{equation} where Bob's probability of measuring \text{$b$} photons is given by \begin{equation}
P_{b}=\mathrm{Tr}[\Tilde{\rho}_{\mathrm{A}|cb}]. \end{equation}
Alice's subnormalised state conditioned on Bob's and Charlie's measurement outcomes, \text{$\Tilde{\rho}_{\mathrm{A}|cb}$} is then normalised by Bob's measurement probability by \begin{equation}
\rho_{{\mathrm{A}|cb}}=\frac{\Tilde{\rho}_{\mathrm{A}|cb}}{P_{b}}. \end{equation}
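The structure of Eq.~\eqref{eq:eveinfo} can be sketched with a simple sanity check (our own illustration, with Bob's POVM taken as Fock-basis projectors and a maximally entangled qubit pair in place of the protocol's actual conditional state): for a pure entangled state, $S(\rho_{\mathrm{AB}})=0$ and every conditional state of Alice is pure, so Eve learns nothing:

```python
import numpy as np

def von_neumann(rho):
    """Von Neumann entropy in bits from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
rho_AB = np.outer(psi, psi)

# I_E = S(rho_AB) - sum_b P_b S(rho_A|b) with Bob projecting onto |n_b>.
I_E = von_neumann(rho_AB)
for nb in range(2):
    proj = np.outer(np.eye(2)[nb], np.eye(2)[nb])
    Pi_b = np.kron(np.eye(2), proj)
    sub = Pi_b @ rho_AB @ Pi_b
    P_b = np.trace(sub)
    # Normalise, then trace out Bob's mode (axes 1 and 3 of the reshaped state).
    rho_A = (sub / P_b).reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    I_E -= P_b * von_neumann(rho_A)
print(I_E)  # -> 0.0
```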
The asymptotic key rate of this protocol combines all the possible outcomes of Charlie's POVM, since Alice and Bob each send states with up to \text{$n_\text{max}$} photons and Charlie can measure anywhere from \text{$0$} to \text{$2n_\text{max}$} photons. However, we discard events where Eve's conditional information is greater than Alice and Bob's conditional mutual information. For example, when a zero-photon event occurs, Eve gains more information than Alice and Bob, since all the photons were lost to her. As such, we exclude the case \text{$c=0$}. Similarly, when Charlie measures \text{$c=2n_\text{max}$} photons, the key rate conditioned on this measurement outcome is zero even though Eve's conditional information is zero.
Therefore, the resulting asymptotic key rate can be expressed as \begin{equation}
K=\sum_{c=0}^{2n_\text{max}}P_{c}\text{max}\big[0,I_{AB|c}-I_{E|c}\big]. \label{eq:keyrate} \end{equation}
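The combination in Eq.~\eqref{eq:keyrate} can be sketched as follows; all numbers below are illustrative placeholders of our own choosing, not simulation output:

```python
import numpy as np

# K = sum_c P_c * max(0, I_{AB|c} - I_{E|c}), discarding outcomes where
# Eve knows more than Alice and Bob share.
P_c    = np.array([0.40, 0.45, 0.15])   # Charlie measures c = 0, 1, 2
I_AB_c = np.array([0.00, 0.80, 1.00])   # bits
I_E_c  = np.array([1.00, 0.20, 1.00])   # bits; c = 0 leaks everything to Eve

K = np.sum(P_c * np.maximum(0.0, I_AB_c - I_E_c))
print(K)  # ~0.27: only the c = 1 outcome contributes
```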
\begin{figure*}
\caption{(a) Simulation results of our repeater protocol for a pure-loss channel with a loss of $0.2$dB/km. Solid orange and red dashed lines show our protocol using the states shown in Eq.~\eqref{eq:eprstate} with $n_\text{max}=7$ photons with optimised squeezing coefficients and a squeezing coefficient of \text{$\gamma=0.26$}, respectively. Blue dashed line shows the results of the optimised states with $n_\text{max}=1$ photon given in Eq.~\eqref{eq:statealice} and using Charlie's POVM with an outcome of $1$ and $2$ photons. The solid blue line shows our protocol using the optimised states with $n_\text{max}=7$ photons given in Eq.~\eqref{eq:statealice}. Black solid lines show the single-repeater and repeaterless bounds. Solid grey line represents the CV-MDI protocol~\cite{pirandola2013cvmdi,pirandola2015high} with a variance of 1000 and relay positioned at 0.01m away from Alice, while the solid green line shows the TF-QKD protocol with no phase post-selection (NPP-TF-QKD) using optimised coherent states and infinite decoy states. (b) The comparison of the reverse coherent information of the optimised states with $n_\text{max}=7$ and $n_\text{max}=1$ photons in the form of Eq.~\eqref{eq:statealice} in point-to-point communications between Alice and Bob and with a single repeater. The faint blue and orange lines represent the RCI of the single-repeater and point-to-point communications of the optimised states with $n_\text{max}=7$ photons, respectively, while the blue and red dashed lines show the RCI of the single-repeater and point-to-point communications of the optimised states with $n_\text{max}=1$ photon. The black solid line shows the reverse coherent information of an infinitely squeezed TMSV state, denoted as PLOB.}
\label{fig:repeater_plots}
\end{figure*}
\subsection{\label{sec:highdim}The Results of the High-dimensional States} Our simulation results are shown in Fig.~\ref{fig:repeater_plots}(a) for the pure-loss channel with $0.2$dB loss per km. We compare our results with the existing MDI protocols such as the CV-MDI protocol from Pirandola et al.~\cite{pirandola2013cvmdi,pirandola2015high} and one of the best performing TF-QKD protocols known as TF-QKD without phase post-selection (NPP-TF-QKD) from Cui et al.~\cite{cui2019twin} and Lu et al.~\cite{lu2019improving}.
We first show the case where Alice and Bob send the states shown in Eq.~\eqref{eq:eprstate} with a squeezing coefficient of \text{$\gamma=0.26$} for each distance with \text{$n_{\text{max}}=7$} photons. The squeezing level of \text{$\gamma=0.26$} was determined based on the shortest distance at which the protocol exceeds the PLOB bound (refer to Sec.~\ref{sec:appcoeffs} Table~\ref{tab:eprtable} for the details). With these states, the PLOB bound and the CV-MDI protocol are surpassed at \text{$144$} km and \text{$114$} km, respectively, while the protocol performs worse than the TF-QKD protocol. We also demonstrate the key rates of the same states where the values of \text{$\gamma$} are optimised to give the maximum secret key rate at the corresponding distance. For distances greater than \text{$50$} km, there is not much difference compared to the states with \text{$\gamma=0.26$}, and the PLOB bound is still surpassed at the same distance as in the case of \text{$\gamma=0.26$}. However, the key rates are now higher at short distances below \text{$50$} km. This indicates that in the short-distance regime, the contribution of the higher-order photons to the key rate is significant, while at larger distances, the main contribution comes from the first few photons of the state, as the majority of the photons are lost to the environment at such distances. This can also be seen from the optimal squeezing level given in Table~\ref{tab:eprtable}, which is higher for short distances and lower for larger distances.
\begin{figure*}
\caption{(a) The optimised coefficients of the states shown in Eq.~\eqref{eq:statealice} with $n_\text{max}=7$ photons, where the coefficients are explicitly shown in Table~\ref{tab:cvtable}. (b) The optimised coefficients of the states shown in Eq.~\eqref{eq:statealice} with $n_\text{max}=1$ photon (refer to Table~\ref{tab:dvtable} for the optimised coefficients for each distance).}
\label{fig:opt_coefficients_figure}
\end{figure*}
When Alice and Bob send the optimised states shown in Eq.~(\ref{eq:statealice}), these states outperform the results of the states with optimised \text{$\gamma$} by surpassing the repeaterless bound and the CV-MDI protocol at \text{$108$} km and \text{$75$} km, respectively. These states also do considerably better than the TF-QKD protocol, as the TF-QKD protocol exceeds the PLOB bound only at \text{$130$} km and its key rates are lower than our protocol's at each distance. It is important to note that this result can also be achieved by using the optimised states with \text{$n_\mathrm{max}=1$} photon in the form of \text{$\sqrt{a_{0}}\ket{00}+\sqrt{a_{1}}\ket{11}$} as shown in Fig.~\ref{fig:repeater_plots}(a), since both states reach the PLOB bound at the same distance and the key rates converge beyond \text{$10$} km. In Fig.~\ref{fig:repeater_plots}(a), both the high-dimensional and single-photon states have the same gradient, scaling like the single-repeater bound with $O(\sqrt{\tau})$.
The probability of receiving $n$ photons in this case is given by $(\sqrt{\tau})^n$. Therefore, the main scaling of the key rates comes from the single-photon level, while the remaining photons help the key rate incrementally. As the loss gets higher, the probability of receiving higher photon numbers drops. Therefore, beyond $10$ km, we are only interested in \text{$1$} or \text{$2$} photons. This is further emphasised in Fig.~\ref{fig:opt_coefficients_figure}(a), where we show the probability of sending each Fock-number state of the optimised states given in Eq.~\eqref{eq:statealice} for each transmission distance. At short distances, the high-dimensional states have contributions from each photon number. It is important to note that at $0$ km, the probability of sending each Fock-number state is not equal due to the key rate being equal to zero when Charlie receives $0$ or $14$ photons in total. Therefore, the coefficients of the Fock states $\ket{0}$ and $\ket{7}$ are minimised accordingly. As the distance increases, the high-dimensional states reduce down to the single-photon level as the coefficients of the Fock states above one photon approach zero. The probabilities of sending zero and one photon, denoted as \text{$a_0$} and \text{$a_1$}, of these high-dimensional states shown in Fig.~\ref{fig:opt_coefficients_figure}(a) converge to the coefficients of the optimised states with \text{$n_\mathrm{max}=1$} photon shown in Fig.~\ref{fig:opt_coefficients_figure}(b) beyond approximately $50$ km. However, the main advantage of using the optimised states with \text{$n_\mathrm{max}=7$} is the ability to obtain higher key rates at shorter distances. This is shown in Fig.~\ref{fig:different_photons}, as the secret key rate increases when the number of encoded photons changes from $1$ to $7$ photons.
As the key rates of the optimised states with \text{$n_\mathrm{max}=7$} converge with the results of the states with optimised \text{$\gamma$} below $10$ km and with the optimised states with \text{$n_\mathrm{max}=1$} photon, one can use the combination of the states with optimised \text{$\gamma$} and optimised states with \text{$n_\mathrm{max}=1$} photon beyond this distance to achieve the same results of the states given in Eq.~(\ref{eq:statealice}). \begin{figure}
\caption{ The simulation results of the secret key rate when the number of encoded photons varies from $1$ to $7$ photons at $5$ km. The states are in the form of Eq.~\eqref{eq:statealice} where the coefficients of each state are optimised.}
\label{fig:different_photons}
\end{figure}
As mentioned previously, the maximum key rate achievable by QKD for the point-to-point and single-repeater communication is bounded by the PLOB and single-repeater bounds respectively~\cite{pirandola2017fundamental,pirandola2019end}. These bounds are determined by the maximum amount of entanglement that a channel can sustain, also known as the entanglement flux, which coincides with reverse coherent information (RCI) of a maximally entangled TMSV state for the pure-loss channel~\cite{pirandola2017fundamental,pirandola2009direct,garcia2009reverse}. RCI is used to lower bound the distillable entanglement of a given channel~\cite{garcia2009reverse} and is a measure of the transmission of quantum information. While the key rates above demonstrate that our protocol surpasses the PLOB bound and acts as a repeater, the secret key rate is a measure of the transmission of classical information. The key rates are also bounded by the amount of entanglement that Alice and Bob can distill. Therefore, we also compute the RCI of our quantum states to verify the distillable entanglement between Alice and Bob after Charlie's measurement using
\begin{equation}
\text{RCI}=\sum_{c=0}^{2n_\text{max}}P_{c}\text{max}\bigl[0,S(\rho_{\text{A}|c})-S(\rho_{\text{AB}|c})\bigr], \label{eq:rci} \end{equation}
where \text{$S(\rho_{\text{AB}|c})$} and \text{$S(\rho_{\text{A}|c})$} are the von Neumann entropies of the joint state between Alice and Bob \text{$\rho_{\mathrm{AB}|c}$} and Alice's state \text{$\text{Tr}_2[\rho_{\text{AB}|c}]$} respectively. \begin{figure*}\label{fig:single_photon_experiment}
\end{figure*}
In Fig.~\ref{fig:repeater_plots}(b), we show the RCI of the optimised states with \text{$n_\text{max}=7$} and \text{$n_\text{max}=1$} when Alice and Bob perform point-to-point and single-repeater communications. We compare these results with the PLOB bound as it coincides with the RCI of a maximally entangled TMSV state in the pure-loss channel. Note that when Alice and Bob communicate directly using the optimised states with \text{$n_\text{max}=7$}, they cannot saturate the PLOB bound due to sending states with a limited number of photons. However, they can reach the PLOB bound if they send infinitely squeezed TMSV states with an infinite number of photons~\cite{pirandola2020advances}. In Fig.~\ref{fig:repeater_plots}(b), when Alice and Bob perform point-to-point communication, they can distill more entanglement at short distances. However, with the use of a repeater, they are able to distill more entanglement beyond $47$ km and surpass the RCI of an infinitely squeezed TMSV state at $108$ km. Note that they also surpass the PLOB bound at this distance when we calculate their secret key rate as shown in Fig.~\ref{fig:repeater_plots}(a) and the key rates coincide with the reverse coherent information of Alice and Bob's conditional joint state on Charlie's measurement outcome. This indicates that after Charlie's measurement, Alice and Bob's PNRD measurement is optimal as Alice and Bob achieve the same key rates as the distillable entanglement of their joint state.
\subsection{\label{sec:realimplementation}Realistic Implementation of the MDI Protocol with Single-Photon States} The experimental realisation of the higher dimensional optimised states and Charlie's measurement is quite challenging with state-of-the-art technology. However, we present an experimentally feasible implementation of our protocol, shown in Fig.~\ref{fig:single_photon_experiment}, using single-photon states, which can be performed with existing technology. Fig.~\ref{fig:repeater_plots}(a) demonstrates that beyond $10$ km, the single-photon states achieve the same key rates as the higher dimensional states, and the high-dimensional states reduce down to the single-photon level as demonstrated in Fig.~\ref{fig:opt_coefficients_figure}(a) and Fig.~\ref{fig:opt_coefficients_figure}(b). \begin{figure}\label{fig:single_photon_plots}
\end{figure}
When Alice and Bob send single-photon states, Charlie can measure from $0$ to $2$ photons. However, as mentioned previously in Sec.~\ref{sec:calkeyrate}, when Charlie measures $2$ photons, the conditional secret key rate is zero; as such, the contribution to the key rate comes only from the single-photon detection events. This eliminates the need for Charlie to distinguish between the $c=2$ photon outcomes, i.e., \text{$\ket{\phi_{2}^0}$}, \text{$\ket{\phi_{2}^1}$} and \text{$\ket{\phi_{2}^2}$}, and requires him to only distinguish between the single-photon outcomes. Therefore, we can simplify our protocol to Fig.~\ref{fig:single_photon_experiment}, where Charlie interferes the single photons coming from Alice and Bob at a 50:50 beamsplitter and uses two photon-number resolving detectors up to the two-photon level. After Charlie's measurement, Alice and Bob can estimate their joint state to bound Eve's information using the statistics of their matched and unmatched data in the $X$, $Y$ and $Z$ bases as mentioned in Sec.~\ref{sec:calkeyrate}.
Additionally, we consider the detrimental effects of detector inefficiency and dark counts on the key rates. The single-photon states are optimised for a detector with an efficiency of $85\%$ and a dark count rate of $5\times10^{-8}$, where the coefficients of the zero and single photons are shown in Table~\ref{tab:singlephotontable} and in Fig.~\ref{fig:opt_coefficients_figure}(b). In a lossless channel, the probability of sending a single photon initially is one half. However, as the channel becomes more lossy, it is likely that the single photon will be lost during transmission. When Charlie receives no photons, this corresponds to a large bit-error rate, reducing the key rates. This is compensated by reducing the probability of sending single photons to decrease the bit-error rates and increase the key rates~\cite{yin2019measurement}.
With realistic dark count rates and detector efficiencies, our protocol surpasses the PLOB bound at $116$ km while the NPP-TF-QKD surpasses it at $137$ km, as shown in Fig.~\ref{fig:single_photon_plots}. The NPP-TF-QKD protocol drops to zero beyond $518$ km whereas our protocol drops to zero beyond $542$ km, showing a $24$ km improvement in the transmission distance. These improvements are a result of several factors. Even though both protocols use optimised states, our protocol has more freedom in optimising the coefficients of the single-photon state, while the TF-QKD protocols need to ensure that the intensities of the coherent states are still weak enough while optimising the key rates. This is also one of the key differences between our protocol and the Sending-or-Not-Sending TF-QKD (SNS-TF-QKD) protocol~\cite{wang2018twin}, where Alice and Bob send weak coherent states and no states with a probability of \text{$\epsilon$} and \text{$1-\epsilon$}, respectively. However, the probability of the single-photon detection is still determined by the intensity of the weak coherent states in the SNS-TF-QKD protocol, whereas in this protocol, Alice and Bob send single-photon states with a probability of \text{$\epsilon$}, which determines the probability of detection at Charlie's detectors. Our protocol also has the ability to distinguish two-photon events occurring at a single one of Charlie's detectors. For example, if Charlie receives no photons on one detector and two photons on the other, these events can be disregarded and do not contribute to bit-error rates. However, in TF-QKD protocols with single-photon detectors, this event would register as one click, causing an increase in the bit-error rate. Therefore, the use of PNRDs in Charlie's station improves the bit-error rates. Furthermore, our protocol can estimate Eve's information more accurately due to the use of the probabilities of the matched and unmatched bases.
These are the main factors that distinguish our protocol from the existing MDI and TF-QKD protocols.
\section{\label{sec:level5}Discussion} In this paper, we introduced a new MDI protocol using higher dimensional states that surpasses the repeaterless bound without the need for quantum memories, as it scales like the single-repeater bound. However, for large distances, the states required in this protocol reduce down to the single-photon level due to the losses in the channel. Based on this, we proposed an experimentally feasible implementation of this protocol using just single photons and photon-number resolving detectors, which performs better than existing protocols such as the NPP-TF-QKD protocol~\cite{cui2019twin,lu2019improving}.
Furthermore, we investigated whether the single-repeater bound can be saturated with a simple protocol using only single copies of the states sent by Alice and Bob and without collective measurements performed by Charlie. Our results show that, unlike the repeaterless bound, this is probably not possible with single copies of the states; it likely requires many copies of the states sent by Alice and Bob together with collective measurements, as previously shown by Garc\'{i}a-Patr\'{o}n et al.~\cite{garcia2009reverse} and in a new protocol proposed by Winnel et al.~\cite{winnel2022achieving}.
The results presented in this work refer to asymptotic key rates, and the security of this protocol with finite-size effects needs to be considered in the future. In this protocol, there are no misalignment errors in the $Z$ basis because single photons are sent. However, misalignment errors are likely to affect the statistics of the check states in the $X$ and $Y$ bases, which can be investigated in future work. The feasibility of extending this protocol to a network of multiple users can also be studied.
\section{Methods} \subsection{\label{sec:appDVtomogprahy} Estimating Eve's Information Using Quantum Tomography with Single-Photon States} In this section, we show how Alice and Bob can estimate their joint state conditioned on Charlie's measurement outcome to bound Eve's information.
Alice and Bob measure their joint state in the $X$, $Y$ and $Z$ bases in the entanglement-based scheme as introduced in Sec.~\ref{sec:checkstatessection} to construct the statistics of their matched and unmatched results. From the probabilities measured in these bases, Alice and Bob can estimate their joint state. Writing their joint state as
\begin{equation}
\hat{\rho}_{\text{AB}|c=1}=\frac{1}{4}\big(\mathbb{I}_{4}+\vec{s}_a\!\cdot\vec{\sigma}_a+\vec{s}_b\cdot\vec{\sigma}_b+\sum_{j,k}{r_{jk}}(\sigma_{j}{\otimes}\sigma_{k})\big), \label{eq:fanorep} \end{equation} where \text{$\vec{s}_a\!\cdot\vec{\sigma}_a$} and \text{$\vec{s}_b\!\cdot\vec{\sigma}_b$} describe Alice and Bob's reduced states calculated from their local measurements while \text{$r_{jk}(\sigma_{j}{\otimes}\sigma_{k})$} gives the correlations between Alice and Bob determined from their measurements performed in the bases \text{$j=\{X,Y,Z\}$} and \text{$k=\{X,Y,Z\}$} where \text{$r_{jk}$} is the correlation coefficient and \text{$\sigma_{j}$} and \text{$\sigma_{k}$} are the standard Pauli matrices \text{$\sigma_X$}, \text{$\sigma_Y$} and \text{$\sigma_Z$}. The terms \text{$\vec{s}_a\cdot\vec{\sigma}_a$} and \text{$\vec{s}_b\cdot\vec{\sigma}_b$} can be expressed as \begin{subequations} \label{eq:reducedstates} \begin{equation} \label{eq:alicesqubit} \vec{s}_a\!\cdot\vec{\sigma}_a=a_X(\sigma_X{\otimes}\mathbb{I}_{2})+a_Y(\sigma_Y{\otimes}\mathbb{I}_{2})+a_Z(\sigma_Z{\otimes}\mathbb{I}_{2}), \end{equation} \begin{equation} \label{eq:bobsqubit} \vec{s}_b\!\cdot\vec{\sigma}_b=b_X(\mathbb{I}_{2}{\otimes}\sigma_X)+b_Y(\mathbb{I}_{2}{\otimes}\sigma_Y)+b_Z(\mathbb{I}_{2}{\otimes}\sigma_Z). \end{equation} \end{subequations} where $\{a_X,a_Y,a_Z\}$ and $\{b_X,b_Y,b_Z\}$ represent the coefficients given in Eq.~\eqref{eq:abcoeffs} when Alice and Bob measure in the bases $j=\{X,Y,Z\}$.
When Alice and Bob measure their own qubits in any basis \text{$j=\{X,Y,Z\}$}, the corresponding measurement projectors can be expressed as \begin{equation}
\Pi_{\pm j}=\ketbra{\pm j}, \end{equation} which can be calculated using the eigenvectors of the $X$, $Y$ and $Z$ bases as defined previously in Eq.~\eqref{eq:allbases}. The probabilities of their measurement outcomes can then be calculated from \begin{subequations} \begin{equation}
P_{a}(\pm j)=\mathrm{Tr}\big[\big(\Pi_{\pm j}{\otimes} \mathbb{I}_{2}\big)\rho_{\text{AB}|c=1}\big(\Pi_{\pm j}{\otimes} \mathbb{I}_{2}\big)^\dagger\big], \label{eq:palice} \end{equation} \begin{equation}
P_{b}(\pm j)=\mathrm{Tr}\big[\big(\mathbb{I}_{2}{\otimes}\Pi_{\pm j}\big)\rho_{\text{AB}|c=1}\big(\mathbb{I}_{2}{\otimes}\Pi_{\pm j}\big)^\dagger\big], \label{eq:pbob} \end{equation} \end{subequations}
where \text{$\rho_{\text{AB}|c=1}$} is determined from Eq.~\eqref{eq:rhoABnorm}.
The coefficients in Eqs.~\eqref{eq:alicesqubit} and~\eqref{eq:bobsqubit} can be computed from Eqs.~\eqref{eq:palice} and~\eqref{eq:pbob}, where \begin{subequations} \label{eq:abcoeffs} \begin{equation} a_j=P_{a}(+j)-P_{a}(-j), \label{eq:aj} \end{equation} \begin{equation} b_j=P_{b}(+j)-P_{b}(-j). \label{eq:bj} \end{equation} \end{subequations}
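As a sanity check, the Fano-form reconstruction of Eqs.~\eqref{eq:fanorep}--\eqref{eq:bj} can be sketched numerically: the coefficients $a_j$, $b_j$ and $r_{jk}$ are simply the expectation values of the corresponding Pauli operators, and summing the weighted Pauli basis recovers the state exactly. The test state below is a hypothetical example, not one produced by the protocol.

```python
import numpy as np

# Pauli matrices and the single-qubit identity
I2 = np.eye(2, dtype=complex)
paulis = {
    'X': np.array([[0, 1], [1, 0]], dtype=complex),
    'Y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'Z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def reconstruct(rho):
    """Rebuild a two-qubit state from its Pauli expectation values (Fano form):
    a_j = <sigma_j x I>, b_j = <I x sigma_j>, r_jk = <sigma_j x sigma_k>."""
    out = np.eye(4, dtype=complex)  # identity term, Tr(rho) = 1
    for P in paulis.values():
        out += np.trace(rho @ np.kron(P, I2)).real * np.kron(P, I2)  # a_j terms
        out += np.trace(rho @ np.kron(I2, P)).real * np.kron(I2, P)  # b_j terms
    for Pj in paulis.values():
        for Pk in paulis.values():
            r_jk = np.trace(rho @ np.kron(Pj, Pk)).real              # correlations
            out += r_jk * np.kron(Pj, Pk)
    return out / 4

# hypothetical test state: the Bell state (|01> + |10>)/sqrt(2)
psi = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
assert np.allclose(reconstruct(rho), rho)
```

Since the fifteen Pauli expectation values together with normalisation determine a two-qubit state completely, the reconstruction is exact for any density matrix, which is why the tomography of this section suffices to bound Eve's information.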
In order to determine the correlation coefficients \text{$r_{jk}$}, Alice and Bob construct a joint probability table of their measurements in all the bases where these probabilities are calculated from \begin{equation}
P(a\!=\!\pm j,b\!=\!\pm k){=}\mathrm{Tr}\big[\!\big(\Pi_{\pm j}{\otimes}\Pi_{\pm k}\big)\rho_{\text{AB}|c=1}\big(\Pi_{\pm j}{\otimes}\Pi_{\pm k}\big)\!^\dagger\big]. \label{eq:probrjk} \end{equation} Using Eq.~\eqref{eq:probrjk}, the correlation coefficients become \begin{multline}
r_{jk}=P(a\!=\!+j,b\!=\!+k)+P(a\!=\!-j,b\!=\!-k)\\-P(a\!=\!+j,b\!=\!-k)-P(a\!=\!-j,b\!=\!+k).
\label{eq:correlations} \end{multline}
After Alice and Bob reconstruct their estimated joint matrix \text{$\hat{\rho}_{\text{AB}|c=1}$}, they can estimate Eve's information using the Holevo bound as given in Eq.~\eqref{eq:eveinfo}.
\subsection{\label{sec:appkeyclassical}Classical Protocol Used to Optimise the Coefficients of the High Dimensional States} This section describes how the states that Alice and Bob prepare are chosen. The coefficients of these states are determined based on the following classical protocol. We assume Eve taps off the signal sent by Alice and Bob, and measures the number of photons denoted as \text{$n_{e_a}$} and \text{$n_{e_b}$}. Then we maximise the average difference in mutual information \begin{equation}
\max\limits_{\{P(n_a),P(n_b)\}}\bigg[\sum_{c=0}^{2n_\text{max}}P_{c}(n_c)\big(I_{AB|c}-I_{AE|c}\big)\bigg], \label{eq:diffmutualinfo} \end{equation}
where \text{$I_{AB|c}$} and \text{$I_{AE|c}$} are the mutual information between Alice and Bob and between Alice and Eve, respectively, conditioned on Charlie's measurement outcome. $P_{c}(n_c)$ represents the probability of Charlie measuring $n_c$ photons in total. Note that $P(n_a)$ and $P(n_b)$ are related to the optimised coefficients from Eq.~\eqref{eq:statealice}, as they are the probabilities of sending \text{$n$} photons for the corresponding Fock-number state \text{$\ket{n}$}, also expressed as $a_n$ and $b_n$ throughout this paper.
In the classical protocol, Charlie measures the number of photons coming from Alice and Bob individually with two separate PNRDs. In the Fock basis, both classical and quantum simulations yield the same probabilities for Charlie's measurement outcome. The probability of Charlie measuring \text{$n_{c_a}$} or \text{$n_{c_b}$} photons on Alice's and Bob's modes individually can be computed as \begin{equation} P_{c_a}(n_{c_{a}})=\sum_{n_{a}\!=0}^{n_{\text{max}}}\binom{n_{a}}{n_{c_{a}}}\tau_{\text{A}}^{n_{c_a}}(1-\tau_{\text{A}})^{n_{a}-n_{c_{a}}}P({n_{a})}, \label{eq:probsending} \end{equation} where \text{$n_\text{max}$} refers to the maximum number of photons Alice and Bob are sending individually. \text{$\tau_{\text{A}}$} is the probability of a photon arriving at Charlie from Alice or Bob as a function of the fibre distance, with \text{$\tau_{\text{A}}=10^{-0.02d}$}, and \text{$(1-\tau_{\text{A}})^{n_{a}-n_{c_{a}}}$} is the probability of losing \text{$n_{a}-n_{c_{a}}$} photons to Eve. The probability of the collective photon-number measurement performed by Charlie for a given number of photons $n_c$ can be calculated using Eq.~\eqref{eq:probsending} as shown below \begin{equation} P_{c}(n_{c})=\sum_{n_{c_{a}}\!=0}^{n_{c}}P_{c_{a}}(n_{c_{a}})P_{c_{b}}(n_{c}-n_{c_{a}}), \label{eq:probcharlie} \end{equation} where $n_{c}-n_{c_{a}}$ gives the number of photons measured on Bob's mode.
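The photon-number statistics of Eqs.~\eqref{eq:probsending} and~\eqref{eq:probcharlie} amount to a binomial loss channel followed by a convolution of the two surviving-photon distributions; a minimal sketch (the input distribution and distance below are hypothetical examples, not the optimised coefficients):

```python
from math import comb

def detected_dist(p_send, tau):
    """P(m photons survive) after binomial loss with transmissivity tau
    (the structure of Eq. probsending)."""
    n_max = len(p_send) - 1
    return [sum(comb(n, m) * tau**m * (1 - tau)**(n - m) * p_send[n]
                for n in range(m, n_max + 1))
            for m in range(n_max + 1)]

def charlie_dist(pa, pb, tau):
    """Convolve the surviving-photon distributions of the two arms
    (the structure of Eq. probcharlie)."""
    da, db = detected_dist(pa, tau), detected_dist(pb, tau)
    return [sum(da[m] * db[n - m]
                for m in range(max(0, n - len(db) + 1), min(n, len(da) - 1) + 1))
            for n in range(len(da) + len(db) - 1)]

# hypothetical single-photon inputs a_0 = b_0 = 0.7, a_1 = b_1 = 0.3 at d = 100 km
tau = 10 ** (-0.02 * 100)
pc = charlie_dist([0.7, 0.3], [0.7, 0.3], tau)
assert abs(sum(pc) - 1.0) < 1e-12  # a valid probability distribution
```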
Alice and Bob's mutual information conditioned on Charlie's measurement outcome is obtained from the probability table between Alice and Bob which is as follows \begin{equation}
P(n_{a},n_{b}|n_{c}){=}{\binom{\!n_{a}+n_{b}\!}{\!n_c\!}}\!\frac{\tau^{n_{c}}(1-\tau)^{n_{a}+n_{b}-n_c}P(n_a)P(n_b)}{P_{c}(n_{c})}, \label{eq:probtableAB} \end{equation}
where \text{$n_a+n_b$} is equal to the total number of photons in the system and \text{$\tau$} in the equation above corresponds to the transmission probability in one channel only. We evaluate Alice and Bob's mutual information conditioned on Charlie's measurement outcome from the same approach shown in Sec.~\ref{sec:calkeyrate} using \text{$I_{AB|c}=H(A|c)+H(B|c)-H(AB|c)$} and Eqs.~\eqref{eq:subeqsalice}, \eqref{eq:subeqsbob} and \eqref{eq:subeeqsabtogether}.
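The quantity $I_{AB|c}=H(A|c)+H(B|c)-H(AB|c)$ is the standard mutual information of a joint probability table; a minimal sketch (the two-outcome table below is a hypothetical example):

```python
from math import log2

def mutual_information(joint):
    """I(A;B) = H(A) + H(B) - H(A,B) from a joint probability table
    given as a list of rows."""
    pa = [sum(row) for row in joint]            # marginal of A
    pb = [sum(col) for col in zip(*joint)]      # marginal of B
    H = lambda ps: -sum(p * log2(p) for p in ps if p > 0)
    return H(pa) + H(pb) - H(sum(joint, []))    # flatten rows for H(A,B)

# perfectly correlated bits share exactly one bit of information
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - 1.0) < 1e-12
```

In the protocol, the rows and columns of the table are indexed by $n_a$ and $n_b$ with entries given by Eq.~\eqref{eq:probtableAB}.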
We quantify Eve's information conditioned on each photon-number measurement in a similar fashion to Alice and Bob's mutual information, using \text{$I_{AE|c}=H(A|c)+H(E_\text{A}E_\text{B}|c)-H(AE_\text{A}E_\text{B}|c)$}. Since Eve has access to both channels, between Alice and Charlie and between Charlie and Bob, we need to consider events where each party loses photons to Eve. We compute the probability table between Alice, Bob and the two modes of Eve conditioned on Charlie's outcome as follows \begin{multline}
P(n_a,n_b,n_{e_a},n_{e_b}|n_c)=
\frac{1}{P_c(n_{c})}\binom{n_{a}}{n_{e_{a}}}\binom{n_{b}}{n_{e_{b}}}\\\tau^{n_{a}+n_{b}-(n_{e_{a}}+n_{e_{b}})}(1-\tau)^{n_{e_{a}}+n_{e_{b}}}P(n_a)P(n_b),
\label{eq:probtableABEve} \end{multline} provided \text{$n_{a}+n_{b}-(n_{e_{a}}+n_{e_{b}})=n_{c}$}, where \text{$n_{e_a}$} and \text{$n_{e_b}$} are the photons lost to Eve by Alice and Bob respectively and \text{$n_{a}+n_{b}-(n_{e_{a}}+n_{e_{b}})$} corresponds to the total number of photons measured by Charlie. Therefore, using the probability table between Alice, Bob and Eve, we can calculate the entropies below to compute Eve's information \begin{subequations} \begin{multline}
H(E_\text{A}E_\text{B}|c)=-\sum_{n_{e_{a}}\!=0}^{n_{\text{max}}}\sum_{n_{e_{b}}\!=0}^{n_{\text{max}}}\\P(n_{e_a},n_{e_b}|c)\log_{2}P(n_{e_a},n_{e_b}|c), \label{eq:eqse1e2} \end{multline} \begin{multline}
H(AE_\text{A}E_\text{B}|c)=-\sum_{n_{a}\!=0}^{n_{\text{max}}}\sum_{n_{e_a}\!=0}^{n_{\text{max}}}\sum_{n_{e_b}\!=0}^{n_{\text{max}}}P(n_{a},n_{e_{a}},n_{e_{b}}|c)\\\log_{2}P(n_{a},n_{e_{a}},n_{e_{b}}|c). \label{eq:eqsae} \end{multline} \end{subequations}
\subsection{\label{sec:darknoisemodel}Modelling Dark Noise in the Entanglement Swapping Measurement} \begin{figure}\label{PNRD}
\end{figure} This section describes how to model the dark noise and detector efficiency at Charlie's photon detectors to achieve the results of Fig.~\ref{fig:single_photon_plots}. The effect of dark noise is modelled by interacting the incoming state with a thermal state at a beamsplitter, as in Fig.~\ref{PNRD}.
The efficiency of the single photon detection in this framework is the transmissivity of the beamsplitter~($\tau$), i.e., $\eta_d = \tau$. The density matrix of the state to be detected can be written as \begin{equation} \rho_{\text{out}} = B(\eta_d)(\rho_{\text{in}}{\otimes}\rho(\bar{n}))B(\eta_d)^{\dagger}, \end{equation} where $\rho(\bar{n})$ is the density matrix of the thermal state. The beamsplitter transformation is shown in Eq.~\eqref{eq:beamsplitter}. The density matrix of the thermal state is given by \begin{equation} \label{eq:thermalstdark} \rho(\bar{n}) = \sum_{n=0}^{\infty} \frac{\bar{n}^n}{(1+\bar{n})^{n+1}}\ketbra{n}, \end{equation} where $\bar{n}=\text{Tr}[\rho(\bar{n})a^{\dagger}a]$ is the mean photon number of the thermal state. Consequently, the dark count is given by $(1-\eta_d)\bar{n}$. For low dark counts, the summation in Eq.~\eqref{eq:thermalstdark} can be truncated accordingly.
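The truncation of Eq.~\eqref{eq:thermalstdark} is harmless in the low-dark-count regime, since the photon-number distribution of a thermal state is geometric; a minimal sketch (the mean photon number below is a hypothetical value):

```python
def thermal_dist(nbar, n_cut):
    """Photon-number distribution of a thermal state with mean photon number
    nbar, truncated at n_cut (the diagonal of Eq. thermalstdark)."""
    return [nbar**n / (1 + nbar)**(n + 1) for n in range(n_cut + 1)]

# hypothetical low mean photon number: the vacuum term dominates and
# truncating at a few photons loses essentially no probability
p = thermal_dist(1e-6, 3)
assert abs(sum(p) - 1.0) < 1e-12
assert p[0] > 0.999
```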
\subsection{\label{sec:appcoeffs} Coefficients of the Optimised States} In this section, we present some of the coefficients of the optimised states with \text{$n_\text{max}=7$} and \text{$n_\text{max}=1$} photons used in Fig.~\ref{fig:repeater_plots}(a) and (b) for each distance in Tables~\ref{tab:cvtable} and~\ref{tab:dvtable}, respectively. These coefficients represent the probability of sending the corresponding Fock-number state. We give the values of the optimised squeezing parameters of the states given in Eq.~\eqref{eq:eprstate} with \text{$n_\text{max}=7$} photons for each distance used in Fig.~\ref{fig:repeater_plots}(a) in Table~\ref{tab:eprtable}. In Table~\ref{tab:singlephotontable}, we present the coefficients of the optimised single-photon states shown in Fig.~\ref{fig:single_photon_plots}.
\begin{table*}[htp!] \caption{\label{tab:cvtable} The coefficients of the optimised states with \text{$n_\text{max}=7$} photons.} \begin{ruledtabular}
\begin{tabular}{P{0.1\linewidth}|c|c|c|c|c|c|c|c} \makecell{Distance \\ (km)} & \makecell{\text{$a_0$} \\ \text{$b_0$}} & \makecell{\text{$a_1$} \\ \text{$b_1$} } & \makecell{\text{$a_2$} \\ \text{$b_2$} } & \makecell{\text{$a_3$} \\ \text{$b_3$}} & \makecell{\text{$a_4$} \\ \text{$b_4$}} & \makecell{\text{$a_5$} \\ \text{$b_5$}} & \makecell{\text{$a_6$} \\ \text{$b_6$}} & \makecell{\text{$a_7$} \\ \text{$b_7$}}\\ \colrule $0$ & $0.0823$ & $0.1162$ & $0.1432$ & $0.1582$ & $0.1582$ & $0.1432$ & $0.1162$ & $0.0823$\\ $0.5$ & $0.1073$ & $0.1359$ & $0.1557$ & $0.1603$ & $0.1496$ & $0.1267$ & $0.0968$ & $0.0676$\\ $1$ & $0.1226$ & $0.1472$ & $0.1616$ & $0.1600$ & $0.1438$ & $0.1174$ & $0.0868$ & $0.0605$\\
$2.5$ & $0.1550$ & $0.1705$ & $0.1710$ & $0.1567$ & $0.1307$ & $0.0994$ & $0.0691$ & $0.0477$\\ $5$ & $0.1967$ & $0.2012$ & $0.1786$ & $0.1483$ & $0.1129$ & $0.0787$ & $0.0508$ & $0.0329$\\ $10$ & $0.4468$ & $0.3137$ & $0.1410$ & $0.0601$ & $0.0245$ & $9.4433\times10^{-3}$ & $3.3895\times10^{-3}$ & $1.1308\times10^{-3}$\\ $15$ & $0.6654$ & $0.2955$ & $0.0366$ & $2.4273\times10^{-3}$ & $1.1213\times10^{-4}$ & $3.9036\times10^{-6}$ & $9.8873\times10^{-8}$ & $0$\\ $20$ & $0.7230$ & $0.2608$ & $0.0160$ & $2.8606\times10^{-4}$ & $2.1895\times10^{-6}$ & $3.2584\times10^{-9}$ & $0$ & $0$ \\ $25$ & $0.7548$ & $0.2366$ & $8.5496\times10^{-3}$ & $4.9545\times10^{-5}$ & $6.2159\times10^{-8}$ & $0$ & $0$ & $0$ \\ $30$ & $0.7760$ & $0.2189$ & $5.0283\times10^{-3}$ & $1.0464\times10^{-5}$ & $0$ & $0$ & $0$ & $0$ \\ $50$ & $0.8176$ & $0.1811$ & $1.3468\times10^{-3}$ & $1.2642\times10^{-7}$ & $0$ & $0$ & $0$ & $0$ \\ $100$ & $0.8477$ & $0.1520$ & $3.3582\times10^{-4}$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ $200$ & $0.8571$ & $0.1427$ & $1.9467\times10^{-4}$ & $0$ & $0$ & $0$ & $0$ & $0$ \\ \end{tabular} \end{ruledtabular} \end{table*}
\begin{table}[htp!] \caption{\label{tab:dvtable} The coefficients of the optimised states with \text{$n_\text{max}=1$} photon.} \begin{ruledtabular}
\begin{tabular}{P{2.5cm}|P{2.7cm}|P{2.7cm}} \makecell{Distance \\ (km)} & \makecell{\text{$a_0$} \\ \text{$b_0$}} & \makecell{\text{$a_1$} \\ \text{$b_1$}}\\ \colrule $0$ & $0.5$ & $0.5$ \\ $0.5$ & $0.5308$ & $0.4692$ \\ $1$ & $0.5505$ & $0.4495$ \\ $2.5$ & $0.5918$ & $0.4082$ \\ $5$ & $0.6371$ & $0.3629$ \\ $10$ & $0.6935$ & $0.3065$ \\ $15$ & $0.7292$ & $0.2708$ \\ $20$ & $0.7542$ & $0.2458$ \\ $30$ & $0.7869$ & $0.2131$ \\ $50$ & $0.8205$ & $0.1795$ \\ $100$ & $0.8483$ & $0.1517$ \\ $200$ & $0.8575$ & $0.1425$ \\ \end{tabular} \end{ruledtabular} \end{table}
\begin{table}[htp!] \caption{\label{tab:eprtable} The optimised squeezing parameters ($\gamma$) of the states shown in Eq.~\eqref{eq:eprstate} with \text{$n_\text{max}=7$} photons. } \begin{ruledtabular}
\begin{tabular}{P{4cm}|P{4.2cm}} Distance (km) & Squeezing Parameter (\text{$\gamma$}) \\ \colrule $0$ & $0.84$ \\ $0.5$ & $0.83$ \\ $1$ & $0.83$ \\ $2.5$ & $0.82$ \\ $5$ & $0.81$ \\ $10$ & $0.71$ \\ $15$ & $0.52$ \\ $20$ & $0.44$ \\ $25$ & $0.40$ \\ $30$ & $0.37$ \\ $40$ & $0.33$ \\ $50$ & $0.30$ \\ $100$ & $0.26$ \\ $200$ & $0.25$ \\ \end{tabular} \end{ruledtabular} \end{table}
\begin{table}[h!] \caption{\label{tab:singlephotontable} The coefficients of the single-photon states when the detector dark count rate is \text{$5\times10^{-8}$} and with a detector efficiency of $0.85$. } \begin{ruledtabular}
\begin{tabular}{P{2.5cm}|P{2.7cm}|P{2.7cm}} \makecell{Distance \\ (km)} & \makecell{\text{$a_0$} \\ \text{$b_0$}} & \makecell{\text{$a_1$} \\ \text{$b_1$}}\\ \colrule $0.5$ & $0.6697$ & $0.3303$ \\ $1$ & $0.6751$ & $0.3249$ \\ $2.5$ & $0.6896$ & $0.3104$ \\ $5$ & $0.7100$ & $0.2900$ \\ $10$ & $0.7405$ & $0.2595$ \\ $15$ & $0.7624$ & $0.2376$ \\ $20$ & $0.7790$ & $0.2210$ \\ $30$ & $0.8020$ & $0.1980$ \\ $50$ & $0.8275$ & $0.1725$ \\ $100$ & $0.8499$ & $0.1501$ \\ $200$ & $0.8576$ & $0.1424$ \\ $400$ & $0.8588$ & $0.1412$ \\ $420$ & $0.8591$ & $0.1409$ \\ $440$ & $0.8598$ & $0.1402$ \\ $460$ & $0.8609$ & $0.1391$ \\ $480$ & $0.8630$ & $0.1370$ \\ $490$ & $0.8647$ & $0.1353$ \\ $500$ & $0.8669$ & $0.1331$ \\ $516$ & $0.8721$ & $0.1279$ \\ $518$ & $0.8730$ & $0.1270$ \\ $520$ & $0.8739$ & $0.1261$ \\ $522$ & $0.8749$ & $0.1251$ \\ $524$ & $0.8760$ & $0.1240$ \\ $530$ & $0.8796$ & $0.1204$ \\ $532$ & $0.8810$ & $0.1190$ \\ $534$ & $0.8825$ & $0.1175$ \\ $536$ & $0.8841$ & $0.1159$ \\ $538$ & $0.8859$ & $0.1141$ \\ $540$ & $0.8878$ & $0.1122$ \\ $542$ & $0.8898$ & $0.1102$ \\ \end{tabular} \end{ruledtabular} \end{table}
\section*{Data availability} The data that supports the findings of this study are available from the corresponding author upon reasonable request.
\section*{Code availability} The codes that support the findings of this study are available from the corresponding author upon reasonable request.
\FloatBarrier
\section*{Acknowledgments} We thank Matthew S. Winnel for his valuable discussion during this project. This research was funded by the Australian Research Council Centre of Excellence for Quantum Computation and Communication Technology (Grant No. CE110001027). Y.-S.K acknowledges support from the KIST institutional program (2E31021).
\section*{Author contributions} O.E. conceived the project. O.E. and S.A. developed the theory. O.E. performed the numerical analysis. O.E. wrote the manuscript. All authors contributed towards~the~theory,~discussions~of~the~results~and~the manuscript. S.A. supervised the project.
\section*{Competing Interests} The authors declare no competing financial or non-financial interests.
\end{document}
\begin{document}
\begin{frontmatter} \title{ The minimum number of some resolving sets for the Crystal Cubic Carbon $CCC(n)$ and the Layer Cycle Graph $LCG(n, k)$ }
\author[label1]{Jia-Bao Liu} \ead{liujiabaoad@163.com;liujiabao@ahjzu.edu.cn} \author[label2]{Ali Zafari \corref{1}} \ead{zafari.math.pu@gmail.com; zafari.math@pnu.ac.ir} \address[label1]{School of Mathematics and Physics, Anhui Jianzhu University, Hefei 230601, P.R. China} \address[label2]{Department of Mathematics, Faculty of Science, Payame Noor University, P.O. Box 19395-4697, Tehran, Iran} \cortext[1]{Corresponding author}
\begin{abstract} The problem of determining resolving sets in graph theory has a long history, as it has many applications in chemistry, robot navigation, combinatorial optimization, and in pattern recognition and image processing, which also provided motivation for founding the theory. It is well known that these problems are NP-hard. In the present work, we first define the structure of the crystal cubic carbon $CCC(n)$ by a new method, and we find the minimum size of doubly resolving sets and strong resolving sets for the crystal cubic carbon $CCC(n)$. Also, we construct a new class of graphs of order $n+ \Sigma_{p=2}^{k}n^2(n-1)^{p-2}$, denoted by $LCG(n, k)$ and called the layer cycle graph with parameters $n$ and $k$. Moreover, we compute the minimum size of some resolving sets for the layer cycle graph $LCG(n, k)$.
\end{abstract}
\begin{keyword} resolving set; doubly resolving; strong resolving. \MSC[2010] 05C12; 05C90. \end{keyword} \end{frontmatter}
\section{Introduction and Preliminaries} \label{sec:introduction} Throughout this work, we consider connected simple graphs, and we follow the notation and terminology of the book [1]. In graph theory, the structure of a graph is described by a set of vertices together with a set of edges. The problem of determining resolving sets in graph theory has a long history, as it has many applications in chemistry [2,3], robot navigation [4], combinatorial optimization [5], and in pattern recognition and image processing, which also provided motivation for founding the theory [6]. For example, from the perspective of chemical graph theory, a molecular graph is a graph whose vertices are atoms and whose edges are the chemical bonds between them. In particular, if we consider a graph as a chemical compound, then by changing the set of atoms and permuting their positions, one essentially defines a collection of compounds characterized by the substructure common to them. In a chemical compound, it is very important for chemists to find a minimum set of atoms relative to which all other atoms can be identified; chemists therefore require mathematical forms for sets of chemical compounds that give distinct representations to distinct compound structures, and the corresponding problem in graph theory is to find resolving sets of minimum size, see [7-11].
Suppose $R=\{r_1, r_2, ..., r_m\}$ is an ordered subset of vertices of a graph $G$. For each vertex $u$ of $G$, we shall use the notation $r(u|R)$ to denote the representation of $u$ with respect to $R$ in the graph $G$, that is, the $m$-tuple $(d(u, r_1), ..., d(u, r_m ))$, where $d(u, r_i)$ is the length of a geodesic between $u$ and $r_i$, $1\leq i\leq m$. If distinct vertices of $V(G)-R$ have distinct representations, then the ordered subset $R$ is called a resolving set of the graph $G$, see [12]. Therefore, it is important to find resolving sets of minimum size in a graph $G$. The minimum cardinality of a resolving set of $G$ is called the metric dimension of $G$ and is denoted by $\beta(G)$. The metric dimension and its related parameters have been studied by many researchers over the years because of their remarkable applications in graph theory and other sciences. For more specialized topics, see [13-19]. Indeed, the metric dimension problem was first introduced by Slater [6] under the term locating set, and the idea of the metric dimension of a graph was introduced independently by Harary and Melter in [20].
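The definition above can be checked mechanically on small graphs: compute the representations $r(u|R)$ by breadth-first search and test whether they are pairwise distinct. A minimal sketch, using the cycle $C_4$ as a hypothetical example (vertex labels are arbitrary):

```python
def distances_from(adj, src):
    """BFS distances from src in an unweighted graph given as adjacency lists."""
    dist = {src: 0}
    queue = [src]
    for u in queue:                 # the queue grows while we iterate: plain BFS
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_resolving(adj, R):
    """R resolves G iff the distance vectors r(u|R) are pairwise distinct."""
    dists = [distances_from(adj, r) for r in R]
    reps = set()
    for u in adj:
        rep = tuple(d[u] for d in dists)
        if rep in reps:
            return False
        reps.add(rep)
    return True

# the 4-cycle C4: one vertex never resolves it, two adjacent vertices do
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert not is_resolving(C4, [0])
assert is_resolving(C4, [0, 1])
```

This brute-force check is exponential when used to search for a minimum resolving set, consistent with the NP-hardness mentioned above.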
One of the more specialized topics related to the metric dimension is the doubly resolving set of a graph, a notion defined by C\'{a}ceres et al. [21]. An ordered subset $Z=\{z_1, z_2, ..., z_n\}$ of vertices of a graph $G$ is called a doubly resolving set for $G$ if every two distinct vertices $u$ and $v$ of $G$ are doubly resolved by some two vertices of $Z$, that is, for any distinct vertices $u, v \in V(G)$ we have $r(u|Z)-r(v|Z)\neq\mu I$, where $\mu$ is an integer and $I$ denotes the unit $n$-vector $(1,..., 1)$. The minimum cardinality of a doubly resolving set of $G$ is denoted by $\psi(G)$. For more information on doubly resolving sets of graphs, see [22-25].
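Analogously, the doubly resolving condition $r(u|Z)-r(v|Z)\neq\mu I$ says that no difference of two representations may be a constant vector; a minimal sketch, again using the cycle $C_4$ as a hypothetical example:

```python
def distances_from(adj, src):
    """BFS distances from src in an unweighted graph given as adjacency lists."""
    dist = {src: 0}
    queue = [src]
    for u in queue:
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_doubly_resolving(adj, Z):
    """Z doubly resolves G iff for every pair u != v the difference
    r(u|Z) - r(v|Z) is not of the form mu*(1, ..., 1)."""
    dists = [distances_from(adj, z) for z in Z]
    verts = list(adj)
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            diff = [d[u] - d[v] for d in dists]
            if all(x == diff[0] for x in diff):   # constant vector: not doubly resolved
                return False
    return True

C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
# {0, 1} resolves C4 but does not doubly resolve it (the pair {0, 3} fails)
assert not is_doubly_resolving(C4, [0, 1])
assert is_doubly_resolving(C4, [0, 1, 2])
```

The example shows that $\psi(G)$ can exceed $\beta(G)$: every doubly resolving set is resolving, but not conversely.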
A vertex $u$ of a graph $G$ is called maximally distant from a vertex $v$ of $G$ if, for every $w\in N_G(u)$, we have $d(v,w) \leq d(v, u)$, where $N_G(u)$ denotes the set of neighbors of $u$ in $G$. If $u$ is maximally distant from $v$ and $v$ is maximally distant from $u$, then $u$ and $v$ are said to be mutually maximally distant [26].
For vertices $u$ and $v$ of a graph $G$, we use the interval $I_G[u, v]$ to denote the collection of all vertices that belong to a shortest path between $u$ and $v$. A vertex $w$ strongly resolves two vertices $u$ and $v$ if $v$ belongs to $I_G[u, w]$ or $u$ belongs to $I_G[v, w]$. A set $R= \{r_1, r_2, ..., r_t\}$ of vertices of $G$ is a strong resolving set of $G$ if every two distinct vertices of $G$ are strongly resolved by some vertex of $R$. The minimum cardinality of a strong resolving set of $G$ is denoted by $sdim(G)$, see [5, 27, 28].\\
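The strong resolving condition can likewise be tested with distances alone, since $v\in I_G[u,w]$ exactly when $d(u,w)=d(u,v)+d(v,w)$; a minimal sketch on the hypothetical example $C_4$:

```python
def distances_from(adj, src):
    """BFS distances from src in an unweighted graph given as adjacency lists."""
    dist = {src: 0}
    queue = [src]
    for u in queue:
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_strong_resolving(adj, R):
    """R is a strong resolving set iff every pair u, v is strongly resolved
    by some w in R: v lies on a shortest u-w path, or u on a shortest v-w path."""
    d = {u: distances_from(adj, u) for u in adj}   # all-pairs distances
    verts = list(adj)
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if not any(d[u][w] == d[u][v] + d[v][w] or
                       d[v][w] == d[v][u] + d[u][w] for w in R):
                return False
    return True

C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert not is_strong_resolving(C4, [0])
assert is_strong_resolving(C4, [0, 1])   # one endpoint of each antipodal pair
```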
In this article we focus on some resolving parameters of two classes of graphs: the crystal cubic carbon $CCC(n)$ and the layer cycle graph, denoted by $LCG(n, k)$. We first describe these classes of graphs, which are used in the next section. Note that these classes of graphs are important because they allow many combinatorial problems about resolving sets to be translated into chemical compounds. \\
1) Let $n$ be a fixed positive integer and let $k$ be an integer such that $2\leq k \leq n$. The crystal cubic carbon $CCC(n)$ is defined in [7]. Some of the chemical parameters of the crystal cubic carbon $CCC(n)$ have also been calculated by other researchers; further details can be found in [29,30]. In [31] the authors showed that if $n\geq 2$ is an integer, then the minimum size of a resolving set of $CCC(n)$ is $16\times 7^{(n-2)}$. In the present paper, we discuss the minimum size of doubly resolving sets and strong resolving sets for the crystal cubic carbon $CCC(n)$. For this purpose, we first introduce some notation, used throughout this article, related to the crystal cubic carbon $CCC(n)$, and we define the structure of the crystal cubic carbon $CCC(n)$ by a new method.
Consider two sets $W_1=\{1, ..., 4\}$ and $W_2=\{5,..., 8\}$, and let $M$ be a graph with vertex set $\{1, 2, ..., 8\}$ and edge set
$$E(M)=\{ij\,|\, i, j\in W_1, i<j, j-i=1 \text{ or } j-i=3 \}\\\cup
\{ij\,|\, i, j\in W_2, i<j, j-i=1 \text{ or } j-i=3 \}\\
\cup\{ij\,|\, i\in W_1, j\in W_2, j-i=4 \}.$$ We can verify that the graph $M$, defined already, is isomorphic to the cartesian product $C_4\Box P_2$,
where $C_4$ and $P_2$ denote the cycle on $4$ and the path on $2$ vertices, respectively. From now on, for convenience, it can be assumed that $V(C_4\Box P_2)=V(M)$ and $E(C_4\Box P_2)=E(M)$. The cartesian product $C_4\Box P_2$ is depicted in Figure 1. \begin{center} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=15.0cm,y=10.0cm] \clip(5.531538312092156,2.3252649721940215) rectangle (6.113499725856829,2.865884864762297); \draw (5.999118452553847,2.7728532194486912)-- (6.,2.6); \draw (6.,2.6)-- (5.8,2.6); \draw (5.8,2.6)-- (5.797172680635512,2.766733650602681); \draw (5.797172680635512,2.766733650602681)-- (5.999118452553847,2.7728532194486912); \draw (5.859090364648229,2.6814380778860616)-- (5.865450598678444,2.512891876085364); \draw (5.865450598678444,2.512891876085364)-- (5.652382758666241,2.512891876085364); \draw (5.859090364648229,2.6814380778860616)-- (5.652382758666241,2.6782579608709542); \draw (5.652382758666241,2.6782579608709542)-- (5.652382758666241,2.512891876085364); \draw (6.,2.6)-- (5.865450598678444,2.512891876085364); \draw (5.8,2.6)-- (5.652382758666241,2.512891876085364); \draw (5.999118452553847,2.7728532194486912)-- (5.859090364648229,2.6814380778860616); \draw (5.797172680635512,2.766733650602681)-- (5.652382758666241,2.6782579608709542); \draw (5.706444747923069,2.4174883656321393) node[anchor=north west] {Figure 1}; \begin{scriptsize} \draw [fill=black] (5.999118452553847,2.7728532194486912) circle (1.5pt); \draw[color=black] (6.013816683464034,2.788183109535685) node {$5$}; \draw [fill=black] (6.,2.6) circle (1.5pt); \draw[color=black] (6.012996800479142,2.624195971614127) node {$1$}; \draw [fill=black] (5.8,2.6) circle (1.5pt); \draw[color=black] (5.808208492406509,2.5796743334026218) node {$4$}; \draw [fill=black] (5.797172680635512,2.766733650602681) circle (1.5pt); \draw[color=black] (5.785947673300757,2.788183109535685) node {$8$}; \draw [fill=black] (5.859090364648229,2.6814380778860616) circle (1.5pt); 
\draw[color=black] (5.846369896587799,2.702319950127782) node {$6$}; \draw [fill=black] (5.865450598678444,2.512891876085364) circle (1.5pt); \draw[color=black] (5.859090364648229,2.4874509399645044) node {$2$}; \draw [fill=black] (5.652382758666241,2.512891876085364) circle (1.5pt); \draw[color=black] (5.6460225246360265,2.4810907059342893) node {$3$}; \draw [fill=black] (5.652382758666241,2.6782579608709542) circle (1.5pt); \draw[color=black] (5.650581588515167,2.7036988969918148) node {$7$}; \end{scriptsize} \end{tikzpicture} \end{center}
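The edge set of $M$ can be generated directly from the three defining conditions, which confirms that $M$ is the $3$-regular graph on $8$ vertices with $12$ edges, i.e. the cube $C_4\Box P_2$ of Figure 1:

```python
def cube_graph_M():
    """Edge set of the graph M from the text, with W1 = {1..4} and W2 = {5..8}."""
    W1, W2 = range(1, 5), range(5, 9)
    E = {(i, j) for i in W1 for j in W1 if i < j and (j - i == 1 or j - i == 3)}
    E |= {(i, j) for i in W2 for j in W2 if i < j and (j - i == 1 or j - i == 3)}
    E |= {(i, j) for i in W1 for j in W2 if j - i == 4}
    return E

E = cube_graph_M()
deg = {v: sum(v in e for e in E) for v in range(1, 9)}
# the cube C4 x P2: 12 edges, every vertex of degree 3
assert len(E) == 12 and all(d == 3 for d in deg.values())
```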
Also, we shall use the notation $Q^{(k)}_{r_s}$ to denote a cubic graph of order $8$, with vertex set
$$V(Q^{(k)}_{r_s})=\{ (x_{r_s}, 1)^{(k)}, ..., (x_{r_s}, 8)^{(k)}\},$$ and the edge set $E(Q^{(k)}_{r_s})$ is
$E(Q^{(k)}_{r_s})=\{(x_{r_s}, i)^{(k)}(x_{r_s}, j)^{(k)}\,|\, i, j\in W_1, i<j, j-i=1 \text{ or } j-i=3 \}\cup \{(x_{r_s}, i)^{(k)}(x_{r_s}, j)^{(k)}\,|\, i, j\in W_2, i<j, j-i=1 \text{ or } j-i=3 \}\cup
\{(x_{r_s}, i)^{(k)}(x_{r_s}, j)^{(k)}\,|\, i\in W_1, j\in W_2, j-i=4 \},$ for $1\leq r \leq 8$, and $1\leq s \leq 7^{k-2}$. The graph $Q^{(2)}_{1_1}$ is depicted in Figure 2. \begin{center} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=15.0cm,y=10.0cm] \clip(5.531538312092156,2.3252649721940215) rectangle (6.113499725856829,2.865884864762297); \draw (5.999118452553847,2.7728532194486912)-- (6.,2.6); \draw (6.,2.6)-- (5.8,2.6); \draw (5.8,2.6)-- (5.797172680635512,2.766733650602681); \draw (5.797172680635512,2.766733650602681)-- (5.999118452553847,2.7728532194486912); \draw (5.859090364648229,2.6814380778860616)-- (5.865450598678444,2.512891876085364); \draw (5.865450598678444,2.512891876085364)-- (5.652382758666241,2.512891876085364); \draw (5.859090364648229,2.6814380778860616)-- (5.652382758666241,2.6782579608709542); \draw (5.652382758666241,2.6782579608709542)-- (5.652382758666241,2.512891876085364); \draw (6.,2.6)-- (5.865450598678444,2.512891876085364); \draw (5.8,2.6)-- (5.652382758666241,2.512891876085364); \draw (5.999118452553847,2.7728532194486912)-- (5.859090364648229,2.6814380778860616); \draw (5.797172680635512,2.766733650602681)-- (5.652382758666241,2.6782579608709542); \draw (5.706444747923069,2.4174883656321393) node[anchor=north west] {Figure 2}; \begin{scriptsize} \draw [fill=black] (5.999118452553847,2.7728532194486912) circle (1.5pt); \draw[color=black] (6.036816683464034,2.788183109535685) node {$(x_{1_1}, 5)^{(2)}$}; \draw [fill=black] (6.,2.6) circle (1.5pt); \draw[color=black] (6.036996800479142,2.624195971614127) node {$(x_{1_1}, 1)^{(2)}$}; \draw [fill=black] (5.8,2.6) circle (1.5pt); \draw[color=black] (5.828208492406509,2.5796743334026218) node {$(x_{1_1}, 4)^{(2)}$}; \draw [fill=black] (5.797172680635512,2.766733650602681) circle (1.5pt); \draw[color=black] (5.785947673300757,2.788183109535685) node {$(x_{1_1}, 8)^{(2)}$}; \draw [fill=black] (5.859090364648229,2.6814380778860616) circle (1.5pt); \draw[color=black] 
(5.846369896587799,2.702319950127782) node {$(x_{1_1}, 6)^{(2)}$}; \draw [fill=black] (5.865450598678444,2.512891876085364) circle (1.5pt); \draw[color=black] (5.859090364648229,2.4874509399645044) node {$(x_{1_1}, 2)^{(2)}$}; \draw [fill=black] (5.652382758666241,2.512891876085364) circle (1.5pt); \draw[color=black] (5.6460225246360265,2.4810907059342893) node {$(x_{1_1}, 3)^{(2)}$}; \draw [fill=black] (5.652382758666241,2.6782579608709542) circle (1.5pt); \draw[color=black] (5.650581588515167,2.7036988969918148) node {$(x_{1_1}, 7)^{(2)}$}; \end{scriptsize} \end{tikzpicture} \end{center}
We can see that the cubic graph $Q^{(k)}_{r_s}$ of order $8$ is isomorphic to the cartesian product $C_4\Box P_2$. Now, suppose $H$ is a graph of order $8+ 64\Sigma_{k=2}^{n}7^{(k-2)}$ with vertex set $$V(H)=L_1 \cup L_2\cup ...\cup L_n,$$ where the sets $L_1, L_2, ..., L_n$ are called the layers of $H$, such that $L_1=V(C_4\Box P_2)=\{1, 2, ..., 8\}$, and for $k\geq 2$ we have $$L_k=\{Q^{(k)}_{1_1}, Q^{(k)}_{1_2}, ..., Q^{(k)}_{1_{7^{k-2}}}; ...; Q^{(k)}_{8_1}, Q^{(k)}_{8_2}, ..., Q^{(k)}_{8_{7^{k-2}}}\},$$ where $Q^{(k)}_{r_s}$ is defined as above and every $(x_{r_{s}}, 1)^{(k)}\in Q^{(k)}_{r_{s}}$ is called the head vertex of $Q^{(k)}_{r_{s}}$ in the layer $L_k$. Now, let the adjacency relation of the graph $H$ be given as follows. Every vertex $r$ in the layer $L_1$, $1\leq r \leq 8$, is adjacent by an edge to the head vertex of the cube $Q^{(2)}_{r_1}$ in the layer $L_2$. Also, every vertex of a cube $Q^{(k)}_{r_s}\in L_k$ ($k\geq 2$), except the head vertex $(x_{r_s}, 1)^{(k)}$, is adjacent by an edge to the head vertex of exactly one cube in the layer $L_{k+1}$, say $Q^{(k+1)}_{r_s}$. Then we can see that the resulting graph is isomorphic to the crystal cubic carbon $CCC(n)$. In particular, we say that two cubes are congruous if both of them lie in the same layer. It is natural to consider the vertex set of the crystal cubic carbon $CCC(n)$ as partitioned into $n$ layers. The layers $L_1$ and $L_2$ consist of the vertices $\{1, 2, ..., 8\}$ and $\{Q^{(2)}_{1_1}, Q^{(2)}_{2_1}, ..., Q^{(2)}_{8_1}\}$, respectively. In general, each layer $L_k$ ($k\geq 2$) consists of $64\times7^{(k-2)}$ vertices, that is, of $8\times7^{(k-2)}$ cubes. The crystal cubic carbon $CCC(2)$ is depicted in Figure 3.
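The layer counts above give the order of $CCC(n)$ as $8+64\Sigma_{k=2}^{n}7^{(k-2)}$, which can be tabulated directly:

```python
def ccc_order(n):
    """Order of CCC(n): layer L1 has 8 vertices and, for k >= 2,
    layer L_k has 64 * 7**(k-2) vertices (8 * 7**(k-2) cubes of 8 vertices)."""
    return 8 + sum(64 * 7 ** (k - 2) for k in range(2, n + 1))

assert ccc_order(1) == 8          # just the central cube
assert ccc_order(2) == 8 + 64     # one cube attached to each of the 8 vertices
assert ccc_order(3) == 8 + 64 + 64 * 7
```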
\begin{center} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=10.0cm,y=10.0cm] \clip(5.9081308513605615,1.2932437619909456) rectangle (7.099822839633116,2.101402696566582); \draw (6.4,1.8)-- (6.5,1.8); \draw (6.5,1.8)-- (6.5,1.7); \draw (6.5,1.7)-- (6.4,1.7); \draw (6.4,1.7)-- (6.4,1.8); \draw (6.35,1.75)-- (6.45,1.75); \draw (6.45,1.65)-- (6.35,1.65); \draw (6.45,1.75)-- (6.45,1.65); \draw (6.35,1.75)-- (6.35,1.65); \draw (6.4,1.8)-- (6.35,1.75); \draw (6.5,1.8)-- (6.45,1.75); \draw (6.5,1.7)-- (6.45,1.65); \draw (6.4,1.7)-- (6.35,1.65); \draw (6.55,2.05)-- (6.65,2.05); \draw (6.65,2.05)-- (6.65,1.95); \draw (6.65,1.95)-- (6.55,1.95); \draw (6.55,1.95)-- (6.55,2.05); \draw (6.5,2.)-- (6.6,2.); \draw (6.6,2.)-- (6.6,1.9); \draw (6.6,1.9)-- (6.5,1.9); \draw (6.5,1.9)-- (6.5,2.); \draw (6.55,2.05)-- (6.5,2.); \draw (6.65,2.05)-- (6.6,2.); \draw (6.65,1.95)-- (6.6,1.9); \draw (6.55,1.95)-- (6.5,1.9); \draw (6.8,1.95)-- (6.95,1.95); \draw (6.95,1.95)-- (6.95,1.85); \draw (6.95,1.85)-- (6.8,1.85); \draw (6.8,1.85)-- (6.8,1.95); \draw (6.75,1.9)-- (6.9,1.9); \draw (6.9,1.9)-- (6.9,1.8); \draw (6.75,1.8)-- (6.75,1.9); \draw (6.8,1.95)-- (6.75,1.9); \draw (6.95,1.95)-- (6.9,1.9); \draw (6.95,1.85)-- (6.9,1.8); \draw (6.8,1.85)-- (6.75,1.8); \draw (6.9,1.8)-- (6.75,1.8); \draw (6.65,1.7)-- (6.75,1.7); \draw (6.75,1.7)-- (6.75,1.6); \draw (6.75,1.6)-- (6.65,1.6); \draw (6.65,1.6)-- (6.65,1.7); \draw (6.6,1.65)-- (6.7,1.65); \draw (6.7,1.65)-- (6.7,1.55); \draw (6.7,1.55)-- (6.6,1.55); \draw (6.6,1.55)-- (6.6,1.65); \draw (6.65,1.7)-- (6.6,1.65); \draw (6.75,1.7)-- (6.7,1.65); \draw (6.75,1.6)-- (6.7,1.55); \draw (6.65,1.6)-- (6.6,1.55); \draw (6.4,1.55)-- (6.5,1.55); \draw (6.5,1.55)-- (6.5,1.45); \draw (6.5,1.45)-- (6.4,1.45); \draw (6.4,1.45)-- (6.4,1.55); \draw (6.35,1.5)-- (6.45,1.5); \draw (6.45,1.5)-- (6.45,1.4); \draw (6.45,1.4)-- (6.35,1.4); \draw (6.35,1.4)-- (6.35,1.5); \draw (6.4,1.55)-- (6.35,1.5); \draw (6.5,1.55)-- (6.45,1.5); \draw 
(6.5,1.45)-- (6.45,1.4); \draw (6.4,1.45)-- (6.35,1.4); \draw (6.2,1.55)-- (6.3,1.55); \draw (6.3,1.55)-- (6.3,1.45); \draw (6.3,1.45)-- (6.2,1.45); \draw (6.2,1.45)-- (6.2,1.55); \draw (6.15,1.5)-- (6.25,1.5); \draw (6.25,1.5)-- (6.25,1.4); \draw (6.25,1.4)-- (6.15,1.4); \draw (6.15,1.4)-- (6.15,1.5); \draw (6.15,1.5)-- (6.2,1.55); \draw (6.25,1.5)-- (6.3,1.55); \draw (6.25,1.4)-- (6.3,1.45); \draw (6.2,1.45)-- (6.15,1.4); \draw (6.,1.7)-- (6.1,1.7); \draw (6.1,1.7)-- (6.1,1.6); \draw (6.1,1.6)-- (6.,1.6); \draw (6.,1.6)-- (6.,1.7); \draw (6.05,1.75)-- (6.15,1.75); \draw (6.15,1.75)-- (6.15,1.65); \draw (6.15,1.65)-- (6.05,1.65); \draw (6.05,1.65)-- (6.05,1.75); \draw (6.05,1.75)-- (6.,1.7); \draw (6.15,1.75)-- (6.1,1.7); \draw (6.15,1.65)-- (6.1,1.6); \draw (6.05,1.65)-- (6.,1.6); \draw (6.3,2.05)-- (6.4,2.05); \draw (6.4,2.05)-- (6.4,1.95); \draw (6.4,1.95)-- (6.3,1.95); \draw (6.3,1.95)-- (6.3,2.05); \draw (6.25,2.)-- (6.35,2.); \draw (6.35,2.)-- (6.35,1.9); \draw (6.35,1.9)-- (6.25,1.9); \draw (6.25,1.9)-- (6.25,2.); \draw (6.3,2.05)-- (6.25,2.); \draw (6.4,2.05)-- (6.35,2.); \draw (6.4,1.95)-- (6.35,1.9); \draw (6.3,1.95)-- (6.25,1.9); \draw (6.1,1.95)-- (6.2,1.95); \draw (6.2,1.95)-- (6.2,1.85); \draw (6.2,1.85)-- (6.1,1.85); \draw (6.1,1.85)-- (6.1,1.95); \draw (6.05,1.9)-- (6.15,1.9); \draw (6.15,1.9)-- (6.15,1.8); \draw (6.15,1.8)-- (6.05,1.8); \draw (6.05,1.8)-- (6.05,1.9); \draw (6.05,1.9)-- (6.1,1.95); \draw (6.2,1.95)-- (6.15,1.9); \draw (6.2,1.85)-- (6.15,1.8); \draw (6.1,1.85)-- (6.05,1.8); \draw (6.5,1.7)-- (6.75,1.8); \draw (6.5,1.8)-- (6.5,1.9); \draw (6.4,1.8)-- (6.35,1.9); \draw (6.35,1.75)-- (6.2,1.85); \draw (6.45,1.65)-- (6.5,1.55); \draw (6.35,1.65)-- (6.3,1.55); \draw (6.4,1.7)-- (6.15,1.65); \draw (6.45,1.75)-- (6.6,1.65); \draw (6.219059123178684,1.3373805022973364) node[anchor=north west] {Figure 3. 
Crystal cubic carbon $CCC(2)$}; \begin{scriptsize} \draw [fill=black] (6.4,1.8) circle (1.5pt); \draw[color=black] (6.4103765169160445,1.8213626891053447) node {$8$}; \draw [fill=black] (6.5,1.8) circle (1.5pt); \draw[color=black] (6.510825650027141,1.8213626891053447) node {$5$}; \draw [fill=black] (6.5,1.7) circle (1.5pt); \draw[color=black] (6.507781736902563,1.7285233388056953) node {$1$}; \draw [fill=black] (6.4,1.7) circle (1.5pt); \draw[color=black] (6.401244777542308,1.6789524681861725) node {$4$}; \draw [fill=black] (6.35,1.75) circle (1.5pt); \draw[color=black] (6.331234775676998,1.7398307306777459) node {$7$}; \draw [fill=black] (6.45,1.75) circle (1.5pt); \draw[color=black] (6.437771735037253,1.7680942094252179) node {$6$}; \draw [fill=black] (6.45,1.65) circle (1.5pt); \draw[color=black] (6.437771735037253,1.630249858192914) node {$2$}; \draw [fill=black] (6.35,1.65) circle (1.5pt); \draw[color=black] (6.323624992865552,1.653947467253518) node {$3$}; \draw [fill=black] (6.55,2.05) circle (1.5pt); \draw [fill=black] (6.65,2.05) circle (1.5pt); \draw [fill=black] (6.65,1.95) circle (1.5pt); \draw [fill=black] (6.55,1.95) circle (1.5pt); \draw [fill=black] (6.5,2.) circle (1.5pt); \draw [fill=black] (6.6,2.) 
circle (1.5pt); \draw [fill=black] (6.6,1.9) circle (1.5pt); \draw [fill=qqqqff] (6.5,1.9) circle (1.5pt); \draw [fill=black] (6.8,1.95) circle (1.5pt); \draw[color=black] (6.807607179673562,1.9796461715834353) node {$(x_{1_1}, 8)^{(2)}$}; \draw [fill=black] (6.95,1.95) circle (1.5pt); \draw[color=black] (6.952193053091049,1.982690084708014) node {$(x_{1_1}, 7)^{(2)}$}; \draw [fill=black] (6.95,1.85) circle (1.5pt); \draw[color=black] (7.01412618713943,1.85484573347571) node {$(x_{1_1}, 3)^{(2)}$}; \draw [fill=black] (6.8,1.85) circle (1.5pt); \draw[color=black] (6.851080222178618,1.832977472849446) node {$(x_{1_1}, 4)^{(2)}$}; \draw [fill=black] (6.75,1.9) circle (1.5pt); \draw[color=black] (6.701509351559095,1.9096361697181259) node {$(x_{1_1}, 5)^{(2)}$}; \draw [fill=black] (6.9,1.9) circle (1.5pt); \draw[color=black] (6.879139138101161,1.914202039404994) node {$(x_{1_1}, 6)^{(2)}$}; \draw [fill=black] (6.9,1.8) circle (1.5pt); \draw[color=black] (6.958056312784659,1.7848357316104007) node {$(x_{1_1}, 2)^{(2)}$}; \draw [fill=qqqqff] (6.75,1.8) circle (1.5pt); \draw[color=qqqqff] (6.695421525309938,1.8122309497316087) node {$(x_{1_1}, 1)^{(2)}$}; \draw [fill=black] (6.65,1.7) circle (1.5pt); \draw [fill=black] (6.75,1.7) circle (1.5pt); \draw [fill=black] (6.75,1.6) circle (1.5pt); \draw [fill=black] (6.65,1.6) circle (1.5pt); \draw [fill=qqqqff] (6.6,1.65) circle (1.5pt); \draw [fill=black] (6.7,1.65) circle (1.5pt); \draw [fill=black] (6.7,1.55) circle (1.5pt); \draw [fill=black] (6.6,1.55) circle (1.5pt); \draw [fill=black] (6.4,1.55) circle (1.5pt); \draw [fill=qqqqff] (6.5,1.55) circle (1.5pt); \draw [fill=black] (6.5,1.45) circle (1.5pt); \draw [fill=black] (6.4,1.45) circle (1.5pt); \draw [fill=black] (6.35,1.5) circle (1.5pt); \draw [fill=black] (6.45,1.5) circle (1.5pt); \draw [fill=black] (6.45,1.4) circle (1.5pt); \draw [fill=black] (6.35,1.4) circle (1.5pt); \draw [fill=black] (6.2,1.55) circle (1.5pt); \draw [fill=qqqqff] (6.3,1.55) circle (1.5pt); 
\draw [fill=black] (6.3,1.45) circle (1.5pt); \draw [fill=black] (6.2,1.45) circle (1.5pt); \draw [fill=black] (6.15,1.5) circle (1.5pt); \draw [fill=black] (6.25,1.5) circle (1.5pt); \draw [fill=black] (6.25,1.4) circle (1.5pt); \draw [fill=black] (6.15,1.4) circle (1.5pt); \draw [fill=black] (6.,1.7) circle (1.5pt); \draw [fill=black] (6.1,1.7) circle (1.5pt); \draw [fill=black] (6.1,1.6) circle (1.5pt); \draw [fill=black] (6.,1.6) circle (1.5pt); \draw [fill=black] (6.05,1.75) circle (1.5pt); \draw [fill=black] (6.15,1.75) circle (1.5pt); \draw [fill=qqqqff] (6.15,1.65) circle (1.5pt); \draw [fill=black] (6.05,1.65) circle (1.5pt); \draw [fill=black] (6.3,2.05) circle (1.5pt); \draw [fill=black] (6.4,2.05) circle (1.5pt); \draw [fill=black] (6.4,1.95) circle (1.5pt); \draw [fill=black] (6.3,1.95) circle (1.5pt); \draw [fill=black] (6.25,2.) circle (1.5pt); \draw [fill=black] (6.35,2.) circle (1.5pt); \draw [fill=qqqqff] (6.35,1.9) circle (1.5pt); \draw [fill=black] (6.25,1.9) circle (1.5pt); \draw [fill=black] (6.1,1.95) circle (1.5pt); \draw [fill=black] (6.2,1.95) circle (1.5pt); \draw [fill=qqqqff] (6.2,1.85) circle (1.5pt); \draw [fill=black] (6.1,1.85) circle (1.5pt); \draw [fill=black] (6.05,1.9) circle (1.5pt); \draw [fill=black] (6.15,1.9) circle (1.5pt); \draw [fill=black] (6.15,1.8) circle (1.5pt); \draw [fill=black] (6.05,1.8) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \end{center}
2). Let $n$ and $k$ be fixed positive integers such that $n\geq 3$, $k\geq 2$, and let $[n]=\{1,2, ..., n\}$; also, let $p$ be an integer such that $2\leq p \leq k$. In this section, we construct a class of graphs of order $n+ \sum_{p=2}^{k}n^2(n-1)^{p-2}$, denoted by $LCG(n,k)$ and called the layer cycle graph with parameters $n$ and $k$. Moreover, we obtain some resolving parameters for this class of graphs in the next section. For this purpose, we first introduce some notation which is used throughout this paper and is related to the layer cycle graph $LCG(n, k)$, as follows. We use the notation $C^{(p)}_{r_{s}}$ to denote a cycle of order $n$, with vertex set
$$V(C^{(p)}_{r_s})=\{ (x_{r_{s}}, 1)^{(p)}, (x_{r_{s}}, 2)^{(p)}, ..., (x_{r_{s}}, n)^{(p)}\},$$
and the edge set $E(C^{(p)}_{r_{s}})$ is
$$E(C^{(p)}_{r_{s}})=\{(x_{r_{s}}, i)^{(p)}(x_{r_{s}}, j)^{(p)}\,|\, i,j\in [n],\ i<j,\ j-i=1 \text{ or } j-i=n-1 \},$$ for $1\leq r \leq n$ and $1\leq s \leq (n-1)^{p-2}$. We can verify that $C^{(p)}_{r_{s}}$ is isomorphic to the cycle $C_n$, where the vertex set of the cycle $C_n$ is $V(C_n)=\{1, 2, ..., n\}$ and the edge set $E(C_n)$ is
$E(C_n)=\{ij\,|\, i,j\in [n],\ i<j,\ j-i=1 \text{ or } j-i=n-1 \}$. Now, suppose $G$ is a graph with vertex set $V(G)=U_1 \cup U_2\cup ...\cup U_k$, where the sets $U_1, U_2, ..., U_k$ are called the layers of $G$, such that $U_1=V(C_n)$, and for $p\geq 2$ we have $$U_p=\{C^{(p)}_{1_{1}}, C^{(p)}_{1_2}, ..., C^{(p)}_{1_{(n-1)^{p-2}}}; C^{(p)}_{2_1}, C^{(p)}_{2_2}, ..., C^{(p)}_{2_{(n-1)^{p-2}}}; ...; C^{(p)}_{n_1}, C^{(p)}_{n_2}, ..., C^{(p)}_{n_{(n-1)^{p-2}}}\},$$ where $C^{(p)}_{r_{s}}$ is defined as above and every $(x_{r_{s}}, 1)^{(p)}\in C^{(p)}_{r_{s}}$ is called the head vertex of $C^{(p)}_{r_{s}}$ in the layer $U_p$. Now, let the adjacency relation of the graph $G$ be given as follows. Every vertex $r$ in the layer $U_1$, $1\leq r \leq n$, is adjacent by an edge to the head vertex of $C^{(2)}_{r_{1}}$ in the layer $U_2$. Also, every vertex of a cycle $C^{(p)}_{r_{s}}\in U_p$ ($2\leq p\leq k-1$), except the head vertex $(x_{r_{s}}, 1)^{(p)}$, is adjacent by an edge to the head vertex of exactly one cycle in the layer $U_{p+1}$, say $C^{(p+1)}_{r_s}$. The resulting graph is called the layer cycle graph $LCG(n,k)$ with parameters $n$ and $k$. In particular, we say that two cycles are congruous if both of them lie in the same layer. It is natural to regard the vertex set of the layer cycle graph $LCG(n, k)$ as partitioned into $k$ layers. The layer $U_1$ consists of the vertices $\{1, 2, ..., n\}$ and the layer $U_2$ consists of the cycles $\{C^{(2)}_{1_{1}}, C^{(2)}_{2_{1}}, ..., C^{(2)}_{n_{1}}\}$. More generally, each layer $U_p$ ($p\geq 2$) consists of $n^2(n-1)^{p-2}$ vertices, grouped into $n(n-1)^{p-2}$ cycles. The layer cycle graph $LCG(5, 3)$ is depicted in Figure 4.
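As with $CCC(n)$, the order formula $n+ \sum_{p=2}^{k}n^2(n-1)^{p-2}$ can be verified layer by layer. The sketch below (the helper names \texttt{lcg\_layer\_sizes} and \texttt{lcg\_order} are ours, introduced only for illustration) counts vertices per layer; for $(n,k)=(5,3)$ it recovers the $5+25+100=130$ vertices of $LCG(5,3)$ shown in Figure 4.

```python
def lcg_layer_sizes(n, k):
    """Vertices per layer U_1, ..., U_k of LCG(n, k).

    U_1 is the cycle C_n (n vertices); for p >= 2 the layer U_p consists
    of n * (n-1)^(p-2) congruous n-cycles, hence n^2 * (n-1)^(p-2)
    vertices: each of the n-1 non-head vertices of a cycle spawns one
    cycle in the next layer.
    """
    return [n] + [n * n * (n - 1) ** (p - 2) for p in range(2, k + 1)]


def lcg_order(n, k):
    """Total order of LCG(n, k), i.e. n + sum_{p=2}^{k} n^2 (n-1)^(p-2)."""
    return sum(lcg_layer_sizes(n, k))
```

For example, `lcg_layer_sizes(5, 3)` gives the layer sizes `[5, 25, 100]`, and consecutive layer sizes grow by the factor $n-1$ predicted by the construction.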
\begin{center} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.27cm,y=0.327cm] \clip(-2.0161813448739916,-13.951664529057004) rectangle (50.28365550848355,30.48598746405911); \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (21.11928368662371,11.188737830007517) -- (22.835259488187784,11.173409959359034) -- (23.380102844068425,12.800663354723385) -- (22.00085875498314,13.821689132015706) -- (20.603595673265406,12.825464370407792) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (21.9826220461177,18.09544470595976) -- (23.576477479029673,19.888532067985828) -- (22.36367847443493,21.95847313075419) -- (20.02027203516137,21.444679700228047) -- (19.784766210829684,19.057196834198123) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (29.354203423335605,11.85284426038754) -- (31.849816596011475,12.947796598080227) -- (31.57964192195906,15.659624648483248) -- (28.91705161781937,16.240674217584996) -- (27.541654985797543,13.887954550035332) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (24.165823077659372,4.87345052858861) -- (26.10008191447365,5.875886386967328) -- (25.74442761129017,8.02524557393089) -- (23.59036232686335,8.351186747127494) -- (22.614731070284837,6.40327028353245) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (16.799553634800827,7.275404959971025) -- (16.271306744849774,9.462061111019457) -- (14.028435897277692,9.635382375647339) -- (13.170512371052915,7.55584465711205) -- (14.883157319669923,6.0972984015419485) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (15.371278536118586,14.049065661548541) -- (16.73618124611577,15.17337641228766) -- (16.088676313342184,16.818907157676236) -- (14.323593547007713,16.711590337120207) -- (13.880217337229903,14.999734149063434) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (14.283927088315918,23.7414470607691) -- (14.943093884089782,25.73266865533003) -- (13.253023353216788,26.974894844061872) -- 
(11.549335525978833,25.75141125585246) -- (12.186469073399314,23.753030624929522) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (18.376688783641207,25.217300111423555) -- (19.392606967635725,26.994470873052073) -- (18.016353117968542,28.509842449212282) -- (16.149863277731814,27.66922282723627) -- (16.37256296647634,25.6343197530848) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (22.144342349366035,25.777725944306205) -- (23.70929876510567,27.193702802166083) -- (22.846222875434552,29.11962571179351) -- (20.747856225007613,28.893934671795478) -- (20.31407020385561,26.828527028492953) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (27.428294775233624,25.73266865533003) -- (28.291633134727608,27.92421987558411) -- (26.474130290889796,29.4225300192501) -- (24.48751339925445,28.156985393470332) -- (25.077219481436956,25.876525656792687) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (32.608873780487116,22.529590037397952) -- (35.9321606904652,22.513450407064763) -- (36.97446252343122,25.66909665826192) -- (34.295353572762444,27.63553292830615) -- (31.59727134871906,25.695211128706887) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (42.25796186329197,16.062570288608036) -- (43.0670567138712,18.917465504377393) -- (40.601914074464155,20.56917157315409) -- (38.26927728561473,18.735086847313212) -- (39.292771106104425,15.949854079719815) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (43.77958301811022,6.685986462373042) -- (44.799858606835414,9.551627803701711) -- (42.389754231661996,11.407499425029656) -- (39.879952222644796,9.688849824438034) -- (40.73891365121282,6.770794335193035) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (41.44094045625145,-1.960569491516952) -- (42.23786817270744,1.0279094451931547) -- (39.6419200138118,2.709323521719836) -- (37.24060810212559,0.760015633465736) -- (38.35246388200917,-2.1261369725402615) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill 
opacity=0.1] (30.76803661726853,-3.022042083545845) -- (32.47550364612156,-1.894158848478957) -- (31.93045927504162,0.0782738829771048) -- (29.88613629948438,0.1694211164728574) -- (29.16771958768762,-1.7466795267023065) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (25.834439342321645,-4.085710068733035) -- (27.693937347385617,-3.5544249244290085) -- (27.76327163345554,-1.6217610912261866) -- (25.946624573768485,-0.9585942977832149) -- (24.754540659249407,-2.4813985124279974) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (21.982622046117694,-3.3551929953150093) -- (23.443656192953675,-1.9605694915169596) -- (22.568774802622578,-0.1400810822105365) -- (20.567034220437215,-0.40958087293199075) -- (20.20477189431776,-2.3966293128652527) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (17.710753784202524,-1.938532158762184) -- (19.06055375244573,-0.5659459877189097) -- (18.17225785949232,1.1419425208701957) -- (16.27346083733697,0.8248894971303508) -- (15.988235632861286,-1.0789485563659031) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (11.832770627095226,-2.3497029288222673) -- (13.22647054051915,-1.2680063512512225) -- (12.628392920072928,0.39174365839726377) -- (10.865060709302579,0.33582899961694945) -- (10.373339090035232,-1.3584781696271235) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (7.3058699347198806,-1.2300524180989334) -- (8.860757139072371,-0.26956791135095415) -- (8.42776866061024,1.5060237318714) -- (6.605279859831052,1.6429152107751133) -- (5.911908315295607,-0.04807284571450726) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (6.176889003073895,2.4889435920291993) -- (5.844835787883899,4.414852240131268) -- (3.9105777317866863,4.6941893679452855) -- (3.047193725295304,2.940920559152053) -- (4.447851120037782,1.5780037160887772) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (4.915086785351911,5.743065100891315) -- (6.207423082573568,7.279116409575084) -- 
(5.145905354375257,8.982867224907654) -- (3.1975150214664714,8.49979182845976) -- (3.0548613005754297,6.497483998993561) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (4.317390998009919,10.657452685703491) -- (5.5127825726939035,12.184897475577547) -- (4.429492563513084,13.79378882024563) -- (2.564590943482164,13.260693565581974) -- (2.495308365809135,11.322331234290468) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (4.070250997468012,14.867178713923998) -- (5.645603858769902,15.970304128743685) -- (5.083280050994329,17.80943823305526) -- (3.160391963803871,17.84296020456922) -- (2.5343055771334706,16.02454381802318) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (6.158386431463805,20.441170566351666) -- (4.9629948567798206,21.96861535622572) -- (3.1409122245790404,21.303736807638742) -- (3.2101948022520697,19.36537447634724) -- (5.075096422282987,18.832279221683585) -- cycle; \fill[color=zzttqq,fill=zzttqq,fill opacity=0.1] (7.903565722061873,22.544957789505915) -- (8.920805306174907,24.540853926378794) -- (7.336939598195463,26.125072086783895) -- (5.340817172935715,25.1082766186362) -- (5.691011376398764,22.895644299308962) -- cycle; \draw [color=zzttqq] (21.11928368662371,11.188737830007517)-- (22.835259488187784,11.173409959359034); \draw [color=zzttqq] (22.835259488187784,11.173409959359034)-- (23.380102844068425,12.800663354723385); \draw [color=zzttqq] (23.380102844068425,12.800663354723385)-- (22.00085875498314,13.821689132015706); \draw [color=zzttqq] (22.00085875498314,13.821689132015706)-- (20.603595673265406,12.825464370407792); \draw [color=zzttqq] (20.603595673265406,12.825464370407792)-- (21.11928368662371,11.188737830007517); \draw [color=zzttqq] (21.9826220461177,18.09544470595976)-- (23.576477479029673,19.888532067985828); \draw [color=zzttqq] (23.576477479029673,19.888532067985828)-- (22.36367847443493,21.95847313075419); \draw [color=zzttqq] (22.36367847443493,21.95847313075419)-- 
(20.02027203516137,21.444679700228047); \draw [color=zzttqq] (20.02027203516137,21.444679700228047)-- (19.784766210829684,19.057196834198123); \draw [color=zzttqq] (19.784766210829684,19.057196834198123)-- (21.9826220461177,18.09544470595976); \draw [color=zzttqq] (29.354203423335605,11.85284426038754)-- (31.849816596011475,12.947796598080227); \draw [color=zzttqq] (31.849816596011475,12.947796598080227)-- (31.57964192195906,15.659624648483248); \draw [color=zzttqq] (31.57964192195906,15.659624648483248)-- (28.91705161781937,16.240674217584996); \draw [color=zzttqq] (28.91705161781937,16.240674217584996)-- (27.541654985797543,13.887954550035332); \draw [color=zzttqq] (27.541654985797543,13.887954550035332)-- (29.354203423335605,11.85284426038754); \draw [color=zzttqq] (24.165823077659372,4.87345052858861)-- (26.10008191447365,5.875886386967328); \draw [color=zzttqq] (26.10008191447365,5.875886386967328)-- (25.74442761129017,8.02524557393089); \draw [color=zzttqq] (25.74442761129017,8.02524557393089)-- (23.59036232686335,8.351186747127494); \draw [color=zzttqq] (23.59036232686335,8.351186747127494)-- (22.614731070284837,6.40327028353245); \draw [color=zzttqq] (22.614731070284837,6.40327028353245)-- (24.165823077659372,4.87345052858861); \draw [color=zzttqq] (16.799553634800827,7.275404959971025)-- (16.271306744849774,9.462061111019457); \draw [color=zzttqq] (16.271306744849774,9.462061111019457)-- (14.028435897277692,9.635382375647339); \draw [color=zzttqq] (14.028435897277692,9.635382375647339)-- (13.170512371052915,7.55584465711205); \draw [color=zzttqq] (13.170512371052915,7.55584465711205)-- (14.883157319669923,6.0972984015419485); \draw [color=zzttqq] (14.883157319669923,6.0972984015419485)-- (16.799553634800827,7.275404959971025); \draw [color=zzttqq] (15.371278536118586,14.049065661548541)-- (16.73618124611577,15.17337641228766); \draw [color=zzttqq] (16.73618124611577,15.17337641228766)-- (16.088676313342184,16.818907157676236); \draw [color=zzttqq] 
(16.088676313342184,16.818907157676236)-- (14.323593547007713,16.711590337120207); \draw [color=zzttqq] (14.323593547007713,16.711590337120207)-- (13.880217337229903,14.999734149063434); \draw [color=zzttqq] (13.880217337229903,14.999734149063434)-- (15.371278536118586,14.049065661548541); \draw (22.00085875498314,13.821689132015706)-- (21.9826220461177,18.09544470595976); \draw (23.380102844068425,12.800663354723385)-- (27.541654985797543,13.887954550035332); \draw (22.835259488187784,11.173409959359034)-- (23.59036232686335,8.351186747127494); \draw (21.11928368662371,11.188737830007517)-- (16.271306744849774,9.462061111019457); \draw (20.603595673265406,12.825464370407792)-- (16.73618124611577,15.17337641228766); \draw [color=zzttqq] (14.283927088315918,23.7414470607691)-- (14.943093884089782,25.73266865533003); \draw [color=zzttqq] (14.943093884089782,25.73266865533003)-- (13.253023353216788,26.974894844061872); \draw [color=zzttqq] (13.253023353216788,26.974894844061872)-- (11.549335525978833,25.75141125585246); \draw [color=zzttqq] (11.549335525978833,25.75141125585246)-- (12.186469073399314,23.753030624929522); \draw [color=zzttqq] (12.186469073399314,23.753030624929522)-- (14.283927088315918,23.7414470607691); \draw [color=zzttqq] (18.376688783641207,25.217300111423555)-- (19.392606967635725,26.994470873052073); \draw [color=zzttqq] (19.392606967635725,26.994470873052073)-- (18.016353117968542,28.509842449212282); \draw [color=zzttqq] (18.016353117968542,28.509842449212282)-- (16.149863277731814,27.66922282723627); \draw [color=zzttqq] (16.149863277731814,27.66922282723627)-- (16.37256296647634,25.6343197530848); \draw [color=zzttqq] (16.37256296647634,25.6343197530848)-- (18.376688783641207,25.217300111423555); \draw [color=zzttqq] (22.144342349366035,25.777725944306205)-- (23.70929876510567,27.193702802166083); \draw [color=zzttqq] (23.70929876510567,27.193702802166083)-- (22.846222875434552,29.11962571179351); \draw [color=zzttqq] 
(22.846222875434552,29.11962571179351)-- (20.747856225007613,28.893934671795478); \draw [color=zzttqq] (20.747856225007613,28.893934671795478)-- (20.31407020385561,26.828527028492953); \draw [color=zzttqq] (20.31407020385561,26.828527028492953)-- (22.144342349366035,25.777725944306205); \draw [color=zzttqq] (27.428294775233624,25.73266865533003)-- (28.291633134727608,27.92421987558411); \draw [color=zzttqq] (28.291633134727608,27.92421987558411)-- (26.474130290889796,29.4225300192501); \draw [color=zzttqq] (26.474130290889796,29.4225300192501)-- (24.48751339925445,28.156985393470332); \draw [color=zzttqq] (24.48751339925445,28.156985393470332)-- (25.077219481436956,25.876525656792687); \draw [color=zzttqq] (25.077219481436956,25.876525656792687)-- (27.428294775233624,25.73266865533003); \draw [color=zzttqq] (32.608873780487116,22.529590037397952)-- (35.9321606904652,22.513450407064763); \draw [color=zzttqq] (35.9321606904652,22.513450407064763)-- (36.97446252343122,25.66909665826192); \draw [color=zzttqq] (36.97446252343122,25.66909665826192)-- (34.295353572762444,27.63553292830615); \draw [color=zzttqq] (34.295353572762444,27.63553292830615)-- (31.59727134871906,25.695211128706887); \draw [color=zzttqq] (31.59727134871906,25.695211128706887)-- (32.608873780487116,22.529590037397952); \draw (19.784766210829684,19.057196834198123)-- (14.283927088315918,23.7414470607691); \draw (20.02027203516137,21.444679700228047)-- (18.376688783641207,25.217300111423555); \draw (22.36367847443493,21.95847313075419)-- (22.144342349366035,25.777725944306205); \draw (23.576477479029673,19.888532067985828)-- (25.077219481436956,25.876525656792687); \draw [color=zzttqq] (42.25796186329197,16.062570288608036)-- (43.0670567138712,18.917465504377393); \draw [color=zzttqq] (43.0670567138712,18.917465504377393)-- (40.601914074464155,20.56917157315409); \draw [color=zzttqq] (40.601914074464155,20.56917157315409)-- (38.26927728561473,18.735086847313212); \draw [color=zzttqq] 
(38.26927728561473,18.735086847313212)-- (39.292771106104425,15.949854079719815); \draw [color=zzttqq] (39.292771106104425,15.949854079719815)-- (42.25796186329197,16.062570288608036); \draw [color=zzttqq] (43.77958301811022,6.685986462373042)-- (44.799858606835414,9.551627803701711); \draw [color=zzttqq] (44.799858606835414,9.551627803701711)-- (42.389754231661996,11.407499425029656); \draw [color=zzttqq] (42.389754231661996,11.407499425029656)-- (39.879952222644796,9.688849824438034); \draw [color=zzttqq] (39.879952222644796,9.688849824438034)-- (40.73891365121282,6.770794335193035); \draw [color=zzttqq] (40.73891365121282,6.770794335193035)-- (43.77958301811022,6.685986462373042); \draw [color=zzttqq] (41.44094045625145,-1.960569491516952)-- (42.23786817270744,1.0279094451931547); \draw [color=zzttqq] (42.23786817270744,1.0279094451931547)-- (39.6419200138118,2.709323521719836); \draw [color=zzttqq] (39.6419200138118,2.709323521719836)-- (37.24060810212559,0.760015633465736); \draw [color=zzttqq] (37.24060810212559,0.760015633465736)-- (38.35246388200917,-2.1261369725402615); \draw [color=zzttqq] (38.35246388200917,-2.1261369725402615)-- (41.44094045625145,-1.960569491516952); \draw (31.57964192195906,15.659624648483248)-- (38.26927728561473,18.735086847313212); \draw (31.849816596011475,12.947796598080227)-- (39.879952222644796,9.688849824438034); \draw (29.354203423335605,11.85284426038754)-- (37.24060810212559,0.760015633465736); \draw [color=zzttqq] (30.76803661726853,-3.022042083545845)-- (32.47550364612156,-1.894158848478957); \draw [color=zzttqq] (32.47550364612156,-1.894158848478957)-- (31.93045927504162,0.0782738829771048); \draw [color=zzttqq] (31.93045927504162,0.0782738829771048)-- (29.88613629948438,0.1694211164728574); \draw [color=zzttqq] (29.88613629948438,0.1694211164728574)-- (29.16771958768762,-1.7466795267023065); \draw [color=zzttqq] (29.16771958768762,-1.7466795267023065)-- (30.76803661726853,-3.022042083545845); \draw [color=zzttqq] 
(25.834439342321645,-4.085710068733035)-- (27.693937347385617,-3.5544249244290085); \draw [color=zzttqq] (27.693937347385617,-3.5544249244290085)-- (27.76327163345554,-1.6217610912261866); \draw [color=zzttqq] (27.76327163345554,-1.6217610912261866)-- (25.946624573768485,-0.9585942977832149); \draw [color=zzttqq] (25.946624573768485,-0.9585942977832149)-- (24.754540659249407,-2.4813985124279974); \draw [color=zzttqq] (24.754540659249407,-2.4813985124279974)-- (25.834439342321645,-4.085710068733035); \draw [color=zzttqq] (21.982622046117694,-3.3551929953150093)-- (23.443656192953675,-1.9605694915169596); \draw [color=zzttqq] (23.443656192953675,-1.9605694915169596)-- (22.568774802622578,-0.1400810822105365); \draw [color=zzttqq] (22.568774802622578,-0.1400810822105365)-- (20.567034220437215,-0.40958087293199075); \draw [color=zzttqq] (20.567034220437215,-0.40958087293199075)-- (20.20477189431776,-2.3966293128652527); \draw [color=zzttqq] (20.20477189431776,-2.3966293128652527)-- (21.982622046117694,-3.3551929953150093); \draw [color=zzttqq] (17.710753784202524,-1.938532158762184)-- (19.06055375244573,-0.5659459877189097); \draw [color=zzttqq] (19.06055375244573,-0.5659459877189097)-- (18.17225785949232,1.1419425208701957); \draw [color=zzttqq] (18.17225785949232,1.1419425208701957)-- (16.27346083733697,0.8248894971303508); \draw [color=zzttqq] (16.27346083733697,0.8248894971303508)-- (15.988235632861286,-1.0789485563659031); \draw [color=zzttqq] (15.988235632861286,-1.0789485563659031)-- (17.710753784202524,-1.938532158762184); \draw (25.74442761129017,8.02524557393089)-- (29.88613629948438,0.1694211164728574); \draw (26.10008191447365,5.875886386967328)-- (25.946624573768485,-0.9585942977832149); \draw (24.165823077659372,4.87345052858861)-- (22.568774802622578,-0.1400810822105365); \draw (22.614731070284837,6.40327028353245)-- (18.17225785949232,1.1419425208701957); \draw [color=zzttqq] (11.832770627095226,-2.3497029288222673)-- 
(13.22647054051915,-1.2680063512512225); \draw [color=zzttqq] (13.22647054051915,-1.2680063512512225)-- (12.628392920072928,0.39174365839726377); \draw [color=zzttqq] (12.628392920072928,0.39174365839726377)-- (10.865060709302579,0.33582899961694945); \draw [color=zzttqq] (10.865060709302579,0.33582899961694945)-- (10.373339090035232,-1.3584781696271235); \draw [color=zzttqq] (10.373339090035232,-1.3584781696271235)-- (11.832770627095226,-2.3497029288222673); \draw [color=zzttqq] (7.3058699347198806,-1.2300524180989334)-- (8.860757139072371,-0.26956791135095415); \draw [color=zzttqq] (8.860757139072371,-0.26956791135095415)-- (8.42776866061024,1.5060237318714); \draw [color=zzttqq] (8.42776866061024,1.5060237318714)-- (6.605279859831052,1.6429152107751133); \draw [color=zzttqq] (6.605279859831052,1.6429152107751133)-- (5.911908315295607,-0.04807284571450726); \draw [color=zzttqq] (5.911908315295607,-0.04807284571450726)-- (7.3058699347198806,-1.2300524180989334); \draw [color=zzttqq] (6.176889003073895,2.4889435920291993)-- (5.844835787883899,4.414852240131268); \draw [color=zzttqq] (5.844835787883899,4.414852240131268)-- (3.9105777317866863,4.6941893679452855); \draw [color=zzttqq] (3.9105777317866863,4.6941893679452855)-- (3.047193725295304,2.940920559152053); \draw [color=zzttqq] (3.047193725295304,2.940920559152053)-- (4.447851120037782,1.5780037160887772); \draw [color=zzttqq] (4.447851120037782,1.5780037160887772)-- (6.176889003073895,2.4889435920291993); \draw [color=zzttqq] (4.915086785351911,5.743065100891315)-- (6.207423082573568,7.279116409575084); \draw [color=zzttqq] (6.207423082573568,7.279116409575084)-- (5.145905354375257,8.982867224907654); \draw [color=zzttqq] (5.145905354375257,8.982867224907654)-- (3.1975150214664714,8.49979182845976); \draw [color=zzttqq] (3.1975150214664714,8.49979182845976)-- (3.0548613005754297,6.497483998993561); \draw [color=zzttqq] (3.0548613005754297,6.497483998993561)-- (4.915086785351911,5.743065100891315); \draw 
(16.799553634800827,7.275404959971025)-- (12.628392920072928,0.39174365839726377); \draw (14.883157319669923,6.0972984015419485)-- (8.42776866061024,1.5060237318714); \draw (13.170512371052915,7.55584465711205)-- (5.844835787883899,4.414852240131268); \draw (14.028435897277692,9.635382375647339)-- (6.207423082573568,7.279116409575084); \draw [color=zzttqq] (4.317390998009919,10.657452685703491)-- (5.5127825726939035,12.184897475577547); \draw [color=zzttqq] (5.5127825726939035,12.184897475577547)-- (4.429492563513084,13.79378882024563); \draw [color=zzttqq] (4.429492563513084,13.79378882024563)-- (2.564590943482164,13.260693565581974); \draw [color=zzttqq] (2.564590943482164,13.260693565581974)-- (2.495308365809135,11.322331234290468); \draw [color=zzttqq] (2.495308365809135,11.322331234290468)-- (4.317390998009919,10.657452685703491); \draw [color=zzttqq] (4.070250997468012,14.867178713923998)-- (5.645603858769902,15.970304128743685); \draw [color=zzttqq] (5.645603858769902,15.970304128743685)-- (5.083280050994329,17.80943823305526); \draw [color=zzttqq] (5.083280050994329,17.80943823305526)-- (3.160391963803871,17.84296020456922); \draw [color=zzttqq] (3.160391963803871,17.84296020456922)-- (2.5343055771334706,16.02454381802318); \draw [color=zzttqq] (2.5343055771334706,16.02454381802318)-- (4.070250997468012,14.867178713923998); \draw (15.371278536118586,14.049065661548541)-- (5.5127825726939035,12.184897475577547); \draw (13.880217337229903,14.999734149063434)-- (5.645603858769902,15.970304128743685); \draw [color=zzttqq] (6.158386431463805,20.441170566351666)-- (4.9629948567798206,21.96861535622572); \draw [color=zzttqq] (4.9629948567798206,21.96861535622572)-- (3.1409122245790404,21.303736807638742); \draw [color=zzttqq] (3.1409122245790404,21.303736807638742)-- (3.2101948022520697,19.36537447634724); \draw [color=zzttqq] (3.2101948022520697,19.36537447634724)-- (5.075096422282987,18.832279221683585); \draw [color=zzttqq] 
(5.075096422282987,18.832279221683585)-- (6.158386431463805,20.441170566351666); \draw [color=zzttqq] (7.903565722061873,22.544957789505915)-- (8.920805306174907,24.540853926378794); \draw [color=zzttqq] (8.920805306174907,24.540853926378794)-- (7.336939598195463,26.125072086783895); \draw [color=zzttqq] (7.336939598195463,26.125072086783895)-- (5.340817172935715,25.1082766186362); \draw [color=zzttqq] (5.340817172935715,25.1082766186362)-- (5.691011376398764,22.895644299308962); \draw [color=zzttqq] (5.691011376398764,22.895644299308962)-- (7.903565722061873,22.544957789505915); \draw (14.323593547007713,16.711590337120207)-- (6.158386431463805,20.441170566351666); \draw (16.088676313342184,16.818907157676236)-- (7.903565722061873,22.544957789505915); \draw (28.91705161781937,16.240674217584996)-- (32.608873780487116,22.529590037397952); \draw (17.21768792947298,-7.205822322005217) node[anchor=north west] { Figure 4. Layer cycle graph $LCG(5,3)$}; \begin{scriptsize} \draw [fill=black] (21.11928368662371,11.188737830007517) circle (1.5pt); \draw[color=black] (20.37633085035029,11.273532972553782) node {$3$}; \draw [fill=black] (22.835259488187784,11.173409959359034) circle (1.5pt); \draw[color=black] (23.373558988384592,11.178990140904692) node {$2$}; \draw [fill=black] (23.380102844068425,12.800663354723385) circle (1.5pt); \draw[color=black] (23.309016156735515,13.718961289044679) node {$1$}; \draw [fill=black] (22.00085875498314,13.821689132015706) circle (1.5pt); \draw[color=black] (22.615387661788287,14.148046952342858) node {$5$}; \draw [fill=black] (20.603595673265406,12.825464370407792) circle (1.5pt); \draw[color=black] (20.33768792947298,13.751232704869224) node {$4$}; \draw [fill=qqqqff] (21.9826220461177,18.09544470595976) circle (1.5pt); \draw [fill=black] (23.576477479029673,19.888532067985828) circle (1.5pt); \draw [fill=black] (22.36367847443493,21.95847313075419) circle (1.5pt); \draw [fill=black] (20.02027203516137,21.444679700228047) circle 
(1.5pt); \draw [fill=black] (19.784766210829684,19.057196834198123) circle (1.5pt); \draw [fill=black] (29.354203423335605,11.85284426038754) circle (1.5pt); \draw[color=black] (27.85397287894528,11.112175893431059) node {$(x_{1_1}, 5)^{(2)}$}; \draw [fill=black] (31.849816596011475,12.947796598080227) circle (1.5pt); \draw[color=black] (33.608015264453196,13.486689873220135) node {$(x_{1_1}, 4)^{(2)}$}; \draw [fill=black] (31.57964192195906,15.659624648483248) circle (1.5pt); \draw[color=black] (31.645301106207813,16.528932437184665) node {$(x_{1_1}, 3)^{(2)}$}; \draw [fill=black] (28.91705161781937,16.240674217584996) circle (1.5pt); \draw[color=black] (27.344887215647125,16.528932437184665) node {$(x_{1_1}, 2)^{(2)}$}; \draw [fill=qqqqff] (27.541654985797543,13.887954550035332) circle (1.5pt); \draw[color=qqqqff] (25.68671588905082,14.544861199816493) node {$(x_{1_1}, 1)^{(2)}$}; \draw [fill=black] (24.165823077659372,4.87345052858861) circle (1.5pt); \draw [fill=black] (26.10008191447365,5.875886386967328) circle (1.5pt); \draw [fill=black] (25.74442761129017,8.02524557393089) circle (1.5pt); \draw [fill=qqqqff] (23.59036232686335,8.351186747127494) circle (1.5pt); \draw [fill=black] (22.614731070284837,6.40327028353245) circle (1.5pt); \draw [fill=black] (16.799553634800827,7.275404959971025) circle (1.5pt); \draw [fill=qqqqff] (16.271306744849774,9.462061111019457) circle (1.5pt); \draw [fill=black] (14.028435897277692,9.635382375647339) circle (1.5pt); \draw [fill=black] (13.170512371052915,7.55584465711205) circle (1.5pt); \draw [fill=black] (14.883157319669923,6.0972984015419485) circle (1.5pt); \draw [fill=black] (15.371278536118586,14.049065661548541) circle (1.5pt); \draw [fill=qqqqff] (16.73618124611577,15.17337641228766) circle (1.5pt); \draw [fill=black] (16.088676313342184,16.818907157676236) circle (1.5pt); \draw [fill=black] (14.323593547007713,16.711590337120207) circle (1.5pt); \draw [fill=black] (13.880217337229903,14.999734149063434) circle 
(1.5pt); \draw [fill=qqqqff] (14.283927088315918,23.7414470607691) circle (1.5pt); \draw [fill=black] (14.943093884089782,25.73266865533003) circle (1.5pt); \draw [fill=black] (13.253023353216788,26.974894844061872) circle (1.5pt); \draw [fill=black] (11.549335525978833,25.75141125585246) circle (1.5pt); \draw [fill=black] (12.186469073399314,23.753030624929522) circle (1.5pt); \draw [fill=qqqqff] (18.376688783641207,25.217300111423555) circle (1.5pt); \draw [fill=black] (19.392606967635725,26.994470873052073) circle (1.5pt); \draw [fill=black] (18.016353117968542,28.509842449212282) circle (1.5pt); \draw [fill=black] (16.149863277731814,27.66922282723627) circle (1.5pt); \draw [fill=black] (16.37256296647634,25.6343197530848) circle (1.5pt); \draw [fill=qqqqff] (22.144342349366035,25.777725944306205) circle (1.5pt); \draw [fill=black] (23.70929876510567,27.193702802166083) circle (1.5pt); \draw [fill=black] (22.846222875434552,29.11962571179351) circle (1.5pt); \draw [fill=black] (20.747856225007613,28.893934671795478) circle (1.5pt); \draw [fill=black] (20.31407020385561,26.828527028492953) circle (1.5pt); \draw [fill=black] (27.428294775233624,25.73266865533003) circle (1.5pt); \draw [fill=black] (28.291633134727608,27.92421987558411) circle (1.5pt); \draw [fill=black] (26.474130290889796,29.4225300192501) circle (1.5pt); \draw [fill=black] (24.48751339925445,28.156985393470332) circle (1.5pt); \draw [fill=qqqqff] (25.077219481436956,25.876525656792687) circle (1.5pt); \draw [fill=qqqqff] (32.608873780487116,22.529590037397952) circle (1.5pt); \draw[color=qqqqff] (30.63892960115504,22.364803496096464) node {$(x_{1_1}, 1)^{(3)}$}; \draw [fill=black] (35.9321606904652,22.513450407064763) circle (1.5pt); \draw[color=black] (37.832971986662965,22.422974822692826) node {$(x_{1_1}, 5)^{(3)}$}; \draw [fill=black] (36.97446252343122,25.66909665826192) circle (1.5pt); \draw[color=black] (38.12751481831204,26.375188534797342) node {$(x_{1_1}, 4)^{(3)}$}; \draw 
[fill=black] (34.295353572762444,27.63553292830615) circle (1.5pt); \draw[color=black] (33.99710092775135,28.406988356340968) node {$(x_{1_1}, 3)^{(3)}$}; \draw [fill=black] (31.59727134871906,25.695211128706887) circle (1.5pt); \draw[color=black] (29.964415621366125,26.317017208200983) node {$(x_{1_1}, 2)^{(3)}$}; \draw [fill=black] (42.25796186329197,16.062570288608036) circle (1.5pt); \draw[color=black] (44.21745711459172,15.809404031465583) node {$(x_{1_2}, 4)^{(3)}$}; \draw [fill=black] (43.0670567138712,18.917465504377393) circle (1.5pt); \draw[color=black] (44.67562844118803,19.835717832798288) node {$(x_{1_2}, 3)^{(3)}$}; \draw [fill=black] (40.601914074464155,20.56917157315409) circle (1.5pt); \draw[color=black] (40.7429431348028,21.316603317640093) node {$(x_{1_2}, 2)^{(3)}$}; \draw [fill=qqqqff] (38.26927728561473,18.735086847313212) circle (1.5pt); \draw[color=qqqqff] (36.474800660066656,19.60344641697374) node {$(x_{1_2}, 1)^{(3)}$}; \draw [fill=black] (39.292771106104425,15.949854079719815) circle (1.5pt); \draw[color=black] (37.35114331325927,15.280318368167404) node {$(x_{1_2}, 5)^{(3)}$}; \draw [fill=black] (43.77958301811022,6.685986462373042) circle (1.5pt); \draw[color=black] (45.77562844118803,6.4181335079229) node {$(x_{1_3}, 4)^{(3)}$}; \draw [fill=black] (44.799858606835414,9.551627803701711) circle (1.5pt); \draw[color=black] (46.52742826273157,10.179904477606513) node {$(x_{1_3}, 3)^{(3)}$}; \draw [fill=black] (42.389754231661996,11.407499425029656) circle (1.5pt); \draw[color=black] (42.16247154052181,12.257604209921953) node {$(x_{1_3}, 2)^{(3)}$}; \draw [fill=qqqqff] (39.879952222644796,9.688849824438034) circle (1.5pt); \draw[color=qqqqff] (39.08795756073289,11.067161467501051) node {$(x_{1_3}, 1)^{(3)}$}; \draw [fill=black] (40.73891365121282,6.770794335193035) circle (1.5pt); \draw[color=black] (38.8429431348028,6.4181335079229) node {$(x_{1_3}, 5)^{(3)}$}; \draw [fill=black] (41.44094045625145,-1.960569491516952) circle (1.5pt); 
\draw[color=black] (43.29474295634634,-2.1795085206725138) node {$(x_{1_4}, 4)^{(3)}$}; \draw [fill=black] (42.23786817270744,1.0279094451931547) circle (1.5pt); \draw[color=black] (43.81745711459172,1.6031767857129206) node {$(x_{1_4}, 3)^{(3)}$}; \draw [fill=black] (39.6419200138118,2.709323521719836) circle (1.5pt); \draw[color=black] (39.25250039238196,3.359962181326541) node {$(x_{1_4}, 2)^{(3)}$}; \draw [fill=qqqqff] (37.24060810212559,0.760015633465736) circle (1.5pt); \draw[color=qqqqff] (34.78117216511943,0.8627340432920173) node {$(x_{1_4}, 1)^{(3)}$}; \draw [fill=black] (38.35246388200917,-2.1261369725402615) circle (1.5pt); \draw[color=black] (36.39751481831204,-2.576322768146148) node {$(x_{1_4}, 5)^{(3)}$}; \draw [fill=black] (30.76803661726853,-3.022042083545845) circle (1.5pt); \draw [fill=black] (32.47550364612156,-1.894158848478957) circle (1.5pt); \draw [fill=black] (31.93045927504162,0.0782738829771048) circle (1.5pt); \draw [fill=qqqqff] (29.88613629948438,0.1694211164728574) circle (1.5pt); \draw [fill=black] (29.16771958768762,-1.7466795267023065) circle (1.5pt); \draw [fill=black] (25.834439342321645,-4.085710068733035) circle (1.5pt); \draw [fill=black] (27.693937347385617,-3.5544249244290085) circle (1.5pt); \draw [fill=black] (27.76327163345554,-1.6217610912261866) circle (1.5pt); \draw [fill=qqqqff] (25.946624573768485,-0.9585942977832149) circle (1.5pt); \draw [fill=black] (24.754540659249407,-2.4813985124279974) circle (1.5pt); \draw [fill=black] (21.982622046117694,-3.3551929953150093) circle (1.5pt); \draw [fill=black] (23.443656192953675,-1.9605694915169596) circle (1.5pt); \draw [fill=qqqqff] (22.568774802622578,-0.1400810822105365) circle (1.5pt); \draw [fill=black] (20.567034220437215,-0.40958087293199075) circle (1.5pt); \draw [fill=black] (20.20477189431776,-2.3966293128652527) circle (1.5pt); \draw [fill=black] (17.710753784202524,-1.938532158762184) circle (1.5pt); \draw [fill=black] (19.06055375244573,-0.5659459877189097) 
circle (1.5pt); \draw [fill=qqqqff] (18.17225785949232,1.1419425208701957) circle (1.5pt); \draw [fill=black] (16.27346083733697,0.8248894971303508) circle (1.5pt); \draw [fill=black] (15.988235632861286,-1.0789485563659031) circle (1.5pt); \draw [fill=black] (11.832770627095226,-2.3497029288222673) circle (1.5pt); \draw [fill=black] (13.22647054051915,-1.2680063512512225) circle (1.5pt); \draw [fill=qqqqff] (12.628392920072928,0.39174365839726377) circle (1.5pt); \draw [fill=black] (10.865060709302579,0.33582899961694945) circle (1.5pt); \draw [fill=black] (10.373339090035232,-1.3584781696271235) circle (1.5pt); \draw [fill=black] (7.3058699347198806,-1.2300524180989334) circle (1.5pt); \draw [fill=black] (8.860757139072371,-0.26956791135095415) circle (1.5pt); \draw [fill=qqqqff] (8.42776866061024,1.5060237318714) circle (1.5pt); \draw [fill=black] (6.605279859831052,1.6429152107751133) circle (1.5pt); \draw [fill=black] (5.911908315295607,-0.04807284571450726) circle (1.5pt); \draw [fill=black] (6.176889003073895,2.4889435920291993) circle (1.5pt); \draw [fill=qqqqff] (5.844835787883899,4.414852240131268) circle (1.5pt); \draw [fill=black] (3.9105777317866863,4.6941893679452855) circle (1.5pt); \draw [fill=black] (3.047193725295304,2.940920559152053) circle (1.5pt); \draw [fill=black] (4.447851120037782,1.5780037160887772) circle (1.5pt); \draw [fill=black] (4.915086785351911,5.743065100891315) circle (1.5pt); \draw [fill=black] (14.323593547007713,16.711590337120207) circle (1.5pt); \draw [fill=qqqqff] (6.207423082573568,7.279116409575084) circle (1.5pt); \draw [fill=black] (5.145905354375257,8.982867224907654) circle (1.5pt); \draw [fill=black] (3.1975150214664714,8.49979182845976) circle (1.5pt); \draw [fill=black] (3.0548613005754297,6.497483998993561) circle (1.5pt); \draw [fill=black] (4.317390998009919,10.657452685703491) circle (1.5pt); \draw [fill=qqqqff] (5.5127825726939035,12.184897475577547) circle (1.5pt); \draw [fill=black] 
(4.429492563513084,13.79378882024563) circle (1.5pt); \draw [fill=black] (2.564590943482164,13.260693565581974) circle (1.5pt); \draw [fill=black] (2.495308365809135,11.322331234290468) circle (1.5pt); \draw [fill=black] (4.070250997468012,14.867178713923998) circle (1.5pt); \draw [fill=qqqqff] (5.645603858769902,15.970304128743685) circle (1.5pt); \draw [fill=black] (5.083280050994329,17.80943823305526) circle (1.5pt); \draw [fill=black] (3.160391963803871,17.84296020456922) circle (1.5pt); \draw [fill=black] (2.5343055771334706,16.02454381802318) circle (1.5pt); \draw [fill=qqqqff] (6.158386431463805,20.441170566351666) circle (1.5pt); \draw [fill=black] (4.9629948567798206,21.96861535622572) circle (1.5pt); \draw [fill=black] (3.1409122245790404,21.303736807638742) circle (1.5pt); \draw [fill=black] (3.2101948022520697,19.36537447634724) circle (1.5pt); \draw [fill=black] (5.075096422282987,18.832279221683585) circle (1.5pt); \draw [fill=qqqqff] (7.903565722061873,22.544957789505915) circle (1.5pt); \draw [fill=black] (8.920805306174907,24.540853926378794) circle (1.5pt); \draw [fill=black] (7.336939598195463,26.125072086783895) circle (1.5pt); \draw [fill=black] (5.340817172935715,25.1082766186362) circle (1.5pt); \draw [fill=black] (5.691011376398764,22.895644299308962) circle (1.5pt); \end{scriptsize} \end{tikzpicture} \end{center}
\section{Main Results} \noindent
\begin{theorem}\label{c.1} Let $CCC(n)$ be the crystal cubic carbon defined above. If $n\geq 2$ is an integer, then the minimum size of a doubly resolving set of $CCC(n)$ is $24\times 7^{n-2}$. \end{theorem} \begin{proof} Let $V(CCC(n))=L_1 \cup L_2\cup \dots\cup L_n$ be the vertex set of the graph $CCC(n)$, where $L_1, L_2, \dots, L_n$ are the layers of $CCC(n)$ defined above. By Theorem 1 in [31], the minimum size of a resolving set of $CCC(n)$ is $16 \times 7^{n-2}$. Now let $$Z_1=\{(x_{1_1}, 2)^{(n)}, \dots, (x_{1_{7^{n-2}}}, 2)^{(n)}; \dots ; (x_{8_1}, 2)^{(n)},\dots, (x_{8_{7^{n-2}}}, 2)^{(n)}\},$$ be an ordered set of vertices of $CCC(n)$ consisting of exactly one vertex in each cube of the layer $L_n$ adjacent to the head vertex of that cube, and let
$$Z_2=\{(x_{1_1}, 4)^{(n)}, ..., (x_{1_{7^{n-2}}}, 4)^{(n)}; ... ; (x_{8_1}, 4)^{(n)},..., (x_{8_{7^{n-2}}}, 4)^{(n)}\},$$
be an ordered set of vertices of $CCC(n)$, again consisting of exactly one vertex in each cube of the layer $L_n$ adjacent to the head vertex of that cube. Then the ordered set $Z_3=Z_1\cup Z_2$, consisting of exactly two vertices in each cube of the layer $L_n$ adjacent to the head vertex of that cube, is a minimal resolving set of $CCC(n)$. Moreover, it is not hard to see that for every two vertices $u$ and $v$ lying in the layers $L_1 \cup L_2\cup \dots \cup L_{n-1}$ we have $r(u|Z_3)-r(v|Z_3)\neq \mu I$, where $\mu$ is an integer and $I$ denotes the all-ones vector of length $16 \times 7^{n-2}$. We now show that $Z_3$ cannot doubly resolve all the vertices of the cubes of the layer $L_n$. To this end, consider the cube $Q^{(n)}_{1_1}$ in the layer $L_n$ and let $x$ be an arbitrary element of the set $Z_3$ with $x\neq (x_{1_1}, 2)^{(n)}$ and $x\neq (x_{1_1}, 4)^{(n)}$, and let the distance between the head vertex $(x_{1_1}, 1)^{(n)}$ and $x$ be the positive integer $c$, that is, $r((x_{1_1}, 1)^{(n)}| x)=c$. Let $Z=\{(x_{1_1}, 2)^{(n)}, (x_{1_1}, 4)^{(n)}, x\}$ be the corresponding subset of $Z_3$. Then not all vertices of the cube $Q^{(n)}_{1_1}$ are doubly resolved by $Z$: indeed, the metric representations with respect to $Z$ are\\
$r((x_{1_1}, 1)^{(n)}| Z)=(1, 1, c)$\\
$r((x_{1_1}, 2)^{(n)}| Z)=(0, 2, c+1)$\\
$r((x_{1_1}, 3)^{(n)}| Z)=(1, 1, c+2)$\\
$r((x_{1_1}, 4)^{(n)}| Z)=(2, 0, c+1)$\\
$r((x_{1_1}, 5)^{(n)}| Z)=(2, 2, c+1)$\\
$r((x_{1_1}, 6)^{(n)}| Z)=(1, 3, c+2)$\\
$r((x_{1_1}, 7)^{(n)}| Z)=(2, 2, c+3)$\\
$r((x_{1_1}, 8)^{(n)}| Z)=(3, 1, c+2)$.\\ In particular, $r((x_{1_1}, 1)^{(n)}| Z)-r((x_{1_1}, 5)^{(n)}| Z)=(-1,-1,-1)$, so the vertices $(x_{1_1}, 1)^{(n)}$ and $(x_{1_1}, 5)^{(n)}$ are not doubly resolved by $Z$. Since $x$ was an arbitrary element of $Z_3$, the set $Z_3$ cannot doubly resolve all the vertices of the cubes of the layer $L_n$. If instead we consider the set $Z\cup\{(x_{1_1}, 5)^{(n)}\}$ of vertices of $CCC(n)$, then we have\\
$r((x_{1_1}, 1)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(1, 1, c,1)$\\
$r((x_{1_1}, 2)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(0, 2, c+1,2)$\\
$r((x_{1_1}, 3)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(1, 1, c+2,3)$\\
$r((x_{1_1}, 4)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(2, 0, c+1,2)$\\
$r((x_{1_1}, 5)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(2, 2, c+1, 0)$\\
$r((x_{1_1}, 6)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(1, 3, c+2,1)$\\
$r((x_{1_1}, 7)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(2, 2, c+3,2)$\\
$r((x_{1_1}, 8)^{(n)}| Z\cup\{(x_{1_1}, 5)^{(n)}\})=(3, 1, c+2, 1)$.\\ Therefore, every pair of vertices of $Q^{(n)}_{1_1}$ is doubly resolved by the set $Z\cup\{(x_{1_1}, 5)^{(n)}\}$. Moreover, every minimal resolving set of $CCC(n)$ consists of exactly two vertices in each cube of the layer $L_n$ adjacent to the head vertex of that cube, and hence $\psi(CCC(n))$ must be greater than $16\times 7^{n-2}$. By the discussion above, we deduce that if $$Z_4=\{(x_{1_1}, 5)^{(n)}, \dots, (x_{1_{7^{n-2}}}, 5)^{(n)}; \dots ; (x_{8_1}, 5)^{(n)},\dots, (x_{8_{7^{n-2}}}, 5)^{(n)}\},$$ is the ordered set of vertices of $CCC(n)$ consisting of one further vertex in each cube of the layer $L_n$ adjacent to the head vertex of that cube, then the ordered set $Z_5=Z_3\cup Z_4$, consisting of exactly three vertices in each cube of the layer $L_n$ adjacent to the head vertex of that cube, is a minimal doubly resolving set for $CCC(n)$. Thus the minimum size of a doubly resolving set of $CCC(n)$ is $3\times 8 \times 7^{n-2}=24\times 7^{n-2}$. \end{proof}
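As an independent sanity check (not part of the original proof), the following Python sketch verifies the two claims above numerically: among the listed representations with respect to the three landmarks of $Z$ there is a pair differing by a constant vector, while with respect to the four landmarks of $Z\cup\{(x_{1_1}, 5)^{(n)}\}$ all eight vertices are doubly resolved. The value $c=10$ is an arbitrary sample distance.

```python
# Check of the metric representations listed in the proof above: with only
# the three landmarks of Z, the vertices (x_{1_1},1)^{(n)} and
# (x_{1_1},5)^{(n)} are NOT doubly resolved, while adding (x_{1_1},5)^{(n)}
# as a fourth landmark doubly resolves all pairs.
from itertools import combinations

def doubly_resolved(vectors):
    """True iff no two representations differ by a constant vector mu*I."""
    for u, v in combinations(vectors, 2):
        diffs = {a - b for a, b in zip(u, v)}
        if len(diffs) == 1:          # u - v = mu * (1, ..., 1)
            return False
    return True

c = 10  # sample value of r((x_{1_1},1)^{(n)} | x)
reps4 = [            # representations w.r.t. Z together with (x_{1_1},5)^{(n)}
    (1, 1, c,     1), (0, 2, c + 1, 2), (1, 1, c + 2, 3), (2, 0, c + 1, 2),
    (2, 2, c + 1, 0), (1, 3, c + 2, 1), (2, 2, c + 3, 2), (3, 1, c + 2, 1),
]
reps3 = [v[:3] for v in reps4]       # representations w.r.t. Z alone

print(doubly_resolved(reps3))        # False: vertices 1 and 5 differ by -I
print(doubly_resolved(reps4))        # True
```

Since $c$ only shifts the third coordinate uniformly, any positive integer gives the same outcome.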
\begin{theorem}\label{c.2} Let $CCC(n)$ be the crystal cubic carbon defined above. If $n\geq 2$ is an integer, then the minimum size of a strong resolving set of $CCC(n)$ is $32\times 7^{n-2}-1$. \end{theorem} \begin{proof} Let $V(CCC(n))=L_1 \cup L_2\cup \dots\cup L_n$ be the vertex set of the graph $CCC(n)$, where $L_1, L_2, \dots, L_n$ are the layers of $CCC(n)$ defined above. By Theorem 1 in [31], the minimum size of a resolving set of $CCC(n)$ is $16 \times 7^{n-2}$. Moreover, the ordered set $Z_3=Z_1\cup Z_2$ of vertices of $CCC(n)$ defined in the previous theorem, consisting of exactly two vertices in each cube of the layer $L_n$ adjacent to the head vertex of that cube, is a minimal resolving set of $CCC(n)$, and every two vertices $u$ and $v$ lying in the layers $L_1 \cup L_2\cup \dots \cup L_{n-1}$ are strongly resolved by an element of $Z_3$. Without loss of generality, consider the cube $Q^{(n)}_{1_1}$ in the layer $L_n$: every two vertices of $Q^{(n)}_{1_1}$, except the pair $(x_{1_1}, 3)^{(n)}$ and $(x_{1_1}, 5)^{(n)}$, are strongly resolved by an element of $Z_3$. Hence, if we consider the ordered set $Z_5=Z_3\cup Z_4$ of vertices of $CCC(n)$ defined in the previous theorem, consisting of exactly three vertices in each cube of the layer $L_n$ adjacent to the head vertex of that cube, then every two vertices lying in one cube of the layer $L_n$ are strongly resolved by an element of $Z_5$; the number of these vertices is $24\times 7^{n-2}$.
Note that two vertices of $CCC(n)$ that lie in distinct cubes of the layer $L_n$ and are mutually maximally distant cannot be strongly resolved by an element of $Z_5$; hence, of any two mutually maximally distant vertices from distinct cubes, at least one must belong to every minimum strong resolving set of $CCC(n)$. Therefore, in each cube of the layer $L_n$ except one, every minimum strong resolving set of $CCC(n)$ must contain a vertex of that cube at maximum distance from its head vertex; the number of such vertices is $8\times7^{n-2}-1$. Thus the minimum size of a strong resolving set of $CCC(n)$ is $24\times 7^{n-2}+8\times 7^{n-2}-1=32\times 7^{n-2}-1$. \end{proof}
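The final count combines the two contributions above. The following short sketch (a sanity check over a sample range of $n$, not part of the proof) confirms that they add up to the claimed bound.

```python
# Arithmetic check of the count above: the 24*7^(n-2) vertices handled
# inside the cubes of layer L_n by Z_5, plus the 8*7^(n-2) - 1 further
# max-distance vertices (one per cube of L_n, except one), give
# 32*7^(n-2) - 1 in total.
for n in range(2, 10):
    inside = 24 * 7 ** (n - 2)       # size of Z_5
    across = 8 * 7 ** (n - 2) - 1    # one max-distance vertex per cube, minus one
    assert inside + across == 32 * 7 ** (n - 2) - 1
print("count verified for n = 2..9")
```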
\begin{theorem}\label{d.1} Let $LCG(n, k)$ be the layer cycle graph defined above. If $n, k$ are integers with $n\geq 3$ and $k\geq 2$, then the minimum size of a resolving set of $LCG(n, k)$ is $n(n-1)^{k-2}$. \end{theorem} \begin{proof} Let $V(LCG(n, k))=U_1 \cup U_2\cup \dots\cup U_k$ be the vertex set of the graph $LCG(n, k)$, where $U_1, U_2, \dots, U_k$ are the layers of $LCG(n, k)$ defined above. No ordered subset $R_1$ of vertices of the layers $U_1 \cup U_2\cup \dots\cup U_{k-1}$ is a resolving set for $LCG(n, k)$. In particular, if $R_2$ is the ordered set of all head vertices of the layer $U_k$, then $R_2$ is not a resolving set for $LCG(n, k)$: the degree of each head vertex in the layer $U_k$ is $3$, so the two vertices of the cycle $C^{(k)}_{1_1}$ that are adjacent to the head vertex $(x_{1_1}, 1)^{(k)}$ have the same representation with respect to $R_2$. Now let $R_3= \{r_1, r_2, \dots, r_z\}$ be a minimal resolving set of $LCG(n, k)$. We claim that $R_3$ contains at least one vertex of each cycle of the layer $U_k$. Suppose, for a contradiction, that no vertex of some cycle of the layer $U_k$ belongs to $R_3$; without loss of generality, let this cycle be $C^{(k)}_{1_1}$ with head vertex $(x_{1_1}, 1)^{(k)}$. Then the two vertices of $C^{(k)}_{1_1}$ adjacent to $(x_{1_1}, 1)^{(k)}$ have identical metric representations with respect to $R_3$, a contradiction. Therefore, at least one vertex of each cycle of the layer $U_k$ belongs to every minimal resolving set of $LCG(n, k)$.
Since the layer $U_k$ of the graph $LCG(n, k)$ consists of exactly $n(n-1)^{k-2}$ cycles, the minimum size of a resolving set of $LCG(n, k)$ is at least $n(n-1)^{k-2}$. Now suppose that $$R_4=\{(x_{1_1}, n)^{(k)}, \dots, (x_{1_{(n-1)^{k-2}}}, n)^{(k)}; \dots ; (x_{n_{1}}, n)^{(k)},\dots, (x_{n_{(n-1)^{k-2}}}, n)^{(k)}\},$$ is the ordered set of vertices of $LCG(n, k)$ consisting of exactly one vertex in each cycle of the layer $U_k$ adjacent to the head vertex of that cycle. We claim that the set $R_4$ is a minimal resolving set for
$LCG(n, k)$. Since each vertex in the layer $U_p$, $1\leq p< k$, is adjacent to exactly one vertex (a head vertex) of the layer $U_{p+1}$, all the vertices in the layer $U_p$ have different representations with respect to $R_4$. It therefore remains to show that all the vertices in the layer $U_k$ have different representations with respect to $R_4$. Since the layer $U_k$ consists of pairwise isomorphic cycles and the set $R_4$ contains exactly one vertex in each cycle of the layer $U_k$ adjacent to the head vertex of that cycle, it suffices to show that all the vertices of an arbitrary cycle of the layer $U_k$ have different representations with respect to $R_4$. To this end, consider the cycle $C^{(k)}_{1_1}$ in the layer $U_k$ and let $x$ be an arbitrary element of $R_4$ with $x\neq(x_{1_1}, n)^{(k)}$, and let the distance between the head vertex $(x_{1_1}, 1)^{(k)}\in C^{(k)}_{1_1}$ and $x$ be the positive integer $c$, that is, $r((x_{1_1}, 1)^{(k)}| x)=c$. Let $R=\{(x_{1_1}, n)^{(k)}, x\}$ be the corresponding subset of $R_4$; we show that all the vertices of the cycle $C^{(k)}_{1_1}$ have different representations with respect to $R$. Indeed, if $n$ is even, then for every $1\leq i\leq\lfloor\frac{n}{2}\rfloor$ we have $r((x_{1_1}, i)^{(k)}| R)=(i, c+i-1)$, while for $\lfloor\frac{n}{2}\rfloor< i\leq n$ we have $r((x_{1_1}, i)^{(k)}| R)=(n-i, n+c+1-i)$. If $n$ is odd, then there are two vertices of the cycle $C^{(k)}_{1_1}$ at maximum distance from the head vertex $(x_{1_1}, 1)^{(k)}$, and for every $1\leq i\leq\lfloor\frac{n}{2}\rfloor$ we have $r((x_{1_1}, i)^{(k)}| R)=(i, c+i-1)$, for $i=\lceil\frac{n}{2}\rceil$ we have $r((x_{1_1}, i)^{(k)}| R)=(n-i, c+i-1)$, and for $\lceil\frac{n}{2}\rceil< i\leq n$ we have $r((x_{1_1}, i)^{(k)}| R)=(n-i, n+c+1-i)$. In either case these $n$ representations are pairwise distinct. Therefore, the minimum size of a resolving set of $LCG(n, k)$ is $n(n-1)^{k-2}$. \end{proof}
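The case formulas above can be checked against plain graph distances. The following Python sketch (an illustration, not part of the proof) models the cycle $C^{(k)}_{1_1}$ with vertices $1,\dots,n$, head vertex $1$ and landmark vertex $n$; it assumes, as in the proof, that every shortest path from a cycle vertex to the outside landmark $x$ passes through the head vertex, so that the second coordinate is $c$ plus the distance to the head.

```python
# Check of the closed-form representations in the proof above: on an n-cycle
# with vertices 1..n (head vertex 1, landmark vertex n), the pair
# (d(i, n), c + d(i, 1)) matches the case formulas, and all n
# representations are distinct.  c is a sample distance to the outside
# landmark x.
def cycle_dist(i, j, n):
    """Graph distance between vertices i and j on the n-cycle 1-2-...-n-1."""
    d = abs(i - j)
    return min(d, n - d)

def representation(i, n, c):
    return (cycle_dist(i, n, n), c + cycle_dist(i, 1, n))

def formula(i, n, c):
    if i <= n // 2:
        return (i, c + i - 1)
    if n % 2 == 1 and i == (n + 1) // 2:   # extra case for odd n
        return (n - i, c + i - 1)
    return (n - i, n + c + 1 - i)

for n in (5, 6, 7, 8):                     # sample even and odd cycle lengths
    for c in (1, 3):
        reps = [representation(i, n, c) for i in range(1, n + 1)]
        assert reps == [formula(i, n, c) for i in range(1, n + 1)]
        assert len(set(reps)) == n         # all representations distinct
print("formulas verified")
```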
\begin{theorem}\label{d.2} Let $LCG(n, k)$ be the layer cycle graph defined above. If $n\geq 4$ and $k\geq 2$ are integers, then the minimum size of a doubly resolving set of $LCG(n, k)$ is $2n(n-1)^{k-2}$. \end{theorem} \begin{proof} By the previous theorem, the ordered set $$R_4=\{(x_{1_1}, n)^{(k)}, \dots, (x_{1_{(n-1)^{k-2}}}, n)^{(k)}; \dots ; (x_{n_{1}}, n)^{(k)},\dots, (x_{n_{(n-1)^{k-2}}}, n)^{(k)}\},$$ of vertices of the layer $U_k$ of $LCG(n, k)$ is a minimal resolving set of $LCG(n, k)$. In particular, the set $R_4$ cannot doubly resolve all the vertices of the cycles of the layer $U_k$. Moreover, every minimal resolving set of $LCG(n, k)$ consists of exactly one vertex in each cycle of the layer $U_k$ adjacent to the head vertex of that cycle, and hence $\psi(LCG(n, k))$ must be greater than $n(n-1)^{k-2}$. Now let $$R_5=\{(x_{1_1}, \lfloor\tfrac{n}{2}\rfloor+1)^{(k)}, \dots, (x_{1_{(n-1)^{k-2}}}, \lfloor\tfrac{n}{2}\rfloor+1)^{(k)}; \dots ; (x_{n_{1}}, \lfloor\tfrac{n}{2}\rfloor+1)^{(k)},\dots, (x_{n_{(n-1)^{k-2}}}, \lfloor\tfrac{n}{2}\rfloor+1)^{(k)}\},$$ be the ordered set of vertices of $LCG(n, k)$ consisting of exactly one vertex in each cycle of the layer $U_k$ at maximum distance from the head vertex of that cycle, and set $R_6=R_4\cup R_5$. Then, by the same argument as in the proof of Theorem \ref{c.1}, the ordered set $R_6$ is a minimal doubly resolving set for $LCG(n, k)$, and hence the minimum size of a doubly resolving set of $LCG(n, k)$ is $2n(n-1)^{k-2}$. \end{proof}
\begin{theorem}\label{d.3} Let $LCG(n, k)$ be the layer cycle graph defined above. If $n\geq 3$ and $k\geq 2$ are integers, then the minimum size of a strong resolving set of $LCG(n, k)$ is $\lceil\frac{n}{2}\rceil n(n-1)^{k-2}-1$. \end{theorem} \begin{proof} Let $V(LCG(n, k))=U_1 \cup U_2\cup \dots\cup U_k$ be the vertex set of the graph $LCG(n, k)$, where $U_1, U_2, \dots, U_k$ are the layers of $LCG(n, k)$ defined above. The ordered set $$R_7=\{(x_{1_1}, 2)^{(k)}, \dots, (x_{1_{(n-1)^{k-2}}}, 2)^{(k)}; \dots ; (x_{n_{1}}, 2)^{(k)},\dots, (x_{n_{(n-1)^{k-2}}}, 2)^{(k)}\},$$ of vertices of $LCG(n, k)$, consisting of exactly one vertex in each cycle of the layer $U_k$ adjacent to the head vertex of that cycle, is a minimal resolving set of $LCG(n, k)$. Every two vertices $u$ and $v$ in the layers $U_1 \cup U_2\cup \dots \cup U_{k-1}$ are strongly resolved by an element of $R_7$. In particular, for each cycle of the layer $U_k$, each vertex of that cycle at maximum distance from its head vertex is strongly resolved by an element of $R_7$. The set $R_7$ cannot strongly resolve the other vertices of the cycles of the layer $U_k$; hence consider the ordered set $$R_8=\{(x_{1_1}, 2)^{(k)}, \dots, (x_{1_1}, \lceil\tfrac{n}{2}\rceil)^{(k)};\dots; (x_{n_{(n-1)^{k-2}}}, 2)^{(k)},\dots, (x_{n_{(n-1)^{k-2}}}, \lceil\tfrac{n}{2}\rceil)^{(k)}\},$$ of vertices of $LCG(n, k)$, consisting of exactly $\lceil\frac{n}{2}\rceil-1$ elements in each cycle of the layer $U_k$. Then all the vertices in each cycle of the layer $U_k$ are strongly resolved by an element of $R_8$, and the number of elements of $R_8$ is $n(n-1)^{k-2}(\lceil\frac{n}{2}\rceil-1)$. Note that two vertices of $LCG(n, k)$ that lie in distinct cycles of the layer $U_k$ and are mutually maximally distant cannot be strongly resolved by an element of $R_8$; hence, of any two mutually maximally distant vertices from distinct cycles of the layer $U_k$, at least one must belong to every minimum strong resolving set of $LCG(n, k)$. Therefore, in each cycle of the layer $U_k$ except one, every minimum strong resolving set of $LCG(n, k)$ must contain a vertex of that cycle at maximum distance from its head vertex; the number of such vertices is $n(n-1)^{k-2}-1$. Thus the minimum size of a strong resolving set of $LCG(n, k)$ is $n(n-1)^{k-2}(\lceil\frac{n}{2}\rceil-1)+n(n-1)^{k-2}-1=\lceil\frac{n}{2}\rceil n(n-1)^{k-2}-1$. \end{proof}
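Again the final count follows from elementary arithmetic; the following short sketch (a sanity check over sample values of $n$ and $k$, not part of the proof) confirms the identity.

```python
# Arithmetic check of the count above: the n(n-1)^(k-2) cycles of layer U_k
# each contribute ceil(n/2) - 1 elements of R_8, and all cycles but one
# contribute one further max-distance vertex.
from math import ceil

for n in range(3, 12):
    for k in range(2, 7):
        cycles = n * (n - 1) ** (k - 2)
        total = cycles * (ceil(n / 2) - 1) + cycles - 1
        assert total == ceil(n / 2) * cycles - 1
print("count verified")
```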
\section{Conclusion} In the present work we have constructed the crystal cubic carbon $CCC(n)$ by a new method and computed the minimum sizes of doubly resolving sets and strong resolving sets for $CCC(n)$. In addition, we have constructed the layer cycle graph $LCG(n, k)$ and computed the minimum sizes of several types of resolving sets for it; further research on the topological indices of this family of graphs may also yield interesting results. \newline
{\footnotesize
\noindent \textbf{Data Availability}\\ No data were used to support this study.\\[2mm] \noindent \textbf{Conflicts of Interest}\\ The authors declare that there are no conflicts of interest.\\[2mm]
\noindent \textbf{Acknowledgements}\\ This work was supported in part by Anhui Provincial Natural Science Foundation under Grant 2008085J01 and Natural Science Fund of Education Department of Anhui Province under Grant KJ2020A0478. \\[2mm] \noindent \textbf{Authors' informations}\\ \noindent Jia-Bao Liu${}^a$ (\url{liujiabaoad@163.com;liujiabao@ahjzu.edu.cn})\\ Ali Zafari${}^{b}$(\textsc{Corresponding Author}) (\url{zafari.math.pu@gmail.com}; \url{zafari.math@pnu.ac.ir})\\ \noindent ${}^{a}$ School of Mathematics and Physics, Anhui Jianzhu University, Hefei 230601, P.R. China.\\ ${}^{b}$ Department of Mathematics, Faculty of Science, Payame Noor University, P.O. Box 19395-4697, Tehran, Iran.
\end{document}
\begin{document}
\begin{abstract}
In this paper we present an explicit combinatorial description of a special class of facets of the secondary polytopes of hypersimplices.
These facets correspond to polytopal subdivisions called multi-splits.
We show a relation between the cells in a multi-split of the hypersimplex and nested matroids.
Moreover, we get a description of all multi-splits of a product of simplices.
Additionally, we present a computational result to derive explicit lower bounds on the number of facets of secondary polytopes of hypersimplices. \end{abstract} \maketitle
\section{Introduction} \noindent It is a natural idea to decompose a difficult problem into smaller pieces. There are many natural situations in which one has fixed a finite set of points, i.e, a \emph{point configuration}. All convex combinations of these points form a convex body called \emph{polytope}. For a basic background on polytopes see the monograph \cite{Ziegler:2000} by Ziegler. It is typical to ask for specific subdivisions or even all subdivisions of a polytope into smaller polytopes whose vertices are points of a given point configuration. The given points are often the vertices of the polytope. Famous examples for subdivisions are placing, minimum weight, Delaunay triangulations and regular subdivisions in general. For an overview of applications see the monograph \cite{LoeraRambauSantos:2010} by De Loera, Rambau, and Santos.
All subdivisions form a finite lattice with respect to coarsening and refinement. Gel{\cprime}fand, Kapranov and Zelevinsky showed that the sublattice of regular subdivisions is the face lattice of a polytope; see \cite{GelfandKapranovZelevinsky:1994}. This polytope is called the \emph{secondary polytope} of the point configuration. The vertices of the secondary polytope correspond to finest subdivisions, i.e., triangulations. This polytope can be realized as the convex hull of the GKZ-vectors. An important example in combinatorics is the \emph{associahedron}, which is the secondary polytope of a convex $n$-gon; see \cite{CeballosSantosZiegler:2015}. It is remarkable that the number of triangulations of an $n$-gon is the Catalan number $\frac{1}{n-1}\tbinom{2n-4}{n-2}$, while the number of diagonals is $\frac{n(n-3)}{2}$, a triangular number minus one. A subdivision into two maximal cells is a coarsest subdivision and called a \emph{split}. The coarsest subdivisions of the $n$-gon are the splits along the diagonals. This example shows that the associahedron has $\frac{1}{n-1}\tbinom{2n-4}{n-2}$ vertices and only $\frac{n(n-3)}{2}$ facets. One may thus expect that in general the number of facets of the secondary polytope is much smaller than the number of vertices.
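As a quick sanity check on these counts, the following small Python sketch (the helper names are ours, introduced only for illustration) evaluates the Catalan number $C_{n-2}=\frac{1}{n-1}\binom{2n-4}{n-2}$, which counts the triangulations of a convex $n$-gon, and the number $\frac{n(n-3)}{2}$ of its diagonals.

```python
from math import comb

def triangulations(n):
    """Number of triangulations of a convex n-gon: the Catalan number C_{n-2}."""
    return comb(2 * n - 4, n - 2) // (n - 1)

def diagonals(n):
    """Number of diagonals of a convex n-gon."""
    return n * (n - 3) // 2

# For the square (n = 4): 2 triangulations and 2 diagonals.
# For the hexagon (n = 6): 14 triangulations and 9 diagonals.
```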
Herrmann and Joswig were the first to systematically study splits and hence facets of the secondary polytope. Herrmann introduced a generalization of splits in \cite{Herrmann:2011}: a multi-split is a coarsest subdivision such that all maximal cells meet in a common cell.
The purpose of this paper is to further explore the facet structure of the secondary polytope for two important classes of polytopes -- products of simplices and hypersimplices. In particular, we investigate their multi-splits. Triangulations of products of simplices have been studied in algebraic geometry, optimization and game theory; see \cite[Section 6.2]{LoeraRambauSantos:2010}. An additional motivation to study splits of products of simplices is their relation to tropical convexity \cite{DevelinSturmfels:2004}, tropical geometry and matroid theory.
The focus of our interest is on hypersimplices. The \emph{hypersimplex} $\Delta(d,n)$ is the slice of the $n$-dimensional $0/1$-cube with the hyperplane $x_1+\ldots+x_n=d$. Hypersimplices appear frequently in mathematics. For example, they appear in algebraic combinatorics, graph theory, optimization, analysis and number theory (see \cite[Subsection 6.3.6]{LoeraRambauSantos:2010}), as well as in phylogenetics, matroid theory and tropical geometry. The latter three topics are closely related, and splits of hypersimplices play an important role in all of them. Bandelt and Dress \cite{BandeltDress:1992} were the first to study the split decomposition of a finite metric in phylogenetic analysis. Later Hirai \cite{Hirai:2006}, Herrmann, Joswig \cite{HerrmannJoswig:2008} and Koichi \cite{Koichi:2014} developed split decompositions of polyhedral subdivisions. In particular they discussed subdivisions of hypersimplices. The special case of a subdivision of a hypersimplex $\Delta(2,n)$ corresponds to a class of finite pseudo-metrics. The matroid subdivisions of $\Delta(2,n)$ are totally split-decomposable and correspond to phylogenetic trees with $n$ labeled leaves; see \cite{HerrmannJoswig:2008} and \cite{SpeyerSturmfels:2004}.
A product of simplices appears as vertex figures of any vertex of a hypersimplex. Moreover, a subdivision of a product of simplices extends to a subdivision of a hypersimplex via the tropical Stiefel map. This lift has been studied in \cite{HerrmannJoswigSpeyer:2014}, \cite{Rincon:2013} and \cite{FinkRincon:2015}.
This paper comprises three main results that combine polyhedral and matroid theory as well as tropical geometry. In Section~\ref{sec:splits_hypersimplex} we show that any multi-split of a hypersimplex is the image of a multi-split of a product of simplices under the tropical Stiefel map (Theorem~\ref{thm:main1}). To reach this goal we introduce the concept of \enquote{negligible} points in a point configuration. With this tool we are able to show that the point configuration consisting of the vertices of a product of simplices suffices to describe a given multi-split of the hypersimplex. This already implies that all multi-splits of a hypersimplex are subdivisions into matroid polytopes.
In Section~\ref{sec:nested} we define a relation depending on matroid properties of the occurring cells. We use this relation to enumerate all multi-splits of hypersimplices (Proposition~\ref{prop:enum_ksplits}) and show that all maximal cells in a multi-split of a hypersimplex correspond to matroid polytopes of nested matroids (Theorem~\ref{thm:main2}). This generalizes the last statement of \cite[Proposition 30]{JoswigSchroeter:2017}, which treats $2$-splits, i.e., multi-splits with exactly two maximal cells. As a consequence of the enumeration of all multi-splits of a hypersimplex we get the enumeration of all multi-splits of a product of simplices (Theorem~\ref{thm:simplices}). Nested matroids are a well-studied class in matroid theory. Hampe recently introduced the \enquote{intersection ring of matroids} in \cite{Hampe:2017} and showed that every matroid is a linear combination of nested matroids in this ring. Moreover, matroid polytopes of nested matroids describe the intersection of linear hyperplanes in a matroid subdivision locally. Hence they occur frequently in those subdivisions; see \cite{JoswigSchroeter:2017}.
In the final Section~\ref{sec:computations} we take a closer look at coarsest matroid subdivisions of the hypersimplex in general. Matroid subdivisions are important in tropical geometry as they are dual to tropical linear spaces. The lifting functions of regular matroid subdivisions are also known as \enquote{valuated matroids}, introduced by Dress and Wenzel \cite{DressWenzel:1992}. Coarsest matroid subdivisions have been studied in \cite{HerrmannJoswigSpeyer:2014}. We compare two constructions of matroid subdivisions: those that are in the image of the tropical Stiefel map and those that appear as the corank vector of a matroid. We present our computational results on the number of coarsest matroid subdivisions of the hypersimplex $\Delta(d,n)$ for small parameters $d$ and $n$ (Proposition~\ref{prop:comp}), which illustrate how fast the number of combinatorial types of matroid subdivisions grows.
\section{Multi-splits of the hypersimplex}\label{sec:splits_hypersimplex} \noindent In this section we will study a natural class of coarsest subdivisions, called \enquote{multi-splits}. Our goal is to show that any \enquote{multi-split} of the hypersimplex can be derived from a \enquote{multi-split} of a product of simplices. We assume that the reader has a basic background on subdivisions and secondary fans. The basics can be found in \cite{LoeraRambauSantos:2010}. We briefly introduce our notation and definitions.
We consider a finite set of points in $\RR^n$ as a \emph{point configuration} $\cP$, i.e., each point occurs once in $\cP$. A subdivision $\Sigma$ of $\cP$ is a collection of subsets of $\cP$ that satisfies the Closure, Union and Intersection Property. We call the convex hull of such a subset a \emph{cell}. The \emph{lower convex hull} of a polytope $Q\subset\RR^{n+1}$ is the collection of all faces with an inner facet normal with a strictly positive $(n+1)$-coordinate. A subdivision is \emph{regular} when it is combinatorially isomorphic to the lower convex hull of a polytope $Q\subset\RR^{n+1}$; this polytope is called the \emph{lifted polytope}. The $(n+1)$-coordinate is called the \emph{height}. The heights of the points in $\cP$ form the \emph{lifting vector}. The set of all lifting vectors for which the projections of the lower convex hull coincide forms an open cone. The closure of such a cone is called a \emph{secondary cone}. The collection of all secondary cones is the \emph{secondary fan} of the point configuration $\cP$. We call a point $q\in\cP$ \emph{negligible in the subdivision $\Sigma$} if there is a cell containing the point $q$ and $q$ does not occur as a vertex of any $2$-dimensional cell. In particular, a negligible point $q$ lies in a cell $C$ if and only if $q \in \conv(C\setminus \{q\})$. For a regular subdivision this means that $q$ is lifted to a redundant point on the lower convex hull of the lifted polytope.
\begin{example}\label{ex:fivepoints} Consider the point configuration of the following five points $(0,0),(3,0),(0,3),$ $(3,3),(1,1)$. All nine possible subdivisions of that point configuration are regular and $(1,1)$ is negligible in three of them.
See Figure~\ref{fig:fivepoints}. \end{example}
\begin{figure}
\caption{Nine (regular) subdivisions of the five points of Example~\ref{ex:fivepoints}. The inner point is negligible in all subdivisions in the middle row. This point is lifted above the lower convex hull in the regular subdivisions in the top row.
The subdivision in the middle of the top row is a $1$-split, the left and right in the second row are $2$-splits and in the middle of the bottom row is a $3$-split.}
\label{fig:fivepoints}
\end{figure}
A negligible point $q\in\cP$ can be omitted in the subdivision $\Sigma$. More precisely we have the following relation between the subdivisions of $\cP$ and those subdivisions of $\cP\setminus\{q\}$.
\begin{proposition}\label{prop:negible} Let $q\in\cP$ be such that $q\in\conv(\cP\setminus \{q\})$. Consider the following map on the set of all subdivisions of $\cP$ in which $q$ is negligible. \[
\Sigma \mapsto \SetOf{C\setminus\{q\}}{C\in\Sigma} \]
This map is a bijection onto all subdivisions of $\cP\setminus\{q\}$. \end{proposition}
A \emph{$k$-split} of a point configuration $\cP$ is a coarsest subdivision $\Sigma$ of the convex hull $P$ of $\cP$ with $k$ maximal faces and a common face of codimension $k-1$. We call this face the \emph{common cell} and denote this polytope by $H^\Sigma$. We shorten the notation if the point configuration $\cP$ is the vertex set of a polytope $P$ and write this as a $k$-split of $P$. If we do not specify the number of maximal cells we will call such a coarsest subdivision a \emph{multi-split}. \begin{example} The point configuration of the points in Example~\ref{ex:fivepoints} has four coarsest subdivisions. These are a $1$-split, two $2$-splits and a $3$-split. See Figure~\ref{fig:fivepoints}. \end{example}
\begin{example} In general not all coarsest subdivisions are multi-splits.
An extremal example is a $4$-dimensional cross polytope with perturbed vertices, such that four points do not lie in a common hyperplane.
The secondary polytope of this polytope has $29$ facets, none of which is a multi-split. \end{example}
\begin{example} Another example for a coarsest subdivision that is not a multi-split is illustrated in Figure~\ref{fig:nonksplit}. \end{example}
Splits have been studied by several people in phylogenetic analysis, metric spaces and polyhedral geometry. For example by Bandelt and Dress \cite{BandeltDress:1992}, Hirai \cite{Hirai:2006}, Herrmann and Joswig \cite{HerrmannJoswig:2008} and by Koichi \cite{Koichi:2014}. The more general multi-splits have been introduced by Herrmann in \cite{Herrmann:2011} under the term $k$-split. The main result there is the following. \begin{proposition}[{\cite[Theorem 4.9]{Herrmann:2011}}]\label{prop:ksplitHer}
Each $k$-split is a regular subdivision. The dual complex of the lower cells, i.e., the subcomplex in the polar of the lifted polytope, is a $k$-simplex modulo its lineality space. \end{proposition} Proposition~\ref{prop:ksplitHer} implies that the subdivision of a multi-split corresponds to a ray in the secondary fan, i.e., this is a coarsest regular and non-trivial subdivision. Furthermore, the number of faces of a fixed dimension of a $k$-split is the same as the number of faces of the same dimension of a $k$-simplex. In particular, the number of maximal cells, as well as the number of non-trivial inclusionwise-minimal cells, equals $k$. Note that the number of $(n-k+2)$-dimensional cells also equals $k$.
We recall the main construction of Proposition~\ref{prop:ksplitHer}, which proves the regularity. The subdivision $\Sigma$ is induced by a complete fan $\cF$ with $k$ maximal cones, a lineality space $\aff H^\Sigma$ and an apex at $a\in\RR^n$. Here \enquote{induced} means that a cell of $\Sigma$ is the intersection of a cone of $\cF$ with $P$. The apex $a$ is not unique; it can be any point in $H^\Sigma$. Later we will make specific choices for it. A lifting function that induces the multi-split is given as follows. All points in $\cP \cap \aff H^\Sigma$ are lifted to height zero. The height of a point $p\in\cP$ that is contained in a ray of $\cF$ is its shortest distance to the affine space $\aff H^\Sigma$. Every other point in the point configuration $\cP$ is a non-negative linear combination of those rays. The height of such a point is the linear combination, with the same coefficients, of the heights of the points on the rays of $\cF$.
\begin{figure}
\caption{A coarsest regular subdivision, which is not a multi-split.}
\label{fig:nonksplit}
\end{figure}
The following lemma summarizes important properties of the common cell $H^\Sigma$. \begin{lemma}\label{lem:propertiesH}
The common cell $H^\Sigma$ is the intersection of the affine space $\aff H^\Sigma$ with $P$ and $\aff H^\Sigma$ intersects $P$ in its relative interior. Hence, the relative interior of the common cell $H^\Sigma$ is contained in the relative interior of the polytope $P$. \end{lemma} \begin{proof}
Let us assume that $\cF$ is the complete fan of the $k$-split $\Sigma$.
The intersection of all maximal cones in $\cF$ is an affine space which shows $H^\Sigma = \aff H^\Sigma \cap P$.
The dual cell of $H^\Sigma$ is a $k$-simplex by Proposition~\ref{prop:ksplitHer}, and therefore a bounded polytope. Cells in the boundary of the polytope $P$ are dual to unbounded polyhedra. Hence, this implies that $H^\Sigma$ is not contained in any proper face of $P$. \end{proof}
Let $N(v)$ be the set of vertices that are neighbours of $v$ in the vertex-edge graph of $P$ and $\varepsilon = \min_{u\in N(v)} \sum_{w\in N(v)} \langle w-v,\, u-v \rangle$. The intersection of the polytope $P$ with a hyperplane that (weakly) separates the vertex $v$ from all other vertices and does not pass through $v$ is the \emph{vertex figure} of $v$ \[
\fig(v) = \SetOf{x\in P }{ \sum_{w\in N(v)} \langle w-v,\, x-v \rangle = \varepsilon } \] Our goal is to relate a $k$-split of a polytope to a $k$-split in a vertex figure.
We will focus on a particular class of convex polytopes, the hypersimplices. We define for $d,n\in\ZZ$, $I\subseteq[n]$ and $0 \leq d \leq \size(I)$ the polytope \[
\Delta(d,I) \ = \ \SetOf{x\in[0,1]^n}{\sum_{i\in I} x_i = d \text{ and } \sum_{i\not\in I} x_i = 0 } \enspace . \] The \emph{$(d,n)$-hypersimplex} is the polytope $\Delta(d,[n])$, that we denote also by $\Delta(d,n)$. Clearly, the polytope $\Delta(d,I)$ is a fixed embedding of the hypersimplex $\Delta(d,\size(I))$ into $n$-dimensional space. We define the $(n-1)$-simplex $\Delta_{n-1}$ as the hypersimplex $\Delta(1,n)$ which is isomorphic to $\Delta(n-1,n)$.
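The vertices of $\Delta(d,n)$ are exactly the $0/1$-vectors with $d$ ones, so they can be enumerated directly; the following minimal Python sketch (the function name is ours) does this by running over all $d$-subsets of the ground set, indexed here by $0,\ldots,n-1$.

```python
from itertools import combinations
from math import comb

def hypersimplex_vertices(d, n):
    """Vertices of Delta(d,n): all 0/1-vectors of length n with exactly d ones."""
    verts = []
    for J in combinations(range(n), d):
        v = tuple(1 if i in J else 0 for i in range(n))
        verts.append(v)
    return verts

# Delta(2,4) is the octahedron; it has comb(4,2) = 6 vertices.
```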
\begin{example} \label{ex:vertex_fig}
The vertex figure $\fig(e_I)$ of $e_I=\sum_{i\in I} e_i$ in the hypersimplex $\Delta(d,n)$ is
\begin{align*}
\fig(e_I) &= \SetOf{x\in\Delta(d,n)}{\sum_{i\in I} \sum_{j\in [n]-I} \langle e_j-e_i,\, x-e_I \rangle = n}\\
&= \SetOf{x\in\Delta(d,n)}{\langle d e_{[n]-I}- (n-d) e_I,\, x-e_I \rangle = n}\\
&= \Delta(d-1,I)\times\Delta(1,[n]-I)
\end{align*} \end{example}
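The identity of Example~\ref{ex:vertex_fig} can be checked numerically: the vertices of $\Delta(d,n)$ lying on the hyperplane $\sum_{i\in I} x_i = d-1$ are exactly the points $e_J$ with $\size(I\cap J)=d-1$, i.e., the vertices of $\Delta(d-1,I)\times\Delta(1,[n]-I)$. A small Python sketch (helper name ours; ground set indexed by $0,\ldots,n-1$):

```python
from itertools import combinations

def vertex_figure_vertices(d, n, I):
    """d-subsets J of {0,...,n-1} with |I & J| = d-1, i.e., the vertices e_J of
    Delta(d,n) on the hyperplane sum_{i in I} x_i = d-1."""
    I = set(I)
    return [J for J in combinations(range(n), d) if len(I & set(J)) == d - 1]

# There are d * (n - d) such vertices: choose d-1 elements of I and one of its
# complement, matching the vertex count of the product of simplices.
```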
If the point configuration $\cP$ is the vertex set of a polytope $P$, then there is at least one vertex that is contained in the common cell $H^\Sigma$. Moreover, if $P$ is not $0$-dimensional, then $H^\Sigma$ is at least $1$-dimensional, as otherwise it would be a face of $P$.
We say that a subdivision $\Sigma'$ on $\cP'\subsetneq\RR^n$ is \emph{ induced } by another subdivision $\Sigma$ on $\cP\subsetneq\RR^n$ if for all $\sigma\in\Sigma$ with $\dim(\conv\sigma\cap\conv \cP')=0$ we have $\conv\sigma\cap\conv(\cP')\subseteq \cP'$ and $\Sigma' = \smallSetOf{\conv\sigma\cap\cP'}{\sigma\in\Sigma}$. Note that this is not the same concept as a subdivision that is \enquote{induced} by a fan.
\begin{figure}
\caption{
A $2$-split $\Sigma$ in the octahedron $\Delta(2,4)$, with the common cell \textcolor{blue}{$H^\Sigma$}
and
the induced $2$-split in the vertex figure $\fig(e_I)$.}
\label{fig:octsplit}
\end{figure}
\begin{example}\label{ex:octsplit}
A subdivision of the octahedron into two Egyptian pyramids is a $2$-split.
The common cell is a square.
Figure~\ref{fig:octsplit} illustrates this subdivision as well as the induced subdivision of the vertex figure.
The induced subdivision is a $2$-split of a square on a point configuration with five points, the four vertices and an interior point $q$.
The point $q$ is the intersection of the vertex figure $\fig(e_I)$ and the convex hull of the two vertices that are not in the vertex figure.
The interior point $q$ is negligible. \end{example}
The situation of Example~\ref{ex:octsplit} generalizes to $k$-splits of arbitrary polytopes. \begin{proposition}\label{prop:induced_subdivision}
Let $\Sigma$ be a $k$-split of the polytope $P$ and $v\in H^\Sigma$ be a vertex of $P$.
Then each cone of $\cF$ intersects the vertex figure $\fig(v)$ of $v$.
In particular, the subdivision $\Sigma$ induces a $k$-split on a point configuration that is contained in $\fig(v)$. \end{proposition} \begin{proof}
Let us assume without loss of generality that the vertex $v$ is the apex of $\cF$.
Each ray of $\cF$ is a cone of the form $\smallSetOf{v+\lambda (w-v)}{\lambda \geq 0}+\aff H^\Sigma$ for some vertex $w$ of $P$.
Hence, each ray intersects the vertex figure $\fig(v)$ of $v\in H^\Sigma$.
This implies that the intersection of an $\ell$-dimensional cone with $\fig(v)$ is $(\ell-1)$-dimensional.
We conclude that the induced subdivision is again a $k$-split. \end{proof}
Our main goal is to classify all multi-splits of the hypersimplices. Recall from Example~\ref{ex:vertex_fig} that for the hypersimplex the vertex figure of $e_I=\sum_{i\in I} e_i$ with $\size I = d$ is the product of simplices \[
\fig(e_I)\ =\ \SetOf{x\in\Delta(d,n)}{\sum_{i\in I} x_i = d-1} \ = \ \Delta(d-1,I)\times\Delta(1,[n]-I)\ \simeq \ \Delta_{d-1}\times\Delta_{n-d-1} \enspace. \] The intersection of the vertex figure of $e_I$ and the line spanned by the two vertices $e_I$ and $e_J$ with $J\in\tbinom{[n]}{d}$ is a point $q$ with coordinates \[
q_i = \left\{\begin{array}{cr}
1 & \text{ if } i \in I\cap J\\
\frac{\size(I-J)-1}{\size(I-J)} & \text{ if } i \in I-J\\
\frac{1}{\size(I-J)} & \text{ if } i \in J-I\\
0 & \text{ if } i \not \in I\cup J
\end{array}\right. \] We denote by $\cQ_I$ the set of all these intersection points. They include the vertices of the vertex figure of $e_I$. For those we have $\size(I-J)=1$. A lifting function $\lambda$ of $\Delta(d,n)$ induces a lifting on each point $q\in\cQ_I$ by taking \[
\lambda(q) = \lambda\left(\frac{\size(I-J)-1}{\size(I-J)} e_I + \frac{1}{\size(I-J)} e_J\right) = \frac{\size(I-J)-1}{\size(I-J)} \lambda(e_I) + \frac{1}{\size(I-J)} \lambda(e_J) \enspace . \]
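The intersection points $q$ and their induced heights can be computed directly from the case distinction above. The following Python sketch (helper names ours) uses exact rational arithmetic; it realizes $q$ as the convex combination $\frac{m-1}{m}e_I+\frac{1}{m}e_J$ with $m=\size(I-J)$.

```python
from fractions import Fraction

def q_point(I, J, n):
    """Intersection of the vertex figure fig(e_I) with the line through e_I
    and e_J, where I and J are d-subsets of {0,...,n-1}."""
    I, J = set(I), set(J)
    m = len(I - J)  # equals len(J - I)
    q = []
    for i in range(n):
        if i in I & J:
            q.append(Fraction(1))
        elif i in I - J:
            q.append(Fraction(m - 1, m))
        elif i in J - I:
            q.append(Fraction(1, m))
        else:
            q.append(Fraction(0))
    return q

def induced_height(lam_I, lam_J, m):
    """Height of q induced by the heights of e_I and e_J, m = |I - J|."""
    return Fraction(m - 1, m) * lam_I + Fraction(1, m) * lam_J
```

For $\size(I-J)=1$ the point $q$ is the vertex $e_J$ itself, in accordance with the text.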
From Proposition~\ref{prop:induced_subdivision} it follows that for each $k$-split of $\Delta(d,n)$ there exists a $d$-set $I$ and a vertex $e_I$ such that the $k$-split on $\Delta(d,n)$ induces a $k$-split on the point configuration $\cQ_I$. Our goal is to show that all interior points of $\conv \cQ_I$ are negligible.
Before we discuss this in general let us take a closer look at a key example where $d = n-d$. In the example, the point configuration consists only of the vertices and exactly one additional point. This example will be central to the rest of the argument.
Consider the point configuration $\cP_j$ with the vertices of $\Delta_{j-1}\times\Delta_{j-1}$ and exactly one additional point $q=\sum_{i=1}^{2j} \frac{1}{j}e_i$, the barycenter of $\Delta_{j-1}\times\Delta_{j-1}$.
\begin{lemma}\label{lem:prod_simplices_one_int_point} There is no $(2j-1)$-split of $\cP_j$. \end{lemma} \begin{proof} Suppose we are given a $(2j-1)$-split $\Sigma$ of $\cP_j$. The dimension of $\Delta_{j-1}\times\Delta_{j-1}$ is $2j-2$, hence the common cell $H^\Sigma$ is of dimension $(2j-2)-(k-1)$, which in our situation is $0$. The only $0$-dimensional cell in the interior is $\{q\} = H^\Sigma$. Let $\cF$ be the complete fan that induces $\Sigma$. The apex of $\cF$ has to be $q$. Proposition~\ref{prop:ksplitHer} shows that this fan has $k=2j-1$ rays. Each of these $2j-1$ rays intersects $\Delta_{j-1}\times\Delta_{j-1}$ in a point on the boundary. An intersection point has to be an element of the point configuration and hence it is a vertex of $\Delta_{j-1}\times\Delta_{j-1}$. The convex hull $Q$ of all $2j-1$ vertices that we obtain as an intersection of the boundary with a ray is a $(2j-2)$-dimensional simplex in $\RR^{2j}$. This simplex $Q$ contains $q$ in its interior, since $\cF$ is complete.
By Lemma~\ref{lem:propertiesH} we have that $q$ is in the relative interior of $\conv\cP_j$.
Hence, no coordinate of $q$ is integral, while the vertices are $0/1$-vectors.
This implies that for each of the $2j$ coordinates of $q$ there is a vertex of the simplex that is $1$ in this coordinate.
A vertex of $\Delta_{j-1}\times\Delta_{j-1}$ has only two non-zero entries, hence there is at least one coordinate $\ell\in[2j]$ such that only one vertex $w\in Q$ fulfills $w_\ell = 1$. We deduce that the coefficient of $w$ in the convex combination of the vertices that sums up to $q$ is $\frac{1}{j}$.
The simplex $Q$ is of dimension $2j-2$, which is the dimension of $\Delta_{j-1}\times\Delta_{j-1}$.
Hence another vertex $v$ exists in $Q$, such that the support of $v$ and the support of $w$ intersect non-trivially.
The coefficient of $w$ is $\frac{1}{j}$, hence the coefficient of $v$ has to be $0$.
This contradicts the fact that $q$ is in the interior of the simplex. \end{proof}
\begin{remark}
The proof of Lemma~\ref{lem:prod_simplices_one_int_point} shows that the barycenter of $\Delta_{j-1}\times\Delta_{j-1}$ is on the boundary of the constructed simplex $Q$.
In fact, the arguments of the proof apply to any $(j+1)$-dimensional subpolytope of $\Delta_{j-1}\times\Delta_{j-1}$, instead of the subpolytope $Q$.
Hence, in any triangulation the barycenter is contained in a $j$-dimensional simplex. \end{remark}
Our next step is to reduce the general case to the case where $2d = n$, which is equivalent to $\size I = d = n-d$, and the point configuration is $\cQ_I$. This is close to the situation in Lemma~\ref{lem:prod_simplices_one_int_point}, but still not the same.
For any non-vertex $p\in\cQ_I$ we define \[
F_p=\SetOf{x\in\Delta(d,n)}{ \sum_{i\in I} x_i =d-1 \text{ and } x_j=p_j \text{ for all } p_j\in\{0,1\} } \enspace. \] By definition the only point in $\cQ_I\cap \relint \fig(e_I)$ is $q$ and there is a unique $d$-set $J$ such that $q \in \conv(e_I,e_J)$. Clearly $\size(I-J) = \size I - \size(I\cap J) = \size J - \size(I\cap J) = \size(J-I)$ and \[
q_i \text{ is non-integral if and only if } i\in I-J \text{ or } i\in J-I \enspace . \] The coordinatewise affine transformation $x_i\mapsto 1-x_i$ if $i\in I-J$ and $x_i\mapsto x_i$ if $i\in J-I$ is an isomorphism between the face $F_q$ of the vertex figure $\fig(e_I)$ and the product of simplices $\Delta_{j-1}\times\Delta_{j-1}$ for $j = d - \size(I\cap J)$. The point $q$ is mapped to the barycenter.
The common cell $H^\Sigma$ is either $\{q\}$ or $\fig(e_I)$. Hence, the only possibilities for a multi-split of the point configuration $\cQ_I \cap F_q$ are $2j-1$ or $1$ maximal cells. The multi-split is induced by the polytope $\Delta(d,n)\cap\aff\{e_I, F_q\}$. A $1$-split cannot be induced by a polytope. Therefore it has to be a $(2j-1)$-split. Altogether we get the following result for arbitrary multi-splits. \begin{lemma}\label{lem:negible_points}
Let $\Sigma$ be a multi-split of the point configuration $\cQ_I$.
All points of $\cQ_I\setminus\{0,1\}^n$ are negligible in $\Sigma$. \end{lemma} \begin{proof}
To each $q\in\cQ_I$ we assign its \emph{non-integral support}, the set $\SetOf{i\in[n]}{q_i\not\in\ZZ}$.
A point $q\in\cQ_I$ is a $0/1$-vector if and only if its non-integral support is empty.
Consider a ray $R$ in the fan $\cF$, i.e., the dimension of $R$ is $\dim(H^\Sigma)+1$.
Let $V_R \subseteq \cQ_I$ be the set of all points of the intersection $R \cap \conv\cQ_I$.
Fix a point
\[
p \in \SetOf{ q\in V_R}{ \text{the non-integral support of $q$ is non-empty} }
\]
whose non-integral support is inclusionwise minimal in the above set.
Our goal is to show that such a $p$ does not exist and hence the above set is empty.
This implies that any point $q\in V_R$ is integral.
It follows from \cite[Proposition 4.8]{Herrmann:2011} that the face $F_p$ is either trivially subdivided or a multi-split.
In a trivial subdivision the interior point $p$ is not a vertex of $R\cap \conv\cQ_I$.
By construction all the non-integral points in $F_p$ except for $p$ are negligible, as otherwise $p$ would not be a vertex of $V_R$.
Moreover, $p$ is the only interior point and $k=2j-1$, where $j$ is the size of the non-integral support.
This contradicts Lemma~\ref{lem:prod_simplices_one_int_point}.
We conclude that the above constructed set is empty.
Hence all non-integral points in $\cQ_I$ are negligible. \end{proof}
Proposition~\ref{prop:induced_subdivision} and Lemma~\ref{lem:negible_points} show that the induced subdivision is a subdivision of the vertex figure $\fig(e_I)$, which is a product of simplices. This reverses a construction that lifts regular subdivisions of the product of simplices $\Delta_{d-1}\times\Delta_{n-d-1}$ to the hypersimplex $\Delta(d,n)$. This lift has been studied in the context of tropical convexity in \cite{HerrmannJoswigSpeyer:2014}, \cite{Rincon:2013} and \cite{FinkRincon:2015}.
We define the tropical Stiefel map of a regular subdivision on the product of simplices $\Delta_{d-1}\times\Delta_{n-d-1}$.
We denote by $\lambda(i,j)\in\RR$ the height of the vertex $(e_i,e_j)\in\Delta_{d-1}\times\Delta_{n-d-1}$.
The \emph{tropical Stiefel map} $\pi$ is defined on sets $A \subseteq \{1,\ldots,d\}$, $B\subseteq\{d+1,\ldots,n\}$ with $\size A = \size B$ by
\[
\pi: (A,B) \mapsto \min_{\omega} \sum_{i\in A} \lambda(i,\omega(i)),
\]
where the minimum ranges over all bijections $\omega\colon A\to B$.
Note that $\pi( \{i\},\{j\} ) = \lambda(i,j)$.
Let $e_I\in\Delta(d,n)$ be a vertex and $\lambda$ be a lifting on $\Delta_{d-1}\times\Delta_{n-d-1}$. Then the tropical Stiefel map defines a lifting on a vertex $e_J\in\Delta(d,n)$ by taking the height $\pi(I-J, J-I)$. The polytope $\Delta_{d-1}\times\Delta_{n-d-1}$ is isomorphic to $\fig(e_I) = \Delta_{d-1,I}\times\Delta_{1,[n]-I}\subsetneq\Delta(d,n)$. The tropical Stiefel map extends a lifting of the vertex figure $\fig(e_I)$ to the entire hypersimplex $\Delta(d,n)$. The dual complex of the extended subdivision of $\Delta(d,n)$ is isomorphic to the dual complex of the subdivision of $\Delta_{d-1}\times\Delta_{n-d-1}$; see \cite[Theorem 7]{HerrmannJoswigSpeyer:2014}. In particular, the Stiefel map extends a $k$-split of $\Delta_{d-1}\times\Delta_{n-d-1}$ to a $k$-split of $\Delta(d,n)$.
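A brute-force evaluation of this construction, minimizing over all bijections $\omega\colon A\to B$, can be sketched in Python as follows (helper names ours; feasible only for small $\size A$, since all $\size A\,!$ bijections are tried).

```python
from itertools import permutations

def tropical_minor(lam, A, B):
    """pi(A, B): minimize sum of lam[(i, w(i))] over all bijections w: A -> B.
    lam is a dict mapping pairs (i, j) to heights lambda(i, j)."""
    A, B = list(A), list(B)
    assert len(A) == len(B)
    if not A:
        return 0  # empty minor: the vertex e_I itself gets height 0
    return min(sum(lam[(i, w[k])] for k, i in enumerate(A))
               for w in permutations(B))

def stiefel_height(lam, I, J):
    """Height of the vertex e_J extended from a lifting of fig(e_I)."""
    I, J = set(I), set(J)
    return tropical_minor(lam, sorted(I - J), sorted(J - I))
```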
From Lemma~\ref{lem:negible_points} we deduce the following. \begin{theorem} \label{thm:main1}
Any $k$-split of the hypersimplex $\Delta(d,n)$ is the image of a $k$-split of a product of simplices $\Delta_{d-1}\times\Delta_{n-d-1}$ under the Stiefel map.
In particular, the $k$-split $\Sigma$ is an extension of a $k$-split of $\Delta(d,I)\times\Delta(n-d,[n]-I)$ if and only if $e_I\in H^\Sigma$. \end{theorem} \begin{proof}
For any $k$-split $\Sigma$ of the hypersimplex $\Delta(d,n)$ and any vertex $e_I\in H^\Sigma$ the $k$-split $\Sigma$ induces a $k$-split on the point configuration $\cQ_I$.
By Proposition~\ref{prop:negible} and Lemma~\ref{lem:negible_points} this is a subdivision on the vertex figure $\fig(e_I)$, which is a product of simplices.
The Stiefel map extends this $k$-split to a $k$-split on $\Delta(d,n)$ by coning over the cells.
This $k$-split coincides with $\Sigma$ on $\fig(e_I)$, and hence the two $k$-splits coincide on the entire hypersimplex $\Delta(d,n)$. \end{proof}
\section{Matroid subdivisions and multi-splits}\label{sec:nested} \noindent In this section we will further analyze multi-splits of the hypersimplex. Our goal is to describe the polytopes that occur as maximal cells. We will see that these polytopes correspond to a particular class of matroids.
A subpolytope $P$ of the hypersimplex $\Delta(d,n)$ is called a \emph{matroid polytope} if the vertex-edge graph of $P$ is a subgraph of the vertex-edge graph of $\Delta(d,n)$. Note that the vertices of a matroid polytope are $0/1$-vectors and a subset of those of the hypersimplex.
The vertices of a matroid polytope $P$ are the characteristic vectors of the bases of a matroid $\matroid(P)$. The convex hull of the characteristic vectors of the bases of a matroid $M$ is the matroid polytope $\polytope(M)$. See \cite{Oxley:2011} and \cite{White:1986} for the basic background of matroid theory and \cite{Edmonds:1970} for a polytopal description, which we use here as the definition.
We will give three examples of classes of matroids that are important for this section. \begin{example}
Clearly the hypersimplex $\Delta(d,n)$ itself is a matroid polytope.
The matroid $\matroid(\Delta(d,n))$ is called the \emph{uniform matroid} of rank $d$ on the ground set $[n]$. The $d$-subsets of $[n]$ are exactly the bases of $\matroid(\Delta(d,n))$.
The uniform matroid has the maximal number of bases among all $(d,n)$-matroids. \end{example}
\begin{example}\label{ex:partition}
Let $C_1,\ldots,C_k$ be a partition of the set $[n]$ and $d_i\leq\size(C_i)$ non-negative integers.
The matroid $\matroid( \Delta(d_1,C_1)\times\cdots\times\Delta(d_k,C_k) )$ is called \emph{partition matroid} of rank $d_1+ \ldots +d_k$ on $[n]$.
A $d$-subset $S$ of $[n]$ is a basis of this matroid if and only if $\size(S\cap C_i)=d_i$ for all $i\in[k]$. \end{example}
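The bases of a partition matroid can be enumerated directly from this description; a minimal Python sketch (helper name ours), where the blocks $C_1,\ldots,C_k$ are given as lists:

```python
from itertools import combinations

def partition_matroid_bases(blocks, ranks):
    """Bases of the partition matroid: pick exactly ranks[i] elements from
    each block blocks[i] of the partition."""
    def rec(idx):
        if idx == len(blocks):
            yield frozenset()
            return
        for choice in combinations(blocks[idx], ranks[idx]):
            for rest in rec(idx + 1):
                yield frozenset(choice) | rest
    return list(rec(0))
```

The number of bases is $\prod_i \binom{\size(C_i)}{d_i}$, e.g., $3\cdot 2 = 6$ for blocks $\{0,1,2\}$ and $\{3,4\}$ with ranks $1$ and $1$.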
\begin{example} \label{ex:nested}
Let $\emptyset\subsetneq F_1\subsetneq \ldots \subsetneq F_k\subseteq[n]$ be an ascending chain of sets and $0 \leq r_1 < r_2 < \ldots < r_k$ be integers with $r_\ell < \size(F_\ell)$ for all $\ell\leq k$.
The polytope
\[
P \ = \ \SetOf{x\in\Delta(d,n)}{\sum_{i\in F_\ell} x_i \leq r_\ell \text{ for all } \ell\in[k]}
\]
is a matroid polytope. This follows from the analysis of all $3$-dimensional octahedral faces of the hypersimplex. None of those is separated by more than one of the additional inequalities and hence the polytope is a matroid polytope. The matroid $\matroid(P)$ is called \emph{nested matroid} of rank $r_k+\size([n]-F_k)$ on $[n]$.
The sets $F_1,\ldots,F_k$ are the \emph{cyclic flats} of the nested matroid $\matroid(P)$ if $r_1=0$.
If $r_1\neq 0$, then the above and $\emptyset$ are the cyclic flats. \end{example}
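The inequality description of Example~\ref{ex:nested} yields a direct, if naive, enumeration of the bases of a nested matroid; a Python sketch (helper name ours; ground set indexed by $0,\ldots,n-1$):

```python
from itertools import combinations

def nested_matroid_bases(n, d, flats, ranks):
    """Bases of the nested matroid: d-subsets S of {0,...,n-1} whose
    characteristic vector satisfies |S & F_l| <= r_l for every flat in the chain."""
    bases = []
    for S in combinations(range(n), d):
        S = set(S)
        if all(len(S & set(F)) <= r for F, r in zip(flats, ranks)):
            bases.append(frozenset(S))
    return bases
```

For the single flat $F_1=\{0,1\}$ with $r_1=1$ on four elements (so $d = r_1 + \size([4]-F_1) = 3$) the two bases are $\{0,2,3\}$ and $\{1,2,3\}$; an empty chain recovers the uniform matroid.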
\begin{remark}
There are many cryptomorphic definitions for matroids.
Bonin and de Mier introduced in \cite{BoninMier:2008} the definition via cyclic flats and their ranks, i.e., unions of minimal dependent sets.
In this paper we only need the very special case of nested matroids, where the lattice of cyclic flats is a chain. \end{remark}
A \emph{matroid subdivision} of $\Delta(d,n)$ is a subdivision into matroid polytopes, i.e., all the (maximal) cells in the subdivision are matroid polytopes. The lifting function of a regular subdivision of a matroid polytope is called \emph{tropical Pl\"ucker vector}, since it arises as valuation of classical Pl\"ucker vectors. Note that the tropical Pl\"ucker vectors form a subfan in the secondary fan of the hypersimplex $\Delta(d,n)$. This fan is called the \emph{Dressian $\Dr(d,n)$}.
Each multi-split of the hypersimplex $\Delta(d,n)$ is a matroid subdivision as Theorem~\ref{thm:main1} in combination with the following proposition shows. \begin{proposition}[{\cite{Rincon:2013},\cite{HerrmannJoswigSpeyer:2014}}]
The image of any lifting function on $\Delta_{d-1}\times\Delta_{n-d-1}$ under the Stiefel map is a
tropical Pl\"ucker vector. \end{proposition}
From now on let $\Sigma$ be a $k$-split of the hypersimplex $\Delta(d,n)$. We investigate which matroid polytopes appear in the subdivision $\Sigma$.
Let us briefly introduce some matroid terminology. A set $S$ is \emph{independent} in the matroid $M$ if it is a subset of a basis of $M$. The rank $\rank(S)$ of a set $S$ is the maximal size of an independent subset of $S$.
An important operation on a matroid $M$ is the \emph{restriction} $M|F$ to a subset $F$ of the ground set. The set $F$ is the ground set of $M|F$. A set $S\subseteq F$ is independent in $M|F$ if and only if $S$ is independent in $M$. A matroid $M$ is called \emph{connected} if there is no set $\emptyset\subsetneq S \subsetneq [n]$ with
$\polytope(M) = \polytope(M|S)\times \polytope(M|([n]-S))$. For each matroid $M$ there is a unique partition $C_1,\ldots,C_k$ of $[n]$, such that
$\polytope(M) = \polytope(M|C_1)\times\cdots\times\polytope(M|C_k)$. The sets $C_1,\ldots,C_k$ are called \emph{connected components} of $M$. The element $e\in[n]$ is called \emph{loop} if $\{e\}$ is a connected component and $\polytope(M|\{e\}) = \Delta(0,\{e\})$. If instead $\polytope(M|\{e\}) = \Delta(1,\{e\})$, then $e$ is called \emph{coloop}. The dual operation of the restriction is the \emph{contraction} $M/F$. The ground set of the matroid $M/F$ is $[n]-F$. A set $S$ is independent in $M/F$ if $\rank_M(S+F)=\size(S)+\rank_M(F)$.
The following describes a relation between the connected components of a matroid and its matroid polytope. \begin{lemma}[{\cite[Theorem 3.2]{Fujishige:1984} and \cite[Proposition 2.4]{FeichtnerSturmfels:2005}}]\label{lem:dim}
The number of connected components of a matroid $M$ on the ground set $[n]$
equals the difference $n - \dim\polytope(M)$. \end{lemma}
\begin{example}
An element $e$ is a loop in a partition matroid $\matroid( \Delta(d_1,C_1)\times\cdots\times\Delta(d_k,C_k) )$ if and only if $e\in C_\ell$ and $\rank(C_\ell) = d_\ell = 0$.
The element is a coloop if instead $\rank(C_\ell) = d_\ell = \size(C_\ell)$.
The other connected components are those sets $C_\ell$ with $0 < d_\ell < \size(C_\ell)$.
A nested matroid is loop-free if $r_1 > 0$ and coloop-free if $F_k=[n]$.
A loop- and coloop-free nested matroid is connected. \end{example}
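As a concrete illustration (added here; the particular partition is an ad-hoc example), the bases of a partition matroid are exactly the sets picking $d_\ell$ elements from each block, which makes loops and coloops easy to spot:

```python
from itertools import combinations

def partition_matroid_bases(parts):
    """Bases of M(Delta(d_1,C_1) x ... x Delta(d_k,C_k)): the sets S
    with |S & C_ell| = d_ell for every block (C_ell, d_ell)."""
    bases = [set()]
    for C, d_ell in parts:
        bases = [B | set(T) for B in bases
                 for T in combinations(sorted(C), d_ell)]
    return bases

# d_1 = 0 makes 1 and 2 loops; |C_2| = d_2 = 1 makes 3 a coloop.
B = partition_matroid_bases([({1, 2}, 0), ({3}, 1), ({4, 5, 6}, 1)])
```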
First, we consider the common cell $H^\Sigma$ of a $k$-split $\Sigma$ of $\Delta(d,n)$. \begin{proposition}
The common cell $H^\Sigma$ is the matroid polytope of a loop- and coloop-free partition matroid with $k$ connected components. \end{proposition} \begin{proof}
The common cell $H^\Sigma$ is a cell in a matroid subdivision and hence a matroid polytope.
The dimension of this polytope is $n-k$.
From Lemma~\ref{lem:dim} it follows that the corresponding matroid $M = \matroid(H^\Sigma)$ has $k$ connected components.
Let $C_1,\ldots,C_k$ be the connected components of $M$ and $d_\ell=\rank_M(C_\ell)$.
Clearly, this is a partition of the ground set $[n]$ and the sum $d_1+\ldots+d_k$ equals $d$.
The polytope $H^\Sigma=\polytope(M)$ is the intersection of $\Delta(d,n)$ with an affine space.
Hence, there are no further restrictions to the polytope and each matroid polytope $\polytope(M|C_\ell)$ is equal to $\Delta(d_\ell,C_\ell)$.
The common cell $H^\Sigma$ meets the interior of $\Delta(d,n)$, hence $0 < d_\ell < \size(C_\ell)$ and the matroid $M$ is loop- and coloop-free. \end{proof}
We define the relation $\preceq_P\,$ on the connected components $C_1,\ldots,C_k$ of $\matroid(H^\Sigma)$ depending on a cell $P\in\Sigma$ by \begin{equation}\label{eq:relation} \begin{aligned}
C_a \preceq_P\, C_b &\text{ if and only if for each $v\in H^\Sigma$ and for each $i\in C_a$ and $j\in C_b$ with}\\&\text{$v_i=1$ and $v_j=0$ we have } v+e_j-e_i\in P \enspace . \end{aligned} \end{equation}
\begin{lemma}
Let $C_1,\ldots,C_k$ be the connected components of the matroid $\matroid(H^\Sigma)$.
The matroid polytope $P$ of a cell in $\Sigma$ defines a partial order on the connected components $C_1,\ldots,C_k$ via $C_a \preceq_P\, C_b$. \end{lemma} \begin{proof} Let $i,j\in[n]$ and $v\in H^\Sigma$ be a vertex with $v_i=1$, $v_j=0$.
Then $v-e_i+e_j\in H^\Sigma$ if and only if there is a circuit in $\matroid(H^\Sigma)$ containing both $i$ and $j$.
The vector $v$ is the characteristic vector of a basis in $\matroid(H^\Sigma)$ and adding $e_j-e_i$ corresponds to a basis exchange. This implies that $i$ and $j$ are in the same connected component, i.e., $\preceq_P\,$ is reflexive.
Let $C_1 \preceq_P\, C_2 \preceq_P\, C_1$ and $i\in C_1$, $j \in C_2$.
Take $v,w\in H^\Sigma$ with $v_i=w_j=1$ and $v_j=w_i=0$.
By assumption we have $v-e_i+e_j, w+e_i-e_j\in P$ and since $H^\Sigma$ is convex
\[
\frac{1}{2}(v-e_i+e_j) + \frac{1}{2}(w+e_i-e_j) = \frac{1}{2} v + \frac{1}{2} w \in H^\Sigma \enspace .
\]
A convex combination of points in $P$ lies in $H^\Sigma$ if and only if all the points are in $H^\Sigma$.
Hence, we get $v-e_i+e_j\in H^\Sigma$ and therefore $C_1 = C_2$, i.e., $\preceq_P\,$ is antisymmetric.
Let $C_1 \preceq_P\, C_2 \preceq_P\, C_3$, $i\in C_1$, $j\in C_3$ and $v\in H^\Sigma$ with $v_i=1$ and $v_j=0$.
Consider the cone $Q = \smallSetOf{\lambda x + y }{y\in H^\Sigma, x+y \in P \text{ and } \lambda \geq 0 }$.
This is the cone in the fan $\cF$ that contains $P$ and has the same dimension as $P$.
Let $k,\ell\in C_2$ be indices with $v_k = 1$ and $v_\ell = 0$ and $w=v-e_k+e_\ell$.
Then $v,w\in H^\Sigma$ and $v-e_k+e_j,\, w+e_k-e_\ell,\, v-e_i+e_\ell\,\in P$.
That implies
\[
v-e_i+e_j \ =\ \frac{1}{3}\left( v+3(e_j-e_k) + w+4(e_k-e_\ell)+ v+3(e_\ell-e_i)\right) \in Q \enspace .
\]
Clearly $v-e_i+e_j\in \Delta(d,n)$ and hence $v-e_i+e_j\in P$.
This shows that $\preceq_P\,$ is transitive. \end{proof}
Before we further investigate the relation $\preceq_P\,$ we take a look at the rays of $\cF$. The next lemma describes the $(n-k+1)$-dimensional cells in $\Sigma$. The $k$-split $\Sigma$ has exactly $k$ of these cells and each maximal cell contains $k-1$ of those; see Proposition~\ref{prop:ksplitHer}. \begin{lemma}\label{lem:rays}
For each $(n-k+1)$-dimensional cell of $\Sigma$ there are $a,b\in[n]$ such that the cell equals
\[
R_{a,b}\ = \ \left( H^\Sigma + \SetOf{\mu(e_a-e_b)}{\mu\geq 0} \right)\cap \Delta(d,n) \enspace .
\] \end{lemma} \begin{proof}
Let $v$ be a vertex of the $(n-k+1)$-dimensional cell $R$ that is not in $H^\Sigma$.
This cell $R$ is a matroid polytope.
Hence, there is an edge of $v$ that has the direction $e_i-e_j$ for some $i$ and $j$.
At least one of those edges connects $v$ with $H^\Sigma$.
Therefore $R$ is of the desired form. \end{proof}
Now we are able to further investigate $\preceq_P\,$ and hence the cells in the $k$-split $\Sigma$. \begin{lemma}
For a connected matroid $\matroid(P)$ the relation $\preceq_P\,$ is a total ordering on the connected components of $\matroid(H^\Sigma)$. \end{lemma} \begin{proof}
Let us assume that $C_1$ and $C_2$ are two incomparable connected components of $M(H^\Sigma)$.
We define
\[
F = \bigcup_{C \preceq_P\, C_1} C \text{ and } G = \bigcup_{C \preceq_P\, C_2} C \enspace .
\]
Pick $i\in C_1$ and $j\in C_2$. The matroid $M = \matroid(P)$ is connected hence there is a circuit $A$ containing both $i$ and $j$.
The set $A\cap F\cap G$ is independent in $M$, since it is a proper subset of the circuit $A$, as $i\not\in G$.
Let $S\supseteq A\cap F\cap G$ be a maximal independent set in $F\cap G$.
Let $N$ be the connected component of $i$ in the minor $(M/S)|([n]-F\cap G)$.
Note that the elements of $F\cap G-S$ are exactly the loops in the contraction $M/S$.
Moreover, $A-S$ is a circuit in $M/S$, and hence $j\in N$.
We conclude that $C_1,C_2\subset N$.
The equation $\sum_{\ell\in N} x_\ell = \rank(N)$ defines a face of $\polytope(M)$.
This face is contained in $\polytope(N)\times\Delta(d-\rank(N),[n]-N)\subsetneq\Delta(\rank(N),N)\times\Delta(d-\rank(N),[n]-N)$.
\cite[Proposition 4.8]{Herrmann:2011} states that the induced subdivision on a face of a $k$-split is either trivial or a multi-split with less than $k$ maximal cells.
We are in the latter case, as the induced subdivision on $\Delta(\rank(N),N)$ is not trivial, since $C_1$ and $C_2$ are contained in $N$.
Hence, we can assume without loss of generality that $F\cap G = \emptyset$.
Clearly, the following two inequalities are valid for $\polytope(M)$ and the face that they define includes $H^\Sigma$
\begin{align*}
\sum_{i\in F} x_i \leq \rank(F) \text{ and } \sum_{i\in G} x_i \leq \rank(G) \enspace .
\end{align*}
Let $R$ be the unique ray in $\Sigma$ that is not contained in $\polytope(M)$.
There is a vertex $v\not\in H^\Sigma$ of $\Delta(d,n)$ that is contained in both $R$ and in $H^\Sigma-e_a+e_b$ for some $a,b\in[n]$. The rays in $\Sigma$ positively span the complete space. Hence, we get the estimates
\begin{align*}
\rank(F)+1 \geq \sum_{i\in F} v_i > \rank(F) \text{ and } \rank(G)+1 \geq \sum_{i\in G} v_i > \rank(G) \enspace .
\end{align*}
This implies that $b\in F\cap G$, contradicting $F\cap G=\emptyset$. We conclude that any two connected components are comparable, i.e., either $C_1 \preceq_P\, C_2$ or $C_2 \preceq_P\, C_1$. \end{proof}
\begin{example}\label{ex:2split}
Consider the octahedron $\Delta(2,4)$.
The hyperplane $x_1+x_2=x_3+x_4$ through the four vertices $e_1+e_3$, $e_2+e_3$, $e_1+e_4$ and $e_2+e_4$
strictly separates the vertices $e_1+e_2$ and $e_3+e_4$.
Moreover the hyperplane splits $\Delta(2,4)$ into two maximal cells, the corresponding subdivision $\Sigma$ is a $2$-split. The partition matroid $\matroid(H^\Sigma)$ has four bases and two connected components $C_1 = \{1,2\}$ and $C_2 = \{3,4\}$.
Let $M$ be the $(2,4)$-matroid with the following five bases $\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\}$.
The polytope $\polytope(M)$ is an Egyptian pyramid and a maximal cell in $\Sigma$.
The inequality $x_1+x_2\leq 1$ is valid for $\polytope(M)$ and hence $C_2\not\preceq_P\, C_1$.
It is easy to verify that $C_1\preceq_P\, C_2$ as $e_3+e_4\in \polytope(M)$. \end{example}
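The computations in this example are small enough to verify mechanically. The sketch below (added for illustration) checks the separation and recovers the five bases of $M$ as the vertices of the cell with $x_1+x_2\leq x_3+x_4$:

```python
from itertools import combinations

vertices = [set(S) for S in combinations(range(1, 5), 2)]  # vertices of Delta(2,4)

def split(S):
    """Value of x1 + x2 - (x3 + x4) at the vertex e_S."""
    return len(S & {1, 2}) - len(S & {3, 4})

common = [S for S in vertices if split(S) == 0]   # the four vertices of H^Sigma
pyramid = [S for S in vertices if split(S) <= 0]  # cell with x1+x2 <= x3+x4
```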
We derive the following description for the maximal cells of a $k$-split, which we already saw in Example~\ref{ex:2split}. \begin{lemma} \label{lem:singlepoltope}
Let $P$ be a maximal cell of the $k$-split $\Sigma$ of $\Delta(d,n)$.
Furthermore, let $C_1\preceq_P\,\ldots\preceq_P\, C_k$ be the order of the connected components of $M(H^\Sigma)$.
Then $x\in P\subsetneq \RR^n$ if and only if $x\in\Delta(d,n)$ with
\begin{align} \label{eq:singlepoltope}
\sum_{\ell=1}^h \sum_{i\in C_\ell} x_i &\leq \sum_{\ell=1}^h \rank_M(C_\ell) \text{ for } h\leq k \enspace .
\end{align} \end{lemma} \begin{proof}
First, we will show that each $x\in P$ fulfills the inequalities (\ref{eq:singlepoltope}).
The following equation holds for each $v\in H^\Sigma\subsetneq P$
\[
\sum_{i\in C_\ell} v_i \ =\ \rank_M(C_\ell) \enspace .
\]
Lemma~\ref{lem:rays} shows that a ray of $\cF$ is of the form $H^\Sigma + \pos(e_j-e_i)$ for some $i,j\in [n]$.
Clearly, for each pair $(i,j)$ of such elements and every point
$v\in H^\Sigma$ with coordinates $v_j=0$ and $v_i=1$ we get $v+e_j-e_i\in \Delta(d,n)-H^\Sigma$.
Hence, $v+e_j-e_i\in P$ implies that $C_a\preceq_P\, C_b$ for $i\in C_a$ and $j\in C_b$. That is, $a\leq b$.
This proves (\ref{eq:singlepoltope}) for all points that are in a ray and in $P$.
Each point $x\in P$ is a positive combination of vectors in rays of the fan $\cF$, hence the inequalities~(\ref{eq:singlepoltope}) are valid for all vectors in $P$.
Conversely, we will show that each point in $\Delta(d,n)$, that is valid for (\ref{eq:singlepoltope}), is already in $P$.
The left-hand sides of (\ref{eq:singlepoltope}) form a totally unimodular system, i.e., every square minor of the coefficient matrix is $-1$, $0$ or $1$.
Hence, all the vertices of the polytope are integral, even if we add the constraints $0 \leq x_i \leq 1$; this is precisely a statement of \cite[Theorem 19.3]{Schrijver:1986}.
Take a vertex $v$ of $\Delta(d,n)$ that is valid.
Either $v\in H^\Sigma$ and hence $v\in P$ or at least an inequality of (\ref{eq:singlepoltope}) is strict.
In this case let $a = \min\smallSetOf{\ell\in[k]}{ \sum_{i\in C_\ell} v_i < \rank_M(C_\ell) }$ and $b = \min\smallSetOf{\ell\in[k]}{ \sum_{i\in C_\ell} v_i > \rank_M(C_\ell) }$.
Note that both sides of the inequality (\ref{eq:singlepoltope}) for $h=k$ sum up to $d$.
Hence, both of the minima exist and $a<b$, otherwise the inequality (\ref{eq:singlepoltope}) with $h=b$ would be violated.
Pick $i\in C_b$ with $v_i=0$ and $j\in C_a$ with $v_j=1$.
The vector $w = v-e_j+e_i$ is another vertex of $\Delta(d,n)$ that satisfies (\ref{eq:singlepoltope}).
Moreover, $w\in P$ implies that $v\in P$ since $C_a\preceq_P\, C_b$.
We conclude that $P$ has the desired exterior description. \end{proof}
Now we are able to state our second main result, which allows us to construct all $k$-splits of the hypersimplex explicitly and relate them to nested matroids. \begin{theorem}\label{thm:main2}
A maximal cell in any $k$-split $\Sigma$ of $\Delta(d,n)$ is the matroid polytope $P(M)$ of a connected nested matroid $M$.
More precisely, the cyclic flats of $M$ are the $k+1$ sets $\emptyset\subsetneq C_1 \subsetneq C_1\cup C_2 \subsetneq \ldots \subsetneq \bigcup_{i=1}^k C_i=[n]$, where $C_1\preceq_P\, \ldots \preceq_P\, C_k$ are the connected components of $M(H^\Sigma)$.
Moreover, the other $k-1$ maximal cells are given by cyclic permutations of the sets $C_1,C_2,\ldots,C_k$.
In particular, each maximal cell in a multi-split of $\Delta(d,n)$ determines all the cells. \end{theorem} \begin{proof}
Fix a maximal cell $P$ in $\Sigma$ and let $C_1\preceq_P\,\ldots\preceq_P\, C_k$ be the connected components of the partition matroid $N=\matroid(H^\Sigma)$.
We define $F_\ell=\bigcup_{i=1}^\ell C_i$ for all $1\leq \ell\leq k$.
We have
\[
0 < \rank_N( F_1 ) < \ldots < \rank_N(F_{\ell-1}) < \rank_N(F_{\ell-1})+\rank_N(C_\ell)=\rank_N(F_\ell) < \ldots < \rank_N(F_k) = d \enspace .
\]
The sets $F_\ell$ and $\emptyset$ are the cyclic flats of the nested matroid $M$, with ranks given by $\rank_N(F_\ell)$ resp.\ $0$; see Example~\ref{ex:nested}.
The matroid polytope $P(M)$ of $M$ is exactly described by Lemma~\ref{lem:singlepoltope}.
This implies that the maximal cell $P$ is the matroid polytope $P(M)$ with the desired $k+1$ cyclic flats.
The intersection of all maximal cells of the $k$-split $\Sigma$ except the cell $P(M)$ is a ray of $\cF$.
This ray $R_{a,b}$ contains a vertex $w\in\Delta(d,n)$ of the form $v+e_a-e_b$, where $v\in H^\Sigma$.
We can choose this vertex $w$, such that $w\not\in P(M)$. We deduce from \eqref{eq:singlepoltope} the following strict inequalities for $w$:
\[
\sum_{\ell=1}^h \sum_{i\in C_\ell} w_i \ >\ \sum_{\ell=1}^h \rank_M(C_\ell) \text{ for all } h< k \enspace .
\]
As $w=v+e_a-e_b$ and $\sum_{i\in C_\ell} v_i = \rank(C_\ell)$, we get for $h=1$ that $a\in C_1$ and from $h=k-1$ that $b\in C_k$.
This implies that for every maximal cell $Q\neq \polytope(M)$ of $\Sigma$ we have $C_k \preceq_Q\, C_1$.
Moreover, each maximal cell $Q\neq \polytope(M)$ shares a facet with $\polytope(M)$.
Let $\sum_{\ell=1}^m \sum_{i\in C_\ell} x_i = \sum_{\ell=1}^m \rank_M(C_\ell)$ be the facet defining equation.
This facet implies $ C_m \not\preceq_Q\, C_{m+1}$.
All the other inequalities of (\ref{eq:singlepoltope}) are valid for $Q$.
We conclude that $ C_{m+1} \preceq_Q\, \ldots \preceq_Q\, C_k \preceq_Q\, C_1 \preceq_Q\, \ldots \preceq_Q\, C_m$. \end{proof} Note that there is a finer matroid subdivision for any $k$-split of the hypersimplex $\Delta(d,n)$, except for the case $k=d=2$ and $n=4$. Moreover, each matroid polytope of a connected nested matroid with at least four cyclic flats on at least $k+d+1$ elements occurs in a coarsest matroid subdivision, which is not a $k$-split.
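To make the cyclic-permutation statement concrete, consider a $3$-split of $\Delta(3,6)$ whose common cell has components $C_\ell=\{2\ell-1,2\ell\}$ of rank $1$ each. The sketch below (an added illustration) builds the three maximal cells from the three cyclic orderings via the inequalities of Lemma~\ref{lem:singlepoltope} and checks that they cover all $20$ vertices and intersect in the $8$ vertices of $H^\Sigma$.

```python
from itertools import combinations

C = {1: {1, 2}, 2: {3, 4}, 3: {5, 6}}  # components of M(H^Sigma), each of rank 1
vertices = [frozenset(S) for S in combinations(range(1, 7), 3)]

def maximal_cell(order):
    """Vertices satisfying the cumulative inequalities for this ordering."""
    cell = set()
    for S in vertices:
        F, ok = set(), True
        for h, ell in enumerate(order[:-1], start=1):
            F |= C[ell]
            ok = ok and len(S & F) <= h  # the sum of the first h ranks is h
        if ok:
            cell.add(S)
    return cell

cells = [maximal_cell(o) for o in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]]
```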
In contrast we have that for each connected nested $(d,n)$-matroid $M$ with $k+1$ cyclic flats there is a unique $k$-split of the hypersimplex $\Delta(d,n)$ that contains $\polytope(M)$ as a maximal cell. Conversely, a $k$-split of the hypersimplex $\Delta(d,n)$ determines $k$ such nested matroids. Furthermore, each $k$-split $\Sigma$ determines a unique loop- and coloop-free partition matroid $\matroid(H^\Sigma)$, while each ordering of the connected components of $\matroid(H^\Sigma)$ leads to a unique connected nested $(d,n)$-matroid with $k+1$ cyclic flats. We conclude the following enumerative relations. \begin{corollary}
The following three sets are pairwise in bijection:
\begin{enumerate}
\item The loop- and coloop-free partition $(d,n)$-matroids with $k$ connected components,
\item\label{item:b} the collections of all connected nested $(d,n)$-matroids with $k+1$ cyclic flats
for which the set differences of consecutive cyclic flats coincide,
\item\label{item:c} the collections of $k$-splits of $\Delta(d,n)$ with the same interior cell.
\end{enumerate}
Moreover, the collections in (\ref{item:b}) all have size $k!$
and those in (\ref{item:c}) all have size $(k-1)!$. \end{corollary}
Now we are able to count every $k$-split of the hypersimplex $\Delta(d,n)$ exactly $k$ times by simply counting nested matroids, i.e., ascending chains of subsets. The following is a natural generalization of the formulae that count $2$-splits in \cite[Theorem 5.3]{HerrmannJoswig:2008} and $3$-splits in \cite[Corollary 6.4]{Herrmann:2011}. \begin{proposition}\label{prop:enum_ksplits}
The total number of $k$-splits in the hypersimplex $\Delta(d,n)$ equals
\[
\frac{1}{k}\sum_{\alpha_1=2}^{\beta_1 - 2(k-1)} \cdots \sum_{\alpha_{k-1}=2}^{\beta_{k-1}-2} \mu_k^{d,n}(\alpha_1,\ldots,\alpha_{k-1}) \prod_{j=1}^{k-1}\binom{\beta_j}{\alpha_j}
\] where $\beta_i = n-\sum_{\ell=1}^{i-1}\alpha_\ell$ and
\[
\mu_k^{d,n}(\alpha_1,\ldots,\alpha_{k-1}) \ =\ \size\left(\SetOf{x\in\ZZ^k}{\sum_{i=1}^k x_i=d
\text{ and } 0 < x_j < \alpha_j \text{ for $j\leq k$}}\right)
\] with $\alpha_k=\beta_k$. \end{proposition} \begin{proof}
Fix non-negative numbers $\alpha_1,\ldots,\alpha_k$ that sum up to $n$.
The number of connected nested $(d,n)$-matroids with $k+1$ cyclic flats $\emptyset=F_0\subsetneq F_1\subsetneq\ldots\subsetneq F_k=[n]$ that satisfy $\size(F_j-F_{j-1})=\alpha_j$ is determined by the following product of binomial coefficients, weighted by the number $\mu^{d,n}_k$ of possibilities for the ranks of the cyclic flats
\[
\mu_k^{d,n}(\alpha_1,\ldots,\alpha_{k-1}) \prod_{j=1}^k\binom{n-\alpha_1-\ldots-\alpha_{j-1}}{\alpha_j} \enspace .
\] Clearly, the rank function satisfies $0 < \rank( F_j )-\rank( F_{j-1} ) < \size( F_j-F_{j-1} ) = \alpha_j$, hence $\alpha_j\geq 2$. Moreover, the last binomial coefficient is equal to one. The number $\alpha_k$ is determined by $\alpha_k=n-\sum_{j=1}^{k-1}\alpha_j$. We get that the number of connected nested $(d,n)$-matroids with $k+1$ cyclic flats is given by
\[
\sum_{\alpha_1=2}^{\beta_1 - 2(k-1)} \cdots \sum_{\alpha_{k-1}=2}^{\beta_{k-1}-2} \mu_k^{d,n}(\alpha_1,\ldots,\alpha_{k-1}) \prod_{j=1}^{k-1}\binom{\beta_j}{\alpha_j} \enspace .
\]
We derive the number of $k$-splits by division by $k$.
This completes the proof. \end{proof}
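The proof suggests a direct computation: enumerate the compositions $\alpha$, weight by $\mu_k^{d,n}$ and the binomials, and divide by $k$. The Python sketch below (added; not from the original) does this and reproduces, e.g., the $3$ two-splits of the octahedron $\Delta(2,4)$ and the counts $(k-1)!\,(2k-1)!!$ of the next example.

```python
from itertools import product
from math import comb

def mu(d, alphas):
    """Number of rank increment vectors x with sum d and 0 < x_j < alpha_j."""
    return sum(1 for x in product(*(range(1, a) for a in alphas))
               if sum(x) == d)

def compositions(n, k):
    """All tuples (alpha_1,...,alpha_k) with every alpha_j >= 2 and sum n."""
    if k == 1:
        if n >= 2:
            yield (n,)
        return
    for a in range(2, n - 2 * (k - 1) + 1):
        for tail in compositions(n - a, k - 1):
            yield (a,) + tail

def num_k_splits(d, n, k):
    """Count the k-splits of Delta(d,n) by counting nested matroids over k."""
    total = 0
    for alphas in compositions(n, k):
        ways, rest = 1, n
        for a in alphas[:-1]:
            ways *= comb(rest, a)  # choose the set difference F_j - F_{j-1}
            rest -= a
        total += mu(d, alphas) * ways
    return total // k
```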
\begin{example}
Consider the case that $n=d+k=2k$.
The number of loop- and coloop-free partition $(k,n)$-matroids equals $(2k-1)!! = (2k-1)(2k-3)\cdots 1$,
as $\alpha_j=2$ for all $j\leq k$.
The number of $k$-splits in $\Delta(k,2k)$ equals $(k-1)!\,(2k-1)!!$, and the number of connected nested matroids with $k+1$ cyclic flats equals $k!\,(2k-1)!!$.
Note that in this case all these $k$-splits, partitions and nested matroids are equivalent under permutations of the ground set $[n]$. \end{example}
Combining Theorem~\ref{thm:main1} and Theorem~\ref{thm:main2} leads to an enumeration of all $k$-splits of the product of simplices $\Delta_{d-1}\times\Delta_{\ell-1}$, by splitting the connected component $C_j$ into $A_j$ and $B_j$ with $\rank(C_j) = \size(A_j)$. Note that the number of $k$-splits of a product of simplices cannot simply be derived from the number of $k$-splits of a hypersimplex by double counting, since each $k$-split is covered by multiple vertex figures whose number depends on the $k$-split and no product of simplices covers all $k$-splits of a hypersimplex. \begin{theorem}\label{thm:simplices}
The $k$-splits of $\Delta_{d-1}\times\Delta_{\ell-1}$ are in bijection with
collections of $k$ pairs $(A_1,B_1),\ldots,(A_k,B_k)$, such that
$A_1,\ldots,A_k$ is a partition of $[d]$ and $B_1,\ldots,B_k$ is a partition of $[\ell]$.
In particular, the number of $k$-splits of $\Delta_{d-1}\times\Delta_{\ell-1}$ equals
\[
\frac{1}{k} \left(
\sum_{\alpha_1=1}^{\beta_1-(k-1)} \cdots \sum_{\alpha_{k-1}=1}^{\beta_{k-1}-1} \,
\prod_{j=1}^{k-1}\binom{\beta_j}{\alpha_j} \right)\cdot \left(
\sum_{\gamma_1=1}^{\delta_1-(k-1)} \cdots \sum_{\gamma_{k-1}=1}^{\delta_{k-1}-1} \,
\prod_{j=1}^{k-1}\binom{\delta_j}{\gamma_j} \right) \enspace ,
\] where $\beta_i = d-\sum_{j=1}^{i-1} \alpha_j$ and $\delta_i = \ell-\sum_{j=1}^{i-1} \gamma_j$ . \end{theorem}
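The two factors in Theorem~\ref{thm:simplices} count ordered partitions of $[d]$ and $[\ell]$ into $k$ nonempty blocks. The following sketch (added; illustrative only) evaluates the formula:

```python
from math import comb

def ordered_partitions(n, k):
    """Number of ordered partitions of an n-set into k nonempty blocks;
    this equals one factor of the formula (the last binomial is 1)."""
    if k == 1:
        return 1 if n >= 1 else 0
    return sum(comb(n, a) * ordered_partitions(n - a, k - 1)
               for a in range(1, n - (k - 1) + 1))

def num_k_splits_product(d, l, k):
    """Number of k-splits of the product Delta_{d-1} x Delta_{l-1}."""
    return ordered_partitions(d, k) * ordered_partitions(l, k) // k
```

For instance, the unit square $\Delta_1\times\Delta_1$ has exactly its two diagonal splits.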
\section{Coarsest matroid subdivisions}\label{sec:computations} \noindent We have enumerated specific coarsest matroid subdivisions. In this section we will compare two constructions for coarsest matroid subdivisions. We have seen already the first of these constructions for matroid subdivisions. The Stiefel map lifts rays of $\Delta_{d-1}\times\Delta_{n-d-1}$ to rays of the Dressian $\Dr(d,n)$. This construction for rays has been studied in \cite{HerrmannJoswigSpeyer:2014} under the name of \enquote{tropically rigid point configurations}. Other (coarsest) matroid subdivisions can be constructed via matroids. Let $M$ be a $(d,n)$-matroid. The \emph{corank vector} of $M$ is the map \[
\rho_M: \binom{[n]}{d} \to \NN ,\quad S \mapsto d-\rank_M(S) \enspace . \] The corank vector is a tropical Pl\"ucker vector. Moreover, the induced subdivision contains the matroid polytope $\polytope(M)$ as a cell; see \cite[Example 4.5.4]{Speyer:2005} and \cite[Proposition 34]{JoswigSchroeter:2017}.
There are coarsest matroid subdivisions, obtained from corank vectors, that are not in the image of the Stiefel map; see \cite[Figure 7]{HerrmannJoswigSpeyer:2014} and \cite[Theorem 41]{JoswigSchroeter:2017}.
There are matroid subdivisions that are both induced by the Stiefel map and corank subdivisions. \begin{example}
We have seen in Theorem~\ref{thm:main1}, that every multi-split of the hypersimplex is induced by the Stiefel map.
Moreover, each multi-split is a corank subdivision. The maximal cells are nested matroids. This follows from Theorem~\ref{thm:main2} combined with the methods of \cite[Section 4]{JoswigSchroeter:2017}. \end{example}
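For the matroid $M$ of Example~\ref{ex:2split} the corank vector can be computed directly from the bases. The sketch below (added for illustration) shows that only the set $\{1,2\}$ is lifted, which reproduces the $2$-split of the octahedron with $\polytope(M)$ as the cell at height $0$.

```python
from itertools import combinations

bases = [{1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}]  # the matroid M of Example ex:2split
d, n = 2, 4

def rank(S):
    """Rank of S: size of a largest subset of S contained in a basis."""
    return max(len(set(S) & B) for B in bases)

# corank vector rho_M(S) = d - rank(S) on all d-subsets of [n]
corank = {S: d - rank(S) for S in combinations(range(1, n + 1), d)}
```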
A subdivision that is induced by a corank vector satisfies the following criteria. With these we are able to certify that a matroid subdivision is not induced by a corank vector. \begin{lemma}\label{lem:corank}
Let $M$ be a $(d,n)$-matroid and $\Sigma$ the corank subdivision of $\polytope(M)$.
For each vertex $v$ of the hypersimplex $\Delta(d,n)$ there exists a (maximal) cell $\sigma\in\Sigma$ such that
$v\in\sigma$ and $\sigma\cap\polytope(M)\neq\emptyset$.
In particular, the cell $P(M)$ together with the neighboring cells cover all vertices of $\Delta(d,n)$. \end{lemma} \begin{proof}
Let $M$ be a $(d,n)$-matroid and $\Sigma$ the corresponding corank subdivision of the hypersimplex $\Delta(d,n)$.
Furthermore, let $v$ be a vertex of the hypersimplex $\Delta(d,n)$.
Then $v = e_S$ for a set $S\in\tbinom{[n]}{d}$.
Given a basis of $M$, a maximal independent subset of $S$ can be extended to a basis of $M$ by adding $d-\rank(S)$ elements of the given basis.
Hence, there is a sequence $S_{d-\rank(S)},\ldots,S_0\in\tbinom{[n]}{d}$,
such that $S_{d-\rank(S)}=S$, $\size(S_j\cap S_{j+1})=d-1$ and $d-\rank(S_j) = j$ for all $0 \leq j< d-\rank(S)$.
Thus, the corresponding $d-\rank(S)+1$ vertices of the lifted polytope of $\Delta(d,n)$ lie on the hyperplane $x_{n+1}=d-\sum_{i\in S_0}x_i$, where $x_{n+1}$ is the height coordinate, i.e., the corank. This hyperplane determines a face of the lifted polytope and hence a cell $\sigma\in\Sigma$.
Both vertices $v$ and $e_{S_0}\in\polytope(M)$ are contained in $\sigma$. \end{proof}
\begin{lemma}\label{lem:pc_corank}
Let $\Sigma$ be a subdivision of the hypersimplex $\Delta(d,n)$ that is the corank subdivision of a connected matroid
and induced by a regular subdivision of the product of simplices via the Stiefel map. Then the subdivision on the product of simplices $\Delta_{d-1}\times\Delta_{n-d-1}$ is realizable with a $0/1$-vector as lifting function. \end{lemma} \begin{proof}
Clearly, the corank subdivision $\Sigma$ of the matroid $M$ is regular.
Moreover, if $\Sigma$ is induced by the Stiefel map, then there is a vertex $v$ that is contained in each maximal cell.
The matroid polytope $\polytope(M)$ is a maximal cell as $M$ is connected.
Hence, the vertex $v$ is a vertex of $\polytope(M)$ and the characteristic vector of a basis of $M$.
This implies that the neighbors of $v$ are of corank $0$ and $1$.
This shows that the restriction of the corank lifting to the neighbors of $v$ has the required form. \end{proof}
We will apply Lemma~\ref{lem:pc_corank} to tropical point configurations. These are vectors in the tropical torus $\RR^d/(1,\ldots,1)\RR$. The line segment in the tropical torus between the two points $v$ and $w$ is the set $\smallSetOf{u\in \RR^d/(1,\ldots,1)\RR}{\lambda,\mu\in\RR \text{ and } u_i = \min (v_i+\lambda,\, w_i+\mu) }$. Note that such a line segment consists of several ordinary line segments, with additional (pseudo-)vertices. The \emph{tropical convex hull} of a set of points is the smallest set such that all line segments between points are in this set. Such a tropical convex hull of finitely many points decomposes naturally into a polyhedral complex. The cells in the tropical convex hull of a tropical point configuration of $(n-d)$ points in $\RR^d/(1,\ldots,1)\RR$ are in bijection with the cells of a regular subdivision of the product $\Delta_{d-1}\times\Delta_{n-d-1}$, where the height of $e_i+e_j$ is the $j$-th coordinate of the $i$-th point in the tropical point configuration; see \cite[Lemma 22]{DevelinSturmfels:2004}. A tropical point configuration is \emph{tropically rigid} if it induces a coarsest (non-trivial) subdivision on the product of simplices $\Delta_{d-1}\times\Delta_{n-d-1}$.
A tropical point configuration corresponds to a corank subdivision if the points are realizable by $0/1$ coordinates in $\RR^d$ or equivalently by $-1$, $0$ and $1$ in the tropical torus. In particular, there is a point that has lattice distance at most one to each other point. This criterion certifies that the next examples are not corank subdivisions. \begin{figure}
\caption{The nine rigid tropical point configurations of Example~\ref{ex:nine_tpc}, each of which is a tropical convex hull of six points.}
\label{fig:troppointconf}
\end{figure}
The following illustrates examples of coarsest non-corank subdivisions. \begin{example}\label{ex:nine_tpc}
Figure~\ref{fig:troppointconf} shows nine rigid tropical point configurations out of $36$ symmetry classes.
They correspond to nine coarsest subdivisions of $\Delta_2\times\Delta_5$.
The Stiefel map of those induces coarsest matroid subdivisions of the hypersimplex $\Delta(3,9)$.
None of those is a corank subdivision.
Proposition~\ref{prop:comp} shows that these are all rigid tropical point configurations that do not lift to a corank subdivision of the hypersimplex $\Delta(3,9)$. \end{example} We lifted those to rays of the hypersimplex $\Delta(3,9)$ and checked whether they are equivalent to corank liftings. For this computation we used both the software \polymake \cite{DMV:polymake} and \mptopcom \cite{JordanJoswigKastner:2017}. Before we state our computational result, note that there is a natural symmetry action of the symmetric group on $n$ elements on the hypersimplex $\Delta(d,n)$. This group acts on the hypersimplex, by permutation of the coordinate directions. From our computations we got the following result.
\begin{proposition}\label{prop:comp} The nine liftings illustrated as tropical point configuration in Figure~\ref{fig:troppointconf} lead to coarsest regular subdivisions of $\Delta(3,9)$. These are, up to symmetry, all coarsest regular subdivisions of $\Delta(3,9)$ that are induced by the Stiefel map and not by a corank lift. \end{proposition}
We will close with two enumerative results about the number of coarsest regular matroid subdivisions of the hypersimplex $\Delta(d,n)$ for small parameters $d$ and $n$. With the previously mentioned methods we have computed all coarsest regular subdivisions of $\Delta_{d-1}\times\Delta_{n-d-1}$ for small parameters of $d$ and $n$ and lifted them to the hypersimplex. Note that this is a massive computation, as there are $7402421$ symmetry classes of triangulations for the product $\Delta_{3}\times\Delta_{4}$ and the acting symmetric group has $9!$ elements. Another example is $\Delta_{2}\times\Delta_{6}$ where the number of symmetry classes of triangulations in the regular flip component is $533242$ and the group has $10!=3628800$ elements. For each symmetry class a convex hull computation is necessary and after that another reduction that checks for symmetry.
The number of all these subdivisions up to symmetry is listed in Table~\ref{tab:stiefel} on the last page. Note that we do not count the number of coarsest regular subdivisions of $\Delta_{d-1}\times\Delta_{n-d-1}$.
For our second result we computed all corank subdivisions for all matroids in the \polymake database available at \href{https://db.polymake.org}{db.polymake.org}. This database is based on a classification of matroids of small rank with few elements by Matsumoto, Moriyama, Imai and Bremner \cite{Matsumotoetal:2012}. We obtained the coarsest subdivisions by computing the secondary cones. The number of all of these subdivisions is given in Table~\ref{tab:corank}.
Combining both techniques we got the following result. \begin{proposition}
The number of coarsest matroid subdivisions of $\Delta(d,n)$ for $d\leq 4$ and $n\leq 10$, excluding the case $d=4$, $n=10$, is bounded from below by the numbers listed in Table~\ref{tab:total}. \end{proposition}
\begin{table}[t]
\centering
\caption{Numbers of symmetry classes of coarsest matroid subdivisions in the hypersimplex $\Delta(d,n)$.}
\begin{subtable}[c]{0.38\textwidth}
\subcaption{The number in the Stiefel image.\label{tab:stiefel}}
\begin{tabular}{l@{\hskip 2.5mm}rrrrrrr}
\toprule
$d\backslash n$ & 4 & 5 & 6 & 7 & 8 & 9 & 10\\
\midrule
2& 1 & 1& 2& 2& 3& 3& 4\\
3& & 1& 3& 5& 11& 36& 207\\
4& & & 2& 5& 39& 2949& --\\
\bottomrule
\end{tabular}
\end{subtable}
\hspace{1.5cm}
\begin{subtable}[c]{0.38\textwidth}
\subcaption{The number of corank subdivisions.\label{tab:corank}}
\begin{tabular}{l@{\hskip 2.5mm}rrrrrrr}
\toprule
$d\backslash n$ & 4 & 5 & 6 & 7 & 8 & 9 & 10\\
\midrule
2& 1 & 1& 2& 2& 3& 3& 4\\
3& & 1& 3& 5& 12& 38& 139\\
4& & & 2& 5& 33& 356& --\\
\bottomrule
\end{tabular}
\end{subtable} \end{table}
\begin{table}[t]
\centering
\caption{The number of coarsest matroid subdivisions in $\Delta(d,n)$ that are either corank subdivisions or in the image of the Stiefel map.}
\label{tab:total}
\begin{subtable}[c]{0.58\textwidth}
\subcaption{The number without any identifications.}
\begin{tabular}{l@{\hskip 2.5mm}rrrrrrr}
\toprule
$d\backslash n$ & 4 & 5 & 6 & 7 & 8 & 9 & 10\\
\midrule
2& 3& 10& 25& 56& 119& 246& 501\\
3& & 10& 65& 616& 15470& 1220822& 167763972\\
4& & & 25& 616& 217945& 561983523 & --\\
\bottomrule
\end{tabular}
\end{subtable}
\newline
\rule{0pt}{2.1cm}
\begin{subtable}[c]{0.38\textwidth}
\subcaption{The number of symmetry classes.}
\begin{tabular}{l@{\hskip 2.5mm}rrrrrrr}
\toprule
$d\backslash n$ & 4 & 5 & 6 & 7 & 8 & 9 & 10\\
\midrule
2& 1 & 1& 2& 2& 3& 3& 4\\
3& & 1& 3& 5& 12& 47& 287\\
4& & & 2& 5& 43&3147& --\\
\bottomrule
\end{tabular}
\end{subtable} \end{table}
\noindent {\bf Acknowledgements.}\ I am indebted to Michael Joswig and Georg Loho for various helpful suggestions. My research is carried out in the framework of Matheon supported by Einstein Foundation Berlin (Project \enquote{MI6 - Geometry of Equilibria for Shortest Path}).
\end{document}
"id": "1707.02814.tex",
"language_detection_score": 0.7550859451293945,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Stabilizer states and Clifford operations for systems of arbitrary dimensions,\\ and modular arithmetic.}
\author{Erik Hostens} \email{erik.hostens@esat.kuleuven.ac.be} \affiliation{Katholieke Universiteit Leuven, ESAT-SCD, Belgium} \author{Jeroen Dehaene} \affiliation{Katholieke Universiteit Leuven, ESAT-SCD, Belgium} \author{Bart De Moor} \affiliation{Katholieke Universiteit Leuven, ESAT-SCD, Belgium} \date{\today}
\begin{abstract} We describe generalizations of the Pauli group, the Clifford group and stabilizer states for qudits in a Hilbert space of arbitrary dimension $d$. We examine a link with modular arithmetic, which yields an efficient way of representing the Pauli group and the Clifford group with matrices over $\mathbb{Z}_d$. We further show how a Clifford operation can be efficiently decomposed into one and two-qudit operations. We also focus in detail on standard basis expansions of stabilizer states. \end{abstract}
\pacs{03.67.-a}
\maketitle
\section{Introduction} We study stabilizer states and Clifford operations for systems built from qudits (systems with a $d$-dimensional Hilbert space). We work in a matrix framework using modular arithmetic, generalizing results for qubits from Ref.~\cite{D:03}. We put special emphasis on the less studied case where $d$ is not prime.
The stabilizer formalism has already proved to be useful in many applications such as quantum error correction, entanglement distillation and quantum computation \cite{GPhD,DVD:03,Gfault,R:03}.
The $n$-qu\emph{d}it generalized Pauli group and Clifford group and the related concepts of stabilizer codes and states have been studied in various levels of detail in a number of papers \cite{N:02,V:02,G:98,K:96,AK:01,Gr:03,Gr:04,GrQ,S1,S2}.
Our motivation is not so much the study of stabilizer codes and their error correcting capacities, but the study of mathematically interesting states and operations that could play a role in quantum algorithms. Although it is well known that quantum algorithms built from stabilizer states and Clifford operations alone can be simulated efficiently on a classical computer, we think it is likely that the rich structure of this formalism will play a role in future quantum algorithms. Due to this focus, we pay attention to describing and realizing Clifford operations in more detail than is usually needed for coding applications. (To specify a Clifford operation ``completely'' (that is, up to only a global phase), one has to specify the image under conjugation of $2n$ independent Pauli operations including the resulting phase, whereas to realize an encoding operator for a $k$-dimensional code, only $k$ images are needed and the phases are of minor importance.)
Besides presenting known results in an often different and, in our opinion, practical language, we also present results not contained in the references above.
We give a description of an $n$-qudit Clifford operation by a $2n\times 2n$ matrix $C$ with entries in $\mathbb{Z}_d$ and a $2n$-dimensional vector $h$ with entries in $\mathbb{Z}_{2d}$ and derive necessary and sufficient conditions for $C$ and $h$ to define a Clifford operation. We give formulas for multiplying and inverting Clifford operations represented in this way.
We present a decomposition of a general Clifford operation, specified in full detail by a matrix $C$ and $h$, into a selected set of one and two-qudit operations, by thinking in terms of matrix manipulations on $C$ and $h$.
We also focus in detail on the standard basis expansion of stabilizer states. From Refs.~\cite{GrQ,S1,S2} formulas can be derived describing the standard basis expansion of graph states by means of a quadratic form. In Refs.~\cite{S2,GrQ} this is done for the case when the 1-qudit configuration space $\{1,\ldots,d\}$ is given the structure of a finite field. In Ref.~\cite{S1} this space can be any finite Abelian group. In this paper we consider cyclic groups. Refs.~\cite{S2,GrQ} state the equivalence of graph states and general stabilizer states. In Ref.~\cite{GrQ} this equivalence is to be understood as local Clifford equivalence. That is, any $n$-qudit stabilizer state (with a field as $1$-qudit configuration space) can be transformed into a graph state through the action of $n$ one-qudit Clifford operations. In our setting, however, as we are not focusing on codes, we want a description of the original stabilizer state (without the local Clifford operations) as well. In Ref.~\cite{S2} another notion of equivalence between graph states and stabilizer states is used (introducing the concept of auxiliary nodes in the graph). As a result the standard basis expansion of a general stabilizer state is not described directly but as a sum of a large number of states. Moreover, for the case where the configuration space is not a field (in our case that is when $d$ is not prime) not all stabilizer states are equivalent to graph states but an extra condition has to be imposed. In the present paper we work with a more general description of stabilizer states without this extra condition (described below by matrices $S$ with possibly more than $n$ columns) and we give a direct description (without such a sum) of the standard basis expansion of general stabilizer states. We believe that standard basis expansions of stabilizer states can be an essential ingredient in understanding the action of non-Clifford operations on stabilizer states.
This paper is structured as follows. Definitions of generalizations of the Pauli group and the Clifford group for qudits are given in section~\ref{secPC}, together with their matrix representation. Special Clifford operations, which are of particular interest in the decomposition of a Clifford operation, are discussed in section~\ref{secSC}. An efficient decomposition of a Clifford operation on $n$ qudits into a selected set of one- and two-qudit Clifford operations is explained in section~\ref{secDC}. In section~\ref{secS}, we define stabilizer states of $n$ qudits and show that the expansion in the standard basis can be described with linear and quadratic operations.
In the following, by $A=B\mod d$ we mean that all corresponding entries of matrices $A$ and $B$ are equal modulo $d$, where $d$ is an integer different from 0. We will also write $a=b\mod c$ with $a$, $b$ and $c$ vectors, as a shorthand notation for $a_i=b_i\mod c_i$, for every $i=1\ldots n$.
\section{The generalized Pauli group and Clifford group}\label{secPC} In this section, we discuss the description of the generalized Pauli group on $n$ qudits and the generalized Clifford group in modular arithmetic. Generalizations of the Pauli group to systems of arbitrary dimensions are discussed in Refs.~\cite{N:02,V:02,G:98}. The Clifford group is defined as the group containing all unitary operations that map the Pauli group to itself under conjugation.
\subsection{The generalized Pauli group} Let $d$ be the Hilbert space dimension of one qudit. We define unitary operations $X^{(d)}$ and $Z^{(d)}$ as follows \begin{equation} \label{XZ}
\begin{array}{rcl} X^{(d)}|j\rangle & = & |j+1\rangle, \\
Z^{(d)}|j\rangle & = & \omega^j|j\rangle, \end{array} \end{equation} where $j\in\mathbb{Z}_d$ and $\omega$ is a primitive $d$-th root of unity. Addition in the ket is carried out modulo $d$. Tensor products of these operations will be denoted as follows: for $v,w\in\mathbb{Z}_d^n$ and $a:=\left[\begin{array}{c} v \\ w \end{array}\right] \in\mathbb{Z}_d^{2n}$, we denote \begin{equation}\label{XZ(a)} X\!\!Z(a):=X^{v_1}Z^{w_1}\otimes\ldots\otimes X^{v_n}Z^{w_n}. \end{equation} From (\ref{XZ}) and (\ref{XZ(a)}), it follows that, for $x\in\mathbb{Z}_d^n$, \begin{equation}
X\!\!Z(a)|x\rangle=\omega^{w^Tx}|x+v\rangle. \end{equation} We define the Pauli group ${\cal P}_n$ on $n$ qudits to contain all $d^{2n}$ tensor products (\ref{XZ(a)}) with an additional complex phase factor $\zeta^\delta$, where $\zeta$ is a square root of $\omega$ and $\delta\in\mathbb{Z}_{2d}$. In the following, we will omit the superscript $(d)$ and refer to the generalized Pauli group simply as Pauli group.
Multiplication of two Pauli group elements can be translated into operations on vectors in $\mathbb{Z}_d^{2n}$ as follows: \begin{equation} \label{paulimult} \zeta^{\delta}X\!\!Z(a)\zeta^{\epsilon}X\!\!Z(b)=\zeta^{\delta+\epsilon+2a^TUb}X\!\!Z(a+b), \end{equation} where $U:=\left[\begin{array}{cc} 0_n & 0_n \\ I_n & 0_n \end{array}\right]$. Addition in the argument of $X\!\!Z$ is done modulo $d$, and addition in the exponent of $\zeta$ is done modulo $2d$. Eq.~(\ref{paulimult}) yields the commutation relation: \begin{equation} \label{paulicomm} X\!\!Z(a)X\!\!Z(b)=\omega^{a^TPb}X\!\!Z(b)X\!\!Z(a), \end{equation} \begin{equation}\mbox{where}\quad P=U-U^T\mod d.\end{equation} Note that the order of $X\!\!Z(a)$ divides $d$ unless $d$ is an even number and $a^TUa$ is odd. In the latter case the order is $2d$. Indeed, with (\ref{paulimult}) one can easily verify that $X\!\!Z(a)^d=\zeta^{d(d-1)a^TUa}I$. The introduction of a phase $\zeta^\delta$ rather than $\omega^\delta$ is only necessary when $d$ is even. Simplifications for odd $d$ are considered in Appendix~\ref{appsimp}.
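The multiplication rule (\ref{paulimult}) acts directly on the pair (phase exponent, vector). A minimal Python sketch (the function name \texttt{pauli\_mult} is ours, not from the text) represents $\zeta^\delta X\!\!Z(a)$ as $(\delta \bmod 2d,\ a\in\mathbb{Z}_d^{2n})$ and exploits that $a^TUb=\sum_i a_{n+i}b_i$ for the block matrix $U$:

```python
def pauli_mult(d, n, delta, a, eps, b):
    """Product zeta^delta XZ(a) * zeta^eps XZ(b) = zeta^phi XZ(c),
    following Eq. (paulimult): phi = delta + eps + 2 a^T U b mod 2d,
    c = a + b mod d.  Vectors are plain lists of length 2n."""
    aUb = sum(a[n + i] * b[i] for i in range(n))   # a^T U b, U = [[0,0],[I,0]]
    phi = (delta + eps + 2 * aUb) % (2 * d)
    c = [(a[i] + b[i]) % d for i in range(2 * n)]
    return phi, c
```

For $d=3$, $n=1$ this reproduces the commutation phase: $XZ$ carries no phase while $ZX=\zeta^2 X\!\!Z([1,1])=\omega\, X\!\!Z([1,1])$, and $(X\!\!Z([1,1]))^3=\zeta^{d(d-1)a^TUa}I=I$.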
\subsection{The generalized Clifford group} We now define a generalization of the Clifford group on $n$ qudits in an analogous way as for qubits. A Clifford operation $Q$ is a unitary operation that maps the Pauli group on $n$ qudits to itself under conjugation, or \begin{equation*} Q{\cal P}_nQ^\dag={\cal P}_n. \end{equation*} Because
$Q X\!\!Z(a)X\!\!Z(b)Q^\dag=(Q X\!\!Z(a)Q^\dag)(Q X\!\!Z(b)Q^\dag)$, it is sufficient to know the image of a generating set of the Pauli group in order to know the image of all Pauli group elements. $Q$ is then defined up to a global phase factor. This can be seen as follows. Suppose that two Clifford operations $Q_1$ and $Q_2$ give rise to the same image for every Pauli group element, or: for every $A\in{\cal P}_n:~Q_1AQ_1^\dag=Q_2AQ_2^\dag$. It follows for every $A$ that $Q_2^\dag Q_1A=AQ_2^\dag Q_1$. The only unitary operations that commute with every single Pauli group element are multiples of the identity \footnote{Indeed, let $U=\sum_{x,y\in\mathbb{Z}_d^n}q_{xy}|x\rangle\langle y|$ be a unitary operation that commutes with every Pauli group element. From $Z(w)U=UZ(w)$, for all $w\in\mathbb{Z}_d^n$, where $Z(w)$ stands for $Z^{w_1}\otimes\ldots\otimes Z^{w_n}$, it follows that $q_{xy}=0$ for all $x\not=y$. Thus $U=\sum_{x\in\mathbb{Z}_d^n}q_x|x\rangle\langle x|$. From $U=X(v)UX(v)^\dag$, for all $v\in\mathbb{Z}_d^n$, where $X(v)$ stands for $X^{v_1}\otimes\ldots\otimes X^{v_n}$, it follows that all $q_x$ are equal.}, which completes the proof. We take the generating set of the Pauli group to be $X\!\!Z(E_k),~k=1,\ldots,2n$, where $E_k$ are the standard basis vectors of $\mathbb{Z}_d^{2n}$. We denote their images under conjugation by $Q$ as $\zeta^{h_k}X\!\!Z(C_k)$. We will assemble the vectors $C_k$ as the columns of a matrix $C\in\mathbb{Z}_d^{2n\times 2n}$ and the scalars $h_k$ in a vector $h\in\mathbb{Z}_{2d}^{2n}$. The image $\zeta^\epsilon X\!\!Z(b)$ of $\zeta^\delta X\!\!Z(a)$ under conjugation by $Q$, where $a$ is an arbitrary vector in $\mathbb{Z}_d^{2n}$, can be found by repeated application of (\ref{paulimult}). This yields \begin{equation} \label{imagepauli} \begin{array}{rcl} b & = & Ca \mod d, \\
\epsilon & = & \delta+\bigl(h-{\cal V}_{\mathrm{diag}}(C^TUC)\bigr)^Ta +\\
& & a^T\bigl(2{\cal P}_{\mathrm{upps}}(C^TUC)+{\cal P}_{\mathrm{diag}}(C^TUC)\bigr)a \mod 2d, \end{array} \end{equation} where ${\cal V}_{\mathrm{diag}}(M)$ is defined as the vector containing the diagonal of $M$, ${\cal P}_{\mathrm{diag}}(M)$ the diagonal matrix with the diagonal of $M$ and ${\cal P}_{\mathrm{upps}}(M)$ the strictly upper triangular part of $M$. The Clifford operation $Q$ is (up to a global phase factor) completely defined by $C$ and $h$. Note that the rhs of (\ref{imagepauli}) is calculated modulo $2d$, although it contains matrices over $\mathbb{Z}_d$. It can be verified that every entry modulo $d$ in the expression is multiplied by an even factor.
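The phase update in (\ref{imagepauli}) is easy to get wrong by hand. The following sketch (our own helper names, plain Python lists as matrices) computes the image $(b,\epsilon)$ of $\zeta^\delta X\!\!Z(a)$ under a Clifford operation given by $(C,h)$:

```python
def image_of_pauli(d, n, C, h, delta, a):
    """Image zeta^eps XZ(b) of zeta^delta XZ(a) under conjugation by the
    Clifford operation defined by (C, h); see Eq. (imagepauli)."""
    N = 2 * n
    b = [sum(C[i][k] * a[k] for k in range(N)) % d for i in range(N)]
    # M = C^T U C with U = [[0,0],[I,0]]:  M[i][j] = sum_l C[n+l][i] * C[l][j]
    M = [[sum(C[n + l][i] * C[l][j] for l in range(n)) for j in range(N)]
         for i in range(N)]
    # eps = delta + (h - Vdiag(M))^T a + a^T (2 Pupps(M) + Pdiag(M)) a  mod 2d
    eps = delta + sum((h[i] - M[i][i]) * a[i] for i in range(N))
    for i in range(N):
        eps += M[i][i] * a[i] * a[i]            # Pdiag(M) contribution
        for j in range(i + 1, N):
            eps += 2 * M[i][j] * a[i] * a[j]    # 2 Pupps(M) contribution
    return b, eps % (2 * d)
```

As sanity checks: the one-qudit Fourier transform ($C=\left[\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right]$, $h=0$, treated later in the text) maps $X$ to $Z$ without phase, and the Pauli operation $X$ itself ($C=I$, $h=-2Pa$) maps $Z$ to $\zeta^{-2}Z=\omega^{-1}Z$.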
We can compose two Clifford operations $Q$ and $Q'$, which again yields a Clifford operation $Q''=Q'Q$. To find its corresponding $C''$ and $h''$ we have to find the images under the second operation of the images under the first operation of the standard basis vectors. By using (\ref{imagepauli}), we get \begin{equation} \label{combiclifford} \begin{array}{rcl} C'' & = & C'C \mod d, \\
h'' & = & h+C^Th'+{\cal V}_{\mathrm{diag}}\Bigl(C^T\bigl(2{\cal P}_{\mathrm{upps}}({C'}^TUC')+\\
& & {\cal P}_{\mathrm{diag}}({C'}^TUC')\bigr)C\Bigr)-C^T{\cal V}_{\mathrm{diag}}({C'}^TUC')\\&& \mod 2d. \end{array} \end{equation}
The inverse $Q^\dag$ of a Clifford operation $Q$ defined by $C$ and $h$ is defined by $C'$ and $h'$, where \begin{equation} \begin{array}{rcl} C' & = & C^{-1} \mod d, \\
h' & = & -C^{-T}\Biggl(h+{\cal V}_{\mathrm{diag}}\Bigl(C^T\bigl(2{\cal P}_{\mathrm{upps}}(C^{-T}UC^{-1})+\\
& & {\cal P}_{\mathrm{diag}}(C^{-T}UC^{-1})\bigr)C\Bigr)-C^T{\cal V}_{\mathrm{diag}}(C^{-T}UC^{-1})\Biggr)\\&& \mod 2d, \end{array} \end{equation} which can be verified with (\ref{combiclifford}). $M^{-T}$ is short for $\left(M^{-1}\right)^T$. We will show below that $C^{-1}=-PC^TP\mod d$.
\subsection{Conditions on $C$ and $h$} Not all $C\in\mathbb{Z}_d^{2n\times 2n}$ and $h\in\mathbb{Z}_{2d}^{2n}$ define a Clifford operation. To see this, consider a Clifford operation $Q$ with corresponding $C$ and $h$. From the commutation relation (\ref{paulicomm}) it follows that $C$ is a symplectic matrix, i.e. $C$ satisfies $C^TPC=P\mod d$. Indeed, we have \begin{eqnarray*} X\!\!Z(a) X\!\!Z(b) & = & \omega^{a^TPb} X\!\!Z(b) X\!\!Z(a) \\ Q X\!\!Z(a) Q^{\dag} Q X\!\!Z(b) Q^\dag & = & \omega^{a^TPb} Q X\!\!Z(b) Q^{\dag} Q X\!\!Z(a) Q^{\dag} \\ X\!\!Z(Ca) X\!\!Z(Cb) & = & \omega^{a^TPb} X\!\!Z(Cb) X\!\!Z(Ca), \end{eqnarray*} where we omitted global phase factors on the lhs and rhs, as they cancel each other. Also, \begin{equation*} X\!\!Z(Ca) X\!\!Z(Cb) = \omega^{a^TC^TPCb} X\!\!Z(Cb) X\!\!Z(Ca). \end{equation*} Since this holds for every value of $a$ and $b$, it follows that $C$ is symplectic. Note that the inverse of a symplectic matrix $C$ is simply $C^{-1}=-PC^TP\mod d$. Secondly, $h$ satisfies \begin{equation}\label{condh} (d-1){\cal V}_{\mathrm{diag}}(C^TUC)+h=0\mod 2, \end{equation} for $\zeta^{h_k}X\!\!Z(C_k)=QX\!\!Z(E_k)Q^\dag$ has, like $X\!\!Z(E_k)$, order $d$. With (\ref{paulimult}) we have $\left(\zeta^{h_k}X\!\!Z(C_k)\right)^d=\zeta^{d\bigl((d-1)C_k^TUC_k+h_k\bigr)}I$, and it follows that (\ref{condh}) is satisfied. We will prove below that every symplectic $C$ and $h$ satisfying (\ref{condh}) define a Clifford operation $Q$.
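The symplectic condition and the inverse formula are mechanical to check numerically. A minimal sketch (helper names are ours), using as test case the matrix of the two-qudit SUM gate introduced in the next section, with $d=5$:

```python
def matmul(A, B, d):
    """Matrix product modulo d, with matrices as plain nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % d
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def P_matrix(n, d):
    # P = U - U^T mod d, with U = [[0,0],[I,0]] in n x n blocks
    P = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        P[i][n + i] = (-1) % d
        P[n + i][i] = 1
    return P

def is_symplectic(C, n, d):
    """Check C^T P C = P mod d."""
    P = P_matrix(n, d)
    return matmul(matmul(transpose(C), P, d), C, d) == P

def symplectic_inverse(C, n, d):
    """C^{-1} = -P C^T P mod d, valid for symplectic C."""
    P = P_matrix(n, d)
    M = matmul(matmul(P, transpose(C), d), P, d)
    return [[(-x) % d for x in row] for row in M]
```

A diagonal matrix with a non-invertible determinant, such as $\mathrm{diag}(2,1)$ for $d=5$... is of course still invertible here; the point of the last assertion below is that not every invertible matrix is symplectic.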
\section{Special Clifford operations}\label{secSC} In this section we present a number of special Clifford operations and their defining $C$ and $h$. These will be of particular interest for the decomposition of an arbitrary Clifford operation into one- and two-qudit Clifford operations. \begin{itemize} \item The Pauli group elements $X\!\!Z(a)$ are a special class of the Clifford operations. Note that, like for any Clifford operation, the global phase factor of a Pauli group element cannot be represented. Considering the images of $X\!\!Z(E_k)$, it can be easily verified that $X\!\!Z(a)$ is defined by \begin{equation*} \begin{array}{rccl} C & = & I & \mod d\\ h & = & -2Pa & \mod 2d. \end{array} \end{equation*} \item A Clifford operation acting on a subset $\alpha \subset \{1,\ldots,n\}$ of $n$ qudits gives rise to a symplectic matrix on the rows and columns with indices in $\alpha\cup(\alpha+n)$, embedded in an identity matrix (that is, $C_{kk}=1\mod d$, for every $k\not\in\alpha\cup(\alpha+n)$ and $C_{kl}=0\mod d$ if $k\neq l$ and $k$ or $l$ $\not\in\alpha\cup(\alpha+n)$). Also $h_k=0\mod 2d$ if $k\not\in\alpha\cup(\alpha+n)$.
\item Any invertible linear transformation of the configuration space $|x\rangle\rightarrow|Tx\rangle$ can be realized by a Clifford operation, with $x\in\mathbb{Z}_d^n$ and $T\in\mathbb{Z}_d^{n\times n}$ an invertible matrix modulo $d$. This operation is defined by \begin{equation*} \begin{array}{rccl} C & = & \left[\begin{array}{cc}T & 0\\0 & T^{-T}\end{array}\right] & \mod d\\ h & = & 0 & \mod 2d. \end{array} \end{equation*} This can be verified by looking at the image of $X\!\!Z(a)$, with an arbitrary $a=\left[\begin{array}{c} v \\ w \end{array}\right] \in\mathbb{Z}_d^{2n}$: $QX\!\!Z(a)Q^\dag$ \begin{eqnarray*}
& = & \left(\sum_{x\in\mathbb{Z}_d^n}|Tx\rangle\langle x|\right)\left(\sum_{y\in\mathbb{Z}_d^n}\omega^{w^Ty}|y+v\rangle\langle y|\right)\\&&\left(\sum_{z\in\mathbb{Z}_d^n}|z\rangle\langle Tz|\right)\\
& = & \sum_{y\in\mathbb{Z}_d^n}\omega^{w^Ty}|Ty+Tv\rangle\langle Ty|\\
& = & \sum_{y\in\mathbb{Z}_d^n}\omega^{w^TT^{-1}y}|y+Tv\rangle\langle y|\\ & = & X\!\!Z(\left[\begin{array}{c}Tv\\T^{-T}w\end{array}\right]). \end{eqnarray*}
As $C^TUC=U\mod d$, we see with (\ref{imagepauli}) that $h=0\mod 2d$. Special cases of this class of Clifford operations are qudit permutations, with $C=\left[\begin{array}{cc}\Pi & 0\\0 & \Pi \end{array}\right]$, where $\Pi$ is a permutation matrix, and the two-qudit SUM gate $|x\rangle|y\rangle\rightarrow|x\rangle|x+y\rangle$ with $x,y\in\mathbb{Z}_d$, with \begin{equation*} C=\left[\begin{array}{cccc} 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 1\end{array}\right]\mod d. \end{equation*} Note that this operation is a natural generalization of the two-qubit CNOT gate.
\item The \emph{$d$-dimensional discrete Fourier transform} $|x\rangle\rightarrow\frac{1}{\sqrt{d}}\sum_{k=0}^{d-1}\omega^{kx}|k\rangle$ on one qudit, with $x\in\mathbb{Z}_d$, is defined by $C=\left[\begin{array}{cc}0 & -1\\1 & 0 \end{array}\right]\mod d$ and $h=0\mod 2d$. We verify this in the same way as for the invertible configuration space transformation, now with $a=\left[\begin{array}{c} v \\ w \end{array}\right] \in\mathbb{Z}_d^{2}$: $QX\!\!Z(a)Q^\dag$ \begin{eqnarray*}
\qquad & = & \left(\frac{1}{\sqrt{d}}\sum_{t,u\in\mathbb{Z}_d}\omega^{tu}|t\rangle\langle u|\right)\left(\sum_{y\in\mathbb{Z}_d}\omega^{wy}|y+v\rangle\langle y|\right)\\&&\left(\frac{1}{\sqrt{d}}\sum_{x,z\in\mathbb{Z}_d}\omega^{-xz}|z\rangle\langle x|\right)\\
& = & \frac{1}{d}\sum_{t,y,x\in\mathbb{Z}_d}\omega^{t(y+v)+wy-xy}|t\rangle\langle x|\\
& = & \frac{1}{d}\sum_{y\in\mathbb{Z}_d}\omega^{(t+w-x)y}\sum_{t,x\in\mathbb{Z}_d}\omega^{tv}|t\rangle\langle x|\\
& = & \sum_{x\in\mathbb{Z}_d}\omega^{(x-w)v}|x-w\rangle\langle x|\\ & = & \omega^{-vw}X\!\!Z(\left[\begin{array}{c}-w\\v\end{array}\right]). \end{eqnarray*} As $C^TUC=-U^T\mod d$, we see with (\ref{imagepauli}) that $h=0\mod 2d$. This operation is the qudit equivalent of the Hadamard gate on one qubit.
\item Analogous to the qubit phase gate, a \emph{phase gate} on one qudit can be defined as $|x\rangle\rightarrow\zeta^{x(x+d)}|x\rangle$, with $x\in\mathbb{Z}_d$. This operation corresponds to $C=\left[\begin{array}{cc}1 & 0\\1 & 1 \end{array}\right]\mod d$ and $h=\left[\begin{array}{c}d+1\\0\end{array}\right]\mod 2d$. Indeed, for all $a=\left[\begin{array}{c} v \\ w \end{array}\right] \in\mathbb{Z}_d^{2}$: $QX\!\!Z(a)Q^\dag$ \begin{eqnarray*}
\qquad & = & \sum_{y\in\mathbb{Z}_d}\zeta^{2wy+(y+v)(y+v+d)-y(y+d)}|y+v\rangle\langle y|\\
& = & \zeta^{v(v+d)}\sum_{y\in\mathbb{Z}_d}\omega^{(v+w)y}|y+v\rangle\langle y|. \end{eqnarray*} As $C^TUC=\left[\begin{array}{cc}1&0\\1&0\end{array}\right]\mod d$, $v(v+d)$ must be equal to $\left(h-\left[\begin{array}{c}1\\0\end{array}\right]\right)^Ta+v^2\mod 2d$ according to (\ref{imagepauli}), which is the case for the given $h$. \end{itemize}
\section{Decomposition of a Clifford operation into one and two-qudit operations}\label{secDC} In order to prove that any symplectic matrix $C$ and $h$ satisfying (\ref{condh}) define a Clifford operation, we will expand an arbitrary symplectic $C$ into symplectic elementary row operations that can be realized as Clifford operations on at most two qudits at the same time. What is more, this decomposition is a worthy candidate as a practical realization of a Clifford operation. The possibility of this kind of decomposition into a selected set of one and two-qudit operations is briefly discussed in Ref.~\cite{G:98}. Our scheme is related to the method of Ref.~\cite{N:02} in which Euclid's algorithm is incorporated in order to generate any one-qudit Clifford operation.
First, we mention that the main problem is realizing $C$, not $h$, for once a Clifford operation $Q$ defined by $C$ and $h$ is realized, we can realize $Q'$ defined by $C$ and an arbitrary $h'$ satisfying (\ref{condh}) by doing an extra operation $X\!\!Z\left(CP\frac{h'-h}{2}\right)$ on the left or $X\!\!Z\left(P\frac{h'-h}{2}\right)$ on the right of $Q$. Note that as both $h$ and $h'$ satisfy (\ref{condh}), $h'-h$ is even.
We first give an overview of the elementary row operations that we will use to transform an arbitrary symplectic matrix $C$ into the $2n\times 2n$ identity matrix $I$. As $I$ is formed by left multiplication of such elementary row operations on $C$, a decomposition of $C$ then consists of the inverses of these operations in reverse order. Since these operations act on at most two qudits at the same time, they are defined by a symplectic $4\times 4$ or $2\times 2$ matrix embedded in the identity matrix as explained in the preceding section. In the following, we will only show this part of the operations.
Firstly, we consider some configuration space transformations (of the form $C=\left[\begin{array}{cc}T & 0\\0 & T^{-T}\end{array}\right]$). These operations combine only rows from the same block (we call rows $1\ldots n$ the upper block and rows $n+1\ldots 2n$ the lower block) and have a similar action in both blocks at the same time. For instance, we can switch two rows $i$ and $j$ in the upper block: at the same time, rows $n+i$ and $n+j$ in the lower block are also switched. This operation is defined by \begin{equation*} C=\left[\begin{array}{cccc}0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\end{array}\right]\mod d. \end{equation*} Multiplying a row $i$ with an invertible number $r\in\mathbb{Z}_d$ results in multiplying the corresponding row $n+i$ in the other block by $r^{-1}$. A number $r\in\mathbb{Z}_d$ has an inverse if $r$ and $d$ are coprime, i.e. $\gcd(r,d)=1$. This operation is defined by $C=\left[\begin{array}{cccc}r & 0 \\ 0 & r^{-1} \end{array}\right]$. The last configuration space transformation we consider, is adding one row $i$ multiplied by an arbitrary factor $g\in\mathbb{Z}_d$ to another row $j$. At the same time, row $n+j$, multiplied by $-g$, is added to row $n+i$. This operation is defined by \begin{equation*} C=\left[\begin{array}{cccc}1 & 0 & 0 & 0\\ g & 1 & 0 & 0\\ 0 & 0 & 1 & -g\\ 0 & 0 & 0 & 1\end{array}\right]\mod d. \end{equation*}
Secondly, we will also need operations that combine rows of different blocks. Switching two rows $i$ and $n+i$ can be carried out by the discrete Fourier transform. Recall that this operation is defined by $C=\left[\begin{array}{cc}0 & -1\\1 & 0 \end{array}\right]$. After switching of the rows, row $i$ is multiplied by $-1$. By applying the inverse of the discrete Fourier transform, row $n+i$ instead of row $i$ is multiplied by $-1$. Applying $\sum_{x\in\mathbb{Z}_d}\zeta^{gx(x+d)}|x\rangle\langle x|$ (which is the same as applying the phase gate $g$ times) on the $i$-th qudit, with $g\in\mathbb{Z}_d$, results in the addition of row $i$ multiplied by $g$ to its corresponding row $n+i$, according to \begin{equation*} \begin{array}{ccc} C & = & \left[\begin{array}{cc}1 & 0\\g & 1\end{array}\right]\mod d. \end{array} \end{equation*}
We could introduce more row operations that define one or two-qudit Clifford operations, but the ones described so far suffice. Next we give a constructive way of transforming $C$ into the identity matrix $I$. If we are able to transform $C$ into $C'$ by transforming columns $C_1$ and $C_{n+1}$ into the corresponding columns $E_1$ and $E_{n+1}$ of $I$, it follows from the symplecticity of $C'$ that the first and $n+1$-th row of $C'$ are equal to the corresponding rows of $I$. We then have \begin{equation*}
C'=\left[\begin{array}{c|ccc|c|ccc}1 & 0 & \ldots & 0 & 0 & 0 & \ldots & 0 \\ \hline
0 & & & & 0 & & & \\
\vdots & & C_{(11)}' & & \vdots & & C_{(12)}' & \\
0 & & & & 0 & & & \\ \hline
0 & 0 & \ldots & 0 & 1 & 0 & \ldots & 0 \\ \hline
0 & & & & 0 & & & \\
\vdots & & C_{(21)}' & & \vdots & & C_{(22)}' & \\
0 & & & & 0 & & & \end{array}\right] \end{equation*}
Leaving the first qudit out, we can continue by transforming the second and $n+2$-th column of $C'$ into the corresponding columns of $I$, and so on. This recursive procedure eventually leads to $I$. Now we only have to show how columns $C_1$ and $C_{n+1}$ are transformed into $E_1$ and $E_{n+1}$. Let us first consider the case where the upper left entry $C_{11}$ has an inverse in $\mathbb{Z}_d$. Multiplying the first row by $C_{11}^{-1}$ changes this entry to 1. Next we add the first row, multiplied by $-C_{k1}$, to row $k$, and this for $k=2\ldots n$, setting the $k$-th entry of $C_1$ to 0. The first column now has the form $[1~0~\cdots~0\ |\ C_{n+1,1}'~C_{n+2,1}~\cdots~C_{2n,1}]^T$. Now we add the first row multiplied by $-C_{n+1,1}'$ to row $n+1$, setting the $n+1$-th entry of $C_1$ to 0. The discrete Fourier transform on the first qudit changes $C_1$ into $[0~0~\cdots~0\ |\ 1~C_{n+2,1}~\cdots~C_{2n,1}]^T$. In the same way as for the upper half of $C_1$, we make zeros below the $n+1$-th position. Note that nothing happens to the upper half, for all entries there are 0. Switching the first (now we use the inverse of the discrete Fourier transform) with the $n+1$-th row again yields $E_1$. We call the matrix made so far $C''$. From the symplecticity of $C''$ it follows that $C_{n+1,n+1}''=1\mod d$, and we can repeat for the $n+1$-th column the same procedure we did for the first column. Note that none of the operations yielding $E_{n+1}$ out of $C_{n+1}''$ will affect $C_1''=E_1$, except the discrete Fourier transform and its inverse on the first qudit, but they cancel each other. Since the number of elementary operations for one column is $O(n)$, the total number of operations transforming $C$ into $I$ is $O(n^2)$.
If the entry $C_{11}$ has no inverse modulo $d$, but there is a $C_{k1}$ in the first column that does have an inverse, this entry can be switched into the first position by a permutation of two qudits and possibly the discrete Fourier transform on the first qudit. Note that it is possible that none of the entries of $C_1$ has an inverse. Indeed, since $C$ is invertible, the only restriction on one single column of $C$ is that the greatest common divisor of all its entries has an inverse. For every two entries $C_{i1}$ and $C_{n+i,1}$ or $C_{i1}$ and $C_{j1}$ from the same block, the $\gcd$ of these two can be formed in one of the two entries by recursively subtracting a multiple of one row from the other following Euclid's algorithm \cite{euclid}. The other entry can then be made 0 since it is a multiple of the $\gcd$. A worst case scenario would be that all $2n$ combinations of $2n-1$ entries have a $\gcd$ that is not invertible. The procedure goes as follows \begin{eqnarray*} \lefteqn{\left[\begin{array}{c} C_{11}\\C_{21}\\\vdots\\C_{n1}\\\hline C_{n+1,1}\\C_{n+2,1}\\\vdots\\C_{2n,1}\end{array}\right] \rightarrow \left[\begin{array}{c} \gcd(C_{11},C_{n+1,1})\\\gcd(C_{21},C_{n+2,1})\\\vdots\\\gcd(C_{n1},C_{2n,1})\\\hline 0\\0\\\vdots\\0\end{array}\right] \rightarrow}\\ &&\left[\begin{array}{c} \gcd(C_{11},\ldots,C_{2n,1})\\0\\\vdots\\0\\\hline 0\\0\\\vdots\\0\end{array}\right] \rightarrow \left[\begin{array}{c} 1\\0\\\vdots\\0\\\hline 0\\0\\\vdots\\0\end{array}\right]. \end{eqnarray*} In this way, $C$ is decomposed into $O\bigl(n^2\log(d)\bigr)$ elementary operations, as the computational complexity for finding the $\gcd$ of two positive integers less than $d$ with Euclid's algorithm is $O\bigl(\log(d)\bigr)$ \cite{euclid}.
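The Euclid step used here, repeatedly adding an integer multiple of one row to the other until one of the two entries holds the gcd, can be sketched as follows. Only the two affected column entries are tracked, nonnegative representatives are assumed, and the returned log of moves is purely illustrative:

```python
def gcd_by_row_ops(x, y):
    """Bring the pair (x, y) of column entries to (g, 0) or (0, g),
    with g = gcd(x, y), using only the allowed elementary move
    'add an integer multiple of one row to the other'."""
    ops = []
    while x and y:
        if x >= y:
            q = x // y
            x -= q * y
            ops.append('row1 += -%d * row2' % q)
        else:
            q = y // x
            y -= q * x
            ops.append('row2 += -%d * row1' % q)
    return (x or y), ops
```

Each recorded move corresponds to one of the elementary row operations above (with its compensating action in the other block); the entry left equal to $0$ is the multiple of the gcd mentioned in the text.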
\section{Stabilizer states}\label{secS} In this section we define stabilizer states for qudits of arbitrary dimensions. A stabilizer state is a state of an $n$-qudit system that is a simultaneous eigenvector, with eigenvalues 1, of a subgroup of $d^n$ commuting elements of the Pauli group, which is called the stabilizer ${\cal S}$ of the stabilizer state. The stabilizer state is completely determined by a generating set for ${\cal S}$. The description of such a generating set in modular arithmetic provides an efficient tool of describing the stabilizer state and its behavior under the action of a Clifford operation. Finally, we give an expansion of an arbitrary stabilizer state in the standard basis.
\subsection{Definition and description in modular arithmetic}
A stabilizer state $|\psi\rangle$ is the simultaneous eigenvector, with eigenvalues 1, of a subgroup of $d^n$ commuting elements of the Pauli group which does not contain multiples of the identity other than the identity itself. We call this subgroup the \emph{stabilizer} ${\cal S}$ of $|\psi\rangle$. A generating set for ${\cal S}$ consists of elements $\zeta^{f_k}X\!\!Z(S_k),~k=1\ldots m$, where $S_k\in\mathbb{Z}_d^{2n}$ and $f_k\in\mathbb{Z}_{2d}$. We will assemble the vectors $S_k$ as the columns of a matrix $S\in\mathbb{Z}_d^{2n\times m}$ and the scalars $f_k$ in a vector $f\in\mathbb{Z}_{2d}^m$. We call $S$ a \emph{generator matrix} and $f$ the corresponding \emph{phase vector} that together define ${\cal S}$. The fact that the elements of ${\cal S}$ commute is reflected by $S^TPS=0\mod d$. We choose $m$ to be the minimal cardinality of a generating set of ${\cal S}$. Note that, as opposed to the situation for qubits, $m$ can be larger than $n$. It can be verified that if $m>n$, the imposed condition in Ref.~\cite{S2} for a stabilizer state to be equivalent to a graph state, is not fulfilled. If $d$ has only single prime factors, then $m=n$. If $d$ has multiple prime factors, then $n\le m\le 2n$. A simple example for $d=4$ and $n=1$ is the state $1/\sqrt{2}(|0\rangle+|2\rangle)$ with stabilizer $\{I,X^2,Z^2,X^2Z^2\}$: in this case $m=2$. We will describe below how to construct such a minimal generating set. The fact that ${\cal S}$ does not contain multiples of the identity other than the identity itself implies that the phase vector $f$ satisfies: \begin{equation}\label{SPC} \begin{array}{c}
\forall r\in\mathbb{Z}_d^m\ |\ Sr=0\mod d:\quad\bigl(f-{\cal V}_{\mathrm{diag}}(S^TUS)\bigr)^Tr+\\ r^T\bigl({\cal P}_{\mathrm{diag}}(S^TUS)+2{\cal P}_{\mathrm{upps}}(S^TUS)\bigr)r=0\mod 2d. \end{array} \end{equation}
The description of $\cal S$ by $S$ and $f$ is not unique, as they represent a generating set for $\cal S$. By applying an invertible linear transformation $R\in\mathbb{Z}_d^{m\times m}$ to the right on $S$ and transforming $f$ appropriately, another generating set $\zeta^{f_k'}X\!\!Z(S_k')$ is formed. By repeated application of (\ref{paulimult}), one finds \begin{equation}\label{SRC} \begin{array}{rcl} S' & = & SR \mod d, \\
f' & = & R^T\bigl(f-{\cal V}_{\mathrm{diag}}(S^TUS)\bigr)+\\
& & {\cal V}_{\mathrm{diag}}\Bigl(R^T\bigl(2{\cal P}_{\mathrm{upps}}(S^TUS)+\\
& & \qquad\quad{\cal P}_{\mathrm{diag}}(S^TUS)\bigr)R\Bigr) \mod 2d. \end{array} \end{equation} We will refer to this as a \emph{stabilizer generator matrix change}.
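The transformation (\ref{SRC}) is mechanical to implement. Below is a hedged Python sketch: the helper functions mirror the paper's ${\cal V}_{\mathrm{diag}}$, ${\cal P}_{\mathrm{diag}}$ and ${\cal P}_{\mathrm{upps}}$, while the matrix $U$ in the example is only a placeholder standing in for the paper's $U$ (whose convention is fixed earlier in the paper). With $R=I$ the phase vector must come back unchanged, which serves as a quick self-test:

```python
import numpy as np

def Vdiag(A):
    """The paper's V_diag: diagonal of A as a vector."""
    return np.diag(A).copy()

def Pdiag(A):
    """The paper's P_diag: diagonal part of A as a matrix."""
    return np.diag(np.diag(A))

def Pupps(A):
    """The paper's P_upps: strictly upper-triangular part of A."""
    return np.triu(A, k=1)

def generator_change(S, f, R, U, d):
    """Stabilizer generator matrix change (S, f) -> (S', f')."""
    A = S.T @ U @ S
    S2 = (S @ R) % d
    f2 = (R.T @ (f - Vdiag(A))
          + Vdiag(R.T @ (2 * Pupps(A) + Pdiag(A)) @ R)) % (2 * d)
    return S2, f2

# Tiny example with d = 3, n = 1.  U below is a placeholder, not the
# paper's actual convention.
d = 3
U = np.array([[0, 0], [1, 0]])
S = np.array([[1, 0], [0, 1]])
f = np.array([2, 4])
R = np.eye(2, dtype=int)

S2, f2 = generator_change(S, f, R, U, d)
print(S2.tolist(), f2.tolist())  # R = I leaves the generating set unchanged
```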
If $|\psi\rangle$ is operated on by a Clifford operation $Q$, defined by $C$ and $h$, then $Q|\psi\rangle$ is a new stabilizer state whose stabilizer is given by $Q{\cal S}Q^\dag$. By application of (\ref{imagepauli}), we can calculate an $S'$ and $f'$ for this stabilizer, resulting in \begin{equation}\label{IC} \begin{array}{rcl} S' & = & CS \mod d, \\
f' & = & f+S^T\bigl(h-{\cal V}_{\mathrm{diag}}(C^TUC)\bigr)+\\
& & {\cal V}_{\mathrm{diag}}\Bigl(S^T\bigl(2{\cal P}_{\mathrm{upps}}(C^TUC)+\\
& & \qquad\quad{\cal P}_{\mathrm{diag}}(C^TUC)\bigr)S\Bigr) \mod 2d. \end{array} \end{equation}
We can construct a minimal generating set $\zeta^{f_k}X\!\!Z(S_k),~k=1\ldots m$, for an arbitrary stabilizer ${\cal S}$, given a generating set $\zeta^{f_l'}X\!\!Z(S_l'),~l=1\ldots m'$, for ${\cal S}$ using the Smith normal form (see Appendix~\ref{appSNF}). This can be done as follows. The $S_l'$ are assembled in the matrix $S'$. Now we compute the Smith normal form $F=KS'L$ of $S'$, with $K\in\mathbb{Z}_d^{2n\times 2n}$ and $L\in\mathbb{Z}_d^{m'\times m'}$ invertible matrices. $S'L$ is just another generator matrix of the stabilizer. From the definition of the Smith normal form it follows that $S'L$ is a generator matrix having a minimal number of nonzero columns. The rightmost $m'-m$ columns of $S'L$, which are zero (as $S'L=K^{-1}F$), can be omitted. We call this new generator matrix $S$, and $f$ is formed out of $f'$ with (\ref{SRC}). Note that no linear combination of the columns $S_k$ of $S$ is zero unless each term of the linear combination vanishes, i.e., for $k=1\ldots m$, \begin{equation}\label{lincombi} \mathrm{if}~\sum_k r_k S_k=0\mod d,~\mathrm{then}~r_k S_k=0 \mod d. \end{equation} With this, the stabilizer phase condition (\ref{SPC}) can be simplified to, for $k=1\ldots m$: \begin{equation}\label{SPC2} \begin{array}{c}
\forall r_k\in\mathbb{Z}_d~|~r_k S_k=0\mod d:\\\quad(r_k-1)r_kS_k^TUS_k+r_kf_k=0\mod 2d. \end{array} \end{equation}
\subsection{Description of a stabilizer state with linear and quadratic forms} We provide an expansion of an arbitrary stabilizer state in the standard basis for an $n$-qudit state. This is stated in the following theorem \begin{theorem}\label{theom1}
(i) If $S\in\mathbb{Z}_d^{2n\times m}$ and $f\in\mathbb{Z}_{2d}^m$ define a stabilizer state $|\psi\rangle$ as described above, then $S$ and $f$ can be transformed by a configuration space transformation $|x\rangle\rightarrow|T^{-1}x\rangle$, with $T\in\mathbb{Z}_d^{n\times n}$, and a stabilizer generator matrix change $R\in\mathbb{Z}_d^{m\times m}$ into the form $S'$ and $f'$, with \begin{equation}\label{eqsq1} \begin{array}{rcl}
S' =
\left[\begin{array}{cc}
T^{-1} & 0\\
0 & T^T
\end{array}\right] S R & = &
\left[\begin{array}{cc}
{\bar Q} & 0\\
{\bar B} & {\bar {\bar B}}
\end{array}\right]=
\left[\begin{array}{c}
Q \\
B
\end{array}\right]\mod d,\\
{f'}^T & = & \left[\begin{array}{cc} {\bar f}'^T & {\bar {\bar f}'}^T\end{array}\right]\mod 2d, \end{array} \end{equation} where $Q$ is a pseudo-diagonal matrix in \emph{Smith normal form} and $Q^TB\mod d$ is symmetric. ${\bar Q}$ and ${\bar B}$ are the left square $n\times n$ parts of $Q$ and $B$.
(ii) The state $|\psi\rangle$ can be expanded in the standard basis (up to a normalization factor) as \begin{equation}\label{psi}
|\psi\rangle= \sum_{t\in\mathbb{Z}_d^n} \zeta^{t^TMt+p^Tt}\ |T({\bar Q}t+x^\ast)\rangle \end{equation} where $\begin{array}[t]{rcl} M & := & {\bar Q}{\bar B}\mod d,\\ p & := & {\bar f}'-{\cal V}_{\mathrm{diag}}(M)+2{\bar B}^Tx^\ast\mod 2d.\end{array}$\\ Define the $n$-vector ${\bar q}$ with entries $q_k:=\left\{\begin{array}{cl}d & \mathrm{if}~Q_{kk}=0\mod d\\Q_{kk} & \mathrm{if}~Q_{kk}\not=0\mod d\end{array}\right.,~k=1\ldots n$, and the $m$-vector $q=[{\bar q}^T~\underbrace{d~\ldots~d}_{m-n}]^T$. Then $x^\ast\in G_{\bar q}:=\mathbb{Z}_{q_1}\times\ldots\times\mathbb{Z}_{q_n}$ is defined as the unique solution of \begin{equation}\label{x} B^Tx=y\mod q, \end{equation} where $y\in G_q$ has entries $y_k:=\left\{\begin{array}{rl}-\frac{(d-q_k)B_{kk}+f_k'}{2}\mod q_k, & \mathrm{for}~k=1\ldots n \\-\frac{f_k'}{2}\mod q_k, & \mathrm{for}~k=n+1\ldots m\end{array}\right.$.\end{theorem} Note that from the stabilizer phase condition (\ref{SPC2}) (choose $r_k:=d$), it follows that the numerators in the expressions for $y_k$ are even. An efficient way of solving (\ref{x}) can be found in Appendix~\ref{appunicity}.
A definition of the Smith normal form of a matrix in $\mathbb{Z}_d^{n\times m}$ is given in Appendix~\ref{appSNF}.
{\bf Proof:}
(i) We assume that $S$ already has a minimal number of columns $m$ as described above and we write $S$ as $\left[\begin{array}{c}S_{(1)}\\S_{(2)}\end{array}\right]$ with $S_{(1)},S_{(2)}\in\mathbb{Z}_d^{n\times m}$. Then we define $Q$ as the Smith normal form of $S_{(1)}$ with invertible transformation matrices $T^{-1}$ and $R$, i.e. $Q=T^{-1}S_{(1)}R$. With $B=T^TS_{(2)}R$, this yields the expression for $S'$ in (\ref{eqsq1}). According to (\ref{SRC}) and (\ref{IC}), $f$ is transformed to $f'$, yielding \begin{eqnarray*} f'&=& R^T\bigl(f-{\cal V}_{\mathrm{diag}}(S^TUS)\bigr)+{\cal V}_{\mathrm{diag}}\Bigl(R^T\bigl(2{\cal P}_{\mathrm{upps}}(S^TUS)\\ &&+{\cal P}_{\mathrm{diag}}(S^TUS)\bigr)R\Bigr)\mod 2d. \end{eqnarray*} Note that $\left[\begin{array}{cc}T^{-1} & 0\\0 & T^T\end{array}\right]^TU\left[\begin{array}{cc}T^{-1} & 0\\0 & T^T\end{array}\right]=U\mod d$. It follows directly from $S^TPS=0\mod d$ that $Q^TB$ is symmetric modulo $d$.
(ii) We show that (\ref{psi}) is a simultaneous eigenvector with eigenvalue 1 of $\zeta^{f_k}X\!\!Z(S_k),~k=1\ldots m$. Equivalently, the state \begin{equation}\label{psiaccent}
|\psi'\rangle:= \sum_{t\in\mathbb{Z}_d^n} \zeta^{t^TMt+p^Tt}\ |{\bar Q}t+x^\ast\rangle \end{equation}
is a simultaneous eigenvector with eigenvalue 1 of $\zeta^{f_k'}X\!\!Z(S_k'),~k=1\ldots m$. First, note that in (\ref{psiaccent}), different values of $t$ may yield the same basis state $|{\bar Q}t+x^\ast\rangle$, since ${\bar Q}t+x^\ast\mod d$ is periodic. The coefficient of $|{\bar Q}t+x^\ast\rangle$ in (\ref{psiaccent}) displays the same periodic behavior: if ${\bar Q}t={\bar Q}t'\mod d$ then $t^TMt+p^Tt={t'}^TMt'+p^Tt'\mod 2d$. It is sufficient to check this for $t'=t+\frac{d}{q_k}E_k,~k=1\ldots n$, where $E_k$ are the standard basis vectors of $\mathbb{Z}_d^n$. We have \begin{equation*} \begin{array}{l} (t+\frac{d}{q_k}E_k)^TM(t+\frac{d}{q_k}E_k)+p^T(t+\frac{d}{q_k}E_k)-t^TMt-p^Tt\\ =\frac{d}{q_k}\bigl((d-q_k)B_{kk}+f_k'+2B_k^Tx^\ast\bigr)=0\mod 2d, \end{array} \end{equation*} for $k=1\ldots n$. Indeed, from the definition of $x^\ast$: $B_k^Tx^\ast=-\frac{(d-q_k)B_{kk}+f_k'}{2}\mod q_k,~k=1\ldots n$, it follows that $2\frac{d}{q_k}B_k^Tx^\ast=-\frac{d}{q_k}\bigl((d-q_k)B_{kk}+f_k'\bigr)\mod 2d,~k=1\ldots n$. We made use of the fact that $M={\bar Q}{\bar B}\mod d\Rightarrow M={\bar Q}{\bar B}+D\mod 2d$, where every entry of $D\mod 2d$ can be either $d$ or 0, i.e. $2D=0\mod 2d$.
Next, we check for $k=1\ldots n$ that (\ref{psiaccent}) is an eigenvector of $\zeta^{f_k'}X\!\!Z(S_k')$ with eigenvalue 1. We have \begin{eqnarray*} \lefteqn{\zeta^{f_k'}X\!\!Z\left(\left[\begin{array}{c}Q_k \\
B_k\end{array}\right]\right)|\psi'\rangle}\\
& = & \sum_{t\in\mathbb{Z}_d^n} \zeta^{t^TMt+p^Tt+f_k'+2B_k^T({\bar Q}t+x^\ast)}\ |{\bar Q}t+x^\ast+Q_k\rangle \\ & = & \sum_{t\in\mathbb{Z}_d^n} \zeta^{(t-E_k)^TM(t-E_k)+p^T(t-E_k)+f_k'+2B_k^T({\bar Q}(t-E_k)+x^\ast)}\\
&&\qquad|{\bar Q}t+x^\ast\rangle \\
& = & \sum_{t\in\mathbb{Z}_d^n} \zeta^{t^TMt+p^Tt}\ |{\bar Q}t+x^\ast\rangle = |\psi'\rangle.\\ \end{eqnarray*}
Finally, $\zeta^{f_k'}X\!\!Z(S_k')$ acting on the left of (\ref{psiaccent}) yields, for $k=n+1\ldots m$, \begin{eqnarray*} \lefteqn{\zeta^{f_{k}'}X\!\!Z\left(\left[\begin{array}{c}0 \\
B_k\end{array}\right]\right)|\psi'\rangle}\\
& = & \sum_{t\in\mathbb{Z}_d^n} \zeta^{t^TMt+p^Tt+f_k'+2B_k^T({\bar Q}t+x^\ast)}\ |{\bar Q}t+x^\ast\rangle \\
& = & \sum_{t\in\mathbb{Z}_d^n} \zeta^{t^TMt+p^Tt}\ |{\bar Q}t+x^\ast\rangle = |\psi'\rangle.\\ \end{eqnarray*} In Appendix~\ref{appunicity} we prove that eq.~(\ref{x}) has a unique solution $x^\ast\in G_{\bar q}$.
$\square$
It is possible to remove all identical terms in the summation of expression~(\ref{psi}) as follows. We define $r$ as the number of nonzero diagonal elements of $Q$. We denote the upper left $r\times r$-part of a matrix $A$ as $A_{(r)}$, the upper $r$-part of a vector $a$ as $a_{(r)}$ and the part of $a$ below $a_{(r)}$ as ${\bar a}_{(r)}$. Then (\ref{psi}) is equivalent to \begin{equation*}
|\psi\rangle= \sqrt{\frac{\prod_{i=1}^{r}q_i}{d^r}}\sum_{t\in G_\ast} \zeta^{t^TM_{(r)}t+p_{(r)}^Tt}\ |T\left[\begin{array}{c}Q_{(r)}t+x_{(r)}^\ast\\{\bar x}_{(r)}^\ast\end{array}\right]\rangle, \end{equation*}
where $G_\ast:=\mathbb{Z}_{\frac{d}{q_1}}\times\ldots\times\mathbb{Z}_{\frac{d}{q_r}}$. Note that the normalizing factor is just the inverse of the square root of the number of terms in the summation, as each basis state is orthogonal to the others and occurs only once. Finally, it is interesting to mention that, for an arbitrary $S$ and $f$ defining a stabilizer state $|\psi\rangle$, we have (up to a normalization factor) \begin{equation*}
|\psi\rangle= \sum_{t\in\mathbb{Z}_d^m} \zeta^{t^TMt+p^Tt}\ |S_{(1)}t+x'\rangle, \end{equation*} where $S=\left[\begin{array}{c}S_{(1)}\\S_{(2)}\end{array}\right]$, $M=S_{(1)}^TS_{(2)}\mod d$, $p=f-{\cal V}_{\mathrm{diag}}(M)+2S_{(2)}^Tx'\mod 2d$ and $x'=Tx^\ast\mod d$, where $T$ and $x^\ast$ are the same as in (\ref{psi}). Yet, this formula has two disadvantages: first, to find $x'$, we still have to calculate the Smith normal form of $S_{(1)}$ and second, in (\ref{psi}) it is clearer which basis states have nonzero coefficients.
\section{Conclusion} We have shown that for the Pauli group, the Clifford group and stabilizer states, straightforward extensions in Hilbert spaces of arbitrary dimensions can be compactly described with matrices over $\mathbb{Z}_d$. We have given a way of efficiently decomposing an $n$-qudit Clifford operation in $O(n^2)$ one and two-qudit operations. With these tools in modular arithmetic, we provide an expansion of an arbitrary stabilizer state of $n$ qudits in the standard basis.
\appendix \section{The Smith normal form}\label{appSNF} The Smith normal form is a canonical diagonal form for equivalence of matrices over a principal ideal ring $R$. In this paper we consider matrices over $\mathbb{Z}_d$. For any $A\in\mathbb{Z}_d^{n\times m}$ there exist invertible matrices $K\in\mathbb{Z}_d^{n\times n}$ and $L\in\mathbb{Z}_d^{m\times m}$ such that \begin{equation*} F=KAL=\left[\begin{array}{cccccc} f_1 & & & & & \\ & \ddots & & & & \\ & & f_r & & & \\ & & & 0 & & \\ & & & & \ddots & \\ & & & & & 0 \end{array}\right]\mod d \end{equation*}
with each $f_i$ nonzero and with $f_i|f_{i+1}$ for $1\le i\le r-1$. The $f_i$ are unique up to units. Uniqueness of $F$ can be ensured by specifying that each $f_i$ should be a positive divisor of $d$ in $\mathbb{Z}$. There exist fast algorithms for computing the Smith normal form \cite{stor}.
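For small matrices, the invariant factors can also be obtained from the classical minor-gcd characterization: $f_k=d_k/d_{k-1}$, where $d_k$ is the gcd of all $k\times k$ minors. The following Python sketch (an illustration over $\mathbb{Z}$; a $\mathbb{Z}_d$ version follows by applying it to an integer lift, reducing mod $d$ and normalizing the $f_i$ to positive divisors of $d$) computes these diagonal entries:

```python
from itertools import combinations
from math import gcd

def int_det(M):
    """Exact integer determinant by cofactor expansion (fine for small M)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * int_det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def snf_diagonal(A):
    """Invariant factors f_1 | f_2 | ... of an integer matrix A via the
    minor-gcd formula f_k = d_k / d_{k-1}, d_k = gcd of all k x k minors."""
    A = [list(map(int, row)) for row in A]
    n, m = len(A), len(A[0])
    d_prev, fs = 1, []
    for k in range(1, min(n, m) + 1):
        d_k = 0
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                minor = [[A[r][c] for c in cols] for r in rows]
                d_k = gcd(d_k, int_det(minor))
        if d_k == 0:          # rank reached: remaining factors are zero
            break
        fs.append(d_k // d_prev)
        d_prev = d_k
    return fs

print(snf_diagonal([[2, 4, 4],
                    [-6, 6, 12],
                    [10, 4, 16]]))  # -> [2, 2, 156]
```

Note that each factor divides the next, as required by the definition.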
\section{A unique solution of (\ref{x})}\label{appunicity} Here we prove that eq.~(\ref{x}) $B^Tx=y\mod q$ has a unique solution $x^\ast\in G_{\bar q}:=\mathbb{Z}_{q_1}\times\ldots\times\mathbb{Z}_{q_n}$. We rewrite (\ref{x}) as the following system of equations: \begin{equation}\label{system} \sum_{i=1}^{n}B_{ij}x_i=y_j\mod q_j,~j=1\ldots m. \end{equation} If, for fixed $j$, the $B_{ij}$ and $q_j$ have a common factor, then $y_j$ must also be a multiple of this factor; otherwise there is no solution. Define $g_j:=\gcd(B_{1j},\ldots,B_{nj},q_j)$ and $r_j:=d/g_j$. Note that $r_j$ is the order of $S_j$. A necessary condition for solvability of (\ref{system}) is that $r_jy_j=0\mod d$ for every $j=1\ldots m$. We show that this condition holds. We have $r_jS_j=0\mod d$. From the stabilizer phase condition~(\ref{SPC2}), it follows that \begin{equation*} \begin{array}{rclc} (r_j-1)r_jq_jB_{jj}+r_jf_j' & = & 0\mod 2d, & 1\le j\le n,\\ r_jf_j' & = & 0\mod 2d, & j>n, \end{array} \end{equation*} and by definition of $y$, consequently $r_jy_j=0\mod d,~j=1\ldots m$.
An equivalent system to (\ref{system}) is now \begin{equation*} \sum_{i=1}^{n}\frac{B_{ij}}{g_j}x_i=\frac{y_j}{g_j}\mod \frac{q_j}{g_j},~j=1\ldots m. \end{equation*}
We define the map $b:x=[x_1~\ldots~x_n]^T\rightarrow b(x)=\left[\sum_{i=1}^n\frac{B_{i1}}{g_1}x_i|\ldots|\sum_{i=1}^n\frac{B_{im}}{g_m}x_i\right]^T\mod\left[\frac{q_1}{g_1}\ldots\frac{q_m}{g_m}\right]^T$, which is a homomorphism from the group of vectors of length $n$ with entries $x_i$ modulo $q_i,~i=1\ldots n$ to the group of vectors of length $m$ with entries $y_j'$ modulo $q_j/g_j,~j=1\ldots m$. System~(\ref{system}) has a unique solution if $b$ is an isomorphism. We prove this by showing that the number of elements in both groups is the same and that only 0 is in the kernel. It follows from (\ref{lincombi}) and the fact that, by definition, the columns of $S$ generate a set of $d^n$ elements, that the product of the orders of the columns of $S$ is equal to $d^n$, or $\prod_{j=1}^{m}r_j=d^n$. Therefore, \begin{equation*} \prod_{j=1}^{m}\frac{q_j}{g_j} = \frac{d^{m-n}}{\prod_{j=1}^{m}g_j}\prod_{i=1}^{m}q_i = \prod_{i=1}^{m}q_i \end{equation*} and thus the number of elements of both groups is the same. Next we show that $B^Tx=0\mod q$ if and only if $x=0\mod{\bar q}$. We rewrite this as \begin{equation}\label{isomo} \begin{array}{cl} \forall x\in\mathbb{Z}_d^n: & \left(\exists v\in\mathbb{Z}_d^n: B^Tx=Q^Tv\mod d\right)\\ & \iff \left(\exists x'\in\mathbb{Z}_d^m: x=Qx'\mod d\right). \end{array} \end{equation}
{\bf Proof:}
$\Leftarrow$) $Q^TB$ is symmetric modulo $d$. We therefore have $B^Tx=B^TQx'=Q^TBx'\mod d$, so $v=Bx'\mod d$.
$\Rightarrow$) We show that the number of $x\in\mathbb{Z}_d^n$ satisfying the lhs of (\ref{isomo}) is equal to the number of $x$ satisfying the rhs. The number of elements generated by the columns of a matrix is equal to the product of the orders of the diagonal elements of its Smith normal form. Therefore the columns of $S^T$, like the columns of $S$, also generate $d^n$ elements. Consequently, the mapping $s: a\in\mathbb{Z}_d^{2n}\rightarrow S^Ta\in\mathbb{Z}_d^{m}$ is a homomorphism from $\mathbb{Z}_d^{2n}$ to a group $Y\subset\mathbb{Z}_d^{m}$, with $|Y|=d^n$. The kernel in $\mathbb{Z}_d^{2n}$ of $s$ contains $|\mathbb{Z}_d^{2n}|/|Y|=d^n$ elements. Equivalently, with $a^T=[v^T~w^T]$, $s$ is a homomorphism from $\mathbb{Z}_d^{n}\times\mathbb{Z}_d^{n}$ to $Y$: \begin{equation*} s(\left[\begin{array}{c}v\\w\end{array}\right])=S^T\left[\begin{array}{c}v\\w\end{array}\right]=\left[\begin{array}{cc}Q^T & B^T\end{array}\right]\left[\begin{array}{c}v\\w\end{array}\right]=Q^Tv+B^Tw. \end{equation*} There are exactly $d^n$ different pairs $(v,w)$ that satisfy $Q^Tv+B^Tw=0\mod d$. Replacing $w$ by $-x$, we have exactly $d^n$ pairs $(x,v)$ satisfying $B^Tx=Q^Tv\mod d$. Fixing such an $x$, we have a total of $\prod_{i=1}^{n}q_i$ different $v$ for which, together with $x$, the equality still holds (this is because we can add an arbitrary multiple of $d/q_i$ to $v_i$). Therefore, the total number of $x$ for which a $v$ exists such that $B^Tx=Q^Tv\mod d$, is equal to $d^n/\prod_{i=1}^{n}q_i$. This is equal to the number of $x$ that can be written as $x=Qx'$.
$\square$
Next, we describe a method for easily finding the solution $x^\ast$ of (\ref{x}). We define a diagonal matrix $Z\in\mathbb{Z}_d^{m\times m}$ with diagonal entries equal to $d/q_k,~k=1\ldots m$. (\ref{x}) is equivalent to the equation $ZB^Tx=Zy\mod d$. We calculate the Smith normal form $F=KZB^TL\mod d$. Defining $x':=L^{-1}x\mod d$ and $y':=KZy\mod d$, we have the following equation $Fx'=y'\mod d$, for which a solution ${x^\ast}'\in\mathbb{Z}_d^n$ can be easily found (note that this solution is most likely not unique). We then find $x^\ast=L{x^\ast}'\mod {\bar q}$.
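After the Smith normal form has diagonalized the system, the final step reduces to independent scalar congruences $F_{ii}x_i'=y_i'\mod d$. A hedged Python sketch of that scalar step (the function name is ours, not from the paper; `pow(a, -1, m)` requires Python 3.8+):

```python
from math import gcd

def solve_diag_congruence(F_diag, y, d):
    """Solve f_i * x_i = y_i (mod d) componentwise; return one solution
    per component, or None if some component has no solution."""
    x = []
    for f, yi in zip(F_diag, y):
        g = gcd(f, d)
        if yi % g:
            return None                  # f x = y (mod d) is unsolvable
        dd = d // g                      # reduced modulus
        # (f/g) x = y/g (mod d/g), and gcd(f/g, d/g) = 1, so invert f/g
        x.append((pow(f // g, -1, dd) * (yi // g)) % dd if dd > 1 else 0)
    return x

print(solve_diag_congruence([2, 4], [6, 4], 8))  # -> [3, 1]
```

One checks directly that $2\cdot 3=6\bmod 8$ and $4\cdot 1=4\bmod 8$; as in the text, the per-component solutions are in general not unique.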
\section{Simplifications for odd $d$}\label{appsimp} In this section we consider the special case of odd $d$. Most of the formulas in this paper can be simplified for odd $d$. We will only give an overview and omit the derivations, as they are completely analogous to the general case. If $d$ is odd, then 2 has an inverse in $\mathbb{Z}_d$, equal to $\frac{d+1}{2}$, which we will denote by $2^{-1}$.
For odd $d$, we can use a restricted definition for the Pauli group: it contains all $d^{2n}$ tensor products (\ref{XZ(a)}) with an additional complex phase factor $\omega^\delta$ (instead of a power of $\zeta$). Eq. (\ref{paulimult}) becomes \begin{equation} \omega^{\delta}X\!\!Z(a)\,\omega^{\epsilon}X\!\!Z(b)=\omega^{\delta+\epsilon+a^TUb}X\!\!Z(a+b). \end{equation} The order of an arbitrary element of this newly defined Pauli group is never equal to $2d$. In the same way as for the general case, we find the image $\omega^{\epsilon}X\!\!Z(b)$ of $\omega^{\delta}X\!\!Z(a)$ under conjugation by a Clifford operation, which is now defined by $C$ and $g=\frac{h}{2}$: \begin{equation} \begin{array}{rcl} b & = & Ca \mod d, \\
\epsilon & = & \delta+\left(g-2^{-1}{\cal V}_{\mathrm{diag}}(C^TUC)\right)^Ta+\\
& & 2^{-1}a^T(C^TUC-U)a \mod d. \end{array} \end{equation} Note that, contrary to the general case, $g$ is a vector in $\mathbb{Z}_d^{2n}$. There is no longer a restriction on $g$. Indeed, from (\ref{condh}), it follows that $h$ in the general setting is always even for odd $d$. Symplecticity of $C$ is of course still required. The product of two Clifford operations $Q''=Q'Q$ corresponds to $C''$ and $g''$, where \begin{equation} \begin{array}{rcl} C'' & = & C'C \mod d, \\
g'' & = & g+C^Tg'+2^{-1}\Bigl({\cal V}_{\mathrm{diag}}\bigl(C^T({C'}^TUC'-U)C\bigr)-\\
& & C^T{\cal V}_{\mathrm{diag}}({C'}^TUC')\Bigr) \mod d. \end{array} \end{equation} The inverse $Q^\dag$ of a Clifford operation $Q$ defined by $C$ and $g$ is defined by $C'$ and $g'$, where \begin{equation} \begin{array}{rcl} C' & = & C^{-1} = -PC^TP \mod d, \\
g' & = & -C^{-T}g+2^{-1}\Bigl(C^{-T}{\cal V}_{\mathrm{diag}}(C^TUC)+\\
& & {\cal V}_{\mathrm{diag}}(C^{-T}UC^{-1})\Bigr) \mod d. \end{array} \end{equation}
The definition of a stabilizer state remains the same except for the fact that now the stabilizer is a subgroup of the restricted Pauli group. Note that for odd $d$, no subgroup of the general Pauli group can be found that fulfills all stabilizer conditions but is not a subgroup of the restricted Pauli group. Thus, nothing is lost by restricting the definition of the Pauli group for odd $d$. A generating set for the stabilizer ${\cal S}$ consists of elements $\omega^{b_k}X\!\!Z(S_k),~k=1\ldots m$, where $S_k\in\mathbb{Z}_d^{2n}$ and $b_k\in\mathbb{Z}_d$. Analogously to the definition of $g$, $b$ is equal to half the value of $f$ in the general setting (as it is the exponent of $\omega$ instead of $\zeta$). The stabilizer phase condition (\ref{SPC}) on $b$ simplifies to: \begin{equation} \begin{array}{c}
\forall r\in\mathbb{Z}_d^m\ |\ Sr=0\mod d:\\ \bigl(2b-{\cal V}_{\mathrm{diag}}(S^TUS)\bigr)^Tr+r^T(S^TUS)r=0\mod d. \end{array} \end{equation} A stabilizer generator matrix change, by applying an invertible linear transformation $R\in\mathbb{Z}_d^{m\times m}$ to the right on $S$, results in \begin{equation} \begin{array}{rcl} S' & = & SR \mod d, \\
b' & = & R^T\bigl(b-2^{-1}{\cal V}_{\mathrm{diag}}(S^TUS)\bigr)+\\
& & 2^{-1}{\cal V}_{\mathrm{diag}}\bigl(R^TS^TUSR\bigr) \mod d. \end{array} \end{equation} A stabilizer state defined by $S$ and $b$, operated on by a Clifford operation defined by $C$ and $g$, is a new stabilizer state defined by \begin{equation} \begin{array}{rcl} S' & = & CS \mod d, \\
b' & = & b+S^T\bigl(g-2^{-1}{\cal V}_{\mathrm{diag}}(C^TUC)\bigr)+\\
& & 2^{-1}{\cal V}_{\mathrm{diag}}\bigl(S^T(C^TUC-U)S\bigr) \mod d. \end{array} \end{equation} It is not hard to verify that part \emph{(ii)} of Theorem~\ref{theom1} simplifies to \begin{equation}
|\psi\rangle= \sum_{t\in\mathbb{Z}_d^n} \omega^{t^TMt+p^Tt}\ |T({\bar Q}t+x^\ast)\rangle \end{equation} where $\begin{array}[t]{rcl} M & := & 2^{-1}{\bar Q}{\bar B}\mod d,\\ p & := & {\bar b}'-{\cal V}_{\mathrm{diag}}(M)+{\bar B}^Tx^\ast\mod d.\end{array}$\\ In this setting, $x^\ast\in G_{\bar q}:=\mathbb{Z}_{q_1}\times\ldots\times\mathbb{Z}_{q_n}$ is defined as the unique solution of $B^Tx=-b'\mod q$. For calculating $x^\ast$ we refer to Appendix~\ref{appunicity}.
\begin{acknowledgments} We thank Maarten Van den Nest for useful comments. Dr. Bart De Moor is a full professor at the Katholieke Universiteit Leuven, Belgium. Research supported by: Research Council KUL: GOA-Mefisto~666, GOA AMBioRICS, several PhD/postdoc \& fellow grants; Flemish Government: FWO: PhD/postdoc grants, projects, G.0240.99 (multilinear algebra), G.0407.02 (support vector machines), G.0197.02 (power islands), G.0141.03 (Identification and cryptography), G.0491.03 (control for intensive care glycemia), G.0120.03 (QIT), G.0452.04 (new quantum algorithms), G.0499.04 (Robust SVM), research communities (ICCoS, ANMMM, MLDM); AWI: Bil. Int. Collaboration Hungary/Poland; IWT: PhD Grants, GBOU (McKnow); Belgian Federal Science Policy Office: IUAP P5/22 (`Dynamical Systems and Control: Computation, Identification and Modelling', 2002-2006) ; PODO-II (CP/40: TMS and Sustainability); EU: FP5-Quprodis; ERNSI; Eureka 2063-IMPACT; Eureka 2419-FliTE; Contract Research/agreements: ISMC/IPCOS, Data4s, TML, Elia, LMS, Mastercard. \end{acknowledgments}
\end{document}
"id": "0408190.tex",
"language_detection_score": 0.7065037488937378,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{ Linear atomic quantum coupler } \author{ Faisal A. A. El-Orany} \email{el_orany@hotmail.com; faisal.orany@mimos.my }
\affiliation{Department of Mathematics and Computer Science, Faculty of Science, Suez Canal University 41522,
Ismailia, Egypt;
Cyberspace Security Laboratory, MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur, Malaysia}
\author{Wahiddin M. R. B.} \email{mridza@mimos.my}
\affiliation{ Cyberspace Security Laboratory, MIMOS Berhad, Technology Park Malaysia, 57000 Kuala Lumpur, Malaysia}
\date{\today}
\begin{abstract} In this paper, we develop the notion of the linear atomic quantum coupler. This device consists of two modes propagating in two waveguides, each of which includes a localized and/or trapped atom. These waveguides are placed close enough to allow exchanging energy between them via evanescent waves. Each mode interacts with the atom in the same waveguide in the standard way, i.e. as in the Jaynes-Cummings model (JCM), and with the atom-mode system in the second waveguide via the evanescent wave. We present the Hamiltonian for the system and deduce the exact form for the wavefunction. We investigate the atomic inversions and the second-order correlation function. In contrast to the conventional linear coupler, the atomic quantum coupler is able to generate nonclassical effects. The atomic inversions can exhibit a long revival-collapse phenomenon as well as subsidiary revivals, depending on the competition among the switching mechanisms in the system. Finally, under certain conditions, the system can yield the results of the two-mode JCM.
\end{abstract}
\pacs{42.50.Dv,42.50.-p} \maketitle \section{Introduction}
A quantum directional coupler is a device composed of two (or more) waveguides placed close enough to allow exchanging energy between them via evanescent waves \cite{jen1}. The rate of flow of the exchanged energy can be controlled by the device design as well as the intensity of the input flux.
The outgoing fields from the coupler can be examined in the standard ways to observe the nonclassical effects. Quite recently, this device has attracted much attention in the framework of optical communication and quantum computing networks \cite{qcoup}, which require data transmission and ultra-high-speed data processing \cite{EKer1}. Furthermore, the directional coupler has been experimentally implemented, e.g. in planar structures \cite{exp1}, dual optical fibres \cite{exp2} and certain organic polymers \cite{exp3}. For more details related to the quantum properties of the fields in directional couplers the reader can consult the review paper \cite{qu20} and the references therein.
The interaction between the radiation field and matter (i.e., an atom), namely the Jaynes-Cummings model (JCM) \cite{jay1}, is an important topic in quantum optics and quantum information theory \cite{kni}. The simplest form of the JCM is a two-level atom interacting with a single mode of the radiation field. The JCM is a rich source of nonclassical effects, e.g. the revival-collapse phenomenon (RCP) \cite{eber}, sub-Poissonian statistics and squeezing \cite{fa}. Furthermore, the JCM has been experimentally implemented by various means, e.g. the one-atom maser \cite{remp}, NMR refocusing \cite{meu}, a Rydberg atom in a superconducting cavity \cite{sup}, the trapped ion \cite{vogele} and the micromaser \cite{micr}. Various extensions to the JCM have been reported, including two two-level atoms interacting with the radiation field(s) \cite{tess,faisalob}.
Trapped atoms or molecules are promising systems for quantum information processing and communications \cite{Nielsen}. They can serve as convenient and robust quantum memories for photons, thereby providing an interface between static and flying qubits \cite{Lukin}. The subject of coupling cold atoms to the radiation field sustained by an optical waveguide has already appeared in various contexts. For example, hollow optical glass fibers were used to guide atoms over long distances \cite{Noh}, especially employing a red-detuned light field filling out the hollow core \cite{Letokhov,Renn}. Substrate-based atom waveguides can also be realized by using guided two-color evanescent light fields \cite{Barnett}. Moreover, the coupling of atomic dipoles to the evanescent field of tapered optical fibers has been demonstrated in \cite{Vetsch,Nayak}. In this respect optical nanofibers can manipulate and probe single-atom fluorescence.
Moreover, it has been suggested that a two-color evanescent light field around a subwavelength-diameter fiber can trap and guide atoms. The optical fiber carries a red-detuned light and a blue-detuned light, with both modes far from resonance. When both input light fields are circularly polarized, a set of trapping minima of the total potential in the transverse plane appears as a ring around the fiber. This design allows confinement of atoms to a cylindrical shell around the fiber \cite{Kien}. Additionally, it has been shown that sub-wavelength diameter optical fibers can be used to detect, spectroscopically investigate, and mechanically manipulate extremely small samples of cold atoms. In particular, on resonance, as few as two atoms on average, coupled to the evanescent field surrounding the fiber, already absorb 20\% of the total power transmitted through the fiber. By optically trapping one or more atoms around such fibers \cite{Dowling}, it should become possible to deterministically couple the atoms to the guided fiber mode and to even mediate a coupling between two simultaneously trapped atoms \cite{Fam}. This leads to a number of applications, e.g., in the context of quantum information processing, high precision measurements,
single-photon generation in optical fiber or EIT-based parametric four-wave mixing \cite{horak}
using a few atoms around optical nanofibers. Inspired by these facts we develop here the notion of the atomic quantum coupler (AQC), for which the interaction mechanisms inside the waveguides and between the waveguides depend on both the atomic and bosonic systems. These mechanisms are more complicated than those in the JCM, as we shall show shortly. For the AQC we show that the atomic inversions can exhibit long revival-collapse phenomenon as well as subsidiary-revival patterns based on the switching mechanisms in the system. Furthermore, under certain conditions, the system can give the results of the two-mode JCM. Also, the system is able to generate nonclassical effects. It is worth mentioning that the inclusion of one atom in one of the ports of the non-linear coupler has been considered in \cite{abdalla}. Nevertheless, the solution of the equations of motion there is obtained by the rotation of axes, which does not give complete information on the system.
We restrict the study in this paper to the development of the Hamiltonian model, its dynamical wavefunction and how it works. These issues are discussed in section II. Additionally, in section III, we study two quantities, namely, the atomic inversions and the second-order correlation functions.
\section{Model formalism and its wavefunction}
In this section we describe the linear directional atomic quantum coupler (AQC) and derive its wavefunction. Also we discuss some basic differences between this device and the conventional directional coupler \cite{qu20}. Thus it is reasonable to shed some light on the linear directional coupler, which is described by the following Hamiltonian \cite{qu20}:
\begin{equation}\label{ins1} \frac{\hat{H}}{\hbar}=\sum_{j=1}^{2}\omega_j\hat{a}_j^{\dagger}\hat{a}_j+\lambda (\hat{a}_1\hat{a}_2^{\dagger}+\hat{a}_1^{\dagger}\hat{a}_2), \end{equation} where $\hat{a}_{1}\quad(\hat{a}_{1}^{\dagger})$ and $\hat{a}_{2}\quad(\hat{a}_{2}^{\dagger})$ are the annihilation (creation) operators of the first and the second modes in the first and the second waveguides with the frequencies $\omega_{1}$ and $\omega_{2}$; $\lambda$ is the coupling constant between the waveguides. Basically this device operates as a quantum switcher since it can switch the nonclassical effects as well as the intensities of the modes propagating inside one of the waveguides to the other \cite{jans}. In other words, it cannot generate nonclassical effects by itself.
For reasons that will become clear shortly, we calculate the mean photon numbers for the Hamiltonian (\ref{ins1}) when the two modes are in the states
$|\alpha,0\rangle$. Thus we arrive at: \begin{equation}\label{ins2}
\langle\hat{a}_1^{\dagger}(T)\hat{a}_1(T)\rangle=|\alpha|^2\cos^2(T),\quad
\langle\hat{a}_2^{\dagger}(T)\hat{a}_2(T)\rangle=|\alpha|^2\sin^2(T), \end{equation}
where $T=\lambda t$. These equations indicate a strong switching mechanism in the linear coupler, where the intensity $|\alpha|^2$ in the first waveguide has been completely switched to the other one. Moreover, the mean-photon numbers cannot exhibit the RCP.
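Equations (\ref{ins2}) are easy to verify numerically. The sketch below (an illustration, not part of the derivation) evolves $|\alpha,0\rangle$ under the interaction part of (\ref{ins1}) at resonance, with $\lambda=1$, on a truncated Fock space, and compares $\langle\hat{a}_1^{\dagger}(T)\hat{a}_1(T)\rangle$ with $|\alpha|^2\cos^2 T$:

```python
import numpy as np
from math import factorial

N = 15                                      # Fock-space truncation per mode
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
I = np.eye(N)
a1, a2 = np.kron(a, I), np.kron(I, a)

# Interaction part of the coupler Hamiltonian at resonance (lambda = 1)
H = a1 @ a2.T + a1.T @ a2

alpha = 1.0
coh = np.exp(-abs(alpha) ** 2 / 2) * np.array(
    [alpha ** k / np.sqrt(factorial(k)) for k in range(N)])
vac = np.zeros(N); vac[0] = 1.0
psi0 = np.kron(coh, vac)                    # initial state |alpha, 0>

T = 0.7                                     # scaled time T = lambda * t
E, V = np.linalg.eigh(H)                    # evolve by exact diagonalization
psiT = (V * np.exp(-1j * E * T)) @ (V.T @ psi0)

n1 = np.real(np.vdot(psiT, (a1.T @ a1) @ psiT))
print(round(n1, 3), round(abs(alpha) ** 2 * np.cos(T) ** 2, 3))  # 0.585 0.585
```

Since the beam-splitter-like interaction conserves the total photon number, the truncation error is negligible for $|\alpha|=1$, and the two printed values agree.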
Now we are in a position to develop the AQC, which is the main object of the paper.
The atomic coupler consists of two waveguides, each of which includes a localized and/or a trapped atom.
The waveguides are placed close enough to each other to allow interchanging
energy between them. The two atoms (in the different waveguides) are located very adjacent to each other.
In each waveguide, one mode propagates and interacts with the atom inside it in the standard way, i.e. as in the JCM.
The atom-mode in each waveguide interacts with the other one via the evanescent wave. The fields exiting the coupler can be examined as single or compound modes by means of homodyne detection to observe the squeezing of vacuum fluctuations, or by means of a set of photodetectors to measure photon antibunching and sub-Poissonian photon statistics in the standard ways.
The scheme for the AQC is depicted in Fig. 1. From this figure and in the framework of the rotating wave approximation (RWA) the Hamiltonian describing the AQC can be expressed as:
\begin{figure*}
\caption{ Scheme of realization of the Hamiltonian (3). It is composed of two optical waveguides (yellow color). The circles in these waveguides denote the localized and/or trapped atoms. Mode 1 (2) pumped by, e.g., laser sources propagates along the first (second) waveguide and interacts with the first (second) atom via the coupling constant $\lambda_1\quad (\lambda_2)$. The interaction between the first and the second waveguide
occurs via the evanescent wave with the coupling constant $\lambda_{3}$.
The outgoing fields from the coupler can be measured in the standard ways, e.g., using photon detectors.}
\label{fig:wide}
\end{figure*}
\begin{widetext} \begin{eqnarray} \begin{array}{lr} \frac{\hat{H}}{\hbar}=\hat{H}_0+\hat{H}_I,\\ \\ \hat{H}_0= \sum\limits_{j=1}^{2}\omega_j\hat{a}_j^{\dagger}\hat{a}_j+ \frac{\omega_a}{2}(\hat{\sigma}_z^{(1)}+\hat{\sigma}_z^{(2)}),\quad \hat{H}_I=\sum\limits_{j=1}^2 \lambda_j (\hat{a}_j\hat{\sigma}_+^{(j)} + \hat{a}_j^{\dagger }\hat{\sigma}_-^{(j)})+ \lambda_3 (\hat{a}_1\hat{a}_2^{\dagger }\hat{\sigma}_+^{(1)}\hat{\sigma}_-^{(2)} +\hat{a}_1^{\dagger }\hat{a}_2\hat{\sigma}_-^{(1)}\hat{\sigma}_+^{(2)}),
\label{new1}
\end{array} \end{eqnarray} \end{widetext}
where $\hat{H}_0$ and $\hat{H}_I$ are the free and the interaction parts of the Hamiltonian, $\hat{\sigma}_\pm^{(j)}$ and $\hat{\sigma}_z^{(j)}$ are the Pauli spin operators of the $j$th atom ($j=1,2$); $\hat{a}_j\quad (\hat{a}_j^{\dagger})$ is the annihilation (creation) operator of the $j$th mode with the frequency $\omega_j$, $\omega_a$ is the atomic transition frequency (we consider the frequencies of the two atoms to be equal) and $\lambda_1\quad (\lambda_2)$ is the atom-field coupling constant in the first (second) waveguide in the framework of the JCM. The derivation of the JCM Hamiltonian is well known; see, e.g., \cite{lo}. The interaction between the modes in the two waveguides occurs through the evanescent wave with the coupling constant $\lambda_3$. This term is the only conservative one that can execute switching between the two waveguides; thus it plays an essential role in the behavior of the AQC. We should stress that the switching mechanism occurs through the two JCMs (in the two waveguides) and can be obtained by applying the RWA in each individual waveguide. In other words, the quantity $\lambda_3 (\hat{a}_1\hat{\sigma}_-^{(1)}\hat{a}_2^{\dagger }\hat{\sigma}_+^{(2)} +\hat{a}_1^{\dagger }\hat{\sigma}_+^{(1)}\hat{a}_2\hat{\sigma}_-^{(2)})$ is nonconservative and hence it is dropped.
Finally, the switching mechanism in (\ref{new1}) is related to the notion of a coupler, but with the presence of atoms in the waveguides taken into account. In (\ref{new1}) the treatment applies only while the two fields are interacting with the atoms in the waveguides. Also, when the atoms (fields) are treated classically, the Hamiltonian (\ref{new1}) reduces to that of the linear directional coupler (a two-atom interaction).
The interaction of two two-level atoms with two modes has been considered in the optical cavity earlier \cite{faisalob,eberlyy,Federico}, however, in a sense different from that presented above: for instance, as a sum of two separate Jaynes-Cummings Hamiltonians to investigate entanglement \cite{eberlyy}, as well as entanglement transfer
from a bipartite continuous-variable (CV) system to a pair of localized qubits \cite{Federico}.
Also, the quantum properties of a system of two two-level atoms interacting with two nondegenerate cavity modes, when the atoms and the field are initially in atomic superposition states and the pair-coherent state, have been investigated in \cite{faisalob}.
Next, we evaluate the wave function for the Hamiltonian
(\ref{new1}). We assume that the two modes and the two atoms are initially prepared in the coherent states $|\alpha,\beta\rangle$ and in the excited atomic states $|e_1,e_2\rangle$, respectively. For the resonant case $2\omega_a=\omega_1+\omega_2$ one can easily prove that $[\hat{H}_0,\hat{H}_I]=0$. Under these conditions, the dynamical wave function describing the system can be expressed as:
\begin{widetext} \begin{eqnarray} \begin{array}{lr} \mid \Psi (t)\rangle =\sum\limits_{n,m=0}^{\infty }C_{n,m}\left[ X_{1}(t,n,m)\mid e_{1},e_{2},n,m\rangle +X_{2}(t,n,m)\mid e_{1},g_{2},n,m+1\rangle \right. \\ \\ +\left. X_{3}(t,n,m)\mid g_{1},e_{2},n+1,m\rangle +X_{4}(t,n,m)\mid g_{1},g_{2,}n+1,m+1\rangle \right],\\ \\
C_{n,m}=\exp(-\frac{1}{2}|\alpha|^2-\frac{1}{2}|\beta|^2)\frac{\alpha^n\beta^m}{\sqrt{n!m!}}, \label{new3} \end{array} \end{eqnarray} \end{widetext}
where $|g\rangle$ stands for atomic ground state. From the Schr\"{o}dinger equation we obtain the following system of differential equations: \begin{widetext} \begin{eqnarray} \begin{array}{lr}
i\dot{X}_1(t,n,m) = \lambda_2\sqrt{m+1}X_2(t,n,m)+\lambda_1\sqrt{n+1}X_3(t,n,m), \\
i\dot{X}_2(t,n,m) = \lambda_2\sqrt{m+1}X_1(t,n,m)+\lambda_3\sqrt{(n+1)(m+1)}X_3(t,n,m)
+\lambda_1\sqrt{n+1}X_4(t,n,m), \\
i\dot{X}_3(t,n,m) = \lambda_1\sqrt{n+1}X_1(t,n,m)+\lambda_3\sqrt{(n+1)(m+1)}X_2(t,n,m)
+\lambda_2\sqrt{m+1}X_4(t,n,m), \\
i\dot{X}_4(t,n,m) =
\lambda_1\sqrt{n+1}X_2(t,n,m)+\lambda_2\sqrt{m+1}X_3(t,n,m),
\label{secf1} \end{array} \end{eqnarray} \end{widetext}
where the overdot denotes differentiation with respect to time. In the following, we give only the details related to the solution for the coefficient $X_1(t,n,m)$; the others can be treated similarly. Differentiating the first and last equations in (\ref{secf1}) and substituting from the others, we obtain: \begin{eqnarray} \begin{array}{lr}
(\hat{D}^2+A_{n,m})X_1(t,n,m) = -(ic_2\hat{D}+c_1)X_4(t,n,m), \\
(\hat{D}^2+A_{n,m})X_4(t,n,m) = -(ic_2\hat{D}+c_1)X_1(t,n,m), \\
\hat{D}=\frac{d}{dt}, \quad A_{n,m}=\lambda_1^2(n+1)+\lambda_2^2(m+1),
c_1=2\lambda_1\lambda_2\sqrt{(n+1)(m+1)},
c_2=\lambda_3\sqrt{(n+1)(m+1)}. \label{secf2} \end{array} \end{eqnarray} From (\ref{secf2}) one can easily obtain: \begin{equation}\label{adds}
(\hat{D}^2+A_{n,m})^2X_1(t,n,m) = (ic_2\hat{D}+c_1)^2X_1(t,n,m). \end{equation} This equation can be easily solved. By means of the initial conditions stated above, the exact forms of the coefficients $X_j$ can be expressed as: \begin{widetext} \begin{eqnarray} \begin{array}{lr}
X_1(t,n,m)=\frac{1}{2} \exp(i\frac{t}{2}c_2)\left[\cos(t\Omega_-) -i\frac{c_2}{2\Omega_-}\sin(t\Omega_-)\right]
+\frac{1}{2} \exp(-i\frac{t}{2}c_2)\left[\cos(t\Omega_+) +i\frac{c_2}{2\Omega_+}\sin(t\Omega_+)\right],\\ \\
X_2(t,n,m)= \frac{-i\sqrt{m+1}}{2c_2^2 [A_{n,m}-4\frac{\lambda_1^2\lambda_2^2}{\lambda_3^2}] }\Bigl\{
\exp(i\frac{t}{2}c_2)\left[(c_2^2-2c_1)\lambda_2^3(m+1)+(2A_{n,m}-c_1)\left(\lambda_2c_1- \frac{\lambda_1\lambda_3}{2}(n+1)c_2\right)\right]\\ \\ \times \frac{\sin(t\Omega_-)}{\Omega_-}+\exp(-i\frac{t}{2}c_2)\left[(c_2^2+2c_1)\lambda_2^3(m+1) -(2A_{n,m}+c_1)\left(\lambda_2c_1- \frac{\lambda_1\lambda_3}{2}(n+1)c_2\right)\right]\frac{\sin(t\Omega_+)}{\Omega_+}\Bigr\}, \\ \\
X_3(t,n,m)= \frac{-i\sqrt{n+1}}{2c_2^2 [A_{n,m}-4\frac{\lambda_1^2\lambda_2^2}{\lambda_3^2}] }\Bigl\{
\exp(i\frac{t}{2}c_2)\left[(c_2^2-2c_1)\lambda_1^3(n+1)+(2A_{n,m}-c_1)\left(\lambda_1c_1- \frac{\lambda_2\lambda_3}{2}(m+1)c_2\right)\right]\\ \\ \times \frac{\sin(t\Omega_-)}{\Omega_-}+\exp(-i\frac{t}{2}c_2)\left[(c_2^2+2c_1)\lambda_1^3(n+1) -(2A_{n,m}+c_1)\left(\lambda_1c_1- \frac{\lambda_2\lambda_3}{2}(m+1)c_2\right)\right]\frac{\sin(t\Omega_+)}{\Omega_+}\Bigr\}, \\ \\
X_4(t,n,m)=\frac{1}{2} \exp(i\frac{t}{2}c_2)\left[ - \cos(t\Omega_-) +i\frac{c_2}{2\Omega_-}\sin(t\Omega_-)\right]
+\frac{1}{2} \exp(-i\frac{t}{2}c_2)\left[ \cos(t\Omega_+) +i\frac{c_2}{2\Omega_+}\sin(t\Omega_+)\right]
, \label{ap1}
\end{array} \end{eqnarray} \end{widetext}
where \begin{equation}\label{secf3} \Omega_\pm=\frac{1}{2}\sqrt{\lambda_3^2(n+1)(m+1)+4(\lambda_1\sqrt{n+1}\pm\lambda_2\sqrt{m+1})^2}. \end{equation} It is obvious that the Rabi oscillation in the AQC is more complicated than that of the JCM. From the solution (\ref{ap1}) different limits can be checked. For instance, when $(\lambda_2,\lambda_3)\rightarrow (0,0)\quad(\lambda_3\rightarrow 0)$ the coefficients (\ref{ap1}) reduce to those of the standard JCM (two decoupled JCMs \cite{eberlyy}). Moreover, when $(\lambda_1,\lambda_2)\rightarrow (0,0)$ the system reduces to a simple form, which is in good correspondence with the conventional coupler (\ref{ins1}). Nevertheless, the device, in this case, is a rich source of nonclassical effects. This depends on the type of initial atomic states and can be explained as follows: (i) The atoms are initially prepared in
$|e_1,e_2\rangle$. In this case the system reduces to a dark state, since $\hat{H}_I|e_1,e_2\rangle=0$; such states do not evolve in time. This property has been exploited in quantum clock synchronization \cite{synch}. (ii) The atoms are initially prepared in $|e_1,g_2\rangle$. The dynamical state of the system takes the form:
\begin{eqnarray} \begin{array}{lr} \mid \Psi (T)\rangle =\sum\limits_{n,m=0}^{\infty }C_{n,m}\left[ \cos[ T\sqrt{(n+1)(m+1)}]\mid e_{1},g_{2},n,m+1\rangle \right.\\ \\ \left.-i\sin [T\sqrt{(n+1)(m+1)}]\mid g_{1},e_{2},n+1,m\rangle \right], \label{dark1} \end{array} \end{eqnarray} where $T=\lambda_3 t$. The expression (\ref{dark1}) reveals that the behavior of the radiation fields is typically that of the two-mode single-atom JCM \cite{twjcm}. Finally, when the two atoms are initially in the Bell state
$[|e_1,g_2\rangle+|g_1,e_2\rangle]/\sqrt{2}$ the wavefunction takes the form: \begin{widetext} \begin{eqnarray} \begin{array}{lr} \mid \Psi (T)\rangle =\frac{1}{\sqrt{2}}\sum\limits_{n,m=0}^{\infty }C_{n,m}\exp[-i T\sqrt{(n+1)(m+1)}]\left[ \mid e_{1},g_{2},n,m+1\rangle +\mid g_{1},e_{2},n+1,m\rangle \right]. \label{dark2} \end{array} \end{eqnarray} \end{widetext}
It is evident that the system exhibits atomic trapping, i.e. $\langle\hat{\sigma}^{(1)}_z(T)\rangle=\langle\hat{\sigma}^{(2)}_z(T) \rangle=0$. Furthermore, the system is able to generate nonclassical effects, in particular, in the quantities, which depend on the off-diagonal elements of the density matrix such as squeezing (we have checked this fact).
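As a numerical consistency check (a sketch with illustrative parameter values, not part of the original text), note that the coefficient vector $(X_1,X_2,X_3,X_4)^T$ of (\ref{secf1}) obeys $i\dot{X}=M_{n,m}X$ with a Hermitian matrix $M_{n,m}$, so it can be propagated exactly by diagonalization. Probability conservation $\sum_j|X_j|^2=1$ then follows, and the eigenfrequencies of $M_{n,m}$ should match $\{c_2/2\pm\Omega_+,\,-c_2/2\pm\Omega_-\}$, the phase frequencies appearing in (\ref{ap1}) and (\ref{secf3}):

```python
import numpy as np

def coeff_matrix(n, m, l1, l2, l3):
    """Hermitian matrix M with i dX/dt = M X for the system (secf1)."""
    a = l1 * np.sqrt(n + 1)                   # lambda_1 sqrt(n+1)
    b = l2 * np.sqrt(m + 1)                   # lambda_2 sqrt(m+1)
    c2 = l3 * np.sqrt((n + 1) * (m + 1))      # c_2
    M = np.array([[0, b, a, 0],
                  [b, 0, c2, a],
                  [a, c2, 0, b],
                  [0, a, b, 0]])
    return M, a, b, c2

M, a, b, c2 = coeff_matrix(n=1, m=2, l1=1.0, l2=2.0, l3=0.6)

# Exact propagation of X(0) = (1,0,0,0)^T, i.e. both atoms initially excited.
w, V = np.linalg.eigh(M)
t = 2.3
Xt = V @ (np.exp(-1j * w * t) * (V.conj().T @ np.array([1.0, 0, 0, 0])))
norm = np.sum(np.abs(Xt)**2)                  # should stay equal to 1

# Rabi frequencies Omega_+- of Eq. (secf3).
Om_p = 0.5 * np.sqrt(c2**2 + 4 * (a + b)**2)
Om_m = 0.5 * np.sqrt(c2**2 + 4 * (a - b)**2)
predicted = np.sort([c2/2 + Om_p, c2/2 - Om_p, -c2/2 + Om_m, -c2/2 - Om_m])
print(norm, np.max(np.abs(np.sort(w) - predicted)))
```

The eigenvalue match can also be verified analytically by block-diagonalizing $M_{n,m}$ in the symmetric/antisymmetric combinations $X_1\pm X_4$, $X_2\pm X_3$.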
\begin{figure*}
\caption{ Evolution of $\langle\hat{\sigma}^{(1)}_z(t)\rangle$
against the interaction time $T=\lambda_1 t$ with $(\alpha, \beta)=(5,5)$ for
$(\lambda_2,\lambda_3)=(1,0)$ (a), $(1,0.6)$ (b), $(2,3)$ (c) and $(1,1)$ (d).}
\label{fig:wide2}
\end{figure*}
Now, we comment on the switching mechanism in the AQC. For the sake of comparison, we substitute $\beta=0$ in relations (\ref{new3})--(\ref{ap1}) and calculate the mean-photon numbers as:
\begin{widetext} \begin{eqnarray} \begin{array}{lr}
\langle\hat{a}_1^{\dagger}(T)\hat{a}_1(T)\rangle=|\alpha|^2+\sum\limits_{n=0}^{\infty}
|C_{n,0}|^2[|X_3(T,n,0)|^2+|X_4(T,n,0)|^2],\\
\\
\langle\hat{a}_2^{\dagger}(T)\hat{a}_2(T)\rangle=\sum\limits_{n=0}^{\infty}
|C_{n,0}|^2[|X_2(T,n,0)|^2+|X_4(T,n,0)|^2], \label{ins3} \end{array} \end{eqnarray} \end{widetext}
where $T=t\lambda_1$. From these equations it is obvious that the intensity of the mode in the first waveguide cannot be switched to the other one. This is in clear contrast with the linear directional coupler (compare (\ref{ins2}) and (\ref{ins3})). This behavior is related to the nature of the atom-field interaction mechanism, which is close to the classic Lee model of quantum field theory. Moreover, this behavior remains valid even if the interaction between the modes and the atoms in the same waveguide is neglected, i.e. $\lambda_1=\lambda_2=0$. In this case, expressions (\ref{ins3}) exhibit the well-known RCP of the standard JCM \cite{eber}. As a final remark, the AQC is able to switch the nonclassical effects from one waveguide to another, based on the values of the interaction parameters. This can be seen from (\ref{ins3}), where the mean-photon number in the second waveguide $ \langle\hat{a}_2^{\dagger}(T)\hat{a}_2(T)\rangle$ can exhibit RCP even though the second mode is initially in the vacuum state.
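Each term of the Hamiltonian (\ref{new1}) conserves the excitation number $\hat{a}_1^{\dagger}\hat{a}_1+|e_1\rangle\langle e_1|\langle e_1|$ of the first waveguide (and likewise for the second), so for the initial state $|\alpha,0\rangle|e_1,e_2\rangle$ one expects $\langle\hat{a}_1^{\dagger}(T)\hat{a}_1(T)\rangle+P_{e_1}(T)=|\alpha|^2+1$ at all times. The following sketch (illustrative parameters; the $X_j$ are obtained by direct numerical integration of (\ref{secf1}) rather than from the closed forms) checks this for (\ref{ins3}):

```python
import math
import numpy as np

def X(t, n, m, l1, l2, l3):
    """Numerically exact coefficients of (secf1) with X(0) = (1,0,0,0)^T."""
    a, b = l1 * math.sqrt(n + 1), l2 * math.sqrt(m + 1)
    c = l3 * math.sqrt((n + 1) * (m + 1))
    M = np.array([[0, b, a, 0], [b, 0, c, a], [a, c, 0, b], [0, a, b, 0]])
    w, V = np.linalg.eigh(M)
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ np.array([1.0, 0, 0, 0])))

alpha, Ncut = 2.0, 40                 # illustrative amplitude, Poisson cutoff
l1, l2, l3 = 1.0, 1.0, 0.6
t = 3.1

# |C_{n,0}|^2 is the Poisson photon-number distribution of |alpha> (beta = 0).
P = [math.exp(-alpha**2) * alpha**(2 * n) / math.factorial(n) for n in range(Ncut)]

n1 = alpha**2                          # first line of Eq. (ins3)
Pe1 = 0.0                              # probability that atom 1 is excited
for n, p in enumerate(P):
    probs = np.abs(X(t, n, 0, l1, l2, l3))**2
    n1 += p * (probs[2] + probs[3])    # |X_3|^2 + |X_4|^2 terms
    Pe1 += p * (probs[0] + probs[1])   # |X_1|^2 + |X_2|^2 terms
print(n1 + Pe1)                        # ~ alpha^2 + 1
```

The conserved sum confirms that the first mode can only gain photons (from the decay of atom 1), which is why its intensity cannot be switched away.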
On the other hand, assume that the mode in the first waveguide is initially prepared in the even coherent state, which can exhibit squeezing, while the second mode is still in vacuum state. In this case, the density matrix of the second mode takes the form: \begin{equation}\label{density} \hat{ \rho}_2=\sum\limits_{n=0}^{\infty}
|C_{2n,0}|^2\Bigl\{[|X_1(T,2n,0)|^2+|X_3(T,2n,0)|^2]|0\rangle\langle 0|+
[|X_2(T,2n,0)|^2+|X_4(T,2n,0)|^2]|1\rangle\langle1|\Bigr\}, \end{equation}
where $|C_{2n,0}|^2$ is the photon-number distribution of the even coherent state. From (\ref{density}), squeezing cannot be switched to the second mode. Nevertheless, if the second mode is prepared in a coherent state, it can exhibit squeezing. In this case, the source of the nonclassical effects could be the switching mechanism between the waveguides or the nature of the atom-field interaction.
Now, we use the above relations to investigate the atomic inversions and second-order correlation functions in the following section. For the sake of simplicity we take $\alpha$ and $\beta$ to be real.
\section{Atomic inversions and second-order correlation function}
The atomic inversion of the standard JCM is well known in quantum optics for exhibiting the RCP. The RCP has a nonclassical origin and reflects the nature of the statistics of the radiation field. The evolution of the atomic inversion has been realized via, e.g., the one-atom mazer \cite{remp} and a technique similar to that of NMR refocusing \cite{meu}. In this section we investigate the behavior of the AQC by studying the evolution of the atomic inversions and the second-order correlation functions.
As the system includes two atoms, we have two types of atomic inversion, namely, the single atomic inversion and the total atomic inversion $\langle\hat{\sigma}_z(T)\rangle=\frac{1}{2}[\langle\hat{\sigma}^{(1)}_z(T)\rangle +\langle\hat{\sigma}^{(2)}_z(T)\rangle]$. From (\ref{new3}) one can obtain the following expressions:
\begin{figure*}
\caption{ Evolution of the single-mode second-order correlation function as indicated against the interaction
time $T=\lambda_1 t$ with $(\lambda_2,\lambda_3)=(1,0)$ (a), $(1,0.6)$ (b), $(2,3)$ (c)--(d).
}
\label{fig:wide3}
\end{figure*}
\begin{widetext} \begin{eqnarray} \begin{array}{lr} \langle\hat{\sigma}^{(1)}_z(T)\rangle = \sum\limits_{n,m=0}^{\infty
}|C_{n,m}|^2[|X_{1}(T,n,m)|^2+|X_{2}(T,n,m)|^2
-|X_{3}(T,n,m)|^2-|X_{4}(T,n,m)|^2],\\ \\ \langle\hat{\sigma}^{(2)}_z(T)\rangle = \sum\limits_{n,m=0}^{\infty
}|C_{n,m}|^2[|X_{1}(T,n,m)|^2-|X_{2}(T,n,m)|^2
+|X_{3}(T,n,m)|^2-|X_{4}(T,n,m)|^2],\\ \\ \langle\hat{\sigma}_z(T)\rangle = \sum\limits_{n,m=0}^{\infty
}|C_{n,m}|^2[|X_{1}(T,n,m)|^2-|X_{4}(T,n,m)|^2]. \label{new4} \end{array} \end{eqnarray} \end{widetext}
As we mentioned in the preceding section, the conventional directional coupler
cannot exhibit RCP in the evolution of the mean-photon numbers.
Nevertheless, the standard JCM can exhibit RCP provided that the photon-number distribution of the initial field
has a smooth envelope. A similar conclusion has been reported for
the two-atom single-mode JCM \cite{tess}. For the AQC we have found that when $\alpha=\beta$ and $\lambda_j\neq 0$ the different types of atomic inversions (\ref{new4}) behave quite similarly. It seems that the contributions of the coherence coefficients $X_2, X_3$ are comparable. Moreover, one can easily prove that when $\lambda_3=0$ and $\lambda_1=\lambda_2$ the atomic inversions reduce to those of the standard JCM (see Fig. 2(a)). It is worth recalling that for the standard JCM the revival patterns occur in the atomic inversion over a certain period of the interaction time, after which they interfere, producing chaotic behavior. Additionally, the revival time is connected with the amplitude $\alpha$ through the relation
$T_r=2\pi \sqrt{\bar{n}}\simeq 2\pi |\alpha|$ \cite{eber}. For $\lambda_1\neq\lambda_2$ the inversions provide different forms of the revival patterns. Here we restrict our attention to the atomic inversion of the first atom (see Figs. 2(b)-(d) for the given values of the interaction parameters). We study three cases based on the relationship between the strengths of the switching mechanisms in and between the waveguides, namely, $\lambda_3<\lambda_j$, $\lambda_3=\lambda_j$ and $\lambda_3>\lambda_j$. Comparisons between Figs. 2(b)-(d) and Fig. 2(a) are instructive. From Fig. 2(b) one can observe that the atomic inversion, after the zeroth and first revival patterns, exhibits a long series of subsidiary-revival patterns (see the inset in Fig. 2(b)). This behavior is completely different from that of the JCM, and it indicates that the nonclassical effects generated by this device can be sustained for an interaction time longer than that of the JCM. It is worth mentioning that subsidiary-revival patterns have been observed for the JCM with a squeezed coherent state \cite{eberr1}. This has been explained in terms of the photon-number distribution of the initial states. More precisely, the photon-number distributions of squeezed states exhibit a many-peak structure, each peak giving its own revival patterns in the evolution of the atomic inversion. These patterns interfere with each other to produce the subsidiary-revival patterns. Nevertheless, for the system under consideration the occurrence of these patterns is related to the switching mechanism between the waveguides (compare Figs. 2(a) and (b)). This mechanism reflects itself in the rather complicated Rabi frequencies $\Omega_\pm$ as well as in the double summations in the atomic inversion formulae (\ref{new4}).
Fig. 2(c) presents the case where the coupling constants are all different. It is obvious that the RCP is still visible and the subsidiary revivals are smoothly washed out compared to those in Fig. 2(b). Generally, we have found that when $\lambda_3\geq \lambda_1=\lambda_2$ the atomic inversion exhibits long RCP (see Fig. 2(d)). The above information indicates that the switching mechanism between the waveguides plays an important role in the behavior of the AQC.
Actually, we have found it difficult to give a mathematical
treatment of the RCP exhibited by the AQC, since the Rabi oscillations are rather complicated.
Now we turn our attention to the second-order correlation function for the single-mode case, which is defined as: \begin{equation}\label{pw3} g_j^{(2)}(t)=\frac{\langle\hat{a}_j^{\dagger 2}(t)\hat{a}_j^{ 2}(t)\rangle}{\langle \hat{a}_j^{\dagger}(t)\hat{a}_j(t)\rangle^2}-1, \quad j=1,2, \end{equation} where $g_j^{(2)}(t)=0$ for Poissonian statistics (the standard case), $g_j^{(2)}(t)<0$ for sub-Poissonian statistics (nonclassical effects) and $g_j^{(2)}(t)>0$ for super-Poissonian statistics (classical effects). The second-order correlation function can be measured by a set of two detectors, e.g. in the standard Hanbury Brown-Twiss coincidence arrangement. For the system under consideration, this quantity is plotted in Fig. 3 for the given values of the interaction parameters. From these figures it is obvious that the AQC is able to generate long-lived sub-Poissonian effects, i.e. $g_1^{(2)}(t)<0$. Furthermore, the basic features of the dynamics are still similar to those of the atomic inversion.
Fig. 3(a) presents the well-known shape of the second-order correlation function of the standard JCM. When the switching mechanism between the waveguides is involved, the long RCP is dominant in the evolution of $g_j^{(2)}(t)$. Nevertheless, the shape of this phenomenon is quite different from that in the corresponding atomic inversion (compare Fig. 2(b) to Fig. 3(b)). For instance, the revival times in the two quantities are different. Also, the number of subsidiary revivals in the atomic inversion is greater than that in the corresponding $g_1^{(2)}(t)$.
In contrast to the atomic inversions, $g_1^{(2)}(t)$ and $g_2^{(2)}(t)$ can behave differently for the same values of the interaction parameters, as can be seen by comparing Figs. 3(c) and (d).
In conclusion, in this paper we have developed, for the first time, the notion of the AQC and explained how it works. We have also derived the exact solution of the equations of motion. In contrast to the conventional coupler, the AQC can generate nonclassical effects; nevertheless, the switching mechanism in the former is more effective than in the latter. Furthermore, the behavior of the AQC is sensitive to the type of the initial atomic states. We have shown that the system reproduces the results of the two-mode JCM under certain conditions. Additionally, we have discussed the evolution of the atomic inversions and second-order correlation functions. These two quantities can exhibit RCP, long RCP and long subsidiary-revival patterns, depending on the values of the coupling constants. The second-order correlation function can exhibit long-lived nonclassical effects. From the information given in the Introduction one can see that the AQC is within reach of current technology. It may also be of interest in the framework of quantum information.
\section*{ Acknowledgement}
The authors would like to thank Professor Jan Pe\v{r}ina for the interesting discussion.
\section*{References}
\end{document}
"id": "0910.1665.tex",
"language_detection_score": 0.8087517023086548,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{On Calibration and Out-of-domain Generalization}
\begin{abstract}
Out-of-domain (OOD) generalization is a significant challenge for machine learning models. Many techniques have been proposed to overcome this challenge, often focused on learning models with certain invariance properties. In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization. Specifically, we show that under certain conditions, models which achieve \emph{multi-domain calibration} are provably free of spurious correlations. This leads us to propose multi-domain calibration as a measurable and trainable surrogate for the OOD performance of a classifier. We therefore introduce methods that are easy to apply and allow practitioners to improve multi-domain calibration by training or modifying an existing model, leading to better performance on unseen domains. Using four datasets from the recently proposed WILDS OOD benchmark \cite{koh2020wilds}, as well as the Colored MNIST dataset \cite{kim2019learning}, we demonstrate that training or tuning models so they are calibrated across multiple domains leads to significantly improved performance on unseen test domains. We believe this intriguing connection between calibration and OOD generalization is promising from both a practical and theoretical point of view. \end{abstract}
\section{Introduction} \label{sec:intro} Machine learning models have recently displayed impressive success in a plethora of fields \cite{huang2017densely, devlin2019bert, DBLP:journals/nature/Senior0JKSGQZNB20}. However, as models are typically only trained and tested on in-domain (ID) data, they often fail to generalize to out-of-domain (OOD) data \cite{koh2020wilds}. The problem is especially pressing when deploying machine learning models in the wild, where they are required to perform well under conditions that were not observed during training. For instance, a medical diagnosis system trained on patient data from a few hospitals could fail when deployed in a new hospital.
Many methods have been proposed to improve the OOD generalization of machine learning models. Specifically, there is rapidly growing interest in learning models that display certain invariance properties under distribution shifts and do not rely on spurious correlations in the training data \cite{peters2016causal, heinze2018invariant, arjovsky2019invariant}. While highlighting the need for learning robust models, so far these attempts have had limited success in scaling to realistic high-dimensional data, and in learning truly invariant representations~\cite{rosenfeld2020risks,gulrajani2020search,kamath2021does}.
In this paper, we argue that an alternative and relatively simple approach for learning invariant representations could be achieved through model calibration across multiple domains. Calibration asserts that the probabilities of outcomes predicted by a model match their true probabilities. Our claim is that simultaneous calibration over several domains can be used as an observable indicator for favorable performance on unseen domains. For example, if we take all patients for whom a classifier outputs a probability of $0.9$ for being ill, and in one hospital the true probability of illness in these patients is $0.85$ while in the other it is $0.95$, then we may suspect the classifier relies on spurious correlations. Intuitively, the features which lead the classifier to predict a probability of $0.9$ imply different results under different experimental conditions, suggesting that their correlation with the label is potentially unstable. Conversely, if the true probabilities in both hospitals match the classifier's output, it may be a sign of its robustness.
Our contributions are as follows: We prove that in Gaussian-linear models, under a general-position condition, being concurrently calibrated across a sufficient number of domains guarantees a model has no spurious correlations.
We then introduce three methods for encouraging multi-domain calibration in practice. These are, in ascending order of complexity: (i) model selection by a multi-domain calibration score, (ii) robust isotonic regression as a post-processing tool, and (iii) directly optimizing deep nets with a multi-domain calibration objective, based on the method introduced by Kumar et al. \cite{kumar2018trainable}. We show that multi-domain calibration achieves the correct invariant classifier in a learning scenario presented by Kamath et al. \cite{kamath2021does}, unlike the objective proposed in Invariant Risk Minimization \cite{arjovsky2019invariant}. Finally, we demonstrate that the proposed approaches lead to significant performance gains on the WILDS benchmark datasets \cite{koh2020wilds}, and also succeed on the colored MNIST dataset \cite{kim2019learning}.
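As an illustration of method (i), the following is a minimal sketch (synthetic data; the binning scheme, bin count and worst-case aggregation are our illustrative choices, not the paper's exact procedure) of a multi-domain calibration score: compute an expected-calibration-error (ECE) per training environment and aggregate across environments:

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected calibration error of predicted probabilities vs. binary labels."""
    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # bin weight times |mean predicted prob - empirical frequency|
            err += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return err

def multi_domain_calibration_score(probs_per_env, labels_per_env):
    """Worst-case ECE across training environments (lower is better)."""
    return max(ece(p, y) for p, y in zip(probs_per_env, labels_per_env))

# Synthetic example: one calibrated and one miscalibrated environment.
rng = np.random.default_rng(0)
p1 = rng.uniform(size=5000)
y1 = (rng.uniform(size=5000) < p1).astype(float)      # E[y|p] = p: calibrated
p2 = rng.uniform(size=5000)
y2 = (rng.uniform(size=5000) < p2**2).astype(float)   # E[y|p] = p^2: miscalibrated
print(ece(p1, y1), ece(p2, y2))
print(multi_domain_calibration_score([p1, p2], [y1, y2]))
```

A model-selection loop would simply evaluate this score for each candidate model on held-out data from the training environments and pick the minimizer.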
\section{Calibration and Invariant Classifiers} \label{sec:cal_invar}
\subsection{Problem Setting}
Consider observable features $X$, a label $Y$ and an environment (or domain) $E$ with sample spaces $\mathcal{X}, \mathcal{Y}, \mathcal{E}$ accordingly. We mostly focus on regression and binary classification, therefore $\mathcal{Y}=\mathbb{R}$ or $\mathcal{Y}=\{0, 1\}$. To lighten notation, our definitions will be given for the binary classification setting and we will point out adjustments to regression where necessary. There is no explicit limitation on $|\mathcal{E}|$, but we assume that training data has been collected from a finite subset of the possible environments $E_{\text{train}}\subset \mathcal{E}$. The number of training environments is denoted by $k$, and $E_{\text{train}}=\{e_i\}_{i=1}^{k}\subset \mathcal{E}$, so that our training data is sampled from a distribution $P[X, Y \mid E=e_i] \quad \forall i\in{[k]}$. Our goal is to learn models that will generalize to new, unseen environments in $\mathcal{E}$.
Ideally, we would like to learn a classifier that is optimal for all environments $\mathcal{E}$. Unfortunately, we only observe data from the limited set $E_{\text{train}}$ and even if this set is extremely large, the Bayes optimal classifiers on each environment do not necessarily coincide. Following other recent work \cite{peters2016causal,heinze2018invariant,arjovsky2019invariant} we therefore aim for a different goal -- learning classifiers whose per-instance output will be stable across environments $E$, as we explain below.
We assume the data generating process for $E,X,Y$ follows the causal graph in \figref{scm}. \footnote{See Appendix \ref{sec:scm} for a brief introduction to causal graphs.} We differentiate between causal and anti-causal components of $X$, and further differentiate between the anti-causal variables which are affected or unaffected by $E$, denoted as $\Xac$ and $\Xacn$, respectively. As an illustrative example, consider again predicting illness across different hospitals. When predicting lung cancer, $Y$, from patient health records, $\Xc$ could be features like smoking. $\Xacn$ are symptoms of $Y$ such as infections that appear in chest X-rays, while $\Xac$ can be marks that technicians put on X-rays as in \cite{zech2018variable}. Smoking habits may vary across hospital populations, as might X-ray markings; but the influence of smoking on cancer and the manifestation of cancer in an X-ray do not vary by hospital.
We do not assume to know how to partition $X$ into $\Xc, \Xac, \Xacn$. The main assumptions made in the causal graph in Fig. \ref{scm} are that there are no hidden variables, and that there is no edge directly from the environment $E$ to the label $Y$. Such an edge would imply that the conditional distribution of $Y$ given $X$ could be arbitrarily different in an unseen environment from those present in the training set. Note that for simplicity we do not include arrows from $\Xc$ to $\Xac$ and $\Xacn$, but they may be included as well.
\begin{figure}
\caption{Learning in the presence of causal and anti-causal features. Anti-causal features can be either spurious ($\Xac$), or non-spurious ($\Xacn$).}
\label{scm}
\end{figure}
We will say a representation $\Phi(X)$ contains a \emph{spurious correlation} with respect to the environments $E$ and label $Y$ if $Y \nindep E \mid \Phi(X)$; this motivates our naming of $\Xac$ and $\Xacn$ in Fig. \ref{scm}, as $Y \nindep E \mid \Xac$ but $Y \indep E \mid \Xacn$. Similar observations have been made by \cite{heinze2018invariant, arjovsky2019invariant}. Having a spurious correlation implies that the relation between $\Phi(X)$ and $Y$ depends on the environment: it is neither transferable nor stable across environments.
In this work we will simply consider the output $f(X)$ of a classifier $f: \mathcal{X} \rightarrow [0,1]$ as a representation. The crux of this paper is the observation that having $\mathbb{E}[Y \mid f(X), E=e]=f(X)$ for every value of $E$, i.e. $f$ being a \emph{\textbf{calibrated}} classifier across all environments, is equivalent, up to a simple transformation, to having $Y \indep E \mid f(X)$, and thus to $f$ having \emph{\textbf{no spurious correlations}} with respect to $E$. We prove this assertion in section \ref{sec:ind_and_calib}, and as a demonstration of this principle we prove (section \ref{sec:motiv}) that linear models which are calibrated across a diverse set of environments $E$ are guaranteed to discard $\Xac$ as viable features for prediction.
\subsection{Invariance and Calibration on Multiple Domains}\label{sec:ind_and_calib} We define calibration, along with a straightforward generalization to the multiple environment setting. \begin{definition}
Let $P[X, Y]$ be a joint distribution over the features and label, and $f:\mathcal{X}\rightarrow [0,1]$ a classifier. Then $f({\mathbf x})$ is calibrated w.r.t $P$ if for all $\alpha\in{[0,1]}$ in the range of $f$, $\mathbb{E}_{P}{\left[ Y \mid f(X)=\alpha\right]} = \alpha$.
In the multiple environments setting, $f({\mathbf x})$ is calibrated on $E_{\text{train}}$ if for all $e_i\in{E_{\text{train}}}$ and $\alpha$ in the range of $f$ restricted to $e_i$,
$\mathbb{E}{\left[ Y \mid f(X)=\alpha, E=e_i\right]} = \alpha \nonumber$. \end{definition} For regression problems, we consider regressors that output estimates for the mean and variance of $Y$, and say they are calibrated if they match the true values similarly to the definition above. The precise definition can be found in the supplementary material.
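To make the per-environment conditioning concrete, the following toy simulation (the base rates and the constant classifier are hypothetical choices, not taken from the paper) shows a predictor that is calibrated on the pooled training data yet miscalibrated on each individual environment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000  # samples per environment

# Two hypothetical environments with different base rates of Y.
y_e1 = rng.binomial(1, 0.9, size=n)  # P(Y=1 | E=e1) = 0.9
y_e2 = rng.binomial(1, 0.5, size=n)  # P(Y=1 | E=e2) = 0.5

f = 0.7  # a constant classifier: f(x) = 0.7 on every input

# Over an equal mix of the environments, E[Y | f(X)=0.7] is about 0.7,
# so f looks calibrated on the pooled data ...
pooled_mean = np.concatenate([y_e1, y_e2]).mean()

# ... but conditioning on each environment separately reveals miscalibration.
mean_e1 = y_e1.mean()  # about 0.9, not 0.7
mean_e2 = y_e2.mean()  # about 0.5, not 0.7

print(f, round(pooled_mean, 2), round(mean_e1, 2), round(mean_e2, 2))
```

Requiring $\mathbb{E}[Y \mid f(X)=\alpha, E=e_i]=\alpha$ for each environment separately, as in the definition above, rules out such classifiers.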
We now tie the notion of calibration on multiple environments with OOD generalization, starting with its correspondence with our definition of spurious correlations. Recall that a representation $\Phi(X)$ does not contain spurious correlations if $Y \indep E \mid \Phi(X)$. Treating the output $f(X)$ of a classifier as a representation of the data, and considering classifiers satisfying the above conditional independence with respect to training environments, we arrive at a definition of an invariant classifier. \begin{definition}
Let $f:\mathcal{X}\rightarrow [0,1]$. $f$ is an \emph{invariant classifier} w.r.t $E_{\text{train}}$ if for all $\alpha\in{[0,1]}$ and environments $e_i,e_j\in{E_{\text{train}}}$, where $\alpha$ is in the range of $f$ restricted to each of them:
\begin{align} \label{eq:invariant_predictor}
\mathbb{E}[ Y \mid f(X)&=\alpha, E=e_i ] = \mathbb{E}{\left[ Y \mid f(X)=\alpha, E=e_j \right]}.
\end{align} \end{definition}
Lemma \ref{lemma:corresp} gives the correspondence between invariant classifiers and classifiers calibrated on multiple environments. The proof is in \secref{sec:calibration_intro} of the supplementary material.
\begin{lemma}\label{lemma:corresp}
If a binary classifier $f$ is invariant w.r.t $E_{\text{train}}$, then there exists some $g:\mathbb{R}\rightarrow [0,1]$ such that (i) $g\circ f$ is calibrated on all training environments, and (ii) the mean squared error of $g\circ f$ on each environment does not exceed that of $f$. On the other hand, if a classifier is calibrated on all training environments it is also invariant w.r.t $E_{\text{train}}$. \end{lemma}
Now, we can note how the above notion of invariance relates to that of Invariant Risk Minimization \cite{arjovsky2019invariant}, where invariance of a representation $\Phi:\mathcal{X}\rightarrow\mathcal{H}$ is linked to a shared classifier ${\mathbf w}^*:\mathcal{H}\rightarrow [0, 1]$, with ${\mathbf w}^*\circ \Phi$ being optimal on all environments w.r.t a loss $l:[0,1]\times\mathcal{Y}\rightarrow \mathbb{R}_{\geq 0}$. Under the representation $\Phi(X)=f(X)$ and the cross-entropy or squared losses, it turns out that the original IRM definition coincides with \eqref{eq:invariant_predictor}\footnote{See Observation 2 in \cite{kamath2021does} for a proof.}. Hence we aim for a similar notion of conditional independence, yet we approach it from the point-of-view of calibration. In \secref{sec:algs} we will see that taking this approach leads to different methods that are highly effective in achieving and assessing invariance.
We further note that the original IRM objective was deemed too difficult to optimize by the original IRM authors, leading them to propose an alternative called IRMv1. This alternative however does not capture the full set of required invariances, as shown by \cite{kamath2021does}, whereas we show in section \ref{subsec:2bit} that multi-domain calibration does indeed capture the required invariances.
Having established the connection between calibration on multiple environments and invariance, there are several interesting questions and points to consider: \\ \textbf{Calibration and sharpness.} Calibration alone is not enough to guarantee that a classifier performs well; on a single environment, always predicting $\mathbb{E}[Y]$ will give a perfectly calibrated classifier. Hence, multi-domain calibration should be combined with some sort of guarantee on accuracy. In the calibration literature, this is often referred to as sharpness. To this end, in \secref{sec:algs} we will propose regularizing models during training or fine-tuning with Calibration Loss Over Environments (CLOvE{}). Combining this regularizer with standard empirical loss functions helps balance between sharpness and multi-domain calibration. Even without training a new model, we will propose methods for model selection and post-processing that are very easy to apply and help improve multi-domain calibration without a significant effect on the sharpness of the models. \\
\textbf{Generalization and dependence on $\Xac$.} Suppose that $f(X)$ is calibrated on $E_{\text{train}}$. Under what conditions does this imply it is calibrated on $\mathcal{E}$? It is easy to show that calibration on several environments entails calibration on any distribution which can be expressed as a mixture (convex combination) of the distributions underlying said environments. However, can we go beyond that? Given a general set $\mathcal{E}$, we would like to know what conditions and how many training environments are required for calibration to generalize. We also wish to understand when calibration over a finite set of training environments indeed guarantees that a classifier is free of spurious correlations. We now turn to answer these questions in the setting of linear-Gaussian models. \section{Motivation: a Linear-Gaussian Model} \label{sec:motiv} Let us consider data where $X$ is a multivariate Gaussian. The set of all environments $\mathcal{E}$ is then parameterized by pairs of a real vector (the mean) and a positive definite matrix (the covariance) of appropriate dimension: $\mathcal{E} = \{(\mu, \Sigma) \mid \mu\in{\mathbb{R}^d}, \Sigma\in{\mathbb{S}^d_{++}} \}$.
For two scenarios ((a) and (b) in Figure \ref{theoretical_cases}) we prove that when provided with data from $k$ training environments, where $k$ is linear in the number of features, and the environments satisfy some mild non-degeneracy conditions, any predictor that is calibrated on all training environments will not rely on any of the spurious features $X_{\text{ac-sp}}$, and will also be calibrated on all $e\in{\mathcal{E}}$.
\begin{figure}
\caption{Graphs describing the two cases in our theoretical analysis. We use acronyms in subscripts to lighten notation. (a) All features are anti-causal, some are spurious while others are invariant. (b) Features are either causal and may undergo covariate shift, or are anti-causal and spurious.}
\label{theoretical_cases}
\end{figure}
In scenario (a), we take $Y$ to be a binary variable drawn from a Bernoulli distribution with parameter $\eta\in{[0,1]}$, and observed features are generated conditionally on $Y$. The features ${\mathbf x}_{\text{ac-ns}}\in{\mathbb{R}^{d_{\text{ns}}}}$ are invariant, meaning their conditional distribution given $Y$ is the same for all environments, whereas ${\mathbf x}_{\text{ac-sp}}\in{\mathbb{R}^{d_{\text{sp}}}}$ are spurious features, as their distribution may shift between environments, altering their correlation with $Y$. The data generating process for training environment $i\in{[k]}$ in Fig. \ref{theoretical_cases}(a) is given by:
\begin{minipage}{.32\linewidth}
\begin{align*}
y = \begin{cases}
1 & \text{w.p. } \eta \\
0 & \text{otherwise}
\end{cases}
\end{align*} \end{minipage} \begin{minipage}{.65\linewidth}
\begin{align} \label{eq:setting_a}
X_{\text{ac-ns}} &\mid Y=y \sim\mathcal{N}\left((y-1/2)\mu_{\text{ns}}, \Sigma_{\text{ns}}\right), \nonumber \\
X_{\text{ac-sp}} &\mid Y=y \sim\mathcal{N}\left((y-1/2)\mu_i, \Sigma_i\right).
\end{align} \end{minipage}
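The generative process in \eqref{eq:setting_a} can be simulated directly. In the sketch below, the parameter values (means, covariances, base rate) are arbitrary placeholders, not ones used in the paper; the sign flip of $\mu_i$ between the two environments reverses the spurious feature's correlation with $Y$:

```python
import numpy as np

def sample_environment(n, eta, mu_ns, Sigma_ns, mu_i, Sigma_i, rng):
    """Draw n samples from training environment i in scenario (a):
    Y ~ Bernoulli(eta), then anti-causal features conditional on Y."""
    y = rng.binomial(1, eta, size=n)
    shift = (y - 0.5)[:, None]  # the (y - 1/2) factor scaling the class mean
    x_ns = shift * mu_ns + rng.multivariate_normal(np.zeros(len(mu_ns)), Sigma_ns, size=n)
    x_sp = shift * mu_i + rng.multivariate_normal(np.zeros(len(mu_i)), Sigma_i, size=n)
    return np.hstack([x_ns, x_sp]), y

rng = np.random.default_rng(1)
mu_ns, Sigma_ns = np.array([2.0, -1.0]), np.eye(2)  # invariant across environments
# Environment-specific spurious parameters with opposite signs:
X1, y1 = sample_environment(50_000, 0.5, mu_ns, Sigma_ns, np.array([1.5]), np.eye(1), rng)
X2, y2 = sample_environment(50_000, 0.5, mu_ns, Sigma_ns, np.array([-1.5]), np.eye(1), rng)

corr1 = np.corrcoef(X1[:, 2], y1)[0, 1]  # positive in environment 1
corr2 = np.corrcoef(X2[:, 2], y2)[0, 1]  # negative in environment 2
print(round(corr1, 2), round(corr2, 2))
```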
For ${\mathbf x}= [{\mathbf x}_{\text{ac-ns}}, {\mathbf x}_{\text{ac-sp}}]$ we consider a linear classifier $f({\mathbf x} ; {\mathbf w},b) = \sigma( {\mathbf w}^\top {\mathbf x} + b)$, where $\sigma: \mathbb{R} \rightarrow [0,1]$ is some invertible function (e.g. a sigmoid). Since the mean of spurious features, $\mu_i$, is determined by $y$, these features can help predict the label in some environments. Yet, these correlations do not carry to all environments, and $f({\mathbf x})$ might rely on spurious correlations whenever the coefficients in ${\mathbf w}$ corresponding to ${\mathbf x}_{\text{ac-sp}}$ are non-zero.
Any such classifier can suffer an arbitrarily high loss in an unseen environment, because a new environment can reverse and magnify the correlations observed in $E_{\text{train}}$. Using these definitions, we may now state our result for this case: \begin{theorem} \label{thm:setting_a} Given $k > 2d_{\text{sp}}$ training environments where data is generated according to \eqref{eq:setting_a} with parameters $\{\mu_i, \Sigma_i\}_{i=1}^{k}$, we say they lie in general position if for all non-zero ${\mathbf x}\in{\mathbb{R}^{d_{\text{sp}}}}$: \begin{align*}
\mathrm{dim}\left(\mathrm{span}\left\{\begin{bmatrix} \Sigma_i{\mathbf x} + \mu_i \\
1 \end{bmatrix}\right\}_{i\in{[k]}}\right) = d_{\text{sp}}+1. \end{align*} If a linear classifier is calibrated on $k$ training environments which lie in general position, then its coefficients for the features ${\mathbf x}_{\text{ac-sp}}$ are zero. Moreover, the set of training environments that do not lie in general position has measure zero in the set of all possible training environments $\mathcal{E}^k$. \end{theorem} As a corollary, we see that calibration on training environments generalizes to calibration on $\mathcal{E}$. The proof of this theorem is given in the supplementary material, \secref{sec:proof1}. The data generating process closely resembles the one considered by \cite{rosenfeld2020risks}, who use diagonal covariance matrices.
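The general-position condition can be probed numerically. The sketch below uses hypothetical, randomly drawn environment parameters and a rank computation over a few random probes ${\mathbf x}$; it is a heuristic check for illustration, not part of the paper's proof:

```python
import numpy as np

def in_general_position(mus, Sigmas, trials=20, rng=None):
    """Heuristic probe of the general-position condition: for random
    nonzero x, the k stacked vectors [Sigma_i x + mu_i; 1] should span
    R^{d+1}. Full rank on every probe is evidence, not a proof."""
    rng = rng or np.random.default_rng(0)
    d = len(mus[0])
    for _ in range(trials):
        x = rng.standard_normal(d)
        stacked = np.array([np.append(S @ x + m, 1.0) for m, S in zip(mus, Sigmas)])
        if np.linalg.matrix_rank(stacked) < d + 1:
            return False
    return True

rng = np.random.default_rng(2)
d = 2
k = 2 * d + 1  # k > 2 d_sp environments, as required by the theorem
mus = [rng.standard_normal(d) for _ in range(k)]
Sigmas = [np.eye(d) + W @ W.T for W in [rng.standard_normal((d, d)) for _ in range(k)]]
print(in_general_position(mus, Sigmas))  # generically True for random draws

# Identical environments are degenerate: the stacked vectors have rank 1.
print(in_general_position([np.zeros(d)] * k, [np.eye(d)] * k))  # False
```

This matches the measure-zero statement: randomly drawn environments essentially always satisfy the condition, while exact repetitions of a single environment do not.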
In the second scenario we consider the addition of causal features subject to covariate shift ${\mathbf x}_{\text{c}}\in{\mathbb{R}^{d_{\text{c}}}}$, as shown in \figref{theoretical_cases}b. The covariate shift is induced when the environments $E$ alter the distribution of the causal features ${\mathbf x}_{\text{c}}$ \cite{scholkopf2012causal}. In this case, we analyze a regression problem since it is amenable to exact analysis. The data generating process for training environment $i\in{[k]}$ is: \begin{align} \label{eq:setting_b}
X_{c} &\sim \mathcal{N}(\mu^c_i, \Sigma^c_{i}); \quad
Y = {{\mathbf w}^*_c}^\top X_c + \xi, \quad \xi\sim\mathcal{N}(0, \sigma^2_y), \nonumber \\
&X_{\text{ac-sp}} = Y\mu_i + \eta, \quad \eta\sim \mathcal{N}(\textbf{0},\Sigma_i).
\end{align} For ${\mathbf x}= [{\mathbf x}_c, {\mathbf x}_{\text{ac-sp}}]$ it turns out that in this case, calibration on multiple domains forces $f({\mathbf x})$ to discard ${\mathbf x}_{\text{ac-sp}}$, but also forces it to use ${\mathbf w}_c^*$, since it characterizes $P(Y \mid {\mathbf x}_c)$ which is the invariant mechanism in this scenario. The exact statement and proof are in \secref{sec:proof2} of the supplement. \begin{theorem}[informal] Let $f({\mathbf x}; {\mathbf w}) = {\mathbf w}^\top{\mathbf x}$ be a linear regressor and assume we have $k > \max{\{d_\text{c}+2, d_{\text{sp}}\}}$ training environments where data is generated according to \eqref{eq:setting_b}. Under mild non-degeneracy conditions, if the regressor is calibrated across all training environments then the coefficients corresponding to $X_{\text{c}}$ equal ${\mathbf w}_c^*$ and those that correspond to $X_{\text{ac-sp}}$ are zero. \end{theorem}
Together, these results show calibration can generalize across environments, given that the number of environments is approximately that of the spurious features. They also show that for the settings above, the relatively stable and well-known notion of calibration implies avoiding spurious correlations.
\section{Related Work} As discussed in Section \ref{sec:ind_and_calib}, multi-domain calibration is an instance of an invariant representation \cite{arjovsky2019invariant}. Many extensions to the above work have been proposed, e.g. \cite{krueger2020out, bellot2020generalization}. Yet, recent work claims that many of these approaches still fail to find invariant relations in cases of interest \cite{kamath2021does, rosenfeld2020risks, guo2021out}, where a significant challenge seems to be the gap between what is achieved by the regularization term used in practice and the goal of conditional independence $Y \indep E \mid \Phi(X)$. Gulrajani et al. \cite{gulrajani2020search} give a sobering view on methods for OOD generalization, emphasizing the power of ERM and data augmentation, and the challenge of model selection. We claim that compared to the above approaches, multi-domain calibration studied here is a simpler form of invariance. Furthermore, calibration is attractive because there are standard tools to quantify it such as calibration scores \cite{DBLP:conf/cvpr/NixonDZJT19} and a vast literature on its properties and how it can be obtained \cite{DBLP:conf/icml/ZadroznyE01,vovk2003self,niculescu2005predicting,kumar2018trainable,vaicenavicius2019evaluating,gupta2020distribution,rahimi2020intra}.
Learning models which generalize OOD is a fruitful area of research with many recent developments. Most work focuses on the case of Domain Adaptation where unlabeled samples are available from the target domain, including recent work on OOD calibration \cite{wang2020transferable}. However, important work has also been done on the area of our focus -- the so-called ``proactive'' case \cite{subbaswamy2019preventing}, where no OOD samples are available whatsoever \cite{magliacane2018domain,heinze2018invariant, rothenhausler2018anchor,peters2016causal, sagawa2019distributionally}.
Calibration also plays an important role in uncertainty estimation for deep networks \cite{guo2017calibration}, and recently in fairness, where calibration on subgroups of populations is sought \cite{pleiss2017fairness}. This has interesting resemblance to the multiple environments calibration we consider here. A more general notion of multi-calibration has also been studied in this context \cite{hebert2018multicalibration}, with recent results on sample complexity \cite{shabat2020sample} which may provide tools to finite sample analysis of domain generalization. Finally, multiple methods for training calibrated models \cite{kumar2018trainable, mukhoti2020calibrating, rahimi2020intra} have also been proposed. In Section \ref{sec:algs} we propose a generalization of \cite{kumar2018trainable} to the multi-domain case to achieve multi-domain calibration.
\section{Proactively Achieving Multi-Domain Calibration} \label{sec:algs} So far we have seen a general argument why calibration can limit spurious correlations, and that in linear-Gaussian models multi-domain calibration guarantees OOD generalization. Now we turn to a more applied perspective and show how can we optimize models so they achieve this type of calibration in practice. We propose three approaches: (1) using calibration measures for model selection, (2) post-processing calibration, and (3) a calibration objective building on a method proposed by \cite{kumar2018trainable}. \secref{sec:calibration_intro} in the supplementary provides a slightly broader introduction to notions we use here.
We will assess model calibration by the Expected Calibration Error (\emph{ECE}) of the calibration curve~\cite{degroot1983comparison}, which is the average deviation between model accuracy and model confidence.
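As a concrete sketch of this metric (equal-width binning is one standard choice; the bin count and edge handling are implementation details, not fixed by the paper):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted average over equal-width confidence bins of
    |mean accuracy - mean confidence| within each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in bin
    return ece

# Four predictions at confidence 0.85, three of them correct: a single
# occupied bin with accuracy 0.75, so ECE ~ |0.75 - 0.85| = 0.1.
print(expected_calibration_error([0.85, 0.85, 0.85, 0.85], [1, 1, 1, 0]))
```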
\subsection{Model selection with average ECE} \label{sec:model_selection} Model selection is challenging when aimed at OOD generalization. As recently observed by \cite{gulrajani2020search}, since OOD accuracy is often at odds with In-Domain (ID) accuracy, selection based on ID validation error eliminates the advantage of domain generalization methods over vanilla ERM with data augmentation. We suggest that model selection towards OOD generalization should balance ID validation error with another observable surrogate for the stability of a model to distribution shifts between domains. Motivated by multi-domain calibration, we propose using the average ECE across training environments as this surrogate. Concretely, we propose choosing a model with lowest average ECE from those obtaining ID validation accuracy that is above a certain user-defined threshold.
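The selection rule above can be sketched as follows; the candidate models, accuracy threshold, and per-environment statistics are hypothetical stand-ins for values a practitioner would compute on a validation set:

```python
import numpy as np

def select_model(models, acc_threshold=0.7):
    """Among models whose in-domain validation accuracy clears the
    threshold, pick the one with lowest average ECE across training
    environments. Each model is a dict of precomputed statistics."""
    eligible = [m for m in models if m["id_val_acc"] >= acc_threshold]
    if not eligible:
        raise ValueError("no model clears the accuracy threshold")
    return min(eligible, key=lambda m: np.mean(m["per_env_ece"]))

# Hypothetical candidates: model "b" is slightly less accurate in-domain
# but far better calibrated, on average, across training environments.
candidates = [
    {"name": "a", "id_val_acc": 0.92, "per_env_ece": [0.15, 0.22, 0.18]},
    {"name": "b", "id_val_acc": 0.88, "per_env_ece": [0.03, 0.05, 0.04]},
    {"name": "c", "id_val_acc": 0.65, "per_env_ece": [0.01, 0.01, 0.01]},  # below threshold
]
print(select_model(candidates)["name"])  # -> b
```

Lowering the threshold changes the trade-off: with `acc_threshold=0.6`, model "c" becomes eligible and wins on average ECE.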
\subsection{Post-Processing Calibration}\label{subsec:postproc} Practitioners interested in (single-domain) calibrated models often apply post-processing calibration methods to binary classifiers, where the most widely used approach is Isotonic Regression Scaling \cite{DBLP:conf/icml/ZadroznyE01, niculescu2005predicting}.
Unlike standard calibration problems, in our case there are multiple domains to calibrate over. We give two ways of extending Isotonic Regression to the multi-domain setting, which we term ``naive calibration'' and ``robust calibration''. \textbf{Naive Calibration} takes the predictions of a trained model $f$ on validation data pooled from all domains and fits an isotonic regression $z^*$. We then report the performance of $z^* \circ f$ on the OOD test set.\\ \textbf{Robust Calibration:} In a multiple domain setting, naive calibration may produce a model that is well calibrated on the pooled data, but uncalibrated on individual environments. Since our goal is simultaneous calibration, the following alternative attempts to bound the worst-case miscalibration across training environments. For each environment $e\in{E_{\text{train}}}$, we denote the number of validation examples we have from it by $N_{e}$, and by $f_{e,i}$ and $y_{e,i}$ the model's prediction on the $i$-th example and its label, respectively. Then, in a similar vein to robust optimization, we fit an isotonic regressor that solves:
\begin{align} \label{eq:robust_iso}
z^* = \argmin_{z}\max_{e\in{E_{\text{train}}}}{\frac{1}{N_e}\sum_{i=1}^{N_e}{\left(z(f_{e,i})-y_{e,i}\right)^2}}.
\end{align}
Since Isotonic Regression can be formulated as a quadratic program, and \eqref{eq:robust_iso} minimizes a pointwise maximum over such objectives, we can cast \eqref{eq:robust_iso} as a convex program and solve it with standard optimizers. We then evaluate the OOD performance of $z^* \circ f$.
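A minimal sketch of the naive variant, using a hand-rolled pool-adjacent-violators (PAV) fit in place of a library isotonic regressor; the scores and labels are toy values:

```python
import numpy as np

def fit_isotonic(scores, labels):
    """Pool-adjacent-violators: returns (sorted_scores, fitted_values)
    defining a monotone step function z minimizing sum (z(s_i) - y_i)^2."""
    order = np.argsort(scores)
    s = np.asarray(scores, dtype=float)[order]
    v, w = [], []  # block means and block sizes
    for yi in np.asarray(labels, dtype=float)[order]:
        v.append(yi)
        w.append(1.0)
        while len(v) > 1 and v[-2] > v[-1]:  # merge adjacent violating blocks
            tot = w[-2] + w[-1]
            v[-2] = (v[-2] * w[-2] + v[-1] * w[-1]) / tot
            w[-2] = tot
            v.pop()
            w.pop()
    fitted = np.repeat(v, np.asarray(w, dtype=int))
    return s, fitted

def apply_isotonic(s_knots, fitted, new_scores):
    """Evaluate the fitted monotone map (linear interpolation between knots)."""
    return np.interp(new_scores, s_knots, fitted)

# Naive calibration: pool validation predictions from all environments.
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.6])
labels = np.array([0, 0, 1, 1, 1, 0])
knots, fitted = fit_isotonic(scores, labels)
calibrated = apply_isotonic(knots, fitted, scores)
print(calibrated)
```

The robust variant replaces the single pooled fit with the min-max objective in \eqref{eq:robust_iso}, solved as a convex program over per-environment squared errors.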
\subsection{Learning with Multi-Domain Calibration Error}\label{subsec:clove} The above model selection and post-processing methods are easy to apply and (as we will soon see) surprisingly effective. However, both are limited in their power to learn a model that is truly well-calibrated across multiple domains. We now propose a more powerful approach: an objective function that directly penalizes calibration errors on multiple domains during training.
Specifically, we propose learning a parameterized classifier $f_\theta({\mathbf x})$ using a learning rule of the form:
$\min_{\theta}{\sum_{e\in{E_{\text{train}}}}{l^e(f_\theta)} + \lambda \cdot r(f_\theta) }$,
where $l:\mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is an empirical loss function (e.g. cross-entropy) and $l^e(f_\theta)$ denotes the expected loss over data from training environment $e$, and $r(f_\theta)$ is a regularization term over multiple environments. Using this notation the method proposed by \cite{arjovsky2019invariant} learns a classifier $f=w\circ \Phi$ with a regularizer given by $r(f) = \sum_{e\in{E_{\text{train}}}}{r^e_{\text{IRMv1}}}(f)$, where $r^e_{\text{IRMv1}}(f) = \|\nabla_{w \mid w=1}{l^e(w\cdot \Phi)}\|^2$.
Our proposed regularizer $r(f_\theta)$ is based on the work of Kumar et al. \cite{kumar2018trainable}, who introduce a method they call Maximum Mean Calibration Error (MMCE). MMCE harnesses the power of universal kernels to express the ECE as an Integral Probability Metric, and works as follows: for a dataset $D = \{{\mathbf x}_i, y_i\}_{i=1}^{m}$, denote the confidence of a classifier on the $i$-th example by $f_{\theta;i}=\max\{f_\theta(x_i), 1-f_\theta(x_i)\}$
and its correctness by $c_i=\mathbbm{1}_{| y_i-f_\theta(x_i) | < \frac{1}{2}}$. For a given universal kernel $k:\mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, MMCE over the dataset $D$ is given by:
$r^{D}_{\text{MMCE}}(f_\theta) = \frac{1}{m^2}\sum_{i,j\in{D}}{(c_i-f_{\theta;i})(c_j-f_{\theta;j})k(f_{\theta;i},f_{\theta;j})}$.
\textbf{Calibration Loss Over Environments (CLOvE{}).}
Given multiple training domains with a dataset $D^e$ for each $e\in{E_{\text{train}}}$, we arrive at our proposed regularizer by aggregating MMCE over them: $r_{\text{CLOvE{}}}(f_\theta) = \sum_{e\in{E_{\text{train}}}}r^{D^e}_{\text{MMCE}}(f_\theta)$. A key property of CLOvE{} is that its minima correspond to perfectly calibrated classifiers over all training domains, a consequence of the correspondence between MMCE and perfect calibration. \begin{corollary}[of Thm.~1 in \cite{kumar2018trainable}]\label{corr:proper_score} CLOvE{} is a faithful measure of multi-domain calibration: it equals $0$ if and only if $f_\theta({\mathbf x})$ is perfectly calibrated on every $e \in E_{\text{train}}$. \end{corollary} Additional properties of CLOvE{}, such as large deviation bounds and its relation to ECE, can also be derived; see the results in \cite{kumar2018trainable} for further details. In the following section, we will see how these properties translate into favorable OOD generalization in practice when training with CLOvE{}.
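As an illustration, here is a numpy sketch of the MMCE penalty and its CLOvE aggregation. The correctness and confidence definitions follow the text above; the Laplacian kernel and its width of $0.4$ are assumed values (one choice used by Kumar et al.), not the only possibility:

```python
import numpy as np

def mmce(probs, labels, width=0.4):
    """MMCE penalty on one dataset: (1/m^2) * sum_{i,j}
    (c_i - f_i)(c_j - f_j) k(f_i, f_j), with a Laplacian kernel
    k(a, b) = exp(-|a - b| / width) on the confidences."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=float)
    conf = np.maximum(probs, 1.0 - probs)                    # confidence f_{theta;i}
    correct = (np.abs(labels - probs) < 0.5).astype(float)   # correctness c_i
    diff = correct - conf
    kernel = np.exp(-np.abs(conf[:, None] - conf[None, :]) / width)
    m = len(probs)
    return float(diff @ kernel @ diff) / m**2

def clove(env_probs, env_labels, width=0.4):
    """CLOvE: sum of MMCE penalties over the training environments."""
    return sum(mmce(p, y, width) for p, y in zip(env_probs, env_labels))

# A perfectly confident, always-correct classifier incurs zero penalty:
print(mmce([1.0, 1.0, 0.0], [1, 1, 0]))  # -> 0.0
# Overconfident wrong predictions are penalized:
print(mmce([0.9, 0.9], [0, 0]))
```

In training, a differentiable version of this term (e.g. in a deep learning framework) would be added to the empirical loss with weight $\lambda$, as in the learning rule above.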
\section{Experiments and Results} \label{sec:exp}
\input{cmnist}
\subsection{WILDS Benchmarks} \label{subsec:wilds}
WILDS is a recently proposed benchmark of in-the-wild distribution shifts from several data modalities and applications\footnote{\url{https://wilds.stanford.edu}}. Table \ref{tab:wilds} presents the four WILDS datasets we experiment with, chosen to represent diverse OOD generalization scenarios. We follow the models and training algorithms proposed by \cite{koh2020wilds}. In order to perform multi-domain calibration we modify the splits to include a multi-domain validation set whenever possible. See supplemental \secref{sec:datasets_models} for details and for additional results on Amazon Reviews. As in \cite{koh2020wilds}, we use three different training algorithms to train our models: \textbf{ERM}, \textbf{IRM}, \textbf{DeepCORAL}, and further use \textbf{GroupDRO} for one of the datasets, compatible with WILDS version 1.0.0. We apply three calibration approaches described in \ref{subsec:postproc} and \ref{subsec:clove} above to each trained model: \textbf{naive calibration} and \textbf{robust calibration}, which are post-processing methods and therefore applied on the models' outputs; and \textbf{CLOvE}, which we apply as a fine-tuning approach to the top layers of each trained model. We train each (algorithm $\times$ calibration) combination four times with different random seeds, and report average results and their standard deviations.
\begin{table*}[htp!]
\centering
\scalebox{0.8}{
\begin{tabular}{l|lllll} \toprule
Dataset & Type & Label ($y$) & Input ($x$) & Domain ($e$) & Model ($f(x)$) \\ \midrule \midrule
\textbf{PovertyMap} & Regression & Asset Wealth Index & Satellite Image & Country & ResNet \\
\textbf{Camelyon17} & Binary & Tumor Tissue & Histopathological Image & Hospital & DenseNet \\
\textbf{CivilComments} & Binary & Comment Toxicity & Online Comment & Demographics & BERT\\
\textbf{FMoW} & Multi-class & Land Use Type & Satellite Image & Region & DenseNet \\ \bottomrule
\end{tabular}}
\caption{Description of each of the datasets used in our WILDS experiments.}
\label{tab:wilds}
\end{table*}
Table \ref{tab:fmow_camelyon} presents our main results on the \textit{FMoW} (left) and \textit{Camelyon17} (right) datasets. On both datasets, robust calibration already improves performance, and CLOvE then significantly outperforms robust calibration, improving performance by $7\%$ and $2.8\%$ (absolute) over the strongest alternative on \textit{FMoW} and \textit{Camelyon17}, respectively. When compared to the original model, the performance of CLOvE is even more striking, with CLOvE outperforming it by more than $10\%$ (absolute) on \textit{FMoW} and $6\%$ on \textit{Camelyon17}. Another appealing property of CLOvE is the low variance exhibited across different runs. Indeed, CLOvE has lower variance than both naive and robust calibration approaches, and has lower variance than the original (uncalibrated) model on 4 of the 6 experiments.
\begin{table}[ht]
\centering
\scalebox{0.8}{
\begin{tabular}{l|cccc|cccc} \toprule
& \multicolumn{4}{c}{\textit{FMoW}} & \multicolumn{4}{c}{\textit{Camelyon17}} \\
Algorithm & Orig. & Naive Cal. & Rob. Cal. & CLOvE & Orig. & Naive Cal. & Rob. Cal. & CLOvE \\ \midrule \midrule
ERM & 32.63 & 33.09 & 37.19 & \textbf{44.16} & 66.66 & 71.23 & 71.22 & \textbf{75.75} \\
& (\underline{1.6}) & (2.1) & (3.5) & (1.8) & (14.4) & (8.9) & (8.6) & (\underline{4.9}) \\
DeepCORAL & 31.73 & 31.75 & 33.86 & \textbf{40.05} & 72.44 & 75.97 & 76.8 & \textbf{79.96} \\
& (1.) & (1.) & (1.6) & (\underline{0.9}) & (4.4) & (5.4) & (6.5) & (\underline{3.9}) \\
IRM & 31.33 & 31.81 & 34.41 & \textbf{42.24} & 70.87 & 73.25 & 73.4 & \textbf{73.95} \\
& (\underline{1.2}) & (1.6) & (1.5) & (1.4) & (6.8) & (6.6) & (6.9) & (\underline{6.1}) \\ \bottomrule
\end{tabular}}
\caption{Left: worst unseen region accuracy on OOD test set in \textit{FMoW}. Right: Accuracy on unseen hospital test set in \textit{Camelyon17}. Orig.: original algorithm, no changes applied. Best OOD result for each domain in \textbf{bold}. Standard deviation across runs in brackets, lowest OOD std. is \underline{underlined}.}
\label{tab:fmow_camelyon}
\end{table}
\textbf{Analysis.} As can be seen in Figure \ref{fig:camelyon_ece}, improvements in ID calibration are associated with better OOD performance. Interestingly, when our post-processing does not improve OOD performance, it is often linked to our inability to substantially improve ID calibration. This is most visible in the IRM experiments, where robust calibration is unable to outperform naive calibration both in terms of ID calibration and of OOD performance. Finally, we find it interesting that merely post-processing the model's outputs (as in robust calibration) can already have such a marked effect on OOD accuracy, though it remains inferior to actually optimizing for multi-domain calibration as done by CLOvE.
\begin{wrapfigure}{r}{0.4\textwidth}
\centering
\includegraphics[width=0.4\textwidth]{figures/Camelyon17_ECE_Figure_latest.png}
\caption{OOD accuracy as a function of average ECE over training domains, for all models on the \textit{Camelyon17} dataset.}
\label{fig:camelyon_ece}
\end{wrapfigure}
\textbf{Results on alternative settings.} While our theoretical analysis focuses on OOD generalization of classification models, we also experiment with alternative settings from \textit{WILDS} to test the power of ID calibration in improving OOD performance. Specifically, we experiment with the \textit{PovertyMap} dataset, which introduces a regression task, and the \textit{CivilComments} dataset, which introduces a sub-population shift scenario for a binary classifier. As can be seen in Table \ref{tab:poverty_civilcomments}, results on the \textit{CivilComments} dataset (right) show that calibration consistently improves worst-case performance, with an average improvement of $21.5\%$ across training algorithms. While CLOvE does outperform naive and robust calibration on average, the gain is lower in comparison to \textit{FMoW} and \textit{Camelyon17}.
In \textit{PovertyMap} (left), the model solves a regression task, so we cannot use CLOvE to improve OOD performance. Still, robust calibration improves performance across all experiments, though by a smaller margin. In the case of models pre-trained with IRM, robust calibration improves OOD performance substantially, outperforming the original model by $0.08$ (absolute, in Pearson $r$). Interestingly, calibration also leads to more stable results both in \textit{PovertyMap} and in \textit{CivilComments}, as can be seen from the standard deviation across different model runs.
\begin{table}[ht]
\centering
\scalebox{0.78}{
\begin{tabular}{l|ccc|l|cccc} \toprule
\multicolumn{4}{c|}{\textit{PovertyMap}} & & \multicolumn{4}{c}{\textit{CivilComments}} \\
Algorithm & Orig. & Naive Cal. & Rob. Cal. & Algorithm & Orig. & Naive Cal. & Rob. Cal. & CLOvE \\ \midrule \midrule
ERM & 0.832 & 0.827 & \textbf{0.834} & ERM & 63.65 & 76.98 & 78.99 & \textbf{80.39} \\
& (0.011) & (0.014) & (\underline{0.006}) & & (2.6) & (\underline{0.5}) & (0.8) & (0.7) \\
IRM & 0.735 & 0.812 & \textbf{0.815} & IRM & 40.61 & \textbf{68.97} & 68.92 & 68.45 \\
& (0.117) & (0.016) & (\underline{0.015}) & & (16) & (\underline{1.3}) & (\underline{1.3}) & (2.) \\
DeepCORAL & 0.832 & 0.835 & \textbf{0.837} & GroupDRO & 71.67 & 76.2 & 78.54 & \textbf{80.07} \\
& (0.011) & (\underline{0.009}) & (0.012) & & (0.7) & (1.3) & (0.8) & (\underline{0.3}) \\
\bottomrule
\end{tabular}}
\caption{Left: Pearson correlation $r$ on in-domain (ID) and OOD (unseen countries) test sets in \textit{PovertyMap}. Right: average group accuracy on the test set in the \textit{CivilComments} dataset.}
\label{tab:poverty_civilcomments}
\end{table}
\section{Conclusion} \label{sec:conc} In this paper we highlight a novel connection between multi-domain calibration and OOD generalization, arguing that such calibration can be viewed as an invariant representation. We proved in a linear setting that models calibrated on multiple domains are free of spurious correlations and therefore generalize out of domain. We then proposed multi-domain calibration as a practical and measurable surrogate for the OOD performance of a classifier. We demonstrated that actively tuning models to achieve multi-domain calibration significantly improves model performance on unseen test domains, and that in-domain calibration on a validation set is a useful criterion for model selection.
A major limitation of our work is that our theoretical findings are limited to linear models in a population (as opposed to finite-sample) setting; we thus consider them more as a motivation than a full justification of using multi-domain calibration in practice as we do. Better formal understanding can also inform us on when we should expect to gain from calibration techniques. Even though in our experiments we see that the techniques mostly improve OOD performance while preserving ID accuracy, it is plausible that failure cases exist and should be characterized. We look forward to expanding the scope of theoretical understanding of the conditions under which multi-domain calibration can provably guarantee out-of-domain generalization, including the finite-sample setting and the analysis of specific algorithms. We also expect new practical methods, building on our findings, will help push forward the real-world ability to generalize to unseen test domains.
\section*{Acknowledgments} We wish to thank Ira Shavitt for his helpful comments and to Alexandre Ram\'e for pointing us to an error in the original manuscript. This research was partially supported by the Israel Science Foundation (grant No. 1950/19).
\section*{Checklist}
\begin{enumerate}
\item For all authors... \begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{Limitations that we found most significant are discussed in \secref{sec:conc}}
\item Did you discuss any potential negative societal impacts of your work?
\answerNo{The work does not treat a specific task where we see an immediate risk for negative societal impact. Though the methods described in this paper might be used by practitioners in safety-critical domains such as healthcare, and we suggest doing so with care since the suggested methods need to be studied further in controlled settings before being used in such domains.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{The paper conforms with the provided guidelines.} \end{enumerate}
\item If you are including theoretical results... \begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerYes{Most assumptions, such as the population-setting (infinite-data) assumption, Gaussian data, and linearity of models have been given in \secref{sec:motiv}. Other assumptions regarding specific general-position conditions under which the theorems hold are given in the supplementary material, along with detailed statements of the theorems.}
\item Did you include complete proofs of all theoretical results?
\answerYes{The proofs of main theoretical claims can be found in the supplementary material.} \end{enumerate}
\item If you ran experiments... \begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerNo{We are preparing the code for publication and will do our best to have it ready by the end of the review period. Major parts of our code are based on existing publicly available code from papers that we cite, while noting that we used their published code. Hence some parts of our results can be reproduced quite easily by using it.}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{We specify hyperparameters and training details in the supplementary material (for both the WILDS benchmark and ColoredMNIST). When using a training setup from other works (e.g. in Colored MNIST), we give a reference to the work and specify changes we made upon their setup.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerYes{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{} \end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{Some of the code we use is adapted from the works of \cite{arjovsky2019invariant, kamath2021does} which we cite several times and specifically in \secref{sec:exp} with regards to the used code. Our experiments on the WILDS benchmark \cite{koh2020wilds} use assets from the project that was also cited.}
\item Did you mention the license of the assets?
\answerYes{Licenses are included in the supplementary material.}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNo{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNo{All assets we use are allowed for public use for the purposes of scientific research.}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNo{The data we use is taken from publicly available datasets, where all personally identifiable information has been removed.} \end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{We did not conduct research with human subjects}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{} \end{enumerate}
\end{enumerate}
\appendix \renewcommand{\thefigure}{S\arabic{figure}} \setcounter{figure}{0} \renewcommand{\thetable}{S\arabic{table}} \renewcommand{\thelemma}{S\arabic{lemma}} \renewcommand{\thedefinition}{S\arabic{definition}} \setcounter{table}{0} \setcounter{theorem}{0} \setcounter{definition}{0} \setcounter{lemma}{0}
\section{Proofs for Theoretical Claims} We begin by supplementing the definition of multiple domain calibration, extending it to the case of regression, and then provide proofs of the theorems in the paper. \subsection{Definition of Calibration} \label{sec:calibration_intro} Recall our definition of a calibrated classifier for binary tasks. \begin{definition}
Let $f:\mathcal{X}\rightarrow [0,1]$ and let $P[X, Y]$ be a joint distribution over the features and label. Then $f({\mathbf x})$ is calibrated w.r.t $P$ if for all $\alpha\in{[0,1]}$ in the range of $f$:
\begin{align*}
\mathbb{E}_{P}{\left[ Y \mid f(X)=\alpha\right]} = \alpha.
\end{align*}
In the multiple environments setting, $f({\mathbf x})$ is calibrated on $E_{\text{train}}$ if for all $e_i\in{E_{\text{train}}}$ and $\alpha$ in the range of $f$ restricted to $e_i$:
\begin{align}
\mathbb{E}{\left[ Y \mid f(X)=\alpha, E=e_i\right]} = \alpha \nonumber.
\end{align} \end{definition} Let us prove the connection between multi-domain calibration and invariance, we repeat the statement of the lemma from the main paper for convenience. \begin{lemma}[\lemref{lemma:corresp} in main paper]
If a binary classifier $f$ is invariant w.r.t $E_{\text{train}}$ then there exists some $g:\mathbb{R}\rightarrow [0,1]$ such that $g\circ f$ is calibrated on all training environments and its mean squared error on each environment does not exceed that of $f$. On the other hand, if a classifier is calibrated on all training environments it is also invariant w.r.t $E_{\text{train}}$. \end{lemma}
\begin{proof} Assume that the classifier is invariant w.r.t $E_{\text{train}}$, let $e_i\in{E_{\text{train}}}$ and note that: \begin{align*} \mathbb{E}[(Y - f(X))^2 \mid E=e_i] \geq \min_{g:\mathbb{R}\rightarrow\mathbb{R}}{\mathbb{E}[(Y - g\circ f(X))^2 \mid E=e_i]}. \end{align*} The solution to the RHS is to take $g(\hat{\alpha}) = \mathbb{E} [Y \mid f(X) = \hat{\alpha}, E=e_i]$ for all $\hat{\alpha}\in{[0,1]}$ and it results in a classifier $g \circ f$ that is calibrated w.r.t $e_i$. Due to invariance, for all $\hat{\alpha}\in{\mathbb{R}}$ the expectation $\mathbb{E} [Y \mid f(X) = \hat{\alpha}]$ is identical across all $e_i\in{E_{\text{train}}}$ where $\hat{\alpha}$ is in the range of $f$ restricted to $e_i$. Therefore there exists a single function $g$ that solves the RHS simultaneously over all environments. The resulting $g\circ f$ is indeed calibrated over all training domains and its mean squared error does not exceed that of $f$ (note that since the square loss is Bayes-consistent, this claim also holds for the classification error). The other part of the statement that a calibrated classifier on all $E_{\text{train}}$ is invariant follows easily from the definitions. \end{proof}
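The recalibration step in the proof (replacing $f$ by $g\circ f$ with $g(\hat\alpha)=\mathbb{E}[Y\mid f(X)=\hat\alpha]$) can be checked numerically: fitting $g$ as the per-score conditional mean of $Y$ never increases the empirical squared error. A minimal sketch with synthetic discrete scores (the data values are hypothetical, chosen only for illustration):

```python
from collections import defaultdict

def recalibrate(scores, labels):
    """g(alpha) = empirical mean of Y among examples with f(X) = alpha."""
    sums, counts = defaultdict(float), defaultdict(int)
    for s, y in zip(scores, labels):
        sums[s] += y
        counts[s] += 1
    return {s: sums[s] / counts[s] for s in sums}

def mse(preds, labels):
    return sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(labels)

# a miscalibrated score that still sorts examples reasonably well
scores = [0.2, 0.2, 0.2, 0.9, 0.9, 0.9]
labels = [0, 0, 1, 1, 1, 0]
g = recalibrate(scores, labels)          # {0.2: 1/3, 0.9: 2/3}
mse_f = mse(scores, labels)
mse_gf = mse([g[s] for s in scores], labels)
assert mse_gf <= mse_f  # conditional means minimize squared error per group
```

Because the conditional mean is the squared-error-optimal prediction within each score group, the inequality holds for any choice of scores and labels.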
For regression tasks, one may consider a function that outputs a full CDF on $Y$ and define a calibrated classifier as one where all quantiles of the CDF match the true quantiles of $Y$ as the number of examples approaches infinity. This leads to the definition in \cite{kuleshov2018accurate}, and one may follow this to analyze more general cases than the scenario we will consider in this work.
Since in this section we consider Gaussian distributions and linear regressors, a definition based on the first two moments of the distribution (instead of all quantiles of a CDF) will suffice. Hence we will work with the following definition: \begin{definition}
Let $f:\mathcal{X}\rightarrow \mathbb{R}^2$ and let $P[X, Y]$ be a joint distribution over the features and label. Then $f({\mathbf x})$ is calibrated w.r.t $P$ if for all $(\alpha, \beta)\in{\mathbb{R}^2}$ in the range of $f$:
\begin{align*}
\mathbb{E}{\left[ Y \mid f(X)_1=\alpha\right]} = \alpha, \: \mathbb{E}{\left[ Y^2 \mid f(X)_2=\beta\right]} = \beta.
\end{align*}
In the multiple environments setting, $f({\mathbf x})$ is calibrated on $E_{\text{train}}$ if for all $e_i\in{E_{\text{train}}}$ and $(\alpha, \beta)$ in the range of $f$ restricted to $e_i$:
\begin{align} \label{eq:calibrated_regressor_multidomain}
\mathbb{E}{\left[ Y \mid f(X)=(\alpha, \beta), E=e_i\right]} = \alpha, \: \mathbb{E}{\left[ Y^2 \mid f(X)=(\alpha, \beta), E=e_i\right]} = \beta.
\end{align} \end{definition}
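By the law of total expectation, a regressor calibrated in the sense of \eqref{eq:calibrated_regressor_multidomain} must in particular match the unconditional first two moments of $Y$ within each environment. A Monte Carlo sanity check for the Bayes-optimal linear predictor under Gaussian data (a sketch; the weights, noise scale, and sample size are hypothetical):

```python
import random

random.seed(0)
w, sigma = [1.0, -1.0], 0.5            # true linear weights and noise scale (hypothetical)
n = 20000
f1_vals, f2_vals, y_vals = [], [], []
for _ in range(n):
    x = [random.gauss(0, 1), random.gauss(0, 1)]
    m = w[0] * x[0] + w[1] * x[1]      # conditional mean E[Y | X]
    f1_vals.append(m)                  # predicted first moment
    f2_vals.append(m * m + sigma ** 2) # predicted second moment
    y_vals.append(m + random.gauss(0, sigma))

mean = lambda v: sum(v) / len(v)
# calibration implies the unconditional moments of Y match the predicted ones
assert abs(mean(y_vals) - mean(f1_vals)) < 0.05
assert abs(mean([y ** 2 for y in y_vals]) - mean(f2_vals)) < 0.2
```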
\subsection{Details about ECE, MMCE and Post-Processing Methods} To evaluate calibration and optimize our models towards multi-domain calibration, we use the Expected Calibration Error (ECE) and the Maximum Mean Calibration Error (MMCE) \cite{kumar2018trainable}.
The ECE is a scalar summary of the calibration plot, used throughout the literature to assess how well calibrated a given classifier is. \textbf{Calibration plots} \cite{degroot1983comparison} are a visual representation of model calibration in the case of binary labels. Each example ${\mathbf x}$ is placed into one of $B$ bins partitioning the $[0,1]$ interval, according to where the output, or \emph{confidence}, of the classifier $f({\mathbf x})$ falls. For each bin $b$, the accuracy of $f$ on the bin's examples $acc(b)$ is calculated along with the average confidence $conf(b)$. These are plotted against each other to form a curve, where deviations from the diagonal represent miscalibration.\\ \textbf{The ECE score} summarizes the calibration curve by averaging the deviation between accuracy and confidence: \begin{equation} \label{eq:ece}
ECE = \sum^B_{b=1} \frac{n_b}{N} |acc(b) - conf(b)|. \end{equation} $n_b$ is the number of examples in bin $b$, $N$ is the total number of examples. In all of our experiments we used $B=10$ bins of equal size.
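A direct implementation of \eqref{eq:ece} for binary scores, where the per-bin ``accuracy'' is the empirical frequency of positives, takes only a few lines (a minimal sketch, not the implementation we used; variable names are ours):

```python
def ece(scores, labels, n_bins=10):
    """Expected Calibration Error with equal-width bins on [0, 1]."""
    bins = [[] for _ in range(n_bins)]
    for s, y in zip(scores, labels):
        idx = min(int(s * n_bins), n_bins - 1)  # clamp s == 1.0 into the last bin
        bins[idx].append((s, y))
    total = len(scores)
    err = 0.0
    for b in bins:
        if not b:
            continue
        conf = sum(s for s, _ in b) / len(b)    # average confidence in the bin
        acc = sum(y for _, y in b) / len(b)     # empirical positive rate
        err += (len(b) / total) * abs(acc - conf)
    return err

# one occupied bin, all labels 0, average confidence 0.05 -> ECE = 0.05
assert abs(ece([0.05] * 10, [0] * 10) - 0.05) < 1e-9
```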
To handle the miscalibration that is often observed in models such as neural networks \cite{guo2017calibration}, the MMCE was proposed in \cite{kumar2018trainable} as a method to improve calibration at training time. Recall the definition of this loss: we consider a dataset $D = \{{\mathbf x}_i, y_i\}_{i=1}^{m}$ and a binary classifier parameterized by a vector $\theta$, which we denote $f_\theta:\mathcal{X}\rightarrow [0, 1]$. The confidence of $f_\theta$ on the $i$-th example is $f_{\theta;i}=\max\{f_\theta({\mathbf x}_i), 1-f_\theta({\mathbf x}_i)\}$
and its correctness is $c_i=\mathbbm{1}_{| y_i-f_{\theta}({\mathbf x}_i) | < \frac{1}{2}}$, i.e., whether the predicted class matches the label. Then we fix a kernel $k:\mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$, associated with a feature map $\phi:[0,1]\rightarrow \mathcal{H}$, and the MMCE over the dataset $D$ is given by: \begin{align} \label{eq:mmce_def}
r^{D}_{\text{MMCE}}(f_\theta) = \frac{1}{m^2}\sum_{i,j\in{D}}{(c_i-f_{\theta;i})(c_j-f_{\theta;j})k(f_{\theta;i},f_{\theta;j})}. \end{align} In our experiments we use an RBF kernel $k(r, r') = \exp(-\gamma(r-r')^2)$ with $\gamma=2.5$. \eqref{eq:mmce_def} is the finite sample approximation of the following: \begin{align} \label{eq:mmce_population}
MMCE(f_\theta ; P[X, Y]) = \| \mathbb{E}_{({\mathbf x}, y)\sim P}[(c-f_{\theta}({\mathbf x}))\phi(f_{\theta}({\mathbf x}))] \|_{\mathcal{H}}. \end{align} Here $c$ is the correctness of $f_{\theta}$ on $({\mathbf x}, y)$ as defined for \eqref{eq:mmce_def}. Attractive properties of the MMCE include it being a proper scoring rule: \begin{theorem*}[Adapted from Thm.~1 in \cite{kumar2018trainable}] Let $P[X, Y]$ be a probability measure on the space $(\mathcal{X} \times \{0,1\})$ such that, for the pushforward measure $P[r, c] = f_\theta \sharp P$ over $([0,1] \times \{0,1\})$,\footnote{We note the abuse of notation here: $f_\theta\sharp P$ denotes the measure obtained by applying $f_\theta$ to $X$ to obtain $r$, where $c$ is obtained by calculating its correctness w.r.t $Y$.} the conditionals $P(r \mid c = 1)$ and $P(r \mid c = 0)$ are Borel probability measures, and let $k$ be a universal kernel. The MMCE in \eqref{eq:mmce_population} is $0$ if and only if $f_\theta$ is calibrated w.r.t $P$. \end{theorem*} \corref{corr:proper_score} in the paper follows by considering $\sum_{e\in{E_{\text{train}}}}MMCE(f_\theta ; P[X, Y \mid E=e])$ and applying the theorem to each summand. For more details on the MMCE, its derivation as an integral probability measure analogue of the ECE and its properties, we refer the reader to \cite{kumar2018trainable}.
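The finite-sample MMCE of \eqref{eq:mmce_def} with the RBF kernel we use ($\gamma=2.5$) can be computed directly from the double sum (a sketch of the quantity, not the authors' training code; the example inputs are hypothetical):

```python
import math

def mmce(probs, labels, gamma=2.5):
    """Finite-sample MMCE (eq. mmce_def) with an RBF kernel on confidences."""
    conf = [max(p, 1 - p) for p in probs]                  # f_{theta;i}
    corr = [1.0 if abs(y - p) < 0.5 else 0.0               # c_i: prediction correct?
            for y, p in zip(labels, probs)]
    m = len(probs)
    total = 0.0
    for i in range(m):
        for j in range(m):
            k = math.exp(-gamma * (conf[i] - conf[j]) ** 2)
            total += (corr[i] - conf[i]) * (corr[j] - conf[j]) * k
    return total / m ** 2

val = mmce([0.9, 0.2], [1, 0])  # both predictions correct, residuals 0.1 and 0.2
```

The double loop is $O(m^2)$; in practice the sum is vectorized over a minibatch.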
Another popular metric for calibration in binary classification problems is the Brier score, which is simply the squared error between the predicted probability and the outcome \cite{brier1950verification}: \begin{align*}
BS(f) = \frac{1}{m}\sum_{i=1}^{m}{(f({\mathbf x}_i) - y_i)^2}. \end{align*} The Isotonic Regression \cite{niculescu2005predicting} post-processing methods that we use in the paper minimize the Brier score using a monotonic post-processing function. Hence we consider a classifier $f$ and a dataset $\{{\mathbf x}_i, y_i\}_{i=1}^{m}$. Denote the prediction of $f$ on ${\mathbf x}_i$ by $f_{i}$, then isotonic regression solves: \begin{align*}
\min_{z: f_{i} \leq f_{j} \Rightarrow z(f_{i}) \leq z(f_{j})}{\frac{1}{m}\sum_{i=1}^{m}{(z(f_{i}) - y_i)^2}}. \end{align*}
A motivation for using this as a post-processing calibration method is the decomposition of the Brier score to a refinement and calibration score. We may denote the set of prediction values that are obtained by $f$ across the dataset by $F = \{ f_i \mid i\in{[m]}\}$. For each such value $\tilde{f}\in{F}$ then denote $N_{\tilde{f}}=|\{i \mid f_i=\tilde{f}\}|$ as the number of points for which we obtain this prediction and $y_{\tilde{f}} = \frac{1}{N_{\tilde{f}}}\sum_{i:f_i=\tilde{f}}{y_i}$ the average outcome over them: \begin{align*}
BS(f) = CAL(f) + REF(f) = \frac{1}{m}\sum_{\tilde{f}\in{F}}{N_{\tilde{f}}(\tilde{f}-y_{\tilde{f}})^2} + \frac{1}{m}\sum_{\tilde{f}\in{F}}{N_{\tilde{f}}(y_{\tilde{f}}(1-y_{\tilde{f}}))}. \end{align*} The calibration score measures how far the average prediction value is from the average outcome, while refinement gives a measure of sharpness (i.e., it raises the score of uncertain predictions). Due to the monotonicity constraint of isotonic regression, it is usually thought of as not changing $REF(f)$ too much, which means it minimizes the Brier score mainly by reducing $CAL(f)$. In the multi-domain cases we are interested in, note that this vanilla isotonic regression does not take domains into account. In our experiments we use it simply by pooling the data from all environments and performing post-processing calibration on the pooled dataset. This procedure could output a classifier that is perfectly calibrated on the entire dataset, but not on single environments.
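The isotonic regression problem above is solved by the classic pool-adjacent-violators (PAV) algorithm: sort examples by score and merge adjacent blocks whose means violate monotonicity. A compact sketch (our illustration, not the implementation used in the experiments):

```python
def isotonic_brier_fit(scores, labels):
    """Pool-adjacent-violators: monotone fit minimizing the Brier score."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    blocks, members = [], []            # blocks of pooled examples: [label_sum, count]
    for i in order:
        blocks.append([float(labels[i]), 1.0])
        members.append([i])
        # merge while the previous block mean exceeds the current one
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            m = members.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
            members[-1] += m
    fitted = [0.0] * len(scores)
    for (s, c), m in zip(blocks, members):
        for i in m:
            fitted[i] = s / c           # each example gets its block mean
    return fitted

# the violating pair (0.2 -> 1, 0.3 -> 0) is pooled to 0.5
assert isotonic_brier_fit([0.1, 0.2, 0.3, 0.4], [0, 1, 0, 1]) == [0.0, 0.5, 0.5, 1.0]
```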
To give a simple variant that does post-processing while taking environments into account, we proposed a Robust Isotonic Regression method. The method minimizes the Brier score on the worst-case environment, thus aiming to bound the worst miscalibration on each environment. While in practice it will usually not provide perfect calibration on each environment, the method trades off the error between environments so it is better geared towards simultaneous calibration of the classifier on all domains. Formally we solve: \begin{align}
z^* = \argmin_{z:f_i \leq f_j \Rightarrow z(f_i)\leq z(f_j)}\max_{e\in{E_{\text{train}}}}{\frac{1}{N_e}\sum_{i=1}^{N_e}{\left(z(f_{e,i})-y_i\right)^2}}. \end{align} Here $N_e$ is the number of data points in environment $e\in{E_{\text{train}}}$ and $f_{e,i}$ is the output of $f$ on point $i$ in that environment.
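The robust objective is easy to evaluate for any candidate monotone map $z$. A toy example (the environments are hypothetical) showing that minimizing pooled error and minimizing worst-case error can prefer different maps:

```python
def worst_case_brier(z, envs):
    """Max over environments of the mean squared error of z applied to the scores."""
    worst = 0.0
    for scores, labels in envs:
        mse = sum((z(s) - y) ** 2 for s, y in zip(scores, labels)) / len(labels)
        worst = max(worst, mse)
    return worst

# two environments that disagree about the meaning of the score
env_a = ([0.2, 0.8], [0, 1])
env_b = ([0.2, 0.8], [1, 0])
identity = lambda s: s
constant = lambda s: 0.5
# the identity map is near-perfect on env_a but terrible on env_b;
# the constant map hedges, giving a smaller worst-case error
assert worst_case_brier(identity, [env_a, env_b]) > worst_case_brier(constant, [env_a, env_b])
```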
\subsection{Causal Graphical Models} \label{sec:scm}
In order to answer queries about unseen distributions based on data from different, observed distributions, one must make certain assumptions about the data generating processes and the relationships between the observed and unobserved distributions. One way of articulating such models of the world is by using causal graphs. In a causal graph, edges from a variable $X$ to a variable $Y$ mean that changing the value of $X$ \emph{may} change the distribution of $Y$. Causal graphs entail all statistical dependencies between variables, and we can read off such independence statements using the d-separation criterion \cite{pearl1994probabilistic}. We refer to background material to discuss how to identify and estimate causal effects with these causal graphical models in hand \cite{pearl2009causality}.
In the main paper, \figref{scm} illustrates our assumed causal graph for a general problem of distribution shift, and \figref{theoretical_cases} illustrates the assumed causal graph for causal and anti-causal simplified examples described in equations \ref{eq:setting_a} and \ref{eq:setting_b}, respectively. For instance according to d-separation, in distributions described by \figref{scm} it holds that $Y\indep E \mid \Xc, \Xacn$ and that in general $Y\nindep E \mid \Xac$. Furthermore, if we introduce a node $\Phi(\mathcal{X})$ whose parents do not include $\Xac$, then $Y \indep E \mid \Phi(X)$ (and conversely, if $\Xac$ is a parent then the independence does not hold in general), which motivates the definition of a representation that has no spurious correlations.
Equipped with the definitions and background given in the previous sections, we now turn to the proofs of the theorems in the paper.
\subsection{Classification with Invariant Features} \label{sec:proof1} We first consider the classification task from the main paper, where the data generating process is described in \figref{fig:scenario_a}. Recall that we are considering linear classifiers of the form $f({\mathbf x}; {\mathbf w}, b)=\sigma({\mathbf w}^\top{\mathbf x} + b)$. Our environments here are defined by the parameters of the multivariate Gaussian distributions that generate the spurious features $\{\mu_i, \Sigma_i\}_{i=1}^{k}$. As a first step we will derive the algebraic form of the constraints that calibration imposes on ${\mathbf w}$ and the parameters defining the environments. For convenience, we modify the notation from the main paper and consider a binary label where $\mathcal{Y} = \{-1, 1\}$ instead of $\mathcal{Y} = \{0, 1\}$. \begin{figure}
\caption{Diagram for data generating process in the invariant features scenario.}
\label{fig:scenario_a}
\end{figure} \begin{lemma} \label{lem:calibration_conditions_spurious_setting} Assume we have $k$ environments with means and covariance matrices for environmental features $\mu_i\in{\mathbb{R}^{d_e}}, \Sigma_{i}\in{\mathbb{S}_{++} ^{d_e}}, i\in{[k]}$ and a common mean $\mu_{\text{ns}}\in{\mathbb{R}^{d_{\text{ns}}}}$ and covariance matrix $\Sigma_{\text{ns}}\in{\mathbb{S}^{d_{\text{ns}}}_{++}}$ for invariant features, where data is generated according to: \begin{align*}
\begin{split}
y = \begin{cases}
1 & \text{w.p } \eta \\
-1 & \text{otherwise}
\end{cases}
\end{split},
\begin{split}
{\mathbf x}_\text{ns} \mid Y=y \sim\mathcal{N}(y\mu_\text{ns}, \Sigma_{\text{ns}}), \\
{\mathbf x}_{\text{sp}} \mid Y=y \sim\mathcal{N}(y\mu_i, \Sigma_i),
\end{split} \end{align*} and ${\mathbf x}_{\text{ns}}, {\mathbf x}_{\text{sp}}$ are drawn independently. Let $\sigma:\mathbb{R}\rightarrow (0, 1)$ be an invertible function and define the classifier: \begin{align*}
f({\mathbf x} ; {\mathbf w},b) = \sigma( {\mathbf w}^\top {\mathbf x} - b). \end{align*} Decompose the weights ${\mathbf w} = [ {\mathbf w}_{\text{ns}}, {\mathbf w}_{\text{sp}} ]$ to the coefficients of the invariant and spurious features accordingly. Then if the classifier is calibrated on all environments, it holds that either ${\mathbf w}=\mathbf{0}$ or there exists $t \neq 0$ such that: \begin{align} \label{eq:desideratum_lem1}
\frac{ {\mathbf w}^\top_{\text{ns}}\mu_{\text{ns}} + {\mathbf w}^\top_{sp}\mu_i }{{\mathbf w}_{\text{ns}}^\top\Sigma_{\text{ns}}{\mathbf w}_{\text{ns}} + {\mathbf w}_{sp}^\top\Sigma_i{\mathbf w}_{sp}} = t \quad \forall i\in{[k]}. \end{align} \end{lemma} \begin{proof} Let $i\in{[k]}$, the joint distribution of features in the environment is Gaussian with mean $\hat{\mu}_i=[\mu_{\text{ns}}, \mu_i]$, covariance $ \hat{\Sigma}_i = \begin{bmatrix} \Sigma_{\text{ns}} & 0 \\ 0 & \Sigma_i \end{bmatrix}$. Hence the output of the affine function corresponding to the classifier is a random variable with probability density function: \begin{align*}
P[\sigma^{-1}(f(X))=\alpha \mid Y=y, E = e_i] = (2\pi{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w})^{-\frac{1}{2}}\exp\left(-\frac{\left(\alpha-y {\mathbf w}^\top \hat{\mu}_i + b\right)^2}{2{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}}\right). \end{align*} Hence the conditional probability of $Y$ is given by: \begin{align*}
P[Y=1 \mid \sigma^{-1}(f(X))=\alpha, E=e_i] = \frac{\eta\exp\left(-\frac{\left(\alpha-{\mathbf w}^\top \hat{\mu}_i + b\right)^2}{2{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}}\right)}{\eta\exp\left(-\frac{\left(\alpha-{\mathbf w}^\top \hat{\mu}_i + b\right)^2}{2{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}}\right) + (1-\eta)\exp\left(-\frac{\left(\alpha+{\mathbf w}^\top \hat{\mu}_i + b\right)^2}{2{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}}\right)}. \end{align*} Note that unless ${\mathbf w}=\mathbf{0}$ (which results in a constant, hence calibrated, classifier), the variance of $\sigma^{-1}(f(X))$ is strictly positive since $\hat{\Sigma}_i\succ 0$, so the above conditional probabilities are well-defined. Now it is easy to see that if the classifier is calibrated across environments, we need to have equality in the log-odds ratio for each $i,j$ and all $\alpha\in{\mathbb{R}}$: \begin{align*}
\frac{\left(\alpha+{\mathbf w}^\top \hat{\mu}_i + b\right)^2 - \left(\alpha-{\mathbf w}^\top \hat{\mu}_i + b\right)^2}{2{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}} = \frac{\left(\alpha+{\mathbf w}^\top \hat{\mu}_j + b\right)^2 - \left(\alpha-{\mathbf w}^\top \hat{\mu}_j + b\right)^2}{2{\mathbf w}^\top\hat{\Sigma}_j{\mathbf w}} \quad \forall\alpha\in{\mathbb{R}}. \end{align*} After dropping all the terms that cancel out in the subtractions we arrive at: \begin{align*}
\frac{{\mathbf w}^\top \hat{\mu}_i}{{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}} = \frac{{\mathbf w}^\top\hat{\mu}_j}{{\mathbf w}^\top\hat{\Sigma}_j{\mathbf w}}. \end{align*} This may also be written as a system of equations with an additional scalar variable $t\in{\mathbb{R}}$: \begin{align*}
\frac{{\mathbf w}^\top \hat{\mu}_i}{{\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}} = t \quad \forall i\in{[k]}. \end{align*} Now because we assumed $\Sigma_i \succ 0$ for all environments, for any solution to the above system with $t=0$, we must have: \begin{align*}
{\mathbf w}^\top\hat{\mu}_i = 0 \quad \forall i\in{[k]}. \end{align*} Furthermore we will have for any $\alpha\in{\mathbb{R}}$: \begin{align*}
P[Y = 1 \mid \sigma^{-1}(f(X)) = \alpha, E=e_i] = \eta. \end{align*} Since we assume $f$ is calibrated, this conditional probability must track the value of $f$; as it is constant in $\alpha$, this is only possible if $f({\mathbf x}; {\mathbf w}, b)$ is a constant function. Again, because $\Sigma_i \succ 0$, this is only possible if ${\mathbf w}=\mathbf{0}$. Hence we conclude with our desired result, as can be seen by decomposing ${\mathbf w}$ to the parts corresponding to invariant and spurious features. \end{proof} We now give a result for the special case where the covariance matrices of the spurious features satisfy $\Sigma_i = \sigma^2_i\mathbf{I}$, considered in \cite{rosenfeld2020risks}. We will see that calibration demands one more environment than IRM to discard all spurious features. This matches the intuition that each environment reduces a degree of freedom from the set of invariant classifiers, while risk minimization reduces one more degree of freedom. \begin{lemma} \label{thm:simple_case} Assume we have $k \geq d_{\text{sp}}+2$ environments and define $M\left(\{\mu_i, \sigma_i\}_{i=1}^{k}\right)\in{\mathbb{R}^{k\times (d_e+2)}}$: \begin{align*}
M(\{\mu_i, \sigma_i\}_{i=1}^{k}) = \begin{bmatrix}
\mu^\top_1 & \sigma_1^2 & 1 \\
& \vdots & \\
\mu^\top_k & \sigma_k^2 & 1
\end{bmatrix}. \end{align*}
If the matrix has full rank, then for any invariant predictor the linear coefficients on spurious features are zero. \end{lemma} \begin{proof} According to \lemref{lem:calibration_conditions_spurious_setting}, writing down the conditional probability $P[Y \mid \sigma^{-1}(f({\mathbf x})), E = e ]$ and demanding calibration results in the constraint that either ${\mathbf w}=\mathbf{0}$, and then the linear coefficients on spurious features are indeed $0$; or that for some $t\neq 0$: \begin{align*}
\frac{ {\mathbf w}^\top_{\text{ns}}\mu_{\text{ns}} + {\mathbf w}^\top_{\text{sp}}\mu_i }{{\mathbf w}_{\text{ns}}^\top\Sigma_{\text{ns}}{\mathbf w}_{\text{ns}} + \sigma^2_i\|{\mathbf w}_{\text{sp}}\|^2_2} = t \quad \forall i\in{[k]}. \end{align*} Without loss of generality we can phrase these constraints as: \begin{align*}
\frac{ {\mathbf w}^\top_{\text{ns}}\mu_{\text{ns}} + {\mathbf w}^\top_{\text{sp}}\mu_i }{{\mathbf w}_{\text{ns}}^\top\Sigma_{\text{ns}}{\mathbf w}_{\text{ns}} + \sigma^2_i\|{\mathbf w}_{\text{sp}}\|^2_2} = 1 \quad \forall i\in{[k]}. \end{align*} This is true since if ${\mathbf w}$ is a solution to this system of equations where the right hand side is some $t\in{\mathbb{R}}$ then $t{\mathbf w}$ is a solution to the system where $t$ is replaced by $1$. Rewrite the constraints again to isolate the parts depending on ${\mathbf w}_{\text{sp}}$: \begin{align*}
\sigma_i^2\|{\mathbf w}_{\text{sp}}\|_2^2 - \mu^\top_i {\mathbf w}_{\text{sp}} = {\mathbf w}_{\text{ns}}^{\top}\Sigma_{\text{ns}}{\mathbf w}_{\text{ns}} - {\mathbf w}_{\text{ns}}^\top\mu_{\text{ns}} \quad \forall i\in{[k]}. \end{align*} To find whether this system has a solution where ${\mathbf w}_{\text{sp}}$ is non-zero we can replace the right hand side with a scalar variable $t\in{\mathbb{R}}$, and ask whether the following system has a non-zero solution: \begin{align*}
\sigma_i^2\|{\mathbf w}_{\text{sp}}\|_2^2 - \mu^\top_i {\mathbf w}_{\text{sp}} = t \quad \forall i\in{[k]}. \end{align*} For the above equations to have a non-zero solution, the following linear system must also have such a solution: \begin{align*}
M(\{\mu_i, \sigma_i\}_{i=1}^{k}){\mathbf x} = \mathbf{0}. \end{align*} But from our non-degeneracy condition, such a solution does not exist. \end{proof}
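A quick numerical check of the calibration condition from \lemref{lem:calibration_conditions_spurious_setting}: writing $m_i={\mathbf w}^\top\hat{\mu}_i$ and $v_i={\mathbf w}^\top\hat{\Sigma}_i{\mathbf w}$, the conditional $P[Y=1\mid \sigma^{-1}(f(X))=\alpha, E=e_i]$ depends on the environment only through the ratio $m_i/v_i$, so environments with equal ratios yield identical conditionals (a sketch; the scalar parameter values are hypothetical):

```python
import math

def p_y1(alpha, m, v, b=0.3, eta=0.4):
    """Closed-form P[Y = 1 | score = alpha] for one Gaussian-mixture environment."""
    num = eta * math.exp(-((alpha - m + b) ** 2) / (2 * v))
    den = num + (1 - eta) * math.exp(-((alpha + m + b) ** 2) / (2 * v))
    return num / den

# two environments with the same ratio m/v = 0.5 but different scales
m1, v1 = 1.0, 2.0
m2, v2 = 2.5, 5.0
for alpha in [-2.0, -0.5, 0.0, 1.3, 4.0]:
    assert abs(p_y1(alpha, m1, v1) - p_y1(alpha, m2, v2)) < 1e-12
```

This matches the algebra in the proof: the log-odds equal $\log\frac{\eta}{1-\eta} + 2(\alpha+b)\,m_i/v_i$, a function of $m_i/v_i$ alone.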
Next we generalize the above to prove the result from the main paper, namely when the matrices $\{\Sigma_i\}_{i=1}^{k}$ are not diagonal. For this purpose we introduce a definition of general position for environments, similar to the one given in \cite{arjovsky2019invariant}. \begin{definition} Given $k > 2d_{\text{sp}}$ environments with parameters $\{\Sigma_i, \mu_i\}_{i=1}^{k}$, we say they are in general position if for all non-zero ${\mathbf x}\in{\mathbb{R}^{d_{\text{sp}}}}$: \begin{align*}
\mathrm{dim}\left(\mathrm{span}\left\{\begin{bmatrix} \Sigma_i{\mathbf x} + \mu_i \\
1 \end{bmatrix}\right\}_{i\in{[k]}}\right) = d_e+1. \end{align*} \end{definition} Equipped with this notion of general position, we now need to show that if it holds then the only predictors that satisfy the conditions of \lemref{lem:calibration_conditions_spurious_setting} are those with ${\mathbf w}_{\text{sp}}=\mathbf{0}$. Another claim we will need to prove is that the subset of environments which do not lie in general position has measure zero in the set of all possible environment settings. Hence generic environments are expected to lie in general position. This argument will follow the lines of the one given in \cite{arjovsky2019invariant}, adapted to our case with the fixed coordinate $1$ added in the above definition. \begin{theorem} Under the setting of \lemref{lem:calibration_conditions_spurious_setting}, if the environments lie in general position then all classifiers that are calibrated across environments satisfy ${\mathbf w}_{\text{sp}}=\mathbf{0}$. \end{theorem} \begin{proof} According to \lemref{lem:calibration_conditions_spurious_setting}, if the predictor is calibrated then \eqref{eq:desideratum_lem1} must hold. Following the same arguments laid out in the proof in the main paper, we get that ${\mathbf w}_{sp}$ needs to be a solution of the following system of equations: \begin{align} \label{eq:ellipsoid_system}
{\mathbf w}_{sp}^\top\Sigma_i{\mathbf w}_{sp} - \mu_i^\top{\mathbf w}_{sp} - t = 0 \quad \forall i\in{[k]}. \end{align} Now, let ${\mathbf w}_{sp}\in{\mathbb{R}^{d_{\text{sp}}}}$ be a non-zero vector and let us define the $k\times (d_{\text{sp}}+1)$ matrix: \begin{align*}
M(\{\mu_i, \Sigma_i\}_{i=1}^{k}, {\mathbf w}_{sp}) = \begin{bmatrix}
{\mathbf w}_{sp}^\top\Sigma_1 - \mu^\top_1 & 1 \\
\vdots \\
{\mathbf w}_{sp}^\top\Sigma_k - \mu^\top_k & 1
\end{bmatrix}. \end{align*} If the environments are in general position, the above matrix has full column rank for any non-zero ${\mathbf w}_{sp}$. Similarly to the proof of \thmref{thm:simple_case}, if \eqref{eq:ellipsoid_system} has a non-zero solution then the following system must also have a solution: \begin{align*}
M(\{\mu_i, \Sigma_i\}_{i=1}^{k}, {\mathbf w}_{sp}) {\mathbf x} = \mathbf{0}. \end{align*} This is impossible, since $M(\{\mu_i, \Sigma_i\}_{i=1}^{k}, {\mathbf w}_{sp})$ has full column rank. \end{proof}
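As a numerical sanity check of the argument above (with illustrative dimensions and randomly drawn parameters; this is not part of the proof), one can verify that the matrix with rows $[{\mathbf w}_{sp}^\top\Sigma_i - \mu_i^\top,\ 1]$ generically has full column rank:

```python
import numpy as np

# Sanity check: for random SPD covariances and random means, the k x (d_sp + 1)
# matrix M with rows [w_sp^T Sigma_i - mu_i^T, 1] has full column rank, so
# M x = 0 admits no non-zero solution.
rng = np.random.default_rng(0)
d_sp, k = 2, 5                                  # illustrative; requires k > 2 * d_sp

Sigmas, mus = [], []
for _ in range(k):
    A = rng.standard_normal((d_sp, d_sp))
    Sigmas.append(A @ A.T + np.eye(d_sp))       # symmetric positive definite
    mus.append(rng.standard_normal(d_sp))

w_sp = rng.standard_normal(d_sp)                # arbitrary non-zero vector
M = np.stack([np.append(w_sp @ S - m, 1.0) for S, m in zip(Sigmas, mus)])
print(M.shape, np.linalg.matrix_rank(M))
```

Full column rank here means rank $d_{\text{sp}}+1$, which is what the general-position condition demands.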
We conclude with the statement about the measure of the set of environments which do not lie in general position; the argument follows the lines of \cite{arjovsky2019invariant}. \begin{lemma}
Let $k > 2d_{\text{sp}}$ and $\{\mu_{i}\}_{i=1}^{k}$ be arbitrary fixed vectors, then the set of matrices $\{\Sigma_i\}_{i=1}^{k}\in (\mathbb{S}^{d_{\text{sp}}}_{++})^{k}$ for which $\{ \Sigma_i, \mu_i \}_{i=1}^{k}$ do not lie in general position has measure zero within the set $(\mathbb{S}^{d_{\text{sp}}}_{++})^{k}$. \end{lemma}
\begin{proof} We assume $k > 2d_{\text{sp}}$ and denote by $LR(k, d_{\text{sp}}, r)$ the set of $k\times d_{\text{sp}}$ matrices of rank $r$. Also, for any $d$ denote by $\mathbf{1}_d$ the vector in $\mathbb{R}^d$ whose entries all equal $1$. Define ${\mathbf{M}}^{1}_{*}(k, d_{\text{sp}})$ as the set of $k\times d_{\text{sp}}$ matrices of full column rank whose column span contains the all-ones vector $\mathbf{1}_{k}$: \begin{align*}
{\mathbf{M}}^1_{*}(k, d_{\text{sp}}) = \{A\in{LR(k, d_{\text{sp}}, d_{\text{sp}})} \mid \mathbf{1}_{k} \in{\mathrm{colsp}(A)} \}. \end{align*} Let $\{ \Sigma_i \}_{i=1}^{k}\in{(\mathbb{S}^{d_{\text{sp}}}_{++})^{k}}$ and define ${\mathbf{W}}\subseteq \mathbb{R}^{k \times d_{sp}}$ as the image of the mapping $G:\mathbb{R}^{d_{sp}}\setminus{\{0\}}\rightarrow \mathbb{R}^{k \times d_{sp}}$:
\begin{align*}
(G({\mathbf x}))_{i, l} = \left( \Sigma_i{\mathbf x} - \mu_i \right)_l
\end{align*} By the definition of general position given in the paper, the environments defined by $\{\Sigma_{i}, \mu_i\}_{i=1}^{k}$ lie in general position if ${\mathbf{W}}$ intersects neither $LR(k, d_{\text{sp}}, r)$ for any $r<d_{\text{sp}}$, nor ${\mathbf{M}}^1_{*}(k, d_{\text{sp}})$. We would like to show that this happens for all but a measure-zero subset of $\left(\mathbb{S}^{d_{sp}}_{++}\right)^k$.
By the same arguments as in Theorem 10 of \cite{arjovsky2019invariant}, ${\mathbf{W}}$ is transversal to any submanifold of $\mathbb{R}^{k\times d_{\text{sp}}}$ and does not intersect $LR(k, d_{\text{sp}}, r)$ for $r<d_{\text{sp}}$, for all $\{\Sigma_i \}_{i=1}^{k}$ outside a measure-zero subset of $\left(\mathbb{S}^{d_{sp}}_{++}\right)^k$.
It remains to show that ${\mathbf{W}}$ also does not intersect ${\mathbf{M}}^1_{*}(k, d_{\text{sp}})$ for all but a measure-zero subset of $\left(\mathbb{S}^{d_{sp}}_{++}\right)^k$. Because ${\mathbf{M}}^1_{*}(k, d_{\text{sp}})$ is a submanifold of $\mathbb{R}^{k\times d_{\text{sp}}}$, it intersects ${\mathbf{W}}$ transversally for generic $\{\Sigma_i \}_{i=1}^{k}$. By transversality, the two sets cannot intersect if $\text{dim}({\mathbf{W}}) + \text{dim}({\mathbf{M}}^1_{*}(k, d_{\text{sp}})) - \text{dim}(\mathbb{R}^{k\times d_{\text{sp}}}) < 0$. We claim that $\text{dim}({\mathbf{M}}^1_{*}(k, d_{\text{sp}})) = k(d_{\text{sp}} - 1) + d_{\text{sp}}$; since $\text{dim}({\mathbf{W}}) \leq d_{\text{sp}}$ and $k>2d_{\text{sp}}$, we then obtain: \begin{align*} \text{dim}({\mathbf{W}}) + \text{dim}({\mathbf{M}}^1_{*}(k, d_{\text{sp}})) - \text{dim}(\mathbb{R}^{k \times d_{\text{sp}}}) & \leq d_{\text{sp}} + k(d_{\text{sp}}-1) + d_{\text{sp}} - kd_{\text{sp}} \\ & = 2d_{\text{sp}} - k \\ &< 0. \end{align*} The negativity of the dimension implies that if ${\mathbf{W}}$ and ${\mathbf{M}}^1_{*}(k, d_{\text{sp}})$ are transversal then they do not intersect, and we may conclude our desired result that the environments lie in general position for all but a measure-zero subset of $\left(\mathbb{S}^{d_{\text{sp}}}_{++}\right)^k$.
To show that $\text{dim}({\mathbf{M}}^1_{*}(k, d_{\text{sp}})) = k(d_{\text{sp}} - 1) + d_{\text{sp}}$, consider a matrix $A\in{{\mathbf{M}}^1_{*}(k, d_{\text{sp}})}$. Since it has full column rank, it has an invertible $d_{\text{sp}}\times d_{\text{sp}}$ submatrix. Assume this submatrix consists of the first $d_{\text{sp}}$ rows of $A$; otherwise there is a linear isomorphism that transforms it into such a matrix and the arguments that follow still apply (see \cite{lee2013smooth}, Example 5.30; our proof follows a similar line of reasoning). Now write $A$ as a block matrix using $B\in{\mathbb{R}^{d_{\text{sp}}\times d_{\text{sp}}}}, C\in{\mathbb{R}^{(k-d_{\text{sp}})\times d_{\text{sp}}}}$: \begin{align*}
A = \begin{bmatrix}
B \\
C
\end{bmatrix}. \end{align*} Denoting by $\mathbf{U}$ the set of $k\times d_{\text{sp}}$ matrices whose first $d_{\text{sp}}$ rows form an invertible matrix, we consider the mapping $F:\mathbf{U} \rightarrow \mathbb{R}^{k-d_{\text{sp}}}$: \begin{align*}
F(A) = \mathbf{1}_{k-d_{\text{sp}}} - CB^{-1}\mathbf{1}_{d_{\text{sp}}}. \end{align*} Clearly $F^{-1}(\mathbf{0}) = {\mathbf{M}}^1_{*}(k, d_{\text{sp}})$ and $F$ is smooth. We will show that it is a submersion by observing that its differential $DF(U)$ is surjective for each $U\in{\mathbf{U}}$. To this end, for a given $U=\begin{bmatrix} B \\ C \end{bmatrix}$ and any $X\in{\mathbb{R}^{(k-d_{\text{sp}})\times d_{\text{sp}}}}$ define a curve $\gamma:(-\epsilon, \epsilon) \rightarrow \mathbf{U}$ by: \begin{align*}
\gamma(t) = \begin{bmatrix}
B \\
C + tX
\end{bmatrix}. \end{align*} We have that: \begin{align*}
(F\circ\gamma)'(0) = \frac{d}{dt}\Big|_{t=0}\left(\mathbf{1}_{k-d_{\text{sp}}} - (C+tX)B^{-1}\mathbf{1}_{d_{\text{sp}}}\right) = -XB^{-1}\mathbf{1}_{d_{\text{sp}}}. \end{align*} Since $B^{-1}\mathbf{1}_{d_{\text{sp}}}$ is not the zero vector and $X$ ranges over all of $\mathbb{R}^{(k-d_{\text{sp}})\times d_{\text{sp}}}$, the mapping $X \mapsto -XB^{-1}\mathbf{1}_{d_{\text{sp}}}$ is surjective onto $\mathbb{R}^{k-d_{\text{sp}}}$. Note that the derivatives along such curves form a subset of the range of $DF(U)$, hence $DF(U)$ is surjective at each point $U\in{\mathbf{U}}$. It follows from the submersion theorem that $\mathrm{dim}({\mathbf{M}}^1_{*}(k, d_{\text{sp}})) = kd_{\text{sp}} - (k-d_{\text{sp}}) = k(d_{\text{sp}} - 1) + d_{\text{sp}}$, as desired. \end{proof}
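The characterization ${\mathbf{M}}^1_{*}(k, d_{\text{sp}}) = F^{-1}(\mathbf{0})$ can be checked numerically on a constructed example (illustrative dimensions; the construction of $A$ below is our own):

```python
import numpy as np

# Build A = [B; C] whose columns span the all-ones vector 1_k, and check that
# F(A) = 1_{k-d} - C B^{-1} 1_d vanishes on it.
rng = np.random.default_rng(1)
k, d = 5, 2
B = rng.standard_normal((d, d)) + 3 * np.eye(d)   # invertible top block

v = np.linalg.solve(B, np.ones(d))                # B v = 1_d
C = np.outer(np.ones(k - d), v) / (v @ v)         # chosen so that C v = 1_{k-d}
A = np.vstack([B, C])                             # hence A v = 1_k

F = np.ones(k - d) - C @ np.linalg.solve(B, np.ones(d))
print(F)   # numerically zero: A lies in F^{-1}(0)
```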
\subsection{Regression Under Covariate Shift and Spurious Features} \label{sec:proof2} We now move on to the second scenario presented in the paper where the mechanism $P(Y \mid X)$ is invariant and the diagram depicting the data generating process is given in \figref{fig:scenario_b}. Here for each environment $i\in{[k]}$ we will have: \begin{align} \label{eq:regression_sem}
&X_{c} \sim \mathcal{N}(\mu^c_i, \Sigma^c_{i}) \\
&Y = {{\mathbf w}^*_c}^\top X_c + \xi, \: \xi\sim\mathcal{N}(0, \sigma^2_y) \nonumber\\
&X_{sp} = Y\mu_i + \eta, \: \eta\sim\mathcal{N}(\mathbf{0}, \Sigma_i). \nonumber \end{align} We consider a regressor $f:\mathcal{X}\rightarrow\mathbb{R}^2$, where the estimate of the mean is linear, i.e. $[f({\mathbf x}; {\mathbf w})]_1 = {\mathbf w}^\top{\mathbf x}$, and the estimate of the variance is constant $[f({\mathbf x}; {\mathbf w})]_2=c$.\footnote{Limiting the variance estimate to a constant does not make a difference for the purpose of our proof. The proof does not rely on the correctness of the variance estimate as imposed by \eqref{eq:calibrated_regressor_multidomain}, but only on the variances being equal across environments when conditioned on $f({\mathbf x})$. In other words it relies on the correctness of the mean estimate, and the distribution of $Y$ conditioned on $f(X)$ being the same across environments.} We decompose the weights ${\mathbf w}$ into their parts corresponding to causal and spurious features $[{\mathbf w}_c, {\mathbf w}_{sp}]$. Then our result regarding calibration and generalization to $\mathcal{E}$ is given below. \begin{figure}
\caption{Diagram for data generating process in the covariate shift scenario.}
\label{fig:scenario_b}
\end{figure} \begin{theorem} \label{thm:causal_regression} Denote the dimensions of $X_c, X_{sp}$ by $d_c, d_{sp}$ respectively. Assume we have $k$ environments with parameters $\{ \mu^c_i, \mu_i, \Sigma^c_i, \Sigma_i \}_{i=1}^{k}$. For any matrix $A$ denote its $i$-th row by $A^i$, and define the matrices $M(\{\mu^c_i,\mu_i\}_{i=1}^{k})\in{\mathbb{R}^{k\times (d_{\text{c}}+d_{\text{sp}}+1)}}$ and $M_2(\{\mu^c_i,\Sigma^c_i\}_{i=1}^{k}, \sigma^2_y, {\mathbf w}_c^*)\in{\mathbb{R}^{k\times (d_c+2)}}$ whose rows are given by: \begin{align*}
M(\{\mu^c_i,\mu_i\}_{i=1}^{k}) &= \begin{bmatrix}
{\mu^c_1}^\top & \left({{\mathbf w}^*_c}^\top\mu^c_1\right)\mu_1^\top & 1 \\
& \vdots & \\
{\mu^c_k}^\top & \left({{\mathbf w}^*_c}^\top\mu^c_k\right)\mu_k^\top & 1
\end{bmatrix}, \\ M_2(\{\mu^c_i,\Sigma^c_i\}_{i=1}^{k}, \sigma^2_y, {\mathbf w}_c^*) &= \begin{bmatrix}
{{\mathbf w}_c^*}^\top\Sigma^c_1 + \left(\frac{ {{\mathbf w}_c^*}^\top\Sigma^c_1{{\mathbf w}_c^*} + \sigma^2_y}{{{\mathbf w}^*_c}^\top\mu^c_1} \right){\mu^c_1}^\top & \frac{{{\mathbf w}_c^*}^\top\Sigma^c_1{{\mathbf w}_c^*}}{{{\mathbf w}^*_c}^\top\mu^c_1} & 1 \\
& \vdots & \\
{{\mathbf w}_c^*}^\top\Sigma^c_k + \left(\frac{ {{\mathbf w}_c^*}^\top\Sigma^c_k{{\mathbf w}_c^*} + \sigma^2_y}{{{\mathbf w}^*_c}^\top\mu^c_k} \right){\mu^c_k}^\top & \frac{{{\mathbf w}_c^*}^\top\Sigma^c_k{{\mathbf w}_c^*}}{{{\mathbf w}^*_c}^\top\mu^c_k} & 1
\end{bmatrix}. \end{align*} Let $f({\mathbf x}; {\mathbf w})$ be a calibrated regressor, assume ${{\mathbf w}_c^{*}}^\top\mu_i^c \neq 0$ for all $i\in{[k]}$ and that there exist $i,j\in{[k]}$ such that $\mathbb{E}[Y \mid E=e_i] \neq \mathbb{E}[Y \mid E=e_j]$. Furthermore assume that one of the following conditions holds: \begin{itemize}
\item $k > \max{\{d_\text{c} + 2, d_{\text{sp}}\}}$, $M_2(\{\mu^c_i,\Sigma^c_i\}_{i=1}^{k}, \sigma^2_y, {\mathbf w}_c^*)$ has full rank and the means of spurious features $\{\mu_i\}_{i=1}^{k}$ span $\mathbb{R}^{d_{sp}}$.
\item $k > d_c + d_{sp} + 1$ and $M(\{\mu^c_i,\mu_i\}_{i=1}^{k})$ has full rank. \end{itemize} Then the weights of $f$ must be ${\mathbf w} = [{\mathbf w}_c^*, \mathbf{0}]$. \end{theorem} Rank-deficiency of $M_2$ would impose highly non-trivial conditions on the relationships between $\mu_i^c$ and ${{\mathbf w}_c^*}^\top\Sigma^c_i$; hence the conditions given above are satisfied for all environment settings except a set of measure zero under any absolutely continuous measure on the parameters ${\mathbf w}_c^*, \{\mu_i^c, \Sigma_i^c\}_{i=1}^{k}$. The proof proceeds by writing the conditional distribution of $Y$ on $f(X)$, and showing that the conditions in the theorem are the direct result of the calibration constraints. \begin{proof} Since $X_c, X_{sp}, Y$ are jointly Gaussian, we can write their distribution at environment $i\in{[k]}$ as: \begin{align*} \begin{bmatrix} X_c \\ X_{sp} \\ Y \end{bmatrix} \sim \mathcal{N}\Bigg(&\begin{bmatrix} \mu^c_i \\ ({{\mathbf w}^*_c}^\top \mu^c_i)\mu_i\\ {{\mathbf w}^*_c}^\top \mu^c_i \end{bmatrix}, \\ &\begin{bmatrix} \Sigma^c_i & \Sigma^c_i{\mathbf w}_c^*\mu_i^\top & \Sigma^c_i{\mathbf w}_c^* \\ \mu_i{{\mathbf w}_c^*}^\top\Sigma^c_i & \left({{\mathbf w}_c^*}^\top\Sigma^c_i{{\mathbf w}_c^*}+\sigma_y^2\right)\mu_i\mu_i^\top + \Sigma_i & \left({{\mathbf w}_c^*}^\top\Sigma^c_i{{\mathbf w}_c^*}+\sigma_y^2\right)\mu_i\\ {{\mathbf w}_c^*}^\top\Sigma^c_i & ({{\mathbf w}_c^*}^\top\Sigma^c_i{{\mathbf w}_c^*} + \sigma_y^2)\mu_i^\top & {{\mathbf w}_c^*}^\top\Sigma^c_i{{\mathbf w}_c^*} + \sigma_y^2 \end{bmatrix}\Bigg). \end{align*} The predictions ${\mathbf w}^\top X$ are then also normally distributed, and jointly with $Y$ this can be written as: \begin{align*} \begin{bmatrix} {\mathbf w}^\top X \\ Y \end{bmatrix} \sim \mathcal{N}\Bigg(&\begin{bmatrix} {\mathbf w}_c^\top\mu^c_i + ({\mathbf w}_{sp}^\top\mu_i)({{\mathbf w}^*_c}^\top\mu^c_i) \\ {{\mathbf w}^*_c}^\top \mu^c_i \end{bmatrix}, \begin{bmatrix}
\sigma^2_{f, i} & \sigma_{f,y,i} \\
\sigma_{f,y,i} & \sigma^2_{y, i} \end{bmatrix} \Bigg), \end{align*} where we defined the items of the covariance matrix: \begin{align*} \sigma_{f,i}^2 &= {\mathbf w}_c^\top\Sigma^c_i{\mathbf w}_c + 2({\mathbf w}^\top_c\Sigma^c_i{\mathbf w}^*_{c})(\mu_i^\top{\mathbf w}_{sp}) + {\mathbf w}_{sp}^\top\left(\mu_i\mu_i^\top({{\mathbf w}_c^*}^\top\Sigma^c_i{\mathbf w}_c^* + \sigma_y^2) + \Sigma_i\right){\mathbf w}_{sp}, \\ \sigma_{f,y,i} &= {{\mathbf w}_c^*}^\top\Sigma^c_i{\mathbf w}_c + ({{\mathbf w}^*_c}^\top\Sigma^c_i{\mathbf w}_c^* + \sigma^2_y)\mu_i^\top{\mathbf w}_{sp}, \\ \sigma_{y, i}^2 &= {{\mathbf w}_c^*}^\top\Sigma^c_i{{\mathbf w}_c^*} + \sigma_y^2. \end{align*} Now we can write the mean of the conditional distribution of $Y$ on $f(X)_1=\alpha$ as: \begin{align*}
\mathbb{E}\left[ Y \mid f(X)_1=\alpha, E=e_i \right] = {{\mathbf w}_c^*}^\top\mu^c_i + \frac{\sigma_{f,y,i}}{\sigma^2_{f,i}}(\alpha - {\mathbf w}_c^\top\mu^c_i - ({\mathbf w}_{sp}^\top\mu_i)({{\mathbf w}^*_c}^\top\mu^c_i)). \end{align*} For each environment $i\in{[k]}$, the above is a linear function of $\alpha$. Demanding $f(X)$ to be calibrated on all environments then imposes both the slopes and intercepts to be equal across environments. Writing this for the slope, we obtain that there must exist $t\in{\mathbb{R}}$ such that: \begin{align} \label{eq:slope_invariance}
\frac{\sigma_{f, y, i}}{\sigma^2_{f, i}} = t \quad \forall i\in{[k]}. \end{align} We note that $t \neq 0$: if $t=0$ then $\mathbb{E}[Y \mid f(X)_1=\alpha, E=i]$ does not depend on $\alpha$, while calibration demands that it equals $\alpha$. This can only happen if ${\mathbf w}_c=\mathbf{0}$, since otherwise the range of $f({\mathbf x})$ is all of $\mathbb{R}$ because we assumed in the definition of the environments that $\Sigma_c^i\succ 0$. Furthermore, ${\mathbf w}_c=\mathbf{0}$ cannot be calibrated if $\mathbb{E}[Y \mid E=e_i]$ is not constant across environments, which is also part of the non-degeneracy constraints we required. Next we demand the equality of the intercepts across environments. Taking these equations and substituting \eqref{eq:slope_invariance} into each of them, we get: \begin{align*}
{{\mathbf w}_c^*}^\top\mu^c_i - t\left({\mathbf w}_c^\top\mu^c_i + ({\mathbf w}^\top_{sp}\mu_i)({{\mathbf w}^*_c}^\top\mu^c_i)\right) = {{\mathbf w}_c^*}^\top\mu^c_j - t\left({\mathbf w}_c^\top\mu^c_j + ({\mathbf w}^\top_{sp}\mu_j)({{\mathbf w}^*_c}^\top\mu^c_j)\right) \: \forall i,j\in{[k]}. \end{align*} Dividing both sides by $t$ and defining $\bar{{\mathbf w}}_c = \frac{{\mathbf w}^*_c}{t} - {\mathbf w}_c$, we can introduce another variable $t_2\in{\mathbb{R}}$ and write this as a linear system of equations in variables ${\mathbf w}_{sp}, \bar{{\mathbf w}}_c, t_2$: \begin{align} \label{eq:means_invariance}
\bar{{\mathbf w}}^\top_c\mu^c_i - {\mathbf w}_{sp}^\top\mu_i({{\mathbf w}^*_c}^\top\mu^c_i) + t_2 = 0 \quad \forall i\in{[k]}. \end{align} We see that given $k > d_c + d_{sp} + 1$ environments, under mild non-degeneracy conditions (i.e. the vectors containing the environment means and an extra entry of $1$ span $\mathbb{R}^{d_c+d_{sp}+1}$), the only solution to the system is $\bar{{\mathbf w}}_c=\mathbf{0}, {\mathbf w}_{sp}=\mathbf{0}$, proving the last part of our statement.
Moving forward to demand multiple calibration on second moments $\mathbb{E}[Y^2 \mid f(X)_1=\alpha, E=e_i] = \mathbb{E}[Y^2 \mid f(X)_1=\alpha, E=e_j]$ for all $i,j\in{[k]}$, we may write this as: \begin{align*}
\sigma^2_{y,i} - \frac{\sigma^2_{f,y,i}}{\sigma^2_{f,i}} = \sigma^2_{y,j} - \frac{\sigma^2_{f,y,j}}{\sigma^2_{f,j}} \quad \forall i,j\in{[k]}. \end{align*} Plugging \eqref{eq:slope_invariance} into the above, a simplified expression is obtained: \begin{align*}
\sigma^2_{y,i} - t\sigma_{f,y,i} = \sigma^2_{y,j} - t\sigma_{f,y,j} \quad \forall i,j\in{[k]}. \end{align*} Again we can divide by $t$ and obtain an explicit expression using $\bar{{\mathbf w}}_c, {\mathbf w}_{sp}$: \begin{align*}
\bar{{\mathbf w}}_c^\top\Sigma^c_i{\mathbf w}_c^* - ({{\mathbf w}_c^*}^\top\Sigma^c_i{\mathbf w}_c^* + \sigma^2_y){\mathbf w}^\top_{sp}\mu_i = \bar{{\mathbf w}}_c^\top\Sigma^c_j{\mathbf w}_c^* - ({{\mathbf w}_c^*}^\top\Sigma^c_j{\mathbf w}_c^* + \sigma^2_y){\mathbf w}^\top_{sp}\mu_j \quad \forall i,j\in{[k]}. \end{align*} Finally, we can plug in \eqref{eq:means_invariance} and introduce another variable $t_3\in{\mathbb{R}}$ to turn the above equations into: \begin{align*}
\bar{{\mathbf w}}_c^\top\left( \Sigma^c_i{{\mathbf w}_c^*} + \left(\frac{ {{\mathbf w}_c^*}^\top\Sigma^c_i{{\mathbf w}_c^*} + \sigma^2_y}{{{\mathbf w}^*_c}^\top\mu^c_i}\right)\mu^c_i \right) + t_2\left(\frac{ {{\mathbf w}_c^*}^\top\Sigma^c_i{\mathbf w}_c^*}{{{\mathbf w}^*_c}^\top\mu^c_i} \right) + t_3 = 0 \quad \forall i\in{[k]}. \end{align*} It is now easy to see that if $k > d_c + 2$ and $M_2(\{\mu^c_i,\Sigma^c_i\}_{i=1}^{k}, \sigma^2_y, {\mathbf w}_c^*)$ has full rank, the only solution to these equations satisfies $\bar{{\mathbf w}}_c=\mathbf{0}, t_2=t_3=0$. When this is plugged into \eqref{eq:means_invariance}, we find that if $k > d_{sp}$ and the spurious means span $\mathbb{R}^{d_{sp}}$ then the only possible solution is ${\mathbf w}_{sp}=\mathbf{0}$. Finally, $\bar{{\mathbf w}}_c=\mathbf{0}$ means ${\mathbf w}^*_c=t{\mathbf w}_c$, and if $f({\mathbf x})$ is calibrated then we must have $t = 1$ since otherwise its estimate of the conditional mean is incorrect. Hence our proof is concluded. \end{proof}
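The closed-form covariance items $\sigma_{f,i}^2$ and $\sigma_{f,y,i}$ used in the proof can be verified numerically against the joint covariance block matrix (illustrative dimensions and randomly drawn parameters; this check is ours, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
d_c, d_sp = 2, 2

def spd(d):
    A = rng.standard_normal((d, d))
    return A @ A.T + np.eye(d)        # symmetric positive definite

Sig_c, Sig_sp = spd(d_c), spd(d_sp)
mu = rng.standard_normal(d_sp)        # spurious mean direction
w_star = rng.standard_normal(d_c)     # causal weights w_c^*
sigma_y2 = 0.5
v_y = w_star @ Sig_c @ w_star + sigma_y2          # Var(Y)

# Joint covariance of (X_c, X_sp, Y), filled block by block as in the proof.
C = np.zeros((d_c + d_sp + 1,) * 2)
C[:d_c, :d_c] = Sig_c
C[:d_c, d_c:-1] = np.outer(Sig_c @ w_star, mu)    # Sigma_c w* mu^T
C[:d_c, -1] = Sig_c @ w_star
C[d_c:-1, d_c:-1] = v_y * np.outer(mu, mu) + Sig_sp
C[d_c:-1, -1] = v_y * mu
C[-1, -1] = v_y
C = np.triu(C) + np.triu(C, 1).T                  # symmetrize

w_c, w_sp = rng.standard_normal(d_c), rng.standard_normal(d_sp)
w = np.concatenate([w_c, w_sp])

# Closed-form items from the proof:
sigma_fy = w_star @ Sig_c @ w_c + v_y * (mu @ w_sp)
sigma_f2 = (w_c @ Sig_c @ w_c + 2 * (w_c @ Sig_c @ w_star) * (mu @ w_sp)
            + w_sp @ (v_y * np.outer(mu, mu) + Sig_sp) @ w_sp)

print(np.isclose(sigma_fy, w @ C[:-1, -1]), np.isclose(sigma_f2, w @ C[:-1, :-1] @ w))
```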
We note that even though the setting we considered is restricted to causal features, anti-causal non-spurious features such as those in \figref{fig:scenario_a} can also be treated (resulting in the graph given in \figref{scm}). This is because, for a single environment, the distribution $P[X_{\text{c}}, X_{\text{ac-ns}}, X_{\text{ac-sp}}, Y \mid E=e]$ (which we shorten to $P^e$ for convenience) can always be written as follows, treating $X_{\text{ac-ns}}$ as causal features: \begin{align*}
P^e[X_{\text{c}}, X_{\text{ac-ns}}, X_{\text{ac-sp}}, Y] &= P^e(X_{\text{c}}, X_{\text{ac-ns}})P^e(Y \mid X_{\text{c}}, X_{\text{ac-ns}})P^e(X_{\text{ac-sp}} \mid Y, X_{\text{ac-ns}}, X_{\text{c}}) \\
&= P^e(X_{\text{c}}, X_{\text{ac-ns}})P^e(Y \mid X_{\text{c}}, X_{\text{ac-ns}})P^e(X_{\text{ac-sp}} \mid X_{\text{ac-ns}}). \end{align*} The last equality is due to the separation properties of the graph, and since the joint distribution is a multivariate Gaussian, so are all the factors in the above product. Hence each environment can be described using a structural equation model of the same type as \eqref{eq:regression_sem} and \thmref{thm:causal_regression} applies.
\section{Dataset Statistics and Models} \label{sec:datasets_models}
For each of the four WILDS experiments presented in \secref{sec:exp}, we briefly describe the data and report the splits we use for training, validation and test. In each experiment we train a model on the training set, and the calibrators on the validation set. The post-processing calibrators receive tuples of model predictions and labels as input, whereas fine tuning with CLOvE{} receives a latent representation (values of the last hidden layer for \textit{Camelyon17} and \textit{FMoW}, and average of the representation of the cls token over the last $4$ hidden layers in \textit{CivilComments}). CLOvE{} is trained over a Multilayer Perceptron with $3$ hidden layers, with batch size of $64$ and the Adam optimizer. We then compare all alternatives (Original, Naive Calibration, Robust Calibration and CLOvE{}) on the held-out test set (OOD). Whenever an In-Domain (ID) test set is available (\textit{PovertyMap} and \textit{Camelyon17}), we evaluate the model on it as well. Throughout our experiments, we measure and report the Expected Calibration Error (ECE) using $10$ bins, dividing the $[0,1]$ interval into sub-intervals of equal length. The licenses to the datasets are CC0 for \textit{Camelyon17} and \textit{CivilComments}, \textit{FMoW} is distributed under the FMoW Challenge Public License and \textit{PovertyMap} is public domain. All model training is done on an infrastructure with 4 RTX 2080 Ti GPUs.
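The binning scheme described above can be sketched as follows (an illustrative numpy implementation of ECE with equal-width bins; this is a sketch, not the evaluation code used in the experiments):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE over equal-width bins of [0, 1] (illustrative sketch)."""
    bin_idx = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_idx == b
        if mask.any():
            # |accuracy - mean confidence| in the bin, weighted by bin frequency
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

conf = np.array([0.92, 0.87, 0.62, 0.21])
correct = np.array([1.0, 1.0, 0.0, 0.0])
ece = expected_calibration_error(conf, correct)
print(ece)  # ~0.26 for this toy example
```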
\subsection{\textit{PovertyMap}} \textbf{Problem Setting} \textit{PovertyMap} is a regression task of poverty mapping across countries. Input ${\mathbf x}$ is a multispectral satellite image, output $y$ is a real-valued asset wealth index and domain $d$ is a country and whether the satellite image is of an urban or a rural area. The goal is to generalize across countries and demonstrate subpopulation performance across urban and rural areas.
\textbf{Data} \textit{PovertyMap} is based on a dataset collected by \cite{yeh2020using}, which organized satellite images and survey data from 23 African countries between 2009 and 2016. There are 23 countries, and every location is classified as either urban or rural. Each example includes the survey year, and its urban/rural classification.
\begin{enumerate}
\item Training: 10000 images from 13 countries.
\item Validation (OOD): 4000 images from 5 different countries (distinct from training and test (OOD) countries).
\item Test (OOD): 4000 images from 5 different countries (distinct from training and validation (OOD) countries).
\item Validation (ID): 1000 images from the same 13 countries in the training set.
\item Test (ID): 1000 images from the same 13 countries in the training set. \end{enumerate}
\subsection{\textit{Camelyon17}} \textbf{Problem Setting} \textit{Camelyon17} is a tumor identification task across different hospitals. Input ${\mathbf x}$ is a histopathological image, label $y$ is a binary indicator of whether the central region contains any tumor tissue and domain $d$ is an integer identifying the hospital. The training and validation sets include the same four hospitals, and the goal is to generalize to an unseen fifth hospital. We note that in \cite{koh2020wilds}, data from three hospitals is included in the training set and validation is performed on data from a fourth hospital. Our setting includes a validation set from multiple hospitals since our fine tuning method requires multiple domains.
\textbf{Data} The dataset comprises 450000 patches extracted from 50 whole-slide images (WSIs) of breast cancer metastases in lymph node sections, with 10 WSIs from each of five hospitals in the Netherlands \cite{bandi2018detection}. Each WSI was manually annotated with tumor regions by pathologists, and the resulting segmentation masks were used to determine the labels for each patch. Data is split according to the hospital from which patches were taken.
\begin{enumerate}
\item Training: 335996 patches taken from each of the 4 hospitals in the training set.
\item Validation: 60000 patches taken from each of the 4 hospitals in the training set (15000 patches from each hospital).
\item Test (OOD): 85054 patches taken from the 5th hospital, which was chosen because its patches were the most visually distinctive. \end{enumerate}
\subsection{\textit{CivilComments}} \textbf{Problem Setting} \textit{CivilComments} is a toxicity classification task across different demographic identities. Input ${\mathbf x}$ is a comment on an online article, label $y$ indicates if it is toxic, and domain $d$ is a one-hot vector with 8 dimensions corresponding to whether the comment mentions either of the 8 demographic identities \textit{male}, \textit{female}, \textit{LGBTQ}, \textit{Christian}, \textit{Muslim}, \textit{other religions}, \textit{Black}, and \textit{White}. The goal is to do well across all subpopulations, as computed through the average and worst case model performance.
\textbf{Data} \textit{CivilComments} comprises 450000 comments, annotated for toxicity and demographic mentions by multiple crowdworkers, where toxicity classification is modeled as a binary task \cite{borkan2019nuanced}. Each comment was originally made on an online article. Articles are randomly partitioned into disjoint training, validation, and test splits, and the corresponding datasets are formed by taking all comments on the articles in those splits.
\begin{enumerate}
\item Training: 269038 comments.
\item Validation: 45180 comments.
\item Test: 133782 comments. \end{enumerate}
\subsection{\textit{FMoW}} \textbf{Problem Setting} \textit{FMoW} is a building and land multi-class classification task across regions and years. Input ${\mathbf x}$ is an RGB satellite image, label $y$ is one of 62 building or land use categories, and domain $d$ is the time the image was taken and the geographical region it captures. The goal is to generalize across time, and improve subpopulation performance across all regions.
\textbf{Data} \textit{FMoW} is based on the Functional Map of the World dataset \cite{christie2018functional}, which includes over 1 million high-resolution satellite images from over 200 countries, based on the functional purpose of the buildings or land in the image, over the years 2002–2018. We use a subset of this data introduced in \cite{koh2020wilds}, which is split into three time range domains, 2002–2013, 2013–2016, and 2016–2018, as well as five geographical regions as subpopulations: \textit{Africa}, \textit{Americas}, \textit{Oceania}, \textit{Asia} and \textit{Europe}.
\begin{enumerate}
\item Training: 76863 images from the years 2002–2013.
\item Validation (OOD): 19915 images from the years 2013–2016.
\item Test (OOD): 22108 images from the years 2016–2018.
\item Validation (ID): 11483 images from the years 2002–2013.
\item Test (ID): 11327 images from the years 2002–2013. \end{enumerate}
\paragraph{Models} In the following we briefly describe each of the models used in the experiments reported in \secref{sec:exp}. \begin{itemize}
\itemsep0em
\item \textbf{BERT} - BERT is a 12-layer Transformer model \cite{vaswani2017attention} that represents textual inputs contextually and sequentially \cite{devlin2019bert}. It is widely used in NLP, and is considered the standard benchmark for any state-of-the-art system. It was previously shown to be miscalibrated across its training and test environments \cite{desai2020calibration}. In our \textit{CivilComments} experiments, we use BERT-base-uncased, a smaller variant of BERT which has a layer size of 768.
\item \textbf{DenseNet} - Dense Convolutional Network (DenseNet) is a feed-forward neural network where, for each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers \cite{huang2017densely}. DenseNets are widely used in computer vision, especially for image classification tasks. We use a DenseNet-121 model, a DenseNet variant with 121 layers, in the \textit{Camelyon17} and \textit{FMoW} experiments.
\item \textbf{ResNet} - Residual Network (ResNet) is a feed-forward neural network where layers are reformulated to learn residual functions with reference to the layer inputs \cite{he2016identity}. ResNets were shown to be successful in multiple image recognition tasks. We use the 18-layer variant, ResNet-18, in the \textit{PovertyMap} experiment.
\end{itemize}
We run our models using the default settings used in \cite{koh2020wilds}. Each model is trained four times, using a different random seed at each run. We report performance averages and their standard deviations in \secref{sec:exp}.
\paragraph{Robustness to Model Architecture Choice} For each of the four WILDS datasets we report results on (\textit{PovertyMap}, \textit{Camelyon17}, \textit{CivilComments} and \textit{FMoW}), we also tested the robustness of our results to different model architectures. In the following we describe the architectures we tested for each dataset, and the relative results achieved.
\begin{itemize}
\item \textbf{BERT} - We used a pre-trained \textit{BERT} in the \textit{CivilComments} experiments. On the \textit{CivilComments} dataset, we compared results on the BERT-base-uncased model with the cased and large versions. While we did find that performance increases with model size, performance drops on OOD examples remained consistent across models, with CLOvE{} outperforming \textit{Robust Calibration} and \textit{Naive Calibration} by an average of $1.4 \%$ and $3.1 \%$ (absolute), respectively.
\item \textbf{DenseNet} - In the \textit{FMoW} experiments, we tested the relative performance of the $121$ layer version to the $169$ and $201$ layer alternatives available via \url{https://pytorch.org/hub/pytorch_vision_densenet/}. Differences between the three models were not statistically significant.
\item \textbf{ResNet} - In the \textit{PovertyMap} experiments, we compare \textit{ResNet-18} to the $34$- and $50$-layer alternatives available via \url{https://pytorch.org/hub/pytorch_vision_resnet/}. We found that \textit{ResNet-18} performs slightly better on the OOD test set, with an average gain of $0.01$ in Pearson correlation compared with \textit{ResNet-34}. \textit{Robust Calibration} remained better than \textit{Naive Calibration} and the original model across runs. \end{itemize}
\paragraph{Training Algorithms} In the WILDS experiments, for each dataset we train our models using three out of these four alternatives: \begin{itemize}
\itemsep0em
\item \textbf{ERM} - Empirical risk minimization (ERM) is a training algorithm that looks for models minimizing the average training loss, regardless of the training environment.
\item \textbf{IRM} - Invariant risk minimization (IRM) \cite{arjovsky2019invariant} is a training algorithm that penalizes feature distributions that have different optimal linear classifiers for each environment.
\item \textbf{DeepCORAL} - DeepCORAL \cite{sun2016deep} is an algorithm that penalizes differences in the means and covariances of the feature distributions for each training environment. It was originally proposed in the context of domain adaptation, and has been subsequently adapted for domain generalization \cite{gulrajani2020search}.
\item \textbf{GroupDRO} - Group DRO \cite{hu2018does} uses distributionally robust optimization (DRO) to explicitly minimize the loss on the worst-case environment. \end{itemize}
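As an illustration of the environment-penalty family above, the CORAL-style penalty on two environments' features can be sketched as follows (a minimal numpy sketch with our own names and dimensions; the actual implementations are those of \cite{koh2020wilds}):

```python
import numpy as np

def coral_penalty(feats_a, feats_b):
    """Squared gaps between feature means and covariances of two environments
    (CORAL-style penalty; illustrative sketch)."""
    mean_gap = feats_a.mean(axis=0) - feats_b.mean(axis=0)
    cov_gap = np.cov(feats_a, rowvar=False) - np.cov(feats_b, rowvar=False)
    return mean_gap @ mean_gap + (cov_gap ** 2).sum()

rng = np.random.default_rng(0)
a = rng.standard_normal((100, 8))        # 100 examples, 8 features
print(coral_penalty(a, a))               # identical environments incur no penalty
```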
We do not perform any hyperparameter search, and use the default version available in \cite{koh2020wilds}.
\section{Experiments on Colored MNIST} \label{sec:cmnist_results} For the colored MNIST\footnote{The MNIST dataset is available under the terms of the Creative Commons Attribution-Share Alike 3.0 license} dataset we trained Multi-Layer Perceptrons (MLPs) with ERM, IRMv1 and CLOvE{}, based on the code provided in \cite{kamath2021does} with the following adjustments: we add CLOvE{} and optimize it using SGD with batches of size $512$ from each training environment, for $5001$ steps at each run ($\sim 50$ epochs). We used either the Adagrad optimizer \cite{duchi2011adaptive} or Adam \cite{kingma2014adam} (Adam was replaced with Adagrad in one environment where it produced highly unstable training metrics). All models were trained on a single NVidia Tesla P100 GPU virtual machine, on the Google Cloud Platform. Other algorithms were trained with Gradient Descent (i.e. without batching the dataset, which is infeasible for CLOvE{} since it is based on kernels) and Adam for $500$ steps/epochs, exactly as done in the code provided by \cite{arjovsky2019invariant, kamath2021does}. For CLOvE{}, hyperparameters are drawn similarly to the rest of the algorithms, except when using Adagrad where we multiply the originally drawn learning rate by $5$.
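A two-bit environment $(\alpha, \beta)$ of the kind used below can be sketched as follows (our own illustrative reading, where $\alpha$ and $\beta$ act as label- and color-flip probabilities; the actual construction is in the code of \cite{kamath2021does}):

```python
import numpy as np

def make_environment(digits, alpha, beta, rng):
    # label flips the digit parity with probability alpha;
    # color flips the label with probability beta
    parity = (digits % 2).astype(int)
    label = parity ^ (rng.random(digits.shape[0]) < alpha).astype(int)
    color = label ^ (rng.random(digits.shape[0]) < beta).astype(int)
    return label, color

rng = np.random.default_rng(0)
digits = rng.integers(0, 10, size=100_000)
label, color = make_environment(digits, alpha=0.1, beta=0.05, rng=rng)
print((label != digits % 2).mean(), (color != label).mean())  # roughly 0.1 and 0.05
```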
\subsection{Performance of CLOvE{}} We will refer to environments with tuples $(\alpha, \beta)$ that denote correlation with digit and color respectively, as done in \secref{subsec:2bit}. For each setting of training and test environments we experiment with, $100$ models are trained using each algorithm: ERM, IRM and CLOvE{}. To illustrate the failure case pointed out in \cite{kamath2021does} and \secref{sec:exp} of the paper, we train the algorithms with training environments corresponding to $e_1=(0.1, 0.05), e_2=(0.2, 0.05)$ and use data from test environment $e_3=(0.9, 0.05)$. \figref{fig:box_cmnist_color05}, which we produce using code provided in \cite{kamath2021does}, shows the results, where each point corresponds to a model trained with some set of drawn hyperparameters. Most models trained by CLOvE{} achieve log-loss that is close to that of the optimal invariant classifier (marked by the dashed black line), while the models trained with IRMv1 are more scattered; specifically, those that achieve lower log-loss are the ones that also obtain lower training objective. The bold colored lines mark the points that minimize $\sum_{e\in{E_\text{train}}}{l^e(f_\theta) + \lambda\cdot r^e(f_\theta)}$ with $\lambda=10^6$ (except for ERM, where it is the point that minimizes the empirical loss), showing that out of the models trained with IRMv1, the one which minimizes the objective has loss close to that of the solution $\text{OPT}_{\text{IRMv1}}$ from \figref{fig:mnist}(a) in the paper (marked by the dashed red line). In contrast, the CLOvE{} model with the lowest training objective is very close to the optimal invariant classifier in its test loss (marked by the black dashed line). \begin{figure}\label{fig:box_cmnist_color05}
\end{figure} Note that in this case color is the invariant feature while the digit is spurious. For the opposite case, where the digit is invariant, the error incurred by MLPs in digit recognition makes it difficult to find the exact invariant classifier by optimizing CLOvE{} (since this error is close in magnitude to the $0.05$ correlation). Yet in \secref{subsec:cmnist_model_selection} the failure case of IRMv1 in these environments will be illustrated: the average ECE (for which CLOvE{} is a surrogate) turns out to be a better measure of invariance than the IRMv1 objective.
The experiment presented in \cite{arjovsky2019invariant} used the training environments $e_1=(0.25, 0.1), e_2=(0.25, 0.2)$ with test environment $e_3=(0.25, 0.9)$, where IRMv1 can in principle learn the optimal invariant classifier. We give the results of learning with these environments for completeness. As can be observed in \figref{fig:box_cmnist_digit25}, both CLOvE{} and IRMv1 learn models that are close to the optimal invariant one. While IRMv1 learned more such models during the hyperparameter sweep\footnote{This can be attributed to the choice of ranges for drawing hyperparameters, which we did not carefully tune to accommodate CLOvE{}.}, CLOvE{} still obtains some close-to-invariant models during the sweep. \begin{figure}
\caption{Log-loss on test environment $(0.25, 0.9)$ of classifiers trained with ERM, CLOvE{} and IRMv1 on training environments $(0.25, 0.1), (0.25, 0.2)$. Lines denote the same quantities as in \figref{fig:box_cmnist_color05}, except that we omit the red dashed line from that figure.
}
\label{fig:box_cmnist_digit25}
\end{figure} The rest of this section will be dedicated to studying model selection with the proposed average ECE criterion and the correlation between ID average ECE and OOD performance. \subsection{Model Selection Experiments} \label{subsec:cmnist_model_selection} \begin{figure}\label{fig:cmnist_selection_curve}
\end{figure} Let us recall and elaborate on the selection procedure proposed in \secref{sec:algs}: \begin{itemize}
\item Given a desired threshold for In-Domain accuracy $\text{Thr}_{\text{ID}}$ and a set of models $f_1({\mathbf x}), \ldots, f_n({\mathbf x})$ from which we would like to select a candidate, perform the following.
\item For each candidate model $\hat{f}$, recalibrate it with Isotonic Regression or some other preferred post-processing technique\footnote{This is a crucial step, since models that are highly miscalibrated can become well-calibrated upon post-processing.}. Calculate its ID validation error $\text{val}_{\text{ID}}(\hat{f})$ over a held-out dataset. For the held-out dataset from each environment $e\in{E_{\text{train}}}$, also calculate $ECE^e(\hat{f})$: the $ECE$ of $\hat{f}$ over this dataset. Then take $ECE(\hat{f}) = \sum_{e\in{E_{\text{train}}}}{ECE^e(\hat{f})}$.
\item Choose $\mathrm{arg}\min_{\hat{f}: \text{val}_{\text{ID}}(\hat{f}) \geq \text{Thr}_{\text{ID}}}{ECE(\hat{f})}$. \end{itemize} \begin{figure}\label{fig:irm_vs_ece_selection}
\end{figure} \begin{figure}\label{fig:ece_irm_vs_ood_expanded}
\end{figure} \textbf{Selection with minimal ECE facilitates a tradeoff between ID accuracy and stability.} We use the trained models from the last section (all models trained with either ERM, IRMv1 or CLOvE{} are pooled into a set of candidates), over environments $e_1=(0.25, 0.1), e_2=(0.25, 0.2)$. Selecting the model with minimal $\text{val}_{\text{ID}}(\hat{f})$ delivers a classifier with $10.96\%(\pm 0.81)$ accuracy on $e_{\text{test}}=(0.25, 0.9)$ and $85.43\%(\pm 0.13)$ accuracy on the training environments. The trade-off achieved by selection with the proposed criterion is shown in \figref{fig:cmnist_selection_curve}. Demanding ID accuracy higher than $75\%$ (the ID accuracy obtained by an optimal invariant classifier) yields a relatively sharp drop towards the OOD accuracy obtained by a classifier that purely minimizes empirical error. Going below $75\%$ retrieves a classifier that achieves $64.98\%(\pm 2.67)$ OOD accuracy. \\ \textbf{Comparison with IRMv1 Penalty as Selection Criterion.} As a baseline for the average ECE over training environments, we compare it with the value of the IRMv1 regularizer, also calculated on a validation set from each training environment. In \figref{fig:irm_vs_ece_selection} we compare the curves obtained by the proposed model selection procedure and by that same procedure with the ECE replaced by the value of the IRMv1 penalty. \figref{fig:irm_vs_ece_selection}(a) shows the result in the scenario where $e_1=(0.25, 0.1),e_2=(0.25, 0.2)$ and $e_{\text{test}}=(0.25, 0.9)$. In this case the two methods are quite comparable, except for the tail of high desired ID accuracies, where the chosen models are trained with ERM and the IRMv1 criterion fails to rank them by their OOD accuracy. \figref{fig:irm_vs_ece_selection}(b) shows the same plot in the scenario where $e_1=(0.05, 0.1),e_2=(0.05, 0.2)$ and $e_{\text{test}}=(0.05, 0.9)$, which corresponds to the failure case of IRM in \figref{fig:mnist}(a).
Due to the observation of \cite{kamath2021does}, we may expect the IRMv1 objective to fail at capturing invariance in this setting. Indeed, the model selection done using the IRMv1 penalty yields a worse model than the one selected by ECE in this case. In \figref{fig:ece_irm_vs_ood_expanded} we also plot the correspondence between OOD accuracy and these quantities (namely, ID average ECE and the IRMv1 penalty), as in \figref{fig:mnist}(b), for both settings depicted in \figref{fig:irm_vs_ece_selection}, showing the erratic behavior of the IRMv1 penalty across different training regimes.
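The selection procedure of \secref{subsec:cmnist_model_selection} can be sketched in a few lines. This is only an illustrative sketch of ours: the function names are not from the released code, the recalibration step (isotonic regression) is omitted for brevity, and the ID validation accuracy is averaged over environments.

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error for binary classifiers.

    probs:  predicted probability of class 1, shape (n,)
    labels: ground-truth labels in {0, 1}, shape (n,)
    """
    conf = np.maximum(probs, 1 - probs)            # confidence of the predicted class
    correct = ((probs >= 0.5).astype(int) == labels).astype(float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():                             # |accuracy - confidence|, weighted by bin mass
            err += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return err

def select_model(models, val_sets, thr_id):
    """Among candidates whose mean ID validation accuracy is >= thr_id,
    return the one minimizing the summed per-environment ECE."""
    best, best_ece = None, float("inf")
    for f in models:
        accs = [(((f(X) >= 0.5).astype(int)) == y).mean() for X, y in val_sets]
        eces = [ece(f(X), y) for X, y in val_sets]
        if np.mean(accs) >= thr_id and sum(eces) < best_ece:
            best, best_ece = f, sum(eces)
    return best
```

For instance, a model that always predicts probability $0.9$ on a balanced binary validation set has accuracy $0.5$ but ECE $0.4$, and would lose to a calibrated candidate of equal accuracy.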
\end{document}
\begin{document}
\title{Fluctuations of bridges, reciprocal characteristics and concentration of measure} \newcommand{\Addresses}{{
\footnotesize
Giovanni~Conforti, \textsc{Department of Mathematics, Universit\"at Leipzig,
Germany}\par\nopagebreak
\textit{E-mail address}, \texttt{giovanniconfort@gmail.com}
}} \maketitle \begin{abstract} Conditions on the generator of a Markov process to control the fluctuations of its bridges are found. In particular, continuous time random walks on graphs and gradient diffusions are considered. Under these conditions, a concentration of measure inequality for the marginals of the bridge of a gradient diffusion and refined large deviation expansions for the tails of a random walk on a graph are derived. In contrast with the existing literature about bridges, all the estimates we obtain hold for non asymptotic time scales. New concentration of measure inequalities for pinned Poisson random vectors are also established. The quantities expressing our conditions are the so called \textit{reciprocal characteristics} associated with the Markov generator. \end{abstract} \tableofcontents
\section{Introduction} In this paper we study bridges of Markov processes over the time interval $[0,1]$ from a quantitative viewpoint. As a guideline for our investigations, independently of the details of the model, we have in mind a sketch of the motion of a bridge as divided into two symmetric phases: at first one observes an expansion phase, in which the bridge, starting from its deterministic initial position, increases its randomness. After time $1/2$, a second, contraction phase takes place, in which the damping effect of the pinning at the terminal time is so strong that randomness decreases, and eventually dies out. Moreover, one also expects that the two phases enjoy some symmetry with respect to time reversal. To summarize, one can say that the motion of a bridge resembles that of an accordion.
The aim of this paper is to obtain a quantitative explanation of this picture. This means that we consider a Markov process and try to understand how its semimartingale characteristics should look like in order to observe bridges where the influence of pinning over randomness is stronger than that of a reference model, for which computations can be carried out in explicit form.
This problem, although quite natural, seems to have received very little attention so far. As we shall see, some precise answers can be given. It is interesting to note that the quantities expressing our conditions are not related to those used to measure the speed of convergence to equilibrium, as one might expect at first glance (see Remark \ref{remgamma2} for a comparison with the $\Gamma_2$ condition of Bakry and \'Emery \cite{BAKEM}).\\
There are many possible quantities that could be used to estimate the balance of power between pinning and randomness and to make precise mathematical statements. Some of them are discussed in the sequel, depending on the model: in this paper, Brownian diffusions with gradient drift and continuous time random walks on a graph are considered.\\ Our results take the form of comparison theorems, which yield quantitative information on the bridge at non-asymptotic time scales. In Theorem \ref{accordeon} we find conditions on the potential of a gradient diffusion for its bridges to have marginals with better concentration properties than those of an Ornstein-Uhlenbeck bridge. This is one of the main novelties with respect to the existing literature about bridges where, to the best of our knowledge, only Large Deviations-type estimates have been proved, and mostly in the short time regime, see among others \cite{bailleul2013large}, \cite{bailleul2015small}, \cite{baldi2002asymptotics}, \cite{baldi2014large}, \cite{dawson1990schrodinger}, \cite{privault2015large}, \cite{wittich2005explicit} and \cite{yang2014large}. The proof of this result proceeds by first showing an ad hoc Girsanov formula for bridges, which differs from the usual one. We then employ some tools developed in \cite{BrLieb76} to transfer log concavity of the density from the path space to the marginals, and use the well known properties of log concave distributions. \\ Theorems \ref{squarelatticebound} and \ref{treebound} concern continuous time random walks on graphs with constant speed: we find conditions on the jump rates under which the marginals of the bridge have lighter tails than those of the simple random walk. For their proof we rely on some elementary, though non trivial, combinatorial constructions that allow us to control the growth of the \textit{reciprocal characteristics} associated with the cycles of the graph, as the length of the cycles increases.
The study of bridges of continuous time random walks leads naturally to the consideration of pinned Poisson random vectors: we derive concentration of measure inequalities for these distributions, using a modified logarithmic Sobolev inequality and an interpolation argument.
\subsection*{Reciprocal characteristics} An interesting aspect is that the conditions which we impose to derive the estimates are expressed in terms of the so-called \textit{reciprocal characteristics}.
The reciprocal characteristics of a Markov process are a set of invariants which fully determine the family of bridges associated with it:
such a concept was introduced by Krener in \cite{Kre88}, who was interested in developing a theory of stochastic differential equations of second order, motivated by problems in Stochastic Mechanics. Several authors then contributed to the development of the theory of reciprocal processes and second order differential equations. Important contributions are those of Clark \cite{Cl91}, Thieullen \cite{Th93}, L\'evy and Krener \cite{LK96}, and Krener \cite{Kre97}. Roelly and Thieullen in \cite{RT02}, \cite{RT05} introduced a new approach based on integration by parts formulae. This approach was used to study reciprocal classes of continuous time jump processes in \cite{CLMR}, \cite{CDPR}, \cite{CR}.
The precise definitions are given in Definitions \ref{def:recchardiff} and \ref{def:recchgr} below. However, let us give some intuition about why they are an interesting object for addressing the aforementioned problems. For simplicity, we assume that $\mathbb{P}^x$ is a diffusion with drift $b$ and unit dispersion coefficient. It is well known that the bridge $\mathbb{P}^{xy}$ is another Brownian diffusion with unit diffusion matrix, whose drift field $\tilde{b}$ admits the following representation: \begin{equation*} \tilde{b} (t,z) = b(t,z) + \nabla \log h(t,z),
\end{equation*}
where $h(t,z)$ solves the Kolmogorov backward PDE:
\begin{equation*}
\partial_t h(t,z) + b \cdot \nabla h(t,z) + \frac{1}{2} \Delta h(t,z) = 0 , \quad \lim_{t \uparrow 1}h(t,z)= \mathbf{1}_{z=y}
\end{equation*} This is the classical way of looking at bridges as $h$-transforms, which goes back to Doob \cite{Doob1957}. However, it might not be the most convenient one to perform explicit computations. The first reason is that $h$ is not given in explicit form. Moreover, this representation does not account for the time symmetric nature of bridges. Actually, the problem of restoring this time symmetry was one of the motivations for several definitions of conditional velocity and acceleration for diffusions in the context of stochastic mechanics, see e.g. \cite{Nel67}, \cite{CruZa91}, \cite{Th93}. The theory of reciprocal processes proposes a different approach to bridges: there one looks for a family of (non-linear) differential operators $\mathfrak{A}$ with the property that the system of equations \begin{equation*} \mathscr{A} \tilde{b} = \mathscr{A} b, \quad \mathscr{A} \in \mathfrak{A} \end{equation*} together with some boundary conditions characterizes the drift $\tilde{b}$ of $\mathbb{P}^{xy}$. For diffusions, they were computed for the first time by Krener in \cite{Kre88}, and subsequently used by Clark \cite{Cl91} to characterize reciprocal processes.
For instance, in the case of 1-dimensional Brownian diffusions we have $\mathfrak{A} = \left\{ \mathscr{A} \right\}$ with \begin{equation*} \mathscr{A} b = \frac{1}{2} \partial_{xx} b + b \partial_x b + \partial_t b \end{equation*} The advantage of this approach is to show that the drift of the bridge $\tilde{b}$ depends on $b$ only through the subfields $\mathscr{A} b$, for $\mathscr{A} \in \mathfrak{A}$, and not on anything else. In other words: two different processes with the same reciprocal characteristics have identical bridges (for results of this type, see \cite{Blee}, \cite{Fitz}, \cite{CLMR}, \cite{RT02}, \cite{RT05}, \cite{CDPR}, \cite{CR}, \cite{CL15}). Therefore, one sees that any optimal condition to control the fluctuations of $\mathbb{P}^{xy}$ shall be formulated in terms of the characteristics, since other conditions will necessarily involve some features of $b$ which play no role in the construction of $\mathbb{P}^{xy}$. This simple observation already rules out some naive approaches to the problems studied in this paper. Indeed, one might observe that when $\mathbb{P}^x$ is time homogeneous we have: \begin{equation*} \mathbb{P}^{xy}(X_t \in dz ) \propto \mathbb{P}^x(X_t \in dz) \, \mathbb{P}^{z}(X_{1-t} \in dy ) \end{equation*}
and then an optimal criterion to control the fluctuations of the marginals of $\mathbb{P}$ suffices. But since no known condition bounding them is expressed in terms of the reciprocal characteristics, this strategy has to be discarded. Reciprocal characteristics enjoy a probabilistic interpretation: they appear as the coefficients of the leading terms in the short time expansion of either the conditional probability of some events (see \cite{CL15} for the discrete case) or the conditional mean acceleration (see \cite{Kre97} in the diffusion setting).
Indeed, one can view the results of this article as the global version of the ``local'' estimates which appear in the works above. A first result in this direction was obtained in \cite{conforti2016counting}, where a comparison principle for bridges of counting processes is proven.
Reciprocal characteristics are divided into two families: \textit{harmonic} characteristics and \textit{rotational (closed walk)} characteristics. We discuss the role of harmonic characteristics in the diffusion setting and the role of rotational characteristics for continuous time random walks on graphs.
\subsubsection*{Organization of the paper} In Sections 2 and 3 we present our main results for diffusions and random walks: Theorem \ref{accordeon}, Theorem \ref{t70}, Theorem \ref{squarelatticebound} and Theorem \ref{treepatch}. Section $4$ is devoted to proofs. We collect in the Appendix some results on which we rely for the proofs. \subsubsection*{General notation}
We consider Markov processes over $[0,1]$ whose state space $\mathcal{X}$ is either $\mathbb{R}^d$ or the set of vertices of a countable directed graph. We always denote by $\Omega$ the c\`adl\`ag space over $\mathcal{X}$, by $(X_t)_{0\leq t \leq 1}$ the canonical process, and by $ \mathcal{P}(\Omega)$ the space of probability measures over $\Omega$. On $\Omega$ a Markov probability measure $\mathbb{P}$ is given, and we study its bridges. In our setting, bridges will always be well defined for \textit{every} $(x,y) \in \mathcal{X}^2$ and not only in the almost sure sense. We will make clear case by case why this is possible. As usual $\mathbb{P}^x$ is $\mathbb{P}( \cdot | X_0 =x)$, $\mathbb{P}^{xy}$ is the $xy$ bridge, $\mathbb{P}^{xy} := \mathbb{P}(\cdot | X_0 =x, X_1 =y)$. For $I\subseteq [0,1]$, we call $X_{I}$ the collection $(X_t)_{t \in I}$ and the image measure of $X_{I}$ is denoted $\mathbb{P}_{I}$. Similarly, we define $\mathbb{P}^x_{I}$ and $\mathbb{P}^{xy}_{I}$. For a general $\mathbb{Q} \in \mathcal{P}(\Omega)$, expectation under $\mathbb{Q}$ is denoted $\mathbb{E}_{\mathbb{Q}}$. We use the notation $\propto$ when two functions differ only by a multiplicative constant.
\section{Bridges of gradient diffusions: concentration of measure for the marginals}
\subsubsection*{Preliminaries} We consider gradient-type diffusions. The potential $U$ is possibly time dependent and satisfies one of hypotheses (2.2.5) and (2.2.6) of Theorem 2.2.19 in \cite{royer2007initiation}, which ensure existence of solutions for \begin{equation}\label{eq:grdiff} d X_t = - \nabla U (t,X_t)dt + dB_t, \quad X_0 =x. \end{equation} Bridges of Brownian diffusions are well defined for any $x,y \in \mathbb{R}^d$. This is ensured by \cite[Th.1]{chaumont2011markovian} together with the fact that $\mathbb{P}$ admits a smooth transition density.
A special notation is used for Ornstein-Uhlenbeck processes. We use $^{\alpha}\mathbb{P}^{x}$ for the law of \begin{equation}\label{eq:OUprc} d X_t = - \alpha \, X_t dt + d B_t, \quad X_0 = x \end{equation} where $\alpha>0$ is a constant. ${}^\alpha\mathbb{P}^{xy}$ is then the $xy$ bridge of $^{\alpha}\mathbb{P}^{x}$.
Let us give some standard notation. For $v \in \mathbb{R}^d$, $v^T$ is the transposed vector. If $w$ is another vector in $\mathbb{R}^d$, we denote the inner product of $v$ and $w$ by $v \cdot w$. Similarly, if $H$ is a matrix and $v$ a vector, the product is denoted $H \cdot v$. The Hessian matrix of a function $U$ is denoted $\mathbf{Hess}(U)$, and by $\mathbf{Hess}(U) \geq \alpha \, \mathbf{id} $ we mean, as usual, that \begin{equation*} \inf_{v: v \cdot v =1 } v^T \cdot \mathbf{Hess}(U)(z) \cdot v \geq \alpha
\end{equation*} The norm of $v\in \mathbb{R}^d$ is $\| v\|$.
Let us now give the definition of the reciprocal characteristics for gradient diffusions; it goes back to Krener \cite{Kre88}.
\begin{mydef}\label{def:recchardiff} Let $U: [0,1] \times \mathbb{R}^d \rightarrow \mathbb{R}$ be a smooth potential. We define $\mathscr{U}: [0,1] \times \mathbb{R}^d \rightarrow \mathbb{R}$ as: \begin{equation}\label{e16}
\mathscr{U} (t,z) := \big[ \frac{1}{2} \| \nabla U\|^2 - \partial_t U - \frac{1}{2}\Delta U \big] (t,z) \end{equation}
The \underline{harmonic characteristic} associated with $U$ is the vector field $\nabla \mathscr{U}$.
\end{mydef}
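In dimension one and for a gradient drift $b = -\partial_x U$, the harmonic characteristic is consistent with the operator $\mathscr{A}$ recalled in the introduction; a one-line computation (a sketch, for smooth $U$) gives:

```latex
\partial_x \mathscr{U}
  = \partial_x U \,\partial_{xx} U - \partial_t \partial_x U - \tfrac{1}{2}\,\partial_{xxx} U
  = b\,\partial_x b + \partial_t b + \tfrac{1}{2}\,\partial_{xx} b
  = \mathscr{A} b .
```

Thus conditions on $\nabla \mathscr{U}$, such as \eqref{e17}, involve only the reciprocal characteristic of $\mathbb{P}^x$, in accordance with the discussion in the introduction.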
\subsubsection*{Measuring the fluctuations} Consider the bridge marginal $\mathbb{P}^{xy}_t$. We denote its density w.r.t. the Lebesgue measure by $p^{xy}_t(z)$. As an indicator of the ``randomness'' of $\mathbb{P}^{xy}_t$ we use $\gamma(t)$, defined by: \begin{equation}\label{e90} \gamma(t) = \sup \{ \beta : -\mathbf{Hess}(\log p^{xy}_t)(z) \geq \beta \mathbf{id} \} \end{equation} It is well known that lower bounds on $\gamma(t)$ translate into concentration properties for $\mathbb{P}^{xy}_t$, see Theorem 2.7 of \cite{Led01}: the better the bound, the stronger the concentration. In the Ornstein-Uhlenbeck case, $\gamma(t) =: \gamma_{\alpha}(t)$ can be computed explicitly; the actual computation will be carried out in the proof of Theorem \ref{accordeon}. We have: \begin{equation}\label{e40} \gamma_{\alpha}(t) =\frac{2\alpha(1-\exp(-2\alpha))}{(1-\exp(-2 \alpha t)) (1-\exp(-2 \alpha(1-t) ) )} \end{equation} Note that $\gamma_{\alpha}$ obeys a few stylized facts:
\begin{enumerate}[(i)] \item It is symmetric around $1/2$: this reflects the time symmetry of the bridge. \item It converges to $+ \infty$ as $t$ converges to either $0$ or $1$. This is due to the pinning. \item $\gamma_{\alpha}$ is convex in $t$. This also agrees with the description of the dynamics of a bridge sketched in the introduction. Convexity reflects the fact that, as time passes, the balance of power between pinning and randomness shifts in favor of pinning, whose impact on the dynamics grows stronger and stronger, whereas the push towards randomness stays constant, since the diffusion coefficient does not depend on time. \item It is increasing in $\alpha$. \end{enumerate}
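The stylized facts (i)--(iv) can be checked numerically from \eqref{e40}. The following snippet is only a sanity check of ours, not part of the proofs; convexity is tested via the midpoint criterion on a grid.

```python
import math

def gamma(alpha, t):
    # gamma_alpha(t) from eq. (e40): curvature of the OU bridge marginal
    num = 2 * alpha * (1 - math.exp(-2 * alpha))
    den = (1 - math.exp(-2 * alpha * t)) * (1 - math.exp(-2 * alpha * (1 - t)))
    return num / den

alpha = 1.5
ts = [i / 100 for i in range(1, 100)]
# (i) symmetry around t = 1/2
assert all(abs(gamma(alpha, t) - gamma(alpha, 1 - t)) < 1e-9 for t in ts)
# (ii) blow-up at the endpoints, due to the pinning
assert gamma(alpha, 1e-6) > 1e5 and gamma(alpha, 1 - 1e-6) > 1e5
# (iii) convexity in t (midpoint criterion on the grid)
assert all(2 * gamma(alpha, (s + u) / 2) <= gamma(alpha, s) + gamma(alpha, u) + 1e-9
           for s, u in zip(ts, ts[2:]))
# (iv) monotonicity in alpha
assert all(gamma(2.0, t) > gamma(alpha, t) for t in ts)
```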
Theorem \ref{accordeon} is a comparison theorem for $\gamma(t)$. We show that if the Hessian of $\mathscr{U}$ (see \eqref{e16}) enjoys some convexity lower bound, say $\frac{1}{2}\alpha^2$, then $\gamma(t)$ lies above $\gamma_{\alpha}(t)$: this means that $\mathbb{P}^{xy}$ is more concentrated than $^{\alpha}\mathbb{P}^{xy}$.
\begin{theorem}\label{accordeon} Let $\mathbb{P}^x$ be the law of \eqref{eq:grdiff} and $\mathscr{U}$ be defined at \eqref{e16}. If, uniformly in $r \in [0,1], z \in \mathbb{R}^d$: \begin{equation}\label{e17} \mathbf{Hess}( \mathscr{U} )(r,z) \geq \frac{\alpha^2 }{2} \, \mathbf{id} \end{equation} then the following estimate holds for any $t \in [0,1]$, and any $1$-Lipschitz function $f$: \begin{equation*}
\mathbb{P}^{xy}\Big(f(X_t) \geq \mathbb{E}_{\mathbb{P}^{xy}}(f(X_t)) + R \Big) \leq \exp \Big(-\frac{1}{2}\gamma_{\alpha}(t) R^2\Big) \end{equation*} where $\gamma_{\alpha}(t)$ is defined at \eqref{e40}. \end{theorem}
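For the one-dimensional Ornstein-Uhlenbeck reference model, \eqref{e40} can be cross-checked against the marginal variance of the bridge: for a Gaussian marginal, $-\partial_{zz} \log p^{xy}_t = \mathrm{Var}(X_t)^{-1}$. The snippet below (a numerical sanity check of ours, not a substitute for the computation in the proof) conditions the jointly Gaussian pair $(X_t, X_1)$ on the terminal value.

```python
import math

def gamma(alpha, t):
    # gamma_alpha(t) from eq. (e40)
    return (2 * alpha * (1 - math.exp(-2 * alpha))
            / ((1 - math.exp(-2 * alpha * t)) * (1 - math.exp(-2 * alpha * (1 - t)))))

def ou_bridge_var(alpha, t):
    # Variance of X_t under the OU xy-bridge via Gaussian conditioning:
    # given X_0 = x, (X_t, X_1) is jointly Gaussian with
    #   V_s = (1 - exp(-2 a s)) / (2 a),  Cov(X_t, X_1) = exp(-a (1 - t)) V_t,
    # hence Var(X_t | X_1 = y) = V_t - Cov(X_t, X_1)**2 / V_1 (independent of x, y).
    V = lambda s: (1 - math.exp(-2 * alpha * s)) / (2 * alpha)
    cov = math.exp(-alpha * (1 - t)) * V(t)
    return V(t) - cov ** 2 / V(1)

for a in (0.5, 1.7, 3.0):
    for t in (0.1, 0.3, 0.5, 0.9):
        assert abs(gamma(a, t) * ou_bridge_var(a, t) - 1) < 1e-9
```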
The proof of Theorem \ref{accordeon} uses three main tools. The first one is an integration by parts formula for bridges of Brownian diffusions due to Roelly and Thieullen, see \cite{RT02} and \cite{RT05}. This formula has the advantage of elucidating the role of reciprocal characteristics, and we report it in the appendix. The second one is a statement about the preservation of strong log concavity due to Brascamp and Lieb \cite{BrLieb76}. This theorem is a quantitative version of the well known fact that marginals of log concave distributions are log concave. We refer to Remark \ref{noway} for a comparison of Theorem \ref{accordeon} with some of the results of \cite{BrLieb76}. Finally, we profit from the well known concentration of measure properties of log concave distributions, for which we refer to \cite[Chapter 2]{Led01}.
\begin{remark} The condition \eqref{e17} does not depend on the endpoints $(x,y)$ of the bridge. \end{remark}
\begin{remark} The estimates obtained here are sharp, as the Ornstein-Uhlenbeck case demonstrates: a simple computation shows that $\frac{\alpha^2}{2} \mathbf{id} $ is indeed the Hessian of $\mathscr{U}$ when $\mathbb{P}^x= {}^{\alpha}\mathbb{P}^x$. \end{remark}
\begin{remark}\label{remgamma2} The $\Gamma_2$ condition of Bakry and \'Emery in this case reads as \begin{equation*} \mathbf{Hess}(U) \geq \alpha \, \mathbf{id} \end{equation*} which is clearly very different from \eqref{e17}. In particular, \eqref{e17} involves derivatives of order up to four. However, a simple manipulation of Girsanov's theorem formally relates the two conditions. Consider the density $M$ of $\mathbb{P}^x$ with respect to the Brownian motion started at $x$. We have by Girsanov's formula (for simplicity, we assume $U$ not to depend on time):
\begin{equation*} M= \exp\left(- \int_{0}^1 \nabla U(X_t) \cdot dX_t - \frac{1}{2} \int_{0}^1 \| \nabla U \|^2(X_t) dt \right) \end{equation*} A standard application of It\^o formula allows to rewrite $M$ as: \begin{equation*}
\exp\left(- \underbrace{U(X_1)}_{\Gamma_2} + U(x) - \frac{1}{2} \int_{0}^1 \underbrace{\| \nabla U \|^2(X_t) - \Delta U(X_t) }_{ = 2 \mathscr{U}} dt \right) \end{equation*} Imposing convexity on the first term, one obtains $\Gamma_2$, whereas imposing convexity on the integrand yields \eqref{e17}. In this sense, the two conditions are complementary: what is ``seen'' by one is not seen by the other, and vice versa. \end{remark} \begin{remark} Many authors have investigated logarithmic Sobolev inequalities for the Brownian bridge as a law on path space or, more generally, for the Brownian motion on loop spaces, see e.g. \cite{fang1999integration}. Therefore, starting from those inequalities, one should be able to obtain concentration of measure results for the Brownian bridge. Our approach is not based on such inequalities because, to the best of our understanding, they are limited to the bridge of the Brownian motion, whereas we consider bridges of gradient-type SDEs. Moreover, we do not know how precise the concentration bounds for the marginals derived from these inequalities would be, and there does not seem to be a criterion for constructing measures whose concentration properties are at least as good as those of the Brownian bridge measure. This is exactly what we do in this paper. On the other hand, these inequalities are available for curved spaces, a case which we do not touch. \end{remark}
\section{Continuous time random walks}
In this section we prove various estimates for the bridges of continuous time random walks with constant speed. These estimates are obtained by imposing conditions on the \textit{closed walk characteristics} associated with the random walk. It is shown in \cite[Th. 2.4]{CL15} that the closed walk characteristics of a constant speed random walk fully determine its bridges.
\subsubsection*{Preliminaries} Let $\mathcal{X}$ be a countable set and $ \mathcal{A} \subset\mathcal{X}^2$.
The \emph{directed graph} associated with $\mathcal{A}$ is defined by means of the relation $\to$: for all $z,z'\in\mathcal{X}$ we have $z \to z'$ if and only if $(z,z')\in \mathcal{A}$. We denote this directed graph by $(\mathcal{X},\to)$, say that any $(z,z')\in \mathcal{A}$ is an arc, and write $(z \to z')\in \mathcal{A}$ instead of $(z,z')\in \mathcal{A}.$ For any $n\ge1$ and $x_0,\dots,x_n\in\mathcal{X}$ such that $x_0\to x_1$, $x_1\to x_2,\ \cdots,\ x_{n-1}\to x_n$, the ordered sequence
$(x_0,x_1,\dots,x_n)$ is called a \emph{walk}. We adopt the notation $ \mathbf{w}=(x_0\to x_1 \to \cdots\rightarrow x_n). $ When $x_n=x_0$, the walk $\mathbf{c}=(x_0\to x_1 \to \cdots\rightarrow x_n=x_0)$ is said to be \textit{closed}. The length $n$ of $\mathbf{w}$ is denoted by $\ell(\mathbf{w}).$
We introduce a continuous time random walk $\mathbb{P}^x$ with jump intensity $j:\mathcal{A} \rightarrow \mathbb{R}_{+}$; $j(z \to z')$ is the rate at which the walk jumps from $z$ to $z'$. To ensure existence of the process, we make some standard assumptions on $j$ and $(\mathcal{X},\to)$, which are detailed in Assumption \ref{as-03} and Assumption \ref{as-01}. These assumptions also ensure that the bridge is defined between any pair of vertices $x,y \in \mathcal{X}$. In this paper, we consider constant speed random walks (CSRW). This means that the function $z \mapsto \bar{j}(z) = \sum_{z': z \to z'} j(z \to z')$ is constant.
Let us define the closed walk characteristics associated with $j$. We refer to \cite{CDPR}, \cite{CL15}, \cite{CR} for an extensive discussion.
\begin{mydef}\label{def:recchgr} Let $(\mathcal{X},\to)$ be a graph satisfying Assumption \ref{as-03} and $j$ be a jump intensity satisfying Assumption \ref{as-01}. For any closed walk $\mathbf{c}=(x_0\to \cdots\to x_{n}=x_0)$ we define the corresponding \underline{closed walk characteristic} as: \begin{equation}\label{eq30} \Phi_j(\mathbf{c}) := \prod_{i=0}^{n-1} j(x_i \to x_{i+1}) . \end{equation} \end{mydef}
\subsection{Concentration of measure for pinned Poisson random vectors}\label{sub1} \subsubsection*{A simple question} We fix $k \in \mathbb{N}$ and consider the graph $(\mathcal{X}, \to)$ where $\mathcal{X} = \mathbb{Z}$, and $z \to z' $ if and only if $z' = z -1$ or $z' = z+k$. We consider a random walk $\mathbb{P}$ with time and space-homogeneous rates: \begin{equation*} j(z \to z+k) \equiv j_k , \quad j(z \to z-1) \equiv j_{-1} \quad \forall z \in \mathbb{Z} \end{equation*}
The simple\footnote{see Definition \ref{defs-01} for the meaning of simple walk} closed walks of $(\mathcal{X},\to)$ are of the form $$ \mathbf{c} = (x \to x-1 \to x-2 \to \cdots \to x-k \to x)$$ for some $x \in \mathbb{Z}$ and, because of the homogeneity of the rates, we have $$ \forall \mathbf{c} \, \text{simple closed walk}, \quad \Phi_j(\mathbf{c}) \equiv j^k_{-1}j_k:=\Phi $$
We introduce random variables $ N^{k}$ and $N^{-1}$ which count the number of jumps along arcs of the form $(x \to x+k)$ and $(x \to x-1)$ respectively. Obviously, under $\mathbb{P}^0$ the vector $(N^{k},N^{-1})$ is a two-dimensional vector with independent components following Poisson laws of parameters $j_{k}$ and $j_{-1}$, respectively.
Let us consider the $00$ bridge of $\mathbb{P}$, $\mathbb{P}^{00}$. The distribution of $N^{k}$ is that of the first coordinate of a Poisson random vector conditioned to belong to an affine subspace, precisely $\{(n^k, n^{-1}) \in \mathbb{N}^2 : k \, n^k - n^{-1} = 0 \}$. We call this distribution $\rho_{\Phi}$. \begin{equation} \label{eq21}
\rho_{\Phi} ( \cdot ) = \mathbb{P}^{00}(N^k \in \cdot ) = \mathbb{P}^{0} \left( N^k \in \cdot \Big | k N^{k} - N^{-1}=0\right) \end{equation}
We aim at establishing a concentration of measure inequality for $\rho_{\Phi}$. This is very natural in the study of bridges: one wants to know how many jumps of a certain type the bridge performs. The role of pinning against randomness should be visible in the concentration properties of this distribution.
This task is not trivial because $\rho_{\Phi}$ is no longer a Poissonian distribution. This is in contrast with the Gaussian case, where pinning a Gaussian vector to an affine subspace gives back a Gaussian vector.
To gain some insight on what rates to expect let us recall Chen's characterization of the Poisson distribution (see \cite{Chen}) of parameter $\lambda$, which we call $\mu_{\lambda}$: \begin{equation}\label{eq23}
\forall f>0 \quad \quad \lambda \mathbb{E}_{\mu_{\lambda}} \big( f(n+1)\big) = \mathbb{E}_{\mu_{\lambda}} \big( f(n) n \big) \end{equation} Using \cite[Prop. 3.8]{CDPR}, one finds an analogous characterization for $\rho_{\Phi}$ as the only solution of \begin{equation}\label{eq22} \forall f>0 \quad \Phi \, \mathbb{E}_{\rho_{\Phi}} \left( f(n + 1 ) \right) = \mathbb{E}_{\rho_{\Phi}} \left( f(n) n \, \prod_{i=0}^{k-1} (k n - i) \right) \end{equation}
The weight multiplying $f(n)$ on the right-hand side of \eqref{eq22} is a polynomial of degree $k+1$ in $n$. By choosing $f(n)=\mathbf{1}_{n=z}$ in both \eqref{eq23} and \eqref{eq22}, we obtain: \begin{equation}\label{e120} \forall z \in \mathbb{N}, \quad \frac{\mu_{\lambda}(z-1)}{\mu_{\lambda}(z) } = \frac{1}{\lambda} z , \quad \frac{\rho_{\Phi}(z-1)}{\rho_{\Phi}(z)} = \frac{z\prod_{i=0}^{k-1} (k z - i) }{\Phi }\sim \frac{z^{k+1}}{\Phi} \end{equation} from which we deduce that $\rho_{\Phi}$ has much lighter tails than $\mu_{\lambda}$. The corresponding concentration inequalities should reflect this fact.
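The ratio in \eqref{e120} can also be checked directly from the definition \eqref{eq21}: for the conditioned Poisson vector one has $\rho_{\Phi}(n) \propto \frac{j_k^n}{n!}\,\frac{j_{-1}^{kn}}{(kn)!}$, and the recursion follows by taking quotients. The snippet below verifies this numerically; the parameter values and the truncation level are ours, chosen only for illustration.

```python
import math

def rho(j_k, j_m1, k, n_max=25):
    # Unnormalised weights of eq. (eq21): P(N^k = n) * P(N^{-1} = k n)
    # for independent Poisson(j_k) and Poisson(j_m1), then normalised.
    w = [j_k ** n / math.factorial(n) * j_m1 ** (k * n) / math.factorial(k * n)
         for n in range(n_max)]
    z = sum(w)
    return [x / z for x in w]

k, j_k, j_m1 = 3, 2.0, 1.5
Phi = j_m1 ** k * j_k                     # the closed walk characteristic
r = rho(j_k, j_m1, k)
for n in range(1, 15):                    # rho(n-1)/rho(n) = n * prod_i (k n - i) / Phi
    lhs = r[n - 1] / r[n]
    rhs = n * math.prod(k * n - i for i in range(k)) / Phi
    assert abs(lhs / rhs - 1) < 1e-9
```

The super-polynomial decay of the ratio as $n$ grows is exactly the lighter-than-Poisson tail behavior discussed above.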
We derive the following result: \begin{theorem}\label{t70} Let $ \rho_{\Phi} $ be defined by \eqref{eq21}. Consider a $1$-Lipschitz function $f$. Then, for all $R>0$: \begin{equation}\label{eq24} \rho_{\Phi}\big(f \geq \mathbb{E}_{\rho_{\Phi}}(f)+R \big) \leq \exp\big(-(k+1)R \log R +[ \log(\Phi) + c ] R + o(R) \big) \end{equation} The constant $c$ is a structural constant which depends only on $k$. \end{theorem} In \eqref{eq24}, and in the rest of the paper, by $o(R)$ we mean a function $g$ such that $\lim_{R \rightarrow + \infty } g(R)/R = 0$. The $o(R)$ term in \eqref{eq24} can be made explicit: it depends on $\Phi$ and $k$, but not on $f$. By following the proof of this theorem carefully, it is possible to see that the bound \eqref{eq24} is interesting (i.e. the right hand side is $<1$) when $R \geq \Phi + \frac{1}{k+1} \Phi^{1/(k+1)}$. The bound is very accurate for $R$ large, see Remark \ref{Herbstrem}.
\begin{remark}\begin{enumerate}[(i)] \item The size of the large jump drives the leading order in the concentration rate, while the reciprocal characteristic is responsible for the exponential correction term. \item The larger $k$, the more concentrated the distribution is. This is because to compensate for a large jump a bridge has to make many small jumps, and this reduces the probability of large jumps. \item The smaller $\Phi$, the better the concentration. This fits with the short time interpretation of $\Phi$ given in \cite[Th.2.7]{CL15}. \end{enumerate} \end{remark}
\begin{remark}[Sharpness] It can be seen, using Stirling's approximation and \eqref{e120}, that the leading order term $ - (k+1)R \log R$ is optimal and that the linear dependence on $\log(\Phi)$ in the exponential correction term is correct. \end{remark}
The proof of this theorem is based on the construction of a measure $\pi_{\Phi}$ which interpolates $\rho_{\Phi}$ and for which the modified log Sobolev inequality (MLSI) gives sharp concentration results. Several MLSIs have been proposed for the Poisson distribution. We use the one considered in \cite{DP02} and \cite[Cor 2.2]{wu2000new}. The reason for this choice is that there are robust criteria (see \cite{CapPos07}) under which such an inequality holds. The interpolation argument is crucial to achieve the rate $-(k+1)R \log R$: indeed, the MLSI alone cannot yield anything better than $-R \log R$. Along the way, we revisit the classical Herbst argument for the MLSI, improving on some results of \cite{BOB98} (which were obtained by using a different MLSI).
\subsection{Bridges of CSRW on the square lattice: refined large deviations for the marginals.}
Let $v_1= (1,0),v_2=(0,1)$. The square lattice is defined by $\mathcal{X} = \mathbb{Z}^2$ and by saying that the neighbors of $x$ are $x \pm v_1$ and $x \pm v_2$. We associate to any vertex $x \in \mathbb{Z}^2$ the clockwise oriented face $\mathbf{f}_{x}$ and two closed walks of length two, $\mathbf{e}_{x,1},\mathbf{e}_{x,2}$ as follows: \begin{eqnarray*} \mathbf{f}_x &=& (x \to x+v_2 \to x + v_1 + v_2 \to x+v_1 \to x) \\ \mathbf{e}_{x,1} &=& (x \to x+v_1 \to x), \quad \mathbf{e}_{x,2} = ( x \to x + v_2 \to x ) \end{eqnarray*} The set of closed walks of length two is denoted $\mathcal{E}$: \begin{equation}\label{e91} \mathcal{E} = \{ (x\to y \to x) : (x \to y ) \in \mathcal{A} \} = \left\{ \mathbf{e}_{x,i}, x \in \mathcal{X}, i \in \{ 1,2\} \right\} \end{equation} The set of clockwise oriented faces is $\mathcal{F}$: \begin{equation*} \mathcal{F}:= \left\{ \mathbf{f}_{x} : \, x \in \mathbb{Z}^2 \right\}. \end{equation*}
In this subsection we prove an analogous statement to Theorem \ref{accordeon} for CSRWs on the square lattice.
A serious difficulty here is that the theory of concentration of measure inequalities with Poissonian rates is far less developed. In particular, none of the tools we use in the proof of Theorem \ref{accordeon} has a ``Poissonian'' counterpart. To the best of our knowledge, the only result concerning Poisson-type deviation bounds for the marginals of a continuous time Markov chain is due to Joulin \cite{Joulin2007Poisson}: in Theorem 3.1 there, the author provides abstract curvature conditions under which such bounds hold; however, the explicit construction of Markov generators fulfilling these conditions is limited to one-dimensional birth and death processes, see Section 4 there. Therefore, instead of using $\gamma(t)$ (see \eqref{e90}) we shall use a simpler way to measure the fluctuations of the bridge, adopting a large deviations viewpoint. We will look at asymptotic tail expansions, and relate the coefficients in the expansion to the reciprocal characteristics. This is a much rougher measurement, but it still gives interesting results.
We consider the $00$ bridge $\mathbb{P}^{00}$ of the simple random walk which jumps along any arc with intensity constantly equal to $\lambda$. Using some classical expansions (see Lemma \ref{lastlemma}) one finds that: \begin{equation}\label{e121} \log\Big( \mathbb{P}^{00} \big(d(X_t, \mathbb{E}_{\mathbb{P}^{00}}(X_t)) \geq R \big)\Big) = -2 R\log(R) + \big[ \log (4\lambda^2 t(1-t) \,) +2 \big]R + o(R) \end{equation}
Theorem \ref{squarelatticebound} provides a condition on the reciprocal characteristics for \eqref{e121} to hold with $=$ replaced by $\leq$. The conditions are expressed in terms of the characteristics associated with the closed walks in $\mathcal{E} \cup \mathcal{F}$. \begin{theorem}\label{squarelatticebound} Let $j:\mathcal{A} \rightarrow \mathbb{R}_{+}$ be the intensity of a CSRW $\mathbb{P}$ on the square lattice. Assume that for some $\lambda>0$: \begin{equation}\label{eq8} \forall x \in \mathbb{Z}^2, i \in \{1,2 \} \quad \Phi_j(\mathbf{e}_{x,i}) \leq \lambda^2 \end{equation} and \begin{equation}\label{eq7} \forall x \in \mathbb{Z}^2 \quad \Phi_j(\mathbf{e}_{x,2}) \Phi_j(\mathbf{e}_{x,1}) \leq \Phi_j(\mathbf{f}_x) \leq \Phi_j(\mathbf{e}_{x+v_1,2} ) \Phi_j(\mathbf{e}_{x+v_2,1}) \end{equation} Then for any $x \in \mathbb{Z}^2$: \begin{equation*} \log \mathbb{P}^{xx} \left ( d(X_t,\mathbb{E}_{\mathbb{P}^{xx}}(X_t)) \geq R \right) \leq -2\,R\log(R) + \big[ \log (4\lambda^2 t(1-t) \,) +2 \big] R + o(R) \end{equation*} \end{theorem} \tikzset{middlearrow/.style={
decoration={markings,
mark= at position 0.5 with {\arrow{#1}} ,
},
postaction={decorate},
thick,
red
} } \tikzset{earlyarrow/.style={
decoration={markings,
mark= at position 0.3 with {\arrow{#1}} ,
},
postaction={decorate},
thick,
blue
} } \tikzset{latearrow/.style={
decoration={markings,
mark= at position 0.7 with {\arrow{#1}} ,
},
postaction={decorate},
thick,
blue
} } \tikzset{glatearrow/.style={
decoration={markings,
mark= at position 0.7 with {\arrow{#1}} ,
},
postaction={decorate},
thick,
yellow
} } \tikzset{gearlyarrow/.style={
decoration={markings,
mark= at position 0.3 with {\arrow{#1}} ,
},
postaction={decorate},
thick,
yellow
} } \begin{figure}
\caption{ A visual explanation of condition \eqref{eq7}: the characteristic associated with the face $\mathbf{f}_x$ (red) has to be larger than the product of the characteristics associated with its left and lower sides (blue), and smaller than the product of the characteristics associated with its upper and right sides (yellow).}
\end{figure}
\begin{remark} The function $t \mapsto -\log(4 \lambda^2 t(1-t))$ plays the same role as $\gamma(t)$ in the diffusion case, and it features the same stylized facts we observed for $\gamma(t)$. \end{remark} \begin{remark} \begin{enumerate}[(i)] \item One nice aspect of \eqref{eq8} and \eqref{eq7} is that they are local conditions, that is, for a given $\mathbf{f}_x$ they depend only on the closed walks of length two that intersect $\mathbf{f}_x$. \item The fact that $j$ fulfills the hypotheses of the theorem does \textit{not} imply that $j (z \to z')\leq \lambda$ on every arc of the lattice. This means that there exist CSRWs whose tails are heavier than those of the simple random walk, but whose bridges have lighter tails than the bridge of the simple random walk. \item These conditions are easy to check and there are many jump intensities satisfying them: indeed we show in Lemma \ref{faceexistence} that for any $\varphi: \mathcal{E} \cup \mathcal{F} \rightarrow \mathbb{R}_{+}$ there exists at least one intensity $j$ such that $\Phi_j(\mathbf{c}) = \varphi(\mathbf{c})$ over $\mathcal{E} \cup \mathcal{F}$. \item In the proof of the theorem it is seen how condition \eqref{eq7} ensures that, among the simple closed walks with the same perimeter, the ones with smallest area are those which have the largest value of $\Phi_j(\cdot)$. \end{enumerate} \end{remark}
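As an elementary sanity check, consider the constant intensity $j \equiv \lambda$, i.e. the simple random walk itself. Assuming that the characteristic of a closed walk is the product of the jump intensities along its arcs (we use this only for illustration), we get
\begin{equation*}
\Phi_j(\mathbf{e}_{x,i})=\lambda^{2}, \qquad \Phi_j(\mathbf{f}_x)=\lambda^{4},
\end{equation*}
so that \eqref{eq8} holds with equality, and
\begin{equation*}
\Phi_j(\mathbf{e}_{x,2})\,\Phi_j(\mathbf{e}_{x,1}) = \lambda^{4} = \Phi_j(\mathbf{f}_x) = \Phi_j(\mathbf{e}_{x+v_1,2})\,\Phi_j(\mathbf{e}_{x+v_2,1}),
\end{equation*}
so that both inequalities in \eqref{eq7} are saturated: the simple random walk is the extremal case of Theorem \ref{squarelatticebound}, consistently with \eqref{e121}.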
The idea of the proof of Theorem \ref{squarelatticebound} is that the local conditions we impose on the faces ensure that for any closed walk $\mathbf{c}$, $\Phi_j(\mathbf{c})$ can be controlled in terms of $\lambda^{\ell(\mathbf{c})}$. We then use a modification of Girsanov's theorem for bridges, which gives us a form of the density in terms of the reciprocal closed walk characteristics, and conclude that such a density has a global upper bound on path space. It is likely that one can relax \eqref{eq8} and \eqref{eq7} by imposing them only in the limit when $\| x \| \uparrow +\infty$. To simplify the presentation and the proofs, we did not consider this case.
\subsection{General graphs}
Here, we consider a graph $(\mathcal{X},\to)$ satisfying Assumption \ref{as-03} below and a continuous time random walk $\mathbb{P}^x$ on $(\mathcal{X},\to)$ with intensity $j$.
Our aim is to prove a result similar to Theorem \ref{squarelatticebound}. As the notion of faces does not exist for general graphs, we work with its natural substitute: the \textit{basis of closed walks}. This notion is a slight generalization of the notion of cycle basis for an undirected graph, for which we refer to \cite[Sec. 2.6]{bondy2008graph}.
\subsubsection*{Trees and basis of the closed walks}
Prior to the definition, let us recall some terminology about graphs. A subgraph of $(\mathcal{X},\to)$ is a graph on $\mathcal{X}$ whose arc set is included in the arc set $\mathcal{A}$ of $(\mathcal{X},\to)$. We say that two subgraphs intersect if their arc sets do so, and we say that one is included in the other if their arc sets are so. Let us recall that for a given vertex $z\in \mathcal{X}$, its outer degree is $\mathbf{deg}(z):=| \{ z'': (z \to z'') \in \mathcal{A}\}| $. As in the previous subsection, the set of closed walks of length two is denoted $\mathcal{E}$. Figure \ref{grafogen} helps in understanding the next definition.
\begin{mydef}[Tree and basis of closed walks]\label{defs-01} Let $(\mathcal{X}, \to)$ be a graph fulfilling Assumption \ref{as-03}.
\begin{enumerate}[(a)] \item We call \emph{tree} a symmetric connected subgraph $\mathcal{T}$ of $(\mathcal{X},\to)$ which spans\footnote{i.e. it connects all vertices of $(\mathcal{X},\to)$} $\mathcal{X}$ and does not have closed walks of length at least three.\footnote{Closed walks of length two are allowed, as the graph is symmetric.} \item For a tree $\mathcal{T}$, $\cE^*$ is the set of closed walks of length two which do not intersect $\mathcal{T}$.
\begin{equation}\label{e61} \cE^* = \{ \mathbf{e} \in \mathcal{E} : \mathbf{e} \cap \mathcal{T} = \emptyset \} \end{equation}
\item For any $(x \to y) \in \mathcal{A} \setminus \mathcal{T}$ we denote $\mathbf{c}_{x \to y}$ the closed walk obtained by concatenating $(x \to y)$ with the only simple directed walk from $y$ to $x$ in $\mathcal{T}$. \item Let $\mathcal{T}$ be a tree. A $\mathcal{T}$-\emph{basis of the closed walks} of $(\mathcal{X}, \to)$ is any subset $\mathcal{C}$ of closed walks of the form: \begin{equation*} \mathcal{C} = \mathcal{C}^*\cup \mathcal{E} \end{equation*}
where $\mathcal{C}^*$ is obtained by choosing for any $\mathbf{e}=(x \to y \to x) \in \cE^* $ exactly one among $\mathbf{c}_{x \to y}$ and $\mathbf{c}_{y \to x}$. We denote the chosen element by $\mathbf{c}_{\mathbf{e}}$. \end{enumerate} \end{mydef}
\GraphInit[vstyle = Shade] \tikzset{
LabelStyle/.style = { rectangle, rounded corners, draw,
minimum width = 2em, fill = yellow!50,
text =black, font = \bfseries },
VertexStyle/.append style = { inner sep=5pt,
font = \Large\bfseries, ball color = lightgray},
EdgeStyle/.append style = {->, bend left=10, double=blue}} \thispagestyle{empty}
\begin{figure}
\caption{Left: The blue arcs form a tree $\mathcal{T}$: each pair of red arcs forms an element of $\mathcal{E}^*$. Right: The closed walk $\mathbf{c}_{d \to a}$ is obtained by concatenating $(d \to a )$ with the unique simple walk in $\mathcal{T}$ from $a$ to $d$ (blue).}
\label{grafogen}
\end{figure}
Theorem \ref{treebound} gives a condition to control the tails of $ d(X_t,x)$ under $\mathbb{P}^{xx}$.
\begin{theorem}\label{treebound} Let $(\mathcal{X},\to)$ be a directed graph satisfying Assumption \ref{as-03}, $1/\delta$ be its maximum outer degree. Let $j:\mathcal{A} \rightarrow \mathbb{R}_+$ be the intensity of a CSRW $\mathbb{P}$ satisfying Assumption \ref{as-01}. If, for some tree $\mathcal{T}$ and a $\mathcal{T}$-basis of the closed walks $\mathcal{C}$: \begin{equation}\label{eq1} \forall \mathbf{e} \in \mathcal{E},\quad \Phi_j(\mathbf{e}) \leq (\lambda\delta)^{2} \end{equation}
\begin{equation}\label{eq2} \forall \mathbf{e} \in \mathcal{E}^*, \quad {(\lambda \delta)}^{1 - \ell(\mathbf{e}) } \Phi_j({\mathbf{e}}) \leq \Phi_j({\mathbf{c}_{\mathbf{e}}}) \leq {(\lambda \delta)}^{ \ell(\mathbf{e}) -1 } \prod_{ \stackrel{\mathbf{e}' \in \mathcal{E}, \mathbf{e}' \neq \mathbf{e} } {\mathbf{e}' \cap \mathbf{c}_{\mathbf{e}} \neq \emptyset}} \Phi_j({\mathbf{e}'}) \end{equation} Then for any $x \in \mathcal{X}$ and any $t \in [0,1]$, $R>0$: \begin{equation}\label{e29} \log \mathbb{P}^{xx} \left ( d(X_t,x) \geq R \right) \leq -2R\log R + [2 + 2 \log (\lambda t(1-t) ) + 3 \log(\delta - 1) ]R + o(R) \end{equation} \end{theorem} The proof of the theorem is divided into two steps. In the first step one shows that for some constant $c$, $\mathbb{P}^{xx}(d(X_t,x) \geq R) \leq c \, \bbS^{xx}_{\lambda}(d(X_t,x) \geq R)$, where $\bbS^x_{\lambda}$ is the CSRW defined by: \begin{equation}\label{e93} j(z \to z') = \frac{\lambda}{\mathbf{deg}(z)}, \quad \forall z \to z' \in \mathcal{A}. \end{equation} The second step consists in estimating $\bbS^{xx}_{\lambda}(d(X_t,x) \geq R)$ with the right hand side of \eqref{e29}. Clearly, due to the fact that $(\mathcal{X},\to)$ has no specific structure, the estimate we obtain is less precise than the one of Theorem \ref{squarelatticebound}. However, it displays the same type of decay for the tails: a leading term of order $-R \log R$ and a correction term of order $R$.
\begin{remark} We show in Lemma \ref{constspeedconstrlemma} that to any $\varphi: \mathcal{C} \rightarrow \mathbb{R}_{+}$ we can associate a CSRW whose reciprocal characteristics coincide with $\varphi$ over $\mathcal{C}$. This shows that the conditions \eqref{eq1} and \eqref{eq2} are fulfilled by a large class of Markov jump intensities. It can be seen that there exists no tree of the square lattice such that a cycle basis associated with it coincides with the faces of the lattice. Therefore Theorem \ref{squarelatticebound} is not implied by Theorem \ref{treebound}. \end{remark}
\section{Proof of the main results}
\subsection*{Proof of Theorem \ref{accordeon} }
\subsubsection*{Preliminaries} We define $p^x_t(z)$ as the density of the marginal $\mathbb{P}^x_t$, and $p^{xy}_t(z) $ as the density of $\mathbb{P}^{xy}_t$. Clearly, if $U$ does not depend on time, we have the relation: \begin{equation*} p^{xy}_t(z) = \frac{p^x_t(z)p^{z}_{1-t}(y) }{p^x_1(y)}
\end{equation*}
$^{\alpha}p^x_t(\cdot)$ and $^{\alpha}p^{xy}_t(\cdot)$ are defined accordingly.
As Ornstein--Uhlenbeck processes are Gaussian processes, for any finite set $I = \{0=t_0, t_1,t_2,..,t_{l}\} \subseteq [0,1]$ there exists a positive definite quadratic form $\Sigma^{\alpha}_{I}$ over $\mathbb{R}^{d \times (l+1)}$ such that $\forall A \subseteq \mathbb{R}^{d \times l }$ and $x \in \mathbb{R}^d$: \begin{eqnarray}\label{eq10}
^{\alpha}\mathbb{P}^{x}(X_{I} \in A) &=& \int_{A} \exp \big(- \Sigma^{\alpha}_{I}( x,x^1,..,x^l ) \big) dx^1..dx^l \\ \nonumber &=& \int_{A} \left[ {}^{\alpha}p^{x}_{t_1} (x^1) \prod_{j=2}^{l} {}^{\alpha}p^{x^{j-1}}_{\Delta t_j}(x^j) \right] dx^1..dx^l \end{eqnarray} where we set $\Delta t_j := t_j - t_{j-1}$. Using the transition density of the Ornstein--Uhlenbeck process (see e.g. \cite[Section 5.6]{KarShreve}), we can write down the explicit expression of $\exp\big(-\Sigma^{\alpha}_{I}\big)$: \begin{equation*} \exp\big(-\Sigma^{\alpha}_{I}(x^0,x^1,..,x^l)\big) = \prod_{j=1}^{l} \sqrt{ \frac{\alpha}{\pi(1- e^{-2\alpha \Delta t_j} ) } } \exp \left( -\frac{\alpha}{ (1- e^{-2\alpha \Delta t_j }) } (x^j -e^{-\alpha \Delta t_j} x^{j-1} )^2\right) \end{equation*} where we set $t_0=0$ and $x^0=x$. In particular, we will be interested in the case when $I$ is the set $\Pi_m$ defined as: \begin{equation}\label{eq44} \Pi_m = \{0, 1/m, .. , (m-1)/m , 1 \} \end{equation} For $t\in [0,1]$, we define \begin{equation}\label{j(t)}
j(t) = \max\{ j: j/m < t \}, \quad \Pi^{ < t}_m =\{0, 1/m,..,j(t)/m,t \} \end{equation} We can now prove Theorem \ref{accordeon}. \begin{proof} In a first step we show that the density of $\bbP^{xy}$ with respect to the Brownian bridge $\bbW^{xy}$ is given by \begin{equation}\label{e55} \frac{d \mathbb{P}^{xy}}{d \mathbb{W}^{xy}}= \frac{1}{Z}\exp \left(- \int_{0}^{1} \mathscr{U} (t,X_t) dt \right):= M \end{equation} where $\mathscr{U}$ has been defined at \eqref{e16} and $Z$ is a normalization constant. To do this, we show that the measure \begin{equation*} \mathbb{Q} := M \, \mathbb{W}^{xy} \end{equation*} fulfills the hypotheses of the duality formula by Roelly and Thieullen, see Theorem \ref{IBPF} in the Appendix. It can be easily verified that the regularity hypotheses are satisfied by $\mathbb{Q}$, because of the regularity of the transition density of the Brownian bridge and of the smoothness of $\mathscr{U}$. Moreover, $\mathbb{Q}((X_0,X_1 )=(x,y))=1$. Let us now compute the derivative $\mathcal{D}_h$ of $M$. We have: \begin{eqnarray}\label{eq:frechetdensity} \nonumber \mathcal{D}_h M(X) &=& \lim_{\varepsilon \rightarrow 0} \frac{1}{\varepsilon} \left( M(X+ \varepsilon h) -M(X) \right) \\ \nonumber &=& \lim_{\varepsilon \rightarrow 0} \frac{1}{Z\, \varepsilon} \left[ \exp\left(-\int_0^1 \mathscr{U}(t,X_t+ \varepsilon h(t))dt \right)- \exp\left(-\int_0^1 \mathscr{U}(t,X_t )dt \right)\right]\\ &=& \nonumber \frac{1}{Z} \left[-\int_0^1 \nabla \mathscr{U}(t,X_t) \cdot h(t) dt \right] \exp \left( - \int_0^1 \mathscr{U}(t,X_t) dt \right) \\
&=& \left[-\int_0^1 \nabla \mathscr{U}(t,X_t)\cdot h(t) dt \right] M \end{eqnarray}
Now let us consider an arbitrary simple functional $F$. By using Theorem \ref{IBPF}\footnote{For the application we are going to make of the duality formula to be completely justified one should extend its validity from the simple functionals to the differentiable functionals. A simple approximation argument, which we do not present here, takes care of that. } for the Brownian bridge $\mathbb{W}^{xy}$, Leibniz's rule and \eqref{eq:frechetdensity} we obtain:
\begin{eqnarray*}
\mathbb{Q} \Big( \mathcal{D}_h F \Big)& = & \mathbb{W}^{xy} \Big( (\mathcal{D}_h F) M\Big) \\ &=& \mathbb{W}^{xy} \Big( \mathcal{D}_h (F M) \Big)- \mathbb{W}^{xy} \Big( F (\mathcal{D}_h M) \Big) \\ &=&\mathbb{W}^{xy} \Big((F M) \int_{0}^1 \dot{h}(t) \cdot dX_t \Big)+ \mathbb{W}^{xy} \Big( (F M ) \int_{0}^{1}\nabla\mathscr{U}(t,X_t) \cdot h(t) dt \Big) \\ &=&\mathbb{Q} \Big(F \left[ \int_{0}^1 \dot{h}(t) \cdot dX_t + \int_{0}^{1}\nabla\mathscr{U}(t,X_t) \cdot h(t) dt \right] \Big)
\end{eqnarray*}
from which \eqref{e55} follows, because of the arbitrary choice of $F$. As a by-product, we obtain that, if we choose $\alpha$ as in \eqref{e17}, we have:
\begin{equation*}
\frac{d \mathbb{P}^{xy} }{ d {}^{\alpha}\mathbb{P}^{xy}} = \exp\Big( \int_{0}^{1}V(t,X_t) dt \Big) \end{equation*}
where
\begin{equation*}
V(t,X_t) =\frac{1}{2}\alpha^2 \| X_t \|^2-\mathscr{U}(t,X_t) - \log (\, Z \,)
\end{equation*} Note that because of \eqref{e17}, $V(t,\cdot)$ is concave for all $t$. The next step in the proof is to show that $z \mapsto \frac{d\mathbb{P}^{xy}_t}{d ^{\alpha}\mathbb{P}^{xy}_{t}}(z)$ is log concave. To do this we will show that $(x,z,y)\mapsto \frac{d \mathbb{P}^{xy}_t}{d ^{\alpha}\mathbb{P}^{xy}_{t}}(z)$ is log concave, which is a slightly stronger statement. To this aim, we observe that, applying the Markov property for $^{\alpha}\mathbb{P}^{xy}$, we have: \begin{eqnarray}\label{e51}
\nonumber \frac{d \mathbb{P}^{xy}_t}{d ^{\alpha}\mathbb{P}^{xy}_{t}}(z) &=& \mathbb{E}_{^\alpha \mathbb{P}^{xy}} \left( \exp \left(\int_{0}^{1} V(s,X_s)ds \right) \big | X_t = z \right) \\
&=& \mathbb{E}_{^\alpha \mathbb{P}^{xy}} \left( \exp \left(\int_{0}^{t} V(s,X_s)ds \right) \big | X_t = z \right) \\
\nonumber &\times &\mathbb{E}_{^\alpha \mathbb{P}^{xy}} \left( \exp \left(\int_{t}^{1} V(s,X_s) ds \right) \big | X_t = z \right) \end{eqnarray}
We show that each factor is a log concave function of $(x,y,z)$. Let us consider the first factor. A further application of the Markov property for $\mathbb{P}^x$ gives:
\begin{equation*}
\mathbb{E}_{^\alpha \mathbb{P}^{xy}} \left( \exp \left(\int_{0}^{t} V(s,X_s)ds \right) \big | X_t = z \right) = \mathbb{E}_{^\alpha \mathbb{P}^{x}} \left( \exp \left(\int_{0}^{t} V(s,X_s)ds \right) \big | X_t = z \right):= G(x,z)
\end{equation*} Consider a discretisation parameter $m \in \mathbb{N}$, and $\Pi_m$, $j(t)$, $\Pi^{<t}_m$ as in \eqref{eq44}, \eqref{j(t)} and define: \begin{eqnarray*} \mathcal{I}_m &:& \mathbb{R}^{d \times( j(t)+2)} \rightarrow \mathbb{R} \\ \mathcal{I}_m (x,x^1,..,x^{j(t)+1}) &=& \frac{1}{m} V(0,x)+ \\ &+&\frac{1}{m} \sum_{1 \leq j \leq j(t)-1} V(j/m,x^j) \, + (t- j(t)/m) V(j(t)/m,x^{j(t)}) \end{eqnarray*} and
\begin{eqnarray*} G^m(x,z) &:=& ^{\alpha}\mathbb{P}^{x} \Big( \exp( \mathcal{I}_m(X_{\Pi^{<t}_m}) ) \big | X_t = z \Big) \\
&=& ^{\alpha}\mathbb{P}^{x}_{\Pi^{<t}_m}\Big( \exp\big( \mathcal{I}_m(x,x^1,..,x^{j(t)},x^{j(t)+1}) \big) \big | x^{j(t)+1} = z \Big) \end{eqnarray*} Clearly, $G^m(x,z) \rightarrow G(x,z)$ pointwise. The conditional density of ${}^{\alpha}\mathbb{P}^{x}_{\Pi^{<t}_m}$ given $X_t =z$ is: \begin{equation}\label{e46} \frac{1}{{^{\alpha}}p^x_t(z)} \times \Big[{^{\alpha}} p^{ x}_{1/m}(x^1) \left( \prod_{j=2}^{j(t)}{^{\alpha}}p^{ x^{j-1} }_{1/m}(x^j) \right) {^{\alpha}}p^{x^{j(t)}}_{t-j(t)/m}(z) \Big] \end{equation} Using \eqref{eq10} we rewrite both the numerator and the normalization factor at the denominator to obtain the following equivalent expression for the conditional density: \begin{equation*} \frac{\exp \big(- \Sigma^{\alpha}_{\Pi^{<t}_m}(x, x^1,...,x^{j(t)},z) \big)}{ \int_{\mathbb{R}^{d \times j(t)}} \exp \big(- \Sigma^{\alpha}_{\Pi^{<t}_m}(x, x^1,...,x^{j(t)},z) \big) dx^1..dx^{j(t)}} \end{equation*}
which then gives
\begin{equation*} G^m(x,z):=\frac{ \int_{\mathbb{R}^{d \times j(t)} } \exp \big( \mathcal{I}_m(x,x^1,..,x^{j(t)},z) - \Sigma^{\alpha}_{\Pi^{<t}_m}(x, x^1,...,x^{j(t)},z) \big) dx^1..dx^{j(t)} }{ \int_{\mathbb{R}^{d \times j(t)}} \exp \big(- \Sigma^{\alpha}_{\Pi^{<t}_m}(x, x^1,...,x^{j(t)},z) \big) dx^1..dx^{j(t)}}
\end{equation*}
By means of the identifications
\begin{eqnarray*} w &\hookrightarrow & (x,x^1,..,x^{j(t)},z ) \in \mathbb{R}^{d \times j(t)+2}\\ v &\hookrightarrow & (x^1, .. , x^{j(t)}) \in \mathbb{R}^{d \times j(t)} \\ v'&\hookrightarrow & (x,z) \in \mathbb{R}^{d \times 2} \\ F(w)&\hookrightarrow & \exp(\mathcal{I}_m(w) ) \end{eqnarray*} we can then rewrite $G^m(x,z)$ as the right hand side of \eqref{eq50}. By the hypothesis \eqref{e17} $V(t,\cdot)$ is concave for any $t \in [0,1]$. Hence $\mathcal{I}_m$ is concave as well. Therefore we can apply Theorem \ref{thm:logconcpres} to conclude that $G^m(x,z)$ is log-concave for all $m$, and therefore so is the limit. This concludes the proof that the first of the two appearing in \eqref{e51} is log concave. With the same argument we have just used, one shows that also the other factor is log concave and therefore $\frac{d {}^{\alpha}\mathbb{P}^{xy}_t}{ d \mathbb{P}^{xy}_t}$ is log concave. This tells us that: \begin{equation}\label{eq53}
\inf_{z \in \mathbb{R}^d, v \in \mathbb{R}^d, \| v \|=1} - v \cdot \mathbf{Hess}(\log p^{xy}_t) (z) \cdot v \geq \inf_{z \in \mathbb{R}^d, v \in \mathbb{R}^d, \| v\|=1} - v \cdot \mathbf{Hess}(\log {}^{\alpha}p^{xy}_t) (z) \cdot v \end{equation}
The explicit expression for $^{\alpha}p^x_t(z)$ is well known, see e.g. \cite[Section 5.6]{KarShreve}: \begin{equation*}
^{\alpha}p^x_t(z) = \sqrt{\frac{\alpha}{\pi (1 -\exp(-2\alpha t))}}\exp \left( -\frac{\alpha}{(1- e^{-2 \alpha t })} \| z - x e^{-\alpha t} \|^2 \right) \end{equation*} Therefore, as a function of $z$: \begin{eqnarray*} ^{\alpha}p^{xy}_t(z) &\propto & ^{\alpha}p^{x}_t(z) ^{\alpha}p^{z}_{1-t}(y) \\
&\propto & \exp \left(-\frac{\alpha}{(1- e^{-2 \alpha t })} \| z - x e^{-\alpha t} \|^2 - \frac{\alpha}{(1- e^{-2 \alpha (1-t) })} \| y - z e^{-\alpha (1-t)} \|^2 \right) \end{eqnarray*} It is then an easy computation to show that $\mathbf{Hess}(\log {}^{\alpha}p^{xy}_t ) (z) = -{}\gamma_{\alpha}(t) \mathbf{id}$, where $\gamma_{\alpha}(t)$ was defined at \eqref{e40}. Using \eqref{eq53}, the conclusion follows from Theorem 2.7 in \cite{Led01}. \end{proof}
\begin{remark}\label{noway} In \cite[Th. 6.1]{BrLieb76} log-concavity of solutions to \begin{equation*} \partial_t \phi(t,z) - \frac{1}{2} \Delta \phi (t,z) + V(z)\phi(t,z) = 0. \end{equation*} is established when $V$ is convex. Define now $\phi(t,z)$ as the second factor in \eqref{e51} and assume for simplicity that $\alpha=0$ and $V$ not to depend on time: \begin{equation*}
\phi(t,z):= \mathbb{E}_{\mathbb{W}^{xy}} \left( \exp \left(\int_{t}^{1} V(X_s) ds \right) \big | X_t = z \right) \end{equation*} Using the Feynman--Kac formula and the expression for the drift of the Brownian bridge we have that $\phi$ solves \begin{equation*} \partial_t \phi(t,z) + \frac{1}{2} \Delta \phi (t,z) + \frac{(y-z)}{(1-t)} \cdot \nabla \phi(t,z) + V(z)\phi(t,z) = 0. \end{equation*} Log-concavity of $\phi$ when $V$ is concave is a by-product of the proof of Theorem \ref{accordeon}. \end{remark} \subsection*{Proof of Theorem \ref{t70} }
The main steps of the proof are Lemmas \ref{p460} and \ref{p600}. In Lemma \ref{p460} we revisit Herbst's argument, while in Lemma \ref{p600} we show that the auxiliary measure $\pi_{\Phi}$ constructed below satisfies a MLSI from which sharp concentration bounds can be obtained.
\subsubsection*{A refined Herbst's argument} We apply Herbst's argument to a modified log Sobolev inequality studied, among others, by Dai Pra, Paganoni, and Posta in \cite{DP02}. In their Proposition 3.1 they show that the Poisson distribution $\mu_{\lambda}(\cdot)$ of mean $\lambda$ satisfies the following inequality: \begin{equation}\label{e314} \forall f >0, \quad \mathbb{E}_{\mu_{\lambda}} \Big( f \log f \Big) - \mathbb{E}_{\mu_{\lambda}} (f) \log(\mathbb{E}_{\mu_{\lambda}} (f)) \leq \lambda \mathbb{E}_{\mu_{\lambda}} \Big(\nabla f \nabla \log f \Big)
\end{equation} where $\nabla f(n)$ is the discrete gradient $f(n+1)-f(n)$.
\begin{lemma}\label{p460} Let $\mu_{\lambda}$ satisfy \eqref{e314}. Then for any 1-Lipschitz function $f: \mathbb{N} \rightarrow \mathbb{R}$ and all $R>0$: \begin{equation}\label{e315} \mu_{\lambda} \Big(f \geq \mathbb{E}_{\mu_{\lambda}} \big(f \big) + R \Big) \leq \exp\left(-(R+2\lambda)\log\big(1+\frac{R}{2\lambda}\big) +R \right) \end{equation} In particular, \begin{equation*} \mu_{\lambda} \Big(f \geq \mathbb{E}_{\mu_{\lambda}} \big(f \big) + R \Big) \leq \exp\left(-R \log R +[\log(2 \lambda) +1]\, R +o(R)\right). \end{equation*} \end{lemma}
\begin{remark}\label{Herbstrem}
We are able to improve the concentration rate obtained in \cite[Prop. 10]{BOB98} and \cite[Cor 2.2]{wu2000new} for the Poisson distribution. For instance, in \cite{BOB98} the following deviation bound for 1-Lipschitz functions is obtained under the Poisson distribution $\mu_{\lambda}$ of parameter $\lambda$: \begin{equation}\label{e25} \mu_{\lambda} \left( f \geq \mathbb{E}_{\mu_{\lambda}}(f) + R \right) \leq \exp \left( -\frac{R}{4} \log \left(1+\frac{R}{2\lambda}\right)\right) \end{equation} Note that the exponent on the right hand side can be rewritten as $-\frac{R}{4} \log(R) + \frac{\log (2 \lambda)}{4}R + o(R)$. We improve \eqref{e25} to \begin{equation}\label{e26} \mu_{\lambda} (f \geq \mathbb{E}_{\mu_{\lambda}}(f) + R ) \leq \exp\left(-(R+2\lambda)\log\left(1+\frac{R}{2\lambda}\right) +R \right) \end{equation} In this case, the rate has the form $ \exp(-R \log R + (\log(\lambda)+1+\log(2)) R + o(R)) $. This rate is sharp in the leading term $-R \log(R)$. Indeed, if one uses the explicit form of the Laplace transform of $\mu_{\lambda}$ one gets the following deviation bound for the identity function (see e.g. Example 7.3 in \cite{Ross11}): \begin{equation}\label{e27} \mu_{\lambda} \left( n \geq \mathbb{E}_{\mu_{\lambda}}(n) + R \right) \leq \exp\left(-R\left(\log \left(1+\frac{R}{\lambda} \right)-1\right) - \lambda \log \left(1+\frac{R}{\lambda} \right)\right) \end{equation} The rate here is of the form $-R \log R +(\log(\lambda)+1)R +o(R)$. This shows that \eqref{e315} is sharp in the leading term and has the right dependence on $\lambda$ in the exponential correction term. Concerning the constant appearing in the exponential correction term, we obtain $1 + \log(2)$. We do not know whether this is sharp or not; however, nothing better than $1$ can reasonably be expected, because of \eqref{e27}. \end{remark}
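To make the improvement concrete, take $\lambda=1$ and $R=10$, so that $\log(1+R/(2\lambda))=\log 6\approx 1.79$. The bound \eqref{e25} gives
\begin{equation*}
\exp\Big(-\tfrac{10}{4}\log 6\Big)\approx e^{-4.5}\approx 1.1\cdot 10^{-2},
\end{equation*}
while \eqref{e26} gives
\begin{equation*}
\exp\big(-12\log 6+10\big)\approx e^{-11.5}\approx 1.0\cdot 10^{-5},
\end{equation*}
an improvement of three orders of magnitude already at this moderate deviation scale.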
\begin{proof} Let $f$ be 1-Lipschitz. It is then standard to show that $f$ has exponential moments of all orders. Therefore, all the expectations we are going to consider in the next lines are finite. Let us define: \begin{equation*} \varphi_{\tau}:= \mathbb{E}_{\mu_{\lambda}} \big( \exp(\tau f) \big), \quad \psi_{\tau}:= \log \mathbb{E}_{\mu_{\lambda}}\left( \exp\left(\tau f\right) \right) \end{equation*} We apply the inequality \eqref{e314} to $\exp( \tau f)$. Note that the left hand side reads as $ \tau \partial_{\tau} \varphi_{\tau} - \varphi_{\tau} \psi_{\tau}$. The right hand side can be written as \begin{equation*} \lambda \tau \mathbb{E}_{\mu_{\lambda}}( \exp(\tau f) [\exp(\tau \nabla f)-1] \, \nabla f ) \end{equation*} Using that $f$ is 1-Lipschitz and the elementary fact that
for all $\tau>0$ $ \sup_{y \in [-1,1]} | y [\exp(\tau y) -1] | =\exp(\tau)-1$ we can bound the above expression by: $$ \lambda \tau [ \exp(\tau)-1 ] \, \mathbb{E}_{\mu_{\lambda}}\big( \exp(\tau f) \big)= \lambda \tau [\exp(\tau)-1]\varphi_{\tau}$$
We thus get the following differential inequality: \begin{equation}\label{e325} \tau \partial_{\tau} \varphi_{\tau} - \varphi_{\tau} \psi_{\tau} \leq \lambda \tau \varphi_{\tau} (\exp(\tau)-1) \end{equation} Dividing both sides by $\varphi_{\tau}$, and using the chain rule, it can be rewritten as a differential inequality for $\psi$: \begin{equation}\label{e326} {\tau} \partial_{\tau} \psi_{\tau} - \psi_{\tau} \leq \lambda {\tau}(\exp({\tau})-1) , \quad \partial_{\tau} \psi_0 = \mathbb{E}_{\mu_{\lambda}}(f),\psi_0=0 \end{equation} The ODE corresponding to this inequality is \begin{equation}\label{e321} {\tau} \partial_{\tau} h_{\tau} - h_{\tau} = \lambda {\tau}(\exp({\tau})-1) , \quad \partial_{\tau} h_0 = \mathbb{E}_{\mu_{\lambda}}(f), h_0=0 \end{equation} Note that the condition $h_0=0$ is implied by the form of the equation, and it is not an additional constraint. Equation \eqref{e321} admits a unique solution, given by: \begin{equation}\label{e320} h_{\tau} = {\tau} \mathbb{E}_{\mu_{\lambda}}\big(f\big)+\lambda \tau \gamma({\tau}) \end{equation} where \begin{equation}\label{e322} \gamma({\tau}) = \sum_{k=1}^{+\infty} \frac{1}{k} \frac{{\tau}^k}{k!} \end{equation} The fact that \eqref{e320} is the solution to \eqref{e321} can be checked directly by differentiating term by term the series defining $\gamma$ in \eqref{e322}. We claim that \begin{equation}\label{e323}\forall \tau \geq 0 \quad \psi_{\tau}\leq h_{\tau} \end{equation} The proof of this claim is postponed to the Appendix, see Proposition \ref{p461}. Given \eqref{e323}, a standard application of Markov's inequality yields:
\begin{equation*} \mu_{\lambda}\big(f \geq \mathbb{E}_{\mu_{\lambda}}(f)+R \big) \leq \exp\left( \inf_{\tau \geq 0}\left\{\psi_{\tau} -\tau \mathbb{E}_{\mu_{\lambda}}(f)- \tau R\right\} \right)\leq \exp \left(\inf_{\tau >0} \left\{ \lambda \tau \gamma(\tau) -\tau R \right\} \right) \end{equation*} We can bound $\gamma$ in an elementary way:
\begin{equation*} \gamma(\tau)=\sum_{k=1}^{+\infty} \frac{1}{k} \frac{{\tau}^k}{k!} \leq \frac{2}{\tau} \sum_{k=1}^{+\infty} \frac{{\tau}^{k+1}}{(k+1)!} = 2\frac{\exp(\tau)-\tau-1}{\tau} \end{equation*} and therefore: \begin{equation*} \mu_{\lambda}(f \geq \mathbb{E}_{\mu_{\lambda}}(f) + R ) \leq \exp \left(\inf_{\tau > 0} \left\{ 2 \lambda \exp(\tau) - (2\lambda + R)\tau -2\lambda \right\} \right) \end{equation*} Solving the optimization problem yields the conclusion. \end{proof}
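For the reader's convenience, the optimization above can be carried out explicitly (an elementary calculus step, for $R>0$): the function $\tau \mapsto 2 \lambda \exp(\tau) - (2\lambda + R)\tau -2\lambda$ is convex and its derivative vanishes at $\tau^* = \log\big(1+\frac{R}{2\lambda}\big)$, so that
\begin{equation*}
\inf_{\tau > 0} \left\{ 2 \lambda \exp(\tau) - (2\lambda + R)\tau -2\lambda \right\} = R - (2\lambda + R)\log\Big(1+\frac{R}{2\lambda}\Big) ,
\end{equation*}
whence the Poisson-type tail bound
\begin{equation*}
\mu_{\lambda}\big(f \geq \mathbb{E}_{\mu_{\lambda}}(f) + R \big) \leq \exp\Big( R - (2\lambda + R)\log\Big(1+\frac{R}{2\lambda}\Big) \Big).
\end{equation*}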
\subsubsection*{An interpolation}The idea behind the proof of Theorem \ref{t70} is to construct a measure $\pi_{\Phi}$ (see Definition \ref{d107}) which ``interpolates'' $\rho_{\Phi}$ and for which the MLSI \eqref{e314} gives sharp concentration bounds. \begin{mydef}\label{d107} Let $\rho_{\Phi}$ be defined by \eqref{eq21}. We define $\pi_{\Phi} \in \mathcal{P}(\mathbb{N})$ as follows: \begin{equation}\label{e330} \pi_{\Phi} \big( m \big)= \frac{1}{Z_{\Phi}} \rho_{\Phi} \big( n(m) \big)^{1- \alpha(m) }\rho_{\Phi} \big( n(m) +1 \big)^{ \alpha(m) } \end{equation} where \begin{equation}\label{e100} n(m) = \lfloor m/(k+1) \rfloor , \quad \alpha(m) = m/(k+1) -n(m) \end{equation} \end{mydef}
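To make the interpolation explicit: writing $m = (k+1)n + r$ with $0 \leq r \leq k$, Definition \ref{d107} gives $n(m)=n$ and $\alpha(m)=r/(k+1)$, hence
\begin{equation*}
\pi_{\Phi}\big( (k+1)n + r \big) = \frac{1}{Z_{\Phi}} \, \rho_{\Phi}(n)^{1-\frac{r}{k+1}} \rho_{\Phi}(n+1)^{\frac{r}{k+1}} .
\end{equation*}
In particular $\pi_{\Phi}\big( (k+1)n \big) = \rho_{\Phi}(n)/Z_{\Phi}$, so $\pi_{\Phi}$ coincides with $\rho_{\Phi}/Z_{\Phi}$ along the multiples of $k+1$, and interpolates geometrically between consecutive values of $\rho_{\Phi}$ in between.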
Another ingredient we shall use in the proof is the following criterion for MLSI, due to Caputo and Posta. What follows is a summary of some of their results from Section 2 of \cite{CapPos07}, adapted to our purposes. To keep track of the constants, we also use Lemma 1.2 of \cite{Led01}. We do not reprove these results here.
\begin{lemma}[Caputo and Posta criterion for MLSI, \cite{CapPos07}]\label{l10} Let $\pi \in \mathcal{P}(\mathbb{N})$ be such that \begin{equation}\label{e56} c(m):=\frac{\pi(m-1)}{\pi(m)} \end{equation} has the property that for some $v \in \mathbb{N}$: \begin{equation}\label{e360} \inf_{m \geq 1} \big[ c(m+v)-c(m) \big] > 0 \end{equation} and that $\sup_{m \geq 1} \big[ c(m+v)-c(m) \big] <+\infty$. Then the function $\tilde{c}$ defined by \begin{equation}\label{e57} \tilde{c}(m):= c(m) + \frac{1}{v} \sum_{i=0}^{v-1} \frac{v-i}{v}[c(m+i)+c(m-i)-2c(m)] \end{equation} is uniformly increasing, that is \begin{equation}\label{e362} \inf_{m \geq 0} \big[ \tilde{c}(m+1)-\tilde{c}(m) \big] \geq \delta \end{equation} for some $\delta>0$. Moreover, if we define $\tilde{\pi} \in \mathcal{P}(\mathbb{N})$ by: \begin{equation}\label{e361} \tilde{\pi}(0)= \frac{1}{\tilde{Z}}, \quad \tilde{\pi}(m)= \frac{1}{\tilde{Z}}\prod_{i=1}^{m} \frac{1}{\tilde{c}(i)} \end{equation} then $\tilde{\pi}$ is equivalent to $\pi$ in the sense that there exists $\varepsilon>0$ such that for all $m \in \mathbb{N}$: \begin{equation}\label{e363} \varepsilon \leq \frac{\pi(m)}{\tilde{\pi}(m)} \leq \varepsilon^{-1} \end{equation} Finally, $\pi$ satisfies the MLSI \eqref{e314} with $ \delta^{-1} \exp(4 \varepsilon^{-1} )$ instead of $\lambda$. \end{lemma} Using this criterion, we derive an MLSI for $\pi_{\Phi}$. \begin{lemma}\label{p600} The measure $\pi_{\Phi}$ satisfies the MLSI \eqref{e314} with a constant of the form $ {\Phi}^{1/(k+1)} c $, where $c$ is a constant independent of $\Phi$. \end{lemma}
\begin{proof}
For $\Phi \in \mathbb{R}_{+}$ we let $c_{\Phi}$ be defined by \eqref{e56} by replacing $\pi$ with $\pi_{\Phi}$. We define $\tilde{c}_{\Phi}$ by \eqref{e57} with the choice $v=k+1$. Moreover, we define $\delta_{\Phi}$ as in \eqref{e362}, $\tilde{\pi}_{\Phi}$ as in \eqref{e361} and $\varepsilon_{\Phi}$ as in \eqref{e363}. Let us prove that: \begin{equation}\label{e102} \inf_{m \geq 1} \big[ c_{1}(m+k+1) - c_{1}(m ) \big] >0, \quad \sup_{m \geq 1} \big[ c_1(m+k+1) -c_1(m) \big] <+\infty. \end{equation} Equation \eqref{e120} tells us that: \begin{equation}\label{hdef} \forall n \in \mathbb{N}, \quad \frac{\rho_{1}(n-1)}{\rho_{1}(n)} = n \times \prod_{i=0}^{k-1}\big( k n -i \big):= h(n) \end{equation} By definition of $n(m)$ and $\alpha(m)$ we have that for all $m \in \mathbb{N}$, $n(m+k+1) = n(m)+1$ and $\alpha(m+k+1) = \alpha(m)$. Therefore, by definition of $\pi_1$: \begin{eqnarray*} c_1(m+k+1) - c_1(m) &=& \frac{\pi_1(m+k)}{\pi_1(m+k+1)} -\frac{\pi_1(m-1)}{\pi_1(m)} \\ &=&\frac{ \rho_1( n(m-1)+1 )^{ 1-\alpha(m-1)} \rho_1( n(m-1)+2 )^{\alpha(m-1)}} {\rho_1( n(m)+1 )^{ 1-\alpha(m)} \rho_1( n(m)+2 )^{\alpha(m)}}\\ &-&\frac{ \rho_1( n(m-1) )^{ 1-\alpha(m-1)} \rho_1( n(m-1)+1 )^{\alpha(m-1)}} {\rho_1( n(m) )^{ 1-\alpha(m)} \rho_1( n(m)+1 )^{\alpha(m)}} \end{eqnarray*} We have two cases: \begin{enumerate} \item [\underline{$m \in (k+1)\mathbb{N}$}] In this case $n(m-1) = n(m) -1$ and $\alpha(m)=0,\alpha(m-1) = k/(k+1)$. Therefore: \begin{eqnarray*} c_1(m+k+1) - c_1(m) &=& \Big[\frac{\rho_1(n(m))}{ \rho_1(n(m)+1)}\Big]^{1/(k+1)} - \Big[\frac{\rho_1(n(m)-1)}{ \rho_1(n(m))}\Big]^{1/(k+1)} \\
&=& h^{1/(k+1)}(n(m)+1) - h^{1/(k+1)}(n(m)) \end{eqnarray*} where the function $x \mapsto h(x)$ has been defined in \eqref{hdef}. \item[\underline{$m \notin (k+1)\mathbb{N}$}] In this case $n(m-1) = n(m) $ and $\alpha(m)=\alpha(m-1) +1/(k+1)$. Therefore: \begin{eqnarray*} c_1(m+k+1) - c_1(m) &=& \Big[\frac{\rho_1(n(m)+1)}{ \rho_1(n(m)+2)}\Big]^{1/(k+1)} - \Big[\frac{\rho_1(n(m))}{ \rho_1(n(m)+1)}\Big]^{1/(k+1)} \\
&=& h^{1/(k+1)}(n(m)+2) - h^{1/(k+1)}(n(m)+1) \end{eqnarray*} \end{enumerate} It can be checked with a direct computation that $h$ is strictly increasing and $\lim_{x \rightarrow + \infty} \partial_x h^{1/(k+1)} (x) = k^{k/(k+1)}$. Using this fact in the two expressions above yields \eqref{e102}. We are then entitled to apply Lemma \ref{l10}, which tells us that $\tilde{\pi}_{1}$ satisfies the MLSI \eqref{e314} with a positive constant $\delta^{-1}_1$, and $\pi_{1}$ satisfies the MLSI with constant $\delta_1^{-1} \exp(4 \varepsilon^{-1}_1)$. Let us now consider $\Phi \neq 1$. It is elementary to show that $c_{\Phi}(m) = \Phi^{-1/(k+1)}c_1(m)$. This means that (see Definition \ref{d107}): \begin{equation*} \pi_{\Phi}(m) = \Big[ \sum_{m'=0}^{+\infty}\Phi^{m'/(k+1)} \pi_1(m')\Big]^{-1} \Phi^{m/(k+1)} \pi_1(m) \end{equation*} Moreover, by construction (see \eqref{e57}), we also have that $\tilde{c}_{\Phi}= \Phi^{-1/(k+1)}\tilde{c}_1$. This implies that $\delta_{\Phi} = \Phi^{-1/(k+1)} \delta_1$ and that \begin{equation*} \tilde{\pi}_{\Phi}(m) = \Big[ \sum_{m'=0}^{+\infty}\Phi^{m'/(k+1)} \tilde{\pi}_1(m')\Big]^{-1} \Phi^{m/(k+1)} \tilde{\pi}_1(m) \end{equation*} It is then easy to see, using the two expressions for $\pi_{\Phi}$ and $\tilde{\pi}_{\Phi}$ we have just derived, that $\varepsilon_{\Phi} \geq \varepsilon^2_1$. Another application of Lemma \ref{l10} gives that $\pi_{\Phi}$ satisfies the MLSI with constant $\Phi^{1/(k+1)} \delta^{-1}_1 \exp(4 \varepsilon_{1}^{-2} ) $. \end{proof}
We can finally prove Theorem \ref{t70}.
\begin{proof}[Proof of Theorem \ref{t70}]
Consider $f: \mathbb{N} \rightarrow \mathbb{R}$ which is 1-Lipschitz. Then define $g:\mathbb{N} \rightarrow \mathbb{R}$ by: \begin{equation}\label{e380}
g(m):= (1-\alpha(m) )f(n(m)) + \alpha(m) f(n(m) +1) \end{equation} where $n(m),\alpha(m)$ have been defined at \eqref{e100}. It is immediate to verify that $g$ is $1/(k+1)$-Lipschitz. Because of Lemma \ref{p600} there exists $c$ independent of $\Phi$ such that $\pi_{\Phi}$ satisfies the MLSI \eqref{e314} with constant $c \, \Phi^{1/(k+1)}$. We define $M:= \Phi + \frac{\Phi^{1/(k+1)} }{k+1} $. Using the concentration bound from Lemma \ref{p460} on $(k+1)g$ we get that for any $R>M$: \begin{eqnarray*} &{}&\pi_{\Phi} \Big( \{ m: g(m) \geq \mathbb{E}_{\pi_{\Phi}}(g)-M+R \} \Big)\\ &\leq & \exp\Big( -(k+1)(R-M)\log(R-M) +[c+\log \Phi ](R-M)+ o(R) \Big) \\ &=& \exp\Big( -(k+1)R\log(R) +[c+\log \Phi ]R + o(R) \Big) \end{eqnarray*} where to obtain the last equality we used the fact that the difference $(R-M)\log(R-M)-R\log(R)$ is a function in the class $o(R)$. It is proven in Lemma \ref{ll} (see Appendix) that $\mathbb{E}_{\pi_{\Phi}}(g)-M \leq \mathbb{E}_{\rho_{\Phi}}(f)$. This implies that $\pi_{\Phi} \big( \{ m: g(m) \geq \mathbb{E}_{\pi_{\Phi}}(g)-M+R \} \big) \geq \pi_{\Phi} \big( \{ m: g(m) \geq \mathbb{E}_{\rho_{\Phi}}(f) +R \} \big) $. Finally we observe that: \begin{eqnarray*} &{}&\pi_{\Phi} \big( \{ m: g(m) \geq \mathbb{E}_{\rho_{\Phi}}(f) +R \} \big)\\
&\geq & \pi_{\Phi} \big( \{ m: g(m) \geq \mathbb{E}_{\rho_{\Phi}}(f) +R, m \in (k+1)\mathbb{N} \} \big)\\
&= & \frac{1}{Z_{\Phi}} \rho_{\Phi} \big( \{ n: f(n) \geq \mathbb{E}_{\rho_{\Phi}}(f) +R \} \big)\\
&\geq & \frac{1}{k+1}\rho_{\Phi} \big( \{ n: f(n) \geq \mathbb{E}_{\rho_{\Phi}}(f) +R \} \big) \end{eqnarray*} where we used the arithmetic--geometric mean inequality to show that:
$$Z_{\Phi} = \sum_{m=0}^{+\infty} \rho_{\Phi}(n(m))^{1-\alpha(m)} \rho_{\Phi} (n(m)+1)^{\alpha(m)} \leq k+1 .$$
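Indeed, by the arithmetic--geometric mean inequality, $\rho_{\Phi}(n)^{1-\alpha}\rho_{\Phi}(n+1)^{\alpha} \leq (1-\alpha)\rho_{\Phi}(n)+\alpha\rho_{\Phi}(n+1)$ for $\alpha \in [0,1]$; writing $m=(k+1)n+r$ with $0 \leq r \leq k$ and summing over $r$ and $n$, each value $\rho_{\Phi}(n)$ receives total weight at most $k+1$, so that
\begin{equation*}
Z_{\Phi} \leq (k+1)\sum_{n=0}^{+\infty}\rho_{\Phi}(n) = k+1 .
\end{equation*}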
Summing up we have:
\begin{eqnarray*}
\rho_{\Phi} \big( \{ n: f(n) \geq \mathbb{E}_{\rho_{\Phi}}(f) +R \} \big) &\leq& (k+1)\pi_{\Phi} \big( \{ m: g(m) \geq \mathbb{E}_{\rho_{\Phi}}(f) +R \} \big) \\
&\leq& (k+1)\exp\Big( -(k+1)R\log(R) +[c+\log \Phi ]R + o(R) \Big)
\end{eqnarray*}
The proof of the theorem is now concluded. \end{proof}
\subsection{Proofs of Theorems \ref{squarelatticebound} and \ref{treebound}}
\subsubsection*{Preliminaries}
Let us specify the assumptions on the jump intensity. \begin{assumption}\label{as-01} \ The jump intensity $j:\mathcal{A} \to \mathbb{R}_{+} $ verifies the following requirements. \begin{enumerate}[(1)] \item It has constant speed: there exists $v >0 $ such that \begin{equation}\label{eq-11} \forall z \in \mathcal{X}, \quad v=\sum_{z':z \to z' }j(z \to z'):= \bar{j}(z) . \end{equation} \item It is everywhere positive: $j(z \to z')>0$ for all $z \to z' \in \mathcal{A}$. \end{enumerate} \end{assumption}
Here is some vocabulary about graphs.
\begin{mydef}\label{def-02} Let $ \mathcal{A}\subset\mathcal{X}^2$ specify a directed graph $(\mathcal{X},\to)$ on $\mathcal{X}$ satisfying Assumption \ref{as-03}. \begin{enumerate}[(a)] \item The distance $d(z,z')$ between two vertices $z$ and $z'$ is the length of the shortest walk joining $z$ with $z'$. Due to point (1) of Assumption \ref{as-03}, $d$ is symmetric. \item If $\mathbf{w}=(x_{0} \to x_1 \to .. \to x_n)$ is a walk, then $\mathbf{w}^*$ is the walk obtained by reversing the orientation of all arcs: \begin{equation}\label{eq-60} \mathbf{w}^*:=(x_n \to x_{n-1} \to .. \to x_{0}) \end{equation} \item A closed walk $\mathbf{c}=(x_0\to x_1 \to \cdots \to x_n=x_0)$ is said to be \textit{simple} if the cardinality of the set of visited vertices $ \left\{x_0,x_1,\dots, x _{ n-1}\right\} $ is equal to the length $n$ of the walk. This means that a simple closed walk cannot be decomposed into several closed walks. A non-closed walk $\mathbf{w}=(x_0 \to x_1 \to x_2 \to ... \to x_n \neq x_0 )$ is said to be \textit{simple} if the cardinality of the set of visited vertices $ \left\{x_0,x_1,\dots, x _{ n}\right\} $ is equal to $n+1$. \end{enumerate} \end{mydef}
\subsubsection*{Proof of Theorem \ref{squarelatticebound}}
The proof of Theorem \ref{squarelatticebound} is based on the following Lemma, which ensures that we can control $\Phi_j(\mathbf{c})$ in terms of $\lambda^{\ell(\mathbf{c})}$. To ease the notation, we write $\Phi(\cdot)$ instead of $\Phi_j(\cdot)$.
\begin{lemma}\label{squarepatch} Let $j$ be as in the hypothesis of Theorem \ref{squarelatticebound}. Then for any closed walk $\mathbf{c}$, $\Phi_j(\mathbf{c}) \leq \lambda^{\ell(\mathbf{c})}$. \end{lemma}
\begin{proof} We observe that it is sufficient to consider the case when $\mathbf{c}$ is simple. Simple closed walks have a unique orientation, which can be either clockwise or counterclockwise. The interior of a simple closed walk is then also well defined, and we call the \textit{area} of $\mathbf{c}$ the number of unit squares in its interior. The proof is by induction on the area of the closed walk.
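The reduction to simple closed walks rests on the multiplicativity of $\Phi$: recalling that $\Phi_j(\mathbf{w})$ is the product of the jump intensities along the arcs of $\mathbf{w}$, if a closed walk $\mathbf{c}$ is the concatenation of closed walks $\mathbf{c}_1,\dots,\mathbf{c}_r$, then
\begin{equation*}
\Phi(\mathbf{c}) = \prod_{i=1}^{r}\Phi(\mathbf{c}_i), \qquad \ell(\mathbf{c}) = \sum_{i=1}^{r}\ell(\mathbf{c}_i),
\end{equation*}
so the bound $\Phi(\mathbf{c}_i) \leq \lambda^{\ell(\mathbf{c}_i)}$ for simple closed walks implies the same bound for arbitrary closed walks.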
\begin{enumerate} \item[] \underline{Base step} If the area of $\mathbf{c}$ is zero and $\mathbf{c}$ is simple, then $\mathbf{c}$ is a walk of length two, i.e. $\mathbf{c} \in \mathcal{E}$. The conclusion then follows by \eqref{eq8}. \item[]\underline{Inductive step} Consider the minimum in the lexicographic order of the vertices of $\mathbf{c}$. W.l.o.g. such a vertex can be chosen to be $x_1$. By construction then, either $(x_0,x_2) = (x_1+ v_1,x_1 + v_2 )$ or $ (x_0,x_2) =(x_1+ v_2,x_1 + v_1 )$, see Figure \ref{rett}.
\begin{itemize} \item[(a)] \underline{$ (x_0,x_2) = (x_1+ v_1,x_1 + v_2 ).$} We define $z,$ $\mathbf{c}_{x_2 \to x_0}$ and $\mathbf{p}$ by:
\begin{equation}\label{eq70}
z: = x_2+v_1=x_0+v_2, \quad \mathbf{c} : = ( x_0 \to x_1 \to x_2 \to \mathbf{c}_{x_2 \to x_0}), \quad \mathbf{p} : = (x_0 \to z \to x_2) \end{equation}
We also define $\tilde{\mathbf{c}}$ by concatenating $\mathbf{p}$ and $\mathbf{c}_{x_2 \to x_0}$ (see Figure \ref{corner}):
\begin{equation*}
\tilde{\mathbf{c}} := (\mathbf{p} \to \mathbf{c}_{x_2 \to x_0} )
\end{equation*}
We then have, recalling that $\mathbf{p}^*$ is obtained by reversing $\mathbf{p}$ (see Definition \ref{defs-01}):
\begin{eqnarray*}
\Phi(\mathbf{c}) &=& j(x_0 \to x_1) j(x_1 \to x_2) \Phi(\mathbf{c}_{x_2 \to x_0} ) \\ &=& \frac{ j(x_0 \to x_1) j(x_1 \to x_2)}{\Phi(\mathbf{p})} \Phi(\mathbf{p}) \Phi(\mathbf{c}_{x_2 \to x_0})\\ &=&\frac{ j(x_0 \to x_1) j(x_1 \to x_2)}{ \Phi(\mathbf{p}) } \Phi(\tilde{\mathbf{c}}) \\ &=& \frac{ j(x_0 \to x_1) j(x_1 \to x_2) \Phi(\mathbf{p}^*)} {\Phi(\mathbf{p}^*) \Phi(\mathbf{p}) } \Phi(\tilde{\mathbf{c}}) \\ &=& \frac{ \Phi(\mathbf{f}_{x_1} ) }{\Phi(\mathbf{e}_{x_1+v_2,1}) \Phi(\mathbf{e}_{x_1 + v_1,2})} \Phi(\tilde{\mathbf{c}})
\end{eqnarray*}
By \eqref{eq8}, $\frac{\Phi(\mathbf{f}_{x_1} ) }{\Phi(\mathbf{e}_{x_1+v_2,1}) \Phi(\mathbf{e}_{x_1 + v_1,2})}\leq 1$. Since ${\ell(\tilde{\mathbf{c}})} ={\ell(\mathbf{c})}$, we would be done if we could show that $\Phi(\tilde{\mathbf{c}}) \leq \lambda^{\ell(\mathbf{c})}$. We have two cases: \begin{itemize} \item[(a.1)] \underline{$z$ was not touched by $\mathbf{c}.$} In this situation, $\tilde{\mathbf{c}}$ is a simple closed walk. By construction, $\tilde{\mathbf{c}}$ lies in the interior of $\mathbf{c}$. Moreover $\mathbf{f}_{x_1}$ belongs to the interior of $\mathbf{c}$ but does not belong to the interior of $\tilde{\mathbf{c}}$. Therefore, we can use the inductive hypothesis and obtain that $ \Phi(\tilde{\mathbf{c}}) \leq \lambda^{\ell(\tilde{\mathbf{c}})}$, which is the desired result. \item[(a.2)] \underline{$z$ was touched by $\mathbf{c}.$} In this case $z = x_{j}$ for some $j \geq 3$. We observe that we can write $\tilde{\mathbf{c}}=(\tilde{\mathbf{c}}_1 \to \tilde{\mathbf{c}}_2)$ with $\tilde{\mathbf{c}}_1=(x_2 \to .. \to x_{j}=z \to x_2 )$ and $\tilde{\mathbf{c}}_2 = ( x_j=z \to x_{j+1} \to .. \to x_0 \to z)$ and that both $\tilde{\mathbf{c}}_1$ and $\tilde{\mathbf{c}}_2$ are simple closed walks which lie in the interior of $\mathbf{c}$ and have disjoint interiors, see Figure \ref{fig:a2}. Moreover, since neither of the two walks has $\mathbf{f}_{x_1}$ in its interior, by the inductive hypothesis $\Phi(\tilde{\mathbf{c}}_1) \leq \lambda^{\ell(\tilde{\mathbf{c}}_1)}$ and $\Phi(\tilde{\mathbf{c}}_2) \leq \lambda^{\ell(\tilde{\mathbf{c}}_2)} $. But then $\Phi(\tilde{\mathbf{c}}) = \Phi(\tilde{\mathbf{c}}_1)\Phi(\tilde{\mathbf{c}}_2) \leq \lambda^{\ell(\tilde{\mathbf{c}}_1) + \ell(\tilde{\mathbf{c}}_2) } = \lambda^{\ell(\tilde{\mathbf{c}}) }$, which is the desired result. \end{itemize} \item[(b)] \underline{ $ (x_0,x_2) = (x_1+ v_2,x_1 + v_1 ).$ } In this case the simple closed walk $\mathbf{c}$ is counterclockwise oriented.
Let $\mathbf{c}_{x_2 \to x_0}$ be defined as in \eqref{eq70} above. Moreover, we define \begin{equation*} z := x_0 + v_1= x_2 +v_2 , \quad \mathbf{p}: =(x_0 \to z \to x_2) \end{equation*} and $\tilde{\mathbf{c}}:= ( \mathbf{p} \to \mathbf{c}_{x_2 \to x_0})$. We have:
\begin{eqnarray*}
\Phi(\mathbf{c}) &=& j(x_0 \to x_1) j(x_1 \to x_2) \Phi(\mathbf{c}_{x_2 \to x_0}) \\ &=& \frac{ j(x_0 \to x_1) j(x_1 \to x_2)}{\Phi(\mathbf{p})} \Phi(\mathbf{p}) \Phi(\mathbf{c}_{x_2 \to x_0}) \\ &=&\frac{ j(x_0 \to x_1) j(x_1 \to x_2)}{\Phi(\mathbf{p})} \Phi(\tilde{\mathbf{c}}) \\ &=& \frac{ j(x_0 \to x_1) j(x_1 \to x_2) j(x_2 \to x_1)j(x_1 \to x_0)} { j(x_0 \to z)j(z \to x_2)j(x_2 \to x_1)j(x_1 \to x_0) } \Phi(\tilde{\mathbf{c}}) \\ &=& \frac{ \Phi(\mathbf{e}_{x_1,2}) \Phi(\mathbf{e}_{x_1,1}) }{\Phi(\mathbf{f}_{x_1}) } \Phi(\tilde{\mathbf{c}})
\end{eqnarray*} Thanks to \eqref{eq7}, $\frac{ \Phi(\mathbf{e}_{x_1,2}) \Phi(\mathbf{e}_{x_1,1}) }{\Phi(\mathbf{f}_{x_1}) } \leq 1 $. The proof that $\Phi(\tilde{\mathbf{c}})\leq \lambda^{\ell(\mathbf{c})}$ is the same as in point (a). \end{itemize} \end{enumerate} \end{proof}
\begin{figure}
\caption{A simple closed walk $\mathbf{c}$ (red). $x_1$ is the minimum in the lexicographic order among the vertices visited by the closed walk. Because of this, the walk can pass neither through the vertex to the left of $x_1$ nor through the vertex below $x_1$ (yellow). Therefore $\mathbf{c}$ must pass through the vertices above and to the right of $x_1$. If $x_2$ is the vertex above $x_1$, the walk is clockwise oriented.}
\label{rett}
\end{figure}
\begin{figure}
\caption{The walk $\tilde{\mathbf{c}}$ obtained by concatenating $\mathbf{p}$ and $\mathbf{c}_{x_2 \to x_0}$.}
\label{corner}
\end{figure}
\begin{figure}
\caption{Case (a.2): the decomposition of $\tilde{\mathbf{c}}$ into the two simple closed walks $\tilde{\mathbf{c}}_1$ and $\tilde{\mathbf{c}}_2$.}
\label{fig:a2}
\end{figure}
We can now prove Theorem \ref{squarelatticebound}. Let us first state a simple lemma that we shall need; we omit its proof. \begin{lemma}\label{lm:conddens}
Let $\mathbb{P},\mathbb{Q}$ be two probability measures on the same probability space, let $\mathbb{Q} \ll \mathbb{P}$, and set $M = \frac{d\mathbb{Q}}{d\mathbb{P}}$. If $A$ is an event such that $\mathbb{Q}(A) >0$ then
\begin{equation*}
\frac{ d \mathbb{Q}[\cdot \big | A]}{d \mathbb{P}[ \cdot \big | A]} = M \mathbf{1}_{A} \frac{\mathbb{P}(A)}{ \mathbb{Q}(A)}
\end{equation*}
\end{lemma} Thanks to the last two lemmas, the proof is then an almost straightforward application of Girsanov's theorem. \begin{proof} Let $\mathbb{P}^x$ be the law of the random walk with intensity $j$ started at $x$. We denote by $\mathbb{S}^x_{\lambda}$ the law of the random walk with constant intensity $\lambda$ started at $x$. The density of $\mathbb{P}^x$ w.r.t. $\mathbb{S}^x_{\lambda}$ is given by (see \cite{Jac75}, or \cite{CONFphd} for a more ad-hoc version): \begin{equation*}\label{eq6} \frac{d\mathbb{P}^x}{d \mathbb{S}^x_{\lambda}}=
\exp \bigg(\sum_{i=1}^{N_1} \big[ \log j( X_{T_{i-1}} \to X_{T_i})-\log \lambda \big] -\int _{0}^1 \big[ \bar{j}(X _{t^-}) - 4 \lambda \big] \, dt \bigg) \end{equation*} where $N_1$ is the total number of jumps up to time $1$ and $T_i$ is the $i$-th jump time. Since $\mathbb{P}^x$ is a CSRW, the term $\int _{0}^1\bar{j}(X _{t^-})\, dt$ is constant. Moreover, if we call $\mathbf{w}(X)$ the random sequence $(X_0 \to X_{T_1} \to..\to X_{T_{N_1}} )$ and use Lemma \ref{lm:conddens}, we obtain \begin{equation*} \frac{d\mathbb{P}^{xx}}{d{}\mathbb{S}^{xx}_{\lambda} } \propto \mathbf{1}_{\{X_0=X_1=x \}}\Phi_j(\mathbf{w}(X)) \lambda^{-\ell(\mathbf{w}(X))} \end{equation*}
But then, since on the event $\{X_0=X_1=x\}$, $\mathbf{w}(X)$ is a closed walk, we can apply Lemma \ref{squarepatch} to conclude that the density has a global upper bound on path space. The conclusion immediately follows from Lemma \ref{lastlemma}, which we prove in the appendix. \end{proof} \subsubsection*{Proof of Theorem \ref{treebound}} Let us first specify the assumptions we make on the graph. \begin{assumption}\label{as-03} The directed graph $(\mathcal{X},\to)$ satisfies the following requirements:
\begin{enumerate}[(1)]
\item $\mathcal{A}$ is symmetric: $(x \to y )\in \mathcal{A} \Rightarrow (y \to x ) \in \mathcal{A}$. \item It is connected: for any $x,y \in \mathcal{X}$ there exists a directed walk from $x$ to $y$. \item It is of bounded degree. \item It has no loops, meaning that $(z \to z)\not\in \mathcal{A}$ for all $z\in\mathcal{X}.$ \end{enumerate} \end{assumption} Let us prove the analogue of Lemma \ref{squarepatch}.
\begin{lemma}\label{treepatch} Let $j$ satisfy the assumptions of Theorem \ref{treebound}. Then we have:
\begin{equation*}
\forall \mathbf{c} \in \mathcal{C}, \quad \Phi_j(\mathbf{c}) \leq (\lambda \delta)^{\ell(\mathbf{c})}
\end{equation*}
\end{lemma}
\begin{proof} Again, to ease the notation, we write $\Phi$ instead of $\Phi_j$. The proof goes by induction on the number of elements in $\cE^*$ that intersect $\mathbf{c}$. To this aim we define:
\begin{equation*}
\quad n(\mathbf{c}) = \Big | \left\{ \mathbf{e} \in \cE^* : \mathbf{e} \cap \mathbf{c} \neq \emptyset \right\} \Big |
\end{equation*}
\begin{enumerate} \item[] \underline{Base step} If $n(\mathbf{c})=0$, then $\mathbf{c} \subseteq \mathcal{T}$. It is easy to see that $\mathbf{c}$ can be decomposed into closed walks of length two. The conclusion then follows from \eqref{eq1}. \item[] \underline{Inductive step} Consider any $\mathbf{e} \in \mathcal{E}^*$ such that $\mathbf{e} \cap \mathbf{c} \neq \emptyset$. Then there are two possible cases: \begin{itemize}
\item[] \underline{$\big| \mathbf{e} \cap \mathbf{c} \big| =2$.} In this case $\mathbf{c}$ can be seen as the concatenation of $\mathbf{e} $ with two other closed walks, say $\mathbf{c}_1,\mathbf{c}_2$. Clearly, $n(\mathbf{c}_1) ,n(\mathbf{c}_2)< n(\mathbf{c})$, and therefore applying the inductive hypothesis and \eqref{eq1} we have: \begin{equation*} \Phi(\mathbf{c}) = \Phi(\mathbf{c}_1) \Phi(\mathbf{e}) \Phi( \mathbf{c}_2) \leq (\delta \lambda)^{\ell(\mathbf{c}_1)+2 + \ell(\mathbf{c}_2)} = ( \lambda \delta) ^{\ell(\mathbf{c})} \end{equation*}
\item[]\underline{$\big| \mathbf{e} \cap \mathbf{c} \big| =1$.} In this case, let us call $z \to z'$ the only arc in $\mathbf{e} \cap \mathbf{c} $. By recalling the definition of $\mathbf{c}_{\mathbf{e}}$ at point (e) of Definition \ref{defs-01}, we have two subcases: \begin{enumerate} \item[] \underline{$\mathbf{c}_{\mathbf{e}} = \mathbf{c}_{z \to z'}$.} \ We define $\mathbf{c}_{x_0 \to z}$, $\mathbf{c}_{z' \to x_0}$ and $\mathbf{w}_{z' \to z}$ through the following identities \begin{eqnarray}\label{eq5} \mathbf{c} &=& (\mathbf{c}_{x_0 \to z} \to z \to z' \to \mathbf{c}_{z' \to x_0} )\\ \nonumber \mathbf{c}_{\mathbf{e}}& =&(z \to z' \to .. \to z) = (z \to z' \to \mathbf{w}_{z' \to z}) \end{eqnarray}
Finally, we also define $\tilde{\mathbf{c}}$ as follows: \begin{equation*} \tilde{\mathbf{c}} = (\mathbf{c}_{x_0 \to z} \to \mathbf{w}^{*}_{z' \to z} \to \mathbf{c}_{z' \to x_0}) \end{equation*} where $\mathbf{w}^{*}_{z' \to z}$ is the reversed walk (see Definition \ref{defs-01}). Let us remark that, by definition of $\mathbf{c}_{z \to z'}$, we have $\mathbf{w}_{z' \to z} \subseteq \mathcal{T}$. But then also $\mathbf{w}^*_{z' \to z} \subseteq \mathcal{T}$ because $\mathcal{T}$ is a symmetric graph. Therefore $ n( \tilde{\mathbf{c}} ) = n(\mathbf{c})-1$. We have: \begin{eqnarray*} \Phi(\mathbf{c}) &=& \Phi(\mathbf{c}_{x_0 \to z }) j(z \to z') \Phi(\mathbf{c}_{z' \to x_0} ) \\
&=& \Phi(\mathbf{c}_{x_0 \to z }) \Phi(\mathbf{w}^*_{z' \to z}) \Phi(\mathbf{c}_{z' \to x_0} ) \frac{ j(z \to z')}{\Phi(\mathbf{w}^*_{z' \to z}) }\\
&=& \Phi(\tilde{\mathbf{c}}) \, \frac{ j(z \to z')}{\Phi(\mathbf{w}^*_{z' \to z})} \end{eqnarray*} Using the inductive hypothesis on $\Phi(\tilde{\mathbf{c}})$ we have that $\Phi({\tilde{\mathbf{c}}}) \leq (\lambda\delta)^{\ell(\tilde{\mathbf{c}})} = (\lambda\delta)^{\ell(\mathbf{c}) +\ell(\mathbf{c}_{\mathbf{e}}) -1}$. If we could show that $ \frac{j(z \to z') }{ \Phi(\mathbf{w}^*_{z' \to z}) } \leq (\lambda\delta)^{-\ell(\mathbf{c}_{\mathbf{e}})+1} $, then we would be done. To this aim, let us observe that by concatenating $\mathbf{w}^*_{z' \to z } $ with $z ' \to z$ we obtain $ \mathbf{c}_{z' \to z}$. Then: \begin{equation}\label{eq4}
\frac{j(z \to z')}{ \Phi(\mathbf{w}^*_{z' \to z}) } = \frac{ j(z \to z') j(z' \to z) }{ \Phi(\mathbf{w}^*_{z' \to z }) j(z' \to z) }=\frac{ \Phi(\mathbf{e}) }{ \Phi(\mathbf{c}_{z'\to z } ) }\end{equation}
Finally, we observe that: \begin{equation*}
\Phi(\mathbf{c}_{z' \to z } ) = \frac{1}{\Phi(\mathbf{c}_{z \to z'}) } \prod_{\stackrel{\mathbf{e}' \in \mathcal{E},}{ \mathbf{e}' \cap \mathbf{c}_{\mathbf{e}} \neq \emptyset} } \Phi(\mathbf{e}') = \frac{1}{\Phi(\mathbf{c}_{\mathbf{e}}) } \prod_{\stackrel{\mathbf{e}' \in \mathcal{E},}{ \mathbf{e}' \cap \mathbf{c}_{\mathbf{e}} \neq \emptyset} } \Phi(\mathbf{e}') \end{equation*} which combined with \eqref{eq4} gives: \begin{equation*} \frac{j(z \to z')}{ \Phi(\mathbf{w}^*_{z' \to z})} = \Phi({\mathbf{c}_{\mathbf{e}}}) \left\{ \prod_{\stackrel{\mathbf{e}' \in \mathcal{E}, \mathbf{e}' \neq \mathbf{e} }{ \mathbf{e}' \cap \mathbf{c}_{\mathbf{e}} \neq \emptyset} } \Phi(\mathbf{e}') \right\}^{-1} \end{equation*} where we used the fact that, by construction, $\mathbf{e}$ is the only element of $\mathcal{E}$ which intersects $\mathbf{c}_{\mathbf{e}}$ and is not in $\mathcal{T}$. Using the upper bound for $\mathbf{c}_{\mathbf{e}}$ in \eqref{eq2} the conclusion follows.
\item[] \underline{$\mathbf{c}_{\mathbf{e}} = \mathbf{c}_{z' \to z}$} Let $ \mathbf{c}_{x_0 \to z}, \mathbf{c}_{z' \to x_0} $ be defined as in \eqref{eq5}, and let $\mathbf{w}_{z \to z'}$ and $\tilde{\mathbf{c}}$ be defined by: \begin{equation*} \mathbf{c}_{\mathbf{e} } =( z' \to z \to \mathbf{w}_{z \to z'} ), \quad \tilde{\mathbf{c}} = ( \mathbf{c}_{x_0 \to z} \to \mathbf{w}_{z \to z'} \to \mathbf{c}_{z' \to x_0}) \end{equation*} We have: \begin{eqnarray*} \Phi(\mathbf{c}) &=& \Phi(\mathbf{c}_{x_0 \to z} ) j(z \to z') \Phi(\mathbf{c}_{z' \to x_0 } ) \\
&=& \Phi(\mathbf{c}_{x_0 \to z} ) \Phi( \mathbf{w}_{z \to z'} ) \Phi( \mathbf{c}_{z' \to x_0 } ) \frac{ j(z \to z')}{\Phi( \mathbf{w}_{z \to z'} ) } \\
&=& \Phi(\tilde{\mathbf{c}} ) \frac{ j(z \to z') }{\Phi( \mathbf{w}_{z \to z'} ) } \\
&=&\Phi( \tilde{\mathbf{c}} ) \frac{ j(z \to z') j(z' \to z ) }{\Phi( \mathbf{w}_{z \to z'} ) j(z' \to z) }\\
&=&\Phi(\tilde{\mathbf{c}} ) \, \frac{\Phi(\mathbf{e}) }{\Phi( \mathbf{c}_{\mathbf{e}} ) } \end{eqnarray*} By construction, $n(\tilde{\mathbf{c}}) = n(\mathbf{c})-1$, so we can use the inductive hypothesis together with the lower bound in \eqref{eq2} to obtain: \begin{equation*} \Phi(\tilde{\mathbf{c}} ) \, \frac{\Phi(\mathbf{e}) }{\Phi( \mathbf{c}_{\mathbf{e}} ) } \leq (\lambda \delta)^{\ell(\mathbf{c}) + \ell(\mathbf{c}_{\mathbf{e}} ) - 1} \ (\lambda \delta)^{1-\ell(\mathbf{c}_{\mathbf{e}})} = (\lambda\delta)^{\ell(\mathbf{c})} \end{equation*} from which the conclusion follows.
\end{enumerate} \end{itemize}
\end{enumerate}
\end{proof}
The proof of Theorem \ref{treebound} can be deduced from that of Theorem \ref{squarelatticebound} by replacing $\mathbb{S}^{x}_{\lambda}$ with the random walk defined at \eqref{e93}, Lemma \ref{lastlemma} with Lemma \ref{countingest} (which we prove in the appendix), and Lemma \ref{squarepatch} with Lemma \ref{treepatch}. Therefore, we shall not repeat it.
\subsection*{On the feasibility of \eqref{eq8},\eqref{eq7} and \eqref{eq1},\eqref{eq2} }
In this section we address the problem of how to construct jump intensities satisfying \eqref{eq8},\eqref{eq7} (resp. \eqref{eq1},\eqref{eq2}). Lemma \ref{faceexistence} (resp. \ref{constspeedconstrlemma}) shows that for any arbitrary assignment of positive numbers $\varphi$ on $\mathcal{E} \cup \mathcal{F}$ (resp. $\mathcal{C}$) there exists at least one intensity $j$ satisfying Assumption \ref{as-01} and such that $\Phi_j \equiv \varphi $ on $\mathcal{E} \cup \mathcal{F}$ (resp. $\mathcal{C}$). It is then possible to construct the desired jump intensities in two steps. W.l.o.g. we restrict to the square lattice, the procedure being identical in the case of a general graph. \begin{itemize} \item[\textbf{Step 1}] Construct a positive function $\varphi$ on $\mathcal{E} \cup \mathcal{F}$ such that \eqref{eq8}, \eqref{eq7} hold when replacing $\Phi_j$ with $\varphi$. It is rather easy to see that this is possible. \item[\textbf{Step 2}] Construct $j$ such that $\Phi_j = \varphi $ on $\mathcal{E} \cup \mathcal{F}$. The existence of such a $j$ (and a way of constructing it) are given in Lemma \ref{faceexistence}. \end{itemize} \subsubsection*{Square lattice} Although we are interested in the lattice case, Lemma \ref{faceexistence} is easier to prove for a general planar graph. Planar graphs have a privileged set of closed walks: the \textit{faces}, which are uniquely determined once a planar representation is fixed. We choose the representation in such a way that both arcs corresponding to an element of $\mathcal{E}$ lie on the same segment in the planar representation \footnote{This is because we do not consider the walks of length two as faces. Faces have length at least three.}.
As in the case of the square lattice, the set of clockwise oriented faces of a planar graph is denoted $\mathcal{F}$. \begin{lemma}\label{faceexistence} Let $(\mathcal{X},\to)$ be a planar directed graph satisfying Assumption \ref{as-03}. Let $\varphi: \mathcal{F} \cup \mathcal{E} \rightarrow \mathbb{R}_{+}$ be bounded from above. Then there exists at least one $j: \mathcal{A} \rightarrow \mathbb{R}_{+}$ fulfilling Assumption \ref{as-01} and such that \begin{equation}\label{eq124} \forall \, \mathbf{f} \in \mathcal{F}, \quad \Phi_j(\mathbf{f}) = \varphi(\mathbf{f}), \quad \forall \, \mathbf{e} \in \mathcal{E}, \quad \Phi_j(\mathbf{e}) = \varphi(\mathbf{e}) \end{equation} If $\mathcal{X}$ is a finite set, then $j$ is unique. If $\mathcal{X}$ is infinite, then all intensities $k:\mathcal{A} \rightarrow \mathbb{R}_{+}$ with such properties can be written in the form \begin{equation*} k(z \to z') = \exp( \phi(z') - \phi(z) ) j(z \to z') \end{equation*} where $h = \exp(\phi)$ is a positive solution to: \begin{equation*} \forall z \in \mathcal{X}, \sum_{z' : z \to z'} j(z \to z') h(z') = v \, h(z) \end{equation*} for some constant $v>0$. \end{lemma}
\begin{proof} In a first step we show the existence of a function $j:\mathcal{A} \rightarrow \mathbb{R}_{+}$ such that \eqref{eq124} is satisfied. The proof goes by induction on the number of arcs of $(\mathcal{X},\to)$. The base step is trivial. For the inductive step, consider two clockwise oriented faces $\mathbf{f}_1,\mathbf{f}_2$ which are \textit{adjacent}. This means that there exists $\mathbf{e}_0=(x \to y \to x) \in \mathcal{E}$ such that $(x \to y) \in \mathbf{f}_1$ and $(y \to x) \in \mathbf{f}_2$. Consider the graph $(\mathcal{X}, \to_{1})$ obtained by removing $\mathbf{e}_{0}$ from $(\mathcal{X},\to)$. This planar graph has, instead of the two faces $\mathbf{f}_1$ and $\mathbf{f}_2$, a single face $\mathbf{h}$, which corresponds to the union of $\mathbf{f}_1$ and $\mathbf{f}_2$. On $(\mathcal{X},\to_{1})$ we define $\psi$ on $(\mathcal{E}\setminus \{\mathbf{e}_0\}) \cup (\mathcal{F} \setminus \{ \mathbf{f}_1,\mathbf{f}_2 \}) \cup \{\mathbf{h}\}$ as follows: \begin{eqnarray*}\label{e82} \nonumber \forall \mathbf{e} \in \mathcal{E} \setminus \{\mathbf{e}_{0}\} \quad \psi(\mathbf{e}) = \varphi(\mathbf{e}) \\ \forall \mathbf{f} \in \mathcal{F} \setminus \{ \mathbf{f}_1 , \mathbf{f}_2 \} \quad \psi(\mathbf{f}) = \varphi(\mathbf{f}) \end{eqnarray*} \begin{equation}\label{e83} \psi(\mathbf{h}) = \frac{ \varphi(\mathbf{f}_1) \varphi(\mathbf{f}_2)}{ \varphi( \mathbf{e}_0)} \end{equation} By the inductive hypothesis there exists $j:\mathcal{A} \setminus \mathbf{e}_0 \rightarrow \mathbb{R}_{+} $ such that \begin{equation}\label{e80} \forall \, \mathbf{e} \in \mathcal{E} \setminus \{\mathbf{e}_0\} , \quad \Phi_j(\mathbf{e}) = \psi(\mathbf{e}) \end{equation} and \begin{equation} \label{e81} \forall \mathbf{f} \in (\mathcal{F} \setminus \{ \mathbf{f}_1,\mathbf{f}_2\}) \cup \{ \mathbf{h} \}, \quad \Phi_j(\mathbf{f}) = \psi(\mathbf{f}).\end{equation} Consider $\mathbf{f}_1 =(x \to y \to x_2 \to .. \to x)$ and $\mathbf{f}_2 = ( y \to x \to y_2 \to ..\to y )$.
We extend $j$ to $\mathbf{e}_0$ by defining: \begin{eqnarray}\label{e84} \nonumber j(x \to y ) &=& \varphi(\mathbf{f}_1) \big[j(y \to x_2) \prod_{i=2}^{\ell(\mathbf{f}_1)-1} j(x_i \to x_{i+1})\big]^{-1} \\ j(y \to x )&=& \varphi(\mathbf{f}_2) \big[j(x \to y_2) \prod_{i=2}^{\ell(\mathbf{f}_2)-1} j(y_i \to y_{i+1})\big]^{-1} \end{eqnarray} We claim that $j$ as constructed here satisfies \eqref{eq124}. For $\mathbf{f} \neq \mathbf{f}_1,\mathbf{f}_2$ and $\mathbf{e} \neq \mathbf{e}_0$, this is granted by \eqref{e80} and \eqref{e81}. Using \eqref{e84}, it is seen that $\Phi_j(\mathbf{f}_1) = \varphi(\mathbf{f}_1)$ and $\Phi_j(\mathbf{f}_2) = \varphi(\mathbf{f}_2)$. Therefore we only need to check $\mathbf{e}_0$. Using \eqref{e83}, the inductive hypothesis and what we have just proven: \begin{equation*} \Phi_j(\mathbf{e}_0 ) = \frac{\Phi_j(\mathbf{f}_1) \Phi_j(\mathbf{f}_2)}{\Phi_j(\mathbf{h})} = \frac{\varphi(\mathbf{f}_1) \varphi(\mathbf{f}_2)}{\psi(\mathbf{h})} \stackrel{\eqref{e83}}{=} \varphi(\mathbf{e}_0), \end{equation*} which is the desired conclusion. This concludes the proof that an intensity $j:\mathcal{A} \rightarrow \mathbb{R}_{+}$ satisfying \eqref{eq124} exists. To complete the proof we show that it is possible to modify $j$ in such a way that both Assumption \ref{as-01} and \eqref{eq124} are satisfied. For this purpose, we observe that if $j$ is an intensity satisfying \eqref{eq124}, then all other intensities $k: \mathcal{A} \rightarrow \mathbb{R}_{+}$ fulfilling \eqref{eq124} are of the form \begin{equation*} k(z \to z') = \exp(\phi(z') - \phi(z)) j(z \to z'), \end{equation*} where $\phi : \mathcal{X} \rightarrow \mathbb{R}$ is some potential on $\mathcal{X}$. For Assumption \ref{as-01} to hold, there must exist $v>0$ such that $\bar{k}(z) \equiv v$ for all $z \in \mathcal{X}$. Let us define $h:=\exp(\phi)$. 
What we look for is then a pair $h,v$ such that \begin{equation*} \forall z \in \mathcal{X}, \quad \sum_{z' :z \to z'} j(z \to z') h(z') = v h(z), \quad h(z)>0 \, \forall \, z \in \mathcal{X}. \end{equation*} Since w.l.o.g. $\mathcal{X} \subseteq \mathbb{N}$, if we define the matrix $J=(j_{m,n})_{m,n \in \mathbb{N}}$ with $j_{m,n}:=j(m \to n)$, we can rewrite the former equation as: \begin{equation*} J \cdot h = v h, \quad v>0, \, h>0. \end{equation*} If $\mathcal{X}$ is finite, the existence of a solution is ensured by the standard Perron-Frobenius theorem. The uniqueness statement is a consequence of the fact that the eigenspace of the positive eigenvalue has dimension 1. If $\mathcal{X}$ is infinite and countable, we can use the Corollary of Theorem 2 on page 1799 of \cite{pruitt1964eigenvalues}. We are entitled to use the Corollary because $(\mathcal{X},\to)$ is of bounded degree. \end{proof} \subsubsection*{General graph} \begin{lemma}\label{constspeedconstrlemma} Let $(\mathcal{X},\to)$ be a graph fulfilling Assumption \ref{as-03}, $\mathcal{T}$ be a tree and $\mathcal{C}$ be a $\mathcal{T}$-basis of the closed walks. Let $\varphi: \mathcal{C} \rightarrow \mathbb{R}_{+}$ be bounded from above. Then there exists $j: \mathcal{A} \rightarrow \mathbb{R}_{+}$ such that Assumption \ref{as-01} is satisfied and \begin{equation}\label{eq27} \forall \mathbf{c} \in \mathcal{C}, \quad \Phi_j(\mathbf{c}) = \varphi(\mathbf{c}). \end{equation} If $\mathcal{X}$ is a finite set, then $j$ is unique. If $\mathcal{X}$ is infinite, then all other functions $k:\mathcal{A} \rightarrow \mathbb{R}_{+}$ fulfilling Assumption \ref{as-01} and \eqref{eq27} can be written in the form \begin{equation*} k(z \to z') = \exp( \phi(z') - \phi(z) ) j(z \to z'), \end{equation*} where $h = \exp(\phi)$ is a positive solution to \begin{equation*} \forall z \in \mathcal{X}, \quad \sum_{z' : z \to z'} j(z \to z') h(z') = v h(z) \end{equation*} for some constant $v>0$.
\end{lemma}
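The eigenvalue reformulation in the two lemmas above can be checked on a minimal example. The following is a sketch for a two-point state space; the values $a,b$ are illustrative assumptions, not part of the statement. Take $\mathcal{X}=\{1,2\}$ and set $a:=j(1\to 2)$, $b:=j(2\to 1)$. The eigenvalue equation reads

```latex
a\,h(2) = v\,h(1), \qquad b\,h(1) = v\,h(2)
\quad\Longrightarrow\quad
v=\sqrt{ab}, \qquad \frac{h(2)}{h(1)}=\sqrt{\tfrac{b}{a}},
```

so that $k(1\to 2) = \frac{h(2)}{h(1)}\,a = \sqrt{ab}$ and $k(2\to 1) = \frac{h(1)}{h(2)}\,b = \sqrt{ab}$: the reweighted intensity $k$ has constant total jump rate $v=\sqrt{ab}$, while the edge product $k(1\to 2)\,k(2\to 1)=ab$ is unchanged, as the constraints require.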
Here is the proof of Lemma \ref{constspeedconstrlemma}.
\begin{proof} We only show that we can construct $j:\mathcal{A} \rightarrow \mathbb{R}_{+}$ such that \eqref{eq27} is satisfied. The proof that $j$ can be turned into an intensity $k$ satisfying Assumption \ref{as-01} follows Lemma \ref{faceexistence} with almost no change. For any $\mathbf{e}=(x\to y \to x) \in \mathcal{E} \setminus \mathcal{E}^*$ (i.e. $\mathbf{e} \subseteq \mathcal{T}$), we choose exactly one of the two arcs, say $(x \to y)$, and set the value of $j(x \to y)$ to an arbitrary positive value. Then we set $j(y \to x) = \frac{\varphi(\mathbf{e})}{j(x \to y)}$. Next, for any $\mathbf{e} \in \mathcal{E}^*$ we let $x \to y$ be the arc of $\mathbf{e}$ such that $\mathbf{c}_{x \to y} = \mathbf{c}_{\mathbf{e}}$. We observe that $\mathbf{c}_{x \to y}$ can be written as $( x \to y \to \mathbf{p}_{y \to x})$ for some simple walk $\mathbf{p}_{y \to x}$ from $y$ to $x$ whose arcs are in $\mathcal{T}$. The value of $j$ has already been set on $\mathbf{p}_{y \to x}$: therefore we can set $ j(x \to y)$ as $\frac{\varphi(\mathbf{c}_{\mathbf{e}})}{ \Phi_j(\mathbf{p}_{y \to x})}$. Finally we set $j$ on $y\to x$ by $j(y \to x ):= \varphi(\mathbf{e})/j(x \to y)$. It is then easy to check that the intensity $j$ so constructed satisfies \eqref{eq27}. \end{proof}
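The construction in the proof can be illustrated on the smallest non-tree example, a triangle; the labels below are assumptions chosen for illustration only. Let $\mathcal{X}=\{1,2,3\}$ with all three edges present, let $\mathcal{T}$ consist of the edges $\{1,2\}$ and $\{2,3\}$, so that $\mathcal{E}^*$ contains only the chord $\mathbf{e}=\{1,3\}$, with associated cycle $\mathbf{c}_{\mathbf{e}} = (1 \to 3 \to 2 \to 1)$, and write $\varphi(\mathbf{e}_{12})$, $\varphi(\mathbf{e}_{23})$ for the values of $\varphi$ on the 2-cycles through the tree edges. Following the proof:

```latex
j(1 \to 2) := 1, \qquad j(2 \to 1) := \varphi(\mathbf{e}_{12}), \qquad
j(2 \to 3) := 1, \qquad j(3 \to 2) := \varphi(\mathbf{e}_{23}),
```
```latex
j(1 \to 3) := \frac{\varphi(\mathbf{c}_{\mathbf{e}})}{\Phi_j(\mathbf{p}_{3 \to 1})}
            = \frac{\varphi(\mathbf{c}_{\mathbf{e}})}{\varphi(\mathbf{e}_{23})\,\varphi(\mathbf{e}_{12})},
\qquad
j(3 \to 1) := \frac{\varphi(\mathbf{e})}{j(1 \to 3)}.
```

One checks directly that $\Phi_j(\mathbf{c}_{\mathbf{e}}) = j(1\to 3)\, j(3\to 2)\, j(2\to 1) = \varphi(\mathbf{c}_{\mathbf{e}})$ and $\Phi_j(\mathbf{e}) = j(1\to 3)\, j(3\to 1) = \varphi(\mathbf{e})$, as claimed.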
\begin{center} \textbf{Acknowledgments} \end{center} The author wishes to thank Paolo dai Pra and Sylvie Roelly for having introduced him to the subject, and for their advice during the preparation of the manuscript. Many thanks to Christian L\'eonard, Cyril Roberto and Max Von Renesse for insightful discussions.
\appendix
\section{ }\label{sec:A}
The appendix is organized as follows: we first recall the main tools used in the proof of Theorem \ref{accordeon}. Then we prove the two Lemmas \ref{p461} and \ref{ll}, which are needed in the proof of Theorem \ref{t70}. Finally, we prove Lemma \ref{lastlemma}, which is part of the proof of Theorem \ref{squarelatticebound}.
\subsubsection*{About Theorem \ref{accordeon}.} We recall two of the main ingredients used in the proof. The first one is the integration by parts (duality) formula proved in \cite[Th.4.1]{RT05} to characterize bridges of Brownian diffusions. Here, we report a slightly simplified version of the formula, which still suffices for the purposes of this paper. \begin{theorem}[Integration by parts formula]\label{IBPF} Let $\mathbb{P}^x$ be the law of \begin{equation*} d X_t = - \nabla U (t,X_t)dt + dB_t, \quad X_0 =x. \end{equation*} Let $\mathbb{Q}$ be a probability measure on $C([0,1],\mathbb{R}^d)$ satisfying the regularity hypotheses (A0), (H1), (H2) of Theorem 4.1 in \cite{RT05}. Then $\mathbb{Q}$ is the bridge $\mathbb{P}^{xy}$ if and only if $\mathbb{Q}((X_0,X_1)=(x,y))=1$ and the formula \begin{equation*} \mathbb{E}_{\mathbb{Q}} \Big( \mathcal{D}_{h} F \Big) =\mathbb{E}_\mathbb{Q} \left( F \int_{0}^{1} \dot{h}(t) \cdot d X_t \right) + \mathbb{E}_\mathbb{Q} \left( F \int_{0}^1 \nabla U (t,X_t) \cdot h(t) dt \right) \end{equation*} holds for any simple functional $F$, and any direction of differentiation $h$ which is continuous, piecewise linear and satisfies the loop condition \begin{equation*} h(1)=h(0)=0. \end{equation*} \end{theorem} Let us recall that by a simple functional we mean a functional that can be written in the form $\varphi(X_{t_1},\dots,X_{t_k})$ for some $\mathcal{C}^{\infty}_{b}(\mathbb{R}^{d \times k})$ function $\varphi$ and finitely many $t_1,\dots,t_k$. The directional Fr\'{e}chet derivative $\mathcal{D}_h F$ of the simple functional $F$ is defined as usual: \begin{eqnarray*} \mathcal{D}_h F &= &\lim_{\varepsilon \rightarrow 0 } \frac{ \varphi(X_{t_1} + \varepsilon h(t_1),\dots,X_{t_k}+ \varepsilon h(t_k) ) -\varphi(X_{t_1} ,\dots,X_{t_k})}{\varepsilon}\\ &=& \sum_{j=1}^{k} \sum_{i=1}^d \partial_{x^j_i} \varphi(X_{t_1},\dots,X_{t_k}) h_i (t_j) \end{eqnarray*}
The second is a theorem proved in \cite{BrLieb76} that gives a quantitative version of the statement that marginalization preserves log concavity. We follow the presentation of \cite{Simon2011}. \begin{theorem}[Preservation of strong log concavity]\label{thm:logconcpres} Let $\Sigma(\cdot)$ be a positive quadratic form on $\mathbb{R}^{m+n}$. Write $w=(v,v')$, with $w \in \mathbb{R}^{m+n}$, $v \in \mathbb{R}^m$, $v' \in \mathbb{R}^n$. Let $F: \mathbb{R}^{m+n} \rightarrow \mathbb{R}_{+}$, $w \mapsto F(w)$, be jointly log concave on $\mathbb{R}^{m+n}$ and define on $\mathbb{R}^n$, \begin{equation}\label{eq50} G(v') = \frac{\int_{\mathbb{R}^m} F(w) \exp(- \Sigma (w) ) dv}{\int_{\mathbb{R}^m} \exp(- \Sigma( w ) ) dv }. \end{equation} Then $v' \mapsto G(v')$ is log concave. \end{theorem} For the proof we refer to \cite[Theorem 13.3, p. 204]{Simon2011} or \cite[Theorem 4.3]{BrLieb76}.
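As a sanity check of Theorem \ref{thm:logconcpres}, one may work out a one-dimensional instance; the specific choices here are illustrative assumptions. Take $m=n=1$, $F(v,v') = \exp(-(v-v')^2)$, which is jointly log concave, and $\Sigma(w) = \frac{1}{2}(v^2+v'^2)$. Completing the square in the Gaussian integrals gives

```latex
\int_{\mathbb{R}} e^{-(v-v')^2 - \frac{v^2+v'^2}{2}}\, dv
   = \sqrt{\tfrac{2\pi}{3}}\; e^{-\frac{5}{6} v'^2},
\qquad
\int_{\mathbb{R}} e^{-\frac{v^2+v'^2}{2}}\, dv = \sqrt{2\pi}\; e^{-\frac{v'^2}{2}},
\qquad
G(v') = \frac{1}{\sqrt{3}}\; e^{-\frac{1}{3} v'^2},
```

so $\log G(v') = -v'^2/3 + \mathrm{const}$ is indeed concave, in agreement with the theorem.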
\subsubsection*{Proof of Lemma \ref{p461}}
\begin{lemma}\label{p461} Let $h$ be defined by \eqref{e321} and $\psi$ be as in \eqref{e326}. Then $$ \forall \tau >0 , \quad \psi_{\tau} \leq h_{\tau} $$ \end{lemma} \begin{proof} Consider $\varepsilon>0$ and define $h^{\varepsilon}_{\tau}$ as the unique solution of \begin{equation} {\tau} \partial_{\tau} h^{\varepsilon}_{\tau}- h^{\varepsilon}_{\tau} = {\tau} (\exp({\tau})-1), \quad \partial_{\tau} h^{\varepsilon}_0 = \rho(f) + \varepsilon \end{equation}
Then $\eta^{\varepsilon}_0:=\psi_0 - h^{\varepsilon}_0=0$ satisfies: \begin{equation*} \tau \partial_{\tau}\eta^{\varepsilon}_{\tau} - \eta^{\varepsilon}_{\tau} \leq 0, \quad \partial_{\tau} \eta^{\varepsilon}_0 =- \varepsilon \end{equation*}
Since $\eta^{\varepsilon}$ is continuously differentiable, we have that $T>0$, where $T$ is defined as \begin{equation} T:= \inf \{\tau >0 : \partial_{\tau} \eta^{\varepsilon}_{\tau}=0\} \end{equation} Assume that $T<+\infty$. Then, at $T$, we have: \begin{equation} T \underbrace{\partial_{\tau} \eta^{\varepsilon}_T}_{=0} - \eta^{\varepsilon}_T \leq 0 \Rightarrow \eta^{\varepsilon}_T \geq 0 \end{equation} But this is impossible since $\eta^{\varepsilon}_0=0$ and $\partial_{\tau} \eta^{\varepsilon}_{\tau}<0$ for all $\tau < T$, which forces $\eta^{\varepsilon}_T<0$. Therefore $\partial_{\tau} \eta^{\varepsilon}_{\tau} <0 $ for all $\tau>0$. Since $\eta^{\varepsilon}_0=0$, we also have that $\eta^{\varepsilon}_{\tau}<0$, i.e. $\psi_{\tau}<h^{\varepsilon}_{\tau}$, for all $\tau>0$. Therefore, as the choice of $\varepsilon$ was arbitrary: \begin{equation*} \forall \tau>0, \quad \psi_{\tau} \leq \inf_{\varepsilon >0} h^{\varepsilon}_{\tau}= h_{\tau} \end{equation*} \end{proof}
\subsubsection*{Proof of Lemma \ref{ll}} \begin{lemma}\label{ll} \begin{equation*}
\mathbb{E}_{ \pi_{\Phi} } \left( g \right)- (\Phi + \frac{1}{k+1}\Phi^{1/(k+1)}) \leq \mathbb{E}_{\rho_{\Phi}} \left( f \right) \end{equation*} \end{lemma} \begin{proof} By construction of $g$, see \eqref{e380}, we can w.l.o.g. assume that $f(0)=g(0)=0$. By \eqref{e120}, we have that $\rho_{\Phi}(n) \leq \frac{\Phi}{n} \rho_{\Phi}(n-1)$\footnote{Actually, the quotient $\rho_{\Phi}(n)/\rho_{\Phi}(n-1)$ is of the order $1/n^{k+1}$. However, here it suffices to consider $1/n$.}. Therefore, using the fact that $f$ is 1-Lipschitz and $f(0)=0$: \begin{equation*} \mathbb{E}_{\rho_{\Phi}}(f) \geq - \sum_{n=1}^{+\infty} n \rho_{\Phi}(n) \geq -\Phi \sum_{n=1}^{+\infty} \rho_{\Phi}(n-1) \geq - \Phi. \end{equation*} By construction, $g$ is $1/(k+1)$-Lipschitz. Moreover, it is easy to see from the definition of $\pi_{\Phi}$ given at \eqref{e330} that we have: $\pi_{\Phi}(n) \leq \frac{ \Phi^{1/(k+1)}}{n } \pi_{\Phi}(n-1) $. Combining these facts: \begin{equation*} \mathbb{E}_{\pi_{\Phi}}(g) \leq \frac{1}{k+1} \sum_{n=1}^{+ \infty} n \pi_{\Phi}(n) \leq \frac{\Phi^{1/(k+1)} } {k+1} \sum_{n=1}^{+ \infty} \pi_{\Phi}(n-1) \leq \frac{ \Phi^{1/(k+1)} }{k+1}
\end{equation*}
The proof is complete. \end{proof}
\subsubsection*{Lemmas \ref{countingest} and \ref{lastlemma} } \begin{comment} \begin{lemma}\label{lastlemma} Let $\mathbb{S}^{x}_{\lambda}$ be the constant speed random walk on the square lattice defined by: $$ j(x \to x+ v_1) = j(x \to x+ v_2) \equiv \lambda $$ Then: \begin{equation}\label{e105} \log {}^{\lambda}\mathbb{S}^{xx}( d(X_t,\mathbb{E}_{{}^{\lambda}\mathbb{S}^{xx}}(X_t) ) \geq R) = -2 R \log R + [\log(4 \lambda^2 t(1-t) ) +2 ]R + o(R) \end{equation} \end{lemma} \begin{proof} It is rather easy to see that $\mathbb{E}_{{}^{\lambda}\mathbb{S}^{xx}}(X^1_t) = x_1$ and $\mathbb{E}_{{}^{\lambda}\mathbb{S}^{xx}}(X^2_t) = x_2$ so that \begin{equation*}
\log {}^{\lambda}\mathbb{S}^{xx}( d(X_t,\mathbb{E}_{{}^{\lambda}\mathbb{S}^{xx}}(X_t) ) \geq R) = {}^{\lambda}\mathbb{S}^{00}( d(X_t, 0 ) \geq R) ={}^{\lambda}\mathbb{S}^{00}( | X^1_t | + |X^2_t| \geq R) \end{equation*} Since under $\mathbb{P}^{00}$ $X^1_t$ and $X^2_t$ are independent: \begin{equation*}
^{\lambda}\mathbb{S}^{00}( | X^1_t| + | X^2_t | \geq R) = \sum_{k =R}^{+\infty}\sum_{i+j=k} \mathbb{Q}^{00}( |X_t| =i) \mathbb{Q}^{00}( |X_t| =j) \end{equation*} where $\mathbb{Q}^{00}$ is the bridge of a one dimensional random walk on $\mathbb{Z}$ with jump rates constantly equal to $\lambda$. The transition probabilities of this walk are known explicitly. We have: \begin{equation*}
\mathbb{Q}^{0}( X_t =i) = \exp(- 2 \lambda t)\sum_{k=i}^{+\infty} \frac{(\lambda t)^{2k-i}}{k!(k-i)!} \propto I_{i}(2\lambda t) \end{equation*} where $I_{n}(z)$ is the modified Bessel function of the first kind, see \cite[sec 9.6]{abramowitz1964handbook}. Therefore: \begin{equation*} ^{\lambda}\mathbb{S}^{00}( d(X_t, 0 ) \geq R) \propto \sum_{k=R}^{+ \infty} \sum_{i+j=k} I_{i}(2\lambda t) I_{i}(2\lambda (1-t)) I_{j}(2\lambda t) I_{j}(2\lambda (1-t)) \end{equation*}
As $n \uparrow + \infty$ the asymptotic behavior of $I_{n}(z)$ is known (see again \cite{abramowitz1964handbook}): \begin{equation*} I_n(z) \sim \frac{1}{\sqrt{2 \pi \, n}} \exp \Big( -n \log n + [ \log(z) - \log(2) + 1] n \Big) \end{equation*} Therefore there exist positive constants $a,b,\varepsilon$ such that for any $z$ fixed: \begin{equation}\label{e106} a \frac{1}{\sqrt{n}+\varepsilon}\exp \Big( -n \log n + [ \log(z) - \log(2) + 1] n \Big) \leq I_n(z) \leq b \exp \Big( -n \log n + [ \log(z) - \log(2) + 1] n \Big) \end{equation} Let us define $f(k):=\sum_{i+j=k} I_{i}(2\lambda t) I_{i}(2\lambda (1-t)) I_{j}(2\lambda t) I_{j}(2\lambda (1-t))$. We first show that $\log(f(R))$ can be expanded as in \eqref{e105}, and then that $\log (\frac{1}{f(R)}\sum_{k=R}^{+\infty} f(k) ) \leq o(R)$, which gives the conclusion. From \eqref{e106} and the definition of $f(R)$ we deduce that for some constants $a',b',\varepsilon'$: \begin{eqnarray}\label{e109}
&{}& a'\frac{1}{R + \varepsilon'} \exp\big([ \log(4\lambda^2 t(1-t)) - 2\log(2) + 2] R\big) \times \nonumber \\
&{}& \max_{i+j=R} \exp \big( -2 i \log i - 2j \log j \big) \leq f(R) \nonumber \\ &\leq & \nonumber b' (R+1) \exp\big([ \log(4\lambda^2 t(1-t)) - 2\log(2) + 2] R\big) \times \\
&{}&\max_{i+j=R} \exp \big( -2 i \log i - 2j \log j \big) \end{eqnarray} We observe that: \begin{equation}\label{e130} \max_{i+j=R} \big( -2 i \log i - 2j \log j \big) = -2R \log R + 2 \log(2) R, \end{equation}
which yields the conclusion. Indeed, from the convexity of $x \log x$ it is easy to see that the maximum is achieved at $i=j=R/2$. With a simple computation \eqref{e130} is then obtained. Using \eqref{e109} and \eqref{e130} we obtain that $f(R)$ can be written as in \eqref{e105}. Now we show that $\log( \frac{1}{f(R)}\sum_{k=R}^{+ \infty} f(k) )$ is $o(R)$, which concludes the proof. To this aim, let us observe that, using again \eqref{e109} and the convexity of $x \log x$ : \begin{eqnarray*} \frac{1}{f(R)}\sum_{k=R}^{+ \infty} f(k) &\leq &\frac{b'}{a'}( R+\varepsilon') \sum_{k=R}^{+ \infty} (k+1) \exp(-2 (k \log k -R \log R) + [\log(4 \lambda^2t(1-t))+2 ] (k-R) \,)\\ &\leq & \frac{b'}{a'}( R+\varepsilon') \, \sum_{k=R}^{+ \infty}(k+1) \exp(-2 (k-R) \log (k-R) + [\log(4 \lambda^2t(1-t))+2 ] (k-R) \,)\\ &=& \frac{b'}{a'}( R+\varepsilon') \, \sum_{k'=0}^{+ \infty} (k'+R+1) \exp(-2 (k' \log k' ) + [\log(4 \lambda^2t(1-t))+2 ] k' \,)\ \end{eqnarray*} The last term has polynomial growth and then its logarithm is $o(R)$. This concludes the proof. \end{proof} \end{comment}
\begin{lemma}\label{countingest} Let $(\mathcal{X},\to)$ be a graph satisfying Assumption \ref{as-03} and let ${}\bbS^x_{\lambda}$ be the simple random walk defined at \eqref{e93}. Then \begin{equation*} \log {}\bbS^{xx}_{\lambda}(d(X_t,x) \geq R) \leq -2R\log R + R [2 + \log (\lambda^2 t(1-t) ) + 3 \log(\delta - 1) ] + o(R) \end{equation*} \end{lemma} \begin{proof} Let $x,y \in \mathcal{X}$. We first show that for some $c_1>0$: \begin{equation}\label{eq203} \bbS^{x}_{\lambda}(X_t = y) \leq c_1 \frac{(\lambda t)^{d(x,y)}}{d(x,y)!} (\delta-1)^{d(x,y)} \end{equation} To this aim we define $\mathbf{W}_{k}$ as the set of walks of length $k$ which begin at $x$ and end at $y$. We have, by conditioning on the total number of jumps up to time $t$: \begin{equation*} {}\bbS^{x}_{\lambda}(X_t = y) = \exp(-\lambda t) \sum_{k= d(x,y)}^{+\infty} \frac{(\lambda t)^k }{k!} \sum_{\mathbf{w} \in \mathbf{W}_{k} } \lambda^{-k} \Phi_j(\mathbf{w}). \end{equation*} It is rather easy to see that $ \lambda^{-k} \Phi_j(\mathbf{w}) \leq 1$. Moreover, the cardinality of $\mathbf{W}_{k} $ can be bounded above by $\delta (\delta-1)^{k-2}$. Using these two observations: \begin{equation}\label{eq200} \bbS^{x}_{\lambda}(X_t = y) \leq \exp(-\lambda t)\sum_{k= d(x,y)}^{+\infty} \frac{(\lambda t)^k }{k!} \delta (\delta-1)^{k-2} \end{equation} A standard argument based on Stirling's formula shows that the sum appearing in \eqref{eq200} can be controlled with its first summand, i.e. there exists a constant $c_1$ independent of $d(x,y)$ such that: \begin{equation}\label{eq201} \bbS^{x}_{\lambda}(X_t = y) \leq c_1 \frac{(\lambda t)^{d(x,y)} }{d(x,y)!} \delta (\delta-1)^{d(x,y)-2} \end{equation} which proves \eqref{eq203}. Since there cannot be more than $(\delta-1)^{R}$ vertices at distance $R$ from $x$, we get, using \eqref{eq201} twice: \begin{eqnarray*}
\bbS^{xx}_{\lambda}(d(X_t,x) = R)= \frac{1}{\bbS^x_{\lambda}(X_1=x)} \sum_{y: d(x,y)=R} {}\bbS^{x}_{\lambda}(X_t = y) {}\bbS^{y}_{\lambda}(X_{1-t} = x )\\
\leq c_2 \frac{(\lambda^2 t(1-t))^{R} }{R!^2} (\delta-1)^{3R} \end{eqnarray*} for some $c_2>0$. Therefore: \begin{equation*}
\bbS^{xx}_{\lambda}(d(X_t,x) \geq R) \leq c_2 \sum_{k=R}^{+\infty} \frac{(\lambda^2 t(1-t))^{k} }{k!^2} (\delta-1)^{3k}
\end{equation*} Using again a standard argument with Stirling's formula, as we did in \eqref{eq200}, we obtain: \begin{equation*}
\bbS^{xx}_{\lambda}(d(X_t,x) \geq R) \leq c_3 \frac{(\lambda^2 t(1-t))^{R} }{R!^2} (\delta-1)^{3R} \end{equation*} for some $c_3>0$. The conclusion follows from Stirling's formula, which allows us to write $\log R! = R \log R - R + o(R)$. \end{proof}
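For completeness, the final Stirling step can be spelled out (a routine expansion, recorded here as a sketch): taking logarithms in the last bound and using $\log R! = R\log R - R + o(R)$ twice,

```latex
\log\Big( c_3\, \frac{(\lambda^2 t(1-t))^{R}}{R!^{2}}\, (\delta-1)^{3R} \Big)
  = R\log\big(\lambda^2 t(1-t)\big) + 3R\log(\delta-1) - 2\log R! + O(1)
  = -2R\log R + R\big[\, 2 + \log\big(\lambda^2 t(1-t)\big) + 3\log(\delta-1) \,\big] + o(R).
```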
\begin{lemma}\label{lastlemma} Let $\mathbb{S}^{x}_{\lambda}$ be the constant speed random walk on the square lattice defined by: $$ j(x \to x+ v_1) = j(x \to x+ v_2) \equiv \lambda $$ Then: \begin{equation}\label{e105} \log {}\mathbb{S}^{xx}_{\lambda}( d(X_t,\mathbb{E}_{{}\mathbb{S}^{xx}_{\lambda}}(X_t) ) \geq R) = -2 R \log R + [\log(4 \lambda^2 t(1-t) ) +2 ]R + o(R) \end{equation} \end{lemma} Lemma \ref{lastlemma} is not directly implied by Lemma \ref{countingest}. However, its proof can be obtained along the same lines as the proof of Lemma \ref{countingest}, using the exact computations that can be performed for the square lattice. A detailed proof is available at...
\Addresses
\end{document}
\begin{document}
\title{A characterization of centrally symmetric convex bodies in terms of visual cones} \begin{abstract} In this work we prove the following result: Let $K$ be a strictly convex body in the Euclidean space $\mathbb{R}^n, n\geq 3$, and let $L$ be a hypersurface, which is the image of an embedding of the sphere $\mathbb{S}^{n-1}$, such that $K$ is contained in the interior of $L$. Suppose that, for every $x\in L$, there exists $y\in L$ such that the support double-cones of $K$ with apexes at $x$ and $y$, differ by a translation. Then $K$ and $L$ are centrally symmetric and concentric. \end{abstract}
\section{Introduction} Let $K\subset \mathbb{R}^{n}$ be a convex body, i.e., a compact and convex set with non-empty interior, $n \geq 3$, and let $x\in \mathbb{R}^{n}\setminus K$. We call the set
\[ \bigcup _{y\in K} \operatorname*{aff} \{x,y \} \] the \textit{solid double-cone} generated by $K$ and $x$, where $\operatorname*{aff}\{x,y\}$ denotes the affine hull of $x$ and $y$. The boundary of the solid double-cone generated by $K$ and $x$ will be called the support double-cone of $K$ with apex at $x$ and it will be denoted by $C_x$. In what follows, we shall use the names cone or support cone, by simplicity, to refer to the double cone and double support cone, respectively.
A classical problem in convexity is to determine properties of a convex body $K\subset \mathbb{R}^{n}$ from the information of its orthogonal projections. For instance, in dimension 3 one can prove that if all the orthogonal projections of a body $K$ are circles, then $K$ is a Euclidean ball. One can see this problem from the following perspective: consider the family of cylinders where $K$ is inscribed and impose a condition on a particular section of each of them, obtained with a hyperplane perpendicular to the lines which generate the cylinder. In our example, this means that we have a convex body $K\subset \mathbb{R}^3$ such that for every cylinder $\Omega$, where $K$ is inscribed, the section $H\cap \Omega$ is a circle, where $H$ is a plane orthogonal to the lines which determine $\Omega$.
We formulate the following general problem.
\textbf{Problem 1.} \emph{For a given subgroup $G$ of the general linear group $\operatorname*{GL}(\mathbb{R},n)$, to determine the convex bodies $K\subset \mathbb{R}^{n}$, $n\geq 3$, such that for every couple of different cylinders $\Lambda, \Gamma$, circumscribed to $K$, there exists an element $\Phi \in G$ such that $\Phi(\Lambda)= \Gamma$}.
Kuzminyh \cite{Kuzminyh} proved, for $n=3$, that the assumption $G=O(\mathbb{R},3)$ implies that $K$ is a ball, where $O(\mathbb{R},3)$ is the real orthogonal group. On the other hand, if $K\subset \mathbb{R}^{n}$, $n\geq 3$, is centrally symmetric, then, in virtue of the Aleksandrov Uniqueness Theorem, it follows that $K$ is a ball, since all its projections have the same volume. Recently, L. Montejano \cite{Montejano} has considered the case where $G$ is the affine subgroup $A(\mathbb{R},n)$ and he has obtained that $K$ is an ellipsoid.
In virtue that the cylinders are cones with apexes at the infinity, the original problem mentioned at the beginning of this introduction can be formulated in the following manner: \textit{To determine properties of convex bodies imposing conditions on the sections of cones where $K$ is inscribed and whose apexes are contained in a hyperplane}. Naturally, we can replace in the aforesaid problem the condition that the set of apexes is situated in a hyperplane by the condition that they are contained in a hypersurface $S$. In particular, we can assume that $S$ is the boundary of a convex body $M\subset \mathbb{R}^{n}$ such that $K\subset \operatorname*{int} M$. An interesting example of this type is the well known Matsuura's Theorem \cite{Matsuura}:
\textbf{Matsuura's theorem.} Let $K\subset\mathbb R^3$ be a convex body and let $S$ be a closed convex surface which contains $K$ in its interior. If the support cone of $K$ from every point in $S$ is a right circular cone then $K$ is a Euclidean ball.
We are interested in the following problem for cones.
\textbf{Problem 2.} \emph{Given a subgroup $G$ of the general linear group $\operatorname*{GL}(\mathbb{R},n)$ and a hypersurface $S$ which is the image of an embedding of $\mathbb{S}^{n-1}$, to determine the convex bodies $K\subset \mathbb{R}^{n}$, $n\geq 3$, such that for every couple of different cones $\Lambda, \Gamma$, circumscribed to $K$ and with apexes in $S$, there exists an element $\Phi \in G$ such that $\Phi(\Lambda)= \Gamma$.}
A particularly interesting case of Problem 2 is when $G$ is equal to $O(\mathbb{R},n)$, i.e., \textit{we know that all the cones which circumscribes $K$ and with apexes in $S$ are congruent}. Very recently, S. Myroshnychenko \cite{Myros} has proved the following related result: let $P$ and $Q$ be polytopes contained in the interior of the ball $B_r(n)$, $n\geq 3$, and assume that from every point in the sphere $r\mathbb S^{n-1}$ the support cones of $P$ and $Q$ are congruent, then $P=Q$.
We denote by $T(\mathbb{R},n)$ the family of translations in $\mathbb{R}^{n}$. The main result of this work was inspired by Problem 2; however, we involve $T(\mathbb{R},n)$, which is not a subgroup of $\operatorname*{GL}(\mathbb{R},n)$, although its elements are isometries of $\mathbb{R}^{n}$. Our main theorem claims that if $K \subset \mathbb{R}^n$, $n\geq 3$, is a strictly convex body and $L$ is a hypersurface, which is the image of an embedding of the sphere $\mathbb{S}^{n-1}$, $K \subset \operatorname*{int} L$, and for every $x\in L$ there exist $y\in L$ and $\Phi \in T(\mathbb{R},n)$ such that $C_y=\Phi(C_x)$, then $K$ and $L$ are centrally symmetric and concentric.
More precisely, we are going to prove the following theorem.
\begin{theorem}\label{rosa} Let $K\subset \mathbb{R}^{n},$ $n\geq 3,$ be a strictly convex body and let $L$ be a hypersurface which is the image of an embedding of $\mathbb{S}^{n-1}$ such that $K\subset \operatorname*{int} L$. Suppose that for every $x\in L$ there exist two points $y\in L$ and $p\in \mathbb{R}^{n}$ such that \begin{eqnarray}\label{apretada} C_y=p+C_x. \end{eqnarray} Then $K$ and $L$ are centrally symmetric and concentric. \end{theorem}
\section{Proofs and auxiliary results} Let $\mathbb{R}^{n}$ be the Euclidean space of dimension $n$ endowed with the usual inner product $\langle \cdot, \cdot\rangle : \mathbb{R}^{n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}$. We take an orthogonal coordinate system $(x_1,\dots,x_{n})$ for $\mathbb{R}^{n}$. Let
$B_r(n)=\{x\in \mathbb{R}^{n}: ||x||\leq r\}$ be the $n$-ball of radius $r$ centered at the origin, and let $r\mathbb{S}^{n-1}=\{x\in \mathbb{R}^{n}: ||x|| = r\}$ be its boundary. For $u \in \mathbb{S}^{n-1}$ and a non-negative $s\in \mathbb{R}$, we denote by $\Pi(u,s)$ the hyperplane $\{x\in \mathbb{R}^{n} | \langle u, x\rangle = s \}$ whose unit normal vector is $u$ and by $\Pi^*(u,s)$ the open half-space
$\{x\in \mathbb{R}^{n} | \langle u, x\rangle <s \}$. In particular, $\Pi(x,0)$ is denoted by $x^{\perp}$. For the points $x,y \in \mathbb{R}^{n}$ we will denote by $\operatorname*{aff}\{x,y\}$ the affine hull of $x$ and $y$ and by $[x,y]$ the line segment with endpoints $x$ and $y$. A \textit{convex hypersurface} is the boundary of a convex body in $\mathbb{R}^{n}$. As usual, $\operatorname*{int} K$ and $\operatorname*{bd} K$ will denote the interior and the boundary of the convex body $K$. An \textit{embedding} of $\mathbb{S}^{n-1}$ in $\mathbb{R}^{n}$ is a map $\alpha: \mathbb{S}^{n-1} \rightarrow \mathbb{R}^{n}$ which is a homeomorphism onto its image.
Let $M \subset \mathbb R^n$ be a set and let $q \in \mathbb R^n$ be a point. The set $M' = 2q-M$ is said to be centrally symmetric to $M$ with respect to the center $q$. Let $S:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ be the map such that $x\mapsto -x$. The following is an easy consequence of the definition of centrally symmetric sets.
\begin{lemma}\label{cosota} Let $\Lambda$ be the image of an embedding $\beta: \mathbb{S}^{n-2}\rightarrow \mathbb{R}^{n}$. Then $S(\Lambda)$ is a translate of $\Lambda$ if and only if $ \Lambda$ is centrally symmetric. \end{lemma}
\begin{lemma}\label{dorron} For every $x\in L$, the set $\Lambda=C_x\cap C_y$ is the image of an embedding of $\mathbb{S}^{n-2}$, where $y$ satisfies (\ref{apretada}). On the other hand, the set $\Lambda$ is centrally symmetric and, if we choose the center of $\Lambda$ as the origin of coordinates, then \begin{eqnarray}\label{salsa} -C_x=C_y. \end{eqnarray} \end{lemma}
\emph{Proof.} It is easy to see that for a strictly convex body $K\subset \mathbb{R}^{n}$ and a point $x\in \mathbb{R}^{n} \setminus K$ the set $C_x\cap \operatorname*{bd} K$, the \textit{graze} of $K$ with respect to $x$, is the image of an embedding of $\mathbb{S}^{n-2}$. On the other hand, we can define a map $\phi:C_x\cap \operatorname*{bd} K \rightarrow \Lambda$ which is bijective and bi-continuous, i.e., a homeomorphism between $C_x\cap \operatorname*{bd} K$ and $\Lambda$. Thus $\Lambda$ is the image of an embedding of $\mathbb{S}^{n-2}$.
Now, note that every double-cone is a centrally symmetric set with respect to its apex. Since there is $p\in\mathbb R^n$ such that $C_y=p+C_x$, it follows that $C_y$ is the symmetric image of $C_x$ with respect to the midpoint of $[x,y]$, so $\Lambda =C_x\cap C_y$ is a centrally symmetric set with center at $\frac{x+y}{2}.$ If we choose the origin of coordinates at $\frac{x+y}{2}$, then $C_y=-C_x$.
$\Box$
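The central step of the proof above, namely that a translate of a double-cone is also its point reflection, can be written out explicitly. Writing $\sigma_m(z) = 2m - z$ for the reflection about $m = \frac{x+y}{2}$, using that $C_x$ is centrally symmetric with respect to its apex $x$ (so $2x - C_x = C_x$), and noting that comparing apexes in $C_y = p + C_x$ gives $p = y - x$, one has

```latex
\sigma_m(C_x) = (x+y) - C_x = (y-x) + (2x - C_x) = (y-x) + C_x = p + C_x = C_y,
```

so $C_x$ and $C_y$ are symmetric to each other with respect to $m$; consequently $\sigma_m(\Lambda) = \sigma_m(C_x) \cap \sigma_m(C_y) = C_y \cap C_x = \Lambda$, i.e., $\Lambda$ is centrally symmetric with center $m$.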
\begin{lemma}\label{punsada} Let $x,y \in L$ such that (\ref{apretada}) holds. Let $\Pi_1,$ $\Pi_2$, $\Gamma_1,$ $\Gamma_2$, be supporting hyperplanes of $K$ such that $x\in \Pi_1\cap \Pi_2$, $\Gamma_i$ is parallel to $\Pi_i,$ $i=1,2,$ and $\Pi_1\cap \Pi_2$ is not a supporting $(n-2)$-dimensional plane of $L$. Denote by $k_i$ the contact point of $\Pi_i$ with $\operatorname*{bd} K, i=1,2$ and by $l_i$ the contact point of $\Gamma_i$ with $\operatorname*{bd} K, i=1,2$. Then the segments $[k_1,k_2]$ and $[l_1,l_2]$ are parallel. \end{lemma}
\begin{figure}\label{suave}
\end{figure}
\emph{Proof.} Let $\bar{x}$ be any point in $\Pi_1\cap \Pi_2\cap L$, with $\bar{x}\neq x$. In virtue of the hypothesis there exist $\bar{y}\in L$ and $q\in \mathbb R^n$ such that \begin{eqnarray}\label{cuartito} C_{\bar{y}}=q+C_{\bar{x}}. \end{eqnarray} By Lemma \ref{dorron} and relation (\ref{salsa}), we conclude that $y\in \Gamma_1\cap \Gamma_2$ and by (\ref{apretada}) it follows that \[ \operatorname*{aff}\{y,l_1\}=p+\operatorname*{aff}\{x,k_1\} \textrm{ }\textrm{ and }\textrm{ }\operatorname*{aff}\{y,l_2\}=p+\operatorname*{aff}\{x,k_2\}, \] i.e., \begin{eqnarray}\label{vivaldi} \operatorname*{aff}\{y,l_1,l_2\}=p+\operatorname*{aff}\{x,k_1,k_2\} \end{eqnarray} (see Fig. \ref{suave}). Analogously, since $\bar{y}\in \Gamma_1\cap \Gamma_2$, by Lemma \ref{dorron} and by (\ref{cuartito}) we have that \begin{eqnarray}\label{bach} \operatorname*{aff}\{\bar{y},l_1,l_2\}=q+\operatorname*{aff}\{\bar{x},k_1,k_2\}. \end{eqnarray} By (\ref{vivaldi}) and (\ref{bach}) it follows that \[ \operatorname*{aff}\{l_1,l_2\}=\operatorname*{aff}\{y,l_1,l_2\} \cap \operatorname*{aff}\{\bar{y},l_1,l_2\}= [p+\operatorname*{aff}\{x,k_1,k_2\} ]\cap [q+\operatorname*{aff}\{\bar{x},k_1,k_2\}]. \] The right-hand side is the intersection of translates of the planes $\operatorname*{aff}\{x,k_1,k_2\}$ and $\operatorname*{aff}\{\bar{x},k_1,k_2\}$; being a line, it is parallel to the intersection of these planes, that is, to $\operatorname*{aff}\{k_1,k_2\}$.
$\Box$
\begin{lemma}\label{rojo} Let $C\subset \mathbb{R}^{n}$ be a convex cone with apex at the origin $O$ and let $\Delta_1, \Delta_2: \mathbb{S}^{n-2}\rightarrow \mathbb{R}^{n}$ be two embeddings of $\mathbb{S}^{n-2}$ in $\operatorname*{bd} C$. Suppose that for every $u\in \mathbb{S}^{n-1}$ such that $\operatorname*{lin} \{u\}\subset \operatorname*{bd} C$, the sets $\operatorname*{lin} \{u\}\cap \Delta_1$ and $\operatorname*{lin} \{u\}\cap \Delta_2$ consist of only one point; we denote by $\alpha(u)$ and by $\beta(u)$, respectively, such intersections. Suppose that for every two such $u,v\in \mathbb{S}^{n-1}$, $u\not=v$, the line segment $[\alpha(u),\alpha(v)]$ is parallel to the segment $[\beta(u),\beta(v)]$. Then $\Delta_1,\Delta_2$ satisfy the relation \begin{eqnarray}\label{gabi} \Delta_2=H(\Delta_1), \end{eqnarray} where $H:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ is a homothety with center of homothety at $O$. \end{lemma}
\emph{Proof.} Let $H:\mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ be a homothety with center at $O$ such that $\Delta_2 \cap H(\Delta_1)\not=\emptyset$. Let $x\in \Delta_2 \cap H(\Delta_1)$ and let $y\in \Delta_2$ be such that $y\not=x$. By the first part of the hypothesis, $\operatorname*{lin}\{y\}\not=\operatorname*{lin}\{x\}$. We denote by $z$ the point $H(\Delta_1) \cap \operatorname*{lin}\{y\}$. It follows that $z\not=x$, since otherwise $\operatorname*{lin}\{y\}=\operatorname*{lin}\{x\}$. We denote by $u,v$ the unit vectors $\frac{x}{\|x\|},\frac{y}{\|y\|}$, respectively. Then \[ x=H(\alpha(u)), z=H(\alpha(v)), x=\beta(u), y=\beta(v). \] By virtue of the hypothesis, the line segment $[\alpha(u),\alpha(v)]$ is parallel to the segment $[\beta(u),\beta(v)]$. Hence the line segment $[H(\alpha(u)),H(\alpha(v))]=[x,z]$ is parallel to the segment $[\beta(u),\beta(v)]=[x,y]$; since both segments emanate from $x$ and both $y$ and $z$ lie on $\operatorname*{lin}\{y\}$, it follows that $y=z$. Therefore $\Delta_2\subset H(\Delta_1)$, and the same argument with the roles of $\Delta_2$ and $H(\Delta_1)$ exchanged yields the reverse inclusion, so relation (\ref{gabi}) holds.
$\Box$
\begin{lemma}\label{morado} Let $x$ and $y$ be points in $L$ such that (\ref{apretada}) holds. We denote by $\Omega_x$ and $\Omega_y$ the intersections $C_x\cap \operatorname*{bd} K$ and $C_y\cap \operatorname*{bd} K$, respectively. Then $\Omega_x$ and $\Omega_y$ are inversely homothetic. \end{lemma}
\emph{Proof.} By hypothesis, $\Delta_2=p+\Omega_x\subset C_y$. Denote by $\Delta_1$ the reflection of $\Omega_y$ with respect to $y$. Consider any two points $a,b\in\Omega_x$ and let $\Pi_a$ and $\Pi_b$ be the support hyperplanes of $K$ through $a$ and $b$, respectively, which by hypothesis pass through the point $x$. In other words, $\Pi_a$ and $\Pi_b$ are support hyperplanes of the cone $C_x$. Suppose the hyperplanes $\Pi_{a'}$ and $\Pi_{b'}$, parallel to $\Pi_a$ and $\Pi_b$ and passing through the point $y$, touch $K$ at the points $a'$ and $b'$, respectively. Since the cone $C_y$ is a translate of the cone $C_x$, we have that $\Pi_{a'}$ and $\Pi_{b'}$ are support hyperplanes of the cone $C_y$. Without loss of generality, we suppose the origin is at the point $y$ and consider the unit vectors $u$ and $v$ in the directions of the lines $a'y$ and $b'y$, respectively. Consider the points $\alpha(u),$ $\alpha(v)$, $\beta(u)$, and $\beta(v)$, as defined in Lemma \ref{rojo}. By Lemma \ref{punsada} we have that $[a,b]$ and $[a',b']$ are parallel, and so $[\alpha(u),\alpha(v)]$ is parallel to $[\beta(u),\beta(v)]$. The conditions of Lemma \ref{rojo} are thus satisfied; hence $\Delta_1$ is directly homothetic to $\Delta_2$ and, therefore, $\Omega_x$ is inversely homothetic to $\Omega_y$.
$\Box$
\textbf{Remark 1.} From this lemma we can deduce that every pair of parallel support hyperplanes of $K$ cuts in $L$ sections which are inversely homothetic and, moreover, that the center of homothety lies in the segment joining the points of contact between $K$ and these hyperplanes. With this condition and condition (\ref{apretada}), we could continue the proof that $K$ and $L$ are centrally symmetric and concentric; however, we will proceed in a slightly different direction. Nevertheless, this suggests that the following conjecture has a chance of being true.
\textbf{Conjecture 1.} Let $K$ and $L$ be convex bodies in $\mathbb R^n$, $n\geq 3$, with $K\subset \text{int} L$. Suppose that every pair of parallel support hyperplanes of $K$ cuts in $L$ sections which are inversely homothetic, with center of homothety in the segment joining the points of contact between $K$ and these hyperplanes. Then $K$ and $L$ are centrally symmetric and concentric.
\begin{lemma}\label{gansito} Let $x,y\in L$ be two points such that (\ref{apretada}) holds. Suppose that the center of $\Lambda=C_x\cap C_y$ is at the origin. If $\Omega_x$ and $\Omega_y$ have ratio of homothety equal to $-1$, then \begin{eqnarray}\label{jazz} -\Omega_x=\Omega_y. \end{eqnarray} \end{lemma}
\emph{Proof.} By (\ref{salsa}) we have that $-\Omega_x\subset -C_x=C_y$. By hypothesis, $-\Omega_x$ and $\Omega_y$ are either translates of each other or equal. However, since $-\Omega_x, \Omega_y\subset C_y$, they cannot be proper translates. Thus $-\Omega_x=\Omega_y$.
$\Box$
\begin{lemma}\label{gelatina} Let $x,y,\bar{x},\bar{y}\in L$ be points such that (\ref{apretada}) holds for each of the pairs $x,y$ and $\bar{x},\bar{y}$. Suppose that $\Omega_x \cap \Omega_{\bar{x}}$ is non-empty and has more than one point, and that the ratio of homothety between $\Omega_x$ and $\Omega_y$ is $-1$. Then the ratio of homothety between $\Omega_{\bar{x}}$ and $\Omega_{\bar{y}}$ is $-1$ and the center $c$ of $\Lambda$ coincides with the center $\bar{c}$ of $\bar {\Lambda}$. \end{lemma}
\emph{Proof.} Following the notation of Lemma \ref{punsada}, let $\Pi_1,$ $\Pi_2$, $\Gamma_1,$ and $\Gamma_2$ be supporting hyperplanes of $K$ such that $x,\bar{x}\in \Pi_1\cap \Pi_2$ and $\Gamma_i$ is parallel to $\Pi_i,$ for $i=1,2$. We denote by $k_i$ the contact point of $\Pi_i$ with $\operatorname*{bd} K$ and by $l_i$ the contact point of $\Gamma_i$ with $\operatorname*{bd} K,$ for $i=1,2$. By Lemma \ref{punsada} we have that the segments $[k_1,k_2]$ and $[l_1,l_2]$ are parallel. On the other hand, since $k_1,k_2\in \Omega_x$, $l_1,l_2\in \Omega_y$, and the ratio of homothety between $\Omega_x$ and $\Omega_y$ is $-1$, this is also the ratio of homothety between the segments $[k_1,k_2]$ and $[l_1,l_2]$, with $k_i$ corresponding to $l_i$ (for $i=1,2$); hence $\|k_1-k_2\|=\|l_1-l_2\|$. The center of the inverse homothety between $\Omega_x$ and $\Omega_y$ is the center of the parallelogram $k_2k_1l_1l_2,$ which is precisely $c=y-p/2.$ We also have that the center of symmetry of $\Lambda$ is $c$. Now, since $k_1,k_2\in \Omega_{\bar{x}}$ and $l_1,l_2\in \Omega_{\bar{y}}$, it follows that the ratio of homothety between $\Omega_{\bar{x}}$ and $\Omega_{\bar{y}}$ is $-1$ with center at $c$ as well.
$\Box$
Let $N\subset L$ be the set of points $x\in L$ with the following property: if $y\in L$ is such that (\ref{apretada}) holds for $x,y$, then the curves $\Omega_x$ and $\Omega_y$ are inversely homothetic with factor of homothety equal to $-1$. We will prove that indeed $N=L$.
\begin{lemma}\label{perrita} We have that $N=L$. \end{lemma}
\emph{Proof.} Consider a pair of points $u,v\in L$ such that (\ref{apretada}) holds for $u,v$ and $u,v\in N$. The existence of this pair of points is guaranteed by a standard continuity argument; we give some details of this argument for the interested reader in the Appendix.
We denote by $D_u$ the set of points $z\in L$ such that $[u,z]\cap K\neq\emptyset$ and denote by $L_u$ the set $L\setminus D_u$. Let $a$ be any point in $L_u$, with $a\neq u$. Since $\Omega_a\cap\Omega_u\neq\emptyset$, by Lemma \ref{gelatina} we have that $a\in N$. Associated with $a$ we have the open region $L_a$ and, as before, every point $z\in L_a$ is also in $N$. Continuing with this procedure we construct an open cover of $L$ such that every open set in this cover is contained in $N$. Since $L$ is a compact set, we may extract a finite subcover of $L$; hence all the points in $L$ belong to $N$.
$\Box$
\emph{Proof of Theorem \ref{rosa}}. First, by Lemma \ref{gelatina} we have that all the curves $\Lambda$ are concentric at $c$. From this we deduce that the support function $h_K:\mathbb{S}^{n-1} \rightarrow \mathbb{R}$ of $K$ is symmetric, i.e., $h_K(u)=h_K(-u)$, that is, $K$ is centrally symmetric with center at $c$.
Finally, we prove that $L$ is centrally symmetric and concentric with $K$. We choose a system of coordinates such that the origin $O$ is the centre of $K$. We observe that if a convex body $M\subset \mathbb{R}^{n}$ is centrally symmetric with centre at $O$ and $x\in \mathbb{R}^{n} \setminus M$, there exist unique $y\in \mathbb{R}^{n} \setminus M$ and $p\in \mathbb{R}^{n}$ such that the relation \[ C_y=p+C_x \] holds; it is clear that $y=-x$. If $L$ were not centrally symmetric and concentric with $K$, then there would exist $x_0\in L$ such that $-x_0\not\in L$. Thus there would not exist $y_0\in L$ and $p\in \mathbb{R}^{n}$ such that $C_{y_0}=p+C_{x_0}$, which would contradict the hypothesis of Theorem \ref{rosa}. Therefore, $L$ is centrally symmetric with center at $O$.
$\Box$
\textbf{Remark 2.} The main theorem proved in this article is not true in the plane, even if all the support cones are congruent. To see this, it suffices to consider a convex body $K\subset\mathbb R^2$ and its \emph{isoptic curve} $K_{\alpha}$, i.e., the set of points $z\in\mathbb R^2$ such that $K$ is seen under the constant angle $\alpha$ from $z$ (see for instance \cite{Cies_Mier_Moz}, \cite{Green}).
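As a concrete illustration of the isoptic construction, consider the simplest case of a disc: its isoptic $K_{\alpha}$ is a concentric circle, since a disc of radius $r$ subtends the angle $\alpha$ exactly at distance $r/\sin(\alpha/2)$ from its centre. The following minimal numerical sketch (ours; the function name is illustrative) checks this elementary fact.

```python
import math

def disc_isoptic_radius(r, alpha):
    # distance from the centre at which a disc of radius r subtends angle alpha;
    # hence the isoptic K_alpha of a disc is a concentric circle of this radius
    return r / math.sin(alpha / 2)

r, alpha = 1.0, math.pi / 2
d = disc_isoptic_radius(r, alpha)
# the two tangent lines from a point at distance d make the angle 2*asin(r/d)
viewing_angle = 2 * math.asin(r / d)
print(abs(viewing_angle - alpha) < 1e-12)  # True
```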
\section{Appendix} Here we prove that there exist two points $u,v\in L$ such that (\ref{apretada}) holds for $u,v$ and $u,v\in N$.
Let $\alpha:[0,1] \rightarrow L$ be a continuous curve such that $\alpha(0)=x$ and $\alpha(1)=y$. By virtue of the hypothesis there exists a continuous curve $\beta:[0,1] \rightarrow L$ such that $\beta(0)=y$ and $\beta(1)=x$ and, for every $t\in[0,1]$, the points $\alpha(t)$ and $\beta(t)$ are such that the relation (\ref{apretada}) holds for them, i.e., \begin{eqnarray}\label{blues} C_{\beta(t)}=p(t)+ C_{\alpha(t)}, \end{eqnarray} where $p(t)\in \mathbb{R}^n$. We define $d(t)$ as the number $ \operatorname*{diam} \Omega_{\beta(t)}-\operatorname*{diam} \Omega_{\alpha(t)}$. By Lemma \ref{morado}, \begin{eqnarray}\label{todo} \operatorname*{diam} \Omega_{\alpha(t)}=r(t)\operatorname*{diam} \Omega_{\beta(t)}. \end{eqnarray} Thus \[ d(t)=(1-r(t))\operatorname*{diam} \Omega_{\beta(t)}. \] Since the surfaces $L$ and $\operatorname*{bd} K$ are continuous and $L\cap K=\emptyset$, it follows that the functions $r:[0,1]\rightarrow \mathbb{R}^+$ and $d:[0,1]\rightarrow \mathbb{R}$ are continuous. By our assumption $\operatorname*{diam} \Omega_{\alpha(0)}\not=\operatorname*{diam} \Omega_{\beta(0)}$, say \begin{eqnarray}\label{recarga} \operatorname*{diam} \Omega_{\alpha(0)}<\operatorname*{diam} \Omega_{\beta(0)}. \end{eqnarray} Substituting (\ref{todo}), for $t=0$, into (\ref{recarga}) we get $r(0)<1$, that is, $d(0)>0$. On the other hand, since $\alpha(0)=x=\beta(1)$ and $\alpha(1)=y=\beta(0)$, from (\ref{recarga}) it follows that \[ \operatorname*{diam} \Omega_{\beta(1)}<\operatorname*{diam} \Omega_{\alpha(1)}, \] i.e., $1<r(1)$ and $d(1)<0$. Consequently, by the intermediate value theorem, there exists $t^*\in [0,1]$ such that $d(t^*)=0$, that is, $r(t^*)=1$. Hence \begin{eqnarray}\label{blanquita} \operatorname*{diam} \Omega_{\alpha(t^*)}=\operatorname*{diam} \Omega_{\beta(t^*)} \end{eqnarray} (see Fig. \ref{guera}). Thus $u=\alpha(t^*)$ and $v=\beta(t^*)$ are the points we are looking for.
\begin{figure}\label{guera}
\end{figure}
\end{document}
\begin{document}
\title[Negativity Fonts...]{Negativity Fonts, multiqubit invariants and Four qubit Maximally Entangled States} \author{S. Shelly Sharma} \email{shelly@uel.br} \affiliation{Departamento de F\'{\i}sica, Universidade Estadual de Londrina, Londrina 86051-990, PR Brazil } \author{N. K. Sharma} \email{nsharma@uel.br} \affiliation{Departamento de Matem\'{a}tica, Universidade Estadual de Londrina, Londrina 86051-990 PR, Brazil } \thanks{}
\begin{abstract} Recently, we introduced negativity fonts as the basic units of multipartite entanglement in pure states. We show that the relation between global negativity of partial transpose of N- qubit state and linear entropy of reduced single qubit state yields an expression for global negativity in terms of determinants of negativity fonts. Transformation equations for determinants of negativity fonts under local unitaries (LU's) are useful to construct LU invariants such as degree four and degree six invariants for four qubit states. The difference of squared negativity and N-tangle is an N qubit invariant which contains information on entanglement of the state caused by quantum coherences that are not annihilated by removing a single qubit. Four qubit invariants that detect the entanglement of specific parts in a four qubit state are expressed in terms of three qubit subsystem invariants. Numerical values of invariants bring out distinct features of several four qubit states which have been proposed to be the maximally entangled four qubit states. \end{abstract}
\maketitle
\section{Introduction}
Entanglement is an intriguing property of quantum systems, and its detection, characterization and quantification are important questions in quantum mechanics. For a pure state of a bipartite quantum system consisting of two distinguishable subsystems $A$ and $B$, each of arbitrary dimension, negativity \cite{zycz98,vida02}, and linear entropy calculated from the reduced density operator of either element, may be chosen as entanglement measures. For the tripartite case, besides the quantity of entanglement we must also know whether the entanglement is GHZ-like or W-like \cite{dur00}, and states are grouped into distinct entanglement classes for four qubits \cite{vers02,vers03,acin00,acin01,miya03,miya04}. An entanglement measure must have a value in the range from zero for a product state to a maximum value for a maximally entangled state, and satisfy the minimal requirement of local unitary invariance \cite{vida00}. Generally accepted measures of entanglement, such as concurrence \cite{woot98} for two qubits, and three-tangle \cite{coff00} for three qubits, turn out to be such invariants \cite{vers03,ging02}. In the case of four qubits, the standard approach from invariant theory, employing the well-established W-process by Cayley, has led to the construction of a complete set of SL-invariants \cite{luqu03}. In ref. \cite{luqu06} the invariants up to degree 6 have been determined, together with 5 invariants of degree 8. Local unitary invariants have been reported for an even number of qubits in ref. \cite{wong01} and for even and odd numbers of qubits in \cite{li06,li07,li10}. Independently of these approaches, a method based on expectation values of antilinear operators, with emphasis on permutation invariance of the global entanglement measure \cite{oste05,oste06}, has been suggested. Permutation invariance had already been highlighted as a demand on global entanglement measures in Ref. \cite{coff00} and later in Ref. \cite{chte07}.
Negativity of the global partial transpose is a widely used computable measure of free bipartite entanglement. Negativity is based on the Peres-Horodecki NPT criterion \cite{pere96,horo96} and is known to be an entanglement monotone \cite{vida02}. A global partial transpose with respect to a subsystem $p$ is obtained by transposing the state of subsystem $p$ in the state operator. In refs. \cite{shar101,shar102}, we introduced negativity fonts, defined as two-by-two matrices of probability amplitudes that determine the negative eigenvalues of four-by-four submatrices of partially transposed state operators. It was shown that relevant $N-$qubit local unitary invariants can be obtained directly from the transformation properties of determinants of negativity fonts under local unitary transformations. From the expression of an invariant in terms of determinants of negativity fonts, one can easily read off how subsystem invariants contribute to the composite system invariant. In this article, we obtain an expression for global negativity in terms of determinants of negativity fonts. The squared negativity of the $N-$qubit partially transposed operator is found to be the sum of squares of moduli of determinants of all possible negativity fonts. For the sake of completeness, we briefly outline the procedure for constructing two-qubit local unitary (LU) invariants for an $N-$qubit state by examining the intrinsic sources of negativity present in global and $K-$way partially transposed matrices. In a four qubit state, the entanglement of a three qubit subsystem may arise due to four-way or three-way correlations. We show that four qubit invariants that detect the entanglement of two qubits in a four qubit state \cite{shar102} are combinations of three qubit invariants. A form of the degree six invariant for four qubit states, constructed in terms of negativity font determinants, demonstrates the ease with which complex invariants can be written down from basic principles and calculated numerically.
Numerical values of invariants are found to bring out distinct features of several known four qubit states which have been proposed to be the maximally entangled states.
The definition of negativity fonts and the notation used to represent determinants of $N-$way and $K-$way negativity fonts are given in section II. Transformation equations for determinants of negativity fonts are used to obtain an expression for the square of global negativity in terms of determinants of negativity fonts in section III. Section IV details degree two, four and six invariants for a generic four qubit state. Numerical values of invariants and entanglement monotones for states known or conjectured to be maximally entangled four qubit states are reported, and the nature of quantum correlations in these states is analyzed, in section V, followed by a summary of results in section VI.
\section{Definition of a $K-$way negativity font}
Consider a bipartite system consisting of two distinguishable subsystems $A$ and $B$, each of arbitrary dimension, in pure state $\widehat{\rho }$. The global negativity\ \cite{zycz98,vida02} of partial transpose $\widehat{\rho } _{G}^{T_{A}}$ (partial transpose with respect to $A$) is defined as \begin{equation} N_{G}^{A}=\frac{1}{d_{A}-1}\left( \left\Vert \rho _{G}^{T_{A}}\right\Vert _{1}-1\right) , \label{negdef} \end{equation} where $\left\Vert \widehat{\rho }\right\Vert _{1}$ is the trace norm of $ \widehat{\rho }$. A general $N-$qubit pure state reads as \begin{equation} \left\vert \Psi ^{A_{1}A_{2}...A_{N}}\right\rangle =\sum_{i_{1}i_{2}...i_{N}}a_{i_{1}i_{2}...i_{N}}\left\vert i_{1}i_{2}...i_{N}\right\rangle \qquad \widehat{\rho }=\left\vert \Psi ^{A_{1}A_{2}...A_{N}}\right\rangle \left\langle \Psi ^{A_{1}A_{2}...A_{N}}\right\vert , \label{nqubit} \end{equation} where $\left\vert i_{1}i_{2}...i_{N}\right\rangle $ are the basis vectors spanning $2^{N}$ dimensional Hilbert space and $A_{p}$ is the location of qubit $p$ ($p=1$ to $N$). The coefficients $a_{i_{1}i_{2}...i_{N}}$ are complex numbers. The basis states of a single qubit are labelled by $i_{m}=0$ and $1,$ where $m=1,...,N$. The matrix elements of global partial transpose $ \widehat{\rho }_{G}^{T_{p}}$ with respect to qubit $p$ are obtained from $ \widehat{\rho }$ through \begin{equation} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho } _{G}^{T_{p}}\left\vert j_{1}j_{2}...j_{N}\right\rangle =\left\langle i_{1}i_{2}...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho } \left\vert j_{1}j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle . \label{gpt} \end{equation} Peres PPT separability criterion \cite{pere96} states that the partial transpose $\widehat{\rho }_{G}^{T_{p}}$ of a separable state is positive.
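Since these definitions are directly computable, a minimal numerical sketch (ours; the helper names are illustrative) can evaluate $N_{G}^{A}$ of Eq. (\ref{negdef}) for a pure two-qubit state by diagonalizing the partial transpose constructed as in Eq. (\ref{gpt}):

```python
import numpy as np

def partial_transpose_A(rho, dA, dB):
    # transpose the A indices of rho, viewed as rho[(iA, iB), (jA, jB)]
    r = rho.reshape(dA, dB, dA, dB)
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

def global_negativity(psi, dA, dB):
    # N_G^A = (||rho^{T_A}||_1 - 1) / (dA - 1) for the pure state psi
    rho = np.outer(psi, psi.conj())
    eig = np.linalg.eigvalsh(partial_transpose_A(rho, dA, dB))
    return (np.sum(np.abs(eig)) - 1.0) / (dA - 1)

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
prod = np.array([1.0, 0, 0, 0])              # |00>, separable
print(np.isclose(global_negativity(bell, 2, 2), 1.0))  # True: maximally entangled
print(np.isclose(global_negativity(prod, 2, 2), 0.0))  # True: PPT, Peres criterion
```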
Rewrite $N-$qubit pure state as $\left\vert \Psi ^{A_{1}A_{2}...A_{N}}\right\rangle =\sum\limits_{i_{3}i_{4}...i_{N}}\left\vert F\right\rangle _{00i_{3}i_{4}...i_{N}}$, where \begin{eqnarray} \left\vert F\right\rangle _{00i_{3}i_{4}...i_{N}} &=&a_{00i_{3}i_{4}...i_{N}}\left\vert 00i_{3}i_{4}...i_{N}\right\rangle +a_{10i_{3}i_{4}...i_{N}}\left\vert 10i_{3}i_{4}...i_{N}\right\rangle \notag \\ &&+a_{01i_{3}+1i_{4}+1...i_{N}+1}\left\vert 01i_{3}+1i_{4}+1...i_{N}+1\right\rangle \notag \\ &&+a_{11i_{3}+1i_{4}+1...i_{N}+1}\left\vert 11i_{3}+1i_{4}+1...i_{N}+1\right\rangle . \end{eqnarray} Here $i_{m}+1=0$ for $i_{m}=1$ and $i_{m}+1=1$ for $i_{m}=0$. The entanglement of $\chi ^{00i_{3}i_{4}...i_{N}}=$ $\left\vert F\right\rangle _{00i_{3}i_{4}...i_{N}}\left\langle F\right\vert $ is quantified by \begin{equation} \left( N_{G}^{A_{1}}(\chi ^{00i_{3}i_{4}...i_{N}})\right) ^{2}=4\left\vert \det \left[ \begin{array}{cc} a_{00i_{3}i_{4}...i_{N}} & a_{01i_{3}+1i_{4}+1...i_{N}+1} \\ a_{10i_{3}i_{4}...i_{N}} & a_{11i_{3}+1i_{4}+1...i_{N}+1} \end{array} \right] \right\vert ^{2}=4\left\vert D^{00i_{3}i_{4}...i_{N}}\right\vert ^{2}. \end{equation} Since determinant $D^{00i_{3}i_{4}...i_{N}}=\det \nu _{N}^{00i_{3}i_{4}...i_{N}}$ determines $N_{G}^{A_{1}}(\chi ^{00i_{3}i_{4}...i_{N}}),$ we refer to $2\times 2$ matrix of probability amplitudes \begin{equation} \nu _{N}^{00i_{3}i_{4}...i_{N}}=\left[ \begin{array}{cc} a_{00i_{3}i_{4}...i_{N}} & a_{01i_{3}+1i_{4}+1...i_{N}+1} \\ a_{10i_{3}i_{4}...i_{N}} & a_{11i_{3}+1i_{4}+1...i_{N}+1} \end{array} \right] , \label{nwayfont} \end{equation} as a negativity font of $N-$way entanglement in $\left\vert \Psi ^{A_{1},A_{2},...A_{N}}\right\rangle $.
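The relation $\left( N_{G}^{A_{1}}\right)^{2}=4\left\vert D^{00i_{3}i_{4}...i_{N}}\right\vert^{2}$ is easy to verify numerically; the sketch below (ours, with illustrative helper names) does so for a three-qubit state supported on a single $3-$way negativity font:

```python
import numpy as np

def negativity_first_qubit(psi, N):
    # global negativity with respect to qubit 1 (dA = 2, so N_G = ||rho^T||_1 - 1)
    dim = 2 ** N
    rho = np.outer(psi, psi.conj()).reshape(2, dim // 2, 2, dim // 2)
    rho_T = rho.transpose(2, 1, 0, 3).reshape(dim, dim)
    return np.sum(np.abs(np.linalg.eigvalsh(rho_T))) - 1.0

rng = np.random.default_rng(1)
amp = rng.normal(size=4) + 1j * rng.normal(size=4)
amp /= np.linalg.norm(amp)
a, b, c, d = amp

# |F> = a|000> + b|100> + c|011> + d|111>: a single 3-way negativity font
psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b100], psi[0b011], psi[0b111] = a, b, c, d

det_font = a * d - c * b      # det [[a_{000}, a_{011}], [a_{100}, a_{111}]]
print(np.isclose(negativity_first_qubit(psi, 3) ** 2, 4 * abs(det_font) ** 2))  # True
```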
In general, if $\widehat{\rho }$ is a pure state, then the negative eigenvalue of $4\times 4$ sub-matrix of global partial transpose $\widehat{ \rho }_{G}^{T_{p}}$or a $K-$way partial transpose $\widehat{\rho } _{K}^{T_{p}}$ \cite{shar09} in the space spanned by distinct basis vectors $ \left\vert i_{1}i_{2}...i_{p}...i_{N}\right\rangle $, $\left\vert j_{1}j_{2}...j_{p}=i_{p}+1...j_{N}\right\rangle $, $\left\vert i_{1}i_{2}...j_{p}...i_{N}\right\rangle $, and $\left\vert j_{1}j_{2}...i_{p}...j_{N}\right\rangle $ is \ $\lambda ^{-}=-\left\vert \det \left( \nu _{K}^{i_{1}i_{2}...i_{p}...i_{N}}\right) \right\vert $ with $ \nu _{K}^{i_{1}i_{2}...i_{p}...i_{N}}$ defined as \begin{equation} \nu _{K}^{i_{1}i_{2}...i_{p}...i_{N}}=\left[ \begin{array}{cc} a_{i_{1}i_{2}...i_{p}...i_{N}} & a_{j_{1}j_{2}...i_{p}...j_{N}} \\ a_{i_{1}i_{2}...j_{p}=i_{p}+1...i_{N}} & a_{j_{1}j_{2}...j_{p}=i_{p}+1...j_{N}} \end{array} \right] , \label{kwayfont} \end{equation} where $K=\sum\limits_{m=1}^{N}(1-\delta _{i_{m},j_{m}})$ $\left( 2\leq K\leq N\right) $. Here $\delta _{i_{m},j_{m}}=1$ for $i_{m}=j_{m}$, and $\delta _{i_{m},j_{m}}=0$ for $i_{m}\neq j_{m}$. The $2\times 2$ matrix $\nu _{K}^{i_{1}i_{2}...i_{p}...i_{N}}$ defines a $K-$way negativity font. To distinguish between different $K-$way negativity fonts we shall replace subscript $K$ in Eq. (\ref{kwayfont}) by a list of qubit states for which $ \delta _{i_{m},j_{m}}=1$. 
In other words a $K-$way font involving qubits $ A_{q+1}$ to $A_{q+K}$ such that $\sum\limits_{m=1}^{N}(1-\delta _{i_{m},j_{m}})=\sum\limits_{m=q+1}^{q+K}(1-\delta _{i_{m},j_{m}})=K$, reads as \begin{eqnarray} &&\nu _{\left( A_{1}\right) _{i_{1}},\left( A_{2}\right) _{i_{2}},...\left( A_{q}\right) _{i_{q}}\left( A_{q+K+1}\right) _{i_{q+K+1}}...\left( A_{N}\right) _{i_{N}}}^{i_{1}i_{2}...i_{p}...i_{N}} \notag \\ &=&\left[ \begin{array}{cc} a_{i_{1}i_{2}...i_{p}...i_{N}} & a_{i_{1}i_{2}...i_{q},i_{q+1}+1,i_{q+2}+1...i_{p}...,i_{q+K-1}+1,i_{q+K}+1,i_{q+K+1},...i_{N}} \\ a_{i_{1}i_{2}...i_{p}+1...i_{N}} & a_{i_{1}i_{2}...i_{q},i_{q+1}+1,i_{q+2}+1...i_{p}+1...,i_{q+K-1}+1,i_{q+K}+1,i_{q+K+1},...i_{N}} \end{array} \right] , \end{eqnarray} and its determinant is represented by \begin{eqnarray} &&D_{\left( A_{1}\right) _{i_{1}},\left( A_{2}\right) _{i_{2}},...\left( A_{q}\right) _{i_{q}}\left( A_{q+K+1}\right) _{i_{q+K+1}}...\left( A_{N}\right) _{i_{N}}}^{i_{q+1}...i_{p}...i_{q+k-1}i_{q+k}} \notag \\ &=&\det \left( \nu _{\left( A_{1}\right) _{i_{1}},\left( A_{2}\right) _{i_{2}},...\left( A_{q}\right) _{i_{q}}\left( A_{q+K+1}\right) _{i_{q+K+1}}...\left( A_{N}\right) _{i_{N}}}^{i_{1}i_{2}...i_{p}...i_{N}}\right) . \end{eqnarray} Thus the determinant of a $K-$way font in an $N$ qubit state has $N-K$ subscripts and $K$ superscripts. In this notation no subscript is needed for determinant of an $N-$way negativity font. The general rule to represent the determinants of negativity fonts is that the qubit states are ordered according to the location of the qubits with the states that appear in the subscript not being present in the superscript. One can identify the determinants of negativity fonts with Pl\"{u}cker coordinates in ref. \cite {heyd06}, where Pl\"{u}cker coordinate equations of Grassmann variety have been used to construct entanglement monotones for multi-qubit states.
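For a pure state, the negative eigenvalue $\lambda ^{-}=-\left\vert \det \nu \right\vert$ can likewise be checked numerically in the simplest case $N=K=2$, where the amplitude matrix itself is the single negativity font (the sketch and variable names below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
amp = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
amp /= np.linalg.norm(amp)       # amp[i1, i2] = a_{i1 i2}, a random pure state

rho = np.outer(amp.reshape(4), amp.reshape(4).conj()).reshape(2, 2, 2, 2)
rho_TA = rho.transpose(2, 1, 0, 3).reshape(4, 4)   # transpose qubit 1

# for N = K = 2 the amplitude matrix is the single negativity font nu
lam_min = np.linalg.eigvalsh(rho_TA).min()
print(np.isclose(lam_min, -abs(np.linalg.det(amp))))  # True: lambda^- = -|det nu|
```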
\subsection{Negativity fonts in K-way partial transpose}
To construct a $K-$way partially transposed matrix \cite{shar09} from the state operator $\widehat{\rho }$, every matrix element $\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho }\left\vert j_{1}j_{2}...j_{N}\right\rangle $ is labelled by a number $ K=\sum\limits_{m=1}^{N}(1-\delta _{i_{m},j_{m}})$. The $K-$way partial transpose ($K>2$) of\ $\rho $ with respect to subsystem $p$ is obtained by selective transposition such that \begin{eqnarray} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho } _{K}^{T_{p}}\left\vert j_{1}j_{2}...j_{N}\right\rangle &=&\left\langle i_{1}i_{2}...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho } \left\vert j_{1}j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle , \notag \\ \text{if}\quad \sum\limits_{m=1}^{N}(1-\delta _{i_{m},j_{m}}) &=&K,\quad \text{and }\quad \delta _{i_{p},j_{p}}=0 \label{ptk1} \end{eqnarray} and \begin{eqnarray} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho } _{K}^{T_{p}}\left\vert j_{1}j_{2}...j_{N}\right\rangle &=&\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho }\left\vert j_{1}j_{2}...j_{N}\right\rangle , \notag \\ \quad \text{if}\quad \sum\limits_{m=1}^{N}(1-\delta _{i_{m},j_{m}}) &\neq &K. 
\label{ptk2} \end{eqnarray} while \begin{eqnarray} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho } _{2}^{T_{p}}\left\vert j_{1}j_{2}...j_{N}\right\rangle &=&\left\langle i_{1}i_{2}...i_{p-1}j_{p}i_{p+1}...i_{N}\right\vert \widehat{\rho } \left\vert j_{1}j_{2}...j_{p-1}i_{p}j_{p+1}...j_{N}\right\rangle , \notag \\ \quad \text{if}\quad \sum\limits_{m=1}^{N}(1-\delta _{i_{m},j_{m}}) &=&1 \text{ or }2,\quad \text{and }\quad \delta _{i_{p},j_{p}}=0 \label{pt21} \end{eqnarray} and \begin{eqnarray} \left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho } _{2}^{T_{p}}\left\vert j_{1}j_{2}...j_{N}\right\rangle &=&\left\langle i_{1}i_{2}...i_{N}\right\vert \widehat{\rho }\left\vert j_{1}j_{2}...j_{N}\right\rangle , \notag \\ \quad \text{if}\quad \sum\limits_{m=1}^{N}(1-\delta _{i_{m},j_{m}}) &\neq &1 \text{ or }2. \label{pt22} \end{eqnarray} The $K-$way negativity calculated from $K-$way partial transpose of matrix $ \rho $ with respect to subsystem $p$, is defined as $N_{K}^{A_{p}}=\left( \left\Vert \rho _{K}^{T_{p}}\right\Vert _{1}-1\right) $. Using the definition of trace norm and the fact that $tr(\rho _{K}^{T_{p}})=1$, we get $N_{K}^{A_{p}}=2\sum_{i}\left\vert \lambda _{i}^{K-}\right\vert $, $\lambda _{i}^{K-}$ being the negative eigenvalues of matrix $\rho _{K}^{T_{p}}$. The $K-$way negativity ($2\leq K\leq N)$, defined as the negativity of $K-$way partial transpose, is determined by the presence or absence of $K-$way quantum coherences in the composite system. By $K-$way coherences we mean the type of coherences present in a $K-$qubit GHZ- like state. The negativity $N_{K}^{A_{p}}$ is a measure of all possible types of entanglement attributed to $K-$ way coherences. It was shown in refs. 
\cite{shar07,shar08,shar09} that the global partial transpose of an $N-$qubit state may be written as a sum of $K-$way partial transposes $\left( 2\leq K\leq N\right) $, that is, \begin{equation} \widehat{\rho }_{G}^{T_{p}}=\sum\limits_{K=2}^{N}\widehat{\rho }_{K}^{T_{p}}-(N-2)\widehat{\rho }. \label{3n} \end{equation} By rewriting the global partial transpose as a sum of $K-$way partial transposes, the negativity fonts are distributed amongst $N-1$ partial transposes. Contributions of partial transposes to global negativity, referred to as partial $K-$way negativities, are not unitary invariants, but their values coincide with those of the three-tangle and the concurrences for the three-qubit canonical state \cite{shar07}.
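Equation (\ref{3n}) can be verified numerically directly from the element-wise definitions (\ref{ptk1})-(\ref{pt22}); the following sketch (ours; function names are illustrative) does so for a random three-qubit pure state:

```python
import numpy as np

def hamming(i, j):
    # number of qubits on which the row and column basis indices differ
    return bin(i ^ j).count("1")

def pt(rho, N, p, K=None):
    # partial transpose of qubit p: global if K is None, otherwise K-way;
    # the K-way transpose acts on elements whose indices differ on exactly
    # K qubits (the 2-way transpose also takes the distance-1 elements)
    dim = 1 << N
    bit = 1 << (N - 1 - p)            # qubit p in the label |i1 i2 ... iN>
    out = rho.copy()
    for i in range(dim):
        for j in range(dim):
            if (i & bit) == (j & bit):
                continue              # delta_{ip,jp} = 1: element untouched
            d = hamming(i, j)
            if K is None or d == K or (K == 2 and d == 1):
                out[i, j] = rho[i ^ bit, j ^ bit]
    return out

N = 3
rng = np.random.default_rng(2)
psi = rng.normal(size=2**N) + 1j * rng.normal(size=2**N)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

lhs = pt(rho, N, p=0)                                           # global transpose
rhs = sum(pt(rho, N, p=0, K=K) for K in range(2, N + 1)) - (N - 2) * rho
print(np.allclose(lhs, rhs))  # True
```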
\section{Transformation equations for determinants of negativity fonts, global negativity and two-qubit invariants}
To derive expressions for LU invariants which measure genuine $N-$body quantum correlations present in the state, the transformation equations under LU are written for negativity fonts characterizing the $N-$way and $\left( N-1\right) -$way partial transposes. Two qubit invariants obtained from the transformation equations pave the way for the construction of $N-$qubit LU invariants to be used to write the entanglement monotones. In the following, an invariant named $\mathcal{I}$, represented by $\left( \mathcal{I}_{K}\right) _{\left( A_{x+1}\right) _{i_{x+1}}...\left( A_{N}\right) _{i_{N}}}^{A_{1}...A_{x}}$, is understood to be invariant under the action of local unitaries on qubits $A_{1}$, $A_{2}$,$...,A_{x}$ of the $N-$qubit system. In general, the superscript outside the bracket will list the qubits in the subsystem of which $\mathcal{I}_{K}$ is an invariant, while the subscript lists the remaining qubits and their states. When no state specification is needed, the subscript is redundant and as such will not be written. When $\left( \mathcal{I}_{K}\right) $ is an $N-$qubit invariant, both sub- and superscripts are redundant and will not be written. The subscript $K$ in $\mathcal{I}_{K}$ indicates that by a suitable choice of local unitaries the invariant can be expressed in terms of determinants of $K-$way negativity fonts. The determinant of an $N-$way negativity font \begin{equation} D^{i_{1}i_{2}...i_{p}=0...i_{N}}=\det \left[ \begin{array}{cc} a_{i_{1}i_{2}...i_{p}=0...i_{N}} & a_{i_{1}+1,i_{2}+1,...i_{p}=0...i_{N}+1} \\ a_{i_{1}i_{2}...i_{p}=1...i_{N}} & a_{i_{1}+1,i_{2}+1,...i_{p}=1...i_{N}+1} \end{array} \right] , \end{equation} is an invariant of $U^{A_{p}}$.
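Invariance of the $N-$way font determinant under $U^{A_{p}}$ (with $\det U=1$) can be checked directly; in the sketch below (ours, with illustrative names), a random $SU(2)$ matrix acts on qubit $1$ of a three-qubit state and $D^{000}$ is recomputed:

```python
import numpy as np

def random_su2(rng):
    # QR of a random complex matrix gives a unitary; fix the phase so det U = 1
    q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    return q / np.sqrt(np.linalg.det(q))

def font_det(a):
    # D^{000} = det [[a_{000}, a_{011}], [a_{100}, a_{111}]] for a[i1, i2, i3]
    return a[0, 0, 0] * a[1, 1, 1] - a[0, 1, 1] * a[1, 0, 0]

rng = np.random.default_rng(3)
a = rng.normal(size=(2, 2, 2)) + 1j * rng.normal(size=(2, 2, 2))
a /= np.linalg.norm(a)

U = random_su2(rng)
a_prime = np.einsum("ij,jkl->ikl", U, a)   # U acts on qubit 1 (p = 1)
print(np.isclose(font_det(a_prime), font_det(a)))  # True: D is U^{A_p}-invariant
```

The invariance is simply $\det(U\nu)=\det U\det\nu=\det\nu$, since the columns of the font matrix transform as vectors under $U$.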
Local unitary $U^{A_{q}}=\frac{1}{\sqrt{ 1+\left\vert x\right\vert ^{2}}}\left[ \begin{array}{cc} 1 & -x^{\ast } \\ x & 1 \end{array} \right] $ on qubit $A_{q}$ with $q\neq p$, on the other hand, yields four transformation equations \begin{eqnarray} \left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0,...i_{N}}\right) ^{\prime \prime } &=& \frac{1}{1+\left\vert x\right\vert ^{2}}\left[ D^{i_{1}i_{2}...i_{p}=0,i_{q}=0...i_{N}}-\left\vert x\right\vert ^{2}D^{i_{1}i_{2}...i_{p}=0,i_{q}=1...i_{N}}\right. \notag \\ &&\left. +xD_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}}-x^{\ast }D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...i_{q-1}i_{q+1}...i_{N}}\right] \label{t1} \end{eqnarray} \begin{eqnarray} \left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=1,...i_{N}}\right) ^{\prime \prime } &=& \frac{1}{1+\left\vert x\right\vert ^{2}}\left[ D^{i_{1}i_{2}...i_{p}=0,i_{q}=1,...i_{N}}-\left\vert x\right\vert ^{2}D^{i_{1}i_{2}...i_{p}=0,i_{q}=0...i_{N}}\right. \notag \\ &&\left. +xD_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}}-x^{\ast }D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...i_{q-1}i_{q+1}...i_{N}}\right] \label{t2} \end{eqnarray} \begin{eqnarray} \left( D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...i_{q-1},i_{q+1}...i_{N}}\right) ^{\prime \prime } &=&\frac{1}{1+\left\vert x\right\vert ^{2}}\left[ D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1}...i_{N}}+\left( x^{\ast }\right) ^{2}D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1}...i_{N}}\right. \notag \\ &&\left. 
-x^{\ast }\left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0...i_{N}}+D^{i_{1}i_{2}...i_{p}=0,i_{q}=1...i_{N}}\right) \right] \label{t3} \end{eqnarray} \begin{eqnarray} \left( D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}}\right) ^{\prime \prime } &=&\frac{1}{1+\left\vert x\right\vert ^{2}}\left[ D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}}+x^{2}D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1}...i_{N}}\right. \notag \\ &&\left. +x\left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0...i_{N}}+D^{i_{1}i_{2}...i_{p}=0,i_{q}=1...i_{N}}\right) \right] \label{t4} \end{eqnarray} relating $N-$way and $\left( N-1\right) -$way negativity fonts. Eliminating variable $x$, invariants of $U^{A_{p}}U^{A_{q}}$ are found to be \begin{eqnarray} \left( M_{N}\right) ^{A_{p}A_{q}} &=&\left\vert \left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0,...i_{N}}\right) ^{\prime \prime }\right\vert ^{2}+\left\vert \left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=1,...i_{N}}\right) ^{\prime \prime }\right\vert ^{2} \notag \\ &&+\left\vert \left( D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...i_{q-1}i_{q+1}...i_{N}}\right) ^{\prime \prime }\right\vert ^{2}+\left\vert \left( D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...i_{q-1}i_{q+1}...i_{N}}\right) ^{\prime \prime }\right\vert ^{2} \notag \\ &=&\left\vert \left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0,...i_{N}}\right) \right\vert ^{2}+\left\vert \left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=1,...i_{N}}\right) \right\vert ^{2} \notag \\ &&+\left\vert D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}}\right\vert ^{2}+\left\vert D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...i_{q-1}i_{q+1}...i_{N}}\right\vert ^{2}, \label{d0mod} \end{eqnarray} which is real, a degree two invariant \begin{eqnarray} \left( T_{N}\right) ^{A_{p}A_{q}} &=&\left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0,...i_{N}}\right) ^{\prime \prime }-\left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=1,...i_{N}}\right) ^{\prime \prime } 
\notag \\ &=&D^{i_{1}i_{2}...i_{p}=0i_{q}=0...i_{N}}-D^{i_{1}i_{2}...i_{p}=0i_{q}=1...i_{N}}, \label{d1dif} \end{eqnarray} a degree four invariant \begin{eqnarray} \left( I_{N}\right) ^{A_{p}A_{q}} &=&\left( D^{i_{1}i_{2}...i_{p}=0i_{q}=0...i_{N}}+D^{i_{1}i_{2}...i_{p}=0i_{q}=1...i_{N}}\right) ^{2} \notag \\ &&-4D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...i_{q-1},i_{q+1}...i_{N}}D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}} \notag \\ &=&\left( \left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0,...i_{N}}\right) ^{\prime \prime }+\left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=1,...i_{N}}\right) ^{\prime \prime }\right) ^{2} \notag \\ &&-4\left( D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...i_{q-1},i_{q+1}...i_{N}}\right) ^{\prime \prime }\left( D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}}\right) ^{\prime \prime }, \label{d2sum} \end{eqnarray} and combining Eqs. (\ref{d1dif}) and (\ref{d2sum}), we obtain \begin{eqnarray} \left( P_{N}\right) ^{A_{p}A_{q}} &=&D^{i_{1}i_{2}...i_{p}=0i_{q}=0...i_{N}}D^{i_{1}i_{2}...i_{p}=0i_{q}=1...i_{N}} \notag \\ &&-D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...i_{q-1},i_{q+1}...i_{N}}D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}} \notag \\ &=&\left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=0,...i_{N}}\right) ^{\prime \prime }\left( D^{i_{1}i_{2}...i_{p}=0,i_{q}=1,...i_{N}}\right) ^{\prime \prime } \notag \\ &&-\left( D_{\left( A_{q}\right) _{0}}^{i_{1}i_{2}...i_{p}=0...i_{q-1},i_{q+1}...i_{N}}\right) ^{\prime \prime }\left( D_{\left( A_{q}\right) _{1}}^{i_{1}i_{2}...i_{p}=0...,i_{q-1},i_{q+1},...i_{N}}\right) ^{\prime \prime }. 
\label{d3prod} \end{eqnarray} Similarly, the differences $\left( M_{N}\right) ^{A_{p}A_{q}}-\left\vert \left( I_{N}\right) ^{A_{p}A_{q}}\right\vert $ and $\left( M_{N}\right) ^{A_{p}A_{q}}-\left\vert \left( T_{N}\right) ^{A_{p}A_{q}}\right\vert ^{2}$ are useful for writing different $N-$qubit invariants in alternate forms.
Transformation equations under LU for the determinants of negativity fonts characterizing the $K-$way partial transpose and the $\left( K-1\right) -$way partial transpose with $K<N$ yield two qubit invariants $\left( M_{K}\right) ^{A_{p}A_{q}}$, $\left( I_{K}\right) ^{A_{p}A_{q}}$, $\left( T_{K}\right) ^{A_{p}A_{q}}$, and $\left( P_{K}\right) ^{A_{p}A_{q}}$, analogous to the $N-$way case.
\subsection{Global negativity and negativity fonts}
It follows from Eq. (\ref{d0mod}) that by summing the squared moduli of the determinants of all negativity fonts in a partial transpose we obtain an $N-$qubit invariant. Recalling that the maximum value of the modulus of the determinant of a single negativity font is $\frac{1}{2}$, multiplying the invariant by four yields an invariant with maximum value equal to one. Next, the relation between global negativity and the linear entropy of the reduced single qubit state is used to show that the invariant so obtained is precisely the global negativity defined in Eq. (\ref{negdef}).
The linear entropy, defined as \begin{equation} S=\frac{d_{A}}{d_{A}-1}\left( 1-Tr\left( \rho ^{A}\right) ^{2}\right) , \end{equation} measures the purity of the state $\rho ^{A}=Tr_{B}\left( \widehat{\rho }\right) $ and also detects bipartite entanglement of subsystem $A$ with $B$. If $A=A_{p}$, the $p^{th}$ qubit of an $N-$qubit quantum system, then the squared negativity $\left( N_{G}^{A_{p}}\right) ^{2}$ is known to be equal to the linear entropy of the single qubit reduced state $\widehat{\rho }^{A_{p}}=tr_{A_{1}...A_{p-1}A_{p+1}...A_{N}}\left( \widehat{\rho }\right) $, that is, \begin{equation} \left( N_{G}^{A_{p}}\right) ^{2}=2\left( 1-tr\left[ \left( \widehat{\rho }^{A_{p}}\right) ^{2}\right] \right) . \label{negred} \end{equation} Choosing $p=1$, we write the pure state as $\widehat{\rho }=\sum\limits_{I,J}\rho _{i_{1}Ij_{1}J}\left\vert i_{1}I\right\rangle \left\langle j_{1}J\right\vert $, where $I=\sum\limits_{m=2}^{N}i_{m}2^{m-1}$ labels the $\left( N-1\right) $-qubit state without qubit $A_{1}$. Using Eq. (\ref{negred}) and $tr\left( \widehat{\rho }^{A_{1}}\right) =1$, we obtain \begin{equation} \left( N_{G}^{A_{1}}\right) ^{2}=4\sum\limits_{I,J}\left( \rho _{0I0I}\rho _{1J1J}-\rho _{1I0I}\rho _{0J1J}\right) . \end{equation} Next, defining $L=\sum\limits_{\substack{ m=3 \\ m\neq p}}^{N}i_{m}2^{m-1}$ and $M=\sum\limits_{\substack{ m=3 \\ m\neq p}}^{N}j_{m}2^{m-1}$, the expansion of $\left( N_{G}^{A_{1}}\right) ^{2}$ reads \begin{eqnarray} \left( N_{G}^{A_{1}}\right) ^{2} &=&4\sum\limits_{L,M}\left( \rho _{00L00L}\rho _{10M10M}-\rho _{10L00L}\rho _{00M10M}\right) \notag \\ &&+4\sum\limits_{L,M}\left( \rho _{00L00L}\rho _{11M11M}-\rho _{10L00L}\rho _{01M11M}\right) \notag \\ &&+4\sum\limits_{L,M}\left( \rho _{01L01L}\rho _{10M10M}-\rho _{11L01L}\rho _{00M10M}\right) \notag \\ &&+4\sum\limits_{L,M}\left( \rho _{01L01L}\rho _{11M11M}-\rho _{11L01L}\rho _{01M11M}\right) \end{eqnarray} which in terms of probability amplitudes has the form \begin{equation} \left( N_{G}^{A_{1}}\right) ^{2}=4\sum\limits_{L,M}\left\vert \left( a_{00L}a_{11M}-a_{10L}a_{01M}\right) \right\vert ^{2}. \end{equation} After identifying the determinant $\left( a_{00L}a_{11M}-a_{10L}a_{01M}\right) $ with \begin{equation} \det \nu _{K}^{00L}\equiv \det \left[ \begin{array}{cc} a_{00i_{3}...i_{N}} & a_{01j_{3}...j_{N}} \\ a_{10i_{3}...i_{N}} & a_{11j_{3}...j_{N}} \end{array} \right] , \end{equation} that is, the determinant of a $K-$way negativity font, the squared negativity is expressed in terms of the determinants of all negativity fonts in $\widehat{\rho }_{G}^{T_{1}}$ as \begin{equation} \left( N_{G}^{A_{1}}\right) ^{2}=4\sum\limits_{L,K=2\text{ to }N}\left\vert \det \nu _{K}^{00L}\right\vert ^{2}. \label{negativity} \end{equation} The global negativity, arising from all the negativity fonts present in $\widehat{\rho }_{G}^{T_{p}}$, measures the entanglement of qubit $p$ with its complement and is known to be an entanglement monotone \cite{vida02}.
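As a quick numerical illustration of Eq. (\ref{negred}) -- a sketch of ours, not part of the original derivation; the function name and the convention that qubit $A_{1}$ is the most significant bit are our own choices -- the squared global negativity of qubit $A_{1}$ can be computed directly from the purity of its reduced state:

```python
def squared_negativity(amps, n_qubits):
    """(N_G^{A1})^2 via Eq. (negred): 2*(1 - tr[(rho^{A1})^2]).
    `amps` lists the amplitudes of a normalized pure state with
    qubit A1 as the most significant bit (our own convention)."""
    half = 2 ** (n_qubits - 1)
    # 2x2 reduced density matrix of qubit A1 (partial trace over the rest)
    rho = [[sum(amps[i * half + k] * amps[j * half + k].conjugate()
                for k in range(half)) for j in (0, 1)] for i in (0, 1)]
    purity = sum((rho[i][j] * rho[j][i]).real for i in (0, 1) for j in (0, 1))
    return 2.0 * (1.0 - purity)

# Three qubit GHZ state: rho^{A1} = I/2, so the squared negativity is 1.
ghz3 = [0.0] * 8
ghz3[0b000] = ghz3[0b111] = 2 ** -0.5
print(round(squared_negativity(ghz3, 3), 9))  # 1.0
```

For a product state the reduced state is pure and the same function returns zero, consistent with the monotone detecting only bipartite entanglement across the cut.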
\section{Four qubit invariants}
For $N=4$, with determinants of four-way negativity fonts defined as \begin{equation} D^{00i_{3}i_{4}}=\det \left( \begin{array}{cc} a_{00i_{3}i_{4}} & a_{01i_{3}+1,i_{4}+1} \\ a_{10i_{3}i_{4}} & a_{11i_{3}+1,i_{4}+1} \end{array} \right) , \end{equation} the four qubit pure state invariant with negativity fonts lying solely in the four-way partial transpose is given by \begin{equation} T_{4}=D^{0000}+D^{0011}-D^{0010}-D^{0001}. \label{4-invariant} \end{equation} The invariant $T_{4}$ is identified with the degree two invariant $H$ of ref. \cite{luqu03}, which is also one of the hyperdeterminants of Cayley. A four qubit state having quantum correlations of the type present in a four qubit GHZ state is distinguished from other states by a nonzero $T_{4}$. These quantum correlations are lost, without leaving any residue, on the loss of a single qubit and are a collective property of the four qubit state. It is known \cite{luqu03} that the four tangle defined as \begin{equation} \tau _{4}=4\left\vert \left( D^{0000}+D^{0011}-D^{0010}-D^{0001}\right) ^{2}\right\vert , \label{tau4} \end{equation} by itself is not enough to detect genuine four qubit entanglement, being nonzero for a product of entangled two qubit states, in which case invariants of higher degree are needed to detect GHZ-like entanglement.
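The four tangle of Eq. (\ref{tau4}) is easy to evaluate numerically. The following sketch (our own code; the dictionary-of-amplitudes representation is an assumption, not the paper's notation) confirms that $\tau _{4}=1$ both for the GHZ state and for a product of two Bell pairs, illustrating why $\tau _{4}$ alone cannot certify genuine four qubit entanglement:

```python
def tau4(a):
    """Four tangle of Eq. (tau4) from the four-way negativity font
    determinants D^{00 i3 i4}; `a` maps bit tuples (i1,i2,i3,i4) to
    amplitudes (absent entries are zero)."""
    g = lambda *bits: a.get(bits, 0.0)
    def D(i3, i4):
        j3, j4 = i3 ^ 1, i4 ^ 1  # flipped indices of the font partner
        return g(0, 0, i3, i4) * g(1, 1, j3, j4) - g(0, 1, j3, j4) * g(1, 0, i3, i4)
    t4 = D(0, 0) + D(1, 1) - D(1, 0) - D(0, 1)  # Eq. (4-invariant)
    return 4 * abs(t4) ** 2

ghz = {(0, 0, 0, 0): 2 ** -0.5, (1, 1, 1, 1): 2 ** -0.5}
# (|00> + |11>)(|00> + |11>)/2, a product of two Bell pairs:
bell_pairs = {(0, 0, 0, 0): 0.5, (0, 0, 1, 1): 0.5,
              (1, 1, 0, 0): 0.5, (1, 1, 1, 1): 0.5}
print(round(tau4(ghz), 9), tau4(bell_pairs))  # 1.0 1.0
```

Both states return the maximal value, so a nonzero $\tau _{4}$ by itself does not distinguish GHZ-like correlations from a pair of Bell states.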
Local unitary transformations may be used to concentrate the negativity fonts on a selected $\rho _{K}^{T_{p}}$ in the expansion of $\rho _{G}^{T_{p}}$ given by Eq. (\ref{3n}). When $\rho _{G}^{T_{p}}=\rho _{4}^{T_{p}}$ and $\tau _{4}\neq 0$, we have a GHZ-like four qubit state. Four qubit states with each qubit entangled to at least one other qubit and $\tau _{4}\neq 0$ can have canonical states with \begin{eqnarray*} \rho _{G}^{T_{p}} &=&\rho _{4}^{T_{p}}+\rho _{3}^{T_{p}}+\rho _{2}^{T_{p}}-2\rho ,\quad \rho _{G}^{T_{p}}=\rho _{4}^{T_{p}}+\rho _{3}^{T_{p}}-\rho , \\ \rho _{G}^{T_{p}} &=&\rho _{4}^{T_{p}}+\rho _{2}^{T_{p}}-\rho ,\quad \rho _{G}^{T_{p}}=\rho _{4}^{T_{p}}. \end{eqnarray*} The class with $\tau _{4}=0$ allows two equivalent canonical state descriptions, that is, \begin{equation*} \rho _{G}^{T_{p}}=\rho _{4}^{T_{p}}+\rho _{2}^{T_{p}}-\rho ,\quad \text{or}\quad \rho _{G}^{T_{p}}=\rho _{3}^{T_{p}}+\rho _{2}^{T_{p}}-\rho . \end{equation*} Therefore the difference \begin{equation} \Delta _{4}=\sum\limits_{p=1}^{4}\left( N_{G}^{A_{p}}\right) ^{2}-\tau _{4}, \end{equation} for a four qubit pure state may be taken as a measure of three-way plus two-way coherences.
\subsection{Entanglement of two and three qubits in four qubit states}
As mentioned before, to distinguish between a product of two qubit entangled states with $\tau _{4}\neq 0$ and states with all four qubits entangled to each other, we need additional invariants. In ref. \cite{shar101}, along with the degree two invariant of Eq. (\ref{4-invariant}), we reported three degree four invariants that detect quantum correlations in a four qubit state. In this section, we list those invariants and identify two distinct types of three qubit invariants that constitute a four qubit invariant. Three-way and two-way negativity font determinants for four qubits are defined as \begin{eqnarray} \qquad D_{\left( A_{2}\right) _{i_{2}}}^{0i_{3}i_{4}} &=&\det \left( \begin{array}{cc} a_{0i_{2}i_{3}i_{4}} & a_{0i_{2}i_{3}+1,i_{4}+1} \\ a_{1i_{2}i_{3}i_{4}} & a_{1i_{2}i_{3}+1,i_{4}+1} \end{array} \right) ,\qquad D_{\left( A_{3}\right) _{i_{3}}}^{0i_{2}i_{4}}=\det \left( \begin{array}{cc} a_{00i_{3}i_{4}} & a_{01i_{3},i_{4}+1} \\ a_{10i_{3}i_{4}} & a_{11i_{3},i_{4}+1} \end{array} \right) , \notag \\ \qquad D_{\left( A_{4}\right) _{i_{4}}}^{0i_{2}i_{3}} &=&\det \left( \begin{array}{cc} a_{00i_{3}i_{4}} & a_{01i_{3}+1,i_{4}} \\ a_{10i_{3}i_{4}} & a_{11i_{3}+1,i_{4}} \end{array} \right) ,\qquad D_{\left( A_{p}\right) _{i_{p}}\left( A_{q}\right) _{i_{q}}}^{00}=\det \left( \nu _{\left( A_{p}\right) _{i_{p}}\left( A_{q}\right) _{i_{q}}}^{00i_{p}i_{q}}\right) . \end{eqnarray} Using Eq. 
(\ref{d1dif}) for four qubits and identifying the terms \begin{equation*} D^{0000}-D^{0001}+D^{0010}-D^{0011}, \end{equation*} \begin{equation*} \left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{001}\right) \times \left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{001}\right) ,\left( D_{\left( A_{2}\right) _{0}}^{000}-D_{\left( A_{2}\right) _{0}}^{001}\right) \times \left( D_{\left( A_{2}\right) _{1}}^{000}-D_{\left( A_{2}\right) _{1}}^{001}\right) , \end{equation*} \begin{equation} \left( D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{0}}^{00}\right) \times \left( D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{1}}^{00}\right) ,\left( D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{1}}^{00}\right) \times \left( D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{0}}^{00}\right) , \end{equation} as invariants of U$^{A_{1}}$U$^{A_{4}}$, application of Eq. (\ref{d2sum}) leads to the four qubit invariant \begin{eqnarray} \left( J_{4}^{A_{1}A_{4}}\right) ^{A_{1}A_{2}A_{3}A_{4}} &=&\left( D^{0000}-D^{0001}+D^{0010}-D^{0011}\right) ^{2} \notag \\ &&+8\left( D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{0}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{1}}^{00}+D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{1}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{0}}^{00}\right) \notag \\ &&-4\left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{001}\right) \left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{001}\right) \notag \\ &&-4\left( D_{\left( A_{2}\right) _{0}}^{000}-D_{\left( A_{2}\right) _{0}}^{001}\right) \left( D_{\left( A_{2}\right) _{1}}^{000}-D_{\left( A_{2}\right) _{1}}^{001}\right) . 
\label{j14} \end{eqnarray} From the structure of $\left( J_{4}^{A_{1}A_{4}}\right) ^{A_{1}A_{2}A_{3}A_{4}}$ we deduce that the four qubit \begin{equation*} \left\vert GHZ\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) \end{equation*} state with $\left( J_{4}^{A_{1}A_{4}}\right) ^{A_{1}A_{2}A_{3}A_{4}}=\left( D^{0000}\right) ^{2}=\frac{1}{4}$ is unitarily equivalent to the state \begin{eqnarray} \left\vert 1\right\rangle &=&\frac{1}{\sqrt{8}}\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle +\left\vert 0100\right\rangle -\left\vert 1011\right\rangle \right. \notag \\ &&\left. +\left\vert 0010\right\rangle -\left\vert 1101\right\rangle +\left\vert 0110\right\rangle +\left\vert 1001\right\rangle \right) , \notag \end{eqnarray} with four-way coherences transformed to three- and two-way coherences such that \begin{equation*} \left( D^{0000}-D^{0001}+D^{0010}-D^{0011}\right) ^{2}=0, \end{equation*} and \begin{equation*} \left( J_{4}^{A_{1}A_{4}}\right) ^{A_{1}A_{2}A_{3}A_{4}}=-4\left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{001}\right) \left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{001}\right) =\frac{1}{4}. \end{equation*} In the present context, $J_{4}^{\left( A_{p}A_{q}\right) }$ are always four qubit invariants; therefore, the superscript $A_{1}A_{2}A_{3}A_{4}$ will be understood from this point on.
To understand the role of three qubit correlations, we rewrite a four qubit state as \begin{equation} \left\vert \Psi \right\rangle =\left\vert \Phi _{0}\right\rangle \left\vert 0\right\rangle _{A_{3}}+\left\vert \Phi _{1}\right\rangle \left\vert 1\right\rangle _{A_{3}}, \end{equation} where \begin{equation} \left\vert \Phi _{0}\right\rangle =\sum\limits_{i_{1}i_{2}i_{4}}a_{i_{1}i_{2}0i_{4}}\left\vert i_{1}i_{2}i_{4}\right\rangle ,\quad \left\vert \Phi _{1}\right\rangle =\sum_{i_{1}i_{2}i_{4}}a_{i_{1}i_{2}1i_{4}}\left\vert i_{1}i_{2}i_{4}\right\rangle , \end{equation} are three qubit states characterized by three qubit invariants $\left( I_{3}\right) _{\left( A_{3}\right) _{0}}^{A_{1}A_{2}A_{4}}$ and $\left( I_{3}\right) _{\left( A_{3}\right) _{1}}^{A_{1}A_{2}A_{4}}$ with three tangles given, respectively, by \begin{equation} \left( \tau _{3}\right) _{\left( A_{3}\right) _{0}}=4\left\vert \left( I_{3}\right) _{\left( A_{3}\right) _{0}}^{A_{1}A_{2}A_{4}}\right\vert =4\left\vert \left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{001}\right) ^{2}-4D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{0}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{0}}^{00}\right\vert , \end{equation} and \begin{equation} \left( \tau _{3}\right) _{\left( A_{3}\right) _{1}}=4\left\vert \left( I_{3}\right) _{\left( A_{3}\right) _{1}}^{A_{1}A_{2}A_{4}}\right\vert =4\left\vert \left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{001}\right) ^{2}-4D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{1}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{1}}^{00}\right\vert . \end{equation} A polynomial classification scheme in which families of four qubit states are identified through tangle patterns has been suggested recently in \cite{vieh11}. We notice that in the context of four qubits, using Eqs. 
(\ref{t1}-\ref{t4}), the overall three qubit invariant for qubits $A_{1}A_{2}A_{4}$ may be written as \begin{eqnarray} \left( I_{3}\right) _{A_{3}}^{A_{1}A_{2}A_{4}} &=&\left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{001}+\left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{001}\right) \right) ^{2} \notag \\ &&-4\left( D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{0}}^{00}+D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{1}}^{00}\right) \left( D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{0}}^{00}+D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{1}}^{00}\right) \notag \\ &=&\left( I_{3}\right) _{\left( A_{3}\right) _{0}}^{A_{1}A_{2}A_{4}}+\left( I_{3}\right) _{\left( A_{3}\right) _{1}}^{A_{1}A_{2}A_{4}}+2\left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{001}\right) \left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{001}\right) \notag \\ &&-4\left( D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{0}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{1}}^{00}+D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{1}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{0}}^{00}\right) . \end{eqnarray} Therefore the term \begin{eqnarray} \left( P_{3}\right) _{A_{3}}^{A_{1}A_{2}A_{4}} &=&8\left( D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{0}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{1}}^{00}+D_{\left( A_{2}\right) _{0}\left( A_{3}\right) _{1}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{3}\right) _{0}}^{00}\right) \notag \\ &&-4\left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{001}\right) \left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{001}\right) \notag \\ &=&2\left( I_{3}\right) _{\left( A_{3}\right) _{0}}^{A_{1}A_{2}A_{4}}+2\left( I_{3}\right) _{\left( A_{3}\right) _{1}}^{A_{1}A_{2}A_{4}}-2\left( I_{3}\right) _{A_{3}}^{A_{1}A_{2}A_{4}}, \end{eqnarray} is a three qubit invariant. 
Since \begin{equation*} \left( I_{4}\right) _{A_{3}}^{A_{1}A_{2}A_{4}}=\left( D^{0000}-D^{0001}+D^{0010}-D^{0011}\right) ^{2}-4\left( D_{\left( A_{2}\right) _{0}}^{000}-D_{\left( A_{2}\right) _{0}}^{001}\right) \left( D_{\left( A_{2}\right) _{1}}^{000}-D_{\left( A_{2}\right) _{1}}^{001}\right) , \end{equation*} is also an $A_{1}A_{2}A_{4}$ invariant, $J_{4}^{\left( A_{1}A_{4}\right) }$ in terms of $A_{1}A_{2}A_{4}$ invariants reads \begin{equation*} J_{4}^{\left( A_{1}A_{4}\right) }=\left( I_{4}\right) _{A_{3}}^{A_{1}A_{2}A_{4}}+\left( P_{3}\right) _{A_{3}}^{A_{1}A_{2}A_{4}}. \end{equation*} Alternatively, it is also the sum of two $A_{1}A_{3}A_{4}$ invariants. In general, a four qubit invariant $J_{4}^{\left( A_{p}A_{q}\right) }$ can be expressed in terms of three qubit invariants of the sub-system $A_{p}A_{q}A_{r}$ or $A_{p}A_{q}A_{s}$. Three qubit invariants can be manipulated by unitary transformations on the fourth qubit.
Four qubit invariant obtained by combining the invariants of U$^{A_{1}}$U$ ^{A_{3}}$ is \begin{eqnarray} J_{4}^{\left( A_{1}A_{3}\right) } &=&\left( D^{0000}-D^{0010}+D^{0001}-D^{0011}\right) ^{2} \notag \\ &&+8\left( D_{\left( A_{2}\right) _{0}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{2}\right) _{1}\left( A_{4}\right) _{1}}^{00}+D_{\left( A_{2}\right) _{1}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{2}\right) _{0}\left( A_{4}\right) _{1}}^{00}\right) \notag \\ &&-4\left( D_{\left( A_{2}\right) _{0}}^{000}-D_{\left( A_{2}\right) _{0}}^{010}\right) \left( D_{\left( A_{2}\right) _{1}}^{000}-D_{\left( A_{2}\right) _{1}}^{010}\right) \notag \\ &&-4\left( D_{\left( A_{4}\right) _{0}}^{000}-D_{\left( A_{4}\right) _{0}}^{001}\right) \left( D_{\left( A_{4}\right) _{1}}^{000}-D_{\left( A_{4}\right) _{1}}^{001}\right) , \label{j13} \end{eqnarray} and starting with U$^{A_{1}}$U$^{A_{2}}$ invariants we get \begin{eqnarray} J_{4}^{\left( A_{1}A_{2}\right) } &=&\left( D^{0000}-D^{0100}+D^{0010}-D^{0110}\right) ^{2} \notag \\ &&+8D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{1}}^{00}+8D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{1}}^{00} \notag \\ &&-4\left( D_{\left( A_{3}\right) _{0}}^{000}-D_{\left( A_{3}\right) _{0}}^{010}\right) \left( D_{\left( A_{3}\right) _{1}}^{000}-D_{\left( A_{3}\right) _{1}}^{010}\right) \notag \\ &&-4\left( D_{\left( A_{4}\right) _{0}}^{000}-D_{\left( A_{4}\right) _{0}}^{010}\right) \left( D_{\left( A_{4}\right) _{1}}^{000}-D_{\left( A_{4}\right) _{1}}^{010}\right) . 
\label{j12} \end{eqnarray} These invariants satisfy the condition \begin{equation} \left( \left( T_{4}\right) ^{A_{1}A_{2}A_{3}A_{4}}\right) ^{2}=\frac{1}{3}\left( J_{4}^{\left( A_{1}A_{2}\right) }+J_{4}^{\left( A_{1}A_{3}\right) }+J_{4}^{\left( A_{1}A_{4}\right) }\right) , \end{equation} and are used to define the entanglement monotone \begin{equation} \beta _{4}=\frac{1}{6}\sum\limits_{m<n}\beta _{4}^{\left( A_{m}A_{n}\right) };\quad \beta _{4}^{\left( A_{m}A_{n}\right) }=\frac{4}{3}\left\vert J_{4}^{\left( A_{m}A_{n}\right) }\right\vert . \end{equation} Consider the entangled states \begin{equation*} \left\vert B\right\rangle =a\left\vert 0000\right\rangle +b\left\vert 1100\right\rangle +c\left\vert 0011\right\rangle +d\left\vert 1111\right\rangle , \end{equation*} characterized by $\tau _{4}=4\left\vert ad+bc\right\vert ^{2}$, $J_{4}^{\left( A_{1}A_{2}\right) }=J_{4}^{\left( A_{3}A_{4}\right) }=\left( ad+bc\right) ^{2}+8abcd$, and $J_{4}^{\left( A_{1}A_{4}\right) }=J_{4}^{\left( A_{1}A_{3}\right) }=\left( ad-bc\right) ^{2}$. If $ad=bc$ then $\tau _{4}=16\left\vert ad\right\vert ^{2}$, but $J_{4}^{\left( A_{1}A_{4}\right) }=J_{4}^{\left( A_{1}A_{3}\right) }=0$ and the state \begin{equation*} \left\vert B\right\rangle _{ad=bc}=\left( a\left\vert 00\right\rangle +b\left\vert 11\right\rangle \right) \left( \left\vert 00\right\rangle +\frac{c}{a}\left\vert 11\right\rangle \right) , \end{equation*} is a product of two qubit entangled states.
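A numerical sketch of Eq. (\ref{j12}) (our own code; the three-bit packing of $(i_{2},i_{3},i_{4})$ and the helper name `font` are assumptions, not the paper's notation) reproduces the closed form quoted above for $\left\vert B\right\rangle $: with $a=b=c=d=\frac{1}{2}$ one expects $J_{4}^{\left( A_{1}A_{2}\right) }=\left( ad+bc\right) ^{2}+8abcd=\frac{3}{4}$.

```python
def font(a, rest, flip):
    """Determinant of a negativity font in rho_G^{T_{A1}}:
    D = a_{0,rest} a_{1,rest^flip} - a_{0,rest^flip} a_{1,rest},
    where `rest` packs (i2,i3,i4) as three bits and `flip` marks
    the qubits (besides A1) whose index is toggled."""
    r, f = rest, rest ^ flip
    return a[r] * a[8 | f] - a[f] * a[8 | r]  # bit 3 is qubit A1

def J4_A1A2(amps):
    """Eq. (j12): four-way, two-way (A3, A4 spectators) and
    three-way font contributions to J_4^{(A1A2)}."""
    four = (font(amps, 0b000, 0b111) - font(amps, 0b100, 0b111)
            + font(amps, 0b010, 0b111) - font(amps, 0b110, 0b111)) ** 2
    two = 8 * (font(amps, 0b000, 0b100) * font(amps, 0b011, 0b100)
               + font(amps, 0b010, 0b100) * font(amps, 0b001, 0b100))
    a3 = -4 * ((font(amps, 0b000, 0b101) - font(amps, 0b100, 0b101))
               * (font(amps, 0b010, 0b101) - font(amps, 0b110, 0b101)))
    a4 = -4 * ((font(amps, 0b000, 0b110) - font(amps, 0b100, 0b110))
               * (font(amps, 0b001, 0b110) - font(amps, 0b101, 0b110)))
    return four + two + a3 + a4

# |B> = a|0000> + b|1100> + c|0011> + d|1111> with a = b = c = d = 1/2.
amp = [0.0] * 16
amp[0b0000] = amp[0b1100] = amp[0b0011] = amp[0b1111] = 0.5
print(J4_A1A2(amp))  # 0.75, i.e. (ad+bc)^2 + 8abcd
```

For the GHZ state the same function returns $\frac{1}{4}$, consistent with $\beta _{4}^{\left( A_{m}A_{n}\right) }=\frac{1}{3}$.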
\subsection{Sextic Invariant}
The set of transformation equations for negativity fonts can be used to obtain additional invariants to discriminate between different types of quantum correlations in four qubit states. In this section, an expression for the degree six invariant, obtained from the set of transformation equations for negativity fonts, is given. A sextic invariant $\left( I_{6}^{\left( A_{p}A_{q}\right) }\right) ^{A_{p}A_{q}A_{r}A_{s}}$ may be constructed by starting with a product of three invariants of $U^{A_{p}}U^{A_{q}}$ containing determinants of negativity fonts in $\rho _{G}^{T_{A_{p}}}$. For instance, transformation Eqs. (\ref{t1}-\ref{t4}), when used to construct an invariant by starting from a product of three invariants of $U^{A_{2}}U^{A_{3}}$ containing determinants of negativity fonts in $\rho _{G}^{T_{A_{2}}}$, yield the invariant \begin{eqnarray*} \left( I_{6}^{\left( A_{2}A_{3}\right) }\right) ^{A_{1}A_{2}A_{3}A_{4}} &=&D_{\left( A_{1}\right) _{0}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{1}\right) _{1}\left( A_{4}\right) _{1}}^{00}\left( D^{0000}+D^{0001}-D^{0010}-D^{0011}\right) \\ &&-D_{\left( A_{1}\right) _{0}\left( A_{4}\right) _{1}}^{00}D_{\left( A_{1}\right) _{1}\left( A_{4}\right) _{0}}^{00}\left( D^{0000}+D^{0001}-D^{0010}-D^{0011}\right) \\ &&+D_{\left( A_{1}\right) _{0}\left( A_{4}\right) _{1}}^{00}\left( D_{\left( A_{1}\right) _{1}}^{000}-D_{\left( A_{1}\right) _{1}}^{100}\right) \left( D_{\left( A_{4}\right) _{0}}^{000}-D_{\left( A_{4}\right) _{0}}^{001}\right) \\ &&-D_{\left( A_{1}\right) _{0}\left( A_{4}\right) _{0}}^{00}\left( D_{\left( A_{1}\right) _{1}}^{000}-D_{\left( A_{1}\right) _{1}}^{010}\right) \left( D_{\left( A_{4}\right) _{1}}^{000}-D_{\left( A_{4}\right) _{1}}^{010}\right) \\ &&+D_{\left( A_{1}\right) _{1}\left( A_{4}\right) _{0}}^{00}\left( D_{\left( A_{1}\right) _{0}}^{000}-D_{\left( A_{1}\right) _{0}}^{010}\right) \left( D_{\left( A_{4}\right) _{1}}^{000}-D_{\left( A_{4}\right) _{1}}^{010}\right) \\ &&-D_{\left( A_{1}\right) _{1}\left( A_{4}\right) 
_{1}}^{00}\left( D_{\left( A_{1}\right) _{0}}^{000}-D_{\left( A_{1}\right) _{0}}^{010}\right) \left( D_{\left( A_{4}\right) _{0}}^{000}-D_{\left( A_{4}\right) _{0}}^{100}\right) , \end{eqnarray*} which is the same as the invariant $D_{xt}$ of ref. \cite{luqu06}. However, when expressed in terms of negativity fonts, each term gives a clear picture of how negativity fonts may be distributed in the state to generate a nonzero $\left( I_{6}^{\left( A_{2}A_{3}\right) }\right) ^{A_{1}A_{2}A_{3}A_{4}}$. Additional degree six invariants can be obtained similarly. The power of the sextic invariant lies in distinguishing between states for which the degree four invariants have the same value.
\section{Maximally entangled four qubit states}
The maximally entangled four qubit GHZ \cite{gree89} state \begin{equation} \left\vert \Psi _{GHZ}\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert 0000\right\rangle +\left\vert 1111\right\rangle \right) , \label{ghz} \end{equation} is characterized by a single $4-$way negativity font with determinant $D^{0000}=a_{0000}a_{1111}=\frac{1}{2}$, which corresponds to $\tau _{4}=1$, $\beta _{4}^{A_{p}A_{q}}=\frac{1}{3}$. The state has only four-way correlations; therefore $\rho _{G}^{T_{p}}=\rho _{4}^{T_{p}}$ and $\left( N_{G}^{A_{p}}\right) ^{2}=\tau _{4}$ for $p=1,\ldots ,4$. The degree six invariant vanishes for this state, $I_{6}^{\left( A_{2}A_{3}\right) }=0$.
To characterize the entanglement of the state \begin{eqnarray} \left\vert \chi \right\rangle &=&\frac{1}{\sqrt{8}}\left( \left\vert 0000\right\rangle -\left\vert 0011\right\rangle +\left\vert 0110\right\rangle -\left\vert 0101\right\rangle \right) \notag \\ &&+\frac{1}{\sqrt{8}}\left( \left\vert 1100\right\rangle +\left\vert 1111\right\rangle +\left\vert 1010\right\rangle +\left\vert 1001\right\rangle \right) , \end{eqnarray} expectation values of third, fourth and sixth order filter operators \cite{oste05,oste06} have been used in ref. \cite{yeo06}, and the equivalence of the state to some graph states has been demonstrated \cite{ye08}. We verify that the state $\left\vert \chi \right\rangle $ is characterized by $\tau _{4}=0$, $J^{\left( A_{1}A_{2}\right) }=J^{\left( A_{1}A_{3}\right) }=J^{\left( A_{2}A_{4}\right) }=J^{\left( A_{3}A_{4}\right) }=-\frac{1}{4}$, and $J^{\left( A_{1}A_{4}\right) }=J^{\left( A_{2}A_{3}\right) }=\frac{1}{2}$. Therefore, the state has $\beta _{4}^{A_{1}A_{2}}=\beta _{4}^{A_{1}A_{3}}=\beta _{4}^{A_{2}A_{4}}=\beta _{4}^{A_{3}A_{4}}=\frac{1}{3}$, while $\beta _{4}^{A_{1}A_{4}}=\beta _{4}^{A_{2}A_{3}}=\frac{2}{3}$, indicating that the entanglement of the state $\left\vert \chi \right\rangle $ is distinct from that of $\left\vert \Psi _{GHZ}\right\rangle $ $\left( \tau _{4}=1,\beta _{4}^{A_{1}A_{2}}=\beta _{4}^{A_{1}A_{3}}=\beta _{4}^{A_{1}A_{4}}=\frac{1}{3}\right) $. The negativity font formalism provides an easy way to determine the local unitary transformations that transform the state $\left\vert \chi \right\rangle $ to a canonical form, that is, a state written in terms of a minimum number of local basis product states \cite{acin00}. In general, by examining the determinants of negativity fonts that contribute to a given invariant, it is possible to use the transformation equations to determine local unitaries connecting two unitarily equivalent states.
We look at the invariant $J^{\left( A_{1}A_{2}\right) }$ for the state $\left\vert \chi \right\rangle $. Manifestly, the state has four-way and two-way fonts; however, the only nonzero contribution to this invariant is $J^{A_{1}A_{2}}=8D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{1}}^{00}+8D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{1}}^{00}=-\frac{1}{4}$. The local unitary U$^{A_{3}}=\frac{1}{\sqrt{1+\left\vert x\right\vert ^{2}}}\left[ \begin{array}{cc} 1 & -x^{\ast } \\ x & 1 \end{array} \right] $ transforms the negativity fonts such that \begin{equation} \left( D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{i_{4}}}^{00}\right) ^{\prime }=\frac{1}{1+\left\vert x\right\vert ^{2}}\left( D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{i_{4}}}^{00}+\left( x^{\ast }\right) ^{2}D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{i_{4}}}^{00}\right) , \label{ua31} \end{equation} \begin{equation} \left( D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{i_{4}}}^{00}\right) ^{\prime }=\frac{1}{1+\left\vert x\right\vert ^{2}}\left( D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{i_{4}}}^{00}+x^{2}D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{i_{4}}}^{00}\right) . \label{ua32} \end{equation} The choice \begin{equation*} \left( x^{\ast }\right) ^{2}=-\frac{D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{0}}^{00}}{D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}}=1, \end{equation*} makes $\left( D_{\left( A_{3}\right) _{i_{3}}\left( A_{4}\right) _{i_{4}}}^{00}\right) ^{\prime }=0$ ($i_{3},i_{4}=0,1$) and generates $3-$way negativity fonts. 
Next, unitaries U$^{A_{1}}=U^{A_{2}}=\frac{1}{\sqrt{2}}\left[ \begin{array}{cc} 1 & -1 \\ 1 & 1 \end{array} \right] $ on qubits $A_{1}$ and $A_{2}$ transform the state to the canonical form \begin{equation*} \left\vert \chi \right\rangle _{c}=\frac{1}{2}\left( \left\vert 0000\right\rangle -\left\vert 0111\right\rangle +\left\vert 1110\right\rangle +\left\vert 1001\right\rangle \right) , \end{equation*} with only three- and two-way negativity fonts and $J^{A_{1}A_{2}}=-\frac{1}{4}$. Obviously, no entangled pairs $A_{1}A_{2}$ or $A_{1}A_{3}$ can be obtained from $\left\vert \chi \right\rangle $ on state reduction. The total number of distinct negativity fonts in $\left\vert \chi \right\rangle _{c}$ is six, that is, four $3-$way fonts and two $2-$way fonts. An interesting feature of $\left\vert \chi \right\rangle _{c}$ is that the $4-$way three qubit invariants are zero for two of the qubits in this state.
Another four qubit state, conjectured to have maximal entanglement in ref. \cite{higu00}, is \begin{eqnarray} \left\vert HS\right\rangle &=&\frac{1}{\sqrt{6}}\left( \left\vert 0011\right\rangle +\left\vert 1100\right\rangle +\exp \left( \frac{i2\pi }{3}\right) \left( \left\vert 1010\right\rangle +\left\vert 0101\right\rangle \right) \right) \notag \\ &&+\frac{1}{\sqrt{6}}\exp \left( \frac{i4\pi }{3}\right) \left( \left\vert 1001\right\rangle +\left\vert 0110\right\rangle \right) . \end{eqnarray} Two-way negativity fonts $D_{\left( A_{3}\right) _{0}\left( A_{4}\right) _{1}}^{00}=D_{\left( A_{3}\right) _{1}\left( A_{4}\right) _{0}}^{00}=\frac{1}{6}$, and $4-$way negativity fonts $D^{0011}=\frac{1}{6}$, $D^{0001}=\frac{1}{12}\left( 1-i\sqrt{3}\right) $, and $D^{0010}=\frac{1}{12}\left( 1+i\sqrt{3}\right) $ transform under the action of U$^{A_{3}}$, U$^{A_{4}}$, generating three-way negativity fonts; however, unlike the state $\left\vert \chi \right\rangle $, this state cannot be written in a form with only $3-$way and $2-$way coherences. It is found that in this case the three qubit invariants $\left( P_{3}\right) _{A_{3}}^{A_{1}A_{2}A_{4}}$ as well as $\left( I_{4}\right) _{A_{3}}^{A_{1}A_{2}A_{4}}$ contribute to $J^{A_{1}A_{2}}$. Similar observations hold for the other J invariants.
Recently, Gour and Wallach \cite{gour10} have pointed out that the three cluster states \cite{brie01,raus01} \begin{equation} \left\vert C_{1}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1100\right\rangle +\left\vert 0011\right\rangle -\left\vert 1111\right\rangle \right) \label{c1} \end{equation} \begin{equation} \left\vert C_{2}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 0110\right\rangle +\left\vert 1001\right\rangle -\left\vert 1111\right\rangle \right) , \label{c2} \end{equation} \begin{equation} \left\vert C_{3}\right\rangle =\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1010\right\rangle +\left\vert 0101\right\rangle -\left\vert 1111\right\rangle \right) , \label{c3} \end{equation} are the only states that maximize the R\'{e}nyi $\alpha -$entropy of entanglement for all $\alpha \geq 2$. The state $\left\vert C_{1}\right\rangle $ with $\rho _{G}^{T_{A}}=\rho _{4}^{T_{A}}+\rho _{2}^{T_{A}}-\rho $ ($\tau _{4}=0$) can be transformed by local unitaries on qubits A$_{1}$ and A$_{2}$ to the form \begin{equation*} \left\vert C_{1}\right\rangle ^{\prime }=\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1100\right\rangle +\left\vert 1011\right\rangle +\left\vert 0111\right\rangle \right) , \end{equation*} with $\rho _{G}^{T_{A}}=\rho _{3}^{T_{A}}+\rho _{2}^{T_{A}}-\rho $. A similar observation holds for the states $\left\vert C_{2}\right\rangle $ and $\left\vert C_{3}\right\rangle $. Calculation of three qubit invariants shows that the distinguishing feature of the states $\left\vert C_{1}\right\rangle $, $\left\vert C_{2}\right\rangle $, and $\left\vert C_{3}\right\rangle $ is the null invariant $\left( P_{3}\right) ^{A_{p}A_{q}A_{r}}$ for two of the qubits, while $\left( I_{4}\right) ^{A_{p}A_{q}A_{r}}$ is nonzero.
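As a check (our own sketch, not part of the original analysis; the helper names and bit conventions are assumptions), the degree two invariant $T_{4}$ of Eq. (\ref{4-invariant}) indeed vanishes for all three cluster states, consistent with $\tau _{4}=0$:

```python
def T4(a):
    """Degree two invariant of Eq. (4-invariant) from the four-way
    font determinants; `a` is indexed by the bits (i1 i2 i3 i4)."""
    def D(i3, i4):
        r = (i3 << 1) | i4              # (i3, i4) part of the index
        f = ((i3 ^ 1) << 1) | (i4 ^ 1)  # flipped partner indices
        return a[0b0000 | r] * a[0b1100 | f] - a[0b0100 | f] * a[0b1000 | r]
    return D(0, 0) + D(1, 1) - D(1, 0) - D(0, 1)

def cluster(kets):
    """Amplitude list with 1/2 on the given kets, minus sign on the last."""
    a = [0.0] * 16
    for k in kets[:-1]:
        a[int(k, 2)] = 0.5
    a[int(kets[-1], 2)] = -0.5
    return a

C1 = cluster(["0000", "1100", "0011", "1111"])
C2 = cluster(["0000", "0110", "1001", "1111"])
C3 = cluster(["0000", "1010", "0101", "1111"])
print([T4(C) for C in (C1, C2, C3)])  # [0.0, 0.0, 0.0]
```

The individual font determinants are nonzero for each cluster state; it is only their combination in $T_{4}$ that cancels, in contrast to the GHZ state where $T_{4}=\frac{1}{2}$.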
Another candidate for a maximally entangled state, found through a numerical search in ref. \cite{brow06}, is \begin{eqnarray*} \left\vert \Phi \right\rangle &=&\frac{1}{2}\left( \left\vert 0000\right\rangle +\left\vert 1101\right\rangle \right) \\ &&+\frac{1}{\sqrt{8}}\left( \left\vert 1011\right\rangle +\left\vert 0011\right\rangle +\left\vert 0110\right\rangle -\left\vert 1110\right\rangle \right) . \end{eqnarray*} This state, just like $\left\vert \chi \right\rangle _{c}$, has only three- and two-way negativity fonts. Unlike $\left\vert \chi \right\rangle _{c}$, however, $J_{4}^{A_{1}A_{3}}=J^{A_{2}A_{4}}=0$, because $\left( I_{4}\right) ^{A_{1}A_{3}A_{2}}=-\left( P_{3}\right) ^{A_{1}A_{3}A_{2}}$.
In Table \ref{table1}, the numerical values of the four qubit invariants $\left( T_{4}\right) ^{2}$, $J_{4}^{A_{1}A_{2}}=J_{4}^{A_{3}A_{4}}$, $J_{4}^{A_{1}A_{3}}=J_{4}^{A_{2}A_{4}}$, and $J_{4}^{A_{1}A_{4}}=J_{4}^{A_{2}A_{3}}$ are listed for the $\left\vert GHZ\right\rangle $ state, the $\left\vert \chi \right\rangle _{c}$ state, the $\left\vert HS\right\rangle $ state, the cluster states $\left\vert C_{1}\right\rangle $, $\left\vert C_{2}\right\rangle $, $\left\vert C_{3}\right\rangle $, and the state $\left\vert \Phi \right\rangle $. The four tangle $\tau _{4}$, $\beta _{4}$, the average global negativity, and $\Delta _{4}$ are also included therein. The degree six invariant $I_{6}^{\left( A_{2}A_{3}\right) }$ as well as the three qubit invariants $\left( I_{3}\right) ^{A_{p}A_{q}A_{r}}$ and $\left( P_{3}\right) ^{A_{p}A_{q}A_{r}}$ are displayed in Table \ref{table2}.
The state $\left\vert \Phi \right\rangle $ is not different from the $\left\vert HS\right\rangle $ state as far as $4$-way correlations are concerned. However, the degree six invariant $I_{6}^{\left( A_{2}A_{3}\right) }$ is zero for the state $\left\vert \Phi \right\rangle $. The values of the degree four invariants are the same for the cluster states and the state $\left\vert \chi \right\rangle _{c}$, but these are not unitarily equivalent states. The difference between $\left\vert \Phi \right\rangle $, $\left\vert \chi \right\rangle _{c}$, and the cluster states lies in the entanglement of three qubit subsystems, as is manifest in the values of the three qubit invariants in Table \ref{table2}.
\begin{table}[t] \caption{Numerical values of four qubit invariants for the $\left\vert GHZ\right\rangle $ state \protect\cite{gree89}, the $\left\vert \protect\chi \right\rangle $ state \protect\cite{yeo06,ye08}, the $\left\vert HS\right\rangle $ state \protect\cite{higu00}, the cluster states $\left\vert C_{1}\right\rangle $, $\left\vert C_{2}\right\rangle $, $\left\vert C_{3}\right\rangle $ \protect\cite{brie01,raus01,gour10}, and the state $\left\vert \Phi \right\rangle $ \protect\cite{brow06}.}\centering \par
\begin{tabular}{||c||c||c||c||c||c||c||c||c||} \hline\hline $State$ & $\left( T_{4}\right) ^{2}$ & $J_{4}^{A_{1}A_{2}}$ & $ J_{4}^{A_{1}A_{3}}$ & $J_{4}^{A_{1}A_{4}}$ & $\tau _{4}$ & $\beta _{4}=\frac{ 1}{3}\sum\limits_{j=2}^{4}\beta _{4}^{A_{1}A_{j}}$ & $\frac{1}{4} \sum\limits_{p=1}^{4}\left( N_{G}^{A_{p}}\right) ^{2}$ & $\Delta _{4}$ \\ \hline\hline $\left\vert GHZ\right\rangle $ & $\frac{1}{4}$ & $\frac{1}{4}$ & $\frac{1}{4} $ & $\frac{1}{4}$ & $1$ & $\frac{1}{3}$ & $1$ & $0$ \\ \hline\hline $\left\vert \chi \right\rangle $ & $0$ & $-\frac{1}{4}$ & $-\frac{1}{4}$ & $ \frac{1}{2}$ & $0$ & $\frac{4}{9}$ & $1$ & $1$ \\ \hline\hline $\left\vert HS\right\rangle $ & $0$ & $\frac{1}{3}$ & $\frac{i\sqrt{3}\ -1}{6 }\ $ & $-\frac{i\sqrt{3}\ +1}{6}$ & $0$ & $\frac{4}{9}$ & $1$ & $1$ \\ \hline\hline $\left\vert C_{1}\right\rangle $ & $0$ & $-\frac{1}{2}$ & $\frac{1}{4}$ & $ \frac{1}{4}$ & $0$ & $\frac{4}{9}$ & $1$ & $1$ \\ \hline\hline $\left\vert C_{2}\right\rangle $ & $0$ & $\frac{1}{4}$ & $\frac{1}{4}$ & $- \frac{1}{2}$ & $0$ & $\frac{4}{9}$ & $1$ & $1$ \\ \hline\hline $\left\vert C_{3}\right\rangle $ & $0$ & $\frac{1}{4}$ & $-\frac{1}{2}$ & $ \frac{1}{4}$ & $0$ & $\frac{4}{9}$ & $1$ & $1$ \\ \hline\hline $\left\vert \Phi \right\rangle $ & $0$ & $\frac{3}{8}$ & $0$ & $-\frac{3}{8}$ & $0$ & $\frac{1}{3}$ & $1$ & $1$ \\ \hline\hline \end{tabular} \label{table1} \end{table}
\begin{table}[t] \caption{Numerical values of $\left( T_{4}\right) ^{2}$, the sextic invariant $I_{6}^{A_{2}A_{3}}$, and three qubit invariants for the $\left\vert GHZ\right\rangle $, $\left\vert \protect\chi \right\rangle _{c}$, $\left\vert HS\right\rangle $, $\left\vert C_{1}\right\rangle $, $\left\vert C_{2}\right\rangle $, $\left\vert C_{3}\right\rangle $, and $\left\vert \Phi \right\rangle $ states.}\centering \par
\begin{tabular}{||c||c||c||c||c||c||c||c||c||} \hline\hline $State$ & $\left( T_{4}\right) ^{2}$ & $I_{6}^{A_{2}A_{3}}$ & $\left( I_{4}\right) ^{A_{1}A_{2}A_{3}}$ & $\left( P_{3}\right) ^{A_{1}A_{2}A_{3}}$ & $\left( I_{4}\right) ^{A_{1}A_{3}A_{2}}$ & $\left( P_{3}\right) ^{A_{1}A_{3}A_{2}}$ & $\left( I_{4}\right) ^{A_{1}A_{4}A_{2}}$ & $\left( P_{3}\right) ^{A_{1}A_{4}A_{2}}$ \\ \hline\hline $\left\vert GHZ\right\rangle $ & $\frac{1}{4}$ & $0$ & $\frac{1}{4}$ & $0$ & $\frac{1}{4}$ & $0$ & $\frac{1}{4}$ & $0$ \\ \hline\hline $\left\vert HS\right\rangle $ & $0$ & $\frac{i\sqrt{3}\ -1}{6}$ & $\frac{1}{9 }$ & $\frac{2}{9}$ & $\frac{i\sqrt{3}\ -1}{18}$ & $\frac{i\sqrt{3}\ -1}{9}$ & $-\frac{i\sqrt{3}\ +1}{18}$ & $-\frac{i\sqrt{3}\ +1}{9}$ \\ \hline\hline $\left\vert \Phi \right\rangle $ & $0$ & $0$ & $\frac{1}{4}$ & $\frac{1}{8}$ & $\frac{1}{8}$ & $-\frac{1}{8}$ & $-\frac{1}{8}$ & $-\frac{1}{4}$ \\ \hline\hline $\left\vert \chi \right\rangle _{c}$ & $0$ & $0$ & $0$ & $-\frac{1}{4}$ & $0$ & $-\frac{1}{4}$ & $0$ & $\frac{1}{2}$ \\ \hline\hline $\left\vert C_{1}\right\rangle $ & $0$ & $0$ & $0$ & $-\frac{1}{2}$ & $\frac{ 1}{4}$ & $0$ & $\frac{1}{4}$ & $0$ \\ \hline\hline $\left\vert C_{2}\right\rangle $ & $0$ & $0$ & $\frac{1}{4}$ & $0$ & $\frac{1 }{4}$ & $0$ & $0$ & $-\frac{1}{2}$ \\ \hline\hline $\left\vert C_{3}\right\rangle $ & $0$ & $0$ & $\frac{1}{4}$ & $0$ & $0$ & $- \frac{1}{2}$ & $\frac{1}{4}$ & $0$ \\ \hline\hline \end{tabular} \label{table2} \end{table}
We notice that the $\left\vert GHZ\right\rangle $ state, the $\left\vert HS\right\rangle $ state, the $\left\vert \chi \right\rangle $ state, the group of states $\left\vert C_{1}\right\rangle $, $\left\vert C_{2}\right\rangle $, $\left\vert C_{3}\right\rangle $, and the state $\left\vert \Phi \right\rangle $ belong to five distinct four qubit entanglement classes. Each state is maximally entangled in its own class, with $\frac{1}{4}\sum\limits_{p=1}^{4}\left( N_{G}^{A_{p}}\right) ^{2}=1$, however with different capability for performing information processing tasks.
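The condition $\frac{1}{4}\sum_{p}\left( N_{G}^{A_{p}}\right)^{2}=1$ can be spot-checked numerically. The sketch below (our own construction with numpy, not code from the article) computes the global negativity of the four qubit $\left\vert GHZ\right\rangle$ state with respect to qubit $A_{1}$, using $N_{G}=\Vert \rho ^{T_{A_{1}}}\Vert _{1}-1$:

```python
import numpy as np

def ket(bits):
    """Computational-basis vector |b1 b2 b3 b4> as a length-16 array."""
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

# Four-qubit GHZ state (|0000> + |1111>)/sqrt(2)
psi = (ket("0000") + ket("1111")) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial transpose with respect to qubit A1 (the most significant qubit)
rho_t = rho.reshape(2, 8, 2, 8).transpose(2, 1, 0, 3).reshape(16, 16)

# Global negativity N_G = ||rho^T_A||_1 - 1 (trace norm minus one)
trace_norm = np.abs(np.linalg.eigvalsh(rho_t)).sum()
N_G = trace_norm - 1.0
print(round(N_G**2, 6))  # -> 1.0
```

By the permutation symmetry of the GHZ state the same value is obtained for each qubit, so the average in the table is also 1.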
\section{Conclusions}
To summarize, the transformation equations for negativity fonts under unitary transformations yield relevant $N$-qubit invariants and determine the local unitaries relating unitarily equivalent states. An expression for global negativity in terms of determinants of negativity fonts has been found: the squared negativity of the $N$-qubit partially transposed operator is four times the sum of the squared moduli of the determinants of all possible negativity fonts. The structure of the four qubit invariants of degree four that detect entanglement between pairs of qubits indicates why some unitarily equivalent states may have different sets of $K$-way coherences. It is shown that the four qubit invariant $J_{4}^{A_{p}A_{q}}$ can be expressed in terms of three qubit invariants for qubits $A_{p}A_{q}A_{r}$ or $A_{p}A_{q}A_{s}$. Three qubit invariants can be manipulated by a unitary transformation on the fourth qubit, but their value for the canonical state is unique. In the context of the four qubit states studied in this article, the two types of three qubit entanglement invariants, each corresponding to a different type of quantum correlations present in the canonical state, play an important role in distinguishing between states with inequivalent entanglement types. Degree six invariants can also be constructed easily from Eqs. (\ref{t1}-\ref{t4}), as shown by writing the invariant $\left( I_{6}^{\left( A_{2}A_{3}\right) }\right) ^{A_{1}A_{2}A_{3}A_{4}}$. Decomposition of the partially transposed matrix into $K$-way partial transposes is a tool to identify the type of quantum correlations which entangle the qubits. We have used the expressions of polynomial invariants in terms of negativity fonts to elucidate the difference in the microstructure of some well known four qubit pure states.
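The relation between squared negativity and negativity fonts stated above can be written compactly as (with $\nu $ running over all negativity fonts of the partial transpose with respect to qubit $A_{p}$; the summation notation here is ours, condensing the statement in the text):
\begin{equation*}
\left( N_{G}^{A_{p}}\right) ^{2}=4\sum_{\nu }\left\vert \det \nu \right\vert ^{2}.
\end{equation*}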
We conclude that the entanglement in the four qubit $\left\vert GHZ\right\rangle $ state, $\left\vert \chi \right\rangle $ state, $\left\vert HS\right\rangle $ state, cluster states $\left\vert C_{1}\right\rangle $, $\left\vert C_{2}\right\rangle $, $\left\vert C_{3}\right\rangle $, and state $\left\vert \Phi \right\rangle $ is qualitatively different, since the states belong to different classes of four qubit entangled states. The cluster states $\left\vert C_{1}\right\rangle $, $\left\vert C_{2}\right\rangle $, $\left\vert C_{3}\right\rangle $ differ from the $\left\vert \chi \right\rangle _{c}$ state in having a different type of three qubit correlations in canonical form. These results indicate that, along with composite system invariants, one needs subsystem invariants in canonical form to characterize the entanglement of a state. The four qubit entangled states investigated here do not represent all four qubit entanglement types represented by the nine families of four qubit states \cite{vers02}. However, the results provide insight into formulating an efficient criterion for the classification of four qubit entangled states. In ref. \cite{shar102}, a general method for writing the N-tangle was given. In general, for even $N$, the square of the degree two invariant, which has only $N$-way fonts, can be written as a sum of invariants that detect the entanglement of parts of the composite system. As such, the ideas developed for four qubits may be extended to multi qubit systems.
\begin{acknowledgments} This work is supported by Faep Uel, Funda\c{c}\~{a}o Arauc\'{a}ria and CNPq, Brazil. \end{acknowledgments}
\end{document}
\begin{document}
\title{Entanglement demonstration on board a nano-satellite} \author{Aitor Villar\footnote{Email address: aitor.villar@u.nus.edu.}} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Alexander Lohrmann} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Xueliang Bai} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Tom Vergoossen} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Robert Bedington} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Chithrabhanu Perumangatt} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Huai Ying Lim} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Tanvirul Islam} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Ayesha Reezwana} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Zhongkan Tang} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Rakhitha Chandrasekara} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Subash Sachidananda} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \author{Kadir Durak} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \affiliation{Now at Department of Electrical and Electronics Engineering, \"{O}zye\v{g}in University, 34794, Istanbul, Turkey}
\author{Christoph F. Wildfeuer} \affiliation{FHNW University of Applied Sciences and Arts Northwestern Switzerland, School of Engineering, Klosterzelgstrasse 2, CH-5210 Windisch, Switzerland\\ } \author{Douglas Griffin} \affiliation{University of New South Wales Canberra, School of Engineering and Information Technology, Canberra, Australia\\ } \author{Daniel K. L. Oi} \affiliation{SUPA Department of Physics, University of Strathclyde, John Anderson Building, 107 Rottenrow East, G4 0NG Glasgow, UK\\ } \author{Alexander Ling} \affiliation{
Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, S117543, Singapore\\ } \affiliation{Physics Department, National University of Singapore, 2 Science Drive 3, S117542, Singapore}
\date{\today}
\begin{abstract} Global quantum networks for secure communication can be realised using large fleets of satellites distributing entangled photon-pairs between ground-based nodes. Because the cost of a satellite depends on its size, the smallest satellites will be most cost-effective. This paper describes a miniaturised, polarization entangled, photon-pair source operating on board a nano-satellite. The source violates Bell’s inequality with a CHSH parameter of \SI[separate-uncertainty = true]{2.60(6)}{}. This source can be combined with optical link technologies to enable future quantum communication nano-satellite missions. \end{abstract} \maketitle
\section{Introduction}
Quantum entanglement describes non-local correlation between multiple bodies such that their wavefunction is irreducible to a product of individual wavefunctions. Entanglement correlations~\cite{epr,scat,bellinequality, freedman72, aspect81} have emerged as an essential resource in quantum technologies and entanglement is used in various fields such as computation~\cite{loss1998quantum}, sensing~\cite{degen2017quantum} and communication~\cite{E91}.
Near term applications include quantum key distribution (QKD), where entanglement can be used to quantify knowledge gained by an adversary~\cite{E91} and to enable device-independent encryption~\cite{acin07}. Beyond the immediate advantages for near-term technologies (such as QKD), efforts and resources are being directed towards the development of a Quantum Internet~\cite{kimble08,wehner2018quantum}. Such a network is envisioned to feature quantum nodes that are capable of producing, detecting or verifying quantum entanglement. A global quantum network can be more readily realized using space-based nodes~\cite{wehner2018quantum,vergoossen2019satellite,khatri2019spooky}.
\begin{figure}
\caption{Systems within the SpooQy-1 satellite (some solar panels not shown for clarity). Fully assembled, the CubeSat mass is \SI{2.6}{\kilogram}. The experiment has a volume of \SI{206}{\milli\metre} $\times$ \SI{85}{\milli\metre} $\times$ \SI{49}{\milli\metre}, a mass of \SI{0.9}{\kilogram} and its peak power consumption is \SI{2.5}{\watt}.}
\label{fig:set1}
\end{figure}
The first steps towards space-based nodes have been taken~\cite{vallone15,tang2016generation,gunthner17,yin17,liao2017satellite,takenaka17} and major milestones that demonstrate space-based quantum communication primitives have been achieved by the Micius satellite~\cite{liao2017satellite,yin17,liao18}. These pioneering space experiments used sizable satellite platforms with significant resources; the satellite mass in the space-based communication experiments have ranged between \SI{50}{\kilogram}~\cite{takenaka17} to \SI{600}{\kilogram}~\cite{liao2017satellite}. One opportunity for accelerating progress within the field is to utilise smaller, standardized spacecraft to enable cost-effective quantum nodes in space~\cite{morong2012quantum,oi2017cubesat}. The de-facto standard spacecraft in the nano-satellite class is the CubeSat, where the most basic platform is a \SI{10}{\centi\metre} cube (1U) and a growing family of proportionally larger (2U, 3U, 6U, 12U) spacecraft are also defined.
Previous work reported that photon-pairs created by spontaneous parametric downconversion (SPDC) could be generated on board CubeSats~\cite{tang2016generation}. This paper reports on the essential next step: the generation and detection of polarization entangled photon-pairs on board a CubeSat in low-Earth orbit (LEO). This demonstration marks a milestone towards realizing space-to-ground entanglement distribution from a CubeSat~\cite{qkdqubesat}.
\begin{figure}\label{fig:set2}
\end{figure}
\section{Methods} The in-orbit experiment occupies approximately 2U of volume in the 3U CubeSat, SpooQy-1 (Fig.~\ref{fig:set1}, designed and built at the Centre for Quantum Technologies, National University of Singapore). The remaining 1U houses the spacecraft avionics. The experiment is composed of a source of entangled photon-pairs coupled to a detector module (see Fig.~\ref{fig:set2}(a)) all controlled by an integrated electronics sub-system. A micro-controller on the experiment interfaces to the satellite's on-board computer to receive commands and to return science data to ground control.
The polarization entangled photon-pair source is based on collinear, non-degenerate type-I SPDC with critically phase-matched non-linear crystals. The source design (Fig.~\ref{fig:set2}(b)) uses a \textit{parallel-crystal} configuration~\cite{villar2018experimental, lohrmann2018high}. The beam overlap found in this design provides the source with better alignment stability in contrast to other two-crystal designs~\cite{trojek2008collinear}.
A collimated laser diode (central wavelength $\lambda$ = \SI{405}{\nano\metre}, spectral linewidth $\Delta\nu$=\SI{160}{\mega\hertz}) with a beam full-width half-maximum of \SI{800}{\micro\metre}$\times$\SI{400}{\micro\metre} is used as a continuous-wave pump for the SPDC process. The pump produces horizontally polarized photon-pairs in two $\beta$-Barium Borate (BBO-1 and BBO-2) crystals (cut angle: \SI{28.8}{\degree}, length: \SI{6}{\milli\metre}). Between the two BBO crystals, an achromatic half-wave plate (HWP) induces a \SI{90}{\degree} rotation in the polarization of the SPDC photons from BBO-1, while the pump polarization remains unaffected.
The photon-pair source produces the state $\ket{\phi} = \frac{1}{\sqrt{2}}\big(\ket{H_{s}H_{i}}+e^{i\Delta\varphi}\ket{V_{s}V_{i}}\big)$, where $s$ ($i$) denotes the signal (idler) photon wavelength, and $\Delta\varphi$ is the relative phase-difference between photon-pairs born in BBO-1 and BBO-2. Excess pump light is removed by a dichroic mirror to a detector that tracks power and pointing. An a-cut yttrium orthovanadate (YVO$_4$) crystal compensates for the birefringent dispersion of the SPDC photons (related to $\Delta\varphi$~\cite{villar2018experimental}). The tilt angle of BBO-1 is adjusted such that the final phase difference $\Delta\varphi$ becomes $\pi$ generating the maximally entangled Bell state $\vert \Phi^-\rangle$.
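As a quick numerical illustration (our own sketch with numpy, not flight code), setting $\Delta\varphi = \pi$ in the two-photon polarization state above yields $\vert \Phi^-\rangle$, whose reduced single-photon state is maximally mixed, the signature of maximal entanglement:

```python
import numpy as np

# Source state |phi> = (|HH> + e^{i*dphi}|VV>)/sqrt(2); dphi = pi gives |Phi->
H, V = np.array([1.0, 0.0]), np.array([0.0, 1.0])
dphi = np.pi
phi = (np.kron(H, H) + np.exp(1j * dphi) * np.kron(V, V)) / np.sqrt(2)

rho = np.outer(phi, phi.conj())
# Reduced state of the signal photon: trace out the idler
rho_s = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.allclose(rho_s, np.eye(2) / 2))  # maximally mixed reduced state
```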
The relative angle of the pump beam and the optical axis of the BBO crystals must be kept within \SI{100}{\micro\radian} in order to control the phase of the generated photon-pairs (see Fig.~\ref{fig:SI-1} in Appendix~\ref{appendix:tolerance}). This can be achieved without active alignment using titanium flexure stages. To reduce misalignments resulting from a mismatch in the thermal expansion of different materials, the rest of the optical bench is also made of titanium.
The SPDC photon-pairs are separated by a dichroic mirror, and signal and idler photons have their polarization state analysed separately. Each polarization analyser is composed of a liquid-crystal polarization rotator (LCPR) followed by a polarizer~\cite{lohrmann2019manipulation}. Photon detection is performed using un-cooled, passively-quenched, Geiger-mode avalanche photodiodes (GM-APDs, with detection efficiencies of 45\% at $\SI{800}{\nano\metre}$) with active areas of $\SI{500}{\micro\metre}$ located $\SI{10}{cm}$ away from the centre of the source. Detection events are identified as correlations if they occur within a time window of \SI[separate-uncertainty = true]{4.84(6)}{}~ns.
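Correlated events within this window still include accidental coincidences; for uncorrelated singles the standard estimate is $R_{\mathrm{acc}}\approx R_{s}R_{i}\tau_{c}$. A minimal sketch (the per-arm singles rates below are illustrative assumptions, not the flight values):

```python
# Accidental-coincidence estimate R_acc ~ R_s * R_i * tau_c for uncorrelated
# singles; the per-arm rates below are illustrative placeholders only.
tau_c = 4.84e-9          # coincidence window in seconds (from the text)
R_s = R_i = 3.0e5        # assumed per-arm singles rates, counts/s
R_acc = R_s * R_i * tau_c
print(round(R_acc, 1))   # accidentals per second at these assumed rates
```

Accidental-corrected visibilities, as quoted later in the text, subtract this background from the measured coincidences.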
To simplify the optical assembly, collection optics were not used. This collection condition, described in Fig.~\ref{fig:set2}(c), restricts the light detection to SPDC photons with an opening angle, $\alpha$, of $\SI{0.3}{\degree}$. While this affects the brightness, it does not detract from the primary objective of demonstrating in-orbit entanglement.
During the course of operation in orbit the satellite experiences varying levels of solar illumination causing the temperature of the experiment to fluctuate. This can be mitigated by running a \SI{2.5}{\watt} heater (see Fig.~\ref{fig:SI-2} in Appendix~\ref{appendix:temps}). The temperature variation affects the breakdown voltage of the GM-APDs. To ensure a constant detection efficiency, the bias voltage of the detectors are optimized by a window comparator technique that tracks changes in the output pulse height of the GM-APDs~\cite{sachidananda2011bias}.
To investigate the polarization correlation of the photon-pairs, one arm is analysed with a fixed polarization (either H: horizontal, V: vertical, D: diagonal, or A: anti-diagonal) while the other arm is swept through different polarization states. In principle, the LCPR devices can achieve almost $2\pi$ of phase shift, but towards the end of this range the devices lack precision. To improve reliability, the LCPR devices were restricted to a phase shift of approximately \SI{150}{\degree}.
The visibility (contrast) of the polarization correlation curves can be used to assess the quality of the entangled state. Additionally, it is possible to extract 16 data points from these correlation curves. Each curve can provide four data points that are separated by \SI{45}{\degree} (see Fig.~\ref{fig:set3}). These data points are used to obtain a measure of entanglement known as the Clauser-Horne-Shimony-Holt (CHSH)~\cite{clauser1969proposed} parameter, $S$.
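For the ideal $\vert \Phi^-\rangle$ state the CHSH parameter reaches the Tsirelson bound $2\sqrt{2}\approx 2.83$. The sketch below (our own check, with one valid choice of analyser angles, not the satellite's actual settings) reproduces that bound from the polarization correlations $E(a,b)$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=float)
sz = np.array([[1, 0], [0, -1]], dtype=float)

def analyser(theta):
    """+/-1-valued polarization observable for a linear analyser at angle theta."""
    return np.cos(2 * theta) * sz + np.sin(2 * theta) * sx

# Bell state |Phi-> = (|HH> - |VV>)/sqrt(2)
phi_minus = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)

def E(a, b):
    """Correlation <A(a) x A(b)> for the |Phi-> state."""
    M = np.kron(analyser(a), analyser(b))
    return phi_minus @ M @ phi_minus

# One set of angles that maximizes S for |Phi->
a, ap, b, bp = 0.0, np.pi / 4, -np.pi / 8, np.pi / 8
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(round(S, 3))  # -> 2.828, i.e. 2*sqrt(2)
```

Any value of $S>2$ violates the CHSH inequality and certifies entanglement.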
After assembly of the satellite, the on-ground detected pair rate (combined for both polarization bases) is \SI[per-mode=symbol]{1400}{pairs/s} at approximately \SI{17}{\milli\watt} of pump power ($\approx$~\SI[per-mode=symbol]{590000}{singles/s} for signal and idler).
The visibilities (corrected for accidentals) recorded in the two bases ($H$/$V$ and $D$/$A$) were: $V_H$=\SI[separate-uncertainty = true]{0.97(5)}{}, $V_V$=\SI[separate-uncertainty = true]{0.97(6)}{}, $V_D$=\SI[separate-uncertainty = true]{0.84(5)}{} and $V_A$=\SI[separate-uncertainty = true]{0.90(5)}{}. From these curves, a CHSH parameter of \SI[separate-uncertainty = true]{2.63(7)}{} was extracted (see Fig.~\ref{fig:set3}(a)). If used for quantum communication, this source would introduce an intrinsic quantum bit error ratio (QBER) of approximately \SI[separate-uncertainty = true]{3.9(4)}{}\%.
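The quoted intrinsic QBER follows, to good approximation, from the mean visibility via the common rule of thumb $\mathrm{QBER}\approx(1-V)/2$ (the rule of thumb is our assumption here, not the authors' stated analysis):

```python
# QBER estimate from measured visibilities, assuming QBER ~ (1 - V)/2
# with V the mean visibility over both measurement bases.
vis = [0.97, 0.97, 0.84, 0.90]  # V_H, V_V, V_D, V_A (on-ground, accidental-corrected)
V_mean = sum(vis) / len(vis)
qber = (1 - V_mean) / 2
print(f"{qber:.1%}")  # -> 4.0%, consistent with the quoted 3.9(4)% within uncertainty
```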
\begin{figure}
\caption{(a) On-ground polarization correlation curves after the experiment was integrated into the satellite. Average visibilities at \SI{20}{\degree C} (corrected for accidental coincidences) recorded in the two bases ($H$/$V$ and $D$/$A$) were: $V_{H/V}$=\SI[separate-uncertainty = true]{0.97(5)}{} and $V_{D/A}$=\SI[separate-uncertainty = true]{0.87(5)}{}. The corresponding CHSH parameter was \SI[separate-uncertainty = true]{2.63(7)}{}. The dashed, vertical lines indicate the settings used to obtain the CHSH parameter. (b) Correlation curves measured in orbit on 16th July 2019. For clarity, only a subset of data points is shown. The average visibilities (corrected for accidental coincidences) at \SI{17.5}{\degree C} were: $V_{H/V}$=\SI[separate-uncertainty = true]{0.98(6)}{} and $V_{D/A}$=\SI[separate-uncertainty = true]{0.88(6)}{}. The corresponding CHSH parameter was \SI[separate-uncertainty = true]{2.60(6)}{}. (c) In-orbit CHSH values at different temperatures obtained over two weeks of operation. The red-coloured data points were taken after the satellite was under direct solar illumination for 100 hours (see Fig.~\ref{fig:SI-2} in Appendix~\ref{appendix:temps}).}
\label{fig:set3}
\end{figure}
\section{Results} The satellite was deployed into orbit from the International Space station on 17th June 2019 (orbit inclination: \SI{51.6}{\degree}, \SI{408}{\kilo\metre} altitude) and operations began on the same day. The temperature of the experiment fluctuated according to the diurnal cycle of the satellite's 90 minute orbital period as expected. To bring the experiment within the range of operating temperature, the on-board heater was activated (see Fig.~\ref{fig:SI-2} in Appendix~\ref{appendix:temps}). The difference between the on-ground and in-orbit temperatures made it necessary to re-calibrate the LCPR devices and to operate the experiment in-orbit with a different pump current (see Fig.~\ref{fig:SI-3} in Appendix~\ref{appendix:heatmap}), which can yield a different laser mode leading to slightly different brightness levels.
The typical in-orbit detected pair rate (combined for both polarization bases) was \SI[per-mode=symbol]{2200}{pairs/s} ($\approx$~\SI[per-mode=symbol]{700000}{singles/s} for signal and idler).
The highest recorded visibilities were: $V_H$=\SI[separate-uncertainty = true]{0.98(5)}{}, $V_V$=\SI[separate-uncertainty = true]{0.97(6)}{}, $V_D$=\SI[separate-uncertainty = true]{0.88(6)}{} and $V_A$=\SI[separate-uncertainty = true]{0.88(6)}{}. These visibilities yielded a CHSH parameter of \SI[separate-uncertainty = true]{2.60(6)}{} (Fig.~\ref{fig:set3}(b)). This value is a slight underestimate of the actual CHSH parameter because the LCPR settings for the diagonal and anti-diagonal polarization states had a systematic error. This can be seen in Fig.~\ref{fig:set3}(b), where the extrema of the correlation curves in the diagonal/anti-diagonal settings do not occur exactly at the $D/A$ (\SI{45}{\degree}/\SI{135}{\degree}) basis setting. Nevertheless, this causes only a slight degradation in the CHSH value compared to the on-ground baseline value.
Entangled photon-pair production was observed over a temperature range from \SI{16}{\degree C} to \SI{21.5}{\degree C} (Fig.~\ref{fig:set3}(c)). The experiment experienced relatively high temperatures when the satellite entered an orbital condition of continuous solar illumination (no data was collected during this period). Data collection resumed after exiting continuous illumination and pre-illumination performance was observed (see red data points in Fig.~\ref{fig:set3}(c)).
\section{Discussion and conclusion} The operation of a polarization entangled photon-pair source on board a CubeSat in LEO has been reported. This shows that entanglement technology can be deployed with minimal resources in novel operating environments, providing valuable `space heritage' for different components and assembling techniques.
The next generation of the experiment can achieve an improvement of two orders of magnitude in the photon-pair rate~\cite{lohrmann2018high}, and other SPDC configurations are under consideration to enable other performance improvements~\cite{lohrmanfeaturedpaper}. A follow-on mission is under development where the goal is to share entanglement between a nano-satellite and a ground receiver~\cite{qkdqubesat}. To achieve this goal, it is necessary to equip a nano-satellite with an optical terminal that has a pointing capability of approximately \SI{10}{\micro\radian}~\cite{oi2017cubesat}. While this additional infrastructure is demanding, solutions have been reported from the commercial sector~\cite{storm2017cubesat}.
The result from this in-orbit experiment paves the way for testing a variety of satellite-based quantum communication protocols using small standardised spacecraft such as CubeSats. These include placement of sources of faint laser pulses in space to perform decoy-state QKD, or to install only quantum receivers on the CubeSats to enable an uplink configuration~\cite{jennewein2014nanoqey,kerstel2018nanobob,haber2018qube}. Beyond securing keys, two-way entanglement distribution can also enable secure time-transfer~\cite{lee19} between satellites providing global navigation services. Such a capability can be demonstrated via inter-CubeSat quantum communication~\cite{naughton2019design}.
Miniaturized sources are not restricted to nano-satellites. They can also be useful for the development of quantum communication subsystems in larger spacecraft. This manuscript also shows that CubeSats are well-placed to perform in-orbit subsystem and device performance characterization to support the development of space missions with larger satellites.
As small standardized spacecraft are cost-effective, we anticipate an acceleration of space-based demonstration for quantum technologies in domains such as time-keeping and sensing. The use of a standardized platform makes it easier to work towards a realistic miniaturization of the required technology, and to scale up the number of space-based nodes with constellations of \textit{quantum} nano-satellites to enable a global Quantum Internet.
\section{Acknowledgments} This program was supported by the National Research Foundation (Award No. NRF-CRP12-2013-02), Prime Minister’s Office of Singapore, and the Ministry of Education, Singapore. The experiment and satellite were designed and built at the Centre for Quantum Technologies, National University of Singapore.
Daniel K. L. Oi acknowledges support from the UK Space Agency (NSTP3-FT-063, NSTP-QSTP), Innovate UK (AQKD), and EPSRC (Quantum Technology Hub in Quantum Communications Partnership Resource).
\onecolumngrid \begin{appendices} \renewcommand\thefigure{\thesection.\arabic{figure}} \renewcommand{A\arabic{figure}}{A\arabic{figure}} \setcounter{figure}{0}
\section{Angular tolerance of the SPDC crystals} \label{appendix:tolerance}
The phase-matching conditions for the BBO crystals used in the source are achieved by angle tuning. With this technique, the relative angle between the pump beam and the optical axis of the crystal is crucial. This angle must be maintained within $\pm$\SI{100}{\micro \radian} to maintain source performance (e.g. photon-pair phase). This is depicted in Fig.~\ref{fig:SI-1}, where the visibility in the $D/A$ basis is measured while introducing an angular detuning in one of the two BBO crystals. \begin{figure}
\caption{Visibility in the diagonal/anti-diagonal ($D/A$) basis with respect to angular detuning of a BBO crystal. The green area depicts the design angular tolerance ($\pm$\SI{100}{\micro\radian}). This requirement is met by the optical bench~\cite{tang2018towards}.}
\label{fig:SI-1}
\end{figure}
\section{In-orbit heater operation} \label{appendix:temps}
The safe operating temperature range for the experiment was defined as between \SI{15}{\degree C} and \SI{28}{\degree C}; this was driven by the requirements of the pump laser diode. Most of the time, the experimental apparatus is not within this temperature range. Instead, its temperature fluctuates between \SI{-5}{\degree C} and \SI{10}{\degree C} (Fig.~\ref{fig:SI-2}(a)). These fluctuations depend on the satellite's position and orientation during orbit. Furthermore, during the lifetime of the satellite, the solar illumination condition varies as depicted in Fig.~\ref{fig:SI-2}(b). Due to the specific inclination of the orbit~\cite{wang2014operations}, in some cases the satellite does not spend time in eclipse for several days (note the pronounced valleys in Fig.~\ref{fig:SI-2}(b)). During these non-eclipse periods, the satellite could heat up, potentially damaging the experimental apparatus with excess heat.
In order to restrict heat conduction between the experimental apparatus and the satellite bus, and also maintain passive optical stability across varying temperatures, an isostatic mount was fabricated. This mount is made out of three \SI{0.4}{mm} thick stainless steel blades (manuscript under preparation). The blades serve to absorb any thermal expansion mismatch between the experiment and the satellite structure. To achieve the necessary operating temperature a \SI{2.5}{\watt} heater is activated until the required condition is achieved (see Fig.~\ref{fig:SI-2}(c)). \begin{figure}
\caption{(a) Temperature of the experiment during orbit. The shaded area illustrates the time spent in eclipse. (b) Solar illumination conditions of SpooQy-1 during its expected lifetime. (c) Temperature of the experiment when the heater is operated. There is a 120 second gap in between heating cycles for the on-board electronics to perform system checks.}
\label{fig:SI-2}
\end{figure}
\section{Survey of pump modes that produce high-quality entanglement} \label{appendix:heatmap}
The pump wavelength is a function of temperature and current~\cite{zafra2019building}. Changes in wavelength (\textit{mode-hops}) are accompanied by changes in phase $\Delta\varphi$. This phase change can degrade the entanglement quality produced by the source. Mode-hops are common in orbit due to the fluctuating temperature. To recover the entanglement quality, an optimal laser current can be used. A survey of the in-orbit pump laser was performed at different temperatures to identify laser currents that supported the production of high-quality entanglement. The resulting heatmap (see Fig.~\ref{fig:SI-3}(a)) was used as a reference when operating the experiment in space. Additionally, it is worth noting that the output power of the laser diode does not always scale proportionally with the laser current, as shown in Fig.~\ref{fig:SI-3}(b). \begin{figure}
\caption{(a) Visibility in the $D/A$ basis for different combinations of laser currents and temperatures. (b) Laser current versus output power of the pump laser utilised in the experiment.}
\label{fig:SI-3}
\end{figure}
\section{Satellite Bus and Ground Control} \label{appendix:gs} The SpooQy-1 satellite (with NORAD catalogue number: 44332) is based on a 3U Gom-X platform from GomSpace ApS. The SpooQy-1 satellite bus includes: a half-duplex UHF transceiver combined with deployable canted turnstile UHF antennas used for both uplink and downlink; a 32-bit AVR computer with a \SI{64}{\mega Byte} flash storage used as the onboard computer (OBC) for housekeeping and data handling; an attitude determination system (embedded in the OBC) with 3 magnetorquers for 3-axis detumbling; and a \SI{38}{\watt hr} battery pack (four lithium-ion 18650 cells, \SI{7.7}{\watt hr} maximum depth of discharge) with the electrical power management system.
The peak system power consumption is rated at \SI{3.85}{\watt}, while the peak power consumption of the experiment is rated at \SI{2.5}{\watt}. As photon detection is performed on board the satellite, no optical ground station is needed and only UHF ground stations are used for telemetry and satellite command. To increase the link budget two ground stations were used; one located at the National University of Singapore (NUS) campus in Singapore, and another one at the University of Applied Sciences Northwestern Switzerland (FHNW) campus in Switzerland. Both ground stations are equipped with two WiMo X-Quad antennas (amplification gain of \SI{15}{\decibel i}). The rotor for the tracking mount is controlled by a Linux-based server computer (NanoCom MS100). The ground station radio (NanoCom GS100) is the ground counterpart (with a \SI{25}{\watt} power amplifier) to the NanoCom AX100 radio on board SpooQy-1, designed to work together using the CubeSat Space Protocol.
\section{Geometrical loss} \label{appendix:mode-mismatch}
The decision to forego the use of collection lenses in the experiment leads to an optical configuration in which geometrical loss dominates the overall performance of the experiment. This is illustrated in Fig.~\ref{fig:SI-4}. As the laser beam (Fig.~\ref{fig:SI-4}(a)) drives SPDC along the nonlinear crystals, entangled photon-pairs are emitted and directed towards the Geiger-mode avalanche photodiodes (GM-APD). The fraction of photon-pairs successfully detected depends on both the position within the crystal at which a pair is generated and its emission opening angle (up to \SI{0.3}{\degree}); this fraction can be estimated via ray-tracing techniques.
\begin{figure*}
\caption{(a) Intensity profile of the laser beam cross-section recorded with a charge-coupled device camera. The collimated laser beam has a full-width half-maximum of \SI{800}{\micro\metre}$\times$\SI{400}{\micro\metre}. (b) Sketch of the geometrical loss. The generated entangled photon-pairs are not always successfully detected due to their intrinsic emission angle and the different generation positions within the crystal ($x_i$). For a pair to be successfully detected (and to generate a coincidence count), both signal and idler photons need to hit the active area of the detectors (e.g. green rays). If only one of the photons is detected (e.g. orange or blue rays), only a single count will be recorded. For clarity, only one detector and no wavelength separation of the photons have been sketched. (c) A ray-tracing simulation to calculate geometrical loss. The black circle depicts the \SI{500}{\micro\metre} diameter of the detector active area. Entangled photon-pairs hitting that region will be registered as coincidences. This simulation predicts a geometric efficiency of $\leq$4\%. p: pump; s: signal; i: idler; c: coincidence; d: \SI{500}{\micro\metre}; D: \SI{100}{\milli\metre}; GM-APD: Geiger-mode avalanche photodiode.}
\label{fig:SI-4}
\end{figure*}
We use the intensity distribution of the pump beam in one crystal (BBO-2) to randomly generate rays of downconverted photon-pairs. Signal and idler wavelengths and opening angles are drawn at random from distributions based on phase-matching considerations. We propagate the individual rays (taking into account the refraction at the crystal-air interface) towards the single photon detectors (Fig.~\ref{fig:SI-4}(b)). A ray is discarded if neither photon hits its corresponding detector. The ray tracing results are shown in Fig.~\ref{fig:SI-4}(c): only a small percentage ($\leq$4\%; green, central region of the plot) of the generated pairs is successfully detected. Here, a successful photon-pair detection requires both signal and idler photons to reach the active area of the GM-APDs.
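The flavour of such a ray-tracing estimate can be reproduced with a minimal Monte-Carlo sketch. The numbers below (detector radius \SI{250}{\micro\metre}, distance $D=\SI{100}{\milli\metre}$, opening angle up to \SI{0.3}{\degree}, pump FWHM \SI{800}{\micro\metre}$\times$\SI{400}{\micro\metre}) are taken from the figure caption, but the model itself is a deliberate simplification (a single detector plane, no refraction, no wavelength splitting), so it only brackets, and overestimates, the $\leq$4\% figure obtained with the full simulation.

```python
import math
import random

def geometric_efficiency(n_pairs=200_000, seed=1):
    """Toy coincidence efficiency: fraction of pairs where both photons
    land inside the detector active area."""
    rng = random.Random(seed)
    D = 100e-3             # crystal-to-detector distance [m]
    r_det = 250e-6         # detector active-area radius [m]
    theta_max = math.radians(0.3)
    sx = 800e-6 / 2.355    # pump FWHM -> Gaussian sigma (horizontal)
    sy = 400e-6 / 2.355    # pump FWHM -> Gaussian sigma (vertical)
    coincidences = 0
    for _ in range(n_pairs):
        # pair generation point follows the pump intensity profile
        x0, y0 = rng.gauss(0.0, sx), rng.gauss(0.0, sy)
        theta = rng.uniform(0.0, theta_max)    # emission half-angle
        phi = rng.uniform(0.0, 2.0 * math.pi)  # azimuth of the pair axis
        dx = D * math.tan(theta) * math.cos(phi)
        dy = D * math.tan(theta) * math.sin(phi)
        # signal and idler fly to opposite sides of the pump axis
        sig_hit = math.hypot(x0 + dx, y0 + dy) <= r_det
        idl_hit = math.hypot(x0 - dx, y0 - dy) <= r_det
        if sig_hit and idl_hit:
            coincidences += 1
    return coincidences / n_pairs
```

With these parameters the toy model already shows the dominant effect: most pairs with opening angles above $\approx$\SI{0.14}{\degree} are displaced by more than the detector radius and cannot produce a coincidence.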
Geometrical loss can be mitigated with appropriate collection lenses, and state-of-the-art coincidence rates can be achieved with an identical source configuration~\cite{lohrmann2018high}.
\end{appendices}
\end{document}
\begin{document}
\begin{abstract} In these notes, we explore possible stable properties for the zeta function of a geometric $\mathbb{Z}_p$-tower of curves over a finite field of characteristic $p$, in the spirit of Iwasawa theory. A number of fundamental questions and conjectures are proposed for those $\mathbb{Z}_p$ towers coming from algebraic geometry. \end{abstract}
\title{Zeta Functions of $\mathbb{Z}_p$-Towers of Curves}
\section{$\mathbb{Z}_p$-towers of curves}
Let $\mathbb{F}_q$ be a finite field of $q$ elements with characteristic $p>0$ and let $\mathbb{Z}_p$ denote the ring of $p$-adic integers. Consider a $\mathbb{Z}_p$-tower $$C_{\infty}: \cdots \longrightarrow C_n \longrightarrow \cdots \longrightarrow C_1 \longrightarrow C_0$$ of smooth projective geometrically irreducible curves defined over $\mathbb{F}_q$. The $\mathbb{Z}_p$-tower gives a continuous group isomorphism $$\rho: G_{\infty}:= {\text{Gal}}(C_{\infty}/C_0) \cong \mathbb{Z}_p.$$ For each integer $n\geq 0$, reduction modulo $p^n$ gives an isomorphism $$ G_{n}:= {\text{Gal}}(C_{n}/C_0) \cong \mathbb{Z}/{p^n\mathbb{Z}}.$$ Let $S$ be the ramification locus of the tower, a subset of closed points of $C_0$. The tower is unramified on its complement $U = C_0 - S$. We shall assume that $S$ is finite.
By class field theory, the ramification locus $S$ is non-empty, and for each non-empty finite $S$, there are uncountably many such $\mathbb{Z}_p$-towers over $\mathbb{F}_q$. In fact, all $\mathbb{Z}_p$-towers over $C_0$ can be explicitly classified by the Artin-Schreier-Witt theory, see \cite{KW}. In contrast, if $\ell$ is a prime different from $p$, there are no $\mathbb{Z}_{\ell}$-towers over $\mathbb{F}_q$. Note that constant extensions do not give a tower in our sense since that would produce curves which are geometrically reducible. Thus, the $\mathbb{Z}_p$-towers we consider are all geometric $\mathbb{Z}_p$-towers.
The most important class of $\mathbb{Z}_p$-towers naturally comes from algebraic geometry. We describe this briefly now. Let $G_U$ be the Galois group of the maximal abelian extension of the function field of $U$ which is unramified on $U$. Let $\phi: G_U \longrightarrow {\mathbb{Z}_p^*}$ be a continuous rank one surjective $p$-adic representation. Let
$$({\mathbb{Z}_p^*})^{p^n} = \{a^{p^n} | a \in \mathbb{Z}_p^*\}.$$ This is a subgroup of ${\mathbb{Z}_p^*}$. The composition of $\phi$ with the reduction homomorphism gives a surjective homomorphism (if $p>2$) $$\Phi: G_U \longrightarrow {\mathbb{Z}_p^*} \longrightarrow {\mathbb{Z}_p^*}/({\mathbb{Z}_p^*})^{p^n} \cong \mathbb{Z}_p/p^n\mathbb{Z}_p.$$
These surjective homomorphisms naturally produce a $\mathbb{Z}_p$-tower over $C_0$, unramified on $U$, which we assume to be geometric in our sense, that is, there are no constant subextensions.
This geometric tower is said to arise (or come) from {\bf algebraic geometry} if $\phi$ arises from a relative $p$-adic \'etale cohomology of an ordinary family of smooth proper varieties $X$ parameterized by $U$, or more generally in the sense of Dwork's unit root conjecture \cite{Dw} as proved in \cite{wan}, that is, if $\phi$ comes from the unit root part of an ordinary overconvergent $F$-crystal on $U$. A classical example is the Igusa ${\mathbb{Z}_p}$-tower arising from the universal family of ordinary elliptic curves over $\mathbb{F}_p$; more generally, the ${\mathbb{Z}_p}$-tower arising from the following Dwork family of Calabi-Yau hypersurfaces over $\mathbb{F}_p$ parametrized by $\lambda$: $$X_0^{n+1} + X_1^{n+1}+\cdots + X_n^{n+1} + \lambda X_0X_1\cdots X_n =0.$$
\section{Zeta functions}
For integer $n\geq 0$, let $Z(C_n, s)$ denote the zeta function of $C_n$. It is defined by
$$Z(C_n,s) = \prod_{x\in |C_n|} \frac{1}{1-s^{{\rm deg}(x)}} \in 1 +s\mathbb{Z} [[s]],$$
where $|C_n|$ denotes the set of closed points of $C_n$. The Riemann-Roch theorem implies that the zeta function is a rational function in $s$ of the form $$Z(C_n, s) = \frac{P(C_n, s)}{(1-s)(1-qs)}, \ P(C_n, s) \in 1 +s\mathbb{Z}[s],$$ where $P(C_n, s)$ is a polynomial of degree $2g_n$ and $g_n= g(C_n)$ denotes the genus of the curve $C_n$. By the celebrated theorem of Weil, the polynomial $P(C_n, s)$ is pure of $q$-weight $1$, that is, the reciprocal roots of $P(C_n, s)$ all have complex absolute value equal to $\sqrt{q}$.
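The Euler-product definition and the rational form can be checked against each other in the simplest case $C_0=\mathbb{P}^1$, where $P(C_0,s)=1$ and the coefficient of $s^m$ in $Z(\mathbb{P}^1,s)=1/((1-s)(1-qs))$ counts effective divisors of degree $m$, namely $(q^{m+1}-1)/(q-1)$. The short script below (the choice $q=5$ is an arbitrary sample value) expands the Euler product from the counts of closed points of $\mathbb{P}^1$ and compares:

```python
from math import comb

def closed_point_counts(q, dmax):
    # a_d = number of closed points of degree d on P^1 over F_q, determined
    # recursively by #P^1(F_{q^m}) = q^m + 1 = sum_{d | m} d * a_d.
    a = {}
    for d in range(1, dmax + 1):
        lower = sum(e * a[e] for e in range(1, d) if d % e == 0)
        a[d] = (q**d + 1 - lower) // d
    return a

def zeta_series(q, dmax):
    # Expand the Euler product prod_x (1 - s^{deg x})^(-1) up to degree dmax.
    a = closed_point_counts(q, dmax)
    coeffs = [1] + [0] * dmax
    for d, ad in a.items():
        if ad == 0:
            continue
        new = [0] * (dmax + 1)
        for k in range(dmax // d + 1):
            c = comb(ad + k - 1, k)   # coeff of s^{dk} in (1 - s^d)^(-ad)
            for i in range(dmax + 1 - d * k):
                new[i + d * k] += c * coeffs[i]
        coeffs = new
    return coeffs

q, dmax = 5, 5
series = zeta_series(q, dmax)
rational = [(q**(m + 1) - 1) // (q - 1) for m in range(dmax + 1)]
```

The two expansions agree term by term, as Riemann-Roch predicts for genus zero.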
The $q$-adic valuations (also called $q$-slopes) of the reciprocal roots of $Z(C_n, s)$ remain quite mysterious in general. Our aim is to study this question in the spirit of Iwasawa theory. Namely, we want to investigate possible stable properties of the slopes as $n$ varies. Classical geometric Iwasawa theory corresponds to the study of the slope zero part of the zeta function. Although the situation is much more complicated, we feel that there is a rich theory in all higher slopes, for $\mathbb{Z}_p$-towers arising from algebraic geometry. The aim of these notes is to explore some of the basic questions, suggesting what might be true and what might be false, and giving supporting examples when available.
\section{Main questions and conjectures}
We now make our questions a little more precise, beginning with some simpler questions. Write $$P(C_n, s) = \prod_{i=1}^{2g_n} (1-\alpha_i(n)s) \in \mathbb{C}_p [s], \ \ 0\leq v_q(\alpha_1(n)) \leq \cdots \leq v_q(\alpha_{2g_n}(n)) \leq 1, $$ where the $q$-adic valuation is normalized such that $v_q(q)=1$. We would like to understand how the polynomial $P(C_n, s)$ (and its zeros) varies when $n$ goes to infinity. The first and simplest question is about the degree of the polynomial $P(C_n, s)$.
\begin{Qn} How does the genus $g_n$ vary as $n$ goes to infinity? \end{Qn}
The answer depends on how the $\mathbb{Z}_p$-tower is given. Classically, the construction of all $\mathbb{Z}_p$-towers was given by Witt using Witt vectors, and the genus was explicitly computed by Schmid (1937) in the same framework of Witt vectors, see \cite{KW} for a simplified complete treatment. In modern applications, a $\mathbb{Z}_p$-tower naturally arises from algebraic geometry, and it is not clear how to compute the genus. In this direction, we propose
\begin{Con}\label{Con1.0} Assume that the tower comes from algebraic geometry. Then, the genus sequence $g_n$ is stable, that is, there are constants $a, b, c$ with $a>0$ depending on the tower such that for all sufficiently large $n$, we have $$g_n = ap^{2n} + bp^n + c.$$ \end{Con}
As an example, one can check that the conjecture is true in the case of the Igusa $\mathbb{Z}_p$-tower, using the results in Katz-Mazur [KM]. The converse of the conjecture is not true: there are only countably many $\mathbb{Z}_p$-towers coming from algebraic geometry, but there are uncountably many $\mathbb{Z}_p$-towers with stable genus sequence. The genus growth question was first studied in Gold-Kisilevsky \cite{GK}, where a simple lower bound of the form $g_n \geq cp^{2n}$ is proved for some positive constant $c$. It is easy to construct towers so that $g_n$ grows as fast as one wishes. In \cite{KW}, we give a general explicit formula for $g_n$ via a different explicit construction of $\mathbb{Z}_p$-towers. This new construction is better suited for studying the slopes of zeta functions: one can quickly read off the genus and see its variational behavior. In particular, genus stable $\mathbb{Z}_p$-towers are completely classified in \cite{KW, KW2} in terms of a simple condition on our explicit Witt construction of $\mathbb{Z}_p$-towers. To prove the above conjecture, one needs to check that this simple condition is satisfied for all towers coming from algebraic geometry, which are usually given in multiplicative form. This is not easy to do in practice. In \cite{K2}, Joe Kramer-Miller develops a multiplicative approach which is better suited to proving this conjecture.
{\bf Remarks}. One can ask the same question for the stable behavior of the discriminant of a $\mathbb{Z}_p$-tower of number fields. Let $$K= K_0 \subset K_1 \subset K_2 \subset \cdots $$ be a $\mathbb{Z}_p$-tower of number fields. Let $d_n$ denote the absolute discriminant of $K_n$. The analogue of the genus in a number field is the logarithm of the discriminant. Since the prime-to-$p$ part of $d_n$ is fixed for large $n$, this is equivalent to asking for the $p$-adic valuation $v_p(d_n)$. One can show \cite{Tate} that for sufficiently large $n$, $$v_p (d_n) = anp^n +bp^n +c, $$ where $a>0, b, c$ are constants depending only on the tower. In the beautiful paper \cite{Up}, Upton has generalized this stable formula to any $p$-adic Lie extension of number fields, not necessarily abelian. For the discriminant problem, the number field case is easier as the ramification is much more controlled. In contrast, the function field case is much more complicated as the ramification is completely wild, and so some condition is necessary to ensure stability.
We now return to the function field case. Having understood the degree of the polynomial $P(C_n, s)$, our second natural question concerns the splitting field of $P(C_n, s)$; in particular, how the degree of this splitting field varies with $n$. \begin{Con}\label{Con1.1} Let $\mathbb{Q}_n$ (resp. $\mathbb{Q}_{p,n}$) denote the splitting field of $P(C_n, s)$ over $\mathbb{Q}$ (resp. $\mathbb{Q}_p$).
\item{(1)}. The extension degree $[\mathbb{Q}_{n}:\mathbb{Q}]$ goes to infinity as $n$ goes to infinity.
\item{(2)}. The extension degree $[\mathbb{Q}_{p,n}:\mathbb{Q}_p]$ goes to infinity as $n$ goes to infinity.
\item{(3)}. The ramification degree $[\mathbb{Q}_{p,n}:\mathbb{Q}_p]^{\rm{ram}}$ goes to infinity as $n$ goes to infinity.
\item{(4)}. There is a positive constant $c$ depending on the tower such that for all large $n$, we have $[\mathbb{Q}_{p,n}:\mathbb{Q}_p]^{\rm{ram}}\geq cp^n$. \end{Con}
Clearly, each part is significantly stronger than the previous one. In all the examples we know, the ramification degree of $\mathbb{Q}_{p,n}$ over $\mathbb{Q}_p$ grows at least as fast as $cp^n$, that is, the strongest part holds and thus all parts hold. Note that this conjecture is for all $\mathbb{Z}_p$-towers, not necessarily those coming from algebraic geometry. If the $\mathbb{Z}_p$-tower comes from algebraic geometry, then one expects the much stronger stability $[\mathbb{Q}_{p,n}:\mathbb{Q}_p]^{\rm{ram}}= cp^n$ for some positive constant $c$ and all large $n$. This would be a consequence of our later slope stability conjecture. The work of \cite{GK} shows that the above conjecture is true if $\mu=0$ and $\lambda\not=0$, where $\mu$ and $\lambda$ are the invariants in Iwasawa theory which will be discussed later.
We now refine the above geometric genus question into many arithmetic questions in terms of slopes. Fix a rational number $\alpha\in [0, \infty )$, and let $d_{\alpha}(n)$ denote the multiplicity of $\alpha/p^{n}$ in the slope sequence of $P(C_n, s)$. That is,
$$d_{\alpha}(n) = \# \{ 1\leq i \leq 2g_n | v_q(\alpha_i(n))=\frac{\alpha}{p^{n}}\}.$$ The reason to re-scale the slope by the factor $1/p^{n}$ is that the first few slopes of $P(C_n, s)$ are expected to be of the form $\alpha/p^n$ for a few rational numbers $\alpha$, independent of $n$, if the tower comes from algebraic geometry. By definition, $d_{\alpha}(n)=0$ for $\alpha>p^n$, and $$\sum_{\alpha} d_{\alpha}(n) = 2g_n.$$
\begin{Qn} For a fixed rational number $\alpha \in [0,\infty )$, how does the number $d_{\alpha}(n)$ vary as $n$ goes to infinity? \end{Qn}
Note that $d_0(n)$ is the $p$-rank of $C_n$, namely, the rank of the $p$-adic Tate module of the Jacobian of $C_n$. Furthermore, the class number $h_n$ (the number of divisor classes of degree $0$ of $C_n$) is given by the residue formula $$h_n: = \#{\rm Jac}(C_n)(\mathbb{F}_q) = P(C_n, 1).$$ One has the following result from classical geometric Iwasawa theory.
\begin{Thm}\label{Thm1.1} (i). There are integer constants $\mu_1$ and $\mu_2$ such that for all sufficiently large $n$, we have $$d_0(n) = p^n\mu_1 + \mu_2.$$
(ii). There are integer constants $\mu, \lambda, \nu$ such that for all sufficiently large $n$, we have $$v_p(h_n) = \mu p^n + \lambda n +\nu.$$ (iii). For all sufficiently large $n$, we have the congruence $$\frac{h_n/h_{n-1}}{p^{v_p(h_n/h_{n-1})}} \equiv 1 \mod p.$$ \end{Thm}
This theorem is really about the slope zero part of the zeta function. To illustrate our new point of view, we shall give a simple new proof of the above result, using the definition of the $T$-adic L-function introduced later. This new approach is successfully extended in \cite{W19} to prove the function field version of Greenberg's conjecture for $\mathbb{Z}_p^r$-towers for any positive integer $r$. It is a pleasure to thank Ralph Greenberg, who asked if part (i) of the above theorem is true. For higher slope $\alpha>0$, the problem is completely new and apparently more complicated.
\begin{Qn}\label{Qn1.3} For each fixed $\alpha \in [0,\infty )$, are there constants $\mu_1(\alpha)$ and $\mu_2(\alpha)$ such that for all sufficiently large $n$, we have $$d_{\alpha}(n) = p^n\mu_1(\alpha) + \mu_2(\alpha)?$$ \end{Qn}
If we do not re-scale the slope in the definition of $d_{\alpha}(n)$ and look at all slopes, not necessarily the first few slopes, we can ask the following question.
\begin{Qn} Are the $q$-slopes $$\{ v_q(\alpha_1(n)), \cdots, v_q(\alpha_{2g_n}(n))\} \subset [0, 1] \cap \mathbb{Q} \subset [0, 1]$$ equi-distributed in the interval $[0, 1]$ as $n$ goes to infinity? \end{Qn}
A weaker version is to ask if the set of $q$-slopes for all $n$ is dense in $[0, 1]$. A stronger version is the following possible finiteness property.
\begin{Qn} Is the slope sequence stable in some sense? More precisely, is there a positive integer $n_0$ depending on the tower such that the re-scaled $q$-slopes $$\{ p^nv_q(\alpha_1(n)), \cdots, p^nv_q(\alpha_{2g_n}(n))\}$$ for all $n> n_0$ are determined explicitly in a simple way by their values for $0\leq n \leq n_0$, using a finite number of arithmetic progressions? \end{Qn}
The precise meaning of this finiteness property will be made clearer later. These three questions are too general to have a positive answer in full generality, as there are too many $\mathbb{Z}_p$-towers, most of which are not natural. In fact, we believe that each of the above three questions has a negative answer in full generality. It would be interesting to find examples showing that this is indeed the case. However, we conjecture that the answers to all three questions are positive for all $\mathbb{Z}_p$-towers coming from algebraic geometry. More precisely, our main conjecture is the following.
\begin{Con}\label{Con1.2} Assume that the tower comes from algebraic geometry.
\item{(1)}. The genus sequence $g_n$ is stable. That is, there are constants $a, b, c$ with $a>0$ depending on the tower such that for all sufficiently large $n$, we have $$g_n = ap^{2n} + bp^n + c.$$
\item{(2)}. For each fixed $\alpha \in [0,\infty )$, there are constants $\mu_1(\alpha)$ and $\mu_2(\alpha)$ such that for all sufficiently large $n$, we have $$d_{\alpha}(n) = p^n\mu_1(\alpha) + \mu_2(\alpha).$$
\item{(3)}. The $q$-slopes $$\{ v_q(\alpha_1(n)), \cdots, v_q(\alpha_{2g_n}(n))\} \subset [0, 1] \cap \mathbb{Q} \subset [0, 1]$$ are equi-distributed in the interval $[0, 1]$ as $n$ goes to infinity.
\item{(4)}. The slope sequence is stable. That is, there is a positive integer $n_0$ depending on the tower such that the re-scaled $q$-slopes $\{ p^nv_q(\alpha_1(n)), \cdots, p^nv_q(\alpha_{2g_n}(n))\}$ for all $n> n_0$ are determined explicitly by their values for $0\leq n \leq n_0$, using a finite number of arithmetic progressions. \end{Con}
We shall make part (4) of the conjecture more precise later and give some supporting examples. Parts (2)-(4) of the conjecture are in increasing level of difficulties.
Each of parts (1)-(3) is a weak consequence of part (4). Thus, a proof for each of parts (1)-(3) is also evidence for the strongest part (4) of the conjecture. Recent work of Kosters-Zhu \cite{KZ} suggests that part (1) almost implies part (3), and part (3) has been proved when $U$ is the affine line.
Before moving on, we note the following consecutive divisibility
$$P(C_0, s) | P(C_1, s) | \cdots | P(C_n, s) | \cdots.$$ Thus, it is enough to study for $n\geq 1$, the primitive part of $Z(C_n, s)$ defined by $$Q(C_n, s) = \frac{P(C_n, s)}{P(C_{n-1}, s)} = \frac{Z(C_n, s)}{Z(C_{n-1}, s)}.$$
\section{L-functions}
Note that if $C_n$ is ramified over $C_0$ at a closed point $x\in S$, then $C_{m}$ is totally ramified over $C_n$ at $x$ for all $m\geq n$ since the Galois group is a cyclic $p$-group. Without loss of generality, by going to a larger $n$ if necessary, we can assume that $C_1$ is already ramified at every point of $S$. From now on, we assume that $C_1$ is indeed (totally) ramified at every point of $S$.
Recall that for $n\geq 1$, the Galois group $$G_{n}= {\text{Gal}}(C_{n}/C_0) \cong \mathbb{Z}/{p^n\mathbb{Z}}.$$ For a primitive character $\chi_n: G_n \rightarrow \mathbb{C}_p^*$ of order $p^n>1$, the L-function of $\chi_n$ over $C_0$ is
$$L(\chi_n, s) = \prod_{x\in |U|} \frac{1}{1-\chi_n(\text{Frob}_x)s^{\text{deg}(x)}} \in 1 +s\mathbb{Z}[\chi_n(1)][[s]],$$
where $|U|$ denotes the set of closed points of $U$ and $\text{Frob}_x$ denotes the arithmetic Frobenius element of $G_n$ at $x$. Note that $\zeta_{p^n}:=\chi_n(1)$ is a primitive $p^n$-th root of unity. Again, Weil's theorem shows that the L-function $L(\chi_n, s)$ is a polynomial in $s$, pure of weight $1$. One has the decomposition $$Q(C_n, s) = \prod_{\chi_n: G_n \rightarrow \mathbb{C}_p^*} L(\chi_n, s),$$ where $\chi_n$ denotes a primitive character of order $p^n$. For $\sigma \in \text{Gal}(\mathbb{Q}(\zeta_{p^n})/\mathbb{Q}) = \text{Gal}(\mathbb{Q}_p(\zeta_{p^n})/\mathbb{Q}_p)$, one checks that $$L(\chi_n, s)^{\sigma} = L(\chi_n^{\sigma}, s).$$ It follows that the degree and the slopes for $L(\chi_n, s)$ depend only on $n$, not on the choice of the primitive character $\chi_n$ of $G_n$. We can just choose and fix one character $\chi_n$ of order $p^n$ for each $n\geq 1$, if desired.
The extension degree conjecture for the splitting field of $P(C_n, s)$ reduces to
\begin{Con}\label{Con2}
Let $L_{p,n}$ denote the splitting field of $L(\chi_n, s)$ over $\mathbb{Q}_p$.
\item{(1)}. The extension degree $[L_{p,n}:\mathbb{Q}_p]$ goes to infinity as $n$ goes to infinity.
\item{(2)}. The ramification degree $[L_{p,n}:\mathbb{Q}_p]^{\rm{ram}}$ goes to infinity as $n$ goes to infinity.
\item{(3)}. There is a positive constant $c$ depending on the tower such that for all large $n$, we have $[L_{p,n}:\mathbb{Q}_p]^{\rm{ram}}\geq cp^n$. \end{Con}
Let $\ell(n)$ denote the degree of the polynomial $L(\chi_n, s)$. The degree of $Q(C_n, s)$ is simply $p^{n-1}(p-1)\ell(n)$. The genus $g_n$ of $C_n$ is given by the formula $$2g_n -2g_0= (p-1)\sum_{i=1}^n p^{i-1}\ell(i). $$ To understand the variation of $g_n$ as $n$ varies, it is enough to understand the variation of $\ell(n)$ as $n$ varies. Conjecture \ref{Con1.0} is equivalent to
\begin{Con}\label{Con2.0} Assume that the tower comes from algebraic geometry. Then, the degree sequence $\ell(n)$ is stable in the sense that there are constants $a, b$ with $a>0$ depending on the tower such that for all sufficiently large $n$, we have $$\ell(n) = ap^{n} + b.$$ \end{Con}
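The equivalence of the two stability conjectures is a purely formal consequence of the genus formula $2g_n - 2g_0 = (p-1)\sum_{i=1}^n p^{i-1}\ell(i)$: once $\ell(n)=ap^n+b$, the sum telescopes to a quadratic expression in $p^n$. This can be checked mechanically; in the sketch below the values $p=3$, $g_0=0$, $a=2$, $b=1$ are hypothetical sample data, not taken from any particular tower.

```python
from fractions import Fraction

def genus(p, g0, ell, n):
    # genus formula: 2 g_n - 2 g_0 = (p-1) * sum_{i=1}^n p^{i-1} * ell(i)
    total = sum(p**(i - 1) * ell(i) for i in range(1, n + 1))
    return g0 + Fraction((p - 1) * total, 2)

p, g0, a, b = 3, 0, 2, 1                 # hypothetical tower data
ell = lambda i: a * p**i + b             # assumed stable degree sequence
g = [genus(p, g0, ell, n) for n in range(8)]

# A sequence of the form A p^{2n} + B p^n + C is exactly a solution of the
# linear recurrence whose characteristic roots are p^2, p and 1:
stable = all(
    g[n + 3] - (1 + p + p * p) * g[n + 2]
    + (p + p * p + p**3) * g[n + 1] - p**3 * g[n] == 0
    for n in range(5)
)
```

Here $g_n = \tfrac{3}{4}\,9^n + \tfrac{1}{2}\,3^n - \tfrac{5}{4}$, so the recurrence with roots $p^2$, $p$, $1$ holds exactly, confirming the quadratic-in-$p^n$ shape of Conjecture \ref{Con1.0}.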
Again, this conjecture can be refined in terms of slopes. Write $$L(\chi_n, s) = \prod_{i=1}^{\ell(n)} (1-\beta_i(n)s) \in \mathbb{C}_p [s], \ \ 0\leq v_q(\beta_1(n)) \leq \cdots \leq v_q(\beta_{\ell(n)}(n)) \leq 1.$$ The slope sequence for $Q(C_n, s)$ is just equal to the slope sequence for $L(\chi_n, s)$, repeated $p^{n-1}(p-1)$ times.
Fix a rational number $\alpha\in [0, \infty )$, and let $\ell_{\alpha}(n)$ denote the multiplicity of $\alpha/p^n$ in the slope sequence of $L(\chi_n, s)$. That is,
$$\ell_{\alpha}(n) = \# \{ 1\leq i \leq \ell(n) | v_q(\beta_i(n))=\frac{\alpha}{p^n}\}.$$ It is clear that for every $n\geq 1$, we have $$\sum_{\alpha} \ell_{\alpha}(n) = \ell(n),$$ and for every $\alpha$, we have $$(p-1)\sum_{i=1}^n p^{i-1}\ell_{\alpha}(i) = d_{\alpha}(n) - d_{\alpha}(0). $$
\begin{Qn} For a fixed rational number $\alpha \in [0,\infty )$, how does the number $\ell_{\alpha}(n)$ vary as $n$ goes to infinity? \end{Qn}
In the case $\alpha = 0$, the situation is quite simple and clean. The following result follows quickly from the definition of $T$-adic L-functions introduced later.
\begin{Lem}\label{Lem1.1} For every $n\geq 1$, we have $\ell_0(n) =d_0(0)-1 + \sum_{x\in S} {{\rm deg}(x)}$. \end{Lem}
In particular, the number $\ell_0(n)$ for $n\geq 1$ is a constant independent of $n$. It follows that $$d_0(n) - d_0(0) = \ell_0(1)\sum_{i=1}^n (p-1)p^{i-1} = \frac{p^n-1}{p-1}(d_0(1)-d_0(0)).$$ Substituting the value of $\ell_0(1)$ from the lemma, we get an alternative formula for $d_0(n)$: $$d_0(n) -d_0(0) = (p^n-1)(d_0(0) -1 + \sum_{x\in S} {{\rm deg}(x)}).$$ This proves part (i) of Theorem \ref{Thm1.1}. It implies that the sequence $\{d_0(n)\}_{n\geq 0}$ is determined by the first two terms $d_0(0)$ and $d_0(1)$. Alternatively, it is determined by the first term $d_0(0)$ and the degree of the divisor $S$. For $\alpha>0$, the sequence $\{d_{\alpha}(n)\}_{n\geq 0}$ is more complicated. Similar to the zeta function case, we can ask
\begin{Qn}\label{Qn2.1}
\item{(1)}. For each fixed $\alpha \in [0,\infty )$, is the number $\ell_{\alpha}(n)$ constant for all sufficiently large $n$?
\item{(2)}. As $n$ goes to infinity, are the $q$-slopes $$\{ v_q(\beta_1(n)), \cdots, v_q(\beta_{\ell(n)}(n))\} \subset [0, 1] \cap \mathbb{Q} \subset [0, 1]$$ equi-distributed in the interval $[0, 1]$?
\item{(3)}. Is there a positive integer $n_0$ depending on the tower such that the re-scaled $q$-slopes $\{ p^nv_q(\beta_1(n)), \cdots, p^nv_q(\beta_{\ell(n)}(n))\}$ for all $n> n_0$ are determined explicitly by their values for $0\leq n \leq n_0$, using a finite number of arithmetic progressions? \end{Qn}
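Returning briefly to the slope-zero case, the derivation of $d_0(n)$ above can be verified mechanically: the layer-by-layer sum telescopes to the closed form $(p^n-1)\ell_0$, so $d_0(n)$ is linear in $p^n$ and determined by its first two values. In the sketch below, $p$, $d_0(0)$ and $\deg(S)$ are hypothetical sample values.

```python
p = 5
d0_0, degS = 3, 4                  # hypothetical p-rank of C_0 and deg of S
ell0 = d0_0 - 1 + degS             # Lemma: ell_0(n) = d_0(0) - 1 + deg(S), n >= 1
d0 = lambda n: d0_0 + (p**n - 1) * ell0   # closed form for d_0(n)

for n in range(6):
    # summing (p-1) p^{i-1} * ell_0 over the layers i = 1..n telescopes:
    telescoped = d0_0 + ell0 * sum((p - 1) * p**(i - 1) for i in range(1, n + 1))
    assert telescoped == d0(n)

# d_0(n) is determined by the first two terms d_0(0) and d_0(1), as claimed:
for n in range(6):
    assert d0(n) - d0_0 == (p**n - 1) // (p - 1) * (d0(1) - d0_0)
```

Both identities hold exactly in integer arithmetic, reflecting that $\sum_{i=1}^n (p-1)p^{i-1} = p^n-1$.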
Again, these questions are too general to have a positive answer in full generality. However, we conjecture that all of them have a positive answer for $\mathbb{Z}_p$-towers coming from algebraic geometry. To gain some feeling for what part (3) means, we give an example next.
\section{$\mathbb{Z}_p$-towers over the affine line} In this section, we explain that all the above questions for all slopes have a simple positive answer for many $\mathbb{Z}_p$-towers over the affine line, as first studied in \cite{DWX}, and more recently in \cite{KZ, Li}.
Fix an element $\beta$ of $W(\mathbb{F}_q)=\mathbb{Z}_q$ with trace $1$. By the classification in \cite{KW}, any $\mathbb{Z}_p$-tower over $C_0=\mathbb{P}^1$, totally ramified at the point $\infty$ and unramified on $\mathbb{A}^1 =\mathbb{P}^1 -\{ \infty\}$, can be uniquely constructed from a constant $c\in \mathbb{Z}_p$ and a primitive convergent power series $$f(x) = \sum_{(i,p)=1} c_i x^i \in \mathbb{Z}_q[[x]], \ c_i \in \mathbb{Z}_q, \ \lim_i c_i =0,$$ where $f(x)$ is called primitive if not all $c_i$ are divisible by $p$, that is, $f(x)$ is not divisible by $p$. The construction is explicitly given by the following equation $$C_{\infty}: [y_0^p, y_1^p, \cdots] - [y_0, y_1, \cdots] = c\beta + \sum_{(i,p)=1}c_i [x^i, 0, \cdots],$$ where both sides are Witt vectors. The constant $c$ has no contribution to all our questions, and thus we shall assume that $c=0$. The tower is then uniquely constructed from the primitive convergent power series $f(x)$.
Comparing the coordinates, one finds that $C_n$ is defined by a system of $n$ polynomial equations over $\mathbb{F}_q$ in $n+1$ variables $(y_0,\cdots, y_{n-1}, x)$. It is clear that $C_0 = \mathbb{P}^1$, that $C_1$ is the Artin-Schreier curve whose affine equation over $\mathbb{F}_q$ is given by $$y_0^p -y_0 = f(x),$$ and that $C_2$ is the curve above $C_1$ given by the additional equation (over $\mathbb{F}_q$) \[ y_1^p - y_1 + \frac{y_0^{p^2}-y_0^p - (y_0^p - y_0)^p}{p} = \frac{f^\sigma(x^p) - f(x)^p}{p}, \] where $ f^\sigma(x) = \sum_{(i,p)=1} \sigma(c_i) x^i$ and $\sigma$ is the Frobenius automorphism on $\mathbb{Z}_q$. Note that over $\mathbb{F}_q$, the power series $f(x)$ reduces to a polynomial.
The map $C_n \longrightarrow C_{n-1}$ is given by the projection $$(y_0, \cdots, y_{n-1}, x) \longrightarrow (y_0, \cdots, y_{n-2}, x).$$ Let $\zeta_{p^n}$ be a primitive $p^n$-th root of unity in $\mathbb{C}_p^*$. Set $t_n = \zeta_{p^n}-1$, which is a uniformizer in the local field $\mathbb{Q}_p(\zeta_{p^n})$. For all $n\geq 1$, it is clear that $\chi_n \equiv 1 \mod (t_n)$ and $$L(\chi_n, 1) \equiv Z(\mathbb{A}^1, 1) \equiv 1 \mod t_n.$$ We obtain \begin{Thm} Let $h_n$ denote the class number of $C_n$. Then, for all $n\geq 0$, we have $$h_n \equiv 1 \mod (p).$$ In particular, $d_0(n) = \ell_0(n) = \mu = \lambda = \nu =0$. \end{Thm} It would be interesting to explore possible improvements of the above congruence to a congruence modulo higher powers of $p$, that is, to understand the first few digits in the $p$-adic expansion of $h_n$.
For integer $n\geq 1$, the Artin conductor $a(\chi_n)$ of the character $\chi_n$ is calculated explicitly in \cite{KW}: $$a(\chi_n) = 1 + \max_{v_p(c_i)<n} \{ ip^{n-1-v_p(c_i)}\}.$$ It follows that the degree of $L(\chi_n, s)$ is given by the formula $$\ell(n) = -1 + p^{n-1} \max_{v_p(c_i)<n} \{ ip^{-v_p(c_i)}\}.$$ This tower is genus stable if and only if $\ell(n)$ is a linear polynomial in $p^n$ for large $n$. This is the case when $$d := \max_{(i,p)=1} \{ \frac{i}{p^{v_p(c_i)}}\}$$ exists and is a finite rational number, in which case, for all sufficiently large $n$, we have the stable degree formula $$\ell(n) = dp^{n-1} -1.$$ This case is called {\bf strongly genus stable}. There is a more complicated class of genus stable towers \cite{KW2} that we do not discuss here for simplicity.
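The degree formula depends only on the data $\{i : v_p(c_i)\}$ of $f$, and the stabilization $\ell(n)=dp^{n-1}-1$ kicks in as soon as $n$ exceeds the $p$-adic valuation of the coefficient achieving $d$. The sketch below uses a hypothetical example with $p=3$, $v_p(c_1)=0$ and $v_p(c_5)=1$ (e.g. $f(x)=x+3ux^5$ for a unit $u$), so that $d=\max(1,5/3)=5/3$:

```python
from fractions import Fraction

def ell(n, p, vals):
    # ell(n) = -1 + p^{n-1} * max over v_p(c_i) < n of i * p^{-v_p(c_i)},
    # written in integer arithmetic since v < n makes each term integral.
    return -1 + max(i * p**(n - 1 - v) for i, v in vals.items() if v < n)

p = 3
vals = {1: 0, 5: 1}                         # hypothetical {i : v_p(c_i)} data
d = max(Fraction(i, p**v) for i, v in vals.items())   # d = 5/3

first_degree = ell(1, p, vals)              # the i = 5 term is invisible at n = 1
stable = all(ell(n, p, vals) == d * p**(n - 1) - 1 for n in range(2, 8))
```

Here $\ell(1)=0$, while $\ell(n)=\tfrac{5}{3}p^{n-1}-1$ for all $n\geq 2$: the stable degree formula holds once every coefficient contributing to $d$ has become visible.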
\begin{Def} Assume that the tower is strongly genus stable as above. Let $n_0$ be the smallest positive integer $k$ such that $\ell(k)=p^{k-1}d-1$, and if
the $q$-slope sequence of $L(\chi_k, s)$ is denoted by $\{\alpha_1, \cdots, \alpha_{p^{k-1}d-1}\}$, then for every integer $n\geq k$, the $q$-slope sequence of $L(\chi_n, s)$ is given by \[ \bigcup_{i=0}^{p^{n-k}-1} \big\{\frac{i}{p^{n-k}}, \frac{\alpha_1+i}{p^{n-k}}, \dots, \frac{\alpha_{dp^{k-1}-1}+i}{p^{n-k}}\big\} - \{0\}. \] If no such positive integer $k$ exists, we define $n_0 =\infty$. \end{Def}
The finiteness of the number $n_0$ hence implies a striking stability property for the slope sequence of $L(\chi_n, s)$ as $n$ goes to infinity. It implies that the slopes of $L(\chi_n, s)$ normalized by the factor $p^n$ for all large $n$ are given by a finite number of arithmetic progressions. For this reason, the tower is called {\bf slope stable} if $n_0$ is finite. Note that if the tower is slope stable, then clearly the tower must be genus stable. It is tempting to conjecture that the converse is also true. Although we do not have a counter-example, we are a little cautious here and will just state it as a question.
\begin{Qn} Assume the tower is genus stable.
Is the tower slope stable? \end{Qn}
Note that we conjectured that the answer is positive for towers coming from algebraic geometry. An important example is the Igusa $\mathbb{Z}_p$-tower, which is genus stable, but the finiteness property of $n_0$ for the Igusa tower seems open. This might be related to the geometry of eigencurves in the framework of modular forms; see \cite{wan-xiao-zhang} \cite{liu-wan-xiao} for recent progress and \cite{BP} for a precise conjectural slope description in the case of regular primes $p$. Another related slope problem, in the setting of symmetric powers of Kloosterman sums, has recently been studied by Haessig \cite{Ha} and Fresan-Sabbah-Yu \cite{FSY}. We now give some examples of $\mathbb{Z}_p$-towers constructed using the primitive convergent power series $f$ for which the above question has a positive answer.
\begin{Def} \item{(1)}. The tower is called a {\bf polynomial tower} of degree $d$ if the primitive convergent power series $f(x)=\sum_{(i,p)=1} c_i x^i$ is a polynomial of degree $d$.
\item{(2)}. The tower is called a {\bf unit root tower} of degree $d$ if $f(x)$ is a polynomial of degree $d$ and furthermore all non-zero coefficients $c_i$ are roots of unity. \end{Def}
Clearly, a polynomial tower is strongly genus stable and its degree $d$ is not divisible by $p$.
\begin{Thm}[\cite{DWX}] For a unit root tower of degree $d$, the number $n_0$ is finite. Furthermore, we have the following explicit upper bound $$n_0 \leq 1 + \lceil \log_p(\frac{d}{8} \log_pq)\rceil.$$ In particular, if $p\geq \frac{d}{8} \log_pq$, then $n_0 \leq 2$. \end{Thm}
\begin{Cor} For a unit root tower of any degree, we have the following
\item{(1)}. The $q$-slopes of $L(\chi_n, s)$ (resp., $P(C_n, s)$) are equi-distributed in $[0, 1]$ as $n$ goes to infinity.
\item{(2)}. For every rational number $\alpha \in [0, \infty )$, the sequence $\ell_{\alpha}(n)$ is a constant independent of $n \geq n_0$.
\item{(3)}. The ramification degree of $P(C_n, s)$ over $\mathbb{Q}_p$ is equal to $cp^n$ for some positive constant $c$ for all $n>n_0$.
\end{Cor}
The above explicit bound for $n_0$ can be improved in various special cases.
\begin{Eg} For a unit root tower of degree $d$ satisfying $p\equiv 1 \mod d$, we have $n_0=1$, $\alpha_i = i/d$ for $1\leq i \leq d-1$ and hence the slope sequence of $L(\chi_n, s)$ for all $n\geq 1$ is given by $$\{ \frac{1}{dp^{n-1}}, \frac{2}{dp^{n-1}}, \cdots, \frac{dp^{n-1}-1}{dp^{n-1}}\}. $$ This was first proved in \cite{LW}. The $T$-adic L-function introduced there plays a crucial role in the proof of the above more general theorem. \end{Eg}
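To make this concrete, here is a small worked instance (our own illustration, taking $p=5$ and $d=2$, which satisfy $p\equiv 1 \mod d$): for $n=1$ the slope sequence is $\{\frac{1}{2}\}$, while for $n=2$ the formula above gives the $dp-1=9$ slopes $$\Big\{ \frac{1}{10}, \frac{2}{10}, \cdots, \frac{9}{10}\Big\} = \bigcup_{i=0}^{4} \Big\{\frac{i}{5},\ \frac{\frac{1}{2}+i}{5}\Big\} - \{0\},$$ in agreement with the union formula in the definition of slope stability (with $k=n_0=1$).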
\begin{Eg} Let $f(x) = x^d +ax \in \mathbb{Z}_q[x]$ with $a^{q-1} =1$, $d$ not divisible by $p$ and $$p > \max \{ \frac{d^2(d-1)}{2}, 1 + \frac{d(d-1)}{4}\log_pq\}.$$ It is proved in \cite{OY} that $n_0=1$ and $$\alpha_i = \frac{i}{d} + \frac{d-1}{d(p-1)} (ip - [\frac{ip}{d}]d -i).$$ \end{Eg}
\begin{Eg} If $p> 3d$, one can show that $n_0=1$ for a generic $\bar{f}$ over $\bar{\mathbb{F}}_p$, see \cite{LLN}. \end{Eg}
The slope stable property is proved to be true for any polynomial tower in \cite{Li} and more generally for a much larger class of strongly genus stable towers over the affine line in [KZ]. The $T$-adic L-function introduced in next section played an essential role in [DWX] and [Li]. A novel feature of [KZ] is the introduction of the $\pi$-adic L-function in infinitely many variables which refines and interpolates the $T$-adic L-function. It would be very interesting to prove the slope stable property for all strongly genus stable towers, or all genus stable towers or to find a counter-example.
\section{$T$-adic L-functions}
We now return to the situation of an arbitrary $\mathbb{Z}_p$-tower and define the $T$-adic L-function first introduced in [LW]. Instead of just finite characters $\chi_n: G_n \longrightarrow \mathbb{C}_p^*$, we will also consider all continuous $p$-adic characters $\chi: G_{\infty} \longrightarrow \mathbb{C}_p^*$, not necessarily of finite order. The isomorphism $$\rho: G_{\infty} \cong \mathbb{Z}_p$$ is crucial for us. The $p$-adic valued Frobenius function it induces
$$F_{\rho}: |U| \longrightarrow \mathbb{Z}_p, \ F_{\rho}(x) = \rho({\rm Frob}_x)$$ determines the $\mathbb{Z}_p$-tower by class field theory. Any condition we would impose on the tower is a condition on this Frobenius function $F_{\rho}$.
Consider the universal continuous $T$-adic character $\mathbb{Z}_p \longrightarrow \mathbb{Z}_p[[T]]^*$ determined by sending $1$ to $1+T$. Composing this universal $T$-adic character of $\mathbb{Z}_p$ with the isomorphism $\rho$, we get the universal $T$-adic character of $G_{\infty}$: \[ \rho_T: G_{\infty} \longrightarrow \mathbb{Z}_p \longrightarrow \text{GL}_1(\mathbb{Z}_p[[T]]) = \mathbb{Z}_p[[T]]^*. \] Let $D_p(1)$ denote the open unit disk in $\mathbb{C}_p$. For any element $t \in D_p(1)$, we have a natural evaluation map $\mathbb{Z}_p[[T]]^* \rightarrow \mathbb{C}_p^*$ sending $T$ to $t$. Composing all these maps, we get, for fixed $t\in D_p(1)$, a continuous character \begin{equation} \label{varying character} \rho_t: G_{\infty} \longrightarrow \mathbb{C}_p^*. \end{equation} The open unit disk $D_p(1)$ parametrizes all continuous $\mathbb{C}_p$-valued characters $\chi$ of $G_{\infty}$ via the relation $t = \chi(1)-1$. The L-function of $\rho_t$ is defined in the usual way:
$$L({\rho_t}, s) = \prod_{x\in |U|} \frac{1}{1-\rho_t({\rm Frob}_x)s^{{\rm deg}(x)}}\in 1 +s\mathbb{C}_p[[s]].$$
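For the reader's convenience, we spell out the standard expansion behind these characters: for $a\in \mathbb{Z}_p$, $$(1+T)^{a} := \sum_{k\geq 0}\binom{a}{k}T^{k} \in \mathbb{Z}_p[[T]], \qquad \binom{a}{k}=\frac{a(a-1)\cdots (a-k+1)}{k!}\in \mathbb{Z}_p,$$ so the universal character sends $a$ to $(1+T)^a$, and its evaluation at any $T=t\in D_p(1)$ converges $p$-adically since $|t|_p<1$.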
In the case that $\chi=\chi_n$ is a finite $p$-adic character of $G_{\infty}$ of order $p^n$, then $\chi(1)$ is a primitive $p^n$-th root of unity and we have
$$t=t_n:= \chi_n(1)-1, \ |t_n|_p = p^{-\frac{1}{p^{n-1}(p-1)}}, \ L(\chi_n, s) = L({\rho}_{t_n}, s). $$ Elements of the form $t_n =\chi_n(1)-1$ for $n\geq 0$ are called the classical points in $D_p(1)$. As $n$ goes to infinity, $t_n$ approaches the boundary of the disk $D_p(1)$. Thus, to understand the behavior of $L(\chi_n, s)$ as $n$ grows, it is enough to understand the L-function $L(\rho_t, s)$ for all $t$ near the boundary of $D_p(1)$. More precisely, we should understand the following universal L-function.
\begin{Def} The $T$-adic L-function of the tower is the L-function of the $T$-adic character $\rho_T$:
$$L_{\rho}(T,s):= L(\rho_T, s) = \prod_{x\in |U|} \frac{1}{1-(1+T)^{\rho({\rm{Frob}}_x)}s^{{\rm{deg}}(x)}}\in 1 +s\mathbb{Z}_p[[T]][[s]].$$ \end{Def} This is a $p$-adic power series in the two variables $T$ and $s$. For $t\in D_p(1)$, we have
$$L(\rho_{t}, s) = L_{\rho}(T,s)|_{T=t} =L_{\rho}(t, s).$$ As noted above, the specialization of $L_{\rho}(T, s)$ at every classical point $T=t_n$ is a rational function $L_{\rho}(t_n, s)$ in $s$, in fact, a polynomial in $s$ of degree $\ell(n)$ for $n\geq 1$. In this case, the character $\rho_{t_n}$ is of finite order. If $t\in D_p(1)$ is not a classical point, i.e., $\rho_t$ is of infinite order, we do NOT know a single example for which $L_{\rho}(t, s)$ is rational.
\begin{Qn}Is there a non-classical $t\in D_p(1)$ such that $L_{\rho}(t, s)$ is rational? \end{Qn}
To see the significance of the $T$-adic L-function, we now use the definition of $L_{\rho}(T, s)$ to prove Theorem \ref{Thm1.1}. Since the character $\rho_T$ is trivial modulo $T$, the L-function $L_{\rho}(T,s)$ modulo $T$ is the same as the zeta function $Z(U,s)$ of $U$. This gives the congruence $$L_{\rho}(T, s) \equiv Z(C_0, s) \prod_{x\in S} (1- s^{{\rm deg}(x)}) = \frac{P(C_0, s)}{(1-s)(1-qs)} \prod_{x\in S} (1- s^{{\rm deg}(x)}) \mod T .$$ Replacing $T$ by $t_n$ for $n\geq 1$, we deduce $$L_{\rho}(t_n, s) \equiv \frac{P(C_0, s)}{(1-s)} \prod_{x\in S} (1- s^{{\rm deg}(x)}) \mod t_n.$$ Comparing the number of reciprocal roots of slope zero, one finds that for $n\geq 1$, $$\ell_0(n) = d_0(0) -1 +\sum_{x\in S} {{\rm deg}(x)}.$$ This proves Lemma \ref{Lem1.1} and hence part (i) of Theorem \ref{Thm1.1}.
To get the stable formula for $v_p(h_n)$, we need to specialize $s$ at $1$. Write $$L_{\rho}(T,s) =\sum_{k=0}^{\infty} L_k(T) s^k, \ L_k(T) \in \mathbb{Z}_p[[T]].$$ Since $L_{\rho}(t_n, s)$ is a polynomial of degree $\ell(n)$, we have $L_k(t_n)=0$ for all $k>\ell(n)$. The $p$-adic Weierstrass preparation theorem implies that $$L_k(T) = \frac{(1+T)^{p^n}-1}{T} u_k(T), \ u_k(T)\in \mathbb{Z}_p[[T]]$$ for all $k> \ell(n)$. It follows that the series $L_{\rho}(T,s)$ is $(p, T)$-adically convergent for $s\in \mathbb{Z}_p[[T]]$. Taking $s=1$, noting that $L_{\rho}(T, 1) \not =0$ as its specialization at classical points $t_n$ is non-zero, we can write $$L_{\rho}(T, 1) = p^{\mu}(T^{\lambda}+pa_1T^{{\lambda} -1} + \cdots +pa_{\lambda}) u(T),$$ where $a_i\in \mathbb{Z}_p$ and $u(T)$ is a unit in $\mathbb{Z}_p[[T]]$. It follows that $$v_p(h_m) -v_p(h_0)= \sum_{n=1}^m (p-1)p^{n-1}v_p(L_{\rho}(t_n, 1)) .$$ Since $v_p(u(t_n))=0$, and for $p^{n-1}(p-1) > \lambda$, we have $$v_p(t_n^{\lambda}+pa_1t_n^{{\lambda} -1} + \cdots +pa_{\lambda}) = \frac{ \lambda}{p^{n-1}(p-1)},$$ we conclude the stable formula that for $m$ sufficiently large, $$v_p(h_m) =\mu p^m + \lambda m + \nu$$ for some constant $\nu$. Part (ii) of Theorem \ref{Thm1.1} is proved.
Finally, note that for $n\geq 1$, we have $$\frac{h_n}{h_{n-1}} = {\rm Norm}_{\mathbb{Q}_p(t_n)/\mathbb{Q}_p}(L_{\rho}(t_n, 1)).$$ The minimal polynomial of $t_n$ is the $p$-Eisenstein polynomial $$ \frac{(1+T)^{p^n}-1}{(1+T)^{p^{n-1}}-1} = T^{p^{n-1}(p-1)} + \cdots + p.$$ Thus, for $n\geq 2$ (or $p>2$), we have $${\rm Norm}_{\mathbb{Q}_p(t_n)/\mathbb{Q}_p}(t_n) = p, \ {\rm Norm}_{\mathbb{Q}_p(t_n)/\mathbb{Q}_p}(u(t_n)) \equiv 1 \mod p, $$ where $u(T) $ is any unit in $\mathbb{Z}_p[[T]]$. It follows that for sufficiently large $n$, $$\frac{h_n/h_{n-1}}{p^{v_p(h_n/h_{n-1})}} \equiv {1\over p^{\lambda}} {\rm Norm}_{\mathbb{Q}_p(t_n)/\mathbb{Q}_p}((t_n^{\lambda} +pa_1t_n^{{\lambda} -1} + \cdots +pa_{\lambda})u(t_n))\equiv 1 \mod p.$$ Part (iii) of Theorem \ref{Thm1.1} is proved.
\section{$T$-adic Meromorphic continuation}
The power series ring $\mathbb{Z}_p[[T]]$ has three obvious topologies: the $p$-adic topology, the $T$-adic topology and the $(p, T)$-adic topology. All these topologies are useful to us. In this section, we will focus on the $T$-adic topology, which will be our starting point. Viewing $L_{\rho}(T, s)$ as a power series in $s$ with coefficients in the complete discrete valuation field $\mathbb{Q}_p((T))$ with uniformizer $T$, we are interested in the $T$-adic meromorphic continuation.
Clearly, $L_{\rho}(T, s)$ is $T$-adic analytic in the $T$-adic open unit disk $|s|_T <1$. One can prove
\begin{Prop} There is a decomposition $$L_{\rho}(T, s) =\frac{D_1(T,s)}{D_2(T,s)},$$
where $D_i(T,s)\in 1+ s\mathbb{Z}_p[[T]][[s]]$ are $T$-adic analytic on $|s|_T \leq 1$ for $1\leq i\leq 2$. Furthermore, $D_2(T,s) \in 1 + qs\mathbb{Z}_p[[T]][[s]]$. \end{Prop}
\begin{Cor}
\item{(i)}. $L_{\rho}(T,s)$ is $T$-adic meromorphic on the closed $T$-adic unit disk $|s|_T \leq 1$, i.e., well defined for $s\in \mathbb{Q}_p[[T]]$.
\item{(ii).} $L_{\rho}(T,s)$ is $(p, T)$-adic analytic on the closed $(p, T)$-adic unit disk $|s|_{(p, T)} \leq 1$, i.e., convergent for $s \in \mathbb{Z}_p[[T]]$. \end{Cor}
Part (ii) of the corollary was already proved in the previous section. Crew \cite{crew} further showed that the $(p, T)$-adic slope zero part of $L_{\rho}(T, s)$ has a cohomological interpretation in terms of $p$-adic \'etale cohomology. This is the main conjecture in function fields. Note that part (i) of the corollary is stronger. It cannot be deduced from the results in \cite{crew}.
To understand higher slopes of the $\mathbb{Z}_p$-tower, we need to study the analytic properties of the two variable function $L_{\rho}(T, s)$
beyond the closed unit disk $|s|_T \leq 1$. This leads to the following two questions.
\begin{Qn}For which towers is the L-function $L_{\rho}(t, s)$ $p$-adic meromorphic in $|s|_p < \infty $ for all $t\in D_p(1)$? \end{Qn}
\begin{Qn}For which towers is the $T$-adic L-function $L_{\rho}(T, s)$ {\bf integrally} $T$-adic meromorphic in $|s|_T < \infty $, in the sense that $$L_{\rho}(T, s) =\frac{D_1(T,s)}{D_2(T,s)},$$
where $D_i(T,s)\in 1+ s\mathbb{Z}_p[[T]][[s]]$ are $T$-adic analytic in $|s|_T < \infty$ for $1\leq i\leq 2$?
\end{Qn}
The second question is stronger than the first one, as the integrally $T$-adic meromorphic continuation of $L_{\rho}(T,s)$ implies the $p$-adic meromorphic continuation of $L_{\rho}(t, s)$ for all $t\in D_p(1)$. There are examples where $L_{\rho}(t,s)$ is not $p$-adic meromorphic, see \cite{wan96}. Thus, some conditions are necessary to have a positive answer.
Composing with the map $\mathbb{Z}_p \longrightarrow \mathbb{Z}_p^*$ sending $1$ to $1+{\bf p}$, the $p$-adic valued Frobenius function $F_{\rho}$ defines a rank one $p$-adic representation of the Galois group $G_U$, equivalently a rank one unit root $\sigma$-module $M_{{\rho}}$ on $U$, see \cite{wan}. It is clear that $F_{\rho}$ and $M_{\rho}$ determine each other. Using the Monsky trace formula \cite{Mo}, one can prove
\begin{Thm}\label{Thm2.1} Assume that $M_{\rho}$ is $\infty\log$-convergent on $U$. Then $L_{\rho}(T, s)$ is integrally $T$-adic
meromorphic in $|s|_T < \infty$. It follows that $L_{\rho}(t, s)$ is $p$-adic meromorphic in
$|s|_p < \infty $ for all $t\in D_p(1).$ \end{Thm}
This result is not general enough for applications, although both the unit root tower and the polynomial tower do satisfy the condition of the above theorem. More generally, the method in \cite{wan} can be used to prove the following generalization, due to Coleman, of the rank one case of Dwork's unit root conjecture. In fact, this was already worked out in Grosse-Kl\"onne \cite{EG} in the case that $M_{\rho}$ comes from a pure slope piece of some finite rank overconvergent $\sigma$-module on $U$.
\begin{Thm}\label{Thm2.2} Assume that $M_{\rho}$ comes from a pure slope piece of some finite rank $\infty\log$-convergent $\sigma$-module on $U$.
Then $L_{\rho}(T, s)$ is integrally $T$-adic meromorphic in $|s|_T < \infty$. It follows that $L_{\rho}(t, s)$ is $p$-adic meromorphic in
$|s|_p < \infty $ for all $t\in D_p(1)$. \end{Thm}
In all natural applications arising from higher dimensional arithmetic geometry, the rank one $\sigma$-module $M_{\rho}$ satisfies the assumption of Theorem \ref{Thm2.2}, but usually not the assumption of Theorem \ref{Thm2.1}. These results make it possible to talk about the zeros and poles of the L-function $L_{\rho}(T, s)$ if $\rho$ comes from algebraic geometry.
We can now state the L-function version of our main conjecture, which answers Question \ref{Qn2.1} and is equivalent to Conjecture \ref{Con1.2} (the zeta function version of our main conjecture).
\begin{Con} Assume that the $\mathbb{Z}_p$-tower comes from algebraic geometry.
\item{(1)}. The L-degree is stable. That is, there are constants $a, b$ with $a>0$ such that for sufficiently large $n$, we have $\ell(n)=ap^n +b$.
\item{(2)}. For each fixed rational number $\alpha \in [0,\infty )$, the number $\ell_{\alpha}(n)$ is a constant for all sufficiently large $n$.
\item{(3)}. The $q$-slopes of $L(\chi_n, s)$ are equi-distributed in $[0, 1]$ as $n$ goes to infinity.
\item{(4)}. The slopes are stable. That is, there is a positive integer $n_0$ depending on the tower such that the re-scaled $q$-slopes $\{ p^n v_q(\alpha_1(n)), \cdots, p^n v_q(\alpha_{2g_n}(n))\}$ for all $n> n_0$ are determined explicitly by their values for $0\leq n \leq n_0$, using a finite number of arithmetic progressions.
\end{Con}
Part (1) is just Conjecture \ref{Con2.0} stated previously, which is equivalent to Conjecture \ref{Con1.0} on the genus stable property. The remarkable work of Kosters-Zhu suggests that part (1) almost implies part (3) for any $\mathbb{Z}_p$-tower. In fact, they have proved this implication for all $\mathbb{Z}_p$-towers over the affine line. As seen above, this main conjecture is completely proven for many $\mathbb{Z}_p$-towers over the affine line. An important example to consider is the Igusa $\mathbb{Z}_p$-tower over $\mathbb{F}_p$ for which the full conjecture seems still open. This example is important because of its possible connection to arithmetic of modular forms and Galois representations. In fact, part of our conjecture was motivated by Coleman-Mazur's question \cite{coleman-mazur} on the geometry of the eigencurve near the boundary of the weight disk, see the introduction in \cite{liu-wan-xiao} for additional information. Loosely speaking, the eigencurve (or its spectral curve) is the ``zero locus" of the two variable L-function $L_{\rho}(T,s)$. The eigencurve was introduced to extend Hida's theory from slope zero to all higher slopes. In one aspect, our general conjecture can be viewed as an attempt to extend geometric Iwasawa theory from slope zero to all higher slopes.
In these notes, for simplicity we only considered $\mathbb{Z}_p$-towers of curves, which is already sufficiently interesting. One can also consider various generalizations in a number of directions, for example, replacing $\mathbb{Z}_p$ by a more general compact $p$-adic Lie group \cite{W19}, or replacing curves by higher dimensional varieties. For example, the unit root tower with higher rank Galois group $\mathbb{Z}_p^r$ is considered in \cite{RWXY}, where some new difficulty already arises. The genus growth behavior of non-abelian $p$-adic towers is considered in Kramer-Miller \cite{K1}.
\end{document}
\begin{document}
\title{The Hopf algebra of Fliess operators and its dual prelie algebra }
ABSTRACT. We study the Hopf algebra $H$ of Fliess operators coming from Control Theory in the one-dimensional case. We prove that it admits a connected gradation with finite-dimensional homogeneous components. Dually, the vector space $\mathbb{R}\langle x_0,x_1\rangle$ is both a prelie algebra for the prelie product dual to the coproduct of $H$, and an associative, commutative algebra for the shuffle product. These two structures admit a compatibility which makes $\mathbb{R}\langle x_0,x_1\rangle$ a Com-Prelie algebra. We give a presentation of this object as a Com-Prelie algebra and as a prelie algebra. \\
KEYWORDS. Fliess operators; prelie algebras; Hopf algebras.\\
AMS CLASSIFICATION. 16T05, 17B60, 93B25, 05C05.
\tableofcontents
\section*{Introduction}
Right prelie algebras, or shortly prelie algebras \cite{Gerstenhaber,Chapoton}, are vector spaces with a bilinear product $\bullet$ satisfying the following axiom: $$(x\bullet y)\bullet z-x\bullet (y\bullet z)=(x\bullet z)\bullet y-x\bullet (z\bullet y).$$ Consequently, the antisymmetrization of $\bullet$ is a Lie bracket. These objects are also called right-symmetric algebras or Vinberg algebra \cite{Matsushima,Vinberg}. If $A$ is a prelie algebra, the symmetric algebra $S(A)$ inherits a product $\star$ making it a Hopf algebra, isomorphic to the enveloping algebra of the Lie algebra $A$ \cite{Oudom1,Oudom2}. Whenever it is possible, we can consider the dual Hopf algebra $S(A)^*$ and its group of characters $G$, which is the exponentiation, in some sense, of the Lie algebra $A$.\\
We here consider an inverse construction, departing from a group used in Control Theory, namely the group of Fliess operators \cite{Ferfera,Gray,Gray2}; this group is used to study the feedback product. We limit ourselves here to the one-dimensional case. This group is the set $\mathbb{R}\langle\langle x_0,x_1\rangle\rangle$ of noncommutative formal series in two indeterminates, with a certain product generalizing the composition of formal series (definition \ref{1}). The Hopf algebra $H$ of coordinates of this group is described in \cite{Gray}, where it is also proved that it is graded by the length of words; note that this gradation is not connected and not finite-dimensional. We first give a way to describe the composition in the group $\mathbb{R}\langle\langle x_0,x_1\rangle\rangle$ and the coproduct of $H$ by induction on the length of words (lemma \ref{2} and proposition \ref{3}). We prove that $H$ admits a second gradation, which is connected; the dimensions of this gradation are given by the Fibonacci sequence (proposition \ref{8}). As the product of $\mathbb{R}\langle\langle x_0,x_1\rangle\rangle$ is left-linear, $H$ is a commutative, right-sided combinatorial Hopf algebra \cite{Loday}, so, dually, $\mathbb{R}\langle x_0,x_1\rangle$ inherits a prelie product $\bullet$, which is inductively defined in proposition \ref{11}. We prove that the words $x_1^n$, $n\geq 0$, form a minimal subset of generators of this prelie algebra (theorem \ref{12}).\\
The prelie algebra $\mathbb{R}\langle x_0,x_1\rangle$ has also an associative, commutative product, namely the shuffle product $\, \shuffl \,$ \cite{Reutenauer}. We prove that the following axiom is satisfied (proposition \ref{14}): $$(x\, \shuffl \, y)\bullet z=(x\bullet z)\, \shuffl \, y+x\, \shuffl \, (y\bullet z).$$ So $\mathbb{R}\langle x_0,x_1\rangle$ is a Com-Prelie algebra \cite{Mansuy} (definition \ref{15}). We give a presentation of this Com-Prelie algebra in theorem \ref{27}. We use for this a description of free Com-Prelie algebras in terms of partitioned trees (definition \ref{17}), which generalizes the construction of prelie algebras in terms of rooted trees in \cite{Chapoton}. We deduce a presentation of $\mathbb{R}\langle x_0,x_1\rangle$ as a prelie algebra in theorem \ref{30}. This presentation induces a new basis of $\mathbb{R}\langle x_0,x_1\rangle$ in terms of words with letters in $\mathbb{N}^*$, see corollary \ref{31}. The prelie product of two elements of this basis uses a dendriform structure \cite{Eilenberg,Loday2} on the algebra of words with letters in $\mathbb{N}^*$ (theorem \ref{34}). The study of this dendriform structure is postponed to the appendix, as well as the enumeration of partitioned trees; we also prove that free Com-Prelie algebras are free as prelie algebras, using Livernet's rigidity theorem \cite{Livernet}.\\
{\bf Acknowledgment.} The research leading to these results was partially supported by the French National Research Agency under the reference ANR-12-BS01-0017.\\
{\bf Notation.} We denote by $\mathbb{K}$ a commutative field of characteristic zero. All the objects (algebra, coalgebras, prelie algebras$\ldots$) in this text will be taken over $\mathbb{K}$.
\section{Construction of the Hopf algebra}
\subsection{Definition of the composition}
Let us consider an alphabet of two letters $x_0$ and $x_1$. We denote by $\K\langle\langle x_0,x_1\rangle\rangle$ the completion of the free algebra generated by this alphabet, that is to say the set of noncommutative formal series in $x_0$ and $x_1$. Note that $\K\langle\langle x_0,x_1\rangle\rangle$ is an algebra for the concatenation product and for the shuffle product, which we denote by $\, \shuffl \,$. \\
{\bf Examples.} If $a,b,c,d \in \{x_0,x_1\}$: \begin{align*} abc\, \shuffl \, d&=abcd+abdc+adbc+dabc,\\ ab\, \shuffl \, cd&=abcd+acbd+cabd+acdb+cadb+cdab,\\ a\, \shuffl \, bcd&=abcd+bacd+bcad+bcda. \end{align*}
The unit for both these products is the empty word, which we denote by $\emptyset$. The algebra $\K\langle\langle x_0,x_1\rangle\rangle$ is given its usual ultrametric topology.
\begin{defi}\label{1}\cite{Gray}. \begin{enumerate} \item For any $d\in \K\langle\langle x_0,x_1\rangle\rangle$, we define a continuous algebra map $\varphi_d$ from $\K\langle\langle x_0,x_1\rangle\rangle$ to $End(\K\langle\langle x_0,x_1\rangle\rangle)$ in the following way: for all $X \in \K\langle\langle x_0,x_1\rangle\rangle$, $$\varphi_d(x_0)(X)=x_0X,\: \varphi_d(x_1)(X)=x_1X+x_0(d\, \shuffl \, X).$$ \item We define a composition $\circ$ on $\K\langle\langle x_0,x_1\rangle\rangle$ in the following way: for all $c,d \in \K\langle\langle x_0,x_1\rangle\rangle$, $c \circ d=\varphi_d(c)(\emptyset)+d$. \end{enumerate}\end{defi}
It is proved in \cite{Gray} that this composition is associative. \\
{\bf Notation}. For all $c,d\in \K\langle\langle x_0,x_1\rangle\rangle$, we put $c \tilde{\circ} d=c\circ d-d=\varphi_d(c)(\emptyset)$. \\
{\bf Remark.} If $c_1,c_2,d \in \K\langle\langle x_0,x_1\rangle\rangle$, $\lambda \in \mathbb{K}$: $$(c_1+\lambda c_2)\tilde{\circ} d=\varphi_d(c_1+\lambda c_2)(\emptyset)=(\varphi_d(c_1)+\lambda \varphi_d(c_2))(\emptyset) =\varphi_d(c_1)(\emptyset)+\lambda\varphi_d(c_2)(\emptyset)=c_1\tilde{\circ} d+\lambda c_2\tilde{\circ} d.$$ So the composition $\tilde{\circ}$ is linear on the left. As $\varphi_d$ is continuous, the map $c\longrightarrow c\tilde{\circ} d$ is continuous for any $d\in \K\langle\langle x_0,x_1\rangle\rangle$. Hence, it is enough to know how to compute $c \tilde{\circ} d$ for any word $c$, which is made possible by the next lemma, using an induction on the length:
\begin{lemma}\label{2} For any word $c$, for any $d \in \K\langle\langle x_0,x_1\rangle\rangle$: \begin{enumerate} \item $\emptyset \tilde{\circ} d=\emptyset$. \item $(x_0c)\tilde{\circ} d=x_0 (c \tilde{\circ} d)$. \item $(x_1c)\tilde{\circ} d=x_1(c \tilde{\circ} d)+x_0(d \, \shuffl \,(c \tilde{\circ} d))$. \end{enumerate}\end{lemma}
\begin{proof} 1. $\emptyset \tilde{\circ} d=\varphi_d(\emptyset)(\emptyset)=Id(\emptyset)=\emptyset$. \\
2. $(x_0c)\tilde{\circ} d=\varphi_d(x_0c)(\emptyset)=\varphi_d(x_0)\circ \varphi_d(c)(\emptyset)= \varphi_d(x_0)(c \tilde{\circ} d)=x_0(c \tilde{\circ} d)$.\\
3. $(x_1c)\tilde{\circ} d=\varphi_d(x_1c)(\emptyset)=\varphi_d(x_1)\circ \varphi_d(c)(\emptyset)= \varphi_d(x_1)(c \tilde{\circ} d)=x_1(c \tilde{\circ} d)+x_0 (d \, \shuffl \, (c \tilde{\circ} d))$. \end{proof}
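As a short illustration of the recursion (a computation added here, following the lemma step by step): $$x_1\tilde{\circ} d=x_1(\emptyset \tilde{\circ} d)+x_0(d\, \shuffl \, (\emptyset\tilde{\circ} d))=x_1+x_0d, \qquad (x_0x_1)\tilde{\circ} d=x_0(x_1\tilde{\circ} d)=x_0x_1+x_0x_0d,$$ so, for example, $x_1\circ d=x_1\tilde{\circ} d+d=x_1+x_0d+d$.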
\subsection{Dual Hopf algebra}
We here give a recursive description of the Hopf algebra of the coordinates of the group $\K\langle\langle x_0,x_1\rangle\rangle$ of \cite{Gray}. \\
For any word $c$, let us consider the map $X_c \in \K\langle\langle x_0,x_1\rangle\rangle^*$, such that for any $d \in \K\langle\langle x_0,x_1\rangle\rangle$, $X_c(d)$ is the coefficient of $c$ in $d$. We denote by $V$ the subspace of $\K\langle\langle x_0,x_1\rangle\rangle^*$ generated by these maps. Let $H=S(V)$, or equivalently the free associative, commutative algebra generated by the $X_c$'s. The elements of $H$ are seen as polynomial functions on $\K\langle\langle x_0,x_1\rangle\rangle$; the elements of $H\otimes H$ are seen as polynomial functions on $\K\langle\langle x_0,x_1\rangle\rangle \times \K\langle\langle x_0,x_1\rangle\rangle$. Then $H$ is given a multiplicative coproduct defined in the following way: for any word $c$, for any $f,g \in \K\langle\langle x_0,x_1\rangle\rangle$, $$\Delta(X_c)(f, g)=X_c(f\circ g).$$ As $\circ$ is associative, $\Delta$ is coassociative, so $H$ is a bialgebra. \\
{\bf Notations.}\begin{enumerate} \item The space of words is a commutative algebra for the shuffle product $\, \shuffl \,$. Dually, the space $V$ inherits a coassociative, cocommutative coproduct, denoted by $\Delta_{\, \shuffl \,}$. For example, if $a,b,c \in \{x_0,x_1\}$: \begin{align*} \Delta_{\, \shuffl \,}(X_\emptyset)&=X_\emptyset\otimes X_\emptyset,\\ \Delta_{\, \shuffl \,}(X_a)&=X_a\otimes X_\emptyset+X_\emptyset\otimes X_a,\\ \Delta_{\, \shuffl \,}(X_{ab})&=X_{ab}\otimes X_\emptyset+X_a\otimes X_b+X_b \otimes X_a+X_\emptyset\otimes X_{ab},\\ \Delta_{\, \shuffl \,}(X_{abc})&=X_{abc}\otimes X_\emptyset+X_a\otimes X_{bc}+X_b\otimes X_{ac}+X_c\otimes X_{ab}\\ &+X_{ab}\otimes X_c+X_{ac}\otimes X_b+X_{bc}\otimes X_a+X_\emptyset\otimes X_{abc}. \end{align*} \item We define two linear endomorphisms $\theta_0,\theta_1$ of $V$ by $\theta_i(X_c)=X_{x_ic}$ for any word $c$. \end{enumerate}
The following proposition allows one to compute $\Delta(X_c)$ for any word $c$ by induction on the length.
\begin{prop}\label{3} For all $x \in V$, we put $\tilde{\Delta}(x)=\Delta(x)-1\otimes x$. \begin{enumerate} \item $\tilde{\Delta}(X_\emptyset)=X_\emptyset \otimes 1$. \item $\tilde{\Delta} \circ \theta_0=(\theta_0\otimes Id) \circ \tilde{\Delta}+(\theta_1 \otimes m)\circ (\tilde{\Delta} \otimes Id) \circ \Delta_{\, \shuffl \,}$. \item $\tilde{\Delta} \circ \theta_1=(\theta_1\otimes Id)\circ \tilde{\Delta}$. \end{enumerate}\end{prop}
\begin{proof} For any word $c$, for any $f,g \in \K\langle\langle x_0,x_1\rangle\rangle$: $$\tilde{\Delta}(X_c)(f,g)=\Delta(X_c)(f,g)-(1\otimes X_c)(f,g)=X_c(f\circ g)-X_c(g)=X_c(f\circ g-g)=X_c(f\tilde{\circ} g).$$ As $\tilde{\circ}$ is linear on the left, $\tilde{\Delta}(X_c) \in V \otimes H$, so formulas in points 2 and 3 make sense. \\
Let $f \in \K\langle\langle x_0,x_1\rangle\rangle$. It can be uniquely written as $f=x_0f_0+x_1f_1+\lambda \emptyset$, with $f_0,f_1 \in \K\langle\langle x_0,x_1\rangle\rangle$, $\lambda \in \mathbb{K}$. For all $g\in \K\langle\langle x_0,x_1\rangle\rangle$: $$f\tilde{\circ} g=(x_0f_0)\tilde{\circ} g+(x_1f_1)\tilde{\circ} g+\lambda \emptyset\tilde{\circ} g=x_0 (f_0\tilde{\circ} g+g \, \shuffl \, (f_1\tilde{\circ} g))+x_1(f_1 \tilde{\circ} g)+\lambda \emptyset.$$
1. We obtain:
$$\tilde{\Delta}(X_\emptyset)(f,g)=X_\emptyset(x_0 (f_0\tilde{\circ} g+g \, \shuffl \, (f_1\tilde{\circ} g))+x_1(f_1 \tilde{\circ} g)+\lambda \emptyset) =0+0+\lambda =(X_\emptyset \otimes 1)(f,g),$$ so $\Delta(X_\emptyset)=X_\emptyset \otimes 1$.\\
2. Let $c$ be a word. \begin{align*} \tilde{\Delta} \circ \theta_0(X_c)(f,g)&=\tilde{\Delta}(X_{x_0c})(f,g)\\ &=X_{x_0c}(x_0 (f_0\tilde{\circ} g+g \, \shuffl \, (f_1\tilde{\circ} g))+x_1(f_1 \tilde{\circ} g)+\lambda \emptyset)\\ &=X_c(f_0 \tilde{\circ} g+g\, \shuffl \, (f_1\tilde{\circ} g))+0+0\\ &=X_c(f_0 \tilde{\circ} g+(f_1\tilde{\circ} g)\, \shuffl \, g)+0+0\\ &=\tilde{\Delta}(X_c)(f_0,g)+(\tilde{\Delta}\otimes Id)\circ \Delta_{\, \shuffl \,}(X_c)(f_1,g,g)\\ &=\tilde{\Delta}(X_c)(f_0,g)+(Id \otimes m)\circ(\tilde{\Delta}\otimes Id)\circ \Delta_{\, \shuffl \,}(X_c)(f_1,g)\\ &=(\theta_0\otimes Id) \circ \tilde{\Delta}(X_c)(f,g)+(\theta_1\otimes Id)\circ (Id \otimes m)\circ(\tilde{\Delta}\otimes Id) \circ \Delta_{\, \shuffl \,}(X_c)(f,g), \end{align*} so $\tilde{\Delta} \circ \theta_0(X_c)=(\theta_0\otimes Id) \circ \tilde{\Delta}(X_c)+(\theta_1\otimes Id)\circ (Id \otimes m)\circ(\tilde{\Delta}\otimes Id) \circ \Delta_{\, \shuffl \,}(X_c)$.\\
3. Let $c$ be a word. \begin{align*} \tilde{\Delta} \circ \theta_1(X_c)(f,g)&=\tilde{\Delta}(X_{x_1c})(f,g)\\ &=X_{x_1c}(x_0 (f_0\tilde{\circ} g+g \, \shuffl \, (f_1\tilde{\circ} g))+x_1(f_1 \tilde{\circ} g)+\lambda \emptyset)\\ &=0+X_c(f_1 \tilde{\circ} g)+0\\ &=\tilde{\Delta}(X_c)(f_1 ,g)\\ &=(\theta_1\otimes Id) \circ \tilde{\Delta}(X_c)(f,g), \end{align*} so $\tilde{\Delta} \circ \theta_1(X_c)=(\theta_1\otimes Id) \circ \tilde{\Delta}(X_c)$. \end{proof}\\
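To illustrate the recursion, the coproduct of $X_{x_0}$ can be recovered from Proposition \ref{3} (a verification of the first example below): \begin{align*} \tilde{\Delta}(X_{x_0})&=\tilde{\Delta}\circ \theta_0(X_\emptyset)\\ &=(\theta_0\otimes Id)(X_\emptyset\otimes 1)+(\theta_1\otimes m)\circ (\tilde{\Delta}\otimes Id)(X_\emptyset\otimes X_\emptyset)\\ &=X_{x_0}\otimes 1+(\theta_1\otimes m)(X_\emptyset\otimes 1\otimes X_\emptyset)\\ &=X_{x_0}\otimes 1+X_{x_1}\otimes X_\emptyset, \end{align*} so $\Delta(X_{x_0})=X_{x_0}\otimes 1+1\otimes X_{x_0}+X_{x_1}\otimes X_\emptyset$.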
{\bf Examples.} \begin{align*} \Delta(X_{x_0})&=X_{x_0}\otimes 1+1\otimes X_{x_0}+X_{x_1}\otimes X_\emptyset,\\ \Delta(X_{x_0^2})&=X_{x_0^2}\otimes 1+1\otimes X_{x_0^2}+ X_{x_0x_1}\otimes X_\emptyset+X_{x_1x_0}\otimes X_\emptyset+X_{x_1x_1}\otimes X_\emptyset^2+X_{x_1} \otimes X_{x_0},\\ \Delta(X_{x_0x_1})&=X_{x_0x_1}\otimes 1+1\otimes X_{x_0x_1}+X_{x_1x_1}\otimes X_\emptyset +X_{x_1}\otimes X_{x_1},\\ \Delta(X_{x_1x_0})&=X_{x_1x_0}\otimes 1+1\otimes X_{x_1x_0}+X_{x_1x_1}\otimes X_\emptyset. \end{align*}
\begin{cor}\label{4} For all $n \geq 1$, $\tilde{\Delta}(X_{x_1^n})=X_{x_1^n} \otimes 1$ and $\Delta(X_{x_1^n})=X_{x_1^n} \otimes 1 +1\otimes X_{x_1^n}$. \end{cor}
\begin{proof} It comes from an easy induction on $n$. \end{proof}
\subsection{gradation}
It is proved in \cite{Gray} that the Hopf algebra $H$ is graded by the length of words, but this gradation is not connected, that is to say that the homogeneous component of degree $0$ is not $(0)$, as it contains $X_{\emptyset}$. Moreover, the homogeneous components of $H$ are not finite-dimensional, as for example $X_{\emptyset}^n X_{x_0^k}$ is homogeneous of degree $k$ for all $n \geq 0$. We now define another gradation on $H$, which is connected and finite-dimensional.
\begin{defi}\begin{enumerate} \item Let $c=c_1\ldots c_n$ be a word. We put: $$deg(c)=n+1+\sharp\left\{i\in \{1,\ldots,n\}\mid c_i=x_0\right\}.$$ \item For all $k\geq 1$, we put: $$V_k=Vect(X_c\mid deg(x)=k).$$ This define a connected gradation of $V$, that is to say: $$V=\bigoplus_{k\geq 1} V_k.$$ \item This gradation induces a connected gradation of the algebra $H$: $$H=\bigoplus_{k\geq 0} H_k,\mbox{ and } H_0=\mathbb{K}.$$ \end{enumerate}\end{defi}
\begin{lemma} If $c$ is a word of degree $n$, then: $$\tilde{\Delta}(X_c) \in \bigoplus_{i+j=n} V_i \otimes H_j.$$ \end{lemma}
\begin{proof} Let us start by the following observations: \begin{enumerate} \item Let $c$ be a word of degree $k$. Then $x_0c$ is a word of degree $k+2$. Hence, $\theta_0$ is homogeneous of degree $2$ on $V$. \item Let $c$ be a word of degree $k$. Then $x_1c$ is a word of degree $k+1$. Hence, $\theta_1$ is homogeneous of degree $1$ on $V$. \item Let $c$ and $d$ be two words of respective degrees $k$ and $l$. Then any word obtained by shuffling $c$ and $d$ is of degree $k+l-1$: its length is the sum of the length of $c$ and $d$, and the number of $x_0$ in it is the sum of the numbers of $x_0$ in $c$ and $d$. Hence, dually, the coproduct $\Delta_{\, \shuffl \,}$ is homogeneous of degree $1$ from $V$ to $V \otimes V$. \end{enumerate}
Let us prove the result by induction on the length $k$ of $c$. If $k=0$, then $c=\emptyset$ so $n=1$, and $\tilde{\Delta}(X_c)=X_c \otimes 1 \in V_1 \otimes H_0$. Let us assume the result for all words of length $<k-1$. Two cases can occur. \begin{enumerate} \item If $c=x_0d$, then $deg(d)=n-2$. we put $\Delta_{\, \shuffl \,}(X_d)=\sum x'_i\otimes x''_i$. By the preceding third observation, we can assume that for all $i$, $x'_i,x''_i$ are homogeneous elements of $V$, with $deg(x'_i)+deg(x'_i)=n-2+1=n-1$. Then: $$\tilde{\Delta}(X_c)=(\theta_0\otimes Id) \circ \tilde{\Delta}(X_d)+\sum_i(\theta_1 \otimes m)\circ(\tilde{\Delta}(x'_i) \otimes x''_i).$$ By the induction hypothesis, $\tilde{\Delta}(X_d) \in (V\otimes H)_{n-1}$. By the second observation, $(\theta_0\otimes Id) \circ \tilde{\Delta}(X_d)\in (V\otimes H)_n$. By the induction hypothesis applied to $x'_i$, for all $i$, $(\tilde{\Delta}(x'_i) \otimes x''_i) \in (V \otimes H\otimes V)_{n-1}$, so by the first observation, $(\theta_1 \otimes m)\circ(\tilde{\Delta}(x'_i) \otimes x''_i) \in (V \otimes H)_{n-1+1}\subseteq (V \otimes H)_n$. So $\Delta(X_c)\in (V \otimes H)_n$. \item $c=x_1d$, then $deg(d)=n-1$. Moreover, $\tilde{\Delta}(X_c)=(\theta_1\otimes Id)\circ \tilde{\Delta}(X_d)$. By the induction hypothesis, $\tilde{\Delta}(X_d) \in (V \otimes H)_{n-1}$. By the second observation, $\tilde{\Delta}(X_c) \in (V \otimes H)_n$. \end{enumerate} So the result holds for any word $c$. \end{proof}
\begin{prop} With this gradation, $H$ is a graded, connected Hopf algebra. \end{prop}
\begin{proof} We have to prove that for all $n \geq 0$: $$\Delta(H_n)\subseteq \bigoplus_{i+j=n} H_i\otimes H_j.$$ This comes from the multiplicativity of $\Delta$. \end{proof}\\
Let us now study the formal series of $V$ and $H$.
\begin{prop}\label{8}\begin{enumerate} \item For all $k$, let us put $p_k=dim(V_k)$ and $F_V=\displaystyle \sum_{k=1}^\infty p_k X^k$. Then: $$F_V=\frac{X}{1-X-X^2},$$ and for all $k\geq 1$: $$p_k=\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^k-\left(\frac{1-\sqrt{5}}{2}\right)^k\right).$$ This is the Fibonacci sequence (A000045 in \cite{Sloane}). \item We put $F_H=\displaystyle \sum_{k=0}^\infty dim(H_k)X^k$. Then: $$F_H=\prod_{k=1}^\infty \frac{1}{(1-X^k)^{p_k}}.$$ \end{enumerate}\end{prop}
\begin{proof} Let us consider the formal series: $$F(X_0,X_1)=\sum_{i,j\geq 0} \sharp\{\mbox{words in $x_0, x_1$ with $i$ $x_0$ and $j$ $x_1$}\} X_0^i X_0^j.$$ Then $\displaystyle F(X_0,X_1)=\frac{1}{1-X_0-X_1}$. Moreover, by definition of the degree of a word: $$F_V=XF(X^2,X)=\frac{X}{1-X-X^2}.$$ As $H$ is the symmetric algebra generated by $V$, its formal series is given by the second point. \end{proof}\\
{\bf Examples.} We obtain:
$$\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c} k&0&1&2&3&4&5&6&7&8&9&10\\ \hline dim(V_k)&0&1&1&2&3&5&8&13&21&34&55\\ \hline dim(H_k)&1&1&2&4&8&15&30&56&108&203&384 \end{array}$$ The third row is sequence A166861 of \cite{Sloane}. \\
{\bf Remark.} Consequently, the space $V$ inherits a bigradation: $$V_{k,n}=Vect(X_c\mid deg(c)=k \mbox{ and }lg(c)=n).$$ If $c$ is a word of length $n$ and of degree $k$, denoting by $a$ the number of its letters equal to $x_0$ and by $b$ the number of its letters equal to $x_1$, then: $$\begin{cases} a+b&=n,\\ 2a+b+1&=k, \end{cases}$$ so $a=k-n-1$. Hence: $$dim(V_{k,n})=\binom{n}{k-n-1},$$ and the formal series of this bigradation is: $$\sum_{k,n\geq 0} dim(V_{k,n})X^kY^n=\frac{X}{1-XY-X^2Y}.$$ \section{Prelie structure on $\K\langle x_0,x_1\rangle$}
\subsection{Prelie coproduct on $V$}
As the composition $\circ$ is linear on the left, the dual coproduct satisfies $\tilde{\Delta}(V)\subseteq V\otimes H$, so $H$ is a commutative right-sided Hopf algebra in the sense of \cite{Loday}, and $V$ inherits a right prelie coproduct: if $\pi$ is the canonical projection from $H=S(V)$ onto $V$, $$\delta=(\pi \otimes \pi) \circ \Delta=(Id \otimes \pi) \circ \tilde{\Delta}.$$ It satisfies the right prelie coalgebra axiom: $$(23).((\delta \otimes Id)\circ \delta-(Id \otimes \delta)\circ\delta)=0.$$ The following proposition allows to compute $\delta(X_c)$ by induction on the length of $c$.
\begin{prop}\label{9}\begin{enumerate} \item $\delta(X_\emptyset)=0$. \item $\delta \circ \theta_0=(\theta_0 \otimes Id)\circ \delta+(\theta_1\otimes Id)\circ \Delta_{\, \shuffl \,}$. \item $\delta \circ \theta_1=(\theta_1 \otimes Id)\circ \delta$. \end{enumerate}\end{prop}
\begin{proof} The first point comes from $\Delta(X_\emptyset)=X_\emptyset\otimes 1+1\otimes X_\emptyset$. Let $x \in V$. We put $\Delta_{\, \shuffl \,}(x)=x'\otimes x'' \in V \otimes V$. For any $y\in V$, we put $\tilde{\Delta}(y)-y\otimes 1 =y^{(1)}\otimes y^{(2)} \in V\otimes H_+$. Then: \begin{align*} (\theta_1 \otimes m)\circ (\tilde{\Delta} \otimes Id) \circ \Delta_{\, \shuffl \,}(x) &=(\theta_1\otimes m)(x'\otimes 1\otimes x''+x'^{(1)}\otimes x'^{(2)}\otimes x'')\\ &=\theta_1(x') \otimes \underbrace{x''}_{\in V}+x'^{(1)}\otimes \underbrace{x'^{(2)} x''}_{\in Ker(\pi)}. \end{align*} Applying $Id \otimes \pi$, it remains: $$(Id \otimes \pi)\circ(\theta_1 \otimes m)\circ (\tilde{\Delta} \otimes Id) \circ \Delta_{\, \shuffl \,}(x) =(\theta_1 \otimes Id)\circ \Delta_{\, \shuffl \,}(x).$$ Let $i=0$ or $1$. Then: $$(Id \otimes \pi) \circ (\theta_i\otimes Id)\circ \tilde{\Delta}=(\theta_i\otimes Id)\circ (Id \otimes \pi)\circ \tilde{\Delta} =(\theta_i\otimes Id)\circ \delta.$$ The result is induced by these remarks, combined with proposition \ref{3}. \end{proof}\\
{\bf Examples.} \begin{align*} \delta(X_{x_0})&=X_{x_1}\otimes X_\emptyset,\\ \delta(X_{x_0^2})&=X_{x_0x_1}\otimes X_\emptyset+X_{x_1x_0}\otimes X_\emptyset+X_{x_1}\otimes X_{x_0},\\ \delta(X_{x_0x_1})&=X_{x_1x_1}\otimes X_\emptyset+X_{x_1}\otimes X_{x_1},\\ \delta(X_{x_1x_0})&=X_{x_1x_1}\otimes X_\emptyset. \end{align*}
\begin{prop}\label{10} $Ker(\delta)=Vect(X_{x_1^n},n\geq 0)$. \end{prop}
\begin{proof} The inclusion $\supseteq$ is trivial by corollary \ref{4}. Let us prove the other inclusion. \\
{\it First step.} Let us prove the following property: if $x \in V_k$ is such that $$\delta(x)=\lambda \sum_{i+j=k-2} \frac{(k-2)!}{i!j!}X_{x_1^i}\otimes X_{x_1^j},$$ then there exists $\mu \in \mathbb{K}$ such that $x=\mu x_1^{k-1}$. It is obvious if $k=1$, as then $x=\mu \emptyset$. Let us assume the result at all ranks $<k$. We put $x=x_1^\alpha(x_0f_0+x_1f_1)$, where $\alpha\geq 0$, $f_0$ is homogeneous of degree $k-2-\alpha$ and $f_1$ is homogeneous of degree $k-1-\alpha$. $$\delta(x)=(\theta_1^\alpha\otimes Id)\left((\theta_0\otimes Id)\circ \delta(f_0)+(\theta_1\otimes Id)\circ \delta(f_1) +(\theta_1\otimes Id)\circ \Delta_{\, \shuffl \,}(f_0)\right).$$ Let us consider the terms in this expression of the form $X_\emptyset \otimes X_c$, with $c$ a word. This gives: $$\lambda X_\emptyset\otimes X_{x_1^{k-2}}=0,$$ so $\lambda=0$ and $\delta(x)=0$. Let us now consider the terms of the form $X_{x_1^\alpha x_0c}\otimes X_d$, with $c,d$ words. We obtain: $$(\theta_1^\alpha \circ \theta_0 \otimes Id)\circ \delta(f_0)=0.$$ As both $\theta_0$ and $\theta_1$ are injective, we obtain $\delta(f_0)=0$. By the induction hypothesis, $f_0=\nu X_1{x_1^l}$, with $l=k-2-\alpha<k$. Hence: $$\Delta_{\, \shuffl \,}(f_0)=\nu \sum_{i+j=l}\frac{l!}{i!j!}X_{x_1^i} \otimes X_{x_1^j},$$ and: $$(\theta_1^{\alpha+1}\otimes Id)\left(\delta(f_1)+\nu \sum_{i+j=l} \frac{l!}{i!j!}X_{x_1^i} \otimes X_{x_1^j}\right)=0.$$ As $\theta_1$ is injective, we obtain with the induction hypothesis that $f_1=\mu X_{x_1^{k-2-\alpha}}$, so: $$x=\mu X_{x_1^{k-1}}+\nu X_{x_1^\alpha x_0 x_1^{k-\alpha-2}}.$$ This gives: \begin{align*} \delta(x)&=\nu (\theta_1^{\alpha+1}\otimes Id)\left(\sum_{i+j=k-\alpha-2} \frac{(k-\alpha-2)!}{i!j!} X_{x_1^i}\otimes X_{x_1^j}\right)\\ &=\nu \sum_{i+j=k-\alpha-2} \frac{(k-\alpha-2)!}{i!j!} X_{x_1^{i+\alpha}}\otimes X_{x_1^j} \\ &=0, \end{align*} so necessarily $\nu=0$ and $x=\mu X_{x_1^{k-1}}$.\\
{\it Second step.} Let $x\in Ker(\delta)$. As $\delta$ is homogeneous of degree $0$, the homogeneous components of $x$ are in $Ker(\delta)$. By the first step, with $\lambda=0$, these homogeneous components, hence $x$, belong to $Vect(X_{x_1^k},k\geq 0)$. \end{proof}
\subsection{Dual prelie algebra}
As $V$ is a graded prelie coalgebra, its graded dual is a prelie algebra. We identify this graded dual with $\K\langle x_0,x_1\rangle\subseteq \K\langle\langle x_0,x_1\rangle\rangle$; for any words $c,d$, $X_c(d)=\delta_{c,d}$. The prelie product of $\K\langle x_0,x_1\rangle$ is denoted by $\bullet$. Dualizing proposition \ref{9}, we obtain:
\begin{prop}\label{11}\begin{enumerate} \item For all word $c$, $\emptyset \bullet c=0$. \item For all words $c,d$, $(x_0c)\bullet d=x_0(c\bullet d)$. \item For all words $c,d$, $(x_1c)\bullet d=x_1(c \bullet d)+x_0(c \, \shuffl \, d)$. \end{enumerate}\end{prop}
\begin{proof} Let $u,v,w$ be words. Then $X_w(u\bullet v)=\delta(X_w)(u\otimes v)$. Hence, if $d$ is a word: \begin{align*} X_\emptyset(u\bullet v)&=0,\\ X_{x_0d}(u\bullet v)&=(\theta_0\otimes Id)\circ \delta(X_d)(u\otimes v)+(\theta_1\otimes Id)\circ \Delta_{\, \shuffl \,}(X_d) (u\otimes v)\\ &=X_d(\theta_0^*(u)\bullet v+\theta_1^*(u)\, \shuffl \, v),\\ X_{x_1d}(u\bullet v)&=(\theta_1\otimes Id)\otimes \delta(X_d)(u\otimes v)\\ &=X_d(\theta_1^*(u)\bullet v). \end{align*} Moreover, for all word $c$: \begin{align*} \theta_0^*(\emptyset)&=0,&\theta_0^*(x_0c)&=c,&\theta_0^*(x_1c)&=0,\\ \theta_1^*(\emptyset)&=0,&\theta_1^*(x_0c)&=0,&\theta_1^*(x_1c)&=c. \end{align*} Hence, for any words $c,d$: \begin{align*} X_{x_0d}(x_0c \bullet v)&=X_d(c \bullet v)&X_{x_0d}(x_1c \bullet v)&=X_d(c \, \shuffl \, v)\\ &=X_{x_0d}(x_0(x \bullet v)),&&=X_{x_0d}(x_1(c \bullet v)+x_0(c \, \shuffl \, v)),\\ \\ X_{x_1d}(x_0c \bullet v)&=0&X_{x_1d}(x_1c \bullet v)&=X_d(c \bullet v)\\ &=X_{x_1d}(x_0(x \bullet v)),&&=X_{x_1d}(x_1(c \bullet v)+x_0(c \, \shuffl \, v)). \end{align*} Hence, for any $w$, $X_w(x_0c\bullet v)=X_w(x_0(x \bullet v))$ and $X_w(x_1c\bullet v)=X_w((x_1(c \bullet v)+x_0(c \, \shuffl \, v))$. \end{proof}\\
{\bf Examples.} \begin{align*} x_0\bullet x_0&=0&x_0\bullet x_0x_0&=0&x_1\bullet x_0x_0&=x_0x_0x_0\\ x_0\bullet x_1&=0&x_0\bullet x_0x_1&=0&x_1\bullet x_0x_1&=x_0x_0x_1\\ x_1\bullet x_0&=x_0x_0&x_0\bullet x_1x_0&=0&x_1\bullet x_1x_0&=x_0x_1x_0\\ x_1\bullet x_1&=x_0x_1&x_0\bullet x_1x_1&=0&x_1\bullet x_1x_1&=x_0x_1x_1 \end{align*} \begin{align*} x_0x_0\bullet x_0&=0&x_0x_0\bullet x_1&=0\\ x_0x_1\bullet x_0&=x_0x_0x_0&x_0x_1\bullet x_1&=x_0x_0x_1\\ x_1x_0\bullet x_0&=2x_0x_0x_0&x_1x_0\bullet x_1&=x_0x_0x_1+x_0x_1x_0\\ x_1x_1\bullet x_0&=x_1x_0x_0+x_0x_1x_0+x_0x_0x_1&x_1x_1\bullet x_1&=x_1x_0x_1+2x_0x_1x_1 \end{align*}
Dualizing proposition \ref{10}:
\begin{theo}\label{12} $\K\langle x_0,x_1\rangle=Vect(x_1^n,n\geq 0)\oplus (\K\langle x_0,x_1\rangle\bullet \K\langle x_0,x_1\rangle)$. Hence, $(x_1^n)_{n\geq 0}$ is a minimal system of generators of the prelie algebra $\K\langle x_0,x_1\rangle$. \end{theo}
\begin{proof} As $\bullet=\delta^*$, $Im(\bullet)=Ker(\delta)^\perp=Vect(X_{x_1^n},n\geq 0)^\perp$. The first assertion is then immediate. As $\K\langle\langle x_0,x_1\rangle\rangle$ is a graded, connected prelie coalgebra, $\K\langle x_0,x_1\rangle$ is a graded, connected prelie algebra. The result then comes from the next lemma. \end{proof}
\begin{lemma} Let $A$ be a graded, connected prelie algebra, and $V$ be a graded subspace of $A$. \begin{enumerate} \item $V$ generates $A$ if, and only if, $A=V+A\bullet A$. \item $V$ is a minimal subspace of generators of $A$ if, and only if, $A=V\oplus A\bullet A$. \end{enumerate}\end{lemma}
\begin{proof} 1. $\Longrightarrow$. Let $x \in A$. Then it can be written as an element of the prelie subalgebra generated by $v$, so as the sum of an element of $V$ and of iterated prelie products of elements of $V$. Hence, $x\in V+A\bullet A$. Note that we did not use the gradation of $A$ to prove this point.\\
1. $\Longleftarrow$. Let $B$ be the prelie subalgebra generated by $V$. Let $x\in A_n$, let us prove that $x\in B$ by induction on $n$. As $A_0=(0)$, it is obvious if $n=0$. Let us assume the result at all ranks $<n$. We obtain, by the gradation: $$A_n=V_n\oplus \sum_{i=1}^{n-1} A_i \bullet A_{n-i}.$$ So we can write $x=\lambda x_1^{n-1}+\sum x_i \bullet y_i$, where $x_i$, $y_i$ are homogeneous of degree $<n$. By the induction hypothesis, these elements belong to $B$, so $x \in B$. \\
2. $\Longrightarrow$. By 1. $\Longrightarrow$, $A=V+A\bullet A$. If $V \cap A\bullet A\neq (0)$, we can choose a graded subspace $W\subsetneq V$, such that $A=W\oplus A\bullet A$. By 1. $\Longleftarrow$, $W$ generates $A$, so $V$ is not a minimal system of generators of $A$: contradiction. So $A=V\oplus A\bullet A$. \\
2. $\Longleftarrow$. By 1. $\Longleftarrow$, $V$ is a space of generators of $A$. If $W \subsetneq V$, then $W \oplus A\bullet A \subsetneq A$. By 1. $\Longrightarrow$, $W$ does not generate $V$. So $V$ is a minimal subspace of generators. \end{proof}
\begin{prop}\label{14} For all $x,y,z\in \K\langle x_0,x_1\rangle$, $(x\, \shuffl \, y)\bullet z=(x\bullet z)\, \shuffl \, y+x\, \shuffl \, (y\bullet z)$. \end{prop}
\begin{proof} We prove it if $x,y,z$ are words. If $x=\emptyset$, then: $$(\emptyset \, \shuffl \, y)\bullet z=y\bullet z=(\emptyset \bullet z)\, \shuffl \, y+\emptyset\, \shuffl \,(u\bullet z).$$ If $y=\emptyset$, the result is also true, using the commutativity of $\, \shuffl \,$. We can now consider that $x,y$ are nonempty words.
Let us proceed by induction on $k=lg(x)+lg(y)$. If $k=0$ or $1$, there is nothing to prove. Let us assume the result at all rank $<k$. Four cases can occur.\\
{\it First case.} $x=x_0a$ and $y=x_0b$. Then: \begin{align*} (x\, \shuffl \, y)\bullet z&=(x_0(a\, \shuffl \, x_0b) \bullet z+(x_0(x_0a\, \shuffl \, b))\bullet z\\ &=x_0((a\, \shuffl \, x_0b)\bullet z)+x_0((x_0a\, \shuffl \, b)\bullet z)\\ &=x_0((a\bullet z)\, \shuffl \, x_0b)+x_0(a\, \shuffl \, ((x_0b)\bullet z)) +x_0(((x_0a)\bullet z)\, \shuffl \, b)+x_0(x_0a\, \shuffl \, (b\bullet z))\\ &=x_0((a\bullet z)\, \shuffl \, x_0b)+x_0(a\, \shuffl \, (x_0(b\bullet z)) +x_0((x_0(a\bullet z))\, \shuffl \, b)+x_0(x_0a\, \shuffl \, (b\bullet z))\\ &=x_0(a\bullet z)\, \shuffl \, x_0b+x_0a\, \shuffl \, x_0(b\bullet z)\\ &=(x\bullet z)\, \shuffl \, y+x\, \shuffl \, (y\bullet z). \end{align*}
{\it Second case.} $x=x_1a$ and $y=x_0b$. This gives: \begin{align*} (x\, \shuffl \, y)\bullet z&=(x_1(a\, \shuffl \, x_0b))\bullet z+(x_0(x_1a\, \shuffl \, b))\bullet z\\ &=x_1((a\bullet z)\, \shuffl \, x_0b)+x_1(a\, \shuffl \, x_0(b\bullet z))\\ &+x_0(a\, \shuffl \, x_0b\, \shuffl \, z)+x_0(((x_1a)\bullet z)\, \shuffl \, b)+x_0(x_1a\, \shuffl \,(b\bullet z))\\ &=x_1((a\bullet z)\, \shuffl \, x_0b)+x_1(a\, \shuffl \, x_0(b\bullet z))\\ &+x_0(a\, \shuffl \, x_0b\, \shuffl \, z)+x_0((x_1(a\bullet z))\, \shuffl \, b) +x_0((x_0(a\, \shuffl \, z))\, \shuffl \, b)+x_0(x_1a\, \shuffl \,(b\bullet z)),\\ \\ (x\bullet z)\, \shuffl \, y&=(x_1(a\bullet z))\, \shuffl \, x_0b+(x_0(a\, \shuffl \, z))\, \shuffl \,(x_0b)\\ &=x_1((a\bullet z)\, \shuffl \, (x_0b))+x_0(x_1(a\bullet z)\, \shuffl \, b)\\ &+x_0(a\, \shuffl \, z\, \shuffl \, x_0b)+x_0((x_0(a\, \shuffl \, z))\, \shuffl \, b), \\ \\ x\, \shuffl \,(y\bullet z)&=x_1a \, \shuffl \, x_0(b\bullet z)\\ &=x_1(a\, \shuffl \, x_0(b\bullet z))+x_0(x_1a\, \shuffl \, (b\bullet z)). \end{align*} These computations imply the required equality.\\
{\it Third case.} $x=x_0a$ and $y=x_1b$. This is a consequence of the second case, using the commutativity of $\, \shuffl \,$.\\
{\it Last case.} $x=x_1a$ and $y=x_1b$. Similar computations give: \begin{align*} (x\, \shuffl \, y)\bullet z&=x_1((a\bullet z)\, \shuffl \, x_1b)+x_1(a\, \shuffl \, x_1(b\bullet w))+x_1(a\, \shuffl \, x_0(b\, \shuffl \, z))+ x_0(a\, \shuffl \, x_1b\, \shuffl \, z)\\ &+x_1(x_1a\, \shuffl \,(b\bullet z))+x_1((x_1(a\bullet z))\, \shuffl \, b)+x_1((x_0(a\, \shuffl \, z))\, \shuffl \, b) +x_0(a\, \shuffl \, x_1b\, \shuffl \, z),\\ \\ (x\bullet z)\, \shuffl \, y&=x_1((a\bullet z)\, \shuffl \, x_1b)+x_1((x_1(a\bullet z))\, \shuffl \, b)+x_0(a\, \shuffl \, x_1b\, \shuffl \, z) +x_1((x_0(a\, \shuffl \, z))\, \shuffl \, b),\\ \\ x\, \shuffl \, (y\bullet z)&=x_1(a\, \shuffl \, x_1(b\bullet w))+x_1(a\, \shuffl \, x_0(b\, \shuffl \, z))+x_1(x_1a\, \shuffl \,(b\bullet z)) +x_0(a\, \shuffl \, x_1b\, \shuffl \, z). \end{align*} So the result holds in all cases. \end{proof}
\section{Presentation of $\K\langle x_0,x_1\rangle$ as a Com-Prelie algebra}
Proposition \ref{14} motivates the following definition:
\begin{defi}\label{15} \cite{Mansuy} A \emph{Com-Prelie algebra} is a triple $(V,\bullet,\, \shuffl \,)$, such that: \begin{enumerate} \item $(V,\bullet)$ is a prelie algebra. \item $(V,\, \shuffl \,)$ is a commutative, associative algebra (non necessarily unitary). \item For all $a,b,c \in V$, $(a\, \shuffl \, b)\bullet c=(a\bullet c) \, \shuffl \, b+a\, \shuffl \, (b\bullet c)$. \end{enumerate}\end{defi}
For example, $\K\langle x_0,x_1\rangle$ is a Com-Prelie algebra. See \cite{Mansuy} for an example of Com-Prelie algebra based on rooted trees.
\subsection{Free Com-Prelie algebras}
\begin{defi}\begin{enumerate} \item A \emph{partitioned forest} is a pair $(F,I)$ such that: \begin{enumerate} \item $F$ is a rooted forest (the edges of $F$ being oriented from the leaves to the roots). \item $I$ is a partition of the vertices of $F$ with the following condition: if $x,y$ are two vertices of $F$ which are in the same part of $I$, then either they are both roots, or they have the same direct descendant. \end{enumerate} \item We shall say that a partitioned forest is a \emph{partitioned tree} if all the roots are in the same part of the partition. \item Let $\mathcal{D}$ be a set. A \emph{partitioned tree decorated by $\mathcal{D}$} is a pair $(t,d)$, where $t$ is a partitioned tree and $d$ is a map from the set of vertices of $t$ into $\mathcal{D}$. For any vertex $x$ of $t$, $d(x)$ is called the \emph{decoration} of $x$. \item The set of isoclasses of partitioned trees will be denoted by $\mathcal{PT}$. For any set $\mathcal{D}$, the set of isoclasses of partitioned trees decorated by $\mathcal{D}$ will be denoted by $\mathcal{PT}(\mathcal{D})$. \end{enumerate}\end{defi}
{\bf Examples.} We represent partitioned trees by the Hasse graph of the underlying rooted forest, the partition being represented by horizontal edges, of different colors. Here are all the partitioned trees with $\leq 4$ vertices: $$\tun;\tdeux, \hdeux;\ttroisun, \htroisun,\ttroisdeux, \htroisdeux=\htroistrois, \htroisquatre; \tquatreun, \hquatreun=\hquatredeux,\hquatretrois,\tquatredeux=\tquatretrois,\hquatrequatre=\hquatrecinq,\tquatrequatre,\hquatresix,\tquatrecinq,$$ $$\hquatresept=\hquatrehuit,\hquatreneuf=\hquatredix,\hquatreonze=\hquatredouze,\hquatretreize,\hquatrequatorze=\hquatrequinze=\hquatreseize, \hquatredixsept.$$
\begin{defi}\label{17} Let $t=(t,I)$ and $t'=(t',J) \in \mathcal{PT}$. \begin{enumerate} \item Let $s$ be a vertex of $t'$. The partitioned tree $t\bullet_s t'$ is defined as follows: \begin{enumerate} \item As a rooted forest, $t \bullet_s t'$ is obtained by grafting all the roots of $t'$ on the vertex $s$ of $t$. \item We put $I=\{I_1,\ldots,I_k\}$ and $J=\{J_1,\ldots,J_l\}$. The partition of the vertices of this rooted forest is $I\sqcup J=\{I_1,\ldots,I_k,J_1,\ldots,J_l\}$. \end{enumerate} \item The partitioned tree $t\, \shuffl \, t'$ is defined as follows: \begin{enumerate} \item As a rooted forest, $t \, \shuffl \, t'$ is $tt'$. \item We put $I=\{I_1,\ldots,I_k\}$ and $J=\{J_1,\ldots,J_l\}$ and we assume that the set of roots of $t$ is $I_1$ and the set of roots of $t'$ is $J_1$. The partition of the vertices of $t \, \shuffl \, t'$ $\{I_1\sqcup J_1,I_2,\ldots,I_k,J_2,\ldots,J_l\}$. \end{enumerate}\end{enumerate}\end{defi}
{\bf Examples.} \begin{enumerate} \item Here are the three possible graftings $\htroisun \bullet_s \tun$: $\hquatreun$, $\hquatrequatre$ and $\hquatrecinq$. \item Here are the two possible graftings $\tdeux \bullet_s \hdeux$: $\hquatreun$ and $\hquatresix$. \end{enumerate}
These operations can also be defined for decorated partitioned trees.
\begin{prop} Let $\mathcal{D}$ be a set. We denote by $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ the vector space generated by $\mathcal{PT}(\mathcal{D})$. We extend $\, \shuffl \,$ by bilinearity on $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ and we define a second product $\bullet$ on $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ in the following way: if $t,t'\in \mathcal{PT}(\mathcal{D})$, $$t\bullet t'=\sum_{s\in V(t)} t\bullet_s t'.$$ Then $(\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}, \bullet,\, \shuffl \,)$ is a Com-Prelie algebra. \end{prop}
\begin{proof} Let $t,t',t''$ be three partitioned trees.
If $s',s''$ are two vertices of $t$, we define by $t\bullet_{s,s'}(t',t'')$ the partitioned trees obtained by grafting the roots of $t'$ on $s'$, the roots of $t''$ on $s''$, the partition of the vertices of the obtained rootes forest being the union of the partitions of $t$, $t'$ and $t''$. Then: \begin{align*} (t\bullet t')\bullet t''&=\sum_{s'\in V(t)} (t\bullet_{s'} t') \bullet t''\\ &=\sum_{s',s''\in V(t)}(t\bullet_{s'} t')\bullet_{s''} t''+\sum_{s'\in V(t), s''\in V(t')} (t\bullet_{s'} t')\bullet_{s''} t''\\ &=\sum_{s',s''\in V(t)} t\bullet_{s's''}(t',t'')+\sum_{s'\in V(t), s''\in V(t')} t\bullet_{s'} (t'\bullet_{s''} t'')\\ &=\sum_{s',s''\in V(t)} t\bullet_{s's''}(t',t'')+t\bullet (t'\bullet t''). \end{align*} So $(t\bullet t')\bullet t''-t\bullet (t'\bullet t'')$ is clearly symmetric in $t$ and $t'$, and $\bullet$ is prelie. \\
Moreover, $(t\, \shuffl \, t') \, \shuffl \, t''=t\, \shuffl \, (t'\, \shuffl \, t'')$ is the rooted forest $tt't''$, the partition being $\{I_1\sqcup J_1 \sqcup K_1,I_2,\ldots,I_k, J_2,\ldots,J_l,K_2,\ldots,K_m\}$, with immediate notations; $t \, \shuffl \, t'=t'\, \shuffl \, t$ is the rooted forest $tt'$, the partition being $\{I_1\sqcup J_1,I_2,\ldots,I_k,J_2,\ldots,J_l\}$. So $\, \shuffl \,$ is an associative, commutative product.
Finally: \begin{align*} (t\, \shuffl \, t')\bullet t''&=\sum_{s\in V(t)} (t\, \shuffl \, t') \bullet_s t''+\sum_{s'\in V(t')} (t\, \shuffl \, t')\bullet_{s'} t''\\ &=\sum_{s\in V(t)} (t\bullet_s t'')\, \shuffl \, t' +\sum_{s'\in V(t')} t\, \shuffl \, (t'\bullet_{s'} t'')\\ &=(t\bullet t')\, \shuffl \, t''+t\, \shuffl \, (t'\bullet t''). \end{align*} So $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ is Com-Prelie. \end{proof}\\
In particular, $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ is prelie. Let us use the extension of the prelie product $\bullet$ to $S(\mathfrak{g}_{\mathcal{PT}(\mathcal{D})})$ defined by Oudom and Guin \cite{Oudom1,Oudom2}: \begin{enumerate} \item If $t_1,\ldots, t_k \in \mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$, $t_1\ldots t_k \bullet 1=t_1\ldots t_k$. \item If $t,t_1,\ldots,t_k \in \mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$, $t\bullet t_1\ldots t_k=(t\bullet t_1\ldots t_{k-1})\bullet t_k-t\bullet (t_1\ldots t_{k-1} \bullet t_k)$. \item If $a,b,c \in S(\mathfrak{g}_{\mathcal{PT}(\mathcal{D})})$, $ab \bullet c=(a\bullet c^{(1)} )(b \bullet c^{(2)})$, where $\Delta(c)=c^{(1)} \otimes c^{(2)}$ is the usual coproduct of $S(\mathfrak{g}_{\mathcal{PT}(\mathcal{D})})$. In particular, if $t_1,\ldots,t_k,t\in \mathcal{PT}(\mathcal{D})$: $$t_1\ldots t_k \bullet t=\sum_{i=1}^k t_1\ldots (t_i \bullet t)\ldots t_k.$$ \end{enumerate}
\begin{lemma}\label{19} Let $t=(t,I),t_1=(t_1,I^{(1)}),\ldots,t_k=(t_k,I^{(k)})$ be partitioned trees $(k\geq 1$). Let $s_1,\ldots,s_k \in V(t)$. The partitioned tree $t\bullet_{s_1,\ldots,s_k}(t_1,\ldots,t_k)$ is obtained by grafting the roots of $t_i$ on $s_i$ for all $i$, the partition being $I\sqcup I^{(1)}\sqcup\ldots \sqcup I^{(k)}$. Then: $$t\bullet t_1\ldots t_k=\sum_{s_1,\ldots,s_k \in V(t)} t\bullet_{s_1,\ldots,s_k} (t_1,\ldots,t_k).$$ \end{lemma}
\begin{proof} By induction on $k$. This is obvious if $k=1$. Let us assume the result at rank $k$. \begin{align*} t\bullet t_1\ldots t_{k+1}&=(t\bullet t_1\ldots t_k) \bullet t_{k+1}-\sum_{i=1}^k t\bullet (t_1\ldots (t_i\bullet t_{k+1}) \ldots t_k)\\ &=\sum_{s_1,\ldots,s_k \in V(t)}(t\bullet_{s_1,\ldots,s_k} (t_1,\ldots, t_k)) \bullet t_{k+1} -\sum_{i=1}^k \sum_{s \in V(t_i)} t\bullet (t_1\ldots (t_i \bullet_s t_{k+1})\ldots t_i)\\ &=\sum_{s_1,\ldots,s_{k+1} \in V(t)}(t\bullet_{s_1,\ldots,s_k} (t_1,\ldots,t_k) )\bullet_{s_{k+1}} t_{k+1}\\ &+\sum_{i=1}^k \sum_{s\in V(t_i)}(t\bullet_{s_1,\ldots,s_k} (t_1,\ldots, t_k)) \bullet_s t_{k+1}\\ &-\sum_{i=1}^k \sum_{s_1,\ldots,s_k \in V(t)}\sum_{s \in V(t_i)} t\bullet_{s_1,\ldots,s_k} (t_1,\ldots, t_i \bullet_s t_{k+1},\ldots, t_i)\\ &=\sum_{s_1,\ldots,s_{k+1} \in V(t)}t\bullet_{s_1,\ldots,s_{k+1}} (t_1,\ldots,t_{k+1}). \end{align*} Hence, the result holds for all $k$. \end{proof}
\begin{theo} Let $\mathcal{D}$ be a set, let $A$ be a Com-Prelie algebra, and let $a_d \in A$ for all $d\in \mathcal{D}$. There exists a unique morphism of Com-Prelie algebra $\phi:\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}\longrightarrow A$, such that $\phi(\tdun{$d$})=a_d$
for all $d\in \mathcal{D}$. In other words, $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ is the free Com-Prelie algebra generated by $\mathcal{D}$. \end{theo}
\begin{proof} {\it Unicity.} Let $t\in \mathcal{T}^d$. We denote by $r_1,\ldots,r_n$ its roots. For all $1\leq i\leq n$, let $t_{i,1},\ldots,t_{i,k_i}$ be the partitioned trees born from $r_i$ and let $d_i$ be the decoration of $r_i$. Then: $$t=(\tdun{$d_1$}\bullet t_{1,1}\ldots t_{1,k_1} )\, \shuffl \, \ldots \, \shuffl \, (\tdun{$d_n$} \bullet t_{n,1}\ldots t_{n,k_n}).$$ So $\phi$ is inductively defined by: \begin{equation}\label{E1} \phi(t)=(a_{d_1}\bullet\phi(t_{1,1})\ldots \phi(t_{1,k_1}))\, \shuffl \,\ldots \, \shuffl \,(a_{d_n}\bullet \phi(t_{n,1})\ldots \phi(t_{n,k_n})). \end{equation}
{\it Existence.} As the product $\, \shuffl \,$ of $A$ is commutative and associative, (\ref{E1}) defines inductively a morphism $\phi$ from $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ to $A$. By definition, it is compatible with the product $\, \shuffl \,$. Let us prove the compatibility with the product $\bullet$. Let $t,t'$ be two partitioned trees, let us prove that $\phi(t\bullet t')=\phi(t)\bullet \phi(t')$ by induction on the number $N$ of vertices of $t$. If $N=1$, then $t=\tdun{$d$}$ and: $$\phi(t\bullet t')=a_d \bullet \phi(t')=\phi(t)\bullet \phi(t'),$$ by definition of $t'$. If $N>1$, two cases are possible.
{\it First case.} If $t$ has only one root, then $t=\tdun{$d$} \bullet t_1\ldots t_k$, and: $$t\bullet t'=\tdun{$d$}\bullet t_1\ldots t_k t'+\sum_{i=1}^k \tdun{$d$} \bullet t_1\ldots t_i \circ t'\bullet t_k.$$ Using the induction hypothesis on $t_1,\ldots,t_k$: \begin{align*} \phi(t\bullet t')&=a_d \bullet \phi(t_1)\ldots \phi(t_k) \phi(t') +\sum_{i=1}^k a_d \bullet \phi(t_1)\ldots \phi(t_1 \circ t')\ldots \phi(t_k)\\ &=a_d \bullet \phi(t_1)\ldots \phi(t_k) \phi(t') +\sum_{i=1}^k a_d \bullet (\phi(t_1)\ldots \phi(t_1) \circ \phi(t')\ldots \phi(t_k))\\ &=(a_d \bullet \phi(t_1)\ldots \phi(t_k))\bullet \phi(t')\\ &=\phi(t)\bullet \phi(t'). \end{align*}
{\it Second case.} If $t$ has $k>1$ roots, we put $t=t_1\, \shuffl \, \ldots \, \shuffl \, t_k$. The induction hypothesis holds for $t_1,\ldots,t_k$, so: \begin{align*} \phi(t\circ t')&=\sum_{i=1}^k \phi(t_1\, \shuffl \, t_i \bullet t'\, \shuffl \, \ldots \, \shuffl \, t_k)\\ &=\sum_{i=1}^k \phi(t_1)\, \shuffl \, \phi(t_i \bullet t')\, \shuffl \, \ldots \, \shuffl \, \phi(t_k)\\ &=\sum_{i=1}^k \phi(t_1)\, \shuffl \, \phi(t_i) \bullet \phi(t')\, \shuffl \, \ldots \, \shuffl \, \phi(t_k)\\ &=(\phi(t_1)\, \shuffl \, \ldots \, \shuffl \, \phi(t_k))\bullet \phi(t')\\ &=\phi(t)\bullet \phi(t'). \end{align*} Hence, $\phi$ is a morphism of Com-Prelie algebras. \end{proof}
\subsection{Presentation of $\K\langle x_0,x_1\rangle$ as a Com-Prelie algebra}
\begin{prop}\label{21} As a Com-Prelie algebra, $\K\langle x_0,x_1\rangle$ is generated by $\emptyset$ and $x_1$. \end{prop}
\begin{proof} Let $A$ be the Com-Prelie subalgebra of $\K\langle x_0,x_1\rangle$ generated by $\emptyset$ and $x_1$. For all $n \geq 1$, it contains $x_1^{\, \shuffl \, n}=n! x_1^n$, so it contains $x_1^n$ for all $n \geq 0$. As $\K\langle x_0,x_1\rangle$ is generated by these elements as a prelie algebra, $A=\K\langle x_0,x_1\rangle$. \end{proof}\\
We denote by $\phi_{CPL}:\mathfrak{g}_{\mathcal{PT}(\{1,2\})}\longrightarrow \K\langle x_0,x_1\rangle$ the unique morphism of Com-Prelie algebras which sends $\tdun{$1$}$ to $\emptyset$ and $\tdun{$2$} $ to $\tdun{$2$}$. By proposition \ref{21}, it is surjective.
\begin{lemma}\label{22} Let $t_1,\ldots,t_k \in \mathcal{PT}(\{1,2\})$. \begin{enumerate} \item $\phi_{CPL}(\tdun{$1$}\bullet t_1\ldots t_k)=0$ if $k\geq 1$. \item $\phi_{CPL}(\tdun{$2$}\bullet t_1\ldots t_k)=0$ if $k\geq 2$. \item If $t\in \mathcal{PT}(\{1,2\})$, $\phi_{CPL}(\tdun{$2$}\bullet t)=x_0\phi_{CPL}(t)$. \end{enumerate} \end{lemma}
\begin{proof} We prove 1.-3. by induction on $k$. If $k=1$: $$\begin{array}{rcl} \phi_{CPL}(\tdun{$1$}\bullet t)&=\emptyset \bullet \phi_{CPL}(t)&=0,\\ \phi_{CPL}(\tdun{$2$}\bullet t)&=x_1 \bullet \phi_{CPL}(t)&=x_0 \phi_{CPL}(t). \end{array}$$ Let us assume the results at rank $k-1\geq 1$. Then: \begin{align*} \phi_{CPL}(\tdun{$1$} \bullet t_1\ldots t_k)&=\emptyset \bullet \phi_{CPL}(t_1)\ldots \phi_{CPL}(t_k)\\ &=(\emptyset \bullet \phi_{CPL}(t_1)\ldots \phi_{CPL}(t_{k-1}))\bullet \phi_{CPL}(t_k)\\ &-\sum_{i=1}^k \emptyset \bullet \phi_{CPL}(t_1)\ldots \phi_{CPL}(t_i\bullet t_k)\ldots \phi_{CPL}(t_{k-1})\\ &=0,\\ \phi_{CPL}(\tdun{$2$} \bullet t_1\ldots t_k)&=x_1 \bullet \phi_{CPL}(t_1)\ldots \phi_{CPL}(t_k)\\ &=(x_1 \bullet \phi_{CPL}(t_1)\ldots \phi_{CPL}(t_{k-1}))\bullet \phi_{CPL}(t_k)\\ &-\sum_{i=1}^k x_1 \bullet \phi_{CPL}(t_1)\ldots \phi_{CPL}(t_i\bullet t_k)\ldots \phi_{CPL}(t_{k-1}). \end{align*} If $k\geq 3$, the induction hypothesis immediately allows to conclude that $\phi_{CPL}(\tdun{$2$} \bullet t_1\ldots t_k)=0-0=0$. If $k=2$, this gives: \begin{align*} \phi_{CPL}(\tdun{$2$} \bullet t_1t_2)&=(x_1 \bullet \phi_{CPL}(t_1))\bullet \phi_{CPL}(t_2)-x_1\bullet \phi_{CPL} (t_1\bullet t_2)\\ &=(x_0\phi_{CPL}(t_1))\bullet \phi_{CPL}(t_2)-x_0\phi_{CPL}(t_1\bullet t_2)\\ &=x_0\left(\phi_{CPL}(t_1)\bullet \phi_{CPL}(t_2))\phi_{CPL}(t_1\bullet t_2)\right)\\ &=0. \end{align*} Hence, the result holds for all $k\geq 1$. \end{proof}
\begin{lemma} \label{23} For all $t\in \mathcal{PT}(\{1,2\})$, $\phi_{CPL}(t)$ is a linear combination of words whose common length is the number of vertices of $t$ decorated by $2$. \end{lemma}
\begin{proof} By induction on the number $N$ of vertices of $t$. If $N=1$, then $t=\tdun{$1$}$ or $\tdun{$2$}$ and the result is obvious. Let us assume the result at all ranks $<N$. \\
{\it First case.} If $t$ has only one root, we put $t=\tdun{$i$}\bullet t_1\ldots t_k$. By the preceding lemma, we can assume that $i=2$ and $k=1$, since otherwise $\phi_{CPL}(t)=0$. Then $\phi_{CPL}(t)=x_0\phi_{CPL}(t_1)$ and the result follows from the induction hypothesis.\\
{\it Second case.} If $t$ has $k>1$ roots, we put $t=t_1\, \shuffl \, \ldots \, \shuffl \, t_k$. Then $\phi_{CPL}(t)$ is equal to $\phi_{CPL}(t_1)\, \shuffl \, \ldots \, \shuffl \, \phi_{CPL}(t_k)$ and the result is immediate. \end{proof}
\begin{lemma} We define inductively a family $F$ of elements of $\mathcal{PT}(\{1,2\})$ by: \begin{enumerate} \item $F(1)=\{\tdun{$1$},\tdun{$2$}\}$. \item $\displaystyle F(n+1)=(\tdun{$2$}\bullet F(n))\cup \bigcup_{i=1}^{n} (F(i) \, \shuffl \, F(n+1-i))$. \item $\displaystyle F=\bigcup_{n\geq 1} F(n)$. \end{enumerate} Let $t\in \mathcal{PT}(\{1,2\})$. Then $\phi_{CPL}(t) \neq 0$ if, and only if, $t\in F$. \end{lemma}
\begin{proof} $\Longrightarrow$. We proceed by induction on the number $N$ of vertices of $t$. This is obvious if $N=1$. Let us assume the result at all ranks $<N$. \\
{\it First case.} If $t$ has only one root, we put $t=\tdun{$i$}\bullet t_1\ldots t_k$. As $\phi_{CPL}(t)\neq 0$, lemma \ref{22} implies $i=2$ and $k=1$. Then $\phi_{CPL}(t)=x_0 \phi_{CPL}(t_1)$. By the induction hypothesis, $t_1 \in F$, so $t\in F$.\\
{\it Second case.} If $t$ has $k>1$ roots, we put $t=t_1\, \shuffl \, \ldots \, \shuffl \, t_k$. Then: $$\phi_{CPL}(t)=\phi_{CPL}(t_1)\, \shuffl \, \phi_{CPL}(t_2\, \shuffl \, \ldots \, \shuffl \, t_k)\neq 0,$$ so by the induction hypothesis, $t_1$ and $t_2\, \shuffl \, \ldots \, \shuffl \, t_k \in F$, and $t\in F$. \\
$\Longleftarrow$. Let $t\in F(n)$. We proceed by induction on $n$. If $n=1$, this is obvious. If $n>1$, then $t=\tdun{$2$}\bullet t'$, with $t'\in F(n-1)$, or $t=t'\, \shuffl \, t''$, with $t'\in F(i)$, $t''\in F(n-i)$, for some $1\leq i\leq n-1$. In the first case, by the induction hypothesis, $\phi_{CPL}(t')\neq 0$ and $\phi_{CPL}(t)=x_0 \phi_{CPL}(t')\neq 0$. In the second case, $\phi_{CPL}(t'),\phi_{CPL}(t'')\neq 0$ by the induction hypothesis, so $\phi_{CPL}(t)=\phi_{CPL}(t')\, \shuffl \, \phi_{CPL}(t'')\neq 0$. \end{proof}\\
{\bf Examples}. \begin{align*} F(1)&=\{\tdun{$1$},\tdun{$2$}\},\\ F(2)&=\{\tddeux{$2$}{$1$},\tddeux{$2$}{$2$},\hddeux{$1$}{$1$},\hddeux{$1$}{$2$},\hddeux{$2$}{$2$}\},\\ F(3)&=\left\{\tdtroisdeux{$2$}{$2$}{$1$},\tdtroisdeux{$2$}{$2$}{$2$},\hdtroisun{$2$}{$1$}{$1$}, \hdtroisun{$2$}{$2$}{$1$},\hdtroisun{$2$}{$2$}{$2$},\hdtroisdeux{$1$}{$2$}{$1$}, \hdtroisdeux{$2$}{$2$}{$1$},\hdtroisdeux{$1$}{$2$}{$2$},\hdtroisdeux{$2$}{$2$}{$2$}, \hdtroisquatre{$1$}{$1$}{$1$},\hdtroisquatre{$2$}{$1$}{$1$},\hdtroisquatre{$2$}{$2$}{$1$},\hdtroisquatre{$2$}{$2$}{$2$}\right\}. \end{align*}
We define a second family of elements of $\mathcal{PT}(\{1,2\})$ in the following way: \begin{enumerate} \item $F'(1)=\{\tdun{$1$},\tdun{$2$}\}$. \item $F'(2)=\{\tddeux{$2$}{$2$},\tddeux{$2$}{$1$},\hddeux{$2$}{$2$}\}$. \item $\displaystyle F'(n+1)=(\tdun{$2$}\bullet F'(n))\cup \bigcup_{i=2}^{n-1} \left(F'(i) \, \shuffl \, F'(n+1-i)\right) \cup \left(\tdun{$2$}\, \shuffl \, F'(n)\right)$ if $n \geq 2$. \item $\displaystyle F'=\bigcup_{n\geq 1} F'(n)$. \end{enumerate} For example: \begin{align*} F'(3)&=\left\{\tdtroisdeux{$2$}{$2$}{$1$},\tdtroisdeux{$2$}{$2$}{$2$},\hdtroisun{$2$}{$2$}{$2$}, \hdtroisdeux{$2$}{$2$}{$1$},\hdtroisdeux{$2$}{$2$}{$2$},\hdtroisquatre{$2$}{$2$}{$2$}\right\},\\ F'(4)&=\left\{\tdquatrecinq{$2$}{$2$}{$2$}{$1$},\tdquatrecinq{$2$}{$2$}{$2$}{$2$}, \hdquatresix{$2$}{$2$}{$2$}{$2$},\hdquatrecinq{$2$}{$2$}{$1$}{$2$},\hdquatrecinq{$2$}{$2$}{$2$}{$2$}, \hdquatretrois{$2$}{$2$}{$2$}{$2$},\hdquatredouze{$2$}{$2$}{$2$}{$2$}, \hdquatredix{$2$}{$2$}{$2$}{$1$},\hdquatredix{$2$}{$2$}{$2$}{$2$}, \hdquatretreize{$2$}{$1$}{$2$}{$1$},\hdquatretreize{$2$}{$1$}{$2$}{$2$},\hdquatretreize{$2$}{$2$}{$2$}{$2$}, \hdquatreseize{$2$}{$2$}{$2$}{$1$},\hdquatreseize{$2$}{$2$}{$2$}{$2$}, \hdquatredixsept{$2$}{$2$}{$2$}{$2$}\right\}. \end{align*} We define a map $\pi$ from $F$ to $\mathcal{PT}(\{1,2\})$ in the following way: \begin{enumerate} \item $\pi(\tdun{$i$})=\tdun{$i$}$ if $i=1,2$. \item $\pi(\tdun{$1$}\, \shuffl \, \ldots \, \shuffl \,\tdun{$1$})=\tdun{$1$}$. \item If $t=\tdun{$1$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$1$} \, \shuffl \, t_1\, \shuffl \, \ldots \, \shuffl \, t_k$, $k \geq 1$, with $t_1,\ldots,t_k \neq \tdun{$1$}$, then $\pi(t)=\pi(t_1)\, \shuffl \, \ldots \, \shuffl \, \pi(t_k)$. \item If $t=\tdun{$2$} \bullet t_1\ldots t_k$, then $\pi(t)=\tdun{$2$}\bullet \pi(t_1)\ldots \pi(t_k)$. \end{enumerate}
\begin{lemma} $\pi$ is a projection on $F'$ and $\phi_{CPL}\circ \pi={\phi_{CPL}}_{\mid F}$. \end{lemma}
\begin{proof} Let $t\in F$. Let us prove by induction on the number $N$ of vertices of $t$ that: \begin{enumerate} \item $\pi(t)\in F'$. \item If $t\in F'$, $\pi(t)=t$. \item $\phi_{CPL} \circ \pi(t)=\phi_{CPL}(t)$. \item If $\pi(t)=\tdun{$1$}$, then $t=\tdun{$1$}^{\, \shuffl \, N}$. \end{enumerate} All these points are immediate if $N=1$. Let us assume the result at all ranks $<N$, $N\geq 2$. We put $t=\tdun{$1$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$1$} \, \shuffl \, t_1\, \shuffl \, \ldots \, \shuffl \, t_k$, $k \geq 0$, with $t_1,\ldots,t_k \neq \tdun{$1$}$.\\
{\it First case.} If $k \geq 2$, then $\pi(t)=\pi(t_1)\, \shuffl \, \ldots \, \shuffl \, \pi(t_k)$. By the induction hypothesis, $\pi(t_1),\ldots,\pi(t_k)\in F'$ and are not equal to $\tdun{$1$}$, so $\pi(t)\in F'$; moreover, $\pi(t_1)\neq \tdun{$1$}$, so $\pi(t) \neq \tdun{$1$}$. \begin{align*} \phi_{CPL}(t)&=\phi_{CPL}(\tdun{$1$})\, \shuffl \,\ldots \, \shuffl \, \phi_{CPL}(\tdun{$1$})\, \shuffl \, \phi_{CPL}(t_1)\, \shuffl \, \ldots \, \shuffl \, \phi_{CPL}(t_k)\\ &=\emptyset \, \shuffl \,\ldots \, \shuffl \, \emptyset \, \shuffl \, \phi_{CPL}\circ \pi(t_1)\, \shuffl \, \ldots \, \shuffl \, \phi_{CPL}\circ \pi(t_k)\\ &=\phi_{CPL}(\pi(t_1)\, \shuffl \, \ldots \, \shuffl \, \pi(t_k))\\ &=\phi_{CPL}\circ \pi(t). \end{align*} If $t\in F'$, necessarily $t=t_1\, \shuffl \, \ldots \, \shuffl \, t_k$, and $t_1,\ldots,t_k \in F'$. By the induction hypothesis, $\pi(t_1)=t_1,\ldots,\pi(t_k)=t_k$, so $\pi(t)=t$. \\
{\it Second case.} If $k=1$, as $t_1\in F$, we put $t_1=\tdun{$2$}\bullet s$. Then $\pi(t)=\tdun{$2$}\bullet \pi(s)$. By the induction hypothesis, $\pi(s) \in F'$, so $\pi(t)\in F'$. Moreover: \begin{align*} \phi_{CPL}(t)&=\phi_{CPL}(\tdun{$1$})\, \shuffl \,\ldots \, \shuffl \, \phi_{CPL}(\tdun{$1$})\, \shuffl \, (\phi_{CPL}(\tdun{$2$})\bullet \phi_{CPL}(s))\\ &=\emptyset\, \shuffl \,\ldots \, \shuffl \, \emptyset \, \shuffl \, (\phi_{CPL}(\tdun{$2$})\bullet \phi_{CPL}(s))\\ &=\phi_{CPL}\circ \pi(\tdun{$2$})\bullet \phi_{CPL}\circ \pi(s)\\ &=\phi_{CPL}\circ \pi(t). \end{align*} If $t\in F'$, then $s \in F'$ and $t=\tdun{$2$}\bullet s$. Then $\pi(t)=\tdun{$2$}\bullet \pi(s)=\tdun{$2$}\bullet s=t$. \\
{\it Last case.} If $k=0$, all the results are obvious. \end{proof}
\begin{lemma}\label{26} Let $t,t'\in \mathcal{PT}(\{1,2\})$. Then: $$\phi_{CPL}\left((\tdun{$2$} \bullet t) \, \shuffl \, (\tdun{$2$} \bullet t')\right)= \phi_{CPL}\left(\tdun{$2$}\bullet((\tdun{$2$}\bullet t) \, \shuffl \, t'+t\, \shuffl \, (\tdun{$2$}\bullet t'))\right).$$ \end{lemma}
\begin{proof} Indeed, putting $w=\phi_{CPL}(t)$ and $w'=\phi_{CPL}(t')$: \begin{align*} \phi_{CPL}\left((\tdun{$2$} \bullet t) \, \shuffl \, (\tdun{$2$} \bullet t')\right)&=x_0w \, \shuffl \, x_0w'\\ &=x_0 (w \, \shuffl \, x_0w')+x_0(x_0w \, \shuffl \, w')\\ &=\phi_{CPL}\left(\tdun{$2$}\bullet((\tdun{$2$}\bullet t) \, \shuffl \, t'+t\, \shuffl \, (\tdun{$2$}\bullet t'))\right). \end{align*} We used lemma \ref{22} for the first and third equalities. \end{proof}
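The shuffle recursion used in this proof is easy to check mechanically. The following Python sketch (the encoding of words as tuples of letters and the helper names are ours) verifies the identity $x_0w \, \shuffl \, x_0w'=x_0(w\, \shuffl \, x_0w')+x_0(x_0w\, \shuffl \, w')$ on a small example.

```python
from itertools import combinations

def shuffle(u, v):
    """All shuffles of the words u and v (tuples of letters), with multiplicity."""
    n = len(u) + len(v)
    out = []
    for pos in combinations(range(n), len(u)):  # positions taken by u's letters
        w = [None] * n
        for p, x in zip(pos, u):
            w[p] = x
        it = iter(v)
        for i in range(n):
            if w[i] is None:
                w[i] = next(it)
        out.append(tuple(w))
    return out

def x0(words):
    """Prepend the letter x_0 to every word of a multiset of words."""
    return [('x0',) + w for w in words]

w, wp = ('x1', 'x0'), ('x1',)
lhs = shuffle(('x0',) + w, ('x0',) + wp)
rhs = x0(shuffle(w, ('x0',) + wp)) + x0(shuffle(('x0',) + w, wp))
assert sorted(lhs) == sorted(rhs)
```

The identity simply sorts the shuffles of $x_0w$ and $x_0w'$ according to which factor provides the first letter.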
\begin{theo}\label{27} The kernel of $\phi_{CPL}$ is the Com-Prelie ideal generated by the elements: \begin{enumerate} \item $\tdun{$1$}\bullet t_1\ldots t_k$, where $k\geq 1$, $t_1,\ldots,t_k \in \mathcal{PT}(\{1,2\})$. \item $\tdun{$2$}\bullet t_1\ldots t_k$, where $k\geq 2$, $t_1,\ldots,t_k \in \mathcal{PT}(\{1,2\})$. \item $\tdun{$1$} \, \shuffl \, t-t$, where $t\in \mathcal{PT}(\{1,2\})$. \item $(\tdun{$2$} \bullet t) \, \shuffl \, (\tdun{$2$} \bullet t') -\tdun{$2$}\bullet((\tdun{$2$}\bullet t) \, \shuffl \, t'+t\, \shuffl \, (\tdun{$2$}\bullet t'))$, where $t,t'\in \mathcal{PT}(\{1,2\})$. \end{enumerate}\end{theo}
\begin{proof} Let $I$ be the ideal generated by these elements. Lemmas \ref{22} and \ref{26} prove that the elements 1, 2 and 4 belong to $Ker(\phi_{CPL})$. Moreover, for all $t\in \mathcal{PT}(\{1,2\})$: $$\phi_{CPL}(\tdun{$1$}\, \shuffl \, t)=\emptyset \, \shuffl \, \phi_{CPL}(t)=\phi_{CPL}(t),$$ so the elements 3 also belong to $Ker(\phi_{CPL})$. Hence, $I\subseteq Ker(\phi_{CPL})$. Note also that for all $t \in \mathcal{PT}(\{1,2\})$, $\pi(\tdun{$1$}\, \shuffl \, t)=\pi(t)$. \\
Let $h=\mathfrak{g}_{\mathcal{PT}(\{1,2\})}/I$. As the elements 1 and 2 belong to $I$, $h$ is linearly spanned by the elements $\overline{t}$, $t\in F$. As the elements 3 belong to $I$, for all $t\in F$, $\overline{\pi(t)}=\overline{t}$. As $\pi$ is a projection on $F'$, $h$ is linearly spanned by the elements $\overline{t}$, $t\in F'$. \\
We now define inductively two families of partitioned trees in the following way: \begin{enumerate} \item $T''(1)=\{\tdun{$2$}\}$ and $F''(1)=\{\tdun{$1$},\tdun{$2$}\}$. \item $T''(n+1)=\tdun{$2$}\bullet F''(n)$. \item $\displaystyle F''(n+1)=\bigcup_{i=1}^{n+1} T''(i)\, \shuffl \, \tdun{$2$}^{\, \shuffl \,(n+1-i)}$. \item $\displaystyle F''=\bigcup_{n\geq 1} F''(n)$. \end{enumerate} For example: \begin{align*} F''(3)&=\left\{\tdtroisdeux{$2$}{$2$}{$1$},\tdtroisdeux{$2$}{$2$}{$2$},\hdtroisun{$2$}{$2$}{$2$}, \hdtroisdeux{$2$}{$2$}{$1$},\hdtroisdeux{$2$}{$2$}{$2$},\hdtroisquatre{$2$}{$2$}{$2$}\right\},\\ F''(4)&=\left\{\tdquatrecinq{$2$}{$2$}{$2$}{$1$},\tdquatrecinq{$2$}{$2$}{$2$}{$2$}, \hdquatresix{$2$}{$2$}{$2$}{$2$},\hdquatrecinq{$2$}{$2$}{$1$}{$2$},\hdquatrecinq{$2$}{$2$}{$2$}{$2$}, \hdquatretrois{$2$}{$2$}{$2$}{$2$},\hdquatredouze{$2$}{$2$}{$2$}{$2$}, \hdquatredix{$2$}{$2$}{$2$}{$1$},\hdquatredix{$2$}{$2$}{$2$}{$2$}, \hdquatreseize{$2$}{$2$}{$2$}{$1$},\hdquatreseize{$2$}{$2$}{$2$}{$2$}, \hdquatredixsept{$2$}{$2$}{$2$}{$2$}\right\}. \end{align*} Let us prove that for all $t\in F'$, there exists $t' \in Vect(F'')$ such that $\overline{t}=\overline{t'}$. We proceed by induction on the number $N$ of vertices of $t$. If $N=1$, then $t=\tdun{$1$} $ or $\tdun{$2$}$ and we take $t'=t$. Let us assume the result at all ranks $<N$. We put $t=t_1\, \shuffl \, \ldots \, \shuffl \, t_k \, \shuffl \, \tdun{$2$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$2$}$, with $t_i=\tdun{$2$}\bullet s_i$, $s_i \neq 1$, for all $1\leq i \leq k$. We proceed by induction on $k$. If $k=0$, we take $t'=t=\tdun{$2$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$2$}$. 
If $k=1$, then, by the induction hypothesis on $N$ applied to $s_1$: $$\overline{t}=(\overline{\tdun{$2$}} \bullet \overline{s_1})\, \shuffl \, \overline{\tdun{$2$}}\, \shuffl \, \ldots \, \shuffl \, \overline{\tdun{$2$}} =(\overline{\tdun{$2$}} \bullet \overline{s_1'})\, \shuffl \, \overline{\tdun{$2$}}\, \shuffl \, \ldots \, \shuffl \, \overline{\tdun{$2$}} =\overline{(\tdun{$2$} \bullet s_1')\, \shuffl \, \tdun{$2$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$2$}}.$$ We take $t'=(\tdun{$2$} \bullet s_1')\, \shuffl \, \tdun{$2$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$2$}$, which clearly belongs to $Vect(F'')$, as $s_1' \in Vect(F'')$. Let us assume the result at all ranks $<k$. Then, as the elements 4 belong to $I$: $$\overline{t_1\, \shuffl \, t_2}=\overline{\underbrace{\tdun{$2$}\bullet (t_1\, \shuffl \, s_2)}_{t'_1}} +\overline{\underbrace{\tdun{$2$}\bullet(s_1 \, \shuffl \, t_2)}_{t''_1}},$$ so: $$\overline{t}=\overline{t'_1\, \shuffl \, t_3\, \shuffl \, \ldots \, \shuffl \, t_k\, \shuffl \, \tdun{$2$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$2$}} +\overline{t''_1\, \shuffl \, t_3\, \shuffl \, \ldots \, \shuffl \, t_k\, \shuffl \, \tdun{$2$}\, \shuffl \, \ldots \, \shuffl \, \tdun{$2$}}.$$ By the induction hypothesis on $k$ applied to these two partitioned trees, there exist $x'_1$ and $x''_1 \in Vect(F''),$ such that $\overline{t}= \overline{x'_1}+\overline{x''_1}$. We take $t'=x_1'+x_1''$. Consequently, the elements $\overline{t}$, $t\in F''$, linearly span $h$. \\
Let $t\in F''(n)$. Then it has $n$ vertices, and at most one of them is decorated by $1$. We denote by $F''_1(n)$ the set of elements of $F''(n)$ with one vertex decorated by $1$, and we put $F''_2(n)=F''(n)\setminus F''_1(n)$.
Let us prove that for all $n \geq 1$, $|F''_1(n+1)| \leq 2^{n-1}$ and $|F''_2(n)| \leq 2^{n-1}$. For $n=1$, as $F''_1(2)=\{\tddeux{$2$}{$1$}\}$ and $F''_2(1)=\{\tdun{$2$}\}$, this is immediate. Let us assume the result at all ranks $\leq n$. Then: $$F''_2(n+1)=\bigcup_{i=1}^{n+1} \tdun{$2$}^{\, \shuffl \, (n+1-i)} \, \shuffl \, (T''(i)\cap F''_2(i)) =\{\tdun{$2$}^{\, \shuffl \, (n+1)}\} \cup \bigcup_{i=1}^n
\tdun{$2$}^{\, \shuffl \, (n-i)} \, \shuffl \, \tdun{$2$}\bullet F''_2(i).$$
Hence, $|F''_2(n+1)|\leq 1+1+2+\ldots+2^{n-1}=2^n$. Similarly: $$F''_1(n+2)=\bigcup_{i=1}^{n+2} \tdun{$2$}^{\, \shuffl \,(n+2-i)} \, \shuffl \, (T''(i)\cap F''_1(i)) =\bigcup_{i=2}^{n+2} \tdun{$2$}^{\, \shuffl \,(n+2-i)} \, \shuffl \, \tdun{$2$}\bullet F''_1(i-1).$$
Hence, $|F''_1(n+2)|\leq 1+1+2+\ldots+2^{n-1}=2^n$. \\
Let $\overline{\phi}_{CPL}$ be the linear map induced by $\phi_{CPL}$ on $h$. If $t \in F''_1(n)$, by lemma \ref{23}, $\overline{\phi}_{CPL}(\overline{t})$ is a linear combination of words of length $n-1$. If $t \in F''_2(n)$, by lemma \ref{23}, $\overline{\phi}_{CPL}(\overline{t})$ is a linear combination of words of length $n$. Hence, for all $n \geq 0$: $$\overline{\phi}_{CPL}(Vect(F''_2(n))+Vect(F''_1(n+1)))\subseteq Vect(\mbox{words of length $n$}).$$ As $\phi_{CPL}$ is surjective, we obtain: $$\overline{\phi}_{CPL}(Vect(F''_2(n))+Vect(F''_1(n+1)))=Vect(\mbox{words of length $n$}).$$ Moreover, as $dim(Vect(\mbox{words of length $n$}))=2^n$ and
$dim(Vect(F''_2(n))+Vect(F''_1(n+1)))\leq |F''_2(n)|+|F''_1(n+1)|\leq 2^{n-1}+2^{n-1}=2^n$, the restriction of $\overline{\phi}_{CPL}$ to $Vect(F''_2(n))+Vect(F''_1(n+1))$ is injective. Finally, $\overline{\phi}_{CPL}$ is injective, so $Ker(\phi_{CPL})=I$. \end{proof}
\section{Presentation of $\K\langle x_0,x_1\rangle$ as a prelie algebra}
\subsection{A surjective morphism}
Let $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$ be the free prelie algebra generated by $\mathbb{N}^*$, as described in \cite{Chapoton}. It can be seen as the subspace of $\mathfrak{g}_{\mathcal{PT}(\mathbb{N}^*)}$ generated by rooted trees (which are seen as partitioned trees such that any part of the partition is a singleton), with the restriction of the prelie product $\bullet$ defined by graftings. For example, in $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$, if $a,b,c,d>0$: $$\tddeux{$a$}{$b$}\bullet \tddeux{$c$}{$d$}=\tdquatretrois{$a$}{$c$}{$d$}{$b$}+\tdquatrecinq{$a$}{$b$}{$c$}{$d$}.$$ This prelie algebra is graded, the degree of a tree being the sum of its decorations. \\
By theorem \ref{12}, there exists a unique surjective morphism of prelie algebras $\phi_{PL}:\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}\longrightarrow \K\langle x_0,x_1\rangle$, sending $\tdun{$n$}$ to $x_1^{n-1}$ for all $n\geq 1$. As $x_1^{i-1}$ is homogeneous of degree $i$ for all $i$, this morphism is homogeneous of degree $0$.\\
{\bf Notation.} If $t_1,\ldots,t_k\in \mathcal{T}(\mathbb{N}^*)$ and $n\in \mathbb{N}^*$, we put: $$B_n(t_1\ldots t_k)=\tdun{$n$}\bullet t_1\ldots t_k.$$ This is the tree obtained by grafting $t_1,\ldots,t_k$ on a common root decorated by $n$.
\begin{prop}\label{28} Let $t=B_n( t_1\ldots t_k )\in \mathcal{T}(\mathbb{N}^*)$. We put $\phi_{PL}(t_i)=w_i$ for all $1\leq i \leq k$. Then: $$\phi_{PL}(t)=\begin{cases} x_0w_1\, \shuffl \, \ldots \, \shuffl \, x_0w_k \, \shuffl \, x_1^{n-1-k} \mbox{ if }k<n,\\ 0\mbox{ otherwise}. \end{cases}$$ \end{prop}
\begin{proof} As $\mathfrak{g}_{\mathcal{PT}(\{1,2\})}$ is prelie, there exists a unique morphism of prelie algebras: $$\psi:\begin{cases} \mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}&\longrightarrow\mathfrak{g}_{\mathcal{PT}(\{1,2\})}\\ \tdun{$n$}&\longmapsto\frac{1}{(n-1)!} \tdun{$2$}^{\, \shuffl \, (n-1)}. \end{cases}$$ Then $\phi_{CPL}\circ \psi$ is a prelie algebra morphism sending $\tdun{$n$} $ to $\frac{1}{(n-1)!} x_1^{\, \shuffl \, (n-1)} =x_1^{n-1}$ for all $n \geq 1$, so $\phi_{CPL}\circ \psi=\phi_{PL}$. We obtain, by lemma \ref{19}: \begin{align*} \psi(\tdun{$n$} \bullet t_1\ldots t_k)&=\frac{1}{(n-1)!} \tdun{$2$}^{\, \shuffl \,(n-1)}\bullet (\psi(t_1)\ldots \psi(t_k))\\ &=\frac{1}{(n-1)!}\sum_{I_1\sqcup \ldots \sqcup I_{n-1}=\{1,\ldots,k\}} \tdun{$2$}\bullet \left(\prod_{i\in I_1} \psi(t_i)\right)\, \shuffl \, \ldots \, \shuffl \, \tdun{$2$}\bullet \left(\prod_{i\in I_{n-1}} \psi(t_i)\right). \end{align*}
Let us apply $\phi_{CPL}$ to this expression. If $|I_j|\geq 2$, by theorem \ref{27}: $$\phi_{CPL}\left( \tdun{$2$}\bullet \left(\prod_{i\in I_j} \psi(t_i)\right)\right)=0.$$ Consequently, if $k\geq n$, at least one of the $I_j$ contains at least two elements, so $\phi_{CPL}\circ \psi(t)=\phi_{PL}(t)=0$. Let us assume that $k<n$. Hence, using the commutativity of $\, \shuffl \,$: \begin{align*}
\phi_{PL}(\tdun{$n$} \bullet t_1\ldots t_k)&=\frac{1}{(n-1)!}\sum_{I_1\sqcup \ldots \sqcup I_{n-1}=\{1,\ldots,k\},\: |I_j|\leq 1} x_1 \bullet \left(\prod_{i\in I_1} w_i\right)\, \shuffl \, \ldots \, \shuffl \, x_1\bullet \left(\prod_{i\in I_{n-1}} w_i\right)\\ &=\frac{1}{(n-1)!}\sum_{\iota:\{1,\ldots,k\}\longrightarrow \{1,\ldots,n-1\},\mbox{\scriptsize{ injective}}} x_1 \bullet w_1\, \shuffl \, \ldots \, \shuffl \, x_1\bullet w_k \, \shuffl \, x_1^{\, \shuffl \, (n-1-k)}\\ &=\frac{1}{(n-1)!}\sum_{\iota:\{1,\ldots,k\}\longrightarrow \{1,\ldots,n-1\},\mbox{\scriptsize{ injective}}} x_0 w_1\, \shuffl \, \ldots \, \shuffl \, x_0 w_k \, \shuffl \, x_1^{\, \shuffl \, (n-1-k)}\\ &=\frac{(n-1)\ldots (n-k)}{(n-1)!}x_0 w_1\, \shuffl \, \ldots \, \shuffl \, x_0 w_k \, \shuffl \, x_1^{\, \shuffl \, (n-1-k)}\\ &=\frac{(n-1)\ldots (n-k)(n-1-k)! }{(n-1)!}x_0 w_1\, \shuffl \, \ldots \, \shuffl \, x_0 w_k \, \shuffl \, x_1^{n-1-k}\\ &=x_0 w_1\, \shuffl \, \ldots \, \shuffl \, x_0 w_k \, \shuffl \, x_1^{n-1-k}, \end{align*} which is the announced result. \end{proof}
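{\bf Example.} For instance, proposition \ref{28} gives $\phi_{PL}(\tddeux{$3$}{$1$})=x_0\, \shuffl \, x_1=x_0x_1+x_1x_0$, whereas $\phi_{PL}(\tdtroisun{$2$}{$1$}{$1$})=0$, the root being decorated by $2$ and having $k=2\geq 2$ children.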
\begin{cor} \label{29} Let $s_1,\ldots,s_k,t_1,\ldots,t_l \in \mathcal{T}(\mathbb{N}^*)$, $k,l\geq 0$. For all $i,j,n \geq 1$: \begin{align*} &\phi_{PL}\left(B_{n+1}(B_i(s_1\ldots s_k) B_j(t_1\ldots t_l))\right)\\ &=\phi_{PL}\left(B_n(B_{i+1}(s_1\ldots s_k B_j(t_1\ldots t_l)))\right) +\phi_{PL}\left(B_n(B_{j+1}(B_i(s_1\ldots s_k)t_1\ldots t_l))\right). \end{align*}\end{cor}
\begin{proof} We note: $$\begin{array}{rcl} T_1=B_{n+1}(B_i(s_1\ldots s_k) B_j(t_1\ldots t_l)) &=&\tdun{$n+1$}\hspace{.4cm}\bullet ((\tdun{$i$}\bullet s_1\ldots s_k) (\tdun{$j$} \bullet t_1\ldots t_l)),\\ T_2=B_n(B_{i+1}(s_1\ldots s_k B_j(t_1\ldots t_l))) &=&\tdun{$n$} \bullet (\tdun{$i+1$}\hspace{.3cm}\bullet (s_1\ldots s_k( \tdun{$j$}\bullet t_1\ldots t_l))),\\ T_3=B_n(B_{j+1}(B_i(s_1\ldots s_k)t_1\ldots t_l)) &=&\tdun{$n$} \bullet(\tdun{$j+1$}\hspace{.3cm}\bullet((\tdun{$i$}\bullet s_1\ldots s_k) t_1\ldots t_l)). \end{array}$$ If $k\geq i$, or $l\geq j$, or $n=1$, all these elements are sent to zero by $\phi_{PL}$ by proposition \ref{28}. Let us assume now that $k<i$, $l<j$ and $n>1$. We put $v_i=\phi_{PL}(s_i)$, $w_j=\phi_{PL}(t_j)$, and: $$X=x_0v_1\, \shuffl \, \ldots \, \shuffl \, x_0v_k \, \shuffl \, x_1^{i-1-k},\qquad Y=x_0w_1\, \shuffl \, \ldots \, \shuffl \, x_0w_l \, \shuffl \, x_1^{j-1-l}.$$ Then: \begin{align*} \phi_{PL}(T_1)&=x_0X \, \shuffl \, x_0Y\, \shuffl \, x_1^{n-2},\\ \phi_{PL}(T_2)&=x_0(X \, \shuffl \, x_0Y) \, \shuffl \, x_1^{n-2},\\ \phi_{PL}(T_3)&=x_0(x_0X \, \shuffl \, Y) \, \shuffl \, x_1^{n-2}. \end{align*} As $x_0X \, \shuffl \, x_0Y=x_0(X \, \shuffl \, x_0Y)+x_0(x_0X\, \shuffl \, Y)$, we obtain the result. \end{proof}
\begin{theo}\label{30} The kernel of $\phi_{PL}$ is the prelie ideal generated by: \begin{enumerate} \item $B_1(t_1\ldots t_k)$, where $k\geq 1$, $t_1,\ldots, t_k \in \mathcal{T}(\mathbb{N}^*)$. \item $B_{n+1}(B_i(s_1\ldots s_k)B_j(t_1\ldots t_l))-B_n(B_{i+1}(s_1\ldots s_kB_j(t_1\ldots t_l))+ B_{j+1}(B_i(s_1\ldots s_k) t_1\ldots t_l))$, where $i,j,n\geq 1$, $k,l\geq 0$, $s_1,\ldots,s_k,t_1,\ldots,t_l \in \mathcal{T}(\mathbb{N}^*)$. \end{enumerate}\end{theo}
\begin{proof} Let $I$ be the ideal generated by these elements. By proposition \ref{28} and corollary \ref{29}, $I\subseteq Ker(\phi_{PL})$. We put $h=\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}/I$. Applying repeatedly the relation given by the elements of the second form, it is not difficult to prove that for any $t \in \mathcal{T}(\mathbb{N}^*)$, there exists a linear combination of ladders $t'$ such that $\overline{t}=\overline{t'}$ in $h$. Moreover, by the relation given by the elements 1, if one of the vertices of a ladder $t$ which is not the leaf is decorated by $1$, then $\overline{t}=0$. Let us denote by $L(n)$ the set of ladders decorated by $\mathbb{N}^*$, of weight $n$, such that all the vertices which are not the leaf are decorated by integers $>1$. Hence, $h$ is linearly spanned by the elements $\overline{t}$, $t\in L=\bigcup L(n)$.
Let $\overline{\phi}_{PL}$ be the morphism from $h$ to $\K\langle x_0,x_1\rangle$ induced by $\phi_{PL}$. By homogeneity, as $\phi_{PL}$ is surjective, for all $n \geq 1$: $$\overline{\phi}_{PL}(Vect(L(n)))=Vect(\mbox{words of degree }n).$$ In order to prove that $I=Ker(\phi_{PL})$, it is enough to prove that $\overline{\phi}_{PL}$ is injective. By homogeneity, it is enough to prove that the restriction of $\overline{\phi}_{PL}$ to $Vect(L(n))$ is injective for all $n\geq 1$. Hence, it is enough to prove that for all $n \geq 1$,
$$|L(n)| = dim(Vect(\mbox{words of degree }n))=p_n,$$
where the $p_n$ are the integers defined in proposition \ref{8}. Let $l_n=|L(n)|$ and let $q_n$ be the number of $t\in L(n)$ with no vertex decorated by $1$. Then for all $n \geq 2$, $l_n=q_n+q_{n-1}$, and $l_1=1$. We put: $$L=\sum_{n=1}^\infty l_n X^n,\: Q=\sum_{n=1}^\infty q_nX^n.$$ We obtain $L=X+Q+XQ$. Moreover: $$Q=\frac{1}{\displaystyle 1-\sum_{i\geq 2}X^i}-1=\frac{1}{1-\frac{X^2}{1-X}}-1=\frac{X^2}{1-X-X^2}.$$ Finally: $$L=\frac{X}{1-X-X^2}=F.$$
So, for all $n \geq 1$, $|L(n)|=p_n$. \end{proof}\\
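The count $|L(n)|=p_n$ can also be confirmed by brute force. The Python sketch below (function and variable names are ours) enumerates the decoration sequences $a_1\ldots a_k$ of the ladders of $L(n)$ and compares with the coefficients of $X/(1-X-X^2)$.

```python
def ladders(n):
    """Decoration sequences (a_1, ..., a_k) of the ladders in L(n):
    compositions of n with a_1, ..., a_{k-1} >= 2 and a_k >= 1."""
    if n <= 0:
        return []
    res = [(n,)]  # the one-vertex ladder
    for a in range(2, n):
        res += [(a,) + rest for rest in ladders(n - a)]
    return res

# Coefficients of X/(1 - X - X^2): p_1 = p_2 = 1, p_n = p_{n-1} + p_{n-2}
p = [0, 1, 1]
for n in range(3, 13):
    p.append(p[-1] + p[-2])

assert all(len(ladders(n)) == p[n] for n in range(1, 13))
```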
As an immediate corollary, a basis of $h$ is given by the classes of the elements of $L$. Turning to $\K\langle x_0,x_1\rangle$, we obtain:
\begin{cor}\label{31} Let $w=a_1\ldots a_k$ be a word with letters in $\mathbb{N}^*$. \begin{enumerate} \item We put: $$m_w=x_1^{a_1-1}\bullet(x_1^{a_2-1}\bullet(\ldots(x_1^{a_{k-1}-1}\bullet x_1^{a_k-1})\ldots)).$$ \item We shall say that $w$ is \emph{admissible} if $a_1,\ldots,a_{k-1}>1$. The set of admissible words is denoted by $\mathcal{A}dm$. \end{enumerate} Then $(m_w)_{w \in \mathcal{A}dm}$ is a basis of $\K\langle x_0,x_1\rangle$. \end{cor}
{\bf Remark.} If $w$ is not admissible, that is to say if there exists $1\leq i<k$, such that $a_i=1$, then $m_w=0$ by proposition \ref{28}. \\
We extend the map $w\longmapsto m_w$ by linearity.
\subsection{Prelie product in the basis of admissible words}
{\bf Notations.} \begin{enumerate} \item For all $k,l$, we denote by $Sh(k,l)$ the set of $(k,l)$-shuffles, that is to say permutations $\zeta \in \mathfrak{S}_{k+l}$ such that $\zeta(1)<\ldots<\zeta(k)$, $\zeta(k+1)<\ldots<\zeta(k+l)$. \item For all $k,l$ we denote by $Sh_\prec(k,l)$ the set of $(k,l)$-shuffles $\zeta$ such that $\zeta^{-1}(k+l)=k$. \item For all $k,l$ we denote by $Sh_\succ(k,l)$ the set of $(k,l)$-shuffles $\zeta$ such that $\zeta^{-1}(k+l)=k+l$. \item The symmetric group $\mathfrak{S}_n$ acts on the set of words with letters in $\mathbb{N}^*$ of length $n$ by permutation of the letters: $$\sigma.(a_1\ldots a_n)=a_{\sigma^{-1}(1)}\ldots a_{\sigma^{-1}(n)}.$$ \end{enumerate}
\begin{prop}\label{32} Let $\mathbb{K} \langle\mathbb{N}^*\rangle$ be the space generated by words with letters in $\mathbb{N}^*$. We define a dendriform structure on this space by: \begin{align*} (a_1\ldots a_k)\prec(b_1\ldots b_l)&=\sum_{\zeta \in Sh_\prec(k,l)}\zeta.a_1\ldots a_k b_1\ldots b_{l-1}(b_l+1),\\ (a_1\ldots a_k)\succ(b_1\ldots b_l)&=\sum_{\zeta \in Sh_\succ(k,l)}\zeta.a_1\ldots a_{k-1}(a_k+1) b_1\ldots b_l. \end{align*} The associative product $\prec+\succ$ is denoted by $\star$. \end{prop}
\begin{proof} We denote by $Sh(k,l,m)$ the set of permutations $\zeta\in\mathfrak{S}_{k+l+m}$ such that $\zeta(1)<\ldots <\zeta(k)$, $\zeta(k+1)<\ldots<\zeta(k+l)$, $\zeta(k+l+1)<\ldots <\zeta(k+l+m)$. Then: \begin{align*} &(a_1\ldots a_k \prec b_1\ldots b_l)\prec c_1\ldots c_m=a_1\ldots a_k \prec (b_1\ldots b_l \star c_1\ldots c_m)\\ &=\sum_{\zeta \in Sh(k,l,m), \zeta^{-1}(k+l+m)=k}\zeta.a_1\ldots a_k b_1\ldots (b_l+1)c_1\ldots (c_m+1);\\ &(a_1\ldots a_k \succ b_1\ldots b_l)\prec c_1\ldots c_m=a_1\ldots a_k \succ (b_1\ldots b_l \prec c_1\ldots c_m)\\ &=\sum_{\zeta \in Sh(k,l,m), \zeta^{-1}(k+l+m)=k+l}\zeta.a_1\ldots (a_k+1) b_1\ldots b_l c_1\ldots (c_m+1);\\ &(a_1\ldots a_k \star b_1\ldots b_l)\succ c_1\ldots c_m=a_1\ldots a_k \succ (b_1\ldots b_l \succ c_1\ldots c_m)\\ &=\sum_{\zeta \in Sh(k,l,m), \zeta^{-1}(k+l+m)=k+l+m}\zeta.a_1\ldots (a_k+1) b_1\ldots (b_l+1)c_1\ldots c_m. \end{align*} So $\mathbb{K}\langle\mathbb{N}^*\rangle$ is a dendriform algebra. \end{proof}\\
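The half-products of proposition \ref{32} are easy to implement. The following Python sketch (encodings and function names are ours) computes $\prec$, $\succ$ and $\star$ on words of integers, and checks the one-letter product $a\star b=(a+1)b+(b+1)a$ together with the first dendriform axiom on a small example.

```python
from itertools import combinations

def half_products(a, b):
    """Split the shuffles of a and b into the <-terms (last letter from a,
    last letter of b raised by 1) and the >-terms (last letter from b,
    last letter of a raised by 1)."""
    k, l = len(a), len(b)
    prec, succ = [], []
    for pos in combinations(range(k + l), k):   # positions of the letters of a
        from_a = (pos[-1] == k + l - 1)         # does a supply the last letter?
        aa = a if from_a else a[:-1] + (a[-1] + 1,)
        bb = b[:-1] + (b[-1] + 1,) if from_a else b
        w = [None] * (k + l)
        for p, x in zip(pos, aa):
            w[p] = x
        it = iter(bb)
        for i in range(k + l):
            if w[i] is None:
                w[i] = next(it)
        (prec if from_a else succ).append(tuple(w))
    return prec, succ

def star(a, b):
    prec, succ = half_products(a, b)
    return prec + succ

def prec_lin(ws, vs):
    """Extend the left half-product to multisets (lists) of words."""
    return [w for u in ws for v in vs for w in half_products(u, v)[0]]

# One-letter case: a * b = (a+1)b + (b+1)a
assert sorted(star((3,), (5,))) == sorted([(4, 5), (6, 3)])
# Dendriform axiom (u < v) < w = u < (v * w) on a small example
u, v, w = (1, 2), (3,), (4,)
assert sorted(prec_lin(prec_lin([u], [v]), [w])) == sorted(prec_lin([u], star(v, w)))
```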
We postpone the study of this dendriform algebra to section \ref{s52}.\\
{\bf Notations.} For all $a_1,\ldots,a_k \in \mathbb{N}^*$, we denote by $l(a_1\ldots a_k)=B_{a_1}\circ \ldots \circ B_{a_k}(1)$ the ladder decorated from the root to the leaf by $a_1,\ldots,a_k$. Note that $m_{a_1\ldots a_k}=\phi_{PL}(l(a_1\ldots a_k))$.
\begin{lemma} Let $k,l\geq 1$ and let $a_1,\ldots,a_k,b_1,\ldots,b_l\in \mathbb{N}^*$. Then: $$\phi_{PL}\left(B_{a_1+1}(l(a_2\ldots a_k)l(b_1\ldots b_l))+B_{b_1+1}(l(a_1\ldots a_k)l(b_2\ldots b_l))\right) =m_{a_1\ldots a_k \star b_1\ldots b_l}.$$ \end{lemma}
\begin{proof} By induction on $k+l$. If $k=l=1$, then: $$\phi_{PL}(\tddeux{$a_1+1$}{$b_1$}\hspace{.5cm}+\tddeux{$b_1+1$}{$a_1$}\hspace{.5cm}) =m_{(a_1+1)b_1+(b_1+1)a_1}=m_{a_1 \star b_1}.$$ Let us assume the result at all ranks $<k+l$. If $k=1$, then, by corollary \ref{29} and the induction hypothesis: \begin{align*} &\phi_{PL}(B_{a_1+1}(l(b_1\ldots b_l))+B_{b_1+1}(l(a_1)l(b_2\ldots b_l)))\\ &=\phi_{PL}(\tdun{$a_1+1$}\hspace{.5cm} \bullet l(b_1\ldots b_l)+\tdun{$b_1+1$}\hspace{.5cm}\bullet (l(a_1) l(b_2\ldots b_l)))\\ &=\phi_{PL}(l((a_1+1)b_1\ldots b_l))+\phi_{PL}(\tdun{$b_1$}\bullet (l((a_1+1)b_2\ldots b_l)+\tdun{$b_2+1$} \hspace{.5cm}\bullet(l(a_1)l(b_3\ldots b_l))))\\ &=m_{(a_1+1)b_1\ldots b_l}+m_{b_1(a_1\star b_2\ldots b_l)}\\ &=m_{(a_1+1)b_1\ldots b_l}+\sum_{i=1}^{l-1}m_{b_1\ldots b_i (a_1+1)b_{i+1}\ldots b_l}+m_{b_1\ldots b_{l-1}(b_l+1)a_1}\\ &=m_{a_1\star b_1\ldots b_l}. \end{align*} If $l=1$, a similar computation, permuting the $a_i$'s and the $b_j$'s, proves the result. If $k,l>1$, then, by corollary \ref{29} and the induction hypothesis: \begin{align*} &\phi_{PL}(B_{a_1+1}(l(a_2\ldots a_k)l(b_1\ldots b_l))+B_{b_1+1}(l(a_1\ldots a_k)l(b_2\ldots b_l)))\\ &=\phi_{PL}(\tdun{$a_1$}\bullet(\tdun{$a_2+1$}\hspace{.5cm} \bullet l(a_3\ldots a_k)l(b_1\ldots b_l)+ \tdun{$b_1+1$} \hspace{.5cm} \bullet l(a_2\ldots a_k)l(b_2\ldots b_l)))\\ &+\phi_{PL}(\tdun{$b_1$}\bullet(\tdun{$a_1+1$}\hspace{.5cm} \bullet l(a_2\ldots a_k)l(b_2\ldots b_l)+ \tdun{$b_2+1$} \hspace{.5cm} \bullet l(a_1\ldots a_k)l(b_3\ldots b_l)))\\ &=m_{a_1(a_2\ldots a_k \star b_1\ldots b_l)+b_1(a_1\ldots a_k\star b_2\ldots b_l)}\\ &=m_{a_1\ldots a_k \star b_1\ldots b_l}. \end{align*} Hence, the result holds for all $k,l\geq 1$. \end{proof}
\begin{theo}\label{34} For all $a_1,\ldots,a_k,b_1,\ldots,b_l \in \mathbb{N}^*$: $$m_{a_1\ldots a_k} \bullet m_{b_1\ldots b_l}=\displaystyle \sum_{i=1}^{k-1} m_{a_1\ldots a_{i-1}(a_i-1)(a_{i+1}\ldots a_k \star b_1\ldots b_l)}+m_{a_1\ldots a_kb_1\ldots b_l}.$$ \end{theo}
\begin{proof} By definition of $m_{a_1 b_1\ldots b_l}$, if $k=1$, $m_{a_1}\bullet m_{b_1\ldots b_l}=m_{a_1b_1\ldots b_l}$. So the result holds if $k=1$. Let us assume that $k\geq 2$. In $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$, we have: $$l(a_1\ldots a_k) \bullet l(b_1\ldots b_l) =\tdun{$a_1$}\bullet (l(a_2\ldots a_k)\bullet l(b_1\ldots b_l))+\tdun{$a_1$}\bullet (l(a_2\ldots a_k)l(b_1\ldots b_l)).$$ Applying $\phi_{PL}$, by corollary \ref{29}: \begin{align*} m_{a_1\ldots a_k}\bullet m_{b_1\ldots b_l}&=m_{a_1 (a_2\ldots a_k)\bullet (b_1\ldots b_l)}\\ &+\phi_{PL}(\tdun{$a_1-1$}\hspace{.5cm}\bullet(\tdun{$a_2+1$}\hspace{.6cm}\bullet l(a_3\ldots a_k)l(b_1\ldots b_l) +\tdun{$b_1+1$}\hspace{.5cm}\bullet l(a_2\ldots a_k)l(b_2\ldots b_l)))\\ &=m_{a_1 (a_2\ldots a_k)\bullet (b_1\ldots b_l)}+m_{(a_1-1)(a_2\ldots a_k\star b_1\ldots b_l)}, \end{align*} by the preceding lemma. The result follows from an easy induction. \end{proof}\\
{\bf Remark.} In particular, $m_1\bullet m_{b_1\ldots b_l}=m_{1b_1\ldots b_l}=0$.
\begin{cor} Let $a_1\ldots a_k,b_1\ldots b_l$ be two words with letters in $\mathbb{N}^*$. Then $m_{a_1\ldots a_k}\bullet m_{b_1\ldots b_l}$ is a linear combination of elements $m_w$, where $w$ is a word with $k+l$ letters and of weight $a_1+\ldots+a_k+b_1+\ldots+b_l$. \end{cor}
Hence, $\K\langle x_0,x_1\rangle$ is a bigraded prelie algebra, with: $$\K\langle x_0,x_1\rangle_{n,k}=Vect(m_{a_1\ldots a_k}\mid a_1+\ldots +a_k=n).$$ We put: $$G=\sum_{k,n\geq 0} dim(\K\langle x_0,x_1\rangle_{n,k})X^nY^k.$$
\begin{prop} $\displaystyle G=\frac{XY}{1-X-X^2Y}=\sum_{k=1}^\infty \sum_{l=2k-1}^\infty \binom{l-k}{k-1}X^l Y^k$. \end{prop}
\begin{proof} Note that $dim(\K\langle x_0,x_1\rangle_{n,k})$ is the number of words $a_1\ldots a_k$ of length $k$, such that $a_1,\ldots,a_{k-1}\geq 2$, and $a_1+\ldots+a_k=n$. Hence: $$G=\sum_{k=1}^\infty \left(\frac{X^2Y}{1-X}\right)^{k-1} \frac{XY}{1-X} =\frac{XY}{1-X}\frac{1}{1-\frac{X^2Y}{1-X}}=\frac{XY}{1-X-X^2Y}.$$ An easy development in formal series gives the second formula. \end{proof}
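The closed formula for the bigraded dimensions can be checked against a direct enumeration of admissible words; a short Python sketch (the helper name is ours):

```python
from math import comb

def admissible_words(n, k):
    """Words a_1...a_k with a_1, ..., a_{k-1} >= 2, a_k >= 1, of weight n."""
    if k == 1:
        return [(n,)] if n >= 1 else []
    return [(a,) + w for a in range(2, n) for w in admissible_words(n - a, k - 1)]

# dim K<x0,x1>_{l,k} = binomial(l-k, k-1) for l >= 2k-1, and 0 otherwise
for k in range(1, 6):
    for l in range(1, 16):
        expected = comb(l - k, k - 1) if l >= 2 * k - 1 else 0
        assert len(admissible_words(l, k)) == expected
```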
\subsection{An associative product on $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$}
We now define an associative product on $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$, in such a way that $\phi_{PL}$ becomes a morphism of Com-Prelie algebras.
\begin{prop} We define a product $\, \shuffl \,$ on $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$ by: $$B_p(s_1\ldots s_k) \, \shuffl \, B_q (t_1\ldots t_l)=\binom{p+q-k-l-2}{p-k-1} B_{p+q-1}(s_1\ldots s_k t_1\ldots t_l).$$ Then $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$ is a Com-Prelie algebra and $\phi_{PL}$ is a morphism of Com-Prelie algebras. \end{prop}
\begin{proof} As $\binom{p+q-k-l-2}{p-k-1}=\binom{p+q-k-l-2}{q-l-1}$, $\, \shuffl \,$ is commutative. Let $t=B_p( s_1\ldots s_k)$, $t'=B_q(t_1\ldots t_l)$ and $t''=B_r(u_1\ldots u_m)$. Then: \begin{align*} t\, \shuffl \, (t'\, \shuffl \, t'')=\underbrace{\binom{q+r-l-m-2}{q-l-1}\binom{p+q+r-k-l-m-3}{q+r-l-m-2}}_{A} B_{p+q+r-2} (s_1\ldots s_kt_1\ldots t_lu_1\ldots u_m),\\ (t\, \shuffl \, t')\, \shuffl \, t''=\underbrace{\binom{p+q-k-l-2}{p-k-1}\binom{p+q+r-k-l-m-3}{p+q-k-l-2}}_{B} B_{p+q+r-2} (s_1\ldots s_kt_1\ldots t_lu_1\ldots u_m). \end{align*} If $p\leq k$ or $q\leq l$ or $r\leq m$, then $A=B=0$. If $p>k$ and $q>l$ and $r>m$, then: $$A=B=\frac{(p+q+r-k-l-m-3)!}{(p-k-1)!(q-l-1)!(r-m-1)!}.$$ So $\, \shuffl \,$ is associative. \\
Let $t_1=B_p( s_1\ldots s_k)$, $t_2=B_q(u_1\ldots u_l)$ and $t \in \mathcal{T}(\mathbb{N}^*)$. Then: \begin{align*} (t_1\, \shuffl \, t_2)\bullet t&=\binom{p+q-k-l-2}{p-k-1}B_{p+q-1}(s_1\ldots s_ku_1\ldots u_lt)\\ &+\sum_{i=1}^k\binom{p+q-k-l-2}{p-k-1} B_{p+q-1}(s_1\ldots (s_i\bullet t) \ldots s_ku_1\ldots u_l)\\ &+\sum_{j=1}^l\binom{p+q-k-l-2}{p-k-1} B_{p+q-1}(s_1\ldots s_ku_1\ldots (u_j\bullet t) \ldots u_l),\\ (t_1\bullet t)\, \shuffl \, t_2&=\left(\sum_{i=1}^k B_p(s_1\ldots (s_i \bullet t)\ldots s_k)+B_p(s_1\ldots s_kt)\right)\, \shuffl \, t_2\\ &=\sum_{i=1}^k\binom{p+q-k-l-2}{p-k-1} B_{p+q-1}(s_1\ldots (s_i\bullet t) \ldots s_ku_1\ldots u_l)\\ &+\binom{p+q-k-l-3}{p-k-2}B_{p+q-1}(s_1\ldots s_ku_1\ldots u_lt),\\ t_1\, \shuffl \, (t_2\bullet t)&=t_1\, \shuffl \, \left(\sum_{j=1}^l B_q(u_1\ldots (u_j\bullet t)\ldots u_l)+B_q(u_1\ldots u_lt)\right)\\ &=\sum_{j=1}^l\binom{p+q-k-l-2}{p-k-1} B_{p+q-1}(s_1\ldots s_ku_1\ldots (u_j\bullet t) \ldots u_l)\\ &+\binom{p+q-k-l-3}{p-k-1}B_{p+q-1}(s_1\ldots s_ku_1\ldots u_lt). \end{align*} As $\displaystyle \binom{p+q-k-l-3}{p-k-2}+\binom{p+q-k-l-3}{p-k-1}=\binom{p+q-k-l-2}{p-k-1}$, we obtain $(t_1\, \shuffl \, t_2)\bullet t=(t_1\bullet t)\, \shuffl \, t_2+t_1\, \shuffl \, (t_2\bullet t)$. So $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$ is Com-Prelie.\\
Let $t_1=B_p(s_1\ldots s_k)$ and $t_2=B_q( t_1\ldots t_l)$. If $k\geq p$, then $\displaystyle \binom{p+q-k-l-2}{p-k-1}=0$, so $t_1\, \shuffl \, t_2=0$. By proposition \ref{28}, $\phi_{PL}(t_1)=0$, so $\phi_{PL}(t_1\, \shuffl \, t_2) =\phi_{PL}(t_1)\, \shuffl \, \phi_{PL}(t_2)=0$. Similarly, if $l\geq q$, $\phi_{PL}(t_1\, \shuffl \, t_2)=\phi_{PL}(t_1) \, \shuffl \, \phi_{PL}(t_2)=0$. If $k<p$ and $l<q$, we put $w_i=\phi_{PL}(s_i)$ and $w'_j=\phi_{PL}(t_j)$. Then: \begin{align*} \phi_{PL}(t_1)\, \shuffl \, \phi_{PL}(t_2)&=x_0w_1\, \shuffl \, \ldots \, \shuffl \, x_0w_k \, \shuffl \, x_1^{p-1-k} \, \shuffl \, x_0w'_1\, \shuffl \, \ldots \, \shuffl \, x_0w'_l \, \shuffl \, x_1^{q-1-l}\\ &=\binom{p+q-k-l-2}{p-k-1}x_0w_1\, \shuffl \, \ldots x_0w'_l\, \shuffl \, x_1^{p+q-k-l-2}\\ &=\binom{p+q-k-l-2}{p-k-1}\phi_{PL}(B_{p+q-1}(s_1\ldots s_k t_1\ldots t_l))\\ &=\phi_{PL}(t_1\, \shuffl \, t_2). \end{align*} So $\phi_{PL}$ is a Com-Prelie algebra morphism. \end{proof}\\
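The coefficient identity $A=B$ used in the associativity argument above is a disguised multinomial identity, with $a=p-k-1$, $b=q-l-1$, $c=r-m-1$. A small numerical check (the helper names are ours; \texttt{binom} implements the convention that a binomial coefficient vanishes outside $0\le k\le n$):

```python
from math import comb, factorial

def binom(n, k):
    """Binomial coefficient, with the convention that it vanishes outside 0 <= k <= n."""
    return comb(n, k) if 0 <= k <= n else 0

def A(p, q, r, k, l, m):
    return (binom(q + r - l - m - 2, q - l - 1)
            * binom(p + q + r - k - l - m - 3, q + r - l - m - 2))

def B(p, q, r, k, l, m):
    return (binom(p + q - k - l - 2, p - k - 1)
            * binom(p + q + r - k - l - m - 3, p + q - k - l - 2))

def multinomial(a, b, c):
    # (a + b + c)! / (a! b! c!)
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))
```

With $a,b,c\ge 0$ one has $A=B=\binom{b+c}{b}\binom{a+b+c}{b+c}=\binom{a+b}{a}\binom{a+b+c}{a+b}=\frac{(a+b+c)!}{a!\,b!\,c!}$, and both products vanish as soon as one of $p\le k$, $q\le l$, $r\le m$ holds.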
{\bf Remark.} By the proof of proposition \ref{28}, we have a commutative diagram of prelie algebra morphisms: $$\xymatrix{\mathfrak{g}_{\mathcal{PT}(\{1,2\})}\ar[r]^{\phi_{CPL}}&\K\langle x_0,x_1\rangle\\ \mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}\ar[u]^{\psi} \ar[ru]_{\phi_{PL}}&}$$ Moreover, $\phi_{CPL}$ is a morphism of Com-Prelie algebras. With the commutative, associative product previously defined on $\mathfrak{g}_{\mathcal{T}(\mathbb{N}^*)}$, $\phi_{PL}$ is now a morphism of Com-Prelie algebras. However, $\psi$ is not compatible with $\, \shuffl \,$. Indeed, $\psi(\tddeux{$2$}{$1$})=\psi(\tdun{$2$})\bullet \psi(\tdun{$1$})=\tddeux{$2$}{$1$}$, so: $$\psi(\tddeux{$2$}{$1$})\, \shuffl \, \psi(\tddeux{$2$}{$1$})=\tddeux{$2$}{$1$}\, \shuffl \, \tddeux{$2$}{$1$} =\hdquatretreize{$2$}{$1$}{$2$}{$1$}.$$ Moreover, $\tddeux{$2$}{$1$} \, \shuffl \, \tddeux{$2$}{$1$}=\tdtroisun{$3$}{$1$}{$1$}$, so: $$\psi(\tddeux{$2$}{$1$} \, \shuffl \, \tddeux{$2$}{$1$})=\psi(\tdun{$3$})\bullet \psi(\tdun{$1$})\psi(\tdun{$1$}) =\frac{1}{2}\hddeux{$2$}{$2$} \bullet \tdun{$1$}\tdun{$1$} =\hdquatretreize{$2$}{$1$}{$2$}{$1$}+\hdquatresept{$2$}{$1$}{$1$}{$2$}.$$
\section{Appendix}
\subsection{Enumeration of partitioned trees}
Let $d \geq 1$. For all $n \geq 1$, let $f_n$ be the number of partitioned trees decorated by $\{1,\ldots,d\}$ with $n$ vertices and let $t_n$ be the number of partitioned trees decorated by $\{1,\ldots,d\}$ with $n$ vertices and one root. By convention, $f_0=1$. We put: $$T=\sum_{n=1}^\infty t_n X^n, \: F=\sum_{n=0}^\infty f_n X^n.$$ Let $V_T$ be the vector space generated by the set of partitioned trees decorated by $\{1,\ldots,d\}$ with only one root and $V_F$ be the vector space generated by the set of partitioned trees decorated by $\{1,\ldots,d\}$. There is a bijection: $$\begin{cases} S(V_T)&\longrightarrow V_F\\ t_1\ldots t_k&\longrightarrow t_1\, \shuffl \, \ldots \, \shuffl \, t_k. \end{cases}$$ Hence: \begin{equation} \label{E2}F=\prod_{k=1}^\infty \frac{1}{(1-X^k)^{t_k}}. \end{equation} There is a bijection: $$\begin{cases} \displaystyle \bigoplus_{i=1}^dS(V_F)&\longrightarrow V_T\\ (F_{1,1},\ldots,F_{1,k_1},\ldots,F_{d,1},\ldots, F_{d,k_d})&\longrightarrow \displaystyle \sum_{i=1}^d \tdun{$i$}\bullet(F_{i,1}\ldots F_{i,k_i}). \end{cases}$$ This gives: \begin{equation} \label{E3}T=dX \prod_{k=1}^\infty \frac{1}{(1-X^k)^{f_k}}. \end{equation} Formulas (\ref{E2}) and (\ref{E3}) allow one to compute inductively $f_k$ and $t_k$ for all $k\geq 1$. This gives: $$\begin{cases} f_1&=\displaystyle d\\ f_2&=\displaystyle \frac{d(3d+1)}{2}\\ f_3&=\displaystyle \frac{d(19d^2+9d+2)}{6}\\ f_4&=\displaystyle \frac{d(63d^3+34d^2+13d+2)}{8}\\ f_5&=\displaystyle \frac{d(644d^4+400d^3+175d^2+35d+6)}{30} \end{cases}$$ Here are examples of $f_n$ for $d=1$ and $2$:
$$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|} \hline n&1&2&3&4&5&6&7&8&9&10\\ \hline d=1&1&2&5&14&42&134&444&1518&5318&18989\\ \hline d=2&2&7&32&167&952&5759&36340&236498&1576156&10702333\\ \hline\end{array}$$ The row $d=1$ is sequence $A035052$ of \cite{Sloane}.
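The recursion behind these generating series is easy to check numerically. Below is a minimal sketch (function and variable names are ours) that computes $t_n$ and $f_n$ from the truncated products $F=\prod_k(1-X^k)^{-t_k}$ and $T=dX\prod_k(1-X^k)^{-f_k}$ (the exponent $f_k$ is the one that reproduces the table above) and compares the result with the table:

```python
from math import comb

def prod_coeffs(exps, N):
    """Coefficients up to X^N of prod_{k >= 1} (1 - X^k)^(-exps[k])."""
    c = [1] + [0] * N
    for k, e in exps.items():
        if k > N or e == 0:
            continue
        nc = [0] * (N + 1)
        for n in range(N + 1):
            for j in range(n // k + 1):
                # (1 - X^k)^(-e) = sum_j C(e + j - 1, j) X^(k j)
                nc[n] += c[n - k * j] * comb(e + j - 1, j)
        c = nc
    return c

def partitioned_tree_counts(d, N):
    """t[n]: partitioned trees with n vertices and one root;
    f[n]: all partitioned trees with n vertices (decorations in {1,...,d})."""
    t, f = {}, {}
    for n in range(1, N + 1):
        t[n] = d * prod_coeffs(f, n - 1)[n - 1]  # t_n = d [X^{n-1}] prod (1-X^k)^{-f_k}
        f[n] = prod_coeffs(t, n)[n]              # f_n =   [X^n]   prod (1-X^k)^{-t_k}
    return t, f
```

For $d=1$ this reproduces $1,2,5,14,42,134,\ldots$, in agreement with the first row of the table.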
\subsection{Study of the dendriform structure on admissible words}
\label{s52} We study here the dendriform algebra $\mathbb{K}\langle\mathbb{N}^*\rangle$ of proposition \ref{32}. It is clearly commutative, via the bijection from $Sh_\prec(k,l)$ to $Sh_\succ(l,k)$ given by the composition (on the left) by the permutation $(l+1\ldots l+k\: 1\ldots l)$: in other words, it is a Zinbiel algebra \cite{Loday3}.
Let $V$ be a vector space. The shuffle dendriform algebra $Sh(V)$ is $T_+(V)$, with the products given by: \begin{eqnarray*} (a_1\ldots a_k)\prec(b_1\ldots b_l)&=&\sum_{\zeta \in Sh_\prec(k,l)}\zeta.a_1\ldots a_k b_1\ldots b_{l-1}b_l\\ (a_1\ldots a_k)\succ(b_1\ldots b_l)&=&\sum_{\zeta \in Sh_\succ(k,l)}\zeta.a_1\ldots a_{k-1}a_k b_1\ldots b_l. \end{eqnarray*} Moreover, this is the free commutative dendriform algebra generated by $V$, that is to say, if $A$ is a commutative dendriform algebra and $f:V\longrightarrow A$ is any linear map, there exists a unique morphism of dendriform algebras $\phi:Sh(V)\longrightarrow A$ such that $\phi_{\mid V}=f$. As $a_1\ldots a_k \succ b=a_1\ldots a_k b$ in $Sh(V)$ for all $a_1,\ldots,a_k,b \in V$, this morphism $\phi$ is defined by: $$\phi(a_1\ldots a_k)=(\ldots((a_1\succ a_2)\succ a_3)\ldots)\succ a_k.$$
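A quick computational check of this commutativity. The sketch below (names are ours) assumes the half-shuffle convention suggested by $a_1\ldots a_k\succ b=a_1\ldots a_kb$: $u\prec v$ collects the shuffles whose last letter comes from $u$, and $u\succ v$ those whose last letter comes from $v$; then $u\prec v=v\succ u$, which is the Zinbiel/commutativity statement, and $\prec$ plus $\succ$ gives the full shuffle product:

```python
from collections import Counter

def shuffle(u, v):
    """Multiset (as a Counter) of the shuffles of the words u and v (tuples)."""
    if not u or not v:
        return Counter({u + v: 1})
    out = Counter()
    for w, c in shuffle(u[:-1], v).items():
        out[w + (u[-1],)] += c
    for w, c in shuffle(u, v[:-1]).items():
        out[w + (v[-1],)] += c
    return out

def prec(u, v):
    """u 'prec' v: the shuffles of u and v whose last letter comes from u."""
    return Counter({w + (u[-1],): c for w, c in shuffle(u[:-1], v).items()})

def succ(u, v):
    """u 'succ' v: the shuffles of u and v whose last letter comes from v."""
    return Counter({w + (v[-1],): c for w, c in shuffle(u, v[:-1]).items()})
```

For instance, $\mathrm{succ}((1,2),(3))$ is the single word $(1,2,3)$, matching $a_1a_2\succ b=a_1a_2b$.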
\begin{prop}\begin{enumerate} \item Let $V$ be the space generated by the words $1^k i$, $k\in \mathbb{N}$, $i\geq 1$. Then $\mathbb{K}\langle\mathbb{N}^*\rangle$ is isomorphic, as a dendriform algebra, to $Sh(V)$. \item Let $A$ be the subspace of $\mathbb{K}\langle\mathbb{N}^*\rangle$ generated by admissible words. Then it is a dendriform subalgebra of $\mathbb{K}\langle\mathbb{N}^*\rangle$. Moreover, if $W$ is the space generated by the letters $i$, $i\geq 1$, then $A$ is isomorphic, as a dendriform algebra, to $Sh(W)$. \end{enumerate}\end{prop}
\begin{proof} Let $w=a_1\ldots a_k$ be a word with letters in $\mathbb{N}^*$. We denote by $o(w)$ the sequence of indices $j\in \{1,\ldots,k-1\}$ such that $a_j\neq 1$. These sequences are totally ordered in the following way: $(j_1,\ldots,j_k)<(j'_1,\ldots,j'_l)$ if there exists a $p$ such that $j_k=j'_l$, $j_{k-1}=j'_{l-1}$, $\ldots$, $j_{k-p+1}=j'_{l-p+1}$, $j_{k-p}<j'_{l-p}$, with the convention $j_0=j_{-1}=\ldots=j'_0=j'_{-1}=\ldots=0$. \\
Let $\phi:Sh(V)\longrightarrow \mathbb{K}\langle\mathbb{N}^*\rangle$ be the unique morphism of dendriform algebras which extends the identity of $V$. Then: \begin{eqnarray*} \phi((1^{k_1-1}a_1)\ldots (1^{k_n-1}a_n))&=&1^{k_1-1}(a_1+1)\ldots 1^{k_{n-1}-1}(a_{n-1}+1)1^{k_n-1} a_n\\ &&+\mbox{words $w'$ such that $o(w')>(k_1,\ldots,k_{n-1})$}. \end{eqnarray*} By triangularity, $\phi$ is an isomorphism. Moreover, for all $a_1,\ldots,a_n \geq 1$: $$\phi(a_1\ldots a_n)=(a_1+1)\ldots(a_{n-1}+1)a_n.$$ Consequently, $\phi(Sh(W))=A$, so $A$ is a dendriform subalgebra of $\mathbb{K}\langle\mathbb{N}^*\rangle$ and is isomorphic to $Sh(W)$. \end{proof}
\subsection{Freeness of the pre-Lie algebra $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$}
{\bf Notations.} Let $k\geq 1$, $d_1,\ldots,d_k \in \mathcal{D}$ and let $F_1,\ldots, F_k$ be decorated partitioned forests. We put: $$B_{d_1,\ldots,d_k}(F_1,\ldots, F_k)=(\tdun{$d_1$}\bullet F_1) \, \shuffl \,\ldots \, \shuffl \, (\tdun{$d_k$}\bullet F_k).$$ Note that any partitioned tree can be written in the form $B_{d_1,\ldots,d_k}(F_1,\ldots, F_k)$. This writing is unique up to a common permutation of the $d_i$'s and the $F_i$'s.
\begin{prop} We define a coproduct $\delta$ on $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ in the following way: for any decorated partitioned tree $t=B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1},\ldots, t_{k,1}\ldots t_{k,n_k})$, $$\delta(t)=\frac{1}{k} \sum_{i=1}^k \sum_{j=1}^{n_i}B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1}, \ldots,t_{i,1}\ldots t_{i,j-1}t_{i,j+1}\ldots t_{i,n_i},\ldots,t_{k,1}\ldots t_{k,n_k})\otimes t_{i,j}.$$ \begin{enumerate} \item For all $x \in \mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$, $(\delta \otimes Id)\circ \delta(x)=(23)(\delta \otimes Id)\circ \delta(x)$. \item For all $x,y\in \mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$, $\delta(x \bullet y)=x \otimes y+\delta(x) \bullet y$. \end{enumerate}\end{prop}
\begin{proof} 1. Let $t=B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1},\ldots, t_{k,1}\ldots t_{k,n_k})$. For all $i,j$, we put: $$t/t_{i,j}= B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1}, \ldots,t_{i,1}\ldots t_{i,j-1}t_{i,j+1}\ldots t_{i,n_i},\ldots,t_{k,1}\ldots t_{k,n_k}).$$ Then: $$\delta(t)=\frac{1}{k}\sum_{i,j} t/t_{i,j}\otimes t_{i,j}.$$ Hence: $$(\delta \otimes Id)\circ \delta(t)=\frac{1}{k^2}\sum_{(i,j)\neq (i',j')} (t/t_{i,j})/t_{i',j'}\otimes t_{i',j'} \otimes t_{i,j}.$$ As $(t/t_{i,j})/t_{i',j'}$ and $(t/t_{i',j'})/t_{i,j}$ are both the partitioned tree obtained by cutting $t_{i,j}$ and $t_{i',j'}$ in $t$, they are equal, so $(\delta \otimes Id)\circ \delta(t)$ is invariant under the action of $(23)$. \\
2. Let $t'$ be a decorated partitioned tree. \begin{eqnarray*} \delta(t\bullet t')&=&\sum_{i=1}^k \delta(B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1},\ldots, t_{i,1}\ldots t_{i,n_i}t',\ldots,t_{k,1}\ldots t_{k,n_k}))\\ &&+\sum_{i,j} \delta(B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1},\ldots, t_{i,1}\ldots t_{i,j}\bullet t' \ldots t_{i,n_i},\ldots,t_{k,1}\ldots t_{k,n_k}))\\ &=&\frac{1}{k} kt\otimes t'+\frac{1}{k}\sum_i\sum_{i',j'} B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1},\ldots, t_{i,1}\ldots t_{i,n_i}t',\ldots,t_{k,1}\ldots t_{k,n_k})/t_{i',j'}\otimes t_{i',j'} \\ &&+\frac{1}{k}\sum_{(i,j) \neq (i',j')}B_{d_1,\ldots,d_k}(t_{1,1}\ldots t_{1,n_1},\ldots, t_{i,1}\ldots t_{i,j}\bullet t' \ldots t_{i,n_i},\ldots,t_{k,1}\ldots t_{k,n_k})/t_{i',j'}\otimes t_{i',j'}\\ &&+\frac{1}{k}\sum_{i,j}t/t_{i,j}\otimes t_{i,j} \bullet t' \\ &=&t\otimes t'+\sum t^{(1)}\bullet t' \otimes t^{(2)}+\sum t^{(1)}\otimes t^{(2)}\bullet t'. \end{eqnarray*} So $\delta(t \bullet t')=t\otimes t'+\delta(t) \bullet t'$. \end{proof} \\
By Livernet's pre-Lie rigidity theorem \cite{Livernet}:
\begin{cor} The pre-Lie algebra $\mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$ is freely generated by $Ker(\delta)$. \end{cor}
{\bf Remarks.} \begin{enumerate} \item It is not difficult to prove that for any $x,y \in \mathfrak{g}_{\mathcal{PT}(\mathcal{D})}$: $$\delta(x \, \shuffl \, y)=\sum x^{(1)}\otimes x^{(2)}\, \shuffl \, y+\sum y^{(1)} \otimes x \, \shuffl \, y^{(2)}.$$ Hence, $Ker(\delta)$ is an algebra for the product $\, \shuffl \,$. \item Here are elements of $Ker(\delta)$ in the non-decorated case. Let $t_1,t_2,t_3,t_4$ be partitioned trees. \begin{eqnarray*} X&=&B(t_1t_2,1)-B(t_1,t_2),\\ Y&=&B(t_1t_2t_3,1,1)-B(t_1t_2,t_3,1)-B(t_1t_3,t_2,1)-B(t_2t_3,t_1,1)+2B(t_1,t_2,t_3),\\ Z&=&B(t_1t_2t_3t_4,1)-B(t_1t_2t_3,t_4)-B(t_1t_2t_4,t_3)-B(t_1t_3t_4,t_2)-B(t_2t_3t_4,t_1)\\ &&+B(t_1t_2,t_3t_4)+B(t_1t_3,t_2t_4)+B(t_1t_4,t_2t_3),\\ T&=&B(t_1t_2,t_3t_4,1,1)+B(t_1t_3,t_2t_4,1,1)+B(t_1t_4,t_2t_3,1,1)-B(t_1t_2,t_3,t_4,1)\\ &&-B(t_1t_3,t_2,t_4,1)-B(t_1t_4,t_2,t_3,1)-B(t_2t_3,t_1,t_4,1)-B(t_2t_4,t_1,t_3,1)\\ &&-B(t_3t_4,t_1,t_2,1)+3B(t_1,t_2,t_3,t_4). \end{eqnarray*}
\end{enumerate}
\end{document}
\begin{document}
\title[Limit theorems for topological invariants] {Limit theorems for topological invariants of the dynamic multi-parameter simplicial complex} \author{Takashi Owada} \address{Department of Statistics\\ Purdue University \\ IN, 47907, USA} \email{owada@purdue.edu} \author{Gennady Samorodnitsky} \address{School of Operations Research and Information Engineering\\ Cornell University \\ NY, 14853, USA} \email{gs18@cornell.edu} \author{Gugan Thoppe} \address{Department of Computer Science and Automation \\ Indian Institute of Science \\ Bengaluru, India} \email{gthoppe@iisc.ac.in}
\thanks{Owada's and Thoppe's research are partially supported by NSF grants
DMS-1811428 and DMS 17-13012, respectively. Samorodnitsky's research is partially supported by the ARO grant W911NF-18-10318 at Cornell University.}
\subjclass[2010]{Primary 60F17. Secondary 55U05, 60C05, 60F15. } \keywords{Functional central limit theorem, Functional strong law of large numbers, Betti number, Euler characteristic, multi-parameter simplicial complex.}
\begin{abstract}
\noindent Topological study of existing random simplicial complexes is non-trivial and has led to several seminal works. The applicability of such studies is, however, limited since the randomness in these models is usually governed by a single parameter. With this in mind, we focus here on the topology of the recently proposed multi-parameter random simplicial complex. In particular, we introduce a dynamic variant of this model and look at how its topology evolves. In this dynamic setup, the temporal evolution of simplices is determined by stationary and possibly non-Markovian processes with a renewal structure. Special cases of this setup include the dynamic versions of the clique complex and the Linial-Meshulam complex. Our key result concerns the regime where the face-count of a particular dimension dominates. We show that the Betti number corresponding to this dimension and the Euler characteristic satisfy a functional strong law of large numbers and a functional central limit theorem. Surprisingly, in the latter result, the limiting Gaussian process depends only upon the dynamics in the smallest non-trivial dimension.
\end{abstract}
\maketitle
\section{Introduction} \label{s:introduction}
The classical Erd\"os-R\'enyi graph $G(n, p)$ is a random graph on $n$ vertices in which each edge is present with probability $p$ independently. Even in such a simple model, answering topological questions such as the threshold (in terms of the rate of decay of $p = p_n$ as $n\to\infty$) for connectivity (\cite{erdos:renyi:1959}) or for the existence of cycles (\cite{pittel:1988}) is completely non-trivial. Not surprisingly then, such a study becomes even more interesting and difficult when posed in the context of random simplicial complexes---the higher dimensional generalizations of random graphs. Our focus in this work is on the general multi-parameter model of combinatorial random simplicial complexes introduced by \cite{costa:farber:2016,costa:farber:2017}.
A summary of the recent progress made in the study of random complexes generalizing the Erd\"os-R\'enyi graph is as follows. The natural complex built over any graph is its {\it clique complex}, otherwise known as the {\it flag complex}, in which a set of vertices form a face or a simplex if they form a clique in the original graph. The topological properties of the random clique complex built over the Erd\"os-R\'enyi graph were studied in \cite{kahle:2009}. This paper revealed, in particular, the existence of a ``dominating dimension'', i.e.,~Betti numbers\footnote{The $k$th Betti number is a count of ``holes" of dimension $k + 1.$} of this dimension significantly exceed those of other dimensions, at least on average.
The $k$-dimensional Linial-Meshulam complex is another important extension of the Erd\"os-R\'enyi graph. The $k = 2$ case of this model was introduced by \cite{linial:meshulam:2006}, which was then extended to general $k$ by \cite{meshulam:wallach:2009}. Here, one starts with a full $(k-1)$-skeleton on $n$ vertices and then adds $k$-simplices with probability $p$ independently. Recently, topological features of the $k$-dimensional Linial-Meshulam complex, with potential $k$-simplices weighted by independent standard uniform random variables, were investigated by \cite{hiraoka:shirai:2017}, \cite{hino:2019}, \cite{Skraba:2020}, and \cite{Fraiman:2020}.
The multi-parameter model introduced in \cite{costa:farber:2016,costa:farber:2017} is a generalization of all of these models (see the next section for the formal definition). It was analyzed to some extent in \cite{fowler:2019}, in which it was shown that a dominating dimension exists in this model as well. In this work, we go beyond and examine the topological behavior in this dominating dimension as well as study its deviation from the expected behavior.
\cite{kahle:meckes:2013} did such a study in the context of random clique complexes and proved a central limit theorem for the dominating Betti number. To obtain an even deeper understanding, \cite{thoppe:yogeshwaran:adler:2016} investigated the topological fluctuations in the dynamic variant of this model. Specifically, they considered the setup in which every edge can change its state between being ON and being OFF, i.e., between being present and being absent, at the transition times of a continuous-time Markov chain. They then derived a functional central limit theorem for the Euler characteristic and the dominating Betti number of the resulting dynamic clique complex.
Within the context of the combinatorial simplicial complexes, few attempts have been made at deriving ``functional-level" limit theorems for topological invariants (with a few exceptions such as \cite{thoppe:yogeshwaran:adler:2016}, \cite{Skraba:2020}, and \cite{Fraiman:2020}). Our work fills in this gap. We introduce a dynamic variant of the general multi-parameter random simplicial complex and derive a functional strong law of large numbers and a functional central limit theorem for the Euler characteristic and the dominating Betti number. Both of our results are proved in the space $D[0,\infty)$ of right continuous functions with left limits. Additionally, unlike \cite{thoppe:yogeshwaran:adler:2016}, we do not assume a Markovian structure for the process according to which the faces of the complex are switched on or off. Instead, the evolution here is determined by a stationary process with a renewal structure. Surprisingly, our key results indicate that the limiting Gaussian process in the central limit theorem depends only upon the dynamics of the faces in the smallest non-trivial dimension, irrespective of the dominating dimension. This happens mainly because the faces in the smallest non-trivial dimension are crucial for the existence of all higher order faces.
The generality of our multi-parameter setup forces us to devise new tools not needed under the random clique complex assumptions of \cite{kahle:meckes:2013} and \cite{thoppe:yogeshwaran:adler:2016}. In the latter case, for example, all Betti numbers of order greater than the dominating dimension vanish with high probability. This is, generally, not the case under our general setup. We solve this difficulty by devising new ways of a much more detailed analysis of these Betti numbers; see Section \ref{s:proofs.betti}. New coupling arguments play a crucial role as well, especially in the proof of functional strong laws of large numbers. Such coupling arguments enable one to stochastically dominate the face-counts in the dynamic complex by those of a suitably defined static complex, e.g., see \eqref{e:coupling.trick}. We believe that such arguments could have applications beyond the present context.
This paper is organized as follows. In Section \ref{s:dynamic.multi-para.complex}, we construct the dynamic multi-parameter simplicial complex and study some of its elementary properties. A functional central limit theorem for the face counts in this complex is stated in Section \ref{sec:face.counts}. Section \ref{sec:top.invariants} contains the main theorems for the Euler characteristic and the Betti number in the dominating dimension. The limit theorem for the face counts is proved in Section \ref{s:proofs.facecounts}, and the limit theorems for the Euler characteristic are proved in Section \ref{s:proofs.euler}, while the limit theorems for the Betti numbers in the critical (dominating dimension) are proved in Section \ref{s:proofs.betti}. Some of the proofs are postponed to the Appendix.
The following notation will be used throughout the paper.
The cardinality of a set $A$ will be denoted by $|A|$.
The indicator function of an event will be denoted by ${\mathbbm 1} \{ \cdot \}$. For two positive sequences $(a_n)$ and $(b_n)$ the notation $a_n \sim b_n$ means that $a_n/b_n\to 1$ as $n\to\infty$. The ``fat arrow'' $\Rightarrow$ is reserved for weak convergence, where the topology is obvious from the context (in this paper it is mostly the Skorohod $J_1$-topology on $D[0,\infty)$). The stochastic domination of a random variable $X$ by a random variable $Y$ (meaning that $P(X\leq x)\geq P(Y\leq x)$ for all $x$) is denoted by $X \stackrel{st}{\le} Y$.
\section{The dynamic multi-parameter simplicial
complex} \label{s:dynamic.multi-para.complex}
We begin by recalling the original multi-parameter simplicial complex introduced by \cite{costa:farber:2016,costa:farber:2017}. Starting with the alphabet $[n]=\{1,\ldots, n\}$ and parameters ${\bf p}={\bf p}(n)=(p_1,\ldots,p_{n-1})$ with $p_i\in [0,1], \, i=1,\ldots, n-1,$ one constructs the complex $X([n], {\bf p})$ incrementally, one dimension at a time. Specifically, begin with $X([n], {\bf p})^{(0)}=[n]$. For $i=1,\ldots, n-1$, once the skeleton\footnote{The $i$th skeleton of a complex consists of all of its faces with dimension less than or equal to $i.$} $X([n], {\bf p})^{(i-1)}$ has been constructed, add to $X([n], {\bf p})$ each $i$-simplex\footnote{a subset of $[n]$ with cardinality $i + 1.$} whose boundary is in $X([n], {\bf p})^{(i-1)}$, with probability $p_i$ independently of all other potential $i$-simplices. Note that the probabilities in ${\bf p}$ may depend on $n$.
Next, we define the ``dynamic" version of the multi-parameter simplicial complex with a parameter sequence ${\bf p}$. The key ingredient for our construction is a collection of independent stochastic processes \begin{equation} \label{e:on.off.proc} \big( \Delta_{i,A}(t), \, t\ge 0 \big), \ \ 1\le i \le n-1, \ A\in \mathcal W_i, \end{equation}
where $\mathcal W_i := \big\{ A \subseteq [n]: |A|=i+1 \big\}$. Each of the processes in \eqref{e:on.off.proc} is a $\{ 0,1 \}$-valued stationary process and, for $1 \le i \le n-1$ and $A\in \mathcal W_i$, \begin{equation} \label{e:dynamic.rule} A \text{ forms an } i\text{-face at time } t \ \ \Leftrightarrow \ \ \Delta_{\ell, B}(t)=1 \ \ \text{for all } \ell\in\{ 1,\dots,i \}, \, B\in \mathcal W_\ell \, \text{ with } B\subseteq A. \end{equation} Equivalently, $A$ does not form an $i$-face at time $t$ if and only if $\Delta_{\ell, B}(t)=0$ for some $\ell \in \{ 1,\dots,i \}$ and $B\in \mathcal W_\ell$ with $B\subseteq A$. We say that the process $\Delta_{i,A}$ is ``on" at time $t$ if $\Delta_{i,A}(t)=1$, and it is ``off" otherwise. We assume that,
for each $i \ge 1$, $(\Delta_{i,A}, \, A\in \mathcal W_i)$ constitutes a
family of (independent) processes with a common distribution. We often
drop the subscript $A$ when only the dimension $i$ matters.
To give a clear picture of our model, we provide a simple example for $n = 4$ in Figure~\ref{fig:01.proc}. In this case, there appears a $3$-face on $[4]=\{ 1,2,3,4\}$ if and only if the eleven independent processes $(\Delta_{i,A}, \, 1\le i \le 3, A\in \mathcal W_i)$ are all at an ``on" state. For example, such a $3$-face is present at time $t_0$. At time $t_1$, the process $\Delta_{1, \{1,3\}}$ is ``off", while all the others are ``on." Then, the $2$-faces $[1,2,3]$, $[1,3,4]$ and the $3$-face $[1,2,3,4]$ do not appear in the model, whereas all the other $2$-faces do exist.
\begin{figure}
\caption{ \footnotesize{Eleven independent stochastic processes with $n=4$. Each process stays at an ``on" state whenever a line segment appears, and it is at an ``off" state if the line segment disappears. }}
\label{fig:01.proc}
\end{figure}
We now model each $\Delta_i,$ $i = 1, \ldots, n - 1,$ via a specific $ \{0, 1 \}$-valued stationary renewal process. Let $\big( Z_j^{(i)}, \, j \ge 2 \big)$ be a sequence of iid positive random variables with a common distribution function $G_i$ and a finite positive mean $\mu_i$. The following assumption on the distribution functions $(G_i)$ will be a standing assumption throughout the paper: letting $q:=\min\{ i\ge 1: p_i <1 \}$, assume that \begin{equation} \label{e:assumption.Gi} \text{there is $a>0$ such that} \ \ G_i(a)\leq 1/2 \ \ \text{for each
$i=q,q+1,\ldots$.} \end{equation}
Separately, let $\big( I_j^{(i)}, \, j \ge 0 \big)$ be a sequence of iid Bernoulli variables with parameter $p_i.$ Finally, let $D^{(i)}$ be an \textit{equilibrium random variable} with the distribution \begin{equation} \label{e:equilibrium} \mathbb{P}(D^{(i)} \le x) = \frac{1}{\mu_i} \int_0^x \big( 1-G_i(y) \big)dy =: (G_i)_e(x), \ \ x \ge 0. \end{equation} All the random objects $\big( Z_j^{(i)} \big)$, $\big( I_j^{(i)} \big)$, and $D^{(i)}$ are independent. We define a delayed renewal sequence by $S_0^{(i)}=0$, $S_1^{(i)}=D^{(i)}$, and \begin{equation} \label{e:renewal.seq} S_j^{(i)} = D^{(i)} + \sum_{\ell=2}^j Z_\ell^{(i)}, \ \ j \ge 2, \end{equation} and the corresponding counting process, \begin{equation} \label{e:stat.renewal.proc} N_i(t) = \sum_{j=1}^\infty {\mathbbm 1} \{ S_j^{(i)} \le t \}, \ \ t \ge 0. \end{equation} Since the first renewal time has the equilibrium distribution given by \eqref{e:equilibrium}, the delayed process $N_i$ in \eqref{e:stat.renewal.proc} has stationary increments (\cite{ross:1996}). In particular, $\mathbb{E}\big( N_i(t) \big)=t/\mu_i$ . We finally define
\begin{equation} \label{e:Delta.process.rule} \Delta_i(t) := \sum_{j=0}^\infty {\mathbbm 1} \big\{ S_j^{(i)} \le t < S_{j+1}^{(i)} \big\} I_j^{(i)}, \ \ t \ge 0.
\end{equation}
\begin{definition}
The dynamic multi-parameter simplicial complex $\big( X([n], {\bf p}; t), \, t \ge 0\big)$ on $n$ vertices is defined by \eqref{e:dynamic.rule}. For each dimension $i,$ the temporal evolution of the $i$-dimensional faces is determined by the independent processes $\big( \Delta_{i,A}, \, 1\le i\le n-1, \, A\in \mathcal W_i \big)$ described in \eqref{e:Delta.process.rule}.
\end{definition}
\begin{remark}
As stated below in Lemma~\ref{l:cond.prob.Delta}, $\Delta_i$ is a stationary process for every $i$ which implies that $(X([n], {\bf p}; t), t\geq 0)$ itself is stationary. In fact, for each $t \geq 0,$ $X([n], {\bf p}; t)$ has the same distribution as that of the static multi-parameter simplicial complex in \cite{costa:farber:2016,costa:farber:2017}.
\end{remark}
\begin{remark} If ${\bf p} = (p, 1, 1, \ldots)$ and $G_1(x)=1-e^{-\lambda x}$, $x\ge0$ for some $\lambda>0$, then $X([n], {\bf p}; t)$ is a reparametrization of the dynamic clique complex, for which the evolution of the edges is determined by the $\{ 0,1 \}$-valued stationary continuous-time Markov chain (\cite{thoppe:yogeshwaran:adler:2016}). \end{remark}
The next result formally records the fact that, for each $i,$ $\Delta_i$ is a stationary process. It also states and proves a couple of useful properties concerning it. In particular, it shows that if $p_i$ is small, then $\Delta_i$ is most of time off. \begin{lemma} \label{l:cond.prob.Delta} (i) For every $i \in \{1, \dots,n-1\}$, $\big( \Delta_i(t), \, t \ge 0 \big)$ is a stationary process with $\mathbb{P}\big(\Delta_i(t)=1 \big)=p_i$. In addition, $$
\mathbb{P}\big( \Delta_i(t)=1\, \big|\, \Delta_i(0)=1\big) = 1 - (1-p_i)(G_i)_e(t), \ \ t \ge 0. $$
(ii) For every $i \ge q$ and $T>0$, \begin{equation} \label{e:sup.bound1} \mathbb{P}\big( \sup_{0\le t \le T} \Delta_i(t)=1 \big)
\leq p_i \left( 1+ (1-p_i)\frac{(G_i)_e(T)}{1-G_i(T)} \right). \end{equation} \end{lemma} \begin{proof} The first statement in part (i) is obvious, because the process $N_i(t)$ has stationary increments. For the second one, \begin{align*}
\mathbb{P}\big( \Delta_i(t)=1\, \big|\, \Delta_i(0)=1\big) &= \mathbb{P} \big(
0 \le t < D^{(i)} \bigr) + p_i \mathbb{P} \big(
t \geq D^{(i)} \bigr) \\ &=1-(1-p_i)(G_i)_e(t). \end{align*} For Part $(ii)$, denote $$ K= N_i(T) = \max\bigl\{ j\geq 1: S_j^{(i)}\leq T\bigr\} \ \ \text{($K=0$ if $S_1^{(i)}>T$).} $$ Then, \begin{align*} \mathbb{P}\big( \sup_{0\le t \le T} \Delta_i(t)=1 \big) &= p_i + \mathbb{P}\big( \Delta_i(0)=0, \, \sup_{0 < t \le T} \Delta_i(t)=1 \big) \\ &=p_i + (1-p_i) \mathbb{E}\bigl[ 1-(1-p_i)^K\bigr]. \end{align*} It is clear that $K$ is dominated by $$ K' := \begin{cases} \min\{ j\ge 2: Z_j^{(i)}>T \}-1 & \text{if } D^{(i)} \le T \\ 0 & \text{if } D^{(i)} >T. \end{cases} $$ Evaluating the above expression with $K$ replaced by $K'$ gives us \eqref{e:sup.bound1}. \end{proof}
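The conditional probability formula of Lemma \ref{l:cond.prob.Delta}(i) is easy to test by simulation. Below is a minimal Monte Carlo sketch, assuming exponential interarrivals $G_i(x)=1-e^{-\lambda x}$, for which the equilibrium delay $D^{(i)}$ has the same $\mathrm{Exp}(\lambda)$ law; the function name and the parameter values are ours:

```python
import math
import random

def delta_given_on_at_zero(t, p, lam, rng):
    """Sample Delta_i(t) conditionally on Delta_i(0) = 1 (i.e. I_0 = 1),
    for exponential interarrivals: the equilibrium delay D^{(i)} then has
    the same Exp(lam) law, so every gap can be drawn from Exp(lam)."""
    s = rng.expovariate(lam)               # S_1 = D^{(i)}
    on = 1                                 # I_0 = 1 on [S_0, S_1)
    while s <= t:                          # cross renewals until t < S_{j+1}
        on = 1 if rng.random() < p else 0  # fresh Bernoulli(p) mark I_j
        s += rng.expovariate(lam)          # next interarrival Z_j
    return on

rng = random.Random(7)
p, lam, t, n_samples = 0.3, 1.0, 0.8, 200_000
est = sum(delta_given_on_at_zero(t, p, lam, rng)
          for _ in range(n_samples)) / n_samples
# Lemma: P(Delta_i(t)=1 | Delta_i(0)=1) = 1 - (1-p)(G_i)_e(t) = p + (1-p) e^{-lam t}
exact = p + (1 - p) * math.exp(-lam * t)
```

For these parameters the exact value is $0.3+0.7e^{-0.8}\approx 0.614$, and the empirical frequency agrees to within Monte Carlo error.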
Sometimes we will also impose the following additional assumption on the distributions $(G_i)$. \begin{equation} \label{e:cond.regularity} c := \sup_{i\ge q}\sup_{h>0,\, 0\leq y\leq 1 }\frac{G_i(y+h)-G_i(y)}{h^\gamma} < \infty \ \ \text{for some } 0 < \gamma \le 1. \end{equation} Note that \eqref{e:cond.regularity} holds if the $G_i$'s have a common bounded density function (such as an exponential density).
Under this additional assumption, we have the following estimates. \begin{lemma} \label{l:triples} Assume \eqref{e:cond.regularity}. Then for all $0\leq r<s<t\leq 1$, \begin{equation} \label{e:triple.1} \mathbb{P}\bigl( \Delta_i(r)=0, \Delta_i(s)=1, \Delta_i(t)=0\bigr) \leq \frac{2c}{a} p_i(t-r)^{1+\gamma} \end{equation} and \begin{equation} \label{e:triple.2} \mathbb{P}\bigl( \Delta_i(r)=1, \Delta_i(s)=0, \Delta_i(t)=1\bigr) \leq \frac{2c}{a} p_i^2(t-r)^{1+\gamma}.
\end{equation} \end{lemma} \begin{proof}
The probability on the left-hand side of \eqref{e:triple.1} can be rewritten and bounded as $$
p_i \mathbb{P}\bigl( \Delta_i(r)=0, \Delta_i(t)=0 \, \big|\, \Delta_i(s)=1 \bigr) \leq p_i \mathbb{P}\bigl( A_i(s) \leq s-r, R_i(s) \leq t-s\bigr), $$ where $A_i$ and $R_i$ are, respectively, the age and the residual lifetime of a renewal process \eqref{e:stat.renewal.proc} with the interarrival distribution $G_i$. It then follows from a standard calculation in renewal theory (see e.g., \cite{resnick:1992}) that
\begin{align*} \mathbb{P}\bigl( A_i(s)\leq s-r, R_i(s)\leq t-s\bigr) &= \mathbb{P}(r \le S_{N_i(s)}^{(i)}, \, S_{N_i(s)+1}^{(i)} \le t)\\ &= \frac{1}{\mu_i} \int_0^{s-r} \big( G_i(y+t-s)-G_i(y) \big)dy \leq \frac{2c}{a}(s-r)(t-s)^\gamma. \end{align*} The last inequality comes from \eqref{e:assumption.Gi} and \eqref{e:cond.regularity}. The argument for \eqref{e:triple.2} is similar; since the process $\Delta_i$ is now required to be ``on" in two distinct time intervals, $p_i$ in \eqref{e:triple.1} is replaced by $p_i^2$. \end{proof}
Recall that the probabilities in ${\bf p}$ for the dynamic multi-parameter simplicial complex $X([n],{\bf p}; t)$ may depend on $n$. In the sequel, following \cite{costa:farber:2017}, we ``couple'' ${\bf p}$ with $n$ in a particular way: we set $p_i =n^{-\alpha_i}$, $\alpha_i \in [0,\infty]$ for $i=1,2,\ldots$. Accordingly, we can work with an infinite sequence ${\bm \alpha} = (\alpha_1,\alpha_2,\dots)$, independent of $n$, to control the rates at which the entries in ${\bf p}$ decay. Below, we introduce some additional terms and notation, which we try to keep as consistent as possible with those in \cite{costa:farber:2017}.
Let $$ \psi_j({\bm \alpha}) = \sum_{i=1}^j \binom{j}{i} \alpha_i, \ \ \ j\geq 1. $$ By convention, we set $\binom{j}{i}=0$ whenever $j < i$. Note that $\psi_j({\bm \alpha})$ is non-decreasing in $j$, i.e., $\psi_i ({\bm \alpha}) \leq \psi_j({\bm \alpha})$ for each ${\bm \alpha}$ and $i \le j$. We also let $$ \tau_j ({\bm \alpha}) := j+1 - \sum_{i=1}^j \psi_i({\bm \alpha}) = j+1 - \sum_{i=1}^j\binom{j+1}{i+1}\alpha_i, \ \ 1 \le j \le n-1. $$ Additionally, we consider the following sets of parameters: $$ \mathcal D_j := \big\{ {\bm \alpha}: \psi_j({\bm \alpha}) < 1 < \psi_{j+1}({\bm \alpha}) \big\} $$ for $j\geq 1$ and $\mathcal D_0 := \{ {\bm \alpha}: \psi_1({\bm \alpha})>1 \}$.
Recalling the notation $q = q({\bm \alpha})= \min \{ i \ge 1:\alpha_i>0 \}$ in \eqref{e:assumption.Gi}, note that $$ \psi_j({\bm \alpha}) = 0, \ \ \tau_j({\bm \alpha}) = j+1, \ \ j=1,\dots,q-1. $$ Importantly, if ${\bm \alpha} \in \mathcal D_k$ for some $k\ge q$, then $$ 0< \psi_q({\bm \alpha}) < \dots < \psi_k({\bm \alpha}) < 1 < \psi_{k+1}({\bm \alpha}) < \dots, $$ so that, $$ q = \tau_{q-1}({\bm \alpha}) < \tau_{q}({\bm \alpha}) < \dots < \tau_k({\bm \alpha}) > \tau_{k+1}({\bm \alpha}) > \dots. $$ In this case, the index $k$ is referred to as \textit{the critical
dimension}. Note that $\tau_j({\bm \alpha})$, $j \ge k+1,$ can be negative. Observe also that, for $j > k$, \begin{align} \label{e:simple.lemma} \tau_j({\bm \alpha}) - ( \tau_{j+1}({\bm \alpha}) + \alpha_{j+1}) =& - 1 + \sum_{i=1}^j
\bigg( \binom{j+2}{i+1}-\binom{j+1}{i+1} \bigg)\alpha_i \\ =& - 1 + \sum_{i=1}^j
\binom{j+1}{i}\alpha_i > - 1+\psi_j({\bm \alpha}) >0. \notag \end{align}
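To make the definitions above concrete, the following Python sketch (illustrative only and not part of the paper; the helper names \texttt{psi}, \texttt{tau} and \texttt{critical\_dimension} are ours) evaluates $\psi_j({\bm \alpha})$ and $\tau_j({\bm \alpha})$ for a clique-type parameter ${\bm \alpha}=(0.3,0,0,\dots)$, for which $q=1$ and the critical dimension is $k=3$, and confirms that $\bigl(\tau_j({\bm \alpha})\bigr)$ increases up to $k$ and decreases afterwards.

```python
from math import comb

def psi(j, alpha):
    # psi_j(alpha) = sum_{i=1}^{j} C(j, i) * alpha_i
    return sum(comb(j, i) * alpha[i - 1] for i in range(1, j + 1))

def tau(j, alpha):
    # tau_j(alpha) = j + 1 - sum_{i=1}^{j} C(j+1, i+1) * alpha_i
    return j + 1 - sum(comb(j + 1, i + 1) * alpha[i - 1] for i in range(1, j + 1))

def critical_dimension(alpha, j_max=50):
    # the unique k with psi_k(alpha) < 1 < psi_{k+1}(alpha), if it exists
    for j in range(1, j_max):
        if psi(j, alpha) < 1 < psi(j + 1, alpha):
            return j
    return None

alpha = [0.3] + [0.0] * 20     # clique-type: alpha_1 = 0.3, all other entries zero
k = critical_dimension(alpha)  # psi_j = 0.3 * j here, so k = 3
taus = [tau(j, alpha) for j in range(1, 7)]
# taus is unimodal with its maximum at j = k = 3
```

Here $\tau_1,\dots,\tau_6$ come out as $1.7,\,2.1,\,2.2,\,2.0,\,1.5,\,0.7$, in line with the displayed chain of inequalities.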
\section{Limit theorems for the face counts} \label{sec:face.counts}
We consider the dynamic multi-parameter simplicial complex $\big( X([n], {\bf p}; t), \, t \ge 0\big)$ constructed in the previous section. Our basic assumption from now on will be that \begin{equation} \label{e:basic.ass} {\bm \alpha} \in \mathcal D_k \ \text{for some $k\geq q$. } \end{equation} Let $\beta_{j,n}(t) := \beta_{j,n} \big( X([n], {\bf p}; t) \big)$ be the $j$th (reduced) Betti number of the complex at time $t$. Note that $\bigl( \beta_{j,n}(t), \, t\geq 0\bigr)$ is a stationary process. We will often use $\beta_{j, n}$ to mean $\beta_{j,n}(0)$. Similarly, we let $\chi_n(t)$ denote the Euler characteristic of the complex at time $t$. Then, $\bigl( \chi_{n}(t), \, t\geq 0\bigr)$ is also a stationary process, and we write $\chi_n := \chi_n(0)$. Recall that our goal is to establish functional strong laws of large numbers (SLLN) and functional central limit theorems (FCLT) for the Euler characteristic and the Betti number in the critical dimension $k$ of the dynamic multi-parameter simplicial complex. This section is of a preparatory nature and deals with the face counts of the complex.
We write the face counts in dimension $j$ as \begin{align} \label{e:k.face.counts}
f_{j,n}(t) &= \sum_{\sigma \subset [n], \, |\sigma| = j+1} {\mathbbm 1} \{
\sigma \text{ forms a } j\text{-face in } X([n],{\bf p}; t)
\} =: \sum_{\sigma \subset [n],\, |\sigma| = j+1}
\xi_\sigma(t), \ \ t \ge 0. \end{align}
Once again, let $\xi_\sigma := \xi_\sigma(0)$. As in \cite{kahle:meckes:2013} and \cite{thoppe:yogeshwaran:adler:2016}, we analyze the face counts first, and then relate them to the Euler characteristic and the Betti numbers through the definition \begin{equation} \label{e:EC.faces} \chi_n (t) := \sum_{j=0}^{n-1} (-1)^j f_{j,n}(t), \ \ t\ge 0, \end{equation} and the Euler--Poincar\'e identity for the reduced Betti numbers, \begin{equation} \label{e:Euler.characteristic} \chi_n (t) = 1+ \sum_{j=0}^{n-1} (-1)^j\beta_{j,n}(t), \ \ t\ge 0. \end{equation}
We start with the asymptotic behaviour of the expected value and the covariances of the face counts. Note that not all results below require the assumption \eqref{e:basic.ass}. \begin{proposition} \label{p:moment.face.count} For any $j\ge 1$, we have $$ \mathbb{E} (f_{j,n}) \sim \frac{n^{\tau_j({\bm \alpha})}}{(j+1)!}, \ \ n\to\infty. $$ Furthermore, for $j \ge q$ and $0 \le s \le t < \infty$, we have \begin{align}
\text{\rm Cov} \big( f_{j,n}(t), & f_{j,n}(s) \big) \label{e:covfj} \\ & \sim \frac{n^{2\tau_j({\bm \alpha})-\tau_q({\bm \alpha})}}{(q+1)! \big( (j-q)!
\big)^2}\, \big( 1-(G_q)_e(t-s) \big)
\vee\frac{n^{\tau_j({\bm \alpha})}}{(j+1)!}\, \prod_{i=q}^j \big(
1-(1-p_i)(G_i)_e(t-s) \big)^{\binom{j+1}{i+1}} \notag \end{align} as $ n\to\infty$, where $a\vee b = \max\{ a,b \}$ for $a, b \in \reals$. In particular, if \eqref{e:basic.ass} holds, then \begin{equation}\label{e:varfk} \text{\rm Cov} \big( f_{k,n}(t), f_{k,n}(s) \big) \sim \frac{n^{2\tau_k({\bm \alpha})-\tau_q({\bm \alpha})}}{(q+1)! \big( (k-q)! \big)^2}\, \big( 1-( G_q)_e(t-s) \big), \ \ n\to\infty. \end{equation}
\end{proposition} \begin{remark} \label{rem:non-random} For $j<q$, $f_{j,n}(t)$ is, of course, nonrandom, so in this case, $\text{\rm Cov}\big(f_{j,n}(t), f_{j,n}(s) \big)=0$. \end{remark} \begin{proof} The asymptotics of the mean face count is easy to obtain. In fact, \begin{equation} \label{e:calculation.expectation} \mathbb{E}(f_{j,n}) = \binom{n}{j+1} \prod_{i=q}^j p_i^{\binom{j+1}{i+1}} = \binom{n}{j+1} n^{\tau_j({\bm \alpha}) - (j+1)} \sim \frac{n^{\tau_j({\bm \alpha})}}{(j+1)!} \ \ \text{as } n\to\infty. \end{equation} For the covariances, we write \begin{align*}
\mathbb{E}\big( f_{j,n}(t) f_{j,n}(s) \big) &= \sum_{\ell=0}^{j+1} \mathbb{E} \bigg( \sum_{\substack{\sigma \subset [n] \\ |\sigma|=j+1}} \sum_{\substack{\tau \subset [n] \\ |\tau|=j+1, \, |\sigma \cap \tau|=\ell}} \xi_{\sigma}(t)\xi_{\tau}(s)\bigg) \\
&=\sum_{\ell=0}^{j+1} \binom{n}{j+1} \binom{j+1}{\ell} \binom{n-j-1}{j+1-\ell} \mathbb{E} \big( \xi_\sigma(t)\xi_\tau(s) \big) {\mathbbm 1} \big\{ |\sigma \cap \tau|=\ell \big\}. \end{align*} If $\ell\in \{ 0,1,\dots,q \}$, all faces of $\sigma\cap \tau$ exist with probability one; thus, $$
\mathbb{E} \big( \xi_\sigma(t)\xi_\tau(s) \big)\, {\mathbbm 1} \big\{ |\sigma\cap\tau|=\ell \big\} = \bigg( \prod_{i=q}^j p_i^{\binom{j+1}{i+1}} \bigg)^2=n^{2\tau_j({\bm \alpha})-2(j+1)}. $$ On the other hand, if $\ell\in\{ q+1, \dots, j+1\}$, we have \begin{align*}
\mathbb{E} \big( \xi_\sigma(t)\xi_\tau(s) \big) {\mathbbm 1} \big\{ |\sigma\cap\tau|=\ell \big\} &= \prod_{i=q}^j p_i^{\binom{j+1}{i+1}} \times \prod_{i=q}^j \mathbb{P} \big( \Delta_i(t)=1\, \big|\, \Delta_i(s)=1 \big)^{\binom{\ell}{i+1}} \times \prod_{i=q}^j p_i^{\binom{j+1}{i+1}-\binom{\ell}{i+1}} \notag\\ &=: A_n \times B_n \times C_n. \end{align*}
Here, $A_n$ is the probability of $\tau$ spanning a $j$-face at time $s$, while $B_n$ is the conditional probability that all faces of $\sigma\cap \tau$ are present at time $t$, given that $\tau$ spans a $j$-face at time $s$. Finally, $C_n$ is the conditional probability of $\sigma$ forming a $j$-face at time $t$, given that all faces of $\sigma \cap \tau$ are present at time $t$. Calculating the product of three terms via Lemma \ref{l:cond.prob.Delta}, $$ A_n \times B_n \times C_n = n^{2\tau_j({\bm \alpha})-\tau_{\ell-1}({\bm \alpha})-2(j+1)+\ell} \prod_{i=q}^j \big( 1-(1-p_i)(G_i)_e(t-s) \big)^{\binom{\ell}{i+1}} . $$
By the stationarity of face counts, together with \eqref{e:calculation.expectation}, we have that \begin{align*} \mathbb{E}\big( f_{j,n}(t) \big) \mathbb{E}\big( f_{j,n}(s) \big) &= \big( \mathbb{E}(f_{j,n}) \big)^2 = \binom{n}{j+1}^2 n^{2\tau_j({\bm \alpha}) - 2(j+1)} \\ &= \sum_{\ell = 0}^{j+1} \binom{n}{j+1} \binom{j+1}{\ell} \binom{n-j-1}{j+1-\ell} n^{2\tau_j({\bm \alpha}) - 2(j+1)}. \end{align*} Combining all these results yields \begin{align*} \text{\rm Cov}\big( f_{j,n}(t), f_{j,n}(s)\big) &= \sum_{\ell=q+1}^{j+1} \binom{n}{j+1}\binom{j+1}{\ell}\binom{n-j-1}{j+1-\ell} \\ &\quad \times n^{2\tau_j({\bm \alpha})-\tau_{\ell-1}({\bm \alpha})-2(j+1)+\ell} \Big\{ \prod_{i=q}^j \big( 1-(1-p_i)(G_i)_e(t-s) \big)^{\binom{\ell}{i+1}} - n^{\tau_{\ell-1}({\bm \alpha})-\ell} \Big\} \\ &\sim \sum_{\ell=q+1}^{j+1} \frac{n^{2\tau_j({\bm \alpha})-\tau_{\ell-1}({\bm \alpha})}}{\ell ! \big( (j+1-\ell)! \big)^2}\, \prod_{i=q}^{\ell-1} \big( 1-(1-p_i)(G_i)_e(t-s) \big)^{\binom{\ell}{i+1}}\\ &\sim \frac{n^{2\tau_j({\bm \alpha})-\tau_q({\bm \alpha})}}{(q+1)! \big( (j-q)! \big)^2}\, \big( 1-(G_q)_e(t-s) \big) \\ &\qquad \qquad \qquad \qquad \vee\frac{n^{\tau_j({\bm \alpha})}}{(j+1)!}\, \prod_{i=q}^j \big( 1-(1-p_i)(G_i)_e(t-s) \big)^{\binom{j+1}{i+1}}, \ \ \ n\to\infty, \end{align*} where the last equivalence comes from the fact that $\bigl( \tau_\ell({\bm \alpha}), \, \ell\geq q\bigr)$ is a sequence that increases for $\ell\leq k$ and then decreases. For the derivation of \eqref{e:varfk}, use the fact that $2\tau_k({\bm \alpha}) - \tau_q({\bm \alpha}) \ge \tau_k({\bm \alpha})$. \end{proof}
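As a numerical illustration of \eqref{e:calculation.expectation} (a sketch under our own naming, not part of the proof), one can check that $\mathbb{E}(f_{j,n})/n^{\tau_j({\bm \alpha})}$ approaches $1/(j+1)!$ as $n$ grows:

```python
from math import comb, factorial

def tau(j, alpha):
    # tau_j(alpha) = j + 1 - sum_{i=1}^{j} C(j+1, i+1) * alpha_i
    return j + 1 - sum(comb(j + 1, i + 1) * alpha[i - 1] for i in range(1, j + 1))

def mean_face_count(n, j, alpha):
    # E f_{j,n} = C(n, j+1) * prod_i p_i^{C(j+1, i+1)} with p_i = n^{-alpha_i},
    # which equals C(n, j+1) * n^{tau_j(alpha) - (j + 1)}
    expo = sum(comb(j + 1, i + 1) * alpha[i - 1] for i in range(1, j + 1))
    return comb(n, j + 1) * n ** (-expo)

alpha = [0.3] + [0.0] * 10   # clique-type parameter
j = 3
ratios = [mean_face_count(n, j, alpha) / n ** tau(j, alpha)
          for n in (10**2, 10**4, 10**6)]
# the ratios approach 1 / (j + 1)! = 1 / 24
```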
\begin{remark} \label{rk:critical} It follows immediately from the proposition that, under the assumption \eqref{e:basic.ass}, for every $j\not= k$, \begin{equation} \label{e:critical.dom} \lim_{n\to\infty} \frac{\mathbb{E} (f_{j,n})}{\mathbb{E} (f_{k,n})} = \lim_{n\to\infty} \frac{\text{\rm Var} (f_{j,n})}{\text{\rm Var} (f_{k,n})} =0. \end{equation} That is, the face counts in the critical dimension dominate those in the other dimensions both in their means and their variances. \end{remark}
The following corollary will be useful in the sequel. Since, by stationarity, the time parameter plays no role, we suppress it to simplify the notation. Denote \begin{equation} \label{e:M.alpha} M({\bm \alpha})= \min \big\{ i: \tau_i({\bm \alpha})<0 \big\}; \end{equation} this is a finite number since $\tau_i({\bm \alpha}) \to -\infty$ as $i\to\infty$. \begin{corollary} \label{cor:neg.tau} As $n\to\infty$, $$ \sum_{j=M({\bm \alpha})}^\infty \mathbb{E}(f_{j,n}) \to 0\,. $$ \end{corollary} \begin{proof}
It follows from \eqref{e:calculation.expectation} that $$ \mathbb{E}(f_{j,n}) \le n^{\tau_j({\bm \alpha})} \le \left(\frac{1}{n^\beta}\right)^{j+1}, $$ where $$ \beta = \inf_{j \ge M({\bm \alpha})} \bigl[-\tau_j({\bm \alpha})/(j+1)\bigr]. $$ Note that $\beta>0$, since $\tau_j({\bm \alpha}) < 0$ for all $j \ge M({\bm \alpha})$, and $$ \lim_{j\to\infty} \frac{-\tau_j({\bm \alpha})}{j+1} = \lim_{j\to\infty} \Big\{ \sum_{i=1}^j\binom{j}{i} \frac{\alpha_i}{i+1}-1 \Big\} \ge \lim_{j\to\infty} \Big\{ \binom{j}{q} \frac{\alpha_q}{q+1} - 1 \Big\} = \infty. $$ Hence, $$ \sum_{j=M({\bm \alpha})}^\infty \mathbb{E}(f_{j,n}) \le \sum_{j=M({\bm \alpha})}^\infty \left( \frac{1}{n^\beta} \right)^{j+1} \to 0, \ \ \ n\to\infty, $$ as desired. \end{proof}
As stated below, the face counts in the critical dimension $k$ satisfy a functional central limit theorem. The limit is a stationary Gaussian process whose covariance function is given by the limit in \eqref{e:varfk}. Specifically, let $\big( Z_k(t), \, t\ge 0\big)$ be a zero-mean stationary Gaussian process with covariance function \begin{equation} \label{e:covariance.func} R_k(t) = \mathbb{E} \big( Z_k(t)Z_k(0) \big) = 1-(G_q)_e(t), \ \ t \ge 0. \end{equation} The basic sample path properties of this process are described in the next proposition. \begin{proposition} \label{p:holder.continuous} The process $Z_k$ admits a continuous version, whose sample paths are $\delta$-H\"older continuous for any $\delta\in (0,1/2)$. \end{proposition} \begin{proof} Since $Z_k$ is a stationary Gaussian process and $$
\mathbb{E} \big[ \big( Z_k(t)-Z_k(s) \big)^2 \big] = 2(G_q)_e(|t-s|) \le
\frac{2}{\mu_q} |t-s|, $$ the claim follows from the Kolmogorov continuity criterion. \end{proof}
The statement below is a FCLT for the face counts in the critical dimension $k$. We view $f_{k,n}(\cdot)$ as a (piecewise constant) random element of $D[0,\infty)$, the space of right continuous functions with left limits, which is equipped with the Skorohod $J_1$-topology. \begin{proposition} \label{p:clt.face.counts} Assume \eqref{e:basic.ass}. Then, as $n\to\infty$, \begin{equation} \label{e:clt.face.counts} \left( \frac{f_{k,n}(t)-\mathbb{E}(f_{k,n})}{\sqrt{\text{\rm Var} (f_{k,n})}}, \, t\geq 0\right) \Rightarrow \bigl( Z_k(t), \, t\geq 0\bigr) \end{equation} in the sense of convergence of the finite-dimensional distributions. If the assumption \eqref{e:cond.regularity} is satisfied then \eqref{e:clt.face.counts} also holds in the sense of weak convergence in the $J_1$-topology on $D[0,\infty)$. \end{proposition} The proof is deferred to Section \ref{s:proofs.facecounts}.
\begin{remark} \label{rk:whyq} It is interesting, and at first sight unexpected, that only the state-change distribution $G_q$ in the lowest nontrivial dimension $q$ contributes to the asymptotics of the face counts in the critical dimension. This is because the ``flipping'' of a $q$-simplex from ``on'' to ``off'', or vice versa, affects the distribution of $k$-simplices more than does any flipping in a different dimension. Note that if $G_q$ is exponential with mean $1/\lambda$, then $R_k(t)=e^{-\lambda t}$ and $Z_k$ is an Ornstein--Uhlenbeck Gaussian process, as in \cite{thoppe:yogeshwaran:adler:2016}.
\end{remark}
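The last observation in the remark can be checked numerically. The sketch below (the helper name \texttt{equilibrium\_cdf} is ours; illustrative only) approximates $(G_q)_e(t)=\mu_q^{-1}\int_0^t\big(1-G_q(y)\big)\,dy$ for an exponential $G_q$ with rate $\lambda$ by a midpoint rule, and confirms that $R_k(t)=1-(G_q)_e(t)$ agrees with $e^{-\lambda t}$.

```python
import math

def equilibrium_cdf(t, lam, n_steps=10**5):
    # midpoint-rule approximation of (G_q)_e(t) = (1/mu) * int_0^t (1 - G(y)) dy
    # for G exponential with rate lam, so mu = 1/lam and 1 - G(y) = exp(-lam * y)
    h = t / n_steps
    return lam * sum(math.exp(-lam * (i + 0.5) * h) * h for i in range(n_steps))

lam, t = 2.0, 0.7
R = 1.0 - equilibrium_cdf(t, lam)  # should match exp(-lam * t) up to quadrature error
```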
\section{FCLT for topological invariants} \label{sec:top.invariants}
In this section, we present the main results of this paper: the functional SLLN and the FCLT for the Euler characteristic and the Betti numbers in the critical dimension. We defer the proofs to Sections \ref{s:proofs.euler} and \ref{s:proofs.betti}.
We start with the strong laws of large numbers. \begin{theorem} \label{t:SLLN} Assume \eqref{e:basic.ass}. Then, as $n\to\infty$, \begin{equation} \label{e:SLLN.Euler.char} \left( \frac{\chi_n(t)}{n^{\tau_k({\bm \alpha})}}, \, t\geq 0\right)\to \frac{(-1)^k}{(k+1)!} \ \ \text{a.s.} \end{equation} and \begin{equation} \label{e:SLLN.betti} \left(\frac{\beta_{k,n}(t)}{n^{\tau_k({\bm \alpha})}}, \, t\geq 0\right)\to \frac{1}{(k+1)!} \ \ \text{a.s.} \end{equation} in the $J_1$-topology on $D[0,\infty)$, where the right hand sides of \eqref{e:SLLN.Euler.char} and \eqref{e:SLLN.betti} are viewed as constant elements of $D[0,\infty)$. \end{theorem}
Having stated the functional strong law of large numbers, we proceed, as is customary, to the functional central limit theorem. Note the similarity with the corresponding limit theorem for the face counts in Proposition \ref{p:clt.face.counts}.
\begin{theorem} \label{t:clt.topological.invariants} Assume \eqref{e:basic.ass}. Then, as $n\to\infty$, \begin{equation} \label{e:Euler.characteristic.func.conv} \left( \frac{\chi_n(t) - \mathbb{E}(\chi_n)}{\sqrt{\text{\rm Var} (f_{k,n})}}, \, t\geq
0\right) \Rightarrow \bigl( Z_k(t), \, t\geq 0\bigr) \end{equation} and \begin{equation} \label{e:betti.func.conv} \left( \frac{\beta_{k,n}(t) - \mathbb{E}(\beta_{k,n})}{\sqrt{\text{\rm Var} (f_{k,n})}}, \, t\geq 0\right) \Rightarrow \bigl( Z_k(t), \, t\geq 0\bigr) \end{equation} in the sense of convergence of the finite-dimensional distributions.
In addition, assume \eqref{e:cond.regularity} and \begin{equation} \label{e:sharp.drop.k+1} \tau_k({\bm \alpha})-\frac{\tau_q({\bm \alpha})}{2}>\tau_{k+1}({\bm \alpha}). \end{equation} Then, \eqref{e:Euler.characteristic.func.conv} and \eqref{e:betti.func.conv} also hold in the sense of weak convergence in the $J_1$-topology on $D[0,\infty)$. \end{theorem} \begin{remark} \label{rk:clt.explicit} By Proposition \ref{p:moment.face.count}, \eqref{e:Euler.characteristic.func.conv} can be restated as $$ \left( \frac{\chi_n(t) - \mathbb{E}(\chi_n)}{n^{\tau_k({\bm \alpha})-\tau_q({\bm \alpha})/2}}, \, t\geq
0\right) \Rightarrow \bigl( \big\{ (q+1)!\big\}^{1/2} (k-q)! Z_k(t), \, t\geq 0\bigr). $$ A similar reformulation is possible for \eqref{e:betti.func.conv}. \end{remark} \begin{remark} \label{rk:conjecture} We think that \eqref{e:cond.regularity} alone is sufficient for weak convergence in the $J_1$-topology on $D[0,\infty)$ in \eqref{e:Euler.characteristic.func.conv} and \eqref{e:betti.func.conv}. We have chosen to assume \eqref{e:sharp.drop.k+1} in order to simplify an already long and technical argument. \end{remark}
\begin{example} \label{ex:LM.clique} The dynamic variants of the Linial-Meshulam complex and the clique complex are special cases of our model. An explicit form of Theorem~\ref{t:clt.topological.invariants} is stated here for these two setups.
The Linial-Meshulam simplicial complex (see \cite{linial:meshulam:2006,meshulam:wallach:2009}) corresponds, in our description, to ${\bm \alpha}=(0,\dots,0,\alpha_k,\infty, \infty,\dots )$, with $0<\alpha_k<1$ in some position $k\geq 2$. This $k$ is then the critical dimension with $q=k$, and $\tau_k({\bm \alpha})=k+1-\alpha_k$. Furthermore, \eqref{e:basic.ass} holds. If $X([n],{\bf p}; t)$ is the dynamic Linial-Meshulam complex, then Theorem \ref{t:clt.topological.invariants} says that \begin{equation} \label{e:LM} \left( \frac{\chi_n(t) - \mathbb{E}(\chi_n)}{\sqrt{n^{k+1-\alpha_k}}} , \, t\geq
0\right) \Rightarrow \bigl( \big\{ (k+1)!\big\}^{1/2} Z_k(t), \, t\geq 0\bigr), \end{equation} at least in the sense of finite-dimensional distributions.
Consider now the dynamic clique complex, for which
${\bm \alpha}=(\alpha_1,0, 0,\dots )$ with $0<\alpha_1<1$ and $\alpha_1 \neq 1/m$ for any $m\in {\mathbb N}$. Then,
$q=1$ and the critical dimension is $k=\lfloor 1/\alpha_1\rfloor\geq
q$. Once again, \eqref{e:basic.ass} holds. Here,
$\tau_k({\bm \alpha})=k+1-\binom{k+1}{2}\alpha_1$ and
$\tau_q({\bm \alpha})=2-\alpha_1$. Now, Theorem \ref{t:clt.topological.invariants} says that \begin{equation} \label{e:CC} \left( \frac{\chi_n(t) - \mathbb{E}(\chi_n)}{n^{k-\binom{k+1}{2}\alpha_1+\alpha_1/2}} , \, t\geq
0\right) \Rightarrow \bigl( \sqrt{2}(k-1)! Z_k(t), \, t\geq 0\bigr),
\end{equation}
once again, at least in the sense of finite-dimensional distributions.
For both models, we also obtain corresponding results for the Betti numbers in the critical dimension. In the dynamic clique complex, if $G_1$ is an exponential distribution, then, as mentioned above, $Z_k$ is a zero-mean stationary Ornstein-Uhlenbeck Gaussian process, as in \cite{thoppe:yogeshwaran:adler:2016}.
As for the technical conditions for tightness, in the dynamic Linial-Meshulam complex we only need to check \eqref{e:cond.regularity} for $i=k$, while \eqref{e:sharp.drop.k+1} always holds, as $\tau_{k+1}({\bm \alpha})=-\infty$. In the case of a dynamic clique complex, one needs to check \eqref{e:cond.regularity} only for $i=1$.
On the other hand, \eqref{e:sharp.drop.k+1} reduces to $\alpha_1 > 4/(2k+3)$, implying that the corresponding functional convergence follows only when $4/5 < \alpha_1 < 1$ and the critical dimension is $k=\lfloor 1/\alpha_1\rfloor=1$.
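The reduction just described is easy to double-check numerically. The following sketch (helper names are ours; illustrative only) verifies, on a few sample values of $\alpha_1$, that \eqref{e:sharp.drop.k+1} with $k=\lfloor 1/\alpha_1\rfloor$ is equivalent to $\alpha_1>4/(2k+3)$, and that it holds exactly when $4/5<\alpha_1<1$.

```python
from math import comb, floor

def tau_clique(j, a1):
    # tau_j for the clique-type parameter (a1, 0, 0, ...): j + 1 - C(j+1, 2) * a1
    return j + 1 - comb(j + 1, 2) * a1

def tightness_condition(a1):
    # condition (e:sharp.drop.k+1): tau_k - tau_1 / 2 > tau_{k+1}, k = floor(1/a1)
    k = floor(1 / a1)
    return tau_clique(k, a1) - tau_clique(1, a1) / 2 > tau_clique(k + 1, a1)

def reduced_condition(a1):
    # the claimed algebraic reduction: a1 > 4 / (2k + 3)
    k = floor(1 / a1)
    return a1 > 4 / (2 * k + 3)

samples = [0.9, 0.85, 0.7, 0.55, 0.3, 0.15]   # avoiding reciprocals of integers
results = [tightness_condition(a) for a in samples]
# only the first two samples (those above 4/5) satisfy the condition
```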
\end{example} \begin{remark} For the dynamic clique complex, the assumption \eqref{e:sharp.drop.k+1} fails in a certain range of the parameter. Therefore, Theorem \ref{t:clt.topological.invariants} does not claim the functional convergence in full generality for the Euler characteristic and the Betti numbers in the critical dimension. On the other hand, \cite{thoppe:yogeshwaran:adler:2016}, who only discuss this model, established tightness in full generality, and hence the FCLT in the $J_1$-topology on $D[0,\infty)$.
The reason for this discrepancy is the generality of our setup. In particular, in the dynamic clique complex, all Betti numbers except that in the critical dimension are known to vanish with very high probability (see \cite{kahle:meckes:2013}, \cite{kahle:2014}), which makes it possible to obtain the required tightness in \cite{thoppe:yogeshwaran:adler:2016}. In the general multi-parameter simplicial complex, however, this is no longer necessarily the case, and the Betti numbers in dimensions greater than the critical one need not vanish; see Corollary~1.7 of \cite{fowler:2019}. To overcome the resulting difficulty, we have imposed the extra condition \eqref{e:sharp.drop.k+1}. We anticipate that the tightness holds without it; one way to dispense with the extra condition would be via very complicated fourth-moment estimates for the Betti numbers based on the expression in Proposition \ref{p:betti.representation}.
\end{remark}
\section{Proof of the FCLT for the face counts} \label{s:proofs.facecounts}
In the sequel, we omit the subscript $n$ from all face count and Betti number notations. For example, we simply write $f_j(t)$, $\beta_j(t)$ etc. Everywhere, $C$ denotes a generic positive constant, which is independent of $n$ but may vary between (or even within) the lines.
We start with proving the finite-dimensional convergence in Proposition \ref{p:clt.face.counts}. By the Cram\'er-Wold device, it is enough to show that for all $0 \le t_1 < \dots <t_m <\infty$, $a_i \in \reals$, $i=1,\dots,m$, $m\ge1$, \begin{equation} \label{e:CW.faces} \frac{\sum_{i=1}^ma_i \big( f_k(t_i)-\mathbb{E}(f_k) \big)}{\sqrt{\text{\rm Var}(f_k)}} \Rightarrow\sum_{i=1}^m a_i Z_k(t_i) \ \ \text{in } \reals. \end{equation}
Clearly, it is enough to consider choices of the coefficients for which the variance of the right-hand side of \eqref{e:CW.faces} does not vanish, so we fix such a set of coefficients.
Let $J$ be the collection of $k$-faces or, equivalently, of words of length $k+1$ in $[n]$. For ${\bf j} \in J$, let $$ X_{\bf j} =\frac{\sum_{i=1}^m a_i \big( \xi_{{\bf j}}(t_i) - \mathbb{E}
(\xi_{{\bf j}})\big)}{\sqrt{\text{\rm Var} \big( \sum_{i=1}^m a_i f_k(t_i)
\big)}}; $$ recall that $\xi_{{\bf j}}(t)$ is the indicator that the $k$-face associated with the word ${\bf j}$ is ``on'' at time $t$. Finally, define $$ W := \sum_{{\bf j} \in J}X_{{\bf j}} =\frac{\sum_{i=1}^m a_i \big( f_k(t_i)-
\mathbb{E} (f_k)\big)}{\sqrt{\text{\rm Var} \big( \sum_{i=1}^m a_i f_k(t_i) \big)}}, $$ so that $\mathbb{E}(W)=0$ and $\text{\rm Var} (W)=1$.
In the terminology of \cite{barbour:karonski:rucinski:1989} (see Eq.~(2.7) therein), $\big\{ X_{\bf j}, \, {\bf j} \in J \big\}$ constitutes a \textit{dissociated} set of random variables. To see this, identify each $k$-face ${\bf j} \in J$ with the tuple ${\bf j}_{q} \equiv \Big(j_1, \ldots, j_{\binom{k + 1}{q + 1}}\Big)$, where each $j_i$ corresponds to a $q$-face in ${\bf j}$. For example, when $k = 3$ and $q = 1$, identify the $3$-face $[1, 2, 3, 4]$ with the tuple $\big([1, 2], [1, 3], [1, 4], \ldots, [3, 4]\big)$. Then, for any sets $K, L \subset J_q := \{{\bf j}_q: {\bf j} \in J\}$ such that
\[
\bigcup_{{\bf j}_q \in K} \left\{j_1, \ldots, j_{\binom{k + 1}{q + 1}}\right\} \cap \bigcup_{{\bf j}_q' \in L} \left\{j'_1, \ldots, j'_{\binom{k + 1}{q + 1}}\right\} = \emptyset,
\] we have that $(X_{{\bf j}}: \, {\bf j}_q \in K)$ is independent of $(X_{{\bf j}}: \, {\bf j}_q \in L)$. This verifies the claim that $\{X_{\bf j}: {\bf j} \in J\}$ is a dissociated set of random variables. We can thus invoke the central limit theorem of \cite{barbour:karonski:rucinski:1989} for sums of dissociated random variables.
The approach is to estimate the $L_1$-Wasserstein metric between the distribution ${\mathcal L}_W$ of $W$ and the standard normal distribution, i.e. $$
d_1({\mathcal L}_W,{\mathcal L}_Y) = \sup_\phi \Big| \mathbb{E} \big( \phi(W) \big) - \mathbb{E} \big( \phi(Y) \big) \Big|, $$ where $Y$ has the standard normal distribution and the supremum is taken over all $\phi: \reals \to
\reals$ such that $\sup_{y_1\neq y_2}\big| \phi(y_1) - \phi(y_2)
\big|/|y_1-y_2|\le 1$. Assuming we have shown that $d_1({\mathcal L}_W,{\mathcal L}_Y)\to0$, we have $W\Rightarrow Y$ as $n\to\infty$. Furthermore, direct applications of Proposition \ref{p:moment.face.count} and \eqref{e:covariance.func} yield $$ \frac{\text{\rm Var} \big( \sum_{i=1}^m a_i f_k(t_i) \big)}{\text{\rm Var}(f_k)} \to \text{\rm Var} \big( \sum_{i=1}^m a_i Z_k(t_i) \big), \ \ \ n\to\infty. $$ Therefore, $d_1({\mathcal L}_W,{\mathcal L}_Y)\to0$ would give us $$ \frac{\sum_{i=1}^m a_i\big( f_k(t_i)-\mathbb{E}(f_k) \big)}{\sqrt{\text{\rm Var}(f_k)}} \Rightarrow \Big\{ \text{\rm Var} \big( \sum_{i=1}^m a_i Z_k(t_i) \big)\Big\}^{1/2} Y \stackrel{d}{=} \sum_{i=1}^m a_i Z_k(t_i), \ \ \ n\to\infty, $$ as required.
It remains to actually show that $d_1({\mathcal L}_W,{\mathcal L}_Y) \to 0$ as
$n\to\infty$. Let $L_{{\bf j}} =\{ {\bf k} \in J: |{\bf k} \cap {\bf j}| \geq q+1 \}$ be the dependency neighborhood of ${\bf j} \in J$, that is, a collection of simplices ${\bf k}$ having at least one $q$-face in common with ${\bf j}$. Then a slight reformulation of (3.4) in \cite{barbour:karonski:rucinski:1989} and Proposition \ref{p:moment.face.count} shows that for a constant $C$ that may depend on the coefficients $a_1,\ldots, a_m$, but on nothing else, \begin{align}
&d_1({\mathcal L}_W,{\mathcal L}_Y) \le C \sum_{{\bf j} \in J} \sum_{{\bf k} \in L_{\bf j}}\sum_{{\bf l} \in L_{\bf j}} \Big\{ \mathbb{E} \big(|X_{{\bf j}}X_{\bf k} X_{\bf l}|\big) + \mathbb{E} \big(|X_{\bf j} X_{\bf k}|\big) \mathbb{E}\big(|X_{\bf l}|\big) \Big\} \notag \\
&\quad \le \frac{C}{n^{3\tau_k({\bm \alpha})-3\tau_q({\bm \alpha})/2}}
\sum_{i_1, i_2, i_3=1}^m \sum_{{\bf j} \in J} \sum_{{\bf k}\in
L_{\bf j}}\sum_{{\bf l}\in L_{\bf j}} \bigg\{ \mathbb{E}\Big[ \big(
\xi_{{\bf j}}(t_{i_1}) + \mathbb{E}(\xi_{{\bf j}}) \big) \big(
\xi_{{\bf k}}(t_{i_2}) + \mathbb{E}(\xi_{{\bf k}}) \big) \big(
\xi_{{\bf l}}(t_{i_3}) + \mathbb{E}(\xi_{{\bf l}}) \big)\Big] \label{e:big.brace} \\ &\qquad \qquad\qquad \qquad\qquad \qquad \qquad \qquad \qquad \quad + 2\mathbb{E} \Big[
\big( \xi_{{\bf j}}(t_{i_1}) + \mathbb{E}(\xi_{{\bf j}}) \big) \big( \xi_{{\bf k}}(t_{i_2}) + \mathbb{E}(\xi_{{\bf k}}) \big) \Big] \mathbb{E}(\xi_{{\bf l}}) \bigg\}. \notag \end{align} For fixed ${\bf j}\in J,{\bf k}\in L_{\bf j},{\bf l}\in L_{\bf j}$ denote $$
\ell_{12}= |{\bf j} \cap {\bf k}|, \ \ \ell_{13} =|{\bf j} \cap {\bf l}|, \ \ \ell_{23} = | {\bf k} \cap {\bf l}|, \ \ \ell_{123}= |{\bf j} \cap {\bf k} \cap {\bf l}|. $$ Since ${\bf k}, {\bf l} \in L_{\bf j}$, it must be that $\ell_{12}\ge q+1$ and $\ell_{13}\ge q+1$, whereas $\ell_{23}$ and $\ell_{123}$ can be less than $q+1$. Given $\ell_{12}$, $\ell_{13}$, $\ell_{23}$, and $\ell_{123}$ as above, the expression between the braces in the right hand side of \eqref{e:big.brace} can, up to a constant factor, be bounded by $$ \prod_{i=q}^k p_i^{3\binom{k+1}{i+1} - \binom{\ell_{12}}{i+1} - \binom{\ell_{13}}{i+1} - \binom{\ell_{23}}{i+1} + \binom{\ell_{123}}{i+1}} $$ For example, for $0 \le r \le s \le t<\infty$, by the inclusion-exclusion formula, \begin{align*} & \mathbb{E} \big( \xi_{{\bf j}}(r)\xi_{{\bf k}}(s)\xi_{{\bf l}}(t) \big) \\
=& \prod_{i=q}^k p_i^{3\binom{k+1}{i+1} - \binom{\ell_{12}}{i+1} - \binom{\ell_{13}}{i+1} - \binom{\ell_{23}}{i+1} + \binom{\ell_{123}}{i+1}} \\
\times &\prod_{i=q}^k \mathbb{P}\big( \Delta_i(s)=1 \, \big| \, \Delta_i(r)=1 \big)^{\binom{\ell_{12}}{i+1}-\binom{\ell_{123}}{i+1}} \prod_{i=q}^k \mathbb{P}\big( \Delta_i(t)=1 \, \big| \, \Delta_i(s)=1 \big)^{\binom{\ell_{23}}{i+1}-\binom{\ell_{123}}{i+1}} \\
\times &\prod_{i=q}^k \mathbb{P}\big( \Delta_i(t)=1 \, \big| \, \Delta_i(r)=1 \big)^{\binom{\ell_{13}}{i+1} - \binom{\ell_{123}}{i+1}}\prod_{i=q}^k \mathbb{P}\big( \Delta_i(s)=\Delta_i(t)=1 \, \big| \, \Delta_i(r)=1 \big)^{\binom{\ell_{123}}{i+1}}
\\ \le &\prod_{i=q}^k p_i^{3\binom{k+1}{i+1} - \binom{\ell_{12}}{i+1} - \binom{\ell_{13}}{i+1} - \binom{\ell_{23}}{i+1} + \binom{\ell_{123}}{i+1}}, \end{align*} and the terms of the other types can be bounded in a similar manner.
Furthermore, observe that for every $\ell_{12}\ge q+1$, $\ell_{13} \ge q+1$, $\ell_{23}\ge0$, and $\ell_{123} \ge 0$, the number of the corresponding terms in \eqref{e:big.brace} does not exceed a constant multiple of $ n^{3(k+1)-\ell_{12} - \ell_{13} - \ell_{23} + \ell_{123}}$. Therefore, \begin{align*} d_1({\mathcal L}_W,{\mathcal L}_Y) &\le \frac{C}{n^{3\tau_k({\bm \alpha})-3\tau_q({\bm \alpha})/2}}\, \sum_{\ell_{12}=q+1}^{k+1} \sum_{\ell_{13}=q+1}^{k+1} \sum_{\ell_{23}=0}^{k+1} \sum_{\ell_{123}=0}^{\ell_{12} \wedge \ell_{13} \wedge \ell_{23}} \prod_{i=q}^k p_i^{3\binom{k+1}{i+1} - \binom{\ell_{12}}{i+1} - \binom{\ell_{13}}{i+1} - \binom{\ell_{23}}{i+1} + \binom{\ell_{123}}{i+1}} \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \times n^{3(k+1)-\ell_{12} - \ell_{13} - \ell_{23} + \ell_{123}} \\ &=C \sum_{\ell_{12}=q+1}^{k+1} \sum_{\ell_{13}=q+1}^{k+1} \sum_{\ell_{23}=0}^{k+1} \sum_{\ell_{123}=0}^{\ell_{12} \wedge \ell_{13} \wedge \ell_{23}} n^{3\tau_q({\bm \alpha})/2 - \tau_{\ell_{12}-1}({\bm \alpha}) - \tau_{\ell_{13}-1}({\bm \alpha}) - \tau_{\ell_{23}-1}({\bm \alpha}) + \tau_{\ell_{123}-1}({\bm \alpha})} \end{align*} ($a \wedge b = \min\{a,b \}$ for $a, b \in \reals$). The latter sum is finite, and each term in it does not exceed $C n^{-\tau_q({\bm \alpha})/2}$, as can be seen by noting that $\tau_{\ell_{23}-1}({\bm \alpha}) - \tau_{\ell_{123}-1}({\bm \alpha}) \geq 0$ and that the remaining exponent is maximized by setting $\ell_{12} = \ell_{13} = q+1$. Therefore, the sum goes to $0$ as $n\to\infty$, and hence we have established the convergence of the finite-dimensional distributions in Proposition \ref{p:clt.face.counts}.
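The final bound on the exponents can also be confirmed by brute force. The sketch below (our own naming; illustrative only) enumerates all admissible tuples $(\ell_{12},\ell_{13},\ell_{23},\ell_{123})$ for a clique-type parameter and checks that the exponent $3\tau_q/2-\tau_{\ell_{12}-1}-\tau_{\ell_{13}-1}-\tau_{\ell_{23}-1}+\tau_{\ell_{123}-1}$ never exceeds $-\tau_q/2$; the empty-sum conventions $\tau_0({\bm \alpha})=1$ and $\tau_{-1}({\bm \alpha})=0$ are built into the formula.

```python
from math import comb

def tau(j, alpha):
    # tau_j(alpha) = j + 1 - sum_{i=1}^{j} C(j+1, i+1) * alpha_i
    # (empty sum for j <= 0, giving tau_0 = 1 and tau_{-1} = 0)
    return j + 1 - sum(comb(j + 1, i + 1) * alpha[i - 1] for i in range(1, j + 1))

alpha = [0.3] + [0.0] * 10   # clique-type parameter with q = 1, critical dimension k = 3
q, k = 1, 3
worst = max(
    3 * tau(q, alpha) / 2
    - tau(l12 - 1, alpha) - tau(l13 - 1, alpha) - tau(l23 - 1, alpha)
    + tau(l123 - 1, alpha)
    for l12 in range(q + 1, k + 2)
    for l13 in range(q + 1, k + 2)
    for l23 in range(0, k + 2)
    for l123 in range(0, min(l12, l13, l23) + 1)
)
# the maximal exponent equals -tau_q(alpha)/2, attained at l12 = l13 = q + 1
```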
In order to prove tightness in the $J_1$-topology, we use Theorem 13.5 in \cite{billingsley:1999}. By the stationarity of $f_k(t)$, it is sufficient to show that for every $T>0$, there exists $B>0$ such that $$ \frac{\mathbb{E} \Big[ \big( f_k(t)-f_k(s) \big)^2 \big( f_k(s)-f_k(r) \big)^2
\Big]}{\big(\text{\rm Var}(f_k)\big)^2} \le B (t-r)^{1+\gamma} $$ for all $0 \le r \le s \le t \le T$, $n\ge 1$, with $\gamma$ as in \eqref{e:cond.regularity}. By Proposition \ref{p:moment.face.count}, we only need to show existence of $B$ such that \begin{equation} \label{e:tightness.Thm13.5.face.counts} \frac{\mathbb{E} \Big[ \big( f_k(t)-f_k(s) \big)^2 \big( f_k(s)-f_k(r) \big)^2
\Big]}{n^{4\tau_k({\bm \alpha})-2\tau_q({\bm \alpha})}} \le B (t-r)^{1+\gamma}. \end{equation} This will be established while proving tightness in the proof of Theorem \ref{t:clt.topological.invariants} below. \qed
\section{Proofs of the limit theorems for the Euler characteristic} \label{s:proofs.euler}
We start with the strong law of large numbers. As in the last section, $C$ denotes a generic positive constant, which is independent of $n$. \begin{proof}[Proof of \eqref{e:SLLN.Euler.char} in Theorem \ref{t:SLLN}] Fix $0< T < \infty$ for the duration of the proof. We first check that for each $j\ge 0$, \begin{equation} \label{e:first.claim.SLLN}
\sup_{0 \le t \le T} \frac{\big| f_j(t)-\mathbb{E}(f_j) \big|}{\mathbb{E}(f_k)} \to 0 \ \ \text{a.s.} \end{equation} If $j \in \{ 0,\dots, q-1 \}$, the left hand side is identically zero (see Remark \ref{rem:non-random}). For $j \ge q$, by the Borel-Cantelli lemma, it suffices to show that for every $\epsilon>0$, \begin{equation} \label{e:Borel.Cantelli1}
\sum_{n=1}^\infty \mathbb{P}\Big( \sup_{0 \le t \le T} \big| f_j(t)-\mathbb{E}(f_j)
\big| > \epsilon \mathbb{E}(f_k) \Big)<\infty, \end{equation} which will follow once we prove the following two statements: \begin{align} \sum_{n=1}^\infty \mathbb{P}\Big( \sup_{0 \le t \le T} f_j(t) > \mathbb{E}(f_j) + \epsilon\mathbb{E}(f_k) \Big)&<\infty, \ \ \text{and} \label{e:Borel.Cantelli2}\\ \sum_{n=1}^\infty \mathbb{P}\Big( \inf_{0 \le t \le T} f_j(t) < \mathbb{E}(f_j) - \epsilon\mathbb{E}(f_k) \Big)&<\infty. \label{e:Borel.Cantelli3} \end{align} Choose a positive integer $m$ so large that \begin{equation} \label{e:restriction.epsilon} \prod_{i=q}^j \left( 1+ \frac{ (G_i)_e(T/m)}{1-G_i(T/m)} \right)^{\binom{j+1}{i+1}} < 1 + \frac{\epsilon}{2}. \end{equation} By stationarity, $$ \mathbb{P}\Big( \sup_{0 \le t \le T} f_j(t) > \mathbb{E}(f_j) + \epsilon\mathbb{E}(f_k) \Big) \le m \mathbb{P} \Big( \sup_{0 \le t \le T/m} f_j(t) > \mathbb{E}(f_j) + \epsilon\mathbb{E}(f_k) \Big). $$
We now construct a new static multi-parameter simplicial complex $X([n], {\bf p}^{(1)})$ by setting $p_i^{(1)}=\mathbb{P}\big( \sup_{0\le t
\le T/m}\Delta_i(t)=1 \big)$ for $i\geq 1$. If $f_j^{(1)}$ is the $j$-face count in this static complex, then, by a straightforward coupling argument,
\begin{equation}
\label{e:coupling.trick}
\sup_{0\le t \le T/m}f_j(t) \stackrel{st}{\le} f_j^{(1)}.
\end{equation}
Since by part (ii) of Lemma \ref{l:cond.prob.Delta}
and \eqref{e:restriction.epsilon}, \begin{align*} \mathbb{E}(f_j^{(1)}) &= \binom{n}{j+1} \prod_{i=q}^j
(p_i^{(1)})^{\binom{j+1}{i+1}} \\
&\le \binom{n}{j+1}
\prod_{i=q}^j p_i^{\binom{j+1}{i+1}} \prod_{i=q}^j
\left( 1+ \frac{(G_i)_e(T/m)}{1-G_i(T/m)} \right)^{\binom{j+1}{i+1}} \le
\Big(1+\frac{\epsilon}{2}\Big)\mathbb{E}(f_j), \end{align*} we conclude that \begin{align*} \mathbb{P} \Big( \sup_{0 \le t \le T/m} f_j(t) > \mathbb{E}(f_j) + \epsilon\mathbb{E}(f_k)
\Big) &\le \mathbb{P} \big( f_j^{(1)}-\mathbb{E}(f_j^{(1)}) > \mathbb{E}(f_j) +\epsilon
\mathbb{E}(f_k) -\mathbb{E}(f_j^{(1)}) \big) \\ &\le \mathbb{P}\Big( f_j^{(1)}-\mathbb{E}(f_j^{(1)}) > \epsilon\mathbb{E}(f_k)
-\frac{\epsilon}{2}
\mathbb{E}(f_j) \Big). \end{align*} As $\mathbb{E}(f_j)/\mathbb{E}(f_k)\to0$, $n\to\infty$ for $j\neq k$, it holds that, for sufficiently large $n$, \begin{align*}
\mathbb{P}\Big( f_j^{(1)}-\mathbb{E}(f_j^{(1)}) > \epsilon\mathbb{E}(f_k) -\frac{\epsilon}{2} \mathbb{E}(f_j) \Big) &\le \mathbb{P} \Big( \big| f_j^{(1)}-\mathbb{E}(f_j^{(1)}) \big| > \frac{\epsilon}{2}\mathbb{E}(f_k) \Big) \\ &\le \frac{4}{\epsilon^2}\, \frac{\text{\rm Var}(f_j^{(1)})}{\big( \mathbb{E}(f_k) \big)^2} \le C \frac{\text{\rm Var}(f_j^{(1)})}{n^{2\tau_k({\bm \alpha})}}, \end{align*} where the last inequality comes from Proposition \ref{p:moment.face.count}. Further, since each $p_i^{(1)}$ is asymptotically bounded by $p_i$ times a positive constant for $i=q,\ldots, j$, the argument of the above proposition shows that for large enough $n$, $$ \text{\rm Var}(f_j^{(1)})\le C_j^{(1)} n^{2\tau_j ({\bm \alpha})-\tau_q({\bm \alpha})} \vee C_j^{(2)} n^{\tau_j({\bm \alpha})} $$ for some finite positive constants $C_j^{(1)}$ and $C_j^{(2)}$. Hence, \begin{align*} \mathbb{P} \Big( \sup_{0 \le t \le T/m} f_j(t) > \mathbb{E}(f_j) + \epsilon\mathbb{E}(f_k) \Big) &\le C \frac{n^{2\tau_j ({\bm \alpha})-\tau_q({\bm \alpha})} \vee n^{\tau_j({\bm \alpha})} }{n^{2\tau_k({\bm \alpha})}} \\ &\le Cn^{-\tau_q({\bm \alpha})} \le Cn^{-\tau_1({\bm \alpha})} = Cn^{-(2-\alpha_1)}. \end{align*} As $\alpha_1 = \psi_1({\bm \alpha}) \le \psi_k({\bm \alpha}) < 1$, we get $\sum_{n=1}^\infty n^{-(2-\alpha_1)}<\infty$, and so \eqref{e:Borel.Cantelli2} holds.
We now turn our attention to \eqref{e:Borel.Cantelli3}. The stationarity of $f_j(t)$ implies that $$ \mathbb{P} \Big( \inf_{0 \le t \le T}f_j(t) < \mathbb{E}(f_j) - \epsilon \mathbb{E}(f_k) \Big) \le m \mathbb{P} \Big( \inf_{0 \le t \le T/m}f_j(t) < \mathbb{E}(f_j) - \epsilon \mathbb{E}(f_k) \Big), $$ where this time $m$ is chosen so that $$ \prod_{i=q}^j \big( 1-(G_i)_e(T/m) \big)^{\binom{j+1}{i+1}} > 1-\frac{\epsilon}{2}. $$ Once again, we construct a new static multi-parameter simplicial complex $X([n], {\bf p}^{(2)})$ by setting this time $p_i^{(2)}=\mathbb{P}\big( \inf_{0\le t
\le T/m}\Delta_i(t)=1 \big)$ for $i\geq 1$. If $f_j^{(2)}$ is the $j$-face count in this static complex, then, $f_j^{(2)}\stackrel{st}{\le}\inf_{0\le t \le T/m}f_j(t)$. Notice that for $i \ge q$, $$ p_i^{(2)} \ge \mathbb{P}\big( \Delta_i(0)=1, \, D^{(i)} \ge T/m \big) = p_i \big( 1-(G_i)_e (T/m) \big), $$ so by the choice of $m$, $$ \mathbb{E}(f_j^{(2)}) \ge \Big( 1-\frac{\epsilon}{2} \Big) \mathbb{E}(f_j). $$ Proceeding as above we conclude that, for sufficiently large $n$, $$ \mathbb{P} \Big( \inf_{0 \le t \le T/m}f_j(t) < \mathbb{E}(f_j) - \epsilon \mathbb{E}(f_k) \Big) \le C\frac{\text{\rm Var}(f_j^{(2)})}{n^{2\tau_k({\bm \alpha})}}. $$ Noting that $p_{i}^{(2)} \leq p_i^{(1)}$, the same logic as above tells that $$ \text{\rm Var}(f_j^{(2)})\le C_j^{(1)} n^{2\tau_j ({\bm \alpha})-\tau_q({\bm \alpha})} \vee C_j^{(2)} n^{\tau_j({\bm \alpha})} $$ for some finite positive constants $C_j^{(1)}, C_j^{(2)}$, and \eqref{e:Borel.Cantelli3} follows in the same way as \eqref{e:Borel.Cantelli2} did.
The next step is to show that as $n\to\infty$, \begin{equation} \label{e:Euler.as.conv0}
\sup_{0\le t \le T} \frac{\big| \chi_n(t) - \mathbb{E}(\chi_n) \big|}{\mathbb{E}(f_k)} \to 0 \ \ \text{a.s.,} \end{equation} and by stationarity it is enough to prove that \begin{equation} \label{e:Euler.as.conv}
\sup_{0\le t \le T/m} \frac{\big| \chi_n(t) - \mathbb{E}(\chi_n) \big|}{\mathbb{E}(f_k)} \to 0 \ \ \text{a.s.} \end{equation} for an integer $m$ large enough so that $T/m\leq a/4$; the constant $a$ is given in the assumption \eqref{e:assumption.Gi}. It is not difficult to see that the choice of $m$ implies $(G_i)_e(T/m) \leq 1/2.$ Combining this with part (ii) of Lemma \ref{l:cond.prob.Delta} and recalling that $p_i^{(1)}=\mathbb{P}\big( \sup_{0\le t \le T/m}\Delta_i(t)=1 \big)$, we get $p_i^{(1)}\leq p_i(2-p_i)$. It is now elementary to check that there is a function $h:[0,\infty]\to [0,\infty]$ with $h(0)=0$, $h(\infty)=\infty$, and $h(\alpha)\in (0,\infty)$ for $0<\alpha<\infty$, such that \begin{equation} \label{e:alpha.tilde.rate} p_i^{(1)}\leq p_i (2-p_i) \le n^{-h(\alpha_i)} \ \ \text{if} \ \ p_i=n^{-\alpha_i}, \ i\geq 1; \end{equation}
(for example, one may take $h(\alpha)=\alpha-\log(2-2^{-\alpha})/\log2$). Define now $\tilde{\bm \alpha}$ by $\tilde\alpha_i=h(\alpha_i)$, $i=1,2,\ldots$. Then, $M(\tilde{\bm \alpha})$ defined by \eqref{e:M.alpha} is finite, and we use \eqref{e:EC.faces} to bound $$
\sup_{0\le t \le T/m} \frac{\big| \chi_n(t) - \mathbb{E}(\chi_n)
\big|}{\mathbb{E}(f_k)} \le \sum_{j=0}^{ M(\tilde{\bm \alpha})-1}
\sup_{0\le t \le T/m} \frac{\big| f_j(t) - \mathbb{E}(f_j) \big|}{\mathbb{E}(f_k)} + \sum_{j= M(\tilde{\bm \alpha})}^{n-1} \sup_{0\le t \le T/m}
\frac{\big| f_j(t) - \mathbb{E}(f_j) \big|}{\mathbb{E}(f_k)}. $$ By \eqref{e:first.claim.SLLN}, the first sum in the right hand side almost surely goes to $0$ as $n\to\infty$. For the second sum, we again use the Borel-Cantelli lemma by initially showing that, for every $\epsilon>0$, $$
\sum_{n=M(\tilde{\bm \alpha})+1}^\infty \mathbb{P} \Big( \sum_{j=M(\tilde{\bm \alpha}) }^{n-1} \sup_{0 \le t \le T/m} \big| f_j(t)-\mathbb{E}(f_j) \big| > \epsilon \mathbb{E}(f_k) \Big) < \infty. $$ Using Markov's inequality and recalling our notation for the face counts in the static multi-parameter simplicial complex $X([n], {\bf p}^{(1)}),$ we bound the above sum by \begin{equation} \label{e:Markov.fj1} \frac{2}{\epsilon}\sum_{n=M(\tilde{\bm \alpha})+1}^\infty\frac{1}{\mathbb{E}(f_k)} \sum_{j=M(\tilde{\bm \alpha})}^{n-1} \mathbb{E} \Big( \sup_{0\le t \le T/m}f_j(t) \Big) \le \frac{2C}{\epsilon}\sum_{n=1}^\infty \frac{1}{n^{\tau_1({\bm \alpha})}} \sum_{j=M(\tilde{\bm \alpha})}^\infty \mathbb{E} \big( f_j^{(1)} \big)<\infty \end{equation} since $\sum_{n=1}^\infty n^{-\tau_1({\bm \alpha})}<\infty$ and $\sum_{j= M(\tilde {\bm \alpha})}^\infty \mathbb{E} \big( f_j^{(1)} \big)\to 0$ as $n\to\infty$ by Corollary \ref{cor:neg.tau}. We have now obtained \eqref{e:Euler.as.conv} and, hence, also \eqref{e:Euler.as.conv0}.
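The choice of $h$ suggested above is easy to test numerically. The following short script is an illustration only, not part of the argument (the function names and the grid of values of $n$ and $\alpha$ are ours); it checks the bound $p_i(2-p_i)\le n^{-h(\alpha_i)}$ of \eqref{e:alpha.tilde.rate} for $p_i=n^{-\alpha_i}$ with $h(\alpha)=\alpha-\log(2-2^{-\alpha})/\log 2$:

```python
import math

# Illustrative sanity check of p(2 - p) <= n^{-h(alpha)} for p = n^{-alpha},
# with h(alpha) = alpha - log(2 - 2^{-alpha}) / log 2; the grid is arbitrary.

def h(alpha):
    return alpha - math.log(2 - 2 ** (-alpha)) / math.log(2)

def bound_holds(n, alpha):
    p = n ** (-alpha)
    # small tolerance: at n = 2 the two sides coincide exactly
    return p * (2 - p) <= n ** (-h(alpha)) + 1e-12

assert h(0) == 0  # h(0) = 0, as required of h
assert all(bound_holds(n, a)
           for n in (2, 3, 10, 100, 10 ** 6)
           for a in (0.01, 0.1, 0.5, 1.0, 2.0, 5.0))
```

For $n=2$ the two sides coincide by construction (since $n^{-h(\alpha)}=n^{-\alpha}(2-2^{-\alpha})^{\log_2 n}$), which is why a small floating-point tolerance is included.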
Finally, we can use \eqref{e:EC.faces} to write $$ \frac{\mathbb{E}(\chi_n)}{\mathbb{E}(f_k)} = (-1)^k + \frac{\sum_{j=0, \, j \neq
k}^{n-1}(-1)^j \mathbb{E}(f_j)}{\mathbb{E}(f_k)}. $$ With $M({\bm \alpha})$ defined by \eqref{e:M.alpha}, $$
\bigg| \frac{\sum_{j=0, \, j \neq k}^{n-1}(-1)^j \mathbb{E}(f_j)}{\mathbb{E}(f_k)}
\bigg| \le \sum_{j=0,\, j \neq k}^{M({\bm \alpha})-1}\frac{\mathbb{E}(f_j)}{\mathbb{E}(f_k)} + C \sum_{j=M({\bm \alpha})}^\infty \mathbb{E}(f_j) \to 0, \ \ \ n\to\infty $$ by Proposition \ref{p:moment.face.count} and Corollary \ref{cor:neg.tau}. Hence $\mathbb{E}(\chi_n)/\mathbb{E}(f_k) \to (-1)^k$, and \eqref{e:SLLN.Euler.char} follows. \end{proof}
We now prove the functional central limit theorem for the Euler characteristic.
\begin{proof}[Proof of \eqref{e:Euler.characteristic.func.conv} in
Theorem \ref{t:clt.topological.invariants}] Note, first of all, that for every $M\ge k+1$ the truncated Euler characteristic $$ \chi_n^{(M)}(t) = \sum_{j=0}^{M-1} (-1)^j f_j(t) $$ satisfies, in terms of convergence of the finite-dimensional distributions, $$ \left( \frac{\chi_n^{(M)}(t) - \mathbb{E}(\chi_n^{(M)})}{\sqrt{\text{\rm Var} (f_{k,n})}}, \, t\geq
0\right) \Rightarrow \bigl( Z_k(t), \, t\geq 0\bigr). $$ This follows from finite-dimensional convergence in Proposition \ref{p:clt.face.counts} and the fact that by \eqref{e:critical.dom} and Chebyshev's inequality, $$ \frac{f_j(t)-\mathbb{E}(f_j)}{\sqrt{\text{\rm Var}(f_k)}} \stackrel{p}{\to} 0, \ \ \ n\to\infty, $$ for each $j\not= k$.
Choosing now $M=M({\bm \alpha})$ defined by \eqref{e:M.alpha}, we have by Corollary \ref{cor:neg.tau} that
\begin{equation} \label{e:Markov.chi}
\mathbb{P}\bigg( \bigg|\frac{\chi_n(t) - \mathbb{E}(\chi_n)}{\sqrt{\text{\rm Var} (f_k)}}
-\frac{\chi_n^{(M({\bm \alpha}))} (t)- \mathbb{E}(\chi_n^{(M({\bm \alpha}))})}{\sqrt{\text{\rm Var} (f_k)}} \bigg| >\epsilon \bigg) \le \frac{2}{\epsilon
\sqrt{\text{\rm Var}(f_k)}}\sum_{j=M({\bm \alpha})}^\infty \mathbb{E}(f_j) \to 0 \end{equation} as $n\to\infty$ for any $\epsilon>0$. Therefore, $$ \frac{\chi_n(t) - \mathbb{E}(\chi_n)}{\sqrt{\text{\rm Var} (f_k)}} -\frac{\chi_n^{(M({\bm \alpha}))}(t) - \mathbb{E}(\chi_n^{(M({\bm \alpha}))})}{\sqrt{\text{\rm Var} (f_k)}} \stackrel{p}{\to} 0, $$ so we have established \eqref{e:Euler.characteristic.func.conv} in terms of convergence of the finite-dimensional distributions.
Assuming \eqref{e:cond.regularity} and \eqref{e:sharp.drop.k+1}, we now establish tightness in the Skorohod $J_1$-topology. Denote $$ M_1({\bm \alpha})= \min \big\{ i>k: \tau_i({\bm \alpha})< \tau_q({\bm \alpha})\big\}. $$ Fix $T>0$ and choose $m$ so that $T/m \le a/4,$ where $a$ is the constant from \eqref{e:assumption.Gi}. Recall once again the notation $p_i^{(1)}=\mathbb{P}\big( \sup_{0\le t \le T/m} \Delta_i(t)=1 \big)$, $i\ge 1$, so that $p_i^{(1)} \le p_i (2-p_i) \le n^{-\tilde \alpha_i},$ where $\tilde {\bm \alpha} = (\tilde \alpha_1, \tilde \alpha_2, \dots)$ is as defined below \eqref{e:alpha.tilde.rate}. Note that $M_1({\bm \alpha}) \le M({\bm \alpha}) \leq M(\tilde {\bm \alpha}) <\infty$, where $M({\bm \alpha})$ and $M(\tilde {\bm \alpha})$ are as defined in \eqref{e:M.alpha}. Recall also that for $j \ge q$, $f_j^{(1)}$ is the $j$-face count in $X([n], {\bf p}^{(1)})$, such that $f_j^{(1)} \stackrel{st}{\ge} \sup_{0 \le t \le T/m} f_j(t)$. Write \begin{align*} \chi_n(t) &= \sum_{j=0}^{M_1({\bm \alpha})-1} (-1)^j f_j(t)+
\sum_{j=M_1({\bm \alpha})}^{n-1} (-1)^j f_j(t) =: \chi^{(1)}_n(t)
+\chi^{(2)}_n(t), \ 0\leq t\leq T. \end{align*} We start with proving that, as $n\to\infty$, $$
\frac{\sup_{0\leq t\leq T} \big| \chi^{(2)}_n(t) - \mathbb{E}(\chi^{(2)}_n) \big|}{\sqrt{\text{\rm Var}
(f_k)}} \stackrel{p}{\to} 0. $$ By stationarity, it suffices to show that \begin{equation} \label{e:chi2and3}
\frac{\sup_{0\leq t\leq T/m} \big| \chi^{(2)}_n(t) - \mathbb{E}(\chi^{(2)}_n) \big|}{\sqrt{\text{\rm Var}
(f_k)}} \stackrel{p}{\to} 0. \end{equation}
Let $\epsilon > 0$ be arbitrary. Then, by Markov's inequality, for all sufficiently large $n,$
\begin{align}
\mathbb{P} \big( \sup_{0\le t \le T/m} \big| \chi_n^{(2)}-\mathbb{E}(\chi_n^{(2)}) \big| > \epsilon \sqrt{\text{\rm Var}(f_k)} \big) &\le \frac{2}{\epsilon \sqrt{\text{\rm Var}(f_k)}}\, \sum_{j=M_1({\bm \alpha})}^{n-1} \mathbb{E} \big[ \sup_{0\le t \le T/m} f_j(t) \big] \label{e:Markov.chi2}\\ &\le \frac{2}{\epsilon \sqrt{\text{\rm Var}(f_k)}}\, \sum_{j=M_1({\bm \alpha})}^{n-1} \mathbb{E}(f_j^{(1)}) \notag \\ &\le \frac{2}{\epsilon} \sum_{j=M_1({\bm \alpha})}^{M(\tilde {\bm \alpha})-1} \frac{\prod_{i=q}^j 2^{\binom{j+1}{i+1}}\mathbb{E}(f_j)}{\sqrt{\text{\rm Var}(f_k)}} +\frac{2}{\epsilon} \sum_{j=M(\tilde {\bm \alpha})}^\infty \mathbb{E}(f_j^{(1)}), \notag \end{align} where the last inequality is due to Proposition \ref{p:moment.face.count}, together with the fact that $p_i^{(1)} \le 2p_i$. The second term vanishes because $\sum_{j=M(\tilde {\bm \alpha})}^\infty \mathbb{E}(f_j^{(1)}) \to 0$, as $n\to\infty,$ by Corollary \ref{cor:neg.tau}. On the other hand, the first term vanishes since, by \eqref{e:sharp.drop.k+1}, $$ \mathbb{E}(f_j) \le n^{\tau_j({\bm \alpha})} \le n^{\tau_{k+1}({\bm \alpha})} = o \big( n^{\tau_k({\bm \alpha}) -\tau_q({\bm \alpha})/2} \big) = o \big( \sqrt{\text{\rm Var}(f_k)} \big), \ \ n\to\infty. $$ Now \eqref{e:chi2and3} follows as desired, and so it remains to prove tightness of the process $\big( \chi_n^{(1)}(t), \, 0 \le t \le T \big)$. To this end, it is enough to show the existence of $B\in (0,\infty)$ such that \begin{equation} \label{e:tightness.Thm13.5.Euler} \frac{\mathbb{E} \Big[ \big( \chi^{(1)}_n(t)-\chi^{(1)}_n(s) \big)^2 \big(
\chi^{(1)}_n(s)-\chi^{(1)}_n(r) \big)^2 \Big]}{n^{4\tau_k({\bm \alpha})-2\tau_q({\bm \alpha})}} \le B (t-r)^{1+\gamma} \end{equation} for all $0\leq r\leq s \leq t\leq T$ and $n\ge 1$. In the course of the proof, we will also establish \eqref{e:tightness.Thm13.5.face.counts}, needed for the tightness in Proposition \ref{p:clt.face.counts}.
We begin by setting up the notation. For $q+1 \le j_1, j_2 <M_1({\bm \alpha})$ and $0 \le r \le s \le t \le T,$ denote $$ F_{j_1,j_2}(t,s,r) := \mathbb{E} \Big[ \big( f_{j_1}(t)-f_{j_1}(s) \big)^2 \big( f_{j_2}(s)-f_{j_2}(r) \big)^2 \Big]. $$ Consider a potential subcomplex $\bar \sigma$ in $[n]$ consisting of the 4 simplices $\sigma_1, \sigma_2, \sigma_3, \sigma_4$ and their faces, with
$|\sigma_1|=|\sigma_2|=j_1+1$, $|\sigma_3|=|\sigma_4|=j_2+1$, and let $$
a_{ij} = |\sigma_i \cap \sigma_j|, \ \ 1 \le i < j \le 4, \ \ \ a_{ijk} = |\sigma_i \cap \sigma_j \cap \sigma_k|, \ \ 1 \le i < j < k \le 4, $$ $$
a_{1234} = |\sigma_1 \cap \sigma_2 \cap \sigma_3 \cap \sigma_4|. $$ The number of $i$-faces in $\bar \sigma$ is \begin{align*} \text{comb}_i(\bar \sigma) &:= 2\binom{j_1+1}{i+1} + 2 \binom{j_2+1}{i+1} - \binom{a_{12}}{i+1} - \binom{a_{13}}{i+1} - \binom{a_{14}}{i+1} - \binom{a_{23}}{i+1} \\ &\quad - \binom{a_{24}}{i+1} - \binom{a_{34}}{i+1} +
\binom{a_{123}}{i+1} + \binom{a_{124}}{i+1} + \binom{a_{134}}{i+1} + \binom{a_{234}}{i+1} - \binom{a_{1234}}{i+1}; \end{align*} it depends only on $j_1, j_2$, and ${\bf a} =(a_{12}, \dots, a_{1234})$. We let \begin{align} \Psi({\bf a}, {\bm \alpha})&:=\tau_{a_{12}-1}({\bm \alpha}) + \tau_{a_{13}-1}({\bm \alpha}) + \tau_{a_{14}-1}({\bm \alpha}) + \tau_{a_{23}-1}({\bm \alpha}) + \tau_{a_{24}-1}({\bm \alpha}) + \tau_{a_{34}-1}({\bm \alpha}) \label{e:psi.ba.al} \\ &\quad - \tau_{a_{123}-1}({\bm \alpha})- \tau_{a_{124}-1}({\bm \alpha})- \tau_{a_{134}-1}({\bm \alpha})- \tau_{a_{234}-1}({\bm \alpha}) + \tau_{a_{1234}-1}({\bm \alpha}) \notag \end{align} (with $\tau_{-1}({\bm \alpha})\equiv0$). By independence, \begin{align} F_{j_1, j_2}(t,s,r) &= \sum_{\bar \sigma \subset \Xi(j_1, j_2)} \mathbb{E} \Big[ \big( \xi_{\sigma_1}(t) - \xi_{\sigma_1}(s) \big) \big( \xi_{\sigma_2}(t) - \xi_{\sigma_2}(s) \big) \big( \xi_{\sigma_3}(s) - \xi_{\sigma_3}(r) \big) \big( \xi_{\sigma_4}(s) - \xi_{\sigma_4}(r) \big)\Big] \notag\\ &=: \sum_{\bar \sigma \subset \Xi(j_1, j_2)} \mathbb{E} \big[ g(t,s,r; \bar \sigma) \big], \label{e:expectation.g} \end{align} with the summation restricted to the set \begin{align*}
\Xi(j_1,j_2) &= \big\{ \bar \sigma = (\sigma_1, \dots, \sigma_4): \, |\sigma_1|=|\sigma_2|=j_1+1, \, |\sigma_3|=|\sigma_4|=j_2+1, \\ & \qquad \quad \text{and } (\sigma_1,\dots,\sigma_4) \text{ satisfies at least one of the conditions in \eqref{e:def.as} below} \big\}:
\end{align*}
\begin{align} &\text{(i)} \, a_{12}\ge q+1, a_{34}\ge q+1, \ \ \text{(ii)}\, a_{13}\ge q+1, a_{24}\ge q+1, \ \ \text{(iii)} \, a_{14}\ge q+1, a_{23} \ge q+1, \notag \\ &\text{(iv)}\, a_{12} \ge q+1, a_{13}\ge q+1, a_{14} \ge q+1, \ \ \text{(v)}\, a_{12} \ge q+1, a_{23}\ge q+1, a_{24} \ge q+1, \label{e:def.as} \\ &\text{(vi)}\, a_{13} \ge q+1, a_{23}\ge q+1, a_{34} \ge q+1, \ \ \text{(vii)}\, a_{14} \ge q+1, a_{24}\ge q+1, a_{34} \ge q+1. \notag \end{align}
Indeed, if none of the conditions in \eqref{e:def.as} holds, then the corresponding term in \eqref{e:expectation.g} vanishes by independence and stationarity.
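The inclusion--exclusion count $\text{comb}_i(\bar \sigma)$ can be cross-checked against a direct enumeration on small examples. The sketch below is illustrative only (the four vertex sets and all function names are our own choices); it compares the formula with a brute-force count of the $i$-faces of the complex spanned by $\sigma_1,\dots,\sigma_4$:

```python
from itertools import combinations
from math import comb

def comb_i_formula(i, simplices):
    """Inclusion-exclusion count of i-faces in the complex spanned by the
    four simplices (given as vertex tuples) and their faces."""
    total = 0
    for r in range(1, 5):                    # subsets of {1,2,3,4} of size r
        for idx in combinations(range(4), r):
            inter = set.intersection(*(set(simplices[j]) for j in idx))
            # comb(n, k) is 0 when k > n, so empty intersections drop out
            total += (-1) ** (r + 1) * comb(len(inter), i + 1)
    return total

def comb_i_bruteforce(i, simplices):
    """Directly enumerate (i+1)-subsets contained in at least one simplex."""
    union = set().union(*map(set, simplices))
    return sum(1 for face in combinations(sorted(union), i + 1)
               if any(set(face) <= set(s) for s in simplices))

# an arbitrary example with j_1 = j_2 = 3 (four vertices per simplex)
sigma = [(0, 1, 2, 3), (2, 3, 4, 5), (0, 2, 4, 6), (1, 3, 5, 6)]
assert all(comb_i_formula(i, sigma) == comb_i_bruteforce(i, sigma)
           for i in range(4))
```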
Our goal is to bound the expectation $\mathbb{E}\big[ g(t,s,r; \bar \sigma) \big]$ in \eqref{e:expectation.g}. Note that $g(t, s, r; \bar \sigma) \in \{-1, 0, +1\}.$ Hence, for $g(t, s, r; \bar \sigma)$ not to vanish, every $i$-face of the simplex $\sigma_1$ must exist either at time $s$ or at time $t$, $i=q,\ldots, j_1$, and the same is true for the simplex $\sigma_2$. Similarly, every $i$-face of the simplex $\sigma_3$ must exist either at time $r$ or at time $s$, $i=q,\ldots, j_2$, and the same is true for the simplex $\sigma_4$. The probability that this happens is bounded from above by \begin{equation} \label{onetime.prod} 16 \prod_{i=q}^{j_1\vee j_2} p_i^{\text{comb}_i(\bar \sigma)}, \end{equation} where we take into account only the first (smallest) time a face exists if it is required to exist multiple times. Additionally, at least one face of the complex spanned by the simplices $\sigma_1,\sigma_2$ must switch from existence to non-existence, or vice versa, between times $s$ and $t$, and at least one face of the complex spanned by the simplices $\sigma_3,\sigma_4$ must switch from existence to non-existence, or vice versa, between times $r$ and $s$. This may be the same face or two different faces. Let us denote the corresponding (non-disjoint) events by $A_1$ and $A_2$. Consider the event $A_1$ first. The number of possible faces that can change their status does not exceed the total number of faces in $\bar\sigma$, which is, in turn, bounded by $2^{2(j_1+j_2)}$. For such an $i$-face, the probability $p_i$ in \eqref{onetime.prod} will be replaced by one of the following two probabilities: $$ \mathbb{P}\bigl( \Delta_i(r)=0, \Delta_i(s)=1, \Delta_i(t)=0\bigr) $$ and $$ \mathbb{P}\bigl( \Delta_i(r)=1, \Delta_i(s)=0,\Delta_i(t)=1\bigr), $$ both of which are bounded by $(2c/a)p_i(t-r)^{1+\gamma}$ by Lemma \ref{l:triples}. Therefore, $$ \mathbb{P}(A_1)\leq C 2^{2(j_1+j_2)}(t-r)^{1+\gamma} \prod_{i=q}^{j_1\vee j_2} p_i^{\text{comb}_i(\bar \sigma)}.
$$ Considering the event $A_2$ now, we see that the number of possible pairs of faces that can change their status does not exceed $2^{4(j_1+j_2)}$. For each such pair of an $i_1$-face and an $i_2$-face, the product $p_{i_1}p_{i_2}$ in \eqref{onetime.prod} will be, up to renaming, replaced by $$ \mathbb{P}\bigl( \Delta_{i_1}(r)=1,\Delta_{i_1}(s)=0\bigr) \mathbb{P}\bigl( \Delta_{i_2}(s)=1,\Delta_{i_2}(t)=0\bigr), $$
or similar expressions obtained by flipping $1$s and $0$s. By Lemma \ref{l:cond.prob.Delta}, any such expression is bounded by $$ p_{i_1}p_{i_2} \left(\frac{2}{a}\right)^2 (t-r)^2. $$ Since $\gamma\leq 1$, we conclude that $$ \mathbb{P}(A_2)\leq C 2^{4(j_1+j_2)} (t-r)^{1+\gamma}\prod_{i=q}^{j_1\vee j_2} p_i^{\text{comb}_i(\bar \sigma)}, $$ and so \begin{equation} \label{e:comb.prod}
\mathbb{E} \big[ |g(t,s,r; \bar \sigma) | \big] \leq C 2^{4(j_1+j_2)} (t-r)^{1+\gamma}\prod_{i=q}^{j_1\vee j_2} p_i^{\text{comb}_i(\bar \sigma)}. \end{equation}
Substituting this back into \eqref{e:expectation.g}, we obtain \begin{align*} F_{j_1, j_2}(t,s,r) &\le C 2^{4(j_1+j_2)} (t-r)^{1+\gamma} \sum_{\bar \sigma \in \Xi(j_1, j_2)} \prod_{i=q}^{j_1 \vee j_2} p_i^{\text{comb}_i(\bar \sigma)} \\ &= C 2^{4(j_1+j_2)} (t-r)^{1+\gamma} \sum_{{\bf a} \in \mathcal A} \sum_{\bar \sigma \in \Xi(j_1, j_2)}
\hspace{-10pt} {\mathbbm 1} \big\{
|\sigma_1 \cap \sigma_2|= a_{12}, \, |\sigma_1 \cap
\sigma_3|=a_{13}, \dots, \\
&\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad |\sigma_1 \cap \sigma_2 \cap \sigma_3 \cap \sigma_4| =a_{1234} \big\} \prod_{i=q}^{j_1 \vee j_2}
p_i^{\text{comb}_i(\bar \sigma)}, \end{align*}
where $\mathcal A$ is the collection of ${\bf a} = (a_{12}, \dots, a_{1234})$ satisfying at least one of the conditions in \eqref{e:def.as}. Note that $\text{comb}_i(\bar \sigma)$ depends only on ${\bf a}$, and for any ${\bf a}$, \begin{align*} &\sum_{\bar \sigma \in \Xi(j_1, j_2)} \hspace{-10pt} {\mathbbm 1} \big\{
|\sigma_1 \cap \sigma_2|= a_{12}, \, |\sigma_1 \cap
\sigma_3|=a_{13}, \dots, |\sigma_1 \cap \sigma_2 \cap \sigma_3 \cap \sigma_4| =a_{1234} \big\} \le n^{\text{comb}_0(\bar \sigma)}. \end{align*} Since $$ n^{\text{comb}_0(\bar \sigma)} \prod_{i=q}^{j_1\vee j_2} p_i^{\text{comb}_i(\bar \sigma)} = n^{2(\tau_{j_1}({\bm \alpha})+\tau_{j_2}({\bm \alpha}))-\Psi({\bf a}, {\bm \alpha})} $$ with $\Psi({\bf a}, {\bm \alpha})$ given in \eqref{e:psi.ba.al}, we obtain \begin{align} F_{j_1, j_2}(t,s,r) &\le C 2^{4(j_1+j_2)} (t-r)^{1+\gamma} \sum_{{\bf a} \in \mathcal A} n^{2(\tau_{j_1}({\bm \alpha})+\tau_{j_2}({\bm \alpha}))-\Psi({\bf a}, {\bm \alpha})}. \label{e:final.Fj1j2} \end{align}
We proceed with the following lemma. \begin{lemma} \label{l:ratio.4thmoment} For $q+1 \le j_1, j_2 < M_1({\bm \alpha})$ and ${\bf a} = (a_{12}, \dots, a_{1234})\in \mathcal A$, we have \begin{equation} \label{e:ratio.4thmoment.1} \frac{n^{2(\tau_{j_1}({\bm \alpha})+\tau_{j_2}({\bm \alpha}))-\Psi({\bf a}, {\bm \alpha})}}{n^{4\tau_k({\bm \alpha})-2\tau_q({\bm \alpha})}} \le 1. \end{equation} \end{lemma} \begin{proof} Notice that \begin{align*} D &:= 2(\tau_{j_1}({\bm \alpha})+\tau_{j_2}({\bm \alpha}))-\Psi({\bf a}, {\bm \alpha}) \\ &\leq
2(\tau_{j_1}({\bm \alpha})+\tau_{j_2}({\bm \alpha}))
- \tau_{a_{12}-1}({\bm \alpha})-\tau_{a_{13}-1}({\bm \alpha})- \tau_{a_{14}-1}({\bm \alpha}) -
\tau_{a_{23}-1}({\bm \alpha}) - \tau_{a_{24}-1}({\bm \alpha}) -
\tau_{a_{34}-1}({\bm \alpha}) \\
&\quad +\tau_{a_{123}-1}({\bm \alpha})+ \tau_{a_{124}-1}({\bm \alpha})+
\tau_{a_{134}-1}({\bm \alpha})+ \tau_{a_{234}-1}({\bm \alpha}), \end{align*} and, by the choice of $j_1,j_2,$ all the terms $\tau_\cdot({\bm \alpha})$ in the right hand side are non-negative. Since the sequence $(\tau_i({\bm \alpha}), \, i\ge -1)$ is unimodal -- it increases until $i=k$ and then decreases -- we have \begin{align} \label{e:unim.bounds} &\tau_{a_{12}-1}({\bm \alpha})\geq \min\bigl( \tau_{j_1}({\bm \alpha}), \tau_{a_{123}-1}({\bm \alpha})\vee
\tau_{a_{124}-1}({\bm \alpha})\bigr), \\ &\tau_{a_{13}-1}({\bm \alpha})\geq \min\bigl( \tau_{j_1}({\bm \alpha}) \vee
\tau_{j_2}({\bm \alpha}), \tau_{a_{123}-1}({\bm \alpha})\vee
\tau_{a_{134}-1}({\bm \alpha})\bigr), \notag\\ &\tau_{a_{14}-1}({\bm \alpha})\geq \min\bigl( \tau_{j_1}({\bm \alpha}) \vee
\tau_{j_2}({\bm \alpha}), \tau_{a_{124}-1}({\bm \alpha})\vee
\tau_{a_{134}-1}({\bm \alpha})\bigr), \notag \\ &\tau_{a_{23}-1}({\bm \alpha})\geq \min\bigl( \tau_{j_1}({\bm \alpha}) \vee
\tau_{j_2}({\bm \alpha}), \tau_{a_{123}-1}({\bm \alpha})\vee
\tau_{a_{234}-1}({\bm \alpha})\bigr), \notag \\ &\tau_{a_{24}-1}({\bm \alpha})\geq \min\bigl( \tau_{j_1}({\bm \alpha}) \vee
\tau_{j_2}({\bm \alpha}), \tau_{a_{124}-1}({\bm \alpha})\vee
\tau_{a_{234}-1}({\bm \alpha})\bigr), \notag \\ &\tau_{a_{34}-1}({\bm \alpha})\geq \min\bigl(
\tau_{j_2}({\bm \alpha}), \tau_{a_{134}-1}({\bm \alpha})\vee
\tau_{a_{234}-1}({\bm \alpha})\bigr). \notag \end{align} Since ${\bf a} \in \mathcal A$, at least one of the 7 conditions in \eqref{e:def.as} holds. We will consider in detail what happens under condition $(i)$; the situation under the other conditions is similar.
Under condition $(i)$ in \eqref{e:def.as} the first and the last bounds in \eqref{e:unim.bounds} are supplemented by the bounds $ \tau_{a_{12}-1}({\bm \alpha})\geq \tau_q({\bm \alpha})$,
$ \tau_{a_{34}-1}({\bm \alpha})\geq \tau_q({\bm \alpha})$. We now use the remaining 4 inequalities in \eqref{e:unim.bounds}. Note that $\tau_{a_{13}-1}({\bm \alpha})$ ``kills" (i.e., is at least as large as) $\tau_{j_1}({\bm \alpha})$,
$\tau_{j_2}({\bm \alpha})$ or $\tau_{a_{123}-1}({\bm \alpha})$. Similarly, $\tau_{a_{14}-1}({\bm \alpha})$ ``kills"
$\tau_{j_1}({\bm \alpha})$, $\tau_{j_2}({\bm \alpha})$ or
$\tau_{a_{134}-1}({\bm \alpha})$. Further, $\tau_{a_{23}-1}({\bm \alpha})$ ``kills"
$\tau_{j_1}({\bm \alpha})$, $\tau_{j_2}({\bm \alpha})$ or
$\tau_{a_{234}-1}({\bm \alpha})$. Finally, $\tau_{a_{24}-1}({\bm \alpha})$ ``kills"
$\tau_{j_1}({\bm \alpha})$, $\tau_{j_2}({\bm \alpha})$ or
$\tau_{a_{124}-1}({\bm \alpha})$. This leaves 4 non-negative terms in the
upper bound for $D$, none of which exceeds $\tau_k({\bm \alpha})$, so
$D\leq 4\tau_k({\bm \alpha}) -2\tau_q({\bm \alpha})$, as required. \end{proof}
Since $\mathcal A$ is parameterized by the $11$ variables $a_{12}, \ldots, a_{1234},$ its cardinality does not exceed $(j_1+j_2+1)^{11}.$ Hence, by Lemma \ref{l:ratio.4thmoment} and \eqref{e:final.Fj1j2}, \begin{align*} & \frac{\mathbb{E} \Big[ \big( \chi^{(1)}_n(t)-\chi^{(1)}_n(s) \big)^2 \big(
\chi^{(1)}_n(s)-\chi^{(1)}_n(r) \big)^2
\Big]}{n^{4\tau_k({\bm \alpha})-2\tau_q({\bm \alpha})}} \le
\sum_{j_1=q+1}^{M_1({\bm \alpha})-1}\sum_{j_2=q+1}^{M_1({\bm \alpha})-1}
\frac{F_{j_1,
j_2}(t,s,r)}{n^{4\tau_k({\bm \alpha})-2\tau_q({\bm \alpha})}}
\leq B (t-r)^{1+\gamma} \end{align*} for some $0 < B<\infty$, as required for \eqref{e:tightness.Thm13.5.Euler}. \end{proof}
\section{Proofs of the limit theorems for the Betti numbers in the critical dimension} \label{s:proofs.betti}
Once again, we start with the strong law of large numbers.
\begin{proof}[Proof of \eqref{e:SLLN.betti} in Theorem \ref{t:SLLN}] For $0<T<\infty$, we have to demonstrate that $$
\sup_{0 \le t \le T}\frac{\big| \beta_k(t)-\mathbb{E}(f_k) \big|}{\mathbb{E}(f_k)} \to 0 \ \ \text{a.s.} $$ By the Morse inequalities $$ f_k(t)-f_{k+1}(t)-f_{k-1}(t) \le \beta_k(t) \le f_k(t), $$ we have $$
\big| \beta_k(t)-\mathbb{E}(f_k) \big| \le \big| f_k(t)-\mathbb{E}(f_k) \big| + f_{k+1}(t) + f_{k-1}(t). $$ By \eqref{e:first.claim.SLLN} with $j=k$, it is enough to prove that as $n\to\infty$, $$ \sup_{0\le t \le T}\frac{f_{k+1}(t)}{\mathbb{E}(f_k)} \to 0 \ \ \text{a.s.} \ \ \text{and } \sup_{0\le t \le T}\frac{f_{k-1}(t)}{\mathbb{E}(f_k)} \to 0 \ \ \text{a.s.} $$ This is, however, an immediate conclusion of \eqref{e:first.claim.SLLN} with $j=k\pm 1$, since by Proposition \ref{p:moment.face.count}, $$ \lim_{n\to\infty} \frac{\mathbb{E}( f_{k+1})}{\mathbb{E}( f_k)} = \lim_{n\to\infty} \frac{\mathbb{E}( f_{k-1})}{\mathbb{E}( f_k)} = 0. $$ \end{proof}
We continue with the functional central limit theorem for Betti numbers.
\begin{proof}[Proof of \eqref{e:betti.func.conv} in Theorem \ref{t:clt.topological.invariants}] For convenience, we drop the subscript $n$ in expressions such as $\beta_{j,n}$ for the duration of the proof. We start by introducing some terminology related to the connectivity of a simplicial complex; it is analogous to the terminology used in \cite{kahle:2009} and \cite{fowler:2019}. An $\ell$-dimensional simplicial complex $X$ is called \textit{pure} if every face of $X$ is contained in an $\ell$-face. A simplicial complex $K$ is said to be \textit{strongly connected} of order $\ell$ if the following two conditions hold:
\begin{itemize} \item The $\ell$-skeleton of $K$ is pure. \item Every pair of $\ell$-faces $\sigma, \tau \in K$ can be connected by a sequence of $\ell$-faces, $$ \sigma=\sigma_0, \sigma_1, \dots, \sigma_{j-1}, \sigma_j = \tau $$ for some $j\geq 1$, such that $\text{dim}(\sigma_i \cap \sigma_{i+1}) = \ell-1$, $0 \le i \le j-1$. \end{itemize} In this case, we will simply say that $K$ is an $\ell$-strongly connected simplicial complex. Note that the dimension of $K$ itself may be greater than $\ell$. We call an $\ell$-strongly connected subcomplex $K$ of $X$ \textit{maximal} if there is no other $\ell$-strongly connected subcomplex $K' \supset K$. We start with a useful estimate similar to the computation in \cite{fowler:2019}, p.117.
\begin{lemma} \label{l:formula.fowler} Let $K$ be a $(k+1)$-strongly connected simplicial complex
on $j\geq k+3$ vertices with a non-zero $(k+1)$-st Betti number. Then, for $\sigma \subset [n]$ with $|\sigma|=j$, \begin{equation*} \mathbb{P}(\text{the restriction of $X([n],{\bf p})$ to $\sigma$ is isomorphic to } K) \le j!\prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}} \Big( \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \Big)^{j-k-3}. \end{equation*} \end{lemma} \begin{proof} The argument consists of estimating the number of faces of different dimensions $K$ has to contain. We start by denoting by $m$ the number of the $(k+1)$-faces in $K$. We order these faces as follows. Fix an arbitrary $(k+1)$-cycle in $K$ and choose any $(k+1)$-face from this cycle to be $f_1$. Since $K$ is $(k+1)$-strongly connected, we can list the $(k+1)$-faces as $f_1, \ldots, f_m$ in such a way that each $f_p$, $p>1,$ has a $k$-dimensional intersection with at least one $f_{p'}$ with $p'<p$. This ordering of the $(k+1)$-faces induces an ordering on the vertices in $K$, as follows. First, let $v_1,\dots,v_{k+2}$ be the vertices, chosen in an arbitrary order, in the support of $f_1.$ Each vertex after $v_{k+2}$ corresponds to the addition of a $(k+1)$-face $f_\ell$, in that it lies in the support of $f_{\ell}$ but is not contained in $f_1\cup \dots \cup f_{\ell-1}$. Since each vertex of $K$ belongs to some $(k+1)$-face, we obtain, in this way, an ordering $v_{k+3}, \dots, v_j$ of all remaining vertices in $K$. Note at this point that each vertex after $v_{k+2}$, for each $1 \le i \le k+1$,
is a vertex of $\binom{k+1}{i}$ new $i$-faces of the
$(k+1)$-face $f_\ell$ being considered at that point. We let $$ c = \max\{ k+3 \le m \le j: v_m \text{ is a vertex of the initially fixed } (k+1)\text{-cycle} \} $$ and note that $c$ is well defined since the cycle must contain at least $k+3$ vertices. The corresponding vertex $v_c$ is, actually, contained in at least $k+2$ faces of dimension $k+1$, just as other vertices in the initially fixed $(k+1)$-cycle.
Furthermore, $v_c$ is contained in the fewest number of $i$-faces if it is part of exactly $k + 2$ faces of dimension $k + 1.$ The latter occurs when, excluding $v_c,$ there are precisely $k + 2$ other vertices in this cycle and they together form a $(k+1)$-face. Therefore, when $v_c$ entered our enumeration of the vertices, for each $1\le i \le k+1$, it was a vertex of at least $\binom{k+2}{i}$ new $i$-faces in $K$. We now see that for each $1\le i \le k+1$, \begin{itemize} \item $f_1$ contains $\binom{k+2}{i+1}$ distinct $i$-faces in $K$; \item each vertex in $\{ v_{k+3}, \dots, v_j \}\setminus \{ v_c \}$
corresponds to $\binom{k+1}{i}$ new distinct $i$-faces in $K$; \item $v_c$ corresponds to at least $\binom{k+2}{i}$ new distinct $i$-faces
in $K$. \end{itemize} Therefore, for each $1\le i \le k+1$, $K$ contains at least $$ \binom{k+2}{i+1} + (j-k-3)\binom{k+1}{i} + \binom{k+2}{i} = \binom{k+3}{i+1} + (j-k-3)\binom{k+1}{i} $$ $i$-faces. Finally, since there are $j!$ ways of ordering vertices in $\sigma$, we get the assertion of the lemma. \end{proof}
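The binomial simplification at the end of the proof is Pascal's rule, $\binom{k+2}{i+1}+\binom{k+2}{i}=\binom{k+3}{i+1}$. As a quick numerical confirmation of the resulting lower bound (an illustration only; the function name and parameter ranges are our own choices):

```python
from math import comb

def min_i_faces(j, k, i):
    # lower bound on the number of i-faces of K derived in the lemma:
    # C(k+3, i+1) + (j - k - 3) * C(k+1, i)
    return comb(k + 3, i + 1) + (j - k - 3) * comb(k + 1, i)

# Pascal's rule collapses the three itemized contributions into the bound
assert all(
    comb(k + 2, i + 1) + (j - k - 3) * comb(k + 1, i) + comb(k + 2, i)
    == min_i_faces(j, k, i)
    for k in range(1, 7)
    for j in range(k + 3, k + 10)
    for i in range(1, k + 2)
)
```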
By \eqref{e:Euler.characteristic}, the already established convergence in \eqref{e:Euler.characteristic.func.conv} tells us that \begin{equation} \label{e:rephrase.to.Betti} \left( \frac{\sum_{j=0}^{n-1} (-1)^j \beta_j (t)- \mathbb{E} \big(
\sum_{j=0}^{n-1} (-1)^j \beta_j \big)}{\sqrt{\text{\rm Var}
(f_{k})}}, \, t\geq 0\right) \Rightarrow \bigl(Z_k(t), \, t\geq 0\bigr) \end{equation} in finite dimensional distributions. In order to prove convergence in finite dimensional distributions in \eqref{e:betti.func.conv}, we need to show that all (normalized) Betti numbers except that of critical dimension are asymptotically negligible in \eqref{e:rephrase.to.Betti}. Proposition \ref{p:vanishing.lower.order.betti} in the Appendix shows negligibility of the Betti numbers in dimension smaller than the critical dimension. Together with \eqref{e:rephrase.to.Betti}, this gives that $$ \left( \frac{\sum_{j=k}^{n-1} (-1)^j \beta_j(t) - \mathbb{E} \big(
\sum_{j=k}^{n-1} (-1)^j \beta_j \big)}{\sqrt{\text{\rm Var} (f_k)}}, \,
t\geq 0\right) \Rightarrow \bigl( Z_k(t), \, t\geq 0\bigr). $$
Furthermore, by repeating the same argument as in \eqref{e:Markov.chi}, together with the obvious bound $\beta_j \le f_j$, we obtain that $$ \left( \frac{\sum_{j=M({\bm \alpha})}^{n-1} (-1)^j \beta_j(t) - \mathbb{E} \big( \sum_{j=M({\bm \alpha})}^{n-1} (-1)^j \beta_j \big)}{\sqrt{\text{\rm Var} (f_k)}}, \,
t\geq 0\right) \to {\bf 0}, $$ in finite-dimensional distributions, where $M({\bm \alpha})$ is defined in \eqref{e:M.alpha} and $\bf 0$ is the constant zero process. Hence, we can conclude that \begin{equation} \label{e:k.and.higher} \left( \frac{\sum_{j=k}^{M({\bm \alpha})-1} (-1)^j \beta_j(t) - \mathbb{E} \big( \sum_{j=k}^{M({\bm \alpha})-1} (-1)^j \beta_j \big)}{\sqrt{\text{\rm Var} (f_k)}}, \,
t\geq 0\right) \Rightarrow \bigl( Z_k(t), \, t\geq 0\bigr),\ \ \ n\to\infty, \end{equation} in finite-dimensional distributions.
Note that if $M({\bm \alpha})=k+1$, then \eqref{e:betti.func.conv} is automatic, so only the case $M({\bm \alpha})>k+1$ needs to be considered.
It is, of course, sufficient to show that for any $j=k+1, \ldots, M({\bm \alpha})-1$, $\text{\rm Var} (\beta_j)$ is negligible relative to $\text{\rm Var} (f_k)$ as $n\to\infty$. We will consider in detail the case $M({\bm \alpha})=k+2$, and prove negligibility of the variance of $\beta_{k+1}$. If $M({\bm \alpha})>k+2$, the higher-order Betti numbers can be treated in a similar way.
Our argument relies on an explicit representation of $\beta_{k+1}(t)$ given by \begin{equation} \label{e:beta.k+1}
\beta_{k+1}(t) = \beta_{k+1} \big( X([n],{\bf p}; t) \big) = \sum_{j=k+3}^n \sum_{r\ge 1} \sum_{\sigma \subset [n], \, |\sigma|=j}r \eta_\sigma^{(j,r,k+1)}(t), \end{equation} where $\eta_\sigma^{(j,r,k+1)}(t)$ is the indicator function of the event that $\sigma$ forms a maximal $(k+1)$-strongly connected subcomplex $X(\sigma, {\bf p};t)$, such that $\beta_{k+1} \big( X(\sigma, {\bf p};t) \big)=r$. See Proposition \ref{p:betti.representation} for a formal derivation of \eqref{e:beta.k+1}. We often omit superscripts from the indicator if the context is clear enough. Note that the second sum over $r \ge 1$ is a sum of at most $\binom{j}{k+2}$ terms, because $\beta_{k+1} \big( X(\sigma, {\bf p};t) \big)$ is bounded by the number of $(k+1)$-faces of $\sigma$, which itself is bounded by $\binom{j}{k+2}$.
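As a side remark on the range of the outer sum in \eqref{e:beta.k+1}: the smallest support of a nontrivial cycle in dimension $k+1$ is the boundary of a $(k+2)$-simplex, which is a $(k+1)$-strongly connected complex on $k+3$ vertices with
$$
\beta_{k+1}\big( \partial \Delta^{k+2} \big)=1.
$$
No complex on fewer than $k+3$ vertices can have a nonzero Betti number in dimension $k+1$, which is why the summation starts at $j=k+3$.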
As $M({\bm \alpha})=k+2$, it follows that $\tau_{k+1}({\bm \alpha}) > 0$, and we can find a positive integer $D$ such that \begin{equation} \label{e:constraint.D} D > \frac{k+2+\tau_{k+1}({\bm \alpha})}{\psi_{k+1}({\bm \alpha})-1} >0, \end{equation} and we use it to define a truncated version of the representation of the Betti number in \eqref{e:beta.k+1} as $$
\tilde{\beta}_{k+1}(t) = \tilde{\beta}_{k+1} \big( X([n],{\bf p};t) \big) = \sum_{j=k+3}^{D+k+1} \sum_{r\ge 1} \sum_{\sigma \subset [n], \, |\sigma|=j}r \eta_\sigma^{(j,r,k+1)}(t). $$ As before, we write $\tilde{\beta}_{k+1} := \tilde{\beta}_{k+1}(0)$ and $\eta_\sigma^{(j,r,k+1)} := \eta_\sigma^{(j,r,k+1)}(0)$. We claim that \begin{equation} \label{e:beta.k.and.beta.tilde} \left( \frac{\beta_k(t)-\mathbb{E}(\beta_k)}{\sqrt{\text{\rm Var}(f_k)}} -
\frac{\tilde{\beta}_{k+1}(t)-\mathbb{E}(\tilde{\beta}_{k+1})}{\sqrt{\text{\rm Var}(f_k)}}, \,
t\geq 0\right) \Rightarrow \bigl(
Z_k(t), \, t\geq 0\bigr), \ \ \ n\to\infty \end{equation} in finite-dimensional distributions. Indeed, by \eqref{e:k.and.higher} with $M({\bm \alpha})=k+2$, it is enough to prove that $\mathbb{E} (\beta_{k+1}-\tilde{\beta}_{k+1}) \to 0$, $n\to\infty$. Since the sum over $r\ge 1$ in \eqref{e:beta.k+1} contains at most $\binom{j}{k+2}$ terms, \begin{align*} \mathbb{E}(\beta_{k+1}-\tilde{\beta}_{k+1}) \le&\, \mathbb{E} \bigg[ \sum_{j=D+k+2}^n
\binom{j}{k+2}\,
\sum_{\sigma \subset [n], \,
|\sigma|=j} \sum_{r\ge 1}
\eta_\sigma^{(j,r,k+1)} \biggr] \\ \le&\, \mathbb{E} \bigg[ \, n^{k+2} \sum_{j=D+k+2}^n
\sum_{\sigma \subset [n], \,
|\sigma|=j} \sum_{r\ge 1}
\eta_\sigma^{(j,r,k+1)} \biggr]. \end{align*}
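Before continuing the estimate, it is worth recording why \eqref{e:constraint.D} is the right condition on $D$: since $\psi_{k+1}({\bm \alpha})-1>0$, the inequality \eqref{e:constraint.D} is equivalent to
$$
k+2+\tau_{k+1}({\bm \alpha})-D\big(\psi_{k+1}({\bm \alpha})-1\big)<0,
$$
and this quantity is exactly the exponent of $n$ that emerges at the end of the present estimate.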
Whenever a $(k+1)$-strongly connected subcomplex is formed on $j \ge D+k+2$ vertices, it contains a further $(k+1)$-strongly connected subcomplex on exactly $D+k+2$ vertices. Furthermore, no two different such maximal subcomplexes can contain the same $(k+1)$-strongly connected subcomplex on $D+k+2$ vertices. Therefore, \begin{align*}
\mathbb{E}(\beta_{k+1}-\tilde{\beta}_{k+1}) \leq{} & n^{k + 2} \binom{n}{D + k + 2} \sum_{K: |K|=D+k+2} \mathbb{P}(\sigma_{D+k+2} \text{ is isomorphic to } K) \\
\le {} & \frac{n^{D+2k+4}}{(D + k + 2)!} \sum_{K: |K|=D+k+2} \mathbb{P}(\sigma_{D+k+2} \text{ is isomorphic to } K), \end{align*} where $\sigma_{D+k+2}$ is the restriction of the complex to a fixed set of $D+k+2$ vertices, and the sum above is taken over all isomorphism classes of $(k+1)$-strongly connected complexes on $D+k+2$ points. Note that the number of terms in this sum is independent of $n$. Any such complex $K$ contains at least $\binom{k+2}{i+1} + D\binom{k+1}{i}$ faces of dimension $i$ for each $1 \le i \le k+1$; this counting is presented in the proof of Lemma~8.1 in \cite{fowler:2019}. Hence, $$ \mathbb{P}(\sigma_{D+k+2} \text{ is isomorphic to } K) \le (D+k+2)!\prod_{i=q}^{k+1} p_i^{\binom{k+2}{i+1}+D\binom{k+1}{i}}, $$ and so, by \eqref{e:constraint.D}, \begin{align*} \mathbb{E}(\beta_{k+1}-\tilde{\beta}_{k+1}) &\le C n^{k+2+\tau_{k+1}({\bm \alpha})-D(\psi_{k+1}({\bm \alpha})-1)} \to 0, \ \
\ n\to\infty. \end{align*} Thus, \eqref{e:beta.k.and.beta.tilde} follows and, by Chebyshev's inequality, the claim \eqref{e:betti.func.conv} is established once we check that $$ \frac{\text{\rm Var}(\tilde{\beta}_{k+1})}{\text{\rm Var}(f_k)} \to 0, \ \ \ n\to\infty. $$ It suffices to show that for every $j=k+3, \dots, D+k+1$ and $r\ge 1,$ we have \begin{equation} \label{e:main.goal}
\frac{\text{\rm Var}\Big( \sum_{\sigma \subset [n], \, |\sigma|=j} \eta_\sigma^{(j,r,k+1)}
\Big)}{\text{\rm Var}(f_k)} \to 0, \ \ \ n\to\infty. \end{equation} Simplifying the notation, we get \begin{align*}
\text{\rm Var} \Big( \sum_{\sigma \subset [n], \, |\sigma|=j} \eta_\sigma \Big) &= \sum_{\substack{\sigma \subset [n], \\ |\sigma|=j}} \sum_{\substack{\tau \subset [n], \\ |\tau|=j}} \Big[ \mathbb{E}(\eta_\sigma \eta_\tau) - \mathbb{E}(\eta_\sigma) \mathbb{E}(\eta_\tau) \Big]\\
&=\sum_{\ell=0}^j \sum_{\substack{\sigma \subset [n], \\ |\sigma|=j}} \sum_{\substack{\tau \subset [n], \\ |\tau|=j}} \Big[ \mathbb{E}(\eta_\sigma \eta_\tau) - \mathbb{E}(\eta_\sigma) \mathbb{E}(\eta_\tau) \Big]\, {\mathbbm 1} \big\{ |\sigma\cap \tau|=\ell \big\} \\
&=\sum_{\ell=0}^j \binom{n}{j} \binom{j}{\ell} \binom{n-j}{j-\ell}\Big[ \mathbb{E}(\eta_\sigma \eta_\tau) - \mathbb{E}(\eta_\sigma) \mathbb{E}(\eta_\tau) \Big]\, {\mathbbm 1} \big\{ |\sigma\cap \tau|=\ell \big\}. \end{align*}
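As a sanity check on this decomposition by $\ell=|\sigma \cap \tau|$, the counts sum to the total number of pairs: by the Vandermonde identity,
$$
\sum_{\ell=0}^{j}\binom{j}{\ell}\binom{n-j}{j-\ell}=\binom{n}{j},
\qquad\text{so}\qquad
\sum_{\ell=0}^{j}\binom{n}{j}\binom{j}{\ell}\binom{n-j}{j-\ell}=\binom{n}{j}^{2},
$$
which is indeed the number of pairs $(\sigma,\tau)$ of $j$-element subsets of $[n]$.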
We consider six cases, depending on the value of $\ell:= |\sigma \cap \tau|$.
\noindent $(I)$ $\ell \in \{ 0,\dots, q-2 \}$.
We claim that in this case the events underlying the indicator functions $\eta_\sigma$ and $\eta_\tau$ are independent, so that the corresponding terms do not contribute to the numerator in \eqref{e:main.goal}. Indeed, the event underlying $\eta_\sigma$ can be stated as saying that the restriction of the complex to $\sigma$ is a $(k+1)$-strongly connected subcomplex with Betti number in dimension $k+1$ equal to $r$ and that no $(k+1)$-simplex carried by $\sigma$ has $k+1$ common vertices, i.e., a common $k$-face, with a $(k+1)$-simplex not carried by $\sigma$. We also have an analogous description of the event underlying $\eta_\tau$. Stated this way, it is clear that if a face $s_1$ plays a role in the former event and a face $s_2$ plays a role in the latter event, then these faces have at most $q$ vertices in common; since every face on at most $q$ vertices is of dimension at most $q-1$ and hence present with probability one, the restrictions of the complex to these faces are independent.
\noindent $(I\hspace{-1.5pt}I)$ $\ell =q-1$.
First, let $$ \gamma_k :=\prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}}=n^{-\psi_{k+1}({\bm \alpha})} $$ denote the probability that a fixed $k$-face and a vertex not in that face form a $(k+1)$-simplex.
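To see where this product comes from, note that adjoining a vertex $v$ to a fixed $k$-face $f$ creates, for each $i$, exactly $\binom{k+1}{i}$ new $i$-faces, namely $v$ together with $i$ of the $k+1$ vertices of $f$; faces of dimension below $q$ are present automatically, whence
$$
\mathbb{P}\big( f \cup \{v\} \text{ spans a } (k+1)\text{-simplex} \big) = \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} = \gamma_k.
$$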
For $j\in \{k+3,\dots, D+k+1\}$ and $r\ge1$, let $K$ denote a fixed $(k+1)$-strongly connected complex on $j$ vertices whose Betti number in dimension $k+1$ is equal to $r$. For $\sigma \subset [n]$ with $|\sigma|=j$, let $A_K$ be the event that the restriction of the complex to $\sigma$ is isomorphic to $K$, and define $q_K:=\mathbb{P}(A_K)$.
We first claim that, for every $\sigma \subset [n]$ with $|\sigma|=j$, \begin{equation} \label{e:prob.etasig}
\mathbb{E}(\eta_\sigma)=\sum_{K: |K|=j} q_K (1-s_K \gamma_k +u_K)^{n-j}, \end{equation} where the sum is taken over all $(k+1)$-strongly connected complexes, up to isomorphism, such that the Betti number in dimension $k+1$ is equal to $r$. Moreover, $s_K$ is the number of $k$-faces in $K$, and $u_K=\mathcal O(\gamma_k)$ as a function of $n$, i.e., there exists $C>0$ such that $u_K/\gamma_k < C$ for all $n\geq 1$ and all $K$. Note that $q_K, \gamma_k$, and $u_K$ depend on $n$, whereas $s_K$ is independent of $n$. For the proof of \eqref{e:prob.etasig}, write $$
\mathbb{E}(\eta_\sigma) = \sum_{K: |K|=j} q_K \mathbb{P}(\sigma \text{ is maximal } | A_K). $$ Let us fix a vertex $v\in\sigma^c$. By the inclusion-exclusion formula, the probability that $v$ forms at least one $(k+1)$-simplex with a $k$-face in $\sigma$ can be written as $s_K \gamma_k -u_K$. The largest term in $u_K$ corresponds to $v$ forming two $(k+1)$-simplices with $k$-faces $f_1$ and $f_2$, respectively, such that $\dim(f_1 \cap f_2)=k-1$. Therefore, the largest term in $u_K$ is of the order $\gamma_k^2 \gamma_{k-1}^{-1} = \mathcal O(\gamma_k)$. Since there are $n-j$ vertices in $\sigma^c$, we have $$
\mathbb{P}(\sigma \text{ is maximal } | A_K)=(1-s_K \gamma_k +u_K)^{n-j}, $$ and \eqref{e:prob.etasig} follows as required.
Next, let $K$, $K'$ be fixed $(k+1)$-strongly connected complexes on $j$ vertices with Betti number in dimension $k+1$ equal to $r$. Denote by $A_{K, K'}$ the event that the restriction of the complex to $\sigma$ and that to $\tau$ are isomorphic to $K$ and $K'$, respectively. It then follows from \eqref{e:prob.etasig} that \begin{align}
&\big[ \mathbb{E}(\eta_\sigma \eta_\tau) - \mathbb{E}(\eta_\sigma) \mathbb{E}(\eta_\tau) \big]\, {\mathbbm 1} \big\{ |\sigma\cap \tau|=q-1 \big\}\label{e:case1.goal}\\
&= \sum_{K: |K|=j} \sum_{K': |K'|=j} \Big[ \mathbb{P}(\sigma \text{ and } \tau \text{ are maximal }| A_{K, K'}) \mathbb{P}(A_{K, K'}) \notag\\
&\qquad \qquad \qquad \qquad \quad - q_Kq_{K'} (1-s_K\gamma_k +u_K)^{n-j}(1-s_{K'}\gamma_k +u_{K'})^{n-j} \Big]\, {\mathbbm 1} \{ |\sigma \cap \tau|=q-1 \}, \notag \end{align} where the sums are again taken over all $(k+1)$-strongly connected complexes whose Betti numbers in dimension $k+1$ are equal to $r$, and
$s_{K'}$, $u_{K'}$ are defined analogously to those for $K$. Since $|\sigma\cap \tau|=q-1$ and all the $(q-2)$-faces exist with probability one, we have $\mathbb{P}(A_{K, K'})=q_K q_{K'}$. For every $v\in (\sigma\cup \tau)^c$, let $B_v$ be the event that $v$ forms a $(k+1)$-simplex with a $k$-face in $\sigma \cup \tau$. Further, let $D_1$ denote the event that at least one $(k+1)$-simplex exists between a $k$-face in $\sigma$ and a point in $\tau\setminus (\sigma \cap \tau)$, and let $D_2$ be the event obtained by switching the roles of $\sigma$ and $\tau$. Then, by independence we see that \begin{align}
\mathbb{P}(\sigma \text{ and } \tau \text{ are maximal }| A_{K, K'}) &= \mathbb{P} \bigg( \Big(\bigcap_{v \in (\sigma\cup \tau)^c} B_v^c\Big) \cap D_1^c \cap D_2^c\, \Big| \, A_{K, K'} \bigg) \label{e:conditional.maximal} \\
&= \prod_{v\in (\sigma \cup \tau)^c} \big( 1-\mathbb{P}(B_v | A_{K, K'}) \big) \mathbb{P}(D_1^c \cap D_2^c | A_{K, K'}). \notag \end{align}
By the inclusion-exclusion formula, we have \begin{align}
\mathbb{P}(B_v | A_{K, K'}) = (s_K+s_{K'})\gamma_k - u_K - u_{K'} -s_Ks_{K'} \gamma_k^2 + s_K\gamma_k u_{K'}+ s_{K'}\gamma_k u_K - u_Ku_{K'} =: a_{K, K'}. \label{e:def.aKK'} \end{align} Indeed, the probabilities that $v$ forms $(k+1)$-simplices with multiple $k$-faces in $\sigma$ are grouped into $u_K$, while the probabilities that $v$ forms $(k+1)$-simplices with multiple $k$-faces in $\tau$ are grouped into $u_{K'}$. Moreover, the probabilities that $v$ forms $(k+1)$-simplices with both $k$-faces in $\sigma$ and $k$-faces in $\tau$ are grouped into one of the last four terms in \eqref{e:def.aKK'}. Above, we have also exploited the fact that the events concerning $v$ forming $(k+1)$-simplices with $k$-faces in $\sigma$ are independent of the events concerning $v$ forming $(k+1)$-simplices with $k$-faces in $\tau$.
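This independence makes \eqref{e:def.aKK'} a direct computation: writing $a := s_K\gamma_k - u_K$ and $b := s_{K'}\gamma_k - u_{K'}$ for the probabilities that $v$ forms at least one $(k+1)$-simplex with a $k$-face in $\sigma$, respectively in $\tau$ (as in the proof of \eqref{e:prob.etasig}), we obtain
\begin{align*}
\mathbb{P}(B_v | A_{K, K'}) &= 1-(1-a)(1-b) = a+b-ab \\
&= (s_K+s_{K'})\gamma_k - u_K - u_{K'} -s_Ks_{K'} \gamma_k^2 + s_K\gamma_k u_{K'}+ s_{K'}\gamma_k u_K - u_Ku_{K'},
\end{align*}
which matches \eqref{e:def.aKK'} term by term.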
Noting that there are $n-2j+q-1$ points in $(\sigma \cup \tau)^c$, the right hand side of \eqref{e:case1.goal} is equal to \begin{equation} \label{e:bound.case1.goal}
\sum_{K: |K|=j} \sum_{K': |K'|=j} q_Kq_{K'} (1-a_{K, K'})^{n-2j+q-1} \big[ \mathbb{P}(D_1^c\cap D_2^c | A_{K, K'}) - (1-a_{K, K'})^{j-q+1} \big]. \end{equation}
By the binomial expansion, it is easy to see that \begin{align*} &(1-a_{K,K'})^{n-2j+q-1} = \big( 1-\mathcal O(\gamma_k) \big)^{n-2j+q-1} = \mathcal O(1), \\ &(1-a_{K, K'})^{j-q+1} = 1-(j-q+1) a_{K, K'} + \mathcal O(\gamma_k^2), \end{align*} and, further, \begin{align*}
\mathbb{P}(D_1|A_{K,K'}) &= 1-(1-s_K\gamma_k + u_K)^{j-q+1} = (j-q+1)(s_K\gamma_k-u_K) - \mathcal O(\gamma_k^2), \\
\mathbb{P}(D_2|A_{K,K'}) &= 1-(1-s_{K'}\gamma_k + u_{K'})^{j-q+1} = (j-q+1)(s_{K'}\gamma_k-u_{K'}) - \mathcal O(\gamma_k^2). \end{align*}
Suppose now that \begin{equation} \label{e:q.pts.concentrated}
\text{there exist two } k\text{-faces } f_1\subset \sigma \text{ and } f_2 \subset \tau \text{ such that } |f_1\cap f_2|=q-1. \end{equation} Under \eqref{e:q.pts.concentrated}, we claim that
$$
\mathbb{P}(D_1\cap D_2|A_{K,K'}) = \mathcal O(\gamma_k^2p_q^{-1}). $$ Indeed, the dominant contribution corresponds to the case in which a vertex in $f_1\setminus (f_1\cap f_2)$ forms a $(k+1)$-simplex with $f_2$, and a vertex in $f_2\setminus (f_1\cap f_2)$ forms a $(k+1)$-simplex with $f_1$. Because of a double count of a $q$-face consisting of the vertices in $f_1 \cap f_2$ and the two selected vertices, the largest rate is of order $\gamma_k^2 p_q^{-1}$. By combining all these results, it is now straightforward to get that \begin{equation} \label{e:main.term.D}
\mathbb{P}(D_1^c\cap D_2^c | A_{K, K'}) - (1-a_{K, K'})^{j-q+1} =\mathcal O(\gamma_k^2p_q^{-1}). \end{equation}
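To spell out the computation behind \eqref{e:main.term.D}: by inclusion-exclusion,
\begin{align*}
\mathbb{P}(D_1^c\cap D_2^c | A_{K, K'}) &= 1-\mathbb{P}(D_1|A_{K,K'})-\mathbb{P}(D_2|A_{K,K'})+\mathbb{P}(D_1\cap D_2|A_{K,K'}) \\
&= 1-(j-q+1)\big((s_K+s_{K'})\gamma_k-u_K-u_{K'}\big)+\mathcal O(\gamma_k^2p_q^{-1}),
\end{align*}
while $(1-a_{K, K'})^{j-q+1} = 1-(j-q+1)\big((s_K+s_{K'})\gamma_k-u_K-u_{K'}\big)+\mathcal O(\gamma_k^2)$ by \eqref{e:def.aKK'}. The first-order terms cancel and, since $p_q\le 1$, both error terms are $\mathcal O(\gamma_k^2p_q^{-1})$.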
If \eqref{e:q.pts.concentrated} does not hold, the same analysis gives the behavior as in \eqref{e:main.term.D}, but with a smaller correction term: $\mathcal O(\gamma_k^2)$ instead of $\mathcal O(\gamma_k^2 p_q^{-1})$. From all of these results, \eqref{e:bound.case1.goal} can be bounded by $$
C \sum_{\substack{K: |K|=j \\ \eqref{e:q.pts.concentrated} \text{holds}}} \sum_{K': |K'|=j} q_Kq_{K'} \mathcal O(\gamma_k^2p_q^{-1}), $$
and, thus, \begin{align*}
&\binom{n}{j}\binom{j}{q-1}\binom{n-j}{j-q+1} \big[ \mathbb{E}(\eta_\sigma \eta_\tau) - \mathbb{E}(\eta_\sigma) \mathbb{E}(\eta_\tau) \big]\, {\mathbbm 1} \big\{ |\sigma\cap \tau|=q-1 \big\} \\
&\le C \sum_{\substack{K: |K|=j \\ \eqref{e:q.pts.concentrated} \text{holds}}} \sum_{K': |K'|=j} n^{2j-q+1}q_Kq_{K'} \mathcal O(\gamma_k^2p_q^{-1}). \end{align*}
By Lemma \ref{l:formula.fowler}, \begin{align*} n^j q_K &\le C n^j \prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}} \Big( \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \Big)^{j-k-3} = C n^{\tau_{k+2}({\bm \alpha})+\alpha_{k+2}}\big(n^{1-\psi_{k+1}({\bm \alpha})}\big)^{j-k-3}. \end{align*} Since $\psi_{k+1}({\bm \alpha})>1$ and $j\ge k+3$, we get $\big( n^{1-\psi_{k+1}({\bm \alpha})} \big)^{j-k-3}\le 1$, and hence, \begin{equation} \label{e:bound.nj.qK} n^{2j}q_Kq_{K'} \le C n^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2})}. \end{equation}
It now remains to check that $$ \frac{n^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2})}\mathcal O(n^{-q+1}\gamma_k^2p_q^{-1})}{\text{\rm Var} (f_k)} \to 0 \ \ \text{as } n\to\infty. $$
But this actually follows, since by Proposition \ref{p:moment.face.count} and \eqref{e:simple.lemma}, the expression on the left hand side is bounded by $$ Cn^{2(\tau_{k+1}({\bm \alpha})-\tau_k({\bm \alpha}))} \mathcal O(n^{\tau_q({\bm \alpha})-q+1}\gamma_k^2p_q^{-1}) = Co(1)\mathcal O(n^{2(1-\psi_{k+1}({\bm \alpha}))})\to 0, \ \ \ n\to\infty. $$
\noindent $(I\hspace{-1.5pt}I\hspace{-1.5pt}I)$ $\ell=q$.
The case $\ell=q$ is similar but easier. Using the same notation as in Case ($I\hspace{-1.5pt}I$), we once again consider \begin{align}
&\big[ \mathbb{E}(\eta_\sigma \eta_\tau) - \mathbb{E}(\eta_\sigma) \mathbb{E}(\eta_\tau) \big]\, {\mathbbm 1} \big\{ |\sigma\cap \tau|=q \big\} \label{e:case2.goal} \\
&= \sum_{K: |K|=j} \sum_{K': |K'|=j} \Big[ \mathbb{P}(\sigma \text{ and } \tau \text{ are maximal }| A_{K, K'}) \mathbb{P}(A_{K, K'}) \notag \\
&\qquad \qquad \qquad \qquad \quad - q_Kq_{K'} (1-s_K\gamma_k +u_K)^{n-j}(1-s_{K'}\gamma_k +u_{K'})^{n-j} \Big]\, {\mathbbm 1} \{ |\sigma \cap \tau|=q \}. \notag \end{align}
Since $|\sigma \cap \tau|=q$ and all the $(q-1)$-faces exist with probability one, we still get $\mathbb{P}(A_{K,K'})=q_Kq_{K'}$. By the same reasoning as before, we only consider the situation in which \begin{equation} \label{e:q.pts.concentrated1}
\text{there exist two } k\text{-faces } f_1\subset \sigma \text{ and } f_2 \subset \tau \text{ such that } |f_1\cap f_2|=q. \end{equation} Under this assumption, for each $v\in(\sigma\cup \tau)^c$, the inclusion-exclusion formula gives that $$
\mathbb{P}(B_v|A_{K,K'}) = (s_K+s_{K'})\gamma_k -u_K-u_{K'} -\mathcal O(\gamma_k^2p_q^{-1}). $$ The largest term in the big-$\mathcal O$ expression is associated with the case in which $v$ forms two $(k+1)$-simplices with $f_1$ and $f_2$, respectively. By \eqref{e:conditional.maximal}, we see that \begin{align*}
&\mathbb{P}(\sigma \text{ and } \tau \text{ are maximal }| A_{K, K'}) \le \prod_{v\in(\sigma\cup \tau)^c} \big( 1-\mathbb{P}(B_v|A_{K,K'}) \big) \\ &\qquad =\big( 1- (s_K+s_{K'})\gamma_k +u_K+u_{K'} +\mathcal O(\gamma_k^2p_q^{-1})\big)^{n-2j+q} \\ &\qquad = \big( 1- (s_K+s_{K'})\gamma_k +u_K+u_{K'} \big)^n (1 + \mathcal{O}(\gamma_k))^{-(2j - q)}(1 + \mathcal{O}(\gamma_k^2 p_q^{-1}))^n\\ &\qquad = \big( 1- (s_K+s_{K'})\gamma_k +u_K+u_{K'} \big)^n \big(1+\mathcal O(\gamma_kp_q^{-1}) \big). \end{align*}
Here, we have made use of the following facts: $\gamma_k^2 p_q^{-1} = \mathcal O(\gamma_k)$, $(1 + \mathcal O(\gamma_k))^{-(2j - q)} = 1 + \mathcal O(\gamma_k)$, and $(1 + \mathcal O(\gamma_k^2 p_q^{-1}))^{n} = 1 + \mathcal O(n \gamma_k^2 p_q^{-1}) = 1+ \mathcal O(\gamma_k p_q^{-1})$, where the last step uses $n\gamma_k = n^{1-\psi_{k+1}({\bm \alpha})} \to 0$.
Similarly, we derive that \begin{align*}
(1-s_K\gamma_k +u_K)^{n-j}(1-s_{K'}\gamma_k +u_{K'})^{n-j} &= \big( 1- (s_K+s_{K'})\gamma_k +u_K+u_{K'} +\mathcal O(\gamma_k^2) \big)^{n-j} \\
&= \big( 1- (s_K+s_{K'})\gamma_k +u_K+u_{K'} \big)^n \big( 1+\mathcal O(\gamma_k) \big). \end{align*} Putting all these results together, along with the binomial expansion $\big( 1- (s_K+s_{K'})\gamma_k +u_K+u_{K'} \big)^n = \mathcal O(1)$ as $n\to\infty$,
we can conclude that $$
\binom{n}{j} \binom{j}{q} \binom{n-j}{j-q} \big[ \mathbb{E}(\eta_\sigma \eta_\tau) - \mathbb{E}(\eta_\sigma) \mathbb{E}(\eta_\tau) \big]\, {\mathbbm 1} \big\{ |\sigma\cap \tau|=q \big\} \le C \hspace{-10pt} \sum_{\substack{K: |K|=j \\ \eqref{e:q.pts.concentrated1} \text{ holds}}} \sum_{K': |K'|=j} n^{2j-q} q_Kq_{K'} \mathcal O(\gamma_k p_q^{-1}). $$ Using Lemma \ref{l:formula.fowler} as in \eqref{e:bound.nj.qK}, it follows that the right hand side above can be bounded by $C n^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2})} \mathcal O(n^{-q}\gamma_k p_q^{-1})$. Finally, Proposition \ref{p:moment.face.count} and \eqref{e:simple.lemma} help to conclude that \begin{align*} \frac{n^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2})}\mathcal O(n^{-q}\gamma_k p_q^{-1})}{\text{\rm Var}(f_k)} &\le C n^{2(\tau_{k+1}({\bm \alpha})-\tau_k({\bm \alpha}))}\mathcal O(n^{\tau_q({\bm \alpha})-q-\psi_{k+1}({\bm \alpha})+\alpha_q}) \\ &=C o(1)\mathcal O(n^{1-\psi_{k+1}({\bm \alpha})}) \to 0, \ \ \ n\to\infty. \end{align*}
\noindent $(I\hspace{-1.5pt}V)$ $\ell\in \{q+1,\dots,k+2\}$.
Note first that \begin{align*} &\binom{n}{j}\binom{j}{\ell}\binom{n-j}{j-\ell}\Big[
\mathbb{E}(\eta_\sigma\eta_\tau) - \mathbb{E}(\eta_\sigma)\mathbb{E}(\eta_\tau)\Big]\, {\mathbbm 1} \big\{ |\sigma \cap \tau|=\ell \big\}
\le n^{2j-\ell} \mathbb{E}(\eta_\sigma\eta_\tau)\, {\mathbbm 1} \big\{ |\sigma \cap \tau|=\ell \big\} \\
&\qquad \qquad \qquad \le n^{2j-\ell}\sum_{K:|K|=j}\sum_{K':|K'|=j} \mathbb{P}(A_{K, K'}) \, {\mathbbm 1} \big\{ |\sigma \cap \tau|=\ell \big\}, \end{align*}
where $A_{K, K'}$ is as in Case ($I\hspace{-1.5pt}I$).
Since there are finitely many isomorphism classes of $(k+1)$-strongly connected complexes on $j$ vertices, we only have to show that for all such $K, K'$ with $|\sigma \cap \tau|=\ell$, \begin{equation} \label{e:transfer.to.Case4} \big( \text{\rm Var}(f_k) \big)^{-1} n^{2j-\ell} \mathbb{P}(A_{K, K'})
\to 0, \ \ n\to\infty. \end{equation} By Lemma \ref{l:formula.fowler}, \begin{align*} & \mathbb{P}(A_{K, K'})
\le C \bigg[ \prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}} \Big(
\prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \Big)^{j-k-3}
\bigg]^2\times
\prod_{i=q}^{k+1}p_i^{-\binom{\ell}{i+1}}, \end{align*} with the last factor accounting for the faces on the vertices common to $\sigma$ and $\tau$. We conclude that \begin{align*} &n^{2j-\ell} \mathbb{P}(A_{K, K'})
\le C n^{2j-\ell} \prod_{i=q}^{k+1} p_i^{2\binom{k+3}{i+1}-\binom{\ell}{i+1}} \bigg( \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \bigg)^{2(j-k-3)} \\ =& Cn^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2})-\tau_{\ell-1}({\bm \alpha})} \big( n^{1-\psi_{k+1}({\bm \alpha})} \big)^{2(j-k-3)} \le Cn^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2})-\tau_{\ell-1}({\bm \alpha})}. \end{align*} By Proposition \ref{p:moment.face.count} and \eqref{e:simple.lemma}, $$ \frac{n^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2})-\tau_{\ell-1}({\bm \alpha})}}{\text{\rm Var}(f_k)} \le C n^{2(\tau_{k+1}({\bm \alpha})-\tau_k({\bm \alpha}))+(\tau_q({\bm \alpha})-\tau_{\ell-1}({\bm \alpha}))}\to 0 $$ because the exponent is clearly negative if $\ell\in\{q+1,\dots,k+1 \}$, and it is still true in the case $\ell=k+2$, because $$ 2\big(\tau_{k+1}({\bm \alpha})-\tau_k({\bm \alpha})\big)+\big(\tau_q({\bm \alpha})-\tau_{\ell-1}({\bm \alpha})\big) = \big(\tau_{k+1}({\bm \alpha})-\tau_k({\bm \alpha})\big) + \big(\tau_q({\bm \alpha})-\tau_k({\bm \alpha})\big) < 0. $$
\noindent $(V)$ $\ell\in \{ k+3,\dots,j-1 \}$.
It is still sufficient to prove \eqref{e:transfer.to.Case4}, which we presently do. We note that \begin{align*}
\mathbb{P}(A_{K, K'}) = &\mathbb{P} (\text{the complex
restricted to $\sigma$ is isomorphic to $K$ }) \\
&\times \mathbb{P} (\text{the complex
restricted to $\tau$ is isomorphic to $K'$ })
\times D(K,K'), \end{align*} where $D(K,K')$ is the correction term, resulting from the fact that some of the faces in the restriction of the complex to $\sigma\cap\tau$ are used in both $K$ and $K'$. Hence, for each fixed $\ell$, we obtain an upper bound on $\mathbb{P}(A_{K, K'})$
by considering the worst case scenario (from the perspective of showing \eqref{e:transfer.to.Case4}).
To see how it works, consider the case $\ell=k+3$. Clearly, the worst case scenario is when both $K$ and $K'$ have the least number of $i$-faces for $q \leq i \leq k + 1;$ further, in the complex restricted to $\sigma \cap \tau,$ there is maximal overlap of faces. However, since $K$ and $K'$ are $(k + 1)$-strongly connected, even in this worst case scenario, the complex restricted to the $k + 3$ vertices in $\sigma \cap \tau$ should have at least two $(k + 1)$-faces; of course, these two may share a common $k$-face.
Hence, $$ D(K,K')=\prod_{i=q}^{k+1}p_i^{-\binom{k+2}{i+1}} \prod_{i=q}^{k+1} p_i^{-\binom{k+1}{i}}; $$ so, by Lemma \ref{l:formula.fowler}, we have \begin{align*} &n^{2j-(k+3)}
\mathbb{P}(A_{K, K'}) \le Cn^{2j-(k+3)}\bigg[
\prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}}
\Big( \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \Big)^{j-k-3} \bigg]^2 \prod_{i=q}^{k+1}p_i^{-\binom{k+2}{i+1}} \prod_{i=q}^{k+1} p_i^{-\binom{k+1}{i}}. \end{align*}
Suppose next that $\ell=k+4$. In the worst case scenario now,
the restriction of the complex to $k+3$ (out of the $k+4$) common points of the intersection should have the same setup as in the previous case, while
the last $(k+4)$th common point should form a $(k+1)$-simplex with one of the two $(k+1)$-simplices constructed before.
Once again, this is the minimal requirement since both $K$ and $K'$ are $(k+1)$-strongly connected. Hence, $$ D(K,K')=\prod_{i=q}^{k+1}p_i^{-\binom{k+2}{i+1}} \left( \prod_{i=q}^{k+1} p_i^{-\binom{k+1}{i}}\right)^2; $$ so, by Lemma \ref{l:formula.fowler}, \begin{align*} &n^{2j-(k+4)}
\mathbb{P}(A_{K, K'})
\le C n^{2j-(k+4)}\bigg[
\prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}}
\Big( \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \Big)^{j-k-3} \bigg]^2 \prod_{i=q}^{k+1}p_i^{-\binom{k+2}{i+1}} \left(\prod_{i=q}^{k+1} p_i^{-\binom{k+1}{i}}\right)^2. \end{align*} Proceeding in the same manner for any $\ell \in \{ k+3,\dots,j-1\}$, we see that \begin{align*} &n^{2j-\ell}
\mathbb{P}(A_{K, K'})
\le Cn^{2j-\ell}\bigg[ \prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}} \Big( \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \Big)^{j-k-3} \bigg]^2 \prod_{i=q}^{k+1}p_i^{-\binom{k+2}{i+1}} \Big( \prod_{i=q}^{k+1} p_i^{-\binom{k+1}{i}} \Big)^{\ell-(k+2)}. \end{align*}
Therefore, as before, \begin{align*} &n^{2j-\ell}\bigg[ \prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}} \Big( \prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}} \Big)^{j-k-3} \bigg]^2 \prod_{i=q}^{k+1}p_i^{-\binom{k+2}{i+1}} \Big( \prod_{i=q}^{k+1} p_i^{-\binom{k+1}{i}} \Big)^{\ell-(k+2)} \\ &= n^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2}) - \tau_{k+1}({\bm \alpha})} \big( n^{1-\psi_{k+1}({\bm \alpha})} \big)^{2j-k-\ell-4} \\ &\le n^{2(\tau_{k+2}({\bm \alpha})+\alpha_{k+2}) - \tau_{k+1}({\bm \alpha})}, \end{align*} which is the same bound as that for $\ell=k+2$ in the previous case. Thus, we get \eqref{e:transfer.to.Case4}, as desired.
\noindent $(VI)$ $\ell=j$.
We again prove \eqref{e:transfer.to.Case4}, this time with $K=K'$ only, since $\ell=j$ forces $\sigma=\tau$. Now, by Lemma \ref{l:formula.fowler}, \begin{align*} n^{j}
\mathbb{P}(A_{K, K'}) &\le C n^j \prod_{i=q}^{k+1} p_i^{\binom{k+3}{i+1}} \Big(
\prod_{i=q}^{k+1} p_i^{\binom{k+1}{i}}
\Big)^{j-k-3} \\ &= C n^{\tau_{k+2}({\bm \alpha}) + \alpha_{k+2}} \big( n^{1-\psi_{k+1}({\bm \alpha})}
\big)^{j-k-3} \le
Cn^{\tau_{k+2}({\bm \alpha})+\alpha_{k+2}}, \end{align*} and, by Proposition \ref{p:moment.face.count} and \eqref{e:simple.lemma}, $$ \frac{n^{\tau_{k+2}({\bm \alpha})+\alpha_{k+2}}}{\text{\rm Var}(f_k)} \le C n^{(\tau_{k+1}({\bm \alpha})-\tau_k({\bm \alpha})) + (\tau_q({\bm \alpha})-\tau_k({\bm \alpha}))} \to 0,\ \ n\to\infty. $$
This completes the proof of \eqref{e:main.goal} and, hence, of \eqref{e:betti.func.conv} in Theorem \ref{t:clt.topological.invariants}.
Finally, assuming \eqref{e:cond.regularity} and \eqref{e:sharp.drop.k+1}, we establish tightness in the Skorohod $J_1$-topology. First of all, we already proved that under these assumptions, the convergence in \eqref{e:Euler.characteristic.func.conv} holds in the sense of weak convergence in the $J_1$-topology on $D[0,\infty)$. Fixing $T>0$ and choosing $m$ so large that $T/m \le a/4$ with $a$ defined in \eqref{e:assumption.Gi}, we again consider a static multi-parameter simplicial complex $X([n], {\bf p}^{(1)})$ and the corresponding $j$-face counts $f_j^{(1)}$, that were used for the proof of \eqref{e:Euler.as.conv}.
By Proposition \ref{p:vanishing.lower.order.betti} in the Appendix, all we have to do is to show that $$ \left(\frac{\sum_{j=k+1}^{n-1} (-1)^j \beta_j(t) -\mathbb{E} \Big(
\sum_{j=k+1}^{n-1} (-1)^j \beta_j \Big)}{\sqrt{\text{\rm Var}(f_k)}} , \, 0 \le t \le \frac{T}{m}
\right) \to {\bf 0} $$ in probability in the $J_1$-topology. This will follow once we show that for every $\epsilon>0$, \begin{equation} \label{e:sup.upper.cond.tightness}
\mathbb{P} \bigg( \sup_{0\le t \le T/m} \left| \sum_{j=k+1}^{n-1} (-1)^j \beta_j(t) - \mathbb{E}\Big(\sum_{j=k+1}^{n-1} (-1)^j \beta_j \Big) \right| > \epsilon \sqrt{\text{\rm Var}(f_k)} \bigg) \to 0, \ \ \ n\to\infty. \end{equation} To this end, observe that by \eqref{e:sharp.drop.k+1}, for any $j \ge k+1$, we have \begin{equation} \label{e:under4.5} \mathbb{E}(f_j) =\mathcal O(n^{\tau_{k+1}({\bm \alpha})}) = o \big( n^{\tau_k({\bm \alpha})-\tau_q({\bm \alpha})/2} \big) = o \big( \sqrt{\text{\rm Var}(f_k)} \big), \ \ \ n\to\infty. \end{equation} Proceeding as in \eqref{e:Markov.chi2}, while using $M(\tilde {\bm \alpha})$ defined in \eqref{e:M.alpha} and \eqref{e:alpha.tilde.rate}, we can bound the left hand side of \eqref{e:sup.upper.cond.tightness} by \begin{align*} &\frac{2}{\epsilon \sqrt{\text{\rm Var}(f_k)}}\, \sum_{j=k+1}^{n-1} \mathbb{E}\big[\sup_{0\le t \le T/m}f_j(t)\big] \le \frac{2}{\epsilon \sqrt{\text{\rm Var}(f_k)}}\, \sum_{j=k+1}^{n-1} \mathbb{E}(f_j^{(1)}) \\ &\le \frac{2}{\epsilon} \sum_{j=k+1}^{M(\tilde {\bm \alpha})-1} \frac{\prod_{i=q}^j 2^{\binom{j+1}{i+1}}\mathbb{E}(f_j)}{\sqrt{\text{\rm Var}(f_k)}} +\frac{2}{\epsilon} \sum_{j=M(\tilde {\bm \alpha})}^\infty \mathbb{E}(f_j^{(1)}). \end{align*} The last term converges to $0$ as $n\to\infty$ due to \eqref{e:under4.5} and Corollary \ref{cor:neg.tau}.
\end{proof}
\section{Appendix} \subsection{Analysis of the Betti numbers in lower dimensions}
We begin by introducing additional notions of connectivity. Given a simplicial complex $X$ and an $\ell$-dimensional simplex $\sigma$ in $X$, let the simplicial complex $\text{lk}_X(\sigma) := \{\tau \in X: \sigma \cap \tau = \emptyset, \sigma \cup \tau \in X\}$ denote the \textit{link} of $\sigma$ in $X$. In other words, $\text{lk}_X(\sigma)$ denotes the subcomplex of $X$ consisting of all simplices whose vertex support is disjoint from that of $\sigma$ but which, together with $\sigma$, form a simplex in $X$. If $X$ is pure $\ell$-dimensional and $\sigma$ is $(\ell-2)$-dimensional for some $\ell \geq 2$, then $\text{lk}_X(\sigma)$ is necessarily a one-dimensional simplicial complex. We say that an $(\ell-1)$-face in $X$ is \textit{free} if it is not contained in any of the $\ell$-faces in $X$. Given a graph $G$, we denote by $\lambda_2(G)$ the second smallest eigenvalue of the normalized graph Laplacian of $G$. We will use the \textit{cohomology vanishing theorem} of \cite{ballmann:swiatkowski:1997}: if $X$ is a finite pure $\ell$-dimensional simplicial complex such that for every $(\ell-2)$-simplex $\sigma\in X$, the link $\text{lk}_X(\sigma)$ is connected and has spectral gap $\lambda_2\bigl( \text{lk}_X(\sigma)\bigr)>1-1/\ell$, then $H^{\ell-1}(X; {\mathbb Q})=0$. In particular, $\beta_{\ell-1}(X)=0$.
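The link operation defined above can be made concrete. The following minimal sketch (the encoding of a complex as a set of frozensets of vertices and the function name are our choices for illustration, not notation from the paper) computes $\text{lk}_X(\sigma)$ directly from the definition:

```python
def link(faces, sigma):
    # faces: the set of all (nonempty) faces of a simplicial complex X,
    #        each face a frozenset of vertices
    # sigma: a face of X
    # Returns {tau in X : sigma and tau are disjoint, sigma union tau is in X}.
    s = frozenset(sigma)
    return {t for t in faces if not (t & s) and (t | s) in faces}
```

For instance, in the full triangle on vertices $\{1,2,3\}$, the link of the vertex $1$ is the edge $\{2,3\}$ together with its two vertices.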
\begin{proposition} \label{p:vanishing.lower.order.betti} Under the assumptions of Theorem \ref{t:clt.topological.invariants}, $$ \left( \frac{\beta_j(t)-\mathbb{E}(\beta_j)}{\sqrt{\text{\rm Var}(f_k)}}, \, t\geq
0\right) \to {\bf 0} \ \text{ in } D[0,\infty) $$ in probability as $n\to\infty$ for all $j=0,1,\dots, k-1$, where $\bf 0$ is the constant zero process. \end{proposition} \begin{proof} If $k=1$ the claim is trivial, so assume that $k\geq 2$. We consider $j=k-1$ only; smaller dimensions can be treated in a similar way. Proposition \ref{p:vanishing.lower.order.betti} will be established by combining a series of lemmas provided below. Let $F_{j}(t)$ be the number of free $j$-faces of $X([n],{\bf p}; t)$, and $X_k(t)$ the $k$-skeleton of $X([n],{\bf p}; t)$. For a $(k-2)$-face $\sigma$ in $X_k(t)$, write $L_\sigma(t) :=
|\text{lk}_{X_k(t)}(\sigma)|$, i.e., the number of vertices
in the link of $\sigma$ in $X_k(t)$. We set $F_{j}:= F_{j}(0)$, $X_k:=X_k(0)$, and $L_\sigma:= L_\sigma(0)$.
Consider the delayed renewal sequences defined in \eqref{e:renewal.seq} corresponding to the stationary renewal processes $\big( \Delta_{i,A}, \, q\le i \le k, \, A\in \mathcal W_i \big)$. Enumerating the different arrival times, we denote the resulting sequence by $ \eta_1 \leq \eta_2 \leq \cdots$, and set $\eta_0=0$. For $0 < T < \infty,$ we denote by $N(T)$ the number of these points in the interval $[0,T]$. Clearly, $\mathbb{E} \big(N(T)\big) = \mathcal O(n^{k + 1})$ for every such $T$. \begin{lemma} \label{l:exp.number.free.face} For each $0\le j\leq k-1$, $$ \mathbb{E}(F_{j}) = o (e^{-n^\epsilon}), \ \ n\to\infty, $$ for some $\epsilon>0$. \end{lemma} \begin{proof} A simple calculation shows that $$ \mathbb{E}(F_j) \le n^{\tau_{j}({\bm \alpha})}\left( 1-n^{-\psi_{j+1}({\bm \alpha})}\right)^{n-j-1}. $$ If $\psi_{j+1}({\bm \alpha})=0$, the claim is trivial. Otherwise, $$ \mathbb{E}(F_{j}) \le C n^{\tau_{j}({\bm \alpha})} e^{-n^{1-\psi_{j+1}({\bm \alpha})}}. $$ Since $\psi_{j+1}({\bm \alpha})\leq \psi_k({\bm \alpha}) < 1$, the result follows. \end{proof} \begin{lemma} \label{l:pureness} $$ \mathbb{P}(X_k \text{ is pure}) = 1-o(e^{-n^\epsilon}), \ \ n\to\infty, $$ for some $\epsilon>0$. \end{lemma} \begin{proof} By Lemma \ref{l:exp.number.free.face}, \begin{align*} \mathbb{P}(X_k \text{ is pure}) &= \mathbb{P}(F_j=0, \ \ j=0,\dots,k-1) \\ &\ge 1-\sum_{j=0}^{k-1} \mathbb{P}(F_j \ge 1) \ge 1-\sum_{j=0}^{k-1} \mathbb{E}(F_j) = 1-o(e^{-n^\epsilon}). \end{align*} \end{proof} \begin{lemma} \label{l:log.L.sigma} Fix $\delta>0$. For a $(k-2)$-face $\sigma$ of $X_k$, $$ \mathbb{P} \left( \frac{(1+\delta)\log L_\sigma}{L_\sigma} > p_1 \right) = o(e^{-n^\epsilon}), \ \ n\to\infty, $$ for some $\epsilon>0$. \end{lemma} \begin{proof} Note that $$ 1-\psi_{k-1}({\bm \alpha}) >\psi_k({\bm \alpha})-\psi_{k-1}({\bm \alpha})\geq \alpha_1, $$ and $(1+\delta)x^{-1}\log x$ is decreasing for $x\ge e$. 
Therefore, if $L_\sigma\geq n^{1-\psi_{k-1}({\bm \alpha})}/2$, then $$ \frac{(1+\delta)\log L_\sigma}{L_\sigma} \leq \frac{(1+\delta)\log \bigl(n^{1-\psi_{k-1}({\bm \alpha})}/2\bigr)} {n^{1-\psi_{k-1}({\bm \alpha})}/2}<n^{-\alpha_1}=p_1 $$
for large $n$. Hence, for large $n$, $$ \mathbb{P} \left( \frac{(1+\delta)\log L_\sigma}{L_\sigma} > p_1 \right) \leq \mathbb{P}\left( L_\sigma<\frac{n^{1-\psi_{k-1}({\bm \alpha})}}{2}\right), $$ and the claim follows from the basic properties of the binomial distribution because $L_\sigma$ has a binomial distribution with parameters $n-k+1$ and $n^{-\psi_{k-1}({\bm \alpha})}$; see, e.g., Lemma 4.2 in \cite{fowler:2019}. \end{proof} \begin{lemma} \label{l:betti.k-1} For every $0<T<\infty$, $$ \mathbb{P}\Big(\sup_{0\le t \le T}\beta_{k-1}(t) \neq 0 \Big) = \mathcal O(n^{-k-1}), \ \ n\to\infty. $$ \end{lemma} \begin{proof} By the cohomology vanishing theorem, \begin{align*} &\mathbb{P} \big( \sup_{0\le t \le T} \beta_{k-1}(t) =0 \big) = \mathbb{P}\Big( \sup_{0 \le t \le T} \beta_{k-1}\big( X_k(t) \big) =0 \Big) \\ &=\mathbb{P} \Big(\beta_{k-1}\big( X_k(\eta_\ell) \big)=0 \text{ for } \ell = 0,1,\dots, \ N(T) \Big) \\ &\ge \mathbb{P} \Big(\beta_{k-1}\big( X_k(\eta_\ell) \big)=0 \text{ for } \ell = 0,1,\dots, n^{2k+2}, \, N(T) \le n^{2k+2} \Big) \\ &\ge \mathbb{P} \Big( \bigcap_{\ell=0}^{n^{2k+2}} \Big( \Big\{ \lambda_2 \big( \text{lk}_{X_k(\eta_\ell)}(\sigma) \big)> 1-\frac{1}{k} \text{ and } \text{lk}_{X_k(\eta_\ell)}(\sigma) \text{ is connected} \\ &\qquad \qquad \text{for every }(k-2)\text{-face } \sigma \text{ in } X_k(\eta_\ell) \Big\} \cap \Big\{ X_k(\eta_\ell) \text{ is pure}
\Big\}\Big) \cap \Big\{ N(T) \le n^{2k+2} \Big\} \Big) \\
&\ge 1-\sum_{\ell=0}^{n^{2k+2}} \binom{n}{k-1}
\mathbb{P} \left( \lambda_2
\big( \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \big) \le
1-\frac{1}{k} \text{ or } \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \text{ is disconnected} \right) \\
& \quad \ \ -\sum_{\ell=0}^{n^{2k+2}}\mathbb{P} \bigl( X_k(\eta_\ell) \text{ is
not pure} \big) - \mathbb{P}\Big( N(T) > n^{2k+2} \Big). \end{align*} Here $\sigma_0$ is a fixed $(k-2)$-simplex. Clearly,
\begin{align*}
\mathbb{P}\big( N(T) > n^{2k+2} \big) &\le \frac{\mathbb{E} [N(T)]}{n^{2k+2}} =\mathcal O (n^{-k-1}); \end{align*}
so, by Lemma \ref{l:pureness} and the stationarity of $X_k$, \begin{align*} &\mathbb{P} \big( \sup_{0\le t \le T} \beta_{k-1}(t) =0 \big) \\ &\ge 1-n^{3k+1}
\mathbb{P}\Big( \lambda_2 \big( \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \big) \le 1-\frac{1}{k} \text{ or } \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \text{ is disconnected} \Big) - \mathcal O(n^{-k-1}). \end{align*} Given $\sigma_0\in X_k$, we have by Lemma \ref{l:log.L.sigma} and its proof that, for some $\epsilon>0$, \begin{align*} &\mathbb{P} \Big( \lambda_2 \big( \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \big) \le 1-\frac{1}{k} \text{ or } \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \text{ is disconnected} \Big) \\ &= \mathbb{P} \Big( \Big\{ \lambda_2 \big( \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \big) \le 1-\frac{1}{k} \text{ or } \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \text{ is disconnected} \Big\} \\ &\qquad \qquad \cap \Big\{ \frac{(1+\delta)\log L_{\sigma_0}}{L_{\sigma_0}} \le p_1, \ L_{\sigma_0} \ge \frac{n^{1-\psi_{k-1}({\bm \alpha})}}{2} \Big\} \Big)+ o(e^{-n^\epsilon}) \\
&=\sum_{m=1}^{n-k+1} \mathbb{P} \Big( \lambda_2 \big( \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \big) \le 1-\frac{1}{k} \text{ or } \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \text{ is disconnected} \, \Big|\, L_{\sigma_0}=m \Big) \\ &\qquad \qquad \qquad \qquad \times {\mathbbm 1} \Big\{ \frac{(1+\delta)\log m}{m} \le p_1, \ m \ge \frac{n^{1-\psi_{k-1}({\bm \alpha})}}{2} \Big\}\, \mathbb{P}(L_{\sigma_0}=m) + o(e^{-n^\epsilon}). \end{align*}
However, conditionally on $L_{\sigma_0}=m$, the link $\text{lk}_{X_k}(\sigma_0)$ has the law of the Erd\"os-R\'enyi graph with parameters $m$ and $p_1$; see Lemma~4.2 in \cite{fowler:2019}. Furthermore, in the range of $m$ we are considering, $p_1 \ge \frac{(1+\delta)\log m}{m}$. It then follows from the spectral gap result in Theorem 1.1 of \cite{hoffman:kahle:paquette:2019} that for some $\delta$-dependent constant $C$, $$
\mathbb{P} \Big( \lambda_2 \big( \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \big) \le 1-\frac{1}{k} \text{ or } \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \text{ is disconnected} \, \Big|\, L_{\sigma_0}=m \Big) \leq Cm^{-\delta}. $$ We conclude that \begin{align*} &\mathbb{P} \Big( \lambda_2 \big( \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \big) \le 1-\frac{1}{k} \text{ or } \text{lk}_{X_k(\eta_\ell)}(\sigma_0) \text{ is disconnected} \Big) \\ &\quad \le C\Big(
\frac{n^{1-\psi_{k-1}({\bm \alpha})}}{2} \Big)^{-\delta}
+ o(e^{-n^\epsilon}) = \mathcal
O\big(n^{-\delta(1-\psi_{k-1}({\bm \alpha}))} \big), \end{align*} and so $$ \mathbb{P}\Big(\sup_{0\le t \le T}\beta_{k-1}(t)= 0 \Big) \ge 1 - \mathcal O\big(n^{3k+1-\delta(1-\psi_{k-1}({\bm \alpha}))} \big) - \mathcal O(n^{-k-1}). $$ As $1-\psi_{k-1}({\bm \alpha})>0$, the claim follows by taking large enough $\delta>0$. \end{proof} We can now complete the proof of the proposition. Since $\text{\rm Var}(f_k)\to\infty$, we have for any $0<T< \infty$ and $\epsilon>0$, using Lemma \ref{l:betti.k-1}, \begin{align*}
&\mathbb{P} \Big( \sup_{0\le t \le T} \big| \beta_{k-1}(t)-\mathbb{E}(\beta_{k-1}) \big| > \epsilon \sqrt{\text{\rm Var}(f_k)} \Big) \le \frac{2}{\epsilon} \mathbb{E} \Big( \sup_{0\le t \le T} \beta_{k-1}(t) \Big) \\ &\le \frac{2}{\epsilon} \mathbb{E} \Big( \sup_{0\le t \le T} f_{k-1}(t)\, {\mathbbm 1}
\big\{ \sup_{0\le t \le T} \beta_{k-1}(t) \neq 0 \big\} \Big) \le \frac{2}{\epsilon} \binom{n}{k} \mathbb{P}\Big( \sup_{0\le t \le T} \beta_{k-1}(t) \neq 0 \Big)\\ &=\frac{2}{\epsilon} \mathcal O(n^{-1}) \to 0, \ \ \ n\to\infty, \end{align*} as required. \end{proof}
\subsection{Representation of the Betti number} In this section we verify \eqref{e:beta.k+1}. \begin{proposition} \label{p:betti.representation} For $\ell \ge 1$, \begin{equation} \label{e:assertion.representation}
\beta_\ell(t) = \beta_\ell \big( X([n],{\bf p}; t) \big) = \sum_{j=\ell+2}^n \sum_{r \ge 1} \sum_{\sigma \subset [n], \, |\sigma|=j} r\eta_\sigma^{(j,r,\ell)}(t), \end{equation} where $\eta_\sigma^{(j,r,\ell)}(t)$ is the indicator function in \eqref{e:beta.k+1}. \end{proposition} \begin{proof} For $\ell$-simplices $\sigma, \tau$ in $X([n],{\bf p}; t)$, write $\sigma \sim \tau$ if they can be connected by a sequence of $\ell$-simplices $\sigma=\sigma_0, \sigma_1, \dots, \sigma_{j-1}, \sigma_j =\tau$ such that dim$(\sigma_i \cap \sigma_{i+1})=\ell-1$, $0 \le i \le j-1$. Clearly $\sim$ is an equivalence relation. Consider the equivalence classes $\mathcal G_1, \dots, \mathcal G_N$ associated with this relation. For each $i=1,\dots,N$, let $X_i$ be the smallest subcomplex of $X([n],{\bf p}; t)$ containing all the simplices for which some $\ell$-simplex in $\mathcal G_i$ is a face. Then $X_i$ is necessarily a maximal $\ell$-strongly connected subcomplex, such that dim$(X_{i_1}\cap X_{i_2}) \le \ell-2$ for any distinct $1 \le i_1 \neq i_2 \le N$. Let $X^{(N)} := \bigcup_{i=1}^NX_i$ and let $X_{N+1}$ be a subcomplex of $X([n],{\bf p}; t)$ containing all simplices in $X([n],{\bf p}; t)\setminus X^{(N)}$. By construction dim$(X_{N+1}) \le \ell-1$ and dim$(X_{N+1}\cap X^{(N)}) \le \ell-2$. With this setup, establishing the claim of the proposition reduces to proving the following statements: \begin{equation}\label{e:first.representation} \beta_\ell \big( X([n],{\bf p};t) \big)= \beta_\ell(X^{(N)}), \end{equation} and \begin{equation}\label{e:second.representation} \beta_\ell(X^{(N)}) = \sum_{i=1}^N\beta_\ell(X_i). \end{equation} Indeed, since $\sum_{i=1}^N\beta_\ell(X_i)$ in \eqref{e:second.representation} is clearly equal to the right hand side of \eqref{e:assertion.representation}, our proof will be done once \eqref{e:first.representation} and \eqref{e:second.representation} are both established. 
For the proof of \eqref{e:first.representation} we exploit the following Mayer-Vietoris exact sequence: \begin{align*} \dots \rightarrow H_\ell\big( X^{(N)}\cap X_{N+1} \big) &\stackrel{\lambda_\ell}{\to} H_\ell\big( X^{(N)} \big) \oplus H_\ell(X_{N+1}) \to H_\ell \big( X([n],{\bf p};t) \big) \\ &\to H_{\ell-1}\big( X^{(N)}\cap X_{N+1} \big) \stackrel{\lambda_{\ell-1}}{\to} H_{\ell-1} \big( X^{(N)} \big) \oplus H_{\ell-1}(X_{N+1}) \to \dots, \end{align*} where $H_\ell$ represents the homology group of order $\ell$, and $\lambda_\ell = (\lambda^{(1)}_\ell, \lambda^{(2)}_\ell)$ denotes the homomorphism induced by the inclusions $X^{(N)} \cap X_{N+1} \hookrightarrow X^{(N)}$ and $X^{(N)} \cap X_{N+1} \hookrightarrow X_{N+1}$. An elementary rank calculation (see e.g., Lemma 2.3 in \cite{yogeshwaran:subag:adler:2017}) yields \begin{align*} \beta_\ell \big( X([n],{\bf p};t) \big) = \beta_\ell \big( X^{(N)} \big) +
\beta_\ell (X_{N+1}) + \text{rank} (\text{ker} \lambda_\ell) +
\text{rank} (\text{ker} \lambda_{\ell-1}) - \beta_\ell
\big(X^{(N)}\cap X_{N+1} \big). \end{align*} Since dim$(X_{N+1})\le \ell -1$ and dim$\big( X^{(N)}\cap X_{N+1}\big) \le \ell-2$, we have that $$ H_\ell (X_{N+1}) \cong 0, \ \ H_\ell \big( X^{(N)}\cap X_{N+1} \big) \cong 0, \ \ \ H_{\ell-1}\big( X^{(N)}\cap X_{N+1} \big) \cong 0. $$
In particular, ker$\lambda_\ell$ and ker$\lambda_{\ell-1}$ are both trivial. Combining all these observations we obtain \eqref{e:first.representation}.
We now turn to deriving \eqref{e:second.representation}. The statement is trivial for $N=1$. If $N>1$, we denote $X^{(j)} := \bigcup_{i=1}^jX_i$ and prove that $\beta_\ell\big( X^{(j)} \big) = \sum_{i=1}^{j}\beta_\ell (X_i)$ for $j=1,\ldots, N$ inductively. Once again, the case $j=1$ is trivial, so suppose for induction that $\beta_\ell\big( X^{(j-1)} \big) = \sum_{i=1}^{j-1}\beta_\ell (X_i)$ for some $1\leq j<N$. We consider another Mayer-Vietoris exact sequence, given by \begin{align*} \dots \rightarrow H_\ell\big( X^{(j-1)}\cap X_j \big) &\stackrel{\nu_\ell}{\to} H_\ell\big( X^{(j-1)} \big) \oplus H_\ell(X_j) \to H_\ell \big( X^{(j)} \big) \\ &\to H_{\ell-1}\big( X^{(j-1)}\cap X_j \big) \stackrel{\nu_{\ell-1}}{\to} H_{\ell-1} \big( X^{(j-1)} \big) \oplus H_{\ell-1}(X_j) \to \dots, \end{align*} where $\nu_\ell$, $\nu_{\ell-1}$ are group homomorphisms analogous to the earlier situation. Since dim$\big( X^{(j-1)}\cap X_j \big)\le \ell-2$, the same rank computation as above gives us $$ \beta_\ell \big( X^{(j)} \big) = \beta_\ell \big( X^{(j-1)} \big) + \beta_\ell (X_j) + \text{rank} (\text{ker} \nu_\ell) + \text{rank} (\text{ker} \nu_{\ell-1}) - \beta_\ell \big(X^{(j-1)}\cap X_j \big) = \sum_{i=1}^j \beta_\ell(X_i), $$ completing the induction step. \end{proof}
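The equivalence classes $\mathcal G_1,\dots,\mathcal G_N$ used in the proof above can be computed mechanically with a union-find pass over the $\ell$-simplices; here is a minimal sketch (the frozenset encoding of simplices and the function name are assumptions made for illustration). Two $\ell$-simplices are joined directly when their intersection has dimension $\ell-1$, i.e., cardinality $\ell$; union-find then supplies the transitive closure of $\sim$.

```python
from itertools import combinations

def strongly_connected_classes(tops):
    # tops: the ell-simplices of a complex, each a frozenset of ell+1 vertices.
    # Groups them into the equivalence classes of the relation ~ (chains of
    # ell-simplices meeting in (ell-1)-dimensional faces).
    idx = list(tops)
    ell = len(idx[0]) - 1
    parent = list(range(len(idx)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(idx)), 2):
        if len(idx[i] & idx[j]) == ell:    # dim of intersection is ell-1
            parent[find(i)] = find(j)

    classes = {}
    for i, t in enumerate(idx):
        classes.setdefault(find(i), set()).add(t)
    return list(classes.values())
```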
\noindent \textbf{Acknowledgement}: The authors would like to thank the anonymous referee and the Associate Editor for their comments that led to a substantial improvement of the paper.
\end{document}
"id": "2001.06860.tex",
"language_detection_score": 0.6122516393661499,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title{Frobenius number and minimum genus of numerical semigroups with fixed multiplicity and embedding dimension}
\begin{abstract} Given two
positive integers $m$ and $e$, we provide algorithms for computing the minimal Frobenius number and the minimal genus of the set of numerical semigroups with multiplicity $m$ and embedding dimension $e$. Besides, the semigroups where these minimal values are achieved are computed too.
{\small \emph{Keywords:} embedding dimension, Frobenius number, genus, multiplicity, numerical semigroup.}
{\small \emph{MSC-class:} 20M14 (Primary), 20M05 (Secondary).} \end{abstract}
\section{Introduction} Denote by $\mathbb{N}$ the set of non-negative integers. A subset $S$ of $\mathbb{N}$ is called a numerical semigroup if $0\in S$, $\mathbb{N}\setminus S$ is finite and $S$ is closed under addition.
If $S$ is a numerical semigroup, it is well known that there exists a unique inclusion-wise minimal finite subset $A$ of $\mathbb{N}$ such that $S=\{\lambda_1 a_1+\cdots+\lambda_n a_n \mid n\in\mathbb{N}, ~ a_1,\ldots,a_n\in A$ and $\lambda_1,\ldots,\lambda_n\in\mathbb{N}\}$ (see Theorem 2.7 from \cite{libro}). The set $A$ is called the minimal system of generators of $S.$ In general, given a finite set $A\subset \mathbb{N},$ the monoid generated by $A$, $\langle A \rangle= \{\lambda_1 a_1+\cdots+\lambda_n a_n \mid n\in\mathbb{N}, ~ a_1,\ldots,a_n\in A$ and $\lambda_1,\ldots,\lambda_n\in\mathbb{N}\},$ is a numerical semigroup if and only if $\gcd(A)=1$ (see \cite[Lemma 2.1]{libro}). We denote by $msg(S)$ the minimal system of generators of $S$; its cardinality is called the embedding dimension of $S$ and it is denoted by $e(S)$.
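As an illustration of the minimal system of generators, the following sketch (not part of the authors' \texttt{FrobeniusNumberAndGenus} library; the function names are ours) removes the redundant generators from a finite set $A$ with $\gcd(A)=1$: a generator is redundant precisely when it belongs to the monoid generated by the remaining ones.

```python
from math import gcd
from functools import reduce

def minimal_generators(A):
    # A: finite set of positive integers with gcd(A) = 1
    A = sorted(set(A))
    assert reduce(gcd, A) == 1, "A does not generate a numerical semigroup"

    def in_monoid(n, gens):
        # naive membership test: n is a non-negative integer combination of gens
        if n == 0:
            return True
        return any(n >= g and in_monoid(n - g, gens) for g in gens)

    # keep a only if it is NOT generated by the other elements
    return [a for a in A if not in_monoid(a, [b for b in A if b != a])]
```

For example, $9=4+5$ and $13=4+9$ are redundant in $\{4,5,9,13\}$, so $msg(\langle 4,5,9,13\rangle)=\{4,5\}$.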
Other important invariants of a numerical semigroup $S$ are the following: \begin{itemize}
\item The multiplicity of $S$ (denoted by $m(S)$) is the least positive integer that belongs to $S$.
\item The Frobenius number of $S$ (denoted by $F(S)$) is the greatest integer which does not belong to $S$.
\item The genus of $S$ (denoted by $g(S)$) is the cardinality of $\mathbb{N}\setminus S$. \end{itemize}
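These three invariants can be computed from any generating set via the Apéry set of $S$ with respect to $m(S)$, a standard tool not introduced in this excerpt. The sketch below (function name is ours) uses the classical facts $F(S)=\max \mathrm{Ap}(S;m)-m$ and $g(S)=\sum_{w\in \mathrm{Ap}(S;m)} \lfloor w/m\rfloor$, computing the Apéry set by a shortest-path search over residues modulo $m$:

```python
import heapq

def invariants(gens):
    # gens: generating set of a numerical semigroup S (gcd of gens = 1 assumed)
    m = min(gens)                      # the multiplicity m(S)
    # ap[r] = least element of S congruent to r (mod m), via Dijkstra on Z_m
    ap = [float("inf")] * m
    ap[0] = 0
    heap = [(0, 0)]
    while heap:
        d, r = heapq.heappop(heap)
        if d > ap[r]:
            continue
        for a in gens:
            nd, nr = d + a, (r + a) % m
            if nd < ap[nr]:
                ap[nr] = nd
                heapq.heappush(heap, (nd, nr))
    frob = max(ap) - m                 # F(S)
    genus = sum(w // m for w in ap)    # g(S): count of gaps below ap[r] in each class
    return m, frob, genus
```

For instance, $S=\langle 3,5\rangle$ has $m(S)=3$, $F(S)=7$ and $g(S)=4$.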
For those readers who are not familiar with Numerical Semigroup Theory, the terminology used (genus, multiplicity, embedding dimension,\dots) may seem unusual in the scope of semigroups. In the literature, one finds many manuscripts devoted to the study of analytically unramified one-dimensional local domains via their value semigroups, which turn out to be numerical semigroups. All these invariants (see \cite{barucci}) have interpretations in this theory, hence their names.
The relation between these invariants is of great interest. Indeed, in \cite{wilf} the following conjecture appears: if $S$ is a numerical semigroup then $e(S)g(S)\leq (e(S)-1)(F(S)+1)$. Nowadays, this conjecture is still open and it is one of the most important problems in Numerical Semigroup Theory.
If $m$ and $e$ are positive integers, we introduce the following notation: $\mathcal{L}(m,e)=\{S \mid S \mbox{ is a numerical semigroup, } m(S)=m, ~ e(S)=e \}$, $g(m,e)=\min\{g(S) \mid S\in\mathcal{L}(m,e) \}$ and $F(m,e)=\min\{F(S) \mid S\in\mathcal{L}(m,e) \}$. In this work we are interested in giving algorithms for computing $g(m,e)$, $F(m,e)$, $\{S\in\mathcal{L}(m,e) \mid g(S)=g(m,e) \}$ and $\{S\in\mathcal{L}(m,e) \mid F(S)=F(m,e) \}$.
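A first feeling for these quantities can be obtained by brute force over the subfamily of $\mathcal{L}(m,e)$ whose generators other than $m$ lie strictly between $m$ and $2m$: any such set is automatically a minimal system of generators, since a sum of two generators already exceeds $2m$. The sketch below (names are ours; the invariant helper is duplicated for self-containment) only yields upper bounds for $F(m,e)$ and $g(m,e)$ in general, because minimal generators may exceed $2m$; deciding when these bounds are attained is the kind of question the algorithms of this paper address.

```python
import heapq
from itertools import combinations
from math import gcd
from functools import reduce

def frob_genus(gens):
    # F(S) and g(S) via the Apery set of S with respect to m = min(gens)
    m = min(gens)
    ap = [float("inf")] * m
    ap[0] = 0
    heap = [(0, 0)]
    while heap:
        d, r = heapq.heappop(heap)
        if d > ap[r]:
            continue
        for a in gens:
            nd, nr = d + a, (r + a) % m
            if nd < ap[nr]:
                ap[nr] = nd
                heapq.heappush(heap, (nd, nr))
    return max(ap) - m, sum(w // m for w in ap)

def restricted_search(m, e):
    # scan semigroups <{m} + B> with B a set of e-1 generators in (m, 2m);
    # returns the minimal F and minimal g over this subfamily of L(m, e)
    best_F, best_g = float("inf"), float("inf")
    for B in combinations(range(m + 1, 2 * m), e - 1):
        gens = (m,) + B
        if reduce(gcd, gens) != 1:
            continue  # not a numerical semigroup
        F, g = frob_genus(gens)
        best_F, best_g = min(best_F, F), min(best_g, g)
    return best_F, best_g
```

For $m=4$, $e=2$ the candidates are $\langle 4,5\rangle$ and $\langle 4,7\rangle$, and the subfamily minima are $F=11$, $g=6$, attained by $\langle 4,5\rangle$.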
The results of this work are illustrated with several examples. To this aim, we have used the library \texttt{FrobeniusNumberAndGenus}, developed by the authors in Mathematica (\cite{mathematica}). This library is freely available online at \cite{PROGRAMA}.
This work is organized as follows. Section \ref{pseudo} contains some known results on Frobenius pseudo-varieties which allow us to construct the tree of numerical semigroups with fixed multiplicity. In Sections \ref{mingenus} and \ref{minFrob}, the minimal genus and the minimal Frobenius number of the set of numerical semigroups with fixed multiplicity and embedding dimension are studied; we give algorithms for computing them and obtain the semigroups attaining these minimal values.
\section{Frobenius pseudo-variety of numerical semigroups with a fixed multiplicity}\label{pseudo}
According to the notation of \cite{pseudo-variedades}, a Frobenius pseudo-variety is a non-empty family $\mathcal{P}$ of numerical semigroups which verifies the following conditions: \begin{enumerate}
\item $\mathcal{P}$ has a maximum (according to the inclusion order).
\item If $\{S,T\}\subseteq\mathcal{P}$ then $S\cap T\in\mathcal{P}$.
\item If $S\in\mathcal{P}$ and $S\neq\max(\mathcal{P})$ then $S\cup\{F(S)\}\in\mathcal{P}$. \end{enumerate}
A graph $G$ is a pair $(V,E)$ where $V$ is a set (vertices of $G$) and $E$ is a subset of $\{(u,v)\in V\times V \mid u\neq v\}$ (edges of $G$). A path of length $n$ connecting the vertices $x$ and $y$ is a sequence of different edges of the form $(v_0,v_1), (v_1,v_2),\ldots,(v_{n-1},v_n)$ such that $v_0=x$ and $v_n=y$.
A graph $G$ is a tree if there exists a vertex $r$ (known as the root) such that for any other vertex $x$ of $G$ there exists a unique path connecting $x$ and $r$. If $(x,y)$ is an edge of a tree, we say that $x$ is a son of $y$.
If $\mathcal{P}$ is a Frobenius pseudo-variety we define the graph $G(\mathcal{P})$ as follows: $\mathcal{P}$ is its set of vertices and $(S,T)\in\mathcal{P}\times\mathcal{P}$ is an edge if $T=S\cup\{F(S)\}$.
The following result is a direct consequence of Lemma 12 and Theorem 3 of \cite{pseudo-variedades}.
\begin{proposition}\label{p1} If $\mathcal{P}$ is a Frobenius pseudo-variety, then $G(\mathcal{P})$ is a tree with root $\max(\mathcal{P})$. Moreover, the set of sons of a vertex $S\in\mathcal{P}$ is $\{S\setminus\{x\}\in\mathcal{P} \mid x\in msg(S),\ x>F(S) \}$. \end{proposition}
Let $m$ be a positive integer. We denote by $\mathcal{L}(m)$ the set \[\{S \mid S \mbox{ is a numerical semigroup with } m(S)=m \}.\] Clearly $\mathcal{L}(m)$ is a Frobenius pseudo-variety and $\max(\mathcal{L}(m))=\{0,m,\rightarrow\}=\langle m,m+1,\ldots,2m-1 \rangle$. So, as a consequence of Proposition \ref{p1} we have the following result which is fundamental in this work.
\begin{theorem}\label{t2} The graph $G(\mathcal{L}(m))$ is a tree rooted in $\langle m,m+1,\ldots,2m-1 \rangle$. Moreover, the set formed by the sons of a vertex $S\in\mathcal{L}(m)$ is $\{S\setminus\{x\} \mid x\in msg(S),\ x>F(S) \mbox{ and } x\neq m \}$. \end{theorem}
The previous theorem allows us to build $\mathcal{L}(m)$ recursively, starting from its root and adding to each computed vertex its sons.
We illustrate this with an example.
\begin{example}\label{e3} We show some levels of the tree $G(\mathcal{L}(4))$, giving its vertices and edges, and the minimal generators removed to obtain the sons.
{\footnotesize \xymatrix@C=0.5em{
& & & & & & \langle 4,5,6,7\rangle \ar@{<-}[rrd]|*+[F]{7}\ar@{<-}[llld]|*+[F]{5}\ar@{<-}[d]|*+[F]{6} & & & & & \\
& & & \langle 4,6,7,9 \rangle \ar@{<-}[rrd]|*+[F]{9}\ar@{<-}[lld]|*+[F]{6}\ar@{<-}[d]|*+[F]{7} & & & \langle 4,5,7\rangle \ar@{<-}[d]|*+[F]{7} & & \langle 4,5,6\rangle \\
& \langle 4,7,9,10\rangle \ar@{<-}[rd]|*+[F]{10}\ar@{<-}[ld]|*+[F]{7}\ar@{<-}[d]|*+[F]{9} & & \langle 4,6,9,11 \rangle \ar@{<-}[d]|*+[F]{9}\ar@{<-}[rd]|*+[F]{11} & & \langle 4,6,7 \rangle & \langle 4,5,11 \rangle \ar@{<-}[d]|*+[F]{11} & & \\ \langle 4,9,10,11 \rangle & \langle 4,7,10,13 \rangle & \langle 4,7,9 \rangle & \langle 4,6,11,13 \rangle & \langle 4,6,9 \rangle & & \langle 4, 5\rangle & & \\ } } \end{example}
If $G$ is a tree with root $r$, the level of a vertex $x$ is the length of the unique path connecting $x$ and $r$. The height of a tree is its maximum level. If $k\in\mathbb{N}$, we set $N(k,G)=\{v\in G \mid v \mbox{ has level }k \}$. So, for Example \ref{e3} we have:
$$N(0,G(\mathcal{L}(4)))=\{\langle 4,5,6,7 \rangle\}.$$ $$N(1,G(\mathcal{L}(4)))=\{\langle 4,6,7,9 \rangle,\langle 4,5,7 \rangle,\langle 4,5,6 \rangle\}.$$ $$N(2,G(\mathcal{L}(4)))=\{\langle 4,7,9,10 \rangle,\langle 4,6,9,11 \rangle,\langle 4,6,7 \rangle,\langle 4,5,11 \rangle\}.$$ $$N(3,G(\mathcal{L}(4)))=\{\langle 4,9,10,11 \rangle,\langle 4,7,10,13 \rangle,\langle 4,7,9 \rangle,\langle 4,6,11,13 \rangle,\langle 4,6,9 \rangle,\langle 4,5 \rangle\}.$$
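The level-by-level construction above is easy to mechanize. The Python sketch below is ours and purely illustrative (not the paper's package); it represents a semigroup in $\mathcal{L}(m)$ by its finite gap set $\mathbb{N}\setminus S$: the root has gaps $\{1,\dots,m-1\}$ and, by Theorem \ref{t2}, a son is obtained by adding a removable minimal generator to the gap set.

```python
def min_gens(gapset):
    """Minimal system of generators of the numerical semigroup whose
    (nonempty, finite) set of gaps is gapset."""
    F = max(gapset)                                   # Frobenius number
    in_S = lambda n: n >= 0 and n not in gapset
    m = next(n for n in range(1, F + 2) if in_S(n))   # multiplicity
    # a minimal generator is an element that is not a sum of two smaller
    # positive elements; minimal generators never exceed F + m
    return [s for s in range(1, F + m + 1) if in_S(s)
            and not any(in_S(a) and in_S(s - a) for a in range(1, s))]

def sons(gapset):
    """Sons of a vertex of G(L(m)), as in Theorem 2: remove a minimal
    generator x with x > F(S) and x != m."""
    F = max(gapset)
    m = next(n for n in range(1, F + 2) if n not in gapset)
    return [gapset | {x} for x in min_gens(gapset) if x > F and x != m]

def level(m, k):
    """N(k, G(L(m))): the vertices of level k, each given by its gap set."""
    current = [frozenset(range(1, m))]   # root <m, m+1, ..., 2m-1>
    for _ in range(k):
        current = [frozenset(t) for s in current for t in sons(s)]
    return current
```

For example, `[len(level(4, k)) for k in range(4)]` gives `[1, 3, 4, 6]`, the level sizes visible in Example \ref{e3}.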
\section{Elements of $\mathcal{L}(m,e)$ with minimum genus}\label{mingenus}
Our aim in this section is to give an algorithm that allows us to compute $g(m,e)$ and $\{S \mid S\in\mathcal{L}(m,e) \mbox{ and } g(S)=g(m,e) \}$. The following result is a consequence of Theorem \ref{t2}.
\begin{proposition} If $m$ is a positive integer and $(S,T)$ an edge of $G(\mathcal{L}(m))$, then $g(S)=g(T)+1$. \end{proposition}
As a direct consequence of the previous proposition we have the following result.
\begin{corollary} Fix $m,e\in \mathbb{N}$. If $P=\min\{k\in\mathbb{N} \mid N(k,G(\mathcal{L}(m)))\cap\mathcal{L}(m,e)\neq\emptyset\}$, then $\{S\in\mathcal{L}(m,e) \mid g(S)=g(m,e)\}=N(P,G(\mathcal{L}(m)))\cap\mathcal{L}(m,e)$. Moreover, $g(m,e)=m-1+P$. \end{corollary}
Note that if $S$ is a numerical semigroup and $e(S)=1$ then $S=\mathbb{N}$. Note also that by Proposition 2.10 of \cite{libro}, if $S$ is a numerical semigroup then $e(S)\leq m(S)$. Moreover, it is clear that if $m\geq e\geq 2$ then $\langle m, m+1,\ldots,m+e-1\rangle\in\mathcal{L}(m,e)$. In this way, we have the following result.
\begin{proposition} Let $m$ and $e$ be positive integers.
\begin{enumerate}
\item If $m<e$ then $\mathcal{L}(m,e)=\emptyset$.
\item If $e=1$ and $\mathcal{L}(m,e)\neq\emptyset$ then $m=1$ and $\mathcal{L}(m,e)=\{\mathbb{N}\}$.
\item If $m\geq e\geq 2$ then $\mathcal{L}(m,e)\neq\emptyset$.
\end{enumerate} \end{proposition}
Now, we give an algorithm to compute $g(m,e)$ and $\{S\in\mathcal{L}(m,e) \mid g(S)=g(m,e) \}$.
\begin{algorithm} \caption{Sketch of the algorithm to compute $g(m,e)$ and the set of semigroups with a fixed multiplicity and embedding dimension whose genus is $g(m,e)$.}\label{a7} \textbf{INPUT:} $m$ and $e$ positive integers such that $m\geq e\geq 2$.\\ \textbf{OUTPUT:} $g(m,e)$ and $\{S \mid S\in\mathcal{L}(m,e) \mbox{ and } g(S)=g(m,e) \}$. \begin{algorithmic}[1]
\State Set $k=0$ and $A=\{\langle m,m+1,\ldots,2m-1\rangle\}$
\While{True}
\If {$A\cap\mathcal{L}(m,e)\neq\emptyset$}
\State \Return{$m-1+k$ and $A\cap\mathcal{L}(m,e)$}
\EndIf
\For{$S\in A$}
\State $C(S)=\{T \mid T \mbox{ is a son of } S\}$
\EndFor
\State $\displaystyle{A=\bigcup_{S\in A}C(S)}$, $k=k+1$.
\EndWhile \end{algorithmic}
\end{algorithm}
We illustrate now the previous algorithm with an example.
\begin{example} We compute $g(5,3)$ and $\{S\in\mathcal{L}(5,3) \mid g(S)=g(5,3) \}$ using Algorithm \ref{a7}.
\begin{itemize}
\item $k=0$ and $A=\{\langle 5,6,7,8,9 \rangle\}$.
\item $k=1$ and $A=\{\langle 5,7,8,9,11 \rangle,\langle 5,6,8,9 \rangle,\langle 5,6,7,9 \rangle,\langle 5,6,7,8 \rangle\}$.
\item $k=2$ and
\begin{multline*}
A=\{\langle 5,8,9,11,12 \rangle,\langle 5,7,9,11,13 \rangle,\langle 5,7,8,11 \rangle,\\
\langle 5,7,8,9 \rangle, \langle 5,6,9,13 \rangle,\langle 5,6,8 \rangle,\langle 5,6,7 \rangle\}.
\end{multline*}
\end{itemize} It returns $g(5,3)=6$ and $\{S\in\mathcal{L}(5,3) \mid g(S)=6 \}=\{\langle 5,6,8 \rangle,\langle 5,6,7 \rangle\}$. In the package \texttt{FrobeniusNumberAndGenus}, we can run the command \texttt{Algorithm1[5,3]} to obtain this result. \end{example}
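Algorithm \ref{a7} can be transcribed almost verbatim. In the illustrative Python below (ours; the paper's package provides the command \texttt{Algorithm1}), a semigroup is encoded by its gap set, so its genus is the size of that set and its embedding dimension is the length of its minimal generating system.

```python
def min_gens(gapset):
    """Minimal generating system of the semigroup with the given gap set."""
    F = max(gapset)
    in_S = lambda n: n >= 0 and n not in gapset
    m = next(n for n in range(1, F + 2) if in_S(n))
    return [s for s in range(1, F + m + 1) if in_S(s)
            and not any(in_S(a) and in_S(s - a) for a in range(1, s))]

def sons(gapset):
    """Sons in G(L(m)): remove a minimal generator x > F(S), x != m."""
    F = max(gapset)
    m = next(n for n in range(1, F + 2) if n not in gapset)
    return [gapset | {x} for x in min_gens(gapset) if x > F and x != m]

def min_genus(m, e):
    """Algorithm 1 (m >= e >= 2): g(m, e) together with the semigroups
    attaining it, each returned as its minimal generating system."""
    k, A = 0, [frozenset(range(1, m))]        # level 0: the root of G(L(m))
    while True:
        hits = [min_gens(S) for S in A if len(min_gens(S)) == e]
        if hits:
            return m - 1 + k, sorted(hits)
        A = [frozenset(T) for S in A for T in sons(S)]
        k += 1
```

Here `min_genus(5, 3)` returns `(6, [[5, 6, 7], [5, 6, 8]])`, matching the example above.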
If $S$ is a numerical semigroup and $n\in S\setminus\{0\}$, then the Ap\'{e}ry set of $n$ in $S$ is $\mathrm{Ap}(S,n)=\{s\in S \mid s-n\not\in S\}$ (see \cite{apery}). Lemma 2.4 of \cite{libro} shows that $\mathrm{Ap}(S,n)=\{w(0)=0,w(1),\ldots,w(n-1)\}$ where $w(i)$ is the least element of $S$ congruent with $i$ modulo $n$. Note that $w(i)=k_in+i$ for some $k_i\in\mathbb{N}$, and $kn+i\in S$ if and only if $k\geq k_i$. Therefore, we have the following result.
\begin{lemma}\label{l9} Let $S$ be a numerical semigroup, $n\in S\setminus\{0\}$ and $\mathrm{Ap}(S,n)=\{0,k_1n+1,\ldots,k_{n-1}n+n-1\}$. Then $g(S)=k_1+\cdots+k_{n-1}$. \end{lemma}
The next result can be easily deduced from Corollary 4 of \cite{intervalos}.
\begin{lemma}\label{l10} Let $m$ and $e$ be integers such that $m\geq e\geq 2$, $S=\langle m,m+1,\ldots,m+e-1 \rangle$ and $m-1=q(e-1)+r$ with $q,r\in\mathbb{N}$ and $r\leq e-2$. Then $\mathrm{Ap}(S,m)=\{0,m+1,\ldots,m+e-1,2m+(e-1)+1,\ldots,2m+2(e-1),\ldots,qm+(q-1)(e-1)+1,\ldots,qm+q(e-1),(q+1)m+q(e-1)+1,\ldots,(q+1)m+q(e-1)+r\}$. \end{lemma}
If $a,b\in\mathbb{N}$ and $b\neq 0$, we denote by $a\mod b$ the remainder of dividing $a$ by $b$. If $q$ is a rational number, we set $\floor{q}=\max\{z\in\mathbb{Z} \mid z\leq q\}$. Note that $a=\floor{\frac a b} b+(a\mod b)$. From Lemma \ref{l9} and Lemma \ref{l10} we have the following result.
\begin{proposition} Let $m$ and $e$ be integers such that $m\geq e\geq 2$ and $S=\langle m,m+1,\ldots,m+e-1 \rangle$. Then, $$g(S)=\left(\left\lfloor{\frac{m-1}{e-1}+1}\right\rfloor\right)\left(\frac{\left\lfloor{\frac{m-1}{e-1}}\right\rfloor(e-1)}{2}+(m-1)\mod (e-1)\right).$$ \end{proposition}
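Writing $m-1=q(e-1)+r$, the right-hand side of this closed form equals $(q+1)\bigl(q(e-1)/2+r\bigr)$. The short check below (our illustrative Python, not part of the paper's package) compares it against a brute-force genus computation.

```python
def genus(gens):
    """Genus of <gens> by listing elements until min(gens) consecutive
    ones appear (beyond that point every integer is in the semigroup)."""
    m, limit = min(gens), 2 * max(gens)
    while True:
        member = [False] * (limit + 1)
        member[0] = True
        for i in range(limit + 1):
            if member[i]:
                for g in gens:
                    if i + g <= limit:
                        member[i + g] = True
        run = 0
        for i, b in enumerate(member):
            run = run + 1 if b else 0
            if run == m:
                return sum(1 for j in range(i) if not member[j])
        limit *= 2

def genus_formula(m, e):
    """Closed form for g(<m, m+1, ..., m+e-1>) from the proposition."""
    q, r = divmod(m - 1, e - 1)
    # (q + 1) * (q*(e-1)/2 + r), kept in integer arithmetic
    return (q + 1) * q * (e - 1) // 2 + (q + 1) * r

# the closed form agrees with the brute-force computation
assert all(genus_formula(m, e) == genus(list(range(m, m + e)))
           for e in range(2, 8) for m in range(e, 25))
```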
Clearly $\langle m,m+1,\ldots,m+e-1 \rangle \in\mathcal{L}(m,e)$ and therefore we have the following result.
\begin{corollary} If $m$ and $e$ are integers such that $m\geq e\geq 2$ then $$g(m,e)\leq\left(\left\lfloor{\frac{m-1}{e-1}+1}\right\rfloor\right)\left(\frac{\left\lfloor{\frac{m-1}{e-1}}\right\rfloor(e-1)}{2}+(m-1)\mod (e-1)\right).$$ \end{corollary}
For many examples the equality holds. However, there are some cases where the semigroup $\langle m,m+1,\ldots,m+e-1 \rangle$ does not have minimum genus in the set $\mathcal{L}(m,e)$ as we show in the next example.
\begin{example} $S=\langle 8,9,10 \rangle$ is a numerical semigroup and $g(S)=16$. $\bar{S}=\langle 8,9,11 \rangle$ is a numerical semigroup and $g(\bar{S})=14$. Therefore, in this case $g(\langle 8,9,10 \rangle)\neq g(8,3)$. \end{example}
Using the notation of \cite{cadiz}, a numerical semigroup $S$ is packed if $msg(S)\subseteq\{m(S),m(S)+1,\ldots,2m(S)-1\}$. The set of all packed numerical semigroups with multiplicity $m$ and embedding dimension $e$ is denoted by $\mathcal{C}(m,e)$. The following result is obtained from \cite{cadiz}.
\begin{proposition}\label{p14} If $S\in\mathcal{L}(m,e)$ then $\bar{S}=\langle \{m\}+\{x\mod m \mid x\in msg(S)\} \rangle\in\mathcal{C}(m,e)$ and $g(\bar{S})\leq g(S)$. Moreover, if $S\not\in\mathcal{C}(m,e)$ then $g(\bar{S})<g(S)$. \end{proposition}
We illustrate the previous proposition with an example.
\begin{example} If $S=\langle 5,11,17 \rangle\in\mathcal{L}(5,3)$ then $\bar{S}=\langle \{5\}+\{0,1,2\} \rangle=\langle 5,6,7 \rangle\in\mathcal{C}(5,3)$. Therefore, $g(\bar{S})\leq g(S)$. Moreover, $S\not\in\mathcal{C}(5,3)$, so $g(\bar{S})<g(S)$. \end{example}
The next result is a consequence of Proposition \ref{p14}.
\begin{corollary} Let $m$ and $e$ be integers such that $m\geq e \geq 2$. Then
\begin{enumerate}
\item $g(m,e)=\min\{g(S) \mid S\in\mathcal{C}(m,e)\}$.
\item $\{S\in\mathcal{L}(m,e) \mid g(S)=g(m,e) \}=\{S\in\mathcal{C}(m,e) \mid g(S)=g(m,e) \}$.
\end{enumerate} \end{corollary}
Note that $\mathcal{C}(m,e)$ is finite and therefore the previous corollary gives us another algorithm for computing $g(m,e)$ and $\{S\in\mathcal{L}(m,e) \mid g(S)=g(m,e) \}$. We now give more details about this method. The following result is Proposition 4 of \cite{cadiz}.
\begin{proposition}\label{p17} Let $m$ and $e$ be integers such that $m\geq e\geq 2$ and let $A$ be a subset of $\{1,\ldots,m-1\}$ with cardinality $e-1$ such that ${\mathrm gcd}(A\cup\{m\})=1$. Then $S=\langle \{m\}+(A\cup\{0\}) \rangle\in\mathcal{C}(m,e)$. Moreover, every element of $\mathcal{C}(m,e)$ has this form. \end{proposition}
We illustrate the previous proposition with an example.
\begin{example} We are going to compute $\mathcal{C}(6,3)$. We start computing $\{A\subseteq\{1,2,3,4,5\} \mid \#A=2 \mbox{ and } {\mathrm gcd}(A\cup\{6\})=1 \}$. This set is equal to \[ \{ \{1,2\},\{1,3\},\{1,4\},\{1,5\},\{2,3\},\{2,5\},\{3,4\}, \{3,5\},\{4,5\}\}. \]
Therefore, \begin{multline*} \mathcal{C}(6,3)=\{\langle 6,7,8\rangle,\langle 6,7,9\rangle,\langle 6,7,10\rangle,\langle 6,7,11\rangle,\\ \langle 6,8,9\rangle,\langle 6,8,11\rangle,\langle 6,9,10\rangle,\langle 6,9,11\rangle,\langle 6,10,11\rangle\}. \end{multline*} A simple computation shows us $g(\langle 6,7,8 \rangle)=9$, $g(\langle 6,7,9 \rangle)=9$, $g(\langle 6,7,10 \rangle)=9$, $g(\langle 6,7,11 \rangle)=10$, $g(\langle 6,8,9 \rangle)=10$, $g(\langle 6,8,11 \rangle)=11$, $g(\langle 6,9,10 \rangle)=12$, $g(\langle 6,9,11 \rangle)=13$ and $g(\langle 6,10,11 \rangle)=13$.\\ Therefore, $g(6,3)=9$ and the set $\{S\in\mathcal{L}(6,3) \mid g(S)=9 \}$ is equal to $\{\langle 6,7,8 \rangle,\langle 6,7,9 \rangle,\langle 6,7,10 \rangle\}$. \end{example}
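The computation in this example is easy to automate: Proposition \ref{p17} enumerates $\mathcal{C}(m,e)$, and by the corollary above, minimizing the genus over this finite set yields $g(m,e)$. The sketch below is our illustrative Python, not the paper's package.

```python
from functools import reduce
from itertools import combinations
from math import gcd

def genus(gens):
    """Genus of <gens>: count gaps after listing elements until
    min(gens) consecutive ones appear."""
    m, limit = min(gens), 2 * max(gens)
    while True:
        member = [False] * (limit + 1)
        member[0] = True
        for i in range(limit + 1):
            if member[i]:
                for g in gens:
                    if i + g <= limit:
                        member[i + g] = True
        run = 0
        for i, b in enumerate(member):
            run = run + 1 if b else 0
            if run == m:
                return sum(1 for j in range(i) if not member[j])
        limit *= 2

def packed(m, e):
    """C(m, e) as generator lists, via Proposition 17."""
    return [[m] + [m + a for a in A]
            for A in combinations(range(1, m), e - 1)
            if reduce(gcd, A, m) == 1]

def min_genus_packed(m, e):
    """g(m, e) and the packed semigroups attaining it."""
    sems = packed(m, e)
    best = min(genus(S) for S in sems)
    return best, [S for S in sems if genus(S) == best]
```

Here `min_genus_packed(6, 3)` returns `(9, [[6, 7, 8], [6, 7, 9], [6, 7, 10]])`, reproducing the example.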
\section{Elements of $\mathcal{L}(m,e)$ with minimum Frobenius Number}\label{minFrob}
Our aim in this section is to obtain algorithmic methods for computing $F(m,e)$ and $\{S\in\mathcal{L}(m,e) \mid F(S)=F(m,e) \}$. The next result is a consequence of Theorem \ref{t2}.
\begin{proposition} If $m$ is a positive integer and $(S,T)$ is an edge of $G(\mathcal{L}(m))$ then $F(T)<F(S)$. \end{proposition}
The following result can be deduced from \cite{numerical}.
\begin{proposition} If $m$ is a positive integer and $(S,T)$ is an edge of $G(\mathcal{L}(m))$ then $e(S)\leq e(T)$. \end{proposition}
Clearly $F(m,m)=m-1$ and $\{S\in\mathcal{L}(m,m) \mid F(S)=m-1 \}=\{\langle m,m+1,\ldots,2m-1 \rangle\}$. It is well known (see \cite{sylvester} for example) that if $S=\langle n_1,n_2 \rangle$ is a numerical semigroup,
then $F(S)=n_1n_2-n_1-n_2$. Therefore, we obtain the following result.
\begin{proposition} Let $m$ be an integer such that $m\geq 2$.
\begin{enumerate}
\item $F(m,m)=m-1$ and $\{S\in\mathcal{L}(m,m) \mid F(S)=m-1\}=\{\langle m,m+1,\ldots,2m-1\rangle\}$.
\item $F(m,2)=m^2-m-1$ and $\{S\in\mathcal{L}(m,2) \mid F(S)=m^2-m-1\}=\{\langle m,m+1 \rangle\}$.
\end{enumerate} \end{proposition}
If $q$ is a rational number, we set $\ceil{q}=\min\{z\in\mathbb{Z} \mid q\leq z \}$. The next result is deduced from \cite[Corollary 4.5]{intervalos}.
\begin{proposition} If $m$ and $e$ are integers such that $m\geq e\geq 2$ then $F(\langle m,m+1,\ldots,m+e-1 \rangle)=\ceil{\frac{m-1}{e-1}}m-1$. \end{proposition}
As a consequence of the previous proposition we get the following result.
\begin{corollary} If $m$ and $e$ are integers such that $m\geq e\geq 2$ then $F(m,e)\leq \ceil{\frac{m-1}{e-1}}m-1$. \end{corollary}
In the previous corollary, the equality often holds, but in some cases $F(\langle m,m+1,\dots,m+e-1 \rangle)\neq\min\{F(S) \mid S\in\mathcal{L}(m,e) \}$. For example, $F(\langle 4,5,6 \rangle)=7$ and $F(\langle 4,5,7 \rangle)=6$.
From the above results, we obtain the following algorithm where the projections from the cartesian product $\mathcal{L}(m)\times \mathbb{N}$ are denoted by $\pi_1$ and $\pi_2$.
\begin{algorithm} \caption{Sketch of the algorithm to compute $F(m,e)$ and the set of semigroups with a fixed multiplicity and embedding dimension whose Frobenius number is $F(m,e)$.}\label{a24} \textbf{INPUT:} $m$ and $e$ integers such that $m\geq e\geq 2$.\\ \textbf{OUTPUT:} $F(m,e)$ and $\{S\in\mathcal{L}(m,e) \mid F(S)=F(m,e) \}$. \begin{algorithmic}[1]
\State $A=\{\langle m,m+1,\ldots,2m-1 \rangle\}$, $I=\emptyset$ and $\alpha=\ceil{\frac{m-1}{e-1}}m-1$
\While{True}
\State $C=\{(S,F(S)) \mid S \mbox{ is a son of some element of } A \mbox{ and } F(S)\leq\alpha \}$
\State $K=\{S\in \pi_1(C) \mid e(S)\geq e \}$
\If{$K=\emptyset$}
\State \Return $F(m,e)=\pi_2(I)$ and $\{S\in\mathcal{L}(m,e) \mid F(S)=F(m,e) \}=\pi_1(I)$
\EndIf
\State $A=K$, $B=\{(S,F(S)) \mid S\in K \mbox{ and } e(S)=e \}$
\State $\alpha=\min(\pi_2(B)\cup\{\alpha\})$, $I=\{(S,F(S))\in I\cup B \mid F(S)=\alpha \}$.
\EndWhile \end{algorithmic}
\end{algorithm}
We illustrate with an example how this algorithm works.
\begin{example} We compute $F(4,3)$ and $\{S\in\mathcal{L}(4,3) \mid F(S)=F(4,3) \}$ using Algorithm \ref{a24}.
\begin{itemize}
\item $A=\{\langle 4,5,6,7 \rangle\}$, $I=\emptyset$ and $\alpha=\ceil{\frac{3}{2}}4-1=7$.
\item $C=\{(\langle 4,6,7,9 \rangle,5),(\langle 4,5,7 \rangle,6),(\langle 4,5,6 \rangle,7)\}$ and\\
$K=\{\langle 4,6,7,9 \rangle,\langle 4,5,7 \rangle,\langle 4,5,6 \rangle\}$.
\item $A=\{\langle 4,6,7,9 \rangle,\langle 4,5,7 \rangle,\langle 4,5,6 \rangle\}$, $B=\{(\langle 4,5,7 \rangle,6),(\langle 4,5,6 \rangle,7)\}$, $\alpha=\min\{6,7\}=6$ and $I=\{(\langle 4,5,7 \rangle,6)\}$.
\item $C=\{(\langle 4,7,9,10 \rangle,6)\}$ and $K=\{\langle 4,7,9,10 \rangle\}$.
\item $A=\{\langle 4,7,9,10 \rangle\}$, $B=\emptyset$, $\alpha=6$ and $I=\{(\langle 4,5,7 \rangle,6)\}$.
\item $C=\emptyset$ and $K=\emptyset$.
\end{itemize} Therefore, $F(4,3)=6$ and $\{S\in \mathcal{L}(4,3) \mid F(S)=6 \}=\{\langle 4,5,7 \rangle\}$. Using the Mathematica package \cite{PROGRAMA}, running the commands \texttt{MinFrob[4,3]} and \texttt{FrobeniusEmbeddingDimensionMultiplicity[6,3,4]}, we obtain $6$ and $\langle 4,5,7 \rangle$, respectively. \end{example}
Our next goal is to give an alternative algorithm for computing $F(m,e)$ and $\{S\in\mathcal{L}(m,e) \mid F(S)=F(m,e) \}$. The next result is deduced from \cite{cadiz}.
\begin{proposition} If $S\in\mathcal{L}(m,e)$ then $\bar{S}=\langle \{m\}+\{x\mod m \mid x\in msg(S) \}\rangle\in\mathcal{C}(m,e)$ and $F(\bar{S})\leq F(S)$. \end{proposition}
As a consequence of the previous proposition we get the following result.
\begin{corollary}\label{c17} If $m$ and $e$ are integers such that $m\geq e\geq 2$ then $F(m,e)=\min\{F(S) \mid S\in\mathcal{C}(m,e)\}$. \end{corollary}
The set $\mathcal{C}(m,e)$ is finite, so the previous corollary gives us an algorithmic method for computing $F(m,e)$.
\begin{example} We compute $F(6,5)$. First, we calculate $\mathcal{C}(6,5)$ by using Proposition \ref{p17} and then we apply Corollary \ref{c17}. So, \begin{multline*} \mathcal{C}(6,5)=\{\langle 6,7,8,9,10 \rangle, \langle 6,7,8,9,11 \rangle,\\ \langle 6,7,8,10,11 \rangle, \langle 6,7,9,10,11 \rangle, \langle 6,8,9,10,11 \rangle\} \end{multline*} and therefore $F(6,5)=\min\{F(\langle 6,7,8,9,10 \rangle)=11$, $F(\langle 6,7,8,9,11 \rangle)=10$, $F(\langle 6,7,8,10,11 \rangle)=9$, $F(\langle 6,7,9,10,11 \rangle)=8$, $F(\langle 6,8,9,10,11 \rangle)=13\}=8$. \end{example}
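This computation is again mechanical: enumerate the finite set $\mathcal{C}(m,e)$ and minimize the Frobenius number, exactly as in Corollary \ref{c17}. The following is our illustrative Python sketch (not the paper's package).

```python
from functools import reduce
from itertools import combinations
from math import gcd

def frobenius(gens):
    """F(<gens>): the largest gap, found by listing elements until
    min(gens) consecutive ones appear."""
    m, limit = min(gens), 2 * max(gens)
    while True:
        member = [False] * (limit + 1)
        member[0] = True
        for i in range(limit + 1):
            if member[i]:
                for g in gens:
                    if i + g <= limit:
                        member[i + g] = True
        run, F = 0, -1
        for i, b in enumerate(member):
            run = run + 1 if b else 0
            if not b:
                F = i              # latest gap seen so far
            if run == m:           # no gaps can occur beyond this point
                return F
        limit *= 2

def packed(m, e):
    """C(m, e) as generator lists, via Proposition 17."""
    return [[m] + [m + a for a in A]
            for A in combinations(range(1, m), e - 1)
            if reduce(gcd, A, m) == 1]

def min_frobenius(m, e):
    """F(m, e) = min of F(S) over the finite set C(m, e) (Corollary 17)."""
    return min(frobenius(S) for S in packed(m, e))
```

Here `min_frobenius(6, 5)` returns `8`, as in the example above.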
Now, we are interested in giving an algorithmic method for computing $\{S\in\mathcal{L}(m,e) \mid F(S)=F(m,e) \}$. The next example shows that there exist semigroups $S\in\mathcal{L}(m,e)$ such that $S\not\in\mathcal{C}(m,e)$ and $F(S)=F(m,e)$.
\begin{example}The numerical semigroups $S_1=\langle 7,9,10,15 \rangle$ and $S_2=\langle 7,8,10,19 \rangle$ verify that $S_1,S_2\in \mathcal{L}(7,4)\setminus \mathcal{C}(7,4)$ and $F(S_1)=F(S_2)=13=F(7,4).$
\end{example}
If $S\in\mathcal{L}(m,e)$ we denote by $\theta(S)$ the numerical semigroup generated by $\{m\}+\{x\mod m \mid x\in msg(S)\}$. Clearly, $\theta(S)\in\mathcal{C}(m,e)$.\\
We define in $\mathcal{L}(m,e)$ the following equivalence relation: $S\mathcal{R}T$ if and only if $\theta(S)=\theta(T)$. We denote by $[S]$ the set $\{T\in\mathcal{L}(m,e) \mid S\mathcal{R}T\}$. Therefore, the quotient set ${\mathcal{L}(m,e)}/{\mathcal{R}}=\{[S]\mid S\in\mathcal{L}(m,e)\}$ is a partition of $\mathcal{L}(m,e)$. The next result is Theorem 3 from \cite{cadiz}.
\begin{proposition} Let $m$ and $e$ be integers such that $m\geq e\geq 2$. Then, $\{[S] \mid S\in\mathcal{C}(m,e)\}$ is a partition of $\mathcal{L}(m,e)$. Moreover, if $\{S,T\}\subseteq\mathcal{C}(m,e)$ and $S\neq T$ then $[S]\cap[T]=\emptyset$. \end{proposition}
As a consequence of the previous proposition, to compute $\{S\in\mathcal{L}(m,e) \mid F(S)=F(m,e) \}$ it is enough to perform the following two steps: \begin{enumerate}
\item Compute $A=\{S\in\mathcal{C}(m,e) \mid F(S)=F(m,e) \}$.
\item For every $S\in A$, compute $\{T\in[S] \mid F(T)=F(S) \}$. \end{enumerate} We already know how to perform Step 1. Now, we focus on giving an algorithm that allows us to perform Step 2.
Using Algorithm 12 from \cite{cadiz}, for $S\in\mathcal{C}(m,e)$ and $F\in\mathbb{N}$ we can compute the set $\{T\in[S] \mid F(T)\leq F \}$. Clearly, if $S\in\mathcal{C}(m,e)$ then $\{T\in[S] \mid F(T)=F(S)\}=\{T\in [S] \mid F(T)\leq F(S) \}$. Therefore, we adapt Algorithm 12 from \cite{cadiz} to our needs for Step 2. Before doing so, we introduce some concepts and results.\\
If $S$ is a numerical semigroup, we denote by $M(S)=\max(msg(S))$. If $S\in\mathcal{C}(m,e)$ we define the graph $G([S])$ as follows: $[S]$ is its set of vertices and $(A,B)\in [S]\times[S]$ is an edge if $msg(B)=(msg(A)\setminus\{M(A)\})\cup\{M(A)-m\}$.
The following result is Theorem 9 from \cite{cadiz}.
\begin{proposition} If $S\in\mathcal{C}(m,e)$ then $G([S])$ is a tree with root $S$. Moreover, if $P\in [S]$ and $msg(P)=\{n_1<n_2<\cdots<n_e\}$ then the sons of $P$ in $G([S])$ are the numerical semigroups of the form $\langle (\{n_1,\ldots,n_e\}\backslash\{n_k\})\cup\{n_k+n_1\}\rangle$ such that $k\in\{2,\ldots,e\}$, $n_k+n_1>n_e$ and $n_k+n_1\notin\langle \{n_1,\ldots,n_e\}\backslash\{n_k\} \rangle$. \end{proposition}
Now we give an algorithm such that for a semigroup $S\in\mathcal{C}(m,e)$ it computes the set $\{T\in[S] \mid F(T)=F(S) \}$.
\begin{algorithm}\caption{Sketch of the algorithm to compute the semigroups of each equivalence class such that their Frobenius number is minimum}\label{a32} \textbf{INPUT:} $S\in\mathcal{C}(m,e)$.\\ \textbf{OUTPUT:} $\{T\in [S] \mid F(T)=F(S) \}$. \begin{algorithmic}[1]
\State $A=\{S\}$ and $B=\{S\}$
\While{True}
\State $C=\{H \mid H \mbox{ is a son of an element of } B \mbox{ and } F(H)=F(S) \}$
\If{$C=\emptyset$}
\State \Return A
\EndIf
\State $A=A\cup C$, $B=C$.
\EndWhile \end{algorithmic}
\end{algorithm}
We finish this section with an example to illustrate the above algorithm.
\begin{example} We use now Algorithm \ref{a32} for computing $\{T\in [S] \mid F(T)=F(S)=10 \}$ where $S=\langle 6,7,8,9,11 \rangle\in\mathcal{C}(6,5)$.
\begin{itemize}
\item $A=\{\langle 6,7,8,9,11 \rangle\}$ and $B=\{\langle 6,7,8,9,11 \rangle\}$.
\item $C=\{\langle 6,8,9,11,13 \rangle, \langle 6,8,11,13,15 \rangle\}$.
\item $A=\{\langle 6,7,8,9,11 \rangle,\langle 6,8,9,11,13 \rangle, \langle 6,8,11,13,15 \rangle\}$ and\\ $B=\{\langle 6,8,9,11,13 \rangle, \langle 6,8,11,13,15 \rangle \}$.
\item $C=\emptyset$.
\end{itemize} Therefore, $\{T\in [S] \mid F(T)=10 \}=\{\langle 6,7,8,9,11 \rangle, \langle 6,8,9,11,13 \rangle, \langle 6,8,11,13,15 \rangle\}$. This result is also obtained by executing the command \texttt{Algorithm3[\{6,7,8,9,11\}]} of \cite{PROGRAMA}.
\end{example}
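The run above can be reproduced with a short transcription of Algorithm \ref{a32} (our illustrative Python, not the paper's package). A class member is represented by its sorted minimal generating list, and its sons inside $G([S])$ are produced as in the proposition on $G([S])$: replace a generator $n_k$ by $n_k+n_1$ whenever $n_k+n_1>n_e$ and $n_k+n_1$ is not generated by the remaining generators.

```python
def frobenius(gens):
    """F(<gens>): largest gap, by listing elements until min(gens)
    consecutive ones appear."""
    m, limit = min(gens), 2 * max(gens)
    while True:
        member = [False] * (limit + 1)
        member[0] = True
        for i in range(limit + 1):
            if member[i]:
                for g in gens:
                    if i + g <= limit:
                        member[i + g] = True
        run, F = 0, -1
        for i, b in enumerate(member):
            run = run + 1 if b else 0
            if not b:
                F = i
            if run == m:
                return F
        limit *= 2

def in_monoid(gens, n):
    """Membership of n in the monoid generated by gens."""
    mem = [False] * (n + 1)
    mem[0] = True
    for i in range(n + 1):
        if mem[i]:
            for g in gens:
                if i + g <= n:
                    mem[i + g] = True
    return mem[n]

def class_sons(gens):
    """Sons of <gens> in the tree G([S]) of its equivalence class."""
    n1, ne = gens[0], gens[-1]
    out = []
    for nk in gens[1:]:
        others = [g for g in gens if g != nk]
        c = nk + n1
        if c > ne and not in_monoid(others, c):
            out.append(sorted(others + [c]))
    return out

def same_frobenius_in_class(gens):
    """Algorithm 3: members of [S] whose Frobenius number is F(S)."""
    target = frobenius(gens)
    A, B = [gens], [gens]
    while True:
        C = [H for S in B for H in class_sons(S) if frobenius(H) == target]
        if not C:
            return A
        A, B = A + C, C
```

Here `same_frobenius_in_class([6, 7, 8, 9, 11])` returns the three semigroups of the example above.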
\end{document}
"id": "1712.05220.tex",
"language_detection_score": 0.5412325263023376,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\def\vrule width0pt height0pt depth12pt{\vrule width0pt height0pt depth12pt} \def{\mathbb C}{{\mathbb C}} \def{\mathbb N}{{\mathbb N}} \def{\mathbb Z}{{\mathbb Z}} \def{\mathbb R}{{\mathbb R}} \def{\mathbb K}{{\mathbb K}} \def{\mathbb T}{{\mathbb T}} \def{\mathbb D}{{\mathbb D}} \def{\mathbb P}{{\mathbb P}} \def{\mathbb G}{{\mathbb G}} \def{\cal F}{{\cal F}} \def{\cal U}{{\cal U}} \def{\cal M}{{\cal M}} \def{\cal L}{{\cal L}} \def{\cal H}{{\cal H}} \def{\cal C}{{\cal C}} \def{\cal E}{{\cal E}} \def{\cal R}{{\cal R}} \def{\cal P}{{\cal P}} \def\varepsilon{\varepsilon} \def\varkappa{\varkappa} \def\varphi{\varphi} \def\leqslant{\leqslant} \def\geqslant{\geqslant} \def\text{\tt Re}\,{\text{\tt Re}\,} \def\mathop{\hbox{$\overline{\hbox{\rm lim}}$}}\limits{\mathop{\hbox{$\overline{\hbox{\rm lim}}$}}\limits} \def\mathop{\hbox{$\underline{\hbox{\rm lim}}$}}\limits{\mathop{\hbox{$\underline{\hbox{\rm lim}}$}}\limits} \def\hbox{\tt supp}\,{\hbox{\tt supp}\,} \def\hbox{\tt dim}\,{\hbox{\tt dim}\,} \def\hbox{\tt ker}\,{\hbox{\tt ker}\,} \def\hbox{\tt span}\,{\hbox{\tt span}\,} \def\hbox{\tt Re}\,{\hbox{\tt Re}\,} \def\ssub#1#2{#1_{{}_{{\scriptstyle #2}}}} \font\Goth=eufm10 scaled 1200 \def\hbox{{\Goth U}}{\hbox{{\Goth U}}} \def\hbox{{\Goth H}}{\hbox{{\Goth H}}} \def\bin#1#2{\left({{#1}\atop {#2}}\right)} \def\widetilde w{\widetilde w}
\title{Remarks on common hypercyclic vectors}
\author{Stanislav Shkarin}
\date{}
\maketitle
\begin{abstract} We treat the question of existence of common hypercyclic vectors for families of continuous linear operators. It is shown that for any continuous linear operator $T$ on a complex Fr\'echet space $X$ and a set $\Lambda\subseteq {\mathbb R}_+\times{\mathbb C}$ which is not of zero three-dimensional Lebesgue measure, the family $\{aT+bI:(a,b)\in\Lambda\}$ has no common hypercyclic vectors. This allows us to answer negatively questions raised by Godefroy and Shapiro and by Aron. We also prove a sufficient condition for a family of scalar multiples of a given operator on a complex Fr\'echet space to have a common hypercyclic vector. It allows us to show that if
${\mathbb D}=\{z\in{\mathbb C}:|z|<1\}$ and $\varphi\in {\cal H}^\infty({\mathbb D})$ is non-constant, then the family $\{zM_\varphi^\star:b^{-1}<|z|<a^{-1}\}$ has a common hypercyclic vector, where $M_\varphi:{\cal H}^2({\mathbb D})\to {\cal H}^2({\mathbb D})$, $M_\varphi f=\varphi f$, $a=\inf\{|\varphi(z)|:z\in{\mathbb D}\}$ and
$b=\sup\{|\varphi(z)|:z\in{\mathbb D}\}$, providing an affirmative answer to a question by Bayart and Grivaux. Finally, extending a result of Costakis and Sambarino, we prove that the family $\{aT_b:a,b\in{\mathbb C}\setminus\{0\}\}$ has a common hypercyclic vector, where $T_bf(z)=f(z-b)$ acts on the Fr\'echet space ${\cal H}({\mathbb C})$ of entire functions on one complex variable. \end{abstract}
\small \noindent{\bf MSC:} \ \ 47A16, 37A25
\noindent{\bf Keywords:} \ \ Hypercyclic operators, hypercyclic vectors \normalsize
\section{Introduction \label{s1}}\rm
All vector spaces in this article are assumed to be over ${\mathbb K}$, which is either the field ${\mathbb C}$ of complex numbers or the field ${\mathbb R}$ of real numbers. Throughout this paper all topological spaces and topological vector spaces {\bf are assumed to be Hausdorff}. As usual, ${\mathbb Z}_+$ is the set of non-negative integers, ${\mathbb R}_+$ is the set of non-negative real numbers, ${\mathbb N}$ is the set of positive integers,
${\mathbb K}^\star={\mathbb K}\setminus\{0\}$, ${\mathbb D}=\{z\in{\mathbb C}:|z|<1\}$ and
${\mathbb T}=\{z\in{\mathbb C}:|z|=1\}$. By a {\it compact interval} of the real line we mean a set of the shape $[a,b]$ with $-\infty<a<b<\infty$. That is, a singleton is {\bf not} considered to be an interval. For topological vector spaces $X$ and $Y$, $L(X,Y)$ stands for the space of continuous linear operators from $X$ to $Y$. We write $L(X)$ instead of $L(X,X)$ and $X^*$ instead of $L(X,{\mathbb K})$. For $T\in L(X,Y)$, the dual operator $T^*:Y^*\to X^*$ acts according to the formula $T^*f(x)=f(Tx)$. Recall \cite{sch} that an ${\cal F}$-space is a complete metrizable topological vector space and a Fr\'echet space is a locally convex ${\cal F}$-space. For a subset $A$ of a vector space $X$, symbol $\hbox{\tt span}\,(A)$ stands for the linear span of $A$.
\begin{definition}\label{def1}\rm Let $X$ and $Y$ be topological spaces and ${\cal F}=\{T_a:a\in A\}$ be a family of continuous maps from $X$ to $Y$. An element $x\in X$ is called {\it universal} for ${\cal F}$ if the orbit $\{T_ax:a\in A\}$ is dense in $Y$ and ${\cal F}$ is said to be {\it universal} if it has a universal element. We denote the set of universal elements for ${\cal F}$ by the symbol $\hbox{{\Goth U}}({\cal F})$. A continuous linear operator $T$ acting on a topological vector space $X$ is called {\it hypercyclic} if the family of its powers $\{T^n:n\in {\mathbb Z}_+\}$ is universal. Corresponding universal elements are called {\it hypercyclic vectors} for $T$. The set of hypercyclic vectors for $T$ is denoted by $\hbox{{\Goth H}}(T)$. That is, $\hbox{{\Goth H}}(T)=\hbox{{\Goth U}}(\{T^n:n\in{\mathbb Z}_+\})$. If $\{T_a:a\in A\}$ is a family of continuous linear operators on topological vector space $X$, we denote $$ \hbox{{\Goth H}}\{T_a:a\in A\}=\bigcap_{a\in A}\hbox{{\Goth H}}(T_a). $$ That is, $\hbox{{\Goth H}}\{T_a:a\in A\}$ consists of all vectors $x\in X$ that are hypercyclic for each $T_a$, $a\in A$. \end{definition}
Recall that a topological space $X$ is called {\it Baire} if the intersection of any countable family of dense open subsets of $X$ is dense. Hypercyclic operators and universal families have been intensely studied during the last few decades; see the surveys \cite{ge1,ge2} and references therein. It is well known \cite{ge1} that the set of hypercyclic vectors of a hypercyclic operator on a separable metrizable Baire topological vector space is a dense $G_\delta$-set. It immediately follows that any countable family of hypercyclic operators on such a space has a dense $G_\delta$-set of common hypercyclic vectors (=hypercyclic for each member of the family). We are interested in the existence of common hypercyclic vectors for uncountable families of continuous linear operators. The first results in this direction were obtained by Abakumov and Gordon \cite{ag} and L\'eon--Saavedra and M\"uller \cite{muller}.
\begin{thmAG}Let $T$ be the backward shift on $\ell_2$. That is, $T\in L(\ell_2)$, $Te_0=0$ and $Te_n=e_{n-1}$ for $n\in{\mathbb N}$, where
$\{e_n\}_{n\in{\mathbb Z}_+}$ is the standard orthonormal basis of $\ell_2$. Then $\hbox{{\Goth H}}\{aT:a\in{\mathbb K},\ |a|>1\}$ is a dense $G_\delta$-set. \end{thmAG}
The following result is of completely different flavor. It is proven in \cite{muller} for continuous linear operators on Banach spaces although the proof can be easily adapted \cite{sh1} for continuous linear operators acting on arbitrary topological vector spaces.
\begin{thmLM} Let $X$ be a complex topological vector space and $T\in L(X)$. Then $\hbox{{\Goth U}}({\cal F})=\hbox{{\Goth H}}(zT)=\hbox{{\Goth H}}(T)$ for any $z\in{\mathbb T}$, where ${\cal F}=\{wT^n:w\in{\mathbb T},\ n\in{\mathbb Z}_+\}$. In particular, $\hbox{{\Goth H}}\{zT:z\in{\mathbb T}\}=\hbox{{\Goth H}}(T)$. \end{thmLM}
It follows that the family $\{zT:z\in{\mathbb T}\}$ has a common hypercyclic vector, whenever $T$ is a hypercyclic operator. A result similar to the above one was recently obtained by Conejero, M\"uller and Peris \cite{semi} for operators acting on separable ${\cal F}$-spaces (see \cite{sh1} for a proof in a more general setting). Recall that a family $\{T_t\}_{t\in{\mathbb R}_+}$ of continuous linear operators on a topological vector space is called an {\it operator semigroup} if $T_0=I$ and $T_{t+s}=T_tT_s$ for any $t,s\in{\mathbb R}_+$.
\begin{thmCMP} Let $X$ be a topological vector space and $\{T_t\}_{t\in{\mathbb R}_+}$ be an operator semigroup on $X$. Assume also that the map $(t,x)\mapsto T_tx$ from ${\mathbb R}_+\times X$ to $X$ is continuous. Then $\hbox{{\Goth H}}(T_t)=\hbox{{\Goth U}}({\cal F})$ for any $t>0$, where ${\cal F}=\{T_s:s>0\}$. In particular, $\hbox{{\Goth H}}\{T_s:s>0\}=\hbox{{\Goth H}}(T_t)$ for any $t>0$. \end{thmCMP}
It follows that if $\{T_t\}_{t\in{\mathbb R}_+}$ is an operator semigroup such that the map $(t,x)\mapsto T_tx$ is continuous and there exists $t>0$ for which $T_t$ is hypercyclic, then the family $\{T_s:s>0\}$ has a common hypercyclic vector. Bayart \cite{bay} provided families of composition operators on the space of holomorphic functions on ${\mathbb D}$, which have common hypercyclic vectors. Costakis and Sambarino \cite{cs}, Bayart and Matheron \cite{bm}, Chan and Sanders \cite{chs} and Gallardo-Guti\'errez and Partington \cite{gp} proved certain sufficient conditions for a set of families of continuous linear operators to have a common universal vector. In all the mentioned papers the criteria were applied to specific sets of families. For instance, Costakis and Sambarino \cite{cs} proved the following theorem.
\begin{thmCS} Let ${\cal H}({\mathbb C})$ be the complex Fr\'echet space of entire functions of one variable, $D\in L({\cal H}({\mathbb C}))$ be the differentiation operator $Df=f'$ and for each $a\in{\mathbb C}$, $T_a\in L({\cal H}({\mathbb C}))$ be the translation operator $T_af(z)=f(z-a)$. Then $\hbox{{\Goth H}}\{T_a:a\in{\mathbb C}^\star\}$, $\hbox{{\Goth H}}\{aT_1:a\in{\mathbb C}^\star\}$ and $\hbox{{\Goth H}}\{aD:a\in{\mathbb C}^\star\}$ are dense $G_\delta$-sets. \end{thmCS}
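By Theorem~LM, each of the three families in Theorem~CS is effectively labeled by a single real parameter. For instance, writing $a=rw$ with $r=|a|>0$ and $w\in{\mathbb T}$, Theorem~LM gives $\hbox{{\Goth H}}(aD)=\hbox{{\Goth H}}(w(rD))=\hbox{{\Goth H}}(rD)$, and therefore $$ \hbox{{\Goth H}}\{aD:a\in{\mathbb C}^\star\}=\bigcap_{r>0}\bigcap_{w\in{\mathbb T}}\hbox{{\Goth H}}(w(rD))=\bigcap_{r>0}\hbox{{\Goth H}}(rD), $$ and similarly for the family $\{aT_1:a\in{\mathbb C}^\star\}$.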
The criteria by Bayart and Matheron were applied to various families of operators including families of weighted translations on $L^p({\mathbb R})$, composition operators on Hardy spaces ${\cal H}^p({\mathbb D})$ and backward weighted shifts on $\ell_p$. We would like to mention just one example of the application of the criterion from \cite{bm}, which is related to our results.
\begin{exBM} As in Theorem~{\rm CS}, let $T_a$ be translation operators on ${\cal H}({\mathbb C})$. For each $s\in{\mathbb R}_+$ and $z\in{\mathbb T}$, consider the family ${\cal F}_{s,z}=\{n^sT_{nz}:n\in{\mathbb Z}_+\}$. Then $$ \smash{\bigcap_{(s,z)\in{\mathbb R}_+\times {\mathbb T}} \hbox{{\Goth U}}({\cal F}_{s,z})\ \ \ \text{is a dense $G_\delta$-subset of ${\cal H}({\mathbb C})$.}}\vrule width0pt height0pt depth12pt $$ \end{exBM}
Chan and Sanders \cite{chs} found common universal elements of certain sets of families of backward weighted shifts on $\ell_2$. Gallardo-Guti\'errez and Partington \cite{gp} proved a modification of the Costakis--Sambarino criterion and applied it to obtain common hypercyclic vectors for families of adjoint multipliers and composition operators on Hardy spaces. Finally, we would like to mention the following application by Costakis and Mavroudis \cite{cm} of the Bayart--Matheron criterion.
\begin{thmCM}Let $D$ be the differentiation operator on ${\cal H}({\mathbb C})$ and $p$ be a non-constant polynomial. Then $\hbox{{\Goth H}}\{ap(D):a\in{\mathbb C}^\star\}$ is a dense $G_\delta$-set. \end{thmCM}
Although most of the mentioned criteria look quite general, they are essentially inapplicable to finding common hypercyclic vectors of families that are not smoothly labeled by {\bf one} real parameter. Note that although the families in Theorems~AG, CS and CM are, formally speaking, labeled by a complex parameter $a$, Theorem~LM allows one to reduce them to families labeled by one real parameter. Example~BM is, of course, genuinely two-parametric, but it does not concern a common hypercyclic vector. On the other hand, one can artificially produce huge families of operators with a common hypercyclic vector: for example, take all operators for which a given vector is hypercyclic. The following result provides a common hypercyclic vector for a two-parametric family of operators. It strengthens the first part of Theorem~CS and, in a sense, improves upon Example~BM.
\begin{theorem}\label{t1}Let $T_a$ for $a\in{\mathbb C}$ be the translation operator $T_af(z)=f(z-a)$ acting on the complex Fr\'echet space ${\cal H}({\mathbb C})$ of entire functions of one complex variable. Then $\hbox{{\Goth H}}\{bT_a:a,b\in{\mathbb C}^\star\}$ is a dense $G_\delta$-set. \end{theorem}
A common hypercyclic vector from the above theorem is even more monstrous than the holomorphic monsters provided by Theorem~CS. Godefroy and Shapiro \cite{gosh} considered adjoint multiplication operators on function Hilbert spaces. Recall that if $U$ is a connected open subset of ${\mathbb C}^m$, then a {\it function Hilbert space} ${\cal H}$ on $U$ is a Hilbert space consisting of functions $f:U\to {\mathbb C}$ holomorphic on $U$ such that for any $z\in U$ the evaluation functional $\chi_z:{\cal H}\to{\mathbb C}$, $\chi_z(f)=f(z)$ is continuous. A {\it multiplier} for ${\cal H}$ is a function $\varphi:U\to{\mathbb C}$ such that $\varphi f\in {\cal H}$ for each $f\in {\cal H}$. It is well-known \cite{gosh} that any multiplier is bounded and holomorphic. Each multiplier gives rise to the multiplication operator $M_\varphi\in L({\cal H})$, $M_\varphi f=\varphi f$ (continuity of $M_\varphi$ follows from the Banach closed graph theorem). Its Hilbert space adjoint $M_\varphi^\star$ is called an {\it adjoint multiplication operator}. Godefroy and Shapiro proved that there is $f\in {\cal H}$, which is cyclic for $M_\varphi^\star$ for any non-constant multiplier $\varphi$ for ${\cal H}$ and demonstrated that if $\varphi:U\to{\mathbb C}$ is a non-constant multiplier for ${\cal H}$ and $\varphi(U)\cap {\mathbb T}\neq\varnothing$, then $M_\varphi^\star$ is hypercyclic, see also the related paper by Bourdon and Shapiro \cite{bosh}. Godefroy and Shapiro also raised the following question \cite[p.~263]{gosh}.
\begin{qGS} Let ${\cal H}$ be a Hilbert function space on a connected open subset $U$ of ${\mathbb C}^m$. Does the family of all hypercyclic adjoint multiplications on ${\cal H}$ have a common hypercyclic vector? \end{qGS}
Recall that any $T\in L({\cal H}({\mathbb C}))$ which commutes with the differentiation operator $D$ and is not a scalar multiple of the identity is hypercyclic. The following question was raised by Richard Aron.
\begin{qA} Let ${\cal D}$ be the family of all continuous linear operators on ${\cal H}({\mathbb C})$, which are not scalar multiples of the identity and which commute with the differentiation operator $D$. Is it true that there is a common hypercyclic vector for all operators from the family $\cal D$? \end{qA}
The next result allows us to answer both of the above questions in the negative.
\begin{theorem}\label{t2} Let $X$ be a complex topological vector space such that $X^*\neq \{0\}$, $T\in L(X)$ and $\Lambda$ be a subset of ${\mathbb R}_+\times {\mathbb C}$. Assume also that the family $\{aT+bI:(a,b)\in\Lambda\}$ has a common hypercyclic vector. Then the set $\Lambda$ has zero three-dimensional Lebesgue measure. \end{theorem}
\begin{corollary}\label{c1} The family $\{aD+bI:a>0,\ b\in{\mathbb C}\}$ of continuous linear operators on ${\cal H}({\mathbb C})$ does not have a common hypercyclic vector. \end{corollary}
\begin{corollary}\label{c2} Let ${\cal H}$ be a Hilbert function space on a connected open subset $U$ of ${\mathbb C}^m$ and $\varphi$ be a non-constant multiplier for ${\cal H}$. Then the family $\{M_{\overline{b}+a\varphi}^\star:a>0,\ b\in{\mathbb C},\ (\overline{b}+a\varphi)(U)\cap{\mathbb T}\neq\varnothing\}$ of hypercyclic operators does not have a common hypercyclic vector. \end{corollary}
Corollaries~\ref{c1} and~\ref{c2} follow from Theorem~\ref{t2} because $M_{\overline{b}+a\varphi}^\star=aM_\varphi^\star+bI$ and the sets of pairs $(a,b)$ involved in the definition of the families in Corollaries~\ref{c1} and~\ref{c2} are non-empty open subsets of ${\mathbb R}_+\times{\mathbb C}$ and therefore have non-zero 3-dimensional Lebesgue measure. In fact, Theorem~\ref{t2} shows that even relatively small subfamilies of the families from Questions~GS and~A fail to have common hypercyclic vectors. As usual, ${\cal H}^2({\mathbb D})$ is the Hardy space of the unit disk. It is well-known that ${\cal H}^2({\mathbb D})$ is a function Hilbert space on ${\mathbb D}$ and the set of multipliers for ${\cal H}^2({\mathbb D})$ is the space ${\cal H}^\infty({\mathbb D})$ of bounded holomorphic functions $f:{\mathbb D}\to{\mathbb C}$. Let $\varphi\in{\cal H}^\infty({\mathbb D})$ be non-constant. Using the mentioned criterion by Godefroy and Shapiro for hypercyclicity of adjoint multiplications together with the fact that a contraction or its inverse can not be hypercyclic, we see that
$zM_\varphi^\star=M_{\overline{z}\varphi}^\star$ is hypercyclic if and only if $b^{-1}<|z|<a^{-1}$, where $a=\inf\limits_{z\in{\mathbb D}}|\varphi(z)|$ and $b=\sup\limits_{z\in {\mathbb D}}|\varphi(z)|$. Probably expecting the answer to Question~GS to be negative, Bayart and Grivaux \cite{bagr} raised the following question.
\begin{qBG} Let $\varphi\in {\cal H}^\infty({\mathbb D})$ be non-constant, $a=\inf\limits_{z\in
{\mathbb D}}|\varphi(z)|$ and $b=\sup\limits_{z\in {\mathbb D}}|\varphi(z)|$. Is it true that the family $\{zM_\varphi^\star:b^{-1}<|z|<a^{-1}\}$ has common hypercyclic vectors? \end{qBG}
We prove a sufficient condition on a family of scalar multiples of a given operator to have a common hypercyclic vector and use it to answer Question~BG affirmatively. It is worth noting that Gallardo-Guti\'errez and Partington \cite{gp} found a partial affirmative answer to the above question.
\begin{theorem}\label{t3a} Let $X$ be a separable complex ${\cal F}$-space, $T\in L(X)$ and $0\leqslant a<b\leqslant\infty$. Assume also that there is a map $(k,c)\mapsto F_{k,c}$ sending a pair $(k,c)\in{\mathbb N}\times (a,b)$ to a subset $F_{k,c}$ of $X$ satisfying the following properties: \begin{itemize}\itemsep=-2pt \item[\rm(\ref{t3a}.1)]$F_{k,c}\subseteq \bigcup\limits_{w\in{\mathbb T}}\hbox{\tt ker}\,(T^k-wc^kI)$ for each $(k,c)\in{\mathbb N}\times(a,b);$ \item[\rm(\ref{t3a}.2)]$\{c\in(a,b):F_{k,c}\cap V\neq\varnothing\}$ is open in $(a,b)$ for any open subset $V$ of $X$ and $k\in{\mathbb N};$ \item[\rm(\ref{t3a}.3)]$F_c=\bigcup\limits_{k=1}^\infty F_{k,c}$ is dense in $X$ for any $c\in(a,b);$ \item[\rm(\ref{t3a}.4)]For any $k_1,\dots,k_n\in{\mathbb N}$, there is $k\in{\mathbb N}$ such that $\smash{\bigcup\limits_{j=1}^n F_{k_j,c}\subseteq F_{k,c}}$ for each $c\in(a,b)$. \end{itemize}
Then $\hbox{{\Goth H}}\{zT:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-set. \end{theorem}
Note that (\ref{t3a}.1) is satisfied if $F_{k,c}\subseteq \hbox{\tt ker}\,(T^k-c^kI)$, which is the case in all the following applications of Theorem~\ref{t3a}. If $X$ is a complex locally convex topological vector space and $U$ is a non-empty open subset of ${\mathbb C}^m$, then we say that $f:U\to X$ is {\it holomorphic} if $f$ is continuous and for each $g\in X^*$, $g\circ f:U\to {\mathbb C}$ is holomorphic.
\begin{theorem}\label{t4} Let $m\in{\mathbb N}$, $X$ be a complex Fr\'echet space, $T\in L(X)$ and $U$ be a connected open subset of ${\mathbb C}^m$. Assume also that there exist holomorphic maps $f:U\to X$ and $\varphi:U\to{\mathbb C}$ such that $\varphi$ is non-constant, $Tf(z)=\varphi(z)f(z)$
for each $z\in U$ and $\hbox{\tt span}\,\{f(z):z\in U\}$ is dense in $X$. Denote $a=\inf\limits_{z\in U}|\varphi(z)|$ and $b=\sup\limits_{z\in U}|\varphi(z)|$. Then $\hbox{{\Goth H}}\{zT:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-set. \end{theorem}
\begin{corollary}\label{co1} Let $m\in{\mathbb N}$, $U$ be a connected non-empty open subset of ${\mathbb C}^m$, ${\cal H}$ be a function Hilbert space on $U$,
$\varphi$ be a non-constant multiplier for ${\cal H}$, $a=\inf\limits_{z\in U}|\varphi(z)|$ and $b=\sup\limits_{z\in U}|\varphi(z)|$. Then
$\hbox{{\Goth H}}\{zM_\varphi^\star:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-set. \end{corollary}
\begin{corollary}\label{co2} Let $T\in L({\cal H}({\mathbb C}))$ be such that $TD=DT$ and $T\neq cI$ for any $c\in{\mathbb C}$. Then $\hbox{{\Goth H}}\{zT:z\in{\mathbb C}^\star\}$ is a dense $G_\delta$-set. \end{corollary}
\begin{corollary}\label{co3} Let $X$ be a separable Fr\'echet space, $T\in L(X)$ and $0\leqslant a<b\leqslant\infty$. Assume also that for any $\alpha,\beta\in{\mathbb R}$ such that $a<\alpha<\beta<b$, there exist a dense subset $E$ of $X$ and a map $S:E\to E$ such that $TSx=x$, $\alpha^{-n}T^n x\to 0$ and $\beta^nS^nx\to 0$ for each
$x\in E$. Then $\hbox{{\Goth H}}\{zT:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-set. \end{corollary}
Note that Corollary~\ref{co1} gives an affirmative answer to Question~BG, Corollary~\ref{co2} contains Theorem~CM as a particular case, while Corollary~\ref{co3} may be considered as an analog of the Kitai Criterion. The above results on common hypercyclic vectors for scalar multiples of a given operator may lead to the impression that for $0<a<b<\infty$ and a continuous linear operator $T$ on a Fr\'echet space, hypercyclicity of $aT$ and $bT$ implies the existence of common hypercyclic vectors for the family $\{cT:a\leqslant c\leqslant b\}$. This impression is utterly false, as the next theorem shows. For a continuous linear operator $T$ on a topological vector space $X$, we denote $$ M_T=\{c>0:cT\ \ \text{is hypercyclic}\}. $$
\begin{theorem}\label{t5} {\rm I. }\ There exists $S\in L(\ell_2)$ such that $M_S=\{1,2\}$. {\rm II. }\ There exists $T\in L(\ell_2)$ such that $M_T$ is an open interval, but any $A\subset{\mathbb R}_+$ for which the family $\{cT:c\in A\}$ has common hypercyclic vectors is of zero Lebesgue measure. \end{theorem}
\section{Yet another general criterion}
\begin{lemma}\label{gc1} Let $A$ be a set and $X$, $Y$ and $\Omega$ be topological spaces such that $\Omega$ is compact. For each $a\in A$ let $(\omega,x)\mapsto F_{a,\omega}x$ be a continuous map from $\Omega\times X$ to $Y$. For any $\omega\in\Omega$ let ${\cal F}_\omega=\{F_{a,\omega}:a\in A\}$ treated as a family of continuous maps from $X$ to $Y$. Denote $\hbox{{\Goth U}}^*=\bigcap\limits_{\omega\in\Omega}\hbox{{\Goth U}}({\cal F}_\omega)$. Then \begin{equation}\label{uu2} \smash{G_V=\bigcap_{\omega\in\Omega}\bigcup_{a\in A}F_{a,\omega}^{-1}(V)\quad \text{is open in $X$ for any open subset $V$ of $Y$.}}\vrule width0pt height0pt depth12pt \end{equation} Moreover, for any base $\cal V$ of topology of $Y$, \begin{equation}\label{uu1} \smash{\hbox{{\Goth U}}^*=\bigcap_{V\in{\cal V}}G_V}.\vrule width0pt height0pt depth12pt \end{equation} In particular, $\hbox{{\Goth U}}^*$ is a $G_\delta$-set if $Y$ is second countable. \end{lemma}
\begin{proof} Let $x\in G_V$. Then for any $\omega\in\Omega$, there exists $a(\omega)\in A$ such that $F_{a(\omega),\omega}x\in V$. Continuity of the map $\omega\mapsto F_{a,\omega}x$ implies that for each $\omega\in\Omega$, $W_\omega=\{\alpha\in\Omega:F_{a(\omega),\alpha}x\in V\}$ is an open neighborhood of $\omega$ in $\Omega$. Since any Hausdorff compact space is regular, for any $\omega\in\Omega$, we can pick an open neighborhood $W'_\omega$ of $\omega$ in $\Omega$ such that $\overline{W'_\omega}\subseteq W_\omega$. Since $\{W'_\omega:\omega\in\Omega\}$ is an open covering of the compact space $\Omega$, there are $\omega_1,\dots,\omega_n\in\Omega$ such that $\Omega=\bigcup\limits_{j=1}^n W'_{\omega_j}$. Continuity of the map $(\alpha,z)\mapsto F_{a,\alpha}z$ and compactness of $\overline{W'_\omega}$ imply that for any $j\in\{1,\dots,n\}$, there is a neighborhood $U_j$ of $x$ in $X$ such that $F_{a(\omega_j),\alpha}z\in V$ for any $\alpha\in \overline{W'_{\omega_j}}$ and $z\in U_j$. Let $U=\bigcap\limits_{j=1}^n U_j$. Since $\Omega=\bigcup\limits_{j=1}^n W'_{\omega_j}$, for any $z\in U$ and $\omega\in\Omega$, there exists $j\in\{1,\dots,n\}$ such that $F_{a(\omega_j),\omega}z\in V$. Hence $U\subseteq G_V$. Thus any point of $G_V$ is interior and therefore $G_V$ is open. The equality (\ref{uu1}) follows immediately from the definition of $\hbox{{\Goth U}}^*$. \end{proof}
The main tool in the proof of Theorem~\ref{t1} is the following criterion. It is a simultaneous generalization of results by Chan and Sanders \cite[Theorem~2.1]{chs} and Grosse-Erdmann \cite[Theorem~1]{ge1}. The latter is exactly the next proposition in the case when $\Omega$ is a singleton.
\begin{proposition}\label{gc} Let $A$ be a set and $X,Y,\Omega$ be topological spaces such that $X$ is Baire, $Y$ is second countable and $\Omega$ is compact. For each $a\in A$, let $(\omega,x)\mapsto F_{a,\omega}x$ be a continuous map from $\Omega\times X$ to $Y$. Let ${\cal F}_\omega=\{F_{a,\omega}:a\in A\}$ for $\omega\in\Omega$ and $\hbox{{\Goth U}}^*=\bigcap\limits_{\omega\in\Omega}\hbox{{\Goth U}}({\cal F}_\omega)$. Then $\hbox{{\Goth U}}^*$ is a $G_\delta$-subset of $X$. Moreover, the following conditions are equivalent. \begin{itemize}\itemsep=-2pt \item[\rm(\ref{gc}.1)]$\hbox{{\Goth U}}^*$ is dense in $X.$ \item[\rm(\ref{gc}.2)]For any non-empty open set $U$ in $X$ and any non-empty open set $V$ in $Y$, there exists $x\in U$ such that $V\cap\{F_{a,\omega}x:a\in A\}\neq \varnothing$ for each $\omega\in\Omega$. \end{itemize} \end{proposition}
\begin{proof} Let $\cal V$ be a countable base of the topology of $Y$. By Lemma~\ref{gc1}, $\hbox{{\Goth U}}^*$ is a $G_\delta$-set. Assume that (\ref{gc}.2) is satisfied. For any $V\in{\cal V}$, condition (\ref{gc}.2) implies that $G_V$ defined by (\ref{uu2}) is dense in $X$. By Lemma~\ref{gc1}, each $G_V$ is a dense open subset of $X$. Since $X$ is Baire, (\ref{uu1}) implies that $\hbox{{\Goth U}}^*$ is a dense $G_\delta$-subset of $X$. Hence (\ref{gc}.2) implies (\ref{gc}.1). Next, assume that (\ref{gc}.1) is satisfied and $U$, $V$ are non-empty open subsets of $X$ and $Y$ respectively. Since $\hbox{{\Goth U}}^*$ is dense in $X$, there is $x\in \hbox{{\Goth U}}^*\cap U$. Let $\omega\in\Omega$. Since $x\in\hbox{{\Goth U}}({\cal F}_\omega)$, there is $a\in A$ such that $F_{a,\omega}x\in V$. Hence (\ref{gc}.2) is satisfied. \end{proof}
Using Proposition~\ref{gc} and the fact that in a Baire topological space the class of dense $G_\delta$ sets is closed under countable intersections, we immediately obtain the following corollary.
\begin{corollary}\label{gc2} Let $A$ be a set and $X,Y,\Omega$ be topological spaces such that $X$ is Baire, $Y$ is second countable and $\Omega$ is the union of its compact subsets $\Omega_n$ for $n\in{\mathbb N}$. For each $a\in A$, let $(\omega,x)\mapsto F_{a,\omega}x$ be a continuous map from $\Omega\times X$ to $Y$. Let ${\cal F}_\omega=\{F_{a,\omega}:a\in A\}$ for $\omega\in\Omega$ and $\smash{\hbox{{\Goth U}}^*=\bigcap\limits_{\omega\in\Omega}\hbox{{\Goth U}}({\cal F}_\omega)}$. Then $\hbox{{\Goth U}}^*$ is a $G_\delta$-subset of $X$. Moreover, the following conditions are equivalent. \begin{itemize}\itemsep=-2pt \item[\rm(\ref{gc2}.1)]$\hbox{{\Goth U}}^*$ is dense in $X.$ \item[\rm(\ref{gc2}.2)]For each $n\in{\mathbb N}$, any non-empty open set $U$ in $X$ and any non-empty open set $V$ in $Y$, there exists $x\in U$ such that $V\cap\{F_{a,\omega}x:a\in A\}\neq \varnothing$ for each $\omega\in\Omega_n$. \end{itemize} \end{corollary}
Recall that if $X$ is a topological vector space, $A$ is a set and $\{f_n\}_{n\in{\mathbb Z}_+}$ is a sequence of maps from $A$ to $X$, then we say that $f_n$ {\it uniformly converges to $0$} on $A$ if for any neighborhood $W$ of $0$ in $X$, there is $n\in{\mathbb Z}_+$ such that $f_k(a)\in W$ for any $a\in A$ and any $k\geqslant n$.
\begin{definition}\label{MO} \rm Let $X$ and $Y$ be topological vector spaces, $A$ be a set and $\Omega$ be a topological space. We use the symbol $$ {\cal L}_{\Omega,A}(X,Y) $$ to denote the set of maps $(\omega,a,n,x)\mapsto T_{\omega,a,n}x$ from $\Omega\times A\times{\mathbb Z}_+\times X$ to $Y$ such that $T_{\omega,a,n}\in L(X,Y)$ for each $(\omega,a,n)\in \Omega\times A\times{\mathbb Z}_+$ and the map $(\omega,x)\mapsto T_{\omega,a,n}x$ from $\Omega\times X$ to $Y$ is continuous for any $(a,n)\in A\times{\mathbb Z}_+$. If $T\in {\cal L}_{\Omega,A}(X,Y)$ is fixed, $\Lambda\subseteq {\mathbb Z}_+$, $u\in X$ and $U$ is a subset of $Y$, we denote \begin{equation}\label{MMM} M(u,\Lambda,U)=\{\omega\in\Omega: \text{$T_{\omega,a,n}u\in U$ for some $n\in\Lambda$ and $a\in A$}\}. \end{equation} \end{definition}
\begin{proposition}\label{gc3} Let $A$ be a set, $X$ be a Baire topological vector space, $Y$ be a separable metrizable topological vector space, $\Omega$ be a compact topological space and $T\in {\cal L}_{\Omega,A}(X,Y)$ be such that \begin{itemize}\itemsep=-2pt \item[\rm(\ref{gc3}.1)] $E=\{x\in X:T_{\omega,a,n}x\to 0\ \text{as $n\to\infty$ uniformly on $\Omega\times A$}\}$ is dense in $X;$ \item[\rm(\ref{gc3}.2)] for any non-empty open subset $U$ of $Y$, there exist $m\in{\mathbb N}$ and compact subsets $\Omega_1,\dots,\Omega_m$ of $\Omega$ such that $\Omega=\bigcup\limits_{j=1}^m\Omega_j$ and for any $j\in\{1,\dots,m\}$, $l\in{\mathbb Z}_+$ and a neighborhood $W$ of $0$ in $X$, there are a finite set $\Lambda\subset{\mathbb Z}_+$ and $u\in W$ for which $\min\Lambda\geqslant l$ and $\Omega_j\subseteq M(u,\Lambda,U)$. \end{itemize} Then $\hbox{{\Goth U}}^*=\bigcap\limits_{\omega\in\Omega}\hbox{{\Goth U}}({\cal F}_\omega)$ is a dense $G_\delta$-subset of $X$, where ${\cal F}_\omega=\{T_{\omega,a,n}:a\in A,\ n\in{\mathbb Z}_+\}$. \end{proposition}
\begin{proof} Let $U_0$ be a non-empty open subset of $X$ and $U$ be a non-empty open subset of $Y$. Pick $y_0\in U$ and a neighborhood $W$ of zero in $Y$ such that $y_0+W+W\subseteq U$. Then $V=y_0+W$ is a non-empty open subset of $Y$ and $V+W\subseteq U$. According to (\ref{gc3}.2), there exist compact subsets $\Omega_1,\dots,\Omega_m$ of $\Omega$ such that $\Omega=\bigcup\limits_{j=1}^m\Omega_j$ and \begin{equation}\label{gc33} \begin{array}{l} \text{for any $j\in\{1,\dots,m\}$, $l\in{\mathbb Z}_+$ and any neighborhood $W_1$ of $0$ in $X$, there are}\\ \text{a finite set $\Lambda\subset{\mathbb Z}_+$ and $u\in W_1$ such that $\min\Lambda\geqslant l$ and $\Omega_j\subseteq M(u,\Lambda,V)$.} \end{array} \end{equation} We shall construct inductively $u_0,\dots,u_{m}\in E\cap U_0$ and finite sets $\Lambda_1,\dots,\Lambda_{m}\subset {\mathbb Z}_+$ such that for $0\leqslant j\leqslant m$, \begin{equation}\label{conj} \text{$\Omega_p\subseteq M(u_j,\Lambda_p,U)$ for $1\leqslant p\leqslant j$.} \end{equation} By (\ref{gc3}.1), the linear space $E$ is dense in $X$. Hence we can pick $u_0\in U_0\cap E$, which will serve as the basis of induction. Assume now that $1\leqslant q\leqslant m$ and $u_0,\dots,u_{q-1}\in E\cap U_0$ and finite subsets $\Lambda_1,\dots,\Lambda_{q-1}$ of ${\mathbb Z}_+$ satisfying (\ref{conj}) with $0\leqslant j\leqslant q-1$ are already constructed. We shall construct $u_q\in E\cap U_0$ and a finite subset $\Lambda_q$ of ${\mathbb Z}_+$ satisfying (\ref{conj}) with $j=q$. Consider the set $$ G=\{u\in X:\Omega_p\subseteq M(u,\Lambda_p,U)\ \ \text{for}\ \ 1\leqslant p\leqslant q-1\}. $$ Since $\Omega_p$ are compact and $U$ is open, Lemma~\ref{gc1} implies that $G$ is open in $X$. According to (\ref{conj}) with $j=q-1$, $u_{q-1}\in G$. 
Since $u_{q-1}\in E$, there exists $l\in{\mathbb Z}_+$ such that \begin{equation}\label{sss} \text{$T_{\omega,a,n}u_{q-1}\in W$ for any $n\geqslant l$ and any $(\omega,a)\in\Omega\times A$.} \end{equation} Since $u_{q-1}\in G\cap U_0$, and $G\cap U_0$ is open in $X$, $W_1=(G\cap U_0)-u_{q-1}$ is a neighborhood of $0$ in $X$. According to (\ref{gc33}), there exists a finite subset $\Lambda_q$ of ${\mathbb Z}_+$ such that $$ \min\Lambda_q\geqslant l\ \ \ \text{and}\ \ \ G_1=\{u\in W_1:\Omega_q\subseteq M(u,\Lambda_q,V)\}\neq\varnothing. $$ By Lemma~\ref{gc1}, $G_1$ is open in $X$. Since $E$ is dense in $X$, we can pick $u\in G_1\cap E$. Denote $u_q=u_{q-1}+u$. We shall see that $u_q$ and $\Lambda_q$ satisfy (\ref{conj}) with $j=q$.
Since $u_{q-1},u\in E$ and $E$ is a linear space, we have $u_q\in E$. Since $u\in W_1=(G\cap U_0)-u_{q-1}$, we get $u_q\in G\cap U_0$. In particular, $u_q\in U_0\cap E$ and $u_q\in G$. By definition of $G$, $\Omega_p\subseteq M(u_q,\Lambda_p,U)$ for $1\leqslant p\leqslant q-1$. Since $u\in G_1$, for any $\omega\in \Omega_q$, there exist $n_\omega\in \Lambda_q$ and $a_\omega\in A$ such that $T_{\omega,a_\omega,n_\omega}u\in V$. Since $n_\omega\in \Lambda_q$ and $\min\Lambda_q\geqslant l$, we have $n_\omega\geqslant l$. According to (\ref{sss}), $T_{\omega,a_\omega,n_\omega}u_{q-1}\in W$. The equality $u_q=u_{q-1}+u$ and linearity of $T_{\omega,a_\omega,n_\omega}$ imply $T_{\omega,a_\omega,n_\omega}u_q\in V+W\subseteq U$. Since $\omega\in \Omega_q$ is arbitrary, $\Omega_q\subseteq M(u_q,\Lambda_q,U)$. This completes the proof of (\ref{conj}) for $j=q$ and the inductive construction of $u_0,\dots,u_{m}$ and $\Lambda_1,\dots,\Lambda_{m}$ satisfying (\ref{conj}).
Since $\Omega$ is the union of $\Omega_j$ with $1\leqslant j\leqslant m$, (\ref{conj}) for $j=m$ implies that $u_m\in U_0$ and $\Omega=M(u_m,{\mathbb Z}_+,U)$. That is, for any $\omega\in\Omega$ there are $a\in A$ and $n\in{\mathbb Z}_+$ such that $T_{\omega,a,n}u_m\in U$. Since $U_0$ and $U$ are arbitrary non-empty open subsets of $X$ and $Y$ respectively, condition (\ref{gc}.2) is satisfied. By Proposition~\ref{gc}, $\hbox{{\Goth U}}^*$ is a dense $G_\delta$-subset of $X$. \end{proof}
Since for any $\delta>0$, any compact interval of the real line is the union of finitely many intervals of length $\leqslant \delta$, we immediately obtain the following corollary.
\begin{corollary}\label{gc4} Let $A$ be a set, $X$ be a Baire topological vector space, $Y$ be a separable metrizable topological vector space, $\Omega$ be a compact interval of ${\mathbb R}$ and $T\in {\cal L}_{\Omega,A}(X,Y)$ be such that $(\ref{gc3}.1)$ is satisfied and \begin{itemize}\itemsep=-2pt \item[\rm(\ref{gc4}.2)] for any non-empty open subset $U$ of $Y$, there exists $\delta>0$ such that for any compact interval $J\subseteq\Omega$ of length $\leqslant\delta$, $l\in{\mathbb Z}_+$ and a neighborhood $W$ of $0$ in $X$, there exist a finite set $\Lambda\subset{\mathbb Z}_+$ and $u\in W$ for which $\min\Lambda\geqslant l$ and $J\subseteq M(u,\Lambda,U)$. \end{itemize} Then $\hbox{{\Goth U}}^*=\bigcap\limits_{\omega\in\Omega}\hbox{{\Goth U}}({\cal F}_\omega)$ is a dense $G_\delta$-subset of $X$, where ${\cal F}_\omega=\{T_{\omega,a,n}:a\in A,\ n\in{\mathbb Z}_+\}$. \end{corollary}
\section{Operator groups with the Runge property}
In this section we prove a statement more general than Theorem~\ref{t1}.
\begin{definition}\label{run} \rm Let $X$ be a locally convex topological vector space and $\{T_z\}_{z\in{\mathbb C}}$ be an operator group. That is, $T_z\in L(X)$ for each $z\in{\mathbb C}$, $T_0=I$ and $T_{z+w}=T_zT_w$ for any $z,w\in{\mathbb C}$. We say that $\{T_z\}_{z\in{\mathbb C}}$ has the {\it Runge property} if for any continuous seminorm $p$ on
$X$ there exists $c=c(p)>0$ such that for any finite set $S$ of complex numbers satisfying $|z-z'|\geqslant c$ for $z,z'\in S$, $z\neq z'$, any $\varepsilon>0$ and $\{x_z\}_{z\in S}\in X^S$, there is $x\in X$ such that $p(T_{z}x-x_z)<\varepsilon$ for each $z\in S$. \end{definition}
\begin{lemma}\label{rutr} For each $a\in {\mathbb C}$ let $T_a\in L({\cal H}({\mathbb C}))$ be the translation operator $T_af(z)=f(z-a)$. Then the group $\{T_a\}_{a\in{\mathbb C}}$ has the Runge property. \end{lemma}
\begin{proof} Let $p$ be a continuous seminorm on ${\cal H}({\mathbb C})$. Then there exists $a>0$ such that $p(f)\leqslant q(f)$ for each $f\in {\cal H}({\mathbb C})$, where $q(f)=a\max\limits_{|z|\leqslant a}|f(z)|$. Take any $c>2a$. We shall show that $c$ satisfies the condition from Definition~\ref{run}. Let $\varepsilon>0$, $S$ be a finite set of complex numbers such that $|z-z'|\geqslant c$ for $z,z'\in S$, $z\neq z'$
and $\{f_z\}_{z\in S}\in {\cal H}({\mathbb C})^S$. For each $z\in S$ consider the disk $D_z=\{w\in{\mathbb C}:|z+w|\leqslant a\}$ and let $D=\bigcup\limits_{z\in S}D_z$. Since $|z-z'|\geqslant c$ for $z,z'\in S$, $z\neq z'$, the closed disks $D_z$ are pairwise disjoint. It follows that ${\mathbb C}\setminus D$
is connected. By the classical Runge theorem, any function holomorphic in a neighborhood of the compact set $D$ can be approximated by a polynomial uniformly on $D$ with any prescribed accuracy. Thus there is a polynomial $f$ such that $\sup\limits_{w\in D_z}|f(w)-f_z(z+w)|<\varepsilon/a$ for any $z\in S$. Equivalently,
$\sup\limits_{|w|\leqslant a}|f(w-z)-f_z(w)|<\varepsilon/a$ for any $z\in S$. Using the definitions of $T_z$ and $q$, we obtain $p(T_zf-f_z)\leqslant q(T_zf-f_z)<\varepsilon$ for each $z\in S$. \end{proof}
It is also easy to show that the translation group satisfies the Runge property when acting on the Fr\'echet space $C({\mathbb C})$ of continuous functions $f:{\mathbb C}\to{\mathbb C}$ with the topology of uniform convergence on compact sets. Recall that an operator semigroup $\{T_t\}$ is called {\it strongly continuous} if the map $(t,x)\mapsto T_tx$ is separately continuous.
\begin{theorem}\label{t1a} Let $X$ be a separable Fr\'echet space and $\{T_z\}_{z\in{\mathbb C}}$ be a strongly continuous operator group on $X$ with the Runge property. Then the family $\{aT_b:a\in{\mathbb K}^\star,\ b\in{\mathbb C}^\star\}$ has a dense $G_\delta$-set of common hypercyclic vectors. \end{theorem}
According to Lemma~\ref{rutr}, Theorem~\ref{t1} is a particular case of Theorem~\ref{t1a}. The rest of this section is devoted to the proof of Theorem~\ref{t1a}. We need a couple of technical lemmas.
\begin{lemma}\label{tech} For each $\delta,c>0$, there is $R>0$ such that for any $n\in{\mathbb N}$, there exists a finite set
$S\subset{\mathbb C}$ such that $|z|\in{\mathbb N}$ and $nR+c\leqslant |z|\leqslant(n+1)R-c$ for any $z\in S$, $|z-z'|\geqslant c$ for any $z,z'\in S$, $z\neq z'$ and for each $w\in{\mathbb T}$, there exists $z\in S$ such that
$\bigl|w-\frac{z}{|z|}\bigr|<\delta/|z|$. \end{lemma}
\begin{proof} Without loss of generality, we may assume that $0<\delta<1$. Pick $m\in{\mathbb N}$ such that $2m\geqslant c$ and $h\in{\mathbb N}$ such that $h\geqslant (40\cdot m)/\delta$. We shall show that $R=hm$ satisfies the desired condition. Pick $n\in{\mathbb N}$ and consider $k=k(n)\in{\mathbb N}$ defined by the formula $k=\bigl[\frac{\pi (n+1)m}{2\delta n}\bigr]+1$, where $[t]$ is the integer part of $t\in{\mathbb R}$. For $1\leqslant j\leqslant k$ let $n_j=nR+2jm$. Clearly $n_j$ are natural numbers and $n_1=nR+2m\geqslant nR+c$. On the other hand, $n_k=nR+2mk\leqslant (n+1)R-2m$. Indeed, the last inequality is equivalent to $2(k+1)\leqslant h$, which is an easy consequence of the two inequalities $h\geqslant (40\cdot m)/\delta$ and $k+1\leqslant \frac{\pi (n+1)m}{2\delta n}+2\leqslant \frac{\pi m}{\delta}+2$. Thus, \begin{equation}\label{nk} nR+c\leqslant n_1\leqslant n_j\leqslant n_k\leqslant (n+1)R-2m\leqslant (n+1)R-c\ \ \text{for $1\leqslant j\leqslant k$}. \end{equation} Now we can define a finite set $S$ of complex numbers in the following way: \begin{equation}\label{SS} S=\{z_{j,l}:1\leqslant j\leqslant k,\ 0\leqslant l\leqslant 2nh-1\},\ \ \text{where}\ \ z_{j,l}=n_j\exp\Bigl(\frac{\pi i(lk+j)}{nhk}\Bigr) \end{equation}
and $\exp(z)$ stands for $e^z$. Clearly for each $z_{j,l}\in S$, we have $|z_{j,l}|=n_j\in{\mathbb N}$. Moreover, according to (\ref{nk}),
$nR+c\leqslant |z|\leqslant (n+1)R-c$ for any $z\in S$. Next, let $z,z'\in S$ and $z\neq z'$. Then $z=z_{j,l}$ and $z'=z_{p,q}$ for $1\leqslant j,p\leqslant k$, $0\leqslant l,q\leqslant 2nh-1$ and $(j,l)\neq (p,q)$. If $j\neq p$, then
$|z-z'|\geqslant ||z|-|z'||=|n_j-n_p|=2m|j-p|\geqslant 2m\geqslant c$. If $j=p$, then $l\neq q$ and $$
|z-z'|=n_j\Bigl|\exp\Bigl(\frac{\pi i l}{nh}\Bigr)-\exp\Bigl(\frac{\pi i q}{nh}\Bigr)\Bigr|\geqslant n_j\Bigl|\exp\Bigl(\frac{\pi i}{nh}\Bigr)-1\Bigr|=2n_j\sin\Bigl(\frac{\pi}{2nh}\Bigr). $$
The inequality $\sin x\geqslant \frac{2x}{\pi}$ for $0\leqslant x\leqslant \pi/2$, the inequality $n_j>nR$ and the equality $R=hm$ imply $|z-z'|\geqslant
\frac{4\pi n_j}{2\pi nh}=\frac{2n_j}{nh}>\frac{2nR}{nh}=2m\geqslant c$. Thus $|z-z'|\geqslant c$ for any $z,z'\in S$, $z\neq z'$. Finally, consider the set $\Sigma=\{z/|z|:z\in S\}$. Clearly \begin{equation*} \Sigma=\Bigl\{\exp\Bigl(\frac{\pi i(lk+j)}{nhk}\Bigr):{{1\leqslant j\leqslant k,\atop 0\leqslant l\leqslant 2nh-1}}\Bigr\}= \Bigl\{\exp\Bigl(\frac{\pi ij}{nhk}\Bigr):1\leqslant j\leqslant 2nhk\Bigr\}=\{z\in{\mathbb C}:z^{2nhk}=1\}. \end{equation*} It immediately follows that $$
\sup_{w\in{\mathbb T}}\min_{z\in\Sigma}|w-z|=\Bigl|1-\exp\Bigl(\frac{\pi i}{2nhk}\Bigr)\Bigr|=2\sin\Bigl(\frac{\pi}{4nhk}\Bigr)\leqslant \frac{\pi}{2nhk}=\frac{\pi m}{2nRk}. $$ Since $k>\frac{\pi (n+1)m}{2\delta n}$, we get
$\sup\limits_{w\in{\mathbb T}}\min\limits_{z\in\Sigma}|w-z|<\delta (n+1)^{-1}R^{-1}$. That is, for any $w\in{\mathbb T}$, there exists $z\in S$
such that $\bigl|w-\frac{z}{|z|}\bigr|<\frac{\delta}{R(n+1)}$. Since
$|z|<R(n+1)$, we obtain $\bigl|w-\frac{z}{|z|}\bigr|<\delta/|z|$, which completes the proof. \end{proof}
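As a numerical sanity check (not part of the proof), the construction of $S$ in the proof of Lemma~\ref{tech} can be instantiated directly; the parameter values below are hypothetical sample choices.

```python
import cmath
import math

# Sanity check of the set S constructed in the proof of Lemma "tech".
# delta and c are hypothetical sample parameters; m, h, R, k and S
# follow the formulas in the proof.
delta, c = 0.9, 2.0
m = max(1, math.ceil(c / 2))           # choose m in N with 2m >= c
h = math.ceil(40 * m / delta)          # choose h in N with h >= 40m/delta
R = h * m
n = 1                                  # any n in N works; n = 1 keeps S small
k = math.floor(math.pi * (n + 1) * m / (2 * delta * n)) + 1

# S = { n_j * exp(pi*i*(l*k + j)/(n*h*k)) : 1 <= j <= k, 0 <= l <= 2nh - 1 }
S = [(n * R + 2 * j * m) * cmath.exp(1j * math.pi * (l * k + j) / (n * h * k))
     for j in range(1, k + 1) for l in range(2 * n * h)]

# Claim 1: |z| is a natural number with nR + c <= |z| <= (n+1)R - c.
assert all(abs(abs(z) - round(abs(z))) < 1e-9 for z in S)
assert all(n * R + c - 1e-9 <= abs(z) <= (n + 1) * R - c + 1e-9 for z in S)

# Claim 2: distinct points of S are at distance at least c from each other.
assert all(abs(S[i] - S[j]) >= c - 1e-9
           for i in range(len(S)) for j in range(i + 1, len(S)))

# Claim 3: every w on the unit circle is delta/|z|-close to z/|z| for some z in S.
for t in range(500):
    w = cmath.exp(2j * math.pi * t / 500)
    assert any(abs(w - z / abs(z)) < delta / abs(z) for z in S)

print("all three properties of Lemma tech hold for", len(S), "points")
```

The check confirms, for these parameters, the three claimed properties of $S$: integer moduli in the prescribed annulus, pairwise separation by $c$, and the $\delta/|z|$-density of the directions $z/|z|$ in ${\mathbb T}$.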
\begin{lemma}\label{lll} Let $X$ be a locally convex topological vector space and $\{T_z\}_{z\in{\mathbb C}}$ be an operator group on $X$ such that the map $(u,h)\mapsto T_hu$ from $X\times{\mathbb C}$ to $X$ is continuous. Let also $x\in X$ and $p$ be a continuous seminorm on $X$. Then there exist a continuous seminorm $q$ on $X$ and $\delta>0$ such that $p\leqslant q$ and for any $a\in{\mathbb R}$, $w\in{\mathbb T}$, $n\in{\mathbb N}$ and $y\in X$ satisfying $q(x-e^{an}T_{wn}y)<1$, we have $p(x-e^{bn}T_{zn}y)<1$ whenever $b\in{\mathbb R}$ and $z\in{\mathbb T}$ are such that
$|a-b|<\delta/n$ and $|w-z|<\delta/n$. \end{lemma}
\begin{proof} Since the map $(u,h)\mapsto T_hu$ from $X\times{\mathbb C}$ to $X$ is continuous, there are $\theta>0$ and a continuous seminorm $q$ on $X$
such that $p(x-T_hx)\leqslant 1/4$ and $p(T_hu)\leqslant q(u)/4$ for any $u\in X$ whenever $|h|\leqslant \theta$. In particular, $p(u)\leqslant q(u)/4\leqslant q(u)$ for each $u\in X$. Pick $r\in(0,\theta)$ and assume that $a,b\in{\mathbb R}$, $w,z\in{\mathbb T}$, $n\in{\mathbb N}$ and $y\in X$ are such that
$q(x-e^{an}T_{wn}y)<1$, $|a-b|<r/n$ and $|w-z|<r/n$. Then $p(e^{an}T_{wn}y)\leqslant q(e^{an}T_{wn}y)\leqslant q(x)+1$. Since
$|a-b|<r/n$, we have $|e^{(b-a)n}-1|<e^r-1$. Hence \begin{equation}\label{ee1}
p(e^{bn}T_{wn}y-e^{an}T_{wn}y) =|e^{(b-a)n}-1|p(e^{an}T_{wn}y)\leqslant (e^r-1)(q(x)+1). \end{equation}
Since $|nw-nz|<r<\theta$ and $p(T_hu)\leqslant q(u)/4$ for any $u\in X$
whenever $|h|\leqslant \theta$, we have $$ p(T_{(z-w)n}x-e^{an}T_{zn}y)=p(T_{(z-w)n}(x-e^{an}T_{wn}y))\leqslant q(x-e^{an}T_{wn}y)/4<1/4. $$
Since $|(z-w)n|<r<\theta$, we get $p(x-T_{(z-w)n}x)\leqslant 1/4$. Using this inequality together with the last display and the triangle inequality, we obtain $p(x-e^{an}T_{zn}y)\leqslant 1/2$ and hence also $p(e^{an}T_{zn}y)\leqslant p(x)+1/2\leqslant q(x)+1$. These estimates together with the computation in (\ref{ee1}) (applied with $z$ in place of $w$) and the triangle inequality give $p(x-e^{bn}T_{zn}y)<(e^r-1)(q(x)+1)+1/2$. Hence any $\delta\in(0,\theta)$ satisfying $(e^\delta-1)(q(x)+1)<1/2$ also satisfies the desired condition. \end{proof}
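To summarize, the three estimates of the above proof combine via the triangle inequality into a single chain: for $\delta\in(0,\theta)$ with $(e^\delta-1)(q(x)+1)<1/2$,
$$
p(x-e^{bn}T_{zn}y)\leqslant p(x-T_{(z-w)n}x)+p(T_{(z-w)n}(x-e^{an}T_{wn}y))+|e^{(b-a)n}-1|\,p(e^{an}T_{zn}y)\leqslant \frac14+\frac14+(e^\delta-1)(q(x)+1)<1,
$$
where the first two terms are bounded as in the proof and $p(e^{an}T_{zn}y)\leqslant p(x)+1/2\leqslant q(x)+1$ follows from them.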
\subsection{Proof of Theorem~\ref{t1a}}
By Theorems~LM and~CMP, $\hbox{{\Goth H}}(bT_a)=\hbox{{\Goth H}}(b'T_{a'})$ if $|b|=|b'|$ and $a/a'\in{\mathbb R}_+$. Hence the set of common hypercyclic vectors of the family $\{aT_b:a\in{\mathbb K}^\star,\ b\in{\mathbb C}^\star\}$ coincides with the set $G$ of common hypercyclic vectors for the family $\{e^bT_a:(a,b)\in{\mathbb T}\times {\mathbb R}\}$. Thus it remains to show that $G$ is a dense $G_\delta$-subset of $X$. Fix $d>0$. According to Corollary~\ref{gc2}, it suffices to demonstrate that \begin{equation}\label{W} \begin{array}{l} \text{for any non-empty open subsets $U$ and $V$ of $X$, there is $y\in U$ such that}\\ \text{for any $a\in{\mathbb T}$ and $b\in[-d,d]$ there is $n\in{\mathbb N}$ for which $e^{bn}T_{an}y\in V$.} \end{array} \end{equation}
Pick a continuous seminorm $p$ on $X$ and $u,x\in X$ such that $\{y\in X:p(u-y)<1\}\subseteq U$ and $\{y\in X:p(x-y)<1\}\subseteq V$. By the uniform boundedness principle \cite{sch}, strong continuity of $\{T_z\}_{z\in{\mathbb C}}$ implies that the map $(z,v)\mapsto T_zv$ from ${\mathbb C}\times X$ to $X$ is continuous. By Lemma~\ref{lll}, there is a continuous seminorm $q$ on $X$ and $\delta>0$ such that $p(v)\leqslant q(v)$ for any $v\in X$ and \begin{equation}\label{L} \begin{array}{l} \text{for any $a,b\in{\mathbb R}$, $w,z\in{\mathbb T}$, $n\in{\mathbb N}$ and $y\in X$ satisfying $q(x-e^{an}T_{wn}y)<1$,} \\
\text{$|a-b|<\delta/n$ and $|w-z|<\delta/n$, we have $p(x-e^{bn}T_{zn}y)<1$.} \end{array} \end{equation} Since $\{T_z\}_{z\in{\mathbb C}}$ has the Runge property, there is $c>0$ such that \begin{equation}\label{J} \begin{array}{l}
\text{for any finite set $S\subset{\mathbb C}$ with $|z-z'|\geqslant c$ for $z,z'\in S$, $z\neq z'$, any $\varepsilon>0$ and any} \\ \text{$\{x_z\}_{z\in S}\in X^S$, there exists $y\in X$ such that $q(T_{z}y-x_z)<\varepsilon$ for any $z\in S$.} \end{array} \end{equation}
Let $R>0$ be the number provided by Lemma~\ref{tech} for the just chosen $\delta$ and $c$. By Lemma~\ref{tech}, for each $n\in{\mathbb N}$
there is a finite set $S_n\subset{\mathbb C}$ such that $|z|\in{\mathbb N}$ and
$nR+c\leqslant|z|\leqslant(n+1)R-c$ for any $z\in S_n$, $|z-z'|\geqslant c$ for any $z,z'\in S_n$, $z\neq z'$ and for each $w\in{\mathbb T}$, there is $z\in S_n$
such that $\bigl|w-\frac{z}{|z|}\bigr|<\frac{\delta}{|z|}$. Since $\sum\limits_{n=1}^\infty n^{-1}=\infty$, we can pick $d_1,\dots,d_k\in[-d,d]$ for which \begin{equation}\label{cover} \smash{[-d,d]\subseteq \bigcup_{n=1}^k \Bigl(d_n-\frac{\delta R^{-1}}{n+1},d_n+\frac{\delta R^{-1}}{n+1}\Bigr).} \end{equation}
Let $S=\bigcup\limits_{n=1}^kS_n$ and $\Lambda=S\cup\{0\}$. It is straightforward to see that $\Lambda$ is a finite set, $|z|\in{\mathbb Z}_+$
for any $z\in\Lambda$ and $|z-u|\geqslant c$ for any $z,u\in \Lambda$,
$z\neq u$. Let $N=\max\{|z|:z\in\Lambda\}$ and $\varepsilon=e^{-dN}$. By (\ref{J}), there is $y\in X$ such that $q(u-y)<\varepsilon$ and
$q(T_zy-e^{-d_n|z|}x)<\varepsilon$ for each $n\in\{1,\dots,k\}$ and $z\in S_n$. Then $p(u-y)\leqslant q(u-y)<\varepsilon<1$ and therefore $y\in U$. By definition of
$\varepsilon$, $q(x-e^{d_n|z|}T_zy)<1$ for each $n\in\{1,\dots,k\}$ and $z\in S_n$. Let now $a\in{\mathbb T}$ and $b\in [-d,d]$. According to (\ref{cover}), there is
$n\in\{1,\dots,k\}$ such that $|b-d_n|<\frac{\delta R^{-1}}{n+1}$. By the mentioned property of the set $S_n$, we can choose $z\in S_n$
such that $\bigl|a-\frac{z}{|z|}\bigr|<\frac{\delta}{|z|}$. Since
$|z|<R(n+1)$, we have $|b-d_n|<\frac{\delta}{|z|}$. By (\ref{L}),
$p(x-e^{b|z|}T_{a|z|}y)<1$. Hence $e^{b|z|}T_{a|z|}y\in V$, which completes the proof of (\ref{W}) and that of Theorem~\ref{t1a}.
\section{Scalar multiples of a fixed operator}
In this section we shall prove Theorems~\ref{t3a} and~\ref{t4} as well as Corollaries~\ref{co1}, \ref{co2} and~\ref{co3}. Recall that a subset $A$ of a vector space is called {\it balanced} if $zx\in A$
for any $x\in A$ and $z\in{\mathbb K}$ satisfying $|z|\leqslant 1$. It is well-known that any topological vector space has a base of open neighborhoods of zero consisting of balanced sets. For two subsets $A,B$ of a vector space $X$ we say that $A$ {\it absorbs} $B$ if there exists $c>0$ such that $B\subseteq zA$ for any $z\in{\mathbb K}$
satisfying $|z|\geqslant c$. Obviously, if $A$ is balanced, then $A$ absorbs $B$ if and only if there is $c>0$ for which $B\subseteq cA$.
\begin{lemma}\label{sm1} Let $X$ be a topological vector space and $U$ be a non-empty open subset of $X$. Then there exists a non-empty open subset $V$ of $X$ and a balanced neighborhood $W$ of zero in $X$ such that $V+W\subseteq U$ and $W$ absorbs $V$. \end{lemma}
\begin{proof} Pick $u\in U$ and a balanced neighborhood $W_0$ of zero in $X$ such that $u+W_0+W_0+W_0\subseteq U$. Denote $V=u+W_0$ and $W=W_0+W_0$. Clearly $V$ is a non-empty open subset of $X$, $W$ is a balanced neighborhood of 0 in $X$ and $V+W=u+W_0+W_0+W_0\subseteq U$. Since $W_0$ is a neighborhood of 0 in $X$, we can pick $c\geqslant 1$ such that $u\in cW_0$. Since $W_0$ is balanced and $c\geqslant 1$, $W_0\subseteq cW_0$ and therefore $V=u+W_0\subseteq cW_0+W_0\subseteq c(W_0+W_0)=cW$. Since $W$ is balanced, $W$ absorbs $V$. \end{proof}
To any continuous linear operator $T$ on a complex topological vector space $X$ there corresponds ${\bf T}\in{\cal L}_{{\mathbb R},{\mathbb T}}(X,X)$ defined by the formula ${\bf T}_{t,w,n}x=we^{tn}T^nx$. We will use the symbol $M(T,u,\Lambda,U)$ to denote the sets defined in (\ref{MMM}) for ${\bf T}$. In other words, for $\Lambda\subseteq{\mathbb Z}_+$, $u\in X$ and a subset $U$ of $X$, we write \begin{equation*} M(T,u,\Lambda,U)=\{t\in{\mathbb R}: \text{$we^{tn}T^nu\in U$
for some $n\in\Lambda$ and $w\in{\mathbb T}$}\}. \end{equation*}
\begin{lemma}\label{sm2} Let $X$ be a complex topological vector space, $W$ be a balanced neighborhood of $0$ in $X$, $c>0$, $k\in{\mathbb N}$ and $\delta\in(0,(2ck)^{-1}]$. Then for any $m\in{\mathbb N}$, any $\alpha\in[-c,c]$, any $w\in{\mathbb T}$, any neighborhood $W_0$ of zero in $X$ and any $x\in cW$ such that $T^kx=we^{-\alpha k}x$, there exist $u\in W_0$ and a finite set $\Lambda\subset{\mathbb N}$ such that $\min\Lambda\geqslant m$ and $[\alpha+\delta,\alpha+2\delta]\subseteq M(T,u,\Lambda,x+W)$. \end{lemma}
\begin{proof} Let $\alpha\in[-c,c]$, $w\in{\mathbb T}$ and $x\in cW$ be such that $T^kx=we^{-\alpha k}x$. For each $p\in{\mathbb N}$ consider $u_p=e^{-2\delta kp}x$. Since $T^kx=we^{-\alpha k}x$, we see that for $0\leqslant j\leqslant p$, $$ T^{(p+j)k}u_p=e^{-\alpha(p+j)k}e^{-2\delta kp}w^{p+j}x=\exp\Bigl(-(p+j)k\Bigl(\alpha+\frac{2p\delta}{p+j}\Bigr)\Bigr)w^{p+j}x. $$ That is, \begin{equation}\label{thj} w_je^{(p+j)k\theta_j}T^{(p+j)k}u_p=x\ \ \text{for $0\leqslant j\leqslant p$, where $\displaystyle\theta_j=\alpha+\frac{2\delta p}{p+j}$ and $w_j=w^{-p-j}\in{\mathbb T}$.} \end{equation} Let now $0\leqslant l\leqslant p-1$ and $\theta\in[\theta_{l+1},\theta_l]$. Since $e^{(p+l)k\theta}T^{(p+l)k}u_p=e^{(p+l)k(\theta-\theta_l)}e^{(p+l)k\theta_l}T^{(p+l)k}u_p$, using (\ref{thj}) with $j=l$, we obtain $$ w_le^{(p+l)k\theta}T^{(p+l)k}u_p=e^{(p+l)k(\theta-\theta_l)}x=x+(e^{(p+l)k(\theta-\theta_l)}-1)x. $$ Taking into account that $-(\theta_l-\theta_{l+1})\leqslant
\theta-\theta_l\leqslant 0$ and using the inequality $0\leqslant 1-e^{-t}\leqslant t$ for $t\geqslant 0$, we see that $|e^{(p+l)k(\theta-\theta_l)}-1|\leqslant (p+l)k(\theta_l-\theta_{l+1})$. This inequality, the inclusion $x\in cW$, the last display and the fact that $W$ is balanced imply that $$
w_le^{(p+l)k\theta}T^{(p+l)k}u_p\in x+c|e^{(p+l)k(\theta-\theta_l)}-1|W\subseteq x+c(p+l)k(\theta_l-\theta_{l+1})W. $$ Since $\theta_l-\theta_{l+1}=\frac{2p\delta}{(p+l)(p+l+1)}\leqslant \frac{2\delta}{p+l}$ and $\delta\leqslant (2ck)^{-1}$, we have $c(p+l)k(\theta_l-\theta_{l+1})\leqslant 1$. Thus according to the above display, $w_le^{(p+l)k\theta}T^{(p+l)k}u_p\in x+W$ whenever $\theta\in [\theta_{l+1},\theta_l]$. It follows that $[\theta_{l+1},\theta_l]\subseteq M(T,u_p,\Lambda_p,x+W)$ for $0\leqslant l\leqslant p-1$, where $\Lambda_p=\{(p+j)k:0\leqslant j\leqslant p\}$. Since the sequence $\{\theta_j\}_{0\leqslant j\leqslant p}$ decreases, $\theta_0=\alpha+2\delta$ and $\theta_p=\alpha+\delta$, we see that $[\alpha+\delta,\alpha+2\delta]=\bigcup\limits_{l=0}^{p-1}[\theta_{l+1},\theta_l]$. Since $[\theta_{l+1},\theta_l]\subseteq M(T,u_p,\Lambda_p,x+W)$ for $0\leqslant l\leqslant p-1$, we have $[\alpha+\delta,\alpha+2\delta]\subseteq M(T,u_p,\Lambda_p,x+W)$ for any $p\in{\mathbb N}$. Clearly $\min\Lambda_p=pk\to\infty$ and $u_p=e^{-2\delta kp}x\to 0$ in $X$ as $p\to\infty$. Thus we can pick $p\in{\mathbb N}$ such that $\min\Lambda_p>m$ and $u_p\in W_0$. Then $u=u_p$ and $\Lambda=\Lambda_p$ for such a $p$ satisfy all desired conditions. \end{proof}
We shall prove a statement more general than Theorem~\ref{t3a}.
\begin{theorem}\label{t3} Let $X$ be a separable complex ${\cal F}$-space, $T\in L(X)$ and $0\leqslant a<b\leqslant\infty$. Assume also that the following condition is satisfied. \begin{itemize}\itemsep=-2pt \item[\rm(\ref{t3}.1)]For any compact interval $J\subset(a,b)$ and any non-empty open subset $V$ of $X$, there exists $k=k(J,V)\in{\mathbb N}$ and a dense subset $C=C(J,V)$ of $J$ such that $$ \smash{V\cap \bigcup_{w\in{\mathbb T}}\hbox{\tt ker}\,(T^k-wc^kI)\neq\varnothing\ \ \text{for each $c\in C$}.} $$ \end{itemize}
Then $\hbox{{\Goth H}}\{zT:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-set. \end{theorem}
\begin{proof} Let $\alpha_0,\alpha,\beta\in{\mathbb R}$ be such that $b^{-1}<e^{\alpha_0}<e^\alpha<e^{\beta}<a^{-1}$. For each $\omega\in [\alpha,\beta]$ consider the family ${\cal F}_\omega=\{ze^{\omega n}T^n:z\in{\mathbb T},\ n\in{\mathbb Z}_+\}$. We shall apply Corollary~\ref{gc4} with $A={\mathbb T}$, $T_{\omega,a,n}=ae^{\omega n}T^n$ and $\Omega=[\alpha,\beta]$. First, pick a compact interval $J\subset (a,e^{-\beta})$. For each non-empty open subset $V_0$ of $X$, we can use (\ref{t3}.1) to find $x\in V_0$, $k\in{\mathbb N}$, $r\in J$ and $w\in{\mathbb T}$
such that $T^{k}x=wr^{k}x$. The latter equality implies that $x$ is a sum of finitely many eigenvectors of $T$ corresponding to eigenvalues $\lambda_j$ with $|\lambda_j|=r<e^{-\beta}$. Hence $e^{\beta n}T^nx\to 0$ as $n\to\infty$. Since $V_0$ is an arbitrary non-empty open subset of $X$ and $x\in V_0$, we see that the space $E=\{x\in X:e^{\beta n}T^nx\to 0\}$ is dense in $X$. It immediately follows that $$ \text{for any $x\in E$, $ze^{\omega n}T^nx\to 0$ as $n\to\infty$ uniformly for $(z,\omega)\in{\mathbb T}\times[\alpha,\beta]$}. $$ Hence (\ref{gc3}.1) is satisfied. Let now $U$ be a non-empty open subset of $X$. By Lemma~\ref{sm1}, there exists a balanced neighborhood $W$ of zero in $X$ and a non-empty open subset $V$ of $X$ such that $V+W\subseteq U$ and $W$ absorbs $V$. Since $W$ absorbs $V$, there is $c>0$ such that $V\subseteq cW$. According to (\ref{t3}.1), we can pick $k\in{\mathbb N}$ and a dense subset $R$ of $[\alpha_0,\beta]$ for which \begin{equation}\label{qbxc} V\cap\bigcup_{w\in{\mathbb T}}\hbox{\tt ker}\,(T^k-we^{-rk}I)\neq\varnothing\ \ \text{for any $r\in R$}. \end{equation} Let $\delta_0=\min\{(2ck)^{-1},\alpha-\alpha_0\}$ and $r\in R$. By (\ref{qbxc}), we can pick $w_r\in{\mathbb T}$ and $x_r\in V\subseteq cW$ such that $T^kx_r=w_re^{-rk}x_r$. By Lemma~\ref{sm2}, for any neighborhood $W_0$ of zero in $X$ and any $m\in{\mathbb N}$, there exist $u\in W_0$ and a finite set $\Lambda\subset{\mathbb N}$ satisfying $\min\Lambda\geqslant m$ and $[r+\delta_0,r+2\delta_0]\subseteq M(T,u,\Lambda,x_r+W)$. Pick $\delta\in(0,\delta_0)$. Since $R$ is dense in $[\alpha_0,\beta]$ and $\delta_0\leqslant \alpha-\alpha_0$, it is easy to see that each compact interval $J\subseteq[\alpha,\beta]$ of length at most $\delta$ is contained in $[r+\delta_0,r+2\delta_0]$ for some $r\in R$. 
Thus for each compact interval $J\subseteq[\alpha,\beta]$ of length at most $\delta$, any neighborhood $W_0$ of zero in $X$ and any $m\in{\mathbb N}$, there exist $r\in R$, $u\in W_0$ and a finite set $\Lambda$ such that $\min\Lambda\geqslant m$ and $J\subseteq M(T,u,\Lambda,x_r+W)$. The latter inclusion means that for each $t\in J$, there exist $w_t\in{\mathbb T}$ and $n_t\in\Lambda$ such that $w_te^{tn_t}T^{n_t}u\in x_r+W$. Since $x_r\in V$ and $V+W\subseteq U$, we get $w_te^{tn_t}T^{n_t}u\in U$. That is, for any compact interval $J\subseteq[\alpha,\beta]$ of length at most $\delta$, any neighborhood $W_0$ of zero in $X$ and any $m\in{\mathbb N}$, there exist $u\in W_0$ and a finite set $\Lambda$ such that $\min\Lambda\geqslant m$ and $J\subseteq M(T,u,\Lambda,U)$. Thus (\ref{gc4}.2) is also satisfied. By Corollary~\ref{gc4}, $$ H_{\alpha,\beta}=\bigcap_{\omega\in[\alpha,\beta]}\hbox{{\Goth U}}({\cal F}_\omega)\ \ \text{is a dense $G_\delta$-subset of $X$ whenever $b^{-1}<e^{\alpha}<e^{\beta}<a^{-1}$.} $$ By Theorem~LM, $\hbox{{\Goth U}}({\cal F}_\omega)=\hbox{{\Goth H}}(ze^\omega T)$ for any $\omega\in{\mathbb R}$ and $z\in{\mathbb T}$. Hence
$H_{\alpha,\beta}=\hbox{{\Goth H}}\{zT:e^\alpha\leqslant|z|\leqslant e^{\beta}\}$. From the above display it now follows that $\hbox{{\Goth H}}\{zT:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-subset of $X$ as the intersection of a countable family of dense $G_\delta$-sets. \end{proof}
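The convergence $e^{\beta n}T^nx\to 0$ used at the beginning of the above proof can be made explicit. The polynomial $\lambda^k-wr^k$ has $k$ pairwise distinct roots $\lambda_1,\dots,\lambda_k$, each of modulus $r$, and therefore $\hbox{\tt ker}\,(T^k-wr^kI)$ is the algebraic direct sum of the spaces $\hbox{\tt ker}\,(T-\lambda_jI)$. Writing $x=x_1+\dots+x_k$ with $Tx_j=\lambda_jx_j$, we get
$$
e^{\beta n}T^nx=\sum_{j=1}^k(e^{\beta}\lambda_j)^nx_j\to 0\ \ \text{as $n\to\infty$, since $|e^{\beta}\lambda_j|=e^{\beta}r<1$ for $1\leqslant j\leqslant k$.}
$$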
\subsection{Proof of Theorem~\ref{t3a}}
We shall prove Theorem~\ref{t3a} by means of applying Theorem~\ref{t3}. To do this it suffices to demonstrate that (\ref{t3}.1) is satisfied. Let $J\subset(a,b)$ be a compact interval and $V$ be a non-empty open subset of $X$. For any $k\in{\mathbb N}$ let $O_k=\{c\in(a,b):F_{k,c}\cap V\neq\varnothing\}$. By (\ref{t3a}.2), $O_k$ are open subsets of $(a,b)$. According to (\ref{t3a}.3), $\{O_k:k\in{\mathbb N}\}$ is an open covering of $(a,b)$. Since $J$ is compact, we can pick $k_1,\dots,k_n\in{\mathbb N}$ such that $J\subseteq\bigcup\limits_{j=1}^n O_{k_j}$. By (\ref{t3a}.4), there is $k\in{\mathbb N}$ for which $\bigcup\limits_{j=1}^n F_{k_j,c}\subseteq F_{k,c}$ for any $c\in(a,b)$. Hence $O_k\supseteq \bigcup\limits_{j=1}^n O_{k_j}\supseteq J$. It follows that for any $c\in J$, there is $x\in F_{k,c}\cap V$. According to (\ref{t3a}.1), there is $w\in{\mathbb T}$ for which $x\in\hbox{\tt ker}\,(T^k-wc^kI)$. Thus $V\cap \bigcup\limits_{w\in{\mathbb T}}\hbox{\tt ker}\,(T^k-wc^kI)\neq\varnothing$ for any $c\in J$. That is, (\ref{t3}.1) is satisfied with $C=J$. It remains to apply Theorem~\ref{t3} to conclude the proof of Theorem~\ref{t3a}.
\subsection{Proof of Theorem~\ref{t4}}
Recall that a map $h$ from a topological space $X$ to a topological space $Y$ is called {\it open} if $h(U)$ is open in $Y$ for any open subset $U$ of $X$. Recall also that a subset $A$ of a connected open subset $U$ of ${\mathbb C}^m$ is called a {\it set of uniqueness} if any holomorphic function $\varphi:U\to{\mathbb C}$ vanishing on $A$ is identically zero. The following lemma contains a few classical results that can be found in virtually any book on complex analysis.
\begin{lemma}\label{com1} Let $m\in{\mathbb N}$ and $U$ be a connected open subset of ${\mathbb C}^m$. Then any non-empty open subset of $U$ is a set of uniqueness and any non-constant holomorphic map $\varphi:U\to{\mathbb C}$ is open. Moreover, if $m=1$, then any subset of $U$ with at least one limit point in $U$ is a set of uniqueness. \end{lemma}
We need the following generalization of the last statement of Lemma~\ref{com1} to the case $m>1$. Although it is probably known, the author was unable to locate a reference.
\begin{lemma}\label{com2} Let $m\in{\mathbb N}$, $U$ be a connected open subset of ${\mathbb C}^m$, $\varphi:U\to{\mathbb C}$ be a non-constant holomorphic map and $A$ be a subset of ${\mathbb C}$ with at least one limit point in
$\varphi(U)$. Then $\varphi^{-1}(A)$ is a set of uniqueness. In particular, if $a=\inf\limits_{z\in U}|\varphi(z)|$,
$b=\sup\limits_{z\in U}|\varphi(z)|$, $c\in(a,b)$ and $G$ is a dense subset of ${\mathbb T}$, then $\varphi^{-1}(cG)$ is a set of uniqueness. \end{lemma}
\begin{proof} Assume the contrary. Then there exists a non-zero holomorphic function $f:U\to{\mathbb C}$ such that
$f\bigr|_{\varphi^{-1}(A)}=0$. Let $a\in \varphi(U)$ be a limit point of $A$ and $w\in U$ be such that $\varphi(w)=a$. Pick a convex open subset $V$ of ${\mathbb C}^m$ such that $w\in V\subseteq U$. For any complex one-dimensional linear subspace $L$ of ${\mathbb C}^m$, $V_L=(w+L)\cap V$ can be treated as a convex open subset of ${\mathbb C}$. If
$\varphi_L=\varphi\bigr|_{V_L}$ is non-constant, then by Lemma~\ref{com1}, $\varphi_L:V_L\to {\mathbb C}$ is open. Since $a=\varphi(w)$ is a limit point of $A$, it follows that $w$ is a limit point of $\varphi_L^{-1}(A)$. Using the one-dimensional uniqueness theorem, we see that $\varphi_L^{-1}(A)$ is a set of uniqueness in $V_L$. Since $f$ vanishes on
$\varphi^{-1}(A)\supseteq \varphi_L^{-1}(A)$, $f\bigr|_{V_L}=0$. On the other hand, if $\varphi_L$ is constant, then $(\varphi-a)\bigr|_{V_L}=0$. Since $L$ is arbitrary, we have $f(\varphi-a)\bigr|_V=0$. Since $V$, being a non-empty open subset of $U$, is a set of uniqueness, we have $f\cdot(\varphi-a)=0$. Since $f\not\equiv 0$, there is a non-empty open subset $W$ of $U$ such that $f(z)\neq 0$ for any $z\in W$. The equality $f\cdot(\varphi-a)=0$ implies that $\varphi(z)=a$ for any $z\in W$. Since $W$ is a set of uniqueness, $\varphi\equiv a$. We have arrived at a contradiction. Thus $\varphi^{-1}(A)$ is a set of uniqueness.
Assume now that $a=\inf\limits_{z\in U}|\varphi(z)|$,
$b=\sup\limits_{z\in U}|\varphi(z)|$, $c\in(a,b)$ and $G$ is a dense subset of ${\mathbb T}$. Since $U$ is connected, $\varphi(U)$ is connected and therefore $c{\mathbb T}\cap \varphi(U)\neq \varnothing$. Since $\varphi$ is open, the set $\varphi(U)$ is open in ${\mathbb C}$. Thus density of $G$ in ${\mathbb T}$ implies that $cG\cap \varphi(U)$ is dense in $c{\mathbb T}\cap \varphi(U)$, which is an open subset of $c{\mathbb T}$. Hence $cG$ has plenty of limit points in $\varphi(U)$ and it remains to apply the first part of the lemma. \end{proof}
We shall prove Theorem~\ref{t4} by means of applying Theorem~\ref{t3a}. First, note that density of $\hbox{\tt span}\,\{f(z):z\in U\}$ implies separability of $X$. Let $$ F_{k,c}=\hbox{\tt span}\,\{f(z):z\in U,\ \varphi(z)^k=c^k\}\ \ \text{for $k\in{\mathbb N}$ and $c\in(a,b)$.} $$ In order to apply Theorem~\ref{t3a} it suffices to verify that the map $(k,c)\mapsto F_{k,c}$ satisfies conditions (\ref{t3a}.1--\ref{t3a}.4). First, from the equality $Tf(z)=\varphi(z)f(z)$ it follows that $T^kx=c^kx$ for any $x\in F_{k,c}$. Hence (\ref{t3a}.1) is satisfied. Clearly $F_{k,c}\subseteq F_{m,c}$ whenever $k$ is a divisor of $m$. Hence for any $c\in (a,b)$ and any $k_1,\dots,k_n\in{\mathbb N}$, $F_{k_j,c}\subseteq F_{k,c}$ for $1\leqslant j\leqslant n$, where $k=k_1\cdot{\dots}\cdot k_n$. Thus (\ref{t3a}.4) is satisfied. It is easy to see that $$ F_c=\bigcup_{k=1}^\infty F_{k,c}=\hbox{\tt span}\,\{f(z):\varphi(z)\in c{\mathbb G}\},\ \ \text{where}\ \ {\mathbb G}=\{z\in{\mathbb T}:z^k=1\ \text{for some}\ k\in{\mathbb N}\}. $$ In order to prove (\ref{t3a}.3), we have to show that $F_c$ is dense in $X$. Assume the contrary. Since $F_c$ is a vector space and $X$ is locally convex, we can pick $g\in X^*$ such that $g\neq 0$ and $g(x)=0$ for each $x\in F_c$. In particular, $g(f(z))=0$ whenever $\varphi(z)\in c{\mathbb G}$. By Lemma~\ref{com2}, $\varphi^{-1}(c{\mathbb G})$ is a set of uniqueness. Since the holomorphic function $g\circ f$ vanishes on $\varphi^{-1}(c{\mathbb G})$, it is identically zero. Hence $g(f(z))=0$ for any $z\in U$, which contradicts the density of $\hbox{\tt span}\,\{f(z):z\in U\}$ in $X$. This contradiction completes the proof of (\ref{t3a}.3). It remains to verify (\ref{t3a}.2). Let $k\in{\mathbb N}$, $V$ be a non-empty open subset of $X$ and $G=\{c\in(a,b):F_{k,c}\cap V\neq\varnothing\}$. We have to show that $G$ is open in ${\mathbb R}$. Let $c\in G$. 
Then there exist $z_1,\dots,z_n\in U$ and $\lambda_1,\dots,\lambda_n\in{\mathbb C}$ such that $\varphi(z_j)^k=c^k$ for $1\leqslant j\leqslant n$ and $\sum\limits_{j=1}^n\lambda_jf(z_j)\in V$. Since $f$ is continuous, we can pick $\varepsilon>0$ such that $z_j+\varepsilon{\mathbb D}^m\subset U$ for $1\leqslant j\leqslant n$ and $\sum\limits_{j=1}^n\lambda_jf(w_j)\in V$ for any choice of $w_j\in z_j+\varepsilon{\mathbb D}^m$. By Lemma~\ref{com1}, $\varphi$ is open and therefore there exists $\delta>0$ such that $\varphi(z_j)+c\delta{\mathbb D}\subseteq \varphi(z_j+\varepsilon{\mathbb D}^m)$ for $1\leqslant j\leqslant n$. In particular, since
$|\varphi(z_j)|=c$, we see that $(1-\delta,1+\delta)\varphi(z_j)\subset\varphi(z_j+\varepsilon{\mathbb D}^m)$ for $1\leqslant j\leqslant n$. Hence for each $s\in(1-\delta,1+\delta)$, we can pick $w_1,\dots,w_n\in U$ such that $w_j\in z_j+\varepsilon{\mathbb D}^m$ and $\varphi(w_j)=s\varphi(z_j)$ for $1\leqslant j\leqslant n$. Then $\varphi(w_j)^k=s^k\varphi(z_j)^k=(cs)^k$ and $\sum\limits_{j=1}^n\lambda_jf(w_j)\in V$ since $w_j\in z_j+\varepsilon{\mathbb D}^m$. Hence $cs\in G$ for each $s\in(1-\delta,1+\delta)$ and therefore $c$ is an interior point of $G$. Since $c$ is an arbitrary point of $G$, $G$ is open. This completes the proof of (\ref{t3a}.2). It remains to apply Theorem~\ref{t3a} to conclude the proof of Theorem~\ref{t4}.
\subsection{Proof of Corollary~\ref{co1}}
Note that ${\cal H}^*$ with the usual norm is a Banach space. Consider the map $f:U\to {\cal H}^*$ defined by the formula $f(z)(x)=x(z)$. It is straightforward to verify that $f$ is holomorphic, $M_\varphi^*f(z)=\varphi(z)f(z)$ for each $z\in U$ and $\hbox{\tt span}\,\{f(z):z\in U\}$ is dense in ${\cal H}^*$. The latter is a consequence of the fact that evaluation functionals separate points of ${\cal H}$. Using Theorem~\ref{t4}, we immediately obtain that
$G_0=\hbox{{\Goth H}}\{zM_\varphi^*:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-subset of ${\cal H}^*$. Now consider the map $R:{\cal H}\to{\cal H}^*$, $Rx(y)=\langle y,x\rangle$, where $\langle\cdot,\cdot\rangle$ is the scalar product of the Hilbert space ${\cal H}$. According to the Riesz theorem, $R$ is an ${\mathbb R}$-linear isometric isomorphism (it happens to be complex conjugate linear). It is also easy to see that $R^{-1}S^*R=S^\star$ for any $S\in L({\cal H})$, where $S^*$ is the dual of $S$ and $S^\star$ is the Hilbert space adjoint of $S$. Hence
$G=R^{-1}(G_0)$, where $G=\hbox{{\Goth H}}\{zM_\varphi^\star:b^{-1}<|z|<a^{-1}\}$. Since $R$ is a homeomorphism from ${\cal H}$ onto ${\cal H}^*$, $G$ is a dense $G_\delta$-subset of ${\cal H}$.
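The intertwining relation $R^{-1}S^*R=S^\star$ used above can be verified directly: for $x,y\in{\cal H}$ and $S\in L({\cal H})$,
$$
(S^*Rx)(y)=Rx(Sy)=\langle Sy,x\rangle=\langle y,S^\star x\rangle=(RS^\star x)(y),
$$
so that $S^*R=RS^\star$ and hence $R^{-1}S^*R=S^\star$.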
\subsection{Proof of Corollary~\ref{co2}}
Consider the map $f:{\mathbb C}\to{\cal H}({\mathbb C})$ defined by the formula $f(w)(z)=e^{wz}$. It is easy to see that $f$ is holomorphic, $\hbox{\tt span}\,\{f(z):z\in{\mathbb C}\}$ is dense in ${\cal H}({\mathbb C})$ and for each $w\in{\mathbb C}$, $\hbox{\tt ker}\,(D-wI)=\hbox{\tt span}\,\{f(w)\}$. In particular, $Df(w)=wf(w)$ and using the equality $TD=DT$, we get $wTf(w)=DTf(w)$ for each $w\in {\mathbb C}$. Hence $Tf(w)\in \hbox{\tt ker}\,(D-wI)=\hbox{\tt span}\,\{f(w)\}$ for any $w\in{\mathbb C}$. Thus there exists a unique function $\varphi:{\mathbb C}\to{\mathbb C}$ such that
$Tf(w)=\varphi(w)f(w)$ for each $w\in{\mathbb C}$. Using the fact that $f$ is holomorphic and no $f(w)$ takes the value $0$, one can easily verify that $\varphi$ is holomorphic: indeed, since $f(w)(0)=1$ for every $w$, evaluating the equality $Tf(w)=\varphi(w)f(w)$ at $0$ gives $\varphi(w)=(Tf(w))(0)$, which is holomorphic in $w$. Moreover, since $T$ is not a scalar multiple of the identity, $\varphi$ is non-constant. By the Picard theorem, any non-constant entire function takes all complex values except possibly one. Hence $\inf\limits_{w\in{\mathbb C}}|\varphi(w)|=0$ and
$\sup\limits_{w\in{\mathbb C}}|\varphi(w)|=\infty$. By Theorem~\ref{t4}, $\hbox{{\Goth H}}\{zT:z\in{\mathbb C}^\star\}$ is a dense $G_\delta$-subset of ${\cal H}({\mathbb C})$.
\subsection{Proof of Corollary~\ref{co3}}
First, we consider the case ${\mathbb K}={\mathbb C}$. Let $a<\alpha<\beta<b$. By the assumptions, there is a dense subset $E$ of $X$ and a map $S:E\to E$
such that $TSx=x$, $\alpha^{-n}T^nx\to 0$ and $\beta^nS^nx\to 0$ for each $x\in E$. Let $U=\{w\in{\mathbb C}:\alpha<|w|<\beta\}$. Since $X$ is locally convex and complete, the relations $\alpha^{-n}T^nx\to 0$ and $\beta^nS^nx\to 0$ ensure that for each $w\in U$, the series $\sum\limits_{n=1}^\infty w^{-n}T^nx$ and $\sum\limits_{n=1}^\infty w^{n}S^nx$ converge in $X$ for any $x\in E$. Thus we can define $$ u_{x,w}=x+\sum\limits_{n=1}^\infty (w^{-n}T^nx+w^{n}S^nx)\ \ \text{for $w\in U$ and $x\in E$}. $$ Using the relations $TSx=x$ for $x\in E$ and $T\in L(X)$, one can easily verify that $Tu_{x,w}=wu_{x,w}$ for each $x\in E$ and $w\in U$. Now we consider $$ F_{k,c}=\hbox{\tt span}\,\{u_{x,w}:x\in E,\ w^k=c^k\}\ \ \text{for $k\in{\mathbb N}$ and $c\in(\alpha,\beta)$}. $$ We shall show that $F_{k,c}$ for $k\in{\mathbb N}$ and $c\in(\alpha,\beta)$ satisfy conditions (\ref{t3a}.1--\ref{t3a}.4). First, the equality $Tu_{x,w}=wu_{x,w}$ implies that $T^ky=c^ky$ for any $y\in F_{k,c}$. Hence (\ref{t3a}.1) is satisfied. Clearly $F_{k,c}\subseteq F_{m,c}$ whenever $k$ is a divisor of $m$. Hence for any $c\in (\alpha,\beta)$ and any $k_1,\dots,k_n\in{\mathbb N}$, $F_{k_j,c}\subseteq F_{k,c}$ for $1\leqslant j\leqslant n$, where $k=k_1\cdot{\dots}\cdot k_n$. Thus (\ref{t3a}.4) is satisfied. It is easy to see that $$ F_c=\bigcup_{k=1}^\infty F_{k,c}=\hbox{\tt span}\,\{u_{x,w}:x\in E,\ w\in c{\mathbb G}\},\ \ \text{where}\ \ {\mathbb G}=\{z\in{\mathbb T}:z^k=1\ \text{for some}\ k\in{\mathbb N}\}. $$ In order to prove (\ref{t3a}.3), we have to show that $F_c$ is dense in $X$. Assume the contrary. Since $F_c$ is a vector space and $X$ is locally convex, we can pick $g\in X^*$ such that $g\neq 0$ and $g(y)=0$ for each $y\in F_c$. Hence for any $x\in E$ and $w\in c{\mathbb G}$, we have $f_x(w)=0$, where $f_x(w)=g(u_{x,w})$. It is easy to verify that for any $x\in E$, the function $f_x:U\to {\mathbb C}$ is holomorphic. 
Since $f_x$ vanishes on $c{\mathbb G}$, the uniqueness theorem implies that each $f_x$ is identically zero. On the other hand, the $0^{\rm th}$ Laurent coefficient of $f_x$ is $g(x)$. Hence $g(x)=0$ for any $x\in E$. Since $E$ is dense in $X$, we get $g=0$. This contradiction completes the proof of (\ref{t3a}.3). It remains to verify (\ref{t3a}.2). Let $k\in{\mathbb N}$, $V$ be a non-empty open subset of $X$ and $G=\{c\in(\alpha,\beta):F_{k,c}\cap V\neq\varnothing\}$. We have to show that $G$ is open in ${\mathbb R}$. Let $c\in G$. Then there exist $x_1,\dots,x_n\in E$ and $w_1,\dots,w_n,\lambda_1,\dots,\lambda_n\in {\mathbb C}$ such that $w_j^k=c^k$ for $1\leqslant j\leqslant n$ and $\sum\limits_{j=1}^n\lambda_j u_{x_j,w_j}\in V$. Since for any fixed $x\in E$, the map $w\mapsto u_{x,w}$ is continuous, there is
$\delta>0$ such that $y_s\in V$ if $|c-s|<\delta$, where $y_s=\sum\limits_{j=1}^n\lambda_j u_{x_j,sw_j/c}$. On the other hand, $y_s\in F_{k,s}$ for each $s\in(\alpha,\beta)$ and therefore $(c-\delta,c+\delta)\cap(\alpha,\beta)\subseteq G$. Hence $c$ is an interior point of $G$. Since $c$ is an arbitrary point of $G$, $G$
is open. This completes the proof of (\ref{t3a}.2). By Theorem~\ref{t3a}, $\hbox{{\Goth H}}\{zT:\beta^{-1}<|z|<\alpha^{-1}\}$ is a dense
$G_\delta$-set whenever $a<\alpha<\beta<b$. Hence the set of common hypercyclic vectors of the family $\{zT:b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-subset of $X$ as a countable intersection of dense $G_\delta$-sets. The proof of Corollary~\ref{co3} in the case ${\mathbb K}={\mathbb C}$ is complete.
Assume now that ${\mathbb K}={\mathbb R}$. Let $X_{\mathbb C}=X\oplus iX$ and $T_{\mathbb C}(u+iv)=Tu+iTv$ be complexifications of $X$ and $T$ respectively. It is straightforward to see that $T_{\mathbb C}$ satisfies the same conditions with $E_{\mathbb C}=E+iE$ and $S_{\mathbb C}(u+iv)=Su+iSv$ taken as $E$ and $S$. Corollary~\ref{co3} in the complex case implies that
$H_0=\hbox{{\Goth H}}\{zT_{\mathbb C}:z\in{\mathbb C},\ b^{-1}<|z|<a^{-1}\}$ is a dense $G_\delta$-subset of $X_{\mathbb C}$. Clearly $H=\hbox{{\Goth H}}\{zT:z\in{\mathbb R},\
b^{-1}<|z|<a^{-1}\}$ contains the projection of $H_0$ onto $X$ along $iX$ and is therefore dense in $X$. The fact that $H$ is a $G_\delta$-subset of $X$ follows from Corollary~\ref{gc2}.
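To justify the density claim in the real case, let $\pi:X_{\mathbb C}\to X$ be the projection $\pi(u+iv)=u$, which is continuous and surjective. For $z\in{\mathbb R}$ and $u,v\in X$ we have $(zT_{\mathbb C})^n(u+iv)=(zT)^nu+i(zT)^nv$, whence
$$
\pi\bigl(\{(zT_{\mathbb C})^n(u+iv):n\in{\mathbb Z}_+\}\bigr)=\{(zT)^n\pi(u+iv):n\in{\mathbb Z}_+\}.
$$
Since a continuous surjection maps dense sets onto dense sets, every element of $\pi(H_0)$ is hypercyclic for each $zT$ with $z\in{\mathbb R}$ and $b^{-1}<|z|<a^{-1}$, that is, $\pi(H_0)\subseteq H$.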
\section{Counterexamples on hypercyclic scalar multiples}
We find operators whose existence is assured by Theorem~\ref{t5} within the class of bilateral weighted shifts on $\ell_2({\mathbb Z})$. Recall that if $w=\{w_n\}_{n\in{\mathbb Z}}$ is a bounded sequence of non-zero scalars, then the unique $T_w\in L(\ell_2({\mathbb Z}))$ such that $T_we_n=w_ne_{n-1}$ for $n\in{\mathbb Z}$, where $\{e_n\}_{n\in{\mathbb Z}}$ is the canonical orthonormal basis of the Hilbert space $\ell_2({\mathbb Z})$, is called the {\it bilateral weighted shift with the weight sequence} $w$. Hypercyclicity of bilateral weighted shifts was characterized by Salas \cite{sal}, whose necessary and sufficient condition is presented in a more convenient form in \cite{sh2}.
\begin{thmS} Let $T_w$ be a bilateral weighted shift on $\ell_2({\mathbb Z})$. Then $T_w$ is hypercyclic if and only if for any $k\in{\mathbb Z}_+$, \begin{equation} \mathop{\hbox{$\underline{\hbox{\rm lim}}$}}\limits_{n\to\infty}(\widetilde w(k-n+1,k)+ \widetilde w(k+1,k+n)^{-1})=0,\ \
\text{where}\ \widetilde w(a,b)=\smash{\prod_{j=a}^b}\,|w_j|\ \text{for}\ a,b\in{\mathbb Z},\ a\leqslant b. \label{sal3} \end{equation} \end{thmS}
It is well-known and easy to see that a bilateral weighted shift
$T_w$ is invertible if and only if $\inf\limits_{n\in{\mathbb Z}}|w_n|>0$. In this case condition (\ref{sal3}) can be rewritten in the following simpler form.
\begin{thmSs} Let $T_w$ be an invertible bilateral weighted shift on $\ell_2({\mathbb Z})$. Then $T_w$ is hypercyclic if and only if \begin{equation} \mathop{\hbox{$\underline{\hbox{\rm lim}}$}}\limits_{n\to\infty}(\widetilde w(-n,0)+\widetilde w(0,n)^{-1})=0. \label{sal4} \end{equation} \end{thmSs}
\subsection{Proof of Theorem~\ref{t5}, Part II}
First, we prove a few elementary lemmas. The following one generalizes the fact that the set of hypercyclic vectors of a hypercyclic operator is dense.
\begin{lemma}\label{el} Let $X$ be a topological vector space and $\cal A$ be a family of pairwise commuting continuous linear operators on $X$. Then the set $\hbox{{\Goth H}}({\cal A})=\bigcap\limits_{T\in{\cal A}}\hbox{{\Goth H}}(T)$ is either empty or dense in $X$. \end{lemma}
\begin{proof} Let $x\in \hbox{{\Goth H}}({\cal A})$ and $S\in{\cal A}$. We have to show that $\hbox{{\Goth H}}({\cal A})$ is dense in $X$. Since $x$ is a hypercyclic vector for $S$, $O(S,x)=\{S^nx:n\in{\mathbb Z}_+\}$ is dense in $X$ and therefore $S$ has dense range. Take any $T\in{\cal A}$. Since $TS=ST$, $O(T,S^mx)=S^m(O(T,x))$ for each $m\in{\mathbb Z}_+$. Since $x\in\hbox{{\Goth H}}(T)$ and $S^m$ has dense range, $O(T,S^mx)$ is dense in $X$. Hence $S^mx\in\hbox{{\Goth H}}(T)$ for any $T\in{\cal A}$ and $m\in{\mathbb Z}_+$. That is, $O(S,x)\subseteq \hbox{{\Goth H}}({\cal A})$. Since $O(S,x)$ is dense in $X$, so is $\hbox{{\Goth H}}({\cal A})$. \end{proof}
\begin{lemma} \label{inter} Let $X$ be a locally convex topological vector space, $T\in L(X)$, $A\subseteq (0,\infty)$ and $x\in\hbox{{\Goth H}}(\{cT:c\in A\})$. Assume also that there exists a non-empty open subset $U$ of $X$ such that \begin{equation}\label{qu} \sum\limits_{n\in Q_U}n^{-1}<\infty,\quad\text{where}\quad Q_U=\{n\in{\mathbb N}:a^nT^nx\in U\ \ \text{for some}\ \ a\in A\}. \end{equation} Then $A$ has zero Lebesgue measure. \end{lemma}
\begin{proof} Clearly we can assume that $A\neq\varnothing$ and therefore $\Lambda\neq\varnothing$, where $\Lambda=\ln(A)=\{\ln a:a\in A\}$. Since $X$ is Hausdorff and locally convex, we can find a continuous seminorm $p$ on $X$ such that $V=U\cap \{u\in X:1<p(u)<e\}$ is non-empty. It suffices to show that $\Lambda$ has zero Lebesgue measure (indeed, $A=\exp(\Lambda)$, and $\exp$ maps sets of zero measure onto sets of zero measure). Let $\alpha\in\Lambda$ and $m\in{\mathbb N}$. Since $x$ is hypercyclic for $e^{\alpha}T$ and $V$ is open, we can find $n\geqslant m$ such that $e^{\alpha n}T^nx\in V\subseteq U$. Then $n\in Q_U$ and $p(e^{\alpha n}T^nx)\in (1,e)$. Hence $$ \alpha\in (\alpha_n,\beta_n),\ \ \text{where}\ \ \alpha_n=\frac{-\ln(p(T^nx))}{n}\ \ \text{and}\ \ \beta_n=\frac{1-\ln(p(T^nx))}{n}. $$ Since $\alpha\in \Lambda$ is arbitrary, we obtain $$ \Lambda\subseteq\bigcup_{n\in Q_U,\ n\geqslant m}(\alpha_n,\beta_n)\ \ \text{for any $m\in{\mathbb N}$}. $$ On the other hand, $(\alpha_n,\beta_n)$ is an interval of length $n^{-1}$. Then (\ref{qu}) and the last display imply that $\Lambda$ can be covered by intervals with arbitrarily small sum of lengths. That is, $\Lambda$ has zero Lebesgue measure. \end{proof}
For $k\in{\mathbb N}$, we denote \begin{equation}\label{III} \begin{array}{l}\textstyle m_k=2^{3k^2},\ \ I_k^{-}=\{n\in{\mathbb N}:\frac78m_k\leqslant n<m_k\},\ \ I_k^{+}=\{n\in{\mathbb N}:m_k<n\leqslant \frac98m_k\}\\ \text{and}\ \ I_k=I_k^{-}\cup I_k^{+}\cup\{m_k\}=\{n\in{\mathbb N}:\frac78m_k\leqslant n\leqslant \frac98m_k\}.\end{array} \end{equation}
Consider the sequence $w=\{w_n\}_{n\in{\mathbb Z}}$ defined by the formula \begin{equation}\label{wp2} w_n=\left\{\begin{array}{ll}2^8&\text{if}\ \ n\in I_k^-\cup-I_k^+,\ k\in{\mathbb N}\\ 2^{-8}&\text{if}\ \ n\in I_k^+\cup-I_k^-,\ k\in{\mathbb N}\\ 1&\text{otherwise.}\end{array}\right. \end{equation} Clearly $w$ is a sequence of positive numbers and $0<2^{-8}=\inf\limits_{n\in{\mathbb Z}}w_n<\sup\limits_{n\in{\mathbb Z}}w_n=2^8<\infty$. Hence $T_w$ is an invertible bilateral weighted shift. In order to prove Part~II of Theorem~\ref{t5} it is enough to verify the following statement.
\begin{example}\label{p2t5} Let $w$ be the weight sequence defined by $(\ref{wp2})$ and $T=T_w$ be the corresponding bilateral weighted shift on $\ell_2({\mathbb Z})$. Then $M_T=(1/2,2)$ and any $\Lambda\subseteq(1/2,2)$ has Lebesgue measure $0$ if the family $\{aT:a\in\Lambda\}$ has a common hypercyclic vector. \end{example}
\begin{proof} Using the definition (\ref{wp2}) of the sequence $w$, it is easy to verify that for any $n\in{\mathbb N}$, \begin{equation}\label{wwp2} \beta(n)=\left\{\begin{array}{ll}2^{8n-7m_k+8}&\text{if}\ \ n\in I_k^-,\ k\in{\mathbb N},\\ 2^{m_k}&\text{if}\ \ n=m_k,\ k\in{\mathbb N},\\ 2^{9m_k-8n}&\text{if}\ \ n\in I_k^+,\ k\in{\mathbb N},\\ 1&\text{otherwise,}\end{array}\right. \qquad\text{where}\quad \beta(n)=\prod_{j=0}^n w_j. \end{equation} Moreover, $w_n^{-1}=w_{-n}$ for any $n\in{\mathbb Z}$. Using this fact and the equality $w_0=1$, we get \begin{equation}\label{wwp21} \widetilde w(j,n)=\left\{\begin{array}{ll}\beta(n)\beta(j-1)^{-1}&\text{if}\ \ j\geqslant 1,\\ \beta(-1-n)\beta(-j)^{-1}&\text{if}\ \ n\leqslant -1,\\ \beta(n)\beta(-j)^{-1}&\text{if}\ j\leqslant 0,\ \text{and}\ n\geqslant 0\end{array}\right. \qquad\text{for any $j,n\in{\mathbb Z}$, $j\leqslant n$,} \end{equation} where the numbers $\widetilde w(j,n)$ are defined in (\ref{sal3}). In particular, $\widetilde w(0,n)=\beta(n)$ and $\widetilde w(-n,0)=\beta(n)^{-1}$ for each $n\in{\mathbb N}$. This observation together with Theorem~${\rm S}'$ and the fact that $aT=T_{aw}$ for $a\neq 0$ imply that for $a>0$, \begin{equation}\label{hy1} \text{$aT$ is hypercyclic if and only if}\quad \mathop{\hbox{$\underline{\hbox{\rm lim}}$}}\limits_{n\to\infty}\beta(n)^{-1}\bigl(a^n+a^{-n}\bigr)=0. \end{equation} By (\ref{wwp2}), $1\leqslant \beta(n)\leqslant 2^{n+1}$ for $n\in{\mathbb N}$, which together with (\ref{hy1}) implies that $M_T\subseteq (1/2,2)$. On the other hand, by (\ref{wwp2}), $\beta(m_k)=2^{m_k}$ for each $k\in{\mathbb N}$. Hence $\beta(m_k)^{-1}\bigl(a^{m_k}+a^{-m_k}\bigr)\to 0$ as $k\to\infty$ for any $a\in(1/2,2)$. According to (\ref{hy1}), $aT$ is hypercyclic if $1/2<a<2$. Hence $M_T=(1/2,2)$.
Let now $\Lambda$ be a non-empty subset of $(1/2,2)$ such that the family $\{aT:a\in\Lambda\}$ has common hypercyclic vectors. We have to demonstrate that $\Lambda$ has zero Lebesgue measure. Pick
$\varepsilon>0$ such that $\frac{\varepsilon}{1-\varepsilon}<2^{-8}$. By Lemma~\ref{el}, there is a common hypercyclic vector $x$ of the family $\{aT:a\in\Lambda\}$ such that $\|x-e_{-1}\|<\varepsilon$. Let $$
\smash{Q=\{n\in{\mathbb N}:\|a^nT^nx-e_0\|<\varepsilon\ \ \text{for some}\ \ a\in\Lambda\}\ \ \text{and}\ \ J=\bigcup_{k=1}^\infty I_k.} $$ First, we show that $Q\subseteq J$. Let $n\in Q$. Then there is
$a\in\Lambda$ such that $\|a^nT^nx-e_0\|<\varepsilon$. Hence $$
\text{$|\langle a^nT^nx,e_0\rangle|>1-\varepsilon$ \ and \ $|\langle a^{n}T^nx,e_{-n-1}\rangle|<\varepsilon$.} $$ Using (\ref{wwp21}), we get $\langle a^nT^nx,e_0\rangle=a^n\beta(n)x_n$ and $\langle a^nT^nx,e_{-n-1}\rangle=a^n\beta(n)^{-1}x_{-1}$. Then from the last display it follows that $$
\text{$a^n\beta(n)|x_n|>1-\varepsilon$ \ and \
$a^n\beta(n)^{-1}|x_{-1}|<\varepsilon$}. $$
Since $\|x-e_{-1}\|<\varepsilon$, $|x_{-1}|>1-\varepsilon$ and
$|x_n|<\varepsilon$. Then according to the last display, $$ \beta(n)>\frac{1-\varepsilon}{\varepsilon}\max\{a^n,a^{-n}\}\geqslant \frac{1-\varepsilon}{\varepsilon}>2^8>1. $$ By (\ref{wwp2}), $\beta(j)=1$ if $j\notin J$. Hence $n\in J$. Since $n$ is an arbitrary element of $Q$, we get $Q\subseteq J$.
Next, we show that $(Q-Q)\cap {\mathbb N}\subseteq J$. Indeed, let $m,n\in Q$ be such that $m>n$. Since $m,n\in Q$, we can pick $a,b\in\Lambda$
such that $\|a^nT^nx-e_0\|<\varepsilon$ and $\|b^mT^mx-e_0\|<\varepsilon$. In particular, $$
\text{$|\langle a^nT^nx,e_0\rangle|>1-\varepsilon$, $|\langle b^mT^mx,e_0\rangle|>1-\varepsilon$, $|\langle a^nT^nx,e_{m-n}\rangle|<\varepsilon$ and $|\langle b^mT^mx,e_{n-m}\rangle|<\varepsilon$}. $$ Using (\ref{wwp21}), we get $$ \begin{array}{ll} \text{$\langle a^nT^nx,e_0\rangle=a^n\beta(n)x_n$},& \text{$\langle a^nT^nx,e_{m-n}\rangle= a^n\beta(m)\beta(m-n)^{-1}x_m$}, \\ \text{$\langle b^mT^mx,e_0\rangle=b^m\beta(m)x_m$}, &\text{$\langle b^mT^mx,e_{n-m}\rangle=b^m\beta(n)\beta(m-n-1)^{-1}x_n$}. \end{array} $$ According to the last two displays, $$ \beta(m-n-1)>\frac{1-\varepsilon}{\varepsilon}a^{-n}b^{m}\ \ \text{and}\ \ \beta(m-n)>\frac{1-\varepsilon}{\varepsilon}a^{n}b^{-m}. $$ Since $\beta(m-n)=\beta(m-n-1)w_{m-n}\geqslant 2^{-8}\beta(m-n-1)$, from the last display it follows that $$ \beta(m-n)>2^{-8}\frac{1-\varepsilon}{\varepsilon}\max\{a^nb^{-m},a^{-n}b^m\}\geqslant 2^{-8}\frac{1-\varepsilon}{\varepsilon}>1. $$ Since $\beta(j)=1$ if $j\notin J$, we have $m-n\in J$. Hence $(Q-Q)\cap {\mathbb N}\subseteq J$.
Let now $k\in{\mathbb N}$ and $m,n\in Q\cap I_k$ be such that $m>n$. Since $(Q-Q)\cap {\mathbb N}\subseteq J$, we have $m-n\in J$. Since $m,n\in I_k$, we get $m-n\leqslant \frac{m_k}4<\frac{7m_k}{8}=\min I_k$. Hence $m-n\in \bigcup\limits_{j=0}^{k-1}I_j$, where $I_0=\varnothing$. Then
$|m-n|\leqslant \frac{9m_{k-1}}{8}<2m_{k-1}$, where $m_0=1$. Hence $Q\cap I_k$ has at most $2m_{k-1}$ elements. On the other hand, $n\geqslant \frac{7m_k}{8}\geqslant \frac{m_k}2$ for any $n\in I_k$ and therefore $$ \sum_{n\in Q\cap I_k}n^{-1}\leqslant 2m_{k-1} \frac{2}{m_k}=\frac{4m_{k-1}}{m_k}\leqslant 2^{-k}, $$ where the last inequality follows from the definition of $m_k$. Since $Q\subseteq J$ and $J$ is the union of disjoint sets $I_k$, we obtain $$ \sum_{n\in Q}n^{-1}=\sum_{k=1}^\infty\sum_{n\in Q\cap I_k}n^{-1}\leqslant\sum_{k=1}^\infty 2^{-k}=1<\infty. $$ Using the definition of $Q$ and Lemma~\ref{inter}, we now see that $\Lambda$ has zero Lebesgue measure. \end{proof}
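The combinatorial identities behind the proof above are easy to machine-check on the first two blocks. The following sketch is ours and is only a verification aid, not part of the argument: it recomputes $\beta(n)$ directly from the definition (wp2), compares it with the closed form (together with the boundary value $\beta(m_k)=2^{m_k}$ used in the proof), and checks the estimate $4m_{k-1}/m_k\leqslant 2^{-k}$ from the final display. All quantities are tracked as exponents of $2$.

```python
from fractions import Fraction

def m(k):
    # m_k = 2^{3 k^2}; by convention m_0 = 1, as in the proof
    return 2 ** (3 * k * k) if k >= 1 else 1

def in_minus(n, k):  # n in I_k^-  (7/8 m_k <= n < m_k)
    return 7 * m(k) <= 8 * n < 8 * m(k)

def in_plus(n, k):   # n in I_k^+  (m_k < n <= 9/8 m_k)
    return 8 * m(k) < 8 * n <= 9 * m(k)

def log2_w(n, K=2):
    # exponent of 2 in w_n, per (wp2); blocks k <= K suffice for small |n|
    for k in range(1, K + 1):
        if in_minus(n, k) or in_plus(-n, k):
            return 8
        if in_plus(n, k) or in_minus(-n, k):
            return -8
    return 0

def log2_beta_closed(n, K=2):
    # exponent of 2 in beta(n) = prod_{j=0}^n w_j (closed form)
    for k in range(1, K + 1):
        if in_minus(n, k):
            return 8 * n - 7 * m(k) + 8
        if n == m(k):
            return m(k)          # beta(m_k) = 2^{m_k}
        if in_plus(n, k):
            return 9 * m(k) - 8 * n
    return 0

N = 9 * m(2) // 8                # covers the blocks k = 1, 2
B = [0] * (N + 1)                # B[n] = log2 beta(n)
for n in range(1, N + 1):
    B[n] = B[n - 1] + log2_w(n)

assert all(B[n] == log2_beta_closed(n) for n in range(N + 1))
assert B[m(1)] == m(1) and B[m(2)] == m(2)          # beta(m_k) = 2^{m_k}
assert all(0 <= B[n] <= n + 1 for n in range(1, N + 1))  # beta(n) <= 2^{n+1}
# the tail estimate: 4 m_{k-1} / m_k <= 2^{-k}
assert all(Fraction(4 * m(k - 1), m(k)) <= Fraction(1, 2 ** k)
           for k in range(1, 7))
```

Working with exponents of $2$ keeps the check exact even though $\beta(m_2)=2^{4096}$.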
\subsection{Proof of Theorem~\ref{t5}, Part~I}
Consider the sequences $\{a_n\}_{n\in{\mathbb Z}}$ and $\{w_n\}_{n\in{\mathbb Z}}$ defined by the formulae \begin{equation}\label{an} a_n=\left\{ \begin{array}{ll}
1&\text{if $|n|\leqslant 5$ or $-2\cdot5^k\leqslant n<-5^k$}\\ &\text{or $-5^{k+1}\leqslant n<-4\cdot 5^k$, $k\in{\mathbb N}$,}\\ 8^{-1}&\text{if $-3\cdot5^k\leqslant n<-2\cdot5^k$, $k\in{\mathbb N}$,}\\ 8&\text{if $-4\cdot5^k\leqslant n<-3\cdot5^k$, $k\in{\mathbb N}$,}\\ 2^{-1}&\text{if $2\cdot5^k<n\leq4\cdot5^k$, $k\in{\mathbb N}$,}\\ 4^{-1}&\text{if $5^k<n\leq2\cdot5^k$, $k\in{\mathbb N}$,}\\ 16&\text{if $4\cdot5^k<n\leq5^{k+1}$, $k\in{\mathbb N}$;} \end{array} \right. \qquad w_n=\left\{ \begin{array}{ll}
1&\text{if $|n|\leqslant 1$,}\\ n(n-1)^{-1}a_n&\text{if $n\geqslant 2$,}\\ (n+1)n^{-1}a_n&\text{if $n\leqslant -2$.} \end{array} \right. \end{equation} It is easy to see that $w$ is a bounded sequence of positive numbers and $\inf\limits_{n\in{\mathbb Z}}w_n>0$. Hence the bilateral weighted shift $T_w$ is invertible. In order to prove Part~I of Theorem~\ref{t5} it is enough to verify the following statement.
\begin{example}\label{p1t5} Let $w$ be the weight sequence defined by $(\ref{an})$ and $S=T_w$ be the corresponding bilateral weighted shift on $\ell_2({\mathbb Z})$. Then $M_S=\{1,2\}$. \end{example}
\begin{proof} Using (\ref{an}), one can easily verify that \begin{align}\label{gamma1} \gamma_+(n)&=\left\{ \begin{array}{ll} 4^{5^k-n}&\text{if $5^k<n\leq2\cdot5^k$, $k\in{\mathbb N}$,}\\ 2^{-n}&\text{if $2\cdot5^k<n\leq4\cdot5^k$, $k\in{\mathbb N}$,}\\ 16^{n-5^{k+1}}&\text{if $4\cdot5^k<n\leq5^{k+1}$, $k\in{\mathbb N}$,} \end{array} \right.\qquad\qquad\quad\ \ \text{where}\quad \gamma_+(n)=\prod_{j=0}^n a_j, \\ \label{gamma2} \gamma_-(n)&=\left\{ \begin{array}{ll} 1&\text{if $5^k<n\leq2\cdot5^k$ or $4\cdot5^k<n\leq5^{k+1}$, $k\in{\mathbb N}$,}\\ 8^{2\cdot 5^k-n}&\text{if $2\cdot5^k<n\leq3\cdot5^k$, $k\in{\mathbb N}$,}\\ 8^{n-4\cdot 5^k}&\text{if $3\cdot5^k<n\leq4\cdot5^{k}$, $k\in{\mathbb N}$.} \end{array} \right.\!\!\!\!\!\!\!\!\text{where}\quad \gamma_-(n)=\!\!\prod_{j=-n}^0 \!\!a_j. \end{align} For brevity we denote $\beta_+(n)=\widetilde w(0,n)$ and $\beta_-(n)=\widetilde w(-n,0)$, where $\widetilde w(k,l)$ are defined in (\ref{sal3}). By definition of $w$, \begin{equation}\label{bega} \beta_+(n)=n\gamma_+(n)\quad\text{and}\quad\beta_-(n)=\frac{\gamma_-(n)}{n}\ \ \text{for any $n\in{\mathbb N}$}. \end{equation} According to (\ref{gamma1}) and (\ref{gamma2}), $\gamma_+(5^k)=\gamma_-(5^k)=1$ and $\gamma_+(3\cdot 5^k)=\gamma_-(3\cdot 5^k)=8^{-5^k}$ for any $k\in{\mathbb N}$. Using (\ref{bega}), we get $\beta_+(5^k)^{-1}=\beta_-(5^k)=5^{-k}\to 0$ and $(2^{3\cdot 5^k}\beta_+(3\cdot 5^k))^{-1}=2^{3\cdot 5^k}\beta_-(3\cdot 5^k)=3^{-1}5^{-k}\to 0$ as $k\to\infty$. Applying Theorem~${\rm S}'$ to $S=T_w$ and $2S=T_{2w}$, we see that $S$ and $2S$ are both hypercyclic.
Let $c>0$ be such that $cS=T_{cw}$ is hypercyclic. By Theorem~${\rm S}'$, there exists a strictly increasing sequence $\{n_j\}_{j\in{\mathbb N}}$ of positive integers such that \begin{equation}\label{limi} (c^{n_j}\beta_+(n_j))^{-1}+c^{n_j}\beta_-(n_j)\to 0\ \ \text{as $j\to\infty$}. \end{equation} Let $k_j$ be the integer part of $\log_5n_j$. Then $n_j=b_j5^{k_j}$, where $1\leqslant b_j<5$. Passing to a subsequence, if necessary, we can additionally assume that $b_j\to b\in[1,5]$ as $j\to\infty$. Using (\ref{gamma1}) and (\ref{gamma2}), one can easily verify that convergence of $b_j$ to $b$ implies that \begin{equation}\label{li} \lim_{j\to\infty}\gamma_+(n_j)^{1/n_j}=\lambda_+(b) \ \ \ \text{and}\ \ \ \lim_{j\to\infty}\gamma_-(n_j)^{1/n_j}=\lambda_-(b), \end{equation} where the continuous positive functions $\lambda_+$ and $\lambda_-$ on $[1,5]$ are defined by the formula \begin{equation}\label{lam} \lambda_+(b)=\left\{ \begin{array}{ll} 4^{b^{-1}-1}&\text{if $1\leqslant b<2$,}\\ 1/2&\text{if $2\leqslant b\leqslant 4$,}\\ 16^{1-5b^{-1}}&\text{if $4<b\leq5$} \end{array} \right. \quad\text{and}\quad \lambda_-(b)=\left\{ \begin{array}{ll} 1&\text{if $b\in[1,2]\cup[4,5]$,}\\ 8^{2b^{-1}-1}&\text{if $2<b\leqslant 3$,}\\ 8^{1-4b^{-1}}&\text{if $3<b<4$.} \end{array} \right. \end{equation} According to (\ref{bega}), $$ \lim\limits_{n\to\infty}\biggl(\frac{\beta_+(n)}{\gamma_+(n)}\biggr)^{1/n}=1\ \ \text{and}\ \ \lim\limits_{n\to\infty}\biggl(\frac{\beta_-(n)}{\gamma_-(n)}\biggr)^{1/n}=1. $$ From (\ref{li}) and the above display it follows that $$ \lim_{j\to\infty}\bigl(c^{n_j}\beta_+(n_j)\bigr)^{-1/n_j}=(c\lambda_+(b))^{-1} \ \ \ \text{and}\ \ \ \lim_{j\to\infty}\bigl(c^{n_j}\beta_-(n_j)\bigr)^{1/n_j}=c\lambda_-(b). $$ These equalities together with (\ref{limi}) imply that $(c\lambda_+(b))^{-1}\leqslant 1$ and $c\lambda_-(b)\leqslant 1$. In particular, $\frac{\lambda_-(b)}{\lambda_+(b)}\leqslant 1$.
On the other hand, (\ref{lam}) implies that $\frac{\lambda_-(b)}{\lambda_+(b)}>1$ for $b\in (1,3)\cup(3,5)$. Hence $b\in\{1,3,5\}$. If $b\in\{1,5\}$, then $\lambda_-(b)=\lambda_+(b)=1$ and the inequalities $(c\lambda_+(b))^{-1}\leqslant 1$ and $c\lambda_-(b)\leqslant 1$ imply that $c\leqslant 1$ and $c^{-1}\leqslant 1$. That is, $c=1$. If $b=3$, then $\lambda_-(b)=\lambda_+(b)=1/2$ and the inequalities $(c\lambda_+(b))^{-1}\leqslant 1$ and $c\lambda_-(b)\leqslant 1$ imply that $c/2\leqslant 1$ and $2/c\leqslant 1$. That is, $c=2$. Thus $c\in\{1,2\}$. Hence $M_S=\{1,2\}$. \end{proof}
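The block structure of the sequence $a_n$ makes the key values of $\gamma_\pm$ easy to verify directly. The following sketch (ours, exact arithmetic via `fractions`) checks the values $\gamma_+(5^k)=\gamma_-(5^k)=1$ and $\gamma_+(3\cdot5^k)=\gamma_-(3\cdot5^k)=8^{-5^k}$ used in the proof, for $k=1,2$.

```python
from fractions import Fraction

def a(n):
    # the sequence a_n from (an); blocks k = 1, 2, 3 cover |n| <= 625
    if abs(n) <= 5:
        return Fraction(1)
    for k in (1, 2, 3):
        p = 5 ** k
        if -2 * p <= n < -p or -5 * p <= n < -4 * p:
            return Fraction(1)
        if -3 * p <= n < -2 * p:
            return Fraction(1, 8)
        if -4 * p <= n < -3 * p:
            return Fraction(8)
        if 2 * p < n <= 4 * p:
            return Fraction(1, 2)
        if p < n <= 2 * p:
            return Fraction(1, 4)
        if 4 * p < n <= 5 * p:
            return Fraction(16)
    raise ValueError(n)

def gamma_plus(n):
    r = Fraction(1)
    for j in range(0, n + 1):
        r *= a(j)
    return r

def gamma_minus(n):
    r = Fraction(1)
    for j in range(-n, 1):
        r *= a(j)
    return r

for k in (1, 2):
    p = 5 ** k
    assert gamma_plus(p) == gamma_minus(p) == 1
    assert gamma_plus(3 * p) == gamma_minus(3 * p) == Fraction(1, 8 ** p)
```

Each complete block of $a_n$ contributes a total factor of $1$, which is why only the partially traversed block matters at the sampled points.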
\section{Proof of Theorem~\ref{t2}}
The main tool in the proof is the following result of Macintyre and Fuchs, which is a part of Theorem~1 in \cite{mf}.
\begin{thmMF} Let $d>0$, $n\in{\mathbb N}$ and $z_1,\dots,z_n\in{\mathbb C}$. Then there exist $n$ closed disks $D_1,\dots,D_n$ on the complex plane such that their radii $r_1,\dots,r_n$ satisfy $\sum\limits_{j=1}^nr_j^2\leqslant 4d^2$ and \begin{equation}\label{MF}
\sum_{j=1}^n|z-z_j|^{-2}<\frac{n(1+\ln n)}{d^2}\ \ \text{for any}\ \ z\in{\mathbb C}\setminus\bigcup_{j=1}^n D_j. \end{equation} \end{thmMF}
We also need the following elementary lemma.
\begin{lemma}\label{EL} Let $X$ be a topological vector space, $T\in L(X)$ and $f\in X^*\setminus\{0\}$. Assume also that there exists a polynomial $p$ such that $p(T)$ is hypercyclic. Then the sequence $\{(T^*)^nf\}_{n\in{\mathbb Z}_+}$ is linearly independent. \end{lemma}
\begin{proof} Assume that the sequence $\{(T^*)^nf\}_{n\in{\mathbb Z}_+}$ is linearly dependent. Then we can pick $n\in{\mathbb N}$ such that $(T^*)^nf\in L= \hbox{\tt span}\,\{f,T^*f,\dots,(T^*)^{n-1}f\}$. It follows that $L$ is a non-trivial finite dimensional invariant subspace for $T^*$. Hence $L^\perp=\{x\in X:g(x)=0\ \ \text{for any $g\in L$}\}$ is a closed linear subspace of $X$ of finite positive codimension invariant for $T$. Clearly $L^\perp$ is also invariant for $p(T)$. We have obtained a contradiction with a result of Wengenroth \cite{ww}, according to which hypercyclic operators on topological vector spaces have no closed invariant subspaces of positive finite codimension. \end{proof}
We are ready to prove Theorem~\ref{t2}. Let $X$ be a complex topological vector space such that $X^*\neq \{0\}$, $T\in L(X)$ and $\Lambda$ be a non-empty subset of ${\mathbb R}\times {\mathbb C}$ for which the family ${\cal A}=\{e^a(T+bI):(a,b)\in\Lambda\}$ has a common hypercyclic vector. In order to prove Theorem~\ref{t2} it suffices to show that $\Lambda$ has zero three dimensional Lebesgue measure. Pick a non-zero $f\in X^*$. By Lemma~\ref{el}, the set $\hbox{{\Goth H}}({\cal A})$ of common hypercyclic vectors for operators from $\cal A$ is dense in $X$. Since $\hbox{{\Goth H}}({\cal A})$ is also closed under multiplications by non-zero scalars, we can pick $x\in \hbox{{\Goth H}}({\cal A})$ such that $f(x)=1$. For each $n\in{\mathbb N}$ consider the complex polynomial \begin{equation}\label{pn1} p_n(b)=f((T+bI)^nx)=\sum_{j=0}^n \bin nj((T^*)^{n-j}f)(x)b^j. \end{equation} Clearly $p_n$ is a polynomial of degree $n$ with coefficient $1=f(x)$ in front of $b^n$ (such polynomials are usually called {\it monic}). Differentiating (\ref{pn1}) by $b$, we obtain that $p'_n(b)=nf((T+bI)^{n-1}x)=np_{n-1}(b)$. That is, \begin{equation}\label{pn2} p'_n=np_{n-1}\ \ \text{for each}\ \ n\in{\mathbb N}. \end{equation} Applying (\ref{pn2}) twice, one can easily verify that \begin{equation}\label{pn3} \bigl(p'_n/p_n\bigr)'=n^2\biggl(\Bigl( 1-\frac1n\Bigr)\frac{p_{n-2}}{p_n}-\Bigl(\frac{p_{n-1}}{p_n}\Bigr)^2\biggr)\ \ \text{for each}\ \ n\geqslant 2. \end{equation} The equality (\ref{pn3}) immediately implies the following inequality: \begin{equation}\label{pn4}
\bigl|(p'_n/p_n)'\bigr|\geqslant n^2\biggl(\Bigl|\frac{p_{n-2}}{2p_n}\Bigr|-\Bigl|\frac{p_{n-1}}{p_n}\Bigr|^2\biggr)\ \ \text{for each}\ \ n\geqslant 2. \end{equation}
\begin{lemma}\label{qq1} For any $(a,b)\in\Lambda$ and $k\in{\mathbb Z}_+$, the sequence $\{v_n\}_{n\geqslant k}$ is dense in ${\mathbb C}^{k+1}$, where $v_n=e^{an}(p_n(b),p_{n-1}(b),\dots,p_{n-k}(b))$. \end{lemma}
\begin{proof} Assume the contrary. Then there exist $(a,b)\in\Lambda$ and a non-empty open subset $W$ of ${\mathbb C}^{k+1}$ such that $v_n\notin W$ for each $n\geqslant k$. Let $S=e^a(T+bI)$. By definition of $p_m$, for $0\leqslant j\leqslant k$, $$ e^{an}p_{n-j}(b)=e^{an}f((T+bI)^{n-j}x)=e^{aj}f(S^{n-j}x)=e^{aj} (S^*)^{k-j}f(S^{n-k}x). $$ Thus the relation $v_n\notin W$ can be rewritten as $S^{n-k}x\notin R^{-1}(W)$, where the linear operator $R:X\to {\mathbb C}^{k+1}$ is defined by the formula $$ (Ry)_l=e^{a(l-1)}(S^*)^{k-l+1}f(y)\ \ \text{for}\ \ 1\leqslant l\leqslant k+1. $$ By Lemma~\ref{EL}, continuous linear functionals $f,S^*f,\dots,(S^*)^kf$ are linearly independent. It follows that $R$ is continuous and surjective. Hence $V=R^{-1}(W)$ is a non-empty open subset of $X$. Thus $S^{n-k}x$ does not meet the non-empty open set $V$ for each $n\geqslant k$, which is impossible since $x\in\hbox{{\Goth H}}(S)$. \end{proof}
By Lemma~\ref{qq1} with $k=2$, for any $(a,b)\in\Lambda$, the sequence $\{v_n=e^{an}(p_n(b),p_{n-1}(b),p_{n-2}(b))\}_{n\geqslant 2}$ is dense in ${\mathbb C}^3$. Since the map $F:{\mathbb C}^\star\times{\mathbb C}^2\to{\mathbb C}^3$, $F(u,v,w)=(u,v/u,w/u)$ is continuous and has dense range, $\{F(v_n):n\geqslant 2,\ p_n(b)\neq 0\}$ is dense in ${\mathbb C}^3$. That is, $$ \text{$\{(e^{an}p_n(b),p_{n-1}(b)/p_n(b),p_{n-2}(b)/p_n(b)):n\geqslant 2,\ p_n(b)\neq 0\}$ is dense in ${\mathbb C}^3$.} $$ It follows that any $(a,b)\in\Lambda$ is contained in infinitely many sets $C_n$, where $$
C_n=\{(a,b)\in{\mathbb R}\times{\mathbb C}:1<|e^{an}p_n(b)|<e,\
|p_{n-1}(b)/p_n(b)|<1,\ |p_{n-2}(b)/p_n(b)|>8\}. $$ That is, \begin{equation}\label{l1} \Lambda\subseteq\Lambda^*=\bigcap_{m=1}^\infty\bigcup_{n\geqslant m}C_n. \end{equation} Clearly, $C_n\subseteq {\mathbb R}\times B_n$, where $$
B_n=\{b\in{\mathbb C}:|p_{n-1}(b)/p_n(b)|<1,\ |p_{n-2}(b)/p_n(b)|>8\}. $$ Applying the inequality (\ref{pn4}), we see that \begin{equation}\label{bn}
B_n\subseteq B'_n=\Bigl\{b\in{\mathbb C}:\bigl|(p'_n(b)/p_n(b))'\bigr|\geqslant 3n^2\Bigr\}. \end{equation} Since $p_n$ is a monic polynomial of degree $n$, there exist $z_1,\dots,z_n\in{\mathbb C}$ such that $$ p_n(b)=\prod_{j=1}^n (b-z_j)\ \ \ \text{and therefore}\ \ \ (p'_n(b)/p_n(b))'=-\sum_{j=1}^n (b-z_j)^{-2}. $$ By Theorem~MF with $d=n^{-1/3}$, there are $n$ closed disks $D_1,\dots,D_n$ on the complex plane such that their radii $r_1,\dots,r_n$ satisfy $$ \sum\limits_{j=1}^nr_j^2\leqslant 4n^{-2/3}\ \ \text{and}\ \
\bigl|(p'_n(b)/p_n(b))'\bigr|\leqslant
\sum_{j=1}^n|b-z_j|^{-2}\!\!<n^{5/3}(1+\ln n)\ \ \text{for any}\ \ b\in{\mathbb C}\setminus\bigcup\limits_{j=1}^n D_j . $$ Since $n^{5/3}(1+\ln n)\leqslant 3n^2$ for any $n\in{\mathbb N}$, we see that $B'_n\subseteq \bigcup\limits_{j=1}^n D_j$. Hence $$ \mu_2(B_n)\leqslant \mu_2(B'_n)\leqslant \pi\sum\limits_{j=1}^nr_j^2\leqslant 4\pi n^{-2/3}, $$ where $\mu_k$ is the $k$-dimensional Lebesgue measure. For each $b\in B_n$, $A_{b,n}=\{a\in{\mathbb R}:(a,b)\in C_n\}$ can be written as $$
A_{b,n}=\{a\in{\mathbb R}:1<|e^{an}p_n(b)|<e\}=\Bigl(\frac{-\ln|p_n(b)|}{n},
\frac{1-\ln|p_n(b)|}{n}\Bigr), $$ which is an interval of length $n^{-1}$. Hence $\mu_1(A_{b,n})=n^{-1}$ for each $b\in B_n$. By the Fubini theorem, $$ \mu_3(C_n)=\int_{B_n}\mu_1(A_{b,n})\mu_2(db)=\frac{\mu_2(B_n)}{n}\leqslant 4\pi n^{-5/3}. $$ According to (\ref{l1}) and the above estimate, we obtain $$ \mu_3(\Lambda^*)\leqslant \inf_{m\in{\mathbb N}}4\pi\sum_{n=m}^\infty n^{-5/3}=0\ \ \text{since}\ \ \sum_{n=1}^\infty n^{-5/3}<\infty. $$ Thus $\mu_3(\Lambda^*)=0$ and therefore $\mu_3(\Lambda)=0$ since $\Lambda\subseteq \Lambda^*$. The proof of Theorem~\ref{t2} is complete.
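Two purely numerical facts were used in the measure estimate above: the inequality $n^{5/3}(1+\ln n)\leqslant 3n^2$ and the convergence of $\sum n^{-5/3}$. A quick floating-point sanity check (ours, not part of the proof):

```python
import math

# n^{5/3}(1 + ln n) <= 3 n^2 is equivalent to 1 + ln n <= 3 n^{1/3};
# the gap 3 n^{1/3} - ln n - 1 is positive at n = 1 and increasing
for n in range(1, 100001):
    assert 1 + math.log(n) <= 3 * n ** (1.0 / 3.0)

# partial tails of sum n^{-5/3} shrink as the starting index m grows
def tail(m, upto=10 ** 5):
    return sum(n ** (-5.0 / 3.0) for n in range(m, upto))

assert tail(1000) < tail(100) < tail(10)
```

The monotonicity of the gap ($\frac{d}{dn}(3n^{1/3}-\ln n)=(n^{1/3}-1)/n\geqslant 0$) is what makes a finite-range check meaningful here.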
\section{Concluding remarks and open problems}
Lemma~\ref{EL} implies the following easy corollary.
\begin{corollary}\label{1212} Let $X$ be a topological vector space such that $0<\hbox{\tt dim}\, X^*<\infty$. Then $X$ supports no hypercyclic operators. \end{corollary}
\begin{proof} Assume that $T\in L(X)$ is hypercyclic and $f\in X^*$, $f\neq 0$. By Lemma~\ref{EL}, the sequence $\{(T^*)^nf\}_{n\in{\mathbb Z}_+}$ is linearly independent, which contradicts the inequality $\hbox{\tt dim}\, X^*<\infty$. \end{proof}
In particular, ${\cal F}$-spaces $X=L_p[0,1]\times {\mathbb K}^n$ for $0<p<1$ and $n\in{\mathbb N}$ support no hypercyclic operators. Indeed, the dual of $X$ is $n$-dimensional. On the other hand, each separable infinite dimensional Fr\'echet space supports a hypercyclic operator \cite{bonper} and there are separable infinite dimensional ${\cal F}$-spaces \cite{kal} that support no continuous linear operators except the scalar multiples of $I$ and therefore support no hypercyclic operators. However the following question remains open.
\begin{question}\label{q4} Let $X$ be a separable ${\cal F}$-space such that $X^*$ is infinite dimensional. Is it true that there exists a hypercyclic operator $T\in L(X)$? \end{question}
Part~I of Theorem~\ref{t5} shows that there exists a continuous linear operator $S$ on $\ell_2$ such that $M_S=\{1,2\}$, where
$M_S=\{a>0:aS\ \text{is hypercyclic}\}$. Using the same basic idea as in the proof of Theorem~\ref{t5}, one can construct an invertible bilateral weighted shift $S$ on $\ell_2({\mathbb Z})$ such that $M_S$ is a dense subset of an interval and has zero Lebesgue measure. In particular, $M_S$ and its complement are both dense in this interval. It is also easy to show that for any ${\cal F}$-space $X$ and any $T\in L(X)$, $M_T$ is a $G_\delta$-set. If $X$ is a Banach space, then $M_T$ is separated from zero by the number $\|T\|^{-1}$. These observations naturally lead to the following question.
\begin{question}\label{q0} Characterize subsets $A$ of ${\mathbb R}_+$ for which there is $S\in L(\ell_2)$ such that $A=M_S$. In particular, is it true that for any $G_\delta$-subset $A$ of ${\mathbb R}_+$ such that $\inf A>0$, there exists $S\in L(\ell_2)$ for which $A=M_S$? \end{question}
In the proof of Part~II of Theorem~\ref{t5} we constructed an invertible bilateral weighted shift $T$ on $\ell_2({\mathbb Z})$ such that $M_T=(1/2,2)$ and any subset $A$ of $(1/2,2)$ such that the family $\{aT:a\in A\}$ has a common hypercyclic vector must be of zero Lebesgue measure. It is also easy to see that our $T$ enjoys the following extra property. Namely, if $E=\hbox{\tt span}\,\{e_n:n\in{\mathbb Z}\}$ and $x\in E$, then for $1/2<\alpha<\beta<2$, we have $\alpha^{-m_k}T^{m_k} x\to 0$ and $\beta^{m_k}T^{-m_k}x\to 0$ with $m_k=2^{3k^2}$. This shows that the convergence to zero condition in Corollary~\ref{co3} can not be replaced by convergence to 0 of a subsequence. Note that, according to the hypercyclicity criterion \cite{bp}, the latter still implies hypercyclicity of all relevant scalar multiples of $T$.
Recall that for $0<s\leqslant 1$ the {\it Hausdorff outer measure} $\mu_s$ on ${\mathbb R}$ is defined as $\mu_s(A)=\lim\limits_{\delta\downarrow0}\mu_{s,\delta}(A)$ with $\mu_{s,\delta}(A)=\inf\sum(b_j-a_j)^s$, where the infimum is taken over all sequences $\{(a_j,b_j)\}$ of intervals of length $\leqslant\delta$ whose union contains $A$. The number $\inf\{s\in(0,1]:\mu_s(A)=0\}$ is called the {\it Hausdorff dimension} of $A$. With essentially the same proof, Lemma~\ref{inter} can be strengthened in the following way.
\begin{lemma} \label{inter1} Let $X$ be a locally convex topological vector space, $T\in L(X)$, $s\in(0,1]$, $A\subseteq (0,\infty)$ and $x$ be a common hypercyclic vector for the family $\{cT:c\in A\}$. Assume also that there exists a non-empty open subset $U$ of $X$ such that $\smash{\sum\limits_{n\in Q_U}n^{-s}<\infty}$, where $Q_U$ is defined in $(\ref{qu})$. Then $\mu_s(A)=0$. \end{lemma}
Using Lemma~\ref{inter1} instead of Lemma~\ref{inter}, one can easily see that the operator $T$ constructed in the proof of Part~II of Theorem~\ref{t5} has a stronger property. Namely, any $A\subset{\mathbb R}_+$ such that the family $\{cT:c\in A\}$ is hypercyclic has zero Hausdorff dimension.
Theorem~CMP guarantees existence of common hypercyclic vectors for all non-identity operators of a universal strongly continuous semigroup $\{T_t\}_{t\geqslant 0}$ on an ${\cal F}$-space. On the other hand, Theorem~CS shows that the non-identity elements of the 2-parametric translation group on ${\cal H}({\mathbb C})$ have a common hypercyclic vector. The latter group enjoys the extra property of depending holomorphically on the parameter. Note that Theorem~\ref{t1} strengthens this result.
\begin{question}\label{q1} Let $X$ be a complex Fr\'echet space and $\{T_z\}_{z\in{\mathbb C}}$ be a holomorphic strongly continuous operator group. Assume also that for each $z\in{\mathbb C}^\star$, the operator $T_z$ is hypercyclic. Is it true that the family $\{T_z:z\in{\mathbb C}^\star\}$ has a common hypercyclic vector? \end{question}
\begin{question}\label{q1a} Let $X$ be a complex Fr\'echet space and $\{T_z\}_{z\in{\mathbb C}}$ be a holomorphic strongly continuous operator group. Assume also that for each $z,a\in{\mathbb C}^\star$, the operator $aT_z$ is hypercyclic. Is it true that the family $\{aT_z:a,z\in{\mathbb C}^\star\}$ has a common hypercyclic vector? \end{question}
An affirmative answer to the following question would allow one to strengthen Theorem~\ref{t4}.
\begin{question}\label{q5} Let $T$ be a continuous linear operator on a complex separable Fr\'echet space $X$ and $0\leqslant a<b\leqslant\infty$. Assume also that for any $\alpha\in(a,b)$, the sets $$
E_{\alpha}=\hbox{\tt span}\,\Biggl(\bigcup_{|z|<\alpha}\hbox{\tt ker}\,(T-zI)\Biggr)\quad
\text{and}\quad F_{\alpha}=\hbox{\tt span}\,\Biggl(\bigcup_{|z|>\alpha}\hbox{\tt ker}\,(T-zI)\Biggr) $$ are both dense in $X$. Is it true that the family
$\{zT:b^{-1}<|z|<a^{-1}\}$ has common hypercyclic vectors? \end{question}
It is worth noting that, according to the Kitai Criterion, for $T$ from the above question $zT$ is hypercyclic for any $z\in{\mathbb C}$ with
$b^{-1}<|z|<a^{-1}$. It also remains unclear whether the natural analog of Theorem~\ref{t2} holds in the case ${\mathbb K}={\mathbb R}$. For instance, the following question is open.
\begin{question}\label{q3} Does there exist a continuous linear operator $T$ on a real Fr\'echet space such that the family $\{aT+bI:a>0,\ b\in{\mathbb R}\}$ has a common hypercyclic vector? \end{question}
{\bf Acknowledgements.} \ The author would like to thank Richard Aron for interest and helpful comments.
\small\rm
\vskip1truecm
\scshape
\noindent Stanislav Shkarin
\noindent Queen's University Belfast
\noindent Department of Pure Mathematics
\noindent University road, Belfast, BT7 1NN, UK
\noindent E-mail address: \qquad {\tt s.shkarin@qub.ac.uk}
\end{document}
\begin{document}
\title{Flow polynomials as Feynman amplitudes and their $\alpha$-representation
} \begin{abstract}
Let $G$ be a connected graph; denote by $\tau(G)$ the set of its spanning
trees. Let $\mathbb F_q$ be a finite field, $s(\alpha,G)=\sum_{T\in\tau(G)}
\prod_{e \in E(T)} \alpha_e$, where ${\alpha_e\in \mathbb F_q}$. Kontsevich
conjectured in 1997 that the number of nonzero values of $s(\alpha, G)$ is
a polynomial in $q$ for all graphs. This conjecture was disproved by
Brosnan and Belkale. In this paper, using the standard technique
of the Fourier transformation of Feynman amplitudes, we express the flow
polynomial $F_G(q)$ in terms of the ``correct'' Kontsevich formula.
Our formula represents~$F_G(q)$ as a linear combination of Legendre
symbols of $s(\alpha, H)$ with coefficients $\pm 1/q^{(|V(H)|-1)/2}$, where~$H$
is a contracted graph of~$G$ depending on~$\alpha\in \left(\mathbb F^*_q\right)^{E(G)}$,
and $|V(H)|$ is odd. The case $q=5$ is the least one for which
all coefficients in the linear combination are positive. This allows us to hope
that the obtained result can be applied to prove the Tutte 5-flow conjecture. \end{abstract}
{\it Key words:} flow polynomial, Kontsevich conjecture, Laplacian matrix,
Feynman amplitudes, Legendre symbol, Tutte 5-flow conjecture.
\section{Introduction. The statement of the main result}
Let $q$ be a positive integer and let $A_q$ be an arbitrary abelian group consisting of $q$ elements; we usually use the additive group of the field ${\mathbb F}_q$ for $A_q$; in this case $q=p^d$, where $p$ is prime and $d\in\mathbb N$. Let $G(V,E)$ be a connected multigraph without loops; let $V(G)$ denote the set of its vertices and $E(G)$ the set of its edges. When considering the initial graph, we sometimes omit the symbol~$G$ in the notation. In certain cases we need to indicate the orientation of graph edges, so we denote the origin of an edge $e\in E(G)$ by $i(e)$ and its terminus by $f(e)$.
Recall that the chromatic polynomial $P_G(q)$ counts the number of proper vertex colorings with~$q$ colors. Define the norm of an element~$y$ of the group~$A_q$ as \[
||y|| = \left\{ \begin{array}{ll} 0, & y = 0, \\ 1, & y \ne 0, \end{array} \right. \] where the symbol $0$ in the right-hand side of the equality $y=0$ is the neutral element of the group. Let us associate each vertex~$v$ with its color~$x_v$. Evidently, a coloring is proper if and only if \[
\prod_{e\in E(G)} || x_{i(e)} - x_{f(e)}||=1, \] and otherwise the product equals 0.
Using this fact, we get \[
P_G (q) = \sum_{\begin{subarray}{c}
x_v \in A_q \\ \forall v \in V(G)\end{subarray} }
\prod_{e\in E(G)} || x_{i(e)} - x_{f(e)}||. \] This representation allows us to treat chromatic polynomials as vacuum Feynman amplitudes in the coordinate space (see Section~2 for details).
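This coloring-sum representation is straightforward to evaluate by brute force for small graphs. The sketch below is ours and uses $A_q={\mathbb Z}_q$; it checks the sum against the known chromatic polynomial $q(q-1)(q-2)$ of the triangle.

```python
from itertools import product

def chromatic_value(edges, num_vertices, q):
    # P_G(q) as the sum over all colorings x: V -> Z_q of
    # prod_e ||x_{i(e)} - x_{f(e)}||, where ||y|| = 0 iff y = 0;
    # the product is 1 exactly when the coloring is proper
    total = 0
    for x in product(range(q), repeat=num_vertices):
        total += all(x[u] != x[v] for (u, v) in edges)
    return total

triangle = [(0, 1), (1, 2), (0, 2)]
for q in range(1, 7):
    assert chromatic_value(triangle, 3, q) == q * (q - 1) * (q - 2)
```

The brute force is exponential in $|V(G)|$, so it is only a sanity check, not a practical algorithm.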
Let us define the delta function on an abelian group ($x\in A_q$) by the formula $$ \delta (x) = \left\{ {\begin{array}{ll} 0, & {x \ne 0}, \\ 1, & {x = 0}, \end{array} } \right. $$
i.e., in our case $\delta(x)=1-||x||$. Let $(\varepsilon_{v e})_{v\in V,e\in E}$ be the incidence matrix of an arbitrary orientation of the graph $G$; it obeys the formula $$ \varepsilon_{v e} = \left\{
\begin{array}{ll}
-1, & \mbox{if $i(e)=v$}, \\
1, & \mbox{if $f(e)=v$}, \\
0, & \mbox{if $e$ is nonincident to $v$}.
\end{array} \right. $$
Let us associate each edge~$e$ of the graph~$G$ with an element $k_e$ of the group~$A_q$ so that $\sum_{e\in E}\varepsilon_{v e} k_e=0$ for every $v\in V$. Such an assignment is called a flow. The number of everywhere nonzero flows is a polynomial in~$q$; it is called the flow polynomial~\cite{distel,Tutte}. Therefore, the flow polynomial obeys the formula \begin{equation} \label{eq:1}
F_G(q) = \sum_{
\begin{subarray}{c} k_e \in A^*_q \\ \forall e\in E(G) \end{subarray}}
{\prod_{v \in V(G)} {\delta \left( {\sum_{e\in E(G)}
{\varepsilon_{v e} } k_e } \right)} }, \end{equation} (here $A_q^*$ is the collection of nonzero elements of the group~$A_q$). Another variant of formula~(\ref{eq:1}) takes the form \begin{equation} \label{FGq}
F_G(q) = \sum_{
\begin{subarray}{c} k_e \in A_q \\ \forall e\in E(G) \end{subarray}}
\prod_{e \in E(G)} ||k_e|| \prod_{v \in V(G)} \delta \left( {\sum_{e\in E(G)}
\varepsilon_{v e} k_e } \right). \end{equation} Formula~(\ref{FGq}) allows us to treat flow polynomials as vacuum Feynman amplitudes in the impulse space (see Section~2). Using the technique of Feynman amplitudes, we can get a new representation for the flow polynomial. Let us now state the main result of this paper.
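Formula~(\ref{eq:1}) likewise lends itself to a direct check on small multigraphs. In the sketch below (ours; $A_q$ is taken to be $\mathbb Z_q$, and each edge (i, f) is oriented from i to f) the nowhere-zero flows are counted by brute force:

```python
from itertools import product

def flow_count(n_vertices, edges, q):
    """Count nowhere-zero Z_q-flows: assignments k_e in Z_q \\ {0} with
    zero net flow sum_e eps_{ve} k_e at every vertex, as in formula (1)."""
    total = 0
    for k in product(range(1, q), repeat=len(edges)):
        net = [0] * n_vertices
        for ke, (i, f) in zip(k, edges):
            net[i] -= ke   # eps_{ve} = -1 at the origin i(e)
            net[f] += ke   # eps_{ve} = +1 at the terminus f(e)
        if all(v % q == 0 for v in net):
            total += 1
    return total

# Triangle K3: F_{K3}(q) = q - 1
print(flow_count(3, [(0, 1), (1, 2), (0, 2)], 5))  # 4
```

A graph with a bridge, e.g. a single edge on two vertices, correctly yields 0, since a bridge admits no nowhere-zero flow.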
The main theorem concerns the case $q = p^d$, where $p$ is an odd prime and $d\in\mathbb N$. Denote by $\eta$ the multiplicative quadratic character of the field~$\mathbb{F}_q$: $\eta(0)=0$, and otherwise $\eta(x)=1$ or $\eta(x)=-1$ depending on whether or not $x$ is a square in the field $\mathbb F_q$. For $d=1$ the function $\eta$ coincides with the Legendre symbol modulo the prime~$p$. Denote by $g(q)$ the quadratic Gaussian sum of the field~$\mathbb{F}_q$. It obeys the formula \begin{equation} \label{gq} g(q) = \left\{ \begin{array}{ll}
(-1)^{d-1}\sqrt{q},& \textrm{if}~p \bmod 4= 1, \\
(-1)^{d-1} i^d \sqrt{q},& \textrm{if}~p \bmod 4=3, \end{array} \right. \end{equation} where $i$ is the imaginary unit.
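For $d = 1$, formula~(\ref{gq}) reduces to the classical evaluation of the quadratic Gauss sum, which is easy to check numerically. The sketch below is our illustration, with $\chi_1(x) = \exp(2\pi i x/p)$ as defined in Section~3:

```python
import cmath

def legendre(x, p):
    """eta for d = 1: the Legendre symbol modulo the odd prime p."""
    if x % p == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def gauss_sum(p):
    """g(p) = sum over x in F_p^* of eta(x) chi_1(x)."""
    return sum(legendre(x, p) * cmath.exp(2j * cmath.pi * x / p)
               for x in range(1, p))

# Formula (gq) with d = 1: g(p) = sqrt(p) for p = 1 (mod 4),
# and g(p) = i sqrt(p) for p = 3 (mod 4).
print(abs(gauss_sum(5) - 5 ** 0.5) < 1e-9)        # True
print(abs(gauss_sum(7) - 1j * 7 ** 0.5) < 1e-9)   # True
```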
Let all edges of the graph~$G$ be equipped with nonzero weights $\alpha_e\in {\mathbb F}_q^*$ (${\mathbb F}_q^*$ is the set of nonzero elements of the field). Denote by $G/W$ the graph obtained from $G$ by contracting the vertex subset $W$ ($W\subseteq V$) into a single vertex; note that $G/W$ contains no internal edges of the induced subgraph~$H(W)$. The weights $\alpha_e$ of the edges of the graph~$G/W$ equal the weights of the corresponding edges of the graph~$G$.
Let $\tau(G)$ be the set of spanning trees of the graph~$G$. Consider sums \begin{equation} \label{eq:2} s(\alpha,G)=\sum_{T \in \tau(G)}
{\prod_{e\in E(T)} {\alpha_e } }, \end{equation} which, evidently, are elements of the field ${\mathbb F}_q$; if the graph~$G$ consists of one vertex, then by definition we put $s(\alpha,G)=1$.
Denote by $W^*=W^*(\alpha,G)$ any nonempty subset $W$ of the vertex set $V(G)$ of minimum cardinality for which the sum $s(\alpha,G/W)$ differs from zero. Evidently, if $|W|=1$ then $G/W\equiv G$. Note also that the number of edges in each tree $T\in\tau(G/W^*)$ equals $|V|-|W^*|$; denote this difference by~$r^*(G,\alpha)$. \begin{ter}[The main theorem] \label{ter:1} Let $G$ be an arbitrary connected multigraph and let $q = p^d$ with odd prime $p$ and $d\in\mathbb N$. Then \begin{equation} \label{eq:3} F_G (q) = \sum_{\alpha \in \left(\mathbb F^*_q\right)^{E(G)}} {\eta \left(s(\alpha,G/W^*)\right) \left[ \frac{g(q)}q \right]^{r^*(G,\alpha)} }. \end{equation} \end{ter}
Note that $s(\alpha,G)$ is a cofactor of any element of the weighted Laplacian matrix~$L$ of the graph $G$. As we prove in Section~4, $s(\alpha,G/W^*)$ coincides with a nondegenerate principal minor of~$L$ of maximal order. Therefore, one can interpret our theorem as a representation of the flow polynomial as a linear combination of Legendre symbols of minors of the Laplacian matrix.
Note that in what follows we assume that the multigraph has no loops, though this is not explicitly stated in Theorem~\ref{ter:1}. The point is (as one can easily see) that removing a loop from the graph divides both the left- and the right-hand side of equality~(\ref{eq:3}) by exactly $q-1$. So the proof of the theorem for an arbitrary multigraph reduces to the proof for a multigraph without loops.
Consider the graph~$K_3$ as the simplest illustration of Theorem~\ref{ter:1}. Its flow polynomial obeys the formula $F_{K_3}(q)=q-1$. One can prove (see Propositions~\ref{utv:1}--\ref{utv:2} in Section~7) that in sum~(\ref{eq:3}) the terms corresponding to the case $r^*(G,\alpha)=1$ cancel each other out, and this sum contains no terms corresponding to the case $r^*(G,\alpha)=0$. Therefore, the assertion of Theorem~\ref{ter:1} for~$K_3$ is equivalent to the equality \begin{equation} \label{K3} q-1 = \sum_{\alpha_1,\alpha_2,\alpha_3 \in \mathbb F^*_q} \eta(\alpha_1\alpha_2+\alpha_1\alpha_3+\alpha_2\alpha_3)
\left[ \frac{g(q)}q\right]^2. \end{equation}
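Identity~(\ref{K3}) can be verified numerically for small primes. In the sketch below (our illustration, for $d=1$ only) we use the consequence of formula~(\ref{gq}) that $[g(q)/q]^2 = \eta(-1)/q$ when $q=p$ is prime, so (\ref{K3}) becomes $\sum\eta(\alpha_1\alpha_2+\alpha_1\alpha_3+\alpha_2\alpha_3) = \eta(-1)\,p(p-1)$:

```python
def legendre(x, p):
    """The quadratic character eta of F_p (d = 1): the Legendre symbol."""
    if x % p == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def k3_sum(p):
    """The sum in (K3) without the factor (g(q)/q)^2: the sum of
    eta(a1*a2 + a1*a3 + a2*a3) over nonzero a1, a2, a3 in F_p."""
    return sum(legendre(a1 * a2 + a1 * a3 + a2 * a3, p)
               for a1 in range(1, p)
               for a2 in range(1, p)
               for a3 in range(1, p))

# Expected: eta(-1) * p * (p - 1), i.e. -6 for p = 3 and 20 for p = 5.
print(k3_sum(3), k3_sum(5))  # -6 20
```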
A natural generalization of the notion of a flow polynomial for the case of an arbitrary matroid is the notion of the characteristic function of the dual matroid~\cite{aigner}. We discuss the generalization of Theorem~\ref{ter:1} for arbitrary matroids representable over the field~$\mathbb F_q$ in a separate publication.
The paper is structured as follows. In Section~2 we briefly review Feynman amplitudes and motivate our interest in flow polynomials. This section is not necessary for a formal understanding of the proof of Theorem~\ref{ter:1}, but it is useful for understanding its origins and the prospects of the technique. In Section~3 we recall some properties of the Fourier transformation over a finite field and prove the key lemma, which represents $F_G (q)$ as a double sum, taken first over $\alpha$ and then over~$k$. In Section~4 we prove that the matrix of the quadratic form in the exponent of the function summed with respect to~$k$ is the Laplacian matrix, and recall its combinatorial properties related to the evaluation of its minors. In Section~5 we show how to calculate multidimensional Gaussian sums over a finite field. Finally, in Section~6 we use the obtained results to prove Theorem~\ref{ter:1}; note that the proof is one paragraph long. In Section~7 we simplify formula~(\ref{eq:3}), using, in particular, the fact that its terms corresponding to odd values of $r^*(G,\alpha)$ cancel each other out. We also discuss possible applications of this formula to the proof of the Tutte 5-flow conjecture and give results of calculations for some graphs with~$q=5$.
\section{Some properties of Feynman amplitudes}
The simplest case where we are faced with vacuum Feynman amplitudes (FA) consists in the calculation of mean values \begin{equation} \langle \exp(-\sum_{x=1}^n \epsilon\, \varphi^4(x)) \rangle_{\mu_0}, \label{phi4} \end{equation} where $\mu_0$ is the Gaussian measure with the pair correlation function (the so-called propagator) $\rho(x,y)$; in formula~(\ref{phi4}), $x,y\in\{1,\ldots,n\}$ and $\varphi(x)\in \mathbb R$. As is known (see, for example, \cite[section~2.2]{malyshev}), one can use the so-called pairing technique for calculating mean values with respect to a Gaussian measure; for example, $\langle \varphi^2 (x) \varphi^2 (y) \rangle_{\mu_0}=\rho(x,x)\rho(y,y)+2 \rho(x,y)^2$ (for exactly this reason the $2n$th moment of the standard Gaussian distribution equals $(2n-1)!!$).
Expanding the exponential in~(\ref{phi4}) in a series, at the $n$th order in $\epsilon$ we obtain regular graphs with~$n$ vertices of degree~4, labeled by variables like~$x,y$ (over which the sum is calculated), whose edges contribute factors $\rho(x,y)$ to the product of propagators (which is then summed).
Actually, Feynman used this technique in the case of a functional measure, where the index~$x$ itself takes values in a continuum such as~${\mathbb R}^4$ rather than in a finite set, and the summation with respect to $x$ is replaced by integration. In the so-called scalar models of quantum field theory~\cite{smirnov} the propagator (in the coordinate representation) is set to $||x-y||^\lambda$ (here $x,y\in {\mathbb R}^4$). The definition of $P_G(q)$ proposed above is obtained by literally transferring all definitions given for variables taking values in~${\mathbb R}^4$ to the case of a finite group (a finite field).
Note that we do not consider the evaluation of the vacuum variant of real FA with the propagator $||x-y||^\lambda$, because the integral diverges for every~$\lambda$ and any graph~$G$. It seems possible to avoid this obstacle by performing the integration over all variables except one. Namely, since $\rho(x,y)$ depends only on the difference $x-y$, the result of the integration over $|V|-1$ variables is independent of the value of the remaining one. Just such an integral is called a vacuum FA in the case of other translation-invariant propagators in the coordinate space, which explains the terminology used. However, in the quantum field theory with the propagator $||x-y||^\lambda$ considered here, the vacuum integral is always infinite.
Nonvacuum FA with such a propagator are less trivial. There, several variables (more than one) are fixed, and the integration is performed over all the remaining ones. An analog of a nonvacuum FA for a finite field gives the number of proper colorings of the graph~$G$, provided that some of its vertices have already been assigned colors $y_v$. Note also that the graph precoloring extension problem is well known~\cite{precoloring}, and its complexity for various types of graphs has been studied rather thoroughly. However, the properties of the polynomial that gives the number of proper colorings in this case have been studied less.
We need these polynomials and their flow analogs for deducing explicit formulas for FA of a~$p$-adic argument; see section~3 of the paper~\cite{lerner} for the corresponding explicit formulas, which take into account the specifics of transferring the known properties of chromatic and flow polynomials to that case. Note that here, on the contrary, we consider the application of the FA technique in combinatorics.
FA in the impulse space have been used from the very beginning of the development of this technique. If in the coordinate representation we consider a propagator of the form $\rho(x,y)=f(x-y)$, then in the impulse representation each edge is associated with the function $\widehat f(k)$, where $\widehat f$ is the Fourier transform of~$f$. Since in ${\mathbb R}^4$ it holds that $\widehat{||\cdot||^\lambda}=c(\lambda)\,
||\cdot||^{-4-\lambda}$ (one can easily verify this property; note that the Fourier transform in the real-valued case is understood in the sense of generalized functions, though in what follows such details are inessential), each edge in the diagram of the impulse representation in the real-valued case also corresponds to some power of the norm. Formula~(\ref{FGq}) is a formal analog of the definitions of vacuum FA accepted in the so-called real scalar theory (subject to remarks on convergence analogous to those given above for the coordinate space).
In the nonvacuum case, in the impulse representation some variables (real-valued analogs of the variables~$k_e$ introduced above) are fixed, and the result of the integration with respect to all the remaining variables depends on them. The fact that nonvacuum FA in the impulse and coordinate representations are connected by the Fourier transformation explains the terms ``impulse/coordinate representation''. In Lemma~\ref{lem:1} below this connection is considered in the simplest case of finite fields.
Note that in the theory of FA it was discovered long ago that if one formally associates the same propagator with the edges of the graph of an FA in the coordinate and impulse spaces, then the vacuum amplitude in the coordinate space for a planar graph~$G$ coincides with the corresponding amplitude in the impulse space for the dual graph~$\widetilde G$~\cite{bleher}. But if the propagators are connected with each other by the Fourier transformation, then the vacuum amplitudes (when they are defined) in the coordinate and impulse representations coincide.
In combinatorics the coincidence of chromatic and flow polynomials of dual graphs has also long been known, namely, from the very inception of the concept of a flow polynomial. However, over a finite field the Fourier transform of a norm is not a norm. For this reason the connection between the flow and chromatic polynomials of one and the same graph, which results from the Fourier transformation of the norm, is more complex. We discuss this connection in a separate paper.
Let us return to real FA. If all propagators have the same exponent, then even in the nonvacuum case there arises a difficulty with the convergence of the finite-dimensional integrals that define the amplitudes. This difficulty can be eliminated by labeling each edge with ``its own'' complex exponent of the propagator and analytically continuing the result of the integration. This approach requires estimating the convergence domain in the space of the $\lambda_e$, which is also problematic. Moreover, in quantum field theory it is important to be able to find the poles of the analytic continuation. This can be done with the help of the so-called $\alpha$-representation.
In the theory with a homogeneous propagator, the technique of the $\alpha$-representation consists of the following steps. One represents each function of the form $||k_e||^{-4-\lambda}$ associated with the edge $e$ in the impulse space as $||k_e^2||^{(-4-\lambda)/2}$ and replaces it with the value at the point $k_e^2$ of the Fourier transform of the corresponding power of the norm of a new variable $\alpha_e$. When calculating an FA in the impulse space, changing the integration order (first integrating with respect to $k$ and then with respect to $\alpha$), we get a multidimensional Gaussian integral with respect to the variables~$k_e$, $e\in E$, with a special matrix depending on $\alpha$ (the weighted Laplacian matrix of the graph~$G$). The Gaussian integral is expressed via the square root of the determinant of this matrix; the latter obeys formula~(\ref{eq:2}) (for real $\alpha$). As a result we obtain a notation for the FA in the impulse space as an integral with respect to $\alpha$ of a function whose behavior can be studied easily (precisely this is called the $\alpha$-representation).
Formula~(\ref{eq:3}) gives an analogous representation (in the vacuum variant) for $F_G(q)$ in the case of a finite field $\mathbb F_q$. Instead of the integral we get the sum over all nonzero values of $\alpha_e$ in ${\mathbb F}_q$, and the result is expressed via the Legendre symbol of minors of the same Laplacian matrix as in the real-valued case.
The $\alpha$-representation of FA has also appeared in earlier papers in combinatorics. In December 1997, in a talk at the Gelfand seminar at Rutgers University, Maxim Kontsevich proposed the conjecture that for any connected multigraph~$G$ the number $N(G,q)$ of nonzero values of $s(\alpha,G)$ for $\alpha \in \left(\mathbb{F}_q\right)^{E(G)}$ is a polynomial in~$q$. Though the conjecture was never published, it aroused the interest of experts in combinatorics (see~\cite{stanleyArticle,chung}). Contrary to expectations, some time later this conjecture was refuted in a nonconstructive way \cite{belk}. Constructive examples of graphs for which the conjecture fails were found only comparatively recently~\cite{Schnetz}.
Note that if formula~(\ref{eq:3}) contained only those terms that correspond to the maximal value of the rank $r^*(G,\alpha)$ of the Laplacian matrix of the graph~$G$ (i.e., $r^*(G,\alpha)=|V|-1$), then (up to a coefficient) it would reduce to $$ \sum\limits_{ \alpha \in \left( \mathbb{F}_q^*\right)^{E(G)}} \eta \left(s(\alpha,G)\right). $$ We need not impose any additional constraint on the rank in this sum, because terms with $s(\alpha,G)=0$ contribute nothing to it. Note also that the value $N(G,q)$ mentioned in the Kontsevich conjecture is, evidently, representable as $$ N(G,q)=\sum\limits_{ \alpha \in \left( \mathbb{F}_q\right)^{E(G)}} \eta^2 \left(s(\alpha,G)\right). $$
The main result of this paper (the $\alpha$-representation) is a ``proper'' form of the Kontsevich conjecture: the linear combination obtained with the help of the technique of FA really is a polynomial in~$q$; more precisely, it is the flow polynomial of $G$. This polynomial is a source of many unsolved questions ``dual'' to the map coloring problem.
\section{The Fourier transformation and flow polynomials}
The Fourier transformation on the finite group $A_q$ is defined with the help of the notion of an additive character $\chi(x)$, $x\in A_q$. Recall that an additive character~\cite[chapter 5]{lidlNider} is a complex-valued function $\chi(x)$, $x\in A_q$, such that $\chi(x+y)= \chi(x)\chi(y)$ for any $x,y\in A_q$. Evidently, for the neutral element of the group it holds that $\chi(0)=1$; since every element of $A_q$ has finite order, each value of the character is a root of unity, so $|\chi(x)|=1$ for any $x\in A_q$.
The character that identically equals one is said to be trivial. One can easily prove (\cite[theorem~5.4]{lidlNider}) that for any nontrivial character it holds \begin{equation} \label{chizerro} \sum_{x\in A_q} \chi(x)=0. \end{equation}
In what follows, for $A_q$ we take only the additive group of the finite field $\mathbb F_q$, $q=p^d$. It is well known (\cite[theorem~5.7]{lidlNider}) that any character of the group $\mathbb{F}_q$ takes the form $\chi_k(x)=\chi_1(kx)$, where $k\in\mathbb{F}_q$ and $\chi_1(x)=\exp{(2\pi i\, \operatorname{Tr}(x)/p)}$, while $\operatorname{Tr}(x)=x+x^p+x^{p^2}+\ldots+x^{p^{d-1}}$. One can easily check that formula~(\ref{chizerro}) in this case can be written in the form \begin{equation} \label{eq:5} \sum_{k\in \mathbb F_q}{\chi_1(k t)}=q\,\delta(t). \end{equation}
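For $d = 1$ the trace is the identity map and $\chi_1(x) = \exp(2\pi i x/p)$; formula~(\ref{eq:5}) is then just the orthogonality of the $p$th roots of unity. A minimal numerical check (ours, not from the text):

```python
import cmath

def char_sum(t, p):
    """sum_{k in F_p} chi_1(k t) for d = 1, where Tr is the identity
    and chi_1(x) = exp(2 pi i x / p)."""
    return sum(cmath.exp(2j * cmath.pi * (k * t % p) / p) for k in range(p))

# Formula (5): the sum equals q * delta(t).
print(abs(char_sum(0, 7) - 7) < 1e-9)   # True
print(abs(char_sum(3, 7)) < 1e-9)       # True
```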
For any function $f(x)$ whose argument $x$ takes on values in~$\mathbb F_q$ we define the \emph{Fourier transformation} $\widehat{f}(k)$, $k\in \mathbb F_q$, as $$ \widehat{f}(k)=\sum_{x\in \mathbb F_q}{f(x)\chi_1( k x)}. $$ Formula~(\ref{eq:5}) easily implies the equality $$ f(x)=\frac1q\sum_{k\in \mathbb F_q}\widehat f(k)\chi_1(- k x) $$ (the inverse Fourier transformation formula). Note that formula~(\ref{eq:5}) means that (up to a multiplicative constant) the delta-function and the unit are connected with each other by the Fourier transformation.
\begin{lem} \label{lem:1} Let $G$ be a multigraph without loops. Then the product of characters has the following property: \begin{equation} \label{eq:6} \sum_{x \in \left(\mathbb F_q\right)^{V(G)}} \prod_{e\in E(G)} \chi_1((x_{i(e)} - x_{f(e)})k_e) = \prod_{v \in V(G)} q\,\delta \left( \sum_{e\in E(G)} {\varepsilon_{v e} } k_e \right). \end{equation} \end{lem} \textbf{Proof:} Evidently, $$ \prod_{e\in E(G)} {\chi_1( (x_{i(e)} - x_{f(e)} )k_e )}= \chi_1 \Big(\sum_{e\in E(G)} (x_{i(e)} - x_{f(e)} ) k_e \Big). $$ Note that the argument of the additive character is representable as $$ \sum_{e\in E(G)} {(x_{i(e)} - x_{f(e)} )k_e }= \sum_{v \in V(G)} {x_v \sum_{e\in E(G)}{\varepsilon _{v e} } k_e }. $$ Simple transformations give $$ \sum_{\begin{subarray}{c} x_v \in \mathbb F_q\\ \forall v \in V(G) \end{subarray}} \prod_{e\in E(G)} {\chi_1((x_{i(e)}-x_{f(e)} )k_e )}= \sum_{\begin{subarray}{c} x_v \in \mathbb F_q \\ \forall v \in V(G) \end{subarray}} \prod_{v \in V(G)} {\chi_1\Big(x_v \sum_{e\in E(G)} {\varepsilon_{v e} } k_e \Big)} = $$ $$ = \prod_{v \in V(G)}\left( {\sum_{x_v\in \mathbb F_q} {\chi_1\Big(x_v \sum_{e\in E(G)} {\varepsilon_{v e} k_e }\Big) } }\right). $$ Applying formula~(\ref{eq:5}), we get $$ \prod_{v \in V(G)}\left( {\sum_{x_v\in \mathbb F_q} {\chi_1\Big(x_v \sum_{e\in E(G)} {\varepsilon_{v e} k_e }\Big) } }\right) = \prod_{v \in V(G)} {q\,\delta \left( {\sum_{e\in E(G)} {\varepsilon_{v e} } k_e } \right)}. $$ $\square$
We can interpret Lemma~\ref{lem:1} as follows. Consider an FA in the coordinate space with the propagator $\delta(x-y)$ with ``external variables'' $z_e$, i.e., $$\sum_{x\in (\mathbb F_q)^{V(G)}} \prod_{e \in E(G)} \delta(x_{i(e)}-x_{f(e)}-z_e).$$ Then its Fourier transformation (with respect to the variables $z_e$, $e\in E$) coincides (up to a constant coefficient) with the FA in the impulse representation with the unit propagator. \begin{lem}[The Key lemma] \label{lem:2} Let $G$ be a connected multigraph without loops. Then \begin{equation} \label{eq:7}
F_G (q) = q^{ - |V(G)|} \sum_{\alpha \in (\mathbb F_q^*)^{E(G)}} \sum_{ x \in (\mathbb F_q)^{V(G)}} \chi_1\left( \sum_{e\in E(G)} (x_{i(e)} - x_{f(e)} )^2 \alpha_e \right). \end{equation} \end{lem} \textbf{Proof:} Let us apply Lemma~\ref{lem:1} for $k_e=\alpha_e$ and calculate the sum over all nonzero $\alpha_e$. We get \begin{equation} \label{eq:8} \frac {\sum\limits_{\begin{subarray}{c} \alpha_e \in \mathbb F_q^* \\ \forall e\in E(G)\end{subarray}} \sum\limits_{\begin{subarray}{c} x_v \in \mathbb F_q \\ \forall v \in V(G)\end{subarray}} \prod\limits_{e\in E(G)} \chi_1((x_{i(e)} - x_{f(e)} )\alpha_e)
}{q^{|V(G)|}} = \sum_{ \begin{subarray}{c} \alpha_e \in \mathbb F_q^* \\ \forall e\in E(G)\end{subarray}}\prod_{v \in V(G)} \delta \left( {\sum_{e\in E(G)} {\varepsilon_{v e} } \alpha_e } \right). \end{equation} By the definition of a flow polynomial~(\ref{eq:1}) the right-hand side of the latter equality coincides with~$F_G(q)$.
Let us now change the summation order in the left-hand side of formula~(\ref{eq:8}). We get $$ \sum_{\begin{subarray}{c} x_v \in \mathbb F_q \\ \forall v \in V(G)\end{subarray}} {\prod_{e\in E(G)} \sum_{\alpha_e \in \mathbb F_q^*} {\chi_1((x_{i(e)} - x_{f(e)} )\alpha_e) } }. $$
One can easily see that \begin{equation} \label{eq:9} \sum_{\alpha_e \in \mathbb F_q^*} \chi_1((x_{i(e)} - x_{f(e)} )\alpha_e)
= \left\{ \begin{array}{ll} q-1, & \text{if } x_{i(e)} = x_{f(e)},\\ -1, & \text{if }x_{i(e)} \ne x_{f(e)}. \end{array} \right. \end{equation} Indeed, let $x_{i(e)} - x_{f(e)} = y_e$. In view of~(\ref{eq:5}), for $y_e \ne 0 $ we get $\sum_{\alpha_e \in \mathbb F_q} \chi_1(y_e \alpha_e)= 0$, and otherwise $\sum_{\alpha_e \in \mathbb F_q} \chi_1(y_e \alpha_e)= q$. In~(\ref{eq:9}) we calculate the sum over all $\alpha_e$ except $\alpha_e=0$, which corresponds to a term equal to~1.
The right-hand side of formula~(\ref{eq:9}) is a function of $||y_e||$, say $h(||y_e||)$, namely, $$ h(z) = \left\{ {\begin{array}{ll} q-1,& \text{if }z=0,\\ -1, &\text{otherwise.} \end{array} } \right.$$
This is the key point of our proof. Using this fact, we replace $x_{i(e)} - x_{f(e)}$ in the left-hand side of~(\ref{eq:8}) with $(x_{i(e)} - x_{f(e)})^2$: the value $||x_{i(e)}-x_{f(e)}||$, and hence each summand, remains the same. We get the assertion of the lemma. $\square$
\section {The matrix tree theorem}
In Lemma~\ref{lem:2} we have represented the flow polynomial as a sum of characters of a quadratic form in the variables $x$, namely, $$\sum_{e\in E(G)} (x_{i(e)} - x_{f(e)})^2 \alpha_e .$$ The matrix of this quadratic form is the weighted Laplacian matrix of the graph~$G$. \begin{lem} \label{lem:3} Let $G$ be a multigraph without loops. Then the following identity holds: $$
\sum_{e\in E(G)} ( x_{i(e)} - x_{f(e)} )^2 \alpha _e = x_V^t L x_V, $$ where $x_V$ is the column vector of all variables associated with the vertices, the superscript $t$ denotes transposition, and $L$ is the so-called weighted Laplacian matrix of the graph~$G$, i.e., \begin{equation} \label{laplas} \ell_{kj} = \left\{ \begin{array}{ll}
- \sum_{e:\,\{i(e),f(e)\}=\{ k,j\} } \alpha _e,& k\ne j, \\
\sum_{e:\, k\in\{i(e),f(e)\} } \alpha _e ,& k = j. \end{array} \right. \end{equation} \end{lem}
Formula~(\ref{laplas}) means that each off-diagonal element of the Laplacian matrix equals the sum of the weights of \textit{all} edges connecting the corresponding vertices, multiplied by $(-1)$; each diagonal element equals the sum of the weights of all edges incident to the corresponding vertex. Therefore, the Laplacian matrix is symmetric and degenerate: the sum of the elements in each row of this matrix equals zero.
Since Lemma~\ref{lem:3} is well known, we do not give its proof here. The case of a simple graph with $\alpha_e\equiv 1$ is studied in~\cite[lemma~4.3]{MatricesAndGraphs}. In what follows, in order to indicate the dependence of the matrix $L$ on $\alpha_e$, $e \in E(G)$, we denote this matrix by $L(G,\alpha)$; its determinant equals zero. Below we also need principal minors of the matrix $L(G,\alpha )$ of lesser orders. Let us discuss their combinatorial sense.
Let us first consider minors of order~${|V|-1}$. This result is classical; it goes back to works of G.~Kirchhoff, J.~J.~Sylvester, and A.~Cayley published in the middle of the 19th century (see details in \cite[section~5.6, remarks to chapter~5]{stanleyBook},~\cite{stanleyArticle}). Recall that the symbol $s(\alpha,G)$ denotes sum~(\ref{eq:2}).
\begin{ter}[the matrix tree theorem,~\cite{stanleyBook, MatricesAndGraphs}] \label{ter:2} Let $G$ be a connected multigraph without loops, and let $L'(G,\alpha )$ be obtained from $L(G,\alpha )$ by deleting the $i$th row and the $i$th column, $i\in V(G)$. Then for any $i$ it holds that $$
\det L'(G,\alpha ) = s(\alpha,G), $$ where the sum $s(\alpha,G)$ obeys formula~(\ref{eq:2}). \end{ter}
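Theorem~\ref{ter:2} is easy to test numerically on a small weighted multigraph. The sketch below (our illustration; the graph and the weights are arbitrary choices) computes $s(\alpha,G)$ modulo a prime $p$ once by enumerating spanning trees and once as the determinant of the reduced Laplacian over $\mathbb F_p$:

```python
from itertools import combinations

def s_alpha(n, edges, weights, p):
    """s(alpha, G) mod p: the sum over spanning trees of G of the
    product of edge weights, via enumeration of (n-1)-edge subsets."""
    total = 0
    for subset in combinations(range(len(edges)), n - 1):
        parent = list(range(n))          # union-find to test acyclicity
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        is_tree, prod = True, 1
        for idx in subset:
            i, f = edges[idx]
            ri, rf = find(i), find(f)
            if ri == rf:                 # the subset contains a cycle
                is_tree = False
                break
            parent[ri] = rf
            prod = prod * weights[idx] % p
        if is_tree:
            total = (total + prod) % p
    return total

def reduced_laplacian_det(n, edges, weights, p):
    """det L'(G, alpha) mod p: the weighted Laplacian with row and
    column 0 deleted, evaluated by Gaussian elimination over F_p."""
    L = [[0] * n for _ in range(n)]
    for (i, f), a in zip(edges, weights):
        L[i][i] += a; L[f][f] += a
        L[i][f] -= a; L[f][i] -= a
    M = [[L[r][c] % p for c in range(1, n)] for r in range(1, n)]
    det, m = 1, n - 1
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            det = -det
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)     # inverse mod p (p prime)
        for r in range(c + 1, m):
            factor = M[r][c] * inv % p
            for cc in range(c, m):
                M[r][cc] = (M[r][cc] - factor * M[c][cc]) % p
    return det % p

# A multigraph on 3 vertices with a doubled edge {0, 1}:
edges   = [(0, 1), (1, 2), (0, 2), (0, 1)]
weights = [1, 2, 3, 4]
print(s_alpha(3, edges, weights, 7), reduced_laplacian_det(3, edges, weights, 7))  # 3 3
```

Here the tree sum is $1\cdot2+1\cdot3+2\cdot3+2\cdot4+3\cdot4=31$, and $31 \equiv 3 \pmod 7$.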
Let us now consider minors of lesser orders. We reduce this case to the previous theorem. \begin{ter} \label{ter:3} Let~$G$ be a connected multigraph without loops. The principal minor of the matrix $L(G,\alpha)$ that is formed by the rows and columns with numbers $\{i_1,\dots,i_k\}$ coincides with $s(\alpha,G/W)$, where $W=V\setminus\{i_1,\dots,i_k\}$. \end{ter} \textbf{Proof:} Indeed, all vertices of the graph $G/W$, except the one ``contracted'' vertex~$v'$, correspond to the vertices $i_1,\dots,i_k$ of the initial graph. The submatrix of the matrix $L(G,\alpha)$ formed by the rows and columns $i_1,\dots,i_k$ coincides with the submatrix obtained by deleting from the matrix $L(G/W,\alpha)$ the row and the column that correspond to the vertex $v'$. Applying the previous theorem to the graph $G/W$, we get the assertion of Theorem~\ref{ter:3}. $\square$
Note that the idea of using trees of the graph $G/W$ for calculating minors of the Laplacian matrix of the graph~$G$ is not new either; it goes back to works of Alexander~Kelmans~\cite{kelmans}. The following proposition is also an analog of Theorem~\ref{ter:3} (apparently, it goes back to works of Miroslav~Fiedler).
\begin{ter}[see \cite{MatricesAndGraphs}, theorem~4.7] \label{minor} Let~$G$ be a connected multigraph without loops. Then the principal minor of the matrix $L(G,\alpha)$ that is formed by deleting the rows and columns with numbers $i_1,\ldots,i_k$ equals $$ \sum_{\{T_1,\ldots,T_k\} }\
\prod_{e\in \cup_j E(T_j)} {\alpha _e } ; $$ here the sum is taken over all forests of the graph~$G$ that consist of~$k$ trees such that each tree $T_j$ contains exactly one vertex from the set $\{i_1,\dots,i_k\}$. \end{ter}
\section{Calculation of multidimensional Gaussian sums over a finite field} Let us explicitly evaluate the expression $
\sum_{x_V \in \mathbb{F}_q^{V} } \chi_1(x_V^t B x_V ). $
Here $B$ is an arbitrary symmetric matrix of dimension $|V|\times|V|$ with elements in the field~${\mathbb F}_q$, where $q$ is odd. In this section, for convenience, we identify the finite set~$V$ with the initial segment $\{1,\ldots,|V|\}$ of the natural numbers. As is well known, any symmetric matrix of rank~$r$ has a nondegenerate principal minor of order~$r$. This fact is used in the following lemma.
\begin{lem} \label{lem:4} Let $q=p^d$ with $p$ an odd prime, and let $\operatorname{rank} B = r$. Then $$ \sum_{x_V \in \mathbb{F}_q^V } \chi_1(x_V^t B x_V ) =
q^{|V|} \eta(\det B_r) \left[ {\frac{g(q)}{q}} \right]^r, $$ where $g(q)$ obeys formula~(\ref{gq}) and $\det B_r$ is an arbitrary nonzero principal minor of order~$r$. \end{lem}
Before proving Lemma~\ref{lem:4} let us recall some properties of its one-dimensional variant~\cite{ai,lidlNider}.
The value $\sum_{k\in\mathbb F^*_q} \eta ( k )\chi_1(k t)$ is called the Gaussian sum. By elementary properties of quadratic residues of a finite field (the numbers of nonzero squares and nonsquares coincide), the Gaussian sum vanishes for~${t=0}$. Otherwise, adding equality~(\ref{eq:5}) to it and noting that $1+\eta(k)$ is the number of solutions of $x^2=k$ in $\mathbb F_q$, we find its value as \[ \sum_{k\in\mathbb F_q} \chi_1(k^2 t) . \]
For $t\ne 0$ we can change the variable $x=kt$ in the initial definition of the Gaussian sum and thus find its value as $\eta(t) g(q)$, where $$ g(q)=\sum_{x\in\mathbb F^*_q} \eta ( x )\chi_1(x). $$ It is well known that~\cite[theorem~5.15]{lidlNider} the value $g(q)$ obeys formula~(\ref{gq}). As a result we get \begin{equation} \label{onedim} \sum_{k\in\mathbb F_q} \chi_1(k^2 t)=\left\{\begin{array}{ll} q, & \mbox{if $t=0$,}\\ \eta(t) g(q), & \mbox{if $t\ne 0$.} \end{array} \right. \end{equation}
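Identity~(\ref{onedim}) is easy to verify numerically for $d = 1$, where $\chi_1(x) = \exp(2\pi i x/p)$; a small sketch (our illustration, not from the text):

```python
import cmath

def legendre(x, p):
    if x % p == 0:
        return 0
    return 1 if pow(x, (p - 1) // 2, p) == 1 else -1

def g(p):
    """The Gaussian sum g(q) for d = 1."""
    return sum(legendre(x, p) * cmath.exp(2j * cmath.pi * x / p)
               for x in range(1, p))

def quad_char_sum(t, p):
    """sum_{k in F_p} chi_1(k^2 t)."""
    return sum(cmath.exp(2j * cmath.pi * (k * k * t % p) / p) for k in range(p))

# Formula (onedim): q for t = 0, and eta(t) g(q) otherwise.
p = 7
print(abs(quad_char_sum(0, p) - p) < 1e-9)                       # True
print(all(abs(quad_char_sum(t, p) - legendre(t, p) * g(p)) < 1e-9
          for t in range(1, p)))                                 # True
```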
\textbf{Proof of Lemma~\ref{lem:4}:}\quad 1. Let us first consider a particular case, namely, let $B$ be diagonal. This means that the left-hand side of the desired equality is representable as follows: $$ \sum_{x \in \mathbb{F}_q^V} \chi_1\left(
\sum_{i=1}^{|V|} b_{ii} x_{i}^2 \right). $$
Let us split the latter value into the product of two factors: one over the indices $i$ with $b_{ii}=0$ and one over those with $b_{ii}\ne 0$ (without loss of generality we assume that the nonzero coefficients occupy the first $r$ places). We get
$$\prod_{i = r + 1}^{|V|} \left( \sum_{x_{i} \in \mathbb F_q } \chi_1( b_{ii} x_i^2 ) \right) \prod_{i=1}^r \left(\sum_{x_{i} \in \mathbb F_q } \chi_1(b_{ii} x_{i }^2 ) \right) . $$
Let us apply formula~(\ref{onedim}). Represent the latter expression as $$
q^{|V|-r} \prod_{i = 1}^r \left( \eta ( b_{ii} ) g(q)
\right) = q^{|V|} \eta \left( {\det B_r} \right)\left[ \frac{g(q)}{q} \right]^r. $$
2. The general case can be reduced to the diagonal one. Any nondegenerate symmetric $r\times r$-matrix $B_r$ is reducible to the diagonal form over the field~$\mathbb F_q$. This means that there exists a nondegenerate matrix $Q_r$ of the same dimension such that $$
Q_r^t B_r Q_r = \Lambda,\quad
\mbox{where}~~\Lambda = \left( \begin{array}{ccc}
\lambda _1 & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & \lambda _r \end{array} \right)
~~\mbox{and $\lambda _i \ne 0$ for all~$i$.} $$ Determinants of these matrices satisfy the relation \begin{equation} \label{det}
\det \Lambda = (\det Q_r )^2 \det B_r\,. \end{equation}
In what follows without loss of generality we assume that the matrix $B_r$ is formed by first~$r$ rows and columns of the matrix~$B$. Let us now construct two more nondegenerate matrices; let their dimension equal $|V|\times|V|$. We get $$
\widehat Q = \left( \begin{array}{cccc}
Q_r & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1 \end{array} \right),\qquad \widehat Q^t = \left( {\begin{array}{cccc}
{Q_r^t } & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & 1 \end{array} } \right). $$ Let $D = \widehat Q^t B \widehat Q$. Note that properties of the rank imply that $\operatorname{rank}\, D=r$. Here the matrix~$D$ takes the form $$
D = \left( \begin{array}{cc}
\Lambda & D_{12} \\
D_{21} & D_{22} \end{array} \right), $$ where $D_{12}$, $D_{21}=D_{12}^t$ and $D_{22}$ are some (rectangular) matrices obtained as a result of the transformation. Note that since $\operatorname{rank} D = \operatorname{rank}\Lambda = r$ and $\Lambda$ is nondegenerate, a further congruence transformation with the unipotent matrix $\left(\begin{array}{cc} I & -\Lambda^{-1} D_{12} \\ 0 & I\end{array}\right)$ annihilates the blocks $D_{12}$, $D_{21}$ and $D_{22}$ without changing $\Lambda$; so we may assume that $D$ is diagonal.
The invertible change of variables $x_V=\widehat Q y_V$ gives $$ \sum_{x_V \in \mathbb{F}_q^V } \chi_1(x_V^t B x_V ) = \sum_{y_V \in \mathbb{F}_q^V } \chi_1(y_V^t \widehat Q^t B \widehat Q y_V ) = \sum_{y_V \in \mathbb{F}_q^V } \chi_1(y_V^t D y_V ). $$
Applying the formula proved in item~1 to the matrix~$D$ of rank~$r$, we get $$ \sum_{y_V \in \mathbb{F}_q^V } \chi_1(y_V^t D y_V ) =
q^{|V|} \eta ( \det D_r ) \left[ \frac{g(q)}{q} \right]^r; $$ here $\det D_r$ is a nonzero principal minor of the matrix $D$ of order $r$ (we use~$\det \Lambda$ as this minor). On the other hand, we know (see~(\ref{det})) that it obeys the formula $$ \det D_r =
\left( \det Q_r \right)^2 \det (B_r). $$ Using properties of the quadratic character $\eta$, we get $$
\eta \left( {\det D_r} \right) =
\eta \left( (\det B_r) \left( \det Q_r \right)^2 \right) =
\eta \left( \det B_r \right). $$ This means that $$
\sum_{x_V \in \mathbb{F}_q^V } {\chi_1(x_V^t B x_V ) } = q^{|V|} \eta(\det B_r) \left[ \frac{g(q)}{q} \right]^r. $$ $\square$
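As a numerical sanity check of the formula just proved (an illustration only, not part of the argument), the following Python sketch evaluates both sides directly for the hypothetical data $q=5$ and $B=\mathrm{diag}(1,2)$, a full-rank symmetric matrix, so that $r=2$ and $\det B_r=2$:

```python
import cmath
from itertools import product

q = 5                                                     # an odd prime
chi1 = lambda t: cmath.exp(2j * cmath.pi * (t % q) / q)   # additive character

def eta(t):
    # quadratic character of F_q: eta(t) = t^((q-1)/2) in {+1, -1} for t != 0
    return 1 if pow(t % q, (q - 1) // 2, q) == 1 else -1

# Gauss sum g(q) = sum_t chi1(t^2)
g = sum(chi1(t * t) for t in range(q))

# a full-rank symmetric matrix over F_5: B = diag(1, 2), so r = 2, det B_r = 2
B = [[1, 0], [0, 2]]
lhs = sum(chi1(sum(x[i] * B[i][j] * x[j] for i in range(2) for j in range(2)))
          for x in product(range(q), repeat=2))
rhs = q**2 * eta(2) * (g / q) ** 2
assert abs(lhs - rhs) < 1e-9   # both sides equal -5 in this example
```

Here $g=\sqrt5$ and $\eta(2)=-1$ over $\mathbb F_5$, so both sides come out to $-5$.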
\section{Proof of the main theorem}
In this section we prove the main theorem, treating it as a corollary of the results obtained in the three previous sections.
\textbf{Proof of Theorem~\ref{ter:1}:} Let us transform formula~(\ref{eq:7}) given in the Key lemma~\ref{lem:2}. Applying Lemma~\ref{lem:3} to the inner sum of characters of this expression, we represent this sum as follows: $$ \sum_{ x_V \in \mathbb{F}_q^V } \chi_1\left(\sum_{e\in E(G)} (x_{i(e)} - x_{f(e)} )^2 \alpha _e \right) = \sum_{ x_V \in \mathbb{F}_q^V } \chi_1(x_V^t L(G,\alpha) x_V) . $$ By using Lemma~\ref{lem:4} we deduce \begin{equation} \label {main}
F_G (q) = \sum_{\alpha \in \left( \mathbb{F}_q^*\right)^{E(G)}} \eta \left( \det L_r(G,\alpha) \right)\left[ \frac{g(q)}{q} \right]^r, \end{equation} where $r$ is the rank of the Laplacian matrix~$L(G,\alpha)$ and $\det L_r(G,\alpha)$ is its nonzero principal minor of order~$r$. Finally, applying Theorem~\ref{ter:3} to $\det L_r(G,\alpha)$, we arrive at the assertion of Theorem~\ref{ter:1}. $\square$
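Formula~(\ref{main}) can be checked numerically on a small example. The Python sketch below (an illustration with hypothetical data, not part of the proof) evaluates the right-hand side of~(\ref{main}) for the triangle $C_3$ over $\mathbb F_5$; for a cycle the flow polynomial is known to satisfy $F_{C_n}(q)=q-1$. The rank and a nonzero principal minor are found by brute force, using the fact that for a symmetric matrix in odd characteristic the rank equals the largest order of a nonzero principal minor:

```python
import cmath
from itertools import product, combinations

q = 5
chi = lambda t: cmath.exp(2j * cmath.pi * (t % q) / q)
eta = lambda t: 1 if pow(t % q, (q - 1) // 2, q) == 1 else -1   # t != 0 only
g = sum(chi(t * t) for t in range(q))                            # Gauss sum

edges = [(0, 1), (1, 2), (0, 2)]   # the triangle C_3
n = 3

def det_mod(M):
    """Determinant of a square matrix over F_q by Gaussian elimination."""
    M = [row[:] for row in M]
    d, m = 1, len(M)
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r][c] % q), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d = d * M[c][c] % q
        inv = pow(M[c][c], q - 2, q)
        for r in range(c + 1, m):
            f = M[r][c] * inv % q
            M[r] = [(M[r][k] - f * M[c][k]) % q for k in range(m)]
    return d % q

def r_and_minor(L):
    """Largest order r* of a nonzero principal minor, plus one such minor."""
    for r in range(n, 0, -1):
        for S in combinations(range(n), r):
            d = det_mod([[L[i][j] for j in S] for i in S])
            if d:
                return r, d
    return 0, 1

F = 0
for alpha in product(range(1, q), repeat=len(edges)):
    L = [[0] * n for _ in range(n)]
    for (u, v), a in zip(edges, alpha):
        L[u][u] += a; L[v][v] += a; L[u][v] -= a; L[v][u] -= a
    r, d = r_and_minor(L)
    F += eta(d) * (g / q) ** r
assert abs(F - (q - 1)) < 1e-6   # recovers F_{C_3}(5) = 4
```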
\section{Simplifications and applications of the main theorem} \subsection{A simplified variant of the main formula and the Tutte conjecture} In this subsection we simplify formula~(\ref{eq:3}) given in the main theorem and briefly discuss various related conjectures.
According to Tutte's 3-flow conjecture, for any 4-edge-connected graph~$G$ it holds that ${F_G(3)>0}$. This conjecture remained completely inaccessible for a long time; moreover, nothing was known even about $k$-edge-connected graphs. Recently Carsten Thomassen~\cite{thomassen} succeeded in proving the existence of a 3-flow for 8-edge-connected graphs. Sometime later this result was strengthened by a group of authors~\cite{combB}, namely, it was proved that the same is true for 6-edge-connected graphs (in this connection see the essay~\cite{lovasz}). However, there are still no arguments in favor of the most intriguing Tutte conjecture, according to which for \textit{any} connected graph~$G$ without bridges it holds that $F_G(5)>0$. As far as we know, there are no methods that take into account the specific nature of the odd prime~$q$ or of its powers when calculating $F_G(q)$.
Evidently, one can group the terms of the sum in the assertion of the main theorem according to the values~$r^*(G,\alpha)$, namely, $$
F_G (q) = \sum_{r=0}^{|V|-1} \left[ \frac{g(q)}{q} \right]^{r} S(r,q),\quad\mbox{ where } S(r,q)=\sum_{\begin{subarray}{c}\alpha:\, \alpha \in \left( \mathbb{F}_q^*\right)^{E(G)},\\\phantom{\alpha:}\, r^*(G,\alpha)=r\end{subarray}} {\eta \left(s(\alpha,G/W^*)\right)}. $$
\begin{utv} \label{utv:1} For any odd~$r$ it holds $S(r,q)=0$. \end{utv}
\textbf{Proof:} Indeed, let $\gamma$ be an arbitrary element of the field ${\mathbb F}_q$ such that $\eta(\gamma)=-1$. Let us associate with an arbitrary collection $\alpha_e$, $e\in E(G)$, the collection $\beta_e$, $e\in E(G)$, where
$\alpha_e=\gamma \beta_e$. Evidently, this correspondence is one-to-one. The number of vertices in the contracted graph $G/W^*$ equals ${|V|-|W^*|+1}$. Therefore any spanning tree $T$ of this graph has $|V|-|W^*|$ edges, consequently,
$\prod_{e\in E(T)} \alpha_e =\gamma^{|V|-|W^*|}\prod_{e\in E(T)} \beta_e$. Thus, we get $s(\alpha,G/W^*)=s(\beta,G/W^*)\, \gamma^{|V|-|W^*|}$. In view of the multiplicativity of the symbol $\eta$ and the equality $r^*(G,\alpha)=|V|-|W^*|=r$ we obtain $$ \sum_{\begin{subarray}{c}\alpha:\, \alpha \in \left( \mathbb{F}_q^*\right)^{E(G)},\\\phantom{\alpha:}\, r^*(G,\alpha)=r\end{subarray}} {\eta \left(s(\alpha,G/W^*)\right)} =\eta(\gamma)^r \sum_{\begin{subarray}{c}\beta:\, \beta \in \left( \mathbb{F}_q^*\right)^{E(G)},\\\phantom{\beta:}\, r^*(G,\beta)=r\end{subarray}} {\eta \left(s(\beta,G/W^*)\right)} =-S(r,q), $$ consequently, in this case $S(r,q)=-S(r,q)=0$. $\square$
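The proposition can be illustrated numerically on the smallest interesting case (an illustration only): the multigraph with two vertices joined by a double edge. There the Laplacian has $r^*=1$ exactly when $\alpha_1+\alpha_2\not\equiv0$, with nonzero principal minor $\alpha_1+\alpha_2$, so the odd term $S(1,5)$ below must vanish:

```python
q = 5
# quadratic character of F_5 (called only on nonzero arguments)
eta = lambda t: 1 if pow(t % q, (q - 1) // 2, q) == 1 else -1

# double edge between two vertices: rank 1 iff a1 + a2 != 0 (mod q),
# with nonzero principal minor a1 + a2
S1 = sum(eta(a1 + a2)
         for a1 in range(1, q) for a2 in range(1, q)
         if (a1 + a2) % q != 0)
assert S1 == 0   # odd r: the proposition predicts S(1,5) = 0
```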
Therefore, the main formula obtained in this paper is representable as follows: \begin{equation} \label{eq:4MT} F_G (q) = \sum_{\begin{subarray}{c} r:\, r \bmod 2 =
0, \\ \phantom{r:}\,r\leqslant |V|-1, \end{subarray}} \left[ \frac{g(q)}{q} \right]^{r} S(r,q). \end{equation}
In the remaining part of the paper we discuss various conjectures connected with the values $S(r,q)$ for $q=5$. Note that the coefficients of the values $S(r,q)$ in the linear combination~(\ref{eq:4MT}) with $q=5$ are always positive ($q=5$ is the first prime number with this property). Indeed, let $r=2 r'$; then for a prime $q$ such that $q \bmod 4 = 1$ we get $\left[ \frac{g(q)}{q} \right]^{2r'}=\frac{1}{q^{r'}}$.
Consequently, the flow polynomial $F_G(q)$ is automatically positive for $q=5$ if $S(r,5)> 0$ for at least one value of $r$ in formula~(\ref{eq:4MT}) and $S(r,5)\geqslant 0$ for all the remaining values of $r$. Unfortunately, as the examples given in Subsection~7.3 show, this is not necessarily true for an arbitrary graph. In the next subsection we give some facts that are useful in calculations.
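The positivity of these coefficients rests on the classical evaluation of the Gauss sum: for a prime $q\equiv1\pmod4$ one has $g(q)=+\sqrt q$. A quick numerical check over a few such primes:

```python
import cmath

# for primes q = 1 (mod 4) the Gauss sum equals +sqrt(q),
# so [g(q)/q]^(2r') = 1/q^(r') and the coefficients in (eq:4MT) are positive
for q in (5, 13, 17):
    g = sum(cmath.exp(2j * cmath.pi * (t * t % q) / q) for t in range(q))
    assert abs(g - q ** 0.5) < 1e-9
    assert abs((g / q) ** 2 - 1 / q) < 1e-9
```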
\subsection{The rank of the Laplacian matrix over a finite field} Here we study some easy properties of the Laplacian matrix which enable us to estimate the value $r^*(G,\alpha)$. Let us first pay attention to one simple property of formula~(\ref{main}). \begin{utv} \label{utv:2} If a graph~$G$ contains at least one nonmultiple edge (in particular, if the graph $G$ is simple and nontrivial), then formula~(\ref{eq:4MT}) contains no terms that correspond to $r=0$. \end{utv} \textbf{Proof:} Indeed, the rank of a matrix is nonzero whenever the matrix has at least one nonzero element. If the graph $G$ has at least one nonmultiple edge $e = (i(e), f(e))$, then the element of the matrix $L(G,\alpha)$ with indices $i(e)$, $f(e)$ equals $-\alpha_e \ne 0$. $\square$
In what follows we assume that $q=p$ is an odd prime number. The easy property given below allows us to estimate possible values of $|W^*|$ (or, equivalently, to estimate $r^*(G,\alpha)$), using the known integer value of the Laplacian determinant. \begin{lem} \label{linal} Let $p$ be a prime number and let $B$ be an $n\times n$ integer matrix of rank~$r$ when considered as a matrix over the residue field modulo~$p$. Then any minor of the matrix of order $r+i$, $i\in\mathbb N$, $i\leqslant n-r$, considered as an integer, is a multiple of~$p^i$. \end{lem} \textbf{Proof:} If the rank of the matrix~$B$ over the field~$\mathbb F_p$ equals~$r$, then any minor~$M$ of order $r+i$ contains at most $r$ linearly independent basic rows over the field~$\mathbb F_p$. Each of the remaining rows is representable, modulo~$p$, as a linear combination of these basic rows. Adding to a row a linear combination of the other rows does not affect the value of the determinant over any field. Consequently, when evaluating the determinant we can assume that all elements in these remaining rows equal zero modulo~$p$, i.e., they are multiples of $p$. Then the determinant of the minor~$M$ (over the field~$\mathbb R$) is a multiple of $p^j$, where $j\geqslant i$ is the number of these rows. $\square$
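A small numerical illustration of the lemma, with a hypothetical matrix: below, the third row of $B$ is congruent to the sum of the first two modulo $5$, so the rank of $B$ over $\mathbb F_5$ is $2$, and the lemma forces $5\mid\det B$ (here $r+i=3$, $i=1$):

```python
def rank_mod(M, p):
    """Rank of an integer matrix over F_p, by Gaussian elimination."""
    M = [[x % p for x in row] for row in M]
    rank, m = 0, len(M)
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, m) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], p - 2, p)
        for r in range(rank + 1, m):
            f = M[r][c] * inv % p
            M[r] = [(M[r][k] - f * M[rank][k]) % p for k in range(len(M[0]))]
        rank += 1
    return rank

def int_det(M):
    """Integer determinant by cofactor expansion (fine for small matrices)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               int_det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

p = 5
B = [[2, 3, 1], [4, 7, 1], [11, 10, 2]]   # row 3 = row 1 + row 2 (mod 5)
r = rank_mod(B, p)
assert r == 2
# the full 3x3 determinant is a minor of order r + 1, so p divides det(B)
assert int_det(B) % p == 0 and int_det(B) != 0
```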
\begin{sled}
Let $G$ be a connected multigraph without loops and let $q=p$ be an odd prime number. Assume that the determinant of the matrix $L(G,\alpha)$ is not a multiple of $p^{k+1}$, $k\in\mathbb N$. Then $|W^*|\leqslant k$ and $r^*(G,\alpha)\geqslant |V|-k$. \end{sled} \textbf{Proof:}
Indeed, by definition the principal minor of the matrix $L(G,\alpha)$ that corresponds to the vertices in $V-W^*$ differs from zero, and $W^*$ is the minimal-cardinality set with this property. Therefore $\operatorname{rank} L(G,\alpha)=|V|-|W^*|$. Consequently, by Lemma~\ref{linal}, if $|W^*|>k$, then $\det L(G,\alpha)$, being a minor of order $(|V|-|W^*|)+|W^*|$, would be a multiple of~$p^{|W^*|}$ and hence of~$p^{k+1}$. $\square$
\subsection{Examples of calculations for $q=5$} We have considered various graphs that are basic for Tutte's conjecture. Evidently, if the degree of some vertex $v$ of the graph $G$ equals~2, then the evaluation of the flow polynomial $F_G(q)$ for this graph reduces to the evaluation of $F_{G'}(q)$, where $G'$ is obtained from $G$ by contracting the vertex $v$ with an adjacent one. For this reason we have considered only graphs in which all vertex degrees are at least three.
It is well known (see \cite{lovasz}) that for $q\geqslant 3$ an example of a graph without bridges such that $F_G(q) = 0$ and the sum $|E|+|V|$ takes on the minimal value is a simple cubic graph (provided that such an example exists). Thus, for $q=3$ the graph $K_4$ serves as such an example, and for $q=4$ the Petersen graph does. According to the Tutte conjecture, for $q=5$ such an example does not exist. Therefore, for proving this conjecture it suffices to consider simple cubic graphs.
We have computed $S(r,5)$ for all such graphs with no more than 10 vertices. As it turned out, all such graphs with fewer than 10 vertices satisfy the condition $S(r,5)\geqslant 0$ for all~$r$. However, we have found several graphs with 10 vertices for which $S(6,5)< 0$. In particular, for the Petersen graph we have $S(2,5)=S(4,5)=0$, $S(6,5) = -384$, $S(8,5) = 151\,920$, and $F_G(5) = S(6,5)/125+S(8,5)/625=240$.
Moreover, one can find such examples even for simple noncubic graphs with fewer vertices. Thus, for the graph $K_{5}$ with one added vertex of degree~3 we have obtained the following values: $S(2,5) = -180$, $S(4,5) = 513\,300$, and $F_G(5) =S(2,5)/5+S(4,5)/25=20\,496$. One might think that the last nonzero term of the sequence $S(i,5)$, $i=2,4,\ldots$, is positive for any simple graph with more than two vertices, whatever its vertex degrees. However, this is not true. For the graph $K_{3,4}$ it holds that $S(2,5) = 612$, $S(4,5)=244\,860$, $S(6,5)=-8\,100$, and $F_G(5)=S(2,5)/5+S(4,5)/25+S(6,5)/125=9\,852$.
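The quoted values can be cross-checked against formula~(\ref{eq:4MT}) by exact rational arithmetic:

```python
from fractions import Fraction as Fr

# consistency check of the numbers quoted above via formula (eq:4MT)
assert Fr(-384, 125) + Fr(151920, 625) == 240                  # Petersen graph
assert Fr(-180, 5) + Fr(513300, 25) == 20496                   # K_5 plus a degree-3 vertex
assert Fr(612, 5) + Fr(244860, 25) + Fr(-8100, 125) == 9852    # K_{3,4}
```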
Therefore, the reason why, for any connected graph without bridges, the sum of Legendre symbols with coefficients $\frac15,\frac{1}{25}$, etc.\ given in Theorem~\ref{ter:1} equals a positive (integer) number (in accordance with the Tutte conjecture) remains unknown.
\end{document}
\begin{document}
\begin{abstract} In this paper, for a finite discrete field $F$, a nonempty set $\Gamma$, a weight vector $\mathfrak{w}=({\mathfrak w}_\alpha)_{\alpha\in\Gamma}\in F^\Gamma$, a self--map $\varphi:\Gamma\to\Gamma$ and the weighted generalized shift $\sigma_{\varphi,{\mathfrak w}}:F^\Gamma\to F^\Gamma$, we find necessary and sufficient conditions for the uniform dynamical system $(F^\Gamma,\sigma_{\varphi,{\mathfrak w}})$ to be Li--Yorke chaotic. Next we find necessary and sufficient conditions for $(F^\Gamma,\sigma_{\varphi,{\mathfrak w}})$ to be Devaney chaotic. \end{abstract} \maketitle
\noindent {\small {\bf 2020 Mathematics Subject Classification:} 37B02, 54H15 \\ {\bf Keywords:}} Devaney chaotic, Li--Yorke chaotic, sensitive, topological transitive, weighted generalized shift.
\section{Introduction} \noindent By a dynamical system $(X,f)$ we mean a topological space $X$ together with a continuous map $f:X\to X$. The family of all metric dynamical systems is a subclass of the family of all uniform dynamical systems, which is in its own turn a subclass of the family of all uniform transformation semigroups. Hence it is natural for ideas employed in metric dynamical systems to be extended to uniform dynamical systems, or for ideas adopted in uniform transformation semigroups to be restricted to uniform dynamical systems. Amongst the several properties introduced for uniform transformation semigroups we focus on Devaney and Li--Yorke chaos. In this text we study Devaney and Li--Yorke chaos in a subclass of uniform dynamical systems, the ``weighted generalized shifts''. \\ In classical form, for a compact metric space $(X,d)$, we call the dynamical system $(X,f)$ ``Devaney'' chaotic if it has the following properties~\cite{de, wang}: \begin{itemize} \item DPP (Dense Periodic Points). $Per(f)\:(=\{x\in X:\exists n\geq1\: f^n(x)=x\})$ is a dense subset of $X$, \item TT (Topological Transitivity). For all nonempty open subsets $U,V$ of $X$ there exists $n\geq1$ such that $f^n(U)\cap V\neq\varnothing$, \item SIC (Sensitivity to Initial Conditions). There exists $\delta>0$ such that for all $x\in X$ and every open neighbourhood $U$ of $x$ there exist $y\in U$ and $n\geq0$ such that $d(f^n(x),f^n(y))>\delta$. \end{itemize} Several authors have tried to extend the above concept of Devaney chaos to uniform dynamical systems/transformation (semi-)groups by redefining (see e.g.~\cite{baz, jaleb}): \begin{itemize} \item sensitivity in uniform dynamical systems/transformation (semi-)groups and \item periodic points in uniform transformation (semi-)groups. \end{itemize} Let us now turn to the other well--known notion of chaos, ``Li--Yorke'' chaos.
In its traditional form, a metric dynamical system is Li--Yorke chaotic if it contains an uncountable scrambled subset~\cite{li, li-sen}. One can extend this concept of a Li--Yorke chaotic dynamical system to Li--Yorke chaotic transformation (semi-)groups with special considerations on \\ $\bullet$ the phase (semi-)group (for the finitely generated case of the phase semigroup see \cite{chu}), or \\ $\bullet$ the phase space (uniform phase space), \\ or even both of the above. One may even consider Li--Yorke chaotic uniform transformation semigroups modulo an ideal~\cite{khodam}. \subsection*{Background on uniform spaces} For a collection $\mathcal F$ of subsets of $X\times X$, we say that $\mathcal F$ is a uniform structure on $X$ if (where $\Delta_X:=\{(x,x):x\in X\}$): \begin{itemize} \item $\forall\mathcal{O}\in\mathcal{F}\:\:\Delta_X\subseteq\mathcal{O}$, \item $\forall\mathcal{O},\mathcal{U}\in\mathcal{F}\:\:\mathcal{O}\cap\mathcal{U}\in\mathcal{F}$, \item $\forall\mathcal{O}\in\mathcal{F}\:\:\forall\mathcal{U}\subseteq X\times X\:\:(\mathcal{O}\subseteq\mathcal{U}\Rightarrow\mathcal{U}\in\mathcal{F})$, \item $\forall\mathcal{O}\in\mathcal{F}\:\:{\mathcal O}^{-1}\in \mathcal{F}$, \item $\forall\mathcal{O}\in\mathcal{F}\:\:\exists\mathcal{U}\in\mathcal{F}\:\:\mathcal{U}\circ\mathcal{U}\subseteq \mathcal{O}$. \end{itemize} Then $\{D\subseteq X:\forall x\in X\:\:\exists\mathcal{O}\in\mathcal{F}\:\:\mathcal{O}[x]\subseteq D\}$, where $\mathcal{O}[x]:=\{y\in X:(x,y)\in\mathcal{O}\}$ (for $\mathcal{O}\in\mathcal{F}$ and $x\in X$), is a topology on $X$, called the uniform topology on $X$ induced by $\mathcal F$. We say that a topological space $Y$ is uniformizable if there exists a uniform structure $\mathcal{H}$ on $Y$ such that the uniform topology induced by $\mathcal H$ on $Y$ is compatible with the original topology on $Y$; in this case we say that $\mathcal H$ is a compatible uniform structure on $Y$.
Let us recall that a compact Hausdorff topological space $Z$ has a unique compatible uniform structure $\{\mathcal{O}\subseteq Z\times Z:\mathcal{O}$ contains an open neighbourhood of $\Delta_Z\}$. For details on uniform spaces see \cite{dug}. \subsection*{Background on (weighted) generalized shifts} \noindent One--sided and two--sided shifts are amongst the most useful tools in different areas, including ergodic theory and dynamical systems; they have been studied and developed for more than 60 years, and one may find their first motivations in the works of Kolmogorov, Sinai, Ornstein and other mathematicians~\cite{casa, ornstein, walters}. One may find different generalizations of the above shifts depending on the point of view, e.g. as it has been mentioned in~2006~in~\cite{Marchenko}: ``\textit{Between 1938 and 1940 Jean Delsarte and B. M. Levitan developed the theory of generalized shift operators $T_x^y[f]$, which map functions $f(x)$ into functions of two variables $T_x^y[f]$ and satisfy four axioms that generalized the properties of ordinary shift. Among these axioms ...}''. However, in many works a special case of the above definition, i.e., the $p$~times iterated two--sided shift, has been considered as a generalized shift too~\cite{calcul}. In the following text we will use neither of the above two generalizations; we use the point of view, notation and definition of generalized shift established and introduced for the first time in~\cite{note} (again as a generalization of the one--sided and two--sided shifts), which has appeared in dynamical and non--dynamical papers (e.g.~\cite{abad, anna-b}): let us recall that for arbitrary nonempty sets $X,\Gamma$ and $\varphi:\Gamma\to\Gamma$, we call $\sigma_\varphi:X^\Gamma\to X^\Gamma$ with $\sigma_\varphi((x_\alpha)_{\alpha\in\Gamma})=(x_{\varphi(\alpha)} )_{\alpha\in\Gamma}$ for $(x_\alpha)_{\alpha\in\Gamma}\in X^\Gamma$, a generalized shift~\cite{note}.
\begin{definition} Suppose $M$ is a module over a ring $R$, $\Gamma$ is a nonempty set, $\mathfrak{w}=({\mathfrak w}_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$ is a ``weight vector'' and $\varphi:\Gamma\to\Gamma$ is arbitrary; we call $\sigma_{\varphi,\mathfrak{w}}:M^\Gamma\to M^\Gamma$ with $\sigma_{\varphi,\mathfrak{w}}((x_\alpha)_{\alpha\in\Gamma})=(\mathfrak{w}_\alpha x_{\varphi(\alpha)})_{\alpha \in\Gamma}$ for $(x_\alpha)_{\alpha\in\Gamma}\in M^\Gamma$ a weighted generalized shift. Note that for a topological module $M$, the weighted generalized shift $\sigma_{\varphi,\mathfrak{w}}:M^\Gamma\to M^\Gamma$ is continuous, where $M^\Gamma$ has been equipped with the product topology. \end{definition} \noindent From one point of view, weighted generalized shifts are just weighted composition operators~\cite{comp}, which are of central interest to many mathematicians; however, in the above form they have been introduced for the first time in~\cite{fweight} as a common generalization of ``generalized shifts'' and ``weighted shifts'', and the study of this concept has been continued in~\cite{weight}. \begin{convention} In the following text we consider a finite discrete abelian ring $R$ with unity $1$ and zero element $0$, an arbitrary set $\Gamma$ with at least two elements, a self--map $\varphi:\Gamma\to\Gamma$ and a weight vector $\mathfrak{w}=({\mathfrak w}_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$. Moreover, we equip $R^\Gamma$ with the product topology and the (unique) compatible uniform structure \begin{center} $\mathcal{K}=\{\mathcal{O}\subseteq R^\Gamma\times R^\Gamma:$ there exists a finite subset $M$ of $\Gamma$ with $\mathcal{O}_M\subseteq\mathcal{O}\}$ \end{center} where for $M\subseteq \Gamma$, we have \[\mathcal{O}_M:=\{((x_\alpha)_{\alpha\in\Gamma},(y_\alpha)_{\alpha\in\Gamma}) \in R^\Gamma\times R^\Gamma:\forall\alpha\in M\:\:(x_\alpha=y_\alpha)\}\:.
\] For $\theta\in\Gamma$ and $r\in R$, let $U(\theta,r)=\mathop{\prod}_{\alpha\in\Gamma}U_\alpha$ where $U_\theta=\{r\}$ and $U_\alpha=R$ for $\alpha\neq\theta$. So $\{U(\theta,r):\theta\in\Gamma,r\in R\}$ is a sub--base of product topology on $R^\Gamma$, note that for $r_1,\ldots,r_n\in R$ and distinct $\alpha_1,\ldots,\alpha_n\in\Gamma$, we have $\mathop{\bigcap}\limits_{1\leq i\leq n}U(\alpha_i,r_i)=\mathop{\prod}\limits_{\alpha\in\Gamma}V_\alpha$ for $V_{\alpha_i}=\{r_i\}$ ($1\leq i\leq n$) and $V_\alpha=R$ for $\alpha\neq\alpha_1,\ldots,\alpha_n$. \end{convention}
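For concreteness, here is a minimal Python sketch of a weighted generalized shift with hypothetical finite data $R=\mathbb Z/5\mathbb Z$, $\Gamma=\{0,\ldots,4\}$, $\varphi(a)=a+1 \pmod 5$ and $\mathfrak w=(1,2,0,3,4)$:

```python
# a minimal sketch with hypothetical data: R = Z/5Z, Gamma = {0,...,4}
q = 5
Gamma = range(5)
phi = lambda a: (a + 1) % 5        # a hypothetical self-map of Gamma
w = [1, 2, 0, 3, 4]                # a hypothetical weight vector

def sigma(x):
    """(sigma_{phi,w}(x))_a = w_a * x_{phi(a)} in Z/5Z."""
    return [w[a] * x[phi(a)] % q for a in Gamma]

x = [1, 1, 1, 1, 1]
assert sigma(x) == [1, 2, 0, 3, 4]   # each coordinate picks up its weight
assert sigma(sigma(x))[2] == 0       # w_2 = 0 pins coordinate 2 at zero
```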
\section{Sensitivity in $(R^\Gamma,\sigma_{\varphi,{\mathfrak w}})$} \noindent In a metric space $(X,d)$, for $\varepsilon>0$ let $\alpha_\varepsilon:=\{(x,y)\in X\times X:d(x,y)<\varepsilon\}$; then $\mathcal{F}_d:=\{\mathcal{O}\subseteq X\times X:\exists\varepsilon>0\:\:\alpha_\varepsilon\subseteq\mathcal{O}\}$ is a compatible uniform structure on $X$, and it is easy to see that $(X,f)$ is sensitive if and only if there exists $\mathcal{O}\in {\mathcal F}_d$ such that for each $x\in X$ and each open neighbourhood $U$ of $x$, there exist $y\in U$ and $n\geq0$ such that $(f^n(x),f^n(y))\notin\mathcal{O}$. Note that the sensitivity of $(X,f)$ depends on the chosen compatible metric $d$ on $X$ \cite{fed}. This comparison leads us to the following definition: \begin{definition} In a uniform space $(X,\mathcal F)$ we say the dynamical system $(X,f)$ is \begin{itemize} \item sensitive if there exists an entourage $\mathcal{O}\in\mathcal{F}$, such that for all $x\in X$ and all open
neighbourhood $U$ of $x$, there exists $y\in U$ and $n\geq0$ with
\linebreak $(f^n(x),f^n(y))\notin\mathcal{O}$~\cite[Definition 3]{wu} (see~\cite[Definition 4.2]{jaleb} too), \item strongly sensitive if there exists entourage $\mathcal{O}\in\mathcal{F}$, such that for all $x\in X$ and all open
neighbourhood $U$ of $x$, there exists $y\in U$ and $N\geq0$ with $(f^n(x),f^n(y))\notin\mathcal{O}$ for all $n\geq N$
(see~\cite[Definition 3.1]{abraham} too). \end{itemize} \end{definition} \noindent Note that the sensitivity of $(X,f)$ depends on the chosen compatible uniformity of $X$. However, since every compact Hausdorff space has a unique compatible uniform structure, in this case there is no need to specify the compatible uniform structure on the compact Hausdorff space $X$. \\ In this section we prove that for a finite field $R$, $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is (strongly) sensitive if and only if there exists a non--quasi--periodic point $\theta\in\Gamma$ such that for all $n\geq0$, $\mathfrak{w}_{\varphi^n(\theta)}\neq0$. \begin{remark} For a self--map $h:A\to A$, we say $a\in A$ is a: \\ $\bullet$ periodic point of $h$, if there exists $n\geq1$ such that $h^n(a)=a$, \\ $\bullet$ quasi--periodic point of $h$, if there exist $i>j\geq1$ such that $h^i(a)=h^j(a)$ (or equivalently $\{h^n(a)\}_{n\geq1}$ is finite) (known as: quasi--periodic point~\cite[Definition 2.1]{abad}, eventually periodic point~\cite{nathan}, pre--periodic point~\cite{ben} too), \\ $\bullet$ non--quasi--periodic point of $h$, if it is not a quasi--periodic point of $h$, or equivalently $\{h^n(a)\}_{n\geq1}$ is infinite (known as a wandering point too~\cite[Definition 2.1]{abad}). \end{remark} \begin{lemma}\label{salam-sen-10} If for all $\alpha\in\Gamma$: \\ $\bullet$ either $\alpha$ is a quasi--periodic point of $\varphi$, \\ $\bullet$ or $\alpha$ is a non--quasi--periodic point of $\varphi$ and there exists $n\geq0$ with ${\mathfrak w}_{\varphi^n(\alpha)}=0$, \\ then $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is not sensitive. \end{lemma}
\begin{proof} Suppose for each $\alpha\in\Gamma$ either $\alpha$ is a quasi--periodic point of $\varphi$ or there exists $n\geq0$ with ${\mathfrak w}_{\varphi^n(\alpha)}=0$. Consider $\mathcal{O}\in\mathcal{K}$; there exists a finite subset $M$ of $\Gamma$ with $\mathcal{O}_M\subseteq\mathcal{O}$. Let \begin{center} \begin{tabular}{cc} $\Lambda:=\{\varphi^n(\alpha):n\geq0$ and $\alpha\in M$ is a quasi--periodic point of $\varphi\}\cup$ & \\ $\{\varphi^n(\alpha):\alpha\in M$ is a non--quasi--periodic point of $\varphi$ & ($\divideontimes$) \\ and $0\leq n\leq\min\{k\geq0: \mathfrak{w}_{\varphi^k(\alpha)}=0\}\}$ &
\\ \end{tabular} \end{center} Then $\Lambda$ is a finite subset of $\Gamma$. Consider $(x_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$; then \[U=\{(y_\alpha)_{\alpha\in\Gamma}\in R^\Gamma:\forall\alpha\in \Lambda, \:y_\alpha=x_\alpha\}(= \mathop{\bigcap}\limits_{\alpha\in \Lambda}U(\alpha,x_\alpha))\] is an open neighbourhood of $(x_\alpha)_{\alpha\in\Gamma}$. Consider $(y_\alpha)_{\alpha\in\Gamma}\in U$; for $\lambda\in\Lambda$ and $n\geq0$ we have the following cases: \begin{itemize} \item[a.] $\lambda\in \Lambda$ is a quasi--periodic point of $\varphi$. In this case
$\{\varphi^i(\lambda):i\geq0\}\subseteq\Lambda$, thus $\varphi^n(\lambda)\in\Lambda$
and $x_{\varphi^n(\lambda)}=y_{\varphi^n(\lambda)}$ which shows
$\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}x_{\varphi^n(\lambda)}
=\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}y_{\varphi^n(\lambda)}$. \item[b.] $\lambda\in\Lambda$ is a non--quasi--periodic point of $\varphi$ and
$0\leq n\leq\min\{k\geq0:\mathfrak{w}_{\varphi^k (\lambda)}=0\}$.
In this case $\varphi^n(\lambda)\in\Lambda$ and using a similar method described in (a) we have
$\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}x_{\varphi^n(\lambda)}
=\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}y_{\varphi^n(\lambda)}$. \item[c.] $\lambda\in\Lambda$ is a non--quasi--periodic point of $\varphi$ and $n>\min\{k\geq0:\mathfrak{w}_{\varphi^k
(\lambda)}=0\}$. In this case
$\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}x_{\varphi^n(\lambda)}
=0=\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}y_{\varphi^n(\lambda)}$. \end{itemize} Using the above cases: \[\forall\lambda\in\Lambda\:\:\forall n\geq 0,\:\:(\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}x_{\varphi^n(\lambda)}
=\mathfrak{w}_\lambda\mathfrak{w}_{\varphi(\lambda)}\cdots\mathfrak{w}_{\varphi^{n-1}(\lambda)}y_{\varphi^n(\lambda)})\] i.e., for all $n\geq0$ we have: \\ $(\sigma^n_{\varphi,\mathfrak{w}}((x_\alpha)_{\alpha\in\Gamma}) , \sigma^n_{\varphi,\mathfrak{w}}((y_\alpha)_{\alpha\in\Gamma})) $ \begin{eqnarray*} & = & ((\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{n-1}(\alpha)}x_{\varphi^n(\alpha)})_{\alpha\in\Gamma}, (\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{n-1}(\alpha)}y_{\varphi^n(\alpha)})_{\alpha\in\Gamma})\\ & \in & \mathcal{O}_\Lambda\subseteq\mathcal{O}_M\subseteq\mathcal{O} \end{eqnarray*} Therefore for all $\mathcal{O}\in\mathcal{K}$ and $x\in R^\Gamma$, there exists an open neighbourhood $U$ of $x$ such that for all $y\in U$ and all $n\geq0$ we have $(\sigma^n_{\varphi,\mathfrak{w}}(x) , \sigma^n_{\varphi,\mathfrak{w}}(y))\in\mathcal{O}$; hence $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is not sensitive. \end{proof}
\noindent Let's recall that we say $x\in R\setminus\{0\}$ is invertible if there exists $y\in R\setminus\{0\}$ such that $xy=yx=1$.
\begin{lemma}\label{salam-sen-20} If there exists non--quasi--periodic point $\theta\in\Gamma$ such that for all $n\geq0$, $\mathfrak{w}_{\varphi^n(\theta)}$ is invertible, then $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is (strongly) sensitive. \end{lemma}
\begin{proof} Suppose $\theta$ is a non--quasi--periodic point of $\varphi$ (i.e., $\{\varphi^n(\theta)\}_{n\geq0}$ is a one--to--one sequence) such that for all $n\geq0$, $\mathfrak{w}_{\varphi^n(\theta)}$ is invertible. Consider $x=(x_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$ and an open neighbourhood $U$ of $x$; there exists a finite subset $M$ of $\Gamma$ such that $x\in\mathop{\bigcap}\limits_{\alpha\in M}U(\alpha,x_\alpha)\subseteq U$. Since $\{\varphi^n(\theta)\}_{n\geq0}$ is a one--to--one sequence, there exists $N\geq1$ such that \[\forall n\geq N\:\:(\varphi^n(\theta)\notin M)\:,\] in particular {\small \[\{(y_\alpha)_{\alpha\in\Gamma}\in R^\Gamma:\forall \alpha\notin\{\varphi^N(\theta),\varphi^{N+1}(\theta),\ldots\}\:\:(x_\alpha=y_\alpha)\}\subseteq \mathop{\bigcap}\limits_{\alpha\in M}U(\alpha,x_\alpha)\subseteq U\:.\tag{$\divideontimes\divideontimes$}\]} Choose $z=(z_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$ such that \[z_\alpha=\left\{\begin{array}{lc} \mbox{an element of }R\setminus\{x_{\varphi^m(\theta)}\} & m\geq N,\alpha=\varphi^m(\theta)\:, \\ x_\alpha & \alpha\notin\{\varphi^N(\theta),\varphi^{N+1}(\theta),\varphi^{N+2}(\theta),\ldots\}\:, \end{array}\right.\] then, by ($\divideontimes\divideontimes$), $z\in U$. Moreover, for all $m\geq N$ we have:
\\ $z_{\varphi^m(\theta)}\in R\setminus\{x_{\varphi^m(\theta)}\}$ \begin{eqnarray*}
& \Rightarrow & z_{\varphi^m(\theta)}\neq x_{\varphi^m(\theta)} \\ & \Rightarrow &
\mathfrak{w}_\theta\mathfrak{w}_{\varphi(\theta)}\cdots\mathfrak{w}_{\varphi^{m-1}(\theta)} z_{\varphi^m(\theta)}
\neq \mathfrak{w}_\theta\mathfrak{w}_{\varphi(\theta)}\cdots\mathfrak{w}_{\varphi^{m-1}(\theta)} x_{\varphi^m(\theta)} \\ && \: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:\: \: \: \: \:
(since\:{\mathfrak w}_{\varphi^i(\theta)}s\: are \: invertible) \\ & \Rightarrow & ((\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{m-1}(\alpha)}
z_{\varphi^m(\alpha)})_{\alpha\in\Gamma},(\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots
\mathfrak{w}_{\varphi^{m-1}(\alpha)}x_{\varphi^m(\alpha)})_{\alpha\in\Gamma})\notin\mathcal{O}_{\{\theta\}} \\ & \Rightarrow & (\sigma^m_{\varphi,\mathfrak{w}}(z),\sigma^m_{\varphi,\mathfrak{w}}(x))\notin\mathcal{O}_{\{\theta\}} \end{eqnarray*} Hence for all $x\in R^\Gamma$ and open neighborhood $U$ of $x$, there exists $z\in U$ and $N\geq1$ such that for all $m\geq N$, $(\sigma^m_{\varphi,\mathfrak{w}}(z),\sigma^m_{\varphi,\mathfrak{w}}(x))\notin\mathcal{O}_{\{\theta\}} (\in\mathcal{K})$, and $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is (strongly) sensitive. \end{proof}
\begin{corollary}\label{salam-sen-30} For a finite field $R$ the following statements are equivalent: \begin{itemize} \item[a.] $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is sensitive, \item[b.] $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is strongly sensitive, \item[c.] there exists a non--quasi--periodic point $\theta\in\Gamma$ such that for all $n\geq0$, $\mathfrak{w}_{\varphi^n(\theta)}\neq0$. \end{itemize} \end{corollary} \begin{proof} Use Lemmas~\ref{salam-sen-10} and \ref{salam-sen-20}, and the fact that all nonzero elements of $R$ are invertible. \end{proof} \noindent The following counterexample shows that if we omit the assumption that $R$ is a finite field, then Corollary~\ref{salam-sen-30} may fail to be true.
\begin{counterexample} For $\mathbb{Z}_4=\{\overline{0},\overline{1},\overline{2},\overline{3}\}(=\frac{\mathbb Z}{4\mathbb{Z}})$, $\mathfrak{v}=(\mathfrak{v}_n)_{n\in\mathbb{N}}:=(\overline{2})_{n\in\mathbb{N}}$, and $\psi:\mathop{\mathbb{N}\to\mathbb{N}}\limits_{n\mapsto n+1}$, then for all $n\geq0$ we have $\mathfrak{v}_{\psi^n(1)}=\mathfrak{v}_{n+1}=\overline{2}\neq\overline{0}$. Hence $(\mathbb{Z}_4^{\mathbb N},\sigma_{\psi,\mathfrak{v}})$ satisfies item (c) in Corollary~\ref{salam-sen-30}. However $(\mathbb{Z}_4^{\mathbb N},\sigma_{\psi,\mathfrak{v}})$ is not strongly sensitive since for all $x,y\in \mathbb{Z}_4^{\mathbb N}$ and $k\geq2$ we have $\sigma_{\psi,\mathfrak{v}}^k(x)=\sigma_{\psi,\mathfrak{v}}^k(y)=(\overline{0})_{n\in\mathbb{N}}$. \end{counterexample}
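The collapse in the counterexample can be seen numerically on a truncated piece of $\mathbb{Z}_4^{\mathbb N}$ (an illustration only; the index set is cut off at 10 coordinates):

```python
# truncated counterexample: R = Z/4Z, all weights equal to 2, phi(n) = n + 1;
# two iterations send every point to zero, since 2 * 2 = 0 in Z/4Z
def sigma(x):
    return [2 * x[n + 1] % 4 for n in range(len(x) - 1)]

x = [1, 3, 2, 1, 3, 2, 1, 3, 2, 1]
once = sigma(x)
twice = sigma(once)
assert all(v in (0, 2) for v in once)   # after one step only 0 and 2 survive
assert all(v == 0 for v in twice)       # after two steps everything is 0
```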
\section{Li--Yorke chaotic $(R^\Gamma,\sigma_{\varphi,{\mathfrak w}})$, for finite field $R$} \noindent Let's recall that by transformation semigroup $(S,X,\rho)$ we mean a discrete topological semigroup $S$ with identity $e$, topological space $X$ and continuous map $\rho:S\times X\to X$ with $\rho(s,x)=:sx$ such that for all $x\in X$ and $s,t\in S$ we have $ex=x$ and $(st)x=s(tx)$. It is well known that the collection of all dynamical systems and the collection of all transformation semigroups with phase semigroup $\mathbb{N}\cup\{0\}$ are in one--to--one correspondence in the following sense:
A dynamical system $(X,f)$ is just the transformation semigroup $(\mathbb{N}\cup\{0\},X,\rho_f)$ where $\rho_f(n,x):=f^n(x)$ for all $n\geq0$ and $x\in X$.
\\ So it is possible to adopt the definition of Li--Yorke chaos from transformation semigroups with uniform phase space to dynamical systems with uniform phase spaces, i.e. whenever we say in dynamical system $(X,f)$ two points $x,y\in X$ are proximal (resp. asymptotic, scrambled, ...) we mean $x,y$ are proximal (resp. asymptotic, scrambled, ...) in transformation semigroup $(\mathbb{N}\cup\{0\},X,\rho_f)$, moreover whenever we say the dynamical system $(X,f)$ is Li--Yorke chaotic, it means the transformation semigroup $(\mathbb{N}\cup\{0\},X,\rho_f)$ is Li--Yorke chaotic. \\ In dynamical system $(X,f)$ and transformation semigroup $(S,X,\rho)$ with compact Hausdorff phase space $X$ and unique compatible uniform structure $\mathcal F$, we have the following definitions~\cite[Definitions 2.1 and 2.2]{khodam}: \\ {\bf Proximal pair and proximal relation.} For $x,y\in X$, we say $x,y$ are proximal in transformation semigroup $(S,X,\rho)$, if there exists a net $\{s_{\lambda}\}_{\lambda\in\Lambda}$ in $S$
and $z\in X$ with $\mathop{\lim}\limits_{\lambda\in\Lambda}s_\lambda x=z=
\mathop{\lim}\limits_{\lambda\in\Lambda}s_\lambda y$. In $(X,f)$ the following statements are equivalent: \begin{itemize} \item $x,y$ are proximal in dynamical system $(X,f)$ \item for each $\mathcal{O}\in{\mathcal F}$,
$\{n\geq1:(f^n(x),f^n(y))\in\mathcal{O}\}$ is infinite \item for each $\mathcal{O}\in{\mathcal F}$,
$\{n\geq1:(f^n(x),f^n(y))\in\mathcal{O}\}$ is nonempty. \end{itemize} We denote the collection of all proximal pairs of the dynamical system $(X,f)$ by $Prox(X,f)$ or simply $Prox (f)$. \\ {\bf Asymptotic pair and asymptotic relation.} For $x,y\in X$, we say $x,y$ are asymptotic modulo $\mathcal{P}_{fin}(S)(=\{A\subseteq S:A$ is finite$\}$) in $(S,X,\rho)$, if for each $\mathcal{O}\in{\mathcal F}$,
$\{s\in S:(sx,sy)\notin\mathcal{O}\}$ is finite. We denote the collection of all asymptotic pairs of dynamical system $(X,f)$
by $Asym(X,f)$ or simply $Asym(f)$. Hence
\[Asym(f)=\{(x,y)\in X\times X:\forall\mathcal{O}\in\mathcal{F}\:\exists N
\geq1\:\forall n\geq N\:(f^n(x),f^n(y))\in\mathcal{O}\}\]
in particular, $Asym(f)\subseteq Prox(f)$. \\ {\bf Scrambled pair and scrambled subset.} We say $x,y\in X$ are scrambled if they are proximal and they are not asymptotic. $A\subseteq X$ with at least two elements is scrambled if all distinct elements $z,w\in A$ are scrambled. \\ {\bf Li--Yorke chaotic.} $(X,f)$ is Li--Yorke chaotic if it has an uncountable scrambled subset. \\ In this section we prove that for finite field $R$ the following statements are equivalent: \begin{itemize} \item $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is sensitive, \item $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ has at least one scrambled pair, \item $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is Li--Yorke chaotic. \end{itemize}
\begin{note} For $x,y\in R^\Gamma$ the following statements are equivalent: \begin{itemize} \item[1.] $(x,y)\in Asym(\sigma_{\varphi,\mathfrak{w}})$, \item[2.] for all ${\mathcal O}\in{\mathcal K}$, $\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x),\sigma_{\varphi,\mathfrak{w}}^n(y))\notin{\mathcal O}\}$ is finite, \item[3.] for every finite subset $M$ of $\Gamma$, $\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x),\sigma_{\varphi,\mathfrak{w}}^n(y))\notin{\mathcal O}_M\}$ is finite, \item[4.] for all $\theta\in\Gamma$, $\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x),\sigma_{\varphi,\mathfrak{w}}^n(y))\notin{\mathcal O}_{\{\theta\}}\}$ is finite. \end{itemize} These equivalences hold since for every ${\mathcal O}\in{\mathcal K}$ there exists a finite subset $M$ of $\Gamma$ with ${\mathcal O}_M\subseteq\mathcal{O}$, and for all $\theta_1,\ldots,\theta_n\in\Gamma$ we have ${\mathcal O}_{\{\theta_1,\ldots,\theta_n\}}=\bigcap\{{\mathcal O}_{\{\theta_i\}}:1\leq i\leq n\}$. \end{note}
\begin{lemma}\label{salam-yorke-10} Suppose for all $\alpha\in\Gamma$: \\ $\bullet$ either $\alpha$ is a quasi--periodic point of $\varphi$, \\ $\bullet$ or $\alpha$ is a non--quasi--periodic point of $\varphi$ and there exists $n\geq0$ with ${\mathfrak w}_{\varphi^n(\alpha)}=0$, \\ then $Prox(\sigma_{\varphi,\mathfrak{w}})= Asym(\sigma_{\varphi,\mathfrak{w}})$. \end{lemma}
\begin{proof} Suppose for all $\alpha\in\Gamma$, $\alpha$ is a quasi--periodic point of $\varphi$ or there exists $n\geq0$ with ${\mathfrak w}_{\varphi^n(\alpha)}=0$. We prove $Prox(\sigma_{\varphi,\mathfrak{w}})\subseteq Asym(\sigma_{\varphi,\mathfrak{w}})$. Consider $(x,y)=((x_\alpha)_{\alpha\in\Gamma},(y_\alpha)_{\alpha\in\Gamma})\in Prox(\sigma_{\varphi,\mathfrak{w}}) \setminus Asym(\sigma_{\varphi,\mathfrak{w}})$. Since $(x,y)\notin Asym(\sigma_{\varphi,\mathfrak{w}})$, there exists $\theta\in\Gamma$ such that \[T_1:=\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x),\sigma_{\varphi,\mathfrak{w}}^n(y))\notin{\mathcal O}_{\{\theta\}}\}\] is infinite. Consider $\Lambda$ as ($\divideontimes$) in the proof of Lemma~\ref{salam-sen-10} for $M=\{\theta\}$, hence: \[\Lambda:=\left\{\begin{array}{lc} \{\varphi^i(\theta):i\geq0\}\:, & \theta{\rm \: is \: a \: quasi-periodic \: point \: of \: }\varphi \:,\\ \{\theta,\varphi(\theta),\ldots,\varphi^i(\theta)\}\:, & \theta{\rm \: is \: a \: non-quasi-periodic \: point \: of \: } \\ & \varphi {\rm \: and\:} i=\min\{n\geq0:\mathfrak{w}_{\varphi^n(\theta)}=0\}\:, \end{array}\right.\]
then $\Lambda$ is a finite subset of $\Gamma$. Choose $m>\:|\Lambda|+1$ with $m\in T_1$. By $(\sigma_{\varphi,\mathfrak{w}}^m(x),
\sigma_{\varphi,\mathfrak{w}}^m(y))\notin{\mathcal O}_{\{\theta\}}$ we have
$\mathfrak{w}_\theta\mathfrak{w}_{\varphi(\theta)}\cdots\mathfrak{w}_{\varphi^{m-1}(\theta)}
x_{\varphi^m(\theta)}\neq\mathfrak{w}_\theta\mathfrak{w}_{\varphi(\theta)}\cdots\mathfrak{w}_{\varphi^{m-1}(\theta)}
y_{\varphi^m(\theta)}$ which shows $x_{\varphi^m(\theta)}\neq y_{\varphi^m(\theta)}$
and $\mathfrak{w}_{\varphi^i(\theta)}\neq0$ for all $i\in\{0,\ldots,|\Lambda|+1\}$. Hence $\theta$ is not only a quasi--periodic point of $\varphi$, but also $\mathfrak{w}_{\varphi^j(\theta)}\neq0$ for all $j\geq0$. \\ Since $(x,y)\in Prox(\sigma_{\varphi,\mathfrak{w}})$, there exists $l\geq1$ such that $(\sigma_{\varphi,\mathfrak{w}}^l(x),\sigma_{\varphi,\mathfrak{w}}^l(y))\in{\mathcal O}_\Lambda$, thus $\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{l-1}(\alpha)}x_{\varphi^l(\alpha)}=
\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{l-1}(\alpha)}y_{\varphi^l(\alpha)}$
for all $\alpha\in\Lambda$, i.e., $\mathfrak{w}_{\varphi^n(\theta)}\cdots
\mathfrak{w}_{\varphi^{n+l-1}(\theta)}x_{\varphi^{n+l}(\theta)}=
\mathfrak{w}_{\varphi^n(\theta)}\cdots
\mathfrak{w}_{\varphi^{n+l-1}(\theta)}y_{\varphi^{n+l}(\theta)}$ for all $n\geq0$. Hence \[\forall n\geq l\:\: x_{\varphi^{n}(\theta)}=y_{\varphi^{n}(\theta)} \] so $(\sigma_{\varphi,\mathfrak{w}}^n(x),
\sigma_{\varphi,\mathfrak{w}}^n(y))\in{\mathcal O}_{\{\theta\}}$ for all $n\geq l$, which contradicts the infiniteness of $T_1$. \end{proof}
\begin{remark}\label{salam-yorke-20} There exists an uncountable collection $\mathcal E$ of infinite subsets of $\mathbb N$ such that for each distinct $E,F\in\mathcal{E}$, the set $E\cap F$ is finite~\cite{large}. \end{remark}
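One classical construction behind this remark (our sketch; the paper only cites~\cite{large}) attaches to each infinite binary sequence $s$ the set $E_s\subseteq\mathbb{N}$ of integer codes of the finite prefixes of $s$: distinct sequences share exactly the codes of their common initial segment, so any two such sets have finite intersection while each $E_s$ is infinite, and there are uncountably many binary sequences.

```python
# Sketch of one classical almost-disjoint family: to each infinite
# binary sequence s attach the set E_s of integer codes of its finite
# prefixes.  Distinct sequences agree only up to their first
# disagreement, so the corresponding sets intersect in a finite set.

def prefix_code(bits):
    """Code a finite 0/1 word injectively: read '1' followed by the word as binary."""
    n = 1
    for b in bits:
        n = 2 * n + b
    return n

def E(s, depth):
    """Codes of the first `depth` prefixes of the sequence s (a function N -> {0,1})."""
    word = [s(i) for i in range(depth)]
    return {prefix_code(word[:k]) for k in range(1, depth + 1)}

s = lambda i: i % 2                               # 010101...
t = lambda i: (1 - i % 2) if i == 5 else i % 2    # differs from s only at index 5
common = E(s, 200) & E(t, 200)
print(len(common))   # 5: only the prefixes before the first disagreement
```

The intersection size stays at 5 however large `depth` grows, illustrating the "pairwise finite intersection" property for this pair.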
\begin{definition} For $h:A\to A$ define equivalence relation $\thicksim_h$ on $A$ with \[x\thicksim_h y\Leftrightarrow (\exists n,m\geq0 \:\: h^n(x)=h^m(y))\] for all $x,y\in A$. If $h:A\to A$ is one--to--one, then for every equivalence class $D\in \frac{A}{\thicksim_h}$ exactly one of the following conditions occurs: \begin{itemize} \item $D$ is finite and for all $\alpha\in D$, we have $D=\{h^n(\alpha):n\geq0\}\subseteq Per(h)$, \item $D$ is infinite and there exists unique $\alpha\in D$ such that
$D=\{h^n(\alpha):n\geq0\}$ (so $\{h^n(\alpha)\}_{n\geq0}$ is
a one--to--one sequence), \item $D$ is infinite and for all $\alpha\in D$ and $n\in{\mathbb Z}$
we have $h^n(\alpha)\neq\varnothing$, moreover
$D=\{h^n(\alpha):n\in{\mathbb Z}\}$ (so $\{h^n(\alpha)\}_{n\in{\mathbb Z}}$ is
a one--to--one bi--sequence). \end{itemize} \end{definition}
\subsection{Lemmas on a special case}\label{ABC} Throughout the following lemmas (Lemmas~\ref{ABC10}, \ref{ABC20} and \ref{ABC30}) suppose $\nu\in\Gamma$ is a non--quasi--periodic point of $\varphi$ such that $\mathfrak{w}_{\varphi^n(\nu)}\neq0$ for all $n\geq0$; for $A\subseteq\mathbb{N}\cup\{0\}$ let: \[\xi_A(n):=\left\{\begin{array}{lc} 1 & n\in A\:, \\ 0 & n\notin A\:,\end{array}\right.\] and $x^{A}:=(x^{A}_\alpha)_{\alpha\in\Gamma}$ where: \[x_\alpha^A=\left\{\begin{array}{lc} \xi_A(p) & \alpha=\varphi^{\frac{p(p+1)}{2}}(\nu),p\geq0 \:, \\ 0 & {\rm otherwise}\:. \end{array}\right.\]
\begin{lemma}\label{ABC10} If $E,F\subseteq \mathbb{N}\cup\{0\}$ with infinite $E\setminus F$, then $(x^E,x^F)\notin Asym(\sigma_{\varphi,\mathfrak{w}})$. \end{lemma}
\begin{proof} For $r\in E\setminus F$, we have \[ x^E_{\varphi^{\frac{r(r+1)}2}(\nu)}=\xi_E(r)=1\:\:,\:\: x^F_{\varphi^{\frac{r(r+1)}2}(\nu)}=\xi_F(r)=0\:, \] which leads to: \[\begin{array}{c} \mathfrak{w}_\nu\mathfrak{w}_{\varphi(\nu)}\cdots\mathfrak{w}_{\varphi^{\frac{r(r+1)}2-1}(\nu)}x^{E}_{\varphi^{\frac{r(r+1)}2}(\nu)}\neq 0 \:,\\ \\ \mathfrak{w}_\nu\mathfrak{w}_{\varphi(\nu)}\cdots\mathfrak{w}_{\varphi^{\frac{r(r+1)}2-1}(\nu)}x^{F}_{\varphi^{\frac{r(r+1)}2}(\nu)}= 0 \:, \end{array}\] and $(\sigma_{\varphi,\mathfrak{w}}^{\frac{r(r+1)}2}((x_\alpha^E)_{\alpha\in\Gamma}),\sigma_{\varphi,\mathfrak{w}}^{\frac{r(r+1)}2}((x_\alpha^F)_{\alpha\in\Gamma}))\notin\mathcal{O}_{\{\nu\}}$. Hence: \[\{\tfrac{r(r+1)}2:r\in E\setminus F\}\subseteq\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^{n}((x_\alpha^E)_{\alpha\in\Gamma}),\sigma_{\varphi,\mathfrak{w}}^{n}((x_\alpha^F)_{\alpha\in\Gamma}))\notin\mathcal{O}_{\{\nu\}}\}\] and, since $E\setminus F$ is infinite, $\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^{n}(x^E),\sigma_{\varphi,\mathfrak{w}}^{n}(x^F))\notin\mathcal{O}_{\{\nu\}}\}$ is infinite, therefore: \[(x^{E},x^{F})\notin Asym(\sigma_{\varphi,\mathfrak{w}})\:.\] \end{proof}
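The encoding $x^A$ and the separation argument above can be illustrated in the simplest toy instance (our choice of parameters, not from the paper): $\Gamma=\mathbb{N}$, $\varphi(n)=n+1$, so every point is non--quasi--periodic, $\nu=0$, all weights equal to $1$ and $R=\mathbb{F}_2$, where $(\sigma^n x)_\nu=x_n$.

```python
# Toy instance of the x^A construction: Gamma = N, phi(n) = n+1, nu = 0,
# all weights 1, R = F_2.  Then x^A_m = 1 iff m = p(p+1)/2 with p in A,
# and (sigma^n x)_0 = x_n, so for r in E \ F the iterates under
# sigma^{r(r+1)/2} separate x^E and x^F at the coordinate nu = 0.

def tri(p):
    return p * (p + 1) // 2

def x_of(A, length):
    """Finite truncation of x^A: 1 at the triangular positions tri(p), p in A."""
    x = [0] * length
    for p in A:
        if tri(p) < length:
            x[tri(p)] = 1
    return x

E, F = {1, 3, 5, 7}, {3, 7, 8}
L = tri(10)
xE, xF = x_of(E, L), x_of(F, L)

for r in E - F:                      # r = 1, 5
    n = tri(r)                       # sigma^n moves coordinate n to position 0
    assert xE[n] != xF[n]            # the iterates separate at nu = 0
for r in E & F:
    assert xE[tri(r)] == xF[tri(r)]
print("separation verified at", sorted(tri(r) for r in E - F))
```

Only the triangular positions carry information, which is exactly what makes the intermediate coordinates used in Lemma~\ref{ABC20} vanish.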
\begin{lemma}\label{ABC20} Suppose $M$ is a finite subset of $\frac{\nu}{\thicksim_\varphi}$ and $E,F\subseteq{\mathbb N}\cup\{0\}$, then \[T_M:=\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x^E),\sigma_{\varphi,\mathfrak{w}}^n(x^F))\in \mathcal{O}_M\}\] is infinite. \end{lemma}
\begin{proof} Since $\mathcal{O}_\varnothing=R^\Gamma\times R^\Gamma$, for $M=\varnothing $ the proof is obvious. Suppose $M\neq\varnothing$. For $\alpha\in M$, there exist $n_\alpha,m_\alpha\geq1$ such that $\varphi^{m_\alpha}(\nu)=\varphi^{n_\alpha}(\alpha)$. Choose $ r\geq\mathop{\max}\limits_{\alpha\in M} n_\alpha$, then for each $n\geq r+\mathop{\max}\limits_{\alpha\in M} m_\alpha$ and $\beta\in M$ we have: \begin{eqnarray*} \frac{n(n+1)}2+r-n_\beta+m_\beta &\geq & \frac{n(n+1)}2+r-\mathop{\max}\limits_{\alpha\in M} n_\alpha+m_\beta \\ & \geq & \frac{n(n+1)}2+m_\beta >\frac{n(n+1)}2 \end{eqnarray*} and \begin{eqnarray*} \frac{n(n+1)}2+r-n_\beta+m_\beta &\leq & \frac{n(n+1)}2+r-n_\beta+\mathop{\max}\limits_{\alpha\in M} m_\alpha \\ & \leq & \frac{n(n+1)}2-n_\beta+n \\ & < & \frac{n(n+1)}2+n<\frac{(n+1)(n+2)}2 \end{eqnarray*} therefore $\frac{n(n+1)}2<\frac{n(n+1)}2+r-n_\beta+m_\beta<\frac{(n+1)(n+2)}2$ and: \[\forall p\geq0\:\:\:( \frac{n(n+1)}2+r-n_\beta+m_\beta\neq\frac{p(p+1)}2)\:,\] which shows $x^E_{\varphi^{ \frac{n(n+1)}2+r-n_\beta+m_\beta}(\nu)}=x^F_{\varphi^{ \frac{n(n+1)}2+r-n_\beta+m_\beta}(\nu)}=0$. Hence for $K=E,F$ we have $x_{\varphi^{ \frac{n(n+1)}2+r}(\beta)}^K=x_{\varphi^{ \frac{n(n+1)}2+r-n_\beta+m_\beta}(\nu)}^K=0$, which leads to \[(\sigma_{\varphi,\mathfrak{w}}^{\frac{n(n+1)}2+r}(x^E),\sigma_{\varphi,\mathfrak{w}}^{\frac{n(n+1)}2+r}(x^F))\in\mathcal{O}_{\{\beta\}}\:.\] Thus {\small \begin{eqnarray*} \{\frac{n(n+1)}2+r:n\geq r+\mathop{\max}\limits_{\alpha\in M} m_\alpha\} & \subseteq & \mathop{\bigcap}\limits_{\beta\in M}\{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x^E),\sigma_{\varphi,\mathfrak{w}}^n(x^F))\in \mathcal{O}_{\{\beta\}}\} \\ & = & \{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x^E),\sigma_{\varphi,\mathfrak{w}}^n(x^F))\in \mathcal{O}_M\}=T_M \end{eqnarray*}} and $T_M$ is infinite. \end{proof}
\begin{lemma}\label{ABC30} For all $E,F\subseteq{\mathbb N}\cup\{0\}$, we have $(x^E,x^F)\in Prox(\sigma_{\varphi,\mathfrak{w}})$. \end{lemma}
\begin{proof} For $H\subseteq \Gamma$ let $T_H:= \{n\geq1:(\sigma_{\varphi,\mathfrak{w}}^n(x^E),\sigma_{\varphi,\mathfrak{w}}^n(x^F))\in \mathcal{O}_H\}$. For each $\alpha\in\Gamma\setminus\frac{\nu}{\thicksim_\varphi}$ and each $n\geq0$, we have $\varphi^n(\alpha)\in\Gamma\setminus\frac{\nu}{\thicksim_\varphi}$ and $x^E_{\varphi^n(\alpha)}=x^F_{\varphi^n(\alpha)}=0$, thus $(\sigma_{\varphi,{\mathfrak w}}^n(x^E),\sigma_{\varphi,{\mathfrak w}}^n(x^F))\in\mathcal{O}_{\{\alpha\}}$. This shows \[\forall H\subseteq \Gamma\setminus\frac{\nu}{\thicksim_\varphi}\:\:\: (T_{H}=\mathbb{N})\:.\] Thus for each finite subset $L$ of $\Gamma$ we have: \[T_L = T_{L\cap\frac{\nu}{\thicksim_\varphi}}\cap T_{L\setminus\frac{\nu}{\thicksim_\varphi}} =T_{L\cap\frac{\nu}{\thicksim_\varphi}}\cap\mathbb{N}=T_{L\cap\frac{\nu}{\thicksim_\varphi}}\] By Lemma~\ref{ABC20}, $T_{L\cap\frac{\nu}{\thicksim_\varphi}}$ is infinite, thus $T_L$ is infinite, which leads to the desired result. \end{proof}
\subsection{Main theorem on Li--Yorke chaoticity of $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$} \noindent Now we are ready to establish our main theorem on Li--Yorke chaoticity of the weighted generalized shift $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$. \begin{theorem}\label{salam-yorke-30} For a finite field $R$ the following statements are equivalent (see~\cite{gen-li} for the analogous result for countable $\Gamma$ and the generalized shift dynamical system $(\sigma_\varphi,R^\Gamma)$): \\ 1. $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is (strongly) sensitive, \\ 2. there exists a non--quasi--periodic point $\theta\in\Gamma$ such that for all $n\geq0$,
$\mathfrak{w}_{\varphi^n(\theta)}\neq0$, \\ 3. $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ has at least one scrambled pair, \\ 4. $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is Li--Yorke chaotic. \end{theorem}
\begin{proof} ``${\bf (1\Leftrightarrow2)}$'' By Corollary~\ref{salam-sen-30}, (1) and (2) are equivalent. \\ ``${\bf (4\Rightarrow3)}$'' It is clear that (4) implies (3). \\ ``${\bf (3\Rightarrow2)}$'' Note that all elements of $Prox(\sigma_{\varphi,\mathfrak{w}})\setminus Asym(\sigma_{\varphi,\mathfrak{w}})$ are scrambled pairs, thus (3) is equivalent to $Prox(\sigma_{\varphi,\mathfrak{w}})\not\subseteq Asym(\sigma_{\varphi,\mathfrak{w}})$, which implies (2) by Lemma~\ref{salam-yorke-10}. \\
``${\bf (2\Rightarrow4)}$'' Suppose $\nu$ is a
non--quasi--periodic point of $\varphi$ such that for all $n\geq0$,
$\mathfrak{w}_{\varphi^n(\nu)}\neq0$. By Remark~\ref{salam-yorke-20} there exists an uncountable collection $\mathcal E$ of infinite subsets of $\mathbb N$ such that for all distinct $C,D\in\mathcal{E}$, $C\cap D$ is finite. Using the notation of Subsection~\ref{ABC}, let $Y:=\{x^A:A\in\mathcal{E}\}$, then by Lemmas~\ref{ABC10},~\ref{ABC30},
for each distinct $C,D\in\mathcal{E}$ we have $(x^C,x^D)\in Prox(\sigma_{\varphi,\mathfrak{w}})\setminus Asym(\sigma_{\varphi,\mathfrak{w}})$, i.e., $x^C,x^D$ are scrambled, hence $Y$ is an uncountable scrambled subset of $R^\Gamma$ and $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is Li--Yorke chaotic. \end{proof}
\section{When does $(R^\Gamma,\sigma_{\varphi,{\mathfrak w}})$ have dense periodic points?} \noindent The following is the only theorem of this section.
\begin{theorem}\label{salam-dense-10} For $\sigma_{\varphi,{\mathfrak w}}:R^\Gamma\to R^\Gamma$ the following statements are equivalent: \begin{itemize} \item[1.] $\sigma_{\varphi,{\mathfrak w}}:R^\Gamma\to R^\Gamma$ is onto, \item[2.] $\varphi:\Gamma\to\Gamma$ is one--to--one and for all $\alpha\in\Gamma$, $\mathfrak{w}_\alpha$ is invertible, \item[3.] $Per(\sigma_{\varphi,{\mathfrak w}})$ is dense in $R^\Gamma$. \end{itemize} \end{theorem}
\begin{proof} ``${\bf (1\Rightarrow2)}$'': Suppose $\sigma_{\varphi,{\mathfrak w}}:R^\Gamma\to R^\Gamma$ is onto. There exists $(x_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$ such that $(1)_{\alpha\in\Gamma}=\sigma_{\varphi,{\mathfrak w}} ((x_\alpha)_{\alpha\in\Gamma})=({\mathfrak w}_\alpha x_{\varphi(\alpha)})_{\alpha\in\Gamma}$, hence for all $\alpha\in\Gamma$ we have ${\mathfrak w}_\alpha x_{\varphi(\alpha)}=1$ and ${\mathfrak w}_\alpha$ is invertible. \\ For distinct $\theta,\lambda\in\Gamma$ choose $(z_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$ with $z_\theta=1$ and $z_\lambda=0$. There exists $(y_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$ with $(z_\alpha)_{\alpha\in\Gamma}= \sigma_{\varphi,{\mathfrak w}}((y_\alpha)_{\alpha\in\Gamma})=(\mathfrak{w}_\alpha y_{\varphi(\alpha)})_{\alpha\in\Gamma}$, hence $1=z_\theta=\mathfrak{w}_\theta y_{\varphi(\theta)}$ and $0=z_\lambda=\mathfrak{w}_\lambda y_{\varphi(\lambda)}$ which leads to $y_{\varphi(\theta)}\neq0$ and $y_{\varphi(\lambda)}=0$, hence $y_{\varphi(\theta)}\neq y_{\varphi(\lambda)}$ and $\varphi(\theta)\neq\varphi(\lambda)$. Therefore $\varphi:\Gamma\to\Gamma$ is one--to--one. \\ ``${\bf (2\Rightarrow1)}$'': Suppose (2) is valid. Consider $(x_\alpha)_{\alpha\in\Gamma}\in R^\Gamma$ and let: \[z_\alpha:=\left\{\begin{array}{lc} 0 & \alpha\notin\varphi(\Gamma)\:, \\ {\mathfrak w}_\beta^{-1}x_\beta & \beta\in\Gamma,\alpha=\varphi(\beta)\:,\end{array}\right.\] then $\sigma_{\varphi,{\mathfrak w}}((z_\alpha)_{\alpha\in\Gamma})=(x_\alpha)_{\alpha\in\Gamma}$. \\ ``${\bf (3\Rightarrow2)}$'': Suppose $Per(\sigma_{\varphi,{\mathfrak w}})$ is dense in $R^\Gamma$. Choose distinct $\theta,\lambda\in\Gamma$ and let: \[V_\alpha=\left\{\begin{array}{lc} \{0\} & \alpha=\lambda\:, \\ \{1\} & \alpha=\theta\:, \\
R & \alpha\neq\theta,\lambda\:,\end{array}\right.\]
and $\beta=\varphi^m(\alpha)$ for some $m\geq0$, let $A_\beta=\{\varphi^n(\alpha):0\leq n\leq m\}$, \item[case c.] if for all $n\in\mathbb{Z}$ we have $\varphi^n(\beta)\neq\varnothing$, suppose $\frac{\beta}{\thicksim_\varphi}\cap
\{\alpha_1,\ldots,\alpha_p\}$ is equal to $\{\varphi^{t_1}(\beta),\ldots,\varphi^{t_s}(\beta)\}$ with $t_1<t_2<\cdots<t_s$ let
$A_\beta=\{\varphi^i(\beta):t_1\leq i\leq t_s\}$. \end{itemize} Let $A=\bigcup\{A_{\alpha_i}:1\leq i\leq p\}$, then $A$ is a finite subset of $\Gamma$. Choose arbitrary $(x_\alpha)_{\alpha\in\Gamma}\in \mathop{\bigcap}\limits_{1\leq i\leq p}U(\alpha_i,r_i)$. For $\alpha\in\Gamma$ we have the following cases: \begin{itemize} \item if $\alpha\in\Gamma\setminus\bigcup\{\frac{\beta}{\thicksim_\varphi}:\beta\in A\}$, then let $y_\alpha=0$, \item if there exists periodic point $\theta\in A$ with $\alpha\in \frac{\theta}{\thicksim_\varphi}$, then $\alpha\in A_\theta\subseteq A$,
let $y_\alpha:=x_\alpha$, \item if there exists non--periodic point $\theta\in A$ with
$\alpha\in \frac{\theta}{\thicksim_\varphi}=\{\varphi^m(\theta):m\geq0\}$, then there exists $n\geq1$ with
$\theta,\varphi(\theta),\ldots,\varphi^{n-1}(\theta)\in A$ and $\varphi^{n}(\theta),\varphi^{n+1}(\theta),\ldots\notin
A$. Let $y_{\varphi^i(\theta)}=x_{\varphi^i(\theta)}$ for $i=0,\ldots,n-1$ and
\[y_{\varphi^{n+j}(\theta)}=(\mathfrak{w}_{\varphi^j(\theta)}\mathfrak{w}_{\varphi^{j+1}(\theta)}\cdots\mathfrak{w}_{\varphi^{j+n-1}(\theta)})^{-1}y_{\varphi^{j}(\theta)}\tag{+}\] for all $j\geq0$.
\item if there exists non--periodic point $\mu\in A$ with $\varphi^m(\mu)\neq\varnothing$ for all $m\in{\mathbb Z}$ and
$\alpha\in \frac{\mu}{\thicksim_\varphi}=\{\varphi^m(\mu):m\in{\mathbb Z}\}$, then there exists
$\theta\in\frac{\mu}{\thicksim_\varphi}$ and $n\geq1$ with
$\theta,\varphi(\theta),\ldots,\varphi^{n-1}(\theta)\in A$ and $\varphi^{n}(\theta),\varphi^{n+1}(\theta),\ldots\notin A$,
also $\varphi^{-1}(\theta),\varphi^{-2}(\theta),\ldots\notin A$.
Let $y_{\varphi^i(\theta)}=x_{\varphi^i(\theta)}$ for $i=0,\ldots,n-1$ and (+)
for all $j\in\mathbb{Z}$. \end{itemize} For $\theta\in A$, let
$n_\theta=|\frac{\theta}{\thicksim_\varphi}\cap A|$ and
$T=\:|\{r\in R\setminus\{0\}:r$ is invertible
$\}|\:\mathop{\prod}\limits_{\theta\in A}n_\theta$, then $y=(y_\alpha)_{\alpha\in\Gamma}\in \mathop{\bigcap}\limits_{1\leq i\leq p}U(\alpha_i,r_i)$ and $\sigma_{\varphi,\mathfrak{w}}^T(y)=y$, since for all $\alpha\in\Gamma$, we have the following cases: \begin{itemize} \item $\alpha\notin \bigcup\{\frac{\beta}{\thicksim_\varphi}:\beta\in A\}$. In this case for all $i$, we have
$\varphi^i(\alpha)\notin\bigcup\{\frac{\beta}{\thicksim_\varphi}:\beta\in A\}$, hence $y_\alpha=y_{\varphi^i(\alpha)}=
y_{\varphi^T(\alpha)}=0$, hence:
\[y_\alpha=0=\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{T-1}(\alpha)}y_{\varphi^{T}(\alpha)}\] \item $\alpha\in \bigcup\{\frac{\beta}{\thicksim_\varphi}:\beta\in A\}$ is not periodic,
in this case there exists non--periodic point $\theta\in A$ and $j$ such that $\alpha\in\frac{\theta}{\thicksim_\varphi}$,
$\varphi^j(\theta)=\alpha$ and (+) is valid for $n=n_\theta$, so (+) is valid for every multiple of $n_\theta$,
in particular for $T$; therefore
\[y_{\varphi^j(\theta)}=\mathfrak{w}_{\varphi^j(\theta)}\mathfrak{w}_{\varphi^{j+1}(\theta)}\cdots\mathfrak{w}_{\varphi^{T+j-1}(\theta)}y_{\varphi^{T+j}(\theta)}\]
hence:
\[y_\alpha=\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{T-1}(\alpha)}y_{\varphi^{T}(\alpha)}\tag{++}\] \item $\alpha\in \bigcup\{\frac{\beta}{\thicksim_\varphi}:\beta\in A\}$ is periodic,
in this case there exists a periodic point $\theta\in A$ such that $\alpha\in\frac{\theta}{\thicksim_\varphi}\subseteq A$.
Moreover $\varphi^{n_\alpha}(\alpha)=\alpha$, so for $k=\:|\{r\in R\setminus\{0\}:r$ is invertible$\}|$
we have $\varphi^{kn_\alpha}(\alpha)=\alpha$ and for all invertible elements $r\in R\setminus\{0\}$, $r^k=1$, in
particular $\mathfrak{w}_\psi^k=1$ for all $\psi\in\Gamma$, hence we have:
\begin{eqnarray*}
\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{kn_\alpha-1}(\alpha)}y_{\varphi^{kn_\alpha}(\alpha)} & = & \mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{kn_\alpha-1}(\alpha)}y_\alpha \\
& = & (\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{n_\alpha-1}(\alpha)})^ky_\alpha \\
& = & \mathfrak{w}_\alpha^k\mathfrak{w}_{\varphi(\alpha)}^k\cdots\mathfrak{w}_{\varphi^{n_\alpha-1}(\alpha)}^ky_\alpha \\
& = & y_\alpha
\end{eqnarray*} and $\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{m-1}(\alpha)}y_{\varphi^{m}(\alpha)}=y_\alpha$ for $m=kn_\alpha$ and for every multiple of $kn_\alpha$, in particular for $T$; hence again we have (++). \end{itemize} By the above cases, (++) is valid for all $\alpha\in\Gamma$ and: \[\sigma^T_{\varphi,\mathfrak{w}}((y_\alpha)_{\alpha\in\Gamma})=(\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{T-1}(\alpha)}y_{\varphi^{T}(\alpha)})_{\alpha\in\Gamma}=(y_\alpha)_{\alpha\in\Gamma}\] Therefore $\mathop{\bigcap}\limits_{1\leq i\leq p}U(\alpha_i,r_i)\cap Per(\sigma_{\varphi,\mathfrak{w}})\neq\varnothing$. \end{proof}
\begin{center}
\begin{tabular}{|c|c|} & case \\ \hline {\tiny $\underbrace{\xymatrix{\varphi(\beta)\ar[r] &\varphi^2(\beta)\ar[r] & \cdots\ar[r] & \varphi^j(\beta)\ar[dlll] \\ \beta=\varphi^{j+1}(\beta) \ar[u] &&& }}_{\frac{\beta}{\thicksim_\varphi}=A_\beta}$} & a \\ \hline {\tiny $\underbrace{\xymatrix{\alpha\ar[r]\ar@{-}[d] &\varphi(\alpha)\ar[r] & \cdots\ar[r] & \varphi^m(\alpha)=\beta\ar[r]\ar@{-}[d]&\varphi^{m+1}(\alpha)\ar[r] & \cdots \\ \ar@{-}[r] & A_\beta \ar@{-}[rr]& &&&}}_{\frac{\beta}{\thicksim_\varphi} }$} & b \\ \hline {\tiny $\underbrace{\xymatrix{\cdots\ar[r] & \varphi^{t_1-1}(\beta)\ar[r] & \varphi^{t_1}(\beta)\ar[r]\ar@{-}[d] & \varphi^{t_1+1}(\beta)\ar[r] &\cdots\ar[r] & \varphi^{t_s}(\beta)\ar[r]\ar@{-}[d]&\varphi^{t_s+1}(\beta)\ar[r] & \cdots \\ & & \ar@{-}[r] & A_\beta \ar@{-}[rr]& &&&}}_{\frac{\beta}{\thicksim_\varphi} }$} & c \\ \hline \end{tabular} \\ $\:$ \\ Fig. 1 \end{center}
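The equivalences of Theorem~\ref{salam-dense-10} can also be brute-forced on a tiny instance (our toy check, with $\Gamma$ of size two and $R=\mathbb{Z}_4$, a ring in which invertibility is strictly stronger than being nonzero). Over a finite discrete space, density of $Per(\sigma_{\varphi,\mathfrak{w}})$ simply means that every point is periodic.

```python
# Brute-force sanity check of the three equivalent conditions on a tiny
# instance: Gamma = {0, 1}, R = Z_4.  We enumerate every map phi and
# every weight vector and compare "onto", "phi injective with unit
# weights", and "every point periodic" (density of Per in a finite
# discrete space).
from itertools import product

GAMMA, MOD = range(2), 4
UNITS = {1, 3}                          # invertible elements of Z_4

def sigma(x, phi, w):
    """One step of the weighted generalized shift on Z_4^Gamma."""
    return tuple(w[a] * x[phi[a]] % MOD for a in GAMMA)

def onto(phi, w):
    space = list(product(range(MOD), repeat=len(GAMMA)))
    return len({sigma(x, phi, w) for x in space}) == len(space)

def all_periodic(phi, w):
    """True iff every point returns to itself, i.e. Per is the whole space."""
    for x in product(range(MOD), repeat=len(GAMMA)):
        seen, y = set(), x
        while y not in seen:
            seen.add(y)
            y = sigma(y, phi, w)
        if y != x:                      # the orbit cycles without returning to x
            return False
    return True

checked = 0
for phi in product(GAMMA, repeat=len(GAMMA)):
    for w in product(range(MOD), repeat=len(GAMMA)):
        cond = len(set(phi)) == len(GAMMA) and all(c in UNITS for c in w)
        assert onto(phi, w) == cond == all_periodic(phi, w)
        checked += 1
print("equivalence verified on", checked, "instances")   # 4 * 16 = 64
```

In this finite model the weight $2\in\mathbb{Z}_4$ is nonzero yet not invertible, and the check confirms it destroys both surjectivity and density of periodic points, matching the theorem's invertibility hypothesis.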
\section{Devaney chaotic $(R^\Gamma,\sigma_{\varphi,{\mathfrak w}})$, for finite field $R$} \noindent We say the dynamical system $(X,f)$ is topologically transitive if for all nonempty open subsets $U,V$ of $X$ there exists $n\geq1$
with $f^n(U)\cap V\neq\varnothing$. Moreover $(X,f)$ with compact Hausdorff $X$ is Devaney chaotic if it is sensitive, topologically transitive, and $Per(f)$ is dense in $X$. \\ In this section we prove that
$(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is Devaney chaotic (equivalently, topologically transitive) if and only if $\varphi$ is one--to--one without any periodic point and $\mathfrak{w}_\alpha$ is invertible for each $\alpha\in\Gamma$. \begin{lemma}\label{salam-tran-10} The weighted generalized shift $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is topologically transitive if and only if $\varphi$ is one--to--one without periodic points and for all $\alpha\in\Gamma$, $\mathfrak{w}_\alpha$ is invertible. \end{lemma}
\begin{proof} ``$\Rightarrow$'' Suppose $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is topologically transitive. For each nonempty open subset $U$ of $R^\Gamma$ there exists $n\geq1$ such that $U\cap\sigma_{\varphi,\mathfrak{w}}^n (R^\Gamma)\neq \varnothing$, thus $U\cap \sigma_{\varphi,\mathfrak{w}}(R^\Gamma)\neq\varnothing$. Hence $\sigma_{\varphi,\mathfrak{w}}(R^\Gamma)$ is dense in $R^\Gamma$. Since $R^\Gamma$ is compact Hausdorff and $\sigma_{\varphi,\mathfrak{w}}:R^\Gamma\to R^\Gamma$ is continuous, $\sigma_{\varphi,\mathfrak{w}}(R^\Gamma)$ is a closed subset of $R^\Gamma$, which leads to $\sigma_{\varphi,\mathfrak{w}}(R^\Gamma)=\overline{ \sigma_{\varphi,\mathfrak{w}}(R^\Gamma)}=R^\Gamma$. Therefore $\sigma_{\varphi,\mathfrak{w}}:R^\Gamma\to R^\Gamma$ is onto and by Theorem~\ref{salam-dense-10}, $\varphi$ is one--to--one and all the $\mathfrak{w}_\alpha$ are invertible. \\ We prove $\varphi$ does not have any periodic point. Suppose $\theta\in Per(\varphi)$ and consider the following nonempty open subsets of $R^\Gamma$ (note that $\{\varphi^n(\theta):n\geq0\}$ is a finite subset of $\Gamma$): \[U:=\mathop{\bigcap}\limits_{\alpha\in\{\varphi^n(\theta):n\geq0\}}U(\alpha,1)\:,\: V:=\mathop{\bigcap}\limits_{\alpha\in\{\varphi^n(\theta):n\geq0\}}U(\alpha,0)\:.\] For all $n\geq1$ and $(x_\alpha)_{\alpha\in\Gamma}$, we have: \begin{eqnarray*} (x_\alpha)_{\alpha\in\Gamma}\in V
& \mathop{\Rightarrow}\limits^{V\subseteq U(\varphi^n(\theta),0)}
& (x_\alpha)_{\alpha\in\Gamma}\in U(\varphi^n(\theta),0) \\ & \Rightarrow & x_{\varphi^n(\theta)}=0 \\ & \Rightarrow &
\mathfrak{w}_\theta\mathfrak{w}_{\varphi(\theta)}\cdots\mathfrak{w}_{\varphi^{n-1}(\theta)}x_{\varphi^n(\theta)}=0 \\ & \Rightarrow &
(\mathfrak{w}_\alpha\mathfrak{w}_{\varphi(\alpha)}\cdots\mathfrak{w}_{\varphi^{n-1}(\alpha)}x_{\varphi^n(\alpha)})_{
\alpha\in\Gamma}\notin U(\theta,1) \\ & \Rightarrow & \sigma^n_{\varphi,\mathfrak{w}}((x_\alpha)_{\alpha\in\Gamma})\notin U(\theta,1) \\ & \mathop{\Rightarrow}\limits^{U(\theta,1)\supseteq U} &
\sigma^n_{\varphi,\mathfrak{w}}((x_\alpha)_{\alpha\in\Gamma})\notin U \end{eqnarray*} therefore $\sigma^n_{\varphi,\mathfrak{w}}(V)\cap U=\varnothing$ for all $n\geq1$, which contradicts the topological transitivity of $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$; hence $Per(\varphi)=\varnothing$. \\ ``$\Leftarrow$'' Suppose $\varphi$ is one--to--one without any periodic point and all the $\mathfrak{w}_\alpha$ are invertible. Suppose $U,V$ are nonempty open subsets of $R^\Gamma$, then: \\ $\bullet$ there exist $r_1,\ldots,r_n\in R$ and distinct $\alpha_1,\ldots,\alpha_n\in\Gamma$ with
$\mathop{\bigcap}\limits_{i=1}^n U(\alpha_i,r_i)\subseteq U$, \\ $\bullet$ there exist $s_1,\ldots,s_m\in R$ and distinct $\beta_1,\ldots,\beta_m\in\Gamma$ with
$\mathop{\bigcap}\limits_{i=1}^m U(\beta_i,s_i)\subseteq V$. \\ We may suppose $\mathop{\bigcap}\limits_{i=1}^n U(\alpha_i,r_i)=\mathop{\prod}\limits_{\alpha\in\Gamma}U_\alpha$ (so for $\alpha\neq\alpha_1,\ldots,\alpha_n$ we have $U_\alpha=R$) and $\mathop{\bigcap}\limits_{i=1}^m U(\beta_i,s_i)=\mathop{\prod}\limits_{\alpha\in\Gamma}V_\alpha$ (so for $\alpha\neq\beta_1,\ldots,\beta_m$ we have $V_\alpha=R$). For each $\alpha\in\Gamma$, $\{\varphi^n(\alpha)\}_{n\geq1}$ is a one--to--one sequence, hence there exists $k_\alpha\geq1$ such that \[\{\varphi^n(\alpha):n\geq k_\alpha\}\cap\{\alpha_1,\ldots,\alpha_n\}=\varnothing\:.\] Let $N=\max(k_{\beta_1},\cdots,k_{\beta_m})$, then for each $p\geq N$ and $\alpha\in\Gamma$, we have the following cases: \begin{itemize} \item[i.] $\alpha\neq\beta_1,\ldots,\beta_m$. In this case $V_\alpha=R$, hence: \begin{eqnarray*} (\mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
U_{\varphi^p(\alpha)})\cap V_\alpha& = &(\mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
U_{\varphi^p(\alpha)})\cap R \\ &=& \mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
U_{\varphi^p(\alpha)}\neq\varnothing\:. \end{eqnarray*} \item[ii.] there exists $j\in\{1,\ldots,m\}$ such that $\alpha=\beta_j$. In this case
using $p\geq N\geq k_{\beta_j}$ we have $\varphi^p(\alpha)=\varphi^p(\beta_j)\neq\alpha_1,\ldots,\alpha_n$,
hence $U_{\varphi^p(\alpha)}=R$, therefore (note that for all $\gamma\in\Gamma$, $\mathfrak{w}_\gamma$ is
invertible which leads to $\mathfrak{w}_\gamma R=R$): \begin{eqnarray*} (\mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
U_{\varphi^p(\alpha)})\cap V_\alpha & = &
(\mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
R)\cap V_\alpha \\ & = & R\cap V_\alpha=V_\alpha\neq\varnothing\:. \end{eqnarray*} \end{itemize} By the above cases: \[\forall\alpha\in\Gamma\:\:( (\mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
U_{\varphi^p(\alpha)})\cap V_\alpha\neq\varnothing)\:,\] thus: \begin{eqnarray*} \sigma^p_{\varphi,\mathfrak{w}}(U)\cap V & \supseteq &
\sigma^p_{\varphi,\mathfrak{w}}(\mathop{\bigcap}\limits_{i=1}^n U(\alpha_i,r_i))\cap
\mathop{\bigcap}\limits_{i=1}^m U(\beta_i,s_i) \\ & = & \sigma^p_{\varphi,\mathfrak{w}}(\mathop{\prod}\limits_{\alpha\in\Gamma}U_\alpha)\cap
(\mathop{\prod}\limits_{\alpha\in\Gamma}V_\alpha) \\ & = & (\mathop{\prod}\limits_{\alpha\in\Gamma}\mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
U_{\varphi^p(\alpha)})\cap (\mathop{\prod}\limits_{\alpha\in\Gamma}V_\alpha) \\ & = & \mathop{\prod}\limits_{\alpha\in\Gamma}((\mathfrak{w}_\alpha\cdots\mathfrak{w}_{\varphi^{p-1}(\alpha)}
U_{\varphi^p(\alpha)})\cap V_\alpha) \neq\varnothing\:. \end{eqnarray*} Hence $\sigma^p_{\varphi,\mathfrak{w}}(U)\cap V\neq\varnothing$, which establishes the topological transitivity of $\sigma_{\varphi,\mathfrak{w}}:R^\Gamma\to R^\Gamma$. \end{proof}
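The witness construction in the ``$\Leftarrow$'' direction can be made concrete in a toy instance (our simplified choice of $p$, not the paper's $N$): $\Gamma=\mathbb{N}$, $\varphi(n)=n+1$, all weights $1$ and $R=\mathbb{F}_2$, i.e. the one-sided full shift, where $\sigma^p(x)_n=x_{n+p}$. Given two basic open (cylinder) sets described by finitely many coordinate constraints, shifting the second set's constraints past the first set's makes them compatible.

```python
# Witness construction for topological transitivity in the toy instance
# Gamma = N, phi(n) = n+1, weights all 1, R = F_2 (the one-sided full
# shift, where sigma^p(x)_n = x_{n+p}).  U and V are cylinder sets given
# as dicts {coordinate: required value in {0, 1}}.

def witness(U, V):
    """Return (p, x) with x in U and sigma^p(x) in V."""
    p = max(U) + 1                     # shift past every constraint of U
    x = {}
    x.update(U)                        # x satisfies U directly
    for n, v in V.items():
        x[n + p] = v                   # sigma^p(x)_n = x_{n+p} = v
    return p, x

U = {0: 1, 2: 1, 5: 0}
V = {0: 0, 1: 1, 4: 1}
p, x = witness(U, V)
assert all(x[n] == v for n, v in U.items())        # x lies in U
assert all(x[n + p] == v for n, v in V.items())    # sigma^p(x) lies in V
print("p =", p)                                    # p = 6 here
```

Since every shifted position $n+p$ exceeds $\max U$, the two sets of constraints never collide, mirroring how the proof chooses $p\geq N$ so that $\varphi^p$ carries the constrained coordinates of one cylinder away from those of the other.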
\begin{theorem}\label{salam-tran 20} The following statements are equivalent (see~\cite{dev} for the analogous result for countable $\Gamma$ and the generalized shift dynamical system $(\sigma_\varphi,R^\Gamma)$): \begin{itemize} \item $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is Devaney chaotic, \item $(R^\Gamma,\sigma_{\varphi,\mathfrak{w}})$ is topologically transitive, \item $\varphi$ is one--to--one without any periodic point and $\mathfrak{w}_\alpha$ is invertible for each $\alpha\in\Gamma$. \end{itemize} \end{theorem}
\begin{proof} Use Theorem~\ref{salam-dense-10}, Lemma~\ref{salam-tran-10} and Lemma~\ref{salam-sen-20}. \end{proof}
\begin{note} One may obtain Theorem~\ref{salam-tran 20} by Theorem~\ref{salam-dense-10}, Lemma~\ref{salam-tran-10} and~\cite[Theorem 4.7]{jaleb} too. \end{note}
\noindent {\small {\bf Fatemah Ayatollah Zadeh Shirazi}, Faculty of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Enghelab Ave., Tehran, Iran \linebreak ({\it e-mail}: f.a.z.shirazi@ut.ac.ir, fatemah@khayam.ut.ac.ir)} \\ {\small {\bf Elaheh Hakimi}, School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Enghelab Ave., Tehran, Iran ({\it e-mail}: elaheh.hakimi@gmail.com)} \\ {\small {\bf Arezoo Hosseini}, Faculty of Mathematics, College of Science, Farhangian University, Pardis Nasibe--shahid sherafat, Enghelab Ave., Tehran, Iran ({\it e-mail}: a.hosseini@cfu.ac.ir)} \\ {\small {\bf Reza Rezavand}, School of Mathematics, Statistics and Computer Science, College of Science, University of Tehran, Enghelab Ave., Tehran, Iran ({\it e-mail}: rezavand@ut.ac.ir)}
\end{document}
\begin{document}
\title{Standing waves of fixed period for $n+1$ vortex filaments} \author{Walter Craig} \address{Department of Mathematics \& Statistics, McMaster University, Hamilton Ontario L8S 4K1 \textsc{Canada}} \author{Carlos Garc\'{\i}a-Azpeitia} \address{Departamento de Matem\'{a}ticas, Facultad de Ciencias, Universidad Nacional Aut\'{o}noma de M\'{e}xico, 04510 M\'{e}xico DF, M\'{e}xico} \thanks{Walter Craig is deceased} \subjclass[2010]{35B10, 35B32} \keywords{Vortex filaments. Periodic solutions. Bifurcation.}
\begin{abstract} The $n+1$ vortex filament problem has explicit solutions consisting of $n$ parallel filaments of equal circulation in the form of nested polygons uniformly rotating around a central filament which has circulation of opposite sign. We show that when the ratio of temporal and spatial periods is fixed at certain rational numbers, these configurations admit an infinite number of homographic time-dependent standing wave patterns that bifurcate from the uniformly rotating central configurations.
\end{abstract} \maketitle
\section*{Introduction}
In reference \cite{KMD95}, a model system of equations was derived for the interaction of near-parallel vortex filaments. The model considers vortex filaments in $\mathbb{R}^{3}$ to be coordinatized by curves $(u_{j} (t,s),s)\in\mathbb{C}\times\mathbb{R}$ for $j=0,\dots,n$ that describe the positions of $n+1$ vertically oriented vortex filaments. Different aspects of this problem have been investigated in \cite{BaMi,BaMi12,GaCr15,Ga17,GaIz12,Po03,Ne01} and references therein. In this article we study central configurations of $n+1$ vortex filaments with $n$ filaments of equal circulation and one filament of opposite circulation.
Let $u_{j}(t,s)$ for $j=1,...,n$ be the positions of the $n$ filaments of circulation $1$ and $u_{0}(t,s)$ the filament of circulation $-\kappa$ with $\kappa>0$. A homographic standing wave of the $n+1$ vortex filament problem with fixed period is a solution of the form \begin{equation} u_{j}(t,s)=ae^{i\omega t}\left( a_{j}+a_{j}u(t/q,s)\right) ~, \label{SW} \end{equation} where $\omega=-a^{-2}$ is real, $q$ is an integer and $u(t,s)$ is a complex $2\pi$-periodic function in $t$ and $s$.
The complex numbers $a_{j}\in\mathbb{C}$ for $j=0,...,n$ lie in a central configuration with $a_{0}=0$. That is, the complex numbers $a_{j}$ satisfy \begin{equation} 0=\sum_{i=1}^{n}\frac{a_{i}}{\left\vert a_{i}\right\vert ^{2}},\qquad -a_{j}=\sum_{i=1(i\neq j)}^{n}\frac{a_{j}-a_{i}}{\left\vert a_{j} -a_{i}\right\vert ^{2}}-\kappa\frac{a_{j}}{\left\vert a_{j}\right\vert ^{2}}, \label{cc} \end{equation} for $j=1,...,n$. There are many configurations that satisfy \eqref{cc}, for example in the form of nested polygons. In particular, an explicit solution of (\ref{cc}) is given by the regular polygon \[ a_{j}=\left( \kappa-\left( n-1\right) /2\right) ^{-1/2}e^{ij\zeta} ,\qquad\zeta=2\pi/n, \] if $\kappa>\left( n-1\right) /2$.
Setting $u=0$ in equation \eqref{SW} corresponds to the family of homographic solutions for which $n$ straight parallel filaments rotate around the central filament with uniform frequency $\omega$ and amplitude $a$. The standing waves of the title of this article correspond to non-trivial $2\pi$-periodic solutions $u(t,s)$ of the equation $Lu+g(u)=0$, where $L$ is the linear operator \begin{equation} L(\omega)u:=-\left( i/q\right) \partial_{t}u-\partial_{s}^{2}u+\omega\left( u+\bar{u}\right) \text{,} \end{equation} and $g$ is an analytic nonlinearity describing the horizontal vortex filament interaction. Our goal is to construct standing wave solutions that bifurcate from the initial configuration $u=0$, for which the frequency $\omega$ is the bifurcation parameter. The solution given by (\ref{SW}) with a $2\pi$-periodic function $u$ has fixed spatial period $s\in\lbrack0,2\pi)$ and temporal period $t\in\lbrack0,2\pi q)$ in a frame of reference that is rotating with frequency $\omega$, i.e. the solution is periodic or quasiperiodic with the two temporal frequencies $\omega$ and $1/q$ when observed in a stationary reference frame. The main theorem is as follows.
\begin{theorem} \label{1}Let $q$ be an integer. For each $k_{0}\in\mathbb{N}$, there is a local continuum of $2\pi$-periodic solutions $u$ bifurcating from the unperturbed configuration with $u=0$ and initial frequency \begin{equation} \omega_{0}=-\frac{1}{q}\left( 1-\frac{1}{2k_{0}^{2}q}\right) \text{.} \label{Om} \end{equation} The local bifurcation $(u,\omega)$ consists of standing waves satisfying the estimates \[ u(t,s)=b\left[ \cos j_{0}t+i\left( 1-k_{0}^{-2}/q\right) \sin j_{0}t\right] \cos k_{0}s+\mathcal{O}(b^{2}), \] with $\omega=\omega_{0}+\mathcal{O}(b^{2})$ and $j_{0}=qk_{0}^{2}-1$, where $b\in\lbrack0,b_{0}]$ gives a local parameterization of the bifurcation curve. Furthermore, these solutions satisfy the following symmetries \begin{equation} u(t,s)=\bar{u}(-t,s)=u(t,-s)\text{.}\nonumber \end{equation}
\end{theorem}
\begin{figure}
\caption{Illustration of the standing waves obtained in Theorem \ref{1} for the case of $n=3$ vortex filaments of equal circulation (blue) and one vortex filament of opposite circulation (red).}
\end{figure} Therefore, for any central configuration $a_{j}$ satisfying (\ref{cc}), the previous theorem gives homographic solutions of the form (\ref{SW}). The periodic solutions $u$ are special in that the ratio of their temporal and spatial periods is rational. In reference \cite{GaCr15} we studied the case of irrational ratios, a small divisor problem for a nonlinear partial differential equation that requires techniques related to KAM theory even for the construction of periodic solutions. Our approach is parallel to that of the semilinear wave and beam equations in one dimension, where time periodic solutions with rational periods (free vibrations) were shown to exist in \cite{AmZe80,Ar,Ki79,Ki00,Ra78}, and later for irrational periods in \cite{Ba,Ra}. On the other hand, the bifurcation of time periodic solutions with irrational periods from stationary solutions is a small divisor problem, for which constructions of solutions by Nash-Moser methods came much later; see \cite{Be07,Bo95,CrWa93} and references therein.
In the present analysis the period ratio of the periodic solution is rational and the small divisor problem does not occur. The key element of the proof consists in the fact that for special temporal frequencies, given by $1/q$, the Schr\"{o}dinger operator $L(\omega)$, when restricted to the orthogonal complement of the null space, has a bounded inverse in the set of frequencies $\omega_{0}\in$ $(-1/q,0)$. Unlike in semilinear wave and beam equations, our equation is a genuine Hamiltonian PDE represented by a Schr\"{o}dinger operator $L(\omega)$ which does not have the regularity that is usually available for other equations, i.e.\ our result can be obtained only in a narrow set of parameters where $L(\omega_{0})$ has a nontrivial kernel. This is also the case for the counter-rotating vortex filament pair studied in \cite{Ga17}, but this is the first time that periodic solutions without small divisors are obtained in a genuine nonlinear Hamiltonian PDE using this method.
In section 1, we set up a Lyapunov-Schmidt reduction to prove the existence of standing waves. In section 2 we solve the range equation for $\omega_{0}\in$ $(-1/q,0)$ using the contraction mapping theorem. In section 3 we use the symmetries of the problem to solve the bifurcation equation by means of the Crandall-Rabinowitz theorem.
\section{Setting the problem}
From \cite{KMD95} the system of model equations for the dynamics of $n+1$ near-parallel vortex filaments, with circulations $\Gamma_{0}=-\kappa$ and $\Gamma_{j}=1$ for $j=1,...,n$, is given by \begin{equation} \partial_{t}u_{j}=i\left( \Gamma_{j}\partial_{ss}u_{j}+\sum_{i=0~(i\not = j)}^{n}\Gamma_{i}\frac{u_{j}-u_{i}}{\left\vert u_{j}-u_{i}\right\vert ^{2} }\right) ,\qquad j=0,...,n. \label{fp} \end{equation} Homographic solutions of the $n+1$ filaments are particular solutions of the form \[ u_{j}(t,s)=w(t,s)a_{j}~, \] where $w(t,s)$ is a complex-valued function and the $a_{j}$ are complex numbers satisfying the condition of a central configuration. In this class of solutions the shape of the intersections of the filaments with a horizontal complex plane is homographic with the shape of their intersections with any other horizontal plane $\{x_{3}=c\}$ for any $c$ and at any time $t$.
For a general central configuration \begin{equation}
-a_{j}=\sum_{i=0(i\not =j)}^{n}\Gamma_{i}\frac{a_{j}-a_{i}}{|a_{j}-a_{i}|^{2} }~,\qquad j=0,...,n, \label{Eqn:CentralConfiguration} \end{equation} homographic solutions satisfy the system of equations (\ref{fp}) if $w(t,s)$ solves the system of equations \[ a_{j}\partial_{t}w(t,s)=i\left( \Gamma_{j}a_{j}\partial_{ss}w(t,s)-\frac {w(t,s)}{\left\vert w(t,s)\right\vert ^{2}}a_{j}\right) ,\qquad j=0,...,n. \] In the particular case that $a_{0}=0$ in the central configuration, the condition for the configuration $a_{j}$ becomes (\ref{cc}) and the system of equations is satisfied by solutions of the simple equation, \begin{equation} \partial_{t}w=i\left( \partial_{ss}w-\frac{w}{\left\vert w\right\vert ^{2} }\right) \text{.} \label{pde} \end{equation} Therefore, $u_{j}(t,s)=w(t,s)a_{j}$ is a homographic solution of the vortex filament problem if the configuration $a_{j}$ satisfies (\ref{cc}) and $w$ is a solution of equation (\ref{pde}).
A particular solution of (\ref{cc}) is given by a regular polygon $a_{j}=re^{ij\zeta}$ with radius $r=\left( \kappa-\left( n-1\right) /2\right) ^{-1/2}$ if $\kappa>\left( n-1\right) /2$, because \[ \sum_{i=1(i\neq j)}^{n}\frac{a_{j}-a_{i}}{\left\vert a_{j}-a_{i}\right\vert ^{2}}-\kappa\frac{a_{j}}{\left\vert a_{j}\right\vert ^{2}}=-\left( \kappa-\frac{n-1}{2}\right) \frac{a_{j}}{r^{2}}=-a_{j}~. \] Also, there are other solutions of (\ref{cc}) corresponding to nested polygons.
Equation \eqref{pde} has the family of solutions $w=ae^{i\omega t}$ with \[ \omega=-a^{-2}<0~, \] corresponding to $n$ vortex filaments uniformly rotating in the central configuration $a_{j}$ with amplitude $a$ and frequency $\omega$. We look for bifurcating solutions of equation \eqref{pde} of the form \begin{equation} w(t,s)=ae^{i\omega t}(1+u(t/q,s)), \label{cov} \end{equation} where $q$ is an integer and $u(t,s)$ is $2\pi$-periodic in $t$ and $s$. This is a solution that has fixed temporal and spatial periodicity when viewed in a coordinate frame rotating about the $x_{3}$-axis with frequency $\omega$. When $u=0$ the solution corresponds to $n$ vortex filaments uniformly rotating in the central configuration $a_{j}$. The equation \eqref{pde} for a perturbation from this configuration is \begin{equation} \left( i/q\right) \partial_{t}u=-u_{ss}+\omega(u+\bar{u})+g(u,\overline {u})~\text{,} \label{pdev} \end{equation} where the nonlinearity $g$ is given by \[ g(u,\bar{u})=\omega\frac{\bar{u}^{2}}{1+\bar{u}}=\omega\sum_{n=2}^{\infty }(-1)^{n}\bar{u}^{n}~. \]
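As a quick numerical sanity check (not part of the paper; the sample values of $\omega$ and $\bar{u}$ below are arbitrary choices with $|\bar{u}|<1$), the geometric-series identity behind the expansion of $g$ can be verified directly:

```python
# Numeric check of g(u, ubar) = omega*ubar^2/(1 + ubar)
#                            = omega * sum_{n>=2} (-1)^n ubar^n,  |ubar| < 1.
# The sample values below are illustrative, not taken from the paper.
omega = -0.7
ub = 0.3 + 0.2j                  # plays the role of \bar{u}, |ub| ~ 0.36 < 1
closed_form = omega * ub**2 / (1 + ub)
partial_sum = omega * sum((-1)**n * ub**n for n in range(2, 60))
# the truncated tail is of size ~ |ub|^60, far below double precision
```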
In order to simplify the analysis of symmetries, the equation is represented in real coordinates $u(t,s)=(x(t,s),y(t,s))\in\mathbb{R}^{2}$, i.e., the equation is equivalent to \[ Lu+g(u)=0, \] where $g(u)=\mathcal{O}\left( \left\vert u\right\vert ^{2}\right) $ is analytic for $\left\vert (x,y)\right\vert <1$ and $L$ is the linear operator \begin{equation} Lu:=-\left( 1/q\right) J\partial_{t}u-\partial_{s}^{2}u+\omega\left( I+R\right) u\text{,} \end{equation} where $J$ is the matrix representing multiplication by $i$ and $R=diag(1,-1)$.
We define the Hilbert space $L^{2}({\mathbb{T}}^{2};\mathbb{R}^{2})$, with the inner product \[ \left\langle u_{1},u_{2}\right\rangle =\frac{1}{(2\pi)^{2}}\int_{{\mathbb{T} }^{2}}u_{1}\cdot u_{2}\,dtds\text{.} \] A function $u\in L^{2}$ can be written in a Fourier basis as \[ u=\sum_{(j,k)\in\mathbb{Z}^{2}}u_{j,k}e^{i(jt+ks)},\qquad u_{j,k}=\bar {u}_{-j,-k}\in\mathbb{C}^{2}. \] The Sobolev space $H^{s}$ is the usual subspace of functions in $L^{2}$ with bounded norm \[ \left\Vert u\right\Vert _{H^{s}}^{2}=\sum_{(j,k)\in\mathbb{Z}^{2}}\left\vert u_{j,k}\right\vert ^{2}\left( j^{2}+k^{2}+1\right) ^{s}\text{.} \] This space has the Banach algebra property for $s>1$, \[ \left\Vert uv\right\Vert _{H^{s}}\leq C\left\Vert u\right\Vert _{H^{s} }\left\Vert v\right\Vert _{H^{s}}~. \] The Banach algebra property implies that the nonlinear operator $g(u)=\mathcal{O}(\left\Vert u\right\Vert _{H^{s}}^{2})$ is well defined and continuous for $\left\Vert u\right\Vert _{H^{s}}<1$.
The linear operator $L:D(L)\rightarrow H^{s}$ is continuous when the domain \[ D(L)=\{u\in H^{s}:Lu\in H^{s}\}~, \] is completed under the graph norm \[ \left\Vert u\right\Vert _{L}^{2}=\left\Vert Lu\right\Vert _{H^{s}} ^{2}+\left\Vert u\right\Vert _{H^{s}}^{2}~. \] In Fourier basis, the operator $L:D(L)\rightarrow H^{s}$ is given by \[ Lu=\sum_{(j,k)\in\mathbb{Z}^{2}}M_{j,k}u_{j,k}e^{i(jt+ks)}~, \]
where \[ M_{j,k}=\left( \begin{array} [c]{cc} k^{2}+2\omega & i\left( j/q\right) \\ -i\left( j/q\right) & k^{2} \end{array} \right) \text{.} \] Then, the eigenvalues and eigenvectors of $L$ are \begin{align} \lambda_{j,k,l} & =k^{2}+\omega+l\sqrt{\left( j/q\right) ^{2}+\omega^{2} }~,\\ e_{j,k,l} & =\left( \begin{array} [c]{c} -\omega-l\sqrt{\left( j/q\right) ^{2}+\omega^{2}}\\ i\left( j/q\right) \end{array} \right) , \end{align} for $(j,k,l)\in\mathbb{Z}^{2}\times\mathbb{Z}_{2}$, where $\mathbb{Z} _{2}=\{1,-1\}$ is a group under the product.
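The displayed eigenvalues follow from the trace and determinant of the Hermitian symbol $M_{j,k}$. The following sketch (the values of $j,k,q,\omega$ are illustrative choices, not from the paper) confirms the formula numerically:

```python
# Sanity check: the 2x2 Fourier symbol M_{j,k} of L has eigenvalues
#   lambda_{j,k,l} = k^2 + omega + l * sqrt((j/q)^2 + omega^2),  l = +-1.
import math

def symbol_eigenvalues(j, k, q, omega):
    # M = [[k^2 + 2*omega, i*j/q], [-i*j/q, k^2]] is Hermitian; its two
    # (real) eigenvalues follow from the trace and the determinant.
    tr = (k**2 + 2*omega) + k**2
    det = (k**2 + 2*omega) * k**2 - (j/q)**2   # (i*j/q)*(-i*j/q) = (j/q)^2
    disc = math.sqrt(tr**2 / 4 - det)
    return (tr/2 - disc, tr/2 + disc)

j, k, q, omega = 3, 2, 2, -0.3                 # illustrative values
r = math.sqrt((j/q)**2 + omega**2)
lo, hi = symbol_eigenvalues(j, k, q, omega)
# lo, hi should match k^2 + omega - r and k^2 + omega + r respectively
```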
The eigenvalue $\lambda_{j,k,1}$ is always positive, and $\lambda _{j,k,-1}(\omega_{0})=0$ if
\[ \omega_{0}=\left( \left( j/qk\right) ^{2}-k^{2}\right) /2<0. \] Given that $L(\omega_{0})$ has a nontrivial kernel, we expect bifurcation of solutions of $L(\omega)u+g(u)=0$ as $\omega$ crosses $\omega_{0}$.
\begin{definition} We define $N$ as the subset of all lattice points corresponding to zero eigenvalues, \[ N(\omega_{0})=\left\{ \left( j,k,-1\right) \in\mathbb{Z}^{2}\times \mathbb{Z}_{2}:\lambda_{j,k,-1}\left( \omega_{0}\right) =0\right\} ~\text{.} \]
\end{definition}
By definition we have that the kernel of $L(\omega_{0})$ is generated by eigenfunctions $e_{j,k,l}e^{i(jt+ks)}$ with $\left( j,k,l\right) \in N$. Notice that sites other than $(\pm j_{0},\pm k_{0},-1)$ may be present in $N(\omega_{0})$ due to resonances. The Lyapunov-Schmidt reduction separates the kernel and the range equations using the projections \[ Qu = \sum_{\left( j,k,l\right) \in N} u_{j,k,l}e_{j,k,l}e^{i(jt+ks)} ~,\qquad Pu = (I-Q)u ~. \] Setting \[ u = v+w ~, \qquad v = Qu ~, \qquad w = Pu~\text{,} \]
the equation $Lu+g(u)=0$ is equivalent to the kernel equation \begin{equation} QLQv + Qg(v+w) = 0~, \end{equation} and the range equation \begin{equation} PLPw + Pg(v+w)= 0~. \end{equation}
\section{The range equation}
In this section, the range equation is solved as a fixed point $w(\omega,v)\in H^{s}$ of the operator \[ Kw=-\left( PLP\right) ^{-1}g(w+v,\omega)~\text{.} \] The local solution $w=w(\omega,v)$ is provided by an application of the contraction mapping theorem, where we only need to prove that $\left( PLP\right) ^{-1}:PH^{s}\rightarrow PH^{s}$ is well defined and bounded. For this, we will establish lower bounds on the eigenvalues $\lambda_{j,k,l}$.
For $l=1$, we clearly have \[ \lambda_{j,k,1}=k^{2}+\omega+\sqrt{\left( j/q\right) ^{2}+\omega^{2}}\gtrsim k^{2}+\left\vert j\right\vert ~\text{.} \] For $l=-1$, we have the following estimate,
\begin{lemma} For $2\varepsilon< \left\vert \omega\right\vert < 1/q-2\varepsilon$, we have \begin{equation} \left\vert \lambda_{j,k,-1}(\omega)\right\vert \gtrsim\varepsilon\text{ for }\left( j,k,l\right) \in N^{c}\text{.} \end{equation}
\end{lemma}
\begin{proof} In the case $\left\vert j\right\vert /q\neq k^{2}$, the inequality $\left\vert k^{2}-\left\vert j\right\vert /q\right\vert \geq1/q$ holds and \[ \left\vert \lambda_{j,k,-1}(\omega_{0})\right\vert \geq\left\vert k^{2}-\left\vert j\right\vert /q\right\vert -\left\vert \left\vert j\right\vert /q+\omega-\sqrt{\left( j/q\right) ^{2}+\omega^{2}}\right\vert ~. \] Since $\lim_{x\rightarrow\infty}\left( x+\omega-\sqrt{x^{2}+\omega^{2} }\right) =\omega$, we have \[ \left\vert \left\vert j\right\vert /q+\omega-\sqrt{\left( j/q\right) ^{2}+\omega^{2}}\right\vert <\left\vert \omega\right\vert +\varepsilon~, \] for $\left\vert k\right\vert +\left\vert j\right\vert \geq M$ with $M$ big enough. Therefore, \[ \left\vert \lambda_{j,k,-1}(\omega)\right\vert \geq\left\vert k^{2}-\left\vert j\right\vert /q\right\vert -\left\vert \omega\right\vert -\varepsilon\geq \frac{1}{q}-\left\vert \omega\right\vert -\varepsilon\geq\varepsilon~. \] In the case $\left\vert j\right\vert /q=k^{2}$, we have \[ \left\vert \lambda_{j,k,-1}(\omega)\right\vert =\left\vert k^{2}+\omega -\sqrt{k^{4}+\omega^{2}}\right\vert \geq\left\vert \omega\right\vert -\varepsilon\geq\varepsilon~, \] for $\left\vert k\right\vert $ big enough. In both cases we have that $\left\vert \lambda_{j,k,-1}(\omega)\right\vert \geq\varepsilon$ if $\left\vert k\right\vert +\left\vert j\right\vert \geq M$ with $M$ big enough. We conclude that the estimate holds except for a finite number of points $\left( j,k\right) \in\mathbb{Z}^{2}$. Therefore, there is a constant $c$ such that the estimate $\left\vert \lambda_{j,k,-1}(\omega)\right\vert \geq c\varepsilon$ holds for all $(j,k,-1)\in N^{c}$. \end{proof}
From the previous estimates we have that $\left( PLP\right) ^{-1}$ is a bounded operator with \[ \left\Vert \left( PLP\right) ^{-1}w\right\Vert _{H^{s}}\lesssim \varepsilon^{-1}\left\Vert w\right\Vert _{H^{s}}\text{.} \]
\begin{proposition} \label{2}Assume $2\varepsilon<\left\vert \omega\right\vert <1/q-2\varepsilon$. There is a unique continuous solution $w(v,\omega)\in H^{s}$ of the range equation defined for $(v,\omega)$ in a small neighborhood of $(0,\omega_{0} )\in\ker L(\omega_{0})\times\mathbb{R}$ such that \begin{equation} \left\Vert w(v,\omega)\right\Vert _{H^{s}}\lesssim\varepsilon^{-1}\left\Vert v\right\Vert ^{2}\text{,} \end{equation} for small $\varepsilon$. \end{proposition}
\begin{proof} By the Banach algebra property of $H^{s}$, the operator \[ g(w)=\mathcal{O}(\left\Vert w\right\Vert _{H^{s}}^{2}):B_{\rho}\rightarrow H^{s} \] is well defined on the domain $B_{\rho}=\{w\in H^{s}:\left\Vert w\right\Vert _{H^{s}}<\rho\}$ for $\rho<1$. We can choose $\varepsilon$ small enough such that the hypotheses of the previous lemma hold. Therefore, \begin{align*} Kw & =-\left( PLP\right) ^{-1}g(w+v,\omega)=\mathcal{O}(\varepsilon ^{-1}\left\Vert w\right\Vert _{H^{s}}^{2})\\ K & :B_{\rho}\subset PH^{s}\rightarrow PH^{s}\text{,} \end{align*} is well defined and continuous. Moreover, it is a contraction for $\rho$ of order $\rho=\mathcal{O}(\varepsilon)$. By the contraction mapping theorem, there is a unique continuous fixed point $w(v,\omega)\in B_{\rho}$. The estimate $\left\Vert w(v,\omega)\right\Vert _{H^{s}}\lesssim\varepsilon ^{-1}\left\Vert v\right\Vert ^{2}$ is obtained from \[ \left\Vert Kw\right\Vert _{H^{s}}\lesssim\varepsilon^{-1}\left( \left\Vert w\right\Vert _{H^{s}}^{2}+\left\Vert v\right\Vert ^{2}\right) . \]
\end{proof}
\begin{remark} Since $\left( PLP\right) ^{-1}$ is continuous but not compact, we do not automatically obtain the regularity of the solutions by bootstrapping arguments. Instead, the regularity is obtained using the Sobolev embedding $H^{s}\subset C^{2}$ for $s\geq3$. \end{remark}
\section{The bifurcation equation}
\begin{proposition} For $k_{0}\in\mathbb{N}$, we define \begin{equation} \omega_{0}=-\frac{1}{q}\left( 1-\frac{1}{2qk_{0}^{2}}\right) ,\qquad j_{0}=qk_{0}^{2}-1. \end{equation} For these frequencies we have $\omega_{0}\in\left( -1/q,0\right) $ and \[ N(\omega_{0})=\{(0,0,-1),\left( \pm j_{0},\pm k_{0},-1\right) \}. \]
\end{proposition}
\begin{proof} Since $\lambda_{j,k,-1}=k^{2}+\omega-\sqrt{\left( j/q\right) ^{2}+\omega ^{2}}$, then $\lambda_{j,0,-1}(\omega)=0$ only if $j=0$. For $k_{0} \in\mathbb{N}^{+}$, the condition $\lambda_{j_{0},k_{0},-1}(\omega_{0})=0$ is satisfied only if \[ \omega_{0}=\left( \left( j_{0}/qk_{0}\right) ^{2}-k_{0}^{2}\right) /2\text{.} \] In addition, the condition $\omega_{0}\in\left( -1/q,0\right) $ holds if and only if the lattice point $\left( j_{0},k_{0}\right) \in\mathbb{N}^{2}$ satisfies $j_{0}=qk_{0}^{2}-1$. In this case \[ \omega_{0}=\frac{1}{2}\left( \left( k_{0}-\frac{1}{qk_{0}}\right) ^{2}-k_{0}^{2}\right) =-\frac{1}{q}\left( 1-\frac{1}{2qk_{0}^{2}}\right) , \] then the frequency $\omega_{0}$ is determined uniquely for each point $\left( j_{0},k_{0}\right) \in\mathbb{N}^{2}$ because $\omega_{0}$ is decreasing in $k_{0}$. Therefore, we have that $(0,0,-1)$ and $\left( \pm j_{0},\pm k_{0},-1\right) $ are the only elements in $N(\omega_{0})$. \end{proof}
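A short numerical check of the proposition (with the arbitrary illustrative choice $q=3$; not part of the paper) confirms that $\omega_{0}\in(-1/q,0)$ and $\lambda_{j_{0},k_{0},-1}(\omega_{0})=0$ for the stated $j_{0}$ and $\omega_{0}$:

```python
# Numerical check: for j_0 = q*k_0^2 - 1 and
# omega_0 = -(1/q)*(1 - 1/(2*q*k_0^2)), the eigenvalue
# lambda_{j_0, k_0, -1}(omega_0) vanishes and omega_0 lies in (-1/q, 0).
import math

def lam_minus(j, k, q, omega):
    # lambda_{j,k,-1} = k^2 + omega - sqrt((j/q)^2 + omega^2)
    return k**2 + omega - math.sqrt((j/q)**2 + omega**2)

q = 3                                     # illustrative choice of q
results = []
for k0 in (1, 2, 3):
    j0 = q * k0**2 - 1
    omega0 = -(1.0/q) * (1 - 1.0/(2*q*k0**2))
    results.append((omega0, lam_minus(j0, k0, q, omega0)))
```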
Since $\ker L(\omega_{0})$ has dimension $5$ for $\omega_{0}\in\left( -1/q,0\right) $, we need to reduce the bifurcation equation to a subspace of dimension one in order to apply the Crandall-Rabinowitz theorem. This is attained by exploiting the equivariance of the system \eqref{pdev} under the action of the group $G=O(2)\times O(2)$ given by \[ \rho(\tau,\sigma)u(t,s)=u(t+\tau,s+\sigma)~, \] for the abelian components, and for the reflections, \[ \rho(\kappa_{1})u(t,s)=Ru(-t,s),\quad\rho(\kappa_{2})u(t,s)=u(t,-s)~, \] where $R=diag(1,-1)$. By the uniqueness of $w(v,\omega)$, the bifurcation equation has the same equivariant properties as the differential equation. This property is used in the following proposition to reduce the bifurcation equation to a subspace of dimension one.
\begin{proposition} \label{3}The bifurcation equation has a local continuum of $2\pi$-periodic solutions bifurcating from $(v,\omega)=(0,\omega_{0})$ with estimates \begin{equation} v(t,s)=b\left( \begin{array} [c]{c} \cos j_{0}t\\ \left( 1-k_{0}^{-2}/q\right) \sin j_{0}t \end{array} \right) \cos k_{0}s+\mathcal{O}(b^{2})\text{,\qquad}\omega=\omega _{0}+\mathcal{O}(b^{2}), \end{equation} where $b\in\lbrack0,b_{0}]$ gives a parameterization of the local bifurcation, and symmetries \begin{equation} v(t,s)=Rv(-t,s)=v(t,-s)=v(t+\pi/j_{0},s+\pi/k_{0}). \end{equation}
\end{proposition}
\begin{proof} In Fourier components \[ v=\sum_{\left( j,k,l\right) \in N}u_{j,k,l}e_{j,k,l}e^{i(jt+ks)},\qquad u_{j,k,l}=\bar{u}_{-j,-k,l}, \] the action of the abelian part of the group $G$ is given by \[ \rho(\varphi)u_{j,k,l}=e^{ij\varphi}u_{j,k,l},\qquad\rho(\theta)u_{j,k,l} =e^{ik\theta}u_{j,k,l}~. \] Since \[ e_{j,k,-1}=\left( \begin{array} [c]{c} -\omega_{0}-\sqrt{\left( j/q\right) ^{2}+\omega_{0}^{2}}\\ i\left( j/q\right) \end{array} \right) =\left( \begin{array} [c]{c} k^{2}\\ i(j/q) \end{array} \right) , \] then $Re_{j,k,-1}=e_{-j,k,-1}$ and $e_{j,k,-1}=e_{j,-k,-1}$. Therefore, we have \[ \rho(\kappa_{1})v=\sum_{\left( j,k,l\right) \in N}u_{j,k,l}e_{-j,k,-1} e^{i(-jt+ks)}=\sum_{\left( j,k,l\right) \in N}u_{-j,k,l}e_{j,k,-1} e^{i(jt+ks)}, \] and \[ \rho(\kappa_{2})v=\sum_{\left( j,k,l\right) \in N}u_{j,k,l}e_{j,k,-1} e^{i(jt-ks)}=\sum_{\left( j,k,l\right) \in N}u_{j,-k,l}e_{j,k,-1} e^{i(jt+ks)}\text{.} \] Therefore, the action of the reflections in Fourier components is given by \[ \rho(\kappa_{1})u_{j,k,l}=u_{-j,k,l}=\bar{u}_{j,-k,l}~,\qquad\rho(\kappa _{2})u_{j,k,l}=u_{j,-k,l}~. \]
The irreducible representations under the action of $O(2)\times O(2)$ correspond to the subspaces \[ (u_{j_{0},k_{0},-1},u_{j_{0},-k_{0},-1})\in\mathbb{C}^{2}. \] The linear operator $L$ is diagonal in these irreducible representations with eigenvalue $\lambda_{j_{0},k_{0},-1}$ of complex multiplicity two. The group \[ S=\left\langle \kappa_{1},\kappa_{2},(\pi/j_{0},\pi/k_{0})\right\rangle \] has fixed point space $(u_{j,k,-1},u_{j,-k,-1})=(b,b)$ for $b\in\mathbb{R}$ in this representation. By setting \[ \ker L^{S}(\omega_{0}):=\ker L(\omega_{0})\cap\emph{Fix~}(S)~\text{,} \] the bifurcation equation \begin{equation} QLQv+Qg(v+w(v,\omega)):\ker L^{S}(\omega_{0})\times\mathbb{R}\rightarrow\ker L^{S}(\omega_{0}) \label{BE} \end{equation} is well defined by the equivariance properties. Moreover, since $u_{0,0,-1}$ is not fixed by the subgroup $S$, $\ker L^{S}(\omega_{0})$ is generated by the simple eigenfunction \[ \sum_{j=\pm j_{0},k=\pm k_{0}}e_{j,k,-1}e^{i(jt+ks)}=4\left( \begin{array} [c]{c} k_{0}^{2}\cos j_{0}t\\ j_{0}/q\sin j_{0}t \end{array} \right) \cos k_{0}s\text{.} \]
Since $\ker L^{S}(\omega_{0})$ has dimension one, the local bifurcation for $\omega$ close to $\omega_{0}$ follows from the Crandall-Rabinowitz theorem applied to the bifurcation equation (\ref{BE}). It is only necessary to verify that $\partial_{\omega}L(\omega)f$ is not in the range of $L$ for $f\in\ker L^{S}(\omega_{0})$, which follows from the fact that \[ \partial_{\omega}L(\omega)f=\left( I+R\right) f~. \] The estimates $\omega=\omega_{0}+\mathcal{O}(b)$ and \[ v(t,s)=b\left( \begin{array} [c]{c} \cos j_{0}t\\ \left( 1-k_{0}^{-2}/q\right) \sin j_{0}t \end{array} \right) \cos k_{0}s+\mathcal{O}(b^{2}) \] are consequences of the Crandall-Rabinowitz estimates. Moreover, the ${\mathbb{S}}^{1}$-action of the element $\varphi=\pi$ on the generated kernel is given by $\rho(\varphi)=-1$. This symmetry implies that the bifurcation equation is odd and $\omega=\omega_{0}+\mathcal{O}(b^{2})$. \end{proof}
The main theorem follows from this proposition and the fact that $u=v+w(v,\omega)$ with \[ \left\Vert w(v,\omega)\right\Vert _{H^{s}}=\mathcal{O}\left( \left\Vert v\right\Vert ^{2}\right) =\mathcal{O}\left( b^{2}\right) . \]
\vskip0.25cm \textbf{Acknowledgements.} W.C. was partially supported by the Canada Research Chairs Program and NSERC through grant number 238452--16. C.G.A was partially supported by a UNAM-PAPIIT project IN115019. We acknowledge the assistance of Ramiro Chavez Tovar with the preparation of the figure.
\end{document}
\begin{document}
\begin{frontmatter}
\title{Eighth-order Derivative-Free Family of Iterative Methods for Nonlinear Equations } \author[rvt]{ Laila M Assas } \ead{ lmassas@uqu.edu.sa} \author[focal]{Fayyaz Ahmad \corref{cor1}\fnref{fn1} } \ead{fayyaz.ahmad@upc.edu} \author[els]{Malik Zaka Ullah } \ead{ mzhussain@kau.edu.sa}
\cortext[cor1]{Corresponding author} \fntext[fn1]{This research was supported by Spanish MICINN grants AYA2010-15685.}
\address[rvt]{Department of Mathematics, Umm al Qura University, Makkah, Kingdom of Saudi Arabia } \address[focal]{Dept. de F\'{\i}sica i Enginyeria Nuclear, Universitat Polit\`ecnica de Catalunya, Barcelona 08036, Spain } \address[els]{ Department of Mathematics, King Abdulaziz University, Jeddah 21589, Kingdom of Saudi Arabia }
\begin{abstract} In this note, we present an eighth-order derivative-free family of iterative methods for nonlinear equations. The proposed family attains the optimal eighth order of convergence in the sense of the Kung and Traub conjecture \cite{5} and is based on the Steffensen derivative approximation used in Newton's method. As a final step, with computational purposes in mind, a derivative-free polynomial interpolation is used in order to reach the optimal order of convergence with only four functional evaluations. Numerical experiments and a few related issues are discussed at the end of this note. \end{abstract}
\begin{keyword} Non-linear equations \sep Steffensen's method \sep Polynomial interpolation \sep Iterative methods \end{keyword}
\end{frontmatter}
\section{Introduction} \label{Intro} Let $f:D \subseteq \Re \longrightarrow \Re $ be a sufficiently differentiable function of a single variable in some neighborhood $D$ of $ \alpha $, where $\alpha $ is a simple root ($f'(\alpha) \neq 0 $) of the nonlinear algebraic equation $f(x)=0$. The well-known Newton method is defined by the iteration \begin{align} \label{eqns1}
x_{n+1} &= x_n - \frac{f(x_n)}{f'(x_n)}, \end{align} which shows second-order convergence. One can easily obtain Steffensen's approximation of the first derivative as \begin{align} \label{eqns2} \begin{cases} f(x_n-\kappa f(x_n)) &\approx f(x_n)-\kappa f(x_n) f'(x_n), \\ \kappa f(x_n) f'(x_n) &\approx f(x_n)- f(x_n-\kappa f(x_n)), \\ f'(x_n) &\approx \frac{1}{\kappa} \frac{f(x_n)- f(x_n-\kappa f(x_n))}{ f(x_n)}. \end{cases} \end{align} If we substitute the derivative approximation (\ref{eqns2}) in (\ref{eqns1}), we obtain Steffensen's second-order accurate derivative-free iterative method for nonlinear equations \cite{1}. \begin{align}\label{eqns3} \begin{cases} w_n &= x_n-\kappa f(x_n), \\ x_{n+1} &= x_n - \kappa \frac{ f(x_n)^2}{f(x_n)- f(w_n)}. \end{cases} \end{align}
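A minimal sketch of Steffensen's iteration (\ref{eqns3}) follows; the test function, starting point, value of $\kappa$ and stopping tolerance are illustrative choices, not taken from the paper:

```python
# Steffensen's derivative-free iteration, scheme (3):
#   w_n     = x_n - kappa*f(x_n)
#   x_{n+1} = x_n - kappa*f(x_n)^2 / (f(x_n) - f(w_n))
def steffensen(f, x0, kappa=1.0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once the residual is tiny
            break
        w = x - kappa * fx
        denom = fx - f(w)          # kappa*f(x_n)*f'(x_n) approximately
        x = x - kappa * fx * fx / denom
    return x

# illustrative test problem: f(x) = x^2 - 2, root sqrt(2)
root = steffensen(lambda x: x**2 - 2.0, 1.5)
```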
In 2012, an optimal eighth-order iterative method \cite{2} was proposed by Y. Khan et al. as \begin{align}\label{eqns4}
\begin{cases}
y_n &= x_n-\frac{f(x_n)}{f'(x_n)}, \\
z_n &= y_n -G\Bigg( \frac{f(y_n)}{f(x_n)} \Bigg) \frac{f(y_n)}{f'(x_n)}, \\
x_{n+1} &= z_n - \frac{\mu}{\mu + \nu q_n^{2}} \frac{f(z_n)}{f'(z_n)},
\end{cases} \end{align} where $\mu \neq 0$, $\nu \in \Re $, $q_n = f(z_n)/f(x_n)$, $G(t)$ is a real-valued function with $G(0)=1$, $G'(0)=2$, $G''(0)< \infty$, and
\begin{align} \label{eqns5} \begin{cases}
f'(z_n) &\approx K-C(y_n-z_n)-D(y_n-z_n)^2,\\
H &= \frac{f(x_n)-f(y_n)}{x_n-y_n}, \\
K &= \frac{f(x_n)-f(z_n)}{y_n-z_n}, \\
D &= \frac{f^{'}(x_n)- H }{(x_n-y_n)(x_n-z_n)}-\frac{H-K}{(x_n-z_n)^2}, \\
C &= \frac{H-K}{x_n-z_n}-D( x_n+ y_n-2 z_n).
\end{cases} \end{align} In the original draft of \cite{2} the expression for $C$ contains a typographical error, which is corrected here. The polynomial interpolation approximation (\ref{eqns5}) for $f'(z_n)$ is given in \cite{3}. Clearly, the iterative scheme (\ref{eqns4}) is not derivative-free. The main contribution of this paper is to adapt the iterative scheme (\ref{eqns4}) by introducing Steffensen's approximation for $f'(x_n)$ and then constructing a derivative-free approximation for $f'(z_n)$ without reducing the order of convergence. \section{Construction of derivative-free family} First we construct an interpolation polynomial approximation for $f'(z_n)$. Given the four functional values $f(x_n)$, $f(w_n)$ (with $w_n$ defined in (\ref{eqns3})), $f(y_n)$ and $f(z_n)$, one can construct a degree-three polynomial as follows: \begin{align} \label{eqns6}
\begin{cases}
p(\phi) &= f(y_n) + r_1 (\phi - y_n) + r_2 (\phi - y_n)^2 + r_3 (\phi - y_n)^3, \\
p'(\phi) &= r_1 + 2 r_2 (\phi - y_n) + 3 r_3 (\phi - y_n)^2.
\end{cases} \end{align} By using four functional values, we get the following system of equations: \begin{align} \label{eqns7}
\begin{cases} v_1 &= r_1 a + r_2 a^2 + r_3 a^3, \\ v_2 &= r_1 b + r_2 b^2 + r_3 b^3, \\ v_3 &= r_1 c + r_2 c^2 + r_3 c^3,
\end{cases} \end{align} where \begin{align}\label{eqns8}
\begin{cases}
v_1&= f(x_n) - f(y_n ) ,\\
v_2&= f(z_n) - f(y_n ) , \\
v_3&= f(w_n) - f(y_n ) , \\
a&=x_n-y_n , \\
b&=z_n-y_n , \\
c&=w_n-y_n .
\end{cases} \end{align}
Solving (\ref{eqns7}) for $r_1$, $r_2$ and $r_3$ and substituting them into (\ref{eqns6}) yields the following approximation
for $f'(z_n)$: \begin{align} \label{eqns9}
f'(z_n) &\approx \psi_n =\frac{b (b-c) }{(a-b)(a-c)} \frac{v_1}{a} +\frac{-3 b^2+2 b c+2 a b-a c }{(a-b)(b-c)} \frac{v_2}{b}+
\frac{b (b-a)}{(a-c)(b-c)} \frac{v_3}{c}.
\end{align}
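Since (\ref{eqns9}) is the derivative of a cubic interpolant, it reproduces $f'(z_n)$ exactly whenever $f$ itself is a cubic polynomial. The short Python check below (our own sketch, with arbitrarily chosen sample points) exploits this fact:

```python
def psi(fx, fw, fy, fz, x, w, y, z):
    # Derivative estimate (9): p'(z) for the degree-three polynomial
    # interpolating f at the four points x, w, y, z (expanded about y).
    a, b, c = x - y, z - y, w - y
    v1, v2, v3 = fx - fy, fz - fy, fw - fy
    return (b * (b - c) / ((a - b) * (a - c)) * v1 / a
            + (-3*b*b + 2*b*c + 2*a*b - a*c) / ((a - b) * (b - c)) * v2 / b
            + b * (b - a) / ((a - c) * (b - c)) * v3 / c)

# For a cubic f the interpolant equals f, so psi == f'(z) up to rounding.
f = lambda t: t**3 + 2.0*t - 1.0
x, w, y, z = 0.9, 0.7, 0.45, 0.4
print(psi(f(x), f(w), f(y), f(z), x, w, y, z))  # ≈ f'(0.4) = 2.48
```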
We consider the following family of iterative methods for nonlinear equations:
\begin{align} \label{eqns10}
\begin{cases}
w_n &= x_n-\kappa f(x_n), \\
y_n &=x_n - \kappa \frac{ f(x_n)^2}{f(x_n)- f(w_n)},\\
z_n &= y_n - \kappa \frac{ f(y_n) f(x_n)}{f(x_n)- f(w_n)} G(t_1,t_2), \\
x_{n+1} &= z_n - \frac{f(z_n)}{\psi_n} H(s_1,s_2),
\end{cases}
\end{align} where $t_1= \frac{f(y_n)}{f(x_n)}$, $t_2=\frac{f(y_n)}{f(w_n)}$, $s_1=\frac{f(z_n)}{f(x_n)}$, $s_2=\frac{f(z_n)}{f(w_n)}$ and $\kappa (\neq 0) \in \Re$. \section{Convergence analysis} We state the following theorem about the order of convergence of the family described in (\ref{eqns10}).
\newtheorem{mydef1}{Theorem} \begin{mydef1}
Let $f: D\subseteq \Re \rightarrow \Re $ be a sufficiently differentiable function on an open interval $D$, and let $\alpha \in D$ be a simple root of $f(x)=0$.
If $x_0$ is chosen sufficiently close to $\alpha$, then the iterative scheme given in (\ref{eqns10}) converges to $\alpha$. Moreover, if $G$
and $H$ satisfy
\begin{align} \label{eqns11}
G(0,0)&=1,\ \frac{\partial G}{\partial t_1} \Bigg|_{(0,0)} =1,\ \frac{\partial G}{\partial t_2} \Bigg|_{(0,0)} =1,\
H(0,0)=1,\ \frac{\partial H}{\partial s_1}\Bigg|_{(0,0)} =0,\ \frac{\partial H}{\partial s_2} \Bigg|_{(0,0)} =0,
\end{align}
and $\frac{\partial^2 G}{\partial t_1^2}$, $\frac{\partial^2 G}{\partial t_2^2}$, $\frac{\partial^2 G}{ \partial t_1 \partial t_2 }$,
$\frac{\partial^2 H}{\partial s_1^2}$, $\frac{\partial^2 H}{\partial s_2^2}$, $\frac{\partial^2 H}{ \partial s_1 \partial s_2 }$ are bounded at $(0,0)$, then the iterative scheme (\ref{eqns10}) has order of convergence at least eight. \end{mydef1} \begin{proof} Let the error at step $n$ be denoted by $e_n= x_n-\alpha$ and define $c_1 = f'(\alpha)$ and $c_k = \frac{1}{k!} \frac{f^{(k)}(\alpha)}{f'(\alpha)}$, $k = 2,3,\cdots$. Expanding $f$ around the root $\alpha$ in powers of the error $e_n$, we obtain \begin{align}
f(x_n) &= c_1 ( e_n+ c_2 e_n^2+ c_3 e_n^3+ c_4 e_n^4+ c_5 e_n^5+ c_6 e_n^6+ c_7 e_n^7+ c_8 e_n^8+ O( e_n^9) ), \label{eqns12} \\
\begin{split} \label{eqns13}
f(w_n) &= -c_1 (-1+\kappa c_1) e_n + c_1 c_2 (-3 \kappa c_1+1+ \kappa^2 c_1^2) e_n^2 -c_1 (4 \kappa c_1 c_3+2 c_2^2 \kappa c_1-2 \kappa^2 c_1^2 c_2^2-c_3-3 c_3 \kappa^2 c_1^2 \\
&\quad +c_3 \kappa^3 c_1^3) e_n^3 + c_1 (-5 \kappa c_1 c_4-5 c_2 \kappa c_1 c_3+8 \kappa^2 c_1^2 c_2 c_3+\kappa^2 c_1^2 c_2^3-3 c_2 \kappa^3 c_1^3 c_3+ c_4+6 c_4 \kappa^2 c_1^2-4 c_4 \kappa^3 c_1^3 \\
&\quad + c_4 \kappa^4 c_1^4) e_n^4 +\cdots + O(e_n^9),
\end{split} \\
\begin{split} \label{eqns14}
y_n -\alpha &= - c_2 (-1+ \kappa c_1) e_n^2 + (2 c_3-3 \kappa c_1 c_3+ c_3 \kappa^2 c_1^2+2 c_2^2 \kappa c_1-2 c_2^2- \kappa^2 c_1^2 c_2^2 ) e_n^3 + (3 c_4+10 c_2 \kappa c_1 c_3-6 \kappa c_1 c_4 \\
&\quad +4 c_4 \kappa^2 c_1^2- c_4 \kappa^3 c_1^3-7 \kappa^2 c_1^2 c_2 c_3-7 c_2 c_3-5 \kappa c_1 c_2^3+2 c_2 \kappa^3 c_1^3 c_3+3 \kappa^2 c_1^2 c_2^3+4 c_2^3- \kappa^3 c_1^3 c_2^3) e_n^4 \\
&\quad +\cdots+O(e_n^9),
\end{split} \\
\begin{split} \label{eqns15}
f(y_n) &= - c_1 c_2 (-1+ \kappa c_1) e_n^2 - c_1 (-2 c_3+3 \kappa c_1 c_3- c_3 \kappa^2 c_1^2-2
c_2^2 \kappa c_1+2 c_2^2+ \kappa^2 c_1^2 c_2^2) e_n^3 -c_1 (-3 c_4-10 c_2 \kappa c_1 c_3\\
&\quad +6 \kappa c_1 c_4-4 c_4 \kappa^2 c_1^2+c_4 \kappa^3 c_1^3+7 \kappa^2 c_1^2 c_2 c_3+7 c_2 c_3+7 \kappa c_1 c_2^3-2 c_2 \kappa^3 c_1^3 c_3-4 \kappa^2 c_1^2 c_2^3-5 c_2^3 \\
&\quad +\kappa^3 c_1^3 c_2^3) e_n^4 + \cdots + O(e_n^9),
\end{split} \\
\begin{split} \label{eqns16}
\frac{f(y_n)}{f(x_n)} &= -c_2 (-1+ \kappa c_1) e_n + (
2 c_3-3 \kappa c_1 c_3+ c_3 \kappa^2 c_1^2+3 c_2^2 \kappa c_1-3 c_2^2- \kappa^2 c_1^2 c_2^2 ) e_n^2 +\cdots+O(e_n^9),
\end{split} \\
\begin{split} \label{eqns17}
\frac{f(y_n)}{f(w_n)} &= c_2 e_n + (- \kappa c_1 c_3+2 c_2^2 \kappa c_1+2 c_3-3 c_2^2) e_n^2 + \cdots+O(e_n^9).
\end{split} \end{align} The Taylor series expansion of $G(t_1,t_2)$ is given by \begin{align} \label{eqns18}
\begin{split}
G\Bigg(\frac{f(y_n)}{f(x_n)},\frac{f(y_n)}{f(w_n)}\Bigg) &= 1 + \frac{f(y_n)}{f(x_n)} +
\frac{f(y_n)}{f(w_n)} + A_1 \Bigg( \frac{f(y_n)}{f(x_n)} \Bigg)^2 + A_2 \Bigg( \frac{f(y_n)}{f(w_n)}
\Bigg)^2 + A_3 \Bigg( \frac{f(y_n)}{f(x_n)} \Bigg) \Bigg( \frac{f(y_n)}{f(w_n)} \Bigg) + O\Big( t_1^3,t_2^3 \Big).
\end{split} \end{align} By using (\ref{eqns12}), (\ref{eqns13}), (\ref{eqns15}), (\ref{eqns16}), (\ref{eqns17}), we find
\begin{align}
z_n-\alpha &= (- A_1 c_2^2+6 \kappa^2 c_1^2 c_2^2-c_3+2 \kappa c_1 c_3-10 c_2^2 \kappa c_1-c_3 \kappa^2 c_1^2+5 c_2^2+ A_1 c_2^2 \kappa c_1+2 c_2^2 A_3 \kappa c_1-c_2^2 A_3 \kappa^2 c_1^2\nonumber \\
&\quad +3 A_2 c_2^2 \kappa c_1-3 A_2 c_2^2 \kappa^2 c_1^2+ A_2 c_2^2 \kappa^3 c_1^3- A_2 c_2^2- \kappa^3 c_1^3 c_2^2-c_2^2 A_3) c_2 e_n^4 + (-4 \kappa^2 c_1^2 c_3^2-4 \kappa^2 c_1^2 c_4 c_2\nonumber \\
&\quad -31 A_2 c_2^4 \kappa c_1+36 A_2 c_2^4 \kappa^2 c_1^2-19 A_2 c_2^4 \kappa^3 c_1^3+4 A_2 c_2^4 \kappa^4 c_1^4-23 c_2^4 A_3 \kappa c_1+18 c_2^4 A_3 \kappa^2 c_1^2-5 c_2^4 A_3 \kappa^3 c_1^3\nonumber \\
&\quad -15 A_1 c_2^4 \kappa c_1+6 A_1 c_2^4 \kappa^2 c_1^2+5 c_4 \kappa c_1 c_2+3 \kappa^4 c_1^4 c_2^2 c_3+ c_4 \kappa^3 c_1^3 c_2+68 \kappa^2 c_1^2 c_2^2 c_3-25 c_3 \kappa^3 c_1^3 c_2^2-2 c_4 c_2\nonumber \\
&\quad -78 \kappa c_1 c_2^2 c_3-3 \kappa^4 c_1^4 c_2^4+5 \kappa c_1 c_3^2+ c_3^2 \kappa^3 c_1^3-2 c_3^2-36 c_2^4+32 c_2^2 c_3-66 c_2^4 \kappa^2 c_1^2+80 c_2^4 \kappa c_1+24 \kappa^3 c_1^3 c_2^4\nonumber \\
&\quad +9 A_1 c_2^2 \kappa c_1 c_3-3 A_1 c_2^2 c_3 \kappa^2 c_1^2+21 A_2 c_2^2 \kappa c_1 c_3-27 A_2 c_2^2 c_3 \kappa^2 c_1^2 +15 A_2 c_2^2 c_3 \kappa^3 c_1^3-3 A_2 c_2^2 \kappa^4 c_1^4 c_3 \nonumber \\
&\quad +15 c_2^2 A_3 \kappa c_1 c_3-12 c_2^2 A_3 c_3 \kappa^2 c_1^2+3 c_2^2 A_3 c_3 \kappa^3 c_1^3-6 A_1 c_2^2 c_3-6 A_2 c_2^2 c_3-6 c_2^2 A_3 c_3 +10 A_1 c_2^4+10 c_2^4 A_3 \nonumber \\
&\quad +10 A_2 c_2^4) e_n^5 + \cdots + O(e_n^9), \label{eqns19}
\end{align}
\begin{align}
\begin{split} \label{eqns20}
f(z_n) &= c_1 (-A_1 c_2^2+6 \kappa^2 c_1^2 c_2^2- c_3+2 \kappa c_1 c_3-10 c_2^2 \kappa c_1- c_3 \kappa^2 c_1^2+5 c_2^2+A_1 c_2^2 \kappa c_1+2 c_2^2 A_3 \kappa c_1- c_2^2 A_3 \kappa^2 c_1^2\\
&\quad +3 A_2 c_2^2 \kappa c_1-3 A_2 c_2^2 \kappa^2 c_1^2+A_2 c_2^2 \kappa^3 c_1^3-A_2 c_2^2- \kappa^3 c_1^3 c_2^2- c_2^2 A_3) c_2 e_n^4 + c_1 (-4 \kappa^2 c_1^2 c_3^2-4 \kappa^2 c_1^2 c_4 c_2 \\
&\quad -31 A_2 c_2^4 \kappa c_1+36 A_2 c_2^4 \kappa^2 c_1^2-19 A_2 c_2^4 \kappa^3 c_1^3+4 A_2 c_2^4 \kappa^4 c_1^4-23 c_2^4 A_3 \kappa c_1+18 c_2^4 A_3 \kappa^2 c_1^2-5 c_2^4 A_3 \kappa^3 c_1^3\\
&\quad -15 A_1 c_2^4 \kappa c_1+6 A_1 c_2^4 \kappa^2 c_1^2+5 c_4 \kappa c_1 c_2+3 \kappa^4 c_1^4 c_2^2 c_3+c_4 \kappa^3 c_1^3 c_2+68 \kappa^2 c_1^2 c_2^2 c_3-25 c_3 \kappa^3 c_1^3 c_2^2-2 c_4 c_2\\
&\quad -78 \kappa c_1 c_2^2 c_3-3 \kappa^4 c_1^4 c_2^4+5 \kappa c_1 c_3^2+c_3^2 \kappa^3 c_1^3-2 c_3^2-36 c_2^4+32 c_2^2 c_3-66 c_2^4 \kappa^2 c_1^2+80 c_2^4 \kappa c_1+24 \kappa^3 c_1^3 c_2^4\\
&\quad +9 A_1 c_2^2 \kappa c_1 c_3-3 A_1 c_2^2 c_3 \kappa^2 c_1^2+21 A_2 c_2^2 \kappa c_1 c_3-27 A_2 c_2^2 c_3 \kappa^2 c_1^2+15 A_2 c_2^2 c_3 \kappa^3 c_1^3-3 A_2 c_2^2 \kappa^4 c_1^4 c_3 \\
&\quad +15 c_2^2 A_3 \kappa c_1 c_3-12 c_2^2 A_3 c_3 \kappa^2 c_1^2+3 c_2^2 A_3 c_3 \kappa^3 c_1^3-6 A_1 c_2^2 c_3-6 A_2 c_2^2 c_3-6 c_2^2 A_3 c_3\\
&\quad +10 A_1 c_2^4+10 c_2^4 A_3+10 A_2 c_2^4) e_n^5 + \cdots +O(e_n^9),
\end{split}\\
\begin{split} \label{eqns21}
\psi_n &= (2 A_2 c_2^3 \kappa^3 c_1^3-2 \kappa^3 c_1^3 c_2^3+12 \kappa^2 c_1^2 c_2^3-2 \kappa^2 c_1^2 c_2 c_3-6 A_2 c_2^3 \kappa^2 c_1^2+c_4 \kappa^2 c_1^2-2 c_2^3 A_3 \kappa^2 c_1^2-20 \kappa c_1 c_2^3\\
&\quad +6 A_2 c_2^3 \kappa c_1-2 \kappa c_1 c_4+2 A_1 c_2^3 \kappa c_1+4 c_2 \kappa c_1 c_3+4 c_2^3 A_3 \kappa c_1+10 c_2^3-2 A_1 c_2^3 -2 c_2^3 A_3-2 c_2 c_3 \\
&\quad -2 A_2 c_2^3+c_4) c_1 c_2 e_n^4 + \cdots + O(e_n^9).
\end{split} \end{align} Finally, $H(s_1,s_2)$ has Taylor's expansion \begin{align}
\begin{split}\label{eqns22}
H\Bigg(\frac{f(z_n)}{f(x_n)},\frac{f(z_n)}{f(w_n)}\Bigg) &= 1+ B_1
\Bigg( \frac{f(z_n)}{f(x_n)} \Bigg)^2 + B_2 \Bigg( \frac{f(z_n)}{f(w_n)} \Bigg)^2 +
B_3 \Bigg( \frac{f(z_n)}{f(x_n)} \Bigg) \Bigg( \frac{f(z_n)}{f(w_n)} \Bigg)+O(s_1^3,s_2^3).
\end{split} \end{align} From (\ref{eqns20}) and (\ref{eqns21}), we deduce the following error equation, which leads to the desired result: \begin{align}
e_{n+1} &= c_2^2 (-6 \kappa^2 c_1^2 c_3 c_4-10 A_1 c_2^5-10 A_2 c_2^5-10 c_2^5 A_3+6 c_2 \kappa^2 c_1^2 c_3^2+31 \kappa^2 c_1^2 c_4 c_2^2-4 c_2 \kappa^3 c_1^3 c_3^2\nonumber \\
&\quad +4 c_3 \kappa c_1 c_4+4 c_3 c_4 \kappa^3 c_1^3+46 c_3 \kappa^3 c_1^3 c_2^3-23 c_4 \kappa^3 c_1^3 c_2^2+8 c_4 \kappa^4 c_1^4 c_2^2- c_4 \kappa^4 c_1^4 c_3-4 c_2 \kappa c_1 c_3^2-20 \kappa c_1 c_4 c_2^2\nonumber \\
&\quad - c_3 c_4-62 c_3 \kappa^2 c_1^2 c_2^3+ \kappa^4 c_1^4 c_2 c_3^2-16 \kappa^4 c_1^4 c_2^3 c_3-12 \kappa^5 c_1^5 c_2^5+2 \kappa^5 c_1^5 c_3 c_2^3- \kappa^5 c_1^5 c_4 c_2^2+6 A_1 c_2^3 \kappa^2 c_1^2 c_3\nonumber \\
&\quad +3 A_1 c_2^2 \kappa c_1 c_4-6 A_1 c_2^3 \kappa c_1 c_3-2 A_1 c_2^3 \kappa^3 c_1^3 c_3-3 A_1 c_2^2 c_4 \kappa^2 c_1^2+A_1 c_2^2 c_4 \kappa^3 c_1^3-8 c_2^3 A_3 \kappa^3 c_1^3 c_3+4 c_2^2 A_3 \kappa c_1 c_4\nonumber \\
&\quad -6 c_2^2 A_3 \kappa^2 c_1^2 c_4+12 c_2^3 A_3 \kappa^2 c_1^2 c_3+40 c_2^3 c_3 \kappa c_1-A_1 c_2^2 c_4+2 A_1 c_2^3 c_3-A_2 c_2^2 c_4+2 A_2 c_2^3 c_3- c_2^2 A_3 c_4\nonumber \\
&\quad +2 c_2^3 A_3 c_3+25 c_2^5+2 c_2^3 A_3 \kappa^4 c_1^4 c_3+4 c_2^2 A_3 \kappa^3 c_1^3 c_4- c_2^2 A_3 \kappa^4 c_1^4 c_4-8 c_2^3 A_3 \kappa c_1 c_3-20 A_2 c_2^3 \kappa^3 c_1^3 c_3\nonumber \\
&\quad +5 A_2 c_2^2 \kappa c_1 c_4-10 A_2 c_2^2 \kappa^2 c_1^2 c_4+20 A_2 c_2^3 \kappa^2 c_1^2 c_3+10 A_2 c_2^3 \kappa^4 c_1^4 c_3+10 A_2 c_2^2 \kappa^3 c_1^3 c_4-5 A_2 c_2^2 \kappa^4 c_1^4 c_4 \nonumber \\
&\quad-10 A_2 c_2^3 \kappa c_1 c_3-2 A_2 c_2^3 \kappa^5 c_1^5 c_3+A_2 c_2^2 \kappa^5 c_1^5 c_4+ c_2 c_3^2-10 c_2^3 c_3+5 c_2^2 c_4+160 \kappa^2 c_1^2 c_2^5-130 \kappa^3 c_1^3 c_2^5\nonumber \\
&\quad -100 c_2^5 \kappa c_1+56 \kappa^4 c_1^4 c_2^5-32 A_1 c_2^5 \kappa^2 c_1^2+30 A_1 c_2^5 \kappa c_1+14 A_1 c_2^5 \kappa^3 c_1^3+46 c_2^5 A_3 \kappa^3 c_1^3-62 c_2^5 A_3 \kappa^2 c_1^2\nonumber \\
&\quad -16 c_2^5 A_3 \kappa^4 c_1^4+40 c_2^5 A_3 \kappa c_1+108 A_2 c_2^5 \kappa^3 c_1^3-102 A_2 c_2^5 \kappa^2 c_1^2-62 A_2 c_2^5 \kappa^4 c_1^4+50 A_2 c_2^5 \kappa c_1+18 A_2 c_2^5 \kappa^5 c_1^5\nonumber \\
&\quad +A_1^2 c_2^5+A_2^2 c_2^5+ c_2^5 A_3^2-2 A_1^2 c_2^5 \kappa c_1-2 A_1 c_2^5 \kappa^4 c_1^4+A_1^2 c_2^5 \kappa^2 c_1^2-4 c_2^5 A_3^2 \kappa^3 c_1^3+6 c_2^5 A_3^2 \kappa^2 c_1^2-4 c_2^5 A_3^2 \kappa c_1\nonumber \\
&\quad +2 c_2^5 A_3 \kappa^5 c_1^5+ c_2^5 A_3^2 \kappa^4 c_1^4+15 A_2^2 c_2^5 \kappa^4 c_1^4-20 A_2^2 c_2^5 \kappa^3 c_1^3+15 A_2^2 c_2^5 \kappa^2 c_1^2-6 A_2^2 c_2^5 \kappa c_1-6 A_2^2 c_2^5 \kappa^5 c_1^5 \nonumber \\
&\quad +A_2^2 c_2^5 \kappa^6 c_1^6-2 A_2 c_2^5 \kappa^6 c_1^6-8 A_1 c_2^5 A_2 \kappa^3 c_1^3+12 A_1 c_2^5 A_2 \kappa^2 c_1^2+6 A_1 c_2^5 A_3 \kappa^2 c_1^2-8 A_1 c_2^5 A_2 \kappa c_1-6 A_1 c_2^5 A_3 \kappa c_1 \nonumber \\
&\quad +2 A_1 c_2^5 \kappa^4 c_1^4 A_2-2 A_1 c_2^5 \kappa^3 c_1^3 A_3+10 c_2^5 A_3 \kappa^4 c_1^4 A_2-20 c_2^5 A_3 \kappa^3 c_1^3 A_2+20 c_2^5 A_3 \kappa^2 c_1^2 A_2-10 c_2^5 A_3 \kappa c_1 A_2 \nonumber \\
&\quad -2 c_2^5 A_3 \kappa^5 c_1^5 A_2+2 A_1 c_2^5 A_3+2 A_1
c_2^5 A_2+2 A_2 c_2^5 A_3+ \kappa^6 c_1^6 c_2^5) e_n^8 + O(e_n^9). \label{eqns23}
\end{align} \end{proof}
It is clear that the considered family of numerical schemes requires four functional evaluations per iteration and attains the optimal convergence order eight according to the Kung--Traub conjecture, which can be stated as follows \cite{5}: if $n$ is the total number of functional evaluations per iteration, then the optimal convergence order of the associated numerical procedure is $2^{n-1}$.
\section{Numerical Results} \newtheorem{mydef3}{Definition} \begin{mydef3}
The computational order of convergence \cite{4}, can be approximated by
\begin{align} \label{eqns24}
COC &\approx \frac{\ln|(x_{n+1}-\alpha)(x_n-\alpha)^{-1}|}{\ln|(x_n-\alpha)(x_{n-1}-\alpha)^{-1}|},
\end{align} where $x_{n-1}$, $x_n$ and $x_{n+1}$ are successive iterates close to the root $\alpha$ of $f(x)=0$. \end{mydef3} For the purpose of comparison between the newly developed family and other derivative-free methods, a list of derivative-free methods for nonlinear equations is presented here.
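A direct transcription of (\ref{eqns24}) in Python (our own sketch; the synthetic iterates below are constructed so that the error satisfies $e_{k+1}=e_k^2$, i.e. exact order two):

```python
import math

def coc(x_prev, x_curr, x_next, alpha):
    # Computational order of convergence, formula (24)
    return (math.log(abs((x_next - alpha) / (x_curr - alpha)))
            / math.log(abs((x_curr - alpha) / (x_prev - alpha))))

# Synthetic quadratically convergent iterates around alpha = 0
print(coc(1e-2, 1e-4, 1e-8, 0.0))  # ≈ 2.0
```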
\subsection{The Kung-Traub Eighth-order Derivative-free Method (K-T)} The Kung-Traub eighth-order derivative-free method, discussed in \cite{5,6} and also considered in \cite{7}, is given as
\begin{align}\label{eqns25}
\begin{split}
\begin{cases}
w_n &= x_n + \beta f(x_n), \\
y_n &= x_n -\Bigg( \frac{\beta f(x_n)^2 }{f(w_n)-f(x_n)} \Bigg), \\
z_n &= y_n - \Bigg( \frac{f(x_n)f(w_n)}{f(y_n)-f(x_n)} \Bigg) \Bigg[ \frac{1}{f[w_n,y_n]} -\frac{1}{f[w_n,x_n]} \Bigg],\\
x_{n+1} &= z_n - \Bigg( \frac{f(w_n)f(x_n)f(y_n)}{f(z_n)-f(x_n)}\Bigg) \\
&\quad \Bigg\{ \Bigg( \frac{1}{f(z_n)-f(w_n)} \Bigg) \Bigg[ \frac{1}{f[y_n,z_n]} -\frac{1}{f[w_n,y_n]}
\Bigg] - \Bigg( \frac{1}{f(y_n)-f(x_n)} \Bigg) \Bigg[ \frac{1}{f[w_n,y_n]} -\frac{1}{f[w_n,x_n]} \Bigg] \Bigg\}.
\end{cases}
\end{split}
\end{align}
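Here $f[a,b]$ denotes the standard first-order divided difference $f[a,b]=(f(a)-f(b))/(a-b)$; a one-line Python helper (the name `dd` is our own) makes the notation concrete:

```python
def dd(f, a, b):
    # First-order divided difference f[a,b] = (f(a) - f(b)) / (a - b)
    return (f(a) - f(b)) / (a - b)

# For f(x) = x^2 the divided difference equals a + b
print(dd(lambda t: t * t, 3.0, 1.0))  # 4.0
```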
\subsection{R. Thukral $M1$, $M2$, $M3$ Methods} In 2011, R. Thukral \cite{7} presented three variants of his proposed eighth-order three-point derivative-free method. The three members of the family, named by the author $M1$, $M2$ and $M3$, are listed as \begin{align}
\phi_1 &= \Bigg( 1-\frac{f(y_n)}{f(w_n)} \Bigg)^{-1}, \label{eqns26} \\
\phi_2 &= \Bigg( 1+\frac{f(y_n)}{f(w_n)} + \Big( \frac{f(y_n)}{f(w_n)} \Big)^2 \Bigg), \label{eqns27} \\
\phi_3 &= \frac{f[x_n,w_n]}{f[w_n,y_n]}, \label{eqns28} \end{align} and \begin{align} \label{eqns29} \begin{cases}
w_n &= x_n +\beta f(x_n), \\
y_n &= x_n-\Bigg( \frac{\beta f(x_n)^2}{f(w_n)-f(x_n)}\Bigg), \\
z_n &= y_n - \phi_k \Bigg( \frac{f(y_n)}{f[x_n,y_n]} \Bigg), \\
x_{n+1}&= z_n - \Bigg( 1-\frac{f(z_n)}{f(w_n)} \Bigg)^{-1} \Bigg(
1- \frac{f(y_n)^3}{f(w_n)^2 f(x_n)} \Bigg) \Bigg( \frac{f[x_n,y_n] f(z_n)}{f[y_n,z_n] f[x_n,z_n]}\Bigg),
\end{cases} \end{align}
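A hedged Python sketch of the $M1$ variant of (\ref{eqns29}) with $\phi_1$ from (\ref{eqns26}); the test function, starting point and guards against degenerate denominators are our own choices:

```python
def thukral_m1(f, x, beta=0.01, tol=1e-12, iters=50):
    # Sketch of Thukral's M1 scheme (29) with phi_1 from (26);
    # f[a,b] below is the divided difference (f(a)-f(b))/(a-b).
    for _ in range(iters):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx
        fw = f(w)
        y = x - beta * fx * fx / (fw - fx)
        fy = f(y)
        if fy == 0.0:
            return y
        phi1 = 1.0 / (1.0 - fy / fw)
        dxy = (fx - fy) / (x - y)
        z = y - phi1 * fy / dxy
        fz = f(z)
        if fz == 0.0 or z == y:
            return z
        dyz = (fy - fz) / (y - z)
        dxz = (fx - fz) / (x - z)
        if dyz == 0.0 or dxz == 0.0:   # guard near machine precision
            return z
        x = z - (1.0 / (1.0 - fz / fw)) * (1.0 - fy ** 3 / (fw ** 2 * fx)) \
              * dxy * fz / (dyz * dxz)
    return x

root = thukral_m1(lambda t: t ** 3 - 2.0, 1.3)
print(root)  # ≈ 1.25992104... (the real cube root of 2)
```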
\begin{table}[!htbp] \begin{center}
\begin{tabular}{p{7cm} p{5cm}}
\hline
\hline \\
Functions & Roots \\
\hline
\hline \\ $f_1 (x) = \exp(x) \sin(x) + \ln(1 + x^2 )$ & $\alpha=0 $ \\ $f_2 (x) = x^{15} + x^4 + 4x^2 -15 $ & $\alpha = 1.148538\ldots$ \\ $f_3 (x) = (x - 2)(x^{10} + x + 1) \exp(-x-1)$ & $\alpha=2 $\\ $f_4 (x) = \exp(-x^2 + x + 2) - \cos(x + 1) + x^3 + 1$ & $\alpha = -1$ \\ $f_5 (x) = (x+1) \exp(\sin(x))-x^2 \exp(\cos(x))-1$ & $\alpha=0 $\\ $f_6 (x) = \sin(x)^2 - x^2 + 1 $ & $\alpha = 1.40449165\ldots $ \\ $f_7 (x) = 10 \exp(-x^2 ) -1 $ & $\alpha = 1.517427\ldots $\\ $f_8 (x) = (x^2 -1)^{-1} - 1 $ & $\alpha = 1.414214\ldots $\\ $f_{9} (x) = \ln(x^2 + x + 2) - x + 1 $ & $\alpha = 4.15259074\ldots$ \\ $f_{10} (x) = \cos(x)^2 - x/5 $ & $\alpha = 1.08598268\ldots$ \\ $f_{11} (x) = \sin(x) - \frac{x}{2} $ & $\alpha=0 $ \\ $f_{12} (x) = x^{10} - 2x^3 - x + 1 $ & $\alpha = 0.591448093\ldots$ \\ $f_{13} (x) = \exp(\sin(x)) - x + 1 $ & $\alpha = 2.63066415\ldots $\\ \end{tabular}
\begin{tabular}{p{1.5cm} p{2.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} p{1.5cm} }
\hline
\hline \\
($f_n(x)$,$x_0$ ) & L & K-T & M1 & M2 & M3 & P1 & P2 \\
\hline
\hline \\
$f_1$,\ 0.25 & (L1) 6.38e-247 & 3.14e-136 & 1.69e-141 & 7.43e-142 & 1.69e-141 & 3.20e-113 & 8.98e-120 \\
$f_2$,\ 1.1 & (L1) 1.2376e-652 & 3.72e-61 & 3.44e-62 & 3.44e-62 & 3.44e-62 & 2.68e-7 & 2.67e-7 \\
$f_3$,\ 2.1 & (L1) 1.057e-422 & 1.91e-60 & 1.49e-60 & 1.49e-60 & 1.49e-60 & 7.71e-8 & 7.56e-8 \\
$f_4 $,\ -0.5 & (L1) 2.952e-383 & 5.11e-362 & 1.92e-362 & 1.93e-362 & 1.92e-362 & 9.99e-367 & 8.78e-366 \\
$f_5 $,\ 0.25 & (L1) 2.336e-407 & 4.13e-328 & 6.52e-326 & 9.47e-326 & 6.52e-326 & 1.98e-322 & 2.56e-332 \\
$f_6 $,\ 1.2 & (L8) 1.719e-421 & 1.00e-327 & 4.58e-341 & 7.57e-344 & 4.58e-341 & 1.79e-381 & 1.72e-405 \\
$f_7 $,\ 2 & (L2) 7.264e-238 & 5.19e-88 & 1.24e-120 & 6.40e-124 & 1.24e-120 & 1.51e-187 & 6.79e-228 \\
$f_8 $,\ 1.7 & (L3) 1.429e-234 & 1.23e-113 & 1.74e-171 & 5.45e-188 & 1.74e-171 & 5.96e-211 & 4.84e-167 \\
$f_{9} $,\ 4.4 & (L4) 2.504e-997 & 1.15e-928 & 4.52e-942 & 1.27e-965 & 4.52e-941 & 6.15e-904 & 4.11e-937 \\
$f_{10} $,\ 1.5 & (L5) 2.81e-305 & 7.19e-303 & 5.07e-284 & 1.84e-245 & 5.07e-285 & 4.91e-244 & 1.78e-275 \\
$f_{11} $,\ 0.25 & (L6) 2.35e-1143 & 3.65e-782 & 1.00e-819 & 4.98e-823 & 1.00e-819 & 5.13e-794 & 5.13e-812 \\
$f_{12}$,\ 0.25 & (L6) 7.86e-318 & 2.03e-256 & 5.65e-256 & 1.82e-254 & 5.65e-256 & 1.07e-264 & 6.31e-268 \\
$f_{13}$,\ 2.0 & (L7) 2.54e-436 & 2.63e-396 & 1.94e-378 & 5.1e-378 & 1.94e-378 & 8.70e-380 & 6.80e-379 \\ \hline \hline \\
& & & (COC) & & & & \\ \hline \hline
$f_1$ & (L1) 7.9999 & 7.9986 & 7.9995 & 7.9998 &7.9995 & 7.9958 & 7.9978\\
$f_2$ & (L1) 8.0000 & 7.8671 & 7.9371 & 7.9371 &7.9371 & 3.2715 & 3.2731\\
$f_3$ & (L1) 7.9999 & 7.8660 & 7.9047 & 7.9047 &7.9047 &4.2595 & 4.2675\\
$f_4 $ & (L1) 8.0000 & 7.9905 & 7.9905 & 7.9905 &7.9905 &7.9907 & 7.9907\\
$f_5 $ & (L1) 8.0000 & 7.9884 & 7.9882 & 7.9882 &7.9882 &8.0000 & 8.0000\\
$f_6 $ & (L8) 8.0000 & 8.000 & 8.0000& 8.0000 &8.0000 &8.0000 &8.0000 \\
$f_7 $ & (L2) 7.9999 & 8.0097 & 8.0025 & 8.0018 &8.0025 &8.0005 &8.0001 \\
$f_8 $ & (L3) 8.0000 & 8.0027 & 8.0004 & 8.0002 &8.0004 &8.0000 &8.0002 \\
$f_{9}$ & (L4) 8.0000 & 8.0000 & 8.0000 & 8.0000 &8.0000 &8.0000 &8.0000 \\
$f_{10}$ & (L5) 8.0000 & 8.0002 & 7.9845 & 7.9797 &7.9845 &8.0003 &7.9833 \\
$f_{11}$ & (L6) 11.000 & 10.996 & 10.996 & 10.996 &10.996 &10.996 &10.996 \\
$f_{12}$ & (L6) 8.0000 & 8.0000 & 7.9809 & 7.9807 &7.9809 &7.9822 &8.0000 \\
$f_{13}$ & (L7) 7.9999 & 8.0000 & 8.0000 & 8.0000 &8.0000 &8.0000 &8.0000 \\
\hline \end{tabular} \end{center} \caption{Numerical comparison between three-point derivative-free methods }\label{tb2:numerical comparison between methods} \end{table}
where $k=1,2,3$, $\beta \in \Re^{+}$ and the $\phi_k$ are listed in (\ref{eqns26})--(\ref{eqns28}). The scheme (\ref{eqns29}) is called $M1$, $M2$ and $M3$ for $\phi_1$, $\phi_2$ and $\phi_3$, respectively. \subsection{Petkovic et al. Type Methods} In \cite{7}, the author developed Petkovic type 1 (P1) and type 2 (P2) derivative-free methods for the comparison of numerical efficiency; (P1) and (P2), respectively, are written as \begin{align}\label{eqns30}
\begin{cases}
\begin{split}
w_n &= x_n +\beta f(x_n), \\
y_n &= x_n-\Bigg( \frac{\beta f(x_n)^2}{f(w_n)-f(x_n)}\Bigg), \\
z_n &= y_n-\Bigg( 1+ \frac{f(y_n)}{f(w_n)}+\frac{f(y_n)}{f(x_n)} \Bigg) \Bigg[ \frac{(w_n-x_n)f(y_n)}{f(w_n)-f(x_n)} \Bigg], \\
x_{n+1} &= z_n - \Bigg( 1-\frac{f(z_n)}{f(w_n)} \Bigg)^{-1} \\
&\quad \Bigg( 1- \frac{2 f(y_n)^3 }{f(w_n)^2 f(x_n)} - \frac{f(y_n)^3}{f(w_n)f(x_n)^2} - \Bigg( \frac{f(y_n)}{f(w_n)} \Bigg)^3 \Bigg)
\Bigg( \frac{f[x_n,y_n]f(z_n)}{f[y_n,z_n]f[x_n,z_n]} \Bigg),
\end{split}
\end{cases} \end{align} and \begin{align}\label{eqns31}
\begin{cases}
\begin{split}
w_n &= x_n +\beta f(x_n), \\
y_n &= x_n-\Bigg( \frac{\beta f(x_n)^2}{f(w_n)-f(x_n)}\Bigg), \\
z_n &= y_n-\Bigg( \frac{1+f(y_n)f(x_n)^{-1}}{1-f(y_n)f(w_n)^{-1}} \Bigg) \Bigg( \frac{f(y_n) (w_n-x_n)}{f(w_n)-f(x_n)} \Bigg), \\
x_{n+1} &= z_n - \Bigg( 1-\frac{f(z_n)}{f(w_n)} \Bigg)^{-1} \Bigg( 1- \frac{2 f(y_n)^3 }{f(w_n)^2 f(x_n)} - \frac{f(y_n)^3}{f(w_n)f(x_n)^2} \Bigg)
\Bigg( \frac{f[x_n,y_n]f(z_n)}{f[y_n,z_n]f[x_n,z_n]} \Bigg).
\end{split}
\end{cases} \end{align} \subsection{Proposed family (L)} We define the following weight functions: \begin{align} G_1(t_1,t_2) &= \frac{1}{1-(t_1+t_2)+\omega (t_1+t_2)^2}, \ \omega \in \Re, \label{eqns32} \\ G_2 (t_1,t_2) &= 1 + t_1 + t_2 + t_1^2 + 1.9 t_2^2 + 4.4 t_1 t_2, \label{eqns33} \\ H_1(s_1,s_2) &= 1, \label{eqns34} \\ H_2(s_1,s_2) &= \frac{1}{1 + s_1 s_2 + s_1^2 + s_2^2}, \label{eqns35} \\ H_3(s_1,s_2) &= 1 + s_2^4 + s_2^6, \label{eqns36} \\ H_4(s_1,s_2) &= 1 + s_1^2 + s_2^2 + 2 s_1 s_2, \label{eqns37} \\ H_5(s_1,s_2) &= \frac{1}{1-20 s_1 s_2}, \label{eqns39} \end{align} where $t_i$ and $s_i$ are defined in (\ref{eqns10}). For brevity, we name the methods as follows: \begin{align}\label{eqns38} \begin{cases} L1 = (G_1,\ H_1,\ \omega=+0.01,\ \kappa=0.01),\ L2 = (G_1,\ H_1,\ \omega=-0.022,\ \kappa=0.01), \\ L3 = (G_1,\ H_1,\ \omega=-0.001,\ \kappa=0.01),\ L4 = (G_2,\ H_1,\ \omega=+0.01,\ \kappa=0.01), \\ L5 = (G_1,\ H_3,\ \omega=-0.01,\ \kappa=0.01),\ L6 = (G_1,\ H_2,\ \omega=+0.01,\ \kappa=0.01), \\ L7 = (G_1,\ H_4,\ \omega=+0.01,\ \kappa=0.01),\ L8 = (G_1,\ H_5,\ \omega=+0.01,\ \kappa=0.01). \end{cases} \end{align}
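As a hedged illustration of the full scheme, method $L1$ from (\ref{eqns38}), i.e. scheme (\ref{eqns10}) with weights $G_1$ and $H_1=1$ and with $\psi_n$ from (\ref{eqns9}), can be sketched in Python (the test function and the guards against degenerate denominators are our own choices):

```python
import math

def l1_method(f, x, kappa=0.01, omega=0.01, tol=1e-12, max_iter=25):
    # Method L1: scheme (10) with G_1 (32), H_1 = 1 (34), psi_n from (9)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x - kappa * fx
        fw = f(w)
        y = x - kappa * fx * fx / (fx - fw)
        fy = f(y)
        if fy == 0.0:
            return y
        u = fy / fx + fy / fw                      # t_1 + t_2
        G = 1.0 / (1.0 - u + omega * u * u)        # weight G_1
        z = y - kappa * fy * fx / (fx - fw) * G
        fz = f(z)
        if fz == 0.0 or z in (x, y, w):            # degenerate nodes
            return z
        a, b, c = x - y, z - y, w - y
        psi = (b * (b - c) / ((a - b) * (a - c)) * (fx - fy) / a
               + (-3*b*b + 2*b*c + 2*a*b - a*c) / ((a - b) * (b - c)) * (fz - fy) / b
               + b * (b - a) / ((a - c) * (b - c)) * (fw - fy) / c)
        if psi == 0.0:
            return z
        x = z - fz / psi                           # H_1(s_1, s_2) = 1
    return x

root = l1_method(lambda t: math.exp(t) - 2.0, 0.6)
print(root)  # ≈ ln 2 = 0.69314718...
```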
A set of thirteen nonlinear equations from \cite{7} is used for the numerical computations reported in Table \ref{tb2:numerical comparison between methods}. All the families in the numerical implementation are derivative-free and use four function evaluations per iteration to attain eighth-order convergence.
For all methods, a total number of function evaluations (TNFE) of 12 is used, and the absolute error ($|x_n-\alpha|$) is displayed. The computational order of convergence is calculated according to (\ref{eqns24}) for each method. All numerical values for the methods K-T, M1, M2, M3, P1 and P2 are taken from \cite{7}.
\section{Conclusion} In this note, we have presented a family of eighth-order derivative-free methods. A proper selection of weight functions shows a considerable reduction in error as compared to the other referenced derivative-free methods. The constructed family offers a broad choice of weight functions in the third and fourth steps of the method. The true essence of the family lies in the construction of the interpolating polynomial approximating $f'(z)$, while the weight functions add the flexibility needed to achieve higher performance and efficiency.
\end{document}
\begin{document}
\begin{title} {Limit lamination theorems for H-surfaces} \end{title} \date{} \begin{author} {William H. Meeks III\thanks{This material is based upon
work supported by the NSF under Award No. DMS-1309236.
Any opinions, findings, and conclusions or recommendations
expressed in this publication are those of the authors and do not
necessarily reflect the views of the NSF.}
\and Giuseppe Tinaglia\thanks{The second author was partially
supported by EPSRC grant no. EP/M024512/1}} \end{author} \maketitle
\begin{abstract} In this paper we prove some general results for constant mean curvature lamination limits of certain sequences of compact surfaces
$M_n$ embedded in $\mathbb{R}^3$ with constant mean curvature
$H_n$ and fixed finite genus, when the boundaries of these surfaces tend to infinity. Two of these theorems generalize, to the non-zero constant mean curvature case, similar structure theorems by Colding and Minicozzi in~\cite{cm23,cm25} for limits of sequences of minimal surfaces of fixed finite genus. \end{abstract}
\noindent{\it Mathematics Subject Classification:} Primary 53A10,
Secondary 49Q05, 53C42
\noindent{\it Key words and phrases:} Minimal surface, constant mean curvature, one-sided curvature estimate, curvature estimates, minimal lamination, $H$-surface, $H$-lamination, chord-arc, removable singularity, positive injectivity radius.
\section{Introduction} In this paper we apply results in~\cite{mt8,mt7,mt13,mt9} to obtain (after passing to a subsequence) constant mean curvature lamination limits for sequences of compact surfaces $M_n$ embedded in $\mathbb{R}^3$ with constant mean curvature $H_n$ and fixed finite genus, when the boundaries of these surfaces tend to infinity in $\mathbb{R}^3$. These lamination limit results are inspired by and generalize to the non-zero constant mean curvature setting similar structure theorems by Colding and Minicozzi in~\cite{cm23,cm25} in the case of embedded minimal surfaces; also see some closely related work of Meeks, Perez and Ros in~\cite{mpr14, mpr11} in the minimal setting.
For clarity of exposition, we will call an oriented surface $M$ immersed in $\mathbb{R}^3$ an {\it $H$-surface} if it is {\it embedded} and it has {\it non-negative constant mean curvature $H$}. In this manuscript $\mathbb{B}(R)$ denotes the open ball in $\mathbb{R}^3$ centered at the origin $\vec{0}$
of radius $R$ and for a point $p$ on a surface $\Sigma$ in $ \mathbb{R}^3$, $|A_{\Sigma}|(p)$ denotes the norm of the second fundamental form of $\Sigma$ at $p$.
\begin{definition} \label{def:lbsf} {\rm Let $U$ be an open set in $\mathbb{R}^3$. \begin{enumerate}[1.] \item We say that a sequence of smooth surfaces $\Sigma(n)\subset U$ has {\em locally bounded norm of the second fundamental form in $U$} if for every compact subset $B$ in $U$, the norms of the second fundamental forms of the surfaces $\Sigma(n)$ are uniformly bounded in $B$. \item We say that a sequence of smooth surfaces $\Sigma(n)\subset U$ has {\em locally positive injectivity radius in $U$} if for every compact subset $B$ in $U$, the injectivity radius functions of the surfaces $\Sigma(n)$ at points in $B$ are bounded away from zero for $n$ sufficiently large; see Definition~\ref{definj} for the definition of the injectivity radius function. \item We say that a sequence of smooth surfaces $\Sigma(n)\subset U$ has {\em uniformly positive injectivity radius in $U$} if there exists an ${\varepsilon}>0$ such that for every compact subset $B$ in $U$, the injectivity radius functions of the surfaces $\Sigma(n)$ at points in $B$ are bounded from below by ${\varepsilon}$ for $n$ sufficiently large. \end{enumerate}} \end{definition}
We will also need the next definition
in the statement of Theorem~\ref{H-lam-thm} below, as well as Definition~\ref{def:flux} of the flux of a 1-cycle in an $H$-surface, in the statements of Theorems~\ref{H-lam-thm} and \ref{geometry2} below.
\begin{definition} {\em A {\em strongly Alexandrov embedded} $H$-surface $f\colon \Sigma \to \mathbb{R}^3$ is a proper immersion of a complete surface $\Sigma$ of constant mean curvature $H$ that extends to a proper immersion of a complete three-manifold $W$ so that ${\Sigma}$
is the mean convex boundary of $W$ and $f|_{\mbox{Int} (W)}$ is injective.
See~\cite{mt3} for further discussion on this notion.} \end{definition}
In this paper we wish to describe, for any large radius $R>0$, the geometry in $\mathbb{B}(R)$ of any connected compact $H$-surface $M$ in $\mathbb{R}^3$ of fixed finite genus that passes through the origin and satisfies: \begin{enumerate} \item the non-empty boundary of $M$ lies much farther than $R$ from the origin; \item the injectivity radius function of $M$ is not too small at points in $\mathbb{B}(R)$. \end{enumerate} In order to obtain this geometric description of $M$, it is natural to consider a sequence $\{M_n\}_{n\in\mathbb N}$ of compact $H_n$-surfaces in $\mathbb{R}^3$ with finite genus at most $k$, $\vec{0}\in M_n$, $M_n$ contains no spherical components, $\partial M_n\subset [\mathbb{R}^3 -\mathbb{B}(n)]$ and such that the sequence has locally positive injectivity radius in $\mathbb{R}^3$. After passing to a subsequence and possibly translating the surfaces $M_n$ by vectors of uniformly bounded length so that $\vec{0}\in M_n$ still holds, exactly one of the following three possibilities occurs in the sequence: \begin{enumerate} \item $\{M_n\}_{n\in\mathbb N}$ has locally bounded norm of the second fundamental form in $\mathbb{R}^3$.
\item $\lim_{n\to \infty} |A_{M_n}|(\vec{0})=\infty$ and $\lim_{n\to \infty} I_{M_n}(\vec{0})=\infty$.
\item $\lim_{n\to \infty} |A_{M_n}|(\vec{0})=\infty$ and $\lim_{n\to \infty} I_{M_n}(\vec{0})=C$, for some $C>0$. \end{enumerate} Depending on which of the above three mutually exclusive conditions holds for $\{M_n\}_{n\in\mathbb N}$, one has a limit geometric description given by its corresponding theorem listed below.
The next theorem corresponds to the case where $\{M_n\}_{n\in\mathbb N}$ has locally bounded norm of the second fundamental form in $\mathbb{R}^3$.
\begin{theorem}\label{H-lam-thm} Suppose that $\{M_n\}_{n\in\mathbb N}$ is a sequence of compact $H_n$-surfaces in $\mathbb{R}^3$ with finite genus at most $k$, $\vec{0}\in M_n$, $M_n$ contains no spherical components, $\partial M_n\subset [\mathbb{R}^3 -\mathbb{B}(n)]$ and the sequence has locally bounded norm of the second fundamental form in $\mathbb{R}^3$. Then, after replacing $\{M_n\}_{n\in\mathbb N}$ by a subsequence, the sequence of surfaces $\{M_n\}_{n\in \mathbb N}$ converges with respect to the $C^{\alpha}$-norm, for any ${\alpha}\in(0,1)$, to a minimal lamination $M_\infty$ of $\mathbb{R}^3 $ by parallel planes or it converges smoothly (with multiplicity one or two) to a possibly disconnected, strongly Alexandrov embedded $H$-surface $M_\infty$ of genus at most $k$ and every component of $M_\infty$ is non-compact. Moreover: \begin{enumerate}[1.] \item If the convergent sequence has uniformly positive injectivity radius in $\mathbb{R}^3$ or if $H=0$, then the norm of the second fundamental form of $M_\infty$ is bounded. \item If there exist positive numbers $I_0, H_0$ such that for $n$ large either the injectivity radius functions of the $M_n$ at $\vec{0} $ are bounded from above by $ I_0$ or $H_n\geq H_0$, then the limit object is a possibly disconnected, strongly Alexandrov embedded $H$-surface $M_\infty$ and there exist a positive constant $\eta=\eta (M_\infty)$ and simple closed oriented curves ${\gamma}_n \subset M_n$ with scalar fluxes $F({\gamma}_n)$ with $\lim_{n\to\infty} F({\gamma}_n)=\eta$. \end{enumerate} \end{theorem}
The next theorem corresponds to the case where $\lim_{n\to \infty} |A_{M_n}|(\vec{0})=\infty$ and $\lim_{n\to \infty} I_{M_n}(\vec{0})=\infty$.
\begin{theorem}\label{geometry1} Suppose that $\{M_n\}_{n\in\mathbb N}$ is a sequence of compact $H_n$-surfaces in $\mathbb{R}^3$ with finite genus at most $k$, $\vec{0}\in M_n$, $M_n$ contains no spherical components, $\partial M_n\subset [\mathbb{R}^3 -\mathbb{B}(n)]$, the sequence has locally positive injectivity radius in $\mathbb{R}^3$,
$\lim_{n\to \infty} |A_{M_n}|(\vec{0})=\infty$ and $\lim_{n\to \infty} I_{M_n}(\vec{0})=\infty$.
Let $\mathcal{S}\subset \mathbb{R}^3$ denote the $x_3$-axis. Then, after replacing by a subsequence and applying a fixed rotation that fixes the origin: \begin{enumerate}[1.] \item $\{M_n\}_{n\in\mathbb N}$ converges with respect to the $C^{\alpha}$-norm, for any ${\alpha}\in(0,1)$, to the minimal foliation $\mathcal{L}$ of $\,\mathbb{R}^3-\mathcal{S}$ by horizontal planes punctured at points in $\mathcal{S}$. \item For any $R>0$ there exists $n_0\in \mathbb{N}$ such that for $n>n_0$, there exists a possibly disconnected compact subdomain $\mathcal{C}_n$ of $M_n$, with $[M_n\cap \mathbb{B}(R/2)]\subset \mathcal{C}_n \subset \mathbb{B}(R)$ and with $\partial \mathcal{C}_n\subset \mathbb{B}(R)-\mathbb{B}(R/2)$, consisting of a disk ${\mathcal D}_n$ containing the origin and possibly a second disk that intersects $\mathbb{B}(R/n)$, where each disk has intrinsic diameter less than $3R$.
\item Away from $\mathcal{S}$, each component of $ \mathcal{C}_n$ consists of two multi-valued graphs spiraling together to form a double spiral staircase (see Remark~\ref{remark:spiral} for an explicit geometric description of the double spiral staircase structure for \,$ \mathcal{C}_n$). \end{enumerate} \end{theorem}
The last theorem corresponds to the case where $\lim_{n\to \infty} |A_{M_n}|(\vec{0})=\infty$ and $\lim_{n\to \infty} I_{M_n}(\vec{0})=C$.
\begin{theorem}\label{geometry2} Suppose that $\{M_n\}_{n\in\mathbb N}$ is a sequence of compact $H_n$-surfaces in $\mathbb{R}^3$ with finite genus at most $k$, $\vec{0}\in M_n$, $M_n$ contains no spherical components, $\partial M_n\subset [\mathbb{R}^3 -\mathbb{B}(n)]$, the sequence has locally positive injectivity radius in $\mathbb{R}^3$,
$\lim_{n\to \infty} |A_{M_n}|(\vec{0})=\infty$ and $\lim_{n\to \infty} I_{M_n}(\vec{0})=C$, for some $C>0$.
Let $\mathcal{S}_0=\{(0,0,t)\mid t\in \mathbb{R}\}$, $\mathcal{S}_C=\{(C,0,t)\mid t\in \mathbb{R}\}$ and $\mathcal{S}=\mathcal{S}_0\cup \mathcal{S}_C$. Then, after replacing by a subsequence and applying a fixed rotation that fixes the origin: \begin{enumerate}[1.] \item $\{M_n\}_{n\in\mathbb N}$ converges with respect to the $C^{\alpha}$-norm, for any ${\alpha}\in(0,1)$, to the minimal foliation $\mathcal{L}$ of $\,\mathbb{R}^3-\mathcal{S}$ by horizontal planes punctured at points in $\mathcal{S}$. \item Given $R>C$ there exists $n_0\in\mathbb{N}$ such that for $n>n_0$, the subdomain $\Delta_n$ of $M_n\cap \mathbb{B}(R)$ that intersects $\mathbb{B}(\frac R4)$ is a planar domain. In fact, $\Delta_n$ consists of a connected planar domain $\Delta_1(n)$ containing the origin and possibly a second connected planar domain $\Delta_2(n)$ and $\Delta_2(n)\cap\mathbb{B}(\frac Rn)\neq \O$. Moreover, the intrinsic distance in $M_n$ between any two points in the same connected component of $\Delta_n$ is less than $3R$. Away from $\mathcal{S}$, each component of $\Delta_n$ consists of exactly two multi-valued graphs spiraling together. Near $\mathcal{S}_0$ and $\mathcal{S}_C$, the pair of multi-valued graphs form double spiral staircases with opposite handedness (see Remark~\ref{remark:spiral} for a geometric description of each of the 1 or 2 components of $ \Delta_n$ near points of $\mathcal{S}$). Thus, circling only $\mathcal{S}_0$ or only $\mathcal{S}_C$ in $\Delta_n$ results in going either up or down, while a path circling both $\mathcal{S}_0$ and $\mathcal{S}_C$ closes up. \item There exist simple closed oriented curves ${\gamma}_n \subset M_n$ converging to the line segment joining the pair of
points in $\mathcal{S}\cap\{x_3=0\}$ and having lengths converging to
$2C$ and fluxes converging to $(0,2C,0)$. \end{enumerate} \end{theorem}
In~\cite{mt7} we apply the non-zero flux conclusions in Theorems~\ref{H-lam-thm} and \ref{geometry2} to obtain curvature estimates away from the boundary for any compact $1$-annulus in $\mathbb{R}^3$ whose scalar flux is either zero or greater than some $\rho>0$; see Corollary~5.4 in~\cite{mt7} for this result.
The geometric description in item~2 of Theorem~\ref{geometry2} is identical to the geometric description of the $H=0$ case given in Theorem~0.9 of~\cite{cm25} by Colding and Minicozzi, where in their situation the number of components in $\mathcal{C}_n(R)$ must be one. When the hypotheses of Theorem~\ref{geometry1} or \ref{geometry2} hold, as $n$ approaches infinity the limiting geometry of the surfaces $M_n$ around the line or pair of lines in $\mathcal{S}$ is that of a so-called ``parking garage structure''. See for instance~\cite{mpr14} for the general notion and theory of parking garage surfaces in $\mathbb{R}^3$ and the notion of the convergence of these surfaces to a limit ``parking garage structure''. This kind of limiting structure and its application to obtain curvature estimates for certain minimal planar domains in $\mathbb{R}^3$ first appeared in work of Meeks, Perez and Ros in~\cite{mpr1}.
\noindent {\sc Acknowledgements:} The authors would like to thank Joaquin Perez for making Figure~\ref{fig2cone}.
\section{Preliminaries.} \label{sec:pre}
Throughout this paper, we use the following notation. Given $r_1,r_2,R>0$, $p\in \mathbb{R}^3$ and ${\Sigma}$ a surface in $\mathbb{R}^3$:
\begin{itemize} \item $\mathbb{B}(p,R)$ is the open ball of radius $R$ centered at $p$. \item $\mathbb{B}(R)=\mathbb{B}(\vec{0},R)$, where $\vec{0}=(0,0,0)$. \item For $p\in {\Sigma}$, $B_{{\Sigma}}(p,R)$ denotes the open intrinsic ball in ${\Sigma}$ of radius $R$ centered at $p$.
\item $A(r_1,r_2)=\{(x_1,x_2,0)\mid r_2^2\leq x_1^2+x_2^2\leq r_1^2\}$. \end{itemize}
We first introduce the notion of multi-valued graph; see~\cite{cm22} for further discussion. Intuitively, an $N$-valued graph is a simply-connected embedded surface covering an annulus such that over a neighborhood of each point of the annulus, the surface consists of $N$ graphs. The prototypical infinite multi-valued graph is half of the helicoid, i.e., half of an infinite double-spiral staircase.
\begin{figure}
\caption{A right-handed 3-valued graph.}
\label{3-valuedgraph}
\end{figure}
\begin{definition}[Multi-valued graph]\label{multigraph} {\rm Let $\mathcal{P}$ denote the universal cover of the punctured $(x_1,x_2)$-plane, $\{(x_1,x_2,0)\mid (x_1,x_2)\neq (0,0)\}$, with global coordinates $(\rho , \theta)$. \begin{enumerate}[1.] \item An {\em $N$-valued graph over the annulus $A(r_1,r_2)$} is a single-valued graph $u(\rho, \theta)$ over $\{(\rho ,\theta )\mid r_2\leq \rho \leq r_1,\;|\theta |\leq N\pi \}\subset \mathcal{P}$, if $N$ is odd, or over $\{(\rho ,\theta )\mid r_2\leq \rho \leq r_1,\;(-N+1)\pi\leq \theta \leq (N+1)\pi\}\subset \mathcal{P}$, if $N$ is even. \item An $N$-valued graph $u(\rho,{\theta})$ over the annulus $A(r_1,r_2)$ is called {\em right-handed} \, [{\em left-handed}] if, whenever it makes sense, $u(\rho,{\theta})<u(\rho,{\theta} +2\pi)$ \, [$u(\rho,{\theta})>u(\rho,{\theta} +2\pi)$]. \item The set $\{(r_2,\theta, u(r_2,\theta))\mid \theta\in[-N\pi,N\pi]\}$ when $N$ is odd (or $\{(r_2,\theta, u(r_2,\theta))\mid \theta\in[(-N+1)\pi,(N+1)\pi]\}$ when $N$ is even) is the {\em inner boundary} of the $N$-valued graph. \end{enumerate} } \end{definition}
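For instance, half of the vertical helicoid, parameterized as $(\rho\cos {\theta}, \rho \sin {\theta}, {\theta})$, is the graph over $\mathcal{P}$ of the function
\[ u(\rho,{\theta})={\theta}, \]
and its restriction to $\{(\rho,{\theta})\mid r_2\leq \rho\leq r_1,\; |{\theta}|\leq N\pi\}$ is an $N$-valued graph over $A(r_1,r_2)$ for any odd $N$; since $u(\rho,{\theta}+2\pi)-u(\rho,{\theta})=2\pi>0$, this multi-valued graph is right-handed.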
From Theorem 2.23 in~\cite{mt7} one obtains the following detailed geometric description of an $H$-disk with large norm of the second fundamental form at the origin. The precise meanings of certain statements below are made clear in~\cite{mt7} and we refer the reader to that paper for further details.
\begin{theorem}\label{mainextension} Given ${\varepsilon},\tau>0$ and $\overline{{\varepsilon}}\in (0,{\varepsilon}/4)$ there exist constants $\Omega_\tau:=\Omega(\tau )$, $\omega_\tau:=\omega(\tau )$ and $G_\tau:=G({\varepsilon},\tau,\overline{\varepsilon} ) $ such that if $M$ is an $H$-disk, $H\in (0,\frac 1{2{\varepsilon}})$, $\partial M\subset \partial \mathbb{B}({\varepsilon})$, $\vec 0\in M$ and
$|A_M|(\vec 0)>\frac 1\eta G_\tau$, for $\eta\in (0,1]$, then for any $p\in \mathbb{B}(\vec{0},\eta\overline{{\varepsilon}})$ that is a maximum of the function
$|A_{M}|(\cdot)(\eta\overline{{\varepsilon}}-|\cdot|)$, after translating $M$ by $-p$, the following geometric description of $M$ holds: \par
\begin{itemize}
\item On the scale of the norm of the second fundamental form, $M$ looks like one or two helicoids near the origin and, after a rotation that turns these helicoids into vertical helicoids, $M$ contains a 3-valued graph
$u$ over $A({\varepsilon}\slash\Omega_\tau,\frac{\omega_\tau}{|A_M|(\vec 0)})$
with norm of its gradient less than $\tau$ and with inner boundary in $\mathbb{B}(10\frac{\omega_\tau}{|A_M|(\vec 0)})$.
\item Moreover, given $j\in\mathbb N$, if we let the constant $G_\tau$ depend on $j$ as well, then $M$ contains $j$ pairwise disjoint 3-valued graphs with their inner boundaries in
$\mathbb{B}(10\frac{\omega_\tau}{|A_M|(\vec 0)})$. \end{itemize}
\end{theorem}
Theorem~\ref{mainextension}
was inspired by the pioneering work of Colding and Minicozzi in the
minimal case~\cite{cm21,cm22,cm24,cm23}; however, in the constant positive mean curvature setting this description has led to a different conclusion, namely the existence of the intrinsic curvature estimates stated below.
\begin{theorem}[Intrinsic curvature estimates, Theorem~1.3 in~\cite{mt7}] \label{cest} Given $\delta,{\mathcal H}>0$, there exists a constant $K (\delta,{\mathcal H})$ such that for any $H$-disk ${\mathcal D}$ with $H\geq {\mathcal H}$, { $${{\sup}_{ \{p\in {\mathcal D} \, \mid \, d_{{\mathcal D}}(p,\partial
{\mathcal D})\geq \delta\}} |A_{\mathcal D}|\leq K (\delta,{\mathcal H})}.$$} \end{theorem}
Rescalings of a helicoid give a sequence of embedded minimal disks with arbitrarily large norm of the second fundamental form at points arbitrarily far from their boundaries; therefore, in the minimal setting, similar curvature estimates do not hold.
The next two results from~\cite{mt9} will also be essential tools that we use in this paper.
\begin{theorem}[Extrinsic one-sided curvature estimates for $H$-disks] \label{th} There exist ${\varepsilon}\in(0,\frac{1}{2})$ and $C \geq 2 \sqrt{2}$ such that for any $R>0$, the following holds. Let ${\mathcal D}$ be an $H$-disk such that $${\mathcal D}\cap \mathbb{B}(R)\cap\{x_3=0\} =\O \quad \mbox{and} \quad \partial {\mathcal D}\cap \mathbb{B}(R)\cap\{x_3>0\}=\O.$$ Then: \begin{equation} \label{eq1}
\sup _{x\in {\mathcal D}\cap \mathbb{B}({\varepsilon} R)\cap\{x_3>0\}} |A_{{\mathcal D}}|(x)\leq \frac{C}{R}. \end{equation} In particular, if ${\mathcal D}\cap \mathbb{B}({\varepsilon} R)\cap\{x_3>0\}\neq\O$, then $H\leq \frac{C}{R}$. \end{theorem}
The next corollary follows immediately from Theorem~\ref{th} by a simple rescaling argument. It roughly states that we can replace the $(x_1,x_2)$-plane by any surface that has a fixed uniform estimate on the norm of its second fundamental form.
\begin{corollary} \label{cest2} Given $a\geq 0$, there exist ${\varepsilon}\in(0,\frac{1}{2})$ and $C_{a} >0$ such that for any $R>0$, the following holds. Let $\Delta$ be a compact immersed surface in $\mathbb{B}(R)$ with $\partial \Delta \subset \partial \mathbb{B}(R)$, $\vec{0}\in \Delta$
and satisfying $|A_{\Delta}| \leq a/R$. Let ${\mathcal D}$ be an $H$-disk such that $${\mathcal D}\cap \mathbb{B}(R)\cap\Delta=\O \quad \mbox{and} \quad \partial {\mathcal D}\cap \mathbb{B}(R)=\O.$$ Then: \begin{equation} \label{eq1*}
\sup _{x\in {\mathcal D}\cap \mathbb{B}({\varepsilon} R)} |A_{{\mathcal D}}|(x)\leq \frac{C_{a}}{R}. \end{equation} In particular, if ${\mathcal D}\cap \mathbb{B}({\varepsilon} R)\neq \O$, then $H\leq \frac{C_{a}}{R}$. \end{corollary}
The next curvature estimate is a more involved application of Theorem~\ref{th} and also uses Theorem~\ref{thm2.1} below in its proof.
\begin{corollary}[Corollary 4.6 in~\cite{mt13}] \label{cest-cor} There exist constants ${\varepsilon}\in(0,1)$, $C>1$ such that the following holds. Let $\Sigma_1$, $\Sigma_2$, $\Sigma_3$ be three pairwise disjoint $H_i$-disks with $\partial \Sigma_i\subset [ \mathbb{R}^3- \mathbb{B}(1)]$ \,for $i=1,2,3$. If $\,\mathbb{B}({\varepsilon})\cap\Sigma_i\not=\O$ for $i=1,2,3$, then { \[
\sup_{\mathbb{B}({\varepsilon})\cap\Sigma_i,\,i=1,2,3}|A_{\Sigma_i}|\leq C. \]}\end{corollary}
In~\cite{mt8}, we applied the one-sided curvature estimates in Theorem~\ref{th} to prove a relation between intrinsic and extrinsic distances in an $H$-disk, which can be viewed as a {\em weak chord arc} property. This result was motivated by and generalizes a previous result by Colding-Minicozzi for 0-disks, namely Proposition~1.1 in~\cite{cm35}. We begin by making the following definition.
\begin{definition} Given a point $p$ on a surface $\Sigma\subset \mathbb{R}^3$, ${\Sigma} (p,R)$ denotes the closure of
the component of $\Sigma \cap {\mathbb{B}}(p,R)$ passing through $p$. \end{definition}
\begin{theorem}[Weak chord arc property, Theorem 1.2 in~\cite{mt8}] \label{thm1.1} There exists a $\delta_1 \in (0, \frac{1}{2})$ such that the following holds.
Let ${\Sigma}$ be an $H$-disk in $\mathbb{R}^3.$ Then for all intrinsic closed balls $\overline{B}_{\Sigma}(x,R)$ in ${\Sigma}- \partial {\Sigma}$:
\begin{enumerate} \item ${\Sigma} (x,\delta_1 R)$ is a disk with piecewise smooth boundary $\partial \Sigma(x,\delta_1 R)\subset \partial \mathbb{B}(x,{\delta}_1R)$. \item $
{\Sigma} (x, \delta_1 R) \subset B_{\Sigma} (x, \frac{R}{2}).$ \end{enumerate} \end{theorem}
For applications here, we will also need the closely related chord-arc result below, that is Theorem 1.2 in~\cite{mt13}.
\begin{theorem}[Chord arc property for $H$-disks] \label{main2} There exists a constant $a>1$ so that the following holds. Suppose that ${\Sigma}$ is an $H$-disk with $ \vec{0}\in{\Sigma}$, $R>r_0>0$ and ${B}_{\Sigma}(\vec 0,aR) \subset {\Sigma}-\partial {\Sigma}$. If
$\sup_{B_{\Sigma}(\vec 0,(1-\frac{\sqrt{2}}{2})r_0)}|A_{\Sigma}|>r_0^{-1}$, then
$$ \frac{1}{3}\mbox{\rm dist}_{\Sigma} (x,\vec{0})\leq |x|/2+r_0, \; \mbox{\rm for } x\in B_{\Sigma}(\vec 0,R).$$ \end{theorem}
Since in the proofs of Theorems~\ref{H-lam-thm}, \ref{geometry1} and \ref{geometry2} we will frequently refer to parts of the statement of the Limit Lamination Theorem for $H$-disks, namely Theorem~1.1 in~\cite{mt13}, we state it below for more direct referencing.
\begin{theorem}[Limit lamination theorem for $H$-disks] \label{thm2.1} Fix ${\varepsilon} >0$ and let $\{M_n\}_n$ be a sequence of $H_n$-disks in $\mathbb{R}^3$ containing the origin and such that $\partial M_n \subset [\mathbb{R}^3 - \mathbb{B}(n)]$ and
$|A_{M_n} |(\vec{0})\geq {\varepsilon}$. Then, after replacing by some subsequence, exactly one of the following two statements holds. \begin{enumerate}[A.] \item The surfaces $M_n$ converge smoothly with multiplicity one or two on compact subsets of $\mathbb{R}^3$ to a helicoid $M_{\infty}$ containing the origin. Furthermore, every component $\Delta$ of $M_n\cap \mathbb{B}(1)$ is an open disk whose closure $\overline{\Delta}$ in $M_n$ is a compact disk with piecewise smooth boundary, and the intrinsic distance in $M_n$ between any two points of its closure $\overline{\Delta}$ is less than 10. \item There are points $p_n\in M_n$ such that \[ \lim_{n\to \infty}p_n=\vec{0} \text{\, and \,}
\lim_{n\to \infty}|A_{M_n}|(p_n)=\infty, \] and the following hold: \begin{enumerate} \item The surfaces $M_n$ converge to a foliation of $\mathbb{R}^3$ by planes and the convergence is $C^\alpha$, for any ${\alpha}\in(0,1)$, away from the line containing the origin and orthogonal to the planes in the foliation. \item There exist compact subdomains $\mathcal{C}_n$ of $M_n$ with $[M_n\cap \ov{\mathbb{B}}(1)]\subset \mathcal{C}_n \subset \mathbb{B}(2)$ and $\partial \mathcal{C}_n\subset \mathbb{B}(2)-\ov{\mathbb{B}}(1)$, each $\mathcal{C}_n$ consisting of one or two pairwise disjoint disks, where each disk component has intrinsic diameter less than 3 and intersects $\mathbb{B}(1/n)$. Moreover, each connected component of $M_n\cap \mathbb{B}(1)$ is an open disk whose closure in $M_n$ is a compact disk with piecewise smooth boundary. \end{enumerate} \end{enumerate} \end{theorem}
\begin{remark}[Double spiral staircase structure] \label{remark:spiral} {\em Suppose that Case B occurs in the statement of Theorem~\ref{thm2.1} and let $\Delta_n$ be a component of $\mathcal{C}_n$. By Remark~3.6 in~\cite{mt13}, after replacing the surfaces $M_n$ by a subsequence and composing them with a rotation of $\mathbb{R}^3$ that fixes the origin so that the planes of the limit foliation are horizontal, as $n$ tends to infinity $\Delta_n$ has the structure of a {\em double spiral staircase}, in the following sense: \begin{enumerate} \item $\Delta_n $ contains a smooth connected arc ${\Gamma}_n(t)$, called its {\em central column}, parameterized by its third coordinate, which ranges over the interval $I_n=(-1-\frac1n,1+\frac1n)$. ${\Gamma}_n$ is
the set of points of $\Delta_n $ with vertical tangent planes
and ${\Gamma}_n$ is $\frac1n$-close to the arc $\{(0,0,t) \mid t\in I_n\}$ with respect to the $C^1$-norm.
For each $t\in I_n$, let $T_n(t)$ be the vertical tangent plane of $\Delta_n$ at ${\Gamma}_n(t)$. \item For $t\in I_n$, $T_n(t)\cap \Delta_n$ contains a smooth arc ${\alpha}_{n,t}$ passing through ${\Gamma}_n(t)$ that is $\frac1n$-close in the $C^1$-norm to an arc $\beta_{n,t}$ of the line $T_n(t)\cap \{x_3=t\}$ such that ${\Gamma}_n(t)\in \beta_{n,t}$ and the end points of $\beta_{n,t}$ lie in $\mathbb{B}(2)-\overline{\mathbb{B}}(1)$; here $\{{\alpha}_{n,t}\}_{t\in I_n}$ is a pairwise disjoint collection of arcs and $\Delta_n=\bigcup_{t\in I_n}{\alpha}_{n,t}$.
\item The absolute Gaussian curvature of $\Delta_n$ along ${\Gamma}_n$ is pointwise greater than $n$. Since the central column ${\Gamma}_n$ of $\Delta_n$ converges in the $C^1$-norm to the segment $\mathbb{B}(1)\cap \{x_3\text{-axis}\}$, the arcs ${\alpha}_{n,t}$ converge to $T_n(t)\cap \Delta_n$, and on the scale of curvature $\Delta_n$ is closely approximated by a vertical helicoid near every point of ${\Gamma}_n$ (see Corollary~3.8 in~\cite{mt9}), the rate of change of the horizontal unit normal of $T_n(t)$ along ${\Gamma}_n$ is greater than $\sqrt{n}$. \end{enumerate} } \end{remark}
Next, we recall the notion of flux of a 1-cycle of an $H$-surface; see for instance~\cite{kks1,ku2,smyt1} for further discussions of this invariant.
\begin{definition} \label{def:flux} {\rm Let $\gamma$ be a 1-cycle in an $H$-surface $M$. The {\em flux} of $\gamma$ is $F({\gamma})=\int_{\gamma}(H\gamma+\xi)\times \dot{\gamma}$, where $\xi$ is the unit normal to $M$ along $\gamma$. The norm
$|F({\gamma})|$ is called the {\em scalar flux} of ${\gamma}$.} \end{definition}
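For instance, for the catenoid parameterized by $(\cosh s\cos t,\cosh s \sin t, s)$, so that $H=0$, the waist circle ${\gamma}(t)=(\cos t,\sin t,0)$ has unit normal $\xi(t)=(\cos t,\sin t,0)$ along ${\gamma}$ (for one choice of orientation), and thus
\[ F({\gamma})=\int_0^{2\pi}\xi\times \dot{\gamma}\, dt=\int_0^{2\pi}(0,0,1)\, dt=(0,0,2\pi), \]
which is the classical vertical flux of the catenoid; the scalar flux of ${\gamma}$ is $2\pi$. On the other hand, for a 1-cycle ${\gamma}$ in a flat plane, $\xi$ is constant and so $F({\gamma})=\xi\times\int_{\gamma}\dot{\gamma}\,dt=\vec{0}$.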
The flux of a 1-cycle in an $H$-surface $M$ is a homological invariant and we say that $M$ has {\em zero flux} if the flux of any 1-cycle in $M$ is zero; in particular, since the first homology group of a disk is zero, an $H$-disk has zero flux. Finally, the next definition was needed in Definition~\ref{def:lbsf} in the Introduction.
\begin{definition} \label{definj} {\rm The injectivity radius $I_M(p)$ at a point $p$ of a complete Riemannian manifold $M$ is the supremum of the radii $r>0$ of the open metric balls $B_M(p,r)$ for which the exponential map at $p$ is a diffeomorphism. This defines the {\it injectivity radius function,} $I_M\colon M\to (0,\infty ]$, which is continuous on $M$ (see e.g., Proposition~88 in~\cite{ber1}). When $M$ is complete, we let $\mbox{\rm Inj}(M)$ denote the {\it injectivity radius of $M$}, which is defined to be the infimum of $I_M$.} \end{definition}
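For instance, the flat cylinder $\{x_1^2+x_2^2=r^2\}\subset \mathbb{R}^3$ has $I_M\equiv \pi r$, half the length of its shortest geodesic loops, so that $\mbox{\rm Inj}(M)=\pi r$; on the other hand, for a plane, and more generally for the helicoid, $I_M\equiv \infty$, since these surfaces are complete, simply-connected and have non-positive Gaussian curvature.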
\section{The proof of Theorem~\ref{H-lam-thm}.} \label{sec:finiteg}
In this section we will prove all of the statements in Theorem~\ref{H-lam-thm} except for one of the implications in item~2. However, at the end of this section we explain how the missing proof of this implication follows from
item~2 of Theorem~\ref{geometry2}. Hence, once
Theorem~\ref{geometry2} is proven in Section~\ref{sec5}, the proof of
Theorem~\ref{H-lam-thm} will be complete.
Let
$\{M_n\}_{n\in\mathbb N}$ be a sequence of compact $H_n$-surfaces in $\mathbb{R}^3$ with finite genus at most $k$, $\vec{0}\in M_n$, $M_n$ contains no spherical components, $\partial M_n\subset [\mathbb{R}^3 -\mathbb{B}(n)]$ and the sequence has locally bounded norm of the second fundamental form in $\mathbb{R}^3$. By a standard argument, a subsequence of the surfaces converges to a weak $H$-lamination $\mathcal{L}$ of $\mathbb{R}^3$; see the references \cite{mpr10,mpr18,mt4} for this argument and the Appendix for the definition and some key properties of a weak $H$-lamination that we will apply below.
Let $L$ be a leaf of $\mathcal{L}$. If $L$ is stable (i.e., $L$ admits a positive Jacobi function), then $L$ is a complete, stable constant mean curvature surface in $\mathbb{R}^3$, which must be a flat plane by~\cite{lor2,ros9}; in particular, its injectivity radius is infinite. Since by Theorem~4.3 in~\cite{mpr19} limit leaves of $\mathcal{L}$ are stable (see Definition~\ref{deflimit} for the definition of limit leaf), we conclude that if $L$ is a limit leaf of $\mathcal{L}$, then it is a plane and has infinite injectivity radius. Thus we also conclude that if $\mathcal{L}$ has a limit leaf, then $H=0$. From this point until the beginning of the proof of item~1 of the theorem, we will assume that $\mathcal{L}$ is not a lamination of $\mathbb{R}^3$ by parallel planes.
Suppose now that $L$ is a non-flat leaf of $\mathcal{L}$. By the discussion in the previous paragraph and item~3 of Remark~\ref{remarkweak}, $L$ is a non-limit leaf and it has on its mean convex side an embedded half-open regular neighborhood $N(L)$ in $\mathbb{R}^3$ that intersects $\mathcal{L}$ only in the leaf $L$; also, since $L$ is not a limit leaf of $\mathcal{L}$, $N(L)$ lies in the interior of an open set $\widehat{N}(L)$ that also intersects $\mathcal{L}$ only in the leaf $L$. Since the leaf $L$ is not stable, the existence of $N(L)$ allows us to apply the arguments in the proof of Case A of Proposition~3.1 in~\cite{mt9} to show that the sequence $\{M_n\cap \widehat{N}(L)\}_{n\in \mathbb{N}}$ converges to $L$ with multiplicity one or two and the genus of $L$ is at most $k$.
We now prove that $\mathcal{L}$ does not contain a limit leaf. Arguing by contradiction, suppose $L$ is a limit leaf of $\mathcal{L}$; then, as previously proved, $L$ must be a flat plane. Thus, since we are assuming that $\mathcal{L}$ is not a lamination of $\mathbb{R}^3$ by parallel planes,
$\mathcal{L}$ is a minimal lamination containing a flat leaf $L$ and a non-flat leaf $L'$ with finite genus at most $k$. By Theorem~7 in~\cite{mr13}, a finite genus leaf of a minimal lamination of $\mathbb{R}^3$ is proper, which contradicts the Half-space Theorem~\cite{hm10} since $L'$ is contained in the half-space determined by $L$. This proves that $\mathcal{L}$ contains no limit leaves.
Since $\mathcal{L}$ is a weak $H$-lamination of $\mathbb{R}^3$ that does not have a limit leaf, the union of the leaves of $\mathcal{L}$ is a properly immersed, possibly disconnected
$H$-surface, such that around any point $p$ where the leaves of
the weak lamination do not form a lamination, there exists an ${\varepsilon}>0$ such that $\mathcal{L}\cap \mathbb{B}(p,{\varepsilon})$ consists of exactly two disks in leaves of $\mathcal{L}$ with boundaries in $\partial \mathbb{B}(p,{\varepsilon})$; these two disks lie on one side of each other, intersect at $p$ and their non-zero mean curvature vectors are oppositely oriented. See the Appendix for further discussion of properties of weak $H$-laminations.
If $H=0$, then the leaves of $\mathcal{L}$ are embedded by the maximum
principle and $\mathcal{L}$ is connected because of the Strong Halfspace Theorem~\cite{hm10}. Thus by elementary separation properties, $\mathcal{L}$ bounds a proper region $W$ of $\mathbb{R}^3$. Hence, $\mathcal{L}$ is a connected, strongly Alexandrov embedded minimal surface in the case where $H=0$.
Suppose next that $H>0$ and note that by the previous description or by item~3 in Remark~\ref{remarkweak}, each leaf $L$ of $\mathcal{L}$ can be perturbed slightly on its mean convex side to be properly embedded and hence $L$ is strongly Alexandrov embedded. By Theorem~2 in~\cite{enr1}, for any two components $\Sigma_1$ and $\Sigma_2$ of $\mathcal{L}$, $\Sigma_1$ does not lie in the mean convex component of $\mathbb{R}^3-\Sigma_2$. It follows that each of the components of $\mathbb{R}^3-\mathcal{L}$, except for one, is a mean convex domain with one boundary component. This means $\mathcal{L}$ corresponds to a possibly disconnected strongly Alexandrov embedded $H$-surface. Finally, since closed Alexandrov embedded $H$-surfaces in $\mathbb{R}^3$ are round spheres and no component of $M_n$ is spherical, a monodromy argument implies that each leaf of the limit lamination $\mathcal{L}$ is non-compact. Setting $M_\infty:=\mathcal{L}$ finishes the proof of the first statement of the theorem.
We next prove item~1 in the theorem. Namely, we will prove that if the sequence $\{M_n\}_{n\in\mathbb N}$ has uniformly positive injectivity radius in $\mathbb{R}^3$ or if $H=0$, then the norm of the second fundamental form of $M_\infty$ is bounded. If the constant mean curvature of $M_\infty$ is positive and if the sequence $\{M_n\}_{n\in\mathbb N}$ has uniformly positive injectivity radius in $\mathbb{R}^3$, then the norms of the second fundamental forms of the surfaces $M_n$ converging to $M_\infty$ on any compact region of $\mathbb{R}^3$ are eventually bounded from above by a constant that only depends on the curvature estimate given in Theorem~\ref{cest}; hence $M_\infty$ has uniformly bounded norm of its second fundamental form in this case. If the mean curvature of $M_\infty$ is zero, then as observed already either $M_\infty$ is a lamination of $\mathbb{R}^3$ by parallel planes or else $M_\infty$ is a properly embedded connected minimal surface in $\mathbb{R}^3$ of finite genus. If $M_\infty$ is a lamination of $\mathbb{R}^3$ by parallel planes then the claim is clearly true. Otherwise, by the classification of the asymptotic behavior of properly embedded minimal surfaces in $\mathbb{R}^3$ with finite genus given in the papers~\cite{bb2,col1,mpr6}, the norm of the second fundamental form of the unique leaf of $M_\infty$ is also bounded in this case. This last observation completes the proof of item~1 in the theorem.
We next consider the proof of item~2 in the theorem. Namely, we will prove that if there exist positive numbers $I_0, H_0$ such that for $n$ large either the injectivity radius functions of the surfaces $M_n$ at $\vec{0}$ are bounded from above by $I_0$ or $H_n\geq H_0$, then $M_\infty$ is a strongly Alexandrov embedded $H$-surface and there exist a positive constant $\eta=\eta (M_\infty)$ and simple closed oriented curves ${\gamma}_n \subset M_n$ with scalar fluxes $|F({\gamma}_n)|$ satisfying $\lim_{n\to\infty} |F({\gamma}_n)|=\eta$.
We first show that $M_\infty$ cannot be a lamination of $\mathbb{R}^3$ by parallel planes. Arguing by contradiction, suppose that the sequence $\{M_n\}_{n\in\mathbb N}$ converges to a lamination of $\mathbb{R}^3$ by parallel planes. In particular $\lim_{n\to\infty}H_n=0$ and the injectivity radius functions of the surfaces $M_n$ at $\vec{0} $ are bounded from above by $I_0$. Then,
for $n$ large, the Gauss equation implies that the limit superior of the Gaussian curvature functions $K_{M_n}$ of the surfaces $M_n$ is non-positive.
Classical results on Jacobi fields along geodesics in such surfaces imply that for $n$ large the exponential map of $M_n$ at $\vec 0$
on the closed disk in $T_{\vec 0}M_n$ of a certain radius $r_n\in (0, I_0]$ is a local diffeomorphism that is injective on the interior of the disk but is not injective along its boundary circle of radius $r_n$ (see, for instance, Proposition~2.12, Chapter~13 of~\cite{doc2}). Moreover, since the sequence $\{M_n\}_{n\in \mathbb{N}}$ has locally bounded norm of the second fundamental form, the sequence of numbers $r_n$ is bounded away from zero. Hence, there exists a sequence of simple closed geodesic loops ${\alpha}_n\subset M_n$ based at $\vec 0$ and of lengths uniformly bounded from below and above that are smooth everywhere except possibly at $\vec 0$. By the nature of the convergence, ${\alpha}_n$ converges to a geodesic loop in $M_\infty$ based at $\vec 0$. Since a flat plane contains no geodesic loops, $M_\infty$ cannot be a lamination of $\mathbb{R}^3$ by parallel planes.
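The Gauss equation step above is the standard computation: if ${\kappa}_1,{\kappa}_2$ denote the principal curvatures of $M_n$, then $H_n=\frac{1}{2}({\kappa}_1+{\kappa}_2)$, $K_{M_n}={\kappa}_1{\kappa}_2$ and $|A_{M_n}|^2={\kappa}_1^2+{\kappa}_2^2=4H_n^2-2K_{M_n}$, whence
\[ K_{M_n}=2H_n^2-\frac{1}{2}|A_{M_n}|^2\leq 2H_n^2, \]
and the right-hand side tends to zero as $n\to\infty$ since $\lim_{n\to\infty}H_n=0$.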
We next prove the existence of the 1-cycles ${\gamma}_n\subset M_n$ with non-zero flux described in item~2 of the theorem. Since $M_\infty$ cannot be a lamination of $\mathbb{R}^3$ by parallel planes, by the already proved first main statement of the theorem, the sequence $\{M_n\}_{n\in\mathbb N}$ converges with multiplicity one or two to a possibly disconnected, non-flat strongly Alexandrov embedded $H$-surface $M_\infty$ of genus at most $k$. Since the convergence to $M_\infty$ is with multiplicity one or two, a curve lifting argument shows that in order to prove that item~2 holds, it suffices to show that $M_\infty$ has non-zero flux.
If $\lim_{n\to\infty} H_n=0$ but the injectivity radius functions of the $M_n$ at $\vec{0} $ are bounded from above by $I_0$, then the same arguments as before imply that $M_\infty$ is not simply-connected because a simply-connected minimal surface cannot contain a geodesic loop. Thus, by the results in~\cite{mt2}, the finite genus minimal surface $M_\infty$ must have non-zero flux.
It remains to consider the case that $H_n\geq H_0>0$. In this case $M_\infty$ is a proper collection of non-zero constant mean curvature surfaces, each component of which is non-compact and the entire surface has finite genus at most $k$. Abusing the notation, let $M_\infty$ denote the component containing the origin. If $M_\infty$ has injectivity radius function uniformly bounded from below by a positive constant, then it has uniformly bounded norm of the second fundamental form by Theorem~\ref{cest} and again, by the results in~\cite{mt2}, $M_\infty$ has non-zero flux.
In other words, item~2 can only fail if $M_\infty$ has positive mean curvature but the injectivity radius function is not bounded from below
by a positive constant. In this case we can apply the blow-up argument described in Proposition~\ref{cor:5.7}. Such a blow-up argument gives the following. Let $p_n\in M_\infty$ be a sequence of points such that $\lim_{n\to\infty}{I_{M_\infty}(p_n)}= 0$ and let $q_n$ be a sequence of points with almost-minimal injectivity radius for $B_{M_\infty}(p_n,1)$, see Definition~\ref{amininj}. Then, by Proposition~\ref{cor:5.7} there exist
positive numbers $R_n$ with $\lim_{n\to\infty}R_n= \infty$, such that after replacing by a subsequence the component ${\bf M}_n$ of $\frac{1}{I_{M_\infty}(q_n)}[M_\infty-q_n]\cap \mathbb{B}(R_n)$
containing $\vec 0$ has boundary in $\partial\mathbb{B}(R_n)$ and the following properties hold: \begin{itemize} \item ${\bf M}_n$ has finite genus at most $k$. \item $I_{{\bf M}_n}(x)\geq 1\slash 2$ for any $x\in {\bf M}_n\cap \mathbb{B}(R_n\slash 2)$ and $I_{{\bf M}_n}(\vec 0)=1$. \item The mean curvatures ${\bf H}_n$ of the ${\bf M}_n$ converge to zero as $n $ goes to infinity.\end{itemize}
Suppose for the moment that the sequence ${\bf M}_n$ has locally bounded norm of the second fundamental form in $\mathbb{R}^3$. By the already proven main statement of Theorem~\ref{H-lam-thm} applied to the sequence ${\bf M}_n$, a subsequence converges to a possibly disconnected, strongly Alexandrov embedded $H$-surface ${\bf M}_\infty$ of genus at most $k$, every component of which is non-compact. Since $\lim_{n\to\infty} {\bf H}_n=0$ but the injectivity radius functions of the ${\bf M}_n$ at $\vec{0}$ are bounded from above by $1$, our previous arguments imply that the finite genus minimal surface ${\bf M}_\infty$ must have non-zero flux. Hence, we may assume that the sequence ${\bf M}_n$ fails to have locally bounded norm of the second fundamental form. Assume for the moment that Theorem~\ref{geometry2} holds. In the case that we are considering, we can apply item~2 of Theorem~\ref{geometry2} to conclude that the surfaces ${\bf M}_n$ have non-zero flux, which would mean that $M_\infty$ has non-zero flux as well. The construction of the closed curves, called connection loops, with non-zero flux is described in detail after Remark~\ref{moneortwo}.
In summary, the proof of Theorem~\ref{H-lam-thm} will be complete once Theorem~\ref{geometry2} is proven in Section~\ref{sec5}.
\section{The proof of Theorem~\ref{geometry1}.} \label{sec4} In this section we will prove Theorem~\ref{geometry1}. Suppose that $\{M_n\}_{n\in \mathbb N}$ is a sequence of compact $H_n$-surfaces in $\mathbb{R}^3$ with finite genus at most $k$, $\vec{0}\in {M_{n}}$, ${M_{n}}$ contains no spherical components, $\partial {M_{n}}\subset [\mathbb{R}^3 -\mathbb{B}(n)]$, the sequence has locally positive injectivity radius in $\mathbb{R}^3$ and
$\lim_{n\to \infty} |A_{{M_{n}}}|(\vec{0})=\infty$. Since we will use some of the results proved here in the proof of Theorem~\ref{geometry2}, we will for the moment not invoke the additional hypothesis $\lim_{n\to \infty} I_{{M_{n}}}(\vec{0})=\infty$.
Since $\lim_{n\to \infty} |A_{{M_{n}}}|(\vec{0})=\infty$ and $I_{M_n}(\vec{0})$ is bounded from below by some positive number, Theorem~\ref{cest} implies that $\lim_{n\to \infty} H_n =0$. After replacing by a subsequence, there exists a smallest closed nonempty set $\mathcal{S}\subset \mathbb{R}^3$ such that the sequence $\{M_n\}_{n\in \mathbb N}$ has locally bounded norm of the second fundamental form in $\mathbb{R}^3-\mathcal{S}$ and converges with respect to the $C^{\alpha}$-norm, for any ${\alpha}\in (0,1)$, to a nonempty minimal lamination $\mathcal{L}$ of $\mathbb{R}^3-\mathcal{S}$; the set $\mathcal{S}$ is smallest in the sense that no subsequence converges to a minimal lamination in the complement of a proper closed subset of $\mathcal{S}$. The proofs of the existence of $\mathcal{S}$ and $\mathcal{L}$ are the same as those appearing in the proofs of the first three items in Claim~3.4 in~\cite{mt9} and we refer the reader to~\cite{mt9} for the details.
We begin by studying the geometry of $\mathcal{L}$ and $\mathcal{S}$. The local analysis presented here
is analogous to and inspired by the one given in the minimal case considered by
Colding and Minicozzi in~\cite{cm25}. Indeed, the structure of the lamination
near points of $\mathcal{S}$ is identical.
Let $p\in \mathcal{S}$. After replacing by a subsequence, there exists a sequence of points $p_n\in {M_{n}}$ converging to $p$ such that the norm of the second fundamental form of ${M_{n}}$ at $p_n$ is at least $n$. Since we may assume that the injectivity radius function of $M_n$ is at least some ${\varepsilon}>0$ at $p_n$, applying Theorem~\ref{thm1.1} we find that for $n$ large, the intersection of each $H_n$-disk $B_{{M_{n}}}(p_n,{\varepsilon})$ with $\mathbb{B}(p_n, {\delta}_1{\varepsilon})$ contains a component ${M_{n}}(p_n, {\delta}_1{\varepsilon})$ that is an $H_n$-disk with boundary in the boundary of $\mathbb{B}(p_n, {\delta}_1{\varepsilon})$. Theorem~\ref{mainextension} now gives that for $n$ large, there exists a collection of 3-valued graphs $\{G_1(n),\ldots,G_{k(n)}(n)\}$ with inner boundaries converging to $p$, norms of their gradients at most one, and $\lim_{n\to \infty} k(n)= \infty$. Since $\{G_1(n),\ldots,G_{k(n)}(n)\}$ is a collection of embedded and pairwise disjoint 3-valued graphs contained in a compact ball, it must contain a sequence of 3-valued graphs for which the distance between the sheets is going to zero. Hence, after reindexing, we can assume that the 3-valued graphs $G_1(n)$ are collapsing in the limit to a minimal disk $D(p)\subset \mathbb{B}(p,s)$ of gradient at most one over its tangent plane at $p$, where $\partial D(p)\subset \partial \mathbb{B}(p,s)$ and $s<\delta_1{\varepsilon}$ is fixed and depends only on ${\varepsilon}$; actually one produces the punctured graphical disk \[ D(p,*)=D(p)-\{p\} \]
as a limit and then $p$ is seen to be a removable singularity. \begin{figure}
\caption{The local {\em Colding-Minicozzi picture} around a point where the curvature blows up. The stable punctured disk $D(p,*)$ appears in the limit lamination.}
\label{fig2cone}
\end{figure}
By the one-sided curvature estimate in Corollary~\ref{cest2}, the 3-valued graph $G_1(n)$ gives rise to curvature estimates at points of ${M_{n}}$ nearby $G_1(n)$ and, as $n$ goes to infinity, these curvature estimates give rise to curvature estimates in compact subsets of the complement set \[ W(p)= \mathbb{B}(p,s)-\mathcal{C}_p \] of some closed solid double cone $\mathcal{C}_p$ with axis being the normal line to $D(p)$ at $p$. In other words, after replacing by a subsequence, the surfaces ${M_{n}}$ have locally bounded norm of the second fundamental form in $W(p)$, which implies $W(p)\cap\mathcal{S}=\O$. This observation implies that for every point $p\in \mathcal{S}$, one has a minimal lamination $\mathcal{L}_{W(p)}=\mathcal{L}\cap W(p)$ of $W(p)$ as described in previous paragraphs: see Figure~\ref{fig2cone}. This local picture is exactly the same as the one that occurs in the case where the mean curvatures of the surfaces ${M_{n}}$ are zero; as in the minimal case, we refer to it as the local {\em Colding-Minicozzi picture} near points in $\mathcal{S}$.
See the discussion following Definition~4.9 in~\cite{mpr14} for a more detailed analysis of this picture.
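Although the precise aperture of $\mathcal{C}_p$ is not fixed by the construction, such a closed solid double cone admits an explicit description: if $N_p$ is a unit vector spanning the normal line to $D(p)$ at $p$, then for a suitable half-angle $\theta\in(0,\pi\slash 2)$ one may take
\[
\mathcal{C}_p=\left\{p+v\in\mathbb{R}^3 \;\colon\; |v-\langle v,N_p\rangle N_p|\leq \tan(\theta)\, |\langle v,N_p\rangle|\right\},
\]
a closed set that is invariant under the symmetry $v\mapsto -v$ and has the normal line to $D(p)$ at $p$ as its axis.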
\begin{definition} Given $p\in \mathcal{S}$, let $L_p$ be the leaf of $\mathcal{L}$ containing the punctured disk $D(p,*)$. \end{definition}
The arguments appearing in the proof of the next claim are based on the proofs of the similar Lemmas~4.10, 4.11 and 4.12 in~\cite{mpr14}.
\begin{claim}\label{epsdistance} The closure of $L_p$ in $\mathbb{R}^3$ is a plane $\overline L_p$ which intersects $\mathcal{S}$ in a discrete set of points. \end{claim}
\begin{proof} We claim that the punctured disk $D(p,*)$ in the Colding-Minicozzi picture at $p$ is a limit leaf of the local lamination $\mathcal{L}_{W(p)}$ of $W(p)$. Recall that in $W(p)$ the surfaces $M_n$ have uniformly bounded norm of the second fundamental form at points of intersection with the annulus $A=W(p)\cap \partial \mathbb{B}(p,{\varepsilon})$. After replacing $\mathcal{C}_p$ by a cone of wider aperture and choosing ${\varepsilon}>0$ sufficiently small, for $n$ sufficiently large, the annulus $A$ contains a pair of spiraling arcs ${\alpha}_1(n), {\alpha}_2(n)\subset A\cap M_n$ that begin at one of the boundary components of $A$ and end at its other boundary component. Furthermore, as $n\to \infty$, the arcs ${\alpha}_1(n), {\alpha}_2(n)$ converge to a limit lamination $\mathcal{L}_A$ of $A$ that contains the simple closed curve $D(p,*)\cap A$, which is a graph of small gradient over its projection to the tangent space of $D(p)$ at $p$. Since every homotopically non-trivial simple closed curve in $A$ intersects ${\alpha}_1(n)$, by compactness, such a simple closed curve also intersects $\mathcal{L}_A$. In particular, $D(p,*)\cap A$ must be a limit leaf of $\mathcal{L}_A$. It follows that $D(p,*)$ is a limit leaf of $\mathcal{L}_{W(p)}$, which proves our claim.
Since the punctured disk $D(p,*)$ in the Colding-Minicozzi picture at $p$ is a limit leaf of the local lamination $\mathcal{L}_{W(p)}$ of $W(p)$, $L_p$ is a limit leaf of $\mathcal{L}$, and thus it is stable. Consider $L_p$ as a Riemannian surface with its intrinsic metric space structure, namely, the distance between two points in $L_p$ is the infimum of the lengths of arcs on the leaf that join the two points. Let $\widehat{L}_p$ be the abstract metric completion of ${L_p}$. Since $L_p$ is a subset of $\mathbb{R}^3$, $\mathbb{R}^3$ is complete and extrinsic distances are at most equal to intrinsic distances, the inclusion map of $L_p$ into $\mathbb{R}^3$ extends uniquely to a continuous map from $\widehat{L}_p$ into $\mathbb{R}^3$, and the image of $\widehat{L}_p$ is contained in the closure $\overline L_p$ of $L_p$ in $\mathbb{R}^3$. Note that this continuous map sends a point $q\in \widehat{L}_p -L_p$ to a point of $\overline L_p\cap \mathcal{S}$, which with an abuse of notation we still call $q$. Suppose $q\in \overline L_p\cap \mathcal{S}$ is the induced inclusion into $\mathbb{R}^3$ of a point in $\widehat{L}_p$ and let $\{q_k\}_k\subset L_p$ be a Cauchy sequence converging to $q$. If for all $q\in \overline L_p\cap \mathcal{S}$, the related Cauchy sequence $q_k$ lies in the punctured disk $D(q,*)$, then the inclusion of the completion $\widehat{L}_p$ of $L_p$ in $\mathbb{R}^3$ would be a complete minimal surface in $\mathbb{R}^3$. Since $\widehat{L}_p$ would be stable outside of a discrete set of points and since for any compact minimal surface $\Lambda$ with boundary, $\Lambda $ is stable if and only if $\Lambda $ punctured in a finite set of points is stable, it follows that the minimal surface $\widehat{L}_p$ is stable. Hence, $\widehat{L}_p$ viewed in $\mathbb{R}^3$ would be a plane equal to $\overline L_p$~\cite{cp1,fs1}. 
Thus, in order to show that $\overline L_p$ is a plane, it suffices to show that for $k$ large, the points $q_k$ lie in the punctured disk $D(q,*)$.
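Before carrying this out, we recall why puncturing does not affect stability, a fact used above: points have zero capacity in dimension two. In the flat model around a single puncture (a sketch; on the actual surface one argues with the intrinsic distance), one uses logarithmic cutoffs
\[
\phi_\rho(r)=\begin{cases} 0, & r\leq \rho^2,\\[1mm] \dfrac{\log(r\slash \rho^2)}{\log(1\slash \rho)}, & \rho^2\leq r\leq \rho,\\[1mm] 1, & r\geq \rho, \end{cases}
\qquad\qquad \int |\nabla \phi_\rho|^2\,dA= \frac{2\pi}{\log(1\slash \rho)}\longrightarrow 0 \,\text{ as }\rho\to 0,
\]
so a test function on the unpunctured surface can be multiplied by such cutoffs to produce admissible test functions for the punctured surface at an arbitrarily small cost in energy; letting $\rho\to 0$ in the stability inequality then yields stability of the unpunctured surface.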
Arguing by contradiction, suppose that for some $q\in \overline L_p\cap \mathcal{S}$ and $k$ large, $q_k\not\in D(q,*)$. Clearly for $k$ large, $q_k$ is arbitrarily close to $q$ in $\mathbb{R}^3$ and in particular, $q_k\in \mathbb{B}(q,s)$. Then it follows that, after extracting a subsequence, for $k$ large, the points $q_k$ lie in the same component $\Delta$ of $\mathbb{B}(q,s)-D(q)$.
First consider the special case where $q$ is an isolated point in $\mathcal{S}\cap \overline{\Delta}$, that is, for $\rho$ sufficiently small, $ \overline{\Delta}\cap \mathcal{S}\cap \mathbb{B}(q,\rho)=\{q\}$. Then $\overline L_p\cap \overline{\Delta}\cap[\mathbb{B}(q,\rho)-q]$ is a minimal lamination of $\mathbb{B}(q,\rho) -\{q\}$ with stable leaves and thus $q$ is a removable singularity of this minimal lamination by Theorem~1.2 in~\cite{mpr10}. This regularity property implies that for $\rho$ sufficiently small, $\overline \Delta\cap \{L_p\cup \{q\}\}\cap\mathbb{B}(q,\rho)$ contains a collection of disks $\{D_n\}_{n\in \mathbb{N}}$ in $\mathcal{L}$ with boundary curves in $\partial\mathbb{B}(q,\rho)$ that converge $C^1$ to the disk $D(q)$ as $n$ goes to infinity. Since the points $q_k$ lie in components of $\overline \Delta \cap \{L_p\cup \{q\}\}$ that are different from $D(q)$, then their intrinsic distances to $q$ in $\widehat L_p$ would be bounded uniformly from below by $\rho/2$ for $k$ large; this is because each such point $q_k$ is separated in $\mathbb{B}(q,\rho)$ from $q$ by the disk ${D}_n$ for $n$ sufficiently large. Therefore, in this case the sequence of points $q_k$ cannot be a Cauchy sequence converging to $q$ in $\widehat L_p$.
The case when $q$ is not an isolated point of $\mathcal{S}\cap \overline{\Delta}$ can be treated in a similar manner. If there exists a sequence of points $p_i\in \mathcal{S}\cap \overline{\Delta}$ converging to $q$, then for $i$ large, through each of these points there would be a punctured disk $[D'(p_i)-\{p_i\}]\subset L_q$ with boundary in $\partial \mathbb{B}(q,\rho)$, and the disks $D'(p_i)$ converge $C^1$ to $D(q)$. If two points, $x_1, x_2$, in $L_p\cap \Delta\cap \mathbb{B}(q,\rho/2)$, $\rho$ small, lie on different disks or are separated in $\Delta$ by one of the disks $D'(p_i)$, then the distance between $x_1$ and $x_2$ in $L_p$ would be bounded from below by $\rho/3$. Since the points $p_i$ converge in $\Delta\cap \mathbb{B}(q,\rho/2)$ to $q$, there is always a disk $D'(p_i)$ that eventually separates points in the Cauchy sequence $\{q_k\}_k$. Hence, the sequence $\{q_k\}_k$ cannot be a Cauchy sequence unless, for $k$ large, the points $q_k$ lie in $D(q,*)$. By the argument given in the first paragraph of the proof, this implies that $\overline L_p$ is a plane.
Finally, the fact that $\overline L_p\cap \mathcal{S}$ is a discrete set of points in the plane $\overline L_p$ follows
from the geometry of the Colding-Minicozzi picture. This completes the proof of the claim. \end{proof}
By Claim~\ref{epsdistance}, for any $p\in \mathcal{S}$ the closure of $L_p$ in $\mathbb{R}^3$ is a plane $\overline L_p$ which intersects $\mathcal{S}$ in a discrete, therefore countable, set of points. After applying a fixed rotation around the origin, we will assume that $L_{\vec{0}}$ and $L_p$, for any $p\in\mathcal{S}$, are horizontal planes.
The arguments already considered in the proof of Theorem~\ref{H-lam-thm} can be adapted to show that the leaves of the minimal lamination $\mathcal{L}$ have genus at most $k$ and hence have finite genus. For the sake of completeness, we include the proof of this key topological property for the leaves of $\mathcal{L}$.
\begin{claim} \label{FG-claim} Each leaf $L$ of $\mathcal{L}$ has finite genus at most $k$. \end{claim}
\begin{proof} First suppose that $L=L_p$ for some $p\in \mathcal{S}$. In this case $\overline L_p$ is a plane and so $L$ has genus zero, which implies that the claim holds for $L$.
Next consider the case $\overline{L}\cap \mathcal{S}=\O$. In this case $\overline{L}$ is a minimal lamination of $\mathbb{R}^3$ and the arguments in the proof of Theorem~\ref{H-lam-thm} imply that the claim holds for $L$.
Finally, consider the case that $p\in \overline{L}\cap \mathcal{S}$ and $L\neq L_p$. In this case, $L$ lies in a halfspace component ${\mathcal H}$ of $\mathbb{R}^3-\overline L_p$. Suppose for the moment that $(\overline{L}\cap \mathcal{S})-\overline L_p\neq \O$ and let $q\in(\overline{L}\cap \mathcal{S})-\overline L_p$. In this subcase, $L$ is contained in the open slab ${\mathcal T}$ of $\mathbb{R}^3$ with boundary planes $\overline L_p$ and $\overline L_q$. Since $L$ is connected, it must intersect every horizontal plane contained in ${\mathcal T}$; note also that ${\mathcal T}\cap \mathcal{S} =\O$.
We claim that $L$ is properly embedded in ${\mathcal T}$. If not, then the closure of $L$ in ${\mathcal T}$ is a minimal lamination of ${\mathcal T}$ with a limit leaf $X$, which is stable. By the same argument as in Claim~\ref{epsdistance}, stability implies that $X$ extends across the closed countable set $\overline{L}\cap \mathcal{S}\subset(\overline L_p\cup \overline L_q)\cap \mathcal{S}$ to a complete stable minimal surface in $\mathbb{R}^3$. Hence, $X$ is a horizontal plane in ${\mathcal T}$ which is disjoint from $L$, which contradicts the discussion in the previous paragraph. Hence, in this subcase $L$ is properly embedded in ${\mathcal T}$. A similar argument shows that if $ (\overline{L}\cap \mathcal{S})-\overline L_p=\O$, then the leaf $L$ is properly embedded in the half space ${\mathcal H}$. Hence, in either case, $L$ is properly embedded in an open simply-connected subset of $\mathbb{R}^3$ and so it separates this open set and has an open regular neighborhood in it. Now the
arguments in the proof of Theorem~\ref{H-lam-thm} imply that $L$ has finite
genus at most $k$. \end{proof}
By using Claims~\ref{epsdistance} and~\ref{FG-claim}, the next claim follows. Since the proof of Claim~\ref{onepoint} is almost identical to the proof of the first statement in Claim~3.2 in~\cite{mt13}, we omit it here.
\begin{claim} \label{onepoint} For any $t\in\mathbb{R}$, the intersection $\{x_3=t\}\cap\mathcal{S}$ is nonempty. \end{claim}
We now invoke the last hypothesis in the statement of Theorem~\ref{geometry1}: $$\lim_{n\to \infty} I_{{M_{n}}}(\vec{0})=\infty.$$ We remark that we have obtained Claim~\ref{onepoint} without invoking this hypothesis. After replacing by a subsequence, there exists an
increasing sequence of integers $N(m)$, $m\in \mathbb{N}$, such that the injectivity radius function of $M_{N(m)}$ at $\vec{0}$ is greater than ${m}/{\delta}_1$, where ${\delta}_1$ is the constant given in Theorem~\ref{thm1.1}. Therefore, by the same theorem, for any $m\in\mathbb{N}$, the connected component $M(N(m))$ of $M_{N(m)}\cap \mathbb{B}(m)$ containing the origin is an $H_{N(m)}$-disk with $\partial M(N(m))\subset \partial \mathbb{B}(m)$. For simplicity of notation and after replacing by a further subsequence and relabeling, we will use $M_n$ to denote the sequence $M_{N(m)}$ and so $M(n)$ will now denote $M(N(m))$. After replacing by a further subsequence, we will assume that item~B of Theorem~\ref{thm2.1} holds.
Let $l$ denote the $x_3$-axis. Since
$\lim_{n\to \infty}|A_{M_n}|(\vec{0}) = \infty$, part (a) of item~B of Theorem~\ref{thm2.1} shows that the sequence $M(n)$ converges away from $l$ to a foliation $\mathcal{L}'$ of $\mathbb{R}^3-l$ by punctured horizontal planes. Part (b) of item~B of Theorem~\ref{thm2.1} implies that given $R>0$, if $n$ is sufficiently large there exists a possibly disconnected compact subdomain $\mathcal{C}_{n}(R)$ of $M(n)$, with $[M(n)\cap \mathbb{B}(R/2)]\subset \mathcal{C}_n(R) \subset \mathbb{B}(R)$ and with $\partial \mathcal{C}_n(R)\subset \mathbb{B}(R)-\mathbb{B}(R/2)$, consisting of a disk ${\mathcal D}_n(R,1)$ containing the origin $\vec{0}$ and possibly a second disk ${\mathcal D}_n(R,2)$. Moreover, the diameter of each connected component of $\mathcal{C}_{n}(R)$ is bounded by $3R$ and ${\mathcal D}_n(R,i)\cap \mathbb{B}(R/n)\neq\mbox{\rm \O}$, for $i=1,2$. Hence, if $M_n\cap \mathbb{B}(R/2)=M(n)\cap \mathbb{B}(R/2)$ then the theorem follows. If that is not the case, then we proceed as follows.
Suppose, after choosing a subsequence, that for some $R>0$, $M_n\cap \mathbb{B}(R/2)$ contains a component $\Delta_n(R)$ that is not contained in $\mathcal{C}_n(R)$. We first show that even in this case, the sequence $\{M_n\}_{n\in \mathbb{N}}$, and not solely $\{M(n)\}_{n\in \mathbb{N}}$, converges to the foliation $\mathcal{L}'$ away from $l$. Since $M_n$ has locally positive injectivity radius, the horizontal planar regions forming on $M(n)$ away from $l$ imply that the sequence $\{M_n\}_{n\in\mathbb N}$ has locally bounded norm of the second fundamental form in $\mathbb{R}^3-l$; these curvature estimates arise from the intrinsic curvature estimates in Corollary~\ref{cest2}. By the embeddedness of $M_n$, $M_n$ must converge to $\mathcal{L}'$ away from $l$ as $n$ goes to infinity; moreover, near $l$ the sequence $M_n$ again has arbitrarily large norm of the second fundamental form. This discussion proves that $\mathcal{S}=l$ and $\mathcal{L}=\mathcal{L}'$ regardless of whether or not $M_n\cap \mathbb{B}(R/2)=M(n)\cap \mathbb{B}(R/2)$. Moreover, using these curvature estimates and the double spiral staircase structure of ${\mathcal D}_n(R,1)$, it is straightforward to prove that
$\Delta_n(R)$ contains points $y_n$ converging to $\vec{0}$.
After choosing a subsequence, we may assume that for some fixed $R$ and for every $n\in \mathbb{N}$, $M_n\cap \mathbb{B}(R/2)\neq M(n)\cap \mathbb{B}(R/2)$. In this remaining case, let
$y_n$ be chosen as in the previous paragraph.
Assume for the moment that $\lim_{n\to\infty} I_{M_n}(y_n)=\infty$; this will be proved below in Claim~\ref{claim:Inj=Inf}. Arguing similarly to the previous discussion, after replacing by a subsequence, we may assume that $I_{M_n}(y_n)\geq n/{\delta}_1$, where ${\delta}_1$ is the constant given in Theorem~\ref{thm1.1}, and that $\partial M(n)\subset\partial \mathbb{B}(R_n)$, with $R_n>2n$. By Theorem~\ref{thm1.1}, the connected component $M'(n)$ of $M_n\cap \mathbb{B}(y_n,n)$ containing $y_n$ is an $H_n$-disk with $\partial M'(n)\subset \partial \mathbb{B}(y_n,n)$. Item~B of Theorem~\ref{thm2.1} implies that for $n$ sufficiently large, there exists a possibly disconnected compact subdomain $\mathcal{C}_n'(R)\subset M'(n)$ with $\mathcal{C}_n'(R) \subset \mathbb{B}(y_n,R)$ and with $\partial \mathcal{C}'_n(R)\subset [\mathbb{B}(y_n,R)-\mathbb{B}(y_n,R/2)]$ consisting of a disk ${\mathcal D}'_n(R,1)$ containing $y_n$ and possibly a second disk ${\mathcal D}'_n(R,2)$, where each disk has intrinsic diameter bounded by $3R$ and ${\mathcal D}'_n(R,i)\cap \mathbb{B}(y_n,R/n)\neq\mbox{\rm \O}$, for $i=1,2$.
Since $\lim_{n\to\infty}y_n=\vec 0$ and $R_n>2n$, $M(n)$ and $M'(n)$ are disks satisfying the following properties: \begin{itemize} \item $M(n)\subset \mathbb{B}(R_n)$ and $\partial M(n)\subset \partial \mathbb{B}(R_n)$; \item $M'(n)\subset \mathbb{B}(y_n, n)\subset \mathbb{B}(R_n)$ and $\partial M'(n)\subset \partial \mathbb{B}(y_n,n)$; \item $y_n\notin M(n)$ and $y_n\in M'(n)$. \end{itemize}
Then elementary separation properties give
that $M(n)\cap M'(n)=\mbox{\rm \O}$. In particular,
$\mathcal{C}_n(R)\cap \mathcal{C}'_n(R)=\mbox{\rm \O}$. Thus, to finish the
proof assuming $\lim_{n\to \infty} I_{M_n}(y_n)=\infty$, it suffices to show that $M_n\cap \mathbb{B}(R/2)\subset {\mathcal D}_n(R,1)\cup{\mathcal D}_n'(R,1)$.
If either ${\mathcal D}_n(R,2)$ or ${\mathcal D}_n'(R,2)$ existed, then applying Corollary~\ref{cest-cor} would give that $\vec 0$ cannot be a singular point, a contradiction. Therefore, ${\mathcal D}_n(R,2)$ and ${\mathcal D}_n'(R,2)$ do not exist. On the other hand, if $M_n\cap \mathbb{B}(R/2)\neq[M(n)\cup M'(n)]\cap \mathbb{B}(R/2)$, then by repeating the arguments used so far, there would exist a sequence of points $x_n$ with $\lim_{n\to\infty}x_n=\vec 0$ and a third sequence of disks ${\mathcal D}''_n(R)$ disjoint from ${\mathcal D}_n(R,1)\cup{\mathcal D}_n'(R,1)$, with $x_n\in {\mathcal D}''_n(R)$ and $\partial {\mathcal D}''_n(R)\subset [\mathbb{B}(x_n, R)-\mathbb{B}(x_n,R/2)]$. Again, one would obtain a contradiction by applying Corollary~\ref{cest-cor}. Therefore $M_n\cap \mathbb{B}(R/2)=[M(n)\cup M'(n)]\cap \mathbb{B}(R/2)$ and so, to complete the proof of Theorem~\ref{geometry1}, it remains to prove the claim below.
\begin{claim} \label{claim:Inj=Inf} $\lim_{n\to \infty} I_{M_n}(y_n)=\infty$. \end{claim} \begin{proof} Arguing by contradiction, suppose that, after replacing by a subsequence, the numbers $T_n:=I_{M_n}(y_n)$ satisfy $\lim_{n\to \infty} T_n=T\in (0,\infty)$. Since $\lim_{n\to\infty} H_n=0$, the Gauss equation implies that
the Gaussian curvature functions $K_{M_n}$ of the surfaces $M_n$ satisfy
$\limsup_{n\to\infty}\, \sup_{M_n} K_{M_n}\leq 0$.
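This curvature bound is a pointwise consequence of the arithmetic-geometric mean inequality applied to the principal curvatures $\kappa_1, \kappa_2$ of $M_n$:
\[
K_{M_n}=\kappa_1\kappa_2\leq \Big(\frac{\kappa_1+\kappa_2}{2}\Big)^{2}=H_n^2,
\]
and the right-hand side tends to zero as $n$ goes to infinity.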
Classical results on Jacobi fields along geodesics in such surfaces imply that for $n$ large the exponential map of $M_n$ at $y_n$
on the closed disk in $T_{y_n}M_n$ of radius $T_n$ is a local diffeomorphism that is injective on the interior of the disk but it is not injective along its boundary circle of radius $T_n$. Hence, there exists a sequence of simple closed geodesic loops ${\alpha}_n\subset M_n$ based at $y_n$ and of lengths $2T_n$ converging to $2T$ that are smooth everywhere except possibly at $y_n$. Since the sequence $\{M_n\}_{n\in\mathbb N}$ has locally positive injectivity radius in $\mathbb{R}^3$, there exists an ${\varepsilon}\in(0,T)$ such that for $n$ large, \[
I_{M_n}|_{M_n\cap\mathbb{B}(\vec 0, 5T)}\geq {\varepsilon}. \] Therefore, if the intrinsic distance between two points $x$ and $y$ in $M_n\cap \mathbb{B}(\vec 0, 5T)$ is less than ${\varepsilon}$, then there exists a unique length minimizing geodesic in $B_{M_n}(x,{\varepsilon})$
connecting them.
Since
the sequence of surfaces is converging to flat planes away from $l$ and $\lim_{n\to\infty }y_n= \vec 0$, if for some divergent sequence of integers $n$, there were points $p_n\in {\alpha}_n$ that lie outside of some fixed-size cylindrical neighborhood of $l$ and converge to a point $p$, then a subsequence of the geodesics ${\alpha}_n$ would converge to a set containing an infinite geodesic ray starting at $p$ in the horizontal plane containing $p$. This follows because the convergence is smooth away from $l$. If there were a sequence of points $p_n\in{\alpha}_n$ converging to a point $p$ not in $l$, then a neighborhood $U_n$ of $p_n$ would converge smoothly to a horizontal flat disk $D(p)$ centered at $p$. Since ${\alpha}_n$ is a geodesic, ${\alpha}_n\cap U_n$ would converge to a diameter $d$ of $D(p)$ and there would be a point $q\in D(p)$ which is the limit of points $q_n\in{\alpha}_n$ and that is further away from $l$ than $p$. The convergence is smooth near $q$. Therefore, applying the previous argument gives that the limit set of convergence of ${\alpha}_n$ can be extended at $q$ in the direction $\overrightarrow{pq}$. Iterating this argument would give that the limit set of ${\alpha}_n$ contains an infinite geodesic ray starting at $p$ in the horizontal plane containing $p$. This would give a contradiction because the loops have length less than $3T$. Therefore, after replacing by a subsequence, the ${\alpha}_n$ must converge to a vertical segment $\sigma$ containing the origin and of length less than or equal to $T$.
Note that by Theorem~\ref{thm1.1}, for $n$ large, ${\alpha}_n$ cannot be contained in $\mathbb{B}(y_n, {\delta}_1 {\varepsilon})$, otherwise it would be contained in $B_{M_n}(y_n, {\varepsilon} \slash 2)$ and, by the properties of ${\varepsilon}$, $B_{M_n}(y_n, {\varepsilon} \slash 2)$ cannot contain a geodesic loop such as ${\alpha}_n$. Therefore, if we let $p_1$ and $p_2$ be the endpoints of the line segment $\sigma$, without loss of generality, we can assume that $p_1\neq \vec 0$ and that $x_3(p_1) \in [\delta_1 {\varepsilon},T]$. Let $q_n$ be points of ${\alpha}_n$ with a largest $x_3 $-coordinate, and so $\lim_{n\to\infty}q_n=p_1$. By arguments similar to the ones used in the previous paragraphs of this proof, for any $r< {\varepsilon}/ 2$, ${\alpha}_n\cap \mathbb{B}(q_n, {\delta}_1r)$ contains an arc component $\beta_n$ with $\partial \beta_n= z_n(1)\cup z_n(2)\subset \partial \mathbb{B}(q_n, {\delta}_1r) $ satisfying the following properties for $n$ large: \begin{enumerate} \item $q_n\in\beta_n$;
\item $\lim_{n\to\infty}|z_n(1)-z_n(2)|=0$; \item $z_n(2)\in B_{M_n}(z_n(1), r)$; \item $\mbox{\rm dist}_{M_n} (z_n(1),z_n(2))\geq {\delta}_1 r.$ \end{enumerate}
Let $r:=\min \{{\varepsilon}\slash a, {\varepsilon}/ 2\}$, where $a$ is the
constant given in Theorem~\ref{main2}.
Then applying Theorem~\ref{main2} with $\vec 0$ replaced
by $z_n(1)$ and $R=r$, we have that if $\sup_{B_{M_n}(z_n(1),r_0(n))}|A_{M_n}|>\frac1{r_0(n)}$ for some $r_0(n)<r$, then \[
\frac{1}{3}\mbox{\rm dist}_{M_n} (z_n(1),z_n(2))<|z_n(1)-z_n(2)|+r_0(n). \] Since as $n$ goes to infinity, there are points arbitrarily intrinsically close to $z_n(1)$ and with arbitrarily large norm of the second fundamental form, we can assume that $r_0(n)<\frac{ {\delta}_1 {\varepsilon}}{6 a} $. Combining this bound with the inequality $\mbox{\rm dist}_{M_n} (z_n(1),z_n(2))\geq \frac{ {\delta}_1 {\varepsilon}}{a}$ and the previous inequality, we obtain \[
\frac{ {\delta}_1 {\varepsilon}}{6 a}<|z_n(1)-z_n(2)|. \] Since the right-hand side of this inequality is going to zero as $n$ goes to infinity, while the left-hand side is a fixed positive constant independent of $n$, we have obtained a contradiction, which finishes the proof that $\lim_{n \to \infty} I_{M_n}(y_n)=\infty$. \end{proof}
Now that item~2 of Theorem~\ref{geometry1} is proved, we can apply Theorem~\ref{thm2.1} and Remark~\ref{remark:spiral} to obtain the double spiral staircase description in item~3 for each of the 1 or 2 components of $\mathcal{C}_n$.
This final observation completes the proof of Theorem~\ref{geometry1}.
\section{The proof of Theorem~\ref{geometry2}.} \label{sec5}
In this section we will prove Theorem~\ref{geometry2}. Suppose that $\{M_n\}_{n\in \mathbb N}$ is a sequence of compact $H_n$-surfaces in $\mathbb{R}^3$ with finite genus at most $k$, $\vec{0}\in {M_{n}}$, ${M_{n}}$ contains no spherical components, $\partial {M_{n}}\subset [\mathbb{R}^3 -\mathbb{B}(n)]$, the sequence has locally positive injectivity radius in $\mathbb{R}^3$ and \[
\lim_{n\to \infty} |A_{{M_{n}}}|(\vec{0})=\infty\,\text{ and }\lim_{n\to \infty} I_{{M_{n}}}(\vec{0})=C,\]
for some $C>0$.
After replacing by a subsequence, Claim~\ref{onepoint} implies that the surfaces $M_n$ converge $C^\alpha$, for any ${\alpha}\in(0,1)$, to a minimal lamination $\mathcal{L}$ outside of a closed set $\mathcal{S}$ with $\vec 0\in \mathcal{S}$ and $x_3(\mathcal{S})=\mathbb{R}$, where each leaf of $\mathcal{L}$ is a horizontal plane punctured in a discrete set of points in $\mathcal{S}$.
The Colding-Minicozzi picture of $\mathcal{L}$ around each point of $\mathcal{S}$ together with the curvature estimates in Theorem~\ref{th} and the fact that the foliation $\mathcal{L}$ is a foliation of $\mathbb{R}^3-\mathcal{S}$ by punctured horizontal planes imply that if $\mathcal{S}_0$ is a connected component of $\mathcal{S}$, then $x_3(\mathcal{S}_0)=\mathbb{R}$ and $\mathcal{S}_0$ is a Lipschitz graph over the $x_3$-axis. Moreover, given $p\in \mathcal{S} \cap \mathbb{B}(R)$, there exists $\delta:=\delta(R)>0$ such that for $n$ large, the intersection ${M_{n}}\cap \mathbb{B}(p,\delta)$ consists of one or two disks. This is a consequence of Corollary~\ref{cest-cor}, Theorem~\ref{thm1.1} and the fact that for $n$ large, the injectivity radius function of ${M_{n}}$ is bounded away from zero on any fixed compact set of $\mathbb{R}^3$. By this observation and arguing like in the proof of Theorem~1.1 in~\cite{mt13}, one obtains the following result.
\begin{claim} \label{claim-lines} The set $\mathcal{S}$ satisfies the following properties: \begin{itemize} \item The set $\mathcal{S}$ is a discrete collection of vertical lines, one of which is the $x_3$-axis. \item Given $R>0$, each line segment in $\mathcal{S}\cap \mathbb{B}(R)$ is the $C^1$ limit with multiplicity at most two of analytic curves in $M_n$ which are pre-images of the equator via the Gauss map. \item Let $l$ be a line in $\mathcal{S}$, and let $p\in l$ and $R>0$ be such that $\mathbb{B}(p,R)\cap \mathcal{S} \subset l$. Then, for $n$ large, the collection $\mathcal{C}_n$ of components of ${M_{n}}\cap \mathbb{B}(p,\frac R2)$ that intersect $\mathbb{B}(p,\frac R4)$ consists of at most two disjoint disks. Furthermore, each of the 1 or 2 disk components of $\mathcal{C}_n$ is contained in a disk in $M_n\cap \mathbb{B}(p,R)$ with boundary curve in $\mathbb{B}(p,R)-\mathbb{B}(p,R/2)$, where these disks have the structure of double spiral staircases, see Remark~\ref{remark:spiral}, with central columns that are graphs with small $C^1$-norms over an arc in $l\,\cap \mathbb{B}(R)$, and $\mathcal{C}_n\cap \mathbb{B}(p,\frac R4)$ is contained in the union of these subdisks. \end{itemize} \end{claim}
Let $l$ be a line in $\mathcal{S}$. We now need to attach two labels to $l$. The first one is the following: if $l$ is the $C^1$ limit with multiplicity one, respectively two, of analytic curves which are pre-images of the equator via the Gauss map, we say that $l$ has {\em multiplicity one}, respectively {\em two}. The second label is the following: let $C_l(R)$ be the vertical solid cylinder of radius $R$ with axis $l$. For a given line $l$ in $ \mathcal{S}$, fix $R_l>0$ such that $C_l(2R_l)\cap \mathcal{S}=l$. Then, for $n$ large, $\partial C_l(R_l\slash2)\cap {M_{n}}$ contains either two or four highly winding spirals near the $(x_1,x_2)$-plane. Since ${M_{n}}$ is embedded, these spirals are all right-handed or left-handed for a given $n$. Passing to a subsequence and using a diagonal argument, we may assume that for a given $l$ in $\mathcal{S}$ and $n$ large, the spirals have the same ``handedness.'' We say that $l$ is {\em right-handed} if such spirals are right-handed and that $l$ is {\em left-handed} otherwise.
In the next claims we prove that $\mathcal{S}$ consists of exactly two vertical lines.
\begin{claim}\label{twolines} Let $l_1$ and $l_2$ be two distinct components of $\mathcal{S}$. Then, if $l_1$ is right-handed (left-handed), $l_2$ must be left-handed (right-handed). In particular, $\mathcal{S}$ consists of at most two lines. \end{claim} \begin{proof} Arguing by contradiction, suppose that $l_1$ and $l_2$ are two distinct components of $\mathcal{S}$ having the same handedness. We will obtain a contradiction by proving that as $n$ goes to infinity, the number of pairwise disjoint pairs of loops in $M_n$, where the two loops in each pair intersect transversely at a single point, is greater than the fixed genus bound $k$ for the surfaces ${M_{n}}$. By standard topological arguments, the existence of such loops implies that the genus of ${M_{n}}$ is greater than $k$.
Without loss of generality, suppose that $l_1$ and $l_2$ are both left-handed. Let $p_i:=l_i\cap\{x_3=0\}$, $i=1,2$, and let $\overline{p_1p_2}$ denote the line segment connecting them. For simplicity, first assume that $\overline{p_1p_2}\cap [\mathcal{S}-[l_1\cup l_2]]=\O$. Then, as $n$ goes to infinity, the segment $\overline{p_1p_2}-[C_{l_1}(R_{l_1}\slash4 )\cup C_{l_2}(R_{l_2}\slash 4)]$ lifts near the $(x_1,x_2)$-plane to an increasing number of arcs $\gamma_i$ in ${M_{n}}-[C_{l_1}(R_{l_1}\slash4 )\cup C_{l_2}(R_{l_2}\slash 4)]$. In fact, an ${\varepsilon}$-neighborhood $\Gamma$ of $\overline{p_1p_2}$ in the $(x_1,x_2)$-plane lifts to an increasing number of strips $\Gamma_i$ in ${M_{n}}-[C_{l_1}(R_{l_1}\slash4 -{\varepsilon})\cup C_{l_2}(R_{l_2}\slash 4-{\varepsilon})]$. Because ${M_{n}}$ is embedded, the strips $\Gamma_i$ can be ordered by their relative heights. Moreover, the arcs of the spiralling curves $M_n \cap \partial C_{l_1}(R_{l_1}\slash2 )$ given by $\Gamma_i\cap \partial C_{l_1}(R_{l_1}\slash2 )$ can be connected, via arcs in $\Gamma_i$, to the arcs of the spiralling curves $M_n \cap \partial C_{l_2}(R_{l_2}\slash2 )$ given by $\Gamma_i\cap \partial C_{l_2}(R_{l_2}\slash2 )$. There are three possibilities to consider. \begin{enumerate} \item The lines $l_1$ and $l_2$ both have multiplicity one. \item One line has multiplicity one and the other has multiplicity two. \item Both lines have multiplicity two. \end{enumerate}
The construction of the collection of pairwise disjoint pairs
of loops when the lines $l_1$ and $l_2$ both have
multiplicity one is illustrated in Figure~\ref{infinitegenus1}. \begin{figure}
\caption{The blue curve and the yellow curve intersect exactly at one point.}
\label{infinitegenus1}
\end{figure}
The construction of the collection of pairwise disjoint pairs
of loops in case two is illustrated in Figure~\ref{infinitegenus2}.
\begin{figure}
\caption{The right side of the picture is connected. On the left side of the picture, the red set is part of a connected set ${\mathcal H}_1$ and the green set is part of a connected set ${\mathcal H}_2$. The sets ${\mathcal H}_1$ and ${\mathcal H}_2$ are disjoint. Therefore, the end points of the blue arc and of the yellow arc can be connected so that the resulting closed curves intersect in exactly one point, as shown in the picture.}
\label{infinitegenus2}
\end{figure} The construction in the third and last case is also straightforward and it is left to the reader.
If $\overline{p_1p_2}\cap [\mathcal{S}-[l_1\cup l_2]]\neq\O$, the proof can be easily modified by replacing $\overline{p_1p_2}$ by a smooth embedded arc in the $(x_1,x_2)$-plane that is a small normal graph over $\overline{p_1p_2}$ and only intersects the singular set at its end points $p_1,p_2$. \end{proof}
The next claim finally shows that $\mathcal{S}$ consists of exactly two lines. Note that by the previous claim, if there are two lines in $\mathcal{S}$, then one of these two lines must be right-handed and the other one must be left-handed. Recall that \[ \lim_{n\to \infty} I_{{M_{n}}}(\vec{0})=C. \]
\begin{claim}\label{twovertical} The set $\mathcal{S}$ consists of exactly two vertical lines one of which is the $x_3$-axis. \end{claim}
\begin{proof}
We have already shown that there are at most two vertical lines in $\mathcal{S}$ and that
the $x_3$-axis is in $\mathcal{S}$. Since $\lim_{n\to\infty} H_n=0$,
the Gauss equation implies that
the $\limsup$ of the Gaussian curvature functions $K_{M_n}$
of the surfaces $M_n$ is non-positive.
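Explicitly, writing $\kappa_1,\kappa_2$ for the principal curvatures and assuming the convention that $H$ denotes their average, the Gauss equation for a surface in $\mathbb{R}^3$ gives
\[
K=\kappa_1\kappa_2=H^2-\Big(\frac{\kappa_1-\kappa_2}{2}\Big)^2\leq H^2,
\]
so $K_{M_n}\leq H_n^2$ pointwise and $\lim_{n\to\infty}H_n=0$ yields $\limsup_{n\to\infty}K_{M_n}\leq 0$.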
Since $I_{{M_{n}}}(\vec{0})=C_n$ and $\lim_{n\to \infty} C_n=C$,
classical results on Jacobi fields along geodesics in such surfaces imply that for $n$ large the exponential map of $M_n$ at the origin
on the closed disk in $T_{\vec 0}M_n$ of radius $C_n$ is a local diffeomorphism that is injective on the open disk but not injective along its boundary circle of radius $C_n$. Hence, there exists a sequence of simple closed geodesic loops ${\alpha}_n\subset M_n$ based at the origin and of lengths $2C_n$ converging to $2C$ that are smooth everywhere except possibly at the origin.
Arguing by contradiction, suppose that $\mathcal{S}$ is the $x_3$-axis. The arguments to rule out this picture are exactly the same ones used in the proof of Claim~\ref{claim:Inj=Inf} by taking $T=C$. This implies that the number of lines in $\mathcal{S}$ must be two. \end{proof}
From now on, $l_1$ will denote the $x_3$-axis, $l_2$ will denote the other component in $\mathcal{S}$ and $p_2=l_2\cap \{x_3=0\}$.
\begin{claim}\label{multi} If $l_1$ has multiplicity one, respectively two, then so does $l_2$. \end{claim} \begin{proof} By Claim~\ref{twolines}, one vertical line must be right-handed and the other must be left-handed. Suppose that one line has multiplicity one and the other has multiplicity two. Like in the proof of Claim~\ref{twolines}, we will obtain a contradiction by proving that as $n$ goes to infinity, the number of pairwise disjoint pairs of loops such that each pair intersects transversely at one point is greater than the fixed genus bound $k$ for the surfaces ${M_{n}}$. The construction of such pairs of loops is illustrated in Figure~\ref{infinitegenus3}. \begin{figure}
\caption{The right side of the picture is connected. On the left side of the picture, the red set is part of a connected set ${\mathcal H}_1$ and the green set is part of a connected set ${\mathcal H}_2$. The sets ${\mathcal H}_1$ and ${\mathcal H}_2$ are disjoint. Therefore, the end points of the blue arc and of the yellow arc can be connected so that the resulting closed curves intersect in exactly one point, as shown in the picture. One of the differences with Figure~\ref{infinitegenus2} is in the handedness of the lines.}
\label{infinitegenus3}
\end{figure} \end{proof}
Assume now that $l_1$ has multiplicity two. Let $d>0$ denote the distance between $l_1$ and $l_2$. Recall that by Claims~\ref{claim-lines} and~\ref{multi}, if $l_1$ has multiplicity two so does $l_2$ and that, for $n$ large, the subset ${\mathcal D}_1(n)$ of $M_n\cap \mathbb{B}(\frac d2)$ such that ${\mathcal D}_1(n)\cap \mathbb{B}(\frac d4)\neq \O$ consists of two disks $A_1(n)$ and $B_1(n)$ and the subset ${\mathcal D}_2(n)$ of $M_n\cap \mathbb{B}(p_2,\frac d2)$ such that ${\mathcal D}_2(n)\cap \mathbb{B}(p_2,\frac d4)\neq \O$ consists of two disks $A_2(n)$ and $B_2(n)$.
\begin{claim} \label{claim5.5} Let $R>d$ and suppose that $l_1$ has multiplicity two. Then, for $n$ large, after possibly relabeling the disks, the following holds. The components $\Delta_n$ of $M_n\cap \mathbb{B}(R)$ such that $\Delta_n \cap \mathbb{B}(\frac d4)\neq \O $ consist of two distinct planar domains $\Delta_1(n)$ and $\Delta_2(n)$ such that $A_1(n)\cup A_2(n)\subset \Delta_1(n)$ and $B_1(n)\cup B_2(n)\subset \Delta_2(n)$.
\end{claim}
\begin{proof} Let $\Pi$ denote the vertical plane perpendicular to the segment $\vec 0 p_2$ connecting $\vec 0$ and $p_2$ and containing its midpoint. As $n$ goes to infinity, $\Pi\cap \Delta_n$ consists of an increasing collection of arcs $S(n)$ that are becoming horizontal arcs. Since $\Delta_n$ is embedded, these planar arcs can be ordered by their relative heights over the midpoint of $\vec 0 p_2$. Note that $\Delta_n-\Pi$ contains four pairwise disjoint components $\Omega_{ij}(n)$, $i,j=1,2$, and the following holds for $i=1,2$: $A_i(n)\subset \Omega_{i1}(n)$ and $B_i(n)\subset \Omega_{i2}(n)$. For $i=1,2$, let $\alpha_i(n)$ denote the arcs in $S(n)$ that are contained in the boundary of $\Omega_{i1}(n)$ and let $\beta_i(n)$ denote the arcs in $S(n)$ that are contained in the boundary of $\Omega_{i2}(n)$. Recall that $A_1(n)$ and $B_1(n)$, respectively $A_2(n)$ and $B_2(n)$, separate $\mathbb{B}(\frac d4)$, respectively $\mathbb{B}(p_2,\frac d4)$, into three components and the mean curvature vector of $M_n$ points outside of the component $W_1(n)$, respectively $W_2(n)$, with boundary $A_1(n)\cup B_1(n)$, respectively $A_2(n)\cup B_2(n)$. This is because otherwise applying Corollary~4.9 in~\cite{mt13} would give curvature estimates in a neighborhood of $\vec 0$, respectively $p_2$, and this would contradict the fact that $\vec 0$, respectively $p_2$, is in $\mathcal{S}$. Using this observation and the previous discussion gives that either $\alpha_1(n)=\alpha_2(n)$ and $\beta_1(n)=\beta_2(n)$, or $\alpha_1(n)=\beta_2(n)$ and $\beta_1(n)=\alpha_2(n)$. After possibly relabeling, either case implies that $\Delta_n$ is disconnected.
It remains to prove that each connected component of $\Delta_n$ has genus zero. This follows from the ``almost periodicity'' of the previous description. If there were a pair of loops intersecting at exactly one point then, as $n$ goes to infinity, there would be an increasing number of such pairs in $M_n$, contradicting the fact that the genus of $M_n$ is bounded from above by $k$.
\end{proof}
\begin{remark}\label{moneortwo} {\em If instead $l_1$ has multiplicity one then, by the same arguments, the components $\Delta_n$ of $M_n\cap \mathbb{B}(R)$ such that $\Delta_n \cap \mathbb{B}(\frac d4)\neq \O $ consist of a unique
planar domain for $n$ large. Moreover, it is easy to see that if $l_1$ has
multiplicity two and $\Delta_2(n)$ denotes the connected component of $\Delta_n$
that intersects $\mathbb{B}(\frac d4)$ and does not contain the origin then, after possibly
passing to a subsequence, $\Delta_2(n)\cap \mathbb{B}(\frac Rn)\neq \O$. }
\end{remark}
For the time being, let us assume that the distance between $l_1$ and $l_2$ is $C$. We now deal with the construction of closed curves with non-zero flux. Note that this construction is analogous to the one described in Figures~4, 5 and 6 in~\cite{mpr3}, which in turn was a modification of a related argument in~\cite{mpr1}; the closed curves constructed by the methods in~\cite{mpr1,mpr3} are called {\em connection loops}.
Recall that if $\gamma$ is a 1-cycle in an $H$-surface $M$, then the {\em flux} of $\gamma$ is \[ F({\gamma})=\int_{\gamma}(H\gamma+\xi)\times \dot{\gamma}, \]
where $\xi$ is the unit normal to $M$ along $\gamma$. The flux of a 1-cycle in $M$ is a homological invariant.
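As a simple illustration of this invariance (not needed for the argument), if $M$ is a flat plane, so that $H=0$ and $\xi$ is a constant unit vector, then for any closed curve $\gamma$ in $M$
\[
F({\gamma})=\int_{\gamma}\xi\times\dot{\gamma}=\xi\times\int_{\gamma}\dot{\gamma}=\vec 0,
\]
since $\int_{\gamma}\dot{\gamma}=\vec 0$ for a closed curve; this is consistent with homological invariance, as every 1-cycle in a plane is a boundary.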
Given ${\varepsilon}>0$ sufficiently small, as $n$ goes to infinity, the line segment $\vec0p_2-[\mathbb{B}({\varepsilon})\cup \mathbb{B}(p_2,{\varepsilon})]$ lifts to an increasing number of arcs $\gamma_i(n,{\varepsilon})$ in ${M_{n}}-[\mathbb{B}({\varepsilon})\cup \mathbb{B}(p_2,{\varepsilon})]$ that, as $n$ goes to infinity, converge $C^1$ to the line segment $\vec0 p_2-[\vec0 p_2\cap [\mathbb{B}({\varepsilon})\cup\mathbb{B}(p_2,{\varepsilon})]]$. Because ${M_{n}}$ is embedded, the lifts $\gamma_i(n,{\varepsilon})$ can be ordered by their relative heights and the signs of the inner product between the unit normal vector to ${M_{n}}$ along $\gamma_i(n,{\varepsilon})$ and $(0,0,1)$ are alternating.
Let ${\varepsilon}_n$ be a sequence of positive numbers with $\lim_{n\to\infty}{\varepsilon}_n=0$
such that there exists a sequence of two consecutive lifts $\gamma_1(n,{\varepsilon}_n)$ and $\gamma_2(n,{\varepsilon}_n)$ of $\vec0p_2-[\mathbb{B}({\varepsilon}_n)\cup \mathbb{B}(p_2,{\varepsilon}_n)]$ for which the following holds: the end points of such lifts are contained in $\mathbb{B}(2{\varepsilon}_n)$ and $\mathbb{B}(p_2,2{\varepsilon}_n)$ and the lifts converge to the line segment $\vec 0p_2$ away from $\vec 0$ and $p_2$ as $n$ goes to infinity. Let $\alpha_1(n,{\varepsilon}_n)$ be an arc in $\mathbb{B}(2{\varepsilon}_n)\cap {M_{n}}$ connecting the endpoints of $\gamma_1(n,{\varepsilon}_n)$ and $\gamma_2(n,{\varepsilon}_n)$ in $\mathbb{B}(2{\varepsilon}_n)$ and let $\alpha_2(n,{\varepsilon}_n)$ be an arc in $\mathbb{B}(p_2,2{\varepsilon}_n)\cap {M_{n}}$ connecting the endpoints of $\gamma_1(n,{\varepsilon}_n)$ and $\gamma_2(n,{\varepsilon}_n)$ in $\mathbb{B}(p_2,2{\varepsilon}_n)$ such that the loop \[ \gamma_1(n,{\varepsilon}_n)\cup\alpha_1(n,{\varepsilon}_n)\cup \gamma_2(n,{\varepsilon}_n)\cup\alpha_2(n,{\varepsilon}_n) \] is smooth; note that since the sequence $\{M_n\}_{n\in\mathbb N}$ has locally positive injectivity radius in $\mathbb{R}^3$, by using Theorem~\ref{thm1.1}, as $n$ goes to infinity, the sum of the lengths of the arcs $\alpha_1(n,{\varepsilon}_n)$ and $\alpha_2(n,{\varepsilon}_n)$ can be assumed to approach zero as well. Let $\Gamma_{n}$ be a unit speed parametrization of such a loop and, for $p\in \Gamma_{n}$, let $ N_{n}(p)$ denote the normal to ${M_{n}}$ at $p$.
Recall that as $n$ goes to infinity, the mean curvature of $M_n$ is going to zero; therefore, since the length of $\Gamma_{n}$ is bounded from above independently of $n$, the term in the flux formula involving the mean curvature is going to zero. In other words, \[ F(\Gamma_{n})=\int_{\Gamma_{n}} N_{n}(p)\times \dot \Gamma_{n}(p)+f(n), \quad \text{where } \lim_{n\to \infty}f(n)=0. \] As $n$ goes to infinity, for any $p\in \gamma_i(n,{\varepsilon}_n)$ the vectors $N_{n}(p)\times \dot \Gamma_{n}(p)$ are converging to the same unit vector perpendicular to $\vec0p_2$, the lengths of ${\alpha}_1(n,{\varepsilon}_n)\cup{\alpha}_2(n,{\varepsilon}_n)$ are going to zero, and the lengths of $\gamma_1(n,{\varepsilon}_n)\cup\gamma_2(n,{\varepsilon}_n)$ are converging to $2C$. Therefore, after possibly changing their orientation, the curves $\Gamma_{n}$ converge to the line segment $\vec 0p_2$, have lengths converging to $2C$ and fluxes converging to $(0,2C,0)$. This construction of curves with non-zero flux finishes the proof of item 3 of Theorem~\ref{geometry2}, assuming that the distance between the lines $l_1$ and $l_2$ is $C$.
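To see where the limit flux comes from, suppose after a rotation that $p_2=(C,0,0)$, so that $\vec 0p_2$ lies in the $x_1$-axis. Along the two lifts the unit normals limit to $\pm(0,0,1)$ with alternating signs, while the unit tangent vectors limit to $\pm(1,0,0)$, the sign being reversed as the loop is traversed. In either case
\[
(0,0,1)\times(1,0,0)=(0,1,0)=(0,0,-1)\times(-1,0,0),
\]
so both lifts contribute $(0,1,0)$ times their length $C$ to the limit of $\int_{\Gamma_n}N_n\times\dot\Gamma_n$, which is therefore $(0,2C,0)$.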
We can now prove that the distance $d$ between the lines $l_1$ and $l_2$ is $C$. Arguing by contradiction, suppose that $d<C$ or $d>C$. If $d<C$ then, by the previous arguments, there exists a sequence of loops $\Gamma_{n}$ containing the origin with the norms of their fluxes bounded from below by $d$. Since the flux of a 1-cycle is a homological invariant, this implies that such curves are homologically non-trivial. Moreover, the lengths of $\Gamma_{n}$ are converging to $2d<2C$. Therefore, there exists ${\varepsilon}>0$ such that for $n$ sufficiently large, $\Gamma_{n}\subset B_{M_n}(\vec 0, C-{\varepsilon})$. However, since $I_{{M_{n}}}(\vec{0})=C_n$ and $\lim_{n\to \infty} C_n=C$, for $n$ sufficiently large $B_{M_n}(\vec 0, C-{\varepsilon})$ is a disk. This implies that for $n$ sufficiently large, $\Gamma_{n}$ is homologically trivial, which is a contradiction.
Suppose $d>C$. Since $I_{M_n}(\vec 0)=C_n$ and $\lim_{n\to \infty} C_n=C\in (0,\infty)$, there exists a sequence of simple closed geodesic loops ${\alpha}_n\subset M_n$ based at $\vec 0$ and of lengths $2C_n$ converging to $2C$ that are smooth everywhere except possibly at $\vec 0$; see the proof of Claim~\ref{claim:Inj=Inf}. In fact, arguing exactly as in the proof of Claim~\ref{claim:Inj=Inf} gives that the limit set of $\alpha_n$ must contain a point in $\mathcal{S}-l_1$. Note that $\alpha_n\subset \overline{\mathbb{B}(C_n)}$. Since $d>C$, there exists ${\varepsilon}>0$ such that, for $n$ sufficiently large, $\alpha_n\subset \mathbb{B}(d-{\varepsilon})$. In particular, for $n$ sufficiently large, $\alpha_n$ is at distance at least ${\varepsilon}$ from the line $l_2=\mathcal{S}-l_1$ and thus the limit set of $\alpha_n$ does not contain a point in $\mathcal{S}-l_1$. This contradiction proves that the distance between $l_1$ and $l_2$ is equal to $C$.
Finally, given $R>C$ let $\Delta(n)$ be a connected component of $M_n\cap \mathbb{B}(R)$ that intersects $\mathbb{B}(\frac R4)$. We want to show that for $n$ sufficiently large, given two points in $\Delta(n)$, their distance in $M_n$ is less than $3R$. Without loss of generality, let us assume that $\Delta(n)$ is the connected component containing the origin. Then, it suffices to show that given a point in $\Delta(n)$, its distance to the origin in $M_n$ is less than $\frac32R$. Arguing by contradiction, assume there exists $R>C$ and points $p(n)\in \Delta(n)$ at distance greater than or equal to $\frac 32R$ from the origin. After going to a subsequence, let $p=\lim_{n\to\infty} p(n)$. Recall that by Theorem~\ref{thm1.1} and Claim~\ref{claim5.5}, since the sequence $\{M_n\}_{n\in\mathbb N}$ has locally positive injectivity radius in $\mathbb{R}^3$, there exists ${\varepsilon}>0$ such that for $n$ sufficiently large, the intersection $\Delta(n)\cap\mathbb{B}({\varepsilon})$ is a disk that is contained in $B_{M_n}(R)$. Therefore, for $n$ sufficiently large, $p(n)\notin\mathbb{B}({\varepsilon})$, which implies that $p\notin\mathbb{B}(\frac{{\varepsilon}}{2})$.
Let $\gamma$ be the horizontal line segment connecting $p$ to a point $q$ in the $x_3$-axis and let $\alpha$ be the line segment in the $x_3$-axis connecting $q$ to the origin. If $\gamma\cap l_2\neq \O$, let $z$ denote such a point of intersection. Note that the length of $\gamma\cup\alpha$ is less than $\sqrt 2R<\frac 32R$. By the arguments used in the proof of this theorem, it is clear that there exists a sequence of curves $\gamma(n)$ in $M_n$ connecting the origin to the point $p(n)$ that converges to $\gamma\cup\alpha$ away from the points $q$ and $z$. Moreover, the lengths of these curves converge to the length of $\gamma\cup\alpha$. Therefore, for $n$ sufficiently large, the length of $\gamma(n)$ is less than $\frac 32R$ and so the distance from $p(n)$ to the origin in $M_n$ is less than $\frac32R$. This contradiction proves that for $n$ sufficiently large, given two points in $\Delta(n)$, their distance in $M_n$ is less than $3R$.
The geometric description given in item 2 of Theorem~\ref{geometry2} follows easily from the arguments used in its proof. This finishes the proof of the theorem.
\begin{remark} \label{rem:6.6} {\em Suppose that for some ${\varepsilon}>0$, $\{M(n)\}_{n\in \mathbb N}$ is a sequence of compact $H_n$-surfaces in $\mathbb{R}^3$ with genus at most $k$,
$\vec{0}\in M(n)$, $d_{M(n)}(\vec{0},\partial M(n))\to \infty$ and $ I_{M(n)}(x)\geq {\varepsilon}$ for any $x\in M(n)$ with $d_{M(n)}(x,\partial M(n))>1$. Then Corollary~3.2 in~\cite{mt8} shows that after replacing by a subsequence, the components $M_n$ of $M(n)\cap\mathbb{B}(n)$ containing the origin satisfy the conditions of one of Theorems~\ref{H-lam-thm}, \ref{geometry1} or \ref{geometry2}. } \end{remark}
\begin{definition}{\rm A point of {\em almost-minimal injectivity radius}\label{amininj} of a compact surface $M$ with boundary is a point $p \in M$ where the function $\frac{d_{M}(p,\partial M)}{I_{M}(p )}$ attains its maximum value.} \end{definition}
As a consequence of Remark~\ref{rem:6.6}, we have the following proposition, which is related to Theorem~1.1 in~\cite{mpr14}, proved there under the hypothesis that $H=0$.
\begin{proposition} \label{cor:5.7} Let $\{M(n)\}_{n\in\mathbb N}$ be a sequence of compact $H_n$-surfaces with boundary embedded in $\mathbb{R}^3$ with genus at most $k$ together with points $p_n\in M(n)$ satisfying $$\lim_{n\to\infty}\frac{d_{M(n)}(p_n,\partial M(n))}{I_{M(n)}(p_n)} = \infty.$$
Given points $q_n\in M(n)$ of almost-minimal injectivity radius, there exist
positive numbers $R_n$, $\lim_{n\to\infty}R_n= \infty$, satisfying: \begin{enumerate} \item The component $M_n$ of $[\frac{1}{I_{M(n)}(q_n)} (M(n)-q_n)]\cap \mathbb{B}(R_n)$ containing $\vec{0}$ has its boundary in $ \partial \mathbb{B}(R_n)$ and genus at most $k$. \item The sequence $\{M_n\}_{n\in \mathbb{N}}$ has uniformly positive injectivity radius in $\mathbb{R}^3$ and $I_{M_n}(\vec{0})=1$.\end{enumerate} Then after choosing a subsequence and then translating the surfaces $M_n$ by vectors of length at most 1, the sequence $\{M_n\}_{n\in \mathbb{N}}$ satisfies the hypotheses of Theorem~\ref{geometry2} with $C=1$ or the sequence $\{M_n\}_{n\in \mathbb{N}}$ satisfies the hypotheses of Theorem~\ref{H-lam-thm}. \end{proposition} \begin{proof} After choosing a subsequence, suppose that $$\frac{d_{M(n)}(p_n,\partial M(n))}{I_{M(n)}(p_n)}\geq n.$$ Let $q_n\in M(n)$ be points of almost-minimal injectivity radius and let $M'_n$ be the scaled and translated $H'_n$-surface $$M_n'=\frac{1}{I_{M(n)}(q_n)} [B_{M(n)}(q_n,n/2)-q_n].$$ Observe that $\{{M'_{n}}\}_{n\in \mathbb{N}}$ is a sequence of compact $H'_n$-surfaces in $\mathbb{R}^3$ with genus at most $k$,
$\vec{0}\in {M'_{n}}$, $\lim_{n\to \infty}d_{M'_n}(\vec{0},\partial M'_n)=\infty$ and $ I_{M'_n}(x)\geq 1/2$ for any $x\in M'_n$ with $d_{M'_n}(x,\partial M'_n)>1$. Proposition~\ref{cor:5.7} now follows immediately from Remark~\ref{rem:6.6}. \end{proof}
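The rescaling in the proof above uses the standard scaling identities: under the homothety $x\mapsto\lambda x$ of $\mathbb{R}^3$ with $\lambda>0$, a surface $M$ with mean curvature function $H_M$ satisfies
\[
I_{\lambda M}(\lambda p)=\lambda\, I_M(p),\qquad H_{\lambda M}(\lambda p)=\lambda^{-1}H_M(p),
\]
so the choice $\lambda=1/I_{M(n)}(q_n)$ normalizes the injectivity radius of the rescaled surface at $\vec 0$, the image of $q_n$, to be 1.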
\section{Appendix.}
In this appendix we give the definition of a weak CMC lamination of a Riemannian three-manifold. Specializing to the case where all of the leaves have the same mean curvature $H\in \mathbb{R}$, one obtains the definition of a weak $H$-lamination, for which we give a few more explanations. A simple example of a weak 1-lamination $\mathcal{L}$ of $\mathbb{R}^3$ that is not a 1-lamination is the union of two spheres of radius 1 that intersect at a single point of tangency.
For further background material on these notions see Section~3 of~\cite{mpr21}, \cite{mpr18} or our previous papers~\cite{mt4,mt3}.
\begin{definition} \label{definition} {\rm A (codimension-one) {\it weak CMC lamination} ${\cal L}$ of a Riemannian three-manifold $N$ is a collection $\{ L_\alpha\}_{\alpha\in I}$ of (not necessarily injectively) immersed constant mean curvature surfaces called the {\it leaves} of ${\cal L}$, satisfying the following four properties. \begin{enumerate}[1.] \item $\bigcup_{\alpha\in I}L_{\alpha }$ is a closed subset of $N$. With an abuse of notation, we will also consider $\mathcal{L}$ to be the closed set $\bigcup_{\alpha\in I}L_{\alpha }$.
\item The function $|A_{\cal L}|\colon {\cal L}\to [0,\infty )$ given by \begin{equation} \label{eq:sigma}
|A _{\cal L}|(p)=\sup \{ |A _L|(p)\ | \ L \mbox{ is a leaf of ${\cal L}$ with $p\in L$} \} \end{equation} is uniformly bounded on compact sets of~$N$. \item For every $p\in N$, there exists an ${\varepsilon}_p>0$ such that if for some ${\alpha}\in I$, $q\in L_{\alpha} \cap B_N(p,{\varepsilon}_p)$, then $q$ has a disk neighborhood in $L_{\alpha}$ whose boundary is contained in $N- B_N(p,{\varepsilon}_p)$. \item If $p\in N$ is a point where either two leaves of ${\cal L}$ intersect or a leaf of ${\cal L}$ intersects itself, then each of these surfaces near $p$ lies on one side of the other (this cannot happen if both of the intersecting leaves have the same signed mean curvature as graphs over their common tangent space at $p$, by the maximum principle). \end{enumerate} Furthermore: \begin{itemize} \item If $N=\bigcup _{\alpha } L_{{\alpha} }$, then we call ${\cal L}$ a {\it weak CMC foliation} of $N$.
\item If the leaves of ${\cal L}$ have the same constant mean curvature $H$, then we call ${\cal L}$ a {\it weak $H$-lamination} of $N$ (or $H$-foliation, if additionally $N=\bigcup _{\alpha } L_{{\alpha} }$). \end{itemize} } \end{definition}
\begin{figure}
\caption{The leaves of a weak $H$-lamination with $H\neq 0$ can intersect each other or themselves, but only tangentially with opposite mean curvature vectors. Nevertheless, on the mean convex side of these locally intersecting leaves, there is a lamination structure. }
\label{figHlamin}
\end{figure}
The following proposition follows immediately from the definition of a weak $H$-lamination and the maximum principle for $H$-surfaces.
\begin{proposition} \label{prop10.2} Any weak $H$-lamination ${\cal L}$ of a Riemannian three-manifold $N$ has a local $H$-lamination structure on the mean convex side of each leaf. More precisely, given a leaf $L_{{\alpha} }$ of ${\cal L}$ and given a small disk $\Delta \subset L_{\alpha }$, there exists an ${\varepsilon} >0$ such that if $(q,t)$ denotes the normal coordinates for $\exp _q(t\eta _q)$ (here $\exp $ is the exponential map of $N$ and $\eta $ is the unit normal vector field to $L_{{\alpha} }$ pointing to the mean convex side of $L_{{\alpha} }$), then the exponential map $\exp $ is an injective submersion in
$U(\Delta ,{\varepsilon} ):= \{ (q,t) \ | \ q\in \mbox{\rm Int}(\Delta ), \, t\in (-{\varepsilon} ,{\varepsilon} )\} $, and the inverse image $\exp^{-1}({\cal L})\cap \{ q\in \mbox{\rm Int}(\Delta ), \,t\in [0,{\varepsilon} )\} $ is an $H$-lamination of $U(\Delta ,{\varepsilon} )$ in the pulled back metric; see Figure~\ref{figHlamin}. \end{proposition}
\begin{definition} \label{deflimit}{\rm A leaf $L_{\alpha}$ of a weak $H$-lamination $\mathcal{L}$ is a {\em limit leaf} of $\mathcal{L}$ if at some $p\in L_{\alpha}$, on its mean convex side near $p$, it is a limit leaf of the related local $H$-lamination given in Proposition~\ref{prop10.2}. }\end{definition}
\begin{remark} \label{remarkweak}
{\em \begin{description} \item[{\rm 1.}] A weak $H$-lamination for $H=0$ is a minimal lamination. \item[{\rm 2.}] Every CMC lamination (resp. CMC foliation) of a Riemannian three-manifold is a weak CMC lamination (resp. weak CMC foliation). \item[{\rm 3.}] Theorem~4.3 in~\cite{mpr19} states that the 2-sided cover of a limit leaf of a weak $H$-lamination is stable. By Lemma~3.3 in~\cite{mpr10} and the main theorem in~\cite{ros9}, the only complete stable $H$-surfaces in $\mathbb{R}^3$ are planes. Hence, every leaf $L$ of a weak $H$-lamination $\mathcal{L}$ of $\mathbb{R}^3$ is properly immersed and has an embedded half-open regular neighborhood $N(L)$ on its mean convex side, and $N(L)$ can be chosen to be disjoint from $\mathcal{L}$ if $L$ is not a plane. In particular, if $L$ is a leaf of a weak $H$-lamination $\mathcal{L}$ of $\mathbb{R}^3$, then there is a small perturbation $L'$ of $L$ in $N(L)$ that is properly embedded in $\mathbb{R}^3$.
\end{description}
} \end{remark}
\begin{center} William H. Meeks, III at profmeeks@gmail.com\\ Mathematics Department, University of Massachusetts, Amherst, MA 01003\\[2mm] Giuseppe Tinaglia at giuseppe.tinaglia@kcl.ac.uk\\ Department of Mathematics, King's College London, London, WC2R 2LS, U.K. \end{center}
\end{document}
\begin{document}
\begin{abstract} For an arbitrary complex algebraic variety which is not necessarily pure dimensional, the intersection complex can be defined as the direct sum of the Deligne-Goresky-MacPherson intersection complexes of each irreducible component. We give two axiomatic topological characterizations of the middle perversity direct sum intersection complex, one stratification dependent and the other stratification independent. To accomplish this, we show that this direct sum intersection complex can be constructed using Deligne's construction in the more general context of topologically stratified spaces. A consequence of these characterizations is the invariance of this direct sum intersection complex under homeomorphisms. \end{abstract}
\maketitle
\tableofcontents
\section{Introduction} In \cite{GM1}, Goresky and MacPherson introduce the intersection (co)homology groups for a topological pseudomanifold. In \cite{GM2}, Goresky and MacPherson construct a complex of sheaves whose (hyper)cohomology gives the intersection homology groups. This complex of sheaves is called the \textit{(Deligne-Goresky-MacPherson) intersection complex} and the construction is referred to as \textit{Deligne's construction} (the indexing convention used for intersection complexes is discussed at the end of the introduction). They show that the intersection complex is uniquely characterized (up to canonical isomorphism) by certain axioms. A consequence of this characterization is that the intersection complex, and hence the intersection homology, is invariant under homeomorphisms. Irreducible, or even pure dimensional, (complex algebraic) varieties can be viewed as topological pseudomanifolds and intersection homology is a useful tool for understanding their topology; see \cite{dCL}. Arbitrary varieties, however, cannot be viewed as topological pseudomanifolds because their irreducible components may have differing dimensions. Instead, they must be viewed as topologically stratified spaces; see $\S \ref{top strat space}$.
For arbitrary varieties, there is still a natural candidate for the intersection complex. In \cite{dC1}, de Cataldo defines the middle perversity intersection complex of a variety as a direct sum of the middle perversity Deligne-Goresky-MacPherson intersection complexes of each irreducible component. He then observes that this complex satisfies virtually all of the properties of the usual intersection complex for irreducible varieties, e.g. Poincar\'e duality, existence of mixed and pure Hodge structures, Lefschetz theorems, etc. In \cite{dCM}, de Cataldo and Maulik prove the homeomorphism invariance of the intersection complex as a lemma and use it to prove that the perverse Leray filtration for the Hitchin morphism is independent of the complex structure of the curve. An axiomatic characterization of the intersection complex, analogous to the one given by Goresky and MacPherson for pseudomanifolds, is desirable because it gives a topological criterion for determining which complexes can be the intersection complexes. Example \ref{ax 2 fails} in $\S 4$ shows that although each summand of the intersection complex is characterized by the axioms proposed by Goresky and MacPherson, it is not so clear which axioms characterize the direct sum.
The main goal of this paper is to give an axiomatic topological characterization of the middle perversity intersection complex of an arbitrary complex algebraic variety which is not necessarily pure dimensional. For those wondering why we only consider the middle perversity, see Remark \ref{arb perv rmk}. Although we will only work with complex algebraic varieties in this paper, our results hold for any topologically stratified space with only even dimensional strata (see Remark \ref{alg var vs strat space}). In particular, they also hold for complex analytic spaces. We summarize our approach below.
Let $X$ be a complex algebraic variety of complex dimension $n$, with stratification $$\mathfrak{X} : X= X_n \supseteq X_{n-1} \supseteq \cdots \supseteq X_{-1}= \varnothing$$ by closed subvarieties, so that all strata contained in $X_k - X_{k-1}$ are of pure complex dimension $k$ and $X$ (in the classical topology) has the structure of a topologically stratified space (e.g. $\mathfrak{X}$ induced by a Whitney stratification). In section \ref{construction}, we show that the stratification induces an open dense subset $U \subseteq X$ such that \begin{enumerate} \item each point of $U$ admits a neighborhood homeomorphic to $\mathbb{C}^m$ for some $1 \leq m \leq n$, i.e. $U = \bigsqcup_{m=1}^n U^m$ where $U^m$ is a topological manifold of complex dimension $m$, \item $\overline{U^m}-U^m$ has complex dimension $\leq m-1$, where $\overline{U^m}$ is the closure of $U^m$ in $X$. \end{enumerate} We then construct a complex $IC(\mathfrak{X},\mathcal{L})$ of sheaves on $X$ using Deligne's construction with respect to: the (lower) middle perversity, the stratification $\mathfrak{X}$, and any local system $\mathcal{L}$ on the induced open dense subset $U \subseteq X$. Proposition \ref{IC is direct sum} shows that we can interpret this complex as a direct sum of Deligne-Goresky-MacPherson intersection complexes. In particular, for possibly reducible varieties, the complex $IC(\mathfrak{X},\mathcal{L})$ is the intersection complex defined by de Cataldo in \cite{dC1}. When the variety is pure dimensional, Deligne's construction begins by using the stratification to induce a filtration by open sets. A key ingredient in our construction of $IC(\mathfrak{X},\mathcal{L})$ is a new way of using the stratification to induce a filtration of a not necessarily pure dimensional complex algebraic variety by open sets. Example \ref{open filt ex} shows that this procedure is more subtle than one might initially expect.
A priori the complex $IC(\mathfrak{X},\mathcal{L})$ depends on the stratification $\mathfrak{X}$ and the local system $\mathcal{L}$. Our main result is the following:
\begin{thm'}[\S \ref{top indep}]\label{main thm 1} Let $X$ be a complex algebraic variety of complex dimension $n$ which is not necessarily pure dimensional. Let $U$ be an open dense subset of $X$ satisfying $(1)$ and $(2)$ above. Let $\mathcal{L}^m$ be a local system on $U^m$ and set $\mathcal{L} = \bigoplus_{m=1}^n \mathcal{L}^m$ (extend each $\mathcal{L}^m$ on $U^m$ to $U$ by zero). Then there exists a unique (up to canonical isomorphism) complex $IC(X, \mathcal{L})$ satisfying: \begin{enumerate}[(a)]
\item (Normalization) There exists an open dense subset $V$ of $X$ such that $V = \bigsqcup_{m=1}^n V^m$ where $V^m$ is a topological manifold of complex dimension $m$, dim$_\mathbb{C}(\overline{V^m} - V^m) \leq m-1$, and $IC(X,\mathcal{L})|_{V^m} \simeq \mathcal{L}'^m[m]$ where $\mathcal{L}'$ is the unique extension of $\mathcal{L}^m|_{U^m \cap V^m}$ to $V^m$ (see Remark \ref{local sys rmk} for more details on $\mathcal{L}'$).
\item (Pure Dimensional Support) For $1 \leq m \leq n$, if $a > -m$, $$\text{dim}_\mathbb{C}\{x \in \overline{V^m} \ | \ \mathcal{H}^a(i_x^*IC(X,\mathcal{L})) \neq 0\} < -a.$$
\item (Pure Dimensional Cosupport) For $1 \leq m \leq n$, if $a < m$, $$\text{dim}_\mathbb{C}\{x \in \overline{V^m} \ | \ \mathcal{H}^a(i_x^!IC(X,\mathcal{L})) \neq 0\} < a.$$ \end{enumerate} where $i_x: \{x\} \to X$ is the inclusion.
In particular, the complex $IC(\mathfrak{X},\mathcal{L})$ is independent of the stratification and the complex $IC(X,\mathcal{L})$ is invariant under homeomorphisms. \end{thm'}
\setcounter{thm'}{0}
This theorem gives another proof of the homeomorphism invariance of the intersection complex proved by de Cataldo and Maulik in \cite{dCM} for possibly reducible varieties. We prove our main theorem by giving two characterizations of the complex $IC(\mathfrak{X},\mathcal{L})$ in Section \ref{axiomatic char}. These characterizations are analogous to the stratification dependent characterization, [AX1], and the stratification independent characterization, [AX2], of the intersection complex of a topological pseudomanifold given by Goresky and MacPherson in \cite{GM2}. To emphasize the analogy with the axioms proposed by Goresky and MacPherson, we will denote our sets of axioms by [AX1$'$] and [AX2$'$]. More precisely, we give a stratification dependent collection of axioms, [AX1$'$], and prove that $IC(\mathfrak{X},\mathcal{L})$ is the unique complex (up to canonical isomorphism) satisfying axioms [AX1$'$]; see Definition \ref{AX$1'$} and Theorem \ref{main thm 2}. We discuss the differences between axioms [AX1$'$] and axioms [AX1] in Remark \ref{ax1' vs ax1}. We then give a stratification independent collection of axioms, [AX2$'$], and prove that axioms [AX2$'$] are equivalent to axioms [AX1$'$]; see Definition \ref{AX$2'$} and Proposition \ref{main prop}. We discuss the differences between axioms [AX2$'$] and axioms [AX2] in Remark \ref{ax2' vs ax2}. In Section \ref{top indep}, we finish the proof by giving a way to compare objects in $D^b_c(X)$ with respect to two different stratifications which may not have a common refinement.
\begin{conv*}\label{indexing conv}
Let $X$ be a complex algebraic variety of pure complex dimension $n$. Let $U \subseteq X$ be an open dense subset which is a topological manifold of dimension $n$ and let $\mathcal{L}$ be a local system on $U$. We require that the middle perversity Deligne-Goresky-MacPherson intersection complex $IC(X,\mathcal{L})$ satisfies $IC(X, \mathcal{L})|_U \simeq \mathcal{L}[n]$. With this convention, $IC(X,\mathcal{L})$ is perverse.\\
\indent Let $X$ be a complex algebraic variety of complex dimension $n$ which is not necessarily pure dimensional. Let $U$ be an open dense subset of $X$ such that $U = \bigsqcup_{m=1}^n U^m$ where each $U^m$ is a topological manifold of dimension $m$ and dim$_\mathbb{C}(\overline{U^m}-U^m) \leq m-1$. Let $X^m = \overline{U^m}$ be the closure of $U^m$ in $X$. In Corollary \ref{irred comp cor}, we show that $X^m$ can be interpreted as the union of all irreducible $m$-dimensional components of $X$. Let $\mathcal{L}^m$ be a local system on $U^m$ and set $\mathcal{L} = \bigoplus_{m=1}^n \mathcal{L}^m$ where each $\mathcal{L}^m$ is extended to $U$ by zero. The intersection complex of $X$ is defined to be $IC(X, \mathcal{L}) = \bigoplus_{m=1}^n IC(X^m,\mathcal{L}^m)$ where each $IC(X^m,\mathcal{L}^m)$ is normalized as above. In particular, $IC(X, \mathcal{L})|_U \simeq \bigoplus_{m=1}^n \mathcal{L}^m[m]$. Although this indexing convention seems more cumbersome than the Borel convention where the local systems are not shifted, it is more convenient to use when constructing the complex $IC(\mathfrak{X},\mathcal{L})$ (see Remark \ref{indexing conv rmk} for a more detailed discussion). \end{conv*}
\section{Preliminaries} We begin by fixing some terminology and notation. Given a set $A$ and a subset $B \subseteq A$, we denote by $B^c$ the set complement of $B$. The word \textit{variety} means a separated scheme of finite type over the complex numbers $\mathbb{C}$. We endow varieties with the classical topology. In this case, Whitney showed that varieties admit the structure of Whitney stratified spaces \cite{W}. Verdier then showed that there exists a Whitney stratification such that each stratum is complex algebraic \cite{V}. Finally, Teissier showed that varieties admit a canonical Whitney stratification for which the strata are algebraic \cite{Te}. We work with a fixed regular Noetherian ring $R$ with finite Krull dimension. We shall mainly be concerned with the cases $R = \mathbb{Z}$, $\mathbb{Q}$, or $\mathbb{C}$. The word \textit{sheaf} means a sheaf of $R$-modules. The constant sheaf on a topological space $X$ is denoted by $R_X$. The word \textit{complex} means a complex of sheaves of $R$-modules. Let $Sh(X)$ denote the abelian category of sheaves on $X$, and $D^b(X)$ denote the bounded derived category of the abelian category $Sh(X)$.
\subsection{Topologically Stratified Spaces} \label{top strat space} We begin by recalling the basic definitions associated with topologically stratified spaces given in \cite{GM2}. A more detailed discussion can be found in \cite[Ch. 2]{F}. \begin{defn}\label{top strat space def} The definition of a topologically stratified space is inductive. A $0$-dimensional topologically stratified Hausdorff space is a countable collection of points with the discrete topology. An $n$-dimensional \textit{topological stratification} of a paracompact Hausdorff space $X$ is a finite filtration $\mathfrak{X}$ by closed subsets \begin{equation} \label{strat eq} \mathfrak{X}: X = X_n \supseteq X_{n-1} \supseteq \cdots \supseteq X_0 \supseteq X_{-1}=\varnothing \end{equation} such that for each point $p \in X_k - X_{k-1}$, there exists a neighborhood $N$ of $p$, a compact Hausdorff space $L$ with an $(n-k-1)$-dimensional topological stratification \begin{equation} L = L_{n-k-1} \supseteq \cdots \supseteq L_0 \supseteq L_{-1}= \varnothing, \end{equation} and a homeomorphism \begin{equation} \label{local struc eq} \phi : \mathbb{R}^k \times \text{cone}^o(L) \to N \end{equation} which takes each $\mathbb{R}^k \times \text{cone}^o(L_j)$ homeomorphically to $N \cap X_{k+j+1}$. Here, cone$^o(L)$ denotes the open cone $L \times [0,1) / \sim$ where $(l,0) \sim (l',0)$ for all $l,l' \in L$. We use the convention that cone$^o(\varnothing)$ is a point. We often refer to $N$ as a \textit{distinguished neighborhood} and $\mathfrak{X}$ as a stratification. In Remark \ref{projection rmk}, we emphasize some important structure of distinguished neighborhoods. To maintain simplicity in our formulas later on, we will make the assumption that stratified spaces do not contain any open $0$-dimensional strata, i.e. isolated points. \end{defn}
If $X_k - X_{k-1}$ is nonempty, then for any $p \in X_k - X_{k-1}$, any distinguished neighborhood $N$ gives a homeomorphism $N \cap X_{k} \simeq \mathbb{R}^k \times \text{cone}^o(L_{-1}) \simeq \mathbb{R}^k$. By shrinking $N$ we can take $N \subseteq X_{k-1}^c$. Thus, if $X_k- X_{k-1}$ is nonempty, it is a $k$-dimensional topological manifold. The connected components of $X_k- X_{k-1}$ are called the \textit{$k$-dimensional strata} of $X$.
A consequence of the definition is that stratified spaces satisfy the \textit{axiom of the frontier}, i.e. the closure of any stratum is a union of lower dimensional strata. We refer the reader to \cite[\S 2.2-\S 2.3]{F} for proofs.
\begin{rmk}\label{projection rmk} Let $X$ be a stratified space with stratification $\mathfrak{X}$ and $N \simeq \mathbb{R}^k \times \text{cone}^o(L)$ be a distinguished neighborhood of $x \in X_k-X_{k-1}$. Let $\pi : N \to \text{cone}^o(L)$ denote the natural projection map. There is a natural stratification on $\mathbb{R}^k \times \text{cone}^o(L)$ given by setting $(\mathbb{R}^k \times \text{cone}^o(L))_j \coloneqq \mathbb{R}^k \times \text{cone}^o(L_j)$. Since $\mathbb{R}^k \times \text{cone}^o(L_j)$ is homeomorphic to $N \cap X_{k+j+1}$, the natural stratification on $\mathbb{R}^k \times \text{cone}^o(L)$ is the same as the stratification on $N$ induced by $\mathfrak{X}$. In particular, if $S$ is a stratum of $X$, then $S \cap N$ is a union of strata of the form $\mathbb{R}^k \times \text{cone}^o(T)$ where $T$ is a stratum of $L$. It follows that $\pi^{-1}(\pi(S \cap N)) = S \cap N$. \end{rmk}
\begin{rmk} \label{even dim strat rmk} A Whitney stratification on a complex algebraic variety $X$ induces a topological stratification. Thus, we can view $X$ as a topologically stratified space with only even dimensional strata. We will denote this stratification by \begin{equation*} \mathfrak{X} : X = X_n \supseteq X_{n-1} \supseteq \cdots \supseteq X_0 \supseteq X_{-1} = \varnothing \end{equation*} where $X_k - X_{k-1}$ consists of complex $k$-dimensional strata. The strata can be taken to be complex algebraic, but we will not need this fact. A stratification of a complex algebraic variety will always mean stratification in the above sense. \end{rmk}
\begin{defn} A topologically stratified space $X$ is \textit{purely $n$-dimensional} if $X_n-X_{n-1}$ is dense in $X$. A topologically stratified space is purely $n$-dimensional if and only if every nonempty open set has topological dimension $n$ in the sense of Hurewicz and Wallman described in \cite{HW}. An \textit{$n$-dimensional topological pseudomanifold} is a purely $n$-dimensional topologically stratified space which admits a stratification $\mathfrak{X}$ such that $X_{n-1} = X_{n-2}$. \end{defn}
\begin{defn} Let $X$ and $Y$ be stratified spaces. A continuous map $f: X \to Y$ is \textit{stratified} if \begin{enumerate} \item $f$ is \textit{stratum preserving}, i.e. for any stratum $S$ of $Y_k - Y_{k-1}$, $f^{-1}(S)$ is a union of strata of $X$. \item for each $p \in Y_k - Y_{k-1}$, there exists a neighborhood $N$ of $p$ in $Y_k$, a topologically stratified space $$F = F_k \supseteq F_{k-1} \supseteq \cdots \supseteq F_{-1} = \varnothing$$ and a stratum preserving homeomorphism $F \times N \to f^{-1}(N)$ which commutes with projection to $N$. \end{enumerate} \end{defn}
\subsection{The Constructible Derived Category}
Let $X$ be a topologically stratified space. A sheaf $\mathcal{L}$ on $X$ is \textit{locally constant} if for each $x \in X$, there exists an open neighborhood $U \subseteq X$ of $x$ and an $R$-module $M$ such that $\mathcal{L}|_U \simeq M_U$, where $M_U$ is the constant sheaf on $U$ associated with the $R$-module $M$. A locally constant sheaf $\mathcal{L}$ with finitely generated stalks is referred to as a \textit{local system}. A complex of sheaves $S$ is \textit{cohomologically locally constant} (CLC) if the associated cohomology sheaves are locally constant. Now, let $\mathfrak{X}$ be any filtration of $X$ by closed subsets, not necessarily a stratification. A complex of sheaves $S$ is \textit{cohomologically locally constant with respect to $\mathfrak{X}$} ($\mathfrak{X}$-clc) if for each $k$, $S|_{X_k - X_{k-1}}$ is CLC. A complex of sheaves $S$ is \textit{constructible with respect to $\mathfrak{X}$} ($\mathfrak{X}$-cc) if $S$ is $\mathfrak{X}$-clc and the stalks of the cohomology sheaves are finitely generated. A complex of sheaves $S$ is \textit{topologically constructible} if $S$ is bounded and $S$ is constructible with respect to some stratification of $X$. In this paper, the word \textit{constructible} means topologically constructible. Let $D^b_c(X)$ denote the full subcategory of $D^b(X)$ consisting of constructible complexes and $D^b_{\mathfrak{X}}(X)$ denote the full subcategory of $D^b(X)$ consisting of $\mathfrak{X}$-cc complexes. The standard $t$-structure on $D^b(X)$ induces a $t$-structure on $D^b_c(X)$. The truncation functors are denoted $\tau_{\leq i}: D^b_c(X) \to D^{b, \leq i}_c(X)$ and $\tau_{\geq i}: D^b_c(X) \to D^{b, \geq i}_c(X)$.
Useful references for sheaf theory are \cite{I,KS}. A brief discussion of the constructible derived category can be found in \cite[\S 1.3-\S 1.15]{GM2}. For a more complete discussion, we refer the reader to \cite{B}. We will record some of the most useful facts below for convenience.
Let $X$, $Y$ be stratified spaces with stratifications $\mathfrak{X}$ and $\mathfrak{Y}$ respectively. Let $f:X \to Y$ be a stratified map with respect to these stratifications. We have the four functors \begin{center} \begin{tikzcd} D^b_{\mathfrak{X}}(X) \arrow[r, bend left, "{Rf_*, Rf_!}"] &D^b_\mathfrak{Y}(Y) \arrow[l, bend left, "{f^*, f^!}"]. \end{tikzcd} \end{center}
\begin{prop}\label{top man} If $X$ is an oriented manifold and $i:Z \to X$ is the inclusion of a locally closed oriented submanifold of codimension $d$, we have that $i^!R_X \simeq i^*R_X[-d]$. \begin{proof} See \cite[p. 336]{I}. \end{proof} \end{prop}
There are adjunctions ($f^*,Rf_*$) and ($Rf_!, f^!$). There is a morphism of functors $Rf_! \to Rf_*$ which is an isomorphism if $f$ is proper. For an open set $U \subseteq X$ and $Z = X - U$ its closed complement, we have inclusions $$U \xrightarrow{j} X \xleftarrow{i} Z. $$ Since $Z$ is closed, $Ri_! = i_!$. This gives rise to the \textit{adjunction distinguished triangles} \begin{gather*} i_!i^! \to id \to Rj_*j^* \xrightarrow{[1]},\\ Rj_!j^! \to id \to i_*i^* \xrightarrow{[1]}. \end{gather*}
\begin{lem} \label{projection commute} Let $M$ be a locally contractible topological space and $\pi : X' = X \times M \to X$ be the projection. Let $Y \subseteq X$ and $Y' = \pi^{-1}(Y)$. We have a cartesian diagram \begin{center} \begin{tikzcd} Y' \arrow[r,"i'"] \arrow[d,"\pi'"] &X' \arrow[d, "\pi"]\\ Y \arrow[r,"i"] &X \end{tikzcd} \end{center} \begin{enumerate}[(a)] \item If $Y \subseteq X$ is open and $S \in D^b_c(Y)$, then \begin{equation*} Ri'_*\pi'^{*}S \simeq \pi^*Ri_*S. \end{equation*} \item If $Y \subseteq X$ is closed and $T \in D^b_c(X)$, then \begin{equation*} \pi'^*i^!T \simeq i'^!\pi^*T. \end{equation*} \end{enumerate} \begin{proof} See \cite[V, 3.13]{B}. \end{proof} \end{lem}
We end with the following important proposition.
\begin{prop} \label{morph lifting prop} Suppose $A, B, C$ are objects in $D^b_c(X)$ and $\mathcal{H}^a(A) = 0$ for $a \geq k+1$. Let $\psi : B \to C$ be a morphism such that the induced maps on cohomology $\mathcal{H}^a(B) \to \mathcal{H}^a(C)$ are isomorphisms for all $a \leq k$. Then the map induced by $\psi$ $$\text{Hom}_{D^b_c(X)}(A, B) \to \text{Hom}_{D^b_c(X)}(A, C)$$ is an isomorphism. \begin{proof} See \cite[\S 1.15]{GM2}. \end{proof} \end{prop}
\begin{comment} \subsection{Dimension}\label{dim} The dimension of a stratum is simply its dimension as a manifold. In this paper, we will find it necessary to talk about the dimension of arbitrary topological spaces. The notion of dimension that we will use is the notion of \textit{topological dimension} due to Hurewicz and Wallman, see \cite{HW}.
For manifolds, the notion of topological dimension agrees with the notion of dimension of a manifold. For this reason, we take the word \textit{dimension} to mean topological dimension. If dim$X$ is even, we will sometimes denote $\frac{\text{dim}X}{2}$ by dim$_\mathbb{C} X$ and refer to dim$_\mathbb{C} X$ as the \textit{complex dimension}. We will need the following results.
\begin{lem} Let $M$ and $N$ be path connected, locally path connected, and semilocally simply connected topological spaces. Let $p: \tilde{N} \to N$ be the universal cover. Let $f: M \to N$ be a continuous map. Then the induced map on fundamental groups $f_* : \pi_1(M,x) \to \pi_1(N,f(x))$ is surjective if and only if the pullback covering space $f^*\tilde{N}=M \times_N \tilde{N}$ is connected. \begin{proof} We have a commutative diagram \begin{center} \begin{tikzcd} f^*\tilde{N} \arrow[r] \arrow[d, "p'"] & \tilde{N} \arrow[d,"p"]\\ M \arrow[r,"f"] &N \end{tikzcd} \end{center} For any $x \in M$, let $F' = p'^{-1}(x)$ and $F = p^{-1}(f(x))$. There is a homeomorphism $F' \simeq F$. The long exact sequence of homotopy groups associated with the universal covering $p$ implies that $\pi_1(N) \simeq \pi_0(F)$. The long exact sequence for homotopy groups associated with the covering $p'$ implies that there is an exact sequence $$\pi_1(M) \to \pi_0(F') \to \pi_0(f^*\tilde{N}) \to 0.$$ Using the two isomorphisms above, we have an exact sequence $$\pi_1(M) \to \pi_1(N) \to \pi_0(f^*\tilde{N}) \to 0.$$ From this, we see that the induced map on fundamental groups $f_*$ is surjective if and only if $f^*\tilde{N}$ is connected. \end{proof} \end{lem}
\begin{prop} Any $n$-dimensional topological manifold cannot be disconnected by a subset of topological dimension $\leq n-2$. \begin{proof} See \cite[Ch. IV \S 5 Corollary 1]{HW}. \end{proof} \end{prop}
\begin{prop} \label{fund group surj} Let $M$ be an $n$-dimensional manifold and $Z \subset M$ a closed subset with topological dimension $\leq n-2$. Then there is a surjection of fundamental groups $$\pi_1(M-Z, b) \twoheadrightarrow \pi_1(M,b)$$ induced by the inclusion $i: M -Z \to M$. \begin{proof} Let $p:\tilde{M} \to M$ be the universal covering of $M$ and note that $\tilde{M}$ is also an $n$-dimensional manifold. By the previous proposition, it suffices to prove that $i^*\tilde{M}$ is connected. Since $i$ is inclusion, $i^*\tilde{M} = \tilde{M} - p^{-1}(Z)$. Since $p$ is a local homeomorphism and topological dimension is a local notion, dim$p^{-1}(Z) \leq n-2$. By the proposition of Hurewicz and Wallman, $i^*\tilde{M}=\tilde{M} - p^{-1}(Z)$ is connected. \end{proof} \end{prop} \end{comment}
\section{Deligne's Construction for Complex Algebraic Varieties}\label{construction}
We briefly recall Deligne's construction when the complex algebraic variety $X$ has pure complex dimension $n$ with stratification $\mathfrak{X}$ by closed subvarieties. The stratification induces a filtration by open subsets \begin{equation*} U_1 \subseteq U_2 \subseteq \cdots \subseteq U_{n+1} = X \end{equation*} where $U_k = X-X_{n-k}$. Since $X$ is pure dimensional, $U_1$ is dense in $X$. Let $j_k :U_k \to U_{k+1}$ denote the inclusion maps. Define a complex recursively as follows: if $\mathcal{L}$ is a local system on the open dense union of strata $U_1$, then set \begin{equation*} I_1 = \mathcal{L}[n], \qquad I_{k+1} = \tau_{\leq k-1-n} Rj_{k*}I_k. \end{equation*} The resulting complex $I_{n+1}$ is the Deligne-Goresky-MacPherson intersection complex $IC(X,\mathcal{L})$. Note that here we are using the middle perversity and the indexing convention described at the end of the introduction.
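As a quick illustration of the recursion (a standard computation, included here only for orientation and not used elsewhere), suppose $X$ is an irreducible curve with a single singular point $x_0$, stratified by $X \supseteq \{x_0\} \supseteq \varnothing$. Then $n=1$, $U_1 = X - \{x_0\}$, and for the constant local system $\mathcal{L} = \mathbb{Q}_{U_1}$ the recursion gives \begin{equation*} I_1 = \mathbb{Q}_{U_1}[1], \qquad IC(X,\mathbb{Q}) = I_2 = \tau_{\leq -1}Rj_{1*}\mathbb{Q}_{U_1}[1]. \end{equation*} If $N$ is a distinguished neighborhood of $x_0$ such that $N - \{x_0\}$ has $b$ connected components (one for each local branch of $X$ at $x_0$; e.g. $b = 2$ for a node), then \begin{equation*} \mathcal{H}^{-1}\left(IC(X,\mathbb{Q})\right)_{x_0} \simeq H^0(N - \{x_0\}; \mathbb{Q}) \simeq \mathbb{Q}^b, \end{equation*} while $\mathcal{H}^0\left(IC(X,\mathbb{Q})\right)_{x_0} = 0$ since the truncation $\tau_{\leq -1}$ kills the direct image in degree $0$.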
We see that in the pure dimensional case, the starting point for Deligne's construction of the intersection complex is a local system on an open dense union of strata, shifted by the complex dimension of that open dense set. When the variety is not necessarily pure dimensional, the starting point for Deligne's construction will still be a local system on an open dense union of strata. However, the notion of shifting by dimension becomes more complicated. This is because an open dense set in the variety may consist of many components of different dimensions. Given a local system on an open dense set, restriction gives local systems on each component of fixed dimension. We can then shift each restricted local system by the dimension of the component that it is supported on. We make this more precise below.
In what follows, let $X$ be a complex algebraic variety of complex dimension $n$, with stratification $$\mathfrak{X} : X= X_n \supseteq X_{n-1} \supseteq \cdots \supseteq X_{-1}= \varnothing$$ so that all strata contained in $X_k - X_{k-1}$ are of pure complex dimension $k$ and $X$ has the structure of a topologically stratified space (e.g. $\mathfrak{X}$ is induced by a Whitney stratification). Let $p$ denote the middle perversity. Unless otherwise stated, the word dimension is taken to mean complex dimension.
\begin{rmk}\label{alg var vs strat space} In all of our proofs, we only use the fact that a complex algebraic variety $X$ has the structure of a topologically stratified space (in the sense of Definition \ref{top strat space def}). We do not use the algebraic structure of $X$ or of any of its strata. Thus, if one replaces the words complex algebraic variety by ``topologically stratified space with only even dimensional strata'' and is careful with the notion of dimension, then one obtains the same statements for this larger class of objects. In particular, our results will also hold for complex analytic spaces. If one is interested in the more general statement, then one should use the notion of topological dimension given in \cite{HW}. The main reason we make this simplification is to avoid constantly switching between complex dimension (more natural when stating our results) and real dimension (more natural when discussing stratified spaces). This will hopefully alleviate some of the confusion in the rest of the paper. \end{rmk}
\subsection{Identifying the Open Dense Union of Strata}\label{U1} In this section, we identify an open dense subset of the complex algebraic variety $X$ that will serve as the starting point of Deligne's construction. Fix a stratification $\mathfrak{X}$ of $X$. For each $0 \leq m \leq n$, let $U^m$ be the union of all $m$-dimensional strata which are open in $X$ and let $X^m \coloneqq \overline{U^m}$. Since the closure of a stratum is a union of strata of lower dimension by the axiom of the frontier, $X^m$ is a union of strata and $\partial X^m = X^m - U^m$ is a union of strata of lower dimension. In particular dim$_\mathbb{C}\partial X^m \leq m-1$. Each $X^m$ is therefore a pseudomanifold with stratification $$\mathfrak{X}^m : X^m = X^m_m \supseteq X^m_{m-1} \supseteq \cdots \supseteq X^m_0 \supseteq X^m_{-1} = \varnothing,$$ where $X^m_k = X^m \cap X_k$ and $X^m_k - X^m_{k-1}$ consists of strata of pure complex dimension $k$ for $k \leq m$. Set $U^m_k = X^m - X_{m-k}$. Notice that in general, $U^m_k$ is only locally closed in $X$ and $U^m_1 = U^m$. Set \begin{equation} U_1 = \bigsqcup_{m=1}^n U^m. \end{equation} We will see in Corollary \ref{irred comp cor} that $X^m$ is actually the union of all $m$-dimensional irreducible components of $X$.
\begin{rmk} If $X$ is of pure dimension $n$, then $U^n = X - X_{n-1}$ and $U^m = \varnothing$ for $m < n$. Moreover, $U^n$ is dense in $X$ and $X^n = \overline{U^n} = X$. \end{rmk}
\begin{prop} \label{open dense strata} The open set $U_1 = \bigsqcup_{m=1}^n U^m$ is dense, i.e. $\bigcup_{m=1}^n X^m = X$.
\begin{proof} Suppose $\bigcup_{m=1}^n X^m$ is strictly contained in $X$. Then the set complement $(\bigcup_{m=1}^n X^m)^c = \bigsqcup_{i \in I} S_i$ is a union of strata. Since the closure of any stratum is a union of lower dimensional strata, there are two cases. Fix any stratum $S_1 \subseteq (\bigcup_{m=1}^n X^m)^c$. \begin{case} We have $S_1\subseteq \overline{S_k}$ for some $k \in I$ with $k \neq 1$. In this case, since $S_1^c = \bigcup_{m=1}^n X^m \sqcup \bigsqcup_{i \neq 1} S_i$, we have $$ \overline{S_1^c} = \bigcup_{m=1}^n X^m \cup \bigcup_{i \neq 1} \overline{S_i}.$$ Since $S_1 \subseteq \overline{S_k}$, we have that $S_1 \subseteq \overline{S_1^c} = \text{interior}(S_1)^c$. This implies that $S_1$ has empty interior which is a contradiction since $S_1$ is a nonempty stratum. \end{case} \begin{case} The stratum $S_1$ does not meet $\overline{S_k}$ for any $k \neq 1$. This implies that $$ \overline{S_1^c} = \bigcup_{m=1}^n X^m \cup \bigcup_{i \neq 1} \overline{S_i} = \bigcup_{m=1}^n X^m \sqcup \bigsqcup_{i \neq 1} S_i = S_1^c.$$ It follows that $S_1^c = \overline{S_1^c} = \text{interior}(S_1)^c$, i.e. $S_1$ is open in $X$. This contradicts the definition of $X^m$. \end{case} In either case, we have a contradiction. So we conclude that $(\bigcup_{m=1}^n X^m)^c = \varnothing$, i.e. $U_1$ is dense. \end{proof} \end{prop}
\subsection{The Open Filtration Induced by a Stratification}
In this section, we describe a filtration of $X$ by open subsets, beginning with $U_1$, induced by a stratification $\mathfrak{X}$. The following example shows that applying Deligne's construction to certain filtrations by open sets will not produce a direct sum of intersection complexes.
\begin{ex} \label{open filt ex} Let $E \subseteq \mathbb{P}^2$ be a smooth elliptic curve and $C_E \subseteq \mathbb{C}^3$ be the affine cone over $E$. Let $L$ be a line in $\mathbb{C}^3$ passing through the origin that is not contained in $C_E$ and $C' = C_E \cap \{z_3 = 1\} \subset \mathbb{C}^3$. Let $X = C_E \cup L$. Consider the stratification $$\mathfrak{X} : C_E \cup L \supset L \cup C' \supset \{0\} \supset \varnothing.$$ With the notation above, $U^2 = C_E - C' - \{0\}$ and $U^1 = L - \{0\}$. Taking closures, we have $X^2 = C_E$ and $X^1 = L$. We have sets \begin{multicols}{2}
\begin{enumerate}[itemsep=5pt]
\item[] $U^2_1 = X^2 - X_1 = C_E - C' - \{0\}$,
\item[] $U^2_2 = X^2 - X_0 = C_E - \{0\}$,
\item[] $U^2_3 = X^2-X_{-1} = C_E$,
\item[] $U^1_1 = X^1-X_0 = L-\{0\}$,
\item[] $U^1_2 = X^1-X_{-1} = L$.
\end{enumerate}
\end{multicols} One possible way to filter $X$ by open subsets is the following. Let \begin{equation*} \begin{aligned} U_1 &= U^2_1 \cup U^1_1 =\left( C_E - C'- \{0\} \right) \cup \left(L-\{0\} \right),\\ U_2 &= U^2_2 \cup U^1_2 = \left(C_E - \{0\} \right) \cup L = X,\\ U_3 &= U^2_3 \cup U^1_2 = C_E \cup L = X. \end{aligned} \end{equation*} This gives a filtration by open subsets $$U_1 \xrightarrow{j_1} U_2 \xrightarrow{j_2=id} U_3.$$ We apply Deligne's construction to this filtration. Recall that $p$ denotes the middle perversity. On the open dense set $U_1$, let $I_1 = \mathbb{Q}_{U^2}[2] \oplus \mathbb{Q}_{U^1}[1]$. On $U_2 = X$, if we truncate at $p(2)-2 = -2$, the complex appearing in Deligne's construction is $$\tau_{\leq p(2) - 2}Rj_{1*}I_1 = \tau_{\leq -2}Rj_{1*}\left ( \mathbb{Q}_{U^2}[2] \oplus \mathbb{Q}_{U^1}[1]\right) = \tau_{\leq -2}Rj_{1*}\mathbb{Q}_{U^2}[2].$$ Here we see that the truncation operation kills off the contribution from the open 1-dimensional stratum. We add this contribution back in using Deligne's construction for $U^1$, i.e. on $U_2$, set $$I_2 = \tau_{\leq p(2)-2}Rj_{1*}I_1 \oplus \tau_{\leq p(2) - 1}Rj_{1*}\mathbb{Q}_{U^1}[1] = \tau_{\leq -2}Rj_{1*}\mathbb{Q}_{U^2}[2] \oplus \tau_{\leq - 1}Rj_{1*}\mathbb{Q}_{U^1}[1].$$ Notice that $I_2$ is not a direct sum of intersection complexes since the first summand is truncated at $-2$ instead of $-1$. If we instead truncate at $p(4)-2 =-1$, we have on $U_2 = X$ the complex $$I_2' = \tau_{\leq p(4) - 2}Rj_{1*}I_1 = \tau_{\leq -1}Rj_{1*}\mathbb{Q}_{U^2}[2] \oplus \tau_{\leq - 1}Rj_{1*}\mathbb{Q}_{U^1}[1].$$
However, the first summand of $I_2'$ is still not the intersection complex of $C_E$. The support condition fails for $\tau_{\leq -1}Rj_{1*}\mathbb{Q}_{U^2}[2]$ since $\{x \in C_E \ | \ \mathcal{H}^{-1}(\tau_{\leq -1}Rj_{1*}\mathbb{Q}_{U^2}[2])_x \neq 0\} \supseteq C'$ is not zero dimensional. \end{ex}
The problem with the filtration in the example is that strata of differing dimensions were added at the same stage in the filtration. Our filtration of the complex algebraic variety $X$ by open sets described below avoids this issue and is motivated by the following observation. If $X$ is pure of dimension $n$ with stratification $\mathfrak{X}$, then the induced filtration by open subsets is given by $$\varnothing \subseteq U_1 \subseteq \cdots \subseteq U_{n+1} = X,$$ where $U_k = X - X_{n-k}$. It follows that $U_{k+1}-U_k = (X-X_{n-k-1}) - (X-X_{n-k}) = X_{n-k}-X_{n-k-1}$ consists of all codimension $k$ strata of $X$. None of these strata can be open since any open subset of the pure dimensional variety $X$ has dimension $n$. So $U_{k+1} - U_k$ consists of all non-open codimension $k$ strata of $X$. We would like our filtration of $X$ by open sets to satisfy the same property.
Let \begin{gather} W_k = \bigcup_{m=n-k+2}^n U^m_{m-n+k},\\ U_k = W_k \sqcup \bigsqcup_{m=1}^{n-k+1}U^m_1. \end{gather}
A priori, the sets $U_k$ are not necessarily open in $X$ since the sets $U^m_{m-n+k} = X^m - X_{n-k}$ are only locally closed in $X$. However, we have the following lemma.
\begin{lem} The set $U_k$ is open in $U_{k+1}$ for each $ 1 \leq k \leq n$. \begin{proof} We show that if $p \in U_k$, there is a neighborhood $N$ of $p$ in $U_{k+1}$ that is contained in $U_k$. If $p \in \bigsqcup_{m=1}^{n-k} U^m$, then we are done. If $p \in \bigcup_{m=n-k+1}^n U^m_{m-n+k}$, let $N = U_{k+1} \cap X_{n-k}^c$. Since $\bigsqcup_{m=1}^{n-k} U^m \subseteq X_{n-k}$, we see that $N = W_{k+1} \cap X^c_{n-k}$. Notice that $p \in N$ and $N$ is open in $U_{k+1}$. We claim that $N \subseteq U_k$. Let $q \in N$. Since $q \in W_{k+1}$, $q \in U^m_{m-n+k+1} = X^m -X_{n-k-1}$ for some $m \geq n-k+1$. Since $q \in X_{n-k}^c$, we see that $q \in U^m_{m-n+k} \subseteq U_k$. \end{proof} \end{lem}
Since $U_{n+1} = X$, the previous lemma implies that $U_n$ is open in $X$. It follows from descending induction on $k$ that $U_k$ is open in $X$ for all $1 \leq k \leq n$. This gives a finite filtration $\mathfrak{U}$ of $X$ by open subsets \begin{equation}\label{open filt eq} \mathfrak{U}: \varnothing \subseteq U_1 \subseteq \cdots \subseteq U_n \subseteq U_{n+1} = X. \end{equation} We have inclusions $U_k \xrightarrow{j_k} U_{k+1} \xleftarrow{i_k} \left( U_{k+1} - U_k\right)$. We will refer to the filtration $\mathfrak{U}$ as the \textit{open filtration induced by} $\mathfrak{X}$.
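To see the filtration in a concrete case (the following computation serves only as a sanity check and is not used later), return to the variety $X = C_E \cup L$ of Example \ref{open filt ex}, where $n = 2$. Unwinding the definitions gives $W_1 = \varnothing$, $W_2 = U^2_2 = C_E - \{0\}$, and $W_3 = U^2_3 \cup U^1_2 = X$, so the open filtration induced by $\mathfrak{X}$ is \begin{equation*} \begin{aligned} U_1 &= \left(C_E - C' - \{0\}\right) \sqcup \left(L - \{0\}\right),\\ U_2 &= \left(C_E - \{0\}\right) \sqcup \left(L - \{0\}\right) = X - \{0\},\\ U_3 &= X. \end{aligned} \end{equation*} In contrast with the filtration considered in Example \ref{open filt ex}, strata are now added one dimension at a time: $U_2 - U_1 = C'$ is the non-open $1$-dimensional stratum and $U_3 - U_2 = \{0\}$ is the $0$-dimensional stratum.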
We conclude this section with several facts about the structure of the open filtration $\mathfrak{U}$.
\begin{lem}\label{Wk+1-Wk} We have $U_{k+1} - U_k = \left( W_{k+1} - W_k \right) - U^{n-k+1}_1$. \begin{proof} Notice that \begin{equation*} \begin{aligned} U_{k+1} - U_k &= U_{k+1} - \left(W_k \sqcup \bigsqcup_{m=1}^{n-k+1} U^m_1\right)\\ &= \left( U_{k+1} - W_k \right) - \bigsqcup_{m=1}^{n-k+1} U^m_1\\ &= \left (\left(W_{k+1}-W_k\right) - \bigsqcup_{m=1}^{n-k+1} U^m_1\right) \cup \left( \left( \bigsqcup_{m=1}^{n-k} U^m_1 - W_k \right) - \bigsqcup_{m=1}^{n-k+1} U^m_1 \right)\\ &= \left (W_{k+1}-W_k \right) - \bigsqcup_{m=1}^{n-k+1} U^m_1\\ &= \left( W_{k+1}-W_k \right) - U^{n-k+1}_1, \end{aligned} \end{equation*} where the last equality holds since $\bigsqcup_{m=1}^{n-k} U^m_1 \subseteq W_{k+1}^c$. \end{proof} \end{lem}
\begin{lem} \label{non open strat lem} The set $U_{k+1} - U_k$ consists of all non-open $(n-k)$-dimensional strata, i.e. $X_{n-k} - X_{n-k-1} = \left(U_{k+1} - U_k\right) \sqcup U^{n-k}_1$. \begin{proof} Suppose $x \in X_{n-k}-X_{n-k-1}$. Let $S \subseteq X_{n-k}-X_{n-k-1}$ be the $(n-k)$-dimensional stratum containing $x$. Since $X^m$ is a union of strata and $X = \cup_{m=1}^nX^m$, $S \subseteq X^m$ for some $m \geq n-k$. Since $S \subseteq X_{n-k-1}^c$, $S \subseteq X^m - X_{n-k-1} \subseteq U^m_{m-n+k+1}\subseteq U_{k+1}$. If $S$ is open, then $S \subseteq U^{n-k}_1$. If $S$ is not open, then $S \subseteq W_{k+1}$. In this case, suppose that $S \subseteq U_k$. Since $S$ is not open, $S \subseteq W_k$. In particular $S \subseteq U^m_{m-n+k} = X^m - X_{n-k}$ for some $m \geq n-k+2$. This implies that $S \subseteq X_{n-k}^c$ which is a contradiction. So $x \in S \subseteq U_{k+1}-U_k$. It follows that $X_{n-k} - X_{n-k-1} \subseteq \left(U_{k+1} - U_k\right) \sqcup U^{n-k}_1$.
Conversely, if $x \in U_{k+1}-U_k$, then $x \notin U_k$ implies that $x \notin U^m_{m-n+k}= X^m - X_{n-k}$ for all $m$. It follows that $x \notin X_{n-k}^c$, i.e. $x \in X_{n-k}$. Since $x \in U_{k+1}$, $x \in U^m_{m-n+k+1} = X^m - X_{n-k-1}$ for some $m$. In particular, $x \in X_{n-k-1}^c$. It follows that $x \in X_{n-k}-X_{n-k-1}$. If $x \in U^{n-k}$, then $x \in X_{n-k}-X_{n-k-1}$ by definition. It follows that $\left(U_{k+1} - U_k\right) \sqcup U^{n-k}_1 \subseteq X_{n-k} - X_{n-k-1}$. \end{proof} \end{lem}
\begin{lem} \label{U^m closed in U_k} Fix $1 \leq k \leq n$. Then $U^m_{m-n+k}$ is closed in $U_k$ for $n-k+1 \leq m \leq n$ and $U^m_1$ is closed in $U_k$ for $1 \leq m \leq n-k$. \begin{proof} Suppose $n-k+1 \leq m \leq n$. Since $X^m$ is closed in $X$, it suffices to show that $U^m_{m-n+k} = X^m \cap U_k$. The inclusion $U^m_{m-n+k} \subseteq X^m \cap U_k$ follows from the definition of $U_k$. Now let $x \in X^m \cap U_k$. Since $x \in X^m$ and $m \geq n-k+1$, $x \in W_k$ or $x \in U^{n-k+1}_1$. It follows that $x \in U^l_{l-n+k} = X^l - X_{n-k}$ for some $n-k+1 \leq l \leq n$. In particular, $x \notin X_{n-k}$. It follows that $x \in X^m - X_{n-k} = U^m_{m-n+k}$. We conclude that $U^m_{m-n+k} = X^m \cap U_k$.
A similar argument shows that $U^m_1 = X^m \cap U_k$ for $1 \leq m \leq n-k$. \end{proof} \end{lem}
\subsection{Construction of $IC(\mathfrak{X},\mathcal{L})$}
Let $\mathfrak{X}$ be a stratification of $X$ and $\mathfrak{U}$ the open filtration induced by $\mathfrak{X}$. Let $\mathcal{L}$ be a local system on the open dense subset $U_1 \subseteq X$.
\begin{rmk}\label{local sys open dense rmk}
We can express $\mathcal{L}$ as $\mathcal{L} = \bigoplus_{m=1}^n a^m_{1*}\mathcal{L}^m$ where $\mathcal{L}^m \coloneqq \mathcal{L}|_{U^m}$ is a local system on $U^m$ and $a^m_1:U^m \to U_1$ is inclusion of a closed subset. We will often abuse notation and identify $a^m_{1*}\mathcal{L}^m$ with $\mathcal{L}^m$. Since each $\mathcal{L}^m$ is a local system on $U^m$, we can associate with $\mathcal{L}$ the complex $\bigoplus_{m=1}^n \mathcal{L}^m[m]$. \end{rmk}
Define a complex $IC(\mathfrak{X}, \mathcal{L})$ on $X$ recursively as follows: set
\begin{equation}\label{gen Deligne cons} \begin{gathered} I_1 = \bigoplus_{m=1}^n \mathcal{L}^m[m] \text{ on } U_1,\\ I_{k+1} = \tau_{\leq k-1-n}Rj_{k*}I_k \oplus \bigoplus_{m=1}^{n-k} \mathcal{L}^m[m] \text{ on } U_{k+1}, \end{gathered} \end{equation} and let $IC(\mathfrak{X}, \mathcal{L}) = I_{n+1}$. Note that the truncation is done with respect to the middle perversity.
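The role of the second summand in the recursion is explained by a short degree count: the truncation discards precisely the low-dimensional local systems, which therefore have to be re-added by hand. Concretely:

```latex
% \mathcal{L}^m[m] is concentrated in degree -m. The truncation
% \tau_{\leq k-1-n} kills it exactly when -m > k-1-n, i.e. when m \leq n-k:
\tau_{\leq k-1-n}\,\mathcal{L}^m[m] = 0
   \qquad \text{for } 1 \leq m \leq n-k,
% while the summand in complex dimension n-k+1 sits in degree
% -(n-k+1) = k-1-n and survives the truncation:
\tau_{\leq k-1-n}\,\mathcal{L}^{n-k+1}[n-k+1]
   \simeq \mathcal{L}^{n-k+1}[n-k+1].
```

The same count reappears in Equation \ref{jkFk=id}, where it shows that restricting each stage of the construction back along $j_k$ recovers the previous stage.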
We refer to $IC(\mathfrak{X},\mathcal{L})$ as the object obtained by Deligne's construction with respect to the stratification $\mathfrak{X}$ and the local system $\mathcal{L}$. Note that this construction only uses the open filtration $\mathfrak{U}$ induced by $\mathfrak{X}$. We emphasize that each local system $\mathcal{L}^m$ is shifted only by the complex dimension of $U^m$. This shift is done so that our complex $IC(\mathfrak{X},\mathcal{L})$ agrees with the indexing convention discussed at the end of the introduction.
We show below that the complex $IC(\mathfrak{X}, \mathcal{L})$ can be interpreted as a direct sum of Deligne-Goresky-MacPherson intersection complexes. Let $X$ be a complex algebraic variety of complex dimension $n$ with stratification $\mathfrak{X}$. Recall that the stratification $\mathfrak{X}$ induces an open dense subset $U_1 = \bigsqcup_{m=1}^n U^m$. We saw that $X = \bigcup_{m=1}^n X^m$ where $X^m = \overline{U^m}$. Corollary \ref{irred comp cor} will imply that $X^m$ can also be interpreted as the union of all $m$-dimensional irreducible components of $X$. Let $\mathcal{L} = \bigoplus_{m=1}^n\mathcal{L}^m$ be a local system on the open dense union of strata $U_1$. Let $IC(\mathfrak{X}^m,\mathcal{L}^m)$ be the object obtained by Deligne's construction with respect to the induced stratification $\mathfrak{X}^m$ of $X^m$ and the local system $\mathcal{L}^m$ on $U^m$ for the pure dimensional variety $X^m$. Notice that $IC(\mathfrak{X}^m,\mathcal{L}^m)$ is precisely the Deligne-Goresky-MacPherson intersection complex of $X^m$. Let $a^m: X^m \to X$ be inclusion.
\begin{prop} \label{IC is direct sum} With the notation above, we have that $$IC(\mathfrak{X},\mathcal{L}) \simeq \bigoplus_{m=1}^n a^m_* IC(\mathfrak{X}^m, \mathcal{L}^m).$$ \begin{proof} Fix $1 \leq k \leq n$ and $n-k+1 \leq m \leq n$. Let $\bullet \coloneqq m-n+k$. Consider the cartesian diagram \begin{center} \begin{tikzcd} U_{\bullet}^m\arrow[r, "a^m_\bullet"] \arrow[d, "j^m_\bullet"] \arrow[dr, phantom, "\square"] & U_{k} \arrow[d,"j_k"] \\ U_{\bullet+1}^m \arrow[r,"a^m_{\bullet +1}"] & U_{k+1} \end{tikzcd} \end{center}
where all maps are inclusions. Lemma \ref{U^m closed in U_k} implies that the maps $a^m_\bullet$ and $a^m_{\bullet + 1}$ are inclusions of closed subsets. It follows that \begin{equation*} Rj_{k*}a^m_{\bullet*} \simeq R(j_k \circ a^m_\bullet)_* = R(a^m_{\bullet +1} \circ j^m_{\bullet})_* \simeq a^m_{\bullet+1*} Rj^m_{\bullet*}. \end{equation*} Now, notice that the complex $IC(\mathfrak{X},\mathcal{L})$ is a direct sum of complexes of the form \begin{equation*} \begin{aligned} \tau_{\leq -1}Rj_{n*} \cdots \tau_{\leq -m} Rj_{n-m+1*} a^m_{1*} \mathcal{L}^m[m]. \end{aligned} \end{equation*} Using the above commutation relation and the fact that $a^m_{\bullet*}$ is exact (and hence commutes with truncation), we can iteratively move $a^m_{1*}$ to the left. We conclude that \begin{equation*} \tau_{\leq -1}Rj_{n*} \cdots \tau_{\leq -m} Rj_{n-m+1*} a^m_{1*} \mathcal{L}^m[m] \simeq a^m_* \tau_{\leq -1} Rj^m_{m*} \cdots \tau_{\leq -m} Rj^m_{1*}\mathcal{L}^m[m]. \end{equation*} It follows that $IC(\mathfrak{X},\mathcal{L}) \simeq \bigoplus_{m=1}^n a^m_* IC(\mathfrak{X}^m, \mathcal{L}^m)$. \end{proof} \end{prop}
\begin{rmk}\label{indexing conv rmk}
With our choice of indexing convention in the construction of $IC(\mathfrak{X},\mathcal{L})$, each summand appearing in $Rj_{k*}I_k$ is truncated at the same place. If one uses the Borel convention and does not shift the initial local systems when constructing the complex $IC(\mathfrak{X}, \mathcal{L})$, the summands appearing in $Rj_{k*}I_k$ will need to be truncated in different places to ensure that $IC(\mathfrak{X}, \mathcal{L})$ is a direct sum of the middle perversity Deligne-Goresky-MacPherson intersection complexes. It is therefore notationally simpler to use our indexing convention than the Borel convention when describing the construction of $IC(\mathfrak{X},\mathcal{L})$. Additionally, notice that with our indexing convention, the cohomology sheaves of $IC(\mathfrak{X},\mathcal{L})|_{W_{k+1}}$ vanish above degree $k-1-n$ by definition. A crucial point is that this vanishing condition can be stated without using the fact that $IC(\mathfrak{X},\mathcal{L})$ is a direct sum of complexes, and we use it in our axiomatic characterization of $IC(\mathfrak{X},\mathcal{L})$ (see the vanishing axiom [AX1$'$](b) in Definition \ref{AX$1'$}). One can go from the Borel indexing convention to our indexing convention (and vice versa) by shifting each summand by the appropriate complex dimension. \end{rmk}
\begin{rmk}\label{arb perv rmk} If one wishes to consider other perversities, one can try to mimic the above construction of $IC(\mathfrak{X},\mathcal{L})$ (in this context, the Borel convention seems more natural). However, there is not a clear analogue of the middle perversity vanishing conditions mentioned in the previous remark. Due to this, characterizing the intersection complex for arbitrary perversities seems like a more subtle question. \end{rmk}
We conclude this section by illustrating the construction in the setting of Example \ref{open filt ex}. \begin{ex} With the same notation as Example \ref{open filt ex}, the open filtration $\mathfrak{U}$ induced by the stratification $\mathfrak{X}$ is given by $$\mathfrak{U}: U_1 \xrightarrow{j_1} U_2 \xrightarrow{j_2} U_3 = X,$$ where \begin{equation*} \begin{aligned} U_1 &= U^2_1 \cup U^1_1 =\left( C_E - C'- \{0\} \right) \cup \left(L-\{0\} \right),\\ U_2 &= U^2_2 \cup U^1_1 = \left(C_E - \{0\} \right) \cup \left(L -\{0\}\right),\\ U_3 &= U^2_3 \cup U^1_2 = C_E \cup L = X. \end{aligned} \end{equation*} Deligne's construction proceeds as follows. On $U_1$, set $I_1 = \mathbb{Q}_{U^2}[2] \oplus \mathbb{Q}_{U^1}[1]$. On $U_2$, set $$I_2 = \tau_{\leq -2}Rj_{1*}I_1 \oplus \mathbb{Q}_{U^1}[1] = \tau_{\leq-2}Rj_{1*} \mathbb{Q}_{U^2}[2] \oplus \mathbb{Q}_{U^1}[1].$$ On $U_3$, set \begin{gather*} I_3 = \tau_{\leq -1} Rj_{2*}I_2 = \tau_{\leq -1} Rj_{2*} \left(\tau_{\leq-2}Rj_{1*} \mathbb{Q}_{U^2}[2] \oplus \mathbb{Q}_{U^1}[1]\right) \\ = \tau_{\leq -1} Rj_{2*}\tau_{\leq-2}Rj_{1*} \mathbb{Q}_{U^2}[2] \oplus \tau_{\leq -1}Rj_{2*} \mathbb{Q}_{U^1}[1]. \end{gather*} Here we see that both summands of $IC(\mathfrak{X}) = I_3$ are intersection complexes. \end{ex}
\section{An Axiomatic Characterization of $IC(\mathfrak{X},\mathcal{L})$}\label{axiomatic char} When the complex algebraic variety is pure dimensional, Goresky and MacPherson give a stratification independent set of axioms characterizing the intersection complex in \cite{GM2}. We recall the axioms with respect to the middle perversity for pure dimensional varieties below for convenience.
\begin{defn} \label{AX2} Let $X$ be a complex algebraic variety of pure complex dimension $n$. A topologically constructible complex $S$ satisfies axioms [AX2] if \begin{enumerate}[(a)]
\item (Normalization) $S|_{X-\Sigma} = \mathcal{L}[n]$ where $\Sigma \subset X$ is a closed subset of complex dimension $n-1$ and $\mathcal{L}$ is a local system on $X-\Sigma$, \item (Lower Bound) $\mathcal{H}^a(S) = 0$ for $a < -n$,
\item (Support) dim$_\mathbb{C}\{x \in X \ | \ \mathcal{H}^a(i_x^*S)\neq 0 \} < -a$ for $a > -n$,
\item (Cosupport) dim$_\mathbb{C}\{x \in X \ | \ \mathcal{H}^a(i_x^!S)\neq 0 \} < a$ for $a < n$, \end{enumerate} where $i_x:\{x\} \to X$ is inclusion. These axioms differ slightly from the ones proposed by Goresky and MacPherson in \cite{GM2} because we normalize using complex dimension rather than real dimension. \end{defn}
Let $X$ be a possibly reducible complex algebraic variety of complex dimension $n$. Let $X^m$ be the union of all $m$-dimensional irreducible components of $X$. Then each $X^m$ is a variety of pure dimension $m$. Let $IC(X^m)$ be the corresponding intersection complexes (with $\mathbb{Q}$ coefficients). Recall that the intersection complex (with $\mathbb{Q}$ coefficients) $IC(X)$ of $X$ is defined to be $IC(X) = \bigoplus_{m=1}^n IC(X^m)$. Since each summand satisfies the support and cosupport axioms, one might guess that the direct sum satisfies the support and cosupport axioms. The next example shows that this is not the case. One might also guess that the complex $IC(X)|_{X^m}$ satisfies axioms [AX2] since each summand satisfies axioms [AX2]. If this were true, there would be a natural map $IC(X) \to \bigoplus_{m=1}^n IC(X^m)$ via the adjunction maps. The next example shows that this is also not the case.
\begin{ex} \label{ax 2 fails}
Inside $\mathbb{C}^3$, let $P = \{(z_1,z_2,0) | z_i \in \mathbb{C}\}$ and $L = \{(0,0,z_3) | z_3 \in \mathbb{C}\}$. Let $X = P \cup L$ be the reducible variety with irreducible components $P$ and $L$. The intersection complex of $X$ is given by $IC = IC(P) \oplus IC(L) = \mathbb{Q}_P[2] \oplus \mathbb{Q}_L[1]$. The support and cosupport axioms [AX2](c)(d) fail for $IC$ since \begin{equation*}
\text{dim}_\mathbb{C}\{x \in X \ | \ \mathcal{H}^{-1}(i_x^*IC) \neq 0\} = \text{dim}_\mathbb{C} L = 1 \not< 1, \end{equation*} and \begin{equation*}
\text{dim}_\mathbb{C}\{x \in X \ | \ \mathcal{H}^{1}(i_x^!IC) \neq 0\} = \text{dim}_\mathbb{C} L = 1 \not< 1, \end{equation*}
where $i_x:\{x\}\to X$ is inclusion. If we instead consider $IC|_P = \mathbb{Q}_P[2] \oplus \tilde{i}_{0*}\mathbb{Q}[1]$ where $\tilde{i}_0:\{0\} \to P$ is the inclusion, the support condition axiom [AX2](c) is satisfied. However, notice that \begin{equation*}
\tilde{i}_0^!(IC|_P) = \tilde{i}_0^! \mathbb{Q}_P[2] \oplus \tilde{i}_0^!\tilde{i}_{0*}\mathbb{Q}[1] = \mathbb{Q}[-2] \oplus \mathbb{Q}[1]. \end{equation*}
This implies that the cosupport condition [AX2](d) fails for $IC|_P$ since $\{x \in P \ | \ \mathcal{H}^{-1}(\tilde{i}_x^!IC|_P) \neq 0\} = \{0\} \neq \varnothing$. \end{ex}
In the previous example, we see that the cosupport axiom fails because we first restrict the complex $IC$ to the irreducible component $P$. If we do not first restrict, notice that $$i^!_0(IC) = i^!_0\mathbb{Q}_P[2] \oplus i^!_0\mathbb{Q}_L[1] = \mathbb{Q}[-2] \oplus \mathbb{Q}[-1].$$ This implies that
$$\text{dim}_\mathbb{C}\{x \in P \ | \ \mathcal{H}^1(i_x^!IC)\neq 0\} = \text{dim}_\mathbb{C}\{0\} = 0.$$
We conclude that for $a < 2$, $\text{dim}_\mathbb{C}\{x \in P \ | \ \mathcal{H}^a(i_x^!IC)\neq 0\} < a$. The significance of this observation is that although neither $IC$ nor $IC|_P$ satisfies the cosupport condition, $IC$ satisfies a \textit{pure dimensional} analog of the cosupport condition. We will show in the following sections that a pure dimensional analog of the support and cosupport axioms will help us characterize the complex $IC$.
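To make the claim concrete, here is the full check for Example \ref{ax 2 fails}; it is a routine computation with the stalks and costalks already determined above, recorded for convenience:

```latex
% Stalks and costalks of IC = \mathbb{Q}_P[2] \oplus \mathbb{Q}_L[1]:
%   x \in P - \{0\}:  i_x^*IC = \mathbb{Q}[2],                       i_x^!IC = \mathbb{Q}[-2]
%   x \in L - \{0\}:  i_x^*IC = \mathbb{Q}[1],                       i_x^!IC = \mathbb{Q}[-1]
%   x = 0:            i_0^*IC = \mathbb{Q}[2] \oplus \mathbb{Q}[1],  i_0^!IC = \mathbb{Q}[-2] \oplus \mathbb{Q}[-1]
% Over the 2-dimensional component P, the only degrees to check are a = -1
% (support, a > -2) and a = 1 (cosupport, a < 2):
\dim_\mathbb{C}\{x \in P \mid \mathcal{H}^{-1}(i_x^*IC) \neq 0\}
   = \dim_\mathbb{C}\{0\} = 0 < 1,
\qquad
\dim_\mathbb{C}\{x \in P \mid \mathcal{H}^{1}(i_x^!IC) \neq 0\}
   = \dim_\mathbb{C}\{0\} = 0 < 1.
% Over the 1-dimensional component L, the conditions are vacuous: the stalks
% along L are concentrated in degrees \leq -1 and the costalks in degrees \geq 1.
```

So $IC$ satisfies the pure dimensional support and cosupport conditions on each component, even though the global conditions fail.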
In the following sections, let $X$ be a complex algebraic variety of complex dimension $n$ with stratification $\mathfrak{X}$. Consider the open filtration $$\mathfrak{U}: U_1 \subseteq \cdots \subseteq U_n \subseteq U_{n+1} = X,$$
induced by the stratification $\mathfrak{X}$ as in Equation \ref{open filt eq}. Recall that $U_1 = \bigsqcup_{m=1}^n U^m$ where $U^m$ is the union of all open $m$-dimensional strata in $X$. Let $\mathcal{L}$ be a local system on $U_1$. As in Remark \ref{local sys open dense rmk}, we write $\mathcal{L} = \bigoplus_{m=1}^n\mathcal{L}^m$ where $\mathcal{L}^m$ is a local system on $U^m$ extended to $U_1$ by zero.
\subsection{Axioms [AX1$'$]}
\begin{defn} \label{AX$1'$}
Let $S$ be a complex on $X$ and set $S_{k} \coloneqq S|_{U_k}$. We have inclusions $U_k \xrightarrow{j_k} U_{k+1} \xleftarrow{i_k} \left( U_{k+1} - U_k \right)$. Recall that $W_k = \bigcup_{m=n-k+2}^n U^m_{m-n+k}$ where $U^m_{m-n+k}=X^m - X_{n-k}$. We say that $S$ satisfies axioms [AX1$'$] (with respect to the stratification $\mathfrak{X}$) if \begin{enumerate}[(a)]
\item (Normalization) $S|_{U_1} \simeq \bigoplus_{m=1}^n \mathcal{L}^m[m]$ in $D^b_c(U_1)$,
\item (Vanishing) for all $k \geq 1$, $\mathcal{H}^a(S|_{W_{k+1}}) = 0$ for $a > k-1-n$, \item (Attaching) the induced morphism on cohomology sheaves $$\mathcal{H}^a(i_k^*S_{k+1}) \to \mathcal{H}^a(i_k^*Rj_{k*}j_k^*S_{k+1})$$ is an isomorphism for all $k \geq 1$ and $a \leq k-1-n$. \end{enumerate} \end{defn}
\begin{rmk}\label{ax1' vs ax1}
The stratification dependent axioms [AX1$'$] are analogous to the stratification dependent axioms [AX1] for pseudomanifolds proposed by Goresky and MacPherson in \cite{GM2}. When $X$ is a pseudomanifold, axioms [AX1$'$] reduce to axioms [AX1]. One difference between the axioms is that we do not include a lower bound axiom. This is because the lower bound axiom for pseudomanifolds is actually implied by the other axioms (in particular [AX1](a) and (d)) and is not needed to characterize the intersection complex. We will also not need an analog of the lower bound axiom to characterize the complex $IC(\mathfrak{X},\mathcal{L})$. The normalization axiom [AX1$'$](a) differs from [AX1](a) in that our open dense set $U_1$ contains strata of differing dimensions. We require that each local system is shifted based on the dimension of the strata that it is supported on. The vanishing axiom [AX1$'$](b) differs from [AX1](c) in that we restrict our complex $S$ to the smaller open set $W_{k+1}$ instead of $U_{k+1}$. The reason for this is that the open set $U_{k+1}$ contains the open strata $U^m$ for $n-k \leq m \leq n$. The normalization axiom implies that $S|_{U^m} \simeq \mathcal{L}^m[m]$. We must therefore ignore these strata if we want the vanishing axiom to hold. The attaching axiom [AX1$'$](c) is completely analogous to [AX1](d). They both give the same vanishing condition for the cohomology sheaves when restricting the complex to the non-open $(n-k)$-dimensional strata.
We also do not require that the complex $S$ is $\mathfrak{X}$-cc. We will eventually see that if $S$ satisfies axioms [AX1$'$], then $S$ is $\mathfrak{X}$-cc. This is analogous to Borel's discussion of constructibility in the pseudomanifold case; see \cite[V, \S 3]{B}. \end{rmk}
\subsection{Alternative Formulations of [AX1$'$](c)}
In this section, we give two useful alternative characterizations of [AX1$'$](c), namely [AX1$'$](c$'$) and [AX1$'$](c$''$). Recall the adjunction distinguished triangle $$i_{k!}i_k^!S_{k+1} \to S_{k+1} \to Rj_{k*}j_k^*S_{k+1} \xrightarrow{[1]}.$$
Restricting gives the distinguished triangle $$i_k^!S_{k+1} \to i_k^*S_{k+1} \to i_k^*Rj_{k*}j_k^*S_{k+1} \xrightarrow{[1]}.$$
The long exact sequence in cohomology, [AX1$'$](c), and the vanishing axiom [AX1$'$](b) imply that $\mathcal{H}^a(i_k^!S_{k+1}) = 0$ for $a \leq k-n$; [AX1$'$](b) enters only in the boundary degree $a = k-n$, where it gives $\mathcal{H}^{k-n}(i_k^*S_{k+1}) = 0$ because $U_{k+1}-U_k \subseteq W_{k+1}$ by Lemma \ref{non open strat lem}. So we see that [AX1$'$](c) is equivalent to
\begin{itemize} \item[(c$'$)] $\mathcal{H}^a(i_k^!S_{k+1}) = 0$ for $k \geq 1$ and $a \leq k - n$. \end{itemize}
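For completeness, here is the long exact sequence bookkeeping behind this equivalence, spelled out under the assumption that $S$ also satisfies the other axioms of [AX1$'$]:

```latex
% Writing i = i_k, j = j_k, S = S_{k+1}, the relevant segment is
\cdots \to \mathcal{H}^{a-1}(i^*S) \xrightarrow{f} \mathcal{H}^{a-1}(i^*Rj_*j^*S)
\to \mathcal{H}^{a}(i^!S) \xrightarrow{h} \mathcal{H}^{a}(i^*S)
\xrightarrow{f'} \mathcal{H}^{a}(i^*Rj_*j^*S) \to \cdots
% (c) => (c'): if f and f' are isomorphisms (degrees a-1, a <= k-1-n), then
% h is simultaneously injective and zero, so H^a(i^!S) = 0 for a <= k-1-n.
% In the boundary degree a = k-n, f is still an isomorphism and [AX1'](b)
% gives H^{k-n}(i^*S) = 0 (since U_{k+1}-U_k is contained in W_{k+1} by
% Lemma \ref{non open strat lem}), so H^{k-n}(i^!S) = 0 as well.
% (c') => (c): if H^a(i^!S) = 0 for a <= k-n, then exactness shows f' is
% injective for a <= k-n and surjective for a <= k-n-1, hence an
% isomorphism for a <= k-1-n.
```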
We now relate this to the vanishing of the costalks $\mathcal{H}^a(i_x^!S)$. Fix $k \geq 1$. Suppose $x \in U_{k+1} - U_k$. Factor the inclusion $i_x: \{x\} \to X$ into
\begin{center} \begin{tikzcd} \{x\} \arrow[r, "i_x"] \arrow[d, "\mu_x"] & X \\ U_{k+1} - U_k \arrow[r, "i_k"] & U_{k+1} \arrow[u, "\alpha"] \end{tikzcd} \end{center}
It follows that \begin{equation*} \begin{aligned} i_x^!S &= \mu_x^! \circ i_k^! \circ \alpha^!S\\ &= \mu_x^! \circ i_k^! S_{k+1}\\ &= \mu_x^* \circ i_k^!S_{k+1}[-2(n-k)], \end{aligned} \end{equation*} where the second equality holds because $\alpha$ is an open inclusion and the third equality follows from Proposition \ref{top man} since $U_{k+1}-U_k$ is a topological manifold of real dimension $2(n-k)$. It follows that $$\mathcal{H}^a(i^!_xS) = \mathcal{H}^{a-2(n-k)}(i_k^!S_{k+1})_x.$$ Hence we see that [AX1$'$](c$'$) is equivalent to
\begin{itemize} \item[(c$''$)] If $x \in U_{k+1} - U_k$, then $\mathcal{H}^a(i_x^!S) = 0$ for all $a \leq n-k$. \end{itemize}
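The equivalence of (c$'$) and (c$''$) is just the degree shift from the computation above:

```latex
% Combining H^a(i_x^!S) = H^{a-2(n-k)}(i_k^!S_{k+1})_x with (c'), the
% costalk vanishes precisely in the range
a - 2(n-k) \leq k-n \iff a \leq (k-n) + 2(n-k) = n-k.
```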
\subsection{[AX1$'$] Characterizes $IC(\mathfrak{X},\mathcal{L})$} The main goal of this section is to prove the following theorem. \begin{thm} \label{main thm 2} Let $X$ be a complex algebraic variety of complex dimension $n$ with stratification $\mathfrak{X}$. Let $\mathfrak{U}$ be the open filtration induced by $\mathfrak{X}$. Let $\mathcal{L} = \bigoplus_{m=1}^n \mathcal{L}^m$ be any local system on the open dense union of strata $U_1 \subseteq X$. The functor $F$ which takes the complex $\bigoplus_{m=1}^n \mathcal{L}^m[m]$ to the complex $IC(\mathfrak{X},\mathcal{L})$ defines an equivalence of categories between \begin{enumerate}[(a)] \item the full subcategory of $D^b_c(U_1)$ whose objects are all complexes of the form $\bigoplus_{m=1}^n \mathcal{L}^m[m]$ where $\mathcal{L}^m$ is a local system on $U^m_1$ extended to $U_1$ by zero, and \item the full subcategory of $D^b_c(X)$ whose objects are all complexes satisfying axioms [AX1$'$]. \end{enumerate}
The inverse functor $G$ assigns to any complex $S$ satisfying axioms [AX1$'$] the complex \newline $\bigoplus_{m=1}^n \mathcal{H}^{-m}(S|_{U_1})[m]$. \end{thm}
We have two immediate corollaries. \begin{cor} \label{main cor 1} If a complex $S$ satisfies [AX1$'$], then $S$ is canonically isomorphic to \\ $F(\mathcal{L})=IC(\mathfrak{X},\mathcal{L})$ in $\mathcal{D}^b_c(X)$. \end{cor}
\begin{cor} If a complex $S$ satisfies [AX1$'$], then $S$ is $\mathfrak{X}$-cc. \begin{proof} Since $S$ satisfies [AX1$'$], $S$ is isomorphic to $IC(\mathfrak{X},\mathcal{L})$. Since $IC(\mathfrak{X},\mathcal{L})$ is obtained from the constructible complex $\bigoplus_{m=1}^n \mathcal{L}^m[m]$ by iterated pushforwards along the open inclusions $j_k$ and truncations, it is constructible. Therefore, $S$ is $\mathfrak{X}$-cc. \end{proof} \end{cor}
To prove Theorem \ref{main thm 2}, we make the following reduction. For each $k \geq 1$, let $\mathcal{C}_k$ denote the full subcategory of $D^b_c(U_k)$ consisting of complexes which satisfy axiom [AX1$'$] on $U_k$. If $S \in \mathcal{C}_k$, then $S$ is a complex on $$U_k = W_k \sqcup \bigsqcup_{m=1}^{n-k+1}U^m_1.$$
Notice that $W_k$ is closed in $U_k$ and let $i^W_k:W_k \to U_k$ be the inclusion. The normalization and vanishing axioms imply that $S|_{U^m_1} \simeq \mathcal{H}^{-m}(S)|_{U^m_1}[m]$, where $\mathcal{H}^{-m}(S)|_{U^m_1}$ is a local system. We set $\mathcal{L}^m \coloneqq \mathcal{H}^{-m}(S)|_{U^m_1}$ for $1 \leq m \leq n-k+1$. Since $U_k$ is a disjoint union of $W_k$ and the $U^m_1$'s, $S$ can be expressed as $$S = S_{W_k} \oplus \bigoplus_{m=1}^{n-k+1} \mathcal{L}^m[m],$$ where $S_{W_k} = i^W_{k*}i^{W*}_k S$. We will denote the adjunction map $S \to S_{W_k}$ by $pr_1$ and the direct sum of adjunction maps $S \to \bigoplus_{m=1}^{n-k+1} \mathcal{L}^m[m]$ by $pr_2$.
For any $S \in \mathcal{C}_k$, define $F_k(S)$ by: \begin{equation} F_k(S) = \tau_{\leq k-1-n}Rj_{k*}S \oplus \bigoplus_{m=1}^{n-k} \mathcal{H}^{-m}(S)[m]. \end{equation} We claim that $F_k$ is a functor from $\mathcal{C}_k$ to $\mathcal{C}_{k+1}$. It suffices to show that for any $S \in \mathcal{C}_k$, $F_k(S) \in \mathcal{C}_{k+1}$. The normalization and vanishing axioms are all satisfied by definition of $F_k(S)$. Since \begin{equation}\label{jkFk=id} \begin{aligned} j_k^*F_k(S) &= j_k^*\left(\tau_{\leq k-1-n}Rj_{k*}S \oplus \bigoplus_{m=1}^{n-k} \mathcal{H}^{-m}(S)[m]\right)\\ &= \tau_{\leq k-1-n}S \oplus \bigoplus_{m=1}^{n-k}\mathcal{L}^m[m]\\ &= \tau_{\leq k-1-n} \left(S_{W_k} \oplus \bigoplus_{m=1}^{n-k+1} \mathcal{L}^m[m] \right) \oplus \bigoplus_{m=1}^{n-k}\mathcal{L}^m[m]\\ &= S_{W_k} \oplus \mathcal{L}^{n-k+1}[n-k+1] \oplus \bigoplus_{m=1}^{n-k}\mathcal{L}^m[m]\\ &= S, \end{aligned} \end{equation} the attaching axiom is satisfied because the attaching morphism is the composition $$\tau_{\leq k-1-n}i_k^*Rj_{k*}S \simeq i_k^*F_k(S) \to i_k^*Rj_{k*}j_k^*F_k(S) \simeq i_k^*Rj_{k*}S.$$ The restriction functor $j_k^*$ is clearly a functor from $\mathcal{C}_{k+1}$ to $\mathcal{C}_k$.
The key observation is that our original functor $F$ is the composition $F = F_n \circ F_{n-1} \circ \cdots \circ F_1$ and the inverse functor $G$ is the composition $G=j_1^* \circ \cdots \circ j_{n-1}^* \circ j_n^*$. Theorem \ref{main thm 2} is therefore a consequence of the following theorem.
\begin{thm} For $k \geq 1$, the functor $F_k$ defines an equivalence of categories between $\mathcal{C}_k$ and $\mathcal{C}_{k+1}$. The inverse functor $G_k$ is $j_k^*$. \begin{proof} Equation \ref{jkFk=id} shows that $j_k^*F_k = id_{\mathcal{C}_k}$ as a functor. We must also show that $F_kj_k^*$ is isomorphic to $id_{\mathcal{C}_{k+1}}$ as functors, i.e. for any $S \in \mathcal{C}_{k+1}$, we must construct an isomorphism $S \to F_kj_k^*S$ such that for any morphism $S \to T$ in the category $\mathcal{C}_{k+1}$, the diagram
\begin{center} \begin{tikzcd} S \arrow[r] \arrow[d] & T \arrow[d] \\ F_kj_k^*(S) \arrow[r] & F_kj_k^*(T) \end{tikzcd} \end{center} commutes. We construct the morphism $S \to F_kj_k^*S$ as follows. Since $S \in \mathcal{C}_{k+1}$, $S = S_{W_{k+1}} \oplus \bigoplus_{m=1}^{n-k} \mathcal{L}^m[m]$. It follows that
\begin{equation*} \begin{aligned} F_kj_k^*S &= \tau_{\leq k-1-n}Rj_{k*}j_k^*(S_{W_{k+1}} \oplus \bigoplus_{m=1}^{n-k} \mathcal{L}^m[m]) \oplus \bigoplus_{m=1}^{n-k} \mathcal{L}^m[m]\\ &= \tau_{\leq k-1-n}Rj_{k*}j_k^*S_{W_{k+1}} \oplus \bigoplus_{m=1}^{n-k} \mathcal{L}^m[m]. \end{aligned} \end{equation*}
The adjunction morphism gives us a morphism $S_{W_{k+1}} \to Rj_{k*}j_k^*S_{W_{k+1}}$. The vanishing axiom [AX1$'$](b) implies that $S_{W_{k+1}} \simeq \tau_{\leq k-1-n}S_{W_{k+1}}$. Therefore, we have a morphism $$S \xrightarrow{pr_1}S_{W_{k+1}} \simeq \tau_{\leq k-1-n}S_{W_{k+1}} \to \tau_{\leq k - 1 -n}Rj_{k*}j_k^*S_{W_{k+1}} \simeq \tau_{\leq k - 1 -n} Rj_{k*}j_k^*S.$$ We also have a morphism $$S \xrightarrow{pr_2} \bigoplus_{m=1}^{n-k} \mathcal{L}^m[m].$$ Taking the direct sum of these morphisms gives us a morphism $S \to F_kj_k^*S$. By construction, the morphism $S \to F_kj_k^*S$ is an isomorphism over $U_k$. We need to check that it is an isomorphism over $U_{k+1} - U_k$. The attaching axiom [AX1$'$](c) implies that $$i_k^*S \to i_k^*Rj_{k*}j_k^*S$$ induces an isomorphism on cohomology sheaves for all $a \leq k-1-n$. Thus, $$ i_k^*S \simeq \tau_{\leq k-1-n} i_k^*S \simeq \tau_{\leq k-1-n} i_k^*Rj_{k*}j_k^*S \simeq i_k^*F_kj_k^*S.$$
We have thus constructed an isomorphism $S \to F_kj_k^*S$. Since this morphism is constructed as a direct sum of two morphisms, we will check that each summand is a morphism of functors. Let $f:S \to T$ be a morphism in the category $\mathcal{C}_{k+1}$. Consider the diagram \begin{center} \begin{tikzcd} S \arrow[d, "pr_1"] \arrow[rrr, "f"] & & & T\arrow[d, "pr_1"] \\ S_{W_{k+1}} \arrow[rrr, "f_{W_{k+1}}"] \arrow[dd, "\eta_{k+1}(S)"] \arrow[dr] & & & T_{W_{k+1}} \arrow[dd, "\eta_{k+1}(T)"] \arrow[dl] \\ & Rj_{k*}j_k^*S_{W_{k+1}} \arrow[r] & Rj_{k*}j_k^*T_{W_{k+1}}\\ \tau_{\leq k-1-n}Rj_{k*}j_k^*S_{W_{k+1}} \arrow[rrr,"g_{W_{k+1}}"] \arrow[ur] & & & \tau_{\leq k-1-n}Rj_{k*}j_k^*T_{W_{k+1}} \arrow[ul, "\theta"] \end{tikzcd} \end{center} where $g_{W_{k+1}} = \tau_{\leq k - 1 -n} Rj_{k*}j_k^*(f_{W_{k+1}})$. It is clear that the top square commutes and the two trapezoids commute. The left and right triangles commute by the truncation distinguished triangle. The commutativity of the top and bottom trapezoids combined with the commutativity of the left and right triangles imply that $$\theta \circ \eta_{k+1}(T) \circ f_{W_{k+1}} = \theta \circ g_{W_{k+1}} \circ \eta_{k+1}(S).$$ Since $S_{W_{k+1}} \simeq \tau_{\leq k-1-n}S_{W_{k+1}}$ and $\theta$ induces isomorphisms on cohomology sheaves for all $a \leq k-1-n$, Proposition \ref{morph lifting prop} implies that $$\eta_{k+1}(T) \circ f_{W_{k+1}} = g_{W_{k+1}} \circ \eta_{k+1}(S).$$ It follows that the bottom rectangle commutes. Commutativity of the upper and lower rectangles implies that the largest rectangle commutes. Since the diagram \begin{center} \begin{tikzcd} S \arrow[d,"pr_2"] \arrow[r, "f"] &T \arrow[d,"pr_2"]\\ \bigoplus_{m=1}^{n-k} \mathcal{L}_S^m[m] \arrow[r] &\bigoplus_{m=1}^{n-k} \mathcal{L}_T^m[m] \end{tikzcd} \end{center} commutes, we conclude that the isomorphism $id_{\mathcal{C}_{k+1}} \to F_kj_k^*$ is an isomorphism of functors. \end{proof} \end{thm}
\subsection{Axioms [AX2$'$]}
In this section, we give a stratification independent collection of axioms characterizing $IC(\mathfrak{X},\mathcal{L})$. Let $X$ be a complex algebraic variety of complex dimension $n$.
\begin{defn} \label{AX$2'$} Suppose that $S$ is $\mathfrak{X}$-clc for some stratification $\mathfrak{X}$ of $X$. We say that $S$ satisfies axioms [AX2$'$] if \begin{enumerate}[(a)]
\item (Normalization) There exists an open dense subset $V$ of $X$ such that $V = \bigsqcup_{m=1}^n V^m$ where $V^m$ is a topological manifold of complex dimension $m$, dim$_\mathbb{C}(\overline{V^m} - V^m) \leq m-1$, and there exist local systems $\mathcal{L}^m$ on $V^m$ such that $S|_{V^m} \simeq \mathcal{L}^m[m]$,
\item (Pure Dimensional Support) For $1 \leq m \leq n$, if $a > -m$, $$\text{dim}_\mathbb{C}\{x \in \overline{V^m} \ | \ \mathcal{H}^a(i_x^*S) \neq 0\} < -a.$$
\item (Pure Dimensional Cosupport) For $1 \leq m \leq n$, if $a < m$, $$\text{dim}_\mathbb{C}\{x \in \overline{V^m} \ | \ \mathcal{H}^a(i_x^!S) \neq 0\} < a.$$ \end{enumerate} where $i_x : \{x\} \to X$ is the inclusion. \end{defn}
\begin{rmk}\label{ax2' vs ax2} The stratification independent axioms [AX2$'$] are analogous to axioms [AX2] proposed by Goresky and MacPherson in \cite{GM2}; see Definition \ref{AX2} for axioms [AX2]. When $X$ is a pure dimensional complex algebraic variety, axioms [AX2$'$] reduce to axioms [AX2]. Again, we do not include an analog of the lower bound axiom because it is not needed to characterize the complex $IC(\mathfrak{X},\mathcal{L})$. The normalization axiom [AX2$'$](a) differs from [AX2](a) in that the open dense set $V$ contains manifolds of differing dimensions. We require that each local system is shifted by the complex dimension of the manifold. The pure dimensional support axiom [AX2$'$](b) differs from the support axiom [AX2](c) in a significant way. Instead of looking at all possible stalks of the complex $S$, we look at stalks of $S$ in a specific $\overline{V^m}$. For each $m$, we place a condition on the vanishing of cohomology of these stalks in certain degrees. The specific degrees subject to our conditions depend on $m$ instead of the dimension of the complex algebraic variety. The difference between [AX2$'$](c) and the cosupport axiom [AX2](d) is similar to the difference between [AX2$'$](b) and [AX2](c).
We also make a remark on the assumption that $S$ is $\mathfrak{X}$-clc. In \cite{GM2}, it is assumed that $S$ is topologically constructible, i.e. the cohomology sheaves of $S$ also have finitely generated stalks. The finite generation of the stalks of the cohomology sheaves is a consequence of the axioms by Corollary \ref{main cor 1} and the following proposition. \end{rmk}
\begin{prop} \label{main prop} Let $X$ be a complex algebraic variety of complex dimension $n$ and let $\mathfrak{X}$ be a stratification of $X$ by closed subvarieties. Suppose that $S$ is $\mathfrak{X}$-clc. Then $S$ satisfies [AX1$'$] with respect to $\mathfrak{X}$ if and only if $S$ satisfies [AX2$'$]. \end{prop}
Before proving the proposition, we will need to establish several lemmas. Let $\mathfrak{X}$ be a topological stratification of $X$. Recall that $U^m$ is the union of all open $m$-dimensional strata of $X$ and $X^m$ is defined to be $\overline{U^m}$. Let $W^m$ be the largest set of points in $X$ which admit a neighborhood homeomorphic to $\mathbb{C}^{m}$. We can equivalently think of $W^m$ as the largest open subset of $X$ which is a topological manifold of complex dimension $m$.
\begin{lem} \label{strat contain} With the notation above, we have $U^m \subseteq W^m$. \begin{proof} Let $p \in U^m$. Let $S^m_p \subseteq X_m - X_{m-1}$ be the open complex $m$-dimensional stratum containing $p$. By definition of topologically stratified space, there exists a neighborhood $N_p$ and a real $(2(n-m)-1)$-dimensional topologically stratified space $L$ such that $N_p \simeq \mathbb{C}^m \times cone^o(L)$. Recall from the definition of stratified space that the stratification of $L$ induces one on $N_p$. Since $S^m_p$ is open, we can take $N_p \subseteq S^m_p \subseteq X_m$. It follows that $N_p = N_p \cap X_m \simeq \mathbb{C}^m \times cone^o(L_{-1}) = \mathbb{C}^m$. So $p \in W^m$. \end{proof} \end{lem}
\begin{lem} Suppose that $V$ is any open dense subset of $X$ consisting of points in $X$ which admit a neighborhood homeomorphic to some $\mathbb{C}^{m}$. Write $V = \bigsqcup_{m=1}^n V^m$ where $V^m$ is a topological manifold of complex dimension $m$. Then $\overline{V^m} = X^m$. \begin{proof} We will show that $\overline{V^m}$ and $X^m$ are both equal to $\overline{W^m}$. We first show that $X^m = \overline{W^m}$. Lemma \ref{strat contain} implies that $X^m \subseteq \overline{W^m}$. We now show that $W^m \subseteq X^m$. Let $p \in W^m$ and suppose that $p \notin X^m$. Since $p \in W^m$, there exists a neighborhood $N_p$ of $p$ homeomorphic to $\mathbb{C}^m$. Since $X = \bigcup_{l=1}^n X^l$, $p \in X^l$ for some $l \neq m$. Since $X^l = \overline{U^l}$, $N_p \cap U^l$ must be nonempty. Let $q \in N_p \cap U^l$. Since $q \in N_p$, $q$ admits a neighborhood homeomorphic to $\mathbb{C}^m$. Since $q \in U^l$, Lemma \ref{strat contain} implies that $q$ admits a neighborhood homeomorphic to $\mathbb{C}^l$. Since $l \neq m$, this contradicts invariance of domain. It follows that $p \in X^m$. \par The proof that $\overline{V^m} = \overline{W^m}$ is similar. \end{proof} \end{lem}
\begin{cor}\label{irred comp cor} Let $X$ be a complex algebraic variety of complex dimension $n$ with stratification $\mathfrak{X}$. Let $U^m$ be the union of all open $m$-dimensional strata. Then $\overline{U^m}$ is the union of all $m$-dimensional irreducible components of $X$. \begin{proof} To see this, let $\tilde{X}^m$ be the union of all $m$-dimensional irreducible components of $X$. Let $V^m$ be the smooth locus of $\tilde{X}^m - \left( \bigcup_{l \neq m} \tilde{X}^l \cap \tilde{X}^m \right)$. Then $V = \bigsqcup_{m=1}^n V^m$ is an open dense subset of $X$ consisting of points which admit a neighborhood homeomorphic to some $\mathbb{C}^m$. It then follows from the previous lemma that $\tilde{X}^m = \overline{U^m}$. \end{proof} \end{cor}
\begin{lem} \label{4a=5a} Let $S$ be an $\mathfrak{X}$-clc complex. Then $S$ satisfies [AX1$'$](a) if and only if $S$ satisfies [AX2$'$](a). \begin{proof}
If $S$ satisfies [AX1$'$](a) with respect to $\mathfrak{X}$, then the open set $U_1$ coming from the stratification also satisfies the requirements in [AX2$'$](a). Now let $S$ be $\mathfrak{X}$-clc and suppose $S$ satisfies [AX2$'$](a). Since $S$ is $\mathfrak{X}$-clc, $S|_{U^m}$ is CLC. In particular, all of the cohomology sheaves $\mathcal{H}^a(S)|_{U^m}$ are locally constant. For $a \neq -m$, [AX2$'$](a) implies that $\mathcal{H}^a(S)|_{U^m \cap V^m} = 0$. Since $\mathcal{H}^a(S)|_{U^m}$ is locally constant and its restriction to $U^m \cap V^m$ is $0$, we conclude that $\mathcal{H}^a(S)|_{U^m} = 0$; here we use that $U^m \cap V^m$ is dense in $U^m$ (because $\dim_\mathbb{C}(\overline{V^m} - V^m) \leq m-1$) and hence meets every connected component of $U^m$. This proves the lemma. \end{proof} \end{lem}
\begin{rmk} \label{local sys rmk}
Let $S$ be an $\mathfrak{X}$-clc complex and suppose $S$ satisfies [AX2$'$](a). Then $S|_{V^m} = \mathcal{L}^m[m]$ where $\mathcal{L}^m$ is a local system on the topological manifold $V^m$. By the previous lemma, the assumption that $S$ is $\mathfrak{X}$-clc implies that $S|_{U^m}\simeq \mathcal{L}'^m[m]$ where $U^m$ is the open subset of $X^m$ coming from the stratification and $\mathcal{L}'^{m}$ is a local system on $U^m$. Since dim$_\mathbb{C}(\overline{V^m}-V^m) \leq m-1$, the complement of $U^m \cap V^m$ has real codimension greater than or equal to $2$ in $U^m$. This implies that there is a surjection of fundamental groups $\pi_1(U^m \cap V^m) \twoheadrightarrow \pi_1(U^m)$. Fix a base point $x \in U^m \cap V^m$. The local system $\mathcal{L}'^m$ on $U^m$ corresponds to a representation $\phi : \pi_1(U^m, x) \to Aut(\mathcal{L}'^m_x)$ and the restriction $\mathcal{L}^m|_{U^m \cap V^m}$ corresponds to a representation $\tilde{\phi}: \pi_1(U^m \cap V^m, x) \to Aut(\mathcal{L}^m_x)$. Since $\mathcal{L}'^m_x = \mathcal{L}^m_x$, we have a commutative diagram: \begin{center} \begin{tikzcd} \pi_1(U^m \cap V^m, x) \arrow[d, twoheadrightarrow, "i_*"] \arrow[dr,"\tilde{\phi}"]\\ \pi_1(U^m, x) \arrow[r,"\phi"] &Aut(\mathcal{L}^m_x) \end{tikzcd} \end{center} Surjectivity of $i_*$ implies that $\phi$ is the unique representation making this diagram commute. To see this, let $\psi$ be another such representation. Then for any $[\gamma] \in \pi_1(U^m,x)$, surjectivity of $i_*$ gives $[\sigma] \in \pi_1(U^m \cap V^m, x)$ such that $i_*([\sigma]) = [\gamma]$. It follows that $$\phi([\gamma]) = \tilde{\phi}([\sigma]) = \psi([\gamma]),$$
and so $\psi = \phi$. This implies that the local system $\mathcal{L}'^m$ on $U^m$ is the unique extension of the local system $\mathcal{L}^m|_{U^m \cap V^m}$. The most important case of this is the constant sheaf. If $\mathcal{L}^m|_{U^m \cap V^m} \simeq R_{U^m \cap V^m}$, then the representation $\tilde{\phi}$ is trivial. Surjectivity of the fundamental groups implies that the representation $\phi$ is trivial, i.e. $\mathcal{L}'^m \simeq R_{U^m}$.
This fact is false without the surjectivity of fundamental groups: consider the inclusion $S^1-\{p\} \to S^1$ and take any nontrivial local system on $S^1$; its restriction to $S^1-\{p\}$ is trivial. \end{rmk}
\begin{rmk} \label{union of strat} The significance of Lemma \ref{4a=5a} is the following. If $S$ is $\mathfrak{X}$-clc and satisfies [AX2$'$], then we can replace the open set $V$ appearing in [AX2$'$](a) with the open set $U_1$ coming from the stratification. Since $S$ is $\mathfrak{X}$-clc, the sets appearing in [AX2$'$](b) and [AX2$'$](c) can be taken to be unions of strata. \end{rmk}
We are now ready to prove Proposition \ref{main prop}. \begin{proof}[Proof of Proposition \ref{main prop}]
Suppose $S$ is an $\mathfrak{X}$-clc complex and that $S$ satisfies [AX2$'$]. Lemma \ref{4a=5a} implies that $S$ satisfies [AX1$'$](a). We now prove that $S$ satisfies [AX1$'$](b) if and only if $S$ satisfies [AX2$'$](b). Fix $1 \leq m \leq n$ and $a > -m$. By Remark \ref{union of strat}, the set $\{x \in X^m \ | \ \mathcal{H}^a(i_x^*S) \neq 0\}$ is a union of strata. Suppose $S$ satisfies [AX1$'$](b). This implies that the strata contained in $\{x \in X^m \ | \ \mathcal{H}^a(i_x^*S) \neq 0\}$ cannot meet $W_{k+1}$ for $ a > k - 1 -n$. Hence they can only be contained in $W_{k+1}$ for $a \leq k - 1 - n$, equivalently $k \geq a + n +1$. So the strata are contained in $W_{k+1} - W_k$ for some $k \geq a + n + 1$. By Lemma \ref{Wk+1-Wk}, $U_{k+1} - U_k = (W_{k+1} - W_k) - U^{n-k+1}$. Since $k \geq a + n + 1$ and $a > -m$, we have that $n-k+1 \leq -a < m$. It follows that $U^{n-k+1}$ cannot be among the strata contained in $\{x \in X^m \ | \ \mathcal{H}^a(i_x^*S) \neq 0\}$. This implies that the only allowable strata are contained in $U_{k+1} - U_k$ for $n-k < -a$. It follows that dim$_\mathbb{C}\{x \in X^m \ | \ \mathcal{H}^a(i_x^*S) \neq 0\} \leq n-k < -a$.
Conversely, suppose $S$ satisfies [AX2$'$](b). Then dim$_\mathbb{C}\{x \in X^m \ | \ \mathcal{H}^a(i_x^*S) \neq 0\} < -a$. Since $\{x \in X^m \ | \ \mathcal{H}^a(i_x^*S) \neq 0\}$ is a union of strata, it can only contain strata of dimension $< -a$. These strata are contained in $U_{k+1}-U_k \subseteq W_{k+1} - W_k$ for $n-k < -a$. So these strata can only be contained in $W_{k+1}$ for $n-k+1 \leq -a$ or equivalently, $a \leq k - 1 -n$. This implies that $S|_{W_{k+1}} \simeq \tau_{\leq k - 1 -n} S|_{W_{k+1}}$.
We now prove that $S$ satisfies [AX1$'$](c) if and only if it satisfies [AX2$'$](c). Fix $1 \leq m \leq n$ and $a < m$. Again by Remark \ref{union of strat}, the set $\{x \in X^m \ | \ \mathcal{H}^a(i_x^!S) \neq 0\}$ is a union of strata. If $x \in U^m$, then by factoring the inclusion $i_x: \{x\} \to X$ as \begin{center} \begin{tikzcd} \{x\} \arrow[r, "i_x"] \arrow[d, "\mu_x"]&X\\ U^m \arrow[ru, "j^m" '] \end{tikzcd} \end{center}
we see that $i_x^!S = \mu_x^! S|_{U^m} =\mu_x^*\mathcal{L}^m[-m]$. Since $a < m$, we see that $\{x \in X^m \ | \ \mathcal{H}^a(i_x^!S) \neq 0\}$ does not contain any open $m$-dimensional strata.
Now, suppose $S$ satisfies [AX1$'$](c); then $S$ also satisfies [AX1$'$](c$''$). In particular, this implies that these strata cannot meet $U_{k+1} - U_k$ for $a \leq n-k$. Thus the only allowable strata are contained in $U_{k+1} - U_k$ for $a > n-k$. It follows that dim$_\mathbb{C}\{x \in X^m \ | \ \mathcal{H}^a(i_x^!S) \neq 0\} \leq n-k < a$.
Conversely, suppose $S$ satisfies [AX2$'$](c). Then dim$_\mathbb{C}\{x \in X^m \ | \ \mathcal{H}^a(i_x^!S) \neq 0\} < a$. Since $\{x \in X^m \ | \ \mathcal{H}^a(i_x^!S) \neq 0\}$ is a union of non-open strata, it can only contain strata of complex dimension $< a$. These strata are contained in $U_{k+1} - U_k$ for $a > n-k$. \end{proof}
\section{Topological Independence of $IC(\mathfrak{X},\mathcal{L})$}\label{top indep} The main goal of this section is to prove Theorem \ref{main thm 1}.
\begin{thm'} Let $X$ be a complex algebraic variety of complex dimension $n$ which is not necessarily pure dimensional. Let $U$ be an open dense subset of $X$ such that $U = \bigsqcup_{m=1}^n U^m$ where $U^m$ is a topological manifold of complex dimension $m$ and dim$_\mathbb{C}(\overline{U^m}-U^m) \leq m-1$. Let $\mathcal{L}^m$ be a local system on $U^m$ and set $\mathcal{L} = \bigoplus_{m=1}^n \mathcal{L}^m$ (extend each $\mathcal{L}^m$ on $U^m$ to $U$ by zero). Then there exists a unique (up to canonical isomorphism) complex $IC(X, \mathcal{L})$ satisfying axioms [AX2$'$], i.e. $IC(X, \mathcal{L})$ is the unique complex satisfying: \begin{enumerate}[(a)]
\item (Normalization) There exists an open dense subset $V$ of $X$ such that $V = \bigsqcup_{m=1}^n V^m$ where $V^m$ is a topological manifold of complex dimension $m$, dim$_\mathbb{C}(\overline{V^m}-V^m) \leq m-1$, and $IC(X,\mathcal{L})|_{V^m} \simeq \mathcal{L}'^m[m]$ where $\mathcal{L}'^m$ is the unique extension of $\mathcal{L}^m|_{U^m \cap V^m}$ to $V^m$ (see Remark \ref{local sys rmk} for more details on $\mathcal{L}'^m$).
\item (Pure Dimensional Support) For $1 \leq m \leq n$, if $a > -m$, $$\text{dim}_\mathbb{C}\{x \in \overline{V^m} \ | \ \mathcal{H}^a(i_x^*IC(X,\mathcal{L})) \neq 0\} < -a.$$
\item (Pure Dimensional Cosupport) For $1 \leq m \leq n$, if $a < m$, $$\text{dim}_\mathbb{C}\{x \in \overline{V^m} \ | \ \mathcal{H}^a(i_x^!IC(X,\mathcal{L})) \neq 0\} < a.$$ \end{enumerate} \end{thm'}
To prove Theorem \ref{main thm 1}, we follow the same strategy as Goresky and MacPherson in \cite{GM2}. The main difficulty is that we need some way of comparing objects in $D^b_c(X)$ satisfying [AX1$'$] with respect to two different stratifications, which may not have a common refinement. To address this, we will construct a canonical filtration $\mathfrak{X}^{can}$ such that: \begin{enumerate} \item each topological stratification is a refinement of $\mathfrak{X}^{can}$, \item applying Deligne's construction with respect to $\mathfrak{X}^{can}$ yields a complex $J^{can}$ satisfying [AX2$'$], \item $J^{can}$ is $\mathfrak{X}$-clc for any stratification $\mathfrak{X}$. \end{enumerate}
The existence of such a complex $J^{can}$ implies Theorem \ref{main thm 1} as follows. Suppose $S$ is $\mathfrak{X}$-clc for some stratification $\mathfrak{X}$ of $X$ and $S$ satisfies [AX2$'$]. Then $S$ satisfies [AX1$'$] with respect to $\mathfrak{X}$ by Proposition \ref{main prop}. Similarly, the complex $J^{can}$ described above also satisfies [AX1$'$] with respect to $\mathfrak{X}$. By Corollary \ref{main cor 1}, $S$ and $J^{can}$ are canonically isomorphic in $D^b_c(X)$. If $T$ is the complex obtained by applying Deligne's construction to any other stratification $\mathfrak{\tilde{X}}$, then $T$ satisfies [AX1$'$] with respect to $\mathfrak{\tilde{X}}$ and satisfies [AX2$'$] by Proposition \ref{main prop}. It follows that $T$ is also canonically isomorphic to $J^{can}$.
\subsection{Construction of the Canonical Filtration} We will construct the canonical filtration $\mathfrak{X}^{can}$ inductively. For each $1 \leq m \leq n$, let $W^m$ be the largest set of points in $X$ which admit a neighborhood homeomorphic to $\mathbb{C}^m$ and let $X^m = \overline{W^m}$. Set $X^{can}_{n-1} = X - W^n$. Now, suppose that \begin{equation*} \mathfrak{X}^{can}_k: X^{can}_{n} \supseteq X^{can}_{n-1} \supseteq \cdots \supseteq X^{can}_{n-k} \end{equation*} has been defined and each $X^{can}_{n-l}$ is closed in $X$. Recall that the open filtration $\mathfrak{U}^{can}_k$ induced by $\mathfrak{X}^{can}_k$ is given by \begin{equation*} \mathfrak{U}^{can}_k: U^{can}_1 \subseteq \cdots \subseteq U^{can}_k, \end{equation*} where \begin{equation*} U^{can}_l = \left(X^n - X^{can}_{n-l} \right) \cup \left(X^{n-1} - X^{can}_{n-l+1}\right) \cup \cdots \cup \left(X^{n-l+2} - X^{can}_{n-2}\right) \sqcup \bigsqcup_{m=1}^{n-l+1}W^m. \end{equation*}
Let $J^{can}_k \in D^b_c(U^{can}_k)$ be the complex obtained by applying Deligne's construction with respect to the filtration $\mathfrak{X}^{can}_k$. Let $h_k \colon U^{can}_k \to X$ be the open inclusion. Let $V'$ be the largest open subset of $X^{can}_{n-k} - W^{n-k}$ which is a topological manifold of complex dimension $n-k$ and such that $\left(Rh_{k*}J^{can}_k\right)|_{X^{can}_{n-k}}$ is CLC. Let $V = V' \sqcup W^{n-k}$ and define $X^{can}_{n-k-1} \coloneqq X^{can}_{n-k} - V$. Notice that $X^{can}_{n-k-1}$ is closed in $X$.
\begin{lem} $X^{can}_{n-1}$ is a union of strata for any stratification $\mathfrak{X}$. \begin{proof} Fix a stratification $\mathfrak{X}$. Recall that $X^{can}_{n-1} = X - W^n$. Since $X$ is a union of strata, it suffices to show that $W^n$ is a union of strata. We claim that $W^n$ is the union of the strata $S_r$ which, in the normal direction, look like $\mathbb{C}^{n-r}$. If $x$ is contained in such a stratum, then $x$ has a neighborhood homeomorphic to $\mathbb{C}^n$. Conversely, if $x$ has a neighborhood homeomorphic to $\mathbb{C}^n$ and $S_r$ is the stratum containing $x$, then by possibly shrinking the neighborhood, we see that $S_r$ must look like $\mathbb{C}^{n-r}$ in the normal direction. \end{proof} \end{lem}
\begin{prop} For $0 \leq k \leq n$, we have \begin{enumerate} \item For any stratification $\mathfrak{X}$, $X^{can}_{n-k-1}$ is a union of strata, \item dim$_\mathbb{C} X^{can}_{n-k-1} \leq n-k-1$, \item $Z^{can}_{n-k} = (X^{can}_{n-k} - X^{can}_{n-k-1})- W^{n-k}$ is either empty or an $(n-k)$-complex-dimensional topological manifold.
\item Let $J^{can}$ be the object obtained by applying Deligne's construction with respect to the canonical filtration $\mathfrak{X}^{can}$ and a local system $\mathcal{L}$ on $\bigsqcup_{m=1}^n W^m$. Then $J^{can}|_{Z^{can}_{n-k}}$ is CLC. \end{enumerate} \begin{proof} We prove (1)-(4) by induction on $k$. If $k = 0$, then $X^{can}_{n-1}$ is a union of strata by the previous lemma. Moreover, $W^n$ contains all of the $n$-dimensional strata of $X$ by Lemma \ref{strat contain}, so dim$_\mathbb{C} X^{can}_{n-1} \leq n-1$. This shows that $(1)$ and $(2)$ are satisfied. Since $Z^{can}_{n} = (X - X^{can}_{n-1}) - W^n = \varnothing$ is empty, $(3)$ and $(4)$ are also satisfied.
Now fix $k > 0$ and suppose that (1)-(4) hold for all integers strictly less than $k$. Induction hypothesis (1) says that $X^{can}_{n-k}$ is a union of strata. If we can show that the set $V$ used to define $X^{can}_{n-k-1}$ is a union of strata which contains the $n-k$ complex dimensional strata of $X$, then (1) and (2) will hold for $k$. Property (3) will hold for $k$ since $$Z^{can}_{n-k} = (X^{can}_{n-k} - X^{can}_{n-k-1})- W^{n-k} = V - W^{n-k} = V'$$ and $V'$ is a topological manifold of complex dimension $n-k$. Finally, since $Rh_{k*}J^{can}_k$ is CLC on $Z^{can}_{n-k}$,
$$J^{can}|_{Z^{can}_{n-k}} = \tau_{\leq k-1-n}\left(Rh_{k*}h_k^*J^{can}\right)|_{Z^{can}_{n-k}} = \tau_{\leq k-1-n}\left(Rh_{k*}J^{can}_k\right)|_{Z^{can}_{n-k}}$$ is CLC. This implies that (4) will hold for $k$. Thus it suffices to show that $V$ is a union of strata which contains the $n-k$ complex dimensional strata of $X$. This is a consequence of the following lemma. \end{proof} \end{prop}
\begin{lem}
In the situation above, the complex $Rh_{k*}J^{can}_k|_{X^{can}_{n-k}}$ is $\mathfrak{X}$-clc for $k \geq 1$. \begin{proof} Denote the stratification of $X$ by $$\mathfrak{X} : X = X_n \supseteq X_{n-1} \supseteq \cdots \supseteq X_{-1} = \varnothing.$$
By the induction hypothesis, $X^{can}_{n-k}$ is a union of strata. We must show that $Rh_{k*}J^{can}_k|_{X^{can}_{n-k}}$ is CLC on each stratum. Let $x \in X^{can}_{n-k}$ and let $S_r \subseteq X_{2r} - X_{2r-1}$ be the stratum containing $x$. By definition of a topologically stratified space, there exists a distinguished neighborhood $N$ and a $(2(n-r) - 1)$-real-dimensional topologically stratified space $L$ such that $N \simeq \mathbb{C}^r \times cone^o(L)$ and $N \cap X_{2r+l'+1} \simeq \mathbb{C}^r \times cone^o(L_{l'})$. Let $V \coloneqq cone^o(L)$ and $\pi : \mathbb{C}^r \times V \to V$ be projection onto the second factor. For $l \leq k$, let $$\tilde{U}^{can}_l \coloneqq U^{can}_l \cap N,$$ and $$\hat{U}^{can}_l \coloneqq \pi(\tilde{U}^{can}_l).$$ Let $\tilde{j}_l: \tilde{U}^{can}_l \to \tilde{U}^{can}_{l+1}$ and $\hat{j}_l: \hat{U}^{can}_l \to \hat{U}^{can}_{l+1}$ denote inclusions. For $l \leq k$, the induction hypothesis (1) ensures that $U^{can}_l$ is a union of strata. Remark \ref{projection rmk} implies that $\pi^{-1}(\pi(\tilde{U}^{can}_l)) = \tilde{U}^{can}_l$. It follows that \begin{multline*}
(Rh_{k*}J^{can}_k)|_N \simeq R\tilde{h}_{k*} \bigg(\tau_{\leq k-1-n} R\tilde{j}_{k-1*}\cdots \tau_{\leq -n}R\tilde{j}_{1*}\pi^*\hat{\mathcal{L}}^n[n] \\ \oplus \cdots \oplus \tau_{\leq k -1 -n} R\tilde{j}_{k-1*}\pi^*\hat{\mathcal{L}}^{n-k+2}[n-k+2] \oplus \bigoplus_{m=1}^{n-k+1} \pi^*\hat{\mathcal{L}}^m[m]\bigg). \end{multline*} By Lemma \ref{projection commute}, moving $\pi^*$ to the left changes tildes to hats. This gives \begin{multline*}
(Rh_{k*}J^{can}_k)|_N \simeq \pi^*R\hat{h}_{k*} \bigg(\tau_{\leq k-1-n} R\hat{j}_{k-1*}\cdots \tau_{\leq -n}R\hat{j}_{1*}\hat{\mathcal{L}}^n[n] \\ \oplus \cdots \oplus \tau_{\leq k -1 -n} R\hat{j}_{k-1*}\hat{\mathcal{L}}^{n-k+2}[n-k+2] \oplus \bigoplus_{m=1}^{n-k+1} \hat{\mathcal{L}}^m[m]\bigg). \end{multline*}
Since $V_{0}$ is a point, the complex $(Rh_{k*}J^{can}_k)|_{\pi^{-1}(V_{0})}$ is CLC. \end{proof} \end{lem}
\begin{prop} Let $J^{can}$ be the complex obtained from Deligne's construction with respect to the canonical filtration $\mathfrak{X}^{can}$ and some local system $\mathcal{L}$ on $\bigsqcup_{m=1}^n W^m$. Then $J^{can}$ satisfies [AX2$'$]. \begin{proof}
$J^{can}$ satisfies [AX2$'$](a) by construction. To verify [AX2$'$](b), fix $1 \leq m \leq n$ and $a > -m$. We want to show that dim$_\mathbb{C}\{x \in X^m \ | \ \mathcal{H}^a(i_x^*J^{can}) \neq 0 \} < -a$. First, notice that $W^l \cap X^m$ is nonempty if and only if $l = m$. Since $J^{can}|_{W^m} \simeq \mathcal{L}^m[m]$ and $a > -m$, the set $W^l \cap \{x \in X^m \ | \ \mathcal{H}^a(i_x^*J^{can}) \neq 0 \}$ is empty for all $1 \leq l \leq n$. Thus, it suffices to consider the intersection
$$Z^{can}_{n-k} \cap \{x \in X^m \ | \ \mathcal{H}^a(i_x^*J^{can}) \neq 0 \}.$$
Since $J^{can}|_{Z^{can}_{n-k}} \simeq \tau_{\leq k - 1 -n} J^{can}|_{Z^{can}_{n-k}}$, this intersection is possibly nonempty if and only if $a \leq k - 1 - n$. Since dim$_\mathbb{C} Z^{can}_{n-k} \leq n-k$, it follows that
$$\text{dim}_\mathbb{C} \left(Z^{can}_{n-k} \cap \{x \in X^m \ | \ \mathcal{H}^a(i_x^*J^{can}) \neq 0 \}\right) \leq n-k < -a.$$
Since this is true for any $k \geq 1$, we conclude that dim$_\mathbb{C}\{x \in X^m \ | \ \mathcal{H}^a(i_x^*J^{can}) \neq 0 \} < -a$.
To verify [AX2$'$](c), fix $1 \leq m \leq n$ and $a < m$. A similar argument to the above shows that $W^l \cap \{x \in X^m \ | \ \mathcal{H}^a(i_x^!J^{can}) \neq 0 \}$ is empty for all $1 \leq l \leq n$. Again, it suffices to consider $Z^{can}_{n-k} \cap \{x \in X^m \ | \ \mathcal{H}^a(i_x^!J^{can}) \neq 0 \}$. Notice that by Proposition \ref{non open strat lem}, $Z^{can}_{n-k} = U^{can}_{k+1} - U^{can}_k$. The inclusions $U^{can}_{k} \xrightarrow{j} U^{can}_{k+1} \xleftarrow{i} U^{can}_{k+1}-U^{can}_k$ give rise to the adjunction triangle
$$i_!i^!J^{can}|_{U^{can}_{k+1}} \to J^{can}|_{U^{can}_{k+1}} \to Rj_*j^* J^{can}|_{U^{can}_{k+1}} \xrightarrow{[1]}.$$ Restriction to $U^{can}_{k+1}-U^{can}_k$ gives
$$i^!J^{can}|_{U^{can}_{k+1}} \to i^*J^{can}|_{U^{can}_{k+1}} \to i^*Rj_*j^* J^{can}|_{U^{can}_{k+1}} \xrightarrow{[1]}.$$
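Spelling out the step that follows (a routine expansion of the argument, in the same notation), the rotated triangle gives a long exact sequence in cohomology:

```latex
\cdots \to \mathcal{H}^{a-1}\big(i^*Rj_*j^*J^{can}|_{U^{can}_{k+1}}\big)
\to \mathcal{H}^{a}\big(i^!J^{can}|_{U^{can}_{k+1}}\big)
\to \mathcal{H}^{a}\big(i^*J^{can}|_{U^{can}_{k+1}}\big)
\to \mathcal{H}^{a}\big(i^*Rj_*j^*J^{can}|_{U^{can}_{k+1}}\big)
\to \cdots
```

Since $i^*J^{can}|_{U^{can}_{k+1}} \simeq \tau_{\leq k-1-n}\, i^*Rj_*j^*J^{can}|_{U^{can}_{k+1}}$, the first map above is surjective whenever $a-1 \leq k-1-n$ and the last map is injective whenever $a \leq k-n$ (for $a = k-n$ the truncated complex has vanishing cohomology), so exactness forces $\mathcal{H}^{a}(i^!J^{can}|_{U^{can}_{k+1}}) = 0$ for $a \leq k-n$.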
Since $i^*J^{can}|_{U^{can}_{k+1}} \simeq \tau_{\leq k - 1 -n} i^*Rj_*j^*J^{can}|_{U^{can}_{k+1}}$ by construction, the long exact sequence in cohomology implies that $\mathcal{H}^a(i^!J^{can}|_{U^{can}_{k+1}}) = 0$ for $a \leq k-n$. Factor the inclusion $i_x : \{x\} \to X$ into \begin{center} \begin{tikzcd} \{x\} \arrow[r,"i_x"] \arrow[d, "\mu_x"] & X\\ U^{can}_{k+1} - U^{can}_k \arrow[r,"i"] & U^{can}_{k+1} \arrow[u,"\beta"] \end{tikzcd} \end{center} Since $U^{can}_{k+1} - U^{can}_k$ is a topological manifold of dimension $2(n-k)$, Proposition \ref{top man} implies that
$$i_x^!J^{can} = \mu_x^!i^!J^{can}|_{U^{can}_{k+1}} = \mu_x^*i^!J^{can}|_{U^{can}_{k+1}}[-2(n-k)],$$
where the first equality holds since $U^{can}_{k+1}$ is open in $X$. It follows that $\mathcal{H}^a(i_x^!J^{can}) = 0$ for $a \leq n-k$. So $(U^{can}_{k+1}-U^{can}_k ) \cap \{x \in X^m \ | \ \mathcal{H}^a(i_x^!J^{can}) \neq 0 \}$ is possibly nonempty if and only if $a > n-k$. We conclude that
$$\text{dim}_\mathbb{C}\left(Z^{can}_{n-k} \cap \{x \in X^m \ | \ \mathcal{H}^a(i_x^!J^{can}) \neq 0 \} \right)\leq n-k < a.$$
Since this is true for any $k \geq 1$, we conclude that dim$_\mathbb{C}\{x \in X^m \ | \ \mathcal{H}^a(i_x^!J^{can}) \neq 0 \} < a$. \end{proof} \end{prop}
\end{document}
\begin{document}
\draft
\title{Three qubits can be entangled in two inequivalent ways}
\author{W. D\"ur, G. Vidal and J. I. Cirac}
\address{Institut f\"ur Theoretische Physik, Universit\"at Innsbruck, A-6020 Innsbruck, Austria}
\date{\today}
\maketitle
\begin{abstract} Invertible local transformations of a multipartite system are used to define equivalence classes in the set of entangled states. This classification concerns the entanglement properties of a single copy of the state. Accordingly, we say that two states have the same kind of entanglement if each of them can be obtained from the other by means of local operations and classical communication (LOCC) with nonzero probability. When applied to pure states of a three-qubit system, this approach reveals the existence of two inequivalent kinds of genuine tripartite entanglement, for which the GHZ state and a W state appear as remarkable representatives. In particular, we show that the W state retains maximal bipartite entanglement when any one of the three qubits is traced out. We generalize our results both to the case of higher dimensional subsystems and also to more than three subsystems, for all of which we show that, typically, two randomly chosen pure states cannot be converted into each other by means of LOCC, not even with a small probability of success. \end{abstract}
\pacs{03.67.-a, 03.65.Bz, 03.65.Ca, 03.67.Hk}
\narrowtext
\section{Introduction}
The understanding of entanglement is at the very heart of Quantum Information Theory (QIT). In recent years, there has been an ongoing effort to characterize qualitatively and quantitatively the entanglement properties of multiparticle systems. A situation of particular interest in QIT consists of several parties that are spatially separated from each other and share a composite system in an entangled state. This setting constrains the parties ---which are typically allowed to communicate through a classical channel--- to only act locally on their subsystems. But even restricted to local operations assisted with classical communication (LOCC), the parties can still modify the entanglement properties of the system and in particular they can try to convert an entangled state into another. This possibility leads to natural ways of defining equivalence relations in the set of entangled states ---where equivalent states are then said to contain the same kind of entanglement---, and also of establishing hierarchies between the resulting classes.
For instance, we could agree on identifying any two states which can be obtained from each other with certainty by means of LOCC. Clearly, this criterion is interesting in QIT because the parties can use these two states interchangeably for exactly the same tasks. It is a celebrated result \cite{Be95} that, when applied to many copies of a state, this criterion leads to identifying all bipartite pure-state entanglement with that of the EPR state $1/\sqrt{2}(\ket{00}+\ket{11})$ \cite{Ei35}. That is, the entanglement of any pure state $\ket{\psi}_{AB}$ is asymptotically equivalent, under deterministic LOCC, to that of the EPR state, the entropy of entanglement $E(\psi_{AB})$ ---the entropy of the reduced density matrix of either system $A$ or $B$--- quantifying the amount of EPR entanglement contained asymptotically in $\ket{\psi}_{AB}$. In contrast, recent contributions have shown that in systems shared by three or more parties there are several inequivalent forms of entanglement under asymptotic LOCC \cite{asy,Be99}.
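The entropy of entanglement just mentioned is easy to evaluate numerically. The sketch below (our own illustration, not part of the paper; it assumes NumPy and the function name is ours) obtains it from the singular values of the coefficient matrix of $\ket{\psi}_{AB}$, which are the Schmidt coefficients; for the EPR state it gives one ebit.

```python
import numpy as np

def entanglement_entropy(psi, n, m):
    """E(psi) = S(rho_A): von Neumann entropy (base 2) of the reduced state.

    The squared singular values of the n x m coefficient matrix are the
    eigenvalues of rho_A (the Schmidt coefficients squared).
    """
    s = np.linalg.svd(psi.reshape(n, m), compute_uv=False)
    lam = s**2
    lam = lam[lam > 1e-12]            # discard numerically zero Schmidt terms
    return float(-np.sum(lam * np.log2(lam)))

epr = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(entanglement_entropy(epr, 2, 2))      # → 1.0
```

A product state such as $\ket{00}$ gives entropy $0$, as expected.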
This paper is essentially concerned with the entanglement properties of a single copy of a state, and thus asymptotic results do not apply here. For single copies it is known that two pure states $\ket{\psi}$ and $\ket{\phi}$ can be obtained with certainty from each other by means of LOCC if and only if they are related by local unitaries (LU) \cite{Vi00J,Be99}. But even in the simplest bipartite systems, $\ket{\psi}$ and $\ket{\phi}$ are typically not related by LU, and continuous parameters are needed to label all equivalence classes \cite{Li97,Sc,Su00,Ca00,Ke99}. That is, one has to deal with infinitely many kinds of entanglement. In this context an alternative, simpler classification would be advisable.
One such classification is possible if we just demand that the conversion of the states is through stochastic local operations and classical communication (SLOCC) \cite{Be99}; that is, through LOCC but without imposing that it has to be achieved with certainty. In that case we can establish an equivalence relation stating that two states $\ket{\psi}$ and $\ket{\phi}$ are equivalent if the parties have a non-vanishing probability of success when trying to convert $\ket{\psi}$ into $\ket{\phi}$, and also $\ket{\phi}$ into $\ket{\psi}$ \cite{comment}. This relation has been termed stochastic equivalence in Ref.\ \cite{Be99}. Their equivalence under SLOCC indicates that both states are again suited to implement the same tasks of QIT, although this time the probability of a successful performance of the task may differ from $\ket{\phi}$ to $\ket{\psi}$. Notice in addition that since LU are a particular case of SLOCC, states equivalent under LU are also equivalent under SLOCC, the new classification being a coarse graining of the previous one.
The main aim of this work is to identify and characterize all possible kinds of pure-state entanglement of three qubits under SLOCC. Unentangled states, and also those which are product in one party while entangled with respect to the remaining two, appear as expected, trivial cases. More surprising is the fact that there are two different kinds of genuine tripartite entanglement. Indeed, we will show that any (non-trivial) tripartite entangled state can be converted, by means of SLOCC, into one of two standard forms, namely either the GHZ state \cite{Gr89} \begin{equation}
\ket{GHZ}=1/\sqrt{2}(|000\rangle+|111\rangle) \label{GHZ}, \end{equation} or else a second state \begin{equation}
|W\rangle=1/\sqrt{3}(|001\rangle+|010\rangle+|100\rangle) \label{W}, \end{equation} and that this splits the set of genuinely trifold entangled states into two sets which are unrelated under LOCC. That is, we will see that if $\ket{\psi}$ can be converted into the state $\ket{GHZ}$ in (\ref{GHZ}) and $\ket{\phi}$ can be converted into the state $\ket{W}$ in (\ref{W}), then it is not possible to transform, not even with only a very small probability of success, $\ket{\psi}$ into $\ket{\phi}$ nor the other way round.
The previous result is based on the fact that, unlike the GHZ state, not all entangled states of three qubits can be expressed as a linear combination of only two product states. Remarkably enough, the inequivalence under SLOCC of the states $\ket{GHZ}$ and $\ket{W}$ can alternatively be shown from the fact that the 3-tangle (residual tangle), a measure of tripartite correlations introduced by Coffman et al. \cite{Wo99}, does not increase on average under LOCC, as we will prove here.
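The 3-tangle can be evaluated directly for the two representative states. The following numerical sketch (ours, not from the paper; it assumes NumPy, and the helper names are ours) uses $\tau = C^2_{A(BC)} - C^2_{AB} - C^2_{AC}$ with $C^2_{A(BC)} = 4\det\rho_A$ and the Wootters concurrence for the two-qubit reduced states; it returns $1$ for $\ket{GHZ}$ and $0$ for $\ket{W}$, consistent with their inequivalence under SLOCC.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sqrt(np.clip(np.sort(np.linalg.eigvals(R).real)[::-1], 0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def three_tangle(psi):
    """3-tangle tau = C^2_{A(BC)} - C^2_{AB} - C^2_{AC} of a pure 3-qubit state."""
    T = np.outer(psi, psi.conj()).reshape([2] * 6)   # indices a,b,c,a',b',c'
    rho_A = np.einsum('abcxbc->ax', T)               # trace out qubits B, C
    rho_AB = np.einsum('abcxyc->abxy', T).reshape(4, 4)
    rho_AC = np.einsum('abcxbz->acxz', T).reshape(4, 4)
    C2_A_BC = 4 * np.linalg.det(rho_A).real          # tangle of A with (BC)
    return C2_A_BC - concurrence(rho_AB)**2 - concurrence(rho_AC)**2

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print(three_tangle(ghz))  # → 1.0 (up to rounding)
print(three_tangle(w))    # → 0.0 (up to rounding)
```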
We will then move to the second main goal of this work, namely the analysis of the state $\ket{W}$. It cannot be obtained from a state $\ket{GHZ}$ by means of LOCC and thus one could expect, in principle, that it has some interesting, characteristic properties. Recall that in several aspects the GHZ state can be regarded as the maximally entangled state of three qubits. However, if one of the three qubits is traced out, the remaining state is completely unentangled. Thus, the entanglement properties of the state $\ket{GHZ}$ are very fragile under particle losses. We will prove that, on the contrary, the entanglement of
$\ket{W}$ is maximally robust under disposal of any one of the three qubits, in the sense that the remaining reduced density matrices\footnote{The reduced density matrix $\rho_{AB}$ of a pure tripartite state $|\psi\rangle$ is defined as $\rho_{AB}\equiv tr_C(|\psi\rangle\langle\psi|)$.} $\rho_{AB}$, $\rho_{BC}$ and $\rho_{AC}$ retain, according to several criteria, the greatest possible amount of entanglement, compared to any other state of three qubits, either pure or mixed.
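This robustness can be checked numerically. The sketch below (our own illustration assuming NumPy; the function names are ours) shows that tracing out one qubit of $\ket{W}$ leaves a two-qubit state with Wootters concurrence $2/3$, whereas each two-qubit reduced state of $\ket{GHZ}$ has concurrence $0$.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix (4x4)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sqrt(np.clip(np.sort(np.linalg.eigvals(R).real)[::-1], 0, None))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced_AB(psi):
    """rho_AB = tr_C |psi><psi| for a pure 3-qubit state."""
    T = np.outer(psi, psi.conj()).reshape([2] * 6)
    return np.einsum('abcxyc->abxy', T).reshape(4, 4)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print(concurrence(reduced_AB(w)))    # → 2/3 (up to rounding)
print(concurrence(reduced_AB(ghz)))  # → 0.0
```

By the permutation symmetry of $\ket{W}$, the same value $2/3$ is obtained for $\rho_{BC}$ and $\rho_{AC}$.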
We will finally analyze entanglement under SLOCC in more general multipartite systems. We will show that, for most of these systems, there is typically no chance at all to transform locally a given state into some other if they are chosen randomly, because the space of entangled pure states depends on more parameters than those that can be modified by acting locally on the subsystems.
The paper is organized as follows. In section II we characterize mathematically the equivalence relation established by stochastic conversions under LOCC, and illustrate its performance by applying it to the well-known bipartite case. In section III we move to consider a system of three qubits, for which we prove the existence of 6 classes of states under SLOCC ---including the 2 genuinely tripartite ones---. Section IV is devoted to studying the endurance of the entanglement of the state $\ket{W}$ against particle losses. In section V more general multipartite systems are considered. Section VI contains some conclusions. Finally, appendices A to C prove, respectively, some needed results related to SLOCC, the monotonicity of the 3-tangle under LOCC and the fact that $\ket{W}$ optimally retains bipartite entanglement when one qubit is traced out.
\section{Kinds of entanglement under Stochastic LOCC}
In this work we define as equivalent the entanglement of two states $\ket{\psi}$ and $\ket{\phi}$ of a multipartite system iff local protocols exist that allow the parties to convert each of the two states into the other one with some a priori probability of success. In this approach, we follow the definition for stochastic equivalence as given in \cite{Be99}\footnote{Stochastic transformations under LOCC had been previously analyzed in \cite{Lo97,Vi99}.}. The underlying motivation for this definition is that, if the entanglement of $\ket{\psi}$ and $\ket{\phi}$ is equivalent, then the two states can be used to perform the same tasks, although the probability of a successful performance of the task may depend on the state that is being used.
\subsection{Invertible local operators}
Sensibly enough, this classification would remain useless if in practice we were not able to find out which states are related by SLOCC. Let us recall that, all in all, no practical criterion is known so far that determines whether a generic transformation can be implemented by means of LOCC. However, we can think of any local protocol as a series of rounds of operations, where in each round a given party manipulates locally its subsystem and communicates classically the result of its operation (if it included a measurement) to the rest of the parties. Subsequent operations can be made dependent on previous results and the protocol splits into several branches. This picture is useful because for our purposes we need only focus on one of these branches. Suppose that state $\ket{\psi}$ can be locally converted into state $\ket{\phi}$ with non-zero probability. This means that at least one branch of the protocol does the job. Since we are concerned only with pure states we can always characterize this branch mathematically as an operator which factors out as the tensor product of a local operator for each party. For instance, in a three-qubit case we would have that $\ket{\psi}$ can be locally converted into $\ket{\phi}$ with some finite probability iff an operator $A\otimes B \otimes C$ exists such that \begin{equation} \ket{\phi} = A\otimes B \otimes C \ket{\psi}, \label{phipsi} \end{equation} where operator $A$ contains contributions coming from any round in which party A acted on its subsystem, and similarly for operators $B$ and $C$ \footnote{ In practice the constraints $A^{\dagger}A,~B^{\dagger}B,~C^{\dagger}C \leq 1$
should be fulfilled if the invertible operators $A,B,C$ are to come from local POVMs. In this work we do not normalize them in order to avoid introducing unimportant constants to the equations. Instead, both the initial and final states are normalized. }. Carrying on with the 3-qubit example, let us now consider for simplicity that both states $\ket{\psi}$ and $\ket{\phi}$ have rank 2 reduced density matrices $\rho_A \equiv $ tr$_{BC}(|\psi\rangle\langle\psi|), \rho_B$ and $\rho_C$. Then clearly the ranks of the operators $A$, $B$ and $C$ need to be 2 (see appendix A). In other words, each of these operators is necessarily invertible, and in particular \begin{equation} \ket{\psi} = A^{-1}\otimes B^{-1}\otimes C^{-1} \ket{\phi}. \end{equation} We thus see that, under the assumption of maximal rank for the reduced density matrices, two-way convertibility implies the existence of invertible operators $A$, $B$ and $C$ as in (\ref{phipsi}) [actually, one-way convertibility alone has already implied that an invertible local operator (ILO) $A\otimes B\otimes C$ exists]. Obviously, the converse also holds, namely that if an ILO $A\otimes B\otimes C$ exists then for each direction of the conversion a local protocol can be built that succeeds with non-zero probability. As explained in appendix A in detail, we can get rid of the previous assumption on the ranks and state with full generality,
{\bf Result:} States $\ket{\psi}$ and $\ket{\phi}$ are equivalent under stochastic local operations and classical communication ---SLOCC--- iff an invertible local operator ---ILO--- relating them [as in, for instance, equation (\ref{phipsi})] exists.
\subsection{Bipartite entanglement under SLOCC}
What does this classification imply in the well-known case \cite{Lo97,Vi99,Ni99} of bipartite systems? Since LU are included in SLOCC, we can take the Schmidt decomposition of a pure state $\ket{\psi}\in\hbox{$\mit I$\kern-.7em$\mit C$}^n\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^m$, $n\leq m$, as the starting point for our analysis. Thus, \begin{equation} \sum_{i=1}^{n_{\psi}} \sqrt{\lambda_i} \ket{i}\otimes\ket{i} = U_A\otimes U_B\ket{\psi}; ~~~ \lambda_i >0, ~n_{\psi} \leq n,\label{Schmidt1} \end{equation} where $U_A$ and $U_B$ are some proper local unitaries, the coefficients $\lambda_i$ decrease with $i$, and $n_{\psi}$ is the number of non-vanishing terms in the Schmidt decomposition. Clearly, the ILO \begin{equation} \frac{1}{\sqrt{n_\psi}}(\sum_{i=1}^{n_{\psi}} \frac{1}{\sqrt{\lambda_i}}\proj{i} + \sum_{i=n_{\psi}+1}^n \proj{i})\otimes 1_B \end{equation} transforms (\ref{Schmidt1}) into a maximally entangled state \begin{equation} \frac{1}{\sqrt{n_{\psi}}}\sum_{i=1}^{n_{\psi}} \ket{i}\otimes\ket{i}, \label{maxi} \end{equation} which depends only on the Schmidt number $n_{\psi}$. Since SLOCC cannot modify the rank of the reduced density matrices $\rho_A$ and $\rho_B$, which is given by $n_{\psi}$, we conclude that in $\hbox{$\mit I$\kern-.7em$\mit C$}^n\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^m$, $n\leq m$, there are $n$ different kinds of states, corresponding to $n$ different classes under SLOCC. Each of these classes is characterized by a given Schmidt number, and we can choose as their representatives the state (\ref{maxi}) with $n_{\psi}=1,...,n$. Clearly $n_{\psi}=1$ corresponds to states that are less entangled than the rest (they are, after all, unentangled). This hierarchical relation can be extended to the rest of the classes by noting that non-invertible local operators can project out some of the Schmidt terms and thus diminish the Schmidt number of a state.
Therefore the state $\ket{\psi}$ can be locally converted into the state $\ket{\phi}$ with some finite probability iff $n_{\psi}\geq n_{\phi}$, or in terms of kinds of entanglement, we can say that the entanglement of the class characterized by a given Schmidt number is more powerful than that of a class with a smaller Schmidt number.
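This classification by Schmidt number is easy to check numerically. The following minimal sketch (the helper name is ours, not part of the original analysis) counts the non-vanishing Schmidt coefficients of a bipartite pure state via a singular value decomposition:

```python
import numpy as np

def schmidt_number(psi, dim_a, dim_b, tol=1e-10):
    """Count the non-vanishing Schmidt coefficients of a bipartite pure state.

    `psi` holds the amplitudes in the product basis |i>|j>; reshaped into a
    dim_a x dim_b matrix, its singular values are the Schmidt coefficients
    sqrt(lambda_i).
    """
    coeffs = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
    return int(np.sum(coeffs > tol))

# |0>|0>: Schmidt number 1 (the class of unentangled states).
product = np.array([1, 0, 0, 0], dtype=complex)
# EPR state (|00> + |11>)/sqrt(2): Schmidt number 2.
epr = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
```

Two bipartite pure states are SLOCC-equivalent precisely when this count coincides.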
For later reference we also note that in a two-qubit system, ${\cal H}=\hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^2$, we can write any state, after using a convenient LU, uniquely as \begin{equation} \ket{\psi} = c_{\delta}~ \ket{0}\otimes\ket{0} + s_{\delta} ~\ket{1}\otimes\ket{1}; ~~~~~c_{\delta} \geq s_{\delta}\geq 0, \end{equation} where $c_{\delta}, s_{\delta}$ stand for cos$\delta$ and sin$\delta$. This is either a product (unentangled) state $\ket{\psi_{A-B}}=\ket{0}\otimes \ket{0}$ for $c_{\delta}=1$ or else an entangled state that can be converted into the EPR state, \begin{equation} \frac{1}{\sqrt{2}}(\ket{0}\otimes\ket{0}+\ket{1}\otimes\ket{1}), \label{EPR} \end{equation} with probability $p=E_{2}(\psi)$, where $E_2(\psi)\equiv \lambda_2$ is the entanglement monotone that provides a quantitative description of the non-local resources contained in a single copy of a two-qubit pure state \cite{monotones}. Any state $\ket{\psi}$ can be obtained from state (\ref{EPR}) with certainty, which contributes to the fact that the EPR state is considered the maximally entangled state of two qubits.
\section{Entanglement of pure states of three qubits}
In this section we analyze a system of three qubits. We show that SLOCC split the set of pure states into 6 inequivalent classes, which further structure themselves into a three-grade hierarchy when non-invertible local operations are used to relate them. At the top of the hierarchy we find two inequivalent classes of true tripartite entanglement, which we name GHZ-class and W-class after our choice of corresponding representatives. The three possible classes of bipartite entanglement are accessible (with some non-vanishing probability) from {\em any} state of the W and GHZ classes by means of a non-invertible local operator. Finally, at the bottom of the hierarchy we find non-entangled states.
The ranks r$(\rho_A)$, r$(\rho_B)$ and r$(\rho_C)$ of the reduced density matrices, together with the range $R(\rho_{BC})$ of $\rho_{BC}$, will be the main mathematical tools used throughout the first part of this section. By analysing them we will be able to make an exhaustive classification of three-qubit entanglement. Later on we will rephrase some of these results in terms of well-known measures of entanglement. In particular, we will see that the existence of two inequivalent kinds of true tripartite entanglement under SLOCC is very much related to the fact that the 3-tangle, a measure of tripartite entanglement introduced in \cite{Wo99}, is an entanglement monotone (see appendix B).
At the end of the section we will also discuss a practical way to identify the class to which an arbitrary state belongs.
\subsection{Non-entangled states and bipartite entanglement.}
If at least one of the local ranks r$(\rho_A)$, r$(\rho_B)$ or r$(\rho_C)$ is 1, then the pure state of the three qubits factors out as the tensor product of two pure states, and this implies that at least one of the qubits is uncorrelated with the other two. SLOCC distinguish states with this feature depending on which qubits are uncorrelated with the rest.
\noindent {\bf Class A-B-C (product states)} \\ This class corresponds to states with $r(\rho_A)=r(\rho_B)=r(\rho_C)=1$. They can be taken, after using some convenient LU, into the form \begin{equation} \ket{\psi_{A-B-C}} = \ket{0}\ket{0}\ket{0}, \end{equation} where we have already relaxed the notation for $\ket{0}\otimes\ket{0}\otimes\ket{0}$.
\noindent {\bf Classes A-BC, B-AC and C-AB \\ (bipartite entanglement)} \\ These three classes of states contain only bipartite entanglement between two of the qubits, one of the reduced density matrices having rank 1 and the other two having rank 2. For example, the states in class $A-BC$ possess entanglement between the systems $B$ and $C$ ($r(\rho_B)=r(\rho_C)=2$) and are product with respect to system $A$ ($r(\rho_A)=1$). LU allow us to write uniquely states of the class $A-BC$ as \begin{equation}
|\psi_{A-BC}\rangle=|0\rangle(c_\delta|0\rangle|0\rangle+s_\delta |1\rangle|1\rangle), ~~~ c_{\delta} \geq s_{\delta}>0,
\end{equation} and similarly for $|\psi_{B-AC}\rangle$ and $|\psi_{C-AB}\rangle$. We choose the maximally entangled state \begin{equation}
\frac{1}{\sqrt{2}} |0\rangle(|0\rangle|0\rangle+|1\rangle|1\rangle) \label{repreA-BC} \end{equation} as representative of the class $A-BC$. Any other state within this class can be obtained from (\ref{repreA-BC}) with certainty by means of LOCC.
The proof that these four marginal classes are inequivalent under SLOCC is very simple. We only need to recall that the local ranks are invariant under ILO (see appendix A). In what follows we will analyze the more interesting case of $r(\rho_{\kappa})=2,~\kappa = A,B,C$. To see that there are two inequivalent classes fulfilling this condition we will have to study possible product decompositions of pure states.
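The rank criterion is straightforward to evaluate numerically. The sketch below (the helper name is ours) computes the local ranks r$(\rho_A)$, r$(\rho_B)$, r$(\rho_C)$ of a three-qubit pure state from the singular values of the corresponding one-qubit-versus-rest coefficient matrices:

```python
import numpy as np

def local_ranks(psi, tol=1e-10):
    """Ranks (r(rho_A), r(rho_B), r(rho_C)) of the single-qubit reduced
    density matrices of a three-qubit pure state `psi` (8 amplitudes,
    big-endian basis order |ABC>)."""
    t = psi.reshape(2, 2, 2)
    ranks = []
    for axis in range(3):
        # Flatten the other two qubits; the singular values of this matrix
        # are the square roots of the eigenvalues of the reduced state.
        m = np.moveaxis(t, axis, 0).reshape(2, 4)
        ranks.append(int(np.sum(np.linalg.svd(m, compute_uv=False) > tol)))
    return tuple(ranks)

ghz = np.zeros(8); ghz[0] = ghz[7] = 1/np.sqrt(2)      # (|000>+|111>)/sqrt(2)
a_bc = np.zeros(8); a_bc[0] = a_bc[3] = 1/np.sqrt(2)   # |0>(|00>+|11>)/sqrt(2)
```

The GHZ state has all local ranks equal to 2, while the $A-BC$ example has rank 1 on qubit $A$ only, as the classification above requires.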
\subsection{True three-qubit entanglement.}
There turns out to be a close connection between convertibility under SLOCC and the way entangled states can be expressed minimally as a linear combination of product states. For instance, as we shall prove later on, the GHZ and W states have a different number of terms in their minimal product decompositions (\ref{GHZ}) and (\ref{W}), namely 2 and 3 product terms respectively, and this readily implies that there is no way to convert one state into the other by means of an ILO $A\otimes B\otimes C$. Indeed, let us consider, e.g., the most general pure state that can be obtained reversibly from a $\ket{GHZ}$. It reads \begin{equation} A\otimes B\otimes C\ket{GHZ}=\frac{1}{\sqrt{2}}( \ket{A0}\ket{B0}\ket{C0}+\ket{A1}\ket{B1}\ket{C1}), \label{new} \end{equation} where $\ket{A0}$ and $\ket{A1}$ are linearly independent vectors (since $A$ is invertible) and similarly for the other two qubits. That is, the minimal number of terms in a product decomposition for the state (\ref{new}) is also 2. Actually, we have that also for a general multipartite system,
{\bf Observation:} The minimal number of product terms for any given state remains unchanged under SLOCC.
This simple observation tells us already that in three qubits there are at least two inequivalent kinds of genuine tripartite entanglement under SLOCC, that of $\ket{GHZ}$ and that of $\ket{W}$.
However, we still have to prove that the state $\ket{W}$ cannot be expressed as a linear combination of just two product vectors. In order to complete our classification we also have to show that any pure state of three qubits with maximal local ranks can be reversibly converted into either the state $\ket{GHZ}$ or the state $\ket{W}$. We start with an obvious lemma regarding product decompositions:
{\bf Lemma:} Let $\sum_{i=1}^l \ket{e_i}\ket{f_i}$ be a product decomposition for the state $\ket{\eta}\in{\cal H}_{E}\otimes{\cal H}_{F}$. Then the states $\{\ket{e_i}\}_{i=1}^l$ span the range of $\rho_E\equiv$Tr$_F \proj{\eta}$.
{\bf Proof:} We have that $\rho_E = \sum_{i,j=1}^l \braket{f_i}{f_j} \ket{e_j}\bra{e_i}$. On the other hand $\ket{\nu}$ is in the range of $\rho_E$ iff a state $\ket{\mu}$ exists such that $\ket{\nu}=\rho_E\ket{\mu}$, that is $\ket{\nu} = \sum_{i,j=1}^l \braket{f_i}{f_j}\braket{e_i}{\mu} \ket{e_j}$. $\Box$
In particular, $r(\rho_A)=2$ implies that at least two product terms are needed to expand $\ket{\psi}\in \hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^2$. Let us suppose that a product decomposition with only two terms is possible, namely \begin{equation} \ket{\psi}=\ket{a_1}\ket{b_1}\ket{c_1} + \ket{a_2}\ket{b_2}\ket{c_2}. \label{2deco} \end{equation} Then, also according to the previous lemma, $\ket{b_1}\ket{c_1}$ and $\ket{b_2}\ket{c_2}$ have to span the range of $\rho_{BC}$, R$(\rho_{BC})$.
But R$(\rho_{BC})$ is a two dimensional subspace of $\hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^2$. Therefore it always contains either only one or only two product states \cite{STV} [unless R$(\rho_{BC})$ were supported in $\hbox{$\mit I$\kern-.7em$\mit C$}\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^2$ or $\hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes\hbox{$\mit I$\kern-.7em$\mit C$}$, but we already excluded this possibility because we are considering r$(\rho_B)=$r$(\rho_C)=2$]. Notice that a two-term decomposition (\ref{2deco}) requires that R$(\rho_{BC})$ contains at least two product vectors. Only one product vector in R$(\rho_{BC})$, and thus the impossibility of decomposition (\ref{2deco}), is going to be precisely the trait of the states in the W-class.
\noindent {\bf GHZ-class}\\ Let us suppose first that R$(\rho_{BC})$ contains two product vectors, $\ket{b_1}\ket{c_1}$ and $\ket{b_2}\ket{c_2}$. Then decomposition (\ref{2deco}) is possible, and actually unique, with $\ket{a_i}= \braket{\xi_i}{\psi}$, $i=1,2$, where $\ket{\xi_i}$ are the two vectors supported in $R(\rho_{BC})$ that are biorthonormal to the $\ket{b_i}\ket{c_i}$. In this case we can use LU in order to take $\ket{\psi}$ into the useful standard product form (see also \cite{Ac00}) \begin{equation}
|\psi_{GHZ}\rangle=\sqrt{K}(c_\delta|0\rangle|0\rangle|0\rangle+s_\delta e^{i\varphi}|\varphi_A\rangle|\varphi_B\rangle|\varphi_C\rangle), \label{GHZclass} \end{equation} where \begin{eqnarray} \ket{\varphi_A}=c_{\alpha}\ket{0}+s_{\alpha}\ket{1}\nonumber\\ \ket{\varphi_B}=c_{\beta}\ket{0}+s_{\beta}\ket{1}\nonumber\\ \ket{\varphi_C}=c_{\gamma}\ket{0}+s_{\gamma}\ket{1} \end{eqnarray} and $K=(1+2 c_\delta s_\delta c_\alpha c_\beta c_\gamma c_\varphi)^{-1} \in (1/2,\infty)$ is a normalization factor. The ranges for the five parameters are $\delta \in (0,\pi/4], \alpha,\beta,\gamma \in (0,\pi/2]$ and $\varphi \in [0,2\pi)$.
All these states are in the same equivalence class as the $\ket{GHZ}$ (\ref{GHZ}) under SLOCC. Indeed, the ILO \begin{eqnarray} \sqrt{2K}\left( \begin{array}{ll} c_\delta & s_\delta c_\alpha e^{i\varphi} \\0 & s_\delta s_\alpha e^{i \varphi} \end{array} \right) \otimes \left( \begin{array}{ll} 1 & c_\beta \\0 & s_\beta \end{array} \right) \otimes \left( \begin{array}{ll} 1 & c_\gamma\\0 & s_\gamma \end{array} \right), \end{eqnarray} applied to $\ket{GHZ}$ produces the state (\ref{GHZclass}).
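This ILO can be verified numerically. The sketch below (with an arbitrary, illustrative choice of the five parameters, picked by us) applies the operator to $\ket{GHZ}$ and compares the result with the standard GHZ-class form above:

```python
import numpy as np

# Arbitrary parameters in the allowed ranges (an illustrative choice).
d, al, be, ga, ph = 0.5, 0.8, 1.0, 1.2, 0.3
cd, sd = np.cos(d), np.sin(d)
ca, sa = np.cos(al), np.sin(al)
cb, sb = np.cos(be), np.sin(be)
cg, sg = np.cos(ga), np.sin(ga)
K = 1 / (1 + 2*cd*sd*ca*cb*cg*np.cos(ph))  # normalization factor

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1/np.sqrt(2)

# The ILO quoted in the text.
A = np.sqrt(2*K) * np.array([[cd, sd*ca*np.exp(1j*ph)],
                             [0,  sd*sa*np.exp(1j*ph)]])
B = np.array([[1, cb], [0, sb]])
C = np.array([[1, cg], [0, sg]])
out = np.kron(np.kron(A, B), C) @ ghz

# Standard GHZ-class form: sqrt(K) (cd |000> + sd e^{i phi} |phiA phiB phiC>).
phiA = np.array([ca, sa]); phiB = np.array([cb, sb]); phiC = np.array([cg, sg])
target = np.sqrt(K) * (cd*np.eye(8)[0]
                       + sd*np.exp(1j*ph)*np.kron(np.kron(phiA, phiB), phiC))
```

The two vectors coincide, and the target is properly normalized, confirming the claim for this parameter choice.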
The GHZ state is a remarkable representative of this class. It is maximally entangled in several senses \cite{Gi98}. For instance, it maximally violates Bell-type inequalities, the mutual information of measurement outcomes is maximal, it is maximally stable against (white) noise and one can locally obtain from a GHZ state with unit probability an EPR state shared between any two of the three parties. Another relevant feature is that when any one of the three qubits is traced out, the remaining two are in a separable ---and therefore unentangled--- state.
\noindent {\bf W-class}\\ Let us move to analyze the case where R$(\rho_{BC})$ contains only one product vector. We already argued that decomposition (\ref{2deco}) is now not possible. Instead we can (uniquely) write \begin{equation} \ket{\psi} = \ket{a_1}\ket{b_1}\ket{c_1} + \ket{a_2}\ket{\phi_{BC}}, \label{const} \end{equation} where $\ket{\phi_{BC}}$ is the vector of $R(\rho_{BC})$ which is orthogonal to $\ket{b_1}\ket{c_1}$, and $\ket{a_1}$ and $\ket{a_2}$ are given by
$(\bra{b_1}\otimes\bra{c_1})\ket{\psi}$ and $\braket{\phi_{BC}}{\psi}$, respectively. By means of LU, (\ref{const}) can always be rewritten as \begin{eqnarray} \ket{\psi} = (\sqrt{c}\ket{1}+ \sqrt{d}\ket{0})&&\ket{00} \nonumber\\ +\ket{0}&&(\sqrt{a}\ket{01} + \sqrt{b}\ket{10}). \label{Wclass2} \end{eqnarray} Indeed, we first take $\ket{b_1}\ket{c_1}$ into $\ket{0}\ket{0}$. Then, since $\ket{\phi_{BC}}$ has been chosen orthogonal to $\ket{b_1}\ket{c_1}$, it must become $x\ket{01} +y\ket{10}+z \ket{11}$. By requiring that no linear combination of these two vectors be a second product vector we obtain that $z=0$ \cite{product}. In addition the coefficients $ \sqrt{a}\equiv x,~\sqrt{b}\equiv y,~\sqrt{c}$ and $\sqrt{d}$ can be made positive by absorbing the three relative phases into the definition of state $\ket{1}$ of subsystems $A$, $B$ and $C$. Thus the state has been taken into the form (\ref{Wclass2}) by using only LU. We showed before that 2 terms cannot suffice in a product decomposition of the state. Now we see that 3 product terms always do the job, for instance $(\sqrt{c}\ket{1}+ \sqrt{d}\ket{0})\ket{00}$, $\sqrt{a}\ket{0}\ket{01}$ and $\sqrt{b}\ket{0}\ket{10}$ once we take the original state into the standard, unique form \begin{equation} \ket{\psi_W} = \sqrt{a} \ket{001} +\sqrt{b} \ket{010} + \sqrt{c} \ket{100} +\sqrt{d} \ket{000}, \label{Wclass} \end{equation} where $a, b, c >0$, and $d \equiv 1- (a + b+ c) \geq 0$.
The parties can locally obtain the state (\ref{Wclass}) from the state $\ket{W}$ in (\ref{W}), which we choose as a representative of the class ---and whose study we postpone for later on---, by application of an ILO of the form \begin{eqnarray} \left( \begin{array}{ll} \sqrt{a} & \sqrt{d} \\0 & \sqrt{c} \end{array} \right) \otimes \left( \begin{array}{ll} \sqrt{3} & 0 \\0 & \frac{\sqrt{3b}}{\sqrt{a}} \end{array} \right) \otimes \left( \begin{array}{ll} 1 & 0\\0 & 1 \end{array} \right). \end{eqnarray}
Before moving to relate these classes by means of non--invertible local operators, we note that states within the GHZ-class and the W-class depend, respectively, on 5 and 3 parameters that cannot be changed by means of LU. Previous works \cite{Li97,Sc,Ac00,Ca00} have shown that a generic state of three qubits depends, up to LU, on 5 parameters. This means that states typically belong to the GHZ-class, or equivalently, that a {\em generic} pure state of three qubits can be locally transformed into a GHZ with finite probability of success (see also \cite{Co00}). The W-class is of zero measure compared to the GHZ-class. This does not mean, however, that it is irrelevant. In a similar way as separable mixed states are not of zero measure with respect to entangled states, even though product states are, it is in principle conceivable that mixed states having only W-class entanglement are also not of zero measure in the set of mixed states.
\subsection{Relating SLOCC--classes by means of non--invertible operators}
In this subsection, we investigate the hierarchical relation of the 6 SLOCC-equivalence classes under non--invertible operators, i.e. under general LOCC.
A non--invertible local operator transforms $\ket{\psi}$ into $\ket{\phi}$ according to (\ref{phipsi}), but with at least one of the local operators $A$, $B$ and $C$ having rank 1. This means that the local ranks of the pure states can be diminished. For instance, if the initial state $\ket{\psi}$ belongs either to the GHZ or W class, then a non-invertible operator will diminish at least one of the local ranks. That is, $\ket{\phi}$ belongs necessarily to one of the bipartite classes $\kappa-\mu\nu~ (\kappa\not=\mu\not=\nu \in\{A,B,C\})$ or else is a product state $A-B-C$.
Thus we have that the classes GHZ and W are also inequivalent even under most general LOCC, whereas e.g. a measurement of the projector $P=|+\rangle\langle+|$
with $|+\rangle=1/\sqrt{2}(|0\rangle+|1\rangle)$ in party $A$ maps states within the classes $W$ (\ref{Wclass}) and $GHZ$ (\ref{GHZclass}) to states within the class $A-BC$. In a similar way, non--invertible local operators (local, standard measurements) can convert states within one of the classes $\kappa-\mu\nu$ to states within the class $A-B-C$. Note that in all cases described above, the inverse transformations, e.g. from the class $A-B-C$ to one of the classes $\kappa-\mu\nu$ are impossible as they would imply an increase of the rank of at least one of the reduced density operators $\rho_A,\rho_B,\rho_C$. These results are summarized in Fig. \ref{Fig1}.
\subsection{Measures of entanglement and classes under SLOCC}
Several measures have been introduced so far in the literature in order to quantify entanglement. Although this section is mainly concerned with qualitative aspects of multipartite quantum correlations, we would like to relate some of these measures, namely some bipartite ones and the tripartite 3-tangle \cite{Wo99} (see appendix B), to our classification. Remarkably, the existence of two kinds of genuine tripartite entanglement in a three-qubit system, as well as the inequivalence between bipartite and tripartite entanglement, can be easily understood from the non-increasing character of these measures under LOCC. In addition, the 3-tangle allows for a systematic and practical identification of which class under SLOCC any pure state belongs to.
For each $\kappa=A,B$ and $C$ we can regard the three-qubit system as a bipartite system, with qubit $\kappa$, say $A$ for concreteness, being one part of the system and the remaining two qubits, $B$ and $C$, being the other. Correspondingly, a state $\ket{\psi}$ of the three qubits can be viewed as a bipartite state $\ket{\psi_{A(BC)}}$. For bipartite states several measures are known which are entanglement monotones \cite{Vi00J}; that is, which cannot be increased, on average, under LOCC. For instance, we already mentioned the entropy of entanglement $E(\psi)$ for asymptotic conversions ---given by the entropy $S_A$ of the eigenvalues of $\rho_A$--- and the monotone $E_2(\psi)$ for the single copy case ---which is given by the smallest eigenvalue $\lambda_2$ of $\rho_A$. They all vanish for product states (corresponding to $\rho_A$ with rank 1) while having a positive value for any other state (corresponding to $\rho_A$ with rank 2). Thus we can interpret the inequivalence under SLOCC of states whose reduced density matrices differ in rank also in terms of the impossibility of locally generating any of these bipartite monotones from a state where they vanish. For instance, a state in the $A-BC$ class has $S_A=0$, and thus cannot be transformed with any finite probability into a state of the $C-AB$ class, because the latter would have $S_A>0$. We conclude that the monotonicity of these measures readily splits the set of pure states of three qubits into five subsets which are inequivalent under SLOCC, namely unentangled states $A-B-C$, the three classes $A-BC$, $B-AC$ and $C-AB$ containing only bipartite entanglement, and a fifth subset of entangled states with $S_A,S_B,S_C \neq 0$ (i.e. r$(\rho_A)=$r$(\rho_B)=$r$(\rho_C)=2$). Bipartite measures cannot, however, determine the inequivalence of the GHZ and W classes.
Is there any known measure of tripartite entanglement which can distinguish between these two classes? The 3-tangle does. Indeed, it can be computed from the product decompositions (\ref{GHZclass}) and (\ref{Wclass}) (see \cite{Wo99} for details), and reads \begin{equation} \tau(\psi_{GHZ})=(2Ks_\alpha s_\beta s_\gamma s_\delta c_\delta)^2 \neq 0 \label{tangle}\label{tauGHZ} \end{equation} for any state in the GHZ class, while it vanishes for any state in the W class. In appendix B we prove that the 3-tangle is an entanglement monotone, a very desirable property for any quantity aiming at measuring entanglement. Consequently, a state in the W class cannot be transformed by means of LOCC (and in particular SLOCC) into a state in the GHZ class, which is an independent proof of the fact that the two kinds of true tripartite entanglement are indeed inequivalent under SLOCC.
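The behaviour of the 3-tangle on the two representatives can be checked directly. The sketch below is our own transcription of the hyperdeterminant formula of Coffman, Kundu and Wootters, $\tau = 4|d_1 - 2d_2 + 4d_3|$, so it should be double-checked against the original reference:

```python
import numpy as np

def three_tangle(psi):
    """3-tangle of a three-qubit pure state, tau = 4|d1 - 2 d2 + 4 d3|
    (Coffman-Kundu-Wootters), with psi reshaped as the tensor t[i,j,k]."""
    t = psi.reshape(2, 2, 2)
    d1 = (t[0,0,0]**2 * t[1,1,1]**2 + t[0,0,1]**2 * t[1,1,0]**2
          + t[0,1,0]**2 * t[1,0,1]**2 + t[1,0,0]**2 * t[0,1,1]**2)
    d2 = (t[0,0,0]*t[1,1,1] * (t[0,1,1]*t[1,0,0] + t[1,0,1]*t[0,1,0]
                               + t[1,1,0]*t[0,0,1])
          + t[0,1,1]*t[1,0,0]*t[1,0,1]*t[0,1,0]
          + t[0,1,1]*t[1,0,0]*t[1,1,0]*t[0,0,1]
          + t[1,0,1]*t[0,1,0]*t[1,1,0]*t[0,0,1])
    d3 = (t[0,0,0]*t[1,1,0]*t[1,0,1]*t[0,1,1]
          + t[1,1,1]*t[0,0,1]*t[0,1,0]*t[1,0,0])
    return float(4 * abs(d1 - 2*d2 + 4*d3))

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1/np.sqrt(2)
w = np.zeros(8, dtype=complex); w[1] = w[2] = w[4] = 1/np.sqrt(3)
```

For these representatives one finds $\tau(GHZ)=1$ and $\tau(W)=0$, in agreement with the discussion above.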
\subsection{Practical identification}
Given an arbitrary state $\ket{\psi}$ of three qubits, expressed in any basis, it may be interesting to know, for instance, whether it can be converted by means of LOCC into a GHZ or a W state, if any, or into a EPR state shared between two of the parties. In our original analysis of the classes we already have provided a constructive method, based on the analysis of r$(\rho_{\kappa})$ and R$(\rho_{BC})$, to determine the class of $\ket{\psi}$ under SLOCC. Analysing the R$(\rho_{BC})$ may, however, not be the most practical way to proceed. Here we suggest to proceed instead according to the following two steps:
\begin{itemize} \item Compute $\rho_{\kappa}$, $\kappa=A,B$ and $C$, and check whether they have a vanishing determinant. [Note that det$\rho_{\kappa}=0 \Leftrightarrow S_{\kappa} = 0 \Leftrightarrow $r$(\rho_{\kappa})=1$.] \item If none of the previous determinants vanishes [that is, if $\ket{\psi}$ has true tripartite entanglement], then compute the $3$-tangle using the recipe introduced in \cite{Wo99}. \end{itemize} Then Table I, which summarizes the relation between classes under SLOCC and measures of entanglement, can be used to catalogue the state $\ket{\psi}$.
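The two-step recipe can be turned into a small routine. In the sketch below the function names are ours, and the 3-tangle is evaluated with the Coffman-Kundu-Wootters formula as we transcribe it (so it should be checked against the original reference):

```python
import numpy as np

def tau3(t):
    # 3-tangle, tau = 4|d1 - 2 d2 + 4 d3| (Coffman-Kundu-Wootters).
    d1 = (t[0,0,0]**2*t[1,1,1]**2 + t[0,0,1]**2*t[1,1,0]**2
          + t[0,1,0]**2*t[1,0,1]**2 + t[1,0,0]**2*t[0,1,1]**2)
    d2 = (t[0,0,0]*t[1,1,1]*(t[0,1,1]*t[1,0,0] + t[1,0,1]*t[0,1,0]
                             + t[1,1,0]*t[0,0,1])
          + t[0,1,1]*t[1,0,0]*t[1,0,1]*t[0,1,0]
          + t[0,1,1]*t[1,0,0]*t[1,1,0]*t[0,0,1]
          + t[1,0,1]*t[0,1,0]*t[1,1,0]*t[0,0,1])
    d3 = (t[0,0,0]*t[1,1,0]*t[1,0,1]*t[0,1,1]
          + t[1,1,1]*t[0,0,1]*t[0,1,0]*t[1,0,0])
    return float(4*abs(d1 - 2*d2 + 4*d3))

def classify(psi, tol=1e-10):
    """SLOCC class of a three-qubit pure state: local ranks first,
    then (if all ranks are 2) the 3-tangle to split GHZ from W."""
    t = (psi/np.linalg.norm(psi)).reshape(2, 2, 2)
    ranks = [int(np.sum(np.linalg.svd(np.moveaxis(t, ax, 0).reshape(2, 4),
                                      compute_uv=False) > tol))
             for ax in range(3)]
    if ranks == [1, 1, 1]:
        return 'A-B-C'
    if 1 in ranks:
        return ('A-BC', 'B-AC', 'C-AB')[ranks.index(1)]
    return 'GHZ' if tau3(t) > tol else 'W'

ghz = np.zeros(8, dtype=complex); ghz[0] = ghz[7] = 1/np.sqrt(2)
w = np.zeros(8, dtype=complex); w[1] = w[2] = w[4] = 1/np.sqrt(3)
```

Applied to the representatives, the routine reproduces the classification of Table I.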
\section{State $\ket{W}$ and residual bipartite entanglement.}
As mentioned in the previous section, in several aspects the state $\ket{GHZ}$ is the maximally entangled state of three qubits. It also has the feature that when one of the qubits is traced out, the remaining two are completely unentangled. This means, in particular, that if one of the three parties sharing the system decides not to cooperate with the other two, then these two cannot use the entanglement resources of the state at all. The same happens if for some reason the information about one of the qubits ---namely the identity of the corresponding states $\ket{0}$ and $\ket{1}$ in (\ref{GHZ})--- is lost.
Here we would like to investigate the robustness of the entanglement of a three-qubit state $\ket{\psi}$ against disposal of one of the qubits \cite{Br00}. The residual, two-qubit states $\rho_{AB}$, $\rho_{AC}$ and $\rho_{BC}$ are in general mixed states. There are several measures of entanglement of mixed states and therefore multiple ways of quantifying how much (mixed-state) bipartite entanglement the state $\ket{\psi}$ turns into when one of the qubits is traced out. Nevertheless, most of the criteria we have examined coincide in pointing out the state $\ket{W}$ as the one that maximally retains bipartite entanglement. Note that the reduced density matrix of $|W\rangle$ is identical for any two subsystems and is e.g. given by \begin{equation}
\rho_{AB}=\frac{2}{3}|\Psi^+\rangle\langle\Psi^+|+\frac{1}{3}|00\rangle\langle00|,\label{reducedW}
\end{equation} with $|\Psi^+\rangle=1/\sqrt{2}(|01\rangle+|10\rangle)$ being a maximally entangled state of two qubits. Note that one can obtain from a single copy of
(\ref{reducedW}) a state which is arbitrarily close to the state $|\Psi^+\rangle$ by means of a filtering measurement \cite{Gi96}.
\subsection{Average residual entanglement}
Let us first consider the amount of bipartite entanglement, according to some measure ${\cal E}(\rho)$, that the two remaining qubits retain on average when the third one is traced out, that is, \begin{equation} \bar{{\cal E}}(\psi) \equiv \frac{1}{3}({\cal E}(\rho_{AB})+ {\cal E}(\rho_{AC}) + {\cal E}(\rho_{BC})). \label{average} \end{equation} In general, computing the amount of entanglement ${\cal E}(\rho)$ for bipartite mixed states is a difficult problem. However, numerical results have shown that $\ket{W}$ maximizes the average entanglement of formation, corresponding to the choice ${\cal E}(\rho) = E_f(\rho)$, where $E_f(\rho)$ \footnote{The entanglement of formation is given by $E_f(\rho)=h(\frac{1}{2}+\frac{1}{2}\sqrt{1-{\cal C}^2})$, where ${\cal C}$ is the concurrence and $h$ is the binary entropy function $h(x)=-x{\rm log}_2x-(1-x){\rm log}_2(1-x)$.} is the minimal amount of bipartite pure-state entanglement [as quantified by means of the entropy of entanglement] required to prepare locally one single copy of the state $\rho$ \cite{Vi00}.
In addition, we have managed to show analytically (see appendix C) that for the particular choice ${\cal E}(\rho) = {\cal C}(\rho)^2$, where ${\cal C}(\rho)$ is the concurrence (for a definition of the concurrence see e.g. \cite{Wo99}), the state $\ket{W}$ reaches the maximal average value $\bar{{\cal C}^2}(W)=4/9$, which no other state can match.
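The value $4/9$ is easy to reproduce numerically for $\ket{W}$ itself. The sketch below (our own check) evaluates Wootters' concurrence on the reduced state $\rho_{AB}$ of $\ket{W}$ given in (\ref{reducedW}):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence C(rho) of a two-qubit mixed state:
    C = max(0, l1 - l2 - l3 - l4), with l_i the decreasing square roots
    of the eigenvalues of rho * rho_tilde."""
    sy = np.array([[0, -1j], [1j, 0]])
    S = np.kron(sy, sy).real          # sigma_y x sigma_y is a real matrix
    rho_tilde = S @ rho.conj() @ S
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde).real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Reduced state of |W> after tracing out one qubit:
# (2/3)|Psi+><Psi+| + (1/3)|00><00|.
psi_plus = np.array([0, 1, 1, 0]) / np.sqrt(2)
rho_w = (2/3) * np.outer(psi_plus, psi_plus) + (1/3) * np.diag([1, 0, 0, 0])
```

One obtains ${\cal C}(\rho_{AB})=2/3$ and hence ${\cal C}^2=4/9$ for each of the three pairs.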
\subsection{Least entangled pair}
Another way of quantifying how resistant the entanglement of a tripartite state $\ket{\psi}$ is to dismissal of one part of the system consists in looking at the least entangled of the three possible remaining parts, namely at the function \begin{equation} {\cal E}_{\min}(\psi) \equiv \min({\cal E}(\rho_{AB}), {\cal E}(\rho_{AC}), {\cal E}(\rho_{BC})). \label{worstcase} \end{equation} For this ``worst case scenario'' we have been able to prove analytically (see appendix C) that the maximal value of ${\cal E}_{\min}(\psi)$ is obtained by the state $\ket{W}$ for any bipartite measure ${\cal E}(\rho)$ which is monotonic with the concurrence, ${\cal C}(\rho)$, such as the entanglement of formation $E_f(\rho)$ and the monotone $E_2(\rho)$ \footnote{The entanglement monotone $E_2$, expressed in terms of the concurrence ${\cal C}$ is given by $E_2(\rho)=\frac{1}{2}-\frac{1}{2}\sqrt{1-{\cal C}^2}$.}, which denotes the minimal amount of bipartite pure-state entanglement [quantified by means of $E_2(\psi)$] required to prepare locally one single copy of the state $\rho$.
We conclude that the state $\ket{W}$ is the state of three qubits whose entanglement has the highest degree of endurance against loss of one of the three qubits. We conceive this property as important in any situation where one of the three parties sharing the system, say Alice, may suddenly decide not to cooperate with the other two. Notice that even in the case that Alice would decide to try to destroy the entanglement between Bob and Claire, this would not be possible, since any local action on A cannot prevent Bob and Claire from sharing, at least, the entanglement contained in $\rho_{BC}$ (for instance, by simply ignoring Alice's actions). Therefore, although essentially tripartite, the entanglement of the state $\ket{W}$ is also readily bipartite, in contrast to that of the state $\ket{GHZ}$, which only after some local manipulation can be brought into a bipartite form.
\section{Generalization to $N$ parties}
In this last section we would like to apply the same techniques to analyze the entanglement of more general multipartite systems. We will learn that the set of entangled states is a rather inaccessible jungle for the local explorer, for two pure states $\ket{\psi}$ and $\ket{\phi}$ are typically not connected by means of LOCC, so that the parties are usually unable to convert states locally. We will also study generalizations to $N$ qubits of the state $\ket{W}$.
\subsection{Local inaccessibility of states in general multipartite systems}
Let us consider first $N$ parties each possessing a qubit. The Hilbert space of the system is \begin{equation} {\cal H}^{(N)}=\underbrace{\hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes...\otimes \hbox{$\mit I$\kern-.7em$\mit C$}^2}_{N}, \end{equation} and therefore up to a global, physically irrelevant complex constant, a generic vector depends on $2(2^N-1)$ real parameters. On the other hand we want to identify vectors which are related by means of an ILO. A general one-party, invertible operator $A$ must have non-vanishing determinant, which we can fix to one, det$A=1$, because the operator $kA$ only differs in that it introduces in the transformed states an extra constant factor $k\in\hbox{$\mit I$\kern-.7em$\mit C$}$, which we have already addressed. That is, $A\in SL_{2}(\hbox{$\mit I$\kern-.7em$\mit C$})$, and it depends on $6$ real parameters. Therefore the set of equivalence classes under SLOCC, \begin{equation} \frac{{\cal H}^{(N)}}{\underbrace{SL_{2}(\hbox{$\mit I$\kern-.7em$\mit C$})\times SL_{2}(\hbox{$\mit I$\kern-.7em$\mit C$}) \times ... \times SL_{2}(\hbox{$\mit I$\kern-.7em$\mit C$})}_{N}}, \end{equation} depends {\em at least} on $2(2^N-1) - 6N$ parameters. This lower bound allows for a finite number of classes for $N=3$, but shows that for any larger number $N$ of qubits there are infinitely many classes, labeled by at least one continuous parameter. The reason is that the number of parameters of a state $\ket{\psi}$ which the parties can modify by means of a general ILO $A\otimes B\otimes...\otimes N$ grows linearly with $N$ ($6N$ for the multi-qubit case), whereas the number of parameters required to specify $\ket{\psi}$ grows exponentially with $N$.
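The counting argument above amounts to the simple arithmetic below (a trivial check; the function name is ours). The lower bound $2(2^N-1)-6N$ on the number of continuous class parameters is negative for $N=2,3$, so a finite number of classes is possible there, and strictly positive from $N=4$ on:

```python
def slocc_parameter_excess(n_qubits):
    """Lower bound 2(2^N - 1) - 6N on the number of continuous parameters
    labelling SLOCC classes of N qubits: 2(2^N - 1) real parameters for a
    state ray minus 6 per SL(2,C) factor."""
    return 2 * (2**n_qubits - 1) - 6 * n_qubits

excess = {n: slocc_parameter_excess(n) for n in range(2, 7)}
```

For instance the bound gives $-4$ for $N=3$ and $+6$ already for $N=4$.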
More generally, if the Hilbert space is given by ${\cal H}=\hbox{$\mit I$\kern-.7em$\mit C$}^{n_1}\otimes ... \otimes \hbox{$\mit I$\kern-.7em$\mit C$}^{n_N}$, then the set of equivalence classes under SLOCC, \begin{equation} \frac{\hbox{$\mit I$\kern-.7em$\mit C$}^{n_1}\otimes ... \otimes \hbox{$\mit I$\kern-.7em$\mit C$}^{n_N}}{SL_{n_1}(\hbox{$\mit I$\kern-.7em$\mit C$})\times...\times SL_{n_N}(\hbox{$\mit I$\kern-.7em$\mit C$})}, \end{equation} depends at least on $2(n_1n_2...n_N-1) - 2\sum_{i=1}^N(n_i^2-1)$ parameters. This shows that only for $N=3$ are there still some systems with (potentially) only a finite number of classes under SLOCC, namely those with Hilbert space $\hbox{$\mit I$\kern-.7em$\mit C$}^2\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^{n_2}\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^{n_3}$, that is, having a qubit as at least one of the subsystems. In all other cases, one finds an infinite number of classes.
We notice that even allowing for non-invertible local operations the number of parameters that can be changed by local manipulations is typically smaller than the number of parameters the state depends on. That is, the subset of states that can be reached locally from a given state $\ket{\psi}$ is of zero measure in the set of states of the multipartite system. Recall that in the bipartite scenario, ${\cal H}=\hbox{$\mit I$\kern-.7em$\mit C$}^n\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^m$, there is always a maximally entangled state from which all the other states can be locally prepared with certainty of success. We see now that, in contrast, there is typically in a multipartite system no state from which all the others can be prepared, not even with some probability of success. Of course, the parties can always resort to, say, using a sufficient amount of EPR states distributed among them to prepare any multipartite state by standard teleportation. This implies, however, using an initial state (that of many EPR states) which belongs to a Hilbert space much larger than the Hilbert space of the state the parties are trying to create, and thus does not change the previous conclusion.
\subsection{State \ket{W} in multi-qubit systems}
Let us have a look at the generalized form $|W_N\rangle$ of the state
$|W\rangle$ (\ref{W}). We define the state \begin{equation}
|W_N\rangle \equiv 1/\sqrt{N}|N-1,1\rangle,
\end{equation} where $|N-1,1\rangle$ denotes the totally symmetric state containing $N-1$ zeros and one $1$. For example, for $N=4$ we obtain \begin{equation}
|W_4\rangle=1/\sqrt{4}(|0001\rangle+|0010\rangle+|0100\rangle+|1000\rangle).
\end{equation} One immediately observes that the entanglement of this state is again very robust against particle losses, i.e. the state $|W_N\rangle$ remains entangled even if any $N\!-\!2$ parties lose the information about their particle. This means that any two out of $N$ parties possess an entangled state, independently of whether the remaining $(N-2)$ parties decide to cooperate with them or not. This can be seen by computing the reduced density operator $\rho_{AB}$ of
$|W_N\rangle$, i.e. by tracing out all but the first and the second systems. By symmetry of the state $|W_N\rangle$, we have that all reduced density operators $\rho_{\kappa\mu}$ are identical and we obtain \begin{equation}
\rho_{\kappa\mu}=\frac{1}{N}(2 |\Psi^+\rangle\langle\Psi^+|+(N-2)|00\rangle\langle 00|). \end{equation} The concurrence can easily be determined to be \begin{equation} {\cal C}_{\kappa\mu}(W_N)=\frac{2}{N}, \end{equation} which shows that $\rho_{\kappa\mu}$ is entangled, and even distillable. We conjecture that the average value of the square of the concurrence for $\ket{W_N}$, \begin{equation} \frac{2}{N(N-1)}\sum_{\kappa} \sum_{\mu\neq\kappa} {\cal C}^2_{\kappa\mu}(W_N)=\frac{4}{N^2}, \end{equation} is again the maximal value achievable for any state of $N$ qubits.
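The value ${\cal C}_{\kappa\mu}(W_N)=2/N$ can be verified numerically. Below is a sketch (helper names are ours) that builds $|W_N\rangle$, traces out all but two qubits, and evaluates the Wootters concurrence:

```python
import numpy as np

def w_state(N):
    # |W_N>: equal superposition of the N basis states with a single '1'.
    psi = np.zeros(2 ** N)
    for k in range(N):
        psi[1 << k] = 1.0
    return psi / np.sqrt(N)

def reduced_pair(psi, N):
    # Reduced density matrix of two qubits (all pairs are identical
    # by the permutation symmetry of |W_N>).
    A = psi.reshape(4, 2 ** (N - 2))
    return A @ A.conj().T

def concurrence(rho):
    # Wootters concurrence of a two-qubit density matrix.
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    lam = np.linalg.eigvals(rho @ YY @ rho.conj() @ YY)
    lam = np.sort(np.sqrt(np.maximum(lam.real, 0.0)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

for N in (3, 4, 5, 10):
    print(N, concurrence(reduced_pair(w_state(N), N)))  # equals 2/N
```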
\section{Summary and conclusions}
In this work, we investigated equivalence classes of multipartite states specified by stochastic local operations and classical communication. We showed that for pure states of three qubits there are 6 different classes of this kind. In particular, we found that there are two inequivalent types of genuine tripartite entanglement, represented by the GHZ state and the state W. We showed that the state W is the state of three qubits that retains a maximal amount of bipartite entanglement when any one of the three qubits is traced out. For multipartite ($N\geq 4$) and multilevel systems, we showed that there exist infinitely many inequivalent kinds of entanglement (i.e. classes under SLOCC).
\section*{Acknowledgments} This work was supported by the Austrian Science Foundation under the SFB ``control and measurement of coherent quantum systems'' (Project 11), the European Community under the TMR network ERB--FMRX--CT96--0087, the European Science Foundation and the Institute for Quantum Information GmbH. G.V also acknowledges a Marie Curie Fellowship HPMF-CT-1999-00200 (European Community).
\section*{Appendix A: SLOCC and local ranks}
In this appendix we show that states $\ket{\psi}$ and $\ket{\phi}$ belong to the same class under SLOCC iff they are related by means of an invertible local operator (ILO). From this connection it follows easily that the local ranks of a pure state, r$(\rho_{\kappa})$, $\kappa=A,B,...$, are invariant under SLOCC, whereas under LOCC they can only decrease.
{\bf Lemma:} If the bipartite vectors $\ket{\psi}$ and $\ket{\phi} \in \hbox{$\mit I$\kern-.7em$\mit C$}^n\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^m$ fulfill \begin{equation} \ket{\phi} = A\otimes 1_B \ket{\psi}, \end{equation} then the ranks of the corresponding reduced density matrices satisfy r$(\rho_A^{\psi}) \geq $ r$(\rho_A^{\phi})$ and r$(\rho_B^{\psi}) \geq $r$(\rho_B^{\phi})$.
{\bf Proof:} We consider the Schmidt decomposition of $\ket{\psi}$, \begin{equation} \ket{\psi} = \sum_{i=1}^{n_{\psi}} \sqrt{\lambda_i^{\psi}}\ket{i}\ket{i},~~~ \lambda^{\psi}_i > 0,~~ n_{\psi}\leq \min (n,m), \end{equation} and write the operator $A$ as \begin{equation} A = \sum_{i=1}^n \ket{\mu_i}\bra{i}, \label{operator} \end{equation} where $\ket{\mu_i} \in \hbox{$\mit I$\kern-.7em$\mit C$}^n$ do not need to be normalized nor linearly independent. Then we have that $\rho_A^{\psi} = \sum_{i=1}^{n_{\psi}}\proj{i}$ and $\rho_A^{\phi} = A\rho_A^{\psi}A^{\dagger} = \sum_{i=1}^{n_{\psi}}\proj{\mu_i}$, so that r$(\rho_A^{\phi})\leq n_{\psi}$. The second inequality of the Lemma follows from the fact that for any bipartite vector r$(\rho_A)=$ r$(\rho_B)$. $\Box$
{\bf Corollary:} If the vectors $\ket{\psi},\ket{\phi}\in{\cal H}_A\otimes{\cal H}_B\otimes...\otimes{\cal H}_N$ are connected by a local operator as $\ket{\phi}=A\otimes B\otimes...\otimes N \ket{\psi}$, then the local ranks satisfy r$(\rho_{\kappa}^{\psi}) \geq $ r$(\rho_{\kappa}^{\phi})$, $\kappa=A, B,..., N$.
{\bf Proof:} Indeed, for each of the parties, say Alice for concreteness, we can view the operator $A\otimes B\otimes...\otimes N$ as the composition of two local operators, $A\otimes 1_{B...N}$ and $1_A\otimes (B\otimes...\otimes N)$, and the Hilbert space as ${\cal H}_A\otimes {\cal H}_{B...N}$. Then, because of the previous lemma, application of the first operator cannot increase r($\rho_A$), and the same happens with the second operator, which cannot increase r$(\rho_{B...N})$ [recall that for any pure state r$(\rho_A)=$ r$(\rho_{B...N})$]. $\Box$
{\bf Theorem:} Two pure states of a multipartite system are equivalent under SLOCC iff they are related by a local invertible operator.
{\bf Proof:} If \begin{equation} \ket{\phi}=A\otimes B\otimes...\otimes N \ket{\psi}, \label{localoperator} \end{equation} then a local protocol exists for the parties to transform $\ket{\psi}$ into $\ket{\phi}$ with a finite probability of success. Indeed, each party simply needs to perform a local POVM including a normalized version of the corresponding local operator in (\ref{localoperator}). For instance, Alice has to apply a POVM defined by operators $\sqrt{p_A}A$ and $\sqrt{1_A - p_AA^{\dagger}A}$, where $p_A\leq 1$ is a positive weight such that $p_AA^{\dagger}A \leq 1_A$, and similarly for the rest of the parties. Such a local protocol then converts $\ket{\psi}$ successfully into $\ket{\phi}$ with probability $p_Ap_B...p_N$. If, in addition, $A,B,...,N$ are invertible operators, then obviously \begin{equation} \ket{\psi}=A^{-1}\otimes B^{-1}\otimes...\otimes N^{-1} \ket{\phi} \end{equation} and the conversion can be reversed locally. Let us then move on to prove the converse. We already argued (section II.A) that if $\ket{\psi}$ can be converted into $\ket{\phi}$ by LOCC, then a local operator relates them. We want to prove now that equivalence of $\ket{\psi}$ and $\ket{\phi}$ under SLOCC implies that this operator can always be chosen to be invertible. For simplicity, we will assume that $\ket{\psi}$ and $\ket{\phi}$ are related by a local operator acting non-trivially only on Alice's part, \begin{equation} \ket{\phi} = A\otimes 1_{B...N} \ket{\psi}. \label{onlyA} \end{equation} [The general case would correspond to composing operator $A\otimes 1_{B...N}$ with operator $1_A\otimes B\otimes 1_{C...N}$, and similarly for the rest of the parties. The following argumentation should then be applied sequentially to each party individually.]
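The completeness of Alice's POVM $\{\sqrt{p_A}A,\ \sqrt{1_A-p_AA^{\dagger}A}\}$ is easy to verify numerically for a generic local operator; the following is a sketch, where the particular choice $p_A=\min(1,1/\|A^{\dagger}A\|)$ is ours (any smaller positive weight also works):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))  # a generic local operator

# Largest admissible weight: p_A <= 1 and p_A * A^dag A <= 1_A.
lam_max = np.linalg.eigvalsh(A.conj().T @ A)[-1]
pA = min(1.0, 1.0 / lam_max)

E1 = np.sqrt(pA) * A
# Hermitian square root of 1_A - p_A A^dag A via its eigendecomposition.
w, V = np.linalg.eigh(np.eye(2) - pA * A.conj().T @ A)
E2 = V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.conj().T

# POVM completeness: E1^dag E1 + E2^dag E2 = 1_A.
print(np.allclose(E1.conj().T @ E1 + E2.conj().T @ E2, np.eye(2)))  # True
```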
We can then consider the Schmidt decomposition of the states with respect to part $A$ and part $B...N$, \begin{eqnarray} \ket{\psi} = \sum_{i=1}^{n_{\psi}} \sqrt{\lambda_i^{\psi}}\ket{i}\ket{\tau_i},~~~~~~~~~~~ \lambda^{\psi}_i > 0\\ \ket{\phi} = \sum_{i=1}^{n_{\phi}} \sqrt{\lambda_i^{\phi}}(U_A\ket{i})\ket{\tau_i},~~~~~ \lambda^{\phi}_i > 0 \end{eqnarray} where the local unitary $U_A$ relates the two local Schmidt bases in Alice's part, $\{\ket{i}\}_{i=1}^n \in {\cal H}_A = \hbox{$\mit I$\kern-.7em$\mit C$}^n$, $\ket{\tau_i}\in {\cal H}_B\otimes...\otimes{\cal H}_N$, and $n_{\psi}=n_{\phi}$ because of the previous corollary. Now, the operator $A$ in equation (\ref{onlyA}) must be of the form (up to some irrelevant permutations in the Schmidt bases) \begin{eqnarray} A = U_A (A_1 + A_2)\nonumber\\ A_1 \equiv \sum_{i=1}^{n_{\psi}} \sqrt{\frac{\lambda_i^{\phi}}{\lambda_i^{\psi}}} \proj{i},\\ A_2 \equiv \sum_{i=n_{\psi}+1}^n\ket{\mu_i}\bra{i} \end{eqnarray} where $\ket{\mu_i}$ are arbitrary unnormalized vectors. Notice that the vectors $\ket{\mu_i}$ play no role in equation (\ref{onlyA}) since $A_2\otimes1_{B...N}\ket{\psi}=0$. Therefore we can redefine \begin{equation} A_2 \equiv \sum_{i=n_{\psi} +1}^n \proj{i}, \end{equation} which implies that $A$ is an invertible operator.$\Box$
\section*{Appendix B: $\tau$ is an entanglement monotone} In this appendix, we show that the 3-tangle $\tau$ is an entanglement monotone, i.e. decreasing on average under LOCC in all three parties. We first note that any local protocol can be decomposed into POVM's such that only one party performs operations on the system. This, together with the invariance of the 3-tangle $\tau$ under permutations of the parties, ensures that it is sufficient to consider a local POVM in $A$ only. Furthermore, we can restrict ourselves to two--outcome POVM's due to the fact that a generalized (local) POVM can be implemented by a sequence of two-outcome POVM's. Let $A_1,A_2$ be the two POVM elements such that $A_1^\dagger A_1+A_2^\dagger A_2 = \mbox{$1 \hspace{-1.0mm} {\bf l}$}$. We can write $A_i=U_iD_iV$, where $U_i$, $V$ are unitary matrices and $D_i$ are diagonal matrices with entries $(a,b)$ and $((1-a^2)^{\frac{1}{2}},(1-b^2)^{\frac{1}{2}})$, respectively. Note that we used the singular value decomposition for $A_i$, and the restriction that $A_1,A_2$ constitute a POVM immediately implies that the unitary operation $V$ can be chosen to be the same in both cases. We consider an initial state
$|\psi\rangle$ with 3-tangle $\tau(\psi)$. Let $|\tilde\phi_i\rangle=
A_i|\psi\rangle$ be the (unnormalized) states after the application of the POVM. Normalizing them, we obtain $|\phi_i\rangle=|\tilde\phi_i\rangle/\sqrt{p_i}$
with $p_i=\langle\tilde\phi_i|\tilde\phi_i\rangle$ and $p_1+p_2=1$. We want to show that $\tau^\eta$, $0<\eta\leq1$, is, on average, always decreasing and thus an entanglement monotone, i.e. for \begin{equation} <\tau^\eta>= p_1\tau^\eta(\phi_1)+ p_2 \tau^\eta(\phi_2)\label{monotontau} \end{equation} we have that \begin{equation} <\tau^\eta> \leq \tau^\eta(\psi) \label{monoton} \end{equation} is fulfilled for all possible choices of the POVM $\{A_1,A_2\}$. Using that $\tau$ is invariant under local unitaries, we do not have to consider the unitary operations $U_i$ in our calculations, i.e. $\tau(U_iD_iV\psi)=\tau(D_iV\psi)$. Taking this simplification into account, a straightforward calculation shows that \begin{equation} \tau(\phi_1)=\frac{a^2b^2}{p_1^2} \tau(\psi) , \mbox{ } \tau(\phi_2)=\frac{(1-a^2)(1-b^2)}{p_2^2} \tau(\psi), \end{equation} where we used that $\tau(\epsilon \tilde{\phi_i})=\epsilon^4 \tau(\tilde{\phi_i})$, which can be checked by noting that $\tau$ is a quartic function of the coefficients of the state in the standard basis \cite{Wo99}. Note that the dependence of $\tau(\phi_i)$ on the unitary operation $V$ is hidden in $p_i$. For $\eta=1/2$, one obtains for example $\tau^{\frac{1}{2}}(\phi_1)=(ab/p_1)\,\tau^{\frac{1}{2}}(\psi)$. Substituting in (\ref{monotontau}), we find \begin{equation} <\tau^{\frac{1}{2}}>=(ab+\sqrt{(1-a^2)(1-b^2)}) \tau^{\frac{1}{2}}(\psi). \label{tau12} \end{equation} In this case, one can easily check that (\ref{tau12}) $\leq \tau^{\frac{1}{2}}(\psi)$ by noting that (\ref{tau12}) is maximized for $a=b$. We thus have that $\tau^{\frac{1}{2}}$ is, on average, always decreasing and thus an entanglement monotone. In a similar way, one can check for $0< \eta \leq 1$ that $\tau^\eta$ is an entanglement monotone. However, for $\eta \not= 1/2$, the derivation is a bit more involved due to the fact that in this case the probabilities $p_i$ in the expression for $<\tau^\eta>$ no longer cancel and have to be calculated explicitly.
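The maximization step for $\eta=1/2$ can be made fully explicit: writing $a=\cos\alpha$ and $b=\cos\beta$ with $\alpha,\beta\in[0,\pi/2]$,

```latex
ab+\sqrt{(1-a^2)(1-b^2)}
  =\cos\alpha\cos\beta+\sin\alpha\sin\beta
  =\cos(\alpha-\beta)\leq 1,
```

with equality iff $\alpha=\beta$, i.e. $a=b$, which gives $<\tau^{\frac{1}{2}}>\ \leq \tau^{\frac{1}{2}}(\psi)$ directly from (\ref{tau12}).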
\section*{Appendix C: $|W\rangle$ maximizes residual bipartite entanglement}
Here we show that for all tripartite pure states, except the state $|W\rangle$, the following inequality holds \begin{equation} E_\tau \equiv {\cal C}^2_{AB}+{\cal C}^2_{AC}+{\cal C}^2_{BC} < \frac{4}{3},\label{inequ}
\end{equation} while the state $|W\rangle$ reaches the value $E_\tau=4/3$. Note that we used the shorthand notation ${\cal C}_{AB}$ for the concurrence ${\cal C}(\rho_{AB})$ of the reduced density operator $\rho_{AB}$, and similarly for ${\cal C}_{AC}$ and ${\cal C}_{BC}$.
Inequality (\ref{inequ}) already implies that the state $|W\rangle$ reaches the maximum average value $\bar{{\cal E}}(\psi)$ of Equ. (\ref{average}) for the choice of ${\cal E}(\rho) = {\cal C}(\rho)^2$, namely $\bar{{\cal E}}(W)=4/9$.
At the same time, inequality (\ref{inequ}) also shows that the state $|W\rangle$ maximizes the function ${\cal E}_{\min}(\psi)$ (\ref{worstcase}) for the choice of ${\cal E}(\rho) = {\cal C}(\rho)^2$, since (\ref{inequ}) implies that \begin{equation} {\cal C}^2_{\min}(\psi) \equiv \min({\cal C}^2_{AB},{\cal C}^2_{AC},{\cal C}^2_{BC}) < 4/9 \label{mini}
\end{equation} for all states except the state $|W\rangle$, for which the value $4/9$ is reached. From (\ref{mini}) it follows that for any bipartite measure of entanglement ${\cal E}(\rho)$ which is monotonically increasing with the square of the concurrence (and hence with the concurrence itself), the state $|W\rangle$ maximizes the function ${\cal E}_{\min}(\psi)$ (\ref{worstcase}), i.e. \begin{equation} {\cal E}_{\min}(\psi) < {\cal E}_{\min}(W) = {\cal E}({\cal C}^2=4/9). \end{equation} Assume that this were not the case, i.e. that there existed a state $\psi$ for which ${\cal E}_{\min}(\psi) > {\cal E}_{\min}(W)$. Since by assumption ${\cal E}$ is monotonically increasing with the concurrence, this would imply that ${\cal C}^2_{\min}(\psi) >4/9$, which contradicts Eq. (\ref{mini}) and is hence impossible.
Note in addition that any good measure of entanglement should be a convex function \cite{Vi00J}, as ${\cal C}(\rho), E_f(\rho)$ and $E_2(\rho)$ are. This implies, when applied to (\ref{average}) and (\ref{worstcase}) that the optimal values for $\bar{{\cal E}}$ and ${\cal E}_{\min}$ are achieved for pure states.
The remainder of this appendix is devoted to proving inequality (\ref{inequ}). Using the definition of the 3-tangle, $\tau\equiv\tau_{ABC}={\cal C}^2_{A(BC)}-{\cal C}^2_{AB}-{\cal C}^2_{AC}$ \cite{Wo99}, and the invariance of the 3-tangle under permutations of the parties, we can rewrite $E_\tau$ as $1/2({\cal C}^2_{A(BC)}+{\cal C}^2_{B(AC)}+{\cal C}^2_{C(AB)}-3\tau)$. Using that ${\cal C}^2_{\kappa(\mu\nu)}=4{\rm det}\rho_{\kappa}$, we can evaluate $E_\tau$ for the different classes.
Starting with the class $A-B-C$, we immediately obtain that $E_\tau(\Psi_{A-B-C})=0$. For the class $A-BC$, we have that $\tau=0$ and ${\cal C}^2_{A(BC)}=0$. Since ${\cal C}^2_{B(AC)},{\cal C}^2_{C(AB)} \leq 1$, we have that $E_\tau(\Psi_{A-BC}) \leq 1$ in this case (and similarly for the classes $B-AC$ and $C-AB$).
Now we consider the class $W$, specified by Eq. (\ref{Wclass}). Again, we have that $\tau=0$. We find that $E_\tau(\Psi_W)=4(ab+ac+bc)$ (which does not depend on $d$). Notice that $E_\tau$ is maximized for $a=b=c=1/3$, which corresponds to the state $|W\rangle$, and leads to $E_\tau=4/3$. For all other values of $a,b,c,d$, we have that $E_\tau < 4/3$.
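The maximization over the class-$W$ parameters can also be seen directly. Assuming the normalization $a+b+c+d=1$ of the parametrization (\ref{Wclass}),

```latex
E_\tau(\Psi_W)=4(ab+ac+bc)
 =2\left[(a+b+c)^2-(a^2+b^2+c^2)\right]
 \leq \frac{4}{3}(a+b+c)^2 \leq \frac{4}{3},
```

where the first inequality uses $a^2+b^2+c^2\geq (a+b+c)^2/3$ (equivalent to $(a-b)^2+(b-c)^2+(a-c)^2\geq 0$, with equality iff $a=b=c$) and the second uses $a+b+c=1-d\leq 1$. Both are saturated exactly at $a=b=c=1/3$, $d=0$.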
Let us now turn to the class GHZ, specified in Eq. (\ref{GHZclass}). Using that $\tau(\Psi_{GHZ})$ is given in Eq. (\ref{tauGHZ}) and $\det\rho_A=K^2c_\delta^2s_\delta^2s_\alpha^2(1-c_\beta^2c_\gamma^2)$ (and similarly for $\det\rho_{B,C}$), we obtain \begin{equation} E_\tau=\frac{4c_\delta^2s_\delta^2[(s_\alpha^2s_\beta^2+s_\alpha^2s_\gamma^2+s_\beta^2s_\gamma^2)-3s_\alpha^2s_\beta^2s_\gamma^2]}{(1+2c_\delta s_\delta c_\alpha c_\beta c_\gamma c_\varphi)^2}. \label{Etau} \end{equation} One readily checks that (\ref{Etau}) is maximized for $\delta=\pi/4$ and $\varphi=\pi$ (which corresponds to $c_\delta=s_\delta=1/\sqrt{2}$ and $c_\varphi=-1$), independently of the values of $\alpha,\beta,\gamma \in (0,\pi/2]$. Thus we have that $E_\tau \leq E_\tau(\delta=\pi/4,\varphi=\pi)$, and after some algebra we obtain \begin{equation} E_\tau \leq \frac{(c_\alpha^2+c_\beta^2+c_\gamma^2)-2(c_\alpha^2c_\beta^2+c_\alpha^2c_\gamma^2+c_\beta^2c_\gamma^2)+3c_\alpha^2c_\beta^2c_\gamma^2}{(1+c_\alpha c_\beta c_\gamma )^2}. \label{Etau1} \end{equation} We want to show that the right-hand side of Eq. (\ref{Etau1}) is $< 4/3$. Let us write $x\equiv c_\alpha$, $y\equiv c_\beta$, $z\equiv c_\gamma$ with $0\leq x,y,z<1$. We thus have to show that \begin{eqnarray} f(x,y,z)\equiv&&3(x^2+y^2+z^2)-6(x^2y^2+x^2z^2+y^2z^2) \nonumber \\ &+&5(x^2y^2z^2)-4+8xyz < 0. \end{eqnarray} Let us calculate the maximum of $f(x,y,z)$. We therefore take the derivatives of $f(x,y,z)$ with respect to $x,y,z$ (which we denote by $f_x,f_y,f_z$) and set them to zero. One immediately observes, by considering linear combinations of the resulting equations (e.g. $xf_x-yf_y$, which yields $(x^2-y^2)(1-2z^2)=0$), that for a maximum we must have $x=y=z$. The possible solutions of the resulting polynomial of degree 5 can be checked to lie outside the interval $[0,1)$, i.e. outside the range of $x,y,z$, except for $x=y=z=0$.
It can however easily be verified that this solution gives rise to a minimum of $f(x,y,z)$, namely $f(0,0,0)=-4$. Thus the maximum of $f(x,y,z)$ is attained on the boundary of the range of $x,y,z$, which corresponds to the faces of a cube. Due to the fact that $f(x,y,z)$ is invariant under permutations of the variables, we only have to check two of the faces, e.g. those specified by $x=0$ and $x=1$ (actually $x=1-\epsilon$, where $\epsilon$ is an infinitesimally small positive number). We find (i) $f(0,y,z)=3(y^2+z^2)-6y^2z^2-4 \leq -1$ (the maximum in this case is obtained e.g. for $y=0$, $z=1-\epsilon$) and (ii) $f(1,y,z)=8yz-3(y^2+z^2)-y^2z^2-1 < 0$. In (ii), it can be checked that a necessary condition for a maximum is $y=z$ and that $f(1,y,y)$ is monotonically increasing on $[0,1)$, and is thus maximized for $y=z=1-\epsilon$. One obtains $f(x,y,z) \leq f(1,1-\epsilon,1-\epsilon) < 0$, as desired.
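The inequality $f<0$ on $[0,1)^3$ can also be checked independently on a numerical grid (the grid resolution is ours; note that $f$ only approaches $0$ at the excluded corner $x=y=z=1$):

```python
import numpy as np

def f(x, y, z):
    # The polynomial whose negativity on [0,1)^3 is claimed above.
    return (3 * (x**2 + y**2 + z**2)
            - 6 * (x**2 * y**2 + x**2 * z**2 + y**2 * z**2)
            + 5 * x**2 * y**2 * z**2 - 4 + 8 * x * y * z)

g = np.linspace(0.0, 0.999, 60)
X, Y, Z = np.meshgrid(g, g, g)
vals = f(X, Y, Z)
print(f(0.0, 0.0, 0.0))  # -4.0, the interior critical value
print(vals.max())        # strictly negative; tends to 0 only at x=y=z=1
```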
So we managed to show that the state $|W\rangle$ is the only state which fulfills $E_\tau=4/3$, and for all other tripartite pure states we have that $E_\tau < 4/3$.
\begin{references}
\bibitem{Be95} C. H. Bennett, H. J. Bernstein, S. Popescu, B. Schumacher, quant-ph/9511030
\bibitem{Ei35} A. Einstein, B. Podolsky and N. Rosen, Phys. Rev. {\bf 47}, 777-780 (1935).
\bibitem{asy} N. Linden, S. Popescu, B. Schumacher and M. Westmoreland, quant-ph/9912039; G. Vidal , W. D\"ur and J. I. Cirac, quant-ph/0004009; S. Wu and Y. Zhang, quant-ph/0004020.
\bibitem{Be99} C. H. Bennett, S. Popescu, D. Rohrlich, J.A. Smolin and A.V. Thapliyal, quant-ph/9908073;
\bibitem{Vi00J} G. Vidal, Journ. of Mod. Opt. {\bf 47}, 355 (2000);
\bibitem{Li97} N. Linden and S. Popescu, Fortsch.Phys. {\bf 46}, 567 (1998);
\bibitem{Sc} J. Schlienz, Ph.D. thesis
\bibitem{Su00} A. Sudbery, quant-ph/0001116;
\bibitem{Ca00} H. A. Carteret and A. Sudbery, quant-ph/0001091;
\bibitem{Ke99} J. Kempe, Phys. Rev. A{\bf 60} 910-916 (1999);
\bibitem{comment} Different classifications based on LU invariants were recently proposed in \cite{Ac00} and \cite{Ca00}. Depending on the values of these invariants, several classes of states in one case and types of entanglement in the other are identified. We want to remark that our classification is based on probabilistic conversions under LOCC, and therefore, as is to be expected, our results are not fully compatible with those of these approaches.
\bibitem{Gr89} D. M. Greenberger, M. Horne, A. Zeilinger, {\it Bell's theorem, Quantum Theory, and Conceptions of the Universe,} ed. M. Kafatos, Kluwer, Dordrecht 69 (1989); D. Bouwmeester et al., Phys. Rev. Lett. {\bf 82 }, 1345 (1999).
\bibitem{Wo99} V. Coffman, J. Kundu, and W. K. Wootters, Phys. Rev. A {\bf 61}, 052306 (2000); see also quant-ph/9907047
\bibitem{Lo97} H. K. Lo and S. Popescu, quant-ph/9707038;
\bibitem{Vi99} G. Vidal, Phys. Rev. Lett. {\bf 83 }, 1046 (1999);
\bibitem{Ni99} M. Nielsen, Phys. Rev. Lett. {\bf 83 }, 436 (1999);
\bibitem{monotones} Recall that the $n$ entanglement monotones $E_k(\psi)\equiv\sum_{i=k}^n \lambda_i$, where $\lambda_i$ come from the Schmidt decomposition of $\ket{\psi} \in \hbox{$\mit I$\kern-.7em$\mit C$}^n\otimes\hbox{$\mit I$\kern-.7em$\mit C$}^n$, \begin{equation} \ket{\psi} = \sum_{i=1}^n \sqrt{\lambda_i} \ket{i}\otimes\ket{i},~~ \lambda_i\geq\lambda_{i+1}\geq 0, \end{equation} are the measures that provide a quantitative description of the entanglement resources of a single copy of pure states, in that, for instance, they give the optimal probability $P(\psi\rightarrow\phi)$ of conversion of the state $\ket{\psi}$ into the state $\ket{\phi}$ under LOCC \cite{Vi99}, namely \begin{equation} P(\psi\rightarrow\phi)= \min \{\frac{E_1(\phi)}{E_1(\psi)},...,\frac{E_n(\phi)}{E_n(\psi)}\}. \end{equation} In a two-qubit system we have only two non-trivial monotones, $E_1(\psi)=\lambda_1+\lambda_2 =1$ and $E_2=\lambda_2\leq 1/2.$ In a similar way as $E_2(\psi)>E_2(\phi)$ determines that $\ket{\psi}$ can be transformed into $\ket{\phi}$ with certainty by means of LOCC \cite{Ni99}, the extension to mixed states $\rho$ of two qubits, $E_2(\rho)$, also says when the entanglement of the pure state $\ket{\psi}$ suffices to locally prepare $\rho$ with certainty \cite{Vi00}.
\bibitem{STV} A. Sampera, R. Tarrach, G. Vidal, Phys. Rev. A {\bf 58} (1998) 826-830.
\bibitem{Ac00} A. Ac\'{\i}n, A. Andrianov, L. Costa, E. Jan\'{e}, J. I. Latorre and R. Tarrach, quant-ph/0003050;
\bibitem{Gi98} N. Gisin and H. Bechmann-Pasquinucci, quant-ph/9804045
\bibitem{product} A state $t\ket{00}+x\ket{01}+y\ket{10}+z\ket{11}$ is product iff it can be written as $(a\ket{0}+b\ket{1})(c\ket{0}+d\ket{1})$, that is iff tz=xy. A complex linear combination $\lambda_1\ket{00}+\lambda_2(x\ket{01}+y\ket{10}+z\ket{11})$ with $\lambda_2\neq 0$ cannot be product for any $\lambda_1$ iff $z=0$.
\bibitem{Co00} O. Cohen, T. A. Brun, quant-ph/0001084
\bibitem{Br00} A. Higuchi, A. Sudbery, quant-ph/0005013; H. J. Briegel, R. Raussendorf, quant-ph/0004051
\bibitem{Gi96} N. Gisin, Phys. Lett. A {\bf 210} 151 (1996).
\bibitem{Vi00} G. Vidal, quant-ph/0003002;
\end{references}
\narrowtext \begin{table}
\begin{tabular}[t]{||c|l|l|l|l||} Class & $S_A$ & $S_B$ & $S_C$ & $\tau$ \\ \hline A-B-C & 0 & 0 & 0 & 0 \\ \hline A-BC & 0 & $>0$ & $>0$ & 0 \\ \hline B-AC & $>0$ & 0 & $>0$ & 0 \\ \hline C-AB & $>0$ & $>0$ & 0 & 0 \\ \hline W & $>0$ & $>0$ & $>0$ & 0 \\ \hline GHZ & $>0$ & $>0$ & $>0$ & $>0$ \\ \end{tabular} \caption[]{Values of the local entropies $S_A, S_B, S_C$ and the 3-tangle $\tau$ for the different classes} \label{Table1} \end{table}
\end{document}
\begin{document}
\newtheorem{tth}{Theorem}[section] \newtheorem{dfn}[tth]{Definition} \newtheorem{lem}[tth]{Lemma} \newtheorem{prop}[tth]{Proposition} \newtheorem{coro}[tth]{Corollary} \begin{center} {\Large {\bf Multi-bifurcations of Wavefronts on $r$-corners } } \vspace*{0.4cm}\\ {\large Takaharu Tsukada} \footnote{Higashijujo 3-1-16 Kita-ku, Tokyo 114-0001 JAPAN. e-mail : tsukada@math.chs.nihon-u.ac.jp} \vspace*{0.2cm}\\ {\large College of Humanities \& Sciences, Department of Mathematics,\\
Nihon University}\end{center} \begin{abstract} We extend the notion of reticular Legendrian unfoldings in order to investigate multi-time-parameter bifurcations of wavefronts generated by an $r$-corner. We give a classification list of generic and stable bifurcations with two time parameters and give all generic figures in the plane and in space. \end{abstract}
\section{Introduction} \hspace*{1em} Legendrian singularities can be found in many problems of differential geometry, calculus of variations and mathematical physics. One of their most successful applications is the study of singularities of wavefronts. The bifurcation of wavefronts generated by a hypersurface without boundary in a smooth manifold was investigated via the theory of Legendrian unfoldings by S.~Izumiya \cite{izumiya1}. In \cite{bifsemi} we investigated the theory of reticular Legendrian unfoldings in order to describe bifurcations of wavefronts generated by a hypersurface with an $r$-corner. These theories concern one-parameter bifurcations of wavefronts. In this paper we investigate $m$-parameter bifurcations of wavefronts on an $r$-corner. Since almost all of the theory can be proved by methods parallel to those of \cite{bifsemi}, we develop our theory along the lines of that paper and omit the details of the parts which can be proved by parallel methods.\\
Let us consider an $m$-parameter family $\{ L_{\sigma,t} \}_{\sigma\subset I_r,t\in ({\mathbb R}^m,0)}$ of contact regular $r$-cubic configurations on $J^1({\mathbb R}^n,{\mathbb R})$ defined by contact embedding germs $C_t: (J^1({\mathbb R}^n,{\mathbb R}),0) \rightarrow J^1({\mathbb R}^n,{\mathbb R})$ depending smoothly on $t\in ({\mathbb R}^m,0)$ such that $C_0(0)=0$ and $L_{\sigma,t} = C_t(L^0_\sigma)$ for all $\sigma\subset I_r,\ t\in ({\mathbb R}^m,0)$. We investigate bifurcations of the wavefronts of $\{ L_{\sigma,t} \}_{\sigma\subset I_r}$ around time $0$.
In order to realize this, we shall need to extend the notion of {\em reticular Legendrian unfoldings} which is defined in \cite{bifsemi}.
\section{Stabilities of unfoldings}\label{unfold:sec} \hspace*{1em} In this section we recall the theory of function germs with respect to {\it the reticular $t$-${\cal P}$-${\cal K}$-equivalence relation} which is developed in \cite{tPKfunct}.
Let ${\mathbb H}^r=\{ (x_1,\ldots,x_r)\in {\mathbb R}^r|x_1\geq 0,\ldots,x_r\geq 0\}$ be an $r$-corner. We denote by ${\cal E}(r;k_1,r;k_2)$ the set of all germs at $0$ of smooth maps ${\mathbb H}^r\times {\mathbb R}^{k_1} \rightarrow {\mathbb H}^r\times {\mathbb R}^{k_2}$ and set ${\mathfrak M}(r;k_1,r;k_2)=
\{ f\in {\cal E}(r;k_1,r;k_2)|f(0)=0 \}$. We denote ${\cal E}(r;k_1,k_2)$ for ${\cal E}(r;k_1,0;k_2)$ and denote ${\mathfrak M}(r;k_1,k_2)$ for ${\mathfrak M}(r;k_1,0;k_2)$.
If $k_2=1$ we write simply ${\cal E}(r;k)$ for ${\cal E}(r;k,1)$ and ${\mathfrak M}(r;k)$ for ${\mathfrak M}(r;k,1)$. Then ${\cal E}(r;k)$ is an ${\mathbb R}$-algebra in the usual way and ${\mathfrak M}(r;k)$ is its unique maximal ideal. We also write ${\cal E}(k)$ for ${\cal E}(0;k)$ and ${\mathfrak M}(k)$ for ${\mathfrak M}(0;k)$. We remark that ${\cal E}(r;k,p)$ is an ${\cal E}(r;k)$-module generated by $p$ elements.
We denote by $J^l(r+k,p)$ the set of $l$-jets at $0$ of germs in ${\cal E}(r;k,p)$. There are natural projections: \[ \pi_l:{\cal E}(r;k,p)\longrightarrow J^l(r+k,p),\ \pi^{l_1}_{l_2}:J^{l_1}(r+k,p)\longrightarrow J^{l_2}(r+k,p)\ (l_1 > l_2). \] We write $j^lf(0)$ for $\pi_l(f)$ for each $f\in {\cal E }(r;k,p)$.
Let $(x,y)=(x_1,\cdots,x_r,y_1,\cdots,y_k)$ be a fixed coordinate system of $({\mathbb H}^r\times {\mathbb R}^k,0)$. We denote by ${\cal B}(r;k)$ the group of diffeomorphism germs $({\mathbb H}^r\times {\mathbb R}^{k},0)\rightarrow ({\mathbb H}^r\times {\mathbb R}^{k},0)$ of the form: \[ \phi(x,y)=(x_1\phi_1^1(x,y),\cdots,x_r\phi_1^r(x,y),\phi_2^1(x,y),\cdots,\phi_2^k(x,y) ). \]
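For instance (an illustrative example of ours), for $r=k=1$ the germ

```latex
\phi(x,y)=\bigl(x\,e^{y},\ y+x\bigr)
```

belongs to ${\cal B}(1;1)$: it has the required form with $\phi_1^1(x,y)=e^{y}$ a unit and $\phi_2^1(x,y)=y+x$, its Jacobian at the origin is invertible, and the factor $x$ in the first component guarantees that $\phi$ preserves the corner $\{x\geq 0\}$ and its boundary $\{x=0\}$.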
We denote by ${\cal B}_n(r;k+n)$ the group of diffeomorphism germs $({\mathbb H}^r\times {\mathbb R}^{k+n},0)\rightarrow ({\mathbb H}^r\times {\mathbb R}^{k+n},0)$ of the form: \[ \phi(x,y,u)=(x_1\phi_1^1(x,y,u),\cdots,x_r\phi_1^r(x,y,u),\phi_2^1(x,y,u),\cdots,\phi_2^k(x,y,u) ,\phi_3^1(u),\ldots,\phi_3^n(u)).\] We denote $\phi(x,y,u)=(x\phi_1(x,y,u),\phi_2(x,y,u),\phi_3(u))$, $\frac{\partial f_0}{\partial y}=(\frac{\partial f_0}{\partial y_1},$ $\cdots,\frac{\partial f_0}{\partial y_k})$, and denote other notations analogously.
\begin{lem}\label{gw1.8:cor}{\rm (cf.\ \cite[Corollary 1.8]{spsing}; see also \cite[Lemma 2.1]{tPKfunct})} Let $B$ be a submodule of ${\cal E}(r;k+n+m)$, let $A_1$ be a finitely generated ${\cal E}(m)$-submodule of ${\cal E}(r;k+n+m)$ generated by $d$ elements, and let $A_2$ be a finitely generated ${\cal E}(n+m)$-submodule of ${\cal E}(r;k+n+m)$. Suppose
\[ {\cal E}(r;k+n+m)=B+A_2+A_1+{\mathfrak M}(m){\cal E}(r;k+n+m)
+{\mathfrak M}(n+m)^{d+1}{\cal E}(r;k+n+m). \]
Then \[{\cal E}(r;k+n+m)=B+A_2+A_1,\] \[ {\mathfrak M}(n+m)^d{\cal E}(r;k+n+m)\subset B+A_2+{\mathfrak M}(m){\cal E}(r;k+n+m).\] \end{lem}
We recall the stabilities of $n$-dimensional unfoldings under {\it reticular ${\cal P}$-${\cal K}$-equivalence}, which are developed in \cite{retLeg}.
We say that $f_0,g_0\in{\cal E}(r;k)$ are {\it reticular ${\cal K}$-equivalent} if there exist $\phi\in{\cal B}(r;k)$ and a unit $a\in {\cal E}(r;k)$ such that $g_0=a\cdot f_0\circ \phi$.
We say that a function germ $f_0\in {\mathfrak M}(r;k)$ is {\it reticular ${\cal K}$-$l$-determined} if every function germ with the same $l$-jet as $f_0$ is reticular ${\cal K}$-equivalent to $f_0$. If $f_0$ is reticular ${\cal K}$-$l$-determined for some $l$, then we say that $f_0$ is reticular ${\cal K}$-finitely determined.\\ \begin{lem}\label{findetc:lm}{\rm (see \cite[Lemma 2.3]{tPKfunct})} Let $f_0(x,y)\in {\mathfrak M}(r;k)$ and suppose that \[ {\mathfrak M}(r;k)^{l+1}\subset {\mathfrak M}(r;k)(\langle f_0, x\frac{\partial f_0}{\partial x}\rangle +{\mathfrak M}(r;k)\langle \frac{\partial f_0}{\partial y}\rangle ) +{\mathfrak M}(r;k)^{l+2}.\] Then $f_0$ is reticular ${\cal K}$-$l$-determined. Conversely, if $f_0(x,y)\in {\mathfrak M}(r;k)$ is reticular ${\cal K}$-$l$-determined, then \[ {\mathfrak M}(r;k)^{l+1}\subset \langle f_0,x\frac{\partial f_0}{\partial x} \rangle_{ {\cal E}(r;k) } +{\mathfrak M}(r;k)\langle \frac{\partial f_0}{\partial y}\rangle. \] \end{lem}
We say that $f,g\in{\cal E}(r;k+n)$ are {\it reticular ${\cal P}$-${\cal K}$-equivalent} if there exist $\Phi\in{\cal B}_n(r;k+n)$ and a unit $\alpha\in {\cal E}(r;k+n)$ such that $g=\alpha\cdot f\circ \Phi$.\\
We say that $f(x,y,u)\in{\mathfrak M}(r;k+n)$ is {\it reticular ${\cal P}$-${\cal K}$-infinitesimally stable} if \[ {\cal E}(r;k+n)= \langle f,x\frac{\partial
f}{\partial x},\frac{\partial f}{\partial y}\rangle_{ {\cal
E}(r;k+n) }+\langle \frac{\partial f}{\partial u}\rangle_{{\cal E}(n)}. \]
We defined several stabilities of unfoldings of function germs in ${\mathfrak M}(r;k)$ under the reticular ${\cal P}$-${\cal K}$-equivalence in \cite{tPKfunct}. We have the following theorem: \begin{tth}\label{pk:tth}{\rm (see \cite[Theorem 2.5]{tPKfunct})} Let $f\in {\mathfrak M}(r;k+n)$ be an unfolding of $f_0\in {\mathfrak M}(r;k)$. Then the following are equivalent. \\ {\rm (1)} $f$ is reticular ${\cal P}$-${\cal K}$-stable.\\ {\rm (2)} $f$ is reticular ${\cal P}$-${\cal K}$-versal.\\ {\rm (3)} $f$ is reticular ${\cal P}$-${\cal K}$-infinitesimally versal. \\ {\rm (4)} $f$ is reticular ${\cal P}$-${\cal K}$-infinitesimally stable. \\ {\rm (5)} $f$ is reticular ${\cal P}$-${\cal K}$-homotopically stable. \end{tth}
We say that $F,G\in{\cal E}(r;k+n+m)$ are {\it reticular $t$-${\cal P}$-${\cal K}$-equivalent} if there exist $\Phi\in{\cal B}(r;k+n+m)$ and a unit $\alpha\in {\cal E}(r;k+n+m)$ such that \\ (1) $\Phi$ can be written in the form: $\Phi(x,y,u,t)=(x\phi_1(x,y,u,t),\phi_2(x,y,u,t),\phi_3(u,t),\phi_4(t))$, \\ (2) $G=\alpha\cdot F\circ \Phi$.\\
We say that $F(x,y,u,t)\in {\mathfrak M}(r;k+n+m)$ is {\it reticular $t$-${\cal P}$-${\cal K}$-infinitesimally stable} if \begin{equation} {\cal E}(r;k+n+m) = \langle F,x\frac{\partial F}{\partial x}, \frac{\partial F}{\partial y} \rangle_{ {\cal E}(r;k+n+m) }+ \langle \frac{\partial F}{\partial u}\rangle_{{\cal E}(n+m)}+\langle \frac{\partial F}{\partial t}\rangle_{{\cal E}(m)}.\label{inftsa} \end{equation}
Several notions of stability for unfoldings of function germs under the reticular $t$-${\cal P}$-${\cal K}$-equivalence in ${\mathfrak M}(r;k+n+m)$ are defined in \cite{tPKfunct}. We have the following theorem:
\begin{tth}\label{mthft:th}{\rm (see \cite[Theorem 3.14]{tPKfunct})} Let $F(x,y,u,t)\in {\mathfrak M}(r;k+n+m)$ be an unfolding of $f(x,y,u)\in {\mathfrak M}(r;k+n)$ and let $f$ be an unfolding of $f_0(x,y)\in {\mathfrak M}(r;k)$. Then the following are equivalent. \\ {\rm (1)} There exists a non-negative number $l$ such that $f_0$ is reticular ${\cal K}$-$l$-determined and $F$ is reticular $t$-${\cal P}$-${\cal K}$-$q$-transversal for $q\geq lm+l+m+1$.\\ {\rm (2)} $F$ is reticular $t$-${\cal P}$-${\cal K}$-stable.\\ {\rm (3)} $F$ is reticular $t$-${\cal P}$-${\cal K}$-versal.\\ {\rm (4)} $F$ is reticular $t$-${\cal P}$-${\cal K}$-infinitesimally versal.\\ {\rm (5)} $F$ is reticular $t$-${\cal P}$-${\cal K}$-infinitesimally stable.\\ {\rm (6)} $F$ is reticular $t$-${\cal P}$-${\cal K}$-homotopically stable. \end{tth} This theorem is used in the proof of Theorem \ref{staLeg:th}.
\section{Reticular Legendrian unfoldings}\label{RLeg:unfo} \hspace*{1em} We consider the $1$-jet bundle $J^1({\mathbb R}^n,{\mathbb R})$ with the canonical $1$-form $\theta$ and the canonical coordinate system $(q,z,p)=(q_1,\ldots,q_n,z,p_1,\ldots,p_n)$, and the natural projection $\pi:J^1({\mathbb R}^n,{\mathbb R})\rightarrow {\mathbb R}^n\times {\mathbb R}$ $((q,z,p)\mapsto (q,z))$. We also consider the big $1$-jet bundle $J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R})$ and the canonical $1$-form $\Theta$ on that space. Let $(t,q)=(t_1,\ldots,t_m,q_1,\ldots,q_n)$ be the canonical coordinate system on ${\mathbb R}^m\times{\mathbb R}^n$ and $(t,q,z,s,p)= (t_1,\ldots,t_m,q_1,\ldots,q_n,z,s_1,\ldots,s_m,p_1,\ldots,p_n)$ be the corresponding coordinate system on $J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R})$. Then the canonical $1$-form $\Theta$ is given by \[ \Theta=dz-\sum_{i=1}^np_idq_i-\sum_{i=1}^ms_idt_i. \] We also have the natural projection \[ \Pi:J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R})\rightarrow {\mathbb R}^m\times{\mathbb R}^n\times {\mathbb R}\ \ (t,q,z,s,p)\mapsto (t,q,z).\]
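As a basic illustration of these structures (a standard fact, recalled here for the reader's convenience): for a smooth function $f$ on ${\mathbb R}^m\times{\mathbb R}^n$, the $1$-jet extension
\[ j^1f(t,q)=\Bigl(t,q,f(t,q),\frac{\partial f}{\partial t}(t,q),\frac{\partial f}{\partial q}(t,q)\Bigr)\in J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}) \]
satisfies
\[ (j^1f)^*\Theta=df-\sum_{i=1}^n\frac{\partial f}{\partial q_i}\,dq_i-\sum_{i=1}^m\frac{\partial f}{\partial t_i}\,dt_i=0, \]
so its image is a Legendrian submanifold, and its image under $\Pi$ is the graph of $f$.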
We next consider contact diffeomorphism germs $C$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n, {\mathbb R}),0)$ of the following form:
\begin{lem}\label{C:lem}{\rm (cf., \cite[Lemma 3.1]{bifsemi})} For any multi-family of contact embedding germs $C_t: (J^1({\mathbb R}^n,{\mathbb R}),0) \rightarrow J^1({\mathbb R}^n,$ ${\mathbb R})\ (C_0(0)=0)$ depending smoothly on $t\in ({\mathbb R}^m,0)$, there exist unique function germs $h_1,\ldots,h_m$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ such that $h_i$ depends only on $t,q,z,s_i,p$ for each $i$ and the map germ $C:(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0) \rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ defined by \[ C(t,q,z,s,p)=(t,q\circ C_t(q,z,p),z\circ C_t(q,z,p), h(t,q,z,s,p),p\circ C_t(q,z,p))\] is a contact diffeomorphism. \end{lem} The function germ $h_i$ is uniquely determined by \begin{equation} h_i(t,q,z,s,p)=\frac{\partial z_t}{\partial t_i}(q,z,p)-p_t(q,z,p) \frac{\partial q_t}{\partial t_i}(q,z,p)+\alpha(t,q,z,p)s_i. \label{hs:eqn} \end{equation} We define $\tilde{L}^0_\sigma=\{(t,q,z,s,p) \in J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})\ |\ q_\sigma=p_{I_r-\sigma}=q_{r+1}=\cdots=q_n=s=z=0,\ q_{I_r-\sigma}\geq 0 \}$ for $\sigma\subset I_r$, and we let ${\mathbb L}=\{(t,q,z,s,p) \in J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})\ |\ q_1p_1=\cdots=q_rp_r=q_{r+1}=\cdots=q_n=s=z=0,\ q_{I_r}\geq 0 \}$ be a representative, as a germ, of the union of $\tilde{L}^0_\sigma$ for all $\sigma\subset I_r$. \begin{dfn}\label{C:dfn}{\rm Let $C$ be a contact diffeomorphism germ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$. We say that $C$ is {\em a ${\cal P}$-contact diffeomorphism} if $C$ has the form: \begin{equation} C(t,q,z,s,p)=(t,q_C(t,q,z,p),z_C(t,q,z,p), h_C(t,q,z,s,p),p_C(t,q,z,p))\label{tconta:eqn} \end{equation} and the function germ $h_C^i$ depends only on $t,q,z,s_i,p$ for each $i=1,\ldots,m$. }\end{dfn} \begin{dfn}{\rm We say that a map germ ${\cal L}:({\mathbb L},0)\rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ is {\em a reticular Legendrian unfolding} if ${\cal L}$ is the restriction of a ${\cal P}$-contact diffeomorphism. We call $\{ {\cal L}(\tilde{L}^{0}_\sigma) \}_{\sigma\subset I_r}$ {\em the unfolded contact regular $r$-cubic configuration of } ${\cal L}$. }\end{dfn}
We note the following: let $\{ \tilde{L}_{\sigma} \}_{\sigma\subset I_r}$ be an unfolded contact regular $r$-cubic configuration
associated with an $m$-parameter family of contact regular $r$-cubic configurations $\{ L_{\sigma,t} \}_{\sigma\subset I_r,t\in ({\mathbb R}^m,0)}$. Then the wavefront $W_\sigma=\Pi(\tilde{L}_{\sigma})$ and the family of wavefronts $W_{\sigma,t}=\pi(L_{\sigma,t})$ are related by \[ W_\sigma=\bigcup_{t\in ({\mathbb R}^m,0)} \{t\}\times W_{\sigma,t} \ \ \ \ \mbox{ for all } \sigma\subset I_r.\]
Let $K,\Psi$ be contact diffeomorphism germs on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$. We say that $K$ is {\em a ${\cal P}$-Legendrian equivalence} if $K$ has the form: \begin{equation} K(t,q,z,s,p)= (\phi_1(t),\phi_2(t,q,z),\phi_3(t,q,z),\phi_4(t,q,z,s,p), \phi_5(t,q,z,s,p))\label{PLequi}. \end{equation} We say that $\Psi$ is {\em a reticular ${\cal P}$-diffeomorphism} if $\pi_t\circ \Psi$ depends only on $t$ and $\Psi$ preserves $\tilde{L}^0_\sigma$ for all $\sigma\subset I_r$. \\
Let $\{ \tilde{L}^i_{\sigma} \}_{\sigma\subset I_r}(i=1,2)$ be unfolded contact regular $r$-cubic configurations on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$. We say that they are {\em ${\cal P}$-Legendrian equivalent} if there exists a ${\cal P}$-Legendrian equivalence $K$
such that $\tilde{L}^2_{\sigma}=K(\tilde{L}^1_{\sigma})$ for all $\sigma\subset I_r$.
In order to understand the meaning of ${\cal P}$-Legendrian equivalence, we observe the following: Let $\{ \tilde{L}^i_{\sigma} \}_{\sigma\subset I_r}(i=1,2)$ be unfolded contact regular $r$-cubic configurations on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ and $\{ L^i_{\sigma,t} \}_{\sigma\subset I_r,t\in ({\mathbb R}^m,0)}$ be the corresponding $m$-parameter families of contact regular $r$-cubic configurations on $J^1({\mathbb R}^n,{\mathbb R})$
respectively. We take smooth path germs $w_i:({\mathbb R}^m,0)\rightarrow (J^1({\mathbb R}^n,{\mathbb R}),0)$ such that $\{ L^i_{\sigma,t} \}_{\sigma\subset I_r}$ are defined at $w_i(t)$ for $i=1,2$. Suppose that there exists a ${\cal P}$-Legendrian equivalence $K$ from $\{ \tilde{L}^1_{\sigma} \}_{\sigma\subset I_r}$ to $\{ \tilde{L}^2_{\sigma} \}_{\sigma\subset I_r}$ of the form (\ref{PLequi}). Let $W_{\sigma,t}^i$ be the wavefront of $L^i_{\sigma,t}$ for $\sigma\subset I_r,\ t\in ({\mathbb R}^m,0)$ and $i=1,2$. We define the family of diffeomorphisms $g_t:({\mathbb R}^n\times {\mathbb R}, \pi(w_1(t)))\rightarrow ({\mathbb R}^n\times {\mathbb R}, \pi(w_2(t)))$ by $g_t(q,z)=(\phi_2(t,q,z),\phi_3(t,q,z))$. Then we have that $g_t(W^1_{\sigma,t})=W^2_{\sigma,\phi_1(t)}$ for all $\sigma\subset I_r,\ t\in ({\mathbb R}^m,0)$.
We also define the equivalence relation among reticular Legendrian unfoldings. Let ${\cal L}_i:({\mathbb L},0)\rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0), (i=1,2)$ be reticular Legendrian unfoldings. We say that ${\cal L}_1$ and ${\cal L}_2$ are {\em ${\cal P}$-Legendrian equivalent} if there exist a ${\cal P}$-Legendrian equivalence $K$ and a reticular ${\cal P}$-diffeomorphism $\Psi$ such that $K\circ {\cal L}_1={\cal L}_2\circ \Psi$.
\begin{lem}\label{exleg:lm}{\rm (cf., \cite[Lemma 3.4]{bifsemi})} Let $\{\tilde{L}_\sigma \}_{\sigma \subset I_r}$ be an unfolded contact regular $r$-cubic configuration on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$. Then there exists a ${\cal P}$-contact diffeomorphism germ $C$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ such that $C$ defines $\{\tilde{L}_\sigma \}_{\sigma \subset I_r}$ and preserves the canonical $1$-form. \end{lem}
We can construct generating families of reticular Legendrian unfoldings. A function germ $F(x,y,t,q,z)\in{\mathfrak M}(r;k+m+n+1)$ is said to be {\em ${\cal P}$-$C$-non-degenerate} if $\frac{\partial F}{\partial x}(0)=\frac{\partial F}{\partial y}(0)=0$ and $x,t,F,\frac{\partial F}{\partial x}, \frac{\partial F}{\partial y}$ are independent on $({\mathbb H}^r\times {\mathbb R}^{k+m+n+1},0)$.\\
A ${\cal P}$-$C$-non-degenerate function germ $F(x,y,t,q,z)\in {\mathfrak M}(r;k+m+n+1)$ is called {\em a generating family} of a reticular Legendrian unfolding ${\cal L}$ if \begin{eqnarray*} {\cal L}(\tilde{L}^0_{\sigma}) = \{ (t,q,z,\frac{\partial F}{\partial t}/(-\frac{\partial F}{\partial z}), \frac{\partial F}{\partial q}/(-\frac{\partial F}{\partial z}))\in
(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)|\hspace{1cm}\\ \hspace{2cm} x_\sigma=F=\frac{\partial F}{\partial x_{I_r-\sigma}}= \frac{\partial F}{\partial y}=0,x_{I_r-\sigma}\geq 0\} \mbox{ for all }\sigma\subset I_r. \end{eqnarray*}
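For example (our own illustration in the corner-free case $r=0$, with $k=m=n=1$), the germ $F(y,t,q,z)=-z+y^3+ty+yq$ is ${\cal P}$-$C$-non-degenerate, and here $-\frac{\partial F}{\partial z}=1$ and $\frac{\partial F}{\partial t}=\frac{\partial F}{\partial q}=y$. The conditions $F=\frac{\partial F}{\partial y}=0$ give $q=-3y^2-t$ and $z=y^3+ty+yq=-2y^3$, so
\[ {\cal L}(\tilde{L}^0)=\{(t,q,z,s,p)\ |\ q=-3y^2-t,\ z=-2y^3,\ s=p=y,\ y\in({\mathbb R},0)\}. \]
The wavefront $\Pi({\cal L}(\tilde{L}^0))$ is the family of semicubical cusps $W_t=\{(q,z)=(-3y^2-t,-2y^3)\}$, translated along $q$ as $t$ varies, in accordance with the relation $W_\sigma=\bigcup_t\{t\}\times W_{\sigma,t}$ noted above.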
By Lemma \ref{exleg:lm} we may assume that an extension of a reticular Legendrian unfolding preserves the canonical $1$-form. \begin{lem} {\rm (cf., \cite[Lemma 3.5]{bifsemi})} Let $C$ be a ${\cal P}$-contact diffeomorphism germ $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)\rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ which preserves the canonical $1$-form. If the map germ \[ (T,Q,Z,S,P)\rightarrow (T,Q,Z,s_C(T,Q,Z,S,P),p_C(T,Q,Z,S,P))\] is a diffeomorphism, then there exists a function germ $H(T,Q,p)\in {\mathfrak M}(2n+m)^2$ such that the canonical relation $P_C$ associated with $C$ has the form: \begin{eqnarray} P_C=\{ (T,Q,Z,-\frac{\partial H}{\partial T}(T,Q,p)+s, -\frac{\partial H}{\partial Q}, T,-\frac{\partial H}{\partial p}, H-\langle \frac{\partial H}{\partial p},p\rangle +Z, s,p)\},\label{cano:eqn} \end{eqnarray} and the function germ $F\in {\mathfrak M}(r;n+m+n+1)$ defined by $F(x,y,t,q,z)=-z+H(t,x,0,y)+\langle y,q\rangle$
is a generating family of the reticular Legendrian unfolding $C|_{{\mathbb L}}$. \end{lem}
We have the following theorem which gives the relations between reticular Legendrian unfoldings and their generating families. \begin{tth}\label{UCgf:th}{\rm (cf., \cite[Theorem 3.6]{bifsemi})} {\rm (1)} For any reticular Legendrian unfolding ${\cal L}:({\mathbb L},0) \rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$, there exists a function germ $F(x,y,t,q,z)\in{\mathfrak M}(r;k+m+n+1)$ which is a generating family of ${\cal L}$.\\ {\rm (2)} For any ${\cal P}$-$C$-non-degenerate function germ $F(x,y,t,q,z)\in{\mathfrak M}(r;k+m+n+1)$ with $\frac{\partial F}{\partial t}(0)=\frac{\partial F}{\partial q}(0)=0$, there exists a reticular
Legendrian unfolding ${\cal L}:({\mathbb L},0)\rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ of which $F$ is a generating family.\\ {\rm (3)} Two reticular Legendrian unfoldings are ${\cal P}$-Legendrian equivalent if and only if their generating families are stably reticular $t$-${\cal P}$-${\cal K}$-equivalent. \end{tth}
\section{Stabilities of reticular Legendrian unfoldings} \hspace*{1em} Let $U$ be an open set in $J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})$. We consider contact embedding germs $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0) \rightarrow J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})$ and contact embeddings $U\rightarrow J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})$. Let $(T,Q,Z,S,P)$ and $(t,q,z,s,p)$ be canonical coordinates of the source space and the target space respectively. We use the following notation: let $\imath:(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})\cap \{Z=0 \},0)\rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ be the inclusion map on the source space, \begin{eqnarray*} C_T(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0) & = &
\{ C| C \mbox{ is a ${\cal P}$-contact embedding germ}\\ & & \hspace{1cm} (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)\rightarrow J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})\}, \\ C_T^\Theta (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0) & = & \{ C\in C_T(J^1({\mathbb R}^m\times {\mathbb R}^n
,{\mathbb R}),0)|\ C^*\Theta=\Theta \}, \\ C_T^{Z} (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)
& = & \{ C\circ\imath\ |C \in C_T(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0) \},\\ C_T^{\Theta,Z} (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)
& = & \{ C\circ\imath\ | C \in C_T^\Theta (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0) \}.
\end{eqnarray*}
Let $V=U\cap \{Z=0\}$ and $\tilde{\imath}:V\rightarrow U$ be the inclusion map. \begin{eqnarray*} C_T(U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})) & = & \{ \tilde{C}:U\rightarrow
J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})| \\
& & \hspace{1cm} \tilde{C} \mbox{ is a contact embedding of the form (\ref{tconta:eqn})}\},\\
C_T^\Theta (U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})) & = & \{ \tilde{C}\in
C_T(U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}))\ | \tilde{C}^*\Theta=\Theta \},\\
C_T^Z (V,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})) & = & \{ \tilde{C}\circ
\tilde{\imath}\ |\tilde{C}\in C_T(U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}) )\},\\
C_T^{\Theta,Z} (V,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})) & = &
\{ \tilde{C}\circ
\tilde{\imath}\ |\tilde{C}\in C_T^\Theta (U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})) \}. \end{eqnarray*} \begin{dfn}{\rm We define stabilities of reticular Legendrian unfoldings. Let ${\cal L}$ be a reticular Legendrian unfolding.\\ {\bf Stability}: We say that ${\cal L}$ is {\it stable} if the following condition holds: Let $C^{0}\in C_T(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ be ${\cal P}$-contact embedding germs
such that $C^{0}|_{{\mathbb L}}={\cal L}$ and $\tilde{C}^{0}\in C_T(U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}))$ be representatives of $C^{0}$. Then there exist open neighborhoods $N_{\tilde{C}^{0}}$ of $\tilde{C}^{0}$ in $C^\infty$-topology such that for any $\tilde{C}\in N_{\tilde{C}^{0}}$, there exist points $x_0=(T,0,\ldots,0,P^0_{r+1},\ldots,P^0_n)\in U$ such that the reticular Legendrian unfolding ${\cal L}_{x_0}$ and ${\cal L}$ are ${\cal P}_{(m)}$-Legendrian equivalent, where the reticular Legendrian unfolding ${\cal L}_{x_0}$ is defined by \[ x=(T,Q,Z,S,P)\mapsto \tilde{C}(x_0+x)-\tilde{C}(x_0)+(0,0,P^0_{r+1}Q_{r+1}+\cdots +P^0_nQ_n,0,0). \]
{\bf Homotopical stability}: A one-parameter family of ${\cal P}$-contact embedding germs $\bar{C}:(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}) \times {\mathbb R},(0,0))\rightarrow J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})\ ((T,Q,Z,S,P,\tau) \mapsto C_\tau(T,Q,Z,S,P))$ is called a {\em ${\cal P}$-contact
deformation} of ${\cal L}$ if $C_0|_{{\mathbb L}}={\cal L}$. A map germ $\bar{\Psi}:(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}) \times {\mathbb R},(0,0)) \rightarrow (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)((T,Q,Z,S,P,\tau) \mapsto \Psi_\tau(T,Q,Z,S,P))$ is called a {\em one-parameter
deformation of reticular ${\cal P}$-diffeomorphisms} if $\Psi_0=id_{J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})}$ and $\Psi_\tau$
is a reticular ${\cal P}$-diffeomorphism for all $\tau$ around $0$. We say that ${\cal L}$ is {\it homotopically stable} if for any ${\cal P}$-contact deformation $\bar{C}=\{ C_\tau\}$ of ${\cal L}$, there exist
a one-parameter family of ${\cal P}$-Legendrian
equivalences $\bar{K}=\{ K_\tau \}$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ with $K_0=id$ of the form \begin{equation}K_\tau(t,q,z,s,p)=(\phi^1_\tau (t),\phi^2_\tau (t,q,z),\phi^3_\tau (t,q,z), \phi^{4}_\tau (t,q,z,s,p),\phi^{5}_\tau (t,q,z,s,p))\label{Khomo:eqn} \end{equation} and a one-parameter deformation of reticular ${\cal P}$-diffeomorphisms $\bar{\Psi}=\{ \Psi_\tau \}$ such that $C_\tau=K_\tau\circ C_0\circ \Psi_\tau$ for $\tau$ around $0$.\\
{\bf Infinitesimal stability}: Let $C\in C_T(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ be a ${\cal P}$-contact diffeomorphism germ. We say that a vector field $v$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ along $C$ is {\em an infinitesimal ${\cal P}$-contact transformation} of $C$ if there exists a ${\cal P}$-contact deformation $\bar{C}=\{C_\tau\}$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ such that $C_0=C$ and
$\frac{dC_\tau}{d\tau}|_{\tau =0}=v$. We say that a vector field $\xi$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ is {\em an infinitesimal reticular ${\cal P}$-diffeomorphism} if there exists a one-parameter deformation of reticular ${\cal P}$-diffeomorphisms $\bar{\Psi}=\{ \Psi_\tau \}$
such that $\frac{d\Psi_\tau}{d\tau }|_{\tau =0}=\xi$. We say that a vector field $\eta$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),w)$ is {\em an infinitesimal ${\cal P}$-Legendrian equivalence} if there exists a
one-parameter family of ${\cal P}$-Legendrian equivalences $\bar{K}=\{K_\tau\}$ such that $K_0=id_{J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R})}$
and $\frac{dK_\tau}{d\tau}|_{\tau =0}=\eta$. We say that ${\cal L}$ is
{\it infinitesimally stable} if for any extension $C$ of ${\cal L}$ and any infinitesimal ${\cal P}$-contact transformation $v$ of $C$, there exist infinitesimal reticular ${\cal P}$-diffeomorphisms $\xi$ and infinitesimal ${\cal P}$-Legendrian equivalences $\eta$ of the form \begin{eqnarray} \eta(t,q,z,s,p)=a_1(t)\frac{\partial}{\partial t}+ a_2(t,q,z)\frac{\partial}{\partial q}+a_3(t,q,z)\frac{\partial}{\partial z} \hspace{2cm}\nonumber \\ + a_4(t,q,z,s,p)\frac{\partial}{\partial s}+a_5(t,q,z,s,p) \frac{\partial}{\partial p} \label{etaform} \end{eqnarray} such that $v=C_*\xi+\eta\circ C$. }\end{dfn}
We may take an extension of a reticular Legendrian unfolding ${\cal L}$ by an element of $C^\Theta_T (J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ by Lemma \ref{exleg:lm}. Then, as in the remark after the definition of the stability of reticular Legendrian maps in \cite[p.121]{retLeg}, we may consider the following other definitions of stabilities of reticular Legendrian unfoldings: (1) The definition obtained by replacing $C_T(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ and $C_T(U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}))$ in the original definition by $C_T^\Theta(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ and $C_T^\Theta(U,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}))$ respectively. (2) The definition obtained by replacing them by $C_T^Z(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ and $C_T^Z(V,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}))$ respectively. (3) The definition obtained by replacing them by $C_T^{\Theta,Z}(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ and $C_T^{\Theta,Z}(V,J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}))$ respectively, where $V=U\cap \{Z=0\}$.\\
Then we have the following lemma, which is proved by the same method as the proof of \cite[Lemma 7.2]{retLeg}. \begin{lem}\label{sta:lm}{\rm (cf., \cite[Lemma 4.3]{bifsemi})} The original definition and the other three definitions of stabilities of reticular Legendrian unfoldings are all equivalent. \end{lem} By this lemma, we may choose an extension of a reticular Legendrian unfolding from among all of
$C_T(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$, $C_T^\Theta(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$,
$C_T^Z(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$, and $C_T^{\Theta,Z}(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$.\\
We say that a function germ $H$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ is {\em ${\cal P}$-fiber preserving} if $H$ has the form $H(t,q,z,s,p)=\sum_{j=1}^nh_j(t,q,z)p_j+h_0(t,q,z)+\sum_{i=1}^ma_i(t)s_i$.
\begin{lem}\label{infsta:t-Leglem}{\rm (cf., \cite[Lemma 4.4]{bifsemi})} Let $C\in C_T(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$. Then the following hold: {\rm (1)} A vector field germ $v$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ along $C$
is an infinitesimal ${\cal P}$-contact transformation of $C$ if and only if there exists a function germ $f$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ such that $f$ does not depend on $s$ and $v=X_f\circ C$.\\ {\rm (2)} A vector field germ $\eta$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ is an infinitesimal ${\cal P}$-Legendrian equivalence
if and only if there exists a ${\cal P}$-fiber preserving function germ $H$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ such that $\eta=X_H$.\\ {\rm (3)} A vector field $\xi$ on $(J^1({\mathbb R}^m\times {\mathbb R}^n,{\mathbb R}),0)$ is an infinitesimal reticular ${\cal P}$-diffeomorphism if and only if there exists a function germ $g\in B$ such that $\xi=X_g$, where $B=\langle q_1p_1,\ldots,q_rp_r,$ $ q_{r+1},\ldots,q_n,z\rangle_{{\cal E}_{t,q,z,p}} +\langle s \rangle_{{\cal E}_t}$. \end{lem}
For the stabilities of reticular Legendrian unfoldings defined above, we have the following theorem:
\begin{tth}\label{staLeg:th}{\rm (cf., \cite[Theorem 4.6]{bifsemi})} Let ${\cal L}$ be a reticular Legendrian unfolding with a generating family $F(x,y,t,q,z)$. Then the following are all equivalent.\\ {\rm (u)} $F$ is a reticular $t$-${\cal P}$-${\cal K}$-stable unfolding of
$F|_{t=0}$.\\ {\rm (hs)} ${\cal L}$ is homotopically stable.\\ {\rm (is)} ${\cal L}$ is infinitesimally stable.\\ {\rm (a)} ${\cal E}_{t,q,p}= B_0+ \langle 1,p_1\circ C',\ldots,p_n\circ C'\rangle_{(\Pi\circ C')^*{\cal E}_{t,q,z}}+
\langle s\circ C'\rangle_{{\cal E}_t}$, where $C$ is an extension of ${\cal L}$, $C'=C|_{z=s=0}$ and $B_0=\langle q_1p_1,\ldots,q_rp_r, q_{r+1},\ldots,q_n\rangle_{{\cal E}_{t,q,p}}$. \end{tth}
\section{Genericity of reticular Legendrian unfoldings} \hspace*{1em} In order to give a generic classification of reticular Legendrian unfoldings, we reduce our investigation to finite dimensional jet spaces of ${\cal P}$-contact diffeomorphism germs.
\begin{dfn}\label{l,l+1detLeg}{\rm Let ${\cal L}$ be a reticular Legendrian unfolding. We say that ${\cal L}$ is $l$-determined if the following condition holds: For any extension $C\in C_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ of ${\cal L}$, the reticular Legendrian unfolding
$C'|_{{\mathbb L}}$ and ${\cal L}$ are ${\cal P}$-Legendrian equivalent for all $C'\in C_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ satisfying $j^lC(0)=j^lC'(0)$. }\end{dfn}
As in Lemma \ref{sta:lm}, we may consider the following other definitions of finite determinacy of reticular Legendrian unfoldings:\\ (1) The definition obtained by replacing $C_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ by $C^\Theta_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$.\\ (2) The definition obtained by replacing $C_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ by $C^Z_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$.\\ (3) The definition obtained by replacing $C_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ by $C^{\Theta,Z}_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$.\\ Then the following holds by \cite[p.341 Proposition 5.6]{generic}: \begin{prop}{\rm (cf., \cite[Proposition 5.2]{bifsemi})} Let ${\cal L}$ be a reticular Legendrian unfolding. Then \\ {\rm (A)} If ${\cal L}$ is $l$-determined in the original definition, then ${\cal L}$ is $l$-determined in the definition {\rm (1)}.\\ {\rm (B)} If ${\cal L}$ is $l$-determined in the definition {\rm (1)}, then ${\cal L}$ is $l$-determined in the definition {\rm (3)}.\\ {\rm (C)} If ${\cal L}$ is $(l+1)$-determined in the definition {\rm (3)}, then ${\cal L}$ is $l$-determined in the definition {\rm (2)}.\\ {\rm (D)} If ${\cal L}$ is $l$-determined in the definition {\rm (2)}, then ${\cal L}$ is $l$-determined in the original definition. \end{prop}
\begin{tth}\label{n+1det:Leg}{\rm (cf., \cite[Lemma 5.3]{bifsemi})} Let ${\cal L}:({\mathbb L},0) \rightarrow (J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ be a reticular Legendrian unfolding. If ${\cal L}$ is infinitesimally stable then ${\cal L}$ is $(n+m+3)$-determined. \end{tth} {\em Proof.} It is enough to prove that ${\cal L}$ is $(n+m+2)$-determined in the sense of the definition {\rm (3)} above. Let $C\in C^{\Theta,Z}_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ be an extension of ${\cal L}$. We may assume that $P_C$ has the form \[P_C=\{(T,Q,0,-\frac{\partial H}{\partial T}(T,Q,p)+s, -\frac{\partial H}{\partial Q},T,-\frac{\partial H}{\partial p}, H-\langle \frac{\partial H}{\partial p},p\rangle ,s,p)\} \] for some function germ $H(T,Q,p)\in {\mathfrak M}(2n+m)^2$. Then the function germ $F(x,y,t,q,z)=-z+H_0(x,y,t)+\langle y,q\rangle \in {\mathfrak M}(r;n+m+n+1)$ is a generating family of ${\cal L}$, where $H_0(x,y,t)=H(t,x,0,y)\in {\mathfrak M}(r;n+m)^2$. We have that
$F$ is a reticular $t$-${\cal P}$-${\cal K}$-stable unfolding of $f(x,y,q,z):=-z+H_0(x,y,0)+\langle y,q\rangle \in {\mathfrak M}(r;n+n+1)$. This means that \[{\cal E}(r;n+1+n+m) =\langle F,x\frac{\partial
F}{\partial x},\frac{\partial
F}{\partial y}\rangle_{ {\cal
E}(r;n+1+n+m) } +\langle 1,\frac{\partial F}{\partial q}\rangle_{{\cal E}(1+n+m)}+ \langle \frac{\partial F}{\partial t}\rangle_{{\cal E}(m)}. \] By the restriction of this to $q=z=0$, we have that \begin{equation} {\cal E}(r;n+m)=\langle H_0,x\frac{\partial
H_0}{\partial x},\frac{\partial
H_0}{\partial y}\rangle_{ {\cal
E}(r;n+m) } +\langle 1,y_1,\ldots,y_n,\frac{\partial
H_0}{\partial t}\rangle_{{\cal E}(m)}.\label{rest:eqn} \end{equation} This means that \begin{equation} {\mathfrak M}(r;n+m)^{n+m+1}\subset \langle H_0,x\frac{\partial H_0}{\partial x}, \frac{\partial H_0}{\partial y} \rangle_{{\cal E}(r;n+m)}+{\mathfrak M}(m){\cal E}(r;n+m). \label{n+3det:leg} \end{equation} Let $C'\in C^{\Theta,Z}_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ satisfying $j^{n+m+2}C(0)=j^{n+m+2}C'(0)$ be given. There exists a function germ $H'(T,Q,p)\in {\mathfrak M}(2n+m)^2$ such that \[ P_{C'}=\{(T,Q,0,-\frac{\partial H'}{\partial T}(T,Q,p)+s, -\frac{\partial H'}{\partial Q},T,-\frac{\partial H'}{\partial p}, H'-\langle \frac{\partial H'}{\partial p},p\rangle ,s,p)\}.\] Since $H=z-qp$ on $P_C$ and $H'=z-qp$ on $P_{C'}$, we have that $j^{n+m+2}H_0(0)=j^{n+m+2}H_0'(0)$, where $H_0'(x,y,t)=H'(t,x,0,y)\in {\mathfrak M}(r;n+m)^2$. By (\ref{n+3det:leg}) we have that \[ {\mathfrak M}(r;n)^{n+m+1}\subset \langle H_0(x,y,0),x\frac{\partial H_0}{\partial x} (x,y,0), \frac{\partial H_0}{\partial y}(x,y,0) \rangle_{{\cal E}(r;n)} \] and this means that $H_0(x,y,0)$ is reticular ${\cal K}$-$(n+m+2)$-determined by Lemma \ref{findetc:lm}. Therefore we may assume that
$H_0|_{t=0}=H'_0|_{t=0}$. It follows that $H_0-H_0'\in {\mathfrak M}(m){\mathfrak M}(r;n+m)^{n+m+2}$. Then the function germ $G(x,y,t,q,z)=-z+H_0'(x,y,t)+ \langle y,q\rangle \in {\mathfrak M}(r;n+1+n+m)$
is a generating family of $C'|_{{\mathbb L}}$.
We define the function germ $E_{\tau_0}(x,y,t,\tau)\in {\cal E}(r;n+m+1)$ by $E_{\tau_0}(x,y,t,\tau)=(1-\tau-\tau_0)H_0(x,y,t)+(\tau+\tau_0)H_0'(x,y,t)$ for $\tau_0\in [0,1]$. By (\ref{rest:eqn}) and (\ref{n+3det:leg}), we have that \begin{equation} {\mathfrak M}(r;n+m)^{n+m+2} \subset \langle H_0,x\frac{\partial H_0}{\partial x} \rangle_{{\cal E}(r;n+m)}+{\mathfrak M}(r;n+m)\langle \frac{\partial H_0}{\partial y}\rangle +{\mathfrak M}(m)\langle 1,y,\frac{\partial H_0}{\partial t}\rangle.\label{n+3eqn:eqn} \end{equation} Then we have that \begin{eqnarray*} & & {\mathfrak M}_t{\mathfrak M}_{x,y,t}^{n+m+2}{\cal E}_{x,y,t,\tau} \\ & \subset & {\mathfrak M}_{t,\tau} \langle E_{\tau_0},x\frac{\partial E_{\tau_0}}{\partial x}\rangle_{{\cal E}_{x,y,t,\tau}}+{\mathfrak M}_{t,\tau}{\mathfrak M}_{x,y,t,\tau}\langle\frac{\partial E_{\tau_0}}{\partial y} \rangle \\ & & \hspace{5cm}+{\mathfrak M}_{t,\tau}^2 \langle 1,y,\frac{\partial
E_{\tau_0}}{\partial t}\rangle +{\mathfrak M}_{t,\tau}{\mathfrak M}_t {\mathfrak M}_{x,y,t}^{n+m+2}{\cal E}_{x,y,t,\tau}. \end{eqnarray*} By the Malgrange preparation theorem we have that \begin{eqnarray*} \frac{\partial E_{\tau_0}}{\partial \tau}\in {\mathfrak M}_t{\mathfrak M}_{x,y,t}^{n+m+2}\subset {\mathfrak M}_t{\mathfrak M}_{x,y,t}^{n+m+2}{\cal E}_{x,y,t,\tau}\hspace{5cm} \\ \subset {\mathfrak M}_{t,\tau}( \langle E_{\tau_0},x\frac{\partial E_{\tau_0}}{\partial x}\rangle_{{\cal E}_{x,y,t,\tau}}+ {\mathfrak M}_{x,y,t,\tau}\langle\frac{\partial E_{\tau_0}}{\partial y} \rangle) +{\mathfrak M}_{t,\tau}^2 \langle 1,y,\frac{\partial
E_{\tau_0}}{\partial t}\rangle. \end{eqnarray*} for $\tau_0\in [0,1]$. Then there exist $\Phi(x,y,t)\in {\cal B}_m(r;n+m)$, a unit $a\in {\cal E}(r;n+m)$, and $b_1(t),\ldots,b_n(t),c(t)\in {\mathfrak M}(m)$ such that\\ (1) $\Phi$ has the form: $ \Phi(x,y,t)=(x\phi_1(x,y,t),\phi_2(x,y,t),\phi_3(t)) $,\\ (2) $H_0(x,y,t)=a(x,y,t)\cdot H_0'\circ \Phi(x,y,t)+\sum_{i=1}^ny_ib_i(t)+c(t)$ for $(x,y,t)\in ({\mathbb H}^r\times {\mathbb R}^{n+m},0)$.
We define the reticular $t$-${\cal P}$-${\cal K}$-isomorphism $(\Psi,d)$ by \[ \Psi(x,y,t,q,z)=(x\phi_1(x,y,t),\phi_2(x,y,t),\phi_3(t),q(1-b(t)),z), d(x,y,t,q,z)=a(x,y,t). \]
We set $G':=d\cdot G\circ\Psi\in {\mathfrak M}(r;n+1+n+m)$. Since $\frac{\partial E_{\tau_0}}{\partial \tau}|_{t=0}=0$, we have that
$a(x,y,0)=1$ and $\Phi(x,y,0)=(x,y,0)$. Therefore we have that $G'|_{t=0}=f$. Then $F$ and $G'$ are reticular $t$-${\cal P}$-${\cal K}$-infinitesimally versal unfoldings of $F|_{t=0}$. Since $G$ and $G'$ are reticular $t$-${\cal P}$-${\cal K}$-equivalent, it follows that $F$ and $G$ are reticular $t$-${\cal P}$-${\cal K}$-equivalent. Therefore ${\cal L}$ and $C'|_{{\mathbb L}}$ are ${\cal P}$-Legendrian equivalent.
$\blacksquare$\\
Let ${\cal L}$ be a stable reticular Legendrian unfolding. We say that ${\cal L}$ is {\it simple} if there exists a representative $\tilde{C}\in C_T( U,J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}))$ of
an extension of ${\cal L}$ such that
$\{ \tilde{C}_x| x\in U\}$ is covered by finitely many orbits $[C_1],\ldots,[C_l]$ for
some ${\cal P}$-contact embedding germs $C_1,\ldots,C_l\in C_T(U,J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}))$.
\begin{lem}\label{simpleLeg:lem}{\rm (cf., \cite[Proposition 5.5]{bifsemi})} A stable reticular Legendrian unfolding ${\cal L}$ is simple if and only if for a generating family $F(x,y,t,q,z)\in {\mathfrak M}(r;k+m+n+1)$ of ${\cal L}$, $f(x,y)=F(x,y,0,0)\in {\mathfrak M}(r;k)^2$ is a ${\cal K}$-simple singularity. \end{lem}
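For orientation we recall the standard list of ${\cal K}$-simple singularities (included here as background; up to ${\cal K}$-equivalence and sign): in the corner-free case $r=0$ these are exactly the germs
\[ A_l:\ y_1^{l+1}+Q\ (l\geq 1),\qquad D_l:\ y_1^2y_2+y_2^{l-1}+Q'\ (l\geq 4), \]
\[ E_6:\ y_1^3+y_2^4+Q',\qquad E_7:\ y_1^3+y_1y_2^3+Q',\qquad E_8:\ y_1^3+y_2^5+Q', \]
where $Q=y_2^2+\cdots+y_k^2$ and $Q'=y_3^2+\cdots+y_k^2$.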
Let $J^l(2n+2m+1,2n+2m+1)$ be the set of $l$-jets of map germs from $(J^1({\mathbb R}^m \times{\mathbb R}^n,{\mathbb R}),0)$ to $(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ and $tC^l(2n+2m+1)$ be the immersed manifold in $J^l(2n+2m+1,2n+2m+1)$ which consists of $l$-jets of ${\cal P}$-contact embedding germs. Let $L^l(2n+2m+1)$ be the Lie group which consists of $l$-jets of diffeomorphism germs on $(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$.
We consider the Lie subgroup $rtLe^l(2n+2m+1)$ of $L^l(2n+2m+1)\times L^l(2n+2m+1)$ which consists of
$l$-jets of reticular ${\cal P}$-diffeomorphisms on the source space and $l$-jets of ${\cal P}$-Legendrian equivalences of $\Pi$ at $0$: \begin{eqnarray*} rtLe^l(2n+2m+1)=\{ (j^l\Psi(0),j^lK(0))\in
L^l(2n+2m+1)\times L^l(2n+2m+1)\ | \\
\Psi
\mbox{ is a reticular} \mbox{ ${\cal P}$-diffeomorphism on } (J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0), \\ K \mbox{ is a ${\cal P}$-Legendrian equivalence of } \Pi \}. \end{eqnarray*}
The group $rtLe^l(2n+2m+1)$ acts on $J^l(2n+2m+1,2n+2m+1)$ and $tC^l(2n+2m+1)$ is invariant under this action.
Let $C$ be a ${\cal P}$-contact diffeomorphism germ on $(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$ and set
$z=j^lC(0)$, ${\cal L}=C|_{{\mathbb L}}$. We denote the orbit $rtLe^l(2n+2m+1)\cdot z$ by $[z]$. Then \[
[z]=\{ j^lC'(0)\in tC^l(2n+2m+1)\ | \ {\cal L} \mbox{ and }
C'|_{{\mathbb L}} \mbox{ are ${\cal P}$-Legendrian equivalent} \}. \]
For $\tilde{C}\in C_T(U,J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}))$, we define
the continuous map $j^l_0\tilde{C}:U\rightarrow tC^l(2n+2m+1)$ by sending $x$ to the $l$-jet of $\tilde{C}_{x}$. For $C\in C_T(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)$, we define $j^l_0C:(J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}),0)\rightarrow
tC^l(2n+2m+1)$ analogously. \begin{tth}\label{stabletrans_tleg2:th}{\rm (cf., \cite[Theorem 5.4]{bifsemi})} Let ${\cal L}$ be a reticular Legendrian unfolding. Let $C$ be an extension of ${\cal L}$ and $l\geq (n+m+1)^2$. Then the following are equivalent:\\ {\rm (s)} ${\cal L}$ is stable.\\ {\rm (t)} $j^l_0C$ is transversal to $[j^l_0C(0)]$.\\ {\rm (a')} ${\cal E}_{t,q,p}= B_0+ \langle 1,p_1\circ C',\ldots,p_n\circ C'\rangle_{(\Pi\circ C')^*{\cal E}_{t,q,z}}+
\langle s\circ C'\rangle_{{\cal E}_t}+{\mathfrak M}_{t,q,p}^l$, where $C'=C|_{z=s=0}$ and $B_0=\langle q_1p_1,\ldots,q_{r}p_{r}, q_{r+1},$ $\ldots,q_n\rangle_{{\cal E}_{t,q,p}}$,\\ {\rm (a)} ${\cal E}_{t,q,p}= B_0+ \langle 1,p_1\circ C',\ldots,p_n\circ C'\rangle_{(\Pi\circ C')^*{\cal E}_{t,q,z}}+ \langle s\circ C'\rangle_{{\cal E}_t}$,\\ {\rm (is)} ${\cal L}$ is infinitesimally stable,\\ {\rm (hs)} ${\cal L}$ is homotopically stable,\\ {\rm (u)} A generating family $F$ of ${\cal L}$ is reticular
$t$-${\cal P}$-${\cal K}$-stable unfolding of $F|_{t=0}$. \end{tth} {\it Proof}. We prove only (a')$\Rightarrow$(a). By restricting (a') to $t=0$ we have that:
\[{\cal E}_{q,p}= B_1+ \langle 1,p_1\circ C'',\ldots,p_n\circ C''\rangle_{(\Pi\circ C'')^*{\cal E}_{t,q,z}}
+ \langle s\circ C''\rangle_{{\mathbb R}}+ {\mathfrak M}_{q,p}^l, \]
where $C''=C'|_{t=0}$ and $B_1=B_0|_{t=0}$. Then we have that
\[{\cal E}_{q,p}= B_1+ (\Pi\circ C'')^* {\mathfrak M}_{t,q,p}{\cal E}_{q,p}
+ \langle 1,p_1\circ C'',\ldots,p_n\circ C'', s\circ C''\rangle_{{\mathbb R}}+{\mathfrak M}_{q,p}^l. \]
It follows that \[ {\mathfrak M}_{q,p}^{n+m+1}\subset B_1+(\Pi\circ C'')^* {\mathfrak M}_{t,q,p}{\cal E}_{q,p}.\] Therefore \[ {\mathfrak M}_{t,q,p}^{n+m+1}\subset B_0+ (\Pi\circ C')^* {\mathfrak M}_{t,q,p}{\cal E}_{t,q,p}+ {\mathfrak M}_{t}{\cal E}_{t,q,p},\] and we have that
\[{\mathfrak M}_{t,q,p}^l\subset({\mathfrak M}_{t,q,p}^{n+m+1})^{n+m+1}
\subset B_0+(\Pi\circ C')^* {\mathfrak M}_{t,q,p}^{n+m+1}{\cal E}_{t,q,p}+ {\mathfrak M}_{t}{\cal E}_{t,q,p}. \]
It follows that \[ {\cal E}_{t,q,p}= B_0+ \langle 1,p_1\circ C',\ldots,p_n\circ C'\rangle_{(\Pi\circ C')^*{\cal E}_{t,q,z}}+ \langle s\circ C'\rangle_{{\cal E}_t}+ (\Pi\circ C')^* {\mathfrak M}_{t,q,p}^{n+m+1}{\cal E}_{t,q,p}+ {\mathfrak M}_{t}{\cal E}_{t,q,p}. \] This means (a) by Lemma \ref{gw1.8:cor}.\\
\begin{tth}{\rm (cf., \cite[Theorem 5.6]{bifsemi})} Let $r=0,n\leq 5,m=2,3$ or $r=1,n\leq 3,m=2,3$. Let $U$ be a neighborhood of $0$ in $J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R})$. Then there exists a residual set $O\subset C^\Theta_T(U,J^1({\mathbb R}^m\times{\mathbb R}^n,{\mathbb R}))$
such that for any $\tilde{C}\in O$ and $w\in U$, the reticular Legendrian unfolding $\tilde{C}_w|_{{\mathbb L}}$ is stable and has a generating family which is stably reticular $t$-${\cal P}$-${\cal K}$-equivalent to one of the types in the classification list below. \end{tth}
Let $F(x,y,u,t)\in {\mathfrak M}(r;k+n+2)$ be a given reticular $t$-${\cal P}$-${\cal K}$-stable unfolding of $f(x,y)$ ($r=0,n\leq 6$ or $r=1,n\leq 4$). Since $f$ is a simple singularity, we may assume that $f$ has the normal form of $A,D,E$ ($r=0$) or $B,C,F$ ($r=1$). By a method analogous to that of \cite[p.200]{tPKfunct}, we may assume that $F$ has the form \[ F(x,y,u,t)=f(x,y)+a(u_l,\ldots ,u_n,t)\varphi_0(x,y)+u_1\varphi_1(x,y)+\cdots + u_{l-1}\varphi_{l-1}(x,y), \] where the function germ $f(x,y)+t\varphi_0(x,y)+u_1\varphi_1(x,y)+\cdots + u_{l-1}\varphi_{l-1}(x,y)\in {\mathfrak M}(r;k+n+1)$ is a reticular $t$-${\cal P}$-${\cal K}$-universal unfolding. Since $F$ is also a reticular $t$-${\cal P}$-${\cal K}$-universal unfolding of $f$, we have that \[ {\cal E}_{x,y,u,t}=\langle f,x\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}\rangle_{{\cal E}_{x,y,u,t}} +\langle \varphi_1,\ldots,\varphi_{l-1}\rangle_{{\cal E}_{u,t}}+ \varphi_0\Big(\langle \frac{\partial a}{\partial u_l},\ldots,\frac{\partial a}{\partial u_n} \rangle_{{\cal E}_{u_l,\ldots,u_n,t}} +\langle \frac{\partial a}{\partial t}\rangle_{{\cal E}_{t}}\Big). \] This means that $a(u_l,\ldots,u_n,t)$ is a ${\cal P}$-${\cal R}$-versal unfolding of $a(u_l,\ldots,u_n,0)$ with codimension $\leq 3$. Since the ${\cal P}$-${\cal R}$-equivalence of $a$ is allowed under the reticular $t$-${\cal P}$-${\cal K}$-equivalence, the classification of ${\cal P}$-${\cal R}$-versal unfoldings of functions in $u_l,\ldots,u_n$ of types $A_2,A_3$ follows.\\
We classify $F(x,y,q,t)\in {\mathfrak M}(r;k+n+m)$ with $r=0,n\leq 6,m=2$ and $r=1,n\leq 4,m=2$.\\ $({}^2A_1$) $y^2+(t_1+t_2u_1+u_1^3\pm u_2^2\pm \ldots \pm u_l^2)$,\\ $({}^2A_2$) $y^3+(t_1+t_2u_2+u_2^3\pm u_3^2\pm \ldots \pm u_l^2)y+u_1$,\\ $({}^2A_3$) $y^4+(t_1+t_2u_3+u_3^3\pm u_4^2\pm \ldots \pm u_l^2)y^2+u_1y+u_2$,\\ $({}^2A_4$) $y^5+(t_1+t_2u_4+u_4^3\pm u_5^2\pm \ldots \pm u_l^2)y^3+u_1y^2+u_2y+u_3$,\\ $({}^2A_5$) $y^6+(t_1+t_2u_5+u_5^3)y^4+u_1y^3+u_2y^2+u_3y+u_4$,
$y^6+(t_1+t_2u_5+u_5^3\pm u_6^2)y^4+u_1y^3+u_2y^2+u_3y+u_4$,\\ $({}^2A_6$) $y^7+(t_1+t_2u_6+u_6^3)y^5+u_1y^4+u_2y^3+u_3y^2+u_4y+u_5$,\\ $({}^2D^\pm_4)$ $y_1^2y_2\pm y_2^3+(t_1+t_2u_4+u_4^3\pm u_5^2\pm \ldots \pm u_l^2)y_2^2+u_1y_2+u_2y_1+u_3$,\\ $({}^2D_5)$ $y_1^2y_2+ y_2^4+(t_1+t_2u_5+u_5^3)y_2^3+u_1y_2^2+u_2y_2+u_3y_1+u_4$,
$y_1^2y_2+ y_2^4+(t_1+t_2u_5+u_5^3\pm u_6^2)y_2^3+u_1y_2^2+u_2y_2+u_3y_1+u_4$,\\ $({}^2D^\pm_6)$ $y_1^2y_2\pm y_2^5+(t_1+t_2u_6+u_6^3)y_2^4+u_1y_2^3+u_2y_2^2+u_3y_2+u_4y_1+u_5$,\\ $({}^2E_6)$ $y_1^3+ y_2^4+(t_1+t_2u_6+u_6^3)y_1y_2^2+u_1y_1y_2+u_2y_2^2+u_3y_1+u_4y_2+u_5$,\\ where $l\leq 6$.\\
$({}^2B_2)$ $x^2+(t_1+t_2u_2+u_2^3\pm u_3^2\pm \ldots \pm u_l^2)x+u_1$,\\ $({}^2B_3)$ $x^3+(t_1+t_2u_3+u_3^3)x^2+u_1x+u_2$,
$x^3+(t_1+t_2u_3+u_3^3\pm u_4^2)x^2+u_1x+u_2$,\\ $({}^2B_4)$ $x^4+(t_1+t_2u_4+u_4^3)x^3+u_1x^2+u_2x+u_3$,\\ $({}^2C_3^\pm)$ $\pm xy+y^3+(t_1+t_2u_3+u_3^2)x+u_1y+u_2$,
$\pm xy+y^3+(t_1+t_2u_3+u_3^2\pm u_4^2)x+u_1y+u_2$,\\ $({}^2C_4)$ $xy+y^4+(t_1+t_2u_4+u_4^3)y^3+u_1y^2+u_2y+u_3$,\\ $({}^2F_4)$ $x^2+y^3+(t_1+t_2u_4+u_4^3)xy+u_1x+u_2y+u_3$,\\ where $l\leq 4$. We give all figures of bifurcations of generic wavefronts with $n=2,3$ and $m=2$: $({}^2A_1$), $({}^2A_2$), $({}^2A_3$), $({}^2B_2)$, $({}^2B_3)$, $({}^2C_3^\pm$).\\ We give the positions of the figures in the following way:
\begin{figure}
\caption{${}^2A_1$}
\end{figure}
\begin{figure}
\caption{${}^2A_1$}
\end{figure}
\begin{figure}
\caption{${}^2A_1$}
\end{figure}
\begin{figure}
\caption{${}^2A_2$}
\end{figure}
\begin{figure}
\caption{${}^2A_2$}
\end{figure}
\begin{figure}
\caption{${}^2B_2$}
\end{figure}
\begin{figure}
\caption{${}^2B_2$}
\end{figure}
\begin{figure}
\caption{${}^2B_2$}
\end{figure}
\begin{figure}
\caption{${}^2A_3$}
\end{figure}
\begin{figure}
\caption{${}^2B_3$}
\end{figure}
\begin{figure}
\caption{${}^2C_3^+$}
\end{figure}
\begin{figure}
\caption{${}^2C_3^-$}
\end{figure}
\end{document}
"id": "1308.2274.tex",
"language_detection_score": 0.5419131517410278,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\begin{frontmatter} \title{On corrected Poisson approximations for sums
of independent indicators}
\runtitle{Corrected Poisson approximations}
\begin{aug}
\author[A]{\fnms{Nickos}~\snm{Papadatos}\ead[label=e1]{npapadat@math.uoa.gr}}
\address[A]{National and Kapodistrian
University of Athens\printead[presep={,\ }]{e1}}
\end{aug}
\begin{abstract}
Let $S_n=I_1+\cdots+I_n$ be a sum of independent indicators $I_i$, with
$p_i=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(I_i=1)=1-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(I_i=0)$, $i=1,\ldots,n$. It is well-known
that the total variation distance between $S_n$ and $Z_\lambda$, where
$Z_\lambda$ has a Poisson distribution with mean $\lambda=\sum_{i=1}^n p_i$,
is typically of order $\sum_{i=1}^n p_i^2$.
In the present work we propose a class of corrected Poisson
approximations, which enable the second order factorial moment
distance (and hence, the total variation distance)
to be bounded
above by a constant multiple of $\sum_{i=1}^n p_i^3$ and
$\sum_{i=1}^n p_i^4$, hence
improving the order of approximation. \end{abstract}
\begin{keyword}[class=MSC]
\kwd[Primary ]{62F15}
\kwd{60E05}
\end{keyword}
\begin{keyword}
\kwd{Corrected Poisson}
\kwd{Poisson-Binomial Distribution}
\kwd{Factorial Moment Distance}
\kwd{Total Variation Distance}
\kwd{Gini-Kantorovich-Wasserstein distance}
\kwd{Inequalities for Elementary Symmetric Functions}
\kwd{Sums of Independent Indicators}
\kwd{Improved Order of Approximation} \end{keyword}
\end{frontmatter}
\section{Introduction}
\label{sec.intro}
The total variation distance between two real-valued random variables,
$X_1,X_2$, is defined by
\[
d_{{\rm tv}}(X_1,X_2)=\sup_B |\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(X_1\in B)-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(X_2\in B)|,
\]
where the supremum is taken over all Borel subsets $B$ of
$(-\infty,\infty)$. In the particular case where $X_1,X_2$ are non-negative
integer-valued random variables
with probability mass-functions $f_1(k)$, $f_2(k)$, $k=0,1,\ldots$,
the formula simplifies to
\[
d_{{\rm tv}}(X_1,X_2)=\frac{1}{2}
\sum_{k=0}^{\infty}
\Big|f_1(k)-f_2(k)\Big|.
\]
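When both laws are supported on the non-negative integers, this distance is straightforward to evaluate numerically. The following Python sketch is purely illustrative (the probability vector is an arbitrary example, not taken from the text); it computes $d_{\rm tv}$ between a Poisson-binomial law and the Poisson law with the same mean, and checks it against the classical bound $\sum_i p_i^2$ recalled below.

```python
import math

def poisson_binomial_pmf(p):
    # PMF of S_n = I_1 + ... + I_n by iterative convolution.
    f = [1.0]
    for pi in p:
        g = [0.0] * (len(f) + 1)
        for k, fk in enumerate(f):
            g[k] += (1 - pi) * fk
            g[k + 1] += pi * fk
        f = g
    return f

def poisson_pmf(lam, kmax):
    return [math.exp(-lam) * lam**k / math.factorial(k) for k in range(kmax + 1)]

def d_tv(f1, f2):
    n = max(len(f1), len(f2))
    f1 = f1 + [0.0] * (n - len(f1))
    f2 = f2 + [0.0] * (n - len(f2))
    return 0.5 * sum(abs(a - b) for a, b in zip(f1, f2))

p = [0.1, 0.05, 0.2, 0.15]          # arbitrary example probabilities
dist = d_tv(poisson_binomial_pmf(p), poisson_pmf(sum(p), 50))
assert 0 < dist <= sum(pi**2 for pi in p)   # LeCam-type bound
```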
Let $\{I_i\}_{i=1}^n$ be a collection of independent $0-1$
indicators, with $p_i=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(I_i=1)=1-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(I_i=0)$, $i=1,\ldots,n$.
The distribution of $S_n=\sum_{i=1}^n I_i$ is
concentrated on $\{0,\ldots,n\}$
and it is called the Poisson-binomial distribution. Clearly, it is a
generalization of the Binomial distribution, for if the $p_i$'s are
all equal (say $p_i=p$ for each $i$),
then $S_n$ follows a ${\rm Bin}(n,p)$ distribution.
The distribution of $S_n$ is quite complicated in general, and a
classical result of Poisson roughly states that if $n$ is large and
the $p_i$'s are small, then $S_n$ is close to $Z_\lambda$, where
$Z_\lambda$ is a Poisson random variable with mean
$\lambda=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} S_n=\sum_{i=1}^n
p_i$. The quality of Poisson approximation is of fundamental interest, and
has become a classical theme in applied probability. Among the first
who
gave explicit bounds were Khintchin (1933), LeCam (1960), and Deheuvels and Pfeifer (1986),
who showed that
\[
d_{{\rm tv}}(S_n,Z_\lambda)\leq \sum_{i=1}^n p_i^2.
\]
Kontoyiannis et al. (2005), using information theoretic arguments,
showed that a similar bound for the Hellinger distance
is valid,
namely,
\[
d_H^2(S_n,Z_{\lambda}):=\frac{1}{2}\sum_{k=0}^\infty
\left(\sqrt{\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(S_n=k)}-\sqrt{\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(Z_{\lambda}=k)}\right)^2\leq
\frac{1}{\lambda}\sum_{i=1}^n \frac{p_i^3}{1-p_i}.
\]
Since $d_{\rm tv}\leq d_{H}\sqrt{2-d_H^2}$, see Novak (2019), this inequality
for equal $p_i=\lambda/n$ yields the bound
\[
d_{\rm tv}(S_n,Z_{\lambda})\leq
\frac{\sqrt{\lambda}\sqrt{2n(n-\lambda)-\lambda^3}}{n(n-\lambda)}\sim \frac{\sqrt{2\lambda}}{n}, \ \ \mbox{as } \ n\to\infty.
\]
The celebrated
Chen-Stein method, introduced by Chen (1975),
fruitfully applies to this problem, and provides many other
approximation results for dependent indicators too; see Barbour et al. (1992).
One of the
well-known results (in the independent case)
is the double inequality
\[
\frac{\min\{1,1/\lambda\}}{32}\sum_{i=1}^n p_i^2\leq
d_{{\rm tv}}(S_n,Z_\lambda)\leq \frac{1-e^{-\lambda}}{\lambda}\sum_{i=1}^n p_i^2,
\]
due to Barbour and Eagleson (1983) and Barbour and Hall (1984).
Hence, the order of Poisson approximation cannot be improved. For
example, if $p_i=\lambda/n$ for each $i$, so that $S_n$ is
${\rm Bin}(n,\lambda/n)$, then $d_{{\rm tv}}(S_n,Z_\lambda)\sim
n^{-1}$, in the sense that $n d_{{\rm tv}}(S_n,Z_\lambda)$ is
bounded away from $0$ and $\infty$ as $n\to\infty$.
For a comprehensive
review of various Poisson approximation results the reader is referred to Novak (2019); see also
Roos (1999, 2001) and Serfling (1975, 1978).
The purpose of the present work is to introduce a methodology in order to
improve the rate of
convergence from, roughly, $n^{-1}$ to $n^{-2}$ (Theorem \ref{theo.main})
and to $n^{-3}$ (Theorem \ref{theo.main2}).
Although possible in principle, we were not able to treat
either higher orders (except for equal $p_i$;
see Example \ref{exam.binomial.exact})
or dependent indicators.
The improvements are attained by using suitable signed Poisson measures, which
we term {\it Corrected Poisson Distributions}, and refer
to a stronger metric, the factorial moment distance; see Definition \ref{def.d2}. The main results are based on some novel
accurate inequalities for the factorial moments of $S_n$ (Lemmas \ref{lem.comparison} and \ref{lem.comparison2}); these inequalities,
in fact, relate
the well-known {\it elementary symmetric functions} of $(p_1,\ldots,p_n)$
with their power sums, $\sum_i p_i^k$, when $p_i\in [0,1]$
for all $i$.
In Section \ref{sec.last} we give a number of remarks and examples;
in particular, the binomial case is treated in detail in Example
\ref{exam.binomial.exact}.
\section{The corrected Poisson distributions}
\label{sec.2}
The second order corrected Poisson distribution is defined as
\[
g_{\lambda;\gamma}(k)=e^{-\lambda}\frac{\lambda^k}{k!}
\Big(1-\gamma\big((k-\lambda)^2-k\big)\Big),
\ \ \ k=0,1,\ldots \ ,
\]
where $\gamma\in\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}R\hspace*{.2ex}}$ and $\lambda>0$ are constants.
If $\gamma=0$, $g_{\lambda;0}$ reduces to the ordinary Poisson($\lambda$)
probability mass function, which we simply denote by
$\phi_1$ or $g_\lambda$. However, for $\gamma>0$ the values $g_{\lambda;\gamma}(k)$
become negative for large $k$, hence, it is not a proper
probability mass function in general, although
$\sum_{k=0}^\infty g_{\lambda;\gamma}(k)=1$ for all $\gamma\in\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}R\hspace*{.2ex}}$
and $\lambda>0$.
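Both properties (unit total mass for every $\gamma$, yet negative values for large $k$ when $\gamma>0$) are easy to confirm numerically; the Python sketch below uses arbitrary illustrative values of $\lambda$ and $\gamma$.

```python
import math

def g(lam, gamma, k):
    # Second order corrected Poisson "pmf": a signed measure when gamma != 0.
    return math.exp(-lam) * lam**k / math.factorial(k) * (1 - gamma * ((k - lam)**2 - k))

lam, gamma = 2.0, 0.1               # arbitrary example values
vals = [g(lam, gamma, k) for k in range(80)]
assert abs(sum(vals) - 1.0) < 1e-9  # total mass is 1 for any gamma
assert min(vals) < 0                # but some values are negative
```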
Hence, it will be convenient to make use of the class ${\cal F}_2$,
defined
below.
\begin{defi}
\label{def.F}
\[
{\cal F}_2:=\Big\{g:\{0,1,\ldots\}\rightarrow\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}R\hspace*{.2ex}}: \sum_{k=0}^\infty |g(k)| u^k<\infty
{\rm \ for \ some \ } u>2 {\rm \ and \ } \sum_{k=0}^\infty g(k)=1\Big\}.
\]
\end{defi}
It is obvious that the class ${\cal F}_2$ includes both $g_{\lambda;\gamma}$ and
$f_n$, the probability mass function of $S_n$. Moreover, the
factorial moments of any function $g\in {\cal F}_2$ can be defined
as
\[
\mu_m(g) := \sum_{k=m}^\infty (k)_m g(k),\ \ m=0,1,\ldots,
\]
where $(k)_m:=k(k-1)\cdots(k-m+1)=k!/(k-m)!$ is the descending factorial of
order $m$ of $k$, with the convention $(k)_0=1$; note that
all these moments are finite, since the radius of convergence is greater than $2$.
In the usual proper case
where $g\geq 0$, $\mu_m(g)=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} (X)_m$ where $X$ follows $g$, and thus,
$\mu_m(g)$ is just the $m$-th factorial moment of $X$. However, we
avoid to write $\mu_m=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} (X)_m$ when the distribution of $X$ takes
negative values (i.e., for signed measures).
The $\nu$-th order corrected Poisson distribution is defined
by
\[
\phi_{\nu}(k):=e^{-\lambda}\frac{\lambda^k}{k!}
\Big(1-\gamma_2 P_2(k)-\gamma_3 P_3(k)
-\cdots-\gamma_{2\nu-2}P_{2\nu-2}(k)
\Big)\in {\cal F}_2,
\]
and it obviously provides an extension of
$g_{\lambda;\gamma_2}=\phi_2$.
Here $P_m(k)$ are
the Poisson-Charlier orthogonal polynomials of
$Z_\lambda\sim g_{\lambda}=\phi_1$, namely,
\[
P_m(k)=\sum_{j=0}^m(-1)^{m-j}{m\choose j} \lambda^{m-j} (k)_j,
\]
and satisfy (see, e.g., Afendras et al.\ (2011))
\[
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} P_m(Z_\lambda) P_\nu(Z_\lambda)=m!\lambda^m \delta_{m\nu} \ \
\ \mbox{ and } \ \ \
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} P_m(Z_{\lambda})g(Z_\lambda)=\lambda^m \mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} \Delta^m g(Z_{\lambda});
\]
observe that the second (covariance) identity implies the first
(orthogonality) relation, and recall that $\Delta g(k)=g(k+1)-g(k)$ denotes
the forward difference operator and
$\Delta^m$ its $m$-th iteration.
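The orthogonality relation can be sanity-checked numerically directly from the definition of $P_m$; the Python sketch below (with an arbitrary illustrative value of $\lambda$) verifies the value $m!\lambda^m\delta_{m\nu}$ for small $m,\nu$.

```python
import math
from math import comb, exp, factorial

def falling(k, j):
    # Descending factorial (k)_j = k(k-1)...(k-j+1).
    out = 1
    for i in range(j):
        out *= (k - i)
    return out

def charlier(m, k, lam):
    # Poisson-Charlier polynomial as defined in the text.
    return sum((-1)**(m - j) * comb(m, j) * lam**(m - j) * falling(k, j)
               for j in range(m + 1))

lam = 1.5                           # arbitrary example value

def expect(f, kmax=120):
    # Expectation against the Poisson(lam) weights, truncated far in the tail.
    return sum(exp(-lam) * lam**k / factorial(k) * f(k) for k in range(kmax + 1))

for m in range(4):
    for nu in range(4):
        val = expect(lambda k: charlier(m, k, lam) * charlier(nu, k, lam))
        target = factorial(m) * lam**m if m == nu else 0.0
        assert abs(val - target) < 1e-8
```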
Regarding the total variation distance for functions in ${\cal
F}_2$, the definition is similar and well understood:
\begin{defi}{\rm
\label{def.tvd}
For $g_1,g_2\in{\cal F}_2$,
\[
d_{{\rm tv}}(g_1,g_2):=\frac{1}{2}\sum_{k=0}^{\infty}
\Big|g_1(k)-g_2(k)\Big|.
\]
}
\end{defi}
However, in our situation, it is quite complicated to deal with the total variation
distance, while it seems more convenient to work with a factorial moment distance
of order
two, defined as follows:
\begin{defi}
\label{def.d2}
{\rm
For $g_1,g_2\in{\cal F}_2$,
\[
d_2(g_1,g_2):=\frac{1}{2}\sum_{m=1}^{\infty}
\frac{2^{m}}{m!}\Big|\mu_m(g_1)-\mu_m(g_2)\Big|.
\]
}
\end{defi}
The metric $d_2$, as well as the following theorem, was introduced
for the proper case ($g\in{\cal F}_2$, $g\geq 0$) by Afendras and Papadatos (2017);
the result readily extends to ${\cal F}_2$, and hence
the proof is omitted.
\begin{theo}
\label{theo.d2}
{\rm
(a) Any $g\in{\cal F}_2$ can be recovered from its factorial moment
sequence by the inversion formula
\[
g(k)=\frac{1}{k!}\sum_{m=k}^\infty
\frac{(-1)^{m-k}}{(m-k)!}
\mu_m(g), \ \ \ k=0,1,\ldots \ .
\]
(b)
For $g_1,g_2\in{\cal F}_2$,
\[
d_{{\rm tv}}(g_1,g_2)\leq d_2(g_1,g_2).
\]
}
\end{theo}
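Part (a) can be illustrated in the simplest case, the Poisson law itself, whose factorial moments are $\mu_m(g_\lambda)=\lambda^m$; the inversion formula then reproduces $e^{-\lambda}\lambda^k/k!$. A minimal Python sketch (the truncation level of the series is an arbitrary choice):

```python
import math

lam = 0.8                           # small lam: the alternating series converges fast

def mu(m):
    # Factorial moments of Poisson(lam): mu_m = lam^m.
    return lam**m

def invert(k, terms=60):
    # Inversion formula g(k) = (1/k!) * sum_{m>=k} (-1)^(m-k)/(m-k)! mu_m.
    return sum((-1)**(m - k) / math.factorial(m - k) * mu(m)
               for m in range(k, k + terms)) / math.factorial(k)

for k in range(6):
    assert abs(invert(k) - math.exp(-lam) * lam**k / math.factorial(k)) < 1e-12
```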
\section{Main results}
\label{sec.3}
Let $S_n=\sum_{i=1}^n I_i$ be a sum of $0-1$ independent indicators as in the
introduction, where $I_i$ has success probability $p_i$, $i=1,\ldots,n$.
We set $\lambda_j=\sum_{i=1}^n p_i^j$, $j=1,2,\ldots$, and in
particular, $\lambda_1=\lambda=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} S_n=\sum_{i=1}^n p_i$.
The accuracy of the Poisson approximation cannot be of smaller order (in magnitude)
than $\sum_{i=1}^n p_i^2$, because the Poisson distribution has equal
mean and variance, in contrast to $S_n$, whose variance is always smaller
than its mean. Due to the tuning parameter $\gamma$, the
second order corrected Poisson distribution can fill this gap.
A simple calculation shows
the following.
\begin{lem}
\label{lem.moments.Poisson}
{\rm
\[
\nu_m:=\mu_m(g_{\lambda;\gamma})=\lambda^m-m(m-1)
\gamma \lambda^{m},
\
\ \ m=0,1,\ldots \ .
\]
}
\end{lem}
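Lemma \ref{lem.moments.Poisson} can be checked numerically by computing the factorial moments of the signed measure $g_{\lambda;\gamma}$ directly; the Python sketch below uses arbitrary illustrative values of $\lambda$ and $\gamma$.

```python
import math

lam, gamma = 1.2, 0.07              # arbitrary example values

def g(k):
    # Second order corrected Poisson signed measure.
    return math.exp(-lam) * lam**k / math.factorial(k) * (1 - gamma * ((k - lam)**2 - k))

def falling(k, m):
    out = 1
    for i in range(m):
        out *= (k - i)
    return out

def mu(m, kmax=120):
    # m-th factorial moment of the signed measure, truncated far in the tail.
    return sum(falling(k, m) * g(k) for k in range(m, kmax + 1))

for m in range(6):
    assert abs(mu(m) - (lam**m - m * (m - 1) * gamma * lam**m)) < 1e-8
```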
It follows that the "variance" of the corrected Poisson equals to
$\nu_2+\nu_1-\nu_1^2=(\lambda^2-2\gamma\lambda^2)
+\lambda-\lambda^2=\lambda-2\gamma\lambda^2$,
while the variance of $S_n$ is $\sum_{i=1}^n
p_i(1-p_i)=\lambda-\lambda_2$. Equating variances we are led to the choice
$\gamma=\lambda_2/(2\lambda^2)$, and our first result reads as follows:
\begin{theo}
\label{theo.main}
{\rm
Under the preceding notations, let $f_n$ be the
probability mass function of $S_n$, and set
\[
\phi_2(k):=g_{\lambda;\lambda_2/(2\lambda^2)}(k)=
e^{-\lambda}\frac{\lambda^k}{k!}
\Big(1-\frac{\lambda_2}{2\lambda^2}\big((k-\lambda)^2-k\big)\Big),
\ \ \ k=0,1,\ldots \ .
\]
Then,
\[
d_{{\rm tv}}(f_n, \phi_2)
\leq
d_2(f_n,\phi_2)\leq
\left(\frac{4}{3}\lambda_3+\lambda_2^2\right)e^{2\lambda}
\leq
\left(\frac{4}{3}+\lambda\right)
e^{2\lambda}
\sum_{i=1}^n p_i^3.
\]
}
\end{theo}
This inequality provides an essential improvement over
the traditional Poisson approximation rate, $\sum_{i=1}^n p_i^2$; to see
this it suffices to take equal $p_i=\lambda/n$. In that case,
$d_{{\rm tv}}({\rm Bin}(n,\lambda/n),\phi_1)\geq A_\lambda n^{-1}$
for some constant $A_{\lambda}>0$ depending only
on $\lambda$, while
Theorem \ref{theo.main} shows that
$d_{{\rm tv}}({\rm Bin}(n,\lambda/n),\phi_2)\leq B_{\lambda}
n^{-2}$.
At this point we note that a second order correction of similar
nature is provided by Theorem 3 in Barbour and Hall (1984); namely, for
$A\subseteq\{0,1,\ldots\}$ they defined
the quantity
\[
\Delta(A):=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(S_n\in A)-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(Z_\lambda\in A)+\frac{\lambda_2}{2\lambda^2}
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}\left\{I_A(Z_{\lambda})\Big(Z_\lambda^2-(2\lambda+1)Z_{\lambda}+
\lambda^2\Big)\right\}
\]
and proved that
\[
\sup_A \Big|\Delta(A)\Big| \leq A_{\lambda} \sum_{i=1}^n p_i^3+B_{\lambda} \
\Big(\sum_{i=1}^n p_i^2\Big)^2\leq (A_\lambda+\lambda B_\lambda)\sum_{i=1}^n p_i^3.
\]
In our notation, $\sup_A \big|\Delta(A)\big|
=d_{\rm tv}(f_n,\phi_2)$.
The quantity $\sum_{i=1}^n p_i^3$ also appears in Barbour and Hall's correction, and their constant is (much) smaller than the one provided by Theorem \ref{theo.main}; thus, the bound regarding the total
variation
distance is not new. However, the result concerning the stronger metric
$d_2$ is novel, and can be extended to higher orders,
see, e.g., Theorem \ref{theo.main2} below.
To this end, we choose the constants $\gamma_2,\gamma_3,\gamma_4$
in order to
fit the moments of $S_n$ up to order three,
and we obtain the third order corrected Poisson distribution
\begin{eqnarray*}
\phi_3(k) & := &
e^{-\lambda}\frac{\lambda^k}{k!}
\Big(1
-\frac{\lambda_2}{2\lambda^2} P_2(k)
+\frac{\lambda_3}{3\lambda^3} P_3(k)
+\frac{\lambda_2^2}{8\lambda^4} P_4(k)
\Big)
\\
&=& e^{-\lambda}\frac{\lambda^k}{k!}
\Big(a_0+a_1 k+a_2 (k)_2+a_3 (k)_3+a_4 (k)_4
\Big)
\end{eqnarray*}
where
\[
\mbox{$
a_0=1-\frac{\lambda_2}{2}-\frac{\lambda_3}{3}+\frac{\lambda_2^2}{8},
\
a_1=\frac{2\lambda_2-\lambda_2^2+2\lambda_3}{2\lambda},
\
a_2=\frac{3\lambda_2^2-2\lambda_2-4\lambda_3}{4\lambda^2},
\
a_3=\frac{2\lambda_3-3\lambda_2^2}{6\lambda^3},
\
a_4=\frac{\lambda_2^2}{8\lambda^4}.
$}
\]
It is easy to verify the following
\begin{lem}
\label{lem.moments.Poisson2}
{\rm
\[
\mu_m(\phi_3)=
\lambda^m-\frac{(m)_2}{2}\lambda_2\lambda^{m-2}
+\frac{(m)_3}{3}\lambda_3\lambda^{m-3}+
\frac{(m)_4}{8}\lambda_2^2\lambda^{m-4},
\ \ m=0,1,\ldots \ ,
\]
where $(m)_j=m!/(m-j)!$.
}
\end{lem}
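Lemma \ref{lem.moments.Poisson2} can be verified numerically by computing the factorial moments of $\phi_3$ from its Poisson-Charlier representation; the probability vector below is an arbitrary example.

```python
import math
from math import comb, exp, factorial

p = [0.15, 0.1, 0.2, 0.05]          # arbitrary example probabilities
lam  = sum(p)
lam2 = sum(x**2 for x in p)
lam3 = sum(x**3 for x in p)

def falling(k, j):
    out = 1
    for i in range(j):
        out *= (k - i)
    return out

def charlier(m, k):
    return sum((-1)**(m - j) * comb(m, j) * lam**(m - j) * falling(k, j)
               for j in range(m + 1))

def phi3(k):
    # Third order corrected Poisson signed measure.
    corr = (1 - lam2 / (2 * lam**2) * charlier(2, k)
              + lam3 / (3 * lam**3) * charlier(3, k)
              + lam2**2 / (8 * lam**4) * charlier(4, k))
    return exp(-lam) * lam**k / factorial(k) * corr

def mu_phi3(m, kmax=120):
    return sum(falling(k, m) * phi3(k) for k in range(m, kmax + 1))

for m in range(7):
    closed = (lam**m - falling(m, 2) / 2 * lam2 * lam**(m - 2)
                     + falling(m, 3) / 3 * lam3 * lam**(m - 3)
                     + falling(m, 4) / 8 * lam2**2 * lam**(m - 4))
    assert abs(mu_phi3(m) - closed) < 1e-9
```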
Using the above third-order correction we have the following result.
\begin{theo}
\label{theo.main2}
{\rm
Under the preceding notations and the assumptions
of Theorem \ref{theo.main},
\[
d_{{\rm tv}}(f_n,\phi_3)\leq d_2(f_n,\phi_3)\leq \frac{2}{3}
(\lambda^2+4\lambda+3) e^{2\lambda}
\sum_{i=1}^n p_i^4.
\]
}
\end{theo}
This inequality provides another essential improvement over
the previous results, as it is easily verified from
the particular case of equal $p_i=\lambda/n$. In that case,
Theorem \ref{theo.main2} implies that
$d_{{\rm tv}}({\rm Bin}(n,\lambda/n),\phi_3)\leq C_{\lambda}
n^{-3}$.
\section{Proofs}
\label{sec.4}
The proofs of Theorems \ref{theo.main} and
\ref{theo.main2} use as a main tool the distance $d_2$,
and require some accurate bounds of the moment sequence
$\mu_m:=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} (S_n)_m$; these bounds may be of some independent
interest.
Firstly we state and prove an auxiliary result.
\begin{lem}
\label{lem.positivity}
{\rm
For each $j=0,\ldots,m$, $s=1,2,\ldots$, define the quantities
\[
\theta_j(m,s):=\sum_{k=j}^m (-1)^{k-j} {m \choose k}
\lambda_{k+s} \lambda^{m-k}.
\]
Then, $\theta_j(m,s)\geq 0$.
}
\end{lem}
\begin{pr}{Proof}
Consider a random variable $X$ taking values in $\{1,\ldots,n\}$ with
respective probabilities $\pi_i:=p_i/\lambda$, $i=1,\ldots,n$, and set
$Y:=h(X)$, where $h(i):=\pi_i$, $i=1,\ldots,n$. Then,
\[
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Y^{k+s-1}=\sum_{i=1}^n \pi_i h(i)^{k+s-1}
=\sum_{i=1}^n \pi_i^{k+s}
=\frac{\lambda_{k+s}}{\lambda^{k+s}}.
\]
We thus have
\[
\frac{ \theta_j(m,s)}{\lambda^{m+s}}=
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}\left\{Y^{s-1}
\sum_{k=j}^m (-1)^{k-j} {m \choose k}
Y^k\right\}.
\]
This is clearly nonnegative when $j=0$, because it represents the expectation
of $Y^{s-1}(1-Y)^m$ and $0\leq Y\leq 1$ by definition.
For $j\in\{1,\ldots,m\}$ we shall make use of the identity
\[
w(x):=\sum_ {k=j}^m (-1)^{k-j} {m \choose k}
x^k=j{m\choose j} \int_0^x t^{j-1}(1-x+t)^{m-j} dt.
\]
This identity can be proved as follows: Let $U_{1:m}<\ldots<U_{m:m}$ be the order statistics
from the uniform distribution over the interval $(0,1)$.
For every $a\in(0,1)$,
\[
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(U_{j:m}\leq a)=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}( {\rm at \ least\ } j \ {\rm \ out \ of\ } m \
{\rm are\ less\ than\ } \ a)
=\sum_{k=j}^m {m \choose k } a^k (1-a)^{m-k}.
\]
Since it is well-known that
$U_{j:m}$ follows a Beta$(j,m+1-j)$ density, we also have
\[
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(U_{j:m}\leq a)=j{m\choose j}\int_0^a y^{j-1}(1-y)^{m-j} dy,
\]
and equating the above expressions we get
\[
\sum_{k=j}^m {m \choose k } a^k (1-a)^{m-k}
=j{m\choose j}\int_0^a y^{j-1}(1-y)^{m-j} dy.
\]
Because both sides represent entire functions of $a$ in the complex plane
(polynomials), the identity holds
for all $a$, and we are allowed to set $a=-x/(1-x)$, $x\neq 1$,
obtaining
\[
\sum_{k=j}^m (-1)^k {m \choose k } x^k
=j{m\choose j}(1-x)^m \int_0^{-x/(1-x)} y^{j-1}(1-y)^{m-j} dy.
\]
The substitution $y=-t/(1-x)$ in the last integral yields
\[
\sum_{k=j}^m (-1)^k {m \choose k } x^k
=(-1)^j j{m\choose j}\int_0^{x} t^{j-1}(1-x+t)^{m-j} dt,
\]
which is equivalent to the desired identity.
Now, the integral expansion shows that $w(x)\geq 0$ for $0\leq x\leq 1$, and
we arrive at the representation
\[
\frac{ \theta_j(m,s)}{\lambda^{m+s}}=
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}\left\{Y^{s-1} w(Y) \right\}\geq 0,
\]
completing the proof.
\end{pr}
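The nonnegativity of the $\theta_j(m,s)$ is easy to probe numerically; the Python sketch below checks it for an arbitrary example probability vector and small $j,m,s$.

```python
import math
from math import comb

p = [0.3, 0.12, 0.25, 0.05, 0.18]   # arbitrary example probabilities
lam = sum(p)

def lam_pow(j):
    # Power sums lambda_j = sum_i p_i^j.
    return sum(x**j for x in p)

def theta(j, m, s):
    return sum((-1)**(k - j) * comb(m, k) * lam_pow(k + s) * lam**(m - k)
               for k in range(j, m + 1))

for m in range(1, 7):
    for j in range(m + 1):
        for s in range(1, 4):
            assert theta(j, m, s) >= -1e-12
```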
We are now in a position to compare the factorial moments of $S_n$ with those of the
corrected Poisson $\phi_2$. Surprisingly enough, it turns out
that the
sequence
$\mu_m=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} (S_n)_m$
dominates $\nu_m:=\mu_m(\phi_2)$ for all $m$.
\begin{lem}
\label{lem.comparison}
{\rm
If $\mu_m$ is the $m$-th factorial moment of $S_n$ then
\[
\mu_m=m! S_{n,m},
\]
where $S_{n,m}=\sum p_{i_1} \cdots p_{i_m}$ (known as elementary symmetric function of order $m$ in the variables $p_1,\ldots,p_n$) -- the sum runs over all
${n \choose m}$ combinations $\{i_1,\ldots,i_m\}$ of $\{1,\ldots,n\}$ (with the convention
$S_{n,m}=0$ for $m>n$).
Moreover, the following inequality holds true for all $m\geq 1$:
\[
\lambda^m-\frac{(m)_2}{2} \lambda_2 \lambda^{m-2}\leq
\mu_m\leq
\lambda^m-\frac{(m)_2}{2} \lambda_2 \lambda^{m-2} +
\frac{(m)_3}{3}
\lambda_3 \lambda^{m-3}
+\frac{(m)_4}{8}
\lambda_2^2 \lambda^{m-4}=\mu_m(\phi_3),
\]
where $(m)_2=m(m-1)$, $(m)_3=m(m-1)(m-2)$, $(m)_4=m(m-1)(m-2)(m-3)$.
}
\end{lem}
\begin{pr}{Proof}
The expression for $\mu_m$ is well-known, see e.g.\
Galambos (1987), so it remains to show the inequalities.
The proof will be done by induction on $m$. Regarding the lower bound, this is obviously
true for $m=1,2$. Observe that
\[
\mu_{m+1}=(m+1)! S_{n,m+1}=\sum_{i=1}^n p_i \Big\{m! S_{n-1,m}(i)\Big\},
\]
where $S_{n-1,m}(i)=\sum p_{i_1} \cdots p_{i_m}$, and where now the sum runs over all
${n-1 \choose m}$ combinations $\{i_1,\ldots,i_m\}$ of $\{1,\ldots,n\}\setminus\{i\}$.
Hence, assuming that the lower bound holds for some $m$, we obtain
\[
\mu_{m+1}
\geq
\sum_{i=1}^n p_i \left\{(\lambda-p_i)^m-\frac{(m)_2}{2}
(\lambda_2-p_i^2) (\lambda-p_i)^{m-2}\right\}=S, \ {\rm say},
\]
and it suffices to show that $S$ is bounded below by
$\lambda^{m+1}-\frac{(m+1)_2}{2} \lambda_2 \lambda^{m-1}$.
Expanding the binomial terms and interchanging the order of summation
we find
\begin{eqnarray*}
S &= &\sum_{k=0}^m (-1)^k {m \choose k} \lambda^{m-k}
\sum_{i=1}^n p_i^{k+1}
-\frac{(m)_2}{2} \lambda_2 \sum_{k=0}^{m-2} (-1)^k {m-2 \choose k} \lambda^{m-2-k}
\sum_{i=1}^n p_i^{k+1} \\
& &
+\frac{(m)_2}{2} \sum_{k=0}^{m-2} (-1)^k {m-2 \choose k} \lambda^{m-2-k}
\sum_{i=1}^n p_i^{k+3}\\
&=&
\sum_{k=0}^m (-1)^k {m \choose k} \lambda^{m-k}
\lambda_{k+1}
-\frac{(m)_2}{2} \lambda_2 \sum_{k=0}^{m-2} (-1)^k {m-2 \choose k} \lambda^{m-2-k}
\lambda_{k+1} \\
& &
+\frac{(m)_2}{2} \sum_{k=0}^{m-2} (-1)^k {m-2 \choose k} \lambda^{m-2-k}
\lambda_{k+3}=S_1+S_2+S_3, \ \ {\rm say}.
\end{eqnarray*}
According to Lemma \ref{lem.positivity},
$S_3=\frac{(m)_2}{2} \theta_0(m-2,3)\geq 0$. Also,
\[
S_1= \lambda^{m+1}-m\lambda_2\lambda^{m-1} +\theta_2(m,1),
\ \ \
S_2=-\frac{(m)_2}{2} \lambda_2 \left\{\lambda^{m-1}-
\theta_1(m-2,1)\right\}.
\]
Combining the above and using the fact that the $\theta$'s are nonnegative,
we obtain
\[
S\geq \lambda^{m+1}-m\lambda_2\lambda^{m-1} -\frac{(m)_2}{2}
\lambda_2 \lambda^{m-1}
= \lambda^{m+1}-\frac{(m+1)_2}{2}
\lambda_2 \lambda^{m-1},
\]
and the lower bound is proved.
Regarding the upper bound, the inequality
$\mu_m\leq\mu_m(\phi_3)$
is trivially
true for $m=1,2,3$ (as equality), while for $m=4$ it holds because
$\mu_4(\phi_3)-\mu_4=6\lambda_4>0$.
Assuming that the bound holds for some $m\geq 4$ and proceeding as before, we obtain
\begin{eqnarray*}
\mu_{m+1}
& \leq &
\sum_{i=1}^n p_i \left\{(\lambda-p_i)^m-\frac{(m)_2}{2}
(\lambda_2-p_i^2) (\lambda-p_i)^{m-2}
+\frac{(m)_3}{3}
(\lambda_3-p_i^3) (\lambda-p_i)^{m-3}
\right.
\\
&&
\hspace*{30ex}
\left.
+\frac{(m)_4}{8}
(\lambda_2-p_i^2)^2 (\lambda-p_i)^{m-4}
\right\}=S, \ {\rm say},
\end{eqnarray*}
and we shall now show that $S$ is bounded above by
\[
\lambda^{m+1}-\frac{(m+1)_2}{2} \lambda_2 \lambda^{m-1}
+\frac{(m+1)_3}{3} \lambda_3 \lambda^{m-2}
+
\frac{(m+1)_4}{8} \lambda_2^2 \lambda^{m-3}=\mu_{m+1}(\phi_3).
\]
On expanding the binomial terms and by interchanging the order of summation
we find $S=\sum_{i=1}^6 S_i$
where
\begin{eqnarray*}
S_1 & = &
\sum_{k=0}^m (-1)^k {m \choose k} \lambda^{m-k}\lambda_{k+1},
\\
S_2 & = &
-\frac{(m)_2}{2} \lambda_2
\sum_{k=0}^{m-2} (-1)^k {m-2 \choose k} \lambda^{m-2-k} \lambda_{k+1},
\\
S_3 & = &
\frac{(m)_2}{2}
\sum_{k=0}^{m-2} (-1)^k {m-2 \choose k} \lambda^{m-2-k} \lambda_{k+3},
\\
S_4 & = &
\frac{(m)_3}{3} \lambda_3
\sum_{k=0}^{m-3} (-1)^k {m-3 \choose k} \lambda^{m-3-k} \lambda_{k+1},
\\
S_5 & = &
-\frac{(m)_3}{3}
\sum_{k=0}^{m-3} (-1)^k {m-3 \choose k} \lambda^{m-3-k} \lambda_{k+4},
\\
S_6 & = &
\frac{(m)_4}{8}
\sum_{k=0}^{m-4} (-1)^k {m-4 \choose k} \lambda^{m-4-k} \Big(\lambda_2^2\lambda_{k+1}-2\lambda_2\lambda_{k+3}+\lambda_{k+5}\Big)
=\frac{(m)_4}{8} T, \ \ \mbox{say}.
\end{eqnarray*}
We shall prove that $T\leq \lambda_2^2 \lambda^{m-3}$.
Indeed, by considering
the random variable $Y\in[0,1]$ as in the proof of Lemma \ref{lem.positivity}, we have $\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Y^{k}=\lambda_{k+1}/\lambda^{k+1}$, and hence, after some algebra, $T=\lambda^{m+1}
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}\left\{ (1-Y)^{m-4}(Y^2-\mu)^2\right\}=\lambda^{m+1}a_m$, say,
where $\mu=\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Y$. On the other hand,
$\lambda_2^2 \lambda^{m-3}=\lambda^{m+1}\mu^2$, and the desired
inequality reduces to $a_m\leq \mu^2$;
however, the sequence $(a_m)$ is nonnegative and decreasing, and it suffices to
show that $a_4\leq \mu^2$, i.e., $\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Y^4\leq 2\mu\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Y^2$. Since,
obviously, $\lambda_5\leq 2\lambda_2\lambda_3$ and
$\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Y^4=\lambda_5/\lambda^5$, $\mu=\lambda_2/\lambda^2$,
$\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Y^2=\lambda_3/\lambda^3$, the desired inequality is proved.
Finally, keeping only the first few terms from the $S_i$'s,
we conclude, in view of Lemma \ref{lem.positivity},
the inequalities
\begin{eqnarray*}
S_1 & = &
\mbox{$
\lambda^{m+1}-m\lambda_2\lambda^{m-1}+\frac{(m)_2}{2}\lambda_3\lambda^{m-2}
-\theta_3(m,1)
\leq\lambda^{m+1}
-m\lambda_2\lambda^{m-1}+\frac{(m)_2}{2}\lambda_3\lambda^{m-2},
$}
\\
S_2 & = &
\mbox{$
-\frac{(m)_2}{2} \lambda_2
\left(
\lambda^{m-1}-(m-2)\lambda_2\lambda^{m-3}+\theta_2(m-2,1)\right)
\leq
-\frac{(m)_2}{2} \lambda_2
\lambda^{m-1}+\frac{(m)_3}{2}\lambda_2^2\lambda^{m-3},
$}
\\
S_3 & = &
\mbox{$
\frac{(m)_2}{2}
\left(\lambda_3\lambda^{m-2}-\theta_1(m-2,3)\right)\leq
\frac{(m)_2}{2}
\lambda_3\lambda^{m-2},
$}
\\
S_4 & = &
\mbox{$
\frac{(m)_3}{3} \lambda_3
\left(\lambda^{m-2}-\theta_1(m-3,1)\right)
\leq
\frac{(m)_3}{3} \lambda_3
\lambda^{m-2},
$}
\\
S_5 & = &
\mbox{$
-\frac{(m)_3}{3}
\theta_0(m-3,4)\leq 0,
$}
\\
S_6 & = &
\mbox{$
\frac{(m)_4}{8} T\leq
\frac{(m)_4}{8}
\lambda_2^2 \lambda^{m-3}.
$}
\end{eqnarray*}
Hence, in view of the relations
\[
\mbox{$
m+\frac{(m)_2}{2}=\frac{(m+1)_2}{2}, \ \ \
\frac{(m)_2}{2}+\frac{(m)_2}{2}+\frac{(m)_3}{3}=\frac{(m+1)_3}{3},
\ \ \
\frac{(m)_3}{2}+\frac{(m)_4}{8}=\frac{(m+1)_4}{8},
$}
\]
the induction step is complete and the lemma is proved.
\end{pr}
\noindent
\begin{pr}{Proof of Theorem \ref{theo.main}} From Lemma \ref{lem.moments.Poisson}
with $\gamma=\lambda_2/(2\lambda^2)$ we get
\[
\mu_m(\phi_2)=\lambda^m-\frac{(m)_2}{2}
\lambda_2\lambda^{m-2},
\]
and Lemma \ref{lem.comparison} shows that
\[
0\leq \mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}(S_n)_m-\mu_m(\phi_2)\leq
\frac{(m)_3}{3} \lambda_3\lambda^{m-3}
+\frac{(m)_4}{8} \lambda_2^2\lambda^{m-4}, \ \
m=1,2,\ldots \ .
\]
Thus,
\begin{eqnarray*}
d_2(f_n,\phi_2)
&=&
\frac{1}{2}\sum_{m=1}^\infty
\frac{2^m}{m!}\left\{
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}(S_n)_m-\mu_m(\phi_2)\right\}
\\
&\leq&
\frac{\lambda_3}{6}
\sum_{m=1}^{\infty}\frac{ 2^m (m)_3\lambda^{m-3}}{m!}
+\frac{\lambda_2^2}{16}
\sum_{m=1}^{\infty}\frac{ 2^m (m)_4\lambda^{m-4}}{m!}
=\Big(\frac{4}{3}\lambda_3+\lambda_2^2\Big)e^{2\lambda}.
\end{eqnarray*}
In view of Theorem \ref{theo.d2}(b) and the fact that
$\lambda_2^2\leq \lambda\lambda_3$, the proof is complete.
\end{pr}
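The moment sandwich used in this proof can be checked numerically by brute force for a small Poisson-binomial distribution. The sketch below (a numerical illustration, not part of the argument; the probability vector is an arbitrary choice) enumerates all outcomes and verifies $0\leq \mu_m-\mu_m(\phi_2)\leq \frac{(m)_3}{3}\lambda_3\lambda^{m-3}+\frac{(m)_4}{8}\lambda_2^2\lambda^{m-4}$ for small $m$.

```python
from itertools import product
from math import prod

def factorial_moment(p, m):
    # E[(S_n)_m] for a Poisson-binomial S_n with success probabilities p,
    # computed by brute-force enumeration of all 2^n outcomes
    total = 0.0
    for bits in product([0, 1], repeat=len(p)):
        w = prod(pi if b else 1 - pi for pi, b in zip(p, bits))
        s = sum(bits)
        total += w * prod(s - j for j in range(m))
    return total

def falling(m, k):
    return prod(m - j for j in range(k))

p = [0.1, 0.2, 0.15, 0.05]          # arbitrary illustrative choice
lam = sum(p)
lam2 = sum(x**2 for x in p)
lam3 = sum(x**3 for x in p)

for m in range(1, 8):
    mu = factorial_moment(p, m)
    mu_phi2 = lam**m - falling(m, 2) / 2 * lam2 * lam**(m - 2)
    upper = (falling(m, 3) / 3 * lam3 * lam**(m - 3)
             + falling(m, 4) / 8 * lam2**2 * lam**(m - 4))
    assert -1e-12 <= mu - mu_phi2 <= upper + 1e-12
```

For $m=3$ the upper bound holds with equality ($\mu_3-\mu_3(\phi_2)=2\lambda_3$), which is why the tolerance is needed in floating point.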
One further, more accurate, bound is needed for the proof of
Theorem \ref{theo.main2}.
\begin{lem}
\label{lem.comparison2}
{\rm
If $\mu_m$ is the $m$-th factorial moment of $S_n$ then
the following lower bound holds true for all $m\geq 1$:
\begin{eqnarray*}
\mu_m
&\geq&
\lambda^m-\frac{(m)_2}{2} \lambda_2 \lambda^{m-2}
+\frac{(m)_3}{3} \lambda_3 \lambda^{m-3}
+
\frac{(m)_4}{8} \lambda_2^2 \lambda^{m-4}
-\frac{(m)_4(m)_2}{48} \lambda_4 \lambda^{m-4}.
\end{eqnarray*}
}
\end{lem}
\begin{pr}{Proof}
The proof will be done by induction on $m$.
Denoting by $L_m$ the lower bound, one easily checks that
the inequality $\mu_m-L_m\geq 0$ holds, with equality, for $m=1,2,3,4$. For $m=5$,
$\mu_5-L_5=4(6\lambda_5+5\lambda\lambda_4-5\lambda_2\lambda_3)$
and this quantity is positive
because $\lambda\lambda_4-\lambda_2\lambda_3=\lambda^5 \mbox{\rm \hspace*{.2ex}Cov\hspace*{.2ex}}(Y,Y^2)\geq 0$,
since $Y\geq 0$ and, thus, both $Y$ and $Y^2$ are increasing functions
of $Y$ (recall that the random variable $Y\in[0,1]$ is defined in the
proof of Lemma \ref{lem.positivity}).
Assuming that the bound is true for some $m\geq 5$ we get
\begin{eqnarray*}
\mu_{m+1}& \geq&
\sum_{i=1}^n p_i \left\{(\lambda-p_i)^m-\frac{(m)_2}{2}
(\lambda_2-p_i^2) (\lambda-p_i)^{m-2}
+\frac{(m)_3}{3}
(\lambda_3-p_i^3) (\lambda-p_i)^{m-3}
\right.
\\
&&\hspace*{1ex}
\left.
+\frac{(m)_4}{8}
(\lambda_2-p_i^2)^2 (\lambda-p_i)^{m-4}
-\frac{(m)_2 (m)_4}{48}
(\lambda_4-p_i^4) (\lambda-p_i)^{m-4}
\right\}
=S, \ {\rm say}.
\end{eqnarray*}
On expanding the binomial terms and interchanging the order of summation
we can express $S$ as $\sum_{j=0}^{9} S_j$ where
\begin{eqnarray*}
S_1& =& \mbox{$
\lambda^{m+1}-m\lambda_2\lambda^{m-1}
+\frac{(m)_2}{2}\lambda_3\lambda^{m-2}
-\frac{(m)_3}{6}\lambda_4\lambda^{m-3}+\theta_4(m,1)$},
\\
S_2& =&\mbox{$
-\frac{(m)_2}{2}\lambda_2\left(\lambda^{m-1}-
(m-2)
\lambda_2 \lambda^{m-3}+\frac{(m-2)_2}{2}
\lambda_3 \lambda^{m-4}-\theta_3(m-2,1) \right)$},
\\
S_3& =&\mbox{$
\frac{(m)_2}{2}
\left(\lambda_3 \lambda^{m-2}-(m-2)\lambda_4
\lambda^{m-3}+\theta_2(m-2,3)\right)$},
\\
S_4& =&\mbox{$
\frac{(m)_3}{3}\lambda_3
\left( \lambda^{m-2}-(m-3)\lambda_2\lambda^{m-4}
+\theta_2(m-3,1)\right)$},
\\
S_5& =&\mbox{$
-\frac{(m)_3}{3}\left(\lambda_4\lambda^{m-3}-\theta_1(m-3,4)\right),
$}
\\
S_6& =&\mbox{$
\frac{(m)_4}{8}\lambda_2^2
\left(\lambda^{m-3}- (m-4)\lambda_2\lambda^{m-5}
+\theta_2(m-4,1)\right),
$}
\\
S_7& =&\mbox{$
-\frac{(m)_4}{8}2\lambda_2
\left(\lambda_3\lambda^{m-4}-\theta_1(m-4,3)\right),
$}
\\
S_8& =&\mbox{$
\frac{(m)_4}{8}
\theta_0(m-4,5),
$}
\\
S_9& =&\mbox{$
-\frac{(m)_4(m)_2}{48}\lambda_4
\left(\lambda^{m-3}-\theta_1(m-4,1)\right),
$}
\\
S_{0}& =&\mbox{$
\frac{(m)_4(m)_2}{48}
\theta_0(m-4,5).
$}
\end{eqnarray*}
Hence, due to the nonnegativity of $\theta$'s, we conclude that
\begin{eqnarray*}
S&\geq& \mbox{$
\lambda^{m+1}-\frac{(m+1)_2}{2}\lambda_2\lambda^{m-1}
+\frac{(m+1)_3}{3}\lambda_3\lambda^{m-2}
+\frac{(m+1)_4}{8}\lambda_2^2\lambda^{m-3}$} \\&&
\mbox{$
-\left((m)_3+\frac{(m)_4(m)_2}{48}\right) \lambda_4\lambda^{m-3}
-\frac{5(m)_4}{6}\lambda_2\lambda_3\lambda^{m-4}
-\frac{(m)_5}{8} \lambda_2^3\lambda^{m-5}.$}
\end{eqnarray*}
Therefore, the induction step will be proved
if we show that, for all $m\geq 5$,
\begin{eqnarray*}
\mbox{$
\small
-\left((m)_3+\frac{(m)_4(m)_2}{48}\right) \lambda_4\lambda^{m-3}
-\frac{5(m)_4}{6}\lambda_2\lambda_3\lambda^{m-4}
-\frac{(m)_5}{8} \lambda_2^3\lambda^{m-5}
$}
&&
\\
\mbox{$
\geq
-\frac{(m+1)_4(m+1)_2}{48} \lambda_4\lambda^{m-3}.
$}
&&
\end{eqnarray*}
Multiplying both sides by $\frac{-48}{(m)_4\lambda^{m-5}}$ and collecting terms, we
arrive at the equivalent inequality ($m=4,5,\ldots$)
\[
20\lambda_2\lambda_3\lambda+3(m-4)\lambda_2^3\leq
(3m+8)\lambda_4\lambda^2.
\]
From Cauchy's inequality, $\lambda_2^2\leq
\lambda_3\lambda$, and hence,
$\lambda_2^3\leq \lambda_2\lambda_3\lambda$. Therefore,
\[
20\lambda_2\lambda_3\lambda+3(m-4)\lambda_2^3\leq
(3m+8)\lambda_2\lambda_3\lambda,
\]
and it suffices to show that $\lambda_2\lambda_3\lambda\leq \lambda_4\lambda^2$, i.e.,
$\lambda_2\lambda_3\leq \lambda\lambda_4$.
But this reduces to $\mbox{\rm \hspace*{.2ex}Cov\hspace*{.2ex}}(Y,Y^2)\geq 0$, which is certainly true, and
the lemma is proved.
\end{pr}
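The lower bound of this lemma can likewise be sanity-checked numerically by enumerating a small Poisson-binomial distribution (a numerical illustration only; the probability vector is an arbitrary choice):

```python
from itertools import product
from math import prod

def mu(p, m):
    # m-th factorial moment E[(S_n)_m] of the Poisson-binomial law,
    # by enumeration of the 2^n outcomes
    tot = 0.0
    for bits in product([0, 1], repeat=len(p)):
        w = prod(pi if b else 1 - pi for pi, b in zip(p, bits))
        s = sum(bits)
        tot += w * prod(s - j for j in range(m))
    return tot

def falling(m, k):
    return prod(m - j for j in range(k))

p = [0.3, 0.1, 0.25, 0.2, 0.05]     # arbitrary illustrative choice
lam = sum(p)
l2, l3, l4 = (sum(x**k for x in p) for k in (2, 3, 4))

for m in range(1, 9):
    lower = (lam**m - falling(m, 2) / 2 * l2 * lam**(m - 2)
             + falling(m, 3) / 3 * l3 * lam**(m - 3)
             + falling(m, 4) / 8 * l2**2 * lam**(m - 4)
             - falling(m, 4) * falling(m, 2) / 48 * l4 * lam**(m - 4))
    assert mu(p, m) >= lower - 1e-12
```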
\noindent
\begin{pr}{Proof of Theorem \ref{theo.main2}} From Lemma \ref{lem.moments.Poisson2}
we have
\[
\mu_m(\phi_{3})=\lambda^m-\frac{(m)_2}{2}
\lambda_2\lambda^{m-2}+\frac{(m)_3}{3}
\lambda_3\lambda^{m-3}+\frac{(m)_4}{8}
\lambda_2^2\lambda^{m-4},
\]
and Lemmas \ref{lem.comparison}, \ref{lem.comparison2}, show that
\[
0\leq \mu_m(\phi_{3})-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}(S_n)_m\leq \frac{(m)_4(m)_2}{48}
\lambda_4\lambda^{m-4}, \ \
m=1,2,\ldots \ .
\]
Thus,
\[
d_2(f_n,\phi_{3})=\frac{1}{2}\sum_{m=1}^\infty
\frac{2^m}{m!}\left\{\mu_m(\phi_3)-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}(S_n)_m\right\}
\leq \frac{1}{2}\lambda_4\sum_{m=0}^{\infty}\frac{2^{m+4}
\lambda^{m}}{m!}\frac{(m+4)_2}{48},
\]
and in view of Theorem \ref{theo.d2}(b), the proof is complete.
\end{pr}
\section{Concluding remarks and examples}
\label{sec.last}
\begin{exam}
\label{exam.binomial}
{\rm
Assume that $n>\lambda$.
The classical upper bound for the total variation distance between the Poisson$(\lambda)$
and Bin$(n,\lambda/n)$ distributions reads as
\[
\mbox{$
\frac{1}{2}\sum_{k=0}^\infty\left|{n\choose k}
\Big(\frac{\lambda}{n}\Big)^k\Big(1-\frac{\lambda}{n}\Big)^{n-k} -e^{-\lambda}
\frac{\lambda^k}{k!}
\right|\leq \lambda(1-e^{-\lambda})n^{-1}.
$}
\]
The second order approximation of Theorem \ref{theo.main} implies the bound
\[
\mbox{$
\frac{1}{2}\sum_{k=0}^\infty\left|{n\choose k}
\Big(\frac{\lambda}{n}\Big)^k\Big(1-\frac{\lambda}{n}\Big)^{n-k} -e^{-\lambda}
\frac{\lambda^k}{k!}
\Big( 1-\frac{(k-\lambda)^2-k}{2n}\Big)
\right|\leq \lambda^3\Big(\frac{4}{3}+\lambda\Big)e^{2\lambda}n^{-2},
$}
\]
while the third order corrected Poisson approximation of
Theorem \ref{theo.main2} yields
the bound (here $\lambda_2=\lambda^2/n$,
$\lambda_3=\lambda^3/n^2$, $\lambda_4=\lambda^4/n^3$)
\begin{eqnarray*}
\mbox{$
\frac{1}{2}\sum_{k=0}^\infty\left|{n\choose k}
\Big(\frac{\lambda}{n}\Big)^k\Big(1-\frac{\lambda}{n}\Big)^{n-k} -e^{-\lambda}
\frac{\lambda^k}{k!}\Big( 1-\frac{(k-\lambda)^2-k}{2n}
+\frac{a_0+a_1 k+a_2 (k)_2+a_3 (k)_3+a_4 (k)_4}{24 n^2}\Big)
\right|
$}
\\
\mbox{$
\leq \frac{2}{3}\lambda^4(\lambda^2+4\lambda+3)e^{2\lambda} n^{-3},
$}
\end{eqnarray*}
where $a_0=\lambda^3(3\lambda-8)$, $a_1=12\lambda^2(\lambda-2)$,
$a_2=6\lambda(3\lambda-4)$, $a_3=4(2-3\lambda)$, $a_4=3$.
}
\end{exam}
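The three decay rates in this example can be observed numerically. The sketch below (an illustration under the stated setup, with $\lambda=1$ and the series truncated at $K=60$, where the tail mass is negligible) computes the first two distances and checks that doubling $n$ roughly halves the plain Poisson error and quarters the second order corrected one:

```python
from math import comb, exp, factorial

lam, K = 1.0, 60                    # K truncates the series; the tail is negligible

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k) if k <= n else 0.0

def poisson_pmf(k):
    return exp(-lam) * lam**k / factorial(k)

def dist_plain(n):
    # half L^1 distance between Bin(n, lam/n) and Poisson(lam)
    p = lam / n
    return 0.5 * sum(abs(binom_pmf(n, p, k) - poisson_pmf(k)) for k in range(K))

def dist_second(n):
    # same, against the second order corrected Poisson approximation
    p = lam / n
    return 0.5 * sum(abs(binom_pmf(n, p, k)
                         - poisson_pmf(k) * (1 - ((k - lam)**2 - k) / (2 * n)))
                     for k in range(K))

# doubling n roughly halves the first distance and quarters the second
assert dist_plain(200) / dist_plain(100) < 0.7
assert dist_second(200) / dist_second(100) < 0.35
```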
\begin{REM}
\label{rem.g3tilde}
{\rm
Instead of $\phi_3$ of Theorem \ref{theo.main2}, it may appear
more natural to consider a simpler
version of the corrected Poisson distribution of order three, namely,
\[
\widetilde{\phi}_3(k)=e^{-\lambda}\frac{\lambda^k}{k!}
\left(1-\frac{\lambda_2}{2\lambda^2}P_2(k)
+\frac{\lambda_3}{3\lambda^3}P_3(k)\right).
\]
Its moments are given
by $\mu_m(\widetilde{\phi}_3)=\lambda^m-\frac{(m)_2}{2}\lambda_2
\lambda^{m-2}
+\frac{(m)_3}{3}\lambda_3\lambda^{m-3}$, and thus,
$\mu_m=\mu_m(\widetilde{\phi}_3)$, for $m=0,1,2,3$.
Moreover, the function $\widetilde{\phi}_3$
is the unique member of the parametric class
\[
e^{-\lambda}\frac{\lambda^k}{k!}
\Big(1-\gamma_1 P_1(k)-\gamma_2 P_2(k)-\gamma_3 P_3(k)\Big)
\]
with moments equal to those of $S_n$ up to order three, and one
might
expect a third order approximation, comparable to
that of Theorem \ref{theo.main2} for $\phi_3$. However,
surprisingly enough, the approximation that $\widetilde{\phi}_3$
attains is only of order $n^{-2}$, and the addition of the polynomial
$P_3$ alone does not seem helpful. To see this, it suffices to calculate
(in the case of equal $p_i=\lambda/n$) the difference
\[
f_n(0)-\widetilde{\phi}_3(0)
=\Big(1-\frac{\lambda}{n}\Big)^n-e^{-\lambda}
\Big(
1-\frac{\lambda^2}{2n}-\frac{\lambda^3}{3n^2}\Big)=\frac{e^{-\lambda} \lambda^4}{8 n^2}+o(n^{-2}), \ \ \mbox{ as } \ n\to\infty.
\]
}
\end{REM}
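The expansion at the end of this remark is easy to verify numerically; the sketch below (with $\lambda=1$) checks that $n^2\big(f_n(0)-\widetilde{\phi}_3(0)\big)$ indeed approaches $e^{-\lambda}\lambda^4/8$:

```python
from math import exp

lam = 1.0

def diff(n):
    # f_n(0) - phi_3_tilde(0) in the equal-probabilities case p_i = lam / n
    return (1 - lam / n)**n - exp(-lam) * (1 - lam**2 / (2 * n)
                                             - lam**3 / (3 * n**2))

limit = exp(-lam) * lam**4 / 8
# n^2 * diff(n) approaches e^{-lam} * lam^4 / 8
assert abs(diff(2000) * 2000**2 - limit) < 0.01 * limit
```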
\begin{exam}
\label{exam.Wasserstein}
{\rm
The distance $d_2$
is helpful in proving stronger results, even for the
ordinary
Poisson approximation. For example, the
Gini-Kantorovich, or Wasserstein,
or transportation distance (see Novak (2019)) is defined by
\begin{eqnarray*}
d_W(S_n,Z_\lambda)
&:=&
\inf \mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}|S_n-Z_\lambda|
\\
&=&
\mbox{$
\sup_h \left|\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} h(S_n)-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} h(Z_\lambda)\right|
$}
\\
&=&
\mbox{$
\sum_{m=1}^\infty \Big|\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(S_n\geq m)-\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}P\hspace*{.2ex}}(Z_\lambda\geq m)\Big|,
$}
\end{eqnarray*}
where the infimum is taken over all couplings of $S_n$ and $Z_{\lambda}$
and the supremum over all functions $h:\{0,1,\ldots\}\to\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}R\hspace*{.2ex}}$ with $|h(m+1)-h(m)|\leq 1$ for all $m\geq 0$.
For $g_1,g_2\in{\cal F}_2$ we consider a distance very
similar to $d_2$, namely,
\[
\widetilde{d}_2(g_1,g_2):=\sum_{m=1}^{\infty}
\frac{2^{m-1}}{(m-1)!} \Big|\mu_m(g_1)-\mu_m(g_2)\Big|.
\]
Then, $d_W(S_n,Z_\lambda)\leq \widetilde{d}_2(S_n,Z_\lambda)$.
Indeed, a straightforward computation, using Theorem \ref{theo.d2}(a),
yields
\begin{eqnarray*}
d_W(S_n,Z_\lambda)
&\leq&
\sum_{m=1}^\infty \sum_{k=m}^\infty \big|f_n(k)-g_{\lambda}(k)\big|
=
\sum_{k=1}^\infty k\big|f_n(k)-g_{\lambda}(k)\big|
\\
&=&
\sum_{k=1}^\infty \frac{1}{(k-1)!}\left|
\sum_{m=k}^\infty\frac{(-1)^{m-k}}{(m-k)!}
(\mu_m-\lambda^m)\right|
\leq
\sum_{k=1}^\infty \frac{1}{(k-1)!}
\sum_{m=k}^\infty\frac{\left|\mu_m-\lambda^m\right|}{(m-k)!}
\\
&=&
\sum_{m=1}^\infty \frac{\left|\mu_m-\lambda^m\right|}{(m-1)!}
\sum_{k=1}^m\frac{(m-1)!}{(k-1)! (m-k)!}=
\widetilde{d}_2(S_n,Z_\lambda).
\end{eqnarray*}
On the other hand, using the lower bound from Lemma \ref{lem.comparison}
and the simple fact that $\mu_m\leq \lambda^m$ for all $m$
(the proof is omitted) we obtain
$0\leq \lambda^m-\mu_m\leq \frac{(m)_2}{2}\lambda_2\lambda^{m-2}$,
\[
d_2(S_n,Z_\lambda)=\frac{1}{2}\sum_{m=1}^\infty \frac{2^m}{m!}
\left\{ \lambda^m-\mu_m \right\}
\leq
\frac{1}{2}\sum_{m=1}^\infty \frac{2^m}{m!}
\frac{(m)_2}{2}\lambda_2\lambda^{m-2}=e^{2\lambda} \lambda_2,
\]
and similarly, $\widetilde{d}_2(S_n,Z_\lambda)\leq 2(1+\lambda)e^{2\lambda}\lambda_2$. In this way we
have produced
a one-line
proof of the inequality
$d_W(S_n,Z_\lambda)\leq 2(1+\lambda)e^{2\lambda}\sum_{i=1}^n p_i^2$.
}
\end{exam}
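The final inequality of this example can be checked by brute force on a small instance, using the representation of $d_W$ through the tail probabilities (a numerical illustration; the probability vector is an arbitrary choice):

```python
from itertools import product
from math import exp, factorial, prod

p = [0.1, 0.2, 0.05]                # arbitrary illustrative choice
lam = sum(p)
lam2 = sum(x * x for x in p)

# pmf of S_n by enumeration of the 2^n outcomes
f = [0.0] * (len(p) + 1)
for bits in product([0, 1], repeat=len(p)):
    f[sum(bits)] += prod(pi if b else 1 - pi for pi, b in zip(p, bits))

K = 40                              # truncation of the Poisson tail
g = [exp(-lam) * lam**k / factorial(k) for k in range(K)]

# d_W = sum_m |P(S_n >= m) - P(Z_lam >= m)|
d_W = sum(abs(sum(f[m:]) - sum(g[m:])) for m in range(1, K))
assert d_W <= 2 * (1 + lam) * exp(2 * lam) * lam2
```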
\begin{REM}
\label{rem.Chen}
{\rm
Chen (1974) proved a stronger version
of Poisson convergence
for $S_n$. He showed that if
$\sum_i p_i=\lambda$ and $\max_i\{p_i\}\to 0$
(equivalently, $\sum_i p_i^2\to 0$)
then, as $n\to\infty$
\[
\sum_{k=0}^\infty h(k) \big|f_n(k)-g_\lambda(k)\big|
\to 0
\]
for any function $h\geq 0$ for which $\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} h(Z_\lambda)<\infty$.
This mode of convergence is very strong; see Wang (1991).
It is worth pointing out that the moment inequality
of Lemma \ref{lem.comparison} offers a one-line proof
of a slightly weaker result, namely, under the restriction
that $h\geq 0$ and $\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} Z_\lambda^2 h(Z_\lambda)<\infty$
(this weaker mode is strong enough for all practical
purposes). Indeed, we have
\begin{eqnarray*}
\sum_{k=0}^\infty h(k) \big|f_n(k)-g_\lambda(k)\big|
&=&
\sum_{k=0}^\infty \frac{h(k)}{k!}
\left|\sum_{m=k}^\infty \frac{(-1)^{m-k}}{(m-k)!}
\big(\mu_m-\lambda^m\big)\right|
\\
&\leq&
\sum_{k=0}^\infty \frac{h(k)}{k!}
\sum_{m=k}^\infty \frac{\lambda^m-\mu_m}{(m-k)!}
\\
&\leq&
\frac{\lambda_2}{2}\sum_{k=0}^\infty \frac{h(k)}{k!}
\sum_{m=k}^\infty \frac{m(m-1)\lambda^{m-2}}{(m-k)!}
\\
&=&
\left(
\frac{e^{2\lambda}}{2\lambda^2}\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}\left\{
h(Z_\lambda)\left(Z_{\lambda}^2+(2\lambda-1)Z_{\lambda}+\lambda^2
\right)\right\}\right) \sum_{i=1}^n p_i^2\to 0.
\end{eqnarray*}
}
\end{REM}
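The interchange of summations behind the last display can be sanity-checked numerically: up to the common factor $\lambda_2/2$, the double sum equals $(e^{2\lambda}/\lambda^2)\,E\{h(Z_\lambda)(Z_\lambda^2+(2\lambda-1)Z_\lambda+\lambda^2)\}$. A truncated sketch (with $\lambda=0.8$ and a few test functions $h$):

```python
from math import exp, factorial

lam, K = 0.8, 40                    # K truncates both series; tails are negligible

def double_sum(h):
    # sum_k h(k)/k! * sum_{m >= k} m(m-1) lam^{m-2} / (m-k)!
    return sum(h(k) / factorial(k)
               * sum(m * (m - 1) * lam**(m - 2) / factorial(m - k)
                     for m in range(max(k, 2), K))
               for k in range(K))

def closed_form(h):
    # (e^{2 lam} / lam^2) * E[ h(Z)(Z^2 + (2 lam - 1) Z + lam^2) ], Z ~ Poisson(lam)
    expectation = sum(exp(-lam) * lam**k / factorial(k)
                      * h(k) * (k**2 + (2 * lam - 1) * k + lam**2)
                      for k in range(K))
    return exp(2 * lam) / lam**2 * expectation

for h in (lambda k: 1.0, lambda k: float(k), lambda k: float(k * (k - 1))):
    assert abs(double_sum(h) - closed_form(h)) < 1e-8
```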
\begin{REM}
\label{rem.exact.d2}
{\rm
From the covariance identity we have
\[
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} \left\{ P_j(Z_\lambda)(Z_\lambda)_m\right\}
=
\lambda^j \mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} \left\{\Delta^j (Z_\lambda)_m\right\}
=
\lambda^j (m)_j\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}} \left\{(Z_\lambda)_{m-j}\right\}
=\lambda^m (m)_j.
\]
Hence, one advantage of considering $d_2$
in connection with $\phi_{\nu}$
in the present problem is the fact that the
corrected Poisson distribution has a closed-form factorial
moment sequence,
\[
\mbox{$
\mu_m(\phi_\nu)=\lambda^m\left(1-\sum_{j=2}^{2\nu-2}
\gamma_j \ (m)_j\right).
$}
\]
Hence, whenever $\mu_m\leq \mu_m(\phi_{\nu})$ for all $m$, or
$\mu_m\geq \mu_m(\phi_{\nu})$ for all $m$ (see Lemmas
\ref{lem.comparison}, \ref{lem.comparison2}), we can derive
a simple exact formula for $d_2$ as follows:
\[
\mbox{$
d_2(f_n,\phi_\nu)=\frac{1}{2}\left|
\sum_{m=0}^\infty \frac{2^m}{m!}\Big(\mu_m-\mu_m(\phi_\nu)\Big)\right|
=
\frac{1}{2}\left|
\mbox{\rm \hspace*{.2ex}I\hspace{-.5ex}E\hspace*{.2ex}}\left\{ 3^{S_n}\right\} -
e^{2\lambda}\left(1-\sum_{j=2}^{2\nu-2}\gamma_j
\
(2\lambda)^j\right)\right|.
$}
\]
Therefore, from the independence of the indicators $I_i$ we get
\[
\mbox{$
d_2(f_n,\phi_\nu)=\frac{1}{2}\left|
\prod_{i=1}^n (1+2 p_i)-
e^{2\lambda}\left(1-\sum_{j=2}^{2\nu-2}\gamma_j
\
(2\lambda)^j\right)\right|.
$}
\]
}
\end{REM}
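The identity $E\,3^{S_n}=\prod_{i=1}^n(1+2p_i)$ used in the last display follows from the independence of the indicators and can be checked by enumeration (a numerical illustration; the probability vector is an arbitrary choice):

```python
from itertools import product
from math import prod

p = [0.3, 0.1, 0.25, 0.4]           # arbitrary illustrative choice
e3 = 0.0
for bits in product([0, 1], repeat=len(p)):
    w = prod(pi if b else 1 - pi for pi, b in zip(p, bits))
    e3 += w * 3 ** sum(bits)

# E 3^{S_n} = sum_m 2^m E(S_n)_m / m! = prod_i (1 + 2 p_i)
assert abs(e3 - prod(1 + 2 * x for x in p)) < 1e-12
```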
\begin{exam}
\label{exam.binomial.exact}
{\rm
It is possible to obtain very accurate closed-form
bounds of any order for corrected Poisson approximations
in the simple case of the ordinary binomial distribution, i.e.,
under the setup of Example \ref{exam.binomial}
($n>\lambda$, $p_i=\lambda/n$).
We shall present some elementary results without proofs.
We have $\mu_m=(n)_m\lambda^m/n^m$ and
\[
(n)_m=n^m-A_1n^{m-1}+A_2n^{m-2}+\cdots+(-1)^{m-1} A_{m-1} \ n
\]
where $A_k=A_k(m-1)=\sum_{1\leq i_1<\cdots<i_k\leq m-1}\prod_{j=1}^k i_j$
are the well-known unsigned Stirling numbers of the first kind,
see, e.g., Sibuya (1988);
the usual notation is $S(m,k)=(-1)^{m-k} A_{m-k}$
(for the signed numbers) and
$\Big[\begin{array}{c}m
\\ k\end{array}\Big]
=(-1)^{m-k}S(m,k)=|S(m,k)|$ for the unsigned ones.
It can be verified that for any fixed $\nu\geq 1$ and $n\geq 1$,
\begin{eqnarray*}
(n)_m= n^m-A_1n^{m-1}+A_2n^{m-2}+\cdots+(-1)^{\nu-1} A_{\nu-1}
n^{m-\nu+1}+(-1)^\nu R_\nu n^{m-\nu},
\end{eqnarray*}
where the remainder $0\leq R_\nu\leq A_{\nu}$.
Also, the first few values of $A$'s are
\begin{eqnarray*}
&&
\mbox{$
A_1={m\choose 2}, \ A_2={m\choose 3}\frac{3m-1}{4}, \
A_3={m\choose 4}\frac{(m)_2}{2}, \
A_4={m\choose 5}\frac{15 m^3-30 m^2+5 m+2}{48},
$}
\\
&&
\mbox{$
A_5={m\choose 6}\frac{m(m-1)(3m^2-7m-2)}{16}, \
A_6={m\choose 7}\frac{63m^5-315m^4+315m^3+91 m^2-42m-16}{576},
$}
\end{eqnarray*}
and in general, $A_k$ is a polynomial of degree $2k$ in $m$ containing
the binomial factor ${m\choose k+1}$ (so that $A_k=0$ for $k\geq m$).
Our strategy in choosing a suitable $\phi_\nu$
($\nu\geq 2$, fixed) is quite simple: Equate the factorial moments
$\mu_m(\phi_{\nu})$ of order
up to $m=\nu$ with the truncated part of $\mu_m$, ignoring the
remainder $R_\nu$. In this way, we obtain
the equations
\[
\mbox{$
\lambda^m\left(
1-\frac{A_1}{n}+\frac{A_2}{n^{2}}+\cdots+(-1)^{\nu-1}
\frac{A_{\nu-1}}{n^{\nu-1}}
\right)
=\mu_m(\phi_\nu)=\lambda^m
\left( 1- \sum_{j=2}^{2\nu-2} \gamma_j(\nu) (m)_j\right).
$}
\]
After the obvious cancellation of $\lambda^m$,
both sides are polynomials in $m$ of degree $2\nu-2$, and, by
solving a relatively simple $(2\nu-3)\times(2\nu-3)$ linear system
(with a triangular matrix of coefficients) we
obtain the constants $\gamma_j(\nu)$,
needed for the construction of the suitable
corrected Poisson approximation
of order $\nu$,
\[
\mbox{$
\phi_{\nu}(k)=e^{-\lambda}\frac{\lambda^k}{k!}
\left(1-\sum_{j=2}^{2\nu -2}\gamma_j(\nu)P_j(k)\right).
$}
\]
Some values of $\gamma_j(\nu)$ are shown in the following table.
\noindent
{\small
\begin{tabular}{r||llllll}
{\small $j$} & {\small $\nu=2$} &
{\small $\nu=3$} &
{\small $\nu=4$} &
{\small $\nu=5$} &
{\small$\nu=6$} &
{\small $\nu=7$}
\\
\hline
$2$
& $\frac{1}{2n}$
& $\frac{1}{2n}$
& $\frac{1}{2n}$
& $\frac{1}{2n}$
& $\frac{1}{2n}$
& $\frac{1}{2n}$
\\
$3$
&
& $\frac{-1}{3n^2}$
& $\frac{-1}{3n^2}$
& $\frac{-1}{3n^2}$
& $\frac{-1}{3n^2}$
& $\frac{-1}{3n^2}$
\\
$4$
&
& $\frac{-1}{8n^2}$
& $\frac{-1}{8n^2} + \frac{1}{4n^3}$
& $\frac{-1}{8n^2} + \frac{1}{4n^3}$
& $\frac{-1}{8n^2} + \frac{1}{4n^3}$
& $\frac{-1}{8n^2} + \frac{1}{4n^3}$
\\
$5$ & & &
$ \frac{1}{6n^3}$
&
$ \frac{1}{6n^3} - \frac{1}{5n^4}$
&
$ \frac{1}{6n^3} - \frac{1}{5n^4}$
&
$ \frac{1}{6n^3} - \frac{1}{5n^4}$
\\
$6$ & & &
$\frac{1}{48n^3}$
&
$\frac{1}{48n^3} - \frac{13}{72n^4}$
&
$\frac{1}{48n^3} - \frac{13}{72n^4} + \frac{1}{6n^5}$
&
$\frac{1}{48n^3} - \frac{13}{72n^4} + \frac{1}{6n^5}$
\\
$7$ & & & &
$\frac{-1}{24n^4}$
&
$\frac{-1}{24n^4} + \frac{11}{60 n^5}$
&
$\frac{-1}{24n^4} + \frac{11}{60 n^5} - \frac{1}{7n^6}$
\\
$8$ & & & &
$\frac{-1}{384 n^4}$
&
$\frac{-1}{384 n^4} + \frac{17}{388n^5}$
&
$\frac{-1}{384 n^4} + \frac{17}{388n^5} - \frac{29}{160 n^6}$
\\
$9$ & & & & &
$\frac{1}{144 n^5}$
&
$\frac{1}{144 n^5} - \frac{59}{810 n^6}$
\\
$10$ & & & & &
$\frac{1}{3840n^5}$
&
$\frac{1}{3840n^5} - \frac{7}{576 n^6}$
\\
$11$ & & & & & &
$\frac{-1}{1152 n^6}$
\\
$12$
& & & & & &
$\frac{-1}{46080 n^6}$
\end{tabular}
}
{\small {\bf Table 1.} \rm Values $\gamma_j(\nu)$ needed in the correction
$\phi_{\nu}$ as coefficients of orthogonal polynomials}.
\noindent
Since the remainder $R_{\nu}$ is nonnegative and bounded by
$A_{\nu}$ (a polynomial in $m$ of degree $2\nu$),
it follows that
\[
0\leq (-1)^{\nu}\Big(\mu_m-\mu_m(\phi_\nu)\Big)
\leq A_\nu \frac{\lambda^m}{n^\nu}, \ \ m=1,2,\ldots \ .
\]
Thus,
we obtain the exact formula (cf.\ Remark
\ref{rem.exact.d2})
\[
d_2(\mbox{Bin}(n,\lambda/n),\phi_\nu)=\frac{1}{2}
\Big|\Big(1+\frac{2\lambda}{n}\Big)^n-
e^{2\lambda}\Big(1-\sum_{j=2}^{2\nu-2}\gamma_j(\nu)
(2\lambda)^j\Big)\Big|
\]
and, also, the inequality
\[
d_2(\mbox{Bin}(n,\lambda/n),\phi_\nu)\leq C_{\nu}(\lambda) n^{-\nu},
\ \ \ \
C_\nu(\lambda)=\frac{1}{2}\sum_{m=\nu}^{\infty}
\frac{(2\lambda)^m}{m!} A_{\nu}(m-1).
\]
Certainly, these results hold also for $\nu=1$; then
$\phi_1=g_{\lambda}$
is the usual Poisson$(\lambda)$ distribution
and the constant is given by $C_1(\lambda)=\lambda^2 e^{2\lambda}$.
By using the recurrence relation of Stirling numbers, see, e.g.,
Sibuya (1988), it can be verified that
$C_\nu(\lambda)=\lambda^{\nu+1}e^{2\lambda}Q_{\nu-1}(\lambda)$
where $Q_{\nu}$ is a polynomial of degree $\nu$, $Q_0=1$, and
$Q_{\nu}$ satisfies the recurrent differential equation
\[
\lambda Q_{\nu}'(\lambda)+(\nu+2)Q_{\nu}(\lambda)=
2\lambda Q_{\nu-1}'(\lambda)+2(\nu+1+2\lambda)Q_{\nu-1}(\lambda), \ \
\nu=1,2,\ldots;
\]
alternatively, one can derive the sequence $Q_\nu$ from the
integral recurrence
\[
Q_{\nu+1}(\lambda)=
2 Q_{\nu}(\lambda)+2\int_0^1 y^{\nu+2} (2\lambda y-1)
Q_{\nu}(\lambda y) dy, \ \
\nu=0,1,\ldots \ .
\]
The first few polynomials are
$Q_1=\frac{4}{3}+\lambda$,
$Q_2=2+\frac{8}{3}\lambda+\frac{2}{3}\lambda^2$,
$Q_3=\frac{16}{5}+\frac{52}{9}\lambda+\frac{8}{3}\lambda^2
+\frac{1}{3}\lambda^3$,
$Q_4=\frac{16}{3} +\frac{176}{15}\lambda
+ \frac{68}{9}\lambda^2 +\frac{16}{9} \lambda^3 +
\frac{2}{15} \lambda^4$. As a particular application, suppose we
wish to approximate the binomial distribution $\mbox{Bin}(n,\lambda/n)$
with a precision of order $n^{-4}$ for large $n$; that is, $\nu=4$.
The proposed corrected Poisson approximation depends on the constants
$\gamma_j(4)$, $j=2,\ldots,6$, given in column $\nu=4$
of Table 1, i.e.,
\[
\mbox{$
\phi_4(k)=e^{-\lambda}\frac{\lambda^k}{k!}
\left(1
-\frac{1}{2n} P_2(k)
+\frac{1}{3n^2} P_3(k)
+\Big(\frac{1}{8n^2}-\frac{1}{4n^3}\Big) P_4(k)
-\frac{1}{6n^3} P_5(k)
-\frac{1}{48n^3} P_6(k)
\right).
$}
\]
Then,
\[
d_2\Big(\mbox{Bin}(n,\lambda/n),\phi_4\Big)\leq C_{4}(\lambda) n^{-4}=
\lambda^5 \Big(\frac{16}{5}+\frac{52}{9}\lambda+\frac{8}{3}\lambda^2
+\frac{1}{3}\lambda^3\Big)e^{2\lambda} n^{-4}.
\]
Note that the bounds for $\nu=2$ and $\nu=3$
follow from Theorems \ref{theo.main} and
\ref{theo.main2}, respectively, when applied for equal $p_i$, and
the constants are exactly the same.
}
\end{exam}
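The integral recurrence for the polynomials $Q_\nu$ admits a simple implementation in exact rational arithmetic. Writing $Q_\nu(\lambda)=\sum_k q_k\lambda^k$ and integrating term by term gives the coefficient recursion $q'_k=2q_k-2q_k/(k+\nu+3)+4q_{k-1}/(k+\nu+3)$ (this rewriting is ours, not stated in the example); the sketch below reproduces the polynomials $Q_1,\ldots,Q_4$ listed above:

```python
from fractions import Fraction as F

def next_Q(q, nu):
    # coefficients of Q_{nu+1} from those of Q_nu via
    # Q_{nu+1}(l) = 2 Q_nu(l) + 2 int_0^1 y^{nu+2} (2 l y - 1) Q_nu(l y) dy,
    # integrating term by term
    out = []
    for k in range(len(q) + 1):
        c = F(0)
        if k < len(q):
            c += 2 * q[k] - 2 * q[k] / (k + nu + 3)
        if k >= 1:
            c += 4 * q[k - 1] / (k + nu + 3)
        out.append(c)
    return out

Q = [[F(1)]]                        # Q_0 = 1
for nu in range(4):
    Q.append(next_Q(Q[-1], nu))

assert Q[1] == [F(4, 3), F(1)]
assert Q[2] == [F(2), F(8, 3), F(2, 3)]
assert Q[3] == [F(16, 5), F(52, 9), F(8, 3), F(1, 3)]
assert Q[4] == [F(16, 3), F(176, 15), F(68, 9), F(16, 9), F(2, 15)]
```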
It would be desirable to obtain general results on
corrected Poisson approximations of $S_n$
of order higher than three
when the $p_i$'s are
unequal. A natural conjecture, suggested by Example
\ref{exam.binomial.exact} and Theorems
\ref{theo.main}, \ref{theo.main2},
is that for every $\nu\geq 2$,
there exist (unique) constants $\gamma_2,\ldots,\gamma_{2\nu-2}$,
depending only on $\lambda,\lambda_2,\ldots,\lambda_{\nu}$,
such that
\[
d_2(f_n,\phi_\nu)\leq
Q_{\nu-1}(\lambda)e^{2\lambda}\sum_{i=1}^n p_i^{\nu+1},
\]
where $Q_{\nu}$ are the polynomials defined
in Example \ref{exam.binomial.exact}.
\end{document}
\begin{document}
\title {Weak and strong approximations of reflected diffusions \\ via penalization methods} \author { {\small Leszek S\l omi\'nski} \\{\small Faculty of Mathematics and Computer Science, Nicolaus Copernicus University}\\ {\small ul. Chopina 12/18, 87--100 Toru\'n, Poland}} \date{} \maketitle \begin{abstract} We study approximations of reflected It\^o diffusions on convex subsets $D$ of ${{{\mathbb R}^d}}$ by solutions of stochastic differential equations with penalization terms. We assume that the diffusion coefficients are merely measurable (possibly discontinuous) functions. In the case of Lipschitz continuous coefficients we give the rate of ${{\mathbb L}}^p$ approximation for every $p\geq1$. We prove that if $D$ is a convex polyhedron then the rate is ${\cal O}\big((\frac{\ln n}n)^{1/2}\big)$, and in the general case the rate is ${\cal O}\big((\frac{\ln n}n)^{1/4}\big)$. \end{abstract}
\noindent {\bf Keywords} Reflected diffusions, penalization methods.\newline {\bf 2010 Mathematics Subject Classification} 60 H 20, 60 J 60, 60 F 15.\newline
\footnotetext{Research supported by Polish Ministry of Science and Higher Education Grant N N201 372 436.}
\nsubsection{Introduction} In the paper we study weak and strong approximations of solutions of $d$--dimensional stochastic differential equations (SDEs) \begin{equation} \label{eq1.1} X_t = x_0 + \int_0^t \sigma(s,X_s)\,dW_s+\int_0^t b(s,X_s)\,ds+K_t, \quad t\in{\Bbb R}^+ \end{equation}
with reflecting boundary condition on a convex domain $D$. Here $x_0\in\bar D=D\cup\partial D$, $X$
is a reflecting process on $\bar D$, $K$ is a bounded variation process with variation $|K|$ increasing only, when $X_t\in\partial D$, $W$ is a $ d$-dimensional standard Wiener process and $\sigma:{\Bbb R}^+\times{{{\mathbb R}^d}}\rightarrow{\Bbb R}^d\otimes{\Bbb R}^d$, $b:{\Bbb R}^+\times{{{\mathbb R}^d}}\rightarrow{\Bbb R}^d$ are measurable (possibly discontinuous) functions. Suppose that for $n\in\Bbb N$ we are given measurable coefficients $\sigma_n:{\Bbb R}^+\times{{{\mathbb R}^d}}\rightarrow{\Bbb R}^d\otimes{\Bbb R}^d$, $b_n:{\Bbb R}^+\times{{{\mathbb R}^d}}\rightarrow{\Bbb R}^d$ and a standard Wiener process $W^n$, and assume that there exists a solution $X^n$ of the following SDE with penalization term \begin{equation} \label{eq1.2} X^n_t = x_0 + \int_0^t \sigma_n(s,X^n_s)\,dW^n_s+\int_0^t b_n(s,X^n_s)\,ds -n\int_0^t(X^n_s-\Pi(X^n_s))ds,\quad t\in{\Bbb R}^+, \end{equation} where $\Pi(x)$ is the projection of $x$ on $\bar D$. The problem is to find conditions on $\{\sigma_n\},\,\{b_n\}$ ensuring convergence of $\{X^n\}$ to the reflected diffusion $X$, and secondly, to give the rate of such convergence.
Reflected diffusions have many applications, for instance in queueing
systems, seismic reliability analysis and finance (see e.g.
Asmussen \cite{as}, Dupuis and Ramanan \cite{dr}, Kr\'ee and Soize \cite{KS}, Pettersson \cite{p1}, Shepp and Shiryaev \cite{ss}). Therefore, the problem of practical approximations of solutions of (\ref{eq1.1}) is very important. Discrete penalization schemes based on the approximation of $X$ by solutions of equations with penalization term are well known (see e.g. Pettersson \cite{p2}, Kanagawa and Saisho \cite{ks}, Liu \cite{li}, S\l omi\'nski \cite{s4}).
Approximation of reflected diffusions via penalization methods was earlier considered by Menaldi \cite{me}, Menaldi and Robin \cite{mr}, Lions and Sznitman \cite {ls}, Lions, Menaldi and Sznitman \cite{lms}, Storm \cite{st}, Saisho and Tanaka \cite{st} and many others. Unfortunately, these authors have restricted themselves to the case of Lipschitz continuous coefficients.
In the present paper we consider measurable coefficients $\sigma_n,b_n$ such
that \begin{equation}
\label{eq1.3} \|\sigma_n(t,x)\|^2+|b_n(t,x)|^2\le C(1+|x|^2),\quad(t,x)\in{\Bbb R}^+\times{{{\mathbb R}^d}},\quad n\in{\mathbb N} \end{equation} for some $C>0$.
To prove convergence of $\{X^n\}$ to $X$ we first show that under (\ref{eq1.3}) the sequence $\{X^n\}$ is very close to the sequence $\{\Pi(X^n)\}$ and we observe that $\Pi(X^n)$ is a solution of some Skorokhod problem (for the definition of the Skorokhod problem see Section 2). Next, using a well developed theory of convergence of solutions of the Skorokhod problem (see e.g. \cite{as,rs1,s2,s4,ta}) we prove our main approximation results. Moreover, we are able to strengthen the rate of the convergence of the penalization method in the classical case of Lipschitz continuous coefficients $\sigma, b$.
The paper is organized as follows.
In Section 2 we estimate the ${\mathbb L}^p$ distance between $X^n$ and $\bar D$. Using some new estimates of the ${\mathbb L}^p$-modulus of continuity of It\^o processes from Fischer and Nappo \cite{fn} we prove that under (\ref{eq1.3}) for every $p\geq1$, $T>0$,
\[||\sup_{t\leq T}
\mbox{\rm dist}(X^n_t,\bar D)||_p={\cal O}\big((\frac{\ln n}n)^{1/2}\big),\]
where $||\cdot||_p=(E(\cdot)^p)^{1/p}$ denotes the usual ${\mathbb L}^p$ norm. We also show that $\{X^n\}$ is tight in $C({\Bbb R}^+,{\Bbb R}^d)$ and its weak limit points solve the Skorokhod problem.
Section 3 contains our main results concerning weak and strong approximations of solutions of (\ref{eq1.1}). We consider the set of conditions on coefficients from the paper by Rozkosz and S\l omi\'nski \cite{rs1} on stability of solutions of stochastic differential equations with reflecting boundary. Roughly speaking, we assume that $\{\sigma_n\}$, $\{b_n\}$ satisfy (\ref{eq1.3}) and $\{(\det \sigma_n\sigma^{\star}_n)^{-1}\}$ is locally uniformly integrable on some set ($\sigma^*_n$ denotes the matrix adjoint to $\sigma_n$). Then we show that if $\{\sigma_n\},\,\{b_n\}$ tend to $\sigma,\,b$ a.e. on the set mentioned above and uniformly on its complement, then $X^n\mathop{\longrightarrow}_{\cal D} X$, where $X$ denotes a unique weak solution of $(\ref{eq1.1})$. Under the additional assumptions that $W^n\mathop{\longrightarrow}_{\cal P} W$ and that (\ref{eq1.1}) is pathwise unique we show that $X^n\mathop{\longrightarrow}_{\cal P} X$. Thus, we generalize earlier approximation results to equations with possibly discontinuous and nonelliptic diffusion coefficients and discontinuous drift coefficients.
Section 4 is devoted to the classical case, where all coefficients are fixed Lipschitz continuous functions with respect to $x$ and all stochastic integrals are driven by the same Wiener process, i.e. $\sigma_n=\sigma$, $b_n=b$, and $W_n=W$, $n\in{\mathbb N}$, and there is $L>0$ such that \begin{equation}\label{eq1.4}
\|\sigma(t,x)-\sigma(t,y)\|^2+|b(t,x)-b(t,y)|^2\leq L|x-y|^2,\quad(t,x)\in{\Bbb R}^+\times{{{\mathbb R}^d}}. \end{equation}
In this case, if $D$ is a convex polyhedron, we prove that for every $p\geq1$, $T>0$, \[||\sup_{t\leq T}|X^n_t-X_t|||_p={\cal O}\big((\frac{\ln n}n)^{1/2}\big).\] For arbitrary convex domain we prove that for every $p\geq1$, $T>0$,
\[||\sup_{t\leq T}|X^n_t-X_t|||_p={\cal O}\big((\frac{\ln n}n)^{1/4}\big).\] Thus, we strengthen earlier results on the subject proved by Menaldi \cite{me}.
In the sequel we use the following notation. ${\Bbb R}^+=[0,\infty)$, $C({\Bbb R}^+,{\Bbb R}^d)$ is the space of continuous functions $x:{\Bbb R}^+\rightarrow {\Bbb R}^d$ equipped with the topology of uniform convergence on compact subsets of ${\Bbb R}^+$. For every $x\in C({\Bbb R}^+,{\Bbb R}^d)$, $\delta>0$, $T>0$ we set
$\omega_{\delta}(x,T)=\sup\{|x_t-x_s|;\,s,t\in[0,T],\,|s-t|\leq \delta\}$. ${\Bbb R}^d\otimes{\Bbb R}^d$ is the set of $(d\times d)$\,-\,matrices.
The abbreviation $a.e.$ means ``almost everywhere'' with respect to the Lebesgue measure; ``$\mathop{\longrightarrow}_{\cal D}$'' and ``$\mathop{\longrightarrow}_{\cal P}$'' denote convergence in law and in probability, respectively.
\section{General results} \label{sec2}
Let $D$ be a nonempty convex domain in ${\Bbb R}^d$ and let ${\cal N}_x$ denote the set of inward normal unit vectors at $x\in\partial D$. Note that ${\bf n}\in{\cal N}_{x}$ if and only if $\langle y-x,{\bf n}\rangle\geq 0$ for every $y\in\bar D$ (see e.g. \cite{me,st}). Moreover, if $\mbox{\rm dist}(x,\bar D)>0$, then \[
\frac{\Pi(x)-x}{|\Pi(x)-x|}\in{\cal N}_{\Pi(x)}.\] Let $Y$ be an $\{{\cal F}_t\}$\,-\,adapted process with continuous trajectories. We will say that a pair $(X,\,K)$ of $\{{\cal F}_t\}$\,-\,adapted processes is a solution of the Skorokhod problem associated with $Y$ if \begin{eqnarray*} & &X=Y+K,\\ & &X\mbox{ is $\bar D$\,-\,valued},\\ \label{eq8} & &K\mbox{ is a process with locally bounded variation such that $K_0=0$ and} \\
& &\quad K_t=\int_0^t{\bf n}_s\,d|K|_s,\qquad |K|_t=\int_0^t{\bf 1}_{\{X_s\in\partial D\}}\,d|K|_s, \quad t\in{\Bbb R}^+,\nonumber\\ & &\mbox{where }{\bf n}_s\in{\cal N}_{X_s}\mbox{ if $X_s\in\partial D$}. \end{eqnarray*} It is well known that for every process $Y$ with continuous trajectories there exists a unique solution $(X,K)$ of the Skorokhod problem associated with $Y$ (see e.g. \cite{ce} or \cite{lau}, where a more general case of c\`adl\`ag processes is considered). The theory of convergence of solutions of the Skorokhod problem is well developed (see e.g. \cite{as,rs1,s2,s4,ta}). Unfortunately, solutions of (\ref{eq1.2}) are not solutions of the Skorokhod problem and the problem of their convergence is more delicate.
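As a purely numerical illustration (ours, not part of the argument), consider $d=1$, $D=(0,\infty)$, $\sigma\equiv 1$, $b\equiv 0$, so that $\Pi(x)=\max(x,0)$ and $x-\Pi(x)=\min(x,0)$. For a continuous driver $Y$ with $Y_0\geq0$ the solution of the Skorokhod problem is given by the classical reflection formula $X_t=Y_t+\max(0,\sup_{s\leq t}(-Y_s))$, while the penalized equation (\ref{eq1.2}) becomes $dX^n_t=dY_t-n\min(X^n_t,0)\,dt$. The following sketch (function names are ours; for reproducibility the stochastic term is replaced by a smooth deterministic driver) compares an explicit Euler discretization of the penalized path with the reflected path:

```python
# Numerical sketch (ours, not from the paper): d = 1, D = (0, infinity),
# so Pi(x) = max(x, 0) and x - Pi(x) = min(x, 0).  The reflected path is
# computed from the explicit Skorokhod formula, the penalized path by
# explicit Euler for dX^n = dY - n*min(X^n, 0) dt, on the same driver Y.
import math

def sup_gap(n_pen, dt=1e-4, steps=20000, x0=0.5):
    """Sup-norm distance between the penalized and the reflected path."""
    t = 0.0
    y = x0
    running_min = y        # inf_{s<=t} Y_s, used in the reflection formula
    x_pen = x0
    gap = 0.0
    for _ in range(steps):
        dy = (3.0 * math.cos(3.0 * t) - 0.5) * dt   # driver Y_t = x0 + sin(3t) - t/2
        y += dy
        t += dt
        running_min = min(running_min, y)
        x_ref = y + max(0.0, -running_min)          # reflected path, stays >= 0
        x_pen += dy - n_pen * min(x_pen, 0.0) * dt  # explicit Euler, penalized path
        gap = max(gap, abs(x_pen - x_ref))
    return gap

print(sup_gap(20), sup_gap(2000))   # the gap shrinks as n grows
```

On this driver the sup-norm gap decreases as the penalization parameter $n$ grows, consistent with the convergence results established below.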
Suppose that we are given a filtered probability space $(\Omega^n,\,{\cal F}^n,\,\{{\cal F}^n_t\},\,P^n)$ satisfying the usual conditions and a $d$\,-\,dimensional $\{{\cal F}^n_t\}$\,-\,Wiener process $W^n$, $n\in\Bbb N$. Let $\{X^n\}$ denote the sequence of solutions of (\ref{eq1.2}). In the present paper we will use the simple fact that under (\ref{eq1.3}) there exists a sequence of solutions of the Skorokhod problem very close to the sequence $\{X^n\}$. Observe that we can rewrite (\ref{eq1.2}) into the form \[\Pi(X^n_t)=Y^n_t-n\int_0^t(X^n_s-\Pi(X^n_s))ds,\quad t\in{\mathbb R^+},\] where $Y^n_t=x_0-X^n_t+\Pi(X^n_t)+\int_0^t \sigma_n(s,X^n_s)\,dW^n_s+\int_0^t b_n(s,X^n_s)\,ds$, $t\in{\mathbb R^+}$,
$n\in{\mathbb N}$. Since $\Pi(X^n)\in\bar D$, $|K^n|$ increases only when $\Pi(X^n_t)\in\partial D$ and
\[K^n_t=n\int_0^t\frac{\Pi(X^n_s)-X^n_s}{|\Pi(X^n_s)-X^n_s|}|\Pi(X^n_s)-X^n_s|ds=\int_0^t{\bf n}_s\,d|K^n|_s,\quad t\in{\mathbb R^+},\] it is clear that $(\Pi(X^n), K^n)$ is a solution of the Skorokhod problem associated with $Y^n$, $ n\in{\mathbb N}$. One can also observe that
\[|X^n_t-\Pi(X^n_t)|=\mbox{\rm dist}(X^n_t,\bar D),\quad t\in{\mathbb R^+},\,n\in{\mathbb N}.\] \begin{theorem} \label{thm1} Assume that (\ref{eq1.3}) is satisfied. \begin{enumerate}
\item[{\bf (i)}] For every $p\geq1$,
$T>0$ there is $C>0$ such that \[||\sup_{t\leq T}\mbox{\rm dist}(X^n_t,\bar D)||_p\leq C(\frac{\ln n}{n})^{1/2},\quad n\in{\mathbb N}.\] \item[{\bf (ii)}]
$\{(X^n,K^n)\}_{n\in\Bbb N}$ is tight in $C({\Bbb R}^+,{\Bbb R}^{2d})$ and each of its weak limit points $(X,K)$ is a solution of the Skorokhod problem. \end{enumerate} \end{theorem} \begin{proof} (i) Fix $T>0$. First observe that by \cite[Corollary 2.4]{laus} and Gronwall's lemma, \begin{equation}\label{eq2.1}
\sup_nE\sup_{t\leq T}|X^n_t|^p<+\infty \end{equation} for every $p\geq 1$. By the above and estimates from Fischer and Nappo \cite[Theorem 1]{fn} (see also \cite[Lemma 4.4]{p1} and \cite[Lemma A4]{s4}) for every $p\geq1$
there is $C>0$ such that \begin{equation}\label{eq2.2}
||\omega_{1/n}(\bar Y^n,T)||_p\leq C(\frac{\ln n}{n})^{1/2},\quad n\in{\mathbb N},\end{equation} where $\bar Y^n_t=x_0+\int_0^t \sigma_n(s,X^n_s)\,dW^n_s+\int_0^t b_n(s,X^n_s)\,ds$, $t\in{\mathbb R^+}$, $n\in{\mathbb N}$.
Fix $n\in{\mathbb N}$ and $k=0,1,\ldots,[nT]-1$. Clearly, $X^n$ is a solution of the equation \begin{equation}\label{eq2.3} X^n_{k/n+s}=X^n_{k/n}+\bar Y^n_{k/n+s}-\bar Y^n_{k/n}-n\int_0^s(X^n_{k/n+u}-\Pi(X^n_{k/n+u}))du,\quad s\in[0,1/n]\end{equation} on the interval $[k/n, (k+1)/n]$. It is also clear that there exists a unique solution of the equation \begin{equation}\label{eq2.4} \bar X^n_s=X^n_{k/n}-n\int_0^s(\bar X^n_u-\Pi(\bar X^n_u))du,\quad s\in[0,1/n].\end{equation} One can easily check that $\bar X^n_s=\Pi(X^n_{k/n})+(X^n_{k/n}-\Pi(X^n_{k/n}))e^{-ns}$, $s\in[0,1/n]$, which implies that
\begin{equation}\label{eq2.5}|\bar X^n_{1/n}-\Pi(\bar X^n_{1/n})|\leq|\bar X^n_{1/n}-\Pi(X^n_{k/n})|=|X^n_{k/n}-\Pi(X^n_{k/n})|e^{-1}.\end{equation} Subtracting (\ref{eq2.4}) from (\ref{eq2.3}) we see that \[X^n_{k/n+s}-\bar X^n_s=\bar Y^n_{k/n+s}-\bar Y^n_{k/n}-n\int_0^s(X^n_{k/n+u}-\bar X^n_u-\Pi(X^n_{k/n+u})+\Pi(\bar X^n_u))du,\quad s\in[0,1/n], \] hence that
\[|X^n_{k/n+s}-\bar X^n_s|\leq \omega_{1/n}(\bar Y^n,T)+2n\int_0^s|X^n_{k/n+u}-\bar X^n_u|du, \quad s\in[0,1/n],\] because $\Pi:{{{\mathbb R}^d}}\to\bar D$ is Lipschitz continuous with the constant equal to $1$. Applying Gronwall's lemma we conclude from the above that
\begin{equation}\label{eq2.6}|X^n_{(k+1)/n}-\bar X^n_{1/n}|\leq e^2\omega_{1/n}(\bar Y^n,T).\end{equation} Setting $A^n_k=|X^n_{k/n}-\Pi(X^n_{k/n})|$ and using (\ref{eq2.5}), (\ref{eq2.6}) we have \begin{eqnarray*}
A^n_{k+1}&\leq&|X^n_{(k+1)/n}-\Pi(\bar X^n_{1/n})|\leq
|X^n_{(k+1)/n}-\bar X^n_{1/n}|+|\bar X^n_{1/n}-\Pi(\bar X^n_{1/n})| \\ &\leq&e^2\omega_{1/n}(\bar Y^n,T)+e^{-1}A^n_k.\end{eqnarray*} Since $A^n_0=0$ and $A^n_1\leq e^2\omega_{1/n}(\bar Y^n,T)$, by induction on $k$ we obtain \begin{equation}\label{eq2.7}
\max_{0\leq k\leq[nT]}|X^n_{k/n}-\Pi(X^n_{k/n})|\leq\frac{e^2}{1-e^{-1}}\omega_{1/n}(\bar Y^n,T).\end{equation} Furthermore, for $k=0,1,\ldots,[nT]$ and $s\in[0,1/n]$ such that $k/n+s\leq T$, \begin{eqnarray*} X^n_{k/n+s}-X^n_{k/n}&=&\bar Y^n_{k/n+s}-\bar Y^n_{k/n}\\ &&-n\int_0^s\big((X^n_{k/n+u}-X^n_{k/n})-(\Pi(X^n_{k/n+u})-\Pi(X^n_{k/n}))\big)du\\ &&-ns(X^n_{k/n}-\Pi(X^n_{k/n})).\end{eqnarray*} Hence, by Gronwall's lemma,
\[\sup_{s\in[0,1/n]}|X^n_{(k/n+s)\wedge T}-X^n_{k/n}|\leq e^2(\omega_{1/n}(\bar Y^n,T)+|X^n_{k/n}-\Pi(X^n_{k/n})|),\] which when combined with (\ref{eq2.7}) gives
\begin{equation}\label{eq2.8}\max_{0\leq k\leq[nT]}\sup_{s\in[0,1/n]}|X^n_{(k/n+s)\wedge T}-X^n_{k/n}|\leq C \omega_{1/n}(\bar Y^n,T) \end{equation} for some $C>0$. Of course (\ref{eq2.7}), (\ref{eq2.8}) and (\ref{eq2.2}) imply (i).
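For the reader's convenience, the induction used above simply unwinds the recursion $A^n_{k+1}\le e^2\omega_{1/n}(\bar Y^n,T)+e^{-1}A^n_k$, $A^n_0=0$, into a geometric series, which is precisely (\ref{eq2.7}):

```latex
A^n_k \;\le\; e^{2}\,\omega_{1/n}(\bar Y^n,T)\sum_{j=0}^{k-1}e^{-j}
\;\le\; \frac{e^{2}}{1-e^{-1}}\,\omega_{1/n}(\bar Y^n,T),
\qquad 0\le k\le [nT].
```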
(ii) By (\ref{eq1.3}), (\ref{eq2.1}) and the well-known Aldous criterion (see e.g. \cite{al}),
\[\{\bar Y^n\}\quad\mbox{\rm is tight in}\,\,C({\Bbb R}^+,{\Bbb R}^{d}).\] Moreover, $\{\bar Y^n\}$ satisfies the so-called UT condition (see e.g. \cite{s2,s4}) and hence each of its weak limit points is a semimartingale. Due to part (i), the sequence $\{ Y^n\}$ is also tight in $C({\Bbb R}^+,{\Bbb R}^{d})$. Assume that $Y^{(n)}\mathop{\longrightarrow}_{\cal D} \bar Y$ in $C({\Bbb R}^+,{\Bbb R}^{d})$ along some subsequence. By \cite[Corollary A3]{s4},
\[(X^{(n)},K^{(n)})\mathop{\longrightarrow}_{\cal D} (\bar X,\bar K)\quad\mbox{\rm in}\,\,C({\Bbb R}^+,{\Bbb R}^{2d}),\] where $(\bar X,\bar K)$ is the unique solution of the Skorokhod problem associated with the semimartingale $\bar Y$. \end{proof} \begin{remark}{\rm Under (\ref{eq1.3}), \begin{equation}\label{eq2.9}
\sup_nE|K^n|_T^p<+\infty \end{equation} for every $p\geq 1$, $T>0$. This follows from (\ref{eq2.1}) and \cite[Theorem 2.5]{laus}.
}\end{remark}
\section{Approximations of weak and strong solutions} We say that the SDE (\ref{eq1.1}) has a strong solution if there exists a pair $(X,\,K)$ of $\{{\cal F}_t\}$\,-\,adapted processes satisfying (\ref{eq1.1}) and such that $(X,\,K)$ is a solution of the Skorokhod problem associated with \begin{equation}\label{eq3.1}
Y_t=x_0 + \int_0^t \sigma(s,X_s)\,dW_s+\int_0^t b(s,X_s)\,ds,\quad t\in{\mathbb R^+}.\end{equation} Recall also that the SDE (\ref{eq1.1}) is said to have a weak solution if there exists a probability space $(\bar\Omega,\,\bar{\cal F},\,\bar P)$, an $\{\bar{\cal F}_t\}$\,-\,adapted Wiener process $\bar W$ and a pair of $\{\bar{\cal F}_t\}$\,-\,adapted processes $(\bar X,\bar K)$ satisfying (\ref{eq1.1}) with $\bar W$ instead of $W$.
The following set of general conditions was introduced in Rozkosz and S\l omi\'nski \cite{rs1}.
We
say that condition (H) is satisfied if for some closed subsets $H,\,H_1$ of ${\Bbb R}^+\times{{{\mathbb R}^d}}$ such that $H_1\subset H$, \begin{itemize} \item $\forall_{\varepsilon>0}\, \{(\det \sigma_n\sigma^*_n)^{-1}\}_{n\in\Bbb N}$ is uniformly integrable on each bounded subset of $H^c(\varepsilon)$, \item $\sigma_n\rightarrow \sigma,\,b_n\rightarrow b$ a.e. on $H^c={\Bbb R}^+\times{{{\mathbb R}^d}}\setminus H$, \item for every $(t,x)\in H_1$ (for every $(t,x)\in H$), \[\sigma_n(t,x_n)\rightarrow \sigma(t,x),\,\, b_n(t,x_n)\rightarrow b(t,x)\]
for all $\{x_n\}$ such that $x_n\rightarrow x$ (for all $\{(t,x_n)\}\subset H$ such that $x_n\rightarrow x$). \end{itemize} Here $H^c(\varepsilon)={\Bbb R}^+\times{{{\mathbb R}^d}}\setminus H(\varepsilon)$ and $H(\varepsilon)=H\cup H_{1,\varepsilon}$, where $H_{1,\varepsilon}=\emptyset$ if $H_1=\emptyset$ and $
H_{1,\varepsilon}=\{z\in{\Bbb R}^+\times{{{\mathbb R}^d}}:\inf_{y\in H_1}|z-y| \le\varepsilon\}$, otherwise.
\begin{theorem} Assume that (\ref{eq1.3}) and (H) are satisfied. \label{thm2} \begin{enumerate} \item[{\bf(i)}] If the SDE (\ref{eq1.1}) has a unique weak solution $X$ then \[X^n\mathop{\longrightarrow}_{\cal D} X\quad\mbox{\rm in}\,\,C({\Bbb R}^+,{\Bbb R}^d).\] \item[{\bf (ii)}] If $W^n\mathop{\longrightarrow}_{\cal P} W$ in $C({\Bbb R}^+,{\Bbb R}^d)$ and the SDE (\ref{eq1.1}) is pathwise unique then \[X^n\mathop{\longrightarrow}_{\cal P} X\quad\mbox{\rm in}\,\,C({\Bbb R}^+,{\Bbb R}^d),\] where $X$ is a unique strong solution of (\ref{eq1.1}).\end{enumerate} \end{theorem} \begin{proof} We use the notation from the proof of Theorem \ref{thm1}.
(i) Our method of proof is an adaptation of the proof of \cite[Theorem 2.2]{rs1}. Since $K^n$ is a bounded variation process, one can observe that Krylov's inequality used in \cite[Theorem 5.1]{rs1} is still in force, i.e.
there exists a constant $C$ depending only on $d$, $R$ and $t$
such that for every non-negative measurable $f:{\mathbb R^+}\times{{{\mathbb R}^d}}\to{\mathbb R^+}$, \begin{equation}\label{eq3.2}
E\int_0^{t\wedge\tau_n^R}f(s,X^n_s)ds \leq C||(\det
\sigma_n\sigma^*_n)^{-1/(d+1)}f||_{{\Bbb L}_{d+1}([0,t]\times B(0,R))},\end{equation}
where
$\tau_n^{R}=\inf\{t:|X^n_t|\vee|K^n|_t>R\}$,
$B(0,R)=\{y\in{{{\mathbb R}^d}}:\,|y|<R\}$. By Theorem \ref{thm1}(ii), $\{(X^n,W^n)\}$ is tight in $C({\Bbb R}^+,{\Bbb R}^{2d})$ and we may assume that $(X^{(n)},W^{(n)})\mathop{\longrightarrow}_{\cal D}(\bar X,\bar W)$ in $C({\Bbb R}^+,{\Bbb R}^{2d})$ along some subsequence, where $\bar W$ is a Wiener process with respect to the natural filtration ${\cal F}^{\bar X,\bar W}$. By (\ref{eq3.2}) and arguments from the proof of \cite[Theorem 2.2]{rs1}, \[(X^{(n)},\bar Y^{(n)},W^{(n)})\mathop{\longrightarrow}_{\cal D}(\bar X, \bar Y,\bar W)\quad\mbox{\rm in}\,\,C({\Bbb R}^+,{\Bbb R}^{3d}),\] where $\bar Y_t=x_0+\int_0^t\sigma(s,\bar X_s)d\bar W_s+\int_0^tb(s,\bar X_s)ds$, $t\in{\mathbb R^+}$. Since $Y^n=\bar Y^n-X^n+\Pi(X^n)$, it follows from Theorem \ref{thm1}(ii)
that \[(X^{(n)},K^{(n)},\bar Y^{(n)})\mathop{\longrightarrow}_{\cal D}(\bar X, \bar K,\bar Y)\quad\mbox{\rm in}\,\,C({\Bbb R}^+,{\Bbb R}^{3d}),\]
where $(\bar X,\bar K)$ is a solution of the Skorokhod problem associated with $\bar Y$.
Hence $(\bar X,\bar K)$ is a weak solution of (\ref{eq1.1}) and the result follows due to weak uniqueness of (\ref{eq1.1}).
(ii) By using arguments from Gy\"ongy and Krylov \cite{gk}, to prove that $\{X^n\}$ converges in probability it is sufficient to show that from any subsequences $(l)\subset(n), (m)\subset(n)$ it is possible to choose further subsequences $(l_k)\subset(l),(m_k)\subset(m)$ such that $( X^{(l_k)}, X^{(m_k)})\mathop{\longrightarrow}_{\cal D}(\bar X,\bar X)$ in $ C({\Bbb R}^+,{\Bbb R}^{2d}) $, where $\bar X$ is a process with continuous trajectories. From Theorem \ref{thm1}(ii) we deduce that \[\{( X^{(l)}, X^{(m)},W^{(l)},W^{(m)})\}\quad\mbox{\rm is tight in}\,\, C({\Bbb R}^+,{\Bbb R}^{4d}).\]
Therefore, we can choose subsequences $(l_k)\subset(l),(m_k)\subset(m)$ such that \[ (X^{(l_k)}, X^{(m_k)},W^{(l_k)},W^{(m_k)})\mathop{\longrightarrow}_{\cal D} (\bar X',\bar X'',\bar W,\bar W)\quad \mbox{\rm in}\,\, C({\Bbb R}^+,{\Bbb R}^{4d}), \]
where $\bar X',\bar X''$ are processes with continuous trajectories and $\bar W$ is a Wiener process with respect to the natural filtration ${\cal F}^{\bar X',\bar X'',\bar W}$. In view of part (i), the processes $\bar X',\bar X''$ are solutions of (\ref{eq1.1}) with $\bar W$ in place of $W$. Since (\ref{eq1.1}) is pathwise unique, $\bar X'=\bar X''$, and consequently $\{X^n\}$ converges in probability in $C({\Bbb R}^+,{\Bbb R}^{d})$ to some continuous process $X$. Hence $(X^n,W^n)\mathop{\longrightarrow}_{\cal P} (X,W)$, so using once again the pathwise uniqueness property of (\ref{eq1.1}) shows that $X$ is a unique strong solution of (\ref{eq1.1}). \end{proof} \begin{remark}{\rm From \cite{rs1} it follows that in fact in part (i) of the above theorem the assumption that $\sigma_n\rightarrow \sigma$ a.e. on $H^c$ may be replaced by the weaker assumption that $\sigma_n\sigma^*_n\rightarrow \sigma\sigma^*$ a.e. on $H^c$.} \end{remark}
\begin{remark}{\rm There are important examples of equations of the form (\ref{eq1.1}) with discontinuous coefficients having unique weak or strong solutions. For instance, in Schmidt \cite{sz} it is shown that if $d=1$, $D=(r_1,r_2)$, $b\equiv0$ and $\sigma$ is a function of $x$ only, then (\ref{eq1.1}) has a unique weak solution for every starting point $x_0\in\bar D$ if and only if the set $M$ of all $x\in\bar D$ such that $\int_{\bar D\cap U_x}\sigma^{-2}(y)\,dy=+\infty$ for every open neighborhood $U_x$ of $x$ is equal to the set $N$ of zeros of $\sigma$. In the multidimensional case it is known that a solution of (\ref{eq1.1}) is unique in law if (\ref{eq1.3}) is satisfied with $\sigma_n,b_n$ replaced by $\sigma,b$, the coefficient $\sigma\sigma^*$ is bounded, continuous and uniformly elliptic, and $\partial D$ is regular (see Stroock and Varadhan \cite{sw} for more details). Recently Semrau \cite{se} considered the classical case $d=1$, $D={\mathbb R^+}$ with coefficients $\sigma,b$ depending only on $x$. She has shown that if $\sigma,b$ satisfy (\ref{eq1.3}), $\sigma$ is uniformly positive and
$(\sigma(x)-\sigma(y))^2\leq|f(x)-f(y)|$, $x,y\in{\mathbb R^+}$ for some bounded increasing function $f:{\mathbb R^+}\to{\mathbb R}$ then there exists a unique strong solution of (\ref{eq1.1}). Some weaker results on pathwise uniqueness can be found in the earlier paper by Zhang \cite{zh}.
Since in condition (H) we do not require continuity of the limit coefficients $\sigma$ and $ b$, Theorem \ref{thm2} is a useful tool for practical approximations of solutions of the equations mentioned above. }\end{remark}
\section{Rate of convergence in the case of Lipschitz continuous coefficients} In this section we assume that $\sigma_n=\sigma$, $b_n=b$, $n\in{\mathbb N}$, where $\sigma, b$ are Lipschitz continuous functions with respect to $x$, i.e. satisfy (\ref{eq1.4}). We also assume that all SDEs with penalization term are driven by a fixed $\{{\cal F}_t\}$\,-\,Wiener process $W$. In particular, this means that $X^n$ is a solution of the equation \begin{equation}\label{eq4.1}
X^n_t = x_0 + \int_0^t \sigma(s,X^n_s)\,dW_s+\int_0^t b(s,X^n_s)\,ds -n\int_0^t(X^n_s-\Pi(X^n_s))ds,\quad t\in{\Bbb R}^+. \end{equation} Tanaka \cite{ta} has shown that in the case of Lipschitz continuous coefficients there exists a unique strong solution $(X,K)$ of (\ref{eq1.1}). Moreover, from \cite[Theorem 2.2]{s4} and Gronwall's lemma it follows that \begin{equation}
\label{eq4.2} E\sup_{t\leq T} |X_t|^p<+\infty\quad\mbox{\rm and}\quad E|K|_T^p<+\infty \end{equation} for every $p\geq1$, $T>0$. \begin{theorem}\label{thm3} Assume that (\ref{eq1.3}) and (\ref{eq1.4}) are satisfied. Let $X^n$ satisfy (\ref{eq4.1}), $n\in{\mathbb N}$. For every $p\in {\mathbb N}$, $T>0$ there is $C>0$ such that
\begin{description} \item[(i)] if $D$ is a convex polyhedron then \[
||\sup_{t\leq T}| X^n_t-X_t|||_p\leq C\big(\frac{\ln n}n\big)^{1/2},\quad n\in{\mathbb N}, \] \item[(ii)] if $D$ is a general convex domain then \[
||\sup_{t\leq T}| X^n_t-X_t|||_p\leq C\big(\frac{\ln n}n\big)^{1/4},\quad
n\in{\mathbb N}, \] \end{description} where $X$ is a unique strong solution of (\ref{eq1.1}).\end{theorem} \begin{proof} Fix $T>0$. Without loss of generality we may assume that $p\geq2$.
(i) By Theorem 2.2 from Dupuis and Ishii \cite{di} there exists
$C>0$ such that \begin{eqnarray}
\nonumber&&\sup_{s\leq t}|\Pi(X^n_s)- X_s|\leq C \sup_{s\leq t}|
Y^n_s- Y_s|\\&&\qquad\qquad\label{eq4.3} \leq C\big(\sup_{s\leq t}|\Pi(X^n_s)- X^n_s|+ \sup_{s\leq t}|\bar Y^n_s- Y_s|\big) \end{eqnarray} for every $t\leq T$, where $\bar Y^n_s=x_0+\int_0^s\sigma(u,X^n_u)dW_u+\int_0^sb(u,X^n_u)du$, $Y^n_s=\bar Y^n_s+\Pi(X^n_s)-X^n_s$, $s\leq T$, $n\in{\mathbb N}$. Therefore, by Theorem \ref{thm1}(i), Burkholder-Davis-Gundy and Schwarz's inequalities,
\begin{eqnarray*} & & E\sup_{s\leq t}|X^n_s- X_s|^{p} \leq {\rm Const}\big( (\frac{\ln n}{n})^{p/2}+
E\sup_{s\leq t}|\bar Y^n_s-Y_s|^{p}\big)\nonumber\\
& &\qquad\qquad \leq {\rm Const}\big( (\frac{\ln n}{n})^{p/2}+E\int_0^t||\sigma(s,X^n_s)-\sigma(s,X_s)||^pds\\
&&\qquad\qquad\qquad+E\int_0^t|b(s,X^n_s)-b(s,X_s)|^{p}ds \big) \end{eqnarray*}
for every $t\leq T$. By the above and (\ref{eq1.4}), \[E\sup_{s\leq t}|X^n_s- X_s|^{p}
\leq {\rm Const}\big( (\frac{\ln n}{n})^{p/2}+\int_0^tE\sup_{u\leq s}|X^n_u-X_u|^{p}ds \big) \] for every $t\leq T$, so (i) follows by Gronwall's lemma.
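Spelling out the Gronwall step, a standard computation included for completeness: writing $\psi_n(t)=E\sup_{s\leq t}|X^n_s-X_s|^{p}$, the last display reads $\psi_n(t)\le C\big((\frac{\ln n}{n})^{p/2}+\int_0^t\psi_n(s)\,ds\big)$, and hence

```latex
\psi_n(T)\;\le\; C\Big(\frac{\ln n}{n}\Big)^{p/2}e^{CT},
\qquad n\in{\mathbb N},
```

which gives the rate claimed in (i) after taking $p$-th roots.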
(ii) If $D$ is a general convex domain then by Lemma 2.2 in Tanaka \cite{ta}, \begin{eqnarray}
&&|\Pi(X^n_t)- X_t|^2\leq |Y^n_t-Y_t|^2+ 2\int_0^t (Y^n_t-
Y_t-Y^n_s+Y_s)\,d(K^n_s- K_s)\nonumber\\
&&\qquad\leq{\rm Const}\big(|\Pi(X^n_t)- X^n_t|^2+
|\bar Y^n_t- Y_t|^2+\sup_{t\leq T}|\Pi(X^n_t)- X^n_t|(|K^n|_T+|K|_T)\nonumber\\
&&\quad\qquad+ \label{eq4.4}|\int_0^t (\bar Y^n_t- Y_t-\bar Y^n_s+Y_s)\,d(K^n_s- K_s)|\big) \end{eqnarray} for every $t\leq T$. Since by the integration by parts formula, \begin{eqnarray*} &&\int_0^t (\bar Y^n_t- Y_t-\bar Y^n_s+Y_s)\,d(K^n_s- K_s)\\
&&\qquad=\int_0^t (X^n_s-X_s)d(\bar Y^n_s-Y_s)+\frac12([\bar Y^n-Y]_t-|\bar Y^n_t-Y_t|^2)\end{eqnarray*} (here $[\bar Y^n-Y]$ denotes the quadratic variation of $\bar Y^n-Y$), it follows from Theorem \ref{thm1}(i) and (\ref{eq4.4}) that \begin{eqnarray*}
&&E\sup_{s\leq t}|X^n_s- X_s|^{p} \leq {\rm Const}( (\frac{\ln n}{n})^{p/2}+
E\sup_{s\leq t}|\bar Y^n_s-Y_s|^{p}+E([\bar Y^n-Y]_t)^{p/2}\\
&&\qquad\qquad\qquad\qquad\qquad+E(\sup_{t\leq T}|\Pi(X^n_t)-
X^n_t|)^{p/2}(|K^n|_t+|K|_t)^{p/2}\\
& &\qquad\qquad\qquad\qquad\qquad+E\sup_{s\leq t}|\int_0^s (X^n_u-X_u)d(\bar
Y^n_u-Y_u)|^{p/2}\big). \end{eqnarray*} By Schwarz's inequality, Theorem \ref{thm1}(i), (\ref{eq2.9}) and (\ref{eq4.2}), \[
E(\sup_{t\leq T}|\Pi(X^n_t)-
X^n_t|)^{p/2}(|K^n|_T+|K|_T)^{p/2}\leq {\rm Const}\big((\frac{\ln n}{n})^{p/4}\big).\] Since $\bar Y^n-Y$ is a continuous semimartingale with martingale part $M^n=\int_0^{\cdot}(\sigma(s,X^n_s)-\sigma(s,X_s))dW_s$ and bounded variation part $V^n=\int_0^{\cdot}(b(s,X^n_s)-b(s,X_s))ds$, using Burk\-hol\-der-Davis-Gundy and Schwarz's inequalities we get \begin{eqnarray*}
&&E\sup_{s\leq t}|\int_0^s (X^n_u-X_u)d(\bar
Y^n_u-Y_u)|^{p/2}\leq
{\rm Const}\big(E(\int_0^t|X^n_s-X_s|^2d[M^n]_s)^{p/4}\\
&&\qquad\qquad\qquad\qquad+E(\sup_{s\leq t}|X^n_s-X_s||V^n|_t)^{p/2}\big)\\
&&\qquad\qquad\qquad\leq {\rm Const}(E\sup_{s\leq
t}|X^n_s-X_s|^p)^{1/2}\big(E([M^n]_t)^{p/2}+E(|V^n|_t)^p\big)^{1/2}.\end{eqnarray*} Observing that $[\bar Y^n-Y]=[M^n]$ and using the elementary inequality
$2ab\leq \epsilon^2 a^2+(b/\epsilon)^2$ with some sufficiently small $\epsilon$ we deduce from the above that
\begin{eqnarray*}
&&E\sup_{s\leq t}|X^n_s- X_s|^{p} \leq {\rm Const}\big( (\frac{\ln n}{n})^{p/4}+ E\sup_{s\leq t}|\bar Y^n_s-Y_s|^{p}\\
&&\qquad\qquad\qquad\qquad\qquad+E([M^n]_t)^{p/2}+E(|V^n|_t)^p\big)
\\&&\qquad\qquad\qquad\leq {\rm Const}\big( (\frac{\ln n}{n})^{p/4}+E\int_0^t||\sigma(s,X^n_s)-\sigma(s,X_s)||^pds\\
&&\qquad\qquad\qquad\qquad\qquad+E\int_0^t|b(s,X^n_s)-b(s,X_s)|^{p}ds
\big)\\ &&\qquad\qquad\qquad\leq{\rm Const}\big ( (\frac{\ln n}{n})^{p/4}+\int_0^tE\sup_{u\leq s}|X^n_u-X_u|^{p}ds \big) \end{eqnarray*} for every $t\leq T$. Using Gronwall's lemma completes the proof. \end{proof}
\begin{remark}{\rm In the case of bounded convex domains and bounded Lipschitz continuous coefficients $\sigma,b$ the problem of ${\mathbb L}^p$ approximation of solutions of (\ref{eq1.1}) by sequences of solutions of (\ref{eq4.1}) was considered earlier in Menaldi
\cite{me}. In particular, in \cite[Theorem 3.1]{me} it is proved that for every $p\geq1$ and $T>0$, $||\sup_{t\leq T}|X^n_t-X_t|||_p\to0$. From the proof of \cite[Theorem 3.1]{me}
one can also deduce that \begin{equation}\label{eq4.5}
\forall_{\delta>0}\,\,\,||\sup_{t\leq T}|X^n_t-X_t|||_p={\cal O}\big((\frac{1}{n})^{1/4-\delta}\big).\end{equation} In fact, in \cite[Remark 3.1]{me} a better rate is stated. However, R. Pettersson has observed that there is a gap in the proof of \cite[Theorem 3.1]{me} (in the first line on page 741 $p$ should be replaced by $2p$). Using Menaldi's calculations and taking into account Pettersson's remark one can only prove (\ref{eq4.5}). It is also worth pointing out that Menaldi's method of proof of (\ref{eq4.5}) is completely different from our method based on estimates of the ${\mathbb L}^p$-modulus of continuity for It\^o processes. } \end{remark} \mbox{}\\ {\bf Acknowledgements}\\The author is greatly indebted to R. Pettersson for stimulating conversations during the conference ``Skorokhod space. 50 years on''.
\end{document}
\begin{document}
\title{An upper bound of the heat kernel along the harmonic-Ricci flow} \author{Shouwen Fang and Tao Zheng}
\subjclass[2010]{53C21, 53C44} \keywords{heat kernel, harmonic-Ricci flow, Sobolev inequality, log-Sobolev inequality}
\maketitle \begin{abstract} In this paper, we first derive a Sobolev inequality along the harmonic-Ricci flow. We then prove a linear parabolic estimate based on the Sobolev inequality and Moser's iteration. As an application, we will obtain an upper bound estimate for the heat kernel under the flow. \end{abstract}
\section{Introduction} \setcounter{equation}{0} Let $M^n$ be an $n$-dimensional closed smooth manifold and assume $n\ge 3$. In \cite{MR}, M\"{u}ller studied a system of the Ricci flow coupled with a harmonic map heat flow \begin{equation}\left \{\begin{array}{l} \large{\partial_t g=-2\text{Ric}+2\alpha(t) \nabla\phi\otimes \nabla\phi},\\ \large{\partial_t \phi=\tau_{g}\phi,} \end{array}\right.\label{O1} \end{equation} where $\phi(\cdot,t):(M,g(\cdot,t))\rightarrow(N,h)$ is a family of smooth maps between two Riemannian manifolds, both $g(\cdot,t)$ and $h$ are Riemannian metrics, $\alpha(t)$ is a positive non-increasing function, and $\tau_{g}\phi$ denotes the intrinsic Laplacian of $\phi$. This flow is also called the harmonic-Ricci flow (cf. \cite{MH,MR,Zh}). The harmonic-Ricci flow may be a helpful tool in finding harmonic maps between two Riemannian manifolds. If the target manifold $N$ is $\mathbb{R}$, the harmonic-Ricci flow reduces to the extended Ricci flow, which was first introduced by List in \cite{BL}. The extended Ricci flow is very useful in general relativity. If $\phi$ is a constant map, the system (\ref{O1}) degenerates to Hamilton's Ricci flow, which has been widely studied recently; see for example the book \cite{CLN} and the seminal papers \cite{CZ, H1, H2, P1}. As for the Ricci flow and the extended Ricci flow, the corresponding theory for the harmonic-Ricci flow was established in \cite{MR}, including the short time existence, the $\mathcal{W}$ entropy, the $\mathcal{F}$ entropy, the reduced length and the reduced volume. Hence the harmonic-Ricci flow may be investigated through the methods used in the Ricci flow.
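For instance, when $N=\mathbb{R}$ the intrinsic Laplacian $\tau_g\phi$ is just the Laplace--Beltrami operator $\Delta_g\phi$, and (\ref{O1}) takes the form of List's flow:

```latex
\partial_t g=-2\,\text{Ric}+2\alpha(t)\,\nabla\phi\otimes\nabla\phi,
\qquad \partial_t\phi=\Delta_{g}\phi .
```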
In this paper, along the harmonic-Ricci flow, we consider the heat kernel $G(x,t,y,s)$, which is the fundamental solution of the following heat equation \begin{equation}\label{O2} (\Delta-\partial_t)u(x,t)=0. \end{equation} Estimates for the heat kernel have long been an interesting topic in the study of differential equations on manifolds. In their celebrated paper \cite{LY}, Li and Yau derived some point-wise gradient estimates for positive solutions of (\ref{O2}) on complete manifolds with a fixed metric and Ricci curvature bounded from below, from which the upper and lower bounds on the heat kernel were obtained. Wang \cite{W} proved a global gradient estimate when the boundary of the manifold is nonconvex, and got both upper and lower bounds for the heat kernel with Neumann conditions. Later, evolving metrics were studied in \cite{M, MH, LW}, and some bounds on the heat kernel under geometric flows (e.g. the Ricci flow and the extended Ricci flow) were also derived with the assistance of the Sobolev inequality.
It is well-known that the Sobolev inequality is an important analytical tool in geometric analysis. Recently, many interesting results on the Sobolev inequality under different geometric flows, especially the Ricci flow, have appeared. In \cite{Hs, KZ, Y1, Y2, Z1, Z2, Z3}, some uniform Sobolev inequalities were proved along the Ricci flow by using the monotonicity of Perelman's $\mathcal{W}$ entropy. As a consequence, Perelman's short time non-collapsing was extended to a long time version. In particular, by the Sobolev inequality, Zhang \cite{Z1} proved a global upper bound for the fundamental solution of a heat equation under the backward Ricci flow under the assumptions that the Ricci curvature is nonnegative and the injectivity radius is bounded from below.
The main purpose of this paper is to establish the uniform Sobolev inequality and an upper bound for the heat kernel under the harmonic-Ricci flow. For convenience, we denote as in \cite{F1,F2,BL,MR} the symmetric two-tensor field $S_{y}$ with components $S_{ij}$ and its trace $S:=g^{ij}S_{ij}$ by \begin{equation*}
S_{ij}:=R_{ij}-\alpha(t)\nabla_i \phi\nabla_j \phi\text{\quad and\quad}S:=R-\alpha(t)|\nabla\phi|^2, \end{equation*} where $R_{ij}$ and $R$ are the Ricci curvature components and the scalar curvature of $(M, g)$ respectively. Using the monotonicity of the $\mathcal{W}$ entropy, we obtain the following Sobolev inequality. \begin{thm}\label{th1} Let ($g(x,t), \phi(x,t)$) be a solution to the harmonic-Ricci flow (\ref{O1}) in $M^n\times[0,T)$ with initial value ($g_0, \phi_0$). Let $A_0$ and $B_0$ be positive numbers such that the following $L^2$ Sobolev inequality holds initially, i.e. for any $v\in W^{1,2}(M, g_0)$, $$
\left(\int_M v^{\frac{2n}{n-2}}\md\mu\left(g_0\right)\right)^{\frac{n-2}{n}}\leq A_0 \int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu\big(g_0\big)+B_0 \int_M v^2\md\mu\left(g_0\right). $$ Then for all $v\in W^{1,2}(M,g(t))$ we have \begin{eqnarray*}
\left(\int_M v^{\frac{2n}{n-2}}\md\mu\big(g(t)\big)\right)^{\frac{n-2}{n}}\leq A(t) \int_M \left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu\big(g(t)\big) +B(t)\int_M v^2\md\mu\big(g(t)\big), \end{eqnarray*} where $A(t)=Ce^{\frac{8tB_0}{n A_0}}A_0$, $B(t)=Ce^{\frac{8tB_0}{n A_0}}B_0$, and $C$ is a positive constant depending only on $A_0$, $B_0$, $g_0$, $\phi_0$ and $n$. In particular, if $S\geq 0$ at the initial time, then $C$ is a positive constant depending only on $n$. \end{thm} Based on the Sobolev inequality and Moser's iteration, Ye \cite{Y1} proved a linear parabolic estimate under the Ricci flow, which was applied to obtain an upper bound for the curvature tensor. Jiang \cite{J} gave a linear parabolic estimate along the K\"ahler-Ricci flow, from which he obtained upper bound estimates of the scalar curvature and the gradient of the Ricci potential. Here, from the above Sobolev inequality, we can obtain the following linear parabolic estimate. \begin{thm}\label{th2} Assume that ($g(x,t), \phi(x,t)$) is a smooth solution to the harmonic-Ricci flow (\ref{O1}) in $M^n\times[0,T]$ with initial value ($g_0, \phi_0$) and $S\geq0$ at the initial time. Let $f$ be a nonnegative Lipschitz continuous function on $M\times[0, T]$ satisfying \begin{equation}\label{O3} \partial_t f\leq \Delta f+af \end{equation} on $M\times[0, T]$ in the weak sense, where $a\geq 0$. Then we have for any $0<t\leq T$ and $p>0$ \begin{equation*}
\sup_{x\in M} |f(x,t)|\leq \left(C_1a+\frac{C_2}{t}\right)^{\frac{n+2}{2p}}\left(\int_{0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}}, \end{equation*} where $C_1$ and $C_2$ are both positive constants depending on the dimension $n$, $p$, $g_0$, $\phi_0$ and the first eigenvalue $\lambda_0$ of the $\mathcal{F}$ entropy with respect to $g_0$. \end{thm} Obviously, the heat equation (\ref{O2}) is a simple linear parabolic equation and the heat kernel naturally satisfies the conditions of the above theorem. As a consequence of Theorem \ref{th2}, it is not difficult to get an upper bound of the heat kernel under the harmonic-Ricci flow, which is similar to the upper bounds in Bailesteanu \cite{M}, Bailesteanu and Tran \cite{MH}, and Wang \cite{W}. But our upper bound depends on the first eigenvalue of the $\mathcal{F}$ entropy, which is different from their results. More precisely, we prove \begin{thm}\label{th3} Assume that ($g(x,t), \phi(x,t)$) is a smooth solution to the harmonic-Ricci flow (\ref{O1}) in $M^n\times[0,T]$ with initial value ($g_0,\phi_0$) and $S\geq0$ at the initial time. Let $G(x,t;y,s)$ be the heat kernel. Then there exists a positive constant $C$, which depends on the dimension $n$, $g_0$, $\phi_0$ and the first eigenvalue $\lambda_0$ of the $\mathcal{F}$ entropy with respect to $g_0$, such that \begin{equation*} G(x,t;y,s)\leq \frac{C}{(t-s)^{\frac{n}{2}}}, \end{equation*} for all $0\leq s<t\leq T$ and all $x,y\in M$. \end{thm} \begin{rem} Here the nonnegativity of $S$ in our theorem is a little weaker than the positivity assumed in Bailesteanu and Tran \cite{MH}. Indeed, the assumption can be removed if we allow the constant $C$ to depend also on the upper bound of the time interval (see Corollary \ref{co4}). \end{rem} The rest of the paper is organized as follows. In section 2 we consider the $\mathcal{W}$ entropy under the harmonic-Ricci flow and derive the Sobolev inequality by using the monotonicity of the $\mathcal{W}$ entropy.
In section 3 we show the linear parabolic estimate for (\ref{O3}) and the upper bound estimate of the heat kernel along the harmonic-Ricci flow.
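Heuristically, the exponent in Theorem \ref{th3} can be read off from Theorem \ref{th2}: for fixed $(y,s)$, the function $f=G(\cdot,\cdot;y,s)$ satisfies (\ref{O3}) with $a=0$, so applying the estimate with $p=1$ on the interval $[s,t]$ and assuming that the space-time integral of $G$ there is of order $t-s$ (as for a kernel of unit mass) gives

```latex
\sup_{x\in M}G(x,t;y,s)
\;\lesssim\;\Big(\frac{C_2}{t-s}\Big)^{\frac{n+2}{2}}(t-s)
\;=\;\frac{C}{(t-s)^{n/2}}.
```

The rigorous argument in section 3 does not rely on this heuristic.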
\section{Sobolev inequalities under the harmonic-Ricci flow} \setcounter{equation}{0} In this section, we show a uniform log-Sobolev inequality along the harmonic-Ricci flow using the monotonicity of the $\mathcal{W}$ entropy, and then verify the equivalence of our Sobolev inequality and the uniform log-Sobolev inequality, from which we prove Theorem \ref{th1}. As a corollary, we obtain another uniform Sobolev inequality depending on the first eigenvalue of the $\mathcal{F}$ entropy, which will be used in the next section.
First let us introduce the definition of the $\mathcal{W}$ entropy via the corresponding conjugate heat equation, just as Perelman \cite{P1} has done for the Ricci flow. Let $u(x,t)$ be a positive solution to the following conjugate heat equation \begin{equation}\label{21} \Delta u-Su+\partial_t u=0. \end{equation} From the conjugate heat equation (\ref{21}) and the harmonic-Ricci flow (\ref{O1}), we easily get $$ \frac{\md}{\md t}\int_Mu(x,t)\md\mu(g(t))=\int_M(\partial_t-S)u\md\mu(g(t))=-\int_M\Delta u\md\mu(g(t))=0, $$ where we used the fact that $M$ is closed. Therefore, without loss of generality we assume that $u(x,t)$ satisfies \begin{equation*} \int_Mu(x,t)\md\mu(g(t))=1 \end{equation*} for any $t\in[0,T]$. The $\mathcal{W}$ entropy is given by a functional of the positive solution $u$ of (\ref{21}) as follows. \begin{defn} The $\mathcal{W}$ entropy is defined as the following functional \begin{equation*}
\mathcal{W}(g,f,\tau)=\int_M\big(\tau(S+|\nabla f|^2)+f-n\big)u\md\mu(g(t)), \end{equation*} where $f=-\ln u-\frac{n}{2}(\ln 4\pi\tau)$ and $\tau$ is a scaling factor satisfying $\frac{\md\tau}{\md t}=-1$. \end{defn}
\begin{rem} The same definition can also be found in \cite{BL,MR}. From the relationship between $f$ and $u$, the $\mathcal{W}$ entropy can also be rewritten directly in terms of the function $u$ as follows. \begin{equation}
\mathcal{W}(g,u,\tau):=\int_M\left[\tau\left(Su+\frac{|\nabla u|^2}{u}\right)-u\ln u-\frac{n}{2}\ln( 4\pi\tau)u -nu \right]\md\mu(g(t)).\label{22} \end{equation} \end{rem} Now let us recall the following monotonicity formula, which had been proved in Theorem 5.2 of \cite{F4} for general geometric flow and Proposition 7.1 of \cite{MR}(or Theorem 6.1 of \cite{BL}) for the case of constant $\alpha$. We omit the details here. \begin{prop}\label{p1}
Let ($g(x,t), \phi(x,t)$) be a solution of the harmonic-Ricci flow (\ref{O1}) and $u(x,t)$ be a positive solution of (\ref{21}). Then $\mathcal{W}$ entropy is non-decreasing in $t$. More precisely, \begin{equation*}
\frac{\md}{\md t}\mathcal{W}=\int_M\left(2\tau|S_y+Hess(f)-\frac{g}{2\tau}|^2+2\tau\alpha\left\lvert \tau_g \phi-\langle\nabla\phi, \nabla f\rangle\right\lvert^2-\tau\dot{\alpha}|\nabla\phi|^2\right)u\md\mu(g(t)) \ge0. \end{equation*} \end{prop}
To prove Theorem \ref{th1}, we first need to prove the corresponding log-Sobolev inequality for any $t\in[0,T)$. Here we use the same method as Zhang \cite{Z2} and Liu-Wang \cite{LW}. Using the monotonicity of $\mathcal{W}$ entropy, we have the following log-Sobolev inequality. \begin{lem}[Log-Sobolev Inequality]\label{lm1} Under the same assumptions as in Theorem \ref{th1}, for any $t\in[0,T)$, $v\in W^{1,2}(M,g(t))$ with $\int_M v^2\md\mu(g(t))=1$ and any $\epsilon>0$, we have \begin{align*}
\int_M v^2\ln v^2\md\mu(g(t))\leq\epsilon^2\int_M\big(4|\nabla v|^2+Sv^2\big)\md\mu(g(t))-n\ln(2\epsilon) +4(t+\epsilon^2)\frac{B_0}{A_0}+\frac{n}{2}\ln\frac{nA_0}{2e}. \end{align*} \end{lem} \begin{proof} For any fixed $t_0\in[0,T)$ and any $\epsilon>0$, we set $\tau(t)=\epsilon^2+t_0-t.$ From the monotonicity of the $\mathcal{W}$ entropy in Proposition \ref{p1}, we get \begin{align} \nonumber\inf_{\int_M u_0\md\mu(g_0)=1}\mathcal{W}(g_0,f_0,t_0+\epsilon^2)\leq &\mathcal{W}(g_0,\widetilde{f}(\cdot,0),t_0+\epsilon^2)\\ \leq&\mathcal{W}(g(t_0),\widetilde{f}(\cdot,t_0),\epsilon^2)\nonumber\\ =&\inf_{\int_M u \md\mu(g(t_0))=1}\mathcal{W}(g(t_0),f,\epsilon^2), \label{23} \end{align} where $(4\pi\tau)^{-\frac{n}{2}}e^{-\widetilde{f}(\cdot,t)}$ satisfies the conjugate heat equation (\ref{21}), $f_0$ and $f$ are given by the formulas $u_0=\big(4\pi(t_0+\epsilon^2)\big)^{-\frac{n}{2}}e^{-f_0}$ and $u=(4\pi\epsilon^2)^{-\frac{n}{2}}e^{-f}$. The last equality holds because the infimum of $\mathcal{W}$ entropy is achieved by a minimizer $\widetilde{f}(\cdot,t_0)$ (cf. Corollary 1.5.9 in \cite{CZ} or section 3 in \cite{P1}). Using (\ref{22}) we rewrite (\ref{23}) as \begin{align*}
\inf_{\int u_0\md\mu(g_0)=1}\int_M\left[(\epsilon^2+t_0)(S+|\nabla
\ln u_0|^2)-\ln u_0 -\frac{n}{2}\ln\left(4\pi(t_0+\epsilon^2)\right)\right]u_0\md\mu(g_0)\\
\leq\inf_{\int u\md\mu(g(t_0))=1}\int_M\left[\epsilon^2\left(S+|\nabla \ln u|^2\right)-\ln u-\frac{n}{2}\ln\left(4\pi\epsilon^2\right)\right]u\md\mu(g(t_0)). \end{align*} Letting $v=\sqrt{u}$ and $v_0=\sqrt{u_0}$, the above inequality leads to \begin{align}
\inf_{\int v_0^2\md\mu(g_0)=1}\int_M\left[(\epsilon^2+t_0)(Sv_0^2+4|\nabla v_0|^2)-v_0^2\ln v_0^2\right]\md\mu(g_0)-\frac{n}{2}\ln(t_0+\epsilon^2)\nonumber\\
\leq\inf_{\int v^2\md\mu(g(t_0))=1}\int_M\big[\epsilon^2(Sv^2+4|\nabla v|^2)-v^2\ln v^2\big]\md\mu(g(t_0))-\frac{n}{2}\ln\epsilon^2. \label{24} \end{align} Notice that $\ln x$ is a concave function and $\int v_0^2\md\mu(g_0)=1$, thus applying Jensen's inequality we deduce \[\int_Mv_0^2\ln v_0^{q-2}\md\mu(g_0)\leq\ln \int v_0^{q-2}v_0^2\md\mu(g_0),\] where $q=\frac{2n}{n-2}$. This means $$
\int_Mv_0^2\ln v_0^2\md\mu(g_0)\leq\frac{n}{2}\ln\|v_0\|_q^2. $$ By the assumption that the Sobolev inequality holds at the initial time $t=0$, combined with the above inequality, we have
\[\int_Mv_0^2\ln v_0^2\md\mu(g_0)\leq\frac{n}{2}\ln\left(A_0\int_M\left(|\nabla v_0|^2+\frac{1}{4}Sv_0^2\right)\md\mu(g_0)+B_0\right).\] Moreover, the inequality $\ln z\leq yz-\ln y-1$ holds for any $y,z>0$. Using it in the RHS of the above we arrive at \begin{equation*}
\int_Mv_0^2\ln v_0^2\md\mu(g_0)\leq\frac{n}{2}y\left(A_0\int_M\left(|\nabla v_0|^2+\frac{1}{4}Sv_0^2\right)\md\mu(g_0)+B_0\right)-\frac{n}{2}\ln y-\frac{n}{2}. \end{equation*} Now we choose $y=\frac{8(t_0+\epsilon^2)}{n A_0}$, then the above inequality implies \begin{align}\label{25}
\int_Mv_0^2\ln v_0^2\md\mu(g_0)\leq&(t_0+\epsilon^2)\int_M\left(4|\nabla v_0|^2+Sv_0^2\right)\md\mu(g_0) +\frac{4(t_0+\epsilon^2)B_0}{A_0}\nonumber\\ &-\frac{n}{2}\ln\frac{8(t_0+\epsilon^2)}{nA_0}-\frac{n}{2}. \end{align} Substituting (\ref{25}) into (\ref{24}), we conclude that \begin{align*}
\int_M v^2\ln v^2\md\mu(g(t_0))\leq&\epsilon^2\int_M\left(4|\nabla v|^2+Sv^2\right)\md\mu(g(t_0))-n\ln(2\epsilon)\\ &+\frac{4(t_0+\epsilon^2)B_0}{A_0}+\frac{n}{2}\ln\frac{nA_0}{2e}. \end{align*} Since $t_0$ is arbitrary, the proof of the lemma is complete. \end{proof} In general, the log-Sobolev inequality and the Sobolev inequality are equivalent, which can be proved via the upper bound of the heat kernel; more details can be found in Theorem 4.2.1 of Zhang \cite{Z3}. But the Sobolev inequality along a geometric flow is different from the general Sobolev inequality on closed manifolds, so it is necessary to establish the equivalence between our log-Sobolev inequality and Sobolev inequality here. We give a proof of the following equivalence lemma by the trick in \cite{BCLS}. \begin{lem}\label{lm2} Let $(M^n, g)$ be a closed Riemannian manifold ($n\geq3$). Then the following inequalities are equivalent up to constants.\\ (I) Sobolev inequality: there exist positive constants $A$ and $B$ such that, for all $v\in W^{1,2}(M)$, \begin{equation*}
\left(\int_M v^{\frac{2n}{n-2}}\md\mu\right)^{\frac{n-2}{n}}\leq A\int_M \left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu+B\int_Mv^2\md\mu; \end{equation*}
(II) Log-Sobolev inequality: for all $v\in W^{1,2}(M)$ such that $\|v\|_2=1$ and all $\epsilon >0$, \begin{equation*}
\int_Mv^2\ln v^2 \md\mu\leq \epsilon^2\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu-\frac{n}{2}\ln\epsilon^2+BA^{-1}\epsilon^2+\frac{n}{2}\ln\frac{nA}{2e}. \end{equation*} \end{lem} \begin{proof}$I \Rightarrow II:$ The proof is a standard application of Jensen's inequality. The derivation is almost the same as that of (\ref{25}); we only need to take $y=\frac{2\epsilon^2}{n A}$ instead, and the log-Sobolev inequality is obtained as desired.
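For the reader's convenience, the computation with this choice of $y$ can be spelled out. Writing $\Sigma=\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu$, Jensen's inequality and (I) give $\int_Mv^2\ln v^2\md\mu\leq\frac{n}{2}\ln(A\Sigma+B)$, and the elementary inequality $\ln z\leq yz-\ln y-1$ with $y=\frac{2\epsilon^2}{nA}$ yields
\begin{align*}
\int_Mv^2\ln v^2\md\mu&\leq\frac{n}{2}\cdot\frac{2\epsilon^2}{nA}\left(A\Sigma+B\right)-\frac{n}{2}\ln\frac{2\epsilon^2}{nA}-\frac{n}{2}\\
&=\epsilon^2\Sigma+BA^{-1}\epsilon^2-\frac{n}{2}\ln\epsilon^2+\frac{n}{2}\ln\frac{nA}{2e},
\end{align*}
which is exactly (II).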
$II \Rightarrow I:$ Notice that the LHS of log-Sobolev inequality is bounded from below for all $v\in W^{1,2}(M)$. Hence, the log-Sobolev inequality implies directly that for all $v\in W^{1,2}(M)$ $$
A\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu+B>0. $$
Since the log-Sobolev inequality holds for all $\epsilon>0$, its RHS can be viewed as a function of $\epsilon$ and minimized over $\epsilon$. Thus we have \begin{equation}\label{26}
\int_Mv^2\ln v^2 \md\mu\leq \frac{n}{2}\ln \left[A\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu+B\right], \end{equation}
for all $v\in W^{1,2}(M)$ such that $\|v\|_2=1$. Now we consider any function $f\in W^{1,2}(M)$. By Kato's inequality $|\nabla|f||\leq|\nabla f|$, we only need to prove the Sobolev inequality for nonnegative functions, so we assume that $f\geq0$. For the sake of convenience, we denote $$
W(f)=\left[A\int_M\left(|\nabla f|^2+\frac{1}{4}Sf^2\right)\md\mu+B\int_Mf^2\md\mu\right]^{\frac{1}{2}}. $$
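Before proceeding, let us record the elementary minimization behind (\ref{26}). Writing $t=\epsilon^2$ and $K=\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu+BA^{-1}$, the RHS of the log-Sobolev inequality is $h(t)=tK-\frac{n}{2}\ln t+\frac{n}{2}\ln\frac{nA}{2e}$, which attains its minimum at $t=\frac{n}{2K}$, with
\begin{equation*}
h\left(\frac{n}{2K}\right)=\frac{n}{2}-\frac{n}{2}\ln\frac{n}{2K}+\frac{n}{2}\ln\frac{nA}{2e}=\frac{n}{2}\ln(AK)=\frac{n}{2}\ln\left[A\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu+B\right].
\end{equation*}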
Taking $v=\frac{f}{\|f\|_2}$ in (\ref{26}) yields \begin{equation*}
\int_Mf^2\ln \left(\frac{f}{\|f\|_2}\right)^2 \md\mu\leq n\|f\|_2^2\ln \left(\frac{W(f)}{\|f\|_2}\right). \end{equation*} This is exactly $(LS_2^q)$ on page 1067 of \cite{BCLS}, where $q=\frac{2n}{n-2}$. Using their method to treat the above estimate, we arrive at \begin{equation}\label{27}
\|f\|_2 \leq W(f)^{\theta} \|f\|_s^{1-\theta}, \end{equation} where $0<s<2$ and $\frac{1}{2}=\frac{\theta}{q}+\frac{1-\theta}{s}$. Next we need to define a family of functions $f_k$ for $k\in\mathbb{Z}$ by $$ f_k=\min\{(f-2^k)^+, 2^k\}, $$ where $(f-2^k)^+=\max\{f-2^k,0\}$. From the definition of $f_k$, we obviously have the following estimate for any $p>0$ \begin{equation}\label{28} 2^{pk}\mu\{f\geq2^{k+1}\}\leq\int_Mf_k^p\md\mu\leq2^{pk}\mu\{f\geq2^{k}\}. \end{equation} Set $a_k=2^{qk}\mu\{f\geq2^{k}\}$. Combining (\ref{27}) with (\ref{28}), we derive \begin{align*} a_{k+1}\leq& 2^{q(k+1)-2k}\int_Mf_k^2\md\mu\\
\leq& 2^{q(k+1)-2k}W(f_k)^{2\theta}\|f_k\|_s^{2(1-\theta)}\\ \leq& 2^{q}W(f_{k})^{2\theta }a_{k}^{\frac{2(1-\theta)}{s}}. \end{align*} Consequently, by the H\"older inequality we have \begin{align*} \sum\limits_{k\in\mathbb{Z}}a_{k} =&\sum\limits_{k\in\mathbb{Z}}a_{k+1}\\ \leq&\sum\limits_{k\in\mathbb{Z}}2^{q}W(f_{k})^{2\theta }a_{k}^{\frac{2(1-\theta)}{s}}\\ \leq&2^{q}\left(\sum\limits_{k\in\mathbb{Z}}W(f_{k})^{2}\right)^{\theta } \left(\sum\limits_{k\in\mathbb{Z}}a_{k}^{\frac{2 }{s}}\right)^{1-\theta }\\ \leq&2^{q} \left(\sum\limits_{k\in\mathbb{Z}}W(f_{k})^{2}\right)^{\theta } \left(\sum\limits_{k\in\mathbb{Z}}a_{k}\right)^{\frac{2(1-\theta)}{s}}, \end{align*} where the last inequality follows from $0<s<2$. This leads to \begin{equation}\label{29} \sum\limits_{k\in\mathbb{Z}}a_{k}\leq 2^{\frac{q(q-s)}{2-s}}\left(\sum\limits_{k\in\mathbb{Z}}W(f_{k})^{2}\right)^{\frac{q}{2}}. \end{equation} Moreover, it follows from the definition of $a_k$ that \begin{align}\label{210} \int_M f^{q}\md\mu =&\sum\limits_{k\in\mathbb{Z}}\int_{2^{k}\leq f\leq 2^{k+1}}f^q\md\mu\nonumber\\ \leq&\sum\limits_{k\in\mathbb{Z}}2^{q(k+1)}\bigg(\mu(f\geq 2^{k})-\mu(f\geq 2^{k+1})\bigg)\nonumber\\ =&(2^{q}-1)\sum\limits_{k\in\mathbb{Z}}a_{k}. \end{align} For the rest of the proof, we only need to control the term $W(f_k)$ in (\ref{29}). We have the following key estimate.
{\sc Claim}. If $\frac{A}{4}S+B\geq0$, then for any $0\leq f\in W^{1,2}(M)$ we have $$ \sum\limits_{k\in\mathbb{Z}}W(f_{k})^{2} \leq W(f)^2. $$ Firstly, we note that \begin{align*}
\int_{M}|f_{k}|^{2}\mathrm{d}\mu =&2\int_{0}^{2^{k+1}-2^{k}} t\,\mu(f-2^{k}\geq t)\md t\nonumber\\ =&2\int_{2^{k}}^{2^{k+1}}(s-2^{k})\mu(f\geq s)\md s. \end{align*} Thus we have \begin{align*}
\sum\limits_{k\in\mathbb{Z}}\int_{M}|f_{k}|^{2}\mathrm{d}\mu =&2\sum\limits_{k\in\mathbb{Z}}\int_{2^{k}}^{2^{k+1}}(s-2^{k})\mu(f\geq s)\md s\\ \leq&\sup\limits_{k\in\mathbb{Z}}\left\{\sup\limits_{s\in[2^{k},\,2^{k+1}]}\frac{ s-2^{k} }{s}\right\} \left(2\sum\limits_{k\in\mathbb{Z}}\int_{2^{k}}^{2^{k+1}}s\mu(f\geq s)\md s\right)\\ \leq &\frac{ 1 }{2}\int_{M} f ^2\dmu. \end{align*} Because of the assumption that $\frac{A}{4}S+B\geq0$, we can regard $(\frac{A}{4}S+B)\dmu$ as a new measure. By the same method as above we can derive \begin{equation}\label{211} \sum\limits_{k\in\mathbb{Z}}\int_{M}\left(\frac{A}{4}S+B\right) f_{k} ^{2}\mathrm{d}\mu \leq \frac{1 }{2}\int_{M}\left(\frac{A}{4}S+B\right) f ^2\dmu. \end{equation} In addition, it also holds that \begin{align}\label{212}
\sum\limits_{k\in\mathbb{Z}}\int_{M}|\nabla f_{k}|^{2}\dmu
=&\sum\limits_{k\in\mathbb{Z}}\int_{2^{k}\leq f \leq 2^{k+1}}|\nabla f |^{2}\dmu\nonumber\\
=& \int_{M}|\nabla f |^{2}\dmu. \end{align} Combining (\ref{211}) with (\ref{212}) yields \begin{align*} \sum\limits_{k\in\mathbb{Z}}W(f_{k})^{2}
=&\sum\limits_{k\in\mathbb{Z}}\left(A\int_{M}\left(|\nabla f_{k}|^2+\frac{1}{4}S f_{k} ^{2}\right)\mathrm{d}\mu+B \int_{M} f_{k} ^{2}\mathrm{d}\mu\right)\\
\leq& A\sum\limits_{k\in\mathbb{Z}}\int_{M} |\nabla f_{k}|^2\mathrm{d}\mu + \sum\limits_{k\in\mathbb{Z}}\int_{M} \left(\frac{A}{4}S+B\right)f_{k} ^{2}\mathrm{d}\mu\\
\leq & A\int_{M} |\nabla f |^2\mathrm{d}\mu+ \frac{1}{2}\int_{M}\left(\frac{A}{4}S +B\right)f^{2}\mathrm{d}\mu\\ \leq &W(f)^2. \end{align*} This completes the proof of the claim. Now we can use the claim to finish the proof of the lemma. One should be careful here, since the hypothesis of the claim need not hold; however, this does not affect the final argument. Since $M$ is a closed manifold, there exists a nonnegative constant $S_0$ such that $S+S_0\geq0$. Combining (\ref{29}) and (\ref{210}), we deduce
\begin{align}\label{213} \int_M f^{q}\md\mu \leq &(2^{q}-1)2^{\frac{q(q-s)}{2-s}}\left(\sum\limits_{k\in\mathbb{Z}}W(f_{k})^{2}\right)^{\frac{q}{2}}\nonumber\\
\leq &(2^{q}-1)2^{\frac{q(q-s)}{2-s}}\left(\sum\limits_{k\in\mathbb{Z}}\left(A\int_{M}\left(|\nabla f_{k}|^2+\frac{S+S_0}{4} f_{k} ^{2}\right)\mathrm{d}\mu+B \int_{M} f_{k} ^{2}\mathrm{d}\mu\right)\right)^{\frac{q}{2}}\nonumber\\ \leq &(2^{q}-1)2^{\frac{q(q-s)}{2-s}}\left( W(f)^2+A\int_{M}\frac{S_0}{4} f^{2}\mathrm{d}\mu\right)^{\frac{q}{2}}, \end{align} where the last inequality holds due to the claim. Note that the LHS of (\ref{26}) is bounded from below for all $v\in W^{1,2}(M)$; this implies that $$
\lambda_1=\inf\left\{W(f)^2|\int_M f^2\mathrm{d}\mu=1, f\in W^{1,2}(M)\right\}>0, $$ i.e., \begin{equation}\label{214} \int_{M} f^{2}\mathrm{d}\mu\leq \lambda_1^{-1}W(f)^2, \end{equation} for any $f\in W^{1,2}(M)$. Finally, the Sobolev inequality follows from (\ref{213}) and (\ref{214}). This completes the proof of the lemma. \end{proof} Therefore, Theorem \ref{th1} follows directly from Lemma \ref{lm1} and Lemma \ref{lm2}. \begin{proof}[Proof of Theorem \ref{th1}] Note that our log-Sobolev inequality in Lemma \ref{lm1} has just one more term $4tB_0A_0^{-1}$ than the one in Lemma \ref{lm2}. Applying the same arguments as in the second part of the proof of Lemma \ref{lm2} to our log-Sobolev inequality, we have \begin{align*} \int_M f^{q}\md\mu(g(t)) \leq (2^{q}-1)2^{\frac{q(q-s)}{2-s}}e^{\frac{8tB_0}{A_0(n-2)}}\left( 1+\frac{A_0 S_0}{4\lambda_1(t)}\right)^{\frac{q}{2}}W(f)^q, \end{align*} for any $f\in W^{1,2}(M,g(t))$. Here $$
W(f)=\left[A_0\int_M\left(|\nabla f|^2+\frac{1}{4}Sf^2\right)\md\mu(g(t))+B_0\int_Mf^2\md\mu(g(t))\right]^{\frac{1}{2}}, $$
$$\lambda_1(t)=\inf\left\{W(f)^2|\int_M f^2\mathrm{d}\mu(g(t))=1, f\in W^{1,2}(M,g(t))\right\},$$ and the other symbols are the same as above.
By the definitions of $\mathcal{F}$ entropy and $W(f)$, we can see that $\lambda_1(t)$ is linearly related to the first eigenvalue of $\mathcal{F}$ entropy. On the other hand, the first eigenvalue of $\mathcal{F}$ entropy is monotone non-decreasing in time (cf. Proposition 3.3 of \cite{MR}). So $\lambda_1(t)$ is also non-decreasing in time. It follows that our uniform Sobolev inequality holds, and we get $$A(t)=(2^{q}-1)^{\frac{2}{q}}2^{\frac{2(q-s)}{2-s}}e^{\frac{8tB_0}{n A_0}}\left( 1+\frac{A_0 S_0}{4\lambda_1(0)}\right)A_0,$$ and $$B(t)=(2^{q}-1)^{\frac{2}{q}}2^{\frac{2(q-s)}{2-s}}e^{\frac{8tB_0}{n A_0}}\left( 1+\frac{A_0 S_0}{4\lambda_1(0)}\right)B_0,$$ where $S_0$ depends on $g_0$ and $\phi_0$, and $\lambda_1(0)$ depends on $A_0$, $B_0$, $g_0$ and $\phi_0$. This completes the proof of Theorem \ref{th1}. \end{proof}
From Theorem \ref{th1} we can obtain a uniform Sobolev inequality along the harmonic-Ricci flow under the assumption of positive first eigenvalue of $\mathcal{F}$ entropy. Recall that $\lambda_0$ is the first eigenvalue of $\mathcal{F}$ entropy with respect to the initial metric $g_0$, i.e. \begin{equation*}
\lambda_0=\inf_{\|v\|_2=1}\int_M(4|\nabla v|^2+Sv^2)\md\mu(g_0). \end{equation*} The $\mathcal{F}$ entropy corresponds to Perelman's $\mathcal{F}$ entropy for the Ricci flow introduced in \cite{P1}. Similarly to the Ricci flow, the harmonic-Ricci flow can be interpreted as the gradient flow of $\mathcal{F}$ entropy modulo a pull-back by a family of diffeomorphisms. The eigenvalue of $\mathcal{F}$ entropy is a very powerful tool in the study of the Ricci flow and Riemannian manifolds. More results can be found in \cite{F3, LJ}.
Since $(M, g_0)$ is a closed Riemannian manifold of dimension $n\geq 3$, the Sobolev inequality holds, i.e. there exist positive constants $A$ and $B$ depending only on the initial metric $g_0$ such that, for any $v\in W^{1,2}(M)$, \begin{equation}\label{215}
\left(\int_M v^{\frac{2n}{n-2}}\md\mu(g_0)\right)^{\frac{n-2}{n}}\leq A\int_M|\nabla v|^2\md\mu(g_0)+B\int_Mv^2\md\mu(g_0). \end{equation}
If $\lambda_0>0$, then combining this with the Sobolev inequality (\ref{215}), we have \begin{eqnarray*}
\left(\int_M v^{\frac{2n}{n-2}}\md\mu(g_0)\right)^{\frac{n-2}{n}}\leq \tilde{A}_0 \int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu(g_0), \end{eqnarray*} where $\tilde{A}_0$ depends only on the initial metric $g_0$, $\phi_0$ and $\lambda_0$. This means that the assumption of the Sobolev inequality in Theorem \ref{th1} at the initial time holds with $B_0=0$. Hence, Theorem \ref{th1} gives us the following result. \begin{cor}\label{co1} Let ($g(x,t), \phi(x,t)$) be a solution of the harmonic-Ricci flow (\ref{O1}) in $M^n\times[0,T)$ with initial value ($g_0, \phi_0$). Assume that the first eigenvalue $\lambda_0$ of $\mathcal{F}$ entropy with respect to the initial metric $g_0$ is positive. Then there exists a positive constant $A$, depending only on $n$, $g_0$, $\phi_0$ and $\lambda_0$, such that for all $v\in W^{1,2}(M,g(t))$, $t\in[0,T)$, it holds that \begin{eqnarray}\label{216}
\left(\int_M v^{\frac{2n}{n-2}}\md\mu\big(g(t)\big)\right)^{\frac{n-2}{n}}\leq A \int_M \left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu\big(g(t)\big). \end{eqnarray} \end{cor} \begin{rem} The analogous uniform Sobolev inequalities were given in \cite{Z2} for the Ricci flow and \cite{LW} for the extended Ricci flow. \end{rem}
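We remark that one possible explicit choice of the constant $\tilde{A}_0$ above is the following. Since $M$ is closed, there is a nonnegative constant $S_0$ with $S\geq-S_0$ at the initial time, and the definition of $\lambda_0$ gives, for any $v\in W^{1,2}(M)$,
\begin{equation*}
\int_Mv^2\md\mu(g_0)\leq\frac{4}{\lambda_0}\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu(g_0).
\end{equation*}
Combining this with $\int_M|\nabla v|^2\md\mu(g_0)\leq\int_M\left(|\nabla v|^2+\frac{1}{4}Sv^2\right)\md\mu(g_0)+\frac{S_0}{4}\int_Mv^2\md\mu(g_0)$ and (\ref{215}), one may take
\begin{equation*}
\tilde{A}_0=A+\frac{AS_0+4B}{\lambda_0}.
\end{equation*}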
\section{The proof of Theorem \ref{th2} and Theorem \ref{th3}} \setcounter{equation}{0} In this section, we first prove the linear parabolic estimate under the harmonic-Ricci flow with the help of the above Sobolev inequality (\ref{216}) and Moser's iteration. As a result, we derive an upper bound for the heat kernel, which is similar to the one known for the fixed metric case.
\begin{proof} [Proof of Theorem \ref{th2}] For any constant $p\geq 1$, it follows from (\ref{O3}) that \begin{equation*} \int_M f^p\partial_tf \md\mu-\int_Mf^p\Delta f \md\mu \leq a\int_Mf^{p+1}\md\mu, \end{equation*} where the volume measure $\md\mu=\md\mu(g(t))$ for simplicity, and the same symbol will also be used in the rest of the proof. Integrating by parts, we have $$
\frac{1}{p+1}\int_M\partial_tf^{p+1}\md\mu+\frac{4p}{(p+1)^2}\int_M\left|\nabla f^{\frac{p+1}{2}}\right|^2\md\mu\leq a\int_Mf^{p+1}\md\mu. $$ Since $\partial_t \md\mu=-S\md\mu$ and $4p\geq 2(p+1)$ for all $p\geq1$, multiplying both sides by $p+1$, we get $$
\partial_t\int_Mf^{p+1}\md\mu+\int_MSf^{p+1}\md\mu+2\int_M\left|\nabla f^{\frac{p+1}{2}}\right|^2\md\mu\leq a(p+1)\int_Mf^{p+1}\md\mu. $$ Using the condition $S\geq0$, we then have \begin{equation}\label{31}
\partial_t\int_Mf^{p+1}\md\mu+\frac{1}{2}\int_M\left(Sf^{p+1}+4\left|\nabla f^{\frac{p+1}{2}}\right|^2\right)\md\mu\leq a(p+1)\int_Mf^{p+1}\md\mu. \end{equation} Next for any $0<\tau<\sigma<T$ we define \begin{equation*} \psi(t)=\left \{\begin{array}{ll} 0& 0\leq t\leq\tau~,\\ (t-\tau)/(\sigma-\tau)& \tau\leq t\leq \sigma~,\\ 1& \sigma\leq t\leq T~. \end{array}\right.\ \end{equation*} Multiplying (\ref{31}) by $\psi$, we obtain \begin{equation*}
\partial_t\left(\psi\int_Mf^{p+1}\md\mu\right)+\frac{1}{2}\psi\int_M\left(Sf^{p+1}+4\left|\nabla f^{\frac{p+1}{2}}\right|^2\right)\md\mu \leq [a(p+1)\psi+\psi']\int_Mf^{p+1}\md\mu. \end{equation*} Integrating this with respect to $t$ yields \begin{align*}
\sup_{\sigma\leq t\leq T}\int_Mf^{p+1}\md\mu&+\frac{1}{2}\int_{\sigma}^T\int_M\left(Sf^{p+1}+4\left|\nabla f^{\frac{p+1}{2}}\right|^2\right)\md\mu \md t\\ &\leq 2\left[a(p+1)+\frac{1}{\sigma-\tau}\right]\int_{\tau}^T\int_Mf^{p+1}\md\mu \md t. \end{align*} By the assumption that $S\geq0$ at the initial time, the first eigenvalue $\lambda_0$ of $\mathcal{F}$ entropy with respect to the initial metric $g_0$ is positive. Applying H\"{o}lder inequality, the above estimate and the Sobolev inequality (\ref{216}), we deduce \begin{align*} \int_{\sigma}^T\int_Mf^{(p+1)(1+\frac{2}{n})}\md\mu \md t & \leq \int_{\sigma}^T\left(\int_Mf^{p+1}\md\mu\right)^{\frac{2}{n}}\left(\int_Mf^{(p+1)\frac{n}{n-2}}\md\mu\right)^{\frac{n-2}{n}}\md t\\
&\leq \sup_{\sigma\leq t\leq T}\left(\int_Mf^{p+1}\md\mu\right)^{\frac{2}{n}}\int_{\sigma}^T A\int_M\left(Sf^{p+1}+4\left|\nabla f^{\frac{p+1}{2}}\right|^2\right)\md\mu \md t\\ &\leq 4^{1+\frac{1}{n}}A\left[a(p+1)+\frac{1}{\sigma-\tau}\right]^{1+\frac{2}{n}}\left(\int_{\tau}^T\int_Mf^{p+1}\md\mu \md t\right)^{1+\frac{2}{n}}. \end{align*} Set $$ H(p,\tau)=\left(\int_{\tau}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}}, $$ for any $p\geq 2$ and $0<\tau<T$. So we get \begin{equation}\label{32} H(p(1+\frac{2}{n}), \sigma)\leq \left(4^{1+\frac{1}{n}}A\right)^{\frac{1}{p(1+\frac{2}{n})}}\left(ap+\frac{1}{\sigma-\tau}\right)^{\frac{1}{p}}H(p,\tau). \end{equation} Now we fix $0<t_0<t_1<T$, $p_0\geq 2$. Let $\chi=1+\frac{2}{n}$, $p_k=p_0\chi^k$, $\tau_k=t_0+(1-\frac{1}{\chi^k})(t_1-t_0)$. Then it follows from (\ref{32}) that \begin{equation*} H(p_{k+1}, \tau_{k+1})\leq \left(4^{1+\frac{1}{n}}A\right)^{\frac{1}{p_{k+1}}}\left(ap_k+\frac{\chi^k}{t_1-t_0}\frac{\chi}{\chi-1}\right)^{\frac{1}{p_k}}H(p_k,\tau_k). \end{equation*} Hence by iteration, we arrive at \begin{equation*} H(p_{m+1}, \tau_{m+1})\leq\left(4^{1+\frac{1}{n}}A\right)^{\sum\limits_{k=0}^m\frac{1}{p_{k+1}}} \left(ap_0+\frac{1}{t_1-t_0}\frac{\chi}{\chi-1}\right)^{\sum\limits_{k=0}^m\frac{1}{p_k}}\chi^{\sum\limits_{k=0}^m\frac{k}{p_k}}H(p_0,\tau_0). \end{equation*} Letting $m\rightarrow\infty$, we obtain \begin{equation*} H(p_{\infty}, \tau_{\infty})\leq C_0 \left[ap_0+\frac{n+2}{2(t_1-t_0)}\right]^{\frac{n+2}{2p_0}}H(p_0,\tau_0). \end{equation*} for all $p_0\geq 2$. This means \begin{equation}\label{33}
\sup_{(x,t)\in M\times[t_1,T]}|f(x,t)|\leq \left(C_1a+\frac{C_2}{t_1-t_0}\right)^{\frac{n+2}{2p_0}}\left(\int_{t_0}^T\int_Mf^{p_0}\md\mu \md t\right)^{\frac{1}{p_0}}, \end{equation} where $C_1$ and $C_2$ are both positive constants depending on dimension $n$, $p_0$, initial metric $g_0$, $\phi_0$ and $\lambda_0$. Moreover, for $0<p<2$, we set $$
h(s)=\sup_{(x,t)\in M\times[s,T]}|f(x,t)|. $$ Combining Young's inequality with (\ref{33}), we deduce \begin{align*} h(t_1)&\leq \left(C_1a+\frac{C_2}{t_1-t_0}\right)^{\frac{n+2}{4}}\left(\int_{t_0}^T\int_Mf^{2}\md\mu \md t\right)^{\frac{1}{2}}\\ &\leq h(t_0)^{\frac{2-p}{2}}\left(C_1a+\frac{C_2}{t_1-t_0}\right)^{\frac{n+2}{4}}\left(\int_{t_0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{2}}\\ &\leq \frac{1}{2}h(t_0)+\left(C_1a+\frac{C_2}{t_1-t_0}\right)^{\frac{n+2}{2p}}\left(\int_{t_0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}}. \end{align*} Now we use the iteration method again. Fix $0<t_0<t_1<T$; for some $0<\theta<1$, we let $x_k=t_1-(1-\theta^k)(t_1-t_0)$. Then by iteration, \begin{align*} h(t_1)=h(x_0)&\leq \frac{1}{2^{k}}h(x_k) +\left(\int_{t_0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}}\sum_{i=0}^{k-1}\frac{1}{2^i}\left(C_1a+\frac{C_2}{x_i-x_{i+1}}\right)^{\frac{n+2}{2p}}\\ &\leq \frac{1}{2^{k}}h(x_k) +\left(\int_{t_0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}}\left(C_1a+\frac{C_2}{t_1-t_0}\right)^{\frac{n+2}{2p}}\sum_{i=0}^{k-1}\left(2\theta^{\frac{n+2}{2p}}\right)^{-i}. \end{align*} Choose $0<\theta<1$ such that $2\theta^{\frac{n+2}{2p}}>1$, that is, $\frac{1}{2}<\theta^{\frac{n+2}{2p}}<1$. Letting $k\rightarrow\infty$, we have \begin{align}\label{34} h(t_1)\leq \left(C_1a+\frac{C_2}{t_1-t_0}\right)^{\frac{n+2}{2p}}\left(\int_{t_0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}}, \end{align} for all $0<p<2$ and $0<t_0<t_1<T$. By (\ref{33}) and (\ref{34}) together, as $t_0\rightarrow0$, it follows that $$ h(t_1)\leq \left(C_1a+\frac{C_2}{t_1}\right)^{\frac{n+2}{2p}}\left(\int_{0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}},\quad \forall p>0, $$ where $C_1$ and $C_2$ are both positive constants depending on dimension $n$, $p$, initial metric $g_0$, $\phi_0$ and $\lambda_0$. This completes the proof. \end{proof}
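We remark that the exponent $\frac{n+2}{2p_0}$ in (\ref{33}) comes from summing a geometric series in the Moser iteration above: since $\chi=1+\frac{2}{n}$ and $p_k=p_0\chi^k$,
\begin{equation*}
\sum_{k=0}^{\infty}\frac{1}{p_k}=\frac{1}{p_0}\cdot\frac{\chi}{\chi-1}=\frac{1}{p_0}\cdot\frac{n+2}{2}=\frac{n+2}{2p_0},
\end{equation*}
while the convergent series $\sum_{k=0}^{\infty}\frac{k}{p_k}$ only contributes to the constant $C_0$.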
Note that the heat equation (\ref{O2}) is a linear parabolic equation. The above proof for Theorem \ref{th2} can be applied to the heat equation almost verbatim. Thus it is not difficult to get the following corollary, which will be used to determine the upper bound of the heat kernel later. \begin{cor}\label{co2} Assume that ($g(x,t), \phi(x,t)$) is a smooth solution to the harmonic-Ricci flow (\ref{O1}) in $M^n\times[0,T]$ with the initial value ($g_0, \phi_0$) and $S\geq0$ at the initial time. Let $u$ be a nonnegative smooth solution to the heat equation (\ref{O2}) on $M\times[0, T]$. Then for any $0\leq s<t\leq T$ and $p>0$ we have \begin{equation*}
\sup_{x\in M} |u(x,t)|\leq \frac{C}{(t-s)^{\frac{n+2}{2p}}}\left(\int_{s}^t\int_Mu(x,\tau)^{p}\md\mu \md\tau\right)^{\frac{1}{p}}, \end{equation*} where $C$ is a positive constant depending on dimension $n$, $p$, $g_0$, $\phi_0$ and $\lambda_0$. \end{cor} In fact, the nonnegativity of $S$ in the conditions of Theorem \ref{th2} can be removed; in that case the parabolic estimate will also depend on the negative lower bound of $S$. By arguments similar to those for Theorem \ref{th2}, we can obtain the following estimate. \begin{cor}\label{co3} Assume that ($g(x,t), \phi(x,t)$) is a smooth solution to the harmonic-Ricci flow (\ref{O1}) in $M^n\times[0,T]$ with initial value ($g_0, \phi_0$), $S\geq-S_0$ for a nonnegative constant $S_0$ at the initial time, and the first eigenvalue $\lambda_0$ of $\mathcal{F}$ entropy with respect to $g_0$ is positive. Let $f$ be a nonnegative Lipschitz continuous function on $M\times[0, T]$ satisfying \begin{equation*} \partial_t f\leq \Delta f+af \end{equation*} on $M\times[0, T]$ in the weak sense, where $a\geq 0$. Then we have for any $0<t\leq T$ and $p>0$ \begin{equation*}
\sup_{x\in M} |f(x,t)|\leq \left(C_0S_0+C_1a+\frac{C_2}{t}\right)^{\frac{n+2}{2p}}\left(\int_{0}^T\int_Mf^{p}\md\mu \md t\right)^{\frac{1}{p}}, \end{equation*} where $C_0$, $C_1$ and $C_2$ are all positive constants depending on dimension $n$, $p$, $g_0$, $\phi_0$ and $\lambda_0$. \end{cor} Now we turn to the heat kernel estimate, which is easy to carry out by means of Corollary \ref{co2}. \begin{proof} [Proof of Theorem \ref{th3}] By the definition of the heat kernel, we know that $$ \partial_tG(x,t;y,s)-\Delta_xG(x,t;y,s)=0. $$ Combining this with the assumption $S\geq0$, we have $$ \partial_t\int_MG(x,t;y,s)\md\mu(x,t)=\int_M\big [\Delta_xG(x,t;y,s)-SG(x,t;y,s)\big ]\md\mu(x,t)\leq0. $$ This implies that the above integral of the heat kernel is non-increasing in $t$. So we derive $$ \int_MG(x,t;y,s)\md\mu(x,t)\leq\int_MG(x,s;y,s)\md\mu(x,s)=1, $$ for all $0\leq s<t\leq T$. Therefore, by Corollary \ref{co2}, it follows that $$ G(x,t;y,s)\leq \frac{C}{(t-s)^{\frac{n+2}{2}}}\int_{s}^t\int_MG(x,\tau;y,s)\md\mu(x,\tau)\md\tau\leq \frac{C}{(t-s)^{\frac{n}{2}}}, $$ where $C$ is a positive constant depending on dimension $n$, $g_0$, $\phi_0$ and $\lambda_0$. \end{proof} Thanks to Corollary \ref{co3}, the same upper bound for the heat kernel can be given without the nonnegativity assumption on $S$, but at the cost of a dependence on the upper bound of the time interval. \begin{cor}\label{co4} Assume that ($g(x,t), \phi(x,t)$) is a smooth solution to the harmonic-Ricci flow (\ref{O1}) in $M^n\times[0,T]$ with initial value ($g_0, \phi_0$) and the first eigenvalue $\lambda_0$ of $\mathcal{F}$ entropy with respect to $g_0$ is positive. Let $G(x,t;y,s)$ be the heat kernel. Then for all $x,y\in M$ and all $0\leq s<t\leq T$, we have \begin{equation*} G(x,t;y,s)\leq \frac{C}{(t-s)^{\frac{n}{2}}}, \end{equation*} where $C$ is a positive constant, which depends on dimension $n$, $g_0$, $\phi_0$, $\lambda_0$ and $T$. 
\end{cor} \begin{proof} By Theorem \ref{th3}, we only need to prove the estimate for the case that $S$ has a negative minimum at the initial time. We can assume that $S_0=-\min\limits_{x\in(M,g_0)} S(x,0)>0$. From the definition of the heat kernel and Corollary \ref{co3}, we have \begin{equation}\label{35}
\sup_{x\in M} |G(x,t;y,s)|\leq \left(C_0S_0+\frac{C_2}{t-s}\right)^{\frac{n+2}{2}}\int_{s}^t\int_MG(x,\tau;y,s)\md\mu(x,\tau) \md\tau, \end{equation} for all $0\leq s<t\leq T$. In addition, since $S\geq-S_0$ for any time $t\in [0,T]$, we deduce \begin{align*} \partial_t\int_MG(x,t;y,s)\md\mu(x,t)=&\int_M\big [\Delta_xG(x,t;y,s)-SG(x,t;y,s)\big ]\md\mu(x,t)\\ \leq& S_0\int_MG(x,t;y,s)\md\mu(x,t). \end{align*} Integrating from $s$ to $\tau$ gives $$ \int_MG(x,\tau;y,s)\md\mu(x,\tau)\leq e^{S_0(\tau-s)}\int_MG(x,s;y,s)\md\mu(x,s)=e^{S_0(\tau-s)}, $$ for all $0\leq s<\tau\leq T$. Therefore, combining with (\ref{35}), we conclude that $$ G(x,t;y,s)\leq \left(C_0S_0+\frac{C_2}{t-s}\right)^{\frac{n+2}{2}}\int_{s}^te^{S_0(\tau-s)}\md\tau\leq \frac{C}{(t-s)^{\frac{n}{2}}}, $$ where the last inequality follows from $0<t-s\leq T$ and $C$ is a positive constant depending on dimension $n$, $g_0$, $\phi_0$, $\lambda_0$ and $T$. \end{proof} \noindent{\bf{Acknowledgements.}} This work was carried out while the authors were visiting Northwestern University. We would like to thank Professor Valentino Tosatti and Professor Ben Weinkove for hospitality and helpful discussions. We also thank Wenshuai Jiang for some useful conversations.
\begin{flushleft} Shouwen Fang\\ School of Mathematical Science, Yangzhou University,\\ Yangzhou 225002, P. R. China\\ E-mail: shwfang@163.com \end{flushleft}
\begin{flushleft} Tao Zheng\\ School of Mathematics and Statistics, Beijing Institute of Technology,\\
Beijing 100081, P. R. China\\ E-mail: zhengtao08@amss.ac.cn \end{flushleft}
\end{document}
"id": "1501.00639.tex",
"language_detection_score": 0.6498665809631348,
"char_num_after_normalized": null,
"contain_at_least_two_stop_words": null,
"ellipsis_line_ratio": null,
"idx": null,
"lines_start_with_bullet_point_ratio": null,
"mean_length_of_alpha_words": null,
"non_alphabetical_char_ratio": null,
"path": null,
"symbols_to_words_ratio": null,
"uppercase_word_ratio": null,
"type": null,
"book_name": null,
"mimetype": null,
"page_index": null,
"page_path": null,
"page_title": null
} | arXiv/math_arXiv_v0.2.jsonl | null | null |
\begin{document}
\title[Homological Dimensions and Semidualizing Complexes]{Homological Dimensions with Respect to a Semidualizing Complex}
\author{Jonathan Totushek}
\thanks{This material is based on work supported by North Dakota EPSCoR and National Science Foundation Grant EPS-0814442}
\subjclass[2010]{13D02, 13D05, 13D09}
\keywords{Auslander class, Bass class, flat dimension, injective dimension, projective dimension, semidualizing complex}
\maketitle
\begin{abstract}
In this paper we build on Takahashi and White's $\cat{P}_C$-projective dimension and $\cat{I}_C$-injective dimension to define these dimensions when $C$ is a semidualizing complex. We develop the framework for these homological dimensions by establishing base change results and local-global behavior. Furthermore, we investigate how these dimensions interact with other invariants. \end{abstract}
\section{Introduction}\label{141111:2}
Let $R$ be a commutative noetherian ring. The projective, flat, and injective dimensions of an $R$-module $M$ are now classical invariants that are important for studying $M$ and $R$. These dimensions were later generalized to $R$-complexes by Foxby \cite{foxby:bcfm}, and many useful results about dimensions for modules also hold true for complexes.
A finitely generated $R$-module $C$ is \textit{semidualizing} if $R\cong \Hom_R(C,C)$ and $\operatorname{Ext}^{\geqslant 1}_R (C,C)=0$. Takahashi and White \cite{takahashi:hasm} defined, for a semidualizing $R$-module $C$, the $\cat{P}_C$-projective and $\cat{I}_C$-injective dimensions. The \textit{$\cat{P}_C$-projective dimension} of an $R$-module $M$ ($\catpc\text{-}\pd_R(M)$) is the length of the shortest resolution of $M$ by modules of the form $C\otimes_R P$ where $P$ is a projective module. They define \textit{$\cat{I}_C$-injective dimension} ($\catic\text{-}\id_R(M)$) dually, and one defines the \textit{$\cat{F}_C$-projective dimension} ($\catfc\text{-}\pd_R(M)$) similarly. We extend these constructions to the realm of $R$-complexes. Note that we work in the derived category $\mathcal{D}(R)$. See Section \ref{141111:1} for some background and notation on this subject.
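To fix ideas, consider the case $C=R$: then $C\otimes_RP\cong P$ for every projective $R$-module $P$ and $\Hom_R(C,I)\cong I$ for every injective $R$-module $I$, so these dimensions recover the classical invariants, namely
\begin{equation*}
\catpc\text{-}\pd_R(M)=\operatorname{pd}_R(M), \qquad \catfc\text{-}\pd_R(M)=\operatorname{fd}_R(M), \qquad \catic\text{-}\id_R(M)=\operatorname{id}_R(M) \quad\text{when } C=R.
\end{equation*}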
A complex $C\in \mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$ is \textit{semidualizing} if the natural homothety morphism $\chi^R_C:R\to \mathbf{R}\!\operatorname{Hom}_R(C,C)$ is an isomorphism in $\mathcal{D}(R)$. To understand the $\cat{P}_C$-projective, $\cat{F}_C$-projective, and $\cat{I}_C$-injective dimensions in this context, we use the following result; see Theorem \ref{141111:6} below.
\begin{thm} \label{141111:3}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$, and let $C$ be a semidualizing $R$-complex.
\begin{enumerate}[\rm(a)]
\item We have $\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))<\infty$ if and only if there exists $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{pd}_R(Y)<\infty$ and $X\simeq C\otimes^{\mathbf{L}}_R Y$ in $\mathcal{D}(R)$. When these conditions are satisfied, one has $Y\simeq \mathbf{R}\!\operatorname{Hom}_R(C,X)$ and $X\in \cat{B}_C(R)$.\label{141111:3a}
\item We have $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))<\infty$ if and only if there exists $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{fd}_R(Y)<\infty$ and $X\simeq C\otimes^{\mathbf{L}}_R Y$ in $\mathcal{D}(R)$. When these conditions are satisfied, one has $Y\simeq \mathbf{R}\!\operatorname{Hom}_R(C,X)$ and $X\in \cat{B}_C(R)$.\label{141111:3b}
\item We have $\operatorname{id}_R(C\otimes^{\mathbf{L}}_R X)<\infty$ if and only if there exists $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{id}_R(Y)<\infty$ and $X\simeq \mathbf{R}\!\operatorname{Hom}_R(C,Y)$ in $\mathcal{D}(R)$. When these conditions are satisfied, one has $Y\simeq C\otimes^{\mathbf{L}}_R X$ and $X\in \cat{A}_C(R)$.\label{141111:3c}
\end{enumerate} \end{thm} With this in mind, we define, e.g., $\catpc\text{-}\pd_R(X) := \sup(C) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))$; thus $\catpc\text{-}\pd_R(X)<\infty$ if and only if $X$ satisfies the equivalent conditions of Theorem \ref{141111:3}\eqref{141111:3a}. We define $\catfc\text{-}\pd_R(X)$ and $\catic\text{-}\id_R(X)$ similarly.
In Section \ref{141111:4} we develop the foundations of these homological dimensions. For instance, we establish finite flat dimension base change (\ref{141021:1}) and local-global principles (\ref{140623:10}-\ref{140623:12}). Also in Theorem \ref{141112:1} we show how these notions naturally augment Foxby Equivalence. In Section \ref{141111:5} we establish some stability results and the following; see Theorem \ref{140503:1}.
\begin{thm} \label{141117:1}
Assume $R$ has a dualizing complex $D$ and let $X\in \mathcal{D}_{\rm{b}}(R)$. Then $\catfc\text{-}\pd_R(X)<\infty$ if and only if $\mathcal{I}_{C^{\dagger}}\text{-}\operatorname{id}_R(X)<\infty$ where $C^{\dagger} = \mathbf{R}\!\operatorname{Hom}_R(C,D)$. \end{thm}
This result is key for the work in \cite{sather:fohdwrtasc}.
\section{Background}\label{141111:1}
Throughout this paper $R$ and $S$ are commutative noetherian rings with identity and $C$ is a semidualizing $R$-complex.
We work in the derived category $\mathcal{D}(R)$ of complexes of $R$-modules, indexed homologically (see e.g. \cite{gelfand:moha,hartshorne:rad}). A complex $X\in \mathcal{D}(R)$ is \textit{homologically bounded} if $\operatorname{H}_i(X) = 0$ for all $|i|\gg 0$, and $X$ is \textit{homologically finite} if $\oplus_i \operatorname{H}_i(X)$ is finitely generated. We denote by $\mathcal{D}_{\operatorname{b}}(R)$ and $\mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$ the full subcategories of $\mathcal{D}(R)$ consisting of all homologically bounded $R$-complexes and all homologically finite $R$-complexes, respectively. Isomorphisms in $\mathcal{D}(R)$ are identified by the symbol $\simeq$.
For $R$-complexes $X$ and $Y$, let $\inf(X)$ and $\sup(X)$ denote the infimum and supremum, respectively, of the set $\{i\in \mathbb{Z}\mid \operatorname{H}_i(X) \neq 0\}$. Let $X\otimes^{\mathbf{L}}_R Y$ and $\mathbf{R}\!\operatorname{Hom}_R(X,Y)$ denote the left-derived tensor product and right-derived homomorphism complexes, respectively.
\begin{defn} \label{140630:1}
Let $X\in \mathcal{D}_+(R)$. The \textit{projective dimension} of $X$ is
\[
\operatorname{pd}_R(X) = \inf\left\{n\in \mathbb{Z} \left|
\begin{array}{l}
P\xrightarrow{\simeq} X \text{ where } P \text{ is a complex of projective}\\
R\text{-modules such that } P_i = 0 \text{ for all } i>n
\end{array}\right.
\right\}.
\]
The \textit{flat dimension} ($\operatorname{fd}$) and \textit{injective dimension} ($\operatorname{id}$) are defined similarly. Let $\cat{P}(R)$, $\cat{F}(R)$, and $\cat{I}(R)$ denote the full subcategories of $\mathcal{D}_{\operatorname{b}}(R)$ consisting of complexes of finite projective, flat, and injective dimensions, respectively. \end{defn}
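The following routine remark, included for orientation (it is a standard check, not needed in the sequel), verifies that this definition recovers the classical notion for modules.

```latex
\begin{rem}
Let $M$ be an $R$-module, viewed as a complex concentrated in degree $0$.
A projective resolution $0\to P_n\to\cdots\to P_0\to M\to 0$ yields a
complex $P = (0\to P_n\to\cdots\to P_0\to 0)$ of projective modules with
$P\xrightarrow{\simeq} M$ and $P_i = 0$ for all $i>n$, so
$\operatorname{pd}_R(M)\leqslant n$. Conversely, a bounded complex of
projectives isomorphic to $M$ in $\mathcal{D}(R)$ yields, by truncation, a
projective resolution of $M$ of the same length. Hence
Definition \ref{140630:1} agrees with the classical projective dimension
of a module, and similarly for $\operatorname{fd}$ and $\operatorname{id}$.
\end{rem}
```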
\begin{fact}[{\cite[Proposition 4.5]{avramov:hdouc}}] \label{141121:1}
Let $X,Y\in\mathcal{D}(R)$.
\begin{enumerate}[\rm(a)]
\item If $\operatorname{id}_R(Y)<\infty$, then $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y))\leqslant \operatorname{id}_R(X) + \sup(Y)$.\label{141121:1a}
\item If $\operatorname{fd}_R(Y)<\infty$, then $\operatorname{id}_R(X\otimes^{\mathbf{L}}_R Y)\leqslant \operatorname{id}_R(X) - \inf(Y)$.\label{141121:1b}
\end{enumerate} \end{fact}
The following result is for use in Section \ref{141111:5}.
\begin{lem} \label{140821:1}
Let $X\in \mathcal{D}_{\mathrm{b}}(R)$.
\begin{enumerate}[\rm(a)]
\item If $E$ is a faithfully injective $R$-module and $\operatorname{id}_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))\leqslant n$, then we have $\operatorname{fd}_R(X)\leqslant n$.\label{140821:1a}
\item If $F$ is a faithfully flat $R$-module and $\operatorname{fd}_R(X\otimes^{\mathbf{L}}_R F)\leqslant n$, then $\operatorname{fd}_R(X)\leqslant n$.\label{140821:1b}
\item If $E$ is a faithfully injective $R$-module and $\operatorname{id}_R(X)\leqslant n$, then we have that $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))\leqslant n$.\label{141007:1a}
\item If $F$ is a faithfully flat $R$-module and $\operatorname{id}_R(X)\leqslant n$, then $\operatorname{id}_R(X\otimes^{\mathbf{L}}_R F)\leqslant n$.\label{141007:1b}
\end{enumerate} \end{lem}
\begin{proof} \eqref{140821:1a} Assume that $\operatorname{id}_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))\leqslant n$ and let $F\xrightarrow{\simeq} X$ be a flat resolution. \begin{comment}
Since $E$ is injective and $F_i$ is flat, we must have that $\Hom_R(F_i,E)$ is injective for all $i$. Therefore $\Hom_R(F,E)$ is a complex of injective modules such that $\mathbf{R}\!\operatorname{Hom}_R(X,E)\simeq \Hom_R(F,E)$.
\[
\Hom_R(F,E) = F^{*} = 0 \to F^*_0 \xrightarrow{\partial^{F^*}_1} \cdots \xrightarrow{\partial^{F^*}_{-(n-1)}} F^*_{-(n-1)} \xrightarrow{\partial^{F^*}_{-n)}} F^*_{-n} \xrightarrow{\partial^{F^*}_{-(n+1)}} F^*_{-(n+1)} \to \cdots
\]
where $\partial^{F^*}_{-i} = \Hom_R(\partial^F_{i},E)$ and $F^*_i = \Hom_R(F_i,E)$. Now \cite[Theorem 2.4.I]{avramov:hdouc} implies that $\Ker(\partial^{F^*}_{-(n+1)})$ is injective. Observe that there is an isomorphism $\Ker(\Hom_R(\partial^F_{n+1},E)) \cong \Hom_R(\operatorname{Coker}(\partial^F_{n+1}),E)$. Therefore $\Hom_R(\operatorname{Coker}(\partial^F_{n+1}),E)$ is injective. \end{comment}
A standard truncation argument shows that $\Hom_R(\operatorname{Coker}(\partial^F_{n+1}),E)$ is injective. Since $E$ is faithfully injective, we also have that $\operatorname{Coker}(\partial^F_{n+1})$ is flat. Thus $\operatorname{fd}_R(X)\leqslant n$.
The proofs of \eqref{140821:1b}, \eqref{141007:1a}, and \eqref{141007:1b} are similar. \end{proof}
\begin{fact}[{\cite[Lemma 4.4]{avramov:hdouc}}] \label{141009:1}
Let $L,M,N \in \mathcal{D}(R)$. Assume that $L\in \mathcal{D}^{\operatorname{f}}_{+}(R)$.
The natural \textit{tensor-evaluation} morphism
\[
\omega_{LMN}:\mathbf{R}\!\operatorname{Hom}_R(L,M)\otimes^{\mathbf{L}}_R N \to \mathbf{R}\!\operatorname{Hom}_R(L,M\otimes^{\mathbf{L}}_R N)
\]
is an isomorphism when $M\in \mathcal{D}_{-}(R)$ and either $L\in \cat{P}(R)$ or $N\in \cat{F}(R)$.
The natural \textit{Hom-evaluation} morphism
\[
\theta_{LMN}:L\otimes^{\mathbf{L}}_R\mathbf{R}\!\operatorname{Hom}_R(M,N)\to \mathbf{R}\!\operatorname{Hom}_R(\mathbf{R}\!\operatorname{Hom}_R(L,M),N)
\]
is an isomorphism when $M\in \mathcal{D}_{\operatorname{b}}(R)$ and either $L\in \cat{P}(R)$ or $N\in \cat{I}(R)$. \end{fact}
\begin{defn}[Foxby Classes] \label{140701:1}
\
\begin{enumerate}[\rm(1)]
\item The \textit{Auslander Class} with respect to $C$ is the full subcategory $\cat{A}_C(R)\subseteq \mathcal{D}_{\mathrm{b}}(R)$ such that a complex $X$ is in $\cat{A}_C(R)$ if and only if $C\otimes^{\mathbf{L}}_R X\in \mathcal{D}_{\mathrm{b}}(R)$ and the natural morphism $\gamma^C_X:X\to \mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X)$ is an isomorphism in $\mathcal{D}(R)$. \label{140701:1,1}
\item The \textit{Bass Class} with respect to $C$ is the full subcategory $\cat{B}_C(R)\subseteq \mathcal{D}_{\mathrm{b}}(R)$ such that a complex $Y$ is in $\cat{B}_C(R)$ if and only if $\mathbf{R}\!\operatorname{Hom}_R(C,Y)\in\mathcal{D}_{\mathrm{b}}(R)$ and the natural morphism $\xi^C_Y:C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(C,Y) \to Y$ is an isomorphism in $\mathcal{D}(R)$. \label{140701:1,2}
\end{enumerate} \end{defn}
For a generalized diagrammatic version of the next result, see Theorem \ref{141112:1}.
\begin{fact}[Foxby Equivalence {\cite[Theorem 4.6]{christensen:scatac}}] \label{13032511}
Let $X,Y\in \mathcal{D}_{\operatorname{b}}(R)$.
\begin{enumerate}[(a)]
\item One has $X\in \mathcal{A}_{C}(R)$ if and only if $C\otimes^{\mathbf{L}}_R X \in \mathcal{B}_{C}(R)$.\label{13032511a}
\item One has $Y\in \mathcal{B}_{C}(R)$ if and only if $\mathbf{R}\!\operatorname{Hom}_R(C,Y)\in \mathcal{A}_{C}(R)$.\label{13032511b}
\end{enumerate} \end{fact}
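As a sanity check (an addition of ours, immediate from the definitions), the Foxby classes are as large as possible in the trivial case $C = R$.

```latex
\begin{rem}
Consider the case $C = R$. Since $R\otimes^{\mathbf{L}}_R X\simeq X$ and
$\mathbf{R}\!\operatorname{Hom}_R(R,Y)\simeq Y$, the morphisms $\gamma^R_X$
and $\xi^R_Y$ are isomorphisms for all $X,Y\in\mathcal{D}_{\mathrm{b}}(R)$.
Hence $\cat{A}_R(R) = \cat{B}_R(R) = \mathcal{D}_{\mathrm{b}}(R)$, and the
equivalences of Fact \ref{13032511} are the identity in this case.
\end{rem}
```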
\begin{fact}[{\cite[Proposition 4.4]{christensen:scatac}}] \label{14012006}
Let $X\in \mathcal{D}_{\text{b}}(R)$.
\begin{enumerate}[(a)]
\item If $\operatorname{fd}_R(X)<\infty$ (e.g., $\operatorname{pd}_R(X)<\infty$), then $X\in \cat{A}_C(R)$.
\label{14012006a}
\item If $\operatorname{id}_R(X)<\infty$, then $X\in \cat{B}_C(R)$.
\label{14012006b}
\end{enumerate} \end{fact}
\section{$C$-Dimensions for Complexes}\label{141111:4}
In this section we define the $\cat{P}_C$-projective, $\cat{F}_C$-projective, and $\cat{I}_C$-injective dimensions for complexes and build their foundations.
\begin{defn} \label{140226:1}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$.
\begin{enumerate}[\rm(1)]
\item The \textit{$\cat{P}_C$-projective dimension} of $X$ is defined as \label{140226:1a}
\[
\catpc\text{-}\pd_R(X) = \sup(C)+\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)).
\]
\item The \textit{$\cat{F}_C$-projective dimension} of $X$ is defined as \label{140226:1b}
\[
\catfc\text{-}\pd_R(X) = \sup(C)+\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)).
\]
\item The \textit{$\cat{I}_C$-injective dimension} of $X$ is defined as\label{140226:1c}
\[
\catic\text{-}\id_R(X) = \sup(C)+\operatorname{id}_R(C\otimes^{\mathbf{L}}_R X).
\]
\end{enumerate}
Let $\mathcal{P}_C(R)$, $\mathcal{F}_C(R)$, and $\mathcal{I}_C(R)$ denote the full subcategories of $\mathcal{D}_{\operatorname{b}}(R)$ of all complexes of finite $\cat{P}_C$-projective, $\cat{F}_C$-projective, and $\cat{I}_C$-injective dimension, respectively. \end{defn}
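As a first computation directly from Definition \ref{140226:1} (a remark we add for illustration), the complex $C$ itself has the smallest possible $\cat{P}_C$-projective dimension.

```latex
\begin{rem}
Since $C$ is semidualizing, the homothety morphism gives
$\mathbf{R}\!\operatorname{Hom}_R(C,C)\simeq R$, and $\operatorname{pd}_R(R) = 0$.
Hence
\[
\catpc\text{-}\pd_R(C) = \sup(C)+\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,C)) = \sup(C),
\]
and similarly $\catfc\text{-}\pd_R(C) = \sup(C)$. In particular, $C$ always
has finite $\cat{P}_C$-projective and $\cat{F}_C$-projective dimension.
\end{rem}
```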
\begin{rem}\label{140623:5}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. Observe that $\sup(C)<\infty$. Hence $\catpc\text{-}\pd_R(X)<\infty$ if and only if $\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))<\infty$. If $\catpc\text{-}\pd_R(X)<\infty$, then Fact \ref{14012006}\eqref{14012006a} implies that $\mathbf{R}\!\operatorname{Hom}_R(C,X)\in \cat{A}_C(R)$ and Foxby Equivalence (\ref{13032511}) implies that $X\in \cat{B}_C(R)$. Similarly, $\catfc\text{-}\pd_R(X)<\infty$ if and only if $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))<\infty$. If $\catfc\text{-}\pd_R(X)<\infty$, then $X\in\cat{B}_C(R)$. Also we have $\catic\text{-}\id_R(X)<\infty$ if and only if $\operatorname{id}_R(C\otimes^{\mathbf{L}}_R X)<\infty$. Hence, if $\catic\text{-}\id_R(X)<\infty$, then $X\in\cat{A}_C(R)$. \end{rem}
\begin{rem} \label{140623:2}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. Note that when $C=R$ we have that $\catpc\text{-}\pd_R(X) = \sup(R) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(R,X)) = \operatorname{pd}_R(X)$. Similarly in this case $\catfc\text{-}\pd_R(X) = \operatorname{fd}_R(X)$ and $\catic\text{-}\id_R(X) = \operatorname{id}_R(X)$. \end{rem}
\begin{rem}\label{140623:1}
Let $M$ be an $R$-module. When $C$ is a semidualizing $R$-module, Takahashi and White \cite[Theorem 2.11]{takahashi:hasm}, using the definition described in Section \ref{141111:2}, showed that $\catpc\text{-}\pd_R(M) = \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,M))$.
Since $\sup(C) = 0$ in this case, Definition \ref{140226:1}\eqref{140226:1a} shows that our definition is consistent with the one from \cite{takahashi:hasm}. In a similar way, it can be shown that $\catic\text{-}\id$ recovers Takahashi and White's definition in this case. \end{rem}
The next result compares $\catfc\text{-}\pd$ with $\catpc\text{-}\pd$.
\begin{prop} \label{140305:1}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. Then
\[
\catfc\text{-}\pd_R(X)\leqslant \catpc\text{-}\pd_R(X)\leqslant \catfc\text{-}\pd_R(X)+\dim(R).
\]
In particular if $\dim(R)<\infty$, then we have $\catpc\text{-}\pd_R(X)<\infty$ if and only if $\catfc\text{-}\pd_R(X)<\infty$. \end{prop}
\begin{proof}
Assume that $\catpc\text{-}\pd_R(X)=n<\infty$. Then
\[
\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))\leqslant \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)) = n - \sup(C)<\infty.
\]
It now follows that $\catfc\text{-}\pd_R(X)\leqslant n$.
Next assume that $\dim(R)<\infty$ and $\catfc\text{-}\pd_R(X)=n <\infty$. By \cite{raynaud:cpptpm} we have
\[
\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))\leqslant \operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))+\dim(R)=n-\sup(C)+\dim(R).
\]
Therefore $\catpc\text{-}\pd_R(X)\leqslant \dim(R) + n$.
\begin{comment}
\[
P^{*}=0\to P^{*}_m\to \cdots \to P^{*}_0\to \operatorname{Coker}(\partial^P_{n+1})\to 0
\]
be a projective resolution. Now connect $P^{*}$ with $P$ to obtain $\widetilde{P}$ a bounded projective resolution of $X$:\begin{center}
\begin{tikzpicture}
\matrix[matrix of math nodes,row sep=1em, column sep=.5em,
text height=1.5ex, text depth=0.25ex]
{|[name=P]| \widetilde{P}= & |[name=0]| 0 & & & |[name=P1]| P^{*}_m & & & |[name=P2]| \cdots & & & & |[name=A]| P^{*}_0 & & |[name=B]| P_{n-1} & & & |[name=C]| \cdots & & & |[name=p0]| P_0 & & & |[name=02]| 0\\ & & & & & & & & & & & & |[name=i1]| \operatorname{Coker}(\partial^P_{n+1}) \\& & & & & & & & & & & |[name=z3]| 0 & & |[name=z4]| 0. \\};
\draw[->,font=\scriptsize]
(0) edge (P1)
(P1) edge (P2)
(P2) edge (A)
(A) edge node[above]{$\varepsilon\circ\tau$} (B)
(B) edge node[above]{$\partial^P_{n-1}$} (C)
(A) edge node[auto]{$\tau$} (i1)
(i1) edge (z4)
(z3) edge (i1)
(i1) edge node[auto]{$\varepsilon$} (B)
(C) edge (p0)
(p0) edge (02);
\end{tikzpicture} \end{center} Hence $\widetilde{P}$ is a projective and thus $\widetilde{P}$ is a flat resolution of $\mathbf{R}\!\operatorname{Hom}_R(C,X)$. Therefore $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))\leqslant m+n-\sup(C)$ and $\catfc\text{-}\pd_R(X)\leqslant m+n\leqslant \dim(R)+n$. \end{comment} \end{proof}
\begin{comment} \begin{thm}[Foxby Equivalence]\label{140623:8}
Let $C$ be a semidualizing $R$-complex. Set $\mathcal{P}(R)$, $\mathcal{F}(R)$, and $\mathcal{I}(R)$ to be the classes of complexes of finite projective, flat, and injective dimension, respectively. There are equivalences of categories \begin{center} \begin{minipage}{.4 \textwidth} \begin{tikzpicture}
\matrix[matrix of math nodes,row sep=2em, column sep=6em,
text height=1.5ex, text depth=0.25ex]
{|[name=A]| \mathcal{D}(R) & |[name=B]| \mathcal{D}(R)\\
|[name=C]| \cat{A}_C(R) & |[name=D]| \cat{B}_C(R)\\
|[name=E]| \mathcal{P}(R) & |[name=F]| \mathcal{P}_C(R)\\
|[name=G]| \mathcal{F}(R) & |[name=H]| \mathcal{F}_C(R)\\};
\path[->,font=\scriptsize]
([yshift= 3pt]A.east) edge node[above]{$C\otimes^{\mathbf{L}}_R -$} ([yshift= 3pt]B.west)
([yshift= 3pt]C.east) edge ([yshift= 3pt]D.west)
([yshift= 3pt]E.east) edge ([yshift= 3pt]F.west)
([yshift= 3pt]G.east) edge ([yshift= 3pt]H.west);
\path[<-,font=\scriptsize]
([yshift= -3pt]A.east) edge node[below]{$\mathbf{R}\!\operatorname{Hom}_R(C,-)$} ([yshift= -3pt]B.west)
([yshift= -3pt]C.east) edge ([yshift= -3pt]D.west)
([yshift= -3pt]E.east) edge ([yshift= -3pt]F.west)
([yshift= -3pt]G.east) edge ([yshift= -3pt]H.west);
\path[right hook->]
(C) edge (A)
(E) edge (C)
(D) edge (B)
(F) edge (D)
(G) edge (E)
(H) edge (F); \end{tikzpicture} \end{minipage} \begin{minipage}{.3 \textwidth} \begin{tikzpicture}
\matrix[matrix of math nodes,row sep=2em, column sep=6em,
text height=1.5ex, text depth=0.25ex]
{|[name=A]| \mathcal{D}(R) & |[name=B]| \mathcal{D}(R)\\
|[name=C]| \cat{A}_C(R) & |[name=D]| \cat{B}_C(R)\\
|[name=E]| \mathcal{I}_C(R) & |[name=F]| \mathcal{I}(R)\\};
\path[->,font=\scriptsize]
([yshift= 3pt]A.east) edge node[above]{$C\otimes^{\mathbf{L}}_R -$} ([yshift= 3pt]B.west)
([yshift= 3pt]C.east) edge ([yshift= 3pt]D.west)
([yshift= 3pt]E.east) edge ([yshift= 3pt]F.west);
\path[<-,font=\scriptsize]
([yshift= -3pt]A.east) edge node[below]{$\mathbf{R}\!\operatorname{Hom}_R(C,-)$} ([yshift= -3pt]B.west)
([yshift= -3pt]C.east) edge ([yshift= -3pt]D.west)
([yshift= -3pt]E.east) edge ([yshift= -3pt]F.west);
\path[right hook->]
(C) edge (A)
(E) edge (C)
(D) edge (B)
(F) edge (D); \end{tikzpicture} \end{minipage} \end{center} \end{thm} \end{comment}
The following three results are versions of \cite[Theorem 2.11]{takahashi:hasm} involving a semidualizing complex. \begin{prop} \label{14012004}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. Then we have
\[
\catpc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X) = \sup(C)+\operatorname{pd}_R(X).
\]
In particular, $\catpc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X)<\infty$ if and only if $\operatorname{pd}_R(X)<\infty$. \end{prop}
\begin{proof} Let $n\in \mathbb{Z}$. We prove that $\catpc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X)\leqslant n$ if and only if $\sup(C) + \operatorname{pd}_R(X)\leqslant n$.
For the forward implication assume that $\catpc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X) \leqslant n$. Then by Definition \ref{140226:1}\eqref{140226:1a} we have
\[
\sup(C)+\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X)) = \catpc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X) \leqslant n.
\]
Thus $\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X))<\infty$. Fact \ref{14012006}\eqref{14012006a} implies $\mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X)\in \cat{A}_C(R)$. By Foxby Equivalence (\ref{13032511}) we have $C\otimes^{\mathbf{L}}_R X\in \cat{B}_C(R)$ and $X\in\cat{A}_C(R)$. Therefore we have $X\simeq \mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X)$ and $\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X)) = \operatorname{pd}_R(X)$. Thus $\sup(C)+\operatorname{pd}_R(X)\leqslant n$.
For the reverse implication assume that $\sup(C) + \operatorname{pd}_R(X)\leqslant n$. In particular, we have that $\operatorname{pd}_R(X)<\infty$. Therefore $X\in \cat{A}_C(R)$ and $X\simeq \mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X)$. It follows that $\operatorname{pd}_R(X) = \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X))$. By Definition \ref{140226:1}\eqref{140226:1a} we have
\[
\catpc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X) = \sup(C)+\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R X)) = \sup(C) + \operatorname{pd}_R(X)\leqslant n.
\] \end{proof}
The next two results are proven like Proposition \ref{14012004}. \begin{prop} \label{140623:3}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. Then we have
\[
\catfc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X)= \sup(C) + \operatorname{fd}_R(X).
\]
In particular, $\catfc\text{-}\pd_R(C\otimes^{\mathbf{L}}_R X)<\infty$ if and only if $\operatorname{fd}_R(X)<\infty$. \end{prop}
\begin{prop} \label{140623:4}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. Then we have
\[
\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))= \sup(C)+\operatorname{id}_{R}(X).
\]
In particular, $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))<\infty$ if and only if $\operatorname{id}_R(X)<\infty$. \end{prop}
Next, we have Theorem \ref{141111:3} from the introduction.
\begin{thm} \label{141111:6}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$.
\begin{enumerate}[\rm(a)]
\item We have $\catpc\text{-}\pd_R(X)<\infty$ if and only if there exists $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{pd}_R(Y) <\infty$ and $X\simeq C\otimes^{\mathbf{L}}_R Y$. When these conditions are satisfied, one has $Y\simeq \mathbf{R}\!\operatorname{Hom}_R(C,X)$ and $X\in \cat{B}_C(R)$.\label{141111:6a}
\item We have $\catfc\text{-}\pd_R(X)<\infty$ if and only if there exists $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{fd}_R(Y) <\infty$ and $X\simeq C\otimes^{\mathbf{L}}_R Y$. When these conditions are satisfied, one has $Y\simeq \mathbf{R}\!\operatorname{Hom}_R(C,X)$ and $X\in \cat{B}_C(R)$.\label{141111:6b}
\item We have $\catic\text{-}\id_R(X)<\infty$ if and only if there exists $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{id}_R(Y)<\infty$ and $X\simeq \mathbf{R}\!\operatorname{Hom}_R(C,Y)$. When these conditions are satisfied, one has $Y\simeq C\otimes^{\mathbf{L}}_R X$ and $X\in \cat{A}_C(R)$.\label{141111:6c}
\end{enumerate} \end{thm}
\begin{proof}
\eqref{141111:6a} For the forward implication assume that $\catpc\text{-}\pd_R(X) <\infty$. Then by Definition \ref{140226:1}\eqref{140226:1a} we have $\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)) = \catpc\text{-}\pd_R(X) - \sup(C)<\infty$. Fact \ref{14012006}\eqref{14012006a} implies that $\mathbf{R}\!\operatorname{Hom}_R(C,X)\in\cat{A}_C(R)$ and Foxby Equivalence implies that $X\in \cat{B}_C(R)$. Thus $X\simeq C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(C,X)\simeq C\otimes^{\mathbf{L}}_R Y$ with $Y= \mathbf{R}\!\operatorname{Hom}_R(C,X)$.
For the reverse implication assume that there exists a $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{pd}_R(Y) <\infty$ and $X\simeq C\otimes^{\mathbf{L}}_R Y$. Then Fact \ref{14012006}\eqref{14012006a} implies that $Y\in \cat{A}_C(R)$ and hence we have
\[
Y\simeq \mathbf{R}\!\operatorname{Hom}_R(C,C\otimes^{\mathbf{L}}_R Y) \simeq \mathbf{R}\!\operatorname{Hom}_R(C,X).
\]
It now follows by Definition \ref{140226:1}\eqref{140226:1a} that $\catpc\text{-}\pd_R(X)<\infty$.
Parts \eqref{141111:6b} and \eqref{141111:6c} are proven similarly. \end{proof}
The previous results give rise to a generalized Foxby Equivalence.
\begin{thm}[Foxby Equivalence] \label{141112:1}
There is a commutative diagram \begin{center} \begin{tikzpicture}
\matrix[matrix of math nodes,row sep=2em, column sep=3em,
text height=1.5ex, text depth=0.25ex]
{ |[name=M]| \cat{I}_C(R) & & |[name=N]| \cat{I}(R)\\
|[name=C]| \cat{A}_C(R) & & |[name=D]| \cat{B}_C(R)\\
|[name=G]| \mathcal{F}(R) & & |[name=H]| \mathcal{F}_C(R)\\
|[name=K]| \cat{P}(R) & & |[name=L]| \cat{P}_C(R)\\};
\path[->,font=\scriptsize]
([yshift= 3pt]M.east) edge ([yshift= 3pt]N.west)
([yshift= 3pt]C.east) edge node[above]{$C\otimes^{\mathbf{L}}_R -$} ([yshift= 3pt]D.west)
([yshift= 3pt]G.east) edge ([yshift= 3pt]H.west)
([yshift= 3pt]K.east) edge ([yshift= 3pt]L.west);
\path[<-,font=\scriptsize]
([yshift= -3pt]M.east) edge ([yshift= -3pt]N.west)
([yshift= -3pt]C.east) edge node[below]{$\mathbf{R}\!\operatorname{Hom}_R(C,-)$} ([yshift= -3pt]D.west)
([yshift= -3pt]G.east) edge ([yshift= -3pt]H.west)
([yshift= -3pt]K.east) edge ([yshift= -3pt]L.west);
\path[right hook->]
(M) edge (C)
(N) edge (D);
\path[left hook->]
(K) edge (G)
(G) edge (C)
(L) edge (H)
(H) edge (D); \end{tikzpicture} \end{center} where the vertical arrows are full embeddings, and the unlabeled horizontal arrows are quasi-inverse equivalences of categories. \end{thm}
The next result shows how $\catpc\text{-}\pd$ and $\catfc\text{-}\pd$ transfer along a ring homomorphism of finite flat dimension. Note that if $\varphi: R\to S$ is a ring homomorphism of finite flat dimension, then $C\otimes^{\mathbf{L}}_R S$ is a semidualizing $S$-complex by \cite[Theorem 5.6]{christensen:scatac} and \cite[Theorem II(a)]{frankild:rrhffd}.
\begin{prop} \label{141021:1}
Let $\varphi:R\to S$ be a ring homomorphism of finite flat dimension and $X\in\mathcal{D}_{\operatorname{b}}(R)$. Then one has
\begin{enumerate}[\rm(a)]
\item $\cat{P}_{C\otimes^{\mathbf{L}}_R S}\text{-}\operatorname{pd}_{S}(X\otimes^{\mathbf{L}}_R S)-\sup(C\otimes^{\mathbf{L}}_R S)\leqslant \catpc\text{-}\pd_R(X)-\sup(C)$\label{141021:1c},
\item $\cat{F}_{C\otimes^{\mathbf{L}}_R S}\text{-}\operatorname{pd}_{S}(X\otimes^{\mathbf{L}}_R S)-\sup(C\otimes^{\mathbf{L}}_R S) \leqslant \catfc\text{-}\pd_R(X) - \sup(C)$\label{141021:1d},
\item $\cat{P}_{C\otimes^{\mathbf{L}}_R S}\text{-}\operatorname{pd}_{S}(X\otimes^{\mathbf{L}}_R S)\leqslant \catpc\text{-}\pd_R(X)$\label{141021:1a}, and
\item $\cat{F}_{C\otimes^{\mathbf{L}}_R S}\text{-}\operatorname{pd}_{S}(X\otimes^{\mathbf{L}}_R S)\leqslant \catfc\text{-}\pd_R(X)$.\label{141021:1b}
\end{enumerate}
Equality holds when $\varphi$ is faithfully flat. \end{prop}
\begin{proof}
\eqref{141021:1c} and \eqref{141021:1a} Assume first that $\catpc\text{-}\pd_R(X) - \sup(C) = n<\infty$. Then $\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))= n$ and hence by base change we have
\[
\operatorname{pd}_S(\mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R S)\leqslant \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)) = n.
\]
Observe that, by tensor-evaluation (\ref{141009:1}) and Hom-tensor adjointness, there are isomorphisms
\begin{align*}
\mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R S &\simeq \mathbf{R}\!\operatorname{Hom}_R(C,X\otimes^{\mathbf{L}}_R S)\\
&\simeq \mathbf{R}\!\operatorname{Hom}_R(C,\mathbf{R}\!\operatorname{Hom}_S(S,X\otimes^{\mathbf{L}}_R S))\\
&\simeq \mathbf{R}\!\operatorname{Hom}_S(C\otimes^{\mathbf{L}}_R S,X\otimes^{\mathbf{L}}_R S).
\end{align*}
Therefore $\operatorname{pd}_S(\mathbf{R}\!\operatorname{Hom}_S(C\otimes^{\mathbf{L}}_R S,X\otimes^{\mathbf{L}}_R S))\leqslant n$. Thus we have
\[
\cat{P}_{C\otimes^{\mathbf{L}}_R S}\text{-}\operatorname{pd}_S(X\otimes^{\mathbf{L}}_R S) - \sup(C\otimes^{\mathbf{L}}_R S) \leqslant n = \catpc\text{-}\pd_R(X) - \sup(C)
\]
that is, the inequality in \eqref{141021:1c} holds.
Observe that since $\operatorname{fd}_R(S)<\infty$, we have $S\in \cat{A}_C(R)$ and hence $\sup(C\otimes^{\mathbf{L}}_R S) \leqslant \sup(C)$ by \cite[Proposition 4.8(a)]{christensen:scatac}. Hence the inequality in \eqref{141021:1a} follows from part \eqref{141021:1c}.
Now assume that $\varphi$ is faithfully flat. Therefore one has that $\sup(C\otimes^{\mathbf{L}}_R S) = \sup(C)$. Hence it suffices to show that $\cat{P}_{C\otimes^{\mathbf{L}}_R S}\text{-}\operatorname{pd}_{S}(X\otimes^{\mathbf{L}}_R S) \geqslant \catpc\text{-}\pd_R(X)$. Assume that $\cat{P}_{C\otimes^{\mathbf{L}}_R S}\text{-}\operatorname{pd}_{S}(X\otimes^{\mathbf{L}}_R S) = n<\infty$. Then
\[
\operatorname{pd}_S(\mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R S)=\operatorname{pd}_S(\mathbf{R}\!\operatorname{Hom}_S(C\otimes^{\mathbf{L}}_RS,X\otimes^{\mathbf{L}}_R S)) = n-\sup(C\otimes^{\mathbf{L}}_R S).
\]
Therefore we have $\operatorname{pd}_S(\mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R S)\leqslant n - \sup(C)$. Observe that if $P$ is an $R$-module such that $P\otimes_R S$ is projective over $S$, then $P$ is projective over $R$ by \cite[Theorem 9.6]{perry:ffdpm} and \cite{raynaud:cpptpm}. A standard truncation argument thus shows that
\[
\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))\leqslant \operatorname{pd}_S(\mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R S) = n-\sup(C)
\]
as desired.
Parts \eqref{141021:1b} and \eqref{141021:1d} are proven similarly. \end{proof}
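A concrete instance of the equality case (a remark we add for illustration): polynomial extension is faithfully flat, so the proposition transfers these dimensions without loss.

```latex
\begin{rem}
The inclusion $\varphi:R\to R[t]$ into a polynomial ring is faithfully
flat, so Proposition \ref{141021:1} yields
\[
\cat{P}_{C\otimes^{\mathbf{L}}_R R[t]}\text{-}\operatorname{pd}_{R[t]}(X\otimes^{\mathbf{L}}_R R[t]) = \catpc\text{-}\pd_R(X)
\]
for every $X\in\mathcal{D}_{\operatorname{b}}(R)$, and likewise for
$\catfc\text{-}\pd$.
\end{rem}
```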
\begin{cor} \label{140618:1}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$, and let $U\subset R$ be a multiplicatively closed subset. Then there are inequalities
\begin{enumerate}[\rm (a)]
\item $\mathcal{P}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1} R}(U^{-1}X)\leqslant \catpc\text{-}\pd_R(X)$, \label{140618:1a}
\item $\mathcal{F}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1}R}(U^{-1}X)\leqslant \catfc\text{-}\pd_R(X)$, \label{140618:1b}
\item $\mathcal{I}_{U^{-1}C}\text{-}\operatorname{id}_{U^{-1}R}(U^{-1}X)\leqslant \catic\text{-}\id_R(X)$, \label{140618:1c}
\item $\mathcal{P}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1} R}(U^{-1}X)-\sup(U^{-1}C) \leqslant \catpc\text{-}\pd_R(X) - \sup(C)$, \label{140618:1d}
\item $\mathcal{F}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1}R}(U^{-1}X)- \sup(U^{-1}C)\leqslant \catfc\text{-}\pd_R(X)-\sup(C)$, and \label{140618:1e}
\item $\mathcal{I}_{U^{-1}C}\text{-}\operatorname{id}_{U^{-1}R}(U^{-1}X)-\sup(U^{-1}C)\leqslant \catic\text{-}\id_R(X) - \sup(C)$. \label{140618:1f}
\end{enumerate} \end{cor}
\begin{proof}
The map $\varphi:R\to U^{-1}R$ is flat. Hence \eqref{140618:1a}, \eqref{140618:1b}, \eqref{140618:1d}, and \eqref{140618:1e} follow from Proposition~\ref{141021:1}. Parts \eqref{140618:1c} and \eqref{140618:1f} are proven similarly to Proposition \ref{141021:1}. \end{proof}
\begin{rem}\label{140623:9}
Observe that to obtain the inequality in Corollary~\ref{140618:1} we need the inequality $\sup(U^{-1}C)\leqslant \sup(C)$ to hold. If we had defined $\catpc\text{-}\pd_R(X)$ as $\inf(C) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))$,
then Corollary \ref{140618:1} would not hold because $\inf(U^{-1}C)\not\leqslant \inf(C)$ in general. This is why we choose $\sup(C)$ instead of $\inf(C)$ in the definition of $\catpc\text{-}\pd$. \end{rem}
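The failure of $\inf(U^{-1}C)\leqslant\inf(C)$ can be seen in a standard product-ring example; we sketch it here (our addition), writing $\Sigma$ for the shift functor.

```latex
\begin{rem}
Let $R = R_1\times R_2$ be a product of noetherian rings and set
$C = R_1\oplus \Sigma R_2$, viewed as an $R$-complex via the projections
$R\to R_i$. Under the equivalence
$\mathcal{D}(R)\simeq\mathcal{D}(R_1)\times\mathcal{D}(R_2)$ one has
$\mathbf{R}\!\operatorname{Hom}_R(C,C)\simeq R_1\times R_2 = R$, so $C$ is
semidualizing with $\inf(C) = 0$ and $\sup(C) = 1$. For a prime
$\ideal{p}$ coming from the factor $R_2$, we have
$C_{\ideal{p}}\simeq \Sigma (R_2)_{\ideal{p}}$, so
$\inf(C_{\ideal{p}}) = 1 > 0 = \inf(C)$, while
$\sup(C_{\ideal{p}}) = 1 = \sup(C)$.
\end{rem}
```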
The next result is a local-global principle for Bass classes.
\begin{lem} \label{141028:1}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. The following conditions are equivalent:
\begin{enumerate}[\rm(i)]
\item $X\in \cat{B}_C(R)$;
\item $U^{-1}X\in \cat{B}_{U^{-1}C}(U^{-1}R)$ for all multiplicatively closed subsets $U\subset R$;
\item $X_{\ideal{p}}\in \cat{B}_{C_{\ideal{p}}}(R_{\ideal{p}})$ for all $\ideal{p}\in \operatorname{Spec}(R)$;
\item $X_{\ideal{p}}\in \cat{B}_{C_{\ideal{p}}}(R_{\ideal{p}})$ for all $\ideal{p}\in \operatorname{Supp}_R(X)$;
\item $X_{\ideal{m}}\in \cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Max}(R)$; and
\item $X_{\ideal{m}}\in \cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Supp}_R(X)\cap\operatorname{Max}(R)$.
\end{enumerate} \end{lem}
\begin{proof}
The implications (i) $\Rightarrow$ (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv) $\Rightarrow$ (vi) and (iii) $\Rightarrow$ (v) $\Rightarrow$ (vi) follow from definitions. We prove (v) $\Rightarrow$ (i) and (vi) $\Rightarrow$ (v).
For the implication (v) $\Rightarrow$ (i), assume $X_{\ideal{m}}\in\cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Max}(R)$. We use the following commutative diagram in $\mathcal{D}(R)$:
\begin{center}
\begin{tikzpicture}
\matrix[matrix of math nodes,row sep=3em,column sep=4em,text height=1.5ex,text depth=0.25ex]
{|[name=A]| C_{\ideal{m}}\otimes^{\mathbf{L}}_{R_{\ideal{m}}}\mathbf{R}\!\operatorname{Hom}_R(C,X)_{\ideal{m}} & |[name=B]| \left[C\otimes^{\mathbf{L}}_R\mathbf{R}\!\operatorname{Hom}_R(C,X)\right]_{\ideal{m}}\\
|[name=C]| C_{\ideal{m}}\otimes^{\mathbf{L}}_{R_{\ideal{m}}}\mathbf{R}\!\operatorname{Hom}_{R_{\ideal{m}}}(C_{\ideal{m}},X_{\ideal{m}}) & |[name=D]| X_{\ideal{m}}.\\};
\draw[->,font=\scriptsize]
(A) edge node[auto]{$\simeq$} (B)
(A) edge node[auto]{$\simeq$} (C)
(B) edge node[auto]{$(\xi^C_X)_{\ideal{m}}$} (D)
(C) edge node[auto]{$\xi^{C_{\ideal{m}}}_{X_{\ideal{m}}}$} (D);
\end{tikzpicture}
\end{center}
As $X_{\ideal{m}}\in \cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Max}(R)$, the morphism $\xi^{C_{\ideal{m}}}_{X_{\ideal{m}}}$ is an isomorphism for all $\ideal{m}\in \operatorname{Max}(R)$. Commutativity of the above diagram now forces $(\xi^C_X)_{\ideal{m}}$ to be an isomorphism for all $\ideal{m}\in \operatorname{Max}(R)$. Therefore $\xi^C_X$ is an isomorphism.
It remains to show that $\mathbf{R}\!\operatorname{Hom}_R(C,X)\in \mathcal{D}_{\operatorname{b}}(R)$. As $\mathbf{R}\!\operatorname{Hom}_R(C,X)\in \mathcal{D}_{-}(R)$, it suffices to show that $\mathbf{R}\!\operatorname{Hom}_R(C,X)\in \mathcal{D}_{+}(R)$. By assumption, $X_{\ideal{m}}\in \cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Max}(R)$, so for each such $\ideal{m}$ we have
\begin{align*}
\inf(\mathbf{R}\!\operatorname{Hom}_R(C,X)_{\ideal{m}}) &= \inf(\mathbf{R}\!\operatorname{Hom}_{R_{\ideal{m}}}(C_{\ideal{m}},X_{\ideal{m}}))\\
&\geqslant \inf(X_{\ideal{m}}) - \sup(C_{\ideal{m}})\\
&\geqslant \inf(X) - \sup(C)
\end{align*}
where the equality is by the isomorphism $\mathbf{R}\!\operatorname{Hom}_R(C,X)_{\ideal{m}} \simeq \mathbf{R}\!\operatorname{Hom}_{R_{\ideal{m}}}(C_{\ideal{m}},X_{\ideal{m}})$, the first inequality is by \cite[Proposition 4.8(c)]{christensen:scatac}, and the second inequality is by properties of localization. Thus $\inf(\mathbf{R}\!\operatorname{Hom}_R(C,X)) \geqslant \inf(X) - \sup(C)>-\infty$.
For the implication (vi) $\Rightarrow$ (v), assume $X_{\ideal{m}}\in\cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Supp}_R(X) \cap \operatorname{Max}(R)$. Then for all $\ideal{m}\in \operatorname{Max}(R) \setminus \operatorname{Supp}_R(X)$ we have $X_{\ideal{m}}\simeq 0\in \cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$, as desired. \end{proof}
The following is proven similarly to Lemma \ref{141028:1}.
\begin{lem} \label{141029:1}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. The following conditions are equivalent:
\begin{enumerate}[\rm(i)]
\item $X\in \cat{A}_C(R)$;
\item $U^{-1}X\in \cat{A}_{U^{-1}C}(U^{-1}R)$ for all multiplicatively closed subsets $U\subset R$;
\item $X_{\ideal{p}}\in \cat{A}_{C_{\ideal{p}}}(R_{\ideal{p}})$ for all $\ideal{p}\in \operatorname{Spec}(R)$;
\item $X_{\ideal{p}}\in \cat{A}_{C_{\ideal{p}}}(R_{\ideal{p}})$ for all $\ideal{p}\in \operatorname{Supp}_R(X)$;
\item $X_{\ideal{m}}\in \cat{A}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Max}(R)$; and
\item $X_{\ideal{m}}\in \cat{A}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Supp}_R(X)\cap\operatorname{Max}(R)$.
\end{enumerate} \end{lem}
\begin{prop}\label{140623:10}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$ and let $n\in \mathbb{Z}$. Consider the following conditions:
\begin{enumerate}[\rm (i)]
\item $\catpc\text{-}\pd_R(X) -\sup(C) \leqslant n$; \label{140623:10i}
\item $\mathcal{P}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1}R}(U^{-1}X) - \sup(U^{-1}C) \leqslant n$ for each multiplicatively closed subset $U\subset R$;\label{140623:10ii}
\item $\mathcal{P}_{C_{\ideal{p}}}\text{-}\operatorname{pd}_{R_{\ideal{p}}}(X_{\ideal{p}}) - \sup(C_{\ideal{p}})\leqslant n$ for each $\ideal{p}\in \operatorname{Spec}(R)$; and \label{140623:10iii}
\item $\mathcal{P}_{C_{\ideal{m}}}\text{-}\operatorname{pd}_{R_{\ideal{m}}}(X_{\ideal{m}}) - \sup(C_{\ideal{m}}) \leqslant n$ for each $\ideal{m}\in \operatorname{Max}(R)$. \label{140623:10iv}
\end{enumerate}
Then \eqref{140623:10i} $\Rightarrow$ \eqref{140623:10ii} $\Rightarrow$ \eqref{140623:10iii} $\Rightarrow$ \eqref{140623:10iv}. Furthermore, if $X\in\mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$, then \eqref{140623:10iv} $\Rightarrow$ \eqref{140623:10i} and
\begin{align*}
\catpc\text{-}\pd_R(X) - c
&= \sup\left\{
\begin{array}{l}
\cat{P}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1}R}(U^{-1}X)\\
- \sup(U^{-1}C)
\end{array}
\left|
\begin{array}{l}
U\subset R \text{ is}\\
\text{multiplicatively}\\
\text{closed}
\end{array}\right.\right\}\\
&= \sup\{\cat{P}_{C_{\ideal{p}}}\text{-}\operatorname{pd}_{R_{\ideal{p}}}(X_{\ideal{p}}) - \sup(C_{\ideal{p}}) \mid \ideal{p} \in \operatorname{Spec}(R)\}\\
&= \sup\{\cat{P}_{C_{\ideal{m}}}\text{-}\operatorname{pd}_{R_{\ideal{m}}}(X_{\ideal{m}}) - \sup(C_{\ideal{m}}) \mid \ideal{m}\in \mathrm{Max}(R)\}
\end{align*}
where $c= \sup(C)$. \end{prop}
\begin{proof}
Observe that \eqref{140623:10i} $\Rightarrow$ \eqref{140623:10ii} follows from Proposition \ref{141021:1}. The implications \eqref{140623:10ii} $\Rightarrow$ \eqref{140623:10iii} $\Rightarrow$ \eqref{140623:10iv} follow from properties of localization. For the rest of the proof assume that $X\in \mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$.
For the implication (iv) $\Rightarrow$ (i) assume that $\cat{P}_{C_{\ideal{m}}}\text{-}\operatorname{pd}_{R_{\ideal{m}}}(X_{\ideal{m}}) - \sup(C_{\ideal{m}}) \leqslant n<\infty$ for all $\ideal{m}\in \operatorname{Max}(R)$. Then by Remark \ref{140623:5} we have $X_{\ideal{m}}\in \cat{B}_{C_{\ideal{m}}}(R_{\ideal{m}})$ for all $\ideal{m}\in \operatorname{Max}(R)$. Therefore Lemma \ref{141028:1} implies that $X\in \cat{B}_C(R)$ and hence $\mathbf{R}\!\operatorname{Hom}_R(C,X)\in \mathcal{D}_{\operatorname{b}}(R)$. Now
\begin{align*}
\catpc\text{-}\pd_R(X) - \sup(C) &= \operatorname{pd}_{R}(\mathbf{R}\!\operatorname{Hom}_R(C,X))\\
&= \sup_{\ideal{m}\in \operatorname{Max}(R)}(\operatorname{pd}_{R_{\ideal{m}}}(\mathbf{R}\!\operatorname{Hom}_{R_{\ideal{m}}}(C_{\ideal{m}},X_{\ideal{m}})))\\
& \leqslant n
\end{align*}
where the second equality is by \cite[Proposition 5.3P]{avramov:hdouc}.
For the equalities, assume first that $\catpc\text{-}\pd_R(X) -\sup(C) = n<\infty$. Then each displayed supremum in the statement is at most $n$. If any of the suprema were strictly less than $n$, then the above equivalences would force $\catpc\text{-}\pd_R(X) - \sup(C)<n$, contradicting our assumption. A similar argument establishes the desired equalities if we assume any of the suprema equals $n$.
Finally, if any of the displayed values in the statement is infinite, then the above equivalences force the other values to be infinite as well. \begin{comment}
Now assume that $X\in \mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$ and $\cat{P}_{C_{\ideal{m}}}\text{-}\operatorname{pd}_{R_{\ideal{m}}}(X_{\ideal{m}})-\sup(C_{\ideal{m}})<n$ for all $\ideal{m}\in \operatorname{Max}(R)$. Then by definition we must have that $\operatorname{pd}_{R_{\ideal{m}}}(\mathbf{R}\!\operatorname{Hom}_{R_{\ideal{m}}}(C_{\ideal{m}},X_{\ideal{m}})) <n$. There is an isomorphism $\mathbf{R}\!\operatorname{Hom}_{R_{\ideal{m}}}(C_{\ideal{m}},X_{\ideal{m}}) \simeq \mathbf{R}\!\operatorname{Hom}_R(C,X)_{\ideal{m}}$. Therefore we have
\[
\operatorname{pd}_{R_{\ideal{m}}}(\mathbf{R}\!\operatorname{Hom}_R(C,X)_{\ideal{m}}) = \operatorname{pd}_{R_{\ideal{m}}}(\mathbf{R}\!\operatorname{Hom}_{R_{\ideal{m}}}(C_{\ideal{m}},X_{\ideal{m}})) < n.
\]
Since $X\in \mathcal{D}^{\operatorname{f}}_{\operatorname{b}}(R)$ and $C\in \mathcal{D}^{\operatorname{f}}_{\operatorname{b}}(R)$, we have $\mathbf{R}\!\operatorname{Hom}_R(C,X)\in \mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$. By the local-global principle for projective dimension, we have $\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))< n$. As $C\in \mathcal{D}_{\operatorname{b}}(R)$, it follows that $\catpc\text{-}\pd_R(X) - \sup(C) <n$.
The displayed equalities follow from the above equivalences. \end{comment} \end{proof}
To prove the implication (iv) $\Rightarrow$ (i) in Proposition \ref{140623:10}, the condition $X\in \mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$ is required. However, the flat and injective versions require only $X\in \mathcal{D}_{\operatorname{b}}(R)$; see \cite[Propositions 5.3F, 5.3I]{avramov:hdouc}. Thus the next two results are proven similarly to Proposition \ref{140623:10}.
\begin{prop}\label{140623:11}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$ and let $n\in \mathbb{Z}$. The following conditions are equivalent:
\begin{enumerate}[\rm (i)]
\item $\catfc\text{-}\pd_R(X) -\sup(C) \leqslant n$; \label{140623:11i}
\item $\mathcal{F}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1}R}(U^{-1}X) - \sup(U^{-1}C) \leqslant n$ for each multiplicatively closed subset $U\subset R$;\label{140623:11ii}
\item $\mathcal{F}_{C_{\ideal{p}}}\text{-}\operatorname{pd}_{R_{\ideal{p}}}(X_{\ideal{p}}) - \sup(C_{\ideal{p}})\leqslant n$ for each prime ideal $\ideal{p}\subset R$; and \label{140623:11iii}
\item $\mathcal{F}_{C_{\ideal{m}}}\text{-}\operatorname{pd}_{R_{\ideal{m}}}(X_{\ideal{m}}) - \sup(C_{\ideal{m}}) \leqslant n$ for each maximal ideal $\ideal{m}\subset R$. \label{140623:11iv}
\end{enumerate}
Furthermore
\begin{align*}
\catfc\text{-}\pd_R(X) - c
&= \sup\left\{
\begin{array}{l}
\cat{F}_{U^{-1}C}\text{-}\operatorname{pd}_{U^{-1}R}(U^{-1}X)\\
- \sup(U^{-1}C)
\end{array}
\left|
\begin{array}{l}
U\subset R \text{ is}\\
\text{multiplicatively}\\
\text{closed}
\end{array}\right.\right\}\\
&= \sup\{\cat{F}_{C_{\ideal{p}}}\text{-}\operatorname{pd}_{R_{\ideal{p}}}(X_{\ideal{p}}) - \sup(C_{\ideal{p}}) \mid \ideal{p} \in \operatorname{Spec}(R)\}\\
&= \sup\{\cat{F}_{C_{\ideal{m}}}\text{-}\operatorname{pd}_{R_{\ideal{m}}}(X_{\ideal{m}}) - \sup(C_{\ideal{m}}) \mid \ideal{m}\in \mathrm{Max}(R)\}
\end{align*}
where $c= \sup(C)$. \end{prop}
\begin{prop}\label{140623:12}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$ and let $n\in \mathbb{Z}$. The following conditions are equivalent:
\begin{enumerate}[\rm (i)]
\item $\catic\text{-}\id_R(X) -\sup(C) \leqslant n$; \label{140623:12i}
\item $\mathcal{I}_{U^{-1}C}\text{-}\operatorname{id}_{U^{-1}R}(U^{-1}X) - \sup(U^{-1}C) \leqslant n$ for each multiplicatively closed subset $U\subset R$;\label{140623:12ii}
\item $\mathcal{I}_{C_{\ideal{p}}}\text{-}\operatorname{id}_{R_{\ideal{p}}}(X_{\ideal{p}}) - \sup(C_{\ideal{p}})\leqslant n$ for each prime ideal $\ideal{p}\subset R$; and \label{140623:12iii}
\item $\mathcal{I}_{C_{\ideal{m}}}\text{-}\operatorname{id}_{R_{\ideal{m}}}(X_{\ideal{m}}) - \sup(C_{\ideal{m}}) \leqslant n$ for each maximal ideal $\ideal{m}\subset R$. \label{140623:12iv}
\end{enumerate}
Furthermore
\begin{align*}
\catic\text{-}\id_R(X) - c
&= \sup\left\{
\begin{array}{l}
\operatorname{id}_{U^{-1}R}(U^{-1}C\otimes^{\mathbf{L}}_{U^{-1}R}U^{-1}X)\\
- \sup(U^{-1}C)
\end{array}
\left|
\begin{array}{l}
U\subset R \text{ is}\\
\text{multiplicatively}\\
\text{closed}
\end{array}\right.\right\}\\
&= \sup\{\operatorname{id}_{R_{\ideal{p}}}(C_{\ideal{p}}\otimes^{\mathbf{L}}_{R_{\ideal{p}}} X_{\ideal{p}}) - \sup(C_{\ideal{p}}) \mid \ideal{p} \in \operatorname{Spec}(R)\}\\
&= \sup\{\operatorname{id}_{R_{\ideal{m}}}(C_{\ideal{m}}\otimes^{\mathbf{L}}_{R_{\ideal{m}}} X_{\ideal{m}}) - \sup(C_{\ideal{m}}) \mid \ideal{m}\in \operatorname{Max}(R)\}
\end{align*}
where $c= \sup(C)$. \end{prop}
\begin{comment} \begin{ex}[{\cite[A1 Example 1]{nagata:lr}}] \label{141117:2}
There exists a noetherian ring $R$ such that $\dim(R) = \infty$ and has maximal ideals $\ideal{m}_1,\ideal{m}_2,\dots$ such that
\[
\operatorname{pd}_R(R/\ideal{m}_1)<\operatorname{pd}_R(R/\ideal{m}_2)<\cdots<\infty.
\]
Define $M:= \bigoplus_{i\geqslant 1} R/\ideal{m}_i$. Note that $\operatorname{pd}_R(M) = \infty$, but $\operatorname{pd}_{R_{\ideal{m}}}(M_{\ideal{m}})<\infty$ for all $\ideal{m}\in \operatorname{Max}(R)$. This shows why we need $X\in \mathcal{D}_{\operatorname{b}}^{\operatorname{f}}(R)$ in Proposition \ref{140623:10} to obtain the implication (iv) $\Rightarrow$ (i). \end{ex} \end{comment}
\begin{rem} \label{141029:3}
When $C$ is a semidualizing $R$-module, e.g., $C=R$, we recover the known local-global conditions for $\catpc\text{-}\pd$, $\catfc\text{-}\pd$, $\catic\text{-}\id$, $\operatorname{pd}$, $\operatorname{fd}$, and $\operatorname{id}$. \end{rem}
\section{Stability Results}\label{141111:5}
In this section we investigate the behaviour of $\catpc\text{-}\pd$, $\catfc\text{-}\pd$, and $\catic\text{-}\id$ after applying the functors $\otimes^{\mathbf{L}}$ and $\mathbf{R}\!\operatorname{Hom}$.
\begin{comment} \begin{prop} \label{140218:2}
Let $X,Y\in \mathcal{D}_{\mathrm{b}}(R)$ and $C$ a semidualizing $R$-complex. Then $\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R Y) \leqslant \catpc\text{-}\pd_R(X) + \operatorname{pd}_R(Y)$. \end{prop}
\begin{proof}
By Definition \ref{14012003} we have $\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R Y) = \sup(C) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X\otimes^{\mathbf{L}}_RY))$. Observe that since $\operatorname{pd}_R(Y)<\infty$ we can apply tensor-evaluation (\ref{141009:1}) to get the isomorphism
\[
\mathbf{R}\!\operatorname{Hom}_R(C,X\otimes^{\mathbf{L}}_RY) \simeq \mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R Y.
\]
This gives us the second equality and Fact \ref{140218:1} \eqref{140218:1a} gives us the inequality in
\begin{align*}
\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R Y)
&= \sup(C) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X\otimes^{\mathbf{L}}_RY))\\
&= \sup(C) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R Y)\\
&\leqslant \sup(C) + \operatorname{pd}_{R}(\mathbf{R}\!\operatorname{Hom}_R(C,X)) + \operatorname{pd}_R(Y)\\
&= \catpc\text{-}\pd_R(X) + \operatorname{pd}_R(Y)
\end{align*}
where the last equality is again by Definition \ref{14012003}. \end{proof}
\begin{prop} \label{140218:3}
Let $X,Y\in \mathcal{D}_{\rm{b}}(R)$ and $C$ a semidualizing $R$-complex. Then there is an inequality $\catic\text{-}\id_R(X\otimes^{\mathbf{L}}_R Y) \leqslant \catic\text{-}\id_R(X) - \inf(Y)$. \end{prop}
\begin{proof}
By Definition \ref{14013001} we have $\catic\text{-}\id_R(X\otimes^{\mathbf{L}}_R Y) = \sup(C) + \operatorname{id}_R( C\otimes^{\mathbf{L}}_R (X\otimes^{\mathbf{L}}_R Y))$. Then commutativity of tensor product and Fact \ref{140218:4} gives us the following
\begin{align*}
\catic\text{-}\id_R(X\otimes^{\mathbf{L}}_R Y)
&= \sup(C) + \operatorname{id}_R(C\otimes^{\mathbf{L}}_R (X\otimes^{\mathbf{L}}_R Y))\\
&= \sup(C) + \operatorname{id}_R( (C\otimes^{\mathbf{L}}_R X)\otimes^{\mathbf{L}}_R Y)\\
&\leqslant \sup(C) + \operatorname{id}_R(C\otimes^{\mathbf{L}}_R X) - \inf(Y)\\
&=\catic\text{-}\id_R(X)-\inf(Y)
\end{align*}
where the last equality is again by Definition \ref{14013001}. \end{proof} \end{comment}
\begin{prop} \label{140822:1}
Let $X,Y\in\mathcal{D}_{\mathrm{b}}(R)$. The following inequalities hold:
\begin{enumerate}[\rm(a)]
\item $\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R Y)\leqslant \catpc\text{-}\pd_R(X) + \operatorname{pd}_R(Y)$; \label{140822:1a}
\item $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y))\leqslant \catfc\text{-}\pd_R(X) + \operatorname{id}_R(Y)$; and \label{140822:1b}
\item $\catfc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R Y) \leqslant \catfc\text{-}\pd_R(X)+\operatorname{fd}_R(Y)$. \label{140822:1c}
\end{enumerate} \end{prop}
\begin{proof}
(a) Without loss of generality we assume that $\catpc\text{-}\pd_R(X)<\infty$ and $\operatorname{pd}_R(Y)<\infty$. It now follows that $\catpc\text{-}\pd_R(X) = \sup(C)+ \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))$. By \cite[Theorem 4.1 (P)]{avramov:hdouc} we have that
\[
\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R Y)\leqslant \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)) + \operatorname{pd}_R(Y).
\]
Since $\operatorname{pd}_R(Y)<\infty$ (hence $\operatorname{fd}_R(Y)<\infty$), tensor-evaluation (\ref{141009:1}) is an isomorphism in $\mathcal{D}(R)$; that is, $\mathbf{R}\!\operatorname{Hom}_R(C,X\otimes^{\mathbf{L}}_R Y)\simeq \mathbf{R}\!\operatorname{Hom}_R(C,X)\otimes^{\mathbf{L}}_R Y$. Hence we have
\[
\operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X\otimes^{\mathbf{L}}_R Y))\leqslant \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))+\operatorname{pd}_R(Y).
\]
By adding $\sup(C)$ to each side we see that $\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_RY)\leqslant \catpc\text{-}\pd_R(X) + \operatorname{pd}_R(Y)$.
\begin{comment}
(b) Again without loss of generality we will assume that $\catfc\text{-}\pd_R(X)<\infty$ and $\operatorname{id}_R(Y)<\infty$. Then by Definition we have $\catfc\text{-}\pd_R(X)+\operatorname{id}_R(Y) = \sup(C) + \operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)) + \operatorname{id}_R(Y)$. Then by \cite[Theorem 4.1 (I)]{avramov:hdouc} we have
\[
\operatorname{id}_R(\mathbf{R}\!\operatorname{Hom}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X),Y)) \leqslant \operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)) + \operatorname{id}_R(Y).
\]
Since $\operatorname{id}_R(Y)<\infty$, Hom-evaluation (\ref{141009:1}) is an isomorphism in $\mathcal{D}(R)$. Thus $\mathbf{R}\!\operatorname{Hom}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X),Y) \simeq C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(X,Y)$. It now follows that
\[
\operatorname{id}_R(C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(X,Y))\leqslant \operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X)) + \operatorname{id}_R(Y).
\]
By adding $\sup(C)$ to each side we obtain $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y))\leqslant \catfc\text{-}\pd_R(X) + \operatorname{id}_R(Y)$. \end{comment}
(b) and (c) are proven similarly to (a). \end{proof}
\begin{cor} \label{140822:2}
Let $X\in\mathcal{D}_{\mathrm{b}}(R)$. The following inequalities hold:
\begin{enumerate}[\rm(a)]
\item $\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(C,Y))\leqslant \catpc\text{-}\pd_R(X) + \catpc\text{-}\pd_R(Y)-\sup(C)$; \label{140822:2a}
\item $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,C\otimes^{\mathbf{L}}_R Y))\leqslant \catfc\text{-}\pd_R(X) + \catic\text{-}\id_R(Y)-\sup(C)$; and \label{140822:2b}
\item $\catfc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(C,Y))\leqslant \catfc\text{-}\pd_R(X) + \catfc\text{-}\pd_R(Y) - \sup(C)$. \label{140822:2c}
\end{enumerate} \end{cor}
\begin{proof}
\eqref{140822:2a} By Proposition \ref{140822:1}\eqref{140822:1a} we have that $\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R\mathbf{R}\!\operatorname{Hom}_R(C,Y))\leqslant \catpc\text{-}\pd_R(X) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,Y))$. Add and subtract $\sup(C)$ to the right hand side to obtain the result.
\eqref{140822:2b} and \eqref{140822:2c} are proven similarly. \end{proof}
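For concreteness, the add-and-subtract step in the proof of Corollary \ref{140822:2}\eqref{140822:2a} unwinds as
\begin{align*}
\catpc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(C,Y))
&\leqslant \catpc\text{-}\pd_R(X) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,Y))\\
&= \catpc\text{-}\pd_R(X) + \left[\sup(C) + \operatorname{pd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,Y))\right] - \sup(C)\\
&= \catpc\text{-}\pd_R(X) + \catpc\text{-}\pd_R(Y) - \sup(C),
\end{align*}
where the last equality is by Definition \ref{14012003}.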
The next result is a version of Fact \ref{141121:1} involving a semidualizing complex.
\begin{prop} \label{140827:2}
Let $X,Y\in \mathcal{D}_{\mathrm{b}}(R)$.
\begin{enumerate}[\rm (a)]
\item If $\operatorname{id}_R(Y)<\infty$, then $\catfc\text{-}\pd_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y))\leqslant \catic\text{-}\id_R(X) + \sup(Y)$. \label{140827:2a}
\item If $\operatorname{fd}_R(Y)<\infty$, then $\catic\text{-}\id_R(X\otimes^{\mathbf{L}}_R Y)\leqslant \catic\text{-}\id_R(X) - \inf(Y)$. \label{140827:2b}
\end{enumerate} \end{prop}
\begin{proof}
\eqref{140827:2a} Assume that $\operatorname{id}_R(Y)<\infty$. By applying Definition \ref{140226:1} we get that $\catfc\text{-}\pd_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y)) = \sup(C) + \operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,\mathbf{R}\!\operatorname{Hom}_R(X,Y)))$. Observe that by Hom-tensor adjointness there is an isomorphism
\[
\mathbf{R}\!\operatorname{Hom}_R(C,\mathbf{R}\!\operatorname{Hom}_R(X,Y))\simeq \mathbf{R}\!\operatorname{Hom}_R(C\otimes^{\mathbf{L}}_R X,Y).
\]
Therefore $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,\mathbf{R}\!\operatorname{Hom}_R(X,Y))) = \operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C\otimes^{\mathbf{L}}_R X,Y))$. Hence by Fact \ref{141121:1}\eqref{141121:1a} we have that
\[
\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C\otimes^{\mathbf{L}}_R X,Y))\leqslant \operatorname{id}_R(C\otimes^{\mathbf{L}}_R X) + \sup(Y).
\]
By adding $\sup(C)$ to each side of the above inequality we obtain the desired result.
\eqref{140827:2b} is proven similarly. \end{proof}
\begin{prop} \label{140313:2}
Let $X\in \mathcal{D}_{\mathrm{b}}(R)$. The following conditions are equivalent:
\begin{enumerate}[\rm(i)]
\item $\catfc\text{-}\pd_R(X)<\infty$;
\item $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y))<\infty$ for all $Y\in \mathcal{D}_{\mathrm{b}}(R)$ such that $\operatorname{id}_R(Y)<\infty$; and
\item $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))<\infty$ for some faithfully injective $R$-module $E$.
\end{enumerate} \end{prop}
\begin{proof}
(i)$\Rightarrow$(ii) This follows from Proposition \ref{140822:1}\eqref{140822:1b}.
(ii)$\Rightarrow$(iii) Since $E$ is a faithfully injective module, it has $\operatorname{id}_R(E)=0<\infty$. Therefore (ii) implies that $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))<\infty$.
(iii)$\Rightarrow$(i) Assume that there exists a faithfully injective $R$-module $E$ such that $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))<\infty$. Then by Definition \ref{140226:1}\eqref{140226:1c} $\catic\text{-}\id_R(\mathbf{R}\!\operatorname{Hom}_R(X,E)) = \sup(C) + \operatorname{id}_R(C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(X,E))$. By Hom-evaluation (\ref{141009:1}) there is an isomorphism
\[
\mathbf{R}\!\operatorname{Hom}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X),E) \simeq C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(X,E).
\]
It follows that $\operatorname{id}_R(C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(X,E)) = \operatorname{id}_{R}(\mathbf{R}\!\operatorname{Hom}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X),E))<\infty$. Therefore by Lemma \ref{140821:1} $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(C,X))<\infty$. It now follows that $\catfc\text{-}\pd_R(X)<\infty$. \end{proof}
The following three propositions are proven similarly to Proposition \ref{140313:2}.
\begin{prop} \label{141001:2}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. The following conditions are equivalent:
\begin{enumerate}[\rm(i)]
\item $\catfc\text{-}\pd_R(X)<\infty$;
\item $\catfc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R Y)<\infty$ for all $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{fd}_R(Y)<\infty$;
\item $\catfc\text{-}\pd_R(X\otimes^{\mathbf{L}}_R F)<\infty$ for some faithfully flat $R$-module $F$.
\end{enumerate} \end{prop}
\begin{comment} \begin{proof}
\eqref{141007:1a} Let $J$ be an ideal of $R$ and let $P$ be a projective resolution of $R/J$. Then
\[
\operatorname{Tor}^R_i(R/J,\mathbf{R}\!\operatorname{Hom}_R(X,E)) = \operatorname{H}_i(P\otimes_R \Hom_R(X,E)).
\]
Since $E$ is an injective $R$-module, it has finite injective dimension. Therefore by Hom-evaluation (\ref{141009:1}) we have the following isomorphism:
\[
\operatorname{H}_i(P\otimes_R \Hom_R(X,E))\cong \operatorname{H}_i(\Hom_R(\Hom_R(P,X),E)).
\]
Observe that since $E$ is faithfully injective $\operatorname{H}_i(\Hom_R(\Hom_R(P,X),E)) = 0$ if and only if $\operatorname{H}_{-i}(\Hom_R(P,X)) =\operatorname{Ext}_R^i(R/J,X) = 0$. It now follows by \cite[Theorem 2.4.F and 2.4.I]{avramov:hdouc} that $\operatorname{id}_R(X)\leqslant n$ if and only if $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))\leqslant n$.
The proof of \eqref{141007:1b} is similar. \end{proof} \end{comment}
\begin{prop} \label{141007:2}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. The following conditions are equivalent:
\begin{enumerate}[\rm(i)]
\item $\catic\text{-}\id_R(X)<\infty$;
\item $\catfc\text{-}\pd_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y))<\infty$ for all $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{id}_R(Y)<\infty$;
\item $\catfc\text{-}\pd_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))<\infty$ for some faithfully injective $R$-module $E$.
\end{enumerate} \end{prop}
\begin{prop} \label{141008:1}
Let $X\in \mathcal{D}_{\operatorname{b}}(R)$. The following conditions are equivalent:
\begin{enumerate}[\rm(i)]
\item $\catic\text{-}\id_R(X)<\infty$;
\item $\catic\text{-}\id_R(X\otimes^{\mathbf{L}}_R Y)<\infty$ for all $Y\in \mathcal{D}_{\operatorname{b}}(R)$ such that $\operatorname{fd}_R(Y)<\infty$;
\item $\catic\text{-}\id_R(X\otimes^{\mathbf{L}}_R F)<\infty$ for some faithfully flat $R$-module $F$.
\end{enumerate} \end{prop}
\begin{comment} \begin{cor} \label{140311:1}
Let $X\in \mathcal{D}_{\rm{b}}(R)$, $C$ a semidualizing $R$-complex and $E=\bigoplus_{\ideal{m}\in \max(R)}E_R(R/\ideal{m})$ where $E_R(R/\ideal{m})$ is the injective hull of $R/\ideal{m}$. Then $\catpc\text{-}\pd_R(X)<\infty$ if and only if $\catic\text{-}\id_R(X^{\vee})<\infty$ where $X^{\vee} = \mathbf{R}\!\operatorname{Hom}_R(X,E)$. \end{cor}
\begin{proof}
We note that $E$ is a faithfully injective $R$-module. Therefore the result follows directly from Proposition \ref{140313:2}. \end{proof} \end{comment}
\begin{cor} \label{140311:2}
Let $X\in \mathcal{D}_{\rm{b}}(R)$. If there exists a dualizing complex $D$ and $\catfc\text{-}\pd_R(X)<\infty$, then $\catic\text{-}\id_R(X^{\dagger})<\infty$, where $X^{\dagger} = \mathbf{R}\!\operatorname{Hom}_R(X,D)$. \end{cor}
\begin{proof}
Since $D$ is a dualizing complex, it has finite injective dimension. Therefore the result follows from Proposition \ref{140313:2}. \end{proof}
\begin{comment} \begin{prop} \label{140313:1}
Let $X\in \mathcal{D}_{\rm{b}}(R)$. The following are equivalent:
\begin{enumerate}[\hspace{.2 in} \rm(i)]
\item $\mathrm{CI}\text{-}\fcpd_R(X)<\infty$;
\item $\mathrm{CI}\text{-}\icid_R(\mathbf{R}\!\operatorname{Hom}_R(X,Y))<\infty$ for all $Y\in\mathcal{D}_{\mathrm{b}}(R)$ such that $\operatorname{id}_R(Y)<\infty$;
\item $\mathrm{CI}\text{-}\icid_R(\mathbf{R}\!\operatorname{Hom}_R(X,E))<\infty$ for some faithfully injective module $E$.
\end{enumerate} \end{prop} \end{comment} The last result of this paper establishes Theorem \ref{141117:1} from the introduction.
\begin{comment} \begin{thm}\label{140503:1}
Assume $R$ has a dualizing complex $D$ and let $X\in \mathcal{D}_{\rm{b}}(R)$. Then $\catfc\text{-}\pd_R(X)<\infty$ if and only if $\mathcal{I}_{C^{\dagger}}\text{-}\operatorname{id}_R(X)<\infty$ where $C^{\dagger} = \mathbf{R}\!\operatorname{Hom}_R(C,D)$. \end{thm}
\begin{proof}
For the forward implication assume that $\catfc\text{-}\pd_R(X)<\infty$. Then set $F = \mathbf{R}\!\operatorname{Hom}_R(C,X)$. Since $\catfc\text{-}\pd_R(X)<\infty$ we have that $F$ has finite flat dimension. By Remark \ref{140623:5} we have $X\in\cat{B}_C(R)$. This explains the first isomorphism in the following display:
\[
X\simeq C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(C,X)\simeq \mathbf{R}\!\operatorname{Hom}_R(C^{\dagger},D)\otimes^{\mathbf{L}}_R F\simeq \mathbf{R}\!\operatorname{Hom}_R(C^{\dagger},D\otimes^{\mathbf{L}}_R F).
\]
The second isomorphism is from the isomorphism $C\simeq C^{\dagger\dagger}$, and the third is by tensor-evaluation (\ref{141009:1}). Observe that since $\operatorname{id}_R(D)<\infty$ and $\operatorname{fd}_R(F)<\infty$ we have that $\operatorname{id}_R(D\otimes^{\mathbf{L}}_R F)<\infty$ by Fact \ref{141121:1}\eqref{141121:1b}. Therefore, it follows that $\mathcal{I}_{C^{\dagger}}\text{-}\operatorname{id}_R(X)<\infty$ by the displayed isomorphisms.
For the reverse implication assume that $\mathcal{I}_{C^{\dagger}}\text{-}\operatorname{id}_R(X)<\infty$. Then we can write $X\simeq \mathbf{R}\!\operatorname{Hom}_R(C^{\dagger},J)$ where $J= C^{\dagger}\otimes^{\mathbf{L}}_R X$ and $\operatorname{id}_R(J)<\infty$. We then have the following isomorphisms:
\[
X\simeq \mathbf{R}\!\operatorname{Hom}_R(\mathbf{R}\!\operatorname{Hom}_R(C,D),J)\simeq C\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(D,J)
\]
where the second isomorphism is by Hom-evaluation (\ref{141009:1}). Since $\operatorname{id}_R(D)<\infty$ and $\operatorname{id}_R(J)<\infty$ we have that $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(D,J))<\infty$ by Fact \ref{141121:1}\eqref{141121:1a}. Hence $\catfc\text{-}\pd_R(X)<\infty$ as desired. \end{proof} \end{comment}
\begin{thm}\label{140503:1}
Assume $R$ has a dualizing complex $D$ and let $X\in \mathcal{D}_{\rm{b}}(R)$. Then $\catic\text{-}\id_R(X)<\infty$ if and only if $\mathcal{F}_{C^{\dagger}}\text{-}\operatorname{pd}_R(X)<\infty$ where $C^{\dagger} = \mathbf{R}\!\operatorname{Hom}_R(C,D)$. \end{thm}
\begin{proof}
For the forward implication assume that $\catic\text{-}\id_R(X)<\infty$. Then set $J = C\otimes^{\mathbf{L}}_R X$. Since $\catic\text{-}\id_R(X)<\infty$ we have that $J$ has finite injective dimension. By Remark \ref{140623:5} we have $X\in\cat{A}_C(R)$. This explains the first isomorphism in the following display:
\[
X\simeq \mathbf{R}\!\operatorname{Hom}_R(C,J) \simeq \mathbf{R}\!\operatorname{Hom}_R(\mathbf{R}\!\operatorname{Hom}_R(C^{\dagger},D),J) \simeq C^{\dagger}\otimes^{\mathbf{L}}_R \mathbf{R}\!\operatorname{Hom}_R(D,J).
\]
The second isomorphism is from the isomorphism $C\simeq C^{\dagger\dagger}$, and the third is by Hom-evaluation (\ref{141009:1}). Observe that since $\operatorname{id}_R(D)<\infty$ and $\operatorname{id}_R(J)<\infty$ we have that $\operatorname{fd}_R(\mathbf{R}\!\operatorname{Hom}_R(D,J))<\infty$ by Fact \ref{141121:1}\eqref{141121:1a}. Thus, it follows that $\mathcal{F}_{C^{\dagger}}\text{-}\operatorname{pd}_R(X)<\infty$ by the displayed isomorphisms.
For the reverse implication assume that $\mathcal{F}_{C^{\dagger}}\text{-}\operatorname{pd}_R(X)<\infty$. Then we can write $X\simeq C^{\dagger}\otimes^{\mathbf{L}}_R F$ where $F= \mathbf{R}\!\operatorname{Hom}_R(C^{\dagger},X)$ and $\operatorname{fd}_R(F)<\infty$. We then have the following isomorphisms:
\[
X\simeq C^{\dagger}\otimes^{\mathbf{L}}_R F = \mathbf{R}\!\operatorname{Hom}_R(C,D)\otimes^{\mathbf{L}}_R F \simeq \mathbf{R}\!\operatorname{Hom}_R(C,D\otimes^{\mathbf{L}}_R F)
\]
where the second isomorphism is by tensor-evaluation (\ref{141009:1}). Since $\operatorname{id}_R(D)<\infty$ and $\operatorname{fd}_R(F)<\infty$ we have that $\operatorname{id}_R(D\otimes^{\mathbf{L}}_R F)<\infty$ by Fact \ref{141121:1}\eqref{141121:1b}. Hence $\catic\text{-}\id_R(X)<\infty$ by Theorem \ref{141111:6}\eqref{141111:6c} as desired. \end{proof}
\end{document}
\begin{document}
\title{\bf A Duality Theorem for Quantum Groupoids}
\author{Dmitri Nikshych} \address{Department of Mathematics, UCLA, 405 Hilgard Avenue, Los Angeles, CA 90095-1555} \email{nikshych@math.ucla.edu}
\subjclass{Primary 16S40, 16W30; Secondary 20L05} \date{}
\begin{abstract} We prove a duality theorem for quantum groupoid (weak Hopf algebra) actions that extends the well-known result for usual Hopf algebras obtained in \cite{BM} and \cite{vdB}. \end{abstract}
\maketitle
\begin{section} {Introduction}
By {\em (finite) quantum groupoids} we understand weak Hopf algebras introduced in \cite{BNSz}, \cite{BSz} as a generalization of ordinary Hopf algebras providing a good framework for studying symmetries of certain quantum field theories. These objects also generalize both ordinary groupoid algebras and their duals. A special case of quantum groupoids with involutive antipode was studied in \cite{NV1}, \cite{N}.
Finite quantum groupoids naturally arise in the theory of von Neumann algebras: it was shown in \cite{NV2} that finite index II${}_{1}$ subfactors of depth $\leq 2$ can be characterized as $C^*$-quantum groupoid smash products. This result was extended in \cite{NV3}, where a uniform description of all finite depth subfactors was obtained via a Galois correspondence. In fact, one can use subfactors in order to construct interesting concrete examples of quantum groupoids such as Temperley-Lieb algebras \cite{NV2}, \cite{NV3}.
Another motivation to study quantum groupoids comes from the fact that their representation theory provides examples of monoidal categories that can be used for constructing invariants of links and 3-manifolds \cite{NVT}.
In this paper we prove the following duality theorem for smash products: if $H$ is a finite quantum groupoid and $A$ is an $H$-module algebra, then $(A\#H)\#H^* \cong \mbox{End}(A\#H)_A$, where $H^*$ acts on $A\#H$ in a dual way and $A\#H$ is viewed as a right $A$-module via multiplication. For usual Hopf algebras this result was proved in \cite{BM} (where the infinite-dimensional case was considered) and \cite{vdB}. For weak Kac algebras (i.e., finite $C^*$-quantum groupoids with an involutive antipode) it was established in \cite{N}.
The note is organized as follows.
In the Preliminaries (Section $2$) we recall the definitions and basic facts concerning finite quantum groupoids (weak Hopf algebras) and prove the identities we need for later computations.
In Section $3$ we prove the main result by writing down explicit formulas for the isomorphism between $(A\#H)\#H^*$ and $\mbox{End}(A\#H)_A$. As a corollary we obtain that $H\#H^*$ is always a semisimple algebra.
The results of this paper were presented by the author at the Colloquium on Quantum Groups and Hopf Algebras held in La Falda, Argentina in August, 1999 and he would like to thank N.~Andruskiewitsch, W.~Ferrer Santos, and H.-J.~Schneider for inviting him. The author is also grateful to L.~Vainerman for numerous discussions on quantum groupoids and his comments on the present work.
\end{section}
\begin{section} {Preliminaries}
Let $k$ be a field.
Throughout this paper we use Sweedler's notation for comultiplication, writing $\Delta(b) = b_{(1)} \otimes b_{(2)}$.
\begin{definition} \label{basic definition} By a {\em weak Hopf algebra} \cite{BNSz}, or {\em finite quantum groupoid}, we understand a finite-dimensional $k$-vector space $H$ that has structures of an algebra $(H,\,m,\,1)$ and a coalgebra $(H,\,\Delta,\,\varepsilon)$ related as follows: \begin{enumerate} \item[(1)] $\Delta$ is a (not necessarily unit-preserving) homomorphism: $$ \Delta(hg) = \Delta(h)\Delta(g), $$ \item[(2)] The unit and counit satisfy the identities \begin{eqnarray*} \varepsilon(hgf) &=& \varepsilon(hg_{(1)})\varepsilon(g_{(2)}f) = \varepsilon(hg_{(2)})\varepsilon(g_{(1)}f), \\ (\Delta \otimes \mbox{id}) \Delta(1) &=& (\Delta(1)\otimes 1)(1\otimes \Delta(1)) = (1\otimes \Delta(1))(\Delta(1)\otimes 1), \end{eqnarray*} \item[(3)] There is a linear map $S: H \to H$, called an {\em antipode}, such that \begin{eqnarray*} m(\mbox{id} \otimes S)\Delta(h) &=&(\varepsilon\otimes\mbox{id})(\Delta(1)(h\otimes 1)),\\ m(S\otimes \mbox{id})\Delta(h) &=& (\mbox{id} \otimes \varepsilon)((1\otimes h)\Delta(1)),\\ S(h_{(1)})h_{(2)} S(h_{(3)}) &=& S(h), \end{eqnarray*} \end{enumerate} for all $h,g,f\in H$. \end{definition} The antipode is unique and invertible \cite{BNSz}; moreover, it is an anti-algebra and anti-coalgebra map. The right-hand sides of the first two formulas in $(3)$ are called the {\em target} and {\em source counital maps} and are denoted $\varepsilon_t$, $\varepsilon_s$, respectively: \begin{eqnarray*} \varepsilon_t(h) &=& (\varepsilon\otimes\mbox{id})(\Delta(1)(h\otimes 1)),\\ \varepsilon_s(h) &=& (\mbox{id} \otimes \varepsilon)((1\otimes h)\Delta(1)). \end{eqnarray*} The counital maps $\varepsilon_t$ and $\varepsilon_s$ are idempotents in $\mbox{End}_k(H)$; we also have the relations $S\circ \varepsilon_t = \varepsilon_s \circ S$ and $S\circ \varepsilon_s = \varepsilon_t \circ S$.
The main difference between quantum groupoids and Hopf algebras is that the ranges of the counital maps are, in general, separable subalgebras of $H$ not necessarily equal to $k$. They are called the {\em target} and {\em source counital subalgebras} and play the role of ``non-commutative bases'' (cf.\ Example~\ref{examples} below): \begin{eqnarray*} H_t &=& \{h\in H \mid \varepsilon_t(h) =h \} = \{h\in H \mid \Delta(h) = 1_{(1)} h\otimes 1_{(2)} = h 1_{(1)} \otimes 1_{(2)} \}, \\ H_s &=& \{h\in H \mid \varepsilon_s(h) =h \} = \{h\in H \mid \Delta(h) = 1_{(1)} \otimes h 1_{(2)} = 1_{(1)} \otimes 1_{(2)} h \}. \end{eqnarray*} The counital subalgebras commute, and the restriction of the antipode gives an anti-isomorphism between $H_t$ and $H_s$; moreover, $H_t$ (resp.\ $H_s$) is a left (resp.\ right) coideal subalgebra of $H$. We also have $S\circ \varepsilon_t = \varepsilon_s \circ S$, $S^2\vert_{H_t} =\mbox{id}_{H_t}$, and $S^2\vert_{H_s} =\mbox{id}_{H_s}$.
Note that $H$ is an ordinary Hopf algebra if and only if $\Delta(1)=1\otimes 1$ if and only if $\varepsilon$ is a homomorphism if and only if $H_t=H_s =k$.
The dual vector space $H^*$ has a natural structure of a quantum groupoid with the structure operations dual to those of $H$: \begin{eqnarray*} & & {<} \phi\psi,\,h{>} = {<} \phi\otimes\psi,\,\Delta(h) {>}, \\ & & {<} \Delta(\phi),\,h \otimes g {>} = {<} \phi,\,hg {>}, \\ & & {<} S(\phi),\,h{>} = {<} \phi,\,S(h) {>}, \end{eqnarray*} for all $\phi,\psi \in H^*,\, h,g\in H$. The unit of $H^*$ is $\varepsilon$ and its counit is $\phi \mapsto {<}\phi,\, 1{>}$.
\begin{example} \label{examples} Let $G$ be a finite {\em groupoid} (a category with finitely many morphisms, such that each morphism is invertible); then the groupoid algebra $kG$ (generated by morphisms $g\in G$, with the product of two morphisms being equal to their composition if the latter is defined and $0$ otherwise) is a quantum groupoid via: $$ \Delta(g) = g\otimes g,\quad \varepsilon(g) =1,\quad S(g)=g^{-1},\quad g\in G. $$ The dual quantum groupoid $(kG)^*$ is generated by idempotents $p_g,\, g\in G$ such that $p_g p_h= \delta_{g,h}p_g$ and $$ \Delta(p_g) =\sum_{uv=g}\,p_u\otimes p_v,\quad \varepsilon(p_g)= \delta_{g,gg^{-1}}, \quad S(p_g) =p_{g^{-1}}. $$ \end{example}
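To see the counital subalgebras in a concrete case, consider the pair groupoid $G$ on two objects, writing $g_{ij}$ for the unique morphism $j\to i$. As an algebra $kG \cong M_2(k)$ via $g_{ij}\mapsto e_{ij}$ (matrix units), and the structure maps above give $$ \Delta(1) = g_{11}\otimes g_{11} + g_{22}\otimes g_{22} \neq 1\otimes 1, \qquad H_t = H_s = \mbox{span}\{g_{11},\, g_{22}\} \cong k\oplus k, $$ so the counital subalgebras are genuinely larger than $k$, in contrast with the Hopf algebra case.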
It is known that any group action on a set gives rise to a finite groupoid \cite{R}. Similarly, in the ``quantum'' situation, one can associate a weak Hopf algebra (quantum groupoid) with every action of a usual Hopf algebra on a separable algebra, see \cite{NVT} for details.
Finally, the most non-trivial examples of quantum groupoids known so far come from the theory of von Neumann II${}_1$ subfactors \cite{GHJ}: in \cite{NV2} finite index subfactors of depth $\leq 2$ were characterized as quantum groupoid smash products, and it was explained in \cite{NV3} that it is possible to construct concrete examples of quantum groupoids from subfactors of arbitrary finite depth.
An algebra $A$ is a left {\em $H$-module algebra} \cite{NSzW} if $A$ is a left $H$-module via $h\otimes x \mapsto h\cdot x$ and $$ h\cdot (xy) = (h_{(1)}\cdot x)(h_{(2)}\cdot y), \qquad h\cdot 1 =\varepsilon_t(h)\cdot 1, $$ for all $h\in H,\,x,y\in A$.
A {\em smash product} algebra $A\#H$ of $A$ and $H$ is defined on the $k$-vector space $A \otimes_{H_t} H$ (relative tensor product), where $H$ is a left $H_t$-module via multiplication and $A$ is a right $H_t$-module via $$ x\cdot z = S(z)\cdot x = x(z\cdot 1), \qquad x\in A,\, z\in H_t. $$ Let $x\#h$ be the class of $x\otimes h$ in $A \otimes_{H_t} H$; then the multiplication of $A\#H$ is given by the familiar formula $$ (x\#h)(y\#g) = x(h_{(1)} \cdot y)\#h_{(2)} g, \qquad x,y\in A,\, h,g\in H, $$ and the unit of $A\#H$ is $1\#1$.
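When $H$ is an ordinary Hopf algebra, this construction reduces to the classical one: in that case $H_t=k1$, so $A\otimes_{H_t}H = A\otimes_k H$, the right $H_t$-action on $A$ is just scalar multiplication, and, since $\varepsilon_t(h)=\varepsilon(h)1$, the condition $h\cdot 1 =\varepsilon_t(h)\cdot 1$ becomes the usual unitality condition $h\cdot 1 =\varepsilon(h)1$ for Hopf module algebras, while the multiplication becomes the classical smash product multiplication $$ (x\otimes h)(y\otimes g) = x(h_{(1)}\cdot y)\otimes h_{(2)} g. $$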
\begin{example} \label{actions} The target counital subalgebra $H_t$ is a {\em trivial $H$-module} algebra with the action of $H$ given by $h\cdot z = \varepsilon_t(hz)$, where $h\in H,\, z\in H_t$.
The dual quantum groupoid $H^*$ is an $H$-module algebra via $$ h\rightharpoonup \phi = \phi_{(1)} {<} \phi_{(2)},\, h{>}, $$ for all $h\in H,\, \phi\in H^*$. \end{example}
In the following lemma we collect the identities that we will use in later computations. They can be found in \cite{BNSz} and \cite{NV1}; we include them here for the convenience of the reader.
\begin{lemma} \label{a lemma} For every quantum groupoid $H$ and elements $h\in H,\,z\in H_t$ the following identities hold true: \begin{enumerate} \item[(i)] $h_{(1)} \otimes \varepsilon_t(h_{(2)}) =1_{(1)}h \otimes 1_{(2)}\quad$ and $\quad\varepsilon_s(h_{(1)}) \otimes h_{(2)} = 1_{(1)} \otimes h 1_{(2)}$, \item[(ii)] $1_{(1)} S(z) \otimes 1_{(2)} = 1_{(1)} \otimes 1_{(2)}z$, \item[(iii)] $h_{(2)} S^{-1}(h_{(1)})\otimes h_{(3)} = S(\varepsilon_t(h_{(1)}))\otimes h_{(2)} = 1_{(1)}\otimes 1_{(2)}h$. \end{enumerate} \end{lemma}
\begin{proof} (i) We have: \begin{eqnarray*} h_{(1)} \otimes \varepsilon_t(h_{(2)}) &=& h_{(1)} \varepsilon(1_{(1)}h_{(2)}) \otimes 1_{(2)} \\ &=& 1_{(1)}h_{(1)} \varepsilon(1_{(2)}h_{(2)}) \otimes 1_{(3)} = 1_{(1)}h \otimes 1_{(2)}, \end{eqnarray*} where we used the definition of $\varepsilon_t$ and axiom (2) of Definition~\ref{basic definition}. The second identity is similar. \newline(ii) Since $S(z) \in H_s$ we can compute: \begin{eqnarray*} 1_{(1)} S(z) \otimes 1_{(2)} &=& S(z)_{(1)} \otimes \varepsilon_t(S(z)_{(2)}) \\ &=& 1_{(1)} \otimes \varepsilon_t(1_{(2)}S(z)) \\ &=& 1_{(1)} \otimes \varepsilon_t(1_{(2)}z) = 1_{(1)} \otimes 1_{(2)}z, \end{eqnarray*} using part (i), the definition of the source counital subalgebra, and the identity $\varepsilon_t(hg) = \varepsilon_t(h \varepsilon_t(g))$ that follows from axiom (2) of Definition~\ref{basic definition}. Observe that $S(1_{(1)}) \otimes 1_{(2)}$ is a separability idempotent \cite{P} of $H_t$. \newline (iii) Using part (ii) and the fact that $H_s$ and $H_t$ commute, we have \begin{eqnarray*} h_{(2)} S^{-1}(h_{(1)})\otimes h_{(3)} &=& S(\varepsilon_t(h_{(1)})) \otimes h_{(2)} \\ &=& S(\varepsilon_t(1_{(1)}h_{(1)})) \otimes 1_{(2)}h_{(2)} \\ &=& 1_{(1)} S(\varepsilon_t(h_{(1)})) \otimes 1_{(2)}h_{(2)} \\ &=& 1_{(1)} \otimes 1_{(2)} \varepsilon_t(h_{(1)}) h_{(2)} = 1_{(1)} \otimes 1_{(2)}h. \end{eqnarray*} \end{proof} \end{section}
\begin{section} {Main result}
Let $H$ be a finite quantum groupoid and $A$ a left $H$-module algebra. Then the smash product $A\#H$ is a left $H^*$-module algebra via $$ \phi\cdot (a\#h) = a\#(\phi\rightharpoonup h), \quad \phi\in H^*,\,h\in H,\,a\in A. $$ In the case when $H$ is an ordinary finite-dimensional Hopf algebra, it follows from \cite{BM} that there is an isomorphism $(A\#H)\#H^* \cong M_n(A)$, where $n=\dim H$ and $M_n(A)$ is the algebra of $n$-by-$n$ matrices over $A$.
We will show that this result extends to quantum groupoid actions in the form $(A\#H)\#H^* \cong \mbox{End}(A\#H)_A$, where $A\#H$ is a right $A$-module via multiplication (note that $A\#H$ is not necessarily a free $A$-module, so that $\mbox{End}(A\#H)_A\not\cong M_n(A)$ in general; see (\cite{NV2}, 7) for an example when $H$ is not free over $H_t$). We will explicitly write down canonical isomorphisms between $(A\#H)\#H^*$ and $\mbox{End}(A\#H)_A$.
\begin{lemma} \label{alpha} The map $\alpha : (A\#H)\#H^* \to \mbox{End}(A\#H)_A$ defined by $$ \alpha((x\#h)\#\phi)(y\#g) = (x\#h)(y\#(\phi\rightharpoonup g)) = x(h_{(1)}\cdot y) \#h_{(2)} (\phi\rightharpoonup g) $$ for all $x,y\in A,\, h,g\in H,\,\phi\in H^*$ is a homomorphism of algebras. \end{lemma}
\begin{proof} First, we need to check that $\alpha$ is well defined. For all $z\in H_t$ and $\xi\in H_t^*$ we have: \begin{eqnarray*} \alpha((x\#zh)\#\phi)(y\#g) &=& x(zh_{(1)}\cdot y)\#h_{(2)}(\phi\rightharpoonup g) \\ &=& x(z\cdot 1)(h_{(1)}\cdot y)\#h_{(2)}(\phi\rightharpoonup g) \\ &=& \alpha( ((x\cdot z)\#h)\#\phi )(y\#g), \\ \alpha((x\#h)\#\xi\phi)(y\#g) &=& x(h_{(1)}\cdot y)\#h_{(2)}(\xi\rightharpoonup 1)(\phi\rightharpoonup g) \\ &=& \alpha( (x\#h(\xi\rightharpoonup 1))\#\phi )(y\#g) \\ &=& \alpha( (x\#(S(\xi)\rightharpoonup h))\#\phi )(y\#g)\\ &=& \alpha( (x\# (h\cdot \xi)) \# \phi )(y\#g), \end{eqnarray*} where we used the definition of the target counital subalgebra, Lemma~\ref{a lemma}(ii), and the fact that $(\xi\rightharpoonup 1)\in H_s$ for all $\xi\in H_t^*$.
Next, we verify that $\alpha((x\#h)\#\phi) \in \mbox{End}(A\#H)_A$ for all $x\in A,\,h\in H,\,\phi\in H^*$. For all $z\in H_t$ we have: \begin{eqnarray*} \alpha((x\#h)\#\phi) (y\#zg) &=& x(h_{(1)}\cdot y)\# h_{(2)}z(\phi\rightharpoonup g) \\ &=& x(h_{(1)}S(z) \cdot y)\# h_{(2)} (\phi\rightharpoonup g) \\ &=& \alpha((x\#h)\#\phi)((y \cdot z) \# g), \end{eqnarray*} using the identity $\phi\rightharpoonup zg = z(\phi\rightharpoonup g)$ and Lemma~\ref{a lemma}(ii).
The following computation shows that $\alpha$ commutes with the right action of every $w\in A$: \begin{eqnarray*} \alpha((x\#h)\#\phi) ((y\#g)\cdot w) &=& \alpha((x\#h)\#\phi)(y(g_{(1)}\cdot w)\# g_{(2)}) \\ &=& (x\#h)(y(g_{(1)}\cdot w) \# (\phi\rightharpoonup g_{(2)})) \\ &=& (x\#h)(y\# (\phi\rightharpoonup g))(w\# 1) \\ &=& (\alpha((x\#h)\#\phi)(y\#g) )\cdot w. \end{eqnarray*} Finally, \begin{eqnarray*} \lefteqn{ \alpha(\, ((x\#h)\#\phi) ((x'\#h')\#\phi') \,)(y\#g) = } \\ &=& \alpha((x\#h)(x'\# (\phi_{(1)}\rightharpoonup h')) \# \phi_{(2)}\phi') (y\#g) \\ &=& (x\#h) (x'\# (\phi_{(1)}\rightharpoonup h')) (y\# (\phi_{(2)}\phi'\rightharpoonup g)) \\ &=& (x\#h) ( \phi \cdot ( (x'\#h') (y\# (\phi'\rightharpoonup g)) ) )\\ &=& \alpha((x\#h)\#\phi) ( (x'\#h')(y\# (\phi'\rightharpoonup g)) ) \\ &=& \alpha((x\#h)\#\phi) \circ \alpha((x'\#h')\#\phi') (y\#g), \end{eqnarray*} for all $x,x',y\in A,\,h,h',g\in H,\, \phi,\phi'\in H^*$; therefore, $\alpha$ is a homomorphism. \end{proof}
Let $\{ f_i\}$ be a basis of $H$ and $\{ \psi_i\}$ be the dual basis of $H^*$, i.e., such that ${<} f_i,\, \psi_j {>} =\delta_{ij}$ for all $i,j$. Then we have identities $$ \sum_i\, f_i {<} h,\,\psi_i {>} = h,\qquad \sum_i\, {<} f_i ,\, \phi {>} \psi_i =\phi, $$ for all $h\in H$ and $\phi\in H^*$, moreover the element $\Sigma_i\, f_i\otimes \psi_i \in H\otimes H^*$ does not depend on the choice of $\{ f_i\}$.
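For example, if $H = kG$ is the groupoid algebra of Example~\ref{examples}, the morphisms $g\in G$ and the idempotents $p_g\in (kG)^*$ form dual bases, so the canonical element is $$ \Sigma_i\, f_i\otimes \psi_i = \sum_{g\in G}\, g\otimes p_g \in kG\otimes (kG)^*. $$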
Let us define a linear map $\beta : \mbox{End}(A\# H)_A \to (A\# H)\# H^*$ by $$ \beta : T \mapsto \sum_i\, T(1\# {f_i}_{(2)})(1\# S^{-1}({f_i}_{(1)})) \# \psi_i. $$
\begin{lemma} \label{inverses} The maps $\alpha$ and $\beta$ are inverses of each other. \end{lemma}
\begin{proof} We need to check that $$\beta\circ \alpha = \mbox{id}_{(A\# H)\# H^*} \quad \mbox{ and } \quad \alpha \circ \beta = \mbox{id}_{\mbox{End}(A\# H)_A}. $$ For all $x\in A,\, h\in H$, and $\phi\in H^*$ we compute \begin{eqnarray*} \beta\circ \alpha((x\#h)\#\phi) &=& \Sigma_i\,(x(h_{(1)}\cdot 1) \# h_{(2)}(\phi\rightharpoonup {f_i}_{(2)})S^{-1}({f_i}_{(1)})) \# \psi_i\\ &=& \Sigma_i\,(x\# h {<} \phi,\, {f_i}_{(3)} {>} {f_i}_{(2)} S^{-1}({f_i}_{(1)})) \# \psi_i\\ &=& \Sigma_i\,(x \# h{<} \phi,\,1_{(2)}f_i{>} 1_{(1)}) \# \psi_i\\ &=& (x \# h (\phi_{(1)}\rightharpoonup 1)) \# \phi_{(2)} \\ &=& (x \# h) \# \varepsilon_t(\phi_{(1)}) \phi_{(2)} = (x \# h)\# \phi, \end{eqnarray*} where we used Lemma~\ref{a lemma}(iii) and the properties of the element $\Sigma_i\,f_i\otimes \psi_i$.
Also, for every $T\in \mbox{End}(A\# H)_A$ we have: \begin{eqnarray*} \alpha\circ\beta(T)(y\# g) &=& \Sigma_i\, \alpha( T(1\# {f_i}_{(2)})(1\# S^{-1}({f_i}_{(1)})) \# \psi_i)(y\# g)\\ &=& \Sigma_i\, T(1\# {f_i}_{(2)})(1\# S^{-1}({f_i}_{(1)})) (y\# (\psi_i\rightharpoonup g) ) \\ &=& \Sigma_i\, T(1\# {f_i}_{(3)})( (S^{-1}({f_i}_{(2)}) \cdot y) \# S^{-1}({f_i}_{(1)}) g_{(1)} ) {<} \psi_i,\, g_{(2)}{>} \\ &=& T(1\# g_{(4)})( (S^{-1}(g_{(3)}) \cdot y) \# S^{-1}(g_{(2)}) g_{(1)}) \\ &=& T(1\# g_{(3)})( (S^{-1}(g_{(2)}) \cdot y) (\varepsilon_s(g_{(1)})\cdot 1) \# 1)\\ &=& T(1\# g_{(2)}) ((S^{-1}(g_{(1)} 1_{(2)}) \cdot y)(1_{(1)} \cdot 1) \# 1) \\ &=& T(1\# g_{(2)}) ((S^{-1}(g_{(1)}) \cdot y) \# 1) \\ &=& T( (g_{(2)} S^{-1}(g_{(1)}) \cdot y) \# g_{(3)}) \\ &=& T((1_{(1)}\cdot y) \# 1_{(2)}g) = T( y\# g), \end{eqnarray*} where we used that $T$ commutes with right multiplication by elements of $A$, together with the identities from Lemma~\ref{a lemma}(i) and (iii). \end{proof}
\begin{theorem} \label{duality} For any $H$-module algebra $A$ there is a canonical isomorphism between the algebras $(A\# H)\# H^*$ and $\mbox{End}(A\# H)_A$. \end{theorem}
\begin{proof} Follows from Lemmas~\ref{alpha} and \ref{inverses}. \end{proof}
\begin{corollary} $H\# H^* \cong \mbox{End}(H)_{H_t}$, in particular, $H\# H^*$ is a semisimple algebra. \end{corollary}
\begin{proof} We know that $H \cong H_t \# H$, where $H_t$ is the trivial $H$-module algebra; therefore, applying Theorem~\ref{duality} to $A=H_t$, we see that $H$ is a projective generator as a right $H_t$-module with $\mbox{End}(H)_{H_t} \cong H\# H^*$. Therefore, $H_t$ and $H\# H^*$ are Morita equivalent. Since $H_t$ is always semisimple (as a separable algebra), $H\# H^*$ is semisimple. \end{proof} \end{section}
\end{document}